\n\nText: <{text}>\n\"\"\"\nresponse = get_completion(prompt_2)\nprint(\"\\nCompletion for prompt 2:\")\nprint(response)",
128 | "metadata": {},
129 | "execution_count": null,
130 | "outputs": []
131 | },
132 | {
133 | "cell_type": "markdown",
134 | "source": "#### Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion",
135 | "metadata": {}
136 | },
137 | {
138 | "cell_type": "code",
139 | "source": "prompt = f\"\"\"\nDetermine if the student's solution is correct or not.\n\nQuestion:\nI'm building a solar power installation and I need \\\n help working out the financials. \n- Land costs $100 / square foot\n- I can buy solar panels for $250 / square foot\n- I negotiated a contract for maintenance that will cost \\ \nme a flat $100k per year, and an additional $10 / square \\\nfoot\nWhat is the total cost for the first year of operations \nas a function of the number of square feet.\n\nStudent's Solution:\nLet x be the size of the installation in square feet.\nCosts:\n1. Land cost: 100x\n2. Solar panel cost: 250x\n3. Maintenance cost: 100,000 + 100x\nTotal cost: 100x + 250x + 100,000 + 100x = 450x + 100,000\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
140 | "metadata": {},
141 | "execution_count": null,
142 | "outputs": []
143 | },
144 | {
145 | "cell_type": "markdown",
146 | "source": "#### Note that the student's solution is actually not correct.\n#### We can fix this by instructing the model to work out its own solution first.",
147 | "metadata": {}
148 | },
149 | {
150 | "cell_type": "code",
151 | "source": "prompt = f\"\"\"\nYour task is to determine if the student's solution \\\nis correct or not.\nTo solve the problem do the following:\n- First, work out your own solution to the problem. \n- Then compare your solution to the student's solution \\ \nand evaluate if the student's solution is correct or not. \nDon't decide if the student's solution is correct until \nyou have done the problem yourself.\n\nUse the following format:\nQuestion:\n```\nquestion here\n```\nStudent's solution:\n```\nstudent's solution here\n```\nActual solution:\n```\nsteps to work out the solution and your solution here\n```\nIs the student's solution the same as actual solution \\\njust calculated:\n```\nyes or no\n```\nStudent grade:\n```\ncorrect or incorrect\n```\n\nQuestion:\n```\nI'm building a solar power installation and I need help \\\nworking out the financials. \n- Land costs $100 / square foot\n- I can buy solar panels for $250 / square foot\n- I negotiated a contract for maintenance that will cost \\\nme a flat $100k per year, and an additional $10 / square \\\nfoot\nWhat is the total cost for the first year of operations \\\nas a function of the number of square feet.\n``` \nStudent's solution:\n```\nLet x be the size of the installation in square feet.\nCosts:\n1. Land cost: 100x\n2. Solar panel cost: 250x\n3. Maintenance cost: 100,000 + 100x\nTotal cost: 100x + 250x + 100,000 + 100x = 450x + 100,000\n```\nActual solution:\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
152 | "metadata": {},
153 | "execution_count": null,
154 | "outputs": []
155 | },
156 | {
157 | "cell_type": "markdown",
158 | "source": "## Model Limitations: Hallucinations\n- Boie is a real company, the product name is not real.",
159 | "metadata": {}
160 | },
161 | {
162 | "cell_type": "code",
163 | "source": "prompt = f\"\"\"\nTell me about AeroGlide UltraSlim Smart Toothbrush by Boie\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
164 | "metadata": {},
165 | "execution_count": null,
166 | "outputs": []
167 | },
168 | {
169 | "cell_type": "markdown",
170 | "source": "## Try experimenting on your own!",
171 | "metadata": {}
172 | },
173 | {
174 | "cell_type": "code",
175 | "source": "",
176 | "metadata": {},
177 | "execution_count": null,
178 | "outputs": []
179 | },
180 | {
181 | "cell_type": "markdown",
182 | "source": "#### Notes on using the OpenAI API outside of this classroom\n\nTo install the OpenAI Python library:\n```\n!pip install openai\n```\n\nThe library needs to be configured with your account's secret key, which is available on the [website](https://platform.openai.com/account/api-keys). \n\nYou can either set it as the `OPENAI_API_KEY` environment variable before using the library:\n ```\n !export OPENAI_API_KEY='sk-...'\n ```\n\nOr, set `openai.api_key` to its value:\n\n```\nimport openai\nopenai.api_key = \"sk-...\"\n```",
183 | "metadata": {}
184 | },
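{
"cell_type": "markdown",
"source": "The cell below is a minimal, self-contained sketch of that setup, assuming the same pre-1.0 `openai` library and `gpt-3.5-turbo` model used throughout these notebooks.",
"metadata": {}
},
{
"cell_type": "code",
"source": "# Minimal sketch of calling the API outside the classroom (assumes openai<1.0).\n# !pip install openai\nimport os\nimport openai\n\n# Read the key from the environment, or set openai.api_key = \"sk-...\" directly.\nopenai.api_key = os.getenv('OPENAI_API_KEY')\n\nresponse = openai.ChatCompletion.create(\n    model=\"gpt-3.5-turbo\",\n    messages=[{\"role\": \"user\", \"content\": \"Say hello!\"}],\n    temperature=0,\n)\nprint(response.choices[0].message[\"content\"])",
"metadata": {},
"execution_count": null,
"outputs": []
},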
185 | {
186 | "cell_type": "markdown",
187 | "source": "#### A note about the backslash\n- In the course, we are using a backslash `\\` to make the text fit on the screen without inserting newline '\\n' characters.\n- GPT-3 isn't really affected whether you insert newline characters or not. But when working with LLMs in general, you may consider whether newline characters in your prompt may affect the model's performance.",
188 | "metadata": {}
189 | },
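{
"cell_type": "markdown",
"source": "A small illustration of the note above (plain Python, no API call): the trailing backslash joins the source lines so no newline ends up in the string, while leaving it out keeps the '\\n' in the prompt text.",
"metadata": {}
},
{
"cell_type": "code",
"source": "# Illustration of the backslash note: compare the two strings' contents.\nwith_backslash = \"\"\"Summarize the text \\\nin one sentence.\"\"\"\nwith_newline = \"\"\"Summarize the text\nin one sentence.\"\"\"\nprint(repr(with_backslash))  # no newline character\nprint(repr(with_newline))    # contains '\\n'",
"metadata": {},
"execution_count": null,
"outputs": []
},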
190 | {
191 | "cell_type": "code",
192 | "source": "",
193 | "metadata": {},
194 | "execution_count": null,
195 | "outputs": []
196 | }
197 | ]
198 | }
--------------------------------------------------------------------------------
/ChatGPT-Notebook2.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "language_info": {
4 | "codemirror_mode": {
5 | "name": "python",
6 | "version": 3
7 | },
8 | "file_extension": ".py",
9 | "mimetype": "text/x-python",
10 | "name": "python",
11 | "nbconvert_exporter": "python",
12 | "pygments_lexer": "ipython3",
13 | "version": "3.8"
14 | },
15 | "kernelspec": {
16 | "name": "python",
17 | "display_name": "Python (Pyodide)",
18 | "language": "python"
19 | }
20 | },
21 | "nbformat_minor": 4,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": "# Iterative Prompt Develelopment\nIn this lesson, you'll iteratively analyze and refine your prompts to generate marketing copy from a product fact sheet.\n\n## Setup",
27 | "metadata": {}
28 | },
29 | {
30 | "cell_type": "code",
31 | "source": "import openai\nimport os\n\nfrom dotenv import load_dotenv, find_dotenv\n_ = load_dotenv(find_dotenv()) # read local .env file\n\nopenai.api_key = os.getenv('OPENAI_API_KEY')",
32 | "metadata": {},
33 | "execution_count": null,
34 | "outputs": []
35 | },
36 | {
37 | "cell_type": "code",
38 | "source": "def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n messages = [{\"role\": \"user\", \"content\": prompt}]\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=0, # this is the degree of randomness of the model's output\n )\n return response.choices[0].message[\"content\"]",
39 | "metadata": {},
40 | "execution_count": null,
41 | "outputs": []
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "source": "## Generate a marketing product description from a product fact sheet",
46 | "metadata": {}
47 | },
48 | {
49 | "cell_type": "code",
50 | "source": "fact_sheet_chair = \"\"\"\nOVERVIEW\n- Part of a beautiful family of mid-century inspired office furniture, \nincluding filing cabinets, desks, bookcases, meeting tables, and more.\n- Several options of shell color and base finishes.\n- Available with plastic back and front upholstery (SWC-100) \nor full upholstery (SWC-110) in 10 fabric and 6 leather options.\n- Base finish options are: stainless steel, matte black, \ngloss white, or chrome.\n- Chair is available with or without armrests.\n- Suitable for home or business settings.\n- Qualified for contract use.\n\nCONSTRUCTION\n- 5-wheel plastic coated aluminum base.\n- Pneumatic chair adjust for easy raise/lower action.\n\nDIMENSIONS\n- WIDTH 53 CM | 20.87”\n- DEPTH 51 CM | 20.08”\n- HEIGHT 80 CM | 31.50”\n- SEAT HEIGHT 44 CM | 17.32”\n- SEAT DEPTH 41 CM | 16.14”\n\nOPTIONS\n- Soft or hard-floor caster options.\n- Two choices of seat foam densities: \n medium (1.8 lb/ft3) or high (2.8 lb/ft3)\n- Armless or 8 position PU armrests \n\nMATERIALS\nSHELL BASE GLIDER\n- Cast Aluminum with modified nylon PA6/PA66 coating.\n- Shell thickness: 10 mm.\nSEAT\n- HD36 foam\n\nCOUNTRY OF ORIGIN\n- Italy\n\"\"\"",
51 | "metadata": {},
52 | "execution_count": null,
53 | "outputs": []
54 | },
55 | {
56 | "cell_type": "code",
57 | "source": "prompt = f\"\"\"\nYour task is to help a marketing team create a \ndescription for a retail website of a product based \non a technical fact sheet.\n\nWrite a product description based on the information \nprovided in the technical specifications delimited by \ntriple backticks.\n\nTechnical specifications: ```{fact_sheet_chair}```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)\n",
58 | "metadata": {},
59 | "execution_count": null,
60 | "outputs": []
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "source": "## Issue 1: The text is too long \n- Limit the number of words/sentences/characters.",
65 | "metadata": {}
66 | },
67 | {
68 | "cell_type": "code",
69 | "source": "prompt = f\"\"\"\nYour task is to help a marketing team create a \ndescription for a retail website of a product based \non a technical fact sheet.\n\nWrite a product description based on the information \nprovided in the technical specifications delimited by \ntriple backticks.\n\nUse at most 50 words.\n\nTechnical specifications: ```{fact_sheet_chair}```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)\n",
70 | "metadata": {},
71 | "execution_count": null,
72 | "outputs": []
73 | },
74 | {
75 | "cell_type": "code",
76 | "source": "len(response)",
77 | "metadata": {},
78 | "execution_count": null,
79 | "outputs": []
80 | },
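{
"cell_type": "markdown",
"source": "`len(response)` above counts characters. Since the prompt asked for at most 50 words, a rough word count is often the more useful check; splitting on whitespace is an approximation.",
"metadata": {}
},
{
"cell_type": "code",
"source": "# Approximate word count of the response (split on whitespace).\nlen(response.split())",
"metadata": {},
"execution_count": null,
"outputs": []
},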
81 | {
82 | "cell_type": "markdown",
83 | "source": "## Issue 2. Text focuses on the wrong details\n- Ask it to focus on the aspects that are relevant to the intended audience.",
84 | "metadata": {}
85 | },
86 | {
87 | "cell_type": "code",
88 | "source": "prompt = f\"\"\"\nYour task is to help a marketing team create a \ndescription for a retail website of a product based \non a technical fact sheet.\n\nWrite a product description based on the information \nprovided in the technical specifications delimited by \ntriple backticks.\n\nThe description is intended for furniture retailers, \nso should be technical in nature and focus on the \nmaterials the product is constructed from.\n\nUse at most 50 words.\n\nTechnical specifications: ```{fact_sheet_chair}```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
89 | "metadata": {},
90 | "execution_count": null,
91 | "outputs": []
92 | },
93 | {
94 | "cell_type": "code",
95 | "source": "prompt = f\"\"\"\nYour task is to help a marketing team create a \ndescription for a retail website of a product based \non a technical fact sheet.\n\nWrite a product description based on the information \nprovided in the technical specifications delimited by \ntriple backticks.\n\nThe description is intended for furniture retailers, \nso should be technical in nature and focus on the \nmaterials the product is constructed from.\n\nAt the end of the description, include every 7-character \nProduct ID in the technical specification.\n\nUse at most 50 words.\n\nTechnical specifications: ```{fact_sheet_chair}```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
96 | "metadata": {},
97 | "execution_count": null,
98 | "outputs": []
99 | },
100 | {
101 | "cell_type": "markdown",
102 | "source": "## Issue 3. Description needs a table of dimensions\n- Ask it to extract information and organize it in a table.",
103 | "metadata": {}
104 | },
105 | {
106 | "cell_type": "code",
107 | "source": "prompt = f\"\"\"\nYour task is to help a marketing team create a \ndescription for a retail website of a product based \non a technical fact sheet.\n\nWrite a product description based on the information \nprovided in the technical specifications delimited by \ntriple backticks.\n\nThe description is intended for furniture retailers, \nso should be technical in nature and focus on the \nmaterials the product is constructed from.\n\nAt the end of the description, include every 7-character \nProduct ID in the technical specification.\n\nAfter the description, include a table that gives the \nproduct's dimensions. The table should have two columns.\nIn the first column include the name of the dimension. \nIn the second column include the measurements in inches only.\n\nGive the table the title 'Product Dimensions'.\n\nFormat everything as HTML that can be used in a website. \nPlace the description in a element.\n\nTechnical specifications: ```{fact_sheet_chair}```\n\"\"\"\n\nresponse = get_completion(prompt)\nprint(response)",
108 | "metadata": {},
109 | "execution_count": null,
110 | "outputs": []
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "source": "## Load Python libraries to view HTML",
115 | "metadata": {}
116 | },
117 | {
118 | "cell_type": "code",
119 | "source": "from IPython.display import display, HTML",
120 | "metadata": {},
121 | "execution_count": null,
122 | "outputs": []
123 | },
124 | {
125 | "cell_type": "code",
126 | "source": "display(HTML(response))",
127 | "metadata": {},
128 | "execution_count": null,
129 | "outputs": []
130 | },
131 | {
132 | "cell_type": "markdown",
133 | "source": "## Try experimenting on your own!",
134 | "metadata": {}
135 | },
136 | {
137 | "cell_type": "code",
138 | "source": "",
139 | "metadata": {},
140 | "execution_count": null,
141 | "outputs": []
142 | }
143 | ]
144 | }
--------------------------------------------------------------------------------
/ChatGPT-Notebook3.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "language_info": {
4 | "codemirror_mode": {
5 | "name": "python",
6 | "version": 3
7 | },
8 | "file_extension": ".py",
9 | "mimetype": "text/x-python",
10 | "name": "python",
11 | "nbconvert_exporter": "python",
12 | "pygments_lexer": "ipython3",
13 | "version": "3.8"
14 | },
15 | "kernelspec": {
16 | "name": "python",
17 | "display_name": "Python (Pyodide)",
18 | "language": "python"
19 | }
20 | },
21 | "nbformat_minor": 4,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": "# Summarizing\nIn this lesson, you will summarize text with a focus on specific topics.\n\n## Setup",
27 | "metadata": {}
28 | },
29 | {
30 | "cell_type": "code",
31 | "source": "import openai\nimport os\n\nfrom dotenv import load_dotenv, find_dotenv\n_ = load_dotenv(find_dotenv()) # read local .env file\n\nopenai.api_key = os.getenv('OPENAI_API_KEY')",
32 | "metadata": {},
33 | "execution_count": null,
34 | "outputs": []
35 | },
36 | {
37 | "cell_type": "code",
38 | "source": "def get_completion(prompt, model=\"gpt-3.5-turbo\"): # Andrew mentioned that the prompt/ completion paradigm is preferable for this class\n messages = [{\"role\": \"user\", \"content\": prompt}]\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=0, # this is the degree of randomness of the model's output\n )\n return response.choices[0].message[\"content\"]\n",
39 | "metadata": {},
40 | "execution_count": null,
41 | "outputs": []
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "source": "## Text to summarize",
46 | "metadata": {}
47 | },
48 | {
49 | "cell_type": "code",
50 | "source": "prod_review = \"\"\"\nGot this panda plush toy for my daughter's birthday, \\\nwho loves it and takes it everywhere. It's soft and \\ \nsuper cute, and its face has a friendly look. It's \\ \na bit small for what I paid though. I think there \\ \nmight be other options that are bigger for the \\ \nsame price. It arrived a day earlier than expected, \\ \nso I got to play with it myself before I gave it \\ \nto her.\n\"\"\"",
51 | "metadata": {},
52 | "execution_count": null,
53 | "outputs": []
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "source": "## Summarize with a word/sentence/character limit",
58 | "metadata": {}
59 | },
60 | {
61 | "cell_type": "code",
62 | "source": "prompt = f\"\"\"\nYour task is to generate a short summary of a product \\\nreview from an ecommerce site. \n\nSummarize the review below, delimited by triple \nbackticks, in at most 30 words. \n\nReview: ```{prod_review}```\n\"\"\"\n\nresponse = get_completion(prompt)\nprint(response)\n",
63 | "metadata": {},
64 | "execution_count": null,
65 | "outputs": []
66 | },
67 | {
68 | "cell_type": "markdown",
69 | "source": "## Summarize with a focus on shipping and delivery",
70 | "metadata": {}
71 | },
72 | {
73 | "cell_type": "code",
74 | "source": "prompt = f\"\"\"\nYour task is to generate a short summary of a product \\\nreview from an ecommerce site to give feedback to the \\\nShipping deparmtment. \n\nSummarize the review below, delimited by triple \nbackticks, in at most 30 words, and focusing on any aspects \\\nthat mention shipping and delivery of the product. \n\nReview: ```{prod_review}```\n\"\"\"\n\nresponse = get_completion(prompt)\nprint(response)\n",
75 | "metadata": {},
76 | "execution_count": null,
77 | "outputs": []
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "source": "## Summarize with a focus on price and value",
82 | "metadata": {}
83 | },
84 | {
85 | "cell_type": "code",
86 | "source": "prompt = f\"\"\"\nYour task is to generate a short summary of a product \\\nreview from an ecommerce site to give feedback to the \\\npricing deparmtment, responsible for determining the \\\nprice of the product. \n\nSummarize the review below, delimited by triple \nbackticks, in at most 30 words, and focusing on any aspects \\\nthat are relevant to the price and perceived value. \n\nReview: ```{prod_review}```\n\"\"\"\n\nresponse = get_completion(prompt)\nprint(response)\n",
87 | "metadata": {},
88 | "execution_count": null,
89 | "outputs": []
90 | },
91 | {
92 | "cell_type": "markdown",
93 | "source": "#### Comment\n- Summaries include topics that are not related to the topic of focus.",
94 | "metadata": {}
95 | },
96 | {
97 | "cell_type": "markdown",
98 | "source": "## Try \"extract\" instead of \"summarize\"",
99 | "metadata": {}
100 | },
101 | {
102 | "cell_type": "code",
103 | "source": "prompt = f\"\"\"\nYour task is to extract relevant information from \\ \na product review from an ecommerce site to give \\\nfeedback to the Shipping department. \n\nFrom the review below, delimited by triple quotes \\\nextract the information relevant to shipping and \\ \ndelivery. Limit to 30 words. \n\nReview: ```{prod_review}```\n\"\"\"\n\nresponse = get_completion(prompt)\nprint(response)",
104 | "metadata": {},
105 | "execution_count": null,
106 | "outputs": []
107 | },
108 | {
109 | "cell_type": "markdown",
110 | "source": "## Summarize multiple product reviews",
111 | "metadata": {}
112 | },
113 | {
114 | "cell_type": "code",
115 | "source": "\nreview_1 = prod_review \n\n# review for a standing lamp\nreview_2 = \"\"\"\nNeeded a nice lamp for my bedroom, and this one \\\nhad additional storage and not too high of a price \\\npoint. Got it fast - arrived in 2 days. The string \\\nto the lamp broke during the transit and the company \\\nhappily sent over a new one. Came within a few days \\\nas well. It was easy to put together. Then I had a \\\nmissing part, so I contacted their support and they \\\nvery quickly got me the missing piece! Seems to me \\\nto be a great company that cares about their customers \\\nand products. \n\"\"\"\n\n# review for an electric toothbrush\nreview_3 = \"\"\"\nMy dental hygienist recommended an electric toothbrush, \\\nwhich is why I got this. The battery life seems to be \\\npretty impressive so far. After initial charging and \\\nleaving the charger plugged in for the first week to \\\ncondition the battery, I've unplugged the charger and \\\nbeen using it for twice daily brushing for the last \\\n3 weeks all on the same charge. But the toothbrush head \\\nis too small. I’ve seen baby toothbrushes bigger than \\\nthis one. I wish the head was bigger with different \\\nlength bristles to get between teeth better because \\\nthis one doesn’t. Overall if you can get this one \\\naround the $50 mark, it's a good deal. The manufactuer's \\\nreplacements heads are pretty expensive, but you can \\\nget generic ones that're more reasonably priced. This \\\ntoothbrush makes me feel like I've been to the dentist \\\nevery day. My teeth feel sparkly clean! \n\"\"\"\n\n# review for a blender\nreview_4 = \"\"\"\nSo, they still had the 17 piece system on seasonal \\\nsale for around $49 in the month of November, about \\\nhalf off, but for some reason (call it price gouging) \\\naround the second week of December the prices all went \\\nup to about anywhere from between $70-$89 for the same \\\nsystem. And the 11 piece system went up around $10 or \\\nso in price also from the earlier sale price of $29. \\\nSo it looks okay, but if you look at the base, the part \\\nwhere the blade locks into place doesn’t look as good \\\nas in previous editions from a few years ago, but I \\\nplan to be very gentle with it (example, I crush \\\nvery hard items like beans, ice, rice, etc. in the \\ \nblender first then pulverize them in the serving size \\\nI want in the blender then switch to the whipping \\\nblade for a finer flour, and use the cross cutting blade \\\nfirst when making smoothies, then use the flat blade \\\nif I need them finer/less pulpy). Special tip when making \\\nsmoothies, finely cut and freeze the fruits and \\\nvegetables (if using spinach-lightly stew soften the \\ \nspinach then freeze until ready for use-and if making \\\nsorbet, use a small to medium sized food processor) \\ \nthat you plan to use that way you can avoid adding so \\\nmuch ice if at all-when making your smoothie. \\\nAfter about a year, the motor was making a funny noise. \\\nI called customer service but the warranty expired \\\nalready, so I had to buy another one. FYI: The overall \\\nquality has gone done in these types of products, so \\\nthey are kind of counting on brand recognition and \\\nconsumer loyalty to maintain sales. Got it in about \\\ntwo days.\n\"\"\"\n\nreviews = [review_1, review_2, review_3, review_4]\n\n",
116 | "metadata": {},
117 | "execution_count": null,
118 | "outputs": []
119 | },
120 | {
121 | "cell_type": "code",
122 | "source": "for i in range(len(reviews)):\n prompt = f\"\"\"\n Your task is to generate a short summary of a product \\ \n review from an ecommerce site. \n\n Summarize the review below, delimited by triple \\\n backticks in at most 20 words. \n\n Review: ```{reviews[i]}```\n \"\"\"\n\n response = get_completion(prompt)\n print(i, response, \"\\n\")\n",
123 | "metadata": {},
124 | "execution_count": null,
125 | "outputs": []
126 | },
127 | {
128 | "cell_type": "markdown",
129 | "source": "## Try experimenting on your own!",
130 | "metadata": {}
131 | },
132 | {
133 | "cell_type": "code",
134 | "source": "",
135 | "metadata": {},
136 | "execution_count": null,
137 | "outputs": []
138 | }
139 | ]
140 | }
--------------------------------------------------------------------------------
/ChatGPT-Notebook4.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "language_info": {
4 | "codemirror_mode": {
5 | "name": "python",
6 | "version": 3
7 | },
8 | "file_extension": ".py",
9 | "mimetype": "text/x-python",
10 | "name": "python",
11 | "nbconvert_exporter": "python",
12 | "pygments_lexer": "ipython3",
13 | "version": "3.8"
14 | },
15 | "kernelspec": {
16 | "name": "python",
17 | "display_name": "Python (Pyodide)",
18 | "language": "python"
19 | }
20 | },
21 | "nbformat_minor": 4,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": "# Inferring\nIn this lesson, you will infer sentiment and topics from product reviews and news articles.\n\n## Setup",
27 | "metadata": {}
28 | },
29 | {
30 | "cell_type": "code",
31 | "source": "import openai\nimport os\n\nfrom dotenv import load_dotenv, find_dotenv\n_ = load_dotenv(find_dotenv()) # read local .env file\n\nopenai.api_key = os.getenv('OPENAI_API_KEY')",
32 | "metadata": {},
33 | "execution_count": null,
34 | "outputs": []
35 | },
36 | {
37 | "cell_type": "code",
38 | "source": "def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n messages = [{\"role\": \"user\", \"content\": prompt}]\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=0, # this is the degree of randomness of the model's output\n )\n return response.choices[0].message[\"content\"]",
39 | "metadata": {},
40 | "execution_count": null,
41 | "outputs": []
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "source": "## Product review text",
46 | "metadata": {}
47 | },
48 | {
49 | "cell_type": "code",
50 | "source": "lamp_review = \"\"\"\nNeeded a nice lamp for my bedroom, and this one had \\\nadditional storage and not too high of a price point. \\\nGot it fast. The string to our lamp broke during the \\\ntransit and the company happily sent over a new one. \\\nCame within a few days as well. It was easy to put \\\ntogether. I had a missing part, so I contacted their \\\nsupport and they very quickly got me the missing piece! \\\nLumina seems to me to be a great company that cares \\\nabout their customers and products!!\n\"\"\"",
51 | "metadata": {},
52 | "execution_count": null,
53 | "outputs": []
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "source": "## Sentiment (positive/negative)",
58 | "metadata": {}
59 | },
60 | {
61 | "cell_type": "code",
62 | "source": "prompt = f\"\"\"\nWhat is the sentiment of the following product review, \nwhich is delimited with triple backticks?\n\nReview text: '''{lamp_review}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
63 | "metadata": {},
64 | "execution_count": null,
65 | "outputs": []
66 | },
67 | {
68 | "cell_type": "code",
69 | "source": "prompt = f\"\"\"\nWhat is the sentiment of the following product review, \nwhich is delimited with triple backticks?\n\nGive your answer as a single word, either \"positive\" \\\nor \"negative\".\n\nReview text: '''{lamp_review}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
70 | "metadata": {},
71 | "execution_count": null,
72 | "outputs": []
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "source": "## Identify types of emotions",
77 | "metadata": {}
78 | },
79 | {
80 | "cell_type": "code",
81 | "source": "prompt = f\"\"\"\nIdentify a list of emotions that the writer of the \\\nfollowing review is expressing. Include no more than \\\nfive items in the list. Format your answer as a list of \\\nlower-case words separated by commas.\n\nReview text: '''{lamp_review}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
82 | "metadata": {},
83 | "execution_count": null,
84 | "outputs": []
85 | },
86 | {
87 | "cell_type": "markdown",
88 | "source": "## Identify anger",
89 | "metadata": {}
90 | },
91 | {
92 | "cell_type": "code",
93 | "source": "prompt = f\"\"\"\nIs the writer of the following review expressing anger?\\\nThe review is delimited with triple backticks. \\\nGive your answer as either yes or no.\n\nReview text: '''{lamp_review}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
94 | "metadata": {},
95 | "execution_count": null,
96 | "outputs": []
97 | },
98 | {
99 | "cell_type": "markdown",
100 | "source": "## Extract product and company name from customer reviews",
101 | "metadata": {}
102 | },
103 | {
104 | "cell_type": "code",
105 | "source": "prompt = f\"\"\"\nIdentify the following items from the review text: \n- Item purchased by reviewer\n- Company that made the item\n\nThe review is delimited with triple backticks. \\\nFormat your response as a JSON object with \\\n\"Item\" and \"Brand\" as the keys. \nIf the information isn't present, use \"unknown\" \\\nas the value.\nMake your response as short as possible.\n \nReview text: '''{lamp_review}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
106 | "metadata": {},
107 | "execution_count": null,
108 | "outputs": []
109 | },
110 | {
111 | "cell_type": "markdown",
112 | "source": "## Doing multiple tasks at once",
113 | "metadata": {}
114 | },
115 | {
116 | "cell_type": "code",
117 | "source": "prompt = f\"\"\"\nIdentify the following items from the review text: \n- Sentiment (positive or negative)\n- Is the reviewer expressing anger? (true or false)\n- Item purchased by reviewer\n- Company that made the item\n\nThe review is delimited with triple backticks. \\\nFormat your response as a JSON object with \\\n\"Sentiment\", \"Anger\", \"Item\" and \"Brand\" as the keys.\nIf the information isn't present, use \"unknown\" \\\nas the value.\nMake your response as short as possible.\nFormat the Anger value as a boolean.\n\nReview text: '''{lamp_review}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
118 | "metadata": {},
119 | "execution_count": null,
120 | "outputs": []
121 | },
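{
"cell_type": "markdown",
"source": "Because the prompt above asks for a JSON object, the reply can usually be parsed straight into a Python dictionary. This is a sketch rather than a guarantee: `json.loads` will raise an error if the model wraps the JSON in any extra text.",
"metadata": {}
},
{
"cell_type": "code",
"source": "import json\n\n# Parse the JSON reply into a dictionary (assumes the model returned bare JSON).\nreview_info = json.loads(response)\nprint(review_info[\"Sentiment\"], review_info[\"Anger\"])",
"metadata": {},
"execution_count": null,
"outputs": []
},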
122 | {
123 | "cell_type": "markdown",
124 | "source": "## Inferring topics",
125 | "metadata": {}
126 | },
127 | {
128 | "cell_type": "code",
129 | "source": "story = \"\"\"\nIn a recent survey conducted by the government, \npublic sector employees were asked to rate their level \nof satisfaction with the department they work at. \nThe results revealed that NASA was the most popular \ndepartment with a satisfaction rating of 95%.\n\nOne NASA employee, John Smith, commented on the findings, \nstating, \"I'm not surprised that NASA came out on top. \nIt's a great place to work with amazing people and \nincredible opportunities. I'm proud to be a part of \nsuch an innovative organization.\"\n\nThe results were also welcomed by NASA's management team, \nwith Director Tom Johnson stating, \"We are thrilled to \nhear that our employees are satisfied with their work at NASA. \nWe have a talented and dedicated team who work tirelessly \nto achieve our goals, and it's fantastic to see that their \nhard work is paying off.\"\n\nThe survey also revealed that the \nSocial Security Administration had the lowest satisfaction \nrating, with only 45% of employees indicating they were \nsatisfied with their job. The government has pledged to \naddress the concerns raised by employees in the survey and \nwork towards improving job satisfaction across all departments.\n\"\"\"",
130 | "metadata": {},
131 | "execution_count": null,
132 | "outputs": []
133 | },
134 | {
135 | "cell_type": "markdown",
136 | "source": "## Infer 5 topics",
137 | "metadata": {}
138 | },
139 | {
140 | "cell_type": "code",
141 | "source": "prompt = f\"\"\"\nDetermine five topics that are being discussed in the \\\nfollowing text, which is delimited by triple backticks.\n\nMake each item one or two words long. \n\nFormat your response as a list of items separated by commas.\n\nText sample: '''{story}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
142 | "metadata": {},
143 | "execution_count": null,
144 | "outputs": []
145 | },
146 | {
147 | "cell_type": "code",
148 | "source": "response.split(sep=',')",
149 | "metadata": {},
150 | "execution_count": null,
151 | "outputs": []
152 | },
153 | {
154 | "cell_type": "code",
155 | "source": "topic_list = [\n \"nasa\", \"local government\", \"engineering\", \n \"employee satisfaction\", \"federal government\"\n]",
156 | "metadata": {},
157 | "execution_count": null,
158 | "outputs": []
159 | },
160 | {
161 | "cell_type": "markdown",
162 | "source": "## Make a news alert for certain topics",
163 | "metadata": {}
164 | },
165 | {
166 | "cell_type": "code",
167 | "source": "prompt = f\"\"\"\nDetermine whether each item in the following list of \\\ntopics is a topic in the text below, which\nis delimited with triple backticks.\n\nGive your answer as list with 0 or 1 for each topic.\\\n\nList of topics: {\", \".join(topic_list)}\n\nText sample: '''{story}'''\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
168 | "metadata": {},
169 | "execution_count": null,
170 | "outputs": []
171 | },
172 | {
173 | "cell_type": "code",
174 | "source": "topic_dict = {i.split(': ')[0]: int(i.split(': ')[1]) for i in response.split(sep='\\n')}\nif topic_dict['nasa'] == 1:\n print(\"ALERT: New NASA story!\")",
175 | "metadata": {},
176 | "execution_count": null,
177 | "outputs": []
178 | },
179 | {
180 | "cell_type": "markdown",
181 | "source": "## Try experimenting on your own!",
182 | "metadata": {}
183 | },
184 | {
185 | "cell_type": "code",
186 | "source": "",
187 | "metadata": {},
188 | "execution_count": null,
189 | "outputs": []
190 | }
191 | ]
192 | }
--------------------------------------------------------------------------------
/ChatGPT-Notebook5.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "language_info": {
4 | "codemirror_mode": {
5 | "name": "python",
6 | "version": 3
7 | },
8 | "file_extension": ".py",
9 | "mimetype": "text/x-python",
10 | "name": "python",
11 | "nbconvert_exporter": "python",
12 | "pygments_lexer": "ipython3",
13 | "version": "3.8"
14 | },
15 | "kernelspec": {
16 | "name": "python",
17 | "display_name": "Python (Pyodide)",
18 | "language": "python"
19 | }
20 | },
21 | "nbformat_minor": 4,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": "# Transforming\n\nIn this notebook, we will explore how to use Large Language Models for text transformation tasks such as language translation, spelling and grammar checking, tone adjustment, and format conversion.\n\n## Setup",
27 | "metadata": {}
28 | },
29 | {
30 | "cell_type": "code",
31 | "source": "import openai\nimport os\n\nfrom dotenv import load_dotenv, find_dotenv\n_ = load_dotenv(find_dotenv()) # read local .env file\n\nopenai.api_key = os.getenv('OPENAI_API_KEY')",
32 | "metadata": {},
33 | "execution_count": null,
34 | "outputs": []
35 | },
36 | {
37 | "cell_type": "code",
38 | "source": "def get_completion(prompt, model=\"gpt-3.5-turbo\", temperature=0): \n messages = [{\"role\": \"user\", \"content\": prompt}]\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=temperature, \n )\n return response.choices[0].message[\"content\"]",
39 | "metadata": {},
40 | "execution_count": null,
41 | "outputs": []
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "source": "## Translation\n\nChatGPT is trained with sources in many languages. This gives the model the ability to do translation. Here are some examples of how to use this capability.",
46 | "metadata": {}
47 | },
48 | {
49 | "cell_type": "code",
50 | "source": "prompt = f\"\"\"\nTranslate the following English text to Spanish: \\ \n```Hi, I would like to order a blender```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
51 | "metadata": {},
52 | "execution_count": null,
53 | "outputs": []
54 | },
55 | {
56 | "cell_type": "code",
57 | "source": "prompt = f\"\"\"\nTell me which language this is: \n```Combien coûte le lampadaire?```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
58 | "metadata": {},
59 | "execution_count": null,
60 | "outputs": []
61 | },
62 | {
63 | "cell_type": "code",
64 | "source": "prompt = f\"\"\"\nTranslate the following text to French and Spanish\nand English pirate: \\\n```I want to order a basketball```\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
65 | "metadata": {},
66 | "execution_count": null,
67 | "outputs": []
68 | },
69 | {
70 | "cell_type": "code",
71 | "source": "prompt = f\"\"\"\nTranslate the following text to Spanish in both the \\\nformal and informal forms: \n'Would you like to order a pillow?'\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
72 | "metadata": {},
73 | "execution_count": null,
74 | "outputs": []
75 | },
76 | {
77 | "cell_type": "markdown",
78 | "source": "### Universal Translator\nImagine you are in charge of IT at a large multinational e-commerce company. Users are messaging you with IT issues in all their native languages. Your staff is from all over the world and speaks only their native languages. You need a universal translator!",
79 | "metadata": {}
80 | },
81 | {
82 | "cell_type": "code",
83 | "source": "user_messages = [\n \"La performance du système est plus lente que d'habitude.\", # System performance is slower than normal \n \"Mi monitor tiene píxeles que no se iluminan.\", # My monitor has pixels that are not lighting\n \"Il mio mouse non funziona\", # My mouse is not working\n \"Mój klawisz Ctrl jest zepsuty\", # My keyboard has a broken control key\n \"我的屏幕在闪烁\" # My screen is flashing\n] ",
84 | "metadata": {},
85 | "execution_count": null,
86 | "outputs": []
87 | },
88 | {
89 | "cell_type": "code",
90 | "source": "for issue in user_messages:\n prompt = f\"Tell me what language this is: ```{issue}```\"\n lang = get_completion(prompt)\n print(f\"Original message ({lang}): {issue}\")\n\n prompt = f\"\"\"\n Translate the following text to English \\\n and Korean: ```{issue}```\n \"\"\"\n response = get_completion(prompt)\n print(response, \"\\n\")",
91 | "metadata": {},
92 | "execution_count": null,
93 | "outputs": []
94 | },
95 | {
96 | "cell_type": "markdown",
97 | "source": "## Try it yourself!\nTry some translations on your own!",
98 | "metadata": {}
99 | },
100 | {
101 | "cell_type": "code",
102 | "source": "",
103 | "metadata": {},
104 | "execution_count": null,
105 | "outputs": []
106 | },
107 | {
108 | "cell_type": "markdown",
109 | "source": "## Tone Transformation\nWriting can vary based on the intended audience. ChatGPT can produce different tones.\n",
110 | "metadata": {}
111 | },
112 | {
113 | "cell_type": "code",
114 | "source": "prompt = f\"\"\"\nTranslate the following from slang to a business letter: \n'Dude, This is Joe, check out this spec on this standing lamp.'\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
115 | "metadata": {},
116 | "execution_count": null,
117 | "outputs": []
118 | },
119 | {
120 | "cell_type": "markdown",
121 | "source": "## Format Conversion\nChatGPT can translate between formats. The prompt should describe the input and output formats.",
122 | "metadata": {}
123 | },
124 | {
125 | "cell_type": "code",
126 | "source": "data_json = { \"resturant employees\" :[ \n {\"name\":\"Shyam\", \"email\":\"shyamjaiswal@gmail.com\"},\n {\"name\":\"Bob\", \"email\":\"bob32@gmail.com\"},\n {\"name\":\"Jai\", \"email\":\"jai87@gmail.com\"}\n]}\n\nprompt = f\"\"\"\nTranslate the following python dictionary from JSON to an HTML \\\ntable with column headers and title: {data_json}\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
127 | "metadata": {},
128 | "execution_count": null,
129 | "outputs": []
130 | },
131 | {
132 | "cell_type": "code",
133 | "source": "from IPython.display import display, Markdown, Latex, HTML, JSON\ndisplay(HTML(response))",
134 | "metadata": {},
135 | "execution_count": null,
136 | "outputs": []
137 | },
138 | {
139 | "cell_type": "markdown",
140 | "source": "## Spellcheck/Grammar check.\n\nHere are some examples of common grammar and spelling problems and the LLM's response. \n\nTo signal to the LLM that you want it to proofread your text, you instruct the model to 'proofread' or 'proofread and correct'.",
141 | "metadata": {}
142 | },
143 | {
144 | "cell_type": "code",
145 | "source": "text = [ \n \"The girl with the black and white puppies have a ball.\", # The girl has a ball.\n \"Yolanda has her notebook.\", # ok\n \"Its going to be a long day. Does the car need it’s oil changed?\", # Homonyms\n \"Their goes my freedom. There going to bring they’re suitcases.\", # Homonyms\n \"Your going to need you’re notebook.\", # Homonyms\n \"That medicine effects my ability to sleep. Have you heard of the butterfly affect?\", # Homonyms\n \"This phrase is to cherck chatGPT for speling abilitty\" # spelling\n]\nfor t in text:\n prompt = f\"\"\"Proofread and correct the following text\n and rewrite the corrected version. If you don't find\n and errors, just say \"No errors found\". Don't use \n any punctuation around the text:\n ```{t}```\"\"\"\n response = get_completion(prompt)\n print(response)",
146 | "metadata": {},
147 | "execution_count": null,
148 | "outputs": []
149 | },
150 | {
151 | "cell_type": "code",
152 | "source": "text = f\"\"\"\nGot this for my daughter for her birthday cuz she keeps taking \\\nmine from my room. Yes, adults also like pandas too. She takes \\\nit everywhere with her, and it's super soft and cute. One of the \\\nears is a bit lower than the other, and I don't think that was \\\ndesigned to be asymmetrical. It's a bit small for what I paid for it \\\nthough. I think there might be other options that are bigger for \\\nthe same price. It arrived a day earlier than expected, so I got \\\nto play with it myself before I gave it to my daughter.\n\"\"\"\nprompt = f\"proofread and correct this review: ```{text}```\"\nresponse = get_completion(prompt)\nprint(response)",
153 | "metadata": {},
154 | "execution_count": null,
155 | "outputs": []
156 | },
157 | {
158 | "cell_type": "code",
159 | "source": "from redlines import Redlines\n\ndiff = Redlines(text,response)\ndisplay(Markdown(diff.output_markdown))",
160 | "metadata": {},
161 | "execution_count": null,
162 | "outputs": []
163 | },
164 | {
165 | "cell_type": "code",
166 | "source": "prompt = f\"\"\"\nproofread and correct this review. Make it more compelling. \nEnsure it follows APA style guide and targets an advanced reader. \nOutput in markdown format.\nText: ```{text}```\n\"\"\"\nresponse = get_completion(prompt)\ndisplay(Markdown(response))",
167 | "metadata": {},
168 | "execution_count": null,
169 | "outputs": []
170 | },
171 | {
172 | "cell_type": "markdown",
173 | "source": "## Try it yourself!\nTry changing the instructions to form your own review.",
174 | "metadata": {}
175 | },
176 | {
177 | "cell_type": "code",
178 | "source": "",
179 | "metadata": {},
180 | "execution_count": null,
181 | "outputs": []
182 | },
183 | {
184 | "cell_type": "markdown",
185 | "source": "Thanks to the following sites:\n\nhttps://writingprompts.com/bad-grammar-examples/\n",
186 | "metadata": {}
187 | }
188 | ]
189 | }
--------------------------------------------------------------------------------
/ChatGPT-Notebook6.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "language_info": {
4 | "codemirror_mode": {
5 | "name": "python",
6 | "version": 3
7 | },
8 | "file_extension": ".py",
9 | "mimetype": "text/x-python",
10 | "name": "python",
11 | "nbconvert_exporter": "python",
12 | "pygments_lexer": "ipython3",
13 | "version": "3.8"
14 | },
15 | "kernelspec": {
16 | "name": "python",
17 | "display_name": "Python (Pyodide)",
18 | "language": "python"
19 | }
20 | },
21 | "nbformat_minor": 4,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": "# Expanding\nIn this lesson, you will generate customer service emails that are tailored to each customer's review.\n\n## Setup",
27 | "metadata": {}
28 | },
29 | {
30 | "cell_type": "code",
31 | "source": "import openai\nimport os\n\nfrom dotenv import load_dotenv, find_dotenv\n_ = load_dotenv(find_dotenv()) # read local .env file\n\nopenai.api_key = os.getenv('OPENAI_API_KEY')",
32 | "metadata": {},
33 | "execution_count": null,
34 | "outputs": []
35 | },
36 | {
37 | "cell_type": "code",
38 | "source": "def get_completion(prompt, model=\"gpt-3.5-turbo\",temperature=0): # Andrew mentioned that the prompt/ completion paradigm is preferable for this class\n messages = [{\"role\": \"user\", \"content\": prompt}]\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=temperature, # this is the degree of randomness of the model's output\n )\n return response.choices[0].message[\"content\"]",
39 | "metadata": {},
40 | "execution_count": null,
41 | "outputs": []
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "source": "## Customize the automated reply to a customer email",
46 | "metadata": {}
47 | },
48 | {
49 | "cell_type": "code",
50 | "source": "# given the sentiment from the lesson on \"inferring\",\n# and the original customer message, customize the email\nsentiment = \"negative\"\n\n# review for a blender\nreview = f\"\"\"\nSo, they still had the 17 piece system on seasonal \\\nsale for around $49 in the month of November, about \\\nhalf off, but for some reason (call it price gouging) \\\naround the second week of December the prices all went \\\nup to about anywhere from between $70-$89 for the same \\\nsystem. And the 11 piece system went up around $10 or \\\nso in price also from the earlier sale price of $29. \\\nSo it looks okay, but if you look at the base, the part \\\nwhere the blade locks into place doesn’t look as good \\\nas in previous editions from a few years ago, but I \\\nplan to be very gentle with it (example, I crush \\\nvery hard items like beans, ice, rice, etc. in the \\ \nblender first then pulverize them in the serving size \\\nI want in the blender then switch to the whipping \\\nblade for a finer flour, and use the cross cutting blade \\\nfirst when making smoothies, then use the flat blade \\\nif I need them finer/less pulpy). Special tip when making \\\nsmoothies, finely cut and freeze the fruits and \\\nvegetables (if using spinach-lightly stew soften the \\ \nspinach then freeze until ready for use-and if making \\\nsorbet, use a small to medium sized food processor) \\ \nthat you plan to use that way you can avoid adding so \\\nmuch ice if at all-when making your smoothie. \\\nAfter about a year, the motor was making a funny noise. \\\nI called customer service but the warranty expired \\\nalready, so I had to buy another one. FYI: The overall \\\nquality has gone done in these types of products, so \\\nthey are kind of counting on brand recognition and \\\nconsumer loyalty to maintain sales. Got it in about \\\ntwo days.\n\"\"\"",
51 | "metadata": {},
52 | "execution_count": null,
53 | "outputs": []
54 | },
55 | {
56 | "cell_type": "code",
57 | "source": "prompt = f\"\"\"\nYou are a customer service AI assistant.\nYour task is to send an email reply to a valued customer.\nGiven the customer email delimited by ```, \\\nGenerate a reply to thank the customer for their review.\nIf the sentiment is positive or neutral, thank them for \\\ntheir review.\nIf the sentiment is negative, apologize and suggest that \\\nthey can reach out to customer service. \nMake sure to use specific details from the review.\nWrite in a concise and professional tone.\nSign the email as `AI customer agent`.\nCustomer review: ```{review}```\nReview sentiment: {sentiment}\n\"\"\"\nresponse = get_completion(prompt)\nprint(response)",
58 | "metadata": {},
59 | "execution_count": null,
60 | "outputs": []
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "source": "## Remind the model to use details from the customer's email",
65 | "metadata": {}
66 | },
67 | {
68 | "cell_type": "code",
69 | "source": "prompt = f\"\"\"\nYou are a customer service AI assistant.\nYour task is to send an email reply to a valued customer.\nGiven the customer email delimited by ```, \\\nGenerate a reply to thank the customer for their review.\nIf the sentiment is positive or neutral, thank them for \\\ntheir review.\nIf the sentiment is negative, apologize and suggest that \\\nthey can reach out to customer service. \nMake sure to use specific details from the review.\nWrite in a concise and professional tone.\nSign the email as `AI customer agent`.\nCustomer review: ```{review}```\nReview sentiment: {sentiment}\n\"\"\"\nresponse = get_completion(prompt, temperature=0.7)\nprint(response)",
70 | "metadata": {},
71 | "execution_count": null,
72 | "outputs": []
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "source": "## Try experimenting on your own!",
77 | "metadata": {}
78 | },
79 | {
80 | "cell_type": "code",
81 | "source": "",
82 | "metadata": {},
83 | "execution_count": null,
84 | "outputs": []
85 | }
86 | ]
87 | }
--------------------------------------------------------------------------------
/ChatGPT-Notebook7.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "language_info": {
4 | "codemirror_mode": {
5 | "name": "python",
6 | "version": 3
7 | },
8 | "file_extension": ".py",
9 | "mimetype": "text/x-python",
10 | "name": "python",
11 | "nbconvert_exporter": "python",
12 | "pygments_lexer": "ipython3",
13 | "version": "3.8"
14 | },
15 | "kernelspec": {
16 | "name": "python",
17 | "display_name": "Python (Pyodide)",
18 | "language": "python"
19 | }
20 | },
21 | "nbformat_minor": 4,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": "# The Chat Format\n\nIn this notebook, you will explore how you can utilize the chat format to have extended conversations with chatbots personalized or specialized for specific tasks or behaviors.\n\n## Setup",
27 | "metadata": {}
28 | },
29 | {
30 | "cell_type": "code",
31 | "source": "import os\nimport openai\nfrom dotenv import load_dotenv, find_dotenv\n_ = load_dotenv(find_dotenv()) # read local .env file\n\nopenai.api_key = os.getenv('OPENAI_API_KEY')",
32 | "metadata": {},
33 | "execution_count": null,
34 | "outputs": []
35 | },
36 | {
37 | "cell_type": "code",
38 | "source": "def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n messages = [{\"role\": \"user\", \"content\": prompt}]\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=0, # this is the degree of randomness of the model's output\n )\n return response.choices[0].message[\"content\"]\n\ndef get_completion_from_messages(messages, model=\"gpt-3.5-turbo\", temperature=0):\n response = openai.ChatCompletion.create(\n model=model,\n messages=messages,\n temperature=temperature, # this is the degree of randomness of the model's output\n )\n# print(str(response.choices[0].message))\n return response.choices[0].message[\"content\"]",
39 | "metadata": {},
40 | "execution_count": null,
41 | "outputs": []
42 | },
43 | {
44 | "cell_type": "code",
45 | "source": "messages = [ \n{'role':'system', 'content':'You are an assistant that speaks like Shakespeare.'}, \n{'role':'user', 'content':'tell me a joke'}, \n{'role':'assistant', 'content':'Why did the chicken cross the road'}, \n{'role':'user', 'content':'I don\\'t know'} ]",
46 | "metadata": {},
47 | "execution_count": null,
48 | "outputs": []
49 | },
50 | {
51 | "cell_type": "code",
52 | "source": "response = get_completion_from_messages(messages, temperature=1)\nprint(response)",
53 | "metadata": {},
54 | "execution_count": null,
55 | "outputs": []
56 | },
57 | {
58 | "cell_type": "code",
59 | "source": "messages = [ \n{'role':'system', 'content':'You are friendly chatbot.'}, \n{'role':'user', 'content':'Hi, my name is Isa'} ]\nresponse = get_completion_from_messages(messages, temperature=1)\nprint(response)",
60 | "metadata": {},
61 | "execution_count": null,
62 | "outputs": []
63 | },
64 | {
65 | "cell_type": "code",
66 | "source": "messages = [ \n{'role':'system', 'content':'You are friendly chatbot.'}, \n{'role':'user', 'content':'Yes, can you remind me, What is my name?'} ]\nresponse = get_completion_from_messages(messages, temperature=1)\nprint(response)",
67 | "metadata": {},
68 | "execution_count": null,
69 | "outputs": []
70 | },
71 | {
72 | "cell_type": "code",
73 | "source": "messages = [ \n{'role':'system', 'content':'You are friendly chatbot.'},\n{'role':'user', 'content':'Hi, my name is Isa'},\n{'role':'assistant', 'content': \"Hi Isa! It's nice to meet you. \\\nIs there anything I can help you with today?\"},\n{'role':'user', 'content':'Yes, you can remind me, What is my name?'} ]\nresponse = get_completion_from_messages(messages, temperature=1)\nprint(response)",
74 | "metadata": {},
75 | "execution_count": null,
76 | "outputs": []
77 | },
78 | {
79 | "cell_type": "markdown",
80 | "source": "# OrderBot\nWe can automate the collection of user prompts and assistant responses to build a OrderBot. The OrderBot will take orders at a pizza restaurant. ",
81 | "metadata": {}
82 | },
83 | {
84 | "cell_type": "code",
85 | "source": "def collect_messages(_):\n prompt = inp.value_input\n inp.value = ''\n context.append({'role':'user', 'content':f\"{prompt}\"})\n response = get_completion_from_messages(context) \n context.append({'role':'assistant', 'content':f\"{response}\"})\n panels.append(\n pn.Row('User:', pn.pane.Markdown(prompt, width=600)))\n panels.append(\n pn.Row('Assistant:', pn.pane.Markdown(response, width=600, style={'background-color': '#F6F6F6'})))\n \n return pn.Column(*panels)\n",
86 | "metadata": {},
87 | "execution_count": null,
88 | "outputs": []
89 | },
90 | {
91 | "cell_type": "code",
92 | "source": "import panel as pn # GUI\npn.extension()\n\npanels = [] # collect display \n\ncontext = [ {'role':'system', 'content':\"\"\"\nYou are OrderBot, an automated service to collect orders for a pizza restaurant. \\\nYou first greet the customer, then collects the order, \\\nand then asks if it's a pickup or delivery. \\\nYou wait to collect the entire order, then summarize it and check for a final \\\ntime if the customer wants to add anything else. \\\nIf it's a delivery, you ask for an address. \\\nFinally you collect the payment.\\\nMake sure to clarify all options, extras and sizes to uniquely \\\nidentify the item from the menu.\\\nYou respond in a short, very conversational friendly style. \\\nThe menu includes \\\npepperoni pizza 12.95, 10.00, 7.00 \\\ncheese pizza 10.95, 9.25, 6.50 \\\neggplant pizza 11.95, 9.75, 6.75 \\\nfries 4.50, 3.50 \\\ngreek salad 7.25 \\\nToppings: \\\nextra cheese 2.00, \\\nmushrooms 1.50 \\\nsausage 3.00 \\\ncanadian bacon 3.50 \\\nAI sauce 1.50 \\\npeppers 1.00 \\\nDrinks: \\\ncoke 3.00, 2.00, 1.00 \\\nsprite 3.00, 2.00, 1.00 \\\nbottled water 5.00 \\\n\"\"\"} ] # accumulate messages\n\n\ninp = pn.widgets.TextInput(value=\"Hi\", placeholder='Enter text here…')\nbutton_conversation = pn.widgets.Button(name=\"Chat!\")\n\ninteractive_conversation = pn.bind(collect_messages, button_conversation)\n\ndashboard = pn.Column(\n inp,\n pn.Row(button_conversation),\n pn.panel(interactive_conversation, loading_indicator=True, height=300),\n)\n\ndashboard",
93 | "metadata": {},
94 | "execution_count": null,
95 | "outputs": []
96 | },
97 | {
98 | "cell_type": "code",
99 | "source": "messages = context.copy()\nmessages.append(\n{'role':'system', 'content':'create a json summary of the previous food order. Itemize the price for each item\\\n The fields should be 1) pizza, include size 2) list of toppings 3) list of drinks, include size 4) list of sides include size 5)total price '}, \n)\n #The fields should be 1) pizza, price 2) list of toppings 3) list of drinks, include size include price 4) list of sides include size include price, 5)total price '}, \n\nresponse = get_completion_from_messages(messages, temperature=0)\nprint(response)",
100 | "metadata": {},
101 | "execution_count": null,
102 | "outputs": []
103 | },
104 | {
105 | "cell_type": "markdown",
106 | "source": "## Try experimenting on your own!\n\nYou can modify the menu or instructions to create your own orderbot!",
107 | "metadata": {}
108 | },
109 | {
110 | "cell_type": "code",
111 | "source": "",
112 | "metadata": {},
113 | "execution_count": null,
114 | "outputs": []
115 | }
116 | ]
117 | }
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ChatGPT-Prompt-Engineering-for-Developers
2 |
3 | All notebooks collected from the (currently) free course [ChatGPT Prompt Engineering for Developers](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) offered by DeepLearning.AI and OpenAI
4 |
5 | ## Table of Contents
6 |
7 | 1. Guidelines for Prompting
8 | - [ChatGPT Notebook 1](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook1.ipynb)
9 | 2. Iterative Prompt Development
10 | - [ChatGPT Notebook 2](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook2.ipynb)
11 | 3. Summarizing
12 | - [ChatGPT Notebook 3](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook3.ipynb)
13 | 4. Inferring
14 | - [ChatGPT Notebook 4](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook4.ipynb)
15 | 5. Transforming
16 | - [ChatGPT Notebook 5](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook5.ipynb)
17 | 6. Expanding
18 | - [ChatGPT Notebook 6](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook6.ipynb)
19 | 7. The Chat Format
20 | - [ChatGPT Notebook 7](https://github.com/ginny100/ChatGPT-Prompt-Engineering-for-Developers/blob/master/ChatGPT-Notebook7.ipynb)
21 |
--------------------------------------------------------------------------------