├── .gitignore
├── LICENSE
├── README.md
├── documents
└── conda_jupyter.md
├── images
├── cpl_example.png
├── csv_example.png
├── demo.png
└── game24.png
├── langpy
├── __init__.py
├── auto.py
├── base.py
├── colab_lp.py
├── data_loader
│ ├── __init__.py
│ ├── csv_loader.py
│ ├── json_loader.py
│ └── loader.py
└── jupyter_lp.py
├── requirements.txt
├── research
└── nlu
│ ├── direct_NLEP
│ ├── direct_NLEP_execution.py
│ └── direct_NLEP_generation.py
│ ├── human_generated_tree.py
│ ├── readme.md
│ └── tree_NLEP
│ ├── tree_NLEP_execution.py
│ └── tree_NLEP_generation.py
├── server
├── nlep_server.py
├── prompt.txt
└── readme.md
└── setup.py
/.gitignore:
--------------------------------------------------------------------------------
1 | *.DS_Store
2 | *.pyc
3 | *build*
4 | *dist*
5 | *egg-info*
6 | .vscode/
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Hongyin Luo
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # LangCode - Enable NLEP Reasoning for LLMs
2 | - Implementation and iPython toolkit for the paper [Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning](https://arxiv.org/pdf/2309.10814.pdf).
3 | - Improved interpretability and problem solving ability with **natural language embedded programs (NLEP)**, a hybrid language-symbolic reasoning technique.
4 | - Multi-round, interactive, smooth programming on Colab and Jupyter notebook.
5 |
6 |
7 |
8 |

9 |
10 | Representing the Language of Thought
11 |
12 | [Hongyin Luo](https://luohongyin.github.io/) @ MIT *[hyluo at mit dot edu]*
13 |
14 |
15 |
16 | ## NLEP: Natural Language Embedded Programming
17 | Auto-regressive large language models (LLMs) generate natural and programming languages by scoring next tokens, sampling flexible but nondeterministic text.
18 |
19 | The Python interpreter reads structured source code and outputs mostly *deterministic* results, ensuring mathematical correctness in reasoning and computation based on the given Python code.
20 |
21 | `NLEP` is an end-to-end, hybrid language-symbolic framework with an LLM and a code interpreter.
22 | - The LLM plans the reasoning by generating a step-by-step program.
23 | - The code interpreter executes the generated program and prints the desired response.
24 | - No further editing of the program's standard output is needed.
25 |
26 | `NLEP` can deal with natural language, math, symbolic, and programming tasks.
27 |
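The generate-then-execute loop above can be sketched in a few lines of Python. This is a minimal illustration only: the `generated_program` string is a hypothetical stand-in for the LLM call, not the library's actual implementation.

```python
import io
from contextlib import redirect_stdout

# Hypothetical NLEP an LLM might return for "What is 17 * 23?"
generated_program = '''
# Step 1: Import necessary built-in libraries
# No need to import

# Step 2: Define constant variables for the task
a, b = 17, 23

# Step 3: Print an answer in natural language.
print(f"The product of {a} and {b} is {a * b}.")
'''

# Interpreter side: execute the program and capture its standard output,
# which is returned to the user verbatim -- no further editing required.
buffer = io.StringIO()
with redirect_stdout(buffer):
    exec(generated_program)
answer = buffer.getvalue().strip()
print(answer)
```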
28 | Game of 24 Example
29 |
30 | - Instruction: `What is the correct mathematical expression consisting of the numbers {4, 9, 10, 13} in any order, along with the parentheses and arithmetic operations that results in 24?`
31 | - Program generated by LLM
32 | ```python
33 | # Step 1: Import necessary built-in libraries
34 | import itertools
35 | import operator
36 |
37 | # Step 2: Define necessary functions that generally solve this type of problem
38 | def find_expression(numbers, target):
39 |     operations = [operator.add, operator.sub, operator.mul, operator.truediv]
40 |     ops = {operator.add: '+', operator.sub: '-', operator.mul: '*', operator.truediv: '/'}
41 |
42 |     for permutation in itertools.permutations(numbers):
43 |         for ops_combo in itertools.product(operations, repeat=len(numbers)-1):
44 |             expression = f"({permutation[0]} {ops[ops_combo[0]]} {permutation[1]})"
45 |             for i in range(2, len(numbers)):
46 |                 expression = f"({expression} {ops[ops_combo[i-1]]} {permutation[i]})"
47 |             try:
48 |                 if eval(expression) == target:
49 |                     return expression.replace("/", "//")
50 |             except ZeroDivisionError:
51 |                 continue
52 |     return None
53 |
54 | # Step 3: Define constant variables for the task
55 | numbers = [4, 9, 10, 13]
56 | target = 24
57 |
58 | # Step 4: Print an answer in natural language.
59 | expression = find_expression(numbers, target)
60 | if expression:
61 |     print(f"The correct mathematical expression that results in 24 is {expression}.")
62 | else:
63 |     print("There is no correct mathematical expression that results in 24 using these numbers.")
64 | ```
65 |
66 | - Result: `The correct mathematical expression that results in 24 is (((9 + 10) - 13) * 4).`
67 |
68 |
69 |
70 | Question answering example
71 |
72 | - Instruction: `How many keystrokes are needed to type the numbers from 1 to 104892?`
73 | - Program generated by LLM
74 | ```python
75 | # Step 1: Import necessary built-in libraries
76 | # No need to import
77 |
78 | # Step 2: Define necessary functions that generally solve this type of problem
79 | def calculate_keystrokes(number):
80 |     keystrokes = 0
81 |     for i in range(1, number + 1):
82 |         keystrokes += len(str(i))
83 |     return keystrokes
84 |
85 | # Step 3: Define constant variables for the task
86 | number = 104892
87 |
88 | # Step 4: Print an answer in natural language.
89 | keystrokes = calculate_keystrokes(number)
90 | print(f"To type the numbers from 1 to {number}, {keystrokes} keystrokes are needed.")
91 | ```
92 | - Result: `To type the numbers from 1 to 104892, 518247 keystrokes are needed.`
93 |
94 |
95 | Pytorch programming example
96 |
97 | - Instruction: `Build a multi-layer perceptron with two hidden layers and dropout in Pytorch.`
98 | - Program generated by LLM
99 | ```python
100 | # Step 1: Import necessary built-in libraries
101 | import torch
102 | import torch.nn as nn
103 | import torch.nn.functional as F
104 |
105 | # Step 2: Define necessary classes that generally solve this type of problem
106 | class MultiLayerPerceptron(nn.Module):
107 |     def __init__(self, input_size, hidden_size1, hidden_size2, num_classes, dropout_rate):
108 |         super(MultiLayerPerceptron, self).__init__()
109 |         self.fc1 = nn.Linear(input_size, hidden_size1)
110 |         self.fc2 = nn.Linear(hidden_size1, hidden_size2)
111 |         self.fc3 = nn.Linear(hidden_size2, num_classes)
112 |         self.dropout = nn.Dropout(dropout_rate)
113 |
114 |     def forward(self, x):
115 |         out = F.relu(self.fc1(x))
116 |         out = self.dropout(out)
117 |         out = F.relu(self.fc2(out))
118 |         out = self.dropout(out)
119 |         out = self.fc3(out)
120 |         return out
121 |
122 | # Step 3: Define constant variables for the task
123 | input_size = 784 # 28x28 pixels for MNIST dataset
124 | hidden_size1 = 500
125 | hidden_size2 = 100
126 | num_classes = 10 # 10 digits for MNIST dataset
127 | dropout_rate = 0.5
128 |
129 | # Step 4: Instantiate the network
130 | net = MultiLayerPerceptron(input_size, hidden_size1, hidden_size2, num_classes, dropout_rate)
131 |
132 | # Print out the network architecture
133 | print(net)
134 | ```
135 | - Result
136 | ```
137 | MultiLayerPerceptron(
138 |   (fc1): Linear(in_features=784, out_features=500, bias=True)
139 |   (fc2): Linear(in_features=500, out_features=100, bias=True)
140 |   (fc3): Linear(in_features=100, out_features=10, bias=True)
141 |   (dropout): Dropout(p=0.5, inplace=False)
142 | )
143 | ```
144 |
145 |
146 |
147 | Multi-turn conversation example
148 |
149 | - Instruction 1: `who is the first emperor of the Tang dynasty?`
150 | - First program generated by LLM
151 | ```python
152 | # Step 1: Import necessary built-in libraries
153 | # No need to import
154 |
155 | # Step 2: Define necessary functions that generally solve this type of problem
156 | # No need to define a function
157 |
158 | # Step 3: Define constant variables for the task
159 | first_emperor_tang_dynasty = "Emperor Gaozu"
160 |
161 | # Step 4: Print an answer in natural language.
162 | print(f"The first emperor of the Tang dynasty is {first_emperor_tang_dynasty}.")
163 | ```
164 | - Result 1: `The first emperor of the Tang dynasty is Emperor Gaozu.`
165 | - Instruction 2: `Tell me more about him.`
166 | - Second program generated by LLM
167 | ```python
168 | # Step 1: Import necessary built-in libraries
169 | # No need to import
170 |
171 | # Step 2: Define necessary functions that generally solve this type of problem
172 | # No need to define a function
173 |
174 | # Step 3: Define constant variables for the task
175 | first_emperor_tang_dynasty_info = {
176 |     "name": "Emperor Gaozu",
177 |     "born": "566",
178 |     "died": "635",
179 |     "reign": "618-626",
180 |     "full_name": "Li Yuan",
181 |     "era_name": "Wude",
182 |     "children": "27, including Emperor Taizong",
183 |     "notable_events": "founded the Tang dynasty, expanded the empire, implemented policies promoting economic prosperity and cultural innovation"
184 | }
185 |
186 | # Step 4: Print an answer in natural language.
187 | print(f"Emperor Gaozu, born in the year {first_emperor_tang_dynasty_info['born']} and died in the year {first_emperor_tang_dynasty_info['died']}, was the first emperor of the Tang dynasty. His reign, from {first_emperor_tang_dynasty_info['reign']}, is known as the Wude era. His given name was Li Yuan. He had 27 children, including his successor, Emperor Taizong. During his reign, he not only founded the Tang dynasty but also expanded the empire and implemented policies that promoted economic prosperity and cultural innovation.")
188 | ```
189 | - Result 2: `Emperor Gaozu, born in the year 566 and died in the year 635, was the first emperor of the Tang dynasty. His reign, from 618-626, is known as the Wude era. His given name was Li Yuan. He had 27 children, including his successor, Emperor Taizong. During his reign, he not only founded the Tang dynasty but also expanded the empire and implemented policies that promoted economic prosperity and cultural innovation.`
190 |
191 |
192 |
193 | ## Contents
194 | - [Supported Environments](#supported-environments)
195 | - [Customizing NLEP Generation API](#customizing-nlep-generation-api)
196 | - [Quick Start with Colab](#quick-start-with-colab)
197 | - [Installation](#installation)
198 | - [Define LangPy agent](#define-langpy-agent)
199 | - [Code generation](#generated-code-to-answer-any-question)
200 | - [Run generated code](#run-the-generated-code)
201 | - [Code completion](#code-completion)
202 | - [CSV file processing](#csv-file-processing)
203 | - [Chat with Python Through Code Generation](#chat-with-python-through-code-generation)
204 | - [Hierarchical Instruction Following by Code Completion](#hierarchical-instruction-following-through-code-completion)
205 | - [Contact](#contact)
206 |
207 | ## Supported Environments
208 | - [Option 1] `Colab` - Recommended! You are already good to go.
209 |   - The [example Colab notebook](https://colab.research.google.com/drive/132vl1t3MJlq8ekzMNycGejvBTcwoDcDM?usp=sharing) might be helpful!
210 | - [Option 2] `Jupyter notebook 6.X` deployment
211 |   - [Create a conda environment and launch Jupyter notebook](documents/conda_jupyter.md)
212 | - LangCode does not work in VS Code for now.
213 |
214 | ## Customizing NLEP Generation API
215 | By default, we provide an API server that generates NLEPs without saving users' API keys. We will continue improving the prompt strategy and the quality of generated NLEPs. However, we also provide an option for customized servers.
216 |
217 | Please refer to [LangCode/server/](https://github.com/luohongyin/LangCode/tree/main/server) for the server setup. We provide complete code and a minimal prompt to generate high-quality NLEPs for different tasks. Use the following command to switch to your preferred server.
218 | ```python
219 | clp.config_end_point()
220 | # Then input your server url in the following format:
221 | # http://{HOSTNAME}:{PORT}/items/0
222 | ```
223 |
224 | Our server times out after 30 seconds since the GPT-4 API is not efficient enough, but the custom server we provide does not have this limitation.
225 |
226 | ## Quick Start with Colab
227 |
228 | ### Installation
229 | Install the LangCode package in the IPython notebook by running the following command in a code cell.
230 | ```bash
231 | !pip install langpy-notebook
232 | ```
233 | As an early release, we first support the `Python` programming language with the `LangPy` class. We also look forward to `LangJulia`, `LangR`, `LangGo`, etc.
234 |
235 | ### Define LangPy agent
236 | Import the library and define a LangPy agent. The `langpy.auto.get_langpy` function automatically identifies whether the backend is Jupyter 6 or Colab.
237 | ```python
238 | from langpy.auto import get_langpy
239 |
240 | # clp: an instance of ColabLangPy
241 | clp = get_langpy(
242 | api_key='YOUR-API-KEY',
243 | platform='gpt',
244 | model='gpt-4'
245 | )
246 | ```
247 | The platforms we support are `gpt` (OpenAI) and `palm` (Google), and you can select models on these platforms. For example, `platform='palm', model='text-bison-001'` uses Google's `text-bison-001`.
248 |
249 | ### Generated code to answer *any* question
250 | Add the following code to a code cell, and run it to generate a program that answers your question. For example, the **Game of 24** task.
251 |
252 | ```python
253 | # hist_mode: controls history reading to enable multi-turn programming
254 |
255 | # hist_mode = 'all' (default) -> read all cell history
256 | # hist_mode = 'single' -> only read the current instruction
257 | # hist_mode = INTEGER -> read {INTEGER} history cells.
258 | # hist_mode = 0 is equivalent to hist_mode = 'single'
259 |
260 | numbers = '{4, 9, 10, 13}'
261 | target = 24
262 | clp.generate(
263 |     f'What is the correct mathematical expression consisting of the numbers {numbers} in any order, along with the parentheses and arithmetic operations that results in {target}?',
264 |     hist_mode = 'single'
265 | )
266 | ```
267 | The generated program will be placed in
268 | - a scratch cell (Colab), or
269 | - the next code cell (Jupyter notebook 6)
270 |
271 | ### Run the Generated Code
272 | Run the generated code and you'll get the answer in **natural language**.
273 |
274 |
275 |
276 | We did not use any Game of 24 examples to prompt GPT-4, but it still finds the correct answer `(4 - 10) * (9 - 13)` while sampling only **one** output! This is more efficient than tree-of-thoughts.
277 |
278 | ### Code Completion
279 | The `LangPy` agent can also complete code that is not yet finished.
280 |
281 | Running the code in the upper cell completes the code in the lower cell, producing the following program
282 | ```python
283 | # Step 1: Import necessary built-in libraries
284 | import torch
285 | import torch.nn as nn
286 |
287 | # Step 2: Define necessary functions that generally solve this type of problem
288 | class MLP(nn.Module):
289 |     def __init__(self):
290 |         super(MLP, self).__init__()
291 |         self.layers = nn.Sequential(
292 |             nn.Linear(10, 5),
293 |             nn.ReLU(),
294 |             nn.Linear(5, 2)
295 |         )
296 |
297 |     def forward(self, x):
298 |         x = self.layers(x)
299 |         return x
300 |
301 | # Step 3: Instantiate the defined class
302 | mlp = MLP()
303 |
304 | # Step 4: Print an answer in natural language.
305 | print(f"A multi-layer perceptron (MLP) has been defined. It consists of an input layer, a hidden layer with 5 neurons activated by ReLU function, and an output layer with 2 neurons. The input layer has 10 neurons, representing the dimension of the input vectors.")
306 | ```
307 | Note that both the code and comments in the generated cell can be edited for another completion, enabling more flexible and direct instructions.
308 |
309 | ### CSV File Processing
310 | As an early release, we provide basic CSV file processing ability. Running the following code,
311 | ```python
312 | clp.preview_data(file_name = 'example_data.csv', data_type = 'csv')
313 | ```
314 | a new code cell will be inserted:
315 | ```python
316 | # First three rows of the input file:
317 | # placeName,placeDcid,xDate,xValue-DifferenceRelativeToBaseDate2006_Max_Temperature_RCP45,yDate,yValue-Percent_Person_WithCoronaryHeartDisease,xPopulation-Count_Person,yPopulation-Count_Person
318 | # "Autauga County, AL",geoId/01001,2050-06,-1.15811100000001,2020,6.3,59095,56145
319 | # "Baldwin County, AL",geoId/01003,2050-06,-1.073211,2020,5.9,239294,229287
320 | # ...
321 | file_name = 'example_data.csv'
322 | input_file = open(file_name)
323 | ```
324 | Running this code snippet, you can further analyze the data in the CSV file using LangCode. For example, ask LangCode to generate code for visualization:
325 | ```python
326 | # hist_mode = 'all' allows LangCode to read previous cells
327 |
328 | clp.generate('Visualize the correlation of xData and yData of Alabama in the file.', hist_mode = 'all')
329 | ```
330 | The generated code and execution results are shown below.
331 |
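To give a concrete idea of what such generated analysis code looks like, here is a hand-written sketch that parses the two sample rows shown above and pairs the temperature (`xValue`) and heart-disease (`yValue`) columns for the Alabama counties. This is illustrative only; the code LangCode actually generates may differ.

```python
import csv
import io

# The two sample rows of example_data.csv shown above.
example_data = '''placeName,placeDcid,xDate,xValue-DifferenceRelativeToBaseDate2006_Max_Temperature_RCP45,yDate,yValue-Percent_Person_WithCoronaryHeartDisease,xPopulation-Count_Person,yPopulation-Count_Person
"Autauga County, AL",geoId/01001,2050-06,-1.15811100000001,2020,6.3,59095,56145
"Baldwin County, AL",geoId/01003,2050-06,-1.073211,2020,5.9,239294,229287'''

# Pair the x and y values for counties in Alabama.
reader = csv.DictReader(io.StringIO(example_data))
pairs = [
    (float(row['xValue-DifferenceRelativeToBaseDate2006_Max_Temperature_RCP45']),
     float(row['yValue-Percent_Person_WithCoronaryHeartDisease']))
    for row in reader
    if row['placeName'].endswith(', AL')
]
print(pairs)
```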
332 |
333 | ## Chat with Python Through Code Generation
334 | We currently offer three parameters for `LangPy.generate(instruction, hist_mode = 'all', http = 'optional')`:
335 | - `instruction [str]`: The input question / request in natural language.
336 | - `hist_mode [str or int]`: Whether to read previous code and markdown cells.
337 |   - `hist_mode = 'all'`: Read all previous cells.
338 |   - `hist_mode = 'single'`: Only read the instruction in the current cell.
339 |   - `hist_mode = [int]`: The number of latest history cells to read.
340 | - `http [str]`: Whether to allow the generated code to send HTTP requests.
341 |   - `http = 'optional'`: (Default) LangCode decides itself whether to send HTTP requests.
342 |   - `http = 'forbid'`: LangCode will try to avoid generating HTTP-requesting code.
343 |   - `http = 'force'`: LangCode will try to generate code that sends HTTP requests to solve the given question.
344 |
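The `hist_mode` options select history cells in the same way as `ColabLangPy.get_history_prompt` in `langpy/colab_lp.py`; a simplified standalone sketch of that selection logic:

```python
def build_prompt(code_list, cur_idx, instruction, hist_mode='all'):
    """Prepend selected history cells to the instruction
    (simplified from ColabLangPy.get_history_prompt)."""
    if hist_mode == 'all':
        history = code_list[:cur_idx]          # all previous cells
    elif hist_mode == 'single':
        history = []                           # only the instruction itself
    elif isinstance(hist_mode, int):
        history = code_list[cur_idx - hist_mode:cur_idx]  # last N cells
    else:
        raise ValueError(f'hist_mode = {hist_mode} not supported.')
    return '\n\n'.join(history + [instruction])

# Three notebook cells; the instruction lives in the cell at index 2.
cells = ['# cell 0', '# cell 1', '# cell 2']
print(build_prompt(cells, 2, 'Plot the data.', hist_mode=1))
```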
345 | ## Hierarchical Instruction Following Through Code Completion
346 | In the previous example,
347 |
348 | LangCode implemented a multi-layer perceptron (MLP) with PyTorch using the following code
349 | ```python
350 | # Step 1: Import necessary built-in libraries
351 | import torch
352 | import torch.nn as nn
353 |
354 | # Step 2: Define necessary functions that generally solve this type of problem
355 | class MLP(nn.Module):
356 |     def __init__(self):
357 |         super(MLP, self).__init__()
358 |         self.layers = nn.Sequential(
359 |             nn.Linear(10, 5),
360 |             nn.ReLU(),
361 |             nn.Linear(5, 2)
362 |         )
363 |
364 |     def forward(self, x):
365 |         x = self.layers(x)
366 |         return x
367 |
368 | # Step 3: Instantiate the defined class
369 | mlp = MLP()
370 |
371 | # Step 4: Print an answer in natural language.
372 | print(f"A multi-layer perceptron (MLP) has been defined. It consists of an input layer, a hidden layer with 5 neurons activated by ReLU function, and an output layer with 2 neurons. The input layer has 10 neurons, representing the dimension of the input vectors.")
373 | ```
374 |
375 | Unlike ChatGPT, which requires the user to write a complete instruction at once, LangCode allows adding multiple instructions in different places for one task. Editing the code for completion can guide the model to generate different code. For example,
376 |
377 | ```python
378 | # Step 1: Import necessary built-in libraries
379 | import torch
380 | import torch.nn as nn
381 |
382 | # Step 2: Define a multi-layer perceptron that has three linear layers and uses tanh activation function.
383 | ```
384 | The code completion result would be
385 | ```python
386 | # Step 1: Import necessary built-in libraries
387 | import torch
388 | import torch.nn as nn
389 |
390 | # Step 2: Define a multi-layer perceptron that has three linear layers and uses tanh activation function.
391 | # Python program:
392 | class MultiLayerPerceptron(nn.Module):
393 |     def __init__(self, input_size, hidden_size, output_size):
394 |         super(MultiLayerPerceptron, self).__init__()
395 |         self.layer1 = nn.Linear(input_size, hidden_size)
396 |         self.layer2 = nn.Linear(hidden_size, hidden_size)
397 |         self.layer3 = nn.Linear(hidden_size, output_size)
398 |         self.tanh = nn.Tanh()
399 |
400 |     def forward(self, x):
401 |         x = self.tanh(self.layer1(x))
402 |         x = self.tanh(self.layer2(x))
403 |         x = self.layer3(x)
404 |         return x
405 |
406 | # Step 3: Print an answer in natural language.
407 | print("A multi-layer perceptron is a type of artificial neural network. It has an input layer, one or more hidden layers, and an output layer. In each layer, the inputs are multiplied by weights, summed, and passed through an activation function. In this implementation, the activation function is the hyperbolic tangent function (tanh).")
408 | ```
409 |
410 | Besides the comments, you can also control the direction of the generated code by modifying the imported libraries, for example, changing `import torch` to `import tensorflow.keras as keras`. You are encouraged to explore more possibilities!
411 |
412 | ## Contact
413 |
414 | If there are any questions, feel free to post an issue or contact Hongyin Luo at *hyluo [at] mit [dot] edu*.
415 |
--------------------------------------------------------------------------------
/documents/conda_jupyter.md:
--------------------------------------------------------------------------------
1 | # Anaconda and Jupyter notebook 6
2 | Follow the instructions in this document to set up conda and Jupyter notebook on your machine.
3 |
4 | ## Anaconda setup
5 |
6 | ### Installation
7 | 1. Download Miniconda from [official website](https://docs.conda.io/en/latest/miniconda.html)
8 | 2. Install Miniconda following [official instructions](https://conda.io/projects/conda/en/stable/user-guide/install/index.html)
9 |
10 | Make sure to add the conda Python to your `PATH`!
11 |
12 | ### Setup environment
13 | Run the following command to set up a conda environment with Python and Jupyter Notebook 6.
14 | ```bash
15 | conda create -n myenv python=3.9 notebook=6.4.8
16 | ```
17 |
18 | Activate the environment:
19 | ```bash
20 | conda activate myenv
21 | ```
22 |
23 | Run Jupyter notebook:
24 | ```bash
25 | jupyter notebook # For localhost
26 | jupyter notebook --ip 0.0.0.0 # For public access
27 | ```
--------------------------------------------------------------------------------
/images/cpl_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/luohongyin/LangCode/097f34119f52a8a8dfc868a4a03c7e2029e28e89/images/cpl_example.png
--------------------------------------------------------------------------------
/images/csv_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/luohongyin/LangCode/097f34119f52a8a8dfc868a4a03c7e2029e28e89/images/csv_example.png
--------------------------------------------------------------------------------
/images/demo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/luohongyin/LangCode/097f34119f52a8a8dfc868a4a03c7e2029e28e89/images/demo.png
--------------------------------------------------------------------------------
/images/game24.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/luohongyin/LangCode/097f34119f52a8a8dfc868a4a03c7e2029e28e89/images/game24.png
--------------------------------------------------------------------------------
/langpy/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/luohongyin/LangCode/097f34119f52a8a8dfc868a4a03c7e2029e28e89/langpy/__init__.py
--------------------------------------------------------------------------------
/langpy/auto.py:
--------------------------------------------------------------------------------
1 | def get_langpy(api_key='', platform='gpt', model='gpt-4'):
2 |     try:
3 |         import google.colab
4 |         from .colab_lp import ColabLangPy
5 |         autolp = ColabLangPy(api_key, platform=platform, model=model)
6 |     except ImportError:
7 |         from .jupyter_lp import JupyterLangPy
8 |         autolp = JupyterLangPy(api_key, platform=platform, model=model)
9 |     return autolp
10 |
--------------------------------------------------------------------------------
/langpy/base.py:
--------------------------------------------------------------------------------
1 | from .data_loader.csv_loader import CsvLoader
2 | from .data_loader.json_loader import JsonLoader
3 |
4 | _API_ENDPOINT = 'https://lang-py-522564686dd7.herokuapp.com/items/0'
5 | _SUPPORTED_PLATFORMS = {'gpt', 'palm'}
6 |
7 |
8 | class LangPy:
9 |     def __init__(self, api_key='', platform='gpt', model='gpt-4'):
10 |         if not api_key:
11 |             raise ValueError("Empty api_key")
12 |
13 |         if not model:
14 |             raise ValueError("Empty model")
15 |
16 |         if platform not in _SUPPORTED_PLATFORMS:
17 |             supported_platforms = ", ".join(_SUPPORTED_PLATFORMS)
18 |             raise ValueError(
19 |                 f"Non-supported platform. Supported ones: {supported_platforms}")
20 |
21 |         self.api_key = api_key
22 |         self.api_end_point = _API_ENDPOINT
23 |         self.platform = platform
24 |         self.model = model
25 |
26 |     def get_http_prompt(self, http):
27 |         prompt = ''
28 |         if http == 'forbid':
29 |             prompt = 'Do not send HTTP requests.'
30 |         elif http == 'force':
31 |             prompt = 'Send an HTTP request to solve the problem.'
32 |         elif http == 'optional':
33 |             pass
34 |         else:
35 |             raise ValueError(
36 |                 f'http mode: {http} not supported. try [forbid | force | optional]!')
37 |         return prompt
38 |
39 |     def get_data_loader_prompt(self,
40 |                                notebook,
41 |                                data=None,
42 |                                file_name=None,
43 |                                data_type='csv',  # csv or json
44 |                                read_mode='r',
45 |                                num_case=2):
46 |         if data_type == 'csv':
47 |             loader = CsvLoader(data, file_name, read_mode)
48 |         elif data_type == 'json':
49 |             loader = JsonLoader(data, file_name, read_mode)
50 |         else:
51 |             raise ValueError(
52 |                 f'data_type = {data_type} not supported. Please select from [csv | json]')
53 |         return loader.get_prompt(notebook, num_case)
54 |
55 |     def config_api_key(self):
56 |         api_key = input('API key:')
57 |         self.api_key = api_key
58 |
59 |     def config_end_point(self):
60 |         self.api_end_point = input('API endpoint:')
61 |
62 |     def generate(self, instruction, hist_mode='all', http='optional'):
63 |         pass
64 |
65 |     def complete(self, instruction, hist_mode='all', http='optional'):
66 |         pass
67 |
68 |     def process_csv(self, file_name):
69 |         pass
70 |
--------------------------------------------------------------------------------
/langpy/colab_lp.py:
--------------------------------------------------------------------------------
1 | import json
2 | import requests
3 | from .base import LangPy
4 | from google.colab import _message
5 |
6 |
7 | class ColabLangPy(LangPy):
8 |     def __init__(self, api_key='', platform='gpt', model='gpt-4'):
9 |         super().__init__(api_key=api_key, platform=platform, model=model)
10 |         self._colab_list = []
11 |         self._cur_idx = -1
12 |
13 |     def format_cell(self, cell_tuple):
14 |         cell_str, cell_type = cell_tuple
15 |         if cell_type == 'code':
16 |             return f'```\n{cell_str}\n```'
17 |         else:
18 |             return cell_str
19 |
20 |     def post_code(self, code):
21 |         _message.blocking_request(
22 |             'add_scratch_cell',
23 |             request={'content': code, 'openInRightPane': True},
24 |             timeout_sec=None
25 |         )
26 |
27 |     def update_idx_hist(self, instruction):
28 |         colab_hist = _message.blocking_request('get_ipynb')['ipynb']['cells']
29 |         self._code_list = [
30 |             [''.join(x['source']), x['cell_type']] for x in colab_hist
31 |         ]
32 |
33 |         self._code_list = [
34 |             self.format_cell(ct) for ct in self._code_list
35 |         ]
36 |
37 |         self._cur_idx = -1
38 |         for i, code in enumerate(self._code_list):
39 |             if instruction in code:
40 |                 self._cur_idx = i
41 |                 break
42 |
43 |     def get_history_prompt(self, instruction, hist_mode):
44 |         prompt = ''
45 |         if hist_mode == 'all':
46 |             previous_code = '\n\n'.join(self._code_list[:self._cur_idx])
47 |             prompt = f'{previous_code}\n\n{instruction}'
48 |         elif hist_mode == 'single':
49 |             prompt = instruction
50 |         elif isinstance(hist_mode, int):
51 |             previous_code = '\n\n'.join(
52 |                 self._code_list[self._cur_idx - hist_mode: self._cur_idx])
53 |             prompt = f'{previous_code}\n\n{instruction}'
54 |         else:
55 |             raise ValueError(
56 |                 f'hist_mode = {hist_mode} not supported. try [all | single | INTEGER].')
57 |         return prompt
58 |
59 |     def get_prompt(self, instruction, hist_mode, http):
60 |         history_prompt = self.get_history_prompt(instruction, hist_mode)
61 |         http_prompt = self.get_http_prompt(http)
62 |         return f'{history_prompt} {http_prompt}'
63 |
64 |     def get_code_of_thought(self, json_data):
65 |         response = requests.put(self.api_end_point, json=json_data)
66 |         if response.status_code != 200:
67 |             raise SystemError(
68 |                 f'Failed to get valid response from server: {response.status_code}')
69 |         return response.content
70 |
71 |     def generate(self, instruction, hist_mode='single', http='optional'):
72 |         self.update_idx_hist(instruction)
73 |         response = self.get_code_of_thought(
74 |             json_data={
75 |                 'instruction': self.get_prompt(instruction, hist_mode, http),
76 |                 'api_key_str': self.api_key,
77 |                 'exist_code': 'none',
78 |                 'platform': self.platform,
79 |                 'model': self.model
80 |             }
81 |         )
82 |         ans_str = json.loads(response)['output']
83 |         self.post_code(ans_str)
84 |
85 |     def complete(self, instruction, hist_mode='single', http='optional'):
86 |         self.update_idx_hist(instruction)
87 |         response = self.get_code_of_thought(
88 |             json_data={
89 |                 'instruction': self.get_prompt(instruction, hist_mode, http),
90 |                 'api_key_str': self.api_key,
91 |                 'exist_code': self._code_list[self._cur_idx + 1].replace('```', '').strip(),
92 |                 'platform': self.platform,
93 |                 'model': self.model
94 |             }
95 |         )
96 |         ans_str = json.loads(response)['output']
97 |         self.post_code(ans_str)
98 |
99 |     def preview_data(
100 |             self,
101 |             data=None,
102 |             file_name=None,
103 |             data_type='csv',  # csv or json
104 |             read_mode='r',
105 |             num_case=2):
106 |         prompt = self.get_data_loader_prompt(
107 |             'colab', data, file_name, data_type, read_mode, num_case)
108 |         self.post_code(prompt)
109 |
--------------------------------------------------------------------------------
/langpy/data_loader/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/luohongyin/LangCode/097f34119f52a8a8dfc868a4a03c7e2029e28e89/langpy/data_loader/__init__.py
--------------------------------------------------------------------------------
/langpy/data_loader/csv_loader.py:
--------------------------------------------------------------------------------
1 | from .loader import Loader
2 |
3 |
4 | class CsvLoader(Loader):
5 |     def __init__(self, data=None, file_name=None, read_mode='r'):
6 |         super().__init__(data, file_name, read_mode)
7 |         if file_name is None:
8 |             raise ValueError('CsvLoader must process a file. Set file_name.')
9 |
10 |     def file_prompt(self, num_row, content_str):
11 |         f_prompt = f"""# First {num_row} rows of the input file:\nexample_data = '''\n{content_str}\n ...\n'''\nfile_name = '{self.file_name}' \ninput_file = open(file_name)"""
12 |         return f_prompt
13 |
14 |     def get_prompt(self, notebook='jupyter', num_case=2):
15 |         num_row = num_case + 1
16 |         content_list = open(self.file_name, self.read_mode).readlines()[
17 |             :num_row]
18 |         content_list = [x.strip() for x in content_list]
19 |         content_str = '\n# '.join(content_list)
20 |
21 |         prompt = self.file_prompt(num_row, content_str)
22 |         return self.format_prompt(prompt, notebook)
23 |
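A standalone sketch of the preview prompt `CsvLoader.get_prompt` assembles (the CSV content and temp file below are hypothetical): the header plus `num_case` rows are read, joined as comment lines, and embedded in a code-style preamble.

```python
import os
import tempfile

def csv_preview_prompt(file_name, num_case=2):
    # Mirror CsvLoader.get_prompt: header row plus num_case example rows.
    num_row = num_case + 1
    with open(file_name) as f:
        rows = [line.strip() for line in f.readlines()[:num_row]]
    content_str = '\n# '.join(rows)
    return (
        f"# First {num_row} rows of the input file:\n"
        f"example_data = '''\n{content_str}\n ...\n'''\n"
        f"file_name = '{file_name}' \n"
        f"input_file = open(file_name)"
    )

# Hypothetical demo file.
tmp = tempfile.NamedTemporaryFile('w', suffix='.csv', delete=False)
tmp.write('name,score\nalice,1\nbob,2\ncarol,3\n')
tmp.close()
prompt = csv_preview_prompt(tmp.name)
print(prompt)
os.unlink(tmp.name)
```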
--------------------------------------------------------------------------------
/langpy/data_loader/json_loader.py:
--------------------------------------------------------------------------------
1 | import json
2 | from .loader import Loader
3 |
4 |
5 | class JsonLoader(Loader):
6 | def __init__(self, data=None, file_name=None, read_mode='r'):
7 | super().__init__(data, file_name, read_mode)
8 | if data is None and file_name is None:
9 | raise ValueError('Must provide either `data` or `file_name`.')
10 |
11 | def file_prompt(self, num_case, content_str):
12 | f_prompt = f"""# First {num_case} cases of the Json data:\nexample_data = '''\n{content_str}\n...\n'''\nfile_name = '{self.file_name}' \ninput_file = open(file_name)"""
13 | return f_prompt
14 |
15 | def data_prompt(self, num_case, content_str):
16 | d_prompt = f"""# First {num_case} cases of the Json data:\nexample_data = '''\n{content_str}\n...\n'''"""
17 | return d_prompt
18 |
19 | def get_prompt(self, notebook='jupyter', num_case=2):
20 | if self.data is None:
21 | self.data = json.load(open(self.file_name, self.read_mode))
22 |
23 | example_list = self.data[:num_case]
24 | content_str = ',\n'.join([json.dumps(x, indent=4)
25 | for x in example_list])
26 |
27 | if self.file_name is not None:
28 | prompt = self.file_prompt(num_case, content_str)
29 | else:
30 | prompt = self.data_prompt(num_case, content_str)
31 |
32 | return self.format_prompt(prompt, notebook)
33 |
--------------------------------------------------------------------------------
/langpy/data_loader/loader.py:
--------------------------------------------------------------------------------
1 | class Loader:
2 | def __init__(self, data=None, file_name=None, read_mode='r'):
3 | self.data = data
4 | self.file_name = file_name
5 | self.read_mode = read_mode
6 |
7 | def format_prompt(self, prompt, notebook):
8 | if notebook == 'jupyter':
9 | prompt = prompt.replace('\n', '\\n').replace('\t', '\\t')
10 | elif notebook == 'colab':
11 | pass
12 | else:
13 | raise ValueError(
14 |                 f'Notebook = {notebook} not supported. Use [jupyter | colab]')
15 | return prompt
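`format_prompt` escapes newlines and tabs for Jupyter because the prompt is later embedded in a JavaScript string, while Colab receives the text untouched. A minimal stdlib sketch of that behavior:

```python
def format_prompt(prompt, notebook):
    # Jupyter: escape so the text survives embedding in a JS string literal.
    if notebook == 'jupyter':
        return prompt.replace('\n', '\\n').replace('\t', '\\t')
    if notebook == 'colab':
        return prompt
    raise ValueError(
        f'Notebook = {notebook} not supported. Use [jupyter | colab]')

print(format_prompt('a\nb\tc', 'jupyter'))
```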
--------------------------------------------------------------------------------
/langpy/jupyter_lp.py:
--------------------------------------------------------------------------------
1 | import IPython
2 | from IPython.display import display, Javascript
3 | from .base import LangPy
4 |
5 |
6 | class JupyterLangPy(LangPy):
7 | def __init__(self, api_key='', platform='gpt', model='gpt-4'):
8 | super().__init__(api_key=api_key, platform=platform, model=model)
9 |
10 | def get_prompt(self, instruction, http):
11 | http_prompt = self.get_http_prompt(http)
12 | return f'{instruction} {http_prompt}'
13 |
14 | def init_js_code(self):
15 | return """
16 | async function myCallbackFunction(data) {
17 | const url = '%s';
18 | try {
19 | let response = await fetch(url, {
20 | method: "PUT",
21 | headers: {
22 | "Content-Type": "application/json",
23 | },
24 | body: JSON.stringify(data),
25 | });
26 | let responseData = await response.json();
27 | return responseData;
28 | } catch (error) {
29 | console.log(error);
30 | }
31 | }
32 | var current_index = Jupyter.notebook.get_selected_index();
33 | var previous_cell_content = "";
34 | """ % self.api_end_point
35 |
36 | def js_get_history(self, hist_mode):
37 | if hist_mode == "all":
38 | js_code = """
39 | for (var i = 1; i < current_index; i++) {
40 | var cell = Jupyter.notebook.get_cell(i);
41 | if (cell.cell_type === "code") {
42 | previous_cell_content += '```\\n' + cell.get_text() + '\\n```\\n';
43 | } else {
44 | previous_cell_content += cell.get_text() + '\\n';
45 | }
46 | }
47 | """
48 | elif isinstance(hist_mode, int):
49 | js_code = """
50 | for (var i = current_index - %d; i < current_index; i++) {
51 | var cell = Jupyter.notebook.get_cell(i);
52 | if (cell.cell_type === "code") {
53 | previous_cell_content += '```\\n' + cell.get_text() + '\\n```\\n';
54 | } else {
55 | previous_cell_content += cell.get_text() + '\\n';
56 | }
57 | }
58 | """ % hist_mode
59 | elif hist_mode == 'single':
60 | js_code = ''
61 | else:
62 | raise ValueError(
63 | f"hist_mode {hist_mode} not supported. Try [all | single | INT]")
64 |
65 | return js_code
66 |
67 | def build_request_data(self, instruction, exist_code=False):
68 | if exist_code:
69 | js_request = """
70 | previous_cell_content += '%s'
71 | var api_key_str = "%s"
72 | var next_cell = Jupyter.notebook.get_cell(current_index)
73 | var exist_code = next_cell.get_text()
74 |
75 | var data = {
76 | "instruction": previous_cell_content,
77 | "api_key_str": api_key_str,
78 | "exist_code": exist_code,
79 | "platform": "%s",
80 | "model": "%s"
81 | }
82 | """ % (instruction, self.api_key, self.platform, self.model)
83 | else:
84 | js_request = """
85 | previous_cell_content += '%s'
86 | var api_key_str = "%s"
87 | var data = {
88 | "instruction": previous_cell_content,
89 | "api_key_str": api_key_str,
90 | "exist_code": "none",
91 | "platform": "%s",
92 | "model": "%s"
93 | }
94 | """ % (instruction, self.api_key, self.platform, self.model)
95 | return js_request
96 |
97 | def build_js_code(self, instruction, hist_mode, http, exist_code=False):
98 | instruction = self.get_prompt(instruction, http)
99 |
100 | js_code = self.init_js_code()
101 | js_code += self.js_get_history(hist_mode)
102 | js_code += self.build_request_data(instruction, exist_code=exist_code)
103 | return js_code
104 |
105 | def generate(self, instruction, hist_mode='single', http='optional'):
106 | js_code = self.build_js_code(
107 | instruction, hist_mode, http, exist_code=False
108 | )
109 |
110 | js_code += """
111 | myCallbackFunction(data).then(responseData => {
112 | var print_content = responseData["output"]
113 | Jupyter.notebook.select_prev()
114 | var new_cell = Jupyter.notebook.insert_cell_below('code');
115 |
116 | var next_cell = Jupyter.notebook.get_cell(current_index)
117 | next_cell.set_text(print_content);
118 |     }).catch(error => console.log(error))
119 | """
120 |
121 | print('The code will be generated in the next cell ...')
122 |
123 | display(Javascript(js_code))
124 |
125 | def complete(self, instruction, hist_mode='single', http='optional'):
126 | js_code = self.build_js_code(
127 | instruction, hist_mode, http, exist_code=True
128 | )
129 |
130 | js_code += """
131 | myCallbackFunction(data).then(responseData => {
132 | var print_content = responseData["output"]
133 | var next_cell = Jupyter.notebook.get_cell(current_index)
134 | next_cell.set_text(print_content);
135 | })
136 | """
137 | print('The code in the following cell will be completed ...')
138 |
139 | display(Javascript(js_code))
140 |
141 | def preview_data(
142 | self,
143 | data=None,
144 | file_name=None,
145 | data_type='csv', # csv or json
146 | read_mode='r',
147 | num_case=2):
148 | prompt = self.get_data_loader_prompt(
149 | 'jupyter', data, file_name, data_type, read_mode, num_case)
150 |
151 | js_code = f"""
152 | Jupyter.notebook.select_prev()
153 | var new_cell = Jupyter.notebook.insert_cell_below('code');
154 | new_cell.set_text(`{prompt}`);
155 | """
156 |         print('Run the following cell to read and preview the format of your data file.')
157 |
158 | display(Javascript(js_code))
159 |
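Both the Jupyter and Colab front ends ultimately PUT the same JSON body to the NLEP server. A sketch of assembling that payload (the instruction and key below are made up; the field names mirror the `data` object built in the embedded JavaScript):

```python
import json

def build_payload(instruction, api_key, exist_code='none',
                  platform='gpt', model='gpt-4'):
    # Field names mirror the request body built in generate()/complete().
    return json.dumps({
        'instruction': instruction,
        'api_key_str': api_key,
        'exist_code': exist_code,
        'platform': platform,
        'model': model,
    })

body = build_payload('Plot a histogram of column A', 'sk-hypothetical-key')
print(body)
```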
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | notebook==6.4.8
2 | fastapi==0.95.0
--------------------------------------------------------------------------------
/research/nlu/direct_NLEP/direct_NLEP_execution.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from datasets import load_dataset
3 | from tqdm import tqdm
4 | import json
5 |
6 | def classify_sst2():
7 | # Step 1: Import necessary built-in libraries
8 | from textblob import TextBlob
9 |
10 |     # Step 2: Define a function that analyzes the sentiment of a text
11 | def analyze_sentiment(text):
12 | sentiment = TextBlob(text).sentiment.polarity
13 | if sentiment > 0:
14 | return "positive"
15 | elif sentiment < 0:
16 | return "negative"
17 | else:
18 | return "neutral"
19 |
20 | dataset = load_dataset('sst2')
21 | val_dataset = dataset["validation"]
22 | acc = 0
23 | tot = 0
24 | for item in tqdm(val_dataset):
25 | tot += 1
26 |
27 | # Step 3: Define the text of movie review
28 | review = item['sentence']
29 |
30 | # Step 4: Print an answer in natural language using the knowledge and function defined above
31 | sentiment_result = analyze_sentiment(review)
32 | if sentiment_result == 'positive':
33 | label = 1
34 | else:
35 | label = 0
36 | if label == item['label']:
37 | acc += 1
38 |
39 | print(acc/tot)
40 |
41 |
42 | def classify_cola():
43 | # Step 1: Import necessary built-in libraries
44 | import nltk
45 |
46 | # Step 2: Define a function that checks the grammar correctness of sentences
47 | def check_sentence(sentence):
48 | try:
49 | nltk.pos_tag(nltk.word_tokenize(sentence))
50 | return "acceptable"
51 |         except Exception:
52 | return "unacceptable"
53 |
54 | # Step 3: Define the input sentence
55 | dataset = load_dataset('glue', 'cola')
56 | val_dataset = dataset["validation"]
57 | acc = 0
58 | tot = 0
59 | for item in tqdm(val_dataset):
60 | tot += 1
61 | sentence = item['sentence']
62 | label = check_sentence(sentence)
63 | # Step 4: Classify the grammar of the sentence and print the result
64 | if item['label'] == 1:
65 | truth_label = 'acceptable'
66 | else:
67 | truth_label = 'unacceptable'
68 | if truth_label == label:
69 | acc += 1
70 |
71 | print(acc/tot)
72 |
73 |
74 | def classifying_emotion():
75 | # Step 1: Import necessary built-in libraries
76 | from textblob import TextBlob
77 |
78 |     # Step 2: Define a function that classifies the emotion of the sentence
79 | def classify_emotion(sentence):
80 | blob = TextBlob(sentence)
81 | if blob.sentiment.polarity < 0:
82 | return "sad"
83 | elif blob.sentiment.polarity == 0:
84 | return "surprised"
85 | else:
86 | if "love" in sentence:
87 | return "love"
88 | elif "angry" in sentence:
89 | return "angry"
90 | elif "afraid" in sentence:
91 | return "afraid"
92 | else:
93 | return "happy"
94 |
95 | dataset = load_dataset('emotion')
96 | val_dataset = dataset["validation"]
97 | tot = 0
98 | acc = 0
99 | all_classes = ['sad', 'happy', 'love', 'angry', 'afraid', 'surprised']
100 | for item in tqdm(val_dataset):
101 | tot += 1
102 | # Step 3: Define a sentence
103 | sentence = item['text']
104 |
105 |         # Step 4: Print a classification answer in natural language using the defined function and sentence
106 | emotion = classify_emotion(sentence)
107 | if emotion == all_classes[item['label']]:
108 | acc += 1
109 | print(acc/tot)
110 |
111 |
112 | def classify_review():
113 | # Step 1: Import necessary built-in libraries
114 | from textblob import TextBlob
115 |
116 | # Step 2: Define a function to analyze the stars of a given review
117 | def get_stars(review):
118 | analysis = TextBlob(review)
119 | # The polarity of TextBlob is in a range from -1 to 1, where -1 refers to negative sentiment and 1 to positive sentiment.
120 | # In order to map it to the star range of 1-5, we need to add 1 to make the polarity in 0-2 range, then divide it by 2 to make it in 0-1 range,
121 | # and finally multiply it by (5 - 1) and add 1 to make it in 1-5 range.
122 | stars = round((analysis.sentiment.polarity + 1) / 2 * 4 + 1)
123 | return stars
124 |
125 | dataset = load_dataset('amazon_reviews_multi', 'en')
126 | val_dataset = dataset["validation"]
127 | tot = 0
128 | acc = 0
129 | classes = [1, 2, 3, 4, 5] # Possible star rating
130 |
131 | for item in tqdm(val_dataset):
132 | tot += 1
133 | # Step 3: Specify a review to analyze
134 | review = item['review_title'] + item['review_body']
135 |
136 | # Step 4: Print the star from the analysis
137 | stars = get_stars(review)
138 | label = item['stars']
139 | if stars == label:
140 | acc += 1
141 | print(acc/tot)
142 |
143 |
144 |
145 | def classify_hsd():
146 | # Step 1: Import necessary built-in libraries
147 | from nltk.sentiment.vader import SentimentIntensityAnalyzer
148 | from nltk.tokenize import word_tokenize
149 | import nltk
150 | nltk.download('vader_lexicon')
151 |
152 | # Step 2: Define a function to check feelings of sentences
153 | def hate_speech_check(speech):
154 | words = word_tokenize(speech)
155 | # We use nltk library to determine the feelings of the speech
156 | sid = SentimentIntensityAnalyzer()
157 | sentiment = sid.polarity_scores(speech)
158 |         # Consider it hate speech if the compound sentiment is below -0.3
159 | if sentiment['compound'] < -0.3:
160 | return True
161 | else:
162 | return False
163 |
164 | with open('../hate-speech-dataset/test_sample.json', 'r') as f:
165 | data = json.load(f)
166 | tot = 0
167 | acc = 0
168 | for item in data:
169 | tot += 1
170 | # Step 3: Define dictionaries storing the speech
171 | speech = item['sentence']
172 |
173 |         # Step 4: Print a result using the defined function and speech
174 | is_hate_speech = hate_speech_check(speech)
175 | if is_hate_speech:
176 | if item['label'] == 'hate':
177 | acc += 1
178 | else:
179 | if item['label'] != 'hate':
180 | acc += 1
181 | print(acc/tot)
182 |
183 |
184 |
185 | def classify_sbic():
186 | # Step 1: Import necessary built-in libraries
187 | import re
188 |
189 | # Step 2: Define a dictionary storing keywords which identify offensiveness
190 | keywords = {
191 | "Yes": ["hate", "kill", "racist", "sexist", "explicit"], # A list containing some offensive words for example
192 | "Maybe": ["idiot", "stupid", "dumb", "weirdo"], # A list containing some maybe offensive words for example
193 | "No": ["happy", "love", "beautiful", "amazing", "fantastic"] # A list containing some positive words for example
194 | }
195 |
196 |     # Step 3: Define a function that classifies a post
197 | def classify_post(post, keywords):
198 | word_occurrences = {"Yes": 0, "Maybe": 0, "No": 0} # Initialize a dictionary to store the number of occurrences
199 | for keyword in re.findall(r'\w+', post): # Split the post into words
200 | for key in keywords.keys():
201 | if keyword.lower() in keywords[key]:
202 | word_occurrences[key] += 1
203 |
204 | max_key = max(word_occurrences, key=word_occurrences.get) # Find the key with maximum occurrences
205 | if word_occurrences[max_key] > 0:
206 | return max_key
207 | else:
208 | return "No"
209 |
210 | dataset = load_dataset('social_bias_frames')
211 | val_dataset = dataset["validation"]
212 | tot = 0
213 | acc = 0
214 | for item in tqdm(val_dataset):
215 | tot += 1
216 | # Step 4: Define dictionaries storing the post
217 | post = item['post']
218 | truth = item['offensiveYN']
219 |
220 | if truth == '1.0':
221 | truth = 'Yes'
222 | elif truth == '0.5':
223 | truth = 'Maybe'
224 | else:
225 | truth = 'No'
226 |
227 | # Step 5: Print an answer in natural language using the knowledge and function defined above
228 | classification = classify_post(post, keywords)
229 | if truth == classification:
230 | acc += 1
231 | print(acc/tot)
232 |
233 |
234 | def classify_agnews():
235 | # Step 1: Import necessary built-in libraries
236 | from sklearn.feature_extraction.text import CountVectorizer
237 | from sklearn.naive_bayes import MultinomialNB
238 |
239 | # Step 2: Define the training dataset for the classification model
240 | news_data = {
241 | "world news": [
242 | "The 193-member United Nations General Assembly elected the United Arab Emirates, Brazil, Albania, Gabon, and Ghana to serve on the U.N. Security Council for two years.",
243 | "North Korea announced that it has test launched series of tactical guided missiles.",
244 | "European Union leaders reach agreement on a new tax proposal that affects multinational companies."
245 | ],
246 | "sport news": [
247 | "The German team edged out Portugal in a high-scoring Euro 2020 group-stage game.",
248 | "Serena Williams withdraws from Wimbledon due to injury.",
249 | "Tokyo Olympic organizers announced a limit on spectators in the upcoming summer games."
250 | ],
251 | "business news": [
252 | "Airline companies see a rise in domestic travel bookings as Covid-19 restrictions begin to ease.",
253 | "Shares of AMC Entertainment soared due to coordinated trading by individual investors.",
254 | "Microsoft Corp is set to acquire LinkedIn Corp for $26.2 billion in cash."
255 | ],
256 | "technology news": [
257 | "Apple Inc. announces new privacy features for its upcoming iOS 15 operating system.",
258 | "Facebook Inc tests new tool for users to manage content from their News Feed.",
259 | "Google's proposed acquisition of Fitbit to be closely scrutinized by the EU."
260 | ]
261 | }
262 |
263 | # Step 3: Prepare the data
264 | X_train = []
265 | y_train = []
266 | for category, news_list in news_data.items():
267 | for news in news_list:
268 | X_train.append(news)
269 | y_train.append(category)
270 |
271 | vectorizer = CountVectorizer()
272 | X_train = vectorizer.fit_transform(X_train)
273 |
274 | # Step 4: Train the classifier
275 | clf = MultinomialNB()
276 | clf.fit(X_train, y_train)
277 |
278 | classes = ['world news', 'sport news', 'business news', 'technology news']
279 | dataset = load_dataset('ag_news', cache_dir = '/shared/jiaxin/.cache')
280 | val_dataset = dataset["test"]
281 | tot = 0
282 | acc = 0
283 | for item in tqdm(val_dataset):
284 | tot += 1
285 | # Step 5: Define the news piece to be classified
286 | piece_of_news = item['text']
287 | # Step 6: Make the prediction
288 | X_test = vectorizer.transform([piece_of_news])
289 | predicted_category = clf.predict(X_test)[0]
290 |
291 | if predicted_category == classes[item['label']]:
292 | acc += 1
293 | print(acc/tot)
294 |
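The star mapping in `classify_review` deserves a worked example: TextBlob polarity in [-1, 1] is shifted to [0, 2], scaled to [0, 1], then stretched onto the 1-5 star range. A dependency-free sketch with hardcoded polarity values (TextBlob itself is not needed to see the arithmetic):

```python
def polarity_to_stars(polarity):
    # Shift [-1, 1] -> [0, 2], scale to [0, 1], then map onto 1-5 stars.
    return round((polarity + 1) / 2 * 4 + 1)

for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(p, polarity_to_stars(p))
```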
--------------------------------------------------------------------------------
/research/nlu/direct_NLEP/direct_NLEP_generation.py:
--------------------------------------------------------------------------------
1 | import openai
2 | import backoff
3 |
4 | prompt_head = """Write a bug-free Python program that can generate the answer to the given instruction when correctly executed. Do not ask for user input. For reasoning tasks, define functions first and then define variables. For knowledge intensive tasks, define variables before defining functions.
5 |
6 | ### Instruction: Discuss the causes of the Great Depression
7 | ### Input: None
8 | ### Python program:
9 | ```
10 | # Step 1: Import necessary built-in libraries
11 | # No need to import
12 |
13 | # Step 2: Define dictionaries storing the knowledge about the Great Depression
14 | depression_name = "The Great Depression"
15 | depression_period = "1929-1939"
16 | depression_countries = "the United States and countries around the world"
17 | depression_causes = {
18 | "Stock Market Crash of 1929": "In October of 1929, the stock market experienced a significant fall that wiped out millions of investors. This event is considered by many to be the initial trigger of the Great Depression.",
19 | "Overproduction": "During the 1920s, many industries produced more goods than consumers wanted or could afford. This ultimately led to a decline in demand for goods, causing job loss, lower wages, and business failure.",
20 | "High Tariffs and War Debts": "Protectionist trade policies in the form of high tariffs led to a decline in global trade, as other countries retaliated with tariffs of their own. Additionally, many countries were struggling to repay war debts, which led to economic instability.",
21 | "Bank Failures": "As demand for goods declined, many banks began to fail, causing a loss of confidence in the banking system. This led to a massive withdrawal of money from banks, causing even more banks to fail.",
22 | "Drought Conditions": "The Dust Bowl was a severe drought and dust storm that hit the Great Plains region of the United States in the 1930s. This had a significant impact on agriculture, causing many farmers to lose their land and livelihoods which worsened the effects of the depression."
23 | }
24 |
25 | # Step 3: Define necessary functions that generally solve this type of problem
26 | # Do not need to define functions
27 |
28 | # Step 4: Print an answer in natural language using the knowledge defined above
29 | print(f"{depression_name} was a period of economic decline that lasted from {depression_period}, making it the longest-lasting depression in modern history. It affected not only {depression_countries}, causing substantial social and economic upheaval.\n")
30 | print(f"There were several major causes of {depression_name}, which include:\n")
31 | for i, (cause, description) in enumerate(depression_causes.items(), 1):
32 | print(f"{i}. {cause} - {description}\n")
33 | print(f"Overall, {depression_name} was caused by a combination of factors, including economic, environmental, and political factors. Its impact was widespread, affecting millions of people around the world.")
34 | ```
35 |
36 | ### Instruction: Identify the odd one out.
37 | ### Input: Twitter, Instagram, Telegram
38 | ### Python program:
39 | ```
40 | # Step 1: Import necessary built-in libraries
41 | from collections import OrderedDict
42 |
43 | # Step 2: Define dictionaries storing the knowledge about the main function of each application
44 | services = {
45 | "Twitter": "a social media platform mainly for sharing information, images and videos",
46 | "Instagram": "a social media platform mainly for sharing information, images and videos",
47 | "Telegram": "a cloud-based instant messaging and voice-over-IP service",
48 | }
49 |
50 | # Step 3: Define a function that finds the different application
51 | def find_odd_one_out(services, input_services):
52 | descriptions = [services[service] for service in input_services]
53 | for description in descriptions:
54 | if descriptions.count(description) == 1:
55 | return input_services[descriptions.index(description)]
56 | return None
57 |
58 | # Step 4: Print an answer in natural language using the knowledge and function defined above
59 | input_services = ["Twitter", "Instagram", "Telegram"]
60 | odd_one_out = find_odd_one_out(services, input_services)
61 | if odd_one_out:
62 | other_services = [service for service in input_services if service != odd_one_out]
63 | print(f"The odd one out is {odd_one_out}. {other_services[0]} and {other_services[1]} are {services[other_services[0]]} while {odd_one_out} is {services[odd_one_out]}.")
64 | ```
65 |
66 | ### Instruction: Calculate the total surface area of a cube with a side length of 5 cm.
67 | ### Input: None
68 | ### Python program:
69 | ```
70 | # Step 1: Import necessary built-in libraries
71 | # No need to import
72 |
73 | # Step 2: Define a function that calculates the surface area of cubes
74 | def calculate_surface_area(side_length):
75 | return 6 * (side_length ** 2)
76 |
77 | # Step 3: Define dictionaries storing the cube information
78 | cube = {
79 | "side_length": 5 # Side length of the cube
80 | }
81 |
82 | # Step 4: Print a step-by-step calculation answer in natural language using the defined function and variable
83 | side_length = cube["side_length"]
84 | surface_area = calculate_surface_area(side_length)
85 | print(f"The surface area of a cube is found by calculating the area of one of its faces and multiplying it by six (since a cube has six faces). The area of a cube face is simply its side length squared.\n")
86 | print(f"Thus for this particular cube:")
87 | print(f"Surface Area = 6 × (Side Length)²")
88 | print(f" = 6 × ({side_length} cm)²")
89 | print(f" = 6 × {side_length**2} cm²")
90 | print(f" = {surface_area} cm²\n")
91 | print(f"The total surface area of this cube is {surface_area} square centimeters.")
92 | ```
93 | """
93 | prompt_sst2 = """
94 |
95 | ### Instruction: Given a movie review, determine the attitude of this review as either positive or negative.
96 | ### Input: A movie review.
97 | ### Python program:
98 |
99 | """
100 | prompt_cola = """
101 |
102 | ### Instruction: Given a sentence, determine the grammar correctness of this sentence as either acceptable or unacceptable.
103 | ### Input: A sentence.
104 | ### Python program:
105 |
106 | """
107 | prompt_emotion = """
108 |
109 | ### Instruction: Given a sentence, classify the emotion of the sentence as one of these classes: ['sad', 'happy', 'love', 'angry', 'afraid', 'surprised']. Your choice should be strictly the name of one of these 6 classes.
110 | ### Input: A sentence "I feel xxx"
111 | ### Python program:
112 |
113 | """
114 | prompt_amazon = """
115 |
116 | ### Instruction: Given an amazon review, determine the star(1-5) of this review, where 1 is the worst and 5 is the best.
117 | Your choice should be strictly in [1,2,3,4,5]
118 | ### Input: An amazon review.
119 | ### Python program:
120 |
121 | """
122 |
123 | prompt_hsd = """
124 |
125 | ### Instruction: Given a speech from a white supremacy forum, determine if this speech contains hate (e.g. bias/racism/prejudice...) towards a certain group of people.
126 | ### Input: A speech from a forum.
127 | ### Python program:
128 |
129 | """
130 |
131 | prompt_sbic = """
132 |
133 | ### Instruction: Given a post, determine if the content of the post is offensive, maybe offensive, or not offensive. Classify the post as: ['Yes', 'Maybe', 'No']
134 | ### Input: A potentially offensive post.
135 | ### Python program:
136 |
137 | """
138 | prompt_agnews = """
139 |
140 | ### Instruction: Given a piece of news, determine which category it belongs to. Classify it as one of the four classes: ['world news', 'sport news', 'business news', 'technology news']
141 | ### Input: A piece of news.
142 | ### Python program:
143 |
144 | """
145 | @backoff.on_exception(backoff.expo, openai.error.RateLimitError)
146 | def completions_with_backoff(**kwargs):
147 | return openai.ChatCompletion.create(**kwargs)
148 |
149 | model_name = 'gpt-4'
150 | content = prompt_head + prompt_agnews
151 | response = completions_with_backoff(
152 | model=model_name,
153 | messages=[
154 | {"role": "user","content": content}
155 | ],
156 | )
157 | response = response["choices"][0]["message"]["content"]
158 | print(response)
159 |
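`completions_with_backoff` delegates retrying to the `backoff` package. The same idea can be sketched with the standard library only (the flaky function, exception type, and delay constants below are illustrative, not part of the original code):

```python
import random
import time

def retry_with_backoff(fn, max_tries=5, base=0.01, exc=Exception):
    # Exponential backoff with jitter, akin to
    # @backoff.on_exception(backoff.expo, RateLimitError).
    for attempt in range(max_tries):
        try:
            return fn()
        except exc:
            if attempt == max_tries - 1:
                raise
            time.sleep(base * (2 ** attempt) * random.random())

calls = {'n': 0}

def flaky():
    # Hypothetical stand-in for a rate-limited API call.
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('rate limited')
    return 'ok'

result = retry_with_backoff(flaky, exc=RuntimeError)
print(result)
```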
--------------------------------------------------------------------------------
/research/nlu/human_generated_tree.py:
--------------------------------------------------------------------------------
1 | import openai
2 | import backoff
3 |
4 | prompt_head = """Write a Python function that constructs a decision tree according to the given examples that can generate the correct label of the given classification task.
5 |
6 | ### Available APIs(shared for all tasks):
7 |
8 | \"\"\"Returns whether the hypothesis is entailed by the premise.\"\"\"
9 | def entailment(hypothesis, premise, model, tokenizer):
10 | proposition = f'{hypothesis} is entailed by {premise}.'
11 | inputs = tokenizer(proposition, return_tensors="pt", truncation=True, padding=True, max_length=128)
12 | outputs = model(**inputs)['logits'][0]
13 | ent_label = int(outputs[0] > outputs[2])
14 | if ent_label == 1:
15 | return 'yes'
16 | else:
17 | return 'no'
18 |
19 | \"\"\"Use the constructed decision tree to predict the label of the sentence.\"\"\"
20 | def tree_predict(sentence, criterions, tree, model, tokenizer):
21 | node = tree['root']
22 | while node not in POSSIBLE_CLASSES:
23 | ent_label = entailment(criterions[node], sentence, model, tokenizer)
24 | node = tree[node][ent_label]
25 | return node
26 |
27 | ### Task: Movie review classification
28 | ### Description: Determine if a movie review expresses positive attitude or negative attitude.
29 | ### Possible classes: [positive, negative]
30 | ### Examples:
31 | - contains no wit, only labored gags
32 | - [The movie is wise|The movie is not wise|1], [the story is fun|the story is not boring|1], [the review is positive|the review is negative|1]
33 | - that loves its characters and communicates something rather beautiful about human nature
34 | - [The characters are lovely|The characters are awful|0], [the script is touching|the script is dry|0], [the review is positive|the review is negative|0]
35 | - on the worst revenge-of-the-nerds clichés the filmmakers could dredge up
36 | - [The movie is novel|The movie is mostly platitudes|1], [the review is negative|1]
37 | - are more deeply thought through than in most right-thinking films
38 | - [The takeaway of the movie is profound|The idea of the movie is shallow|0], [the review is positive|the review is negative|0]
39 |
40 | ### Define possible classes
41 | POSSIBLE_CLASSES = ['positive', 'negative']
42 |
43 | ### Decision Tree Logic:
44 | Start by assessing whether the movie is interesting.
45 | - If uninteresting:
46 | Probe for depth by determining if the movie is wise.
47 | - If wise, label as a positive review.
48 | - Otherwise, label as a negative review.
49 | - If interesting:
50 | Examine the quality of the script.
51 | - If the script is commendable, label as a positive review.
52 | - If not a good script
53 | Evaluate the portrayal of characters.
54 | - If the characters are well-depicted, label the review as positive.
55 | - If characters are not good, label the review negative.
56 |
57 | ### Python program:
58 | \"""Decision tree for this task\"""
59 | def get_decision_tree(sentence, model, tokenizer):
60 | # Step 1: define criterions of the decision tree.
61 | criterions = {
62 | 'is_interesting':'This movie is interesting',
63 | 'is_good_script':'The movie has a good script',
64 |         'is_good_character':'The characters are awesome',
65 | 'is_wise': 'This movie is wise'
66 | }
67 |
68 | # Step 2: define the balanced decision tree for this classification task
69 | tree = {
70 | 'root': 'is_interesting',
71 | 'is_interesting': {'yes': 'is_good_script', 'no': 'is_wise'},
72 | 'is_good_script': {'yes': 'positive', 'no': 'is_good_character'},
73 | 'is_good_character': {'yes': 'positive', 'no': 'negative'},
74 | 'is_wise': {'yes': 'positive', 'no': 'negative'}
75 | }
76 |
77 | return criterions, tree
78 |
79 | """
80 |
81 | prompt_cola = """
82 | ### Task: Grammar correctness classification
83 | ### Possible classes: [acceptable, unacceptable]
84 |
85 | ### Define possible classes
86 | POSSIBLE_CLASSES = ['acceptable', 'unacceptable']
87 |
88 | ### Python program:
89 | \"""Decision tree for this task\"""
90 | Try to generate a more balanced tree!
91 | def get_decision_tree(sentence, model, tokenizer):
92 |
93 | """
94 |
95 | def get_cola_tree():
96 | # Step 1: define criterions of the decision tree.
97 | criterions = {
98 | 'has_subject': 'The sentence has subject',
99 | 'has_verb': 'The sentence has verb',
100 | 'punctuation_correct':'The sentence has proper punctuations',
101 | 'pronouns_reference_match':'This sentence has matched pronouns and references',
102 | 'subject_verb_match':'The sentence has matched subject and verb',
103 | }
104 |
105 | # Step 2: define the balanced decision tree for this classification task
106 | tree = {
107 | 'root': 'has_subject',
108 | 'has_subject': {'yes': 'has_verb', 'no': 'unacceptable'},
109 | 'has_verb': {'yes': 'punctuation_correct', 'no': 'unacceptable'},
110 | 'punctuation_correct': {'yes': 'pronouns_reference_match', 'no': 'unacceptable'},
111 | 'pronouns_reference_match': {'yes': 'subject_verb_match', 'no': 'unacceptable'},
112 | 'subject_verb_match':{'yes': 'acceptable', 'no': 'unacceptable'}
113 | }
114 |
115 | return criterions, tree
116 |
117 | prompt_emotion = """
118 | ### Task: Emotion classification
119 | ### Description: I wrote a sentence about my feelings; determine which emotion I am feeling.
120 | ### Possible classes: ['I feel sad', 'I feel happy', 'I feel love', 'I feel angry', 'I feel afraid', 'I feel surprised']
121 |
122 | ### Define possible classes
123 | POSSIBLE_CLASSES = ['I feel sad', 'I feel happy', 'I feel love', 'I feel angry', 'I feel afraid', 'I feel surprised']
124 |
125 | ### Decision Tree Logic:
126 |
127 | """
128 |
129 | def get_emotion_tree():
130 | # Step 1: define criterions of the decision tree.
131 | criterions = {
132 | 'positive_words': 'The sentence includes positive words',
133 | 'intensifiers': 'The sentence includes intensifiers, adverb or exclamation point',
134 | 'unexpected': 'The sentence includes words that are synonyms or antonyms of unexpected',
135 | 'fear': 'The sentence includes words that are synonyms or antonyms of fear',
136 | 'upset': 'The sentence includes words that are synonyms or antonyms of upset'
137 | }
138 |
139 | # Step 2: define the balanced decision tree for this classification task
140 | tree = {
141 |         'root': 'positive_words',
142 | 'positive_words': {'yes': 'intensifiers', 'no': 'fear'},
143 | 'intensifiers': {'yes': 'unexpected', 'no': 'I feel happy'},
144 | 'unexpected': {'yes': 'I feel surprised', 'no': 'I feel love'},
145 | 'fear': {'yes': 'I feel afraid', 'no': 'upset'},
146 | 'upset': {'yes': 'I feel sad', 'no': 'I feel angry'}
147 | }
148 |
149 | return criterions, tree
150 |
151 | prompt_amazon = """
152 | ### Task: Amazon Review Star classification
153 | ### Description: Given an Amazon review, determine the star (1-5) of this review, where 1 is the worst and 5 is the best.
154 | ### Possible classes: ['1', '2', '3', '4', '5']
155 |
156 | ### Define possible classes
157 | POSSIBLE_CLASSES = ['1', '2', '3', '4', '5']
158 |
159 | ### Python program:
160 | \"""Decision tree for this task\"""
161 | Try to generate a balanced tree!
162 | def get_decision_tree(sentence, model, tokenizer):
163 |
164 | """
165 |
166 | def get_amazon_tree():
167 |     # Step 1: define criterions of the decision tree.
168 |     criterions = {
169 |         'extreme_word': 'The review includes intensifiers or exclamation point',
170 |         'positive': 'The review includes positive words or sentences',
171 |         'mildly_positive': 'The review includes mildly positive words or sentences',
172 |         'both_positive_negative': 'The review includes both positive and negative sentences',
173 |     }
174 |     # Step 2: define the balanced decision tree ('mildly_positive' replaces a duplicated 'positive' key that silently overwrote the first entry)
175 |     tree = {
176 |         'root': 'extreme_word',
177 |         'extreme_word': {'yes': 'positive', 'no': 'both_positive_negative'},
178 |         'positive': {'yes': '5', 'no': '1'},
179 |         'both_positive_negative': {'yes': '3', 'no': 'mildly_positive'},
180 |         'mildly_positive': {'yes': '4', 'no': '2'}
181 |     }
182 | 
183 |     return criterions, tree
184 |
185 | prompt_hate_speech = """
186 | ### Task: Bias/Hateful speech classification
187 | ### Description: Given a speech from a white supremacy forum, determine if this speech contains hate (e.g. bias/racism/prejudice...) towards a certain group of people.
188 | ### Possible classes: ['this is hate', 'this is noHate']
189 |
190 | ### Define possible classes
191 | POSSIBLE_CLASSES = ['this is hate', 'this is noHate']
192 |
193 | ### Each criterion should be simple, clear, and concise.
194 | ### Start each criterion with 'This is'
195 | ### Decision Tree Logic:
196 |
197 | """
198 |
199 | def get_hate_tree():
200 | # Step 1: define criterions of the decision tree.
201 | criterions = {
202 | 'oi_words': 'The speech includes offensive words about minorities',
203 | 'identity_words': 'The sentence includes mention of identities',
204 | 'swear_words': 'The sentence includes swear words',
205 | 'negative_words':'The sentence includes negative words',
206 | }
207 |
208 | # Step 2: define the balanced decision tree for this classification task
209 | tree = {
210 | 'root': 'oi_words',
211 | 'oi_words': {'yes': 'this is hate', 'no': 'identity_words'},
212 | 'identity_words': {'yes': 'swear_words', 'no': 'this is noHate'},
213 | 'swear_words': {'yes': 'this is hate', 'no': 'negative_words'},
214 | 'negative_words': {'yes': 'this is hate', 'no': 'this is noHate'},
215 | }
216 |
217 | return criterions, tree
218 |
219 | prompt_social_bias_offensive = """
220 | ### Task: Offensive classification
221 | ### Description: Given a post, determine if this post is offensive.
222 | ### Possible classes: ['Yes','Maybe', 'No']
223 |
224 | ### Define possible classes
225 | POSSIBLE_CLASSES = ['Yes','Maybe', 'No']
226 |
227 | ### Each criterion should be simple, clear, and concise.
228 | ### Start each criterion with 'This is'
229 | ### Generate at least 5 criterions!
230 | ### Decision Tree Logic:
231 | """
232 |
233 | def get_sbic_tree():
234 | # Step 1: define criterions of the decision tree.
235 | criterions = {
236 | 'racist_words': 'The post includes racist words',
237 | 'sexist_words': 'The post includes sexist words',
238 | 'swear_words': 'The post includes swear words',
239 | 'identity_words': 'The post includes mention of identities',
240 |         'negative_words':'The post includes negative words',
241 | }
242 |
243 | # Step 2: define the balanced decision tree for this classification task
244 | tree = {
245 | 'root': 'racist_words',
246 | 'racist_words': {'yes': 'Yes', 'no': 'sexist_words'},
247 | 'sexist_words': {'yes': 'Yes', 'no': 'swear_words'},
248 | 'swear_words': {'yes': 'Yes', 'no': 'identity_words'},
249 | 'identity_words': {'yes': 'negative_words', 'no': 'Maybe'},
250 | 'negative_words': {'yes': 'Maybe', 'no': 'No'}
251 | }
252 |
253 | return criterions, tree
254 |
255 | @backoff.on_exception(backoff.expo, openai.error.RateLimitError)
256 | def completions_with_backoff(**kwargs):
257 | return openai.ChatCompletion.create(**kwargs)
258 |
259 | model_name = 'gpt-3.5-turbo'
260 | content = prompt_head + prompt_cola
261 | response = completions_with_backoff(
262 | model=model_name,
263 | messages=[
264 | {"role": "user","content": content}
265 | ],
266 | )
267 | response = response["choices"][0]["message"]["content"]
268 | print(response)
269 |
--------------------------------------------------------------------------------
/research/nlu/readme.md:
--------------------------------------------------------------------------------
1 | # NLEP for Text Classification
2 | ## Description
3 | - nlu/direct_NLEP contains the code generated from direct NLEP prompting, without using a decision tree. Use direct_NLEP_generation.py to generate the code and direct_NLEP_execution.py to test the generated code.
4 | - nlu/tree_NLEP contains decision-tree code generation (tree_NLEP_generation.py) for different tasks and testing of the generated code (tree_NLEP_execution.py).
5 | - nlu/human_generated_tree.py contains the decision trees manually crafted by humans for each task.
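
The tree-NLEP approach classifies a sentence by walking a decision tree whose internal nodes are natural-language criteria checked by an entailment model and whose leaves are class labels. A minimal sketch of that traversal, with a simple keyword stub standing in for the entailment model (the real scripts use luohy/ESP-roberta-large or luohy/ESP-deberta-large):

```python
def entails(criterion: str, sentence: str) -> str:
    # Stand-in for the entailment model: a real run scores
    # f'{criterion} is entailed by {sentence}.' with an ESP model.
    return 'yes' if any(w in sentence.lower() for w in ('good', 'great', 'awesome')) else 'no'

def tree_predict(sentence, criterions, tree, classes):
    node = tree['root']
    while node not in classes:                       # internal nodes are criterion keys
        answer = entails(criterions[node], sentence)
        node = tree[node][answer]                    # follow the yes/no branch
    return node                                      # leaves are class labels

POSSIBLE_CLASSES = ['positive', 'negative']
criterions = {'positive_adjectives': 'The review uses positive adjectives'}
tree = {'root': 'positive_adjectives',
        'positive_adjectives': {'yes': 'positive', 'no': 'negative'}}

print(tree_predict('a great movie', criterions, tree, POSSIBLE_CLASSES))  # prints "positive"
```

Swapping `entails` for a real entailment call reproduces the `tree_predict` logic in tree_NLEP_execution.py.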
6 | ## Dataset
7 | To set up the Hate-Speech dataset, download it from the original repo and put it under the nlu folder:
8 | - HSD https://github.com/aymeam/Datasets-for-Hate-Speech-Detection
9 | - For the other datasets, download them from HuggingFace.
10 |
11 |
--------------------------------------------------------------------------------
/research/nlu/tree_NLEP/tree_NLEP_execution.py:
--------------------------------------------------------------------------------
1 |
2 | import torch
3 | from transformers import RobertaTokenizer, RobertaModel, RobertaForSequenceClassification
4 | from transformers import DebertaTokenizer, DebertaModel, DebertaForSequenceClassification
5 | from sklearn.metrics.pairwise import cosine_similarity
6 | from datasets import load_dataset
7 | from tqdm import tqdm
8 | import json
9 | # Load pre-trained RoBERTa model and tokenizer
10 | device = 'cuda'
11 | # Load Roberta (uncomment to use; the Deberta load below would override it):
12 | # model = RobertaForSequenceClassification.from_pretrained("luohy/ESP-roberta-large")
13 | # tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
14 | 
15 | # Load Deberta:
16 | model = DebertaForSequenceClassification.from_pretrained("luohy/ESP-deberta-large")
17 | tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-large")
18 |
19 | model.eval() # Set to evaluation mode
20 | model = model.to(device)
21 | softmax = torch.nn.Softmax(dim=0)
22 |
23 | # General Functions that are used for entailment model:
24 | def get_entailment_score(hypothesis, model, tokenizer):
25 | with torch.no_grad():
26 | proposition = f'{hypothesis} is entailed by .'
27 | inputs = tokenizer(proposition, return_tensors="pt", truncation=True, padding=True, max_length=128).to(device)
28 | outputs = model(**inputs)['logits'][0]
29 | ent = outputs[0]
30 | contra = outputs[2]
31 | return ent, contra
32 |
33 | def entailment(hypothesis, premise, model, tokenizer, reduce=(0,0), return_val=False):
34 | with torch.no_grad():
35 | proposition = f'{hypothesis} is entailed by {premise}.'
36 | inputs = tokenizer(proposition, return_tensors="pt", truncation=True, padding=True, max_length=128).to(device)
37 | outputs = model(**inputs)['logits'][0]
38 | ent_label = int((outputs[0]-reduce[0]) > (outputs[2]-reduce[1]))
39 | if ent_label == 1:
40 | if return_val:
41 | val = outputs[0] - reduce[0]
42 | return 'yes', val
43 | return 'yes'
44 | else:
45 | if return_val:
46 | val = outputs[0] - reduce[0]
47 | return 'no', val
48 | return 'no'
49 |
50 | def tree_predict(sentence, criterions, tree, model, tokenizer, score_list, return_val=False):
51 |     # Walk the tree from the root until a leaf (class label) is reached.
52 |     node = tree['root']
53 |     tot_num = 0
54 |     tot_val = 0.0
55 |     while node not in POSSIBLE_CLASSES:
56 |         score_tuple = score_list[node]
57 |         if not return_val:
58 |             ent_label = entailment(criterions[node], sentence, model, tokenizer, score_tuple, return_val)
59 |         else:
60 |             ent_label, val = entailment(criterions[node], sentence, model, tokenizer, score_tuple, return_val)
61 |             tot_num += 1
62 |             tot_val += val
63 |         node = tree[node][ent_label]
64 |     if return_val:
65 |         return node, tot_val/tot_num
66 |     return node
67 |
68 | # SST2 decision tree and possible classes
69 | POSSIBLE_CLASSES = ["positive", 'negative']
70 | def get_decision_tree_sst2(which_tree='gpt4'):
71 | if which_tree == 'human':
72 | # Human crafted tree
73 | # Step 1: define criterions of the decision tree.
74 | criterions = {
75 | 'is_interesting':'This movie is interesting',
76 | 'is_good_script':'The movie has a good script',
77 |             'is_good_character':'The characters are awesome',
78 | 'is_wise': 'This movie is wise'
79 | }
80 |
81 | # Step 2: define the balanced decision tree for this classification task
82 | tree = {
83 | 'root': 'is_interesting',
84 | 'is_interesting': {'yes': 'is_good_script', 'no': 'is_wise'},
85 | 'is_good_script': {'yes': 'positive', 'no': 'is_good_character'},
86 | 'is_good_character': {'yes': 'positive', 'no': 'negative'},
87 | 'is_wise': {'yes': 'positive', 'no': 'negative'}
88 | }
89 | else:
90 | # Automatic Tree
91 | # Step 1: define criterions of the decision tree
92 | criterions = {
93 | 'positive_adjectives': 'The review uses positive adjectives',
94 | 'negative_adjectives': 'The review uses negative adjectives',
95 | 'positive_director_mention': 'The review mentions the director\'s name positively',
96 | 'negative_cast_comments': 'The review comments negatively on the cast'
97 | }
98 |
99 | # Step 2: define the balanced decision tree for this classification task
100 | tree = {
101 | 'root': 'positive_adjectives',
102 | 'positive_adjectives': {'yes': 'positive', 'no': 'negative_adjectives'},
103 | 'negative_adjectives': {'yes': 'negative', 'no': 'positive_director_mention'},
104 | 'positive_director_mention': {'yes': 'positive', 'no': 'negative_cast_comments'},
105 | 'negative_cast_comments': {'yes': 'negative', 'no': 'positive'}
106 | }
107 | all_scores = {}
108 | for key in criterions.keys():
109 | sentence = criterions[key]
110 | score = get_entailment_score(sentence, model, tokenizer)
111 | all_scores[key] = score
112 | return criterions, tree, all_scores
113 |
114 |
115 |
116 | def get_sst2_tree_score():
117 | dataset = load_dataset('sst2')
118 | val_dataset = dataset["validation"]
119 | criterions, tree, all_score = get_decision_tree_sst2()
120 | tot = 0
121 | acc = 0
122 | for item in tqdm(val_dataset):
123 | tot += 1
124 | sentence = item['sentence']
125 |
126 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, all_score, True)
127 | print(sentence)
128 | print(POSSIBLE_CLASSES[item['label']])
129 | print(label)
130 |         if label == POSSIBLE_CLASSES[1-item['label']]:  # sst2 label 1 is 'positive', which is index 0 here
131 | acc += 1
132 | print(acc/tot)
133 |
134 |
135 | # COLA decision tree and testing
136 | POSSIBLE_CLASSES = ["acceptable", 'unacceptable']
137 | def get_decision_tree_cola(which_tree='gpt4'):
138 | if which_tree == 'gpt4':
139 | # GPT 4
140 | # Step 1: define criterions of the decision tree.
141 | criterions = {
142 | 0:'This sentence is grammatically correct',
143 | 1:'The sentence does not contain any spelling errors',
144 | 2:'The sentence uses punctuation correctly',
145 | 3:'This sentence is semantically clear'
146 | }
147 |
148 | # Step 2: define the Decision Tree for classification
149 | tree = {
150 | 'root': 0,
151 | 0: {'yes': 1, 'no': 3},
152 | 1: {'yes': 'acceptable', 'no': 2},
153 | 2: {'yes': 'acceptable', 'no': 'unacceptable'},
154 | 3: {'yes': 2, 'no': 'unacceptable'}
155 | }
156 |     elif which_tree == 'gpt3.5':
157 | # GPT3.5
158 | # Step 1: define criterions of the decision tree.
159 | criterions = {
160 | 'is_grammatically_correct': 'The sentence is grammatically correct',
161 | 'is_clear': 'The sentence is clear',
162 | 'has_minor_errors': 'The sentence has minor errors',
163 | 'has_major_errors': 'The sentence has major errors'
164 | }
165 |
166 | # Step 2: define the balanced decision tree for this classification task
167 | tree = {
168 | 'root': 'is_grammatically_correct',
169 | 'is_grammatically_correct': {'yes': 'is_clear', 'no': 'has_minor_errors'},
170 | 'is_clear': {'yes': 'acceptable', 'no': 'has_major_errors'},
171 | 'has_minor_errors': {'yes': 'acceptable', 'no': 'unacceptable'},
172 | 'has_major_errors': {'yes': 'unacceptable', 'no': 'acceptable'}
173 | }
174 | elif which_tree == 'human':
175 | # Step 1: define criterions of the decision tree.
176 | criterions = {
177 | 'has_subject': 'The sentence has subject',
178 | 'has_verb': 'The sentence has verb',
179 | 'punctuation_correct':'The sentence has proper punctuations',
180 | 'pronouns_reference_match':'This sentence has matched pronouns and references',
181 | 'subject_verb_match':'The sentence has matched subject and verb',
182 | }
183 |
184 | # Step 2: define the balanced decision tree for this classification task
185 | tree = {
186 | 'root': 'has_subject',
187 | 'has_subject': {'yes': 'has_verb', 'no': 'unacceptable'},
188 | 'has_verb': {'yes': 'punctuation_correct', 'no': 'unacceptable'},
189 | 'punctuation_correct': {'yes': 'pronouns_reference_match', 'no': 'unacceptable'},
190 | 'pronouns_reference_match': {'yes': 'subject_verb_match', 'no': 'unacceptable'},
191 | 'subject_verb_match':{'yes': 'acceptable', 'no': 'unacceptable'}
192 | }
193 | all_scores = {}
194 | for key in criterions.keys():
195 | sentence = criterions[key]
196 | score = get_entailment_score(sentence, model, tokenizer)
197 | all_scores[key] = score
198 | return criterions, tree, all_scores
199 |
200 | # baseline of cola using entailment model
201 | def get_decision_cola_baseline():
202 | # Step 1: define criterions of the decision tree.
203 | criterions = {
204 | 0:'This sentence is acceptable',
205 | }
206 |
207 | # Step 2: define the Decision Tree for classification
208 | tree = {
209 | 'root': 0,
210 | 0: {'yes': 'acceptable', 'no':'unacceptable'},
211 | }
212 |
213 | all_scores = {}
214 | for key in criterions.keys():
215 | sentence = criterions[key]
216 | score = get_entailment_score(sentence, model, tokenizer)
217 | all_scores[key] = score
218 | return criterions, tree, all_scores
219 |
220 | def get_cola_tree_score():
221 | dataset = load_dataset('glue', 'cola')
222 | val_dataset = dataset["validation"]
223 | criterions, tree, all_score = get_decision_tree_cola()
224 | tot = 0
225 | acc = 0
226 | for item in tqdm(val_dataset):
227 | tot += 1
228 | sentence = item['sentence']
229 |
230 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, all_score, True)
231 | if label == POSSIBLE_CLASSES[item['label']]:
232 | acc += 1
233 | print(acc/tot)
234 |
235 |
236 |
237 | # Emotion Classification tree and testing
238 | POSSIBLE_CLASSES = ['I feel sad', 'I feel happy', 'I feel love', 'I feel angry', 'I feel afraid', 'I feel surprised']
239 | def get_emotion_tree(which_tree='gpt4'):
240 | if which_tree == 'gpt4':
241 | # GPT4
242 | # Step 1: define criterions of the decision tree.
243 | criterions = {
244 | 'is_positive':'This feeling is positive',
245 | 'is_sad':'I feel sad',
246 | 'is_angry':'I feel angry',
247 | 'is_afraid':'I feel afraid',
248 | 'is_happy':'I feel happy',
249 | 'is_love':'I feel love',
250 | 'is_surprised':'I feel surprised'
251 | }
252 |
253 | # Step 2: define the balanced decision tree for this classification task
254 | tree = {
255 | 'root': 'is_positive',
256 | 'is_positive': {'yes': 'is_happy', 'no': 'is_sad'},
257 | 'is_happy': {'yes': 'I feel happy', 'no': 'is_love'},
258 | 'is_love': {'yes': 'I feel love', 'no': 'is_surprised'},
259 | 'is_surprised': {'yes': 'I feel surprised', 'no': 'I feel happy'},
260 | 'is_sad': {'yes': 'I feel sad', 'no': 'is_angry'},
261 | 'is_angry': {'yes': 'I feel angry', 'no': 'is_afraid'},
262 | 'is_afraid': {'yes': 'I feel afraid', 'no': 'I feel sad'}
263 | }
264 | elif which_tree == 'gpt3.5':
265 | # GPT3.5
266 | # Step 1: define criterions of the decision tree.
267 | criterions = {
268 | 'is_positive':'The sentence expresses positive emotion',
269 | 'is_love':'The sentence expresses love',
270 | 'is_happy':'The sentence expresses happiness',
271 | 'is_negative':'The sentence expresses negative emotion',
272 | 'is_anger':'The sentence expresses anger',
273 | 'is_sad':'The sentence expresses sadness'
274 | }
275 |
276 | # Step 2: define the balanced decision tree for this classification task
277 | tree = {
278 | 'root': 'is_positive',
279 | 'is_positive': {'yes': 'is_love', 'no': 'is_negative'},
280 | 'is_love': {'yes': 'I feel love', 'no': 'is_happy'},
281 | 'is_happy': {'yes': 'I feel happy', 'no': 'I feel surprised'},
282 | 'is_negative': {'yes': 'is_anger', 'no': 'is_sad'},
283 | 'is_anger': {'yes': 'I feel angry', 'no': 'is_sad'},
284 | 'is_sad': {'yes': 'I feel sad', 'no': 'I feel afraid'}
285 | }
286 | elif which_tree == 'human':
287 | # Human Prompt
288 | # Step 1: define criterions of the decision tree.
289 | criterions = {
290 | 'positive_words': 'The sentence includes positive words',
291 | 'intensifiers': 'The sentence includes intensifiers, adverb or exclamation point',
292 | 'unexpected': 'The sentence includes words that are synonyms or antonyms of unexpected',
293 | 'fear': 'The sentence includes words that are synonyms or antonyms of fear',
294 | 'upset': 'The sentence includes words that are synonyms or antonyms of upset'
295 | }
296 |
297 | # Step 2: define the balanced decision tree for this classification task
298 | tree = {
299 | 'root': 'positive_words',
300 | 'positive_words': {'yes': 'intensifiers', 'no': 'fear'},
301 | 'intensifiers': {'yes': 'unexpected', 'no': 'I feel happy'},
302 | 'unexpected': {'yes': 'I feel surprised', 'no': 'I feel love'},
303 | 'fear': {'yes': 'I feel afraid', 'no': 'upset'},
304 | 'upset': {'yes': 'I feel sad', 'no': 'I feel angry'}
305 | }
306 | all_scores = {}
307 | for key in criterions.keys():
308 | sentence = criterions[key]
309 | score = get_entailment_score(sentence, model, tokenizer)
310 | all_scores[key] = score
311 | return criterions, tree, all_scores
312 |
313 | # Another side-note: generate a tree for each emotion and get the score for each class; choose the highest class label as the result
314 | def get_happy_tree():
315 | # Step 1: define criterions of the decision tree.
316 | criterions = [
317 | 'I am feeling good',
318 | 'I have a positive attitude',
319 | 'I am smiling',
320 | 'I am excited'
321 | ]
322 |
323 | # Step 2: define the balanced decision tree for this classification task
324 | tree = {
325 | 'root': 0,
326 | 0: {'yes': 1, 'no': 2},
327 | 1: {'yes': 'Yes', 'no': 3},
328 | 2: {'yes': 3, 'no': 'No'},
329 | 3: {'yes': 'Yes', 'no': 'No'}
330 | }
331 |
332 | all_scores = []
333 | for sentence in criterions:
334 | score = get_entailment_score(sentence, model, tokenizer)
335 | all_scores.append(score)
336 | del score
337 | return criterions, tree, all_scores
338 | def get_sad_tree():
339 | # Step 1: define criterions of the decision tree.
340 | criterions = [
341 | 'I am feeling depressed',
342 | 'I am not feeling good',
343 | 'I am missing someone',
344 | 'I feel like crying'
345 | ]
346 |
347 | # Step 2: define the balanced decision tree for this classification task
348 | tree = {
349 | 'root': 0,
350 | 0: {'yes': 1, 'no': 3},
351 | 1: {'yes': 'Yes', 'no': 2},
352 | 2: {'yes': 'Yes', 'no': 'No'},
353 | 3: {'yes': 'Yes', 'no': 'No'}
354 | }
355 |
356 |
357 | all_scores = []
358 | for sentence in criterions:
359 | score = get_entailment_score(sentence, model, tokenizer)
360 | all_scores.append(score)
361 | del score
362 | return criterions, tree, all_scores
363 |
364 | def get_love_tree():
365 | # Step 1: define criterions of the decision tree.
366 | criterions = [
367 | 'I mention love in this sentence',
368 | 'I talk about affection in this sentence',
369 | 'I express happiness in this sentence',
370 | 'I mention someone special in this sentence'
371 | ]
372 |
373 | # Step 2: define the balanced decision tree for this classification task
374 | tree = {
375 | 'root': 0,
376 | 0: {'yes': 1, 'no': 2},
377 | 1: {'yes': 'Yes', 'no': 3},
378 | 2: {'yes': 'Yes', 'no': 'No'},
379 | 3: {'yes': 'Yes', 'no': 'No'}
380 | }
381 |
382 | all_scores = []
383 | for sentence in criterions:
384 | score = get_entailment_score(sentence, model, tokenizer)
385 | all_scores.append(score)
386 | del score
387 | return criterions, tree, all_scores
388 |
389 | def get_anger_tree():
390 | # Step 1: define criterions of the decision tree.
391 | criterions = [
392 | 'I am irritated',
393 | 'I am frustrated',
394 | 'I am outraged',
395 | 'I am moody'
396 | ]
397 |
398 | # Step 2: define the balanced decision tree for this classification task
399 | tree = {
400 | 'root': 0,
401 | 0: {'yes': 1, 'no': 2},
402 | 1: {'yes': 'Yes', 'no': 3},
403 | 2: {'yes': 'Yes', 'no': 'No'},
404 | 3: {'yes': 'Yes', 'no': 'No'}
405 | }
406 |
407 | all_scores = []
408 | for sentence in criterions:
409 | score = get_entailment_score(sentence, model, tokenizer)
410 | all_scores.append(score)
411 | del score
412 | return criterions, tree, all_scores
413 |
414 | def get_fear_tree():
415 | # Step 1: define criterions of the decision tree.
416 | criterions = [
417 | 'I feel scared',
418 | 'I am frightened',
419 | 'I feel terrified',
420 | 'There is danger'
421 | ]
422 |
423 | # Step 2: define the balanced decision tree for this classification task
424 | tree = {
425 | 'root': 0,
426 | 0: {'yes': 1, 'no': 2},
427 | 1: {'yes': 'Yes', 'no': 'No'},
428 | 2: {'yes': 3, 'no': 'No'},
429 | 3: {'yes': 'Yes', 'no': 'No'}
430 | }
431 |
432 | all_scores = []
433 | for sentence in criterions:
434 | score = get_entailment_score(sentence, model, tokenizer)
435 | all_scores.append(score)
436 | del score
437 | return criterions, tree, all_scores
438 |
439 | def get_surprise_tree():
440 | # Step 1: define criterions of the decision tree.
441 | criterions = [
442 | 'I did not expect this',
443 | 'This is unusual',
444 | 'I am startled',
445 | 'This is a surprise'
446 | ]
447 |
448 | # Step 2: define the balanced decision tree for this classification task
449 | tree = {
450 | 'root': 0,
451 | 0: {'yes': 1, 'no': 2},
452 | 1: {'yes': 'Yes', 'no': 3},
453 | 2: {'yes': 'Yes', 'no': 'No'},
454 | 3: {'yes': 'Yes', 'no': 'No'}
455 | }
456 |
457 | all_scores = []
458 | for sentence in criterions:
459 | score = get_entailment_score(sentence, model, tokenizer)
460 | all_scores.append(score)
461 | return criterions, tree, all_scores
462 |
463 | # test emotion by using only one tree
464 | def get_full_tree_emotion():
465 | dataset = load_dataset('emotion')
466 | val_dataset = dataset["validation"]
467 | criterions, tree, all_score = get_emotion_tree()
468 | tot = 0
469 | acc = 0
470 | for item in tqdm(val_dataset):
471 | tot += 1
472 | sentence = item['text']
473 |
474 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, all_score, True)
475 | if label == POSSIBLE_CLASSES[item['label']]:
476 | acc += 1
477 | print(acc/tot)
478 |
479 | # Use multiple trees for each class and choose the highest-scoring class
480 | def get_all_tree_emotion():
481 | EMOTION_CLASSES = ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
482 | criterions_sad, tree_sad, score_sad = get_sad_tree()
483 | criterions_happy, tree_happy, score_happy = get_happy_tree()
484 | criterions_love, tree_love, score_love = get_love_tree()
485 | criterions_anger, tree_anger, score_anger = get_anger_tree()
486 | criterions_fear, tree_fear, score_fear = get_fear_tree()
487 | criterions_surprise, tree_surprise, score_surprise = get_surprise_tree()
488 | all_criterion = [criterions_sad, criterions_happy,criterions_love, criterions_anger,criterions_fear,criterions_surprise]
489 | all_tree = [tree_sad, tree_happy, tree_love, tree_anger,tree_fear, tree_surprise]
490 | all_score = [score_sad, score_happy, score_love, score_anger, score_fear, score_surprise]
491 |     val_dataset = load_dataset('emotion')["validation"]  # was previously undefined in this function
492 |     tot, acc = 0, 0
493 |     for item in tqdm(val_dataset):
494 | tot += 1
495 | sentence = item['text']
496 | correct_emotions = []
497 | emotion_val = []
498 | for criterions, tree, score, emotion in zip(all_criterion, all_tree,all_score, EMOTION_CLASSES):
499 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, score, True)
500 | if label == 'Yes':
501 | correct_emotions.append(emotion)
502 | emotion_val.append(val)
503 |
504 | if len(correct_emotions) == 1:
505 | label = correct_emotions[0]
506 | elif len(correct_emotions) > 1:
507 | init = -100
508 | for num, emotion in enumerate(EMOTION_CLASSES):
509 | if emotion in correct_emotions and emotion_val[num] > init:
510 | label = emotion
511 | init = emotion_val[num]
512 | else:
513 | index = torch.argmax(torch.tensor(emotion_val))
514 | label = EMOTION_CLASSES[index]
515 |
516 | if label == EMOTION_CLASSES[item['label']]:
517 | acc += 1
518 | print(acc/tot)
519 | with open('./results/emotion_tree', 'w') as f:
520 | f.write(str(acc/tot))
521 |
522 | # Tree and testing for amazon review
523 | POSSIBLE_CLASSES = ['1', '2', '3', '4', '5']
524 | def get_amazon_review_tree(which_tree='gpt4'):
525 | if which_tree == 'gpt4':
526 | # Step 1: define criterions of the decision tree.
527 |
528 | criterions = {
529 |             'is_very_satisfied':'This review is very satisfied',
530 |             'is_superior_quality':'The product is of superior quality',
531 |             'is_moderately_satisfied':'This review is moderately satisfied',
532 |             'is_critic':'The review contains criticism',
533 |             'is_satisfied':'The review is satisfied',
534 |             'is_improvable':'The product has features to improve',
535 |             'is_unsatisfied':'The review is unsatisfied',
536 |             'is_extremely_negative':'The review contains extremely negative words'
537 |         }
538 | 
539 |         # Step 2: define the balanced decision tree for this classification task
540 |         tree = {
541 |             'root': 'is_very_satisfied',
542 |             'is_very_satisfied': {'yes': 'is_superior_quality', 'no': 'is_moderately_satisfied'},
543 | 'is_superior_quality': {'yes': '5', 'no': '4'},
544 | 'is_moderately_satisfied': {'yes': 'is_critic', 'no': 'is_satisfied'},
545 | 'is_critic': {'yes': '3', 'no': '4'},
546 | 'is_satisfied': {'yes': 'is_improvable', 'no': '2'},
547 | 'is_improvable': {'yes': '3', 'no': '2'},
548 | 'is_unsatisfied': {'yes': 'is_extremely_negative', 'no': '2'},
549 | 'is_extremely_negative': {'yes': '1', 'no': '2'}
550 | }
551 | elif which_tree == 'human':
552 | # Step 1: define criterions of the decision tree.
553 |         criterions = {
554 |             'extreme_word': 'The review includes intensifiers or exclamation point',
555 |             'positive': 'The review includes positive words or sentences',
556 |             'mildly_positive': 'The review includes mildly positive words or sentences',
557 |             'both_positive_negative': 'The review includes both positive and negative sentences',
558 |         }
559 |         # Step 2: define the balanced decision tree ('mildly_positive' replaces a duplicated 'positive' key that silently overwrote the first entry)
560 |         tree = {
561 |             'root': 'extreme_word',
562 |             'extreme_word': {'yes': 'positive', 'no': 'both_positive_negative'},
563 |             'positive': {'yes': '5', 'no': '1'},
564 |             'both_positive_negative': {'yes': '3', 'no': 'mildly_positive'},
565 |             'mildly_positive': {'yes': '4', 'no': '2'}
566 |         }
567 | elif which_tree == 'gpt3.5':
568 | # GPT3.5
569 | # Step 1: define criterions of the decision tree.
570 | criterions = {
571 | 'is_negative_sentiment':'This review has negative sentiment',
572 | 'is_short_review':'This review is short',
573 | 'has_positive_keywords':'This review has positive keywords',
574 | 'has_negative_keywords':'This review has negative keywords'
575 | }
576 |
577 | # Step 2: define the balanced decision tree for this classification task
578 | tree = {
579 | 'root': 'is_negative_sentiment',
580 | 'is_negative_sentiment': {'yes': '1', 'no': 'is_short_review'},
581 | 'is_short_review': {'yes': '3', 'no': 'has_positive_keywords'},
582 | 'has_positive_keywords': {'yes': '5', 'no': 'has_negative_keywords'},
583 | 'has_negative_keywords': {'yes': '2', 'no': '4'}
584 | }
585 | all_scores = {}
586 | for key in criterions.keys():
587 | sentence = criterions[key]
588 | score = get_entailment_score(sentence, model, tokenizer)
589 | all_scores[key] = score
590 | return criterions, tree, all_scores
591 |
592 | def get_full_tree_amazon():
593 |
594 | dataset = load_dataset('amazon_reviews_multi', 'en')
595 | val_dataset = dataset["validation"]
596 | criterions, tree, all_score = get_amazon_review_tree()
597 | tot = 0
598 | acc = 0
599 | for item in tqdm(val_dataset):
600 | tot += 1
601 | sentence = item['review_title'] + item['review_body']
602 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, all_score, True)
603 | if label == POSSIBLE_CLASSES[item['stars']-1]:
604 | acc += 1
605 | print(acc/tot)
606 |
607 |
608 | POSSIBLE_CLASSES = ['this is hate', 'this is noHate']
609 | def get_hate_tree(which_tree='gpt4'):
610 | if which_tree=='gpt4':
611 | # GPT4
612 | # # Step 1: define criterions of the decision tree.
613 | criterions = {
614 | 'is_aggressive':'This speech contains aggressive language',
615 | 'is_targeting_group':'This speech targets a specific group',
616 | 'is_explicit_discrimination':'This speech contains explicit discrimination',
617 | 'has_covert_prejudice': 'This speech has covert prejudice or bias'
618 | }
619 |
620 | # Step 2: define the balanced decision tree for this classification task
621 | tree = {
622 | 'root': 'is_aggressive',
623 | 'is_aggressive': {'yes': 'is_targeting_group', 'no': 'has_covert_prejudice'},
624 | 'is_targeting_group': {'yes': 'this is hate', 'no': 'is_explicit_discrimination'},
625 | 'is_explicit_discrimination': {'yes': 'this is hate', 'no': 'this is noHate'},
626 | 'has_covert_prejudice': {'yes': 'this is hate', 'no': 'this is noHate'}
627 | }
628 | elif which_tree == 'human':
629 | # Human Tree
630 | # Step 1: define criterions of the decision tree.
631 | criterions = {
632 | 'oi_words': 'The speech includes offensive words about minorities',
633 | 'identity_words': 'The sentence includes mention of identities',
634 | 'swear_words': 'The sentence includes swear words',
635 | 'negative_words':'The sentence includes negative words',
636 | }
637 |
638 | # Step 2: define the balanced decision tree for this classification task
639 | tree = {
640 | 'root': 'oi_words',
641 | 'oi_words': {'yes': 'this is hate', 'no': 'identity_words'},
642 | 'identity_words': {'yes': 'swear_words', 'no': 'this is noHate'},
643 | 'swear_words': {'yes': 'this is hate', 'no': 'negative_words'},
644 | 'negative_words': {'yes': 'this is hate', 'no': 'this is noHate'},
645 | }
646 | elif which_tree=='gpt3.5':
647 | # GPT3.5
648 | # Step 1: define criterions of the decision tree.
649 | criterions = {
650 | 'is_explicit_hate':'This speech contains explicit hate speech',
651 | 'is_derogatory_language':'This speech contains derogatory language'
652 | }
653 |
654 | # Step 2: define the balanced decision tree for this classification task
655 | tree = {
656 | 'root': 'is_explicit_hate',
657 | 'is_explicit_hate': {'yes': 'this is hate', 'no': 'is_derogatory_language'},
658 | 'is_derogatory_language': {'yes': 'this is hate', 'no': 'this is noHate'}
659 | }
660 | all_scores = {}
661 | for key in criterions.keys():
662 | sentence = criterions[key]
663 | score = get_entailment_score(sentence, model, tokenizer)
664 | all_scores[key] = score
665 | return criterions, tree, all_scores
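# The trees above are plain dicts, so the `tree_predict` traversal can be
# sketched without running the NLI model. This is a minimal illustration; the
# hypothetical `answers` dict stands in for the per-criterion entailment calls:
def _demo_tree_walk(tree, leaves, answers):
    # Follow the yes/no edge chosen for each criterion until a leaf label is reached.
    node = tree['root']
    while node not in leaves:
        node = tree[node][answers[node]]
    return node
# Example with the GPT-4 tree defined above:
# _demo_tree_walk(tree, ['this is hate', 'this is noHate'],
#                 {'is_aggressive': 'yes', 'is_targeting_group': 'yes'})  # -> 'this is hate'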
666 |
667 |
668 | def get_hate_dataset():
669 | import os
670 | import csv
671 | import json
672 | # Paths to CSV file and test set folder
673 | csv_file_path = '../hate-speech-dataset/annotations_metadata.csv'
674 | test_folder_path = '../hate-speech-dataset/sampled_test/'
675 |
676 | # Read the CSV file and create a dictionary of file IDs and labels
677 | file_labels = {}
678 | with open(csv_file_path, 'r') as csv_file:
679 | csv_reader = csv.DictReader(csv_file)
680 | for row in csv_reader:
681 | print(row)
682 | file_id = row['file_id']
683 | label = row['label']
684 | file_labels[file_id] = label
685 |
686 | # Iterate over files in the test set folder
687 | data = []
688 | for filename in os.listdir(test_folder_path):
689 | if filename.endswith('.txt'):
690 | file_id = filename.split('.')[0]
691 | file_path = os.path.join(test_folder_path, filename)
692 |
693 | # Check if the file ID has a corresponding label
694 | if file_id in file_labels:
695 | label = file_labels[file_id]
696 | with open(file_path, 'r') as txt_file:
697 | sentence = txt_file.read().strip()
698 | print(f'File: {file_id}\nLabel: {label}\nSentence: {sentence}\n')
699 | new_dict = {}
700 | new_dict['file'] = filename
701 | new_dict['label'] = label
702 | new_dict['sentence'] = sentence
703 | data.append(new_dict)
704 | else:
705 | print(f'No label found for File: {file_id}\n')
706 | with open('../hate-speech-dataset/test_sample.json', 'w') as f:
707 | json.dump(data, f)
708 |
709 | def get_full_tree_hate_speech():
710 | with open('../hate-speech-dataset/test_sample.json', 'r') as f:
711 | data = json.load(f)
712 | criterions, tree, all_score = get_hate_tree()
713 | tot = 0
714 | acc = 0
715 | for item in tqdm(data):
716 | tot += 1
717 | sentence = item['sentence']
718 |
719 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, all_score, True)
720 |
721 | if label == 'this is ' + item['label']:
722 | acc += 1
723 | print(acc/tot)
724 | # get_full_tree_hate_speech()
725 |
726 |
727 | def get_baseline_hate_speech():
728 | with open('../hate-speech-dataset/test_sample.json', 'r') as f:
729 | data = json.load(f)
730 | tot = 0
731 | acc = 0
732 | hate_tuple = get_entailment_score('this is hate ', model, tokenizer)
733 | nonhate_tuple = get_entailment_score('this is not hate ', model, tokenizer)
734 | for item in tqdm(data):
735 | tot += 1
736 | sentence = item['sentence']
737 |
738 | hatelabel, hateval = entailment('this is hate ', sentence, model, tokenizer, hate_tuple, True)
739 | nohatelabel, nohateval = entailment('this is not hate ', sentence, model, tokenizer, nonhate_tuple, True)
740 | if hateval > nohateval:
741 | label = 'hate'
742 | else:
743 | label = 'noHate'
744 |
745 | if label == item['label']:
746 | acc += 1
747 | print(acc/tot)
748 |
749 |
750 |
751 | POSSIBLE_CLASSES = ['Yes','Maybe', 'No']
752 | def get_bias_offense_tree(which_tree='gpt4', use_model='deberta'):
753 | if which_tree=='gpt4':
754 | if use_model=='roberta':
755 | # Step 1: define criterions of the decision tree.
756 | criterions = {
757 | 'is_hate_speech':'This post contains hate speech',
758 | 'is_disrespectful':'This post is disrespectful',
759 | 'has_strong_language':'This post contains strong language',
760 | }
761 |
762 | # Step 2: define the balanced decision tree for this classification task
763 | tree = {
764 | 'root': 'is_hate_speech',
765 | 'is_hate_speech': {'yes': 'Yes', 'no': 'is_disrespectful'},
766 | 'is_disrespectful': {'yes': 'Maybe', 'no': 'has_strong_language'},
767 | 'has_strong_language': {'yes': 'Maybe', 'no': 'No'},
768 | }
769 | else:
770 | # Step 1: define criterions of the decision tree.
771 | criterions = {
772 | 'is_politically_incorrect':'This is politically incorrect',
773 | 'includes_offensive_words':'This includes offensive words',
774 | 'is_discriminating':'This is discriminative'
775 | }
776 |
777 | # Step 2: define the balanced decision tree for this task.
778 | tree = {
779 | 'root': 'is_politically_incorrect',
780 | 'is_politically_incorrect': {'yes': 'includes_offensive_words', 'no': 'is_discriminating'},
781 | 'includes_offensive_words': {'yes': 'Yes', 'no': 'Maybe'},
782 | 'is_discriminating': {'yes': 'Maybe', 'no': 'No'}
783 | }
784 | elif which_tree == 'human':
785 | # Human Tree
786 | # Step 1: define criterions of the decision tree.
787 | criterions = {
788 | 'racist_words': 'The post includes racist words',
789 | 'sexist_words': 'The post includes sexist words',
790 | 'swear_words': 'The post includes swear words',
791 | 'identity_words': 'The post includes mention of identities',
792 | 'negative_words':'The post includes negative words',
793 | }
794 |
795 | # Step 2: define the balanced decision tree for this classification task
796 | tree = {
797 | 'root': 'racist_words',
798 | 'racist_words': {'yes': 'Yes', 'no': 'sexist_words'},
799 | 'sexist_words': {'yes': 'Yes', 'no': 'swear_words'},
800 | 'swear_words': {'yes': 'Yes', 'no': 'identity_words'},
801 | 'identity_words': {'yes': 'negative_words', 'no': 'Maybe'},
802 | 'negative_words': {'yes': 'Maybe', 'no': 'No'}
803 | }
804 | elif which_tree=='gpt3.5':
805 | # GPT3.5
806 | # Step 1: define criterions of the decision tree.
807 | criterions = {
808 | 'contains_offensive_language':'This post contains offensive language',
809 | 'intention_to_offend':'The intention of the post is to offend',
810 | 'tone_neutral':'The tone of the post is neutral',
811 | 'tone_ambiguous':'The tone of the post is ambiguous'
812 | }
813 |
814 | # Step 2: define the balanced decision tree for this classification task
815 | tree = {
816 | 'root': 'contains_offensive_language',
817 | 'contains_offensive_language': {'yes': 'intention_to_offend', 'no': 'tone_neutral'},
818 | 'intention_to_offend': {'yes': 'Yes', 'no': 'Maybe'},
819 | 'tone_neutral': {'yes': 'No', 'no': 'tone_ambiguous'},
820 | 'tone_ambiguous': {'yes': 'Maybe', 'no': 'No'}
821 | }
822 | all_scores = {}
823 | for key in criterions.keys():
824 | sentence = criterions[key]
825 | score = get_entailment_score(sentence, model, tokenizer)
826 | all_scores[key] = score
827 | return criterions, tree, all_scores
828 |
829 |
830 | def get_full_tree_social_speech():
831 | dataset = load_dataset('social_bias_frames')
832 | val_dataset = dataset["validation"]
833 | criterions, tree, all_score = get_bias_offense_tree()
834 | tot = 0
835 | acc = 0
836 | for item in tqdm(val_dataset):
837 | tot += 1
838 | sentence = item['post']
839 |
840 | label, val = tree_predict(sentence, criterions, tree, model, tokenizer, all_score, True)
841 | truth = item['offensiveYN']
842 |
843 | if truth == '1.0':
844 | truth = 'Yes'
845 | elif truth == '0.5':
846 | truth = 'Maybe'
847 | else:
848 | truth = 'No'
849 |
850 | if label == truth:
851 | acc += 1
852 |
853 | print(acc/tot)
854 |
858 |
859 | def get_full_tree_social_speech_baseline():
860 | dataset = load_dataset('social_bias_frames')
861 | val_dataset = dataset["validation"]
862 | tot = 0
863 | acc = 0
864 | hate_tuple = get_entailment_score('this is offensive ', model, tokenizer)
865 | nonhate_tuple = get_entailment_score('this is not offensive ', model, tokenizer)
866 | maybehate_tuple = get_entailment_score('this is maybe offensive ', model, tokenizer)
867 | for item in tqdm(val_dataset):
868 | tot += 1
869 | sentence = item['post']
870 |
871 | hatelabel, hateval = entailment('this is offensive ', sentence, model, tokenizer, hate_tuple, True)
872 | nohatelabel, nohateval = entailment('this is not offensive ', sentence, model, tokenizer, nonhate_tuple, True)
873 | maybehatelabel, maybehateval = entailment('this is maybe offensive ', sentence, model, tokenizer, maybehate_tuple, True)
874 | if hateval > nohateval and hateval > maybehateval:
875 | label = 'Yes'
876 | elif nohateval > hateval and nohateval > maybehateval:
877 | label = 'No'
878 | else:
879 | label = 'Maybe'
880 | truth = item['offensiveYN']
881 | if truth == '1.0':
882 | truth = 'Yes'
883 | elif truth == '0.5':
884 | truth = 'Maybe'
885 | else:
886 | truth = 'No'
887 | if label == truth:
888 | acc += 1
889 |
890 | print(acc/tot)
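# The offensiveYN-to-label mapping repeated in both evaluation loops above can
# be factored into a small helper (hypothetical name; the loops inline it):
def _offensive_truth_label(offensive_yn):
    # social_bias_frames stores offensiveYN as a string score: '1.0', '0.5', or other.
    return {'1.0': 'Yes', '0.5': 'Maybe'}.get(offensive_yn, 'No')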
891 |
892 |
--------------------------------------------------------------------------------
/research/nlu/tree_NLEP/tree_NLEP_generation.py:
--------------------------------------------------------------------------------
1 | import openai
2 | import backoff
3 | openai.api_key_path = './api_key.txt'
4 | prompt_head = """Write a Python function that constructs a decision tree according to the given examples that can generate the correct label of the given classification task.
5 |
6 | ### Available APIs (shared for all tasks):
7 |
8 | \"\"\"Returns whether the hypothesis is entailed by the premise.\"\"\"
9 | def entailment(hypothesis, premise, model, tokenizer):
10 | proposition = f'{hypothesis} is entailed by {premise}.'
11 | inputs = tokenizer(proposition, return_tensors="pt", truncation=True, padding=True, max_length=128)
12 | outputs = model(**inputs)['logits'][0]
13 | ent_label = int(outputs[0] > outputs[2])
14 | if ent_label == 1:
15 | return 'yes'
16 | else:
17 | return 'no'
18 |
19 | \"\"\"Use the constructed decision tree to predict the label of the sentence.\"\"\"
20 | def tree_predict(sentence, criterions, tree, model, tokenizer):
21 | node = tree['root']
22 | while node not in POSSIBLE_CLASSES:
23 | ent_label = entailment(criterions[node], sentence, model, tokenizer)
24 | node = tree[node][ent_label]
25 | return node
26 |
27 | ### Task: Movie review classification
28 | ### Description: Determine if a movie review expresses positive attitude or negative attitude.
29 | ### Possible classes: [positive, negative]
30 | ### Examples:
31 | - contains no wit, only labored gags
32 | - [The movie is wise|The movie is not wise|1], [the story is fun|the story is not boring|1], [the review is positive|the review is negative|1]
33 | - that loves its characters and communicates something rather beautiful about human nature
34 | - [The characters are lovely|The characters are awful|0], [the script is touching|the script is dry|0], [the review is positive|the review is negative|0]
35 | - on the worst revenge-of-the-nerds clichés the filmmakers could dredge up
36 | - [The movie is novel|The movie is mostly platitudes|1], [the review is negative|1]
37 | - are more deeply thought through than in most right-thinking films
38 | - [The takeaway of the movie is profound|The idea of the movie is shallow|0], [the review is positive|the review is negative|0]
39 |
40 | ### Define possible classes
41 | POSSIBLE_CLASSES = ['positive', 'negative']
42 |
43 | ### Decision Tree Logic:
44 | Start by assessing whether the movie is interesting.
45 | - If uninteresting:
46 | Probe for depth by determining if the movie is wise.
47 | - If wise, label as a positive review.
48 | - Otherwise, label as a negative review.
49 | - If interesting:
50 | Examine the quality of the script.
51 | - If the script is commendable, label as a positive review.
52 | - If not a good script
53 | Evaluate the portrayal of characters.
54 | - If the characters are well-depicted, label the review as positive.
55 | - If characters are not good, label the review negative.
56 |
57 | ### Python program:
58 | \"""Decision tree for this task\"""
59 | def get_decision_tree(sentence, model, tokenizer):
60 | # Step 1: define criterions of the decision tree.
61 | criterions = {
62 | 'is_interesting':'This movie is interesting',
63 | 'is_good_script':'The movie has a good script',
64 | 'is_good_character':'The characters are awesome',
65 | 'is_wise': 'This movie is wise'
66 | }
67 |
68 | # Step 2: define the balanced decision tree for this classification task
69 | tree = {
70 | 'root': 'is_interesting',
71 | 'is_interesting': {'yes': 'is_good_script', 'no': 'is_wise'},
72 | 'is_good_script': {'yes': 'positive', 'no': 'is_good_character'},
73 | 'is_good_character': {'yes': 'positive', 'no': 'negative'},
74 | 'is_wise': {'yes': 'positive', 'no': 'negative'}
75 | }
76 |
77 | return criterions, tree
78 |
79 | """
80 |
81 | prompt_head2 = """
82 | Write a Python function that constructs a decision tree according to the given examples that can generate the correct label of the given classification task.
83 |
84 | ### Available APIs (shared for all tasks):
85 |
86 | \"\"\"Returns whether the hypothesis is entailed by the premise.\"\"\"
87 | def entailment(hypothesis, premise, model, tokenizer):
88 | proposition = f'{hypothesis} is entailed by {premise}.'
89 | inputs = tokenizer(proposition, return_tensors="pt", truncation=True, padding=True, max_length=128)
90 | outputs = model(**inputs)['logits'][0]
91 | ent_label = int(outputs[0] > outputs[2])
92 | if ent_label == 1:
93 | return 'yes'
94 | else:
95 | return 'no'
96 |
97 | \"\"\"Use the constructed decision tree to predict the label of the sentence.\"\"\"
98 | def tree_predict(sentence, criterions, tree, model, tokenizer):
99 | node = tree['root']
100 | while node not in POSSIBLE_CLASSES:
101 | ent_label = entailment(criterions[node], sentence, model, tokenizer)
102 | node = tree[node][ent_label]
103 | return node
104 |
105 | ### Task: Grammar correctness classification
106 | ### Possible classes: ['acceptable', 'unacceptable']
107 |
108 | ### Define possible classes
109 | POSSIBLE_CLASSES = ['acceptable', 'unacceptable']
110 |
111 | ### Decision Tree Logic:
112 | - If verbs are not correctly constructed, the sentence is immediately labeled as unacceptable.
113 | - If verbs are correct:
114 | The tree then checks if the sentence has correct punctuation
115 | - If incorrect, label the sentence as unacceptable
116 | - If correct:
117 | The next criterion to be assessed is the subject-verb agreement.
118 | - If subject and verb disagree, label the sentence as unacceptable.
119 | - If they agree, check for sentence fragments.
120 | - If the sentence is a fragment, label it as unacceptable.
121 | - If it is not a sentence fragment, label the sentence as acceptable.
122 |
123 | ### Python code for the decision tree:
124 |
125 | ```python
126 | def get_decision_tree(sentence, model, tokenizer):
127 | # Step 1: define criterions of the decision tree
128 | criterions = {
129 | 'correct_verbs': 'The verbs are correctly constructed in the sentence',
130 | 'correct_punctuation': 'The sentence is punctuated correctly',
131 | 'subject_verb_agreement': 'The subject and verb agree in the sentence',
132 | 'no_sentence_fragments': 'The sentence is not a fragment',
133 | }
134 |
135 | # Step 2: define the balanced decision tree for this classification task
136 | tree = {
137 | 'root': 'correct_verbs',
138 | 'correct_verbs': {'yes': 'correct_punctuation', 'no': 'unacceptable'},
139 | 'correct_punctuation': {'yes': 'subject_verb_agreement', 'no': 'unacceptable'},
140 | 'subject_verb_agreement': {'yes': 'no_sentence_fragments', 'no': 'unacceptable'},
141 | 'no_sentence_fragments': {'yes': 'acceptable', 'no': 'unacceptable'}
142 | }
143 |
144 | return criterions, tree
145 | ```
146 |
147 | """
148 |
149 | prompt_sst2 = """
150 | ### Task: Movie review classification
151 | ### Possible classes: [positive, negative]
152 |
153 | ### Define possible classes
154 | POSSIBLE_CLASSES = [positive, negative]
155 |
156 | ### Decision Tree Logic:
157 |
158 | """
159 | prompt_cola ="""
160 | ### Task: Grammar correctness classification
161 | ### Possible classes: [acceptable, unacceptable]
162 |
163 | ### Define possible classes
164 | POSSIBLE_CLASSES = ['acceptable', 'unacceptable']
165 |
166 | ### Decision Tree Logic:
167 |
168 | """
169 |
170 | prompt_qnli ="""
171 | ### Task: Sentence-answers-question classification
172 | ### Description: Determine if the sentence is an answer to the question
173 | ### Possible classes: [true, false]
174 |
175 | ### Define possible classes
176 | POSSIBLE_CLASSES = ['true', 'false']
177 |
178 | ### Python program:
179 | \"""Decision tree for this task\"""
180 | Try to generate a balanced tree!
181 | def get_decision_tree(sentence, model, tokenizer):
182 |
183 | """
184 |
185 | prompt_emotion_separate ="""
186 | ### Task: Sadness classification
187 | ### Description: Assume that "I" wrote a sentence, determine if "I" am feeling sad.
188 | ### Possible classes: ['Yes', 'No']
189 |
190 | ### Define possible classes
191 | POSSIBLE_CLASSES = ['Yes', 'No']
192 |
193 | ### Python program:
194 | \"""Decision tree for this task\"""
195 | Try to generate a balanced tree!
196 | def get_decision_tree(sentence, model, tokenizer):
197 |
198 | """
199 |
200 | prompt_emotion ="""
201 | ### Task: Emotion classification
202 | ### Description: I wrote a sentence about my feelings; determine which emotion I am feeling.
203 | ### Possible classes: ['I feel sad', 'I feel happy', 'I feel love', 'I feel angry', 'I feel afraid', 'I feel surprised']
204 |
205 | ### Define possible classes
206 | POSSIBLE_CLASSES = ['I feel sad', 'I feel happy', 'I feel love', 'I feel angry', 'I feel afraid', 'I feel surprised']
207 |
208 | ### Decision Tree Logic:
209 |
210 | """
211 | prompt_amazon = """
212 | ### Task: Amazon Review Star classification
213 | ### Description: Given an Amazon review, determine the star rating (1-5) of this review, where 1 is the worst and 5 is the best.
214 | ### Possible classes: ['1', '2', '3', '4', '5']
215 |
216 | ### Define possible classes
217 | POSSIBLE_CLASSES = ['1', '2', '3', '4', '5']
218 |
219 | ### Decision Tree Logic:
220 |
221 | """
222 |
223 | prompt_agnews = """
224 | ### Task: News content classification
225 | ### Description: Given a piece of news, determine which category it belongs to.
226 | ### Possible classes: ['this is world news', 'this is sport news', 'this is business news', 'this is technology news']
227 |
228 |
229 | ### Define possible classes
230 | POSSIBLE_CLASSES = ['this is world news', 'this is sport news', 'this is business news', 'this is technology news']
231 |
232 | ### Start each criterion with "This is"
233 | ### Each criterion should be general/high-level!
234 | ### Python program:
235 | \"""Decision tree for this task\"""
236 | Try to generate a balanced tree!
237 | def get_decision_tree(sentence, model, tokenizer):
238 |
239 | """
240 |
241 | prompt_agnews_separate = """
242 | ### Task: Science news classification
243 | ### Description: Given a piece of news, determine if this is a science news.
244 | ### Possible classes: ['Yes', 'No']
245 |
246 | ### Define possible classes
247 | POSSIBLE_CLASSES = ['Yes', 'No']
248 |
249 | ### Decision Tree Logic:
250 |
251 | """
252 |
253 | prompt_hate_speech = """
254 | ### Task: Bias/Hateful speech classification
255 | ### Description: Given a speech from a white supremacy forum, determine if this speech contains hate (e.g. bias/racism/prejudice...) towards a certain group of people.
256 | ### Possible classes: ['this is hate', 'this is noHate']
257 |
258 | ### Define possible classes
259 | POSSIBLE_CLASSES = ['this is hate', 'this is noHate']
260 |
261 | ### Each criterion should be simple, clear, and concise.
262 | ### Start each criterion with 'This is'
263 | ### Decision Tree Logic:
264 |
265 | """
266 |
267 | prompt_social_bias_offensive = """
268 | ### Task: Offensive classification
269 | ### Description: Given a post, determine if this post is offensive.
270 | ### Possible classes: ['Yes','Maybe', 'No']
271 |
272 | ### Define possible classes
273 | POSSIBLE_CLASSES = ['Yes','Maybe', 'No']
274 |
275 | ### Each criterion should be simple, clear, and concise.
276 | ### Start each criterion with 'This is'
277 | ### Generate at least 5 criterions!
278 | ### Decision Tree Logic:
279 | """
280 | prompt_social_bias_offensive_deberta = """
281 | ### Task: Offensive classification
282 | ### Description: Given a post, determine if this post is potentially offensive to anyone.
283 | ### Possible classes: ['this is offensive','this is not offensive', 'this is maybe offensive']
284 |
285 | ### Define possible classes
286 | POSSIBLE_CLASSES = ['this is offensive','this is not offensive', 'this is maybe offensive']
287 |
288 | ### Each criterion should be simple, clear, and concise.
289 | ### Start each criterion with 'This is'
290 | ### Decision Tree Logic:
291 | """
292 | @backoff.on_exception(backoff.expo, openai.error.RateLimitError)
293 | def completions_with_backoff(**kwargs):
294 | return openai.ChatCompletion.create(**kwargs)
295 |
296 | model_name = 'gpt-4'
297 | # To generate a tree for SST2, use prompt_head2 instead of prompt_head
298 | content = prompt_head + prompt_social_bias_offensive
299 | response = completions_with_backoff(
300 | model=model_name,
301 | messages=[
302 | {"role": "user","content": content}
303 | ],
304 | )
305 | response = response["choices"][0]["message"]["content"]
306 | print(response)
--------------------------------------------------------------------------------
/server/nlep_server.py:
--------------------------------------------------------------------------------
1 | import openai
2 | import google.generativeai as palm
3 |
4 | import csv
5 |
6 | from fastapi import FastAPI, UploadFile, File
7 | from fastapi.middleware.cors import CORSMiddleware
8 |
9 | from pydantic import BaseModel
10 | import os
11 | from urllib.parse import urlparse
12 |
13 |
14 | app = FastAPI()
15 |
16 | fs_prompt = open('prompt.txt').read()
17 |
18 | app.add_middleware(
19 | CORSMiddleware,
20 | allow_origins=["*"],
21 | allow_credentials=True,
22 | allow_methods=["*"],
23 | allow_headers=["*"],
24 | )
25 |
26 |
27 | class Item(BaseModel):
28 | instruction: str
29 | api_key_str: str
30 | exist_code: str
31 | platform: str
32 | model: str
33 |
34 |
35 | def parse_api_key(api_key_str):
36 | print(api_key_str)
37 | if 'api_key =' in api_key_str:
38 | return api_key_str.split('api_key =')[1].replace('\'', '').strip()
39 | else:
40 | return api_key_str
41 |
42 |
43 | def construct_fspy_prompt(fs_prompt, inst_str, input_txt = 'None', exist_code = 'none'):
44 | if exist_code == 'none':
45 | prompt = f'''{fs_prompt}\n\n### Instruction: {inst_str}\n### Input: {input_txt}\n### Python program:'''
46 | else:
47 | prompt = f'''{fs_prompt}\n\n### Instruction: {inst_str}\n### Input: {input_txt}\n### Python program:\n```\n{exist_code.strip()}'''
48 | return prompt
49 |
50 |
51 | def gpt4_py(ques_str, api_key_str, exist_code, platform, model='gpt-4'):
52 | api_key = parse_api_key(api_key_str)
53 |
54 | prompt = construct_fspy_prompt(fs_prompt, ques_str, exist_code = exist_code)
55 |
56 | if platform == 'gpt':
57 | openai.api_key = api_key
58 |
59 | gpt4_output = openai.ChatCompletion.create(
60 | model = model,
61 | messages = [{'role': 'user', 'content': prompt}],
62 | temperature = 0.5,
63 | top_p = 1.0,
64 | max_tokens = 1024
65 | )
66 | gen_txt = gpt4_output['choices'][0]['message']['content'].replace('```python', '```')
67 |
68 | elif platform == 'palm':
69 | palm.configure(api_key = api_key)
70 | completion = palm.generate_text(
71 | model = f'models/{model}',
72 | prompt = prompt,
73 | temperature = 0.5,
74 | # The maximum length of the response
75 | max_output_tokens = 1024,
76 | )
77 | gen_txt = completion.result
78 |
79 | else:
80 | return 'Platform not supported.'
81 |
82 | if exist_code != 'none' and gen_txt.startswith('```'):
83 | gen_txt = gen_txt[3:]
84 |
85 | if exist_code != 'none':
86 | gen_txt = f'\n{gen_txt}'
87 |
88 | res_str = (
89 | prompt + gen_txt
90 | ).split('### Python program:')[-1].strip()
91 |
92 | sections = res_str.split('```')
93 | if len(sections) > 1:
94 | ans_str = sections[1].strip()
95 | else:
96 | ans_str = sections[0].strip()
97 |
98 | return ans_str
99 |
100 |
101 | @app.post("/process-file/")
102 | async def process_file(file: UploadFile = File(...)):
103 | # Do something with the file
104 | content = await file.read()
105 | content_list = content.decode('utf-8').split('\n')[:3]
106 | content_str = '\n# '.join(content_list)
107 | prompt = f"# First three rows of the input file:\n# {content_str}\n# ...\nfile_path = 'cache/{file.filename}' # Please fill\ninput_file = open(file_path)"
108 | # For demonstration purposes, we'll just return the filename
109 | return {"filename": file.filename, "message": f"{prompt}"}
110 |
111 |
112 |
113 | @app.put("/items/{item_id}")
114 | async def update_item(item_id: int, item: Item):
115 | ans_str = gpt4_py(
116 | item.instruction, item.api_key_str, item.exist_code,
117 | item.platform, item.model
118 | )
119 |
120 | return {"output": ans_str, "item_id": item_id}
--------------------------------------------------------------------------------
/server/prompt.txt:
--------------------------------------------------------------------------------
1 | You are an AI model. Write a bug-free Python program that can generate a factual and correct answer to the given instruction when correctly executed. Do not ask for user input. Answer the question directly and do not add unrelated information or comments. Do not make any HTTP request.
2 |
3 | ### Instruction: Calculate the total surface area of a cube with a side length of 5 cm.
4 | ### Input: None
5 | ### Python program:
6 | ```
7 | # Step 1: Import necessary built-in libraries
8 | # No need to import
9 |
10 | # Step 2: Define a function that calculates the surface area of a cube
11 | def calculate_surface_area(side_length):
12 | return 6 * (side_length ** 2)
13 |
14 | # Step 3: Define dictionaries storing the cube information
15 | cube = {
16 | "side_length": 5 # Side length of the cube
17 | }
18 |
19 | # Step 4: Print a step-by-step calculation answer in natural language using the defined function and variable
20 | side_length = cube["side_length"]
21 | surface_area = calculate_surface_area(side_length)
22 | print(f"The surface area of a cube is found by calculating the area of one of its faces and multiplying it by six (since a cube has six faces). The area of a cube face is simply its side length squared.\n")
23 | print(f"Thus for this particular cube:")
24 | print(f"Surface Area = 6 × (Side Length)²")
25 | print(f" = 6 × ({side_length} cm)²")
26 | print(f" = 6 × {side_length**2} cm²")
27 | print(f" = {surface_area} cm²\n")
28 | print(f"The total surface area of this cube is {surface_area} square centimeters.")
29 | ```
30 |
31 | ### Instruction: Identify the odd one out.
32 | ### Input: Twitter, Instagram, Telegram
33 | ### Python program:
34 | ```
35 | # Step 1: Import necessary built-in libraries
36 | from collections import OrderedDict
37 |
38 | # Step 2: Define dictionaries storing the knowledge about the main function of each application
39 | services = {
40 | "Twitter": "a social media platform mainly for sharing information, images and videos",
41 | "Instagram": "a social media platform mainly for sharing information, images and videos",
42 | "Telegram": "a cloud-based instant messaging and voice-over-IP service",
43 | }
44 |
45 | # Step 3: Define a function that finds the different application
46 | def find_odd_one_out(services, input_services):
47 | descriptions = [services[service] for service in input_services]
48 | for description in descriptions:
49 | if descriptions.count(description) == 1:
50 | return input_services[descriptions.index(description)]
51 | return None
52 |
53 | # Step 4: Print an answer in natural language using the knowledge and function defined above
54 | input_services = ["Twitter", "Instagram", "Telegram"]
55 | odd_one_out = find_odd_one_out(services, input_services)
56 | if odd_one_out:
57 | other_services = [service for service in input_services if service != odd_one_out]
58 | print(f"The odd one out is {odd_one_out}. {other_services[0]} and {other_services[1]} are {services[other_services[0]]} while {odd_one_out} is {services[odd_one_out]}.")
59 | ```
--------------------------------------------------------------------------------
/server/readme.md:
--------------------------------------------------------------------------------
1 | # NLEP Server
2 |
3 | This folder contains the complete source code for serving an NLEP API. Start the server with the following command:
4 | ```shell
5 | uvicorn nlep_server:app --reload --host 0.0.0.0 --port 8000
6 | ```
7 | The LangCode agent can be connected to the preferred API endpoint via
8 | ```python
9 | # For example, clp is a LangCode agent
10 | clp.config_api_endpoint()
11 | ```
12 | and enter your URL in this format: `http://{HOSTNAME}:{PORT}/items/0`
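The `/items/{item_id}` endpoint expects a JSON body matching the `Item` model in `nlep_server.py`. A minimal client-side sketch of that payload (all field values here are placeholders):

```python
import json

# Fields mirror the Item model in nlep_server.py; values are placeholders.
payload = {
    "instruction": "Plot a sine wave",      # natural-language request
    "api_key_str": "api_key = 'YOUR_KEY'",  # parsed by parse_api_key()
    "exist_code": "none",                   # or previously generated code to extend
    "platform": "gpt",                      # 'gpt' or 'palm'
    "model": "gpt-4",
}
print(json.dumps(payload))
```

Send this body in an HTTP PUT request to the endpoint above.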
13 |
14 | This server does not set any timeout.
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 |
3 | setup(
4 | name='langpy-notebook',
5 | version='0.4',
6 | packages=find_packages(),
7 | install_requires=[
8 | # Any dependencies your project needs, e.g.
9 | # 'requests',
10 | 'IPython',
11 | 'notebook==6.4.8'
12 | ],
13 | author='Hongyin Luo',
14 | author_email='hyluo@mit.edu',
15 | description='Chat in Python on IPython notebook.',
16 | long_description=open('README.md').read(),
17 | long_description_content_type='text/markdown',
18 | url='https://github.com/luohongyin/langpy-notebook',
19 | classifiers=[
20 | 'Programming Language :: Python :: 3',
21 | 'License :: OSI Approved :: MIT License',
22 | 'Operating System :: OS Independent',
23 | ],
24 | )
25 |
--------------------------------------------------------------------------------