├── run.sh
├── .gitignore
├── figs
│   ├── tods_framework.png
│   └── example_hierarchy.png
├── evaluation
│   └── evaluation_instructions.pdf
├── requirements.txt
├── dataset
│   └── README.md
├── no_tree
│   ├── paper_details.py
│   ├── .ipynb_checkpoints
│   │   ├── paper_details-checkpoint.py
│   │   ├── data_pairer-checkpoint.py
│   │   ├── debate-checkpoint.py
│   │   └── moderator-checkpoint.py
│   ├── data_pairer.py
│   ├── debate.py
│   └── moderator.py
├── no_delib
│   ├── paper_details.py
│   ├── .ipynb_checkpoints
│   │   ├── paper_details-checkpoint.py
│   │   ├── data_pairer-checkpoint.py
│   │   ├── debate-checkpoint.py
│   │   ├── moderator-checkpoint.py
│   │   └── persona-checkpoint.py
│   ├── data_pairer.py
│   ├── debate.py
│   ├── moderator.py
│   └── persona.py
├── paper_details.py
├── data_pairer.py
├── retrieval
│   ├── retrieval.py
│   └── e5_model.py
├── run.py
├── README.md
├── baselines
│   └── run_baseline.py
├── tod_no_deliberation
│   ├── debate.py
│   └── moderator.py
├── tree_of_debate.py
├── debate.py
├── tree_of_debate_no_tree.py
├── tree_of_debate_no_delib.py
├── LICENSE
└── moderator.py
/run.sh:
--------------------------------------------------------------------------------
1 | python run.py --experiment "tod"
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | data.json
2 | .cache/*
3 | *.pkl
4 | __pycache__/*
5 | *pyc
6 | *.sh
7 | logs/
8 | run.sh
--------------------------------------------------------------------------------
/figs/tods_framework.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pkargupta/tree-of-debate/HEAD/figs/tods_framework.png
--------------------------------------------------------------------------------
/figs/example_hierarchy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pkargupta/tree-of-debate/HEAD/figs/example_hierarchy.png
--------------------------------------------------------------------------------
/evaluation/evaluation_instructions.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pkargupta/tree-of-debate/HEAD/evaluation/evaluation_instructions.pdf
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | adapters
2 | arxiv2text
3 | docling
4 | docling_core
5 | fastcoref
6 | nltk
7 | numpy
8 | outlines
9 | pandas
10 | pydantic
11 | scikit_learn
12 | setuptools
13 | spacy
14 | torch
15 | tqdm
16 | transformers
17 | typing_extensions
18 | Unidecode
19 | vllm
--------------------------------------------------------------------------------
/dataset/README.md:
--------------------------------------------------------------------------------
1 | Tree-of-Debate's dataset contains 100 samples. Each sample contains the following:
2 | - Topic: a short, vague description of the theme of the two papers
3 | - Paper #1 arXiv Link
4 | - Paper #1 Title
5 | - Paper #1 Abstract
6 | - Paper #1 Introduction
7 | - Paper #2 arXiv Link
8 | - Paper #2 Title
9 | - Paper #2 Abstract
10 | - Paper #2 Introduction
11 | - Method or Task: 0 if the papers differ in methodology (but share the same task), and 1 if the papers differ in task (but the methodology is generally the same).
12 | - No cite or cite: 0 if the papers do not cite each other, and 1 if the papers cite each other.
13 |
14 | The dataset is provided in `tree_of_debate_dataset.tsv`, a tab-separated values (TSV) file.
--------------------------------------------------------------------------------
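For orientation, a minimal sketch of reading the file row by row in the column order `run.py` unpacks; the leading contributor column is an assumption taken from `run.py` and is not documented above:

```python
# Sketch: iterate over dataset rows as run.py consumes them.
# Assumption: a leading "contributor" column precedes the fields listed above.
import csv

with open("dataset/tree_of_debate_dataset.tsv", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="\t"):
        (contributor, topic,
         p0_url, p0_title, p0_abstract, p0_intro,
         p1_url, p1_title, p1_abstract, p1_intro,
         method_or_task, cite) = row
        print(f"{topic}: {p0_title} vs. {p1_title}")
```
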
/no_tree/paper_details.py:
--------------------------------------------------------------------------------
1 | from retrieval.retrieval import load_corpus, embed_texts, find_top_k
2 |
3 | class Paper:
4 | def __init__(self, info, chunk_size=3) -> None:
5 | self.title = info['title']
6 | self.abstract = info['abstract']
7 | self.text = info['full_text']
8 | self.chunks = []
9 | sentences = self.text.split('. ')
10 | for i in range(0, len(sentences), chunk_size):
11 | self.chunks.append('. '.join(sentences[i:i+chunk_size]))
12 | # use e5 model to embed each chunk of the paper
13 | self.embed_paper()
14 |
15 | def embed_paper(self):
16 | self.emb = embed_texts(self.chunks)
17 | return self.emb
18 |
19 | def retrieve_top_k(self, query, k=5):
20 | return find_top_k(query, self.emb, k=k)
--------------------------------------------------------------------------------
/no_tree/.ipynb_checkpoints/paper_details-checkpoint.py:
--------------------------------------------------------------------------------
1 | from retrieval.retrieval import load_corpus, embed_texts, find_top_k
2 |
3 | class Paper:
4 | def __init__(self, info, chunk_size=3) -> None:
5 | self.title = info['title']
6 | self.abstract = info['abstract']
7 | self.text = info['full_text']
8 | self.chunks = []
9 | sentences = self.text.split('. ')
10 | for i in range(0, len(sentences), chunk_size):
11 | self.chunks.append('. '.join(sentences[i:i+chunk_size]))
12 | # use e5 model to embed each chunk of the paper
13 | self.embed_paper()
14 |
15 | def embed_paper(self):
16 | self.emb = embed_texts(self.chunks)
17 | return self.emb
18 |
19 | def retrieve_top_k(self, query, k=5):
20 | return find_top_k(query, self.emb, k=k)
--------------------------------------------------------------------------------
/no_delib/paper_details.py:
--------------------------------------------------------------------------------
1 | from retrieval.retrieval import load_corpus, embed_texts, find_top_k
2 |
3 | class Paper:
4 | def __init__(self, info, chunk_size=3) -> None:
5 | self.title = info['title']
6 | self.abstract = info['abstract']
7 | self.introduction = info['introduction']
8 | self.text = info['full_text']
9 | self.chunks = []
10 | sentences = self.text.split('. ')
11 | for i in range(0, len(sentences), chunk_size):
12 | self.chunks.append('. '.join(sentences[i:i+chunk_size]))
13 | # use e5 model to embed each chunk of the paper
14 | self.embed_paper()
15 |
16 | def embed_paper(self):
17 | self.emb = embed_texts(self.chunks)
18 | return self.emb
19 |
20 | def retrieve_top_k(self, query, k=5):
21 | return find_top_k(query, self.emb, k=k)
--------------------------------------------------------------------------------
/no_delib/.ipynb_checkpoints/paper_details-checkpoint.py:
--------------------------------------------------------------------------------
1 | from retrieval.retrieval import load_corpus, embed_texts, find_top_k
2 |
3 | class Paper:
4 | def __init__(self, info, chunk_size=3) -> None:
5 | self.title = info['title']
6 | self.abstract = info['abstract']
7 | self.introduction = info['introduction']
8 | self.text = info['full_text']
9 | self.chunks = []
10 | sentences = self.text.split('. ')
11 | for i in range(0, len(sentences), chunk_size):
12 | self.chunks.append('. '.join(sentences[i:i+chunk_size]))
13 | # use e5 model to embed each chunk of the paper
14 | self.embed_paper()
15 |
16 | def embed_paper(self):
17 | self.emb = embed_texts(self.chunks)
18 | return self.emb
19 |
20 | def retrieve_top_k(self, query, k=5):
21 | return find_top_k(query, self.emb, k=k)
--------------------------------------------------------------------------------
/paper_details.py:
--------------------------------------------------------------------------------
1 | from retrieval.retrieval import load_corpus, embed_texts, find_top_k
2 |
3 | class Paper:
4 | def __init__(self, info, chunk_size=3) -> None:
5 | self.title = info['title']
6 | self.abstract = info['abstract']
7 |         # the introduction is optional (only needed by the baselines)
8 |         self.introduction = info.get('introduction')
9 |         self.text = info['full_text']
10 |         self.chunks = []
11 | sentences = self.text.split('. ')
12 | for i in range(0, len(sentences), chunk_size):
13 | self.chunks.append('. '.join(sentences[i:i+chunk_size]))
14 | # use e5 model to embed each chunk of the paper
15 | self.embed_paper()
16 |
17 | def embed_paper(self):
18 | self.emb = embed_texts(self.chunks)
19 | return self.emb
20 |
21 | def retrieve_top_k(self, query, k=5):
22 | return find_top_k(query, self.emb, k=k)
--------------------------------------------------------------------------------
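A brief usage sketch of `Paper`; all field values below are placeholders, and the e5 embedding model behind `retrieval.retrieval` must be available:

```python
# Sketch: chunk, embed, and query a paper. Values are placeholders.
from paper_details import Paper

info = {
    "title": "An Example Paper",
    "abstract": "We study a toy problem ...",
    "introduction": "Recent work ...",  # optional; stored as None if absent
    "full_text": "We study X. We propose method Y. We evaluate Y on Z. ...",
}
paper = Paper(info, chunk_size=3)  # splits into 3-sentence chunks and embeds them
# Note: find_top_k only returns chunks longer than 150 characters.
for chunk, score in paper.retrieve_top_k("how is method Y evaluated?", k=5):
    print(f"{score:.3f}  {chunk[:80]}")
```
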
/data_pairer.py:
--------------------------------------------------------------------------------
1 | from arxiv2text import arxiv_to_text
2 | from unidecode import unidecode
3 | import string
4 | import argparse
5 | import os
6 | import csv
7 | import string
8 | import numpy as np
9 | import re
10 | from tqdm import tqdm
11 | import json
12 | from nltk import word_tokenize
13 |
14 | def extract_text(pdf_url):
15 | try:
16 | raw_extracted_text = arxiv_to_text(pdf_url).strip()
17 |     except Exception:
18 | raise Exception(f"PDF Link INVALID! {pdf_url}")
19 |
20 | raw_extracted_text = unidecode(raw_extracted_text)
21 |
22 | printable = set(string.printable)
23 | raw_extracted_text = ''.join(filter(lambda x: x in printable, raw_extracted_text)).replace('\r', '')
24 | raw_extracted_text = re.sub(r'\/uni\w{4,8}', "", raw_extracted_text)
25 | raw_extracted_text = raw_extracted_text.split('\n')
26 |
27 | extracted_text = [""]
28 | for text in raw_extracted_text:
29 | try:
30 | float(text)
31 | continue
32 |         except ValueError:  # not a bare number; keep the line
33 | if text == "\n":
34 | if extracted_text[-1] != "\n":
35 | extracted_text.append(text)
36 | elif len(text) < 4:
37 | extracted_text[-1] += text
38 | else:
39 | extracted_text.append(text)
40 |
41 |
42 | extracted_text = " ".join(extracted_text).replace('\n', ' ') # remove new lines
43 | # extracted_text = extracted_text.replace("- ", "") # remove justified text errors that result in half words ("arbi-\ntrary")
44 | extracted_text = " ".join(extracted_text.split()) # remove unnecessary whitespace in between
45 |     return extracted_text[:extracted_text.find("References")] if "References" in extracted_text else extracted_text # only take the text before the references section, if present
46 |
47 |
48 | def parse_papers_url(focus_url, cited_url):
49 | focus_text = extract_text(focus_url)
50 | cited_text = extract_text(cited_url)
51 |
52 | return focus_text, cited_text
--------------------------------------------------------------------------------
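A usage sketch; the arXiv URLs are placeholders (any valid arXiv PDF links work), and network access is required:

```python
# Sketch: download and clean the body text of two arXiv papers.
from data_pairer import parse_papers_url

focus_text, cited_text = parse_papers_url(
    "https://arxiv.org/pdf/2502.14767",  # placeholder: the ToD paper itself
    "https://arxiv.org/pdf/2502.14767",  # placeholder: substitute the second paper
)
print(len(focus_text), "characters of pre-References text")
```
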
/retrieval/retrieval.py:
--------------------------------------------------------------------------------
1 | import json
2 | import numpy as np
3 | from retrieval.e5_model import e5_embed
4 | from typing import List, Dict, Tuple
5 |
6 | def load_corpus(path: str) -> List[str]:
7 | """Load corpus chunks from a JSON file."""
8 | with open(path, 'r', encoding='utf-8') as file:
9 | data = json.load(file)
10 | if isinstance(data, list):
11 | return data
12 | elif isinstance(data, dict):
13 | return list(data.values())
14 | else:
15 | raise ValueError(f"Unsupported JSON structure in {path}")
16 |
17 | def embed_texts(texts: List[str]) -> Dict[str, np.ndarray]:
18 | """Embed a list of texts using e5_embed."""
19 | return e5_embed(texts)
20 |
21 | def cosine_similarity(vec1: np.ndarray, vec2: np.ndarray) -> float:
22 | """Compute cosine similarity between two vectors."""
23 | norm1 = np.linalg.norm(vec1)
24 | norm2 = np.linalg.norm(vec2)
25 | if norm1 == 0.0 or norm2 == 0.0:
26 | return 0.0
27 | return np.dot(vec1, vec2) / (norm1 * norm2)
28 |
29 | def find_top_k(query: str, corpus_emb: Dict[str, np.ndarray], k: int = 10) -> List[Tuple[str, float]]:
30 | """Find top-k similar chunks in a corpus to the query."""
31 | query_emb = e5_embed([query])[query]
32 | similarities = []
33 | for chunk, emb in corpus_emb.items():
34 | sim = cosine_similarity(query_emb, emb)
35 |         if len(chunk) > 150:  # skip very short chunks (headers, fragments)
36 | similarities.append((chunk, sim))
37 | similarities.sort(key=lambda x: x[1], reverse=True)
38 | return similarities[:k]
39 |
40 | def main():
41 | corpus_paths = {
42 | # "C99 Segmentation": "chunking/chunks/c99_segmentation.json"
43 | }
44 | query = ""
45 |
46 | for name, path in corpus_paths.items():
47 | print(f"\nProcessing Corpus: {name}")
48 | corpus = load_corpus(path)
49 | print(f"Loaded {len(corpus)} chunks.")
50 |         top_chunks = find_top_k(query, embed_texts(corpus), k=5)  # embed the raw chunks first
51 | print(f"Top 5 similar chunks from {name}:")
52 | for idx, (chunk, score) in enumerate(top_chunks, 1):
53 | print(f"{idx}. Score: {score:.4f}\n Chunk: {chunk}\n")
54 |
55 |
56 | if __name__ == "__main__":
57 | main()
--------------------------------------------------------------------------------
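`retrieval/e5_model.py` is not reproduced in this section. Below is a minimal sketch of an `e5_embed` matching the interface used above (a list of texts in, a dict from each text to its vector out), assuming an `intfloat/e5-base-v2` checkpoint; the actual module may differ, for instance by adding the `query:`/`passage:` prefixes E5 models expect:

```python
# Sketch of the e5_embed interface, not the repository's actual implementation.
import torch
from transformers import AutoModel, AutoTokenizer

_NAME = "intfloat/e5-base-v2"  # assumption: any E5-family checkpoint fits
_tok = AutoTokenizer.from_pretrained(_NAME)
_model = AutoModel.from_pretrained(_NAME).eval()

def e5_embed(texts):
    """Return {text: embedding} via mean pooling over non-padding tokens."""
    batch = _tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = _model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # average real tokens
    return {t: v for t, v in zip(texts, emb.numpy())}
```
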
/run.py:
--------------------------------------------------------------------------------
1 | import os
2 | from unidecode import unidecode
3 | from vllm import LLM
4 | import argparse
5 |
6 | from data_pairer import parse_papers_url
7 | from tree_of_debate import run_code
8 | from baselines.run_baseline import run_baseline_code
9 | from tree_of_debate_no_delib import run_no_delib_code
10 | from tree_of_debate_no_tree import run_no_tree_code
11 |
12 | if __name__ == '__main__':
13 | parser = argparse.ArgumentParser()
14 | parser.add_argument("--tsv_file", default="dataset/tree_of_debate_dataset.tsv")
15 | parser.add_argument("--log", default="dataset/")
16 | parser.add_argument("--experiment", default="tod") # options: tod, single, two, no-tree, no-delib
17 |
18 |
19 | args = parser.parse_args()
20 |
21 | model_server = LLM(model="nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",tensor_parallel_size=4,max_num_seqs=256,enable_prefix_caching=True)
22 |
23 | with open(args.tsv_file, 'r', encoding="utf-8") as f:
24 | rows = f.readlines()
25 |
26 | for row in rows:
27 |         cols = row.rstrip('\n').split('\t')  # strip the trailing newline so the last column parses cleanly
28 | contributor, topic, paper_0_url, paper_0_title, paper_0_abstract, paper_0_intro, paper_1_url, paper_1_title, paper_1_abstract, paper_1_intro, method, cite = cols
29 |
30 | args.topic = topic
31 | print(f"######## NOW PROCESSING {unidecode(paper_0_title)} VS. {unidecode(paper_1_title)}")
32 | log_id = paper_0_title.lower().replace(' ', '_')[:15] + "_vs_" + paper_1_title.lower().replace(' ', '_')[:15]
33 | args.log_dir = f"{args.log}/{log_id}/{args.experiment}"
34 |
35 | if not os.path.exists(args.log_dir):
36 | os.makedirs(args.log_dir)
37 |
38 | paper_0_text, paper_1_text = parse_papers_url(paper_0_url, paper_1_url)
39 | paper_0 = {'title':unidecode(paper_0_title), 'abstract':unidecode(paper_0_abstract), 'introduction':unidecode(paper_0_intro), 'full_text':paper_0_text}
40 | paper_1 = {'title':unidecode(paper_1_title), 'abstract':unidecode(paper_1_abstract), 'introduction':unidecode(paper_1_intro), 'full_text':paper_1_text}
41 |
42 | with open(os.path.join(args.log_dir, f"{paper_0_title.lower().replace(' ', '_')[:15]}_text.txt"), "w", encoding='utf-8') as f:
43 | f.write(paper_0["full_text"])
44 |
45 | with open(os.path.join(args.log_dir, f"{paper_1_title.lower().replace(' ', '_')[:15]}_text.txt"), "w", encoding='utf-8') as f:
46 | f.write(paper_1["full_text"])
47 |
48 | if args.experiment == 'tod':
49 | run_code(args, paper_0, paper_1, model_server)
50 | elif (args.experiment == 'single') or (args.experiment == 'two'):
51 | run_baseline_code(args, paper_0, paper_1, model_server)
52 | elif args.experiment == 'no-delib':
53 | run_no_delib_code(args, paper_0, paper_1, model_server)
54 | elif args.experiment == 'no-tree':
55 | run_no_tree_code(args, paper_0, paper_1, model_server)
56 | else:
57 | print("experiment is not supported!")
--------------------------------------------------------------------------------
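For quick experiments outside the TSV loop, a sketch of driving a single paper pair directly; the `Namespace` fields mirror what `run.py` sets, and the URLs, titles, and log path are placeholders:

```python
# Sketch: run the full Tree-of-Debate pipeline on one hand-specified pair.
import os
from argparse import Namespace
from vllm import LLM

from data_pairer import parse_papers_url
from tree_of_debate import run_code

model_server = LLM(model="nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
                   tensor_parallel_size=4, enable_prefix_caching=True)

args = Namespace(topic="your topic here", log_dir="logs/demo/tod")  # placeholders
os.makedirs(args.log_dir, exist_ok=True)

text_0, text_1 = parse_papers_url("https://arxiv.org/pdf/<id-0>",   # placeholder URLs
                                  "https://arxiv.org/pdf/<id-1>")
paper_0 = {"title": "Paper 0", "abstract": "...", "introduction": "...", "full_text": text_0}
paper_1 = {"title": "Paper 1", "abstract": "...", "introduction": "...", "full_text": text_1}

run_code(args, paper_0, paper_1, model_server)
```
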
/no_delib/data_pairer.py:
--------------------------------------------------------------------------------
1 | from arxiv2text import arxiv_to_text
2 | from docling.document_converter import DocumentConverter
3 | from unidecode import unidecode
4 | import string
5 | import json
6 | import re
7 |
8 |
9 | import argparse
10 | import os
11 | import csv
12 | import string
13 | import numpy as np
14 | import re
15 | from tqdm import tqdm
16 | import json
17 | from nltk import word_tokenize
18 |
19 | def extract_text(pdf_url):
20 | raw_extracted_text = arxiv_to_text(pdf_url).strip()
21 | raw_extracted_text = unidecode(raw_extracted_text)
22 |
23 | printable = set(string.printable)
24 | raw_extracted_text = ''.join(filter(lambda x: x in printable, raw_extracted_text)).replace('\r', '')
25 | raw_extracted_text = re.sub(r'\/uni\w{4,8}', "", raw_extracted_text)
26 | raw_extracted_text = raw_extracted_text.split('\n')
27 |
28 | extracted_text = [""]
29 | for text in raw_extracted_text:
30 | try:
31 | float(text)
32 | continue
33 | except:
34 | if text == "\n":
35 | if extracted_text[-1] != "\n":
36 | extracted_text.append(text)
37 | elif len(text) < 4:
38 | extracted_text[-1] += text
39 | else:
40 | extracted_text.append(text)
41 |
42 |
43 | extracted_text = " ".join(extracted_text).replace('\n', ' ') # remove new lines
44 | extracted_text = extracted_text.replace("- ", "") # remove justified text errors that result in half words ("arbi-\ntrary")
45 | extracted_text = " ".join(extracted_text.split()) # remove unnecessary whitespace in between
46 |     return extracted_text[:extracted_text.find("References")] if "References" in extracted_text else extracted_text # only take the text before the references section, if present
47 |
48 | def parse_papers(focus_paper, cited_paper):
49 | with open(os.path.join("abstracts", focus_paper), 'r') as file:
50 | focus_data = json.load(file)
51 | with open(os.path.join("abstracts", cited_paper), 'r') as file:
52 | cited_data = json.load(file)
53 |
54 | focus = extract_text(f"https://arxiv.org/pdf/{focus_data['arxiv_key'].replace('_', '.')}")
55 | cited = extract_text(f"https://arxiv.org/pdf/{cited_data['arxiv_key'].replace('_', '.')}")
56 |
57 | data = []
58 | data.append({'focus':{'title':unidecode(focus_data['title']), 'abstract':unidecode(focus_data['abstract']),
59 | 'introduction':unidecode(focus_data['introduction']), 'full_text':focus},
60 | 'cited':{'title':unidecode(cited_data['title']), 'abstract':unidecode(cited_data['abstract']),
61 | 'introduction':unidecode(cited_data['introduction']),'full_text':cited}})
62 |
63 | with open('data.json', 'w') as file:
64 | json.dump(data, file)
65 |
66 | def parse_papers_docling(focus_url, cited_url):
67 | converter = DocumentConverter()
68 | focus = converter.convert(focus_url).document.export_to_dict()
69 | cited = converter.convert(cited_url).document.export_to_dict()
70 |
71 | data = []
72 | data.append({'focus':focus,'cited':cited})
73 |
74 | with open('data.json', 'w') as file:
75 | json.dump(data, file)
--------------------------------------------------------------------------------
/no_tree/data_pairer.py:
--------------------------------------------------------------------------------
1 | from arxiv2text import arxiv_to_text
2 | from docling.document_converter import DocumentConverter
3 | from unidecode import unidecode
4 | import string
5 | import json
6 | import re
7 |
8 |
9 | import argparse
10 | import os
11 | import csv
12 | import string
13 | import numpy as np
14 | import re
15 | from tqdm import tqdm
16 | import json
17 | from nltk import word_tokenize
18 |
19 | def extract_text(pdf_url):
20 | raw_extracted_text = arxiv_to_text(pdf_url).strip()
21 | raw_extracted_text = unidecode(raw_extracted_text)
22 |
23 | printable = set(string.printable)
24 | raw_extracted_text = ''.join(filter(lambda x: x in printable, raw_extracted_text)).replace('\r', '')
25 | raw_extracted_text = re.sub(r'\/uni\w{4,8}', "", raw_extracted_text)
26 | raw_extracted_text = raw_extracted_text.split('\n')
27 |
28 | extracted_text = [""]
29 | for text in raw_extracted_text:
30 | try:
31 | float(text)
32 | continue
33 | except:
34 | if text == "\n":
35 | if extracted_text[-1] != "\n":
36 | extracted_text.append(text)
37 | elif len(text) < 4:
38 | extracted_text[-1] += text
39 | else:
40 | extracted_text.append(text)
41 |
42 |
43 | extracted_text = " ".join(extracted_text).replace('\n', ' ') # remove new lines
44 | extracted_text = extracted_text.replace("- ", "") # remove justified text errors that result in half words ("arbi-\ntrary")
45 | extracted_text = " ".join(extracted_text.split()) # remove unnecessary whitespace in between
46 |     return extracted_text[:extracted_text.find("References")] if "References" in extracted_text else extracted_text # only take the text before the references section, if present
47 |
48 | def parse_papers(focus_paper, cited_paper):
49 | with open(os.path.join("abstracts", focus_paper), 'r') as file:
50 | focus_data = json.load(file)
51 | with open(os.path.join("abstracts", cited_paper), 'r') as file:
52 | cited_data = json.load(file)
53 |
54 | focus = extract_text(f"https://arxiv.org/pdf/{focus_data['arxiv_key'].replace('_', '.')}")
55 | cited = extract_text(f"https://arxiv.org/pdf/{cited_data['arxiv_key'].replace('_', '.')}")
56 |
57 | data = []
58 | data.append({'focus':{'title':unidecode(focus_data['title']), 'abstract':unidecode(focus_data['abstract']), 'full_text':focus},
59 | 'cited':{'title':unidecode(cited_data['title']), 'abstract':unidecode(cited_data['abstract']), 'full_text':cited}})
60 |
61 | with open('data.json', 'w') as file:
62 | json.dump(data, file)
63 |
64 | def parse_papers_url(focus_url, cited_url):
65 | focus_text = extract_text(focus_url)
66 | cited_text = extract_text(cited_url)
67 |
68 | return focus_text, cited_text
69 |
70 | def parse_papers_docling(focus_url, cited_url):
71 | converter = DocumentConverter()
72 | focus = converter.convert(focus_url).document.export_to_dict()
73 | cited = converter.convert(cited_url).document.export_to_dict()
74 |
75 | data = []
76 | data.append({'focus':focus,'cited':cited})
77 |
78 | with open('data.json', 'w') as file:
79 | json.dump(data, file)
--------------------------------------------------------------------------------
/no_delib/.ipynb_checkpoints/data_pairer-checkpoint.py:
--------------------------------------------------------------------------------
1 | from arxiv2text import arxiv_to_text
2 | from docling.document_converter import DocumentConverter
3 | from unidecode import unidecode
4 | import string
5 | import json
6 | import re
7 |
8 |
9 | import argparse
10 | import os
11 | import csv
12 | import string
13 | import numpy as np
14 | import re
15 | from tqdm import tqdm
16 | import json
17 | from nltk import word_tokenize
18 |
19 | def extract_text(pdf_url):
20 | raw_extracted_text = arxiv_to_text(pdf_url).strip()
21 | raw_extracted_text = unidecode(raw_extracted_text)
22 |
23 | printable = set(string.printable)
24 | raw_extracted_text = ''.join(filter(lambda x: x in printable, raw_extracted_text)).replace('\r', '')
25 | raw_extracted_text = re.sub(r'\/uni\w{4,8}', "", raw_extracted_text)
26 | raw_extracted_text = raw_extracted_text.split('\n')
27 |
28 | extracted_text = [""]
29 | for text in raw_extracted_text:
30 | try:
31 | float(text)
32 | continue
33 | except:
34 | if text == "\n":
35 | if extracted_text[-1] != "\n":
36 | extracted_text.append(text)
37 | elif len(text) < 4:
38 | extracted_text[-1] += text
39 | else:
40 | extracted_text.append(text)
41 |
42 |
43 | extracted_text = " ".join(extracted_text).replace('\n', ' ') # remove new lines
44 | extracted_text = extracted_text.replace("- ", "") # remove justified text errors that result in half words ("arbi-\ntrary")
45 | extracted_text = " ".join(extracted_text.split()) # remove unnecessary whitespace in between
46 |     return extracted_text[:extracted_text.find("References")] if "References" in extracted_text else extracted_text # only take the text before the references section, if present
47 |
48 | def parse_papers(focus_paper, cited_paper):
49 | with open(os.path.join("abstracts", focus_paper), 'r') as file:
50 | focus_data = json.load(file)
51 | with open(os.path.join("abstracts", cited_paper), 'r') as file:
52 | cited_data = json.load(file)
53 |
54 | focus = extract_text(f"https://arxiv.org/pdf/{focus_data['arxiv_key'].replace('_', '.')}")
55 | cited = extract_text(f"https://arxiv.org/pdf/{cited_data['arxiv_key'].replace('_', '.')}")
56 |
57 | data = []
58 | data.append({'focus':{'title':unidecode(focus_data['title']), 'abstract':unidecode(focus_data['abstract']),
59 | 'introduction':unidecode(focus_data['introduction']), 'full_text':focus},
60 | 'cited':{'title':unidecode(cited_data['title']), 'abstract':unidecode(cited_data['abstract']),
61 | 'introduction':unidecode(cited_data['introduction']),'full_text':cited}})
62 |
63 | with open('data.json', 'w') as file:
64 | json.dump(data, file)
65 |
66 | def parse_papers_docling(focus_url, cited_url):
67 | converter = DocumentConverter()
68 | focus = converter.convert(focus_url).document.export_to_dict()
69 | cited = converter.convert(cited_url).document.export_to_dict()
70 |
71 | data = []
72 | data.append({'focus':focus,'cited':cited})
73 |
74 | with open('data.json', 'w') as file:
75 | json.dump(data, file)
--------------------------------------------------------------------------------
/no_tree/.ipynb_checkpoints/data_pairer-checkpoint.py:
--------------------------------------------------------------------------------
1 | from arxiv2text import arxiv_to_text
2 | from docling.document_converter import DocumentConverter
3 | from unidecode import unidecode
4 | import string
5 | import json
6 | import re
7 |
8 |
9 | import argparse
10 | import os
11 | import csv
12 | import string
13 | import numpy as np
14 | import re
15 | from tqdm import tqdm
16 | import json
17 | from nltk import word_tokenize
18 |
19 | def extract_text(pdf_url):
20 | raw_extracted_text = arxiv_to_text(pdf_url).strip()
21 | raw_extracted_text = unidecode(raw_extracted_text)
22 |
23 | printable = set(string.printable)
24 | raw_extracted_text = ''.join(filter(lambda x: x in printable, raw_extracted_text)).replace('\r', '')
25 | raw_extracted_text = re.sub(r'\/uni\w{4,8}', "", raw_extracted_text)
26 | raw_extracted_text = raw_extracted_text.split('\n')
27 |
28 | extracted_text = [""]
29 | for text in raw_extracted_text:
30 | try:
31 | float(text)
32 | continue
33 | except:
34 | if text == "\n":
35 | if extracted_text[-1] != "\n":
36 | extracted_text.append(text)
37 | elif len(text) < 4:
38 | extracted_text[-1] += text
39 | else:
40 | extracted_text.append(text)
41 |
42 |
43 | extracted_text = " ".join(extracted_text).replace('\n', ' ') # remove new lines
44 | extracted_text = extracted_text.replace("- ", "") # remove justified text errors that result in half words ("arbi-\ntrary")
45 | extracted_text = " ".join(extracted_text.split()) # remove unnecessary whitespace in between
46 |     return extracted_text[:extracted_text.find("References")] if "References" in extracted_text else extracted_text # only take the text before the references section, if present
47 |
48 | def parse_papers(focus_paper, cited_paper):
49 | with open(os.path.join("abstracts", focus_paper), 'r') as file:
50 | focus_data = json.load(file)
51 | with open(os.path.join("abstracts", cited_paper), 'r') as file:
52 | cited_data = json.load(file)
53 |
54 | focus = extract_text(f"https://arxiv.org/pdf/{focus_data['arxiv_key'].replace('_', '.')}")
55 | cited = extract_text(f"https://arxiv.org/pdf/{cited_data['arxiv_key'].replace('_', '.')}")
56 |
57 | data = []
58 | data.append({'focus':{'title':unidecode(focus_data['title']), 'abstract':unidecode(focus_data['abstract']), 'full_text':focus},
59 | 'cited':{'title':unidecode(cited_data['title']), 'abstract':unidecode(cited_data['abstract']), 'full_text':cited}})
60 |
61 | with open('data.json', 'w') as file:
62 | json.dump(data, file)
63 |
64 | def parse_papers_url(focus_url, cited_url):
65 | focus_text = extract_text(focus_url)
66 | cited_text = extract_text(cited_url)
67 |
68 | return focus_text, cited_text
69 |
70 | def parse_papers_docling(focus_url, cited_url):
71 | converter = DocumentConverter()
72 | focus = converter.convert(focus_url).document.export_to_dict()
73 | cited = converter.convert(cited_url).document.export_to_dict()
74 |
75 | data = []
76 | data.append({'focus':focus,'cited':cited})
77 |
78 | with open('data.json', 'w') as file:
79 | json.dump(data, file)
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for Scientific Comparative Analysis
2 |
3 |
4 | This repository contains the source code for **Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for Scientific Comparative Analysis**. This work has been accepted at [ACL 2025 (Oral)](https://2025.aclweb.org/).
5 |
6 | We introduce Tree-of-Debate (ToD), a framework which converts scientific papers into LLM personas that debate their respective novelties. To emphasize structured, critical reasoning rather than focusing solely on outcomes, ToD dynamically constructs a debate tree, enabling fine-grained analysis of independent novelty arguments within scholarly articles. Through experiments on scientific literature across various domains, evaluated by expert researchers, we demonstrate that ToD generates informative arguments, effectively contrasts papers, and supports researchers in their literature review.
7 |
8 | ## Links
9 |
10 | - [Paper](https://arxiv.org/abs/2502.14767)
11 | - [Installation](#installation)
12 | - [Quick Start](#quick-start)
13 | - [Video](#video)
14 | - [Citations](#-citations)
15 |
16 | 
17 |
18 | ## Installation
19 | The code is written in Python 3.8.10. The Python dependencies are listed in `requirements.txt` and can be installed with:
20 | ```
21 | pip install -r requirements.txt
22 | ```
23 |
24 | ## Quick Start
25 | To run Tree-of-Debate, use the provided `run.sh` script. The full dataset is provided in `dataset/tree_of_debate_dataset.tsv`, with a corresponding README specifying the file format. Note that the introductions for each paper are optional (they are only needed for the baseline experiments).
26 |
27 | The following are the primary arguments for Tree-of-Debate:
28 |
29 | - `--tsv_file` $\rightarrow$ the path to the dataset in tsv file format.
30 | - `--log` $\rightarrow$ the output directory where all logs and final output will be saved.
31 | - `--experiment` $\rightarrow$ the type of experiment to be performed. Select one of the experiment settings: `tod` for our full Tree-of-Debate method, `single` for the single stage baseline, `two` for the two-stage baseline, `no-tree` for the No-Tree ablation study, or `no-delib` for the No-Self-Deliberation ablation study.
32 |
33 | We provide our human evaluation instructions under the `evaluation` directory.
34 |
35 | ## Video
36 | We have a video that explains the Tree-of-Debate framework and its evaluation. You can find it on [YouTube](https://youtu.be/HlMVvHnM5MM).
37 |
38 | ## 📖 Citations
39 | Please cite the paper and star this repo if you use Tree-of-Debate and find it interesting/useful, thanks! Feel free to open an issue if you have any questions.
40 |
41 | ```bibtex
42 | @article{kargupta2025tree,
43 | title={Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for Scientific Comparative Analysis},
44 | author={Kargupta, Priyanka and Agarwal, Ishika and August, Tal and Han, Jiawei},
45 | journal={arXiv preprint arXiv:2502.14767},
46 | year={2025}
47 | }
48 | ```
49 |
--------------------------------------------------------------------------------
/baselines/run_baseline.py:
--------------------------------------------------------------------------------
1 | import json
2 | import argparse
3 | from vllm import SamplingParams
4 | from pydantic import BaseModel, StringConstraints, conlist
5 | from typing_extensions import Annotated
6 | from outlines.serve.vllm import JSONLogitsProcessor
7 |
8 | class summary_schema(BaseModel):
9 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
10 |
11 | ##### SINGLE-STAGE PROMPTS #####
12 | single_stage_prompt = lambda paper_0, paper_1: f"""
13 |
14 | Paper 0's Title: {paper_0['title']}
15 |
16 | Paper 0's Abstract: {paper_0['abstract']}
17 |
18 | Paper 0's Introduction: {paper_0['introduction']}
19 |
20 |
21 |
22 | Paper 1's Title: {paper_1['title']}
23 |
24 | Paper 1's Abstract: {paper_1['abstract']}
25 |
26 | Paper 1's Introduction: {paper_1['introduction']}
27 |
28 |
29 | Output an approximately paragraph-long comparative summary of the similarities and differences between the two papers. Loosely structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes each paper unique), strictly based on the information discussed within the input title, abstract, and introduction. Focus more on the differences than the similarities. Do not refer to the papers as Paper 0 or 1 in your output summary.
30 |
31 | Format your output in the following JSON schema:
32 | {{
33 | "summary": <5-20 sentence string to summarize the similarities and differences between the two papers>
34 | }}"""
35 |
36 | ##### TWO-STAGE PROMPTS #####
37 |
38 | two_stage_prompt_a = lambda paper: f"""
39 |
40 | Paper Title: {paper['title']}
41 |
42 | Paper Abstract: {paper['abstract']}
43 |
44 | Paper Introduction: {paper['introduction']}
45 |
46 |
47 | Output an approximately paragraph-long summary of the motivation and novelties of the paper based on its abstract and introduction.
48 |
49 | Format your output in the following JSON schema:
50 | {{
51 |     "summary": <5-20 sentence string summarizing the motivation and novelties of the paper>
52 | }}"""
53 |
54 | two_stage_prompt_b = lambda paper_0, paper_1: f"""
55 |
56 | Paper 0's Title: {paper_0['title']}
57 | Paper 0's Summary: {paper_0['summary']}
58 |
59 |
60 |
61 | Paper 1's Title: {paper_1['title']}
62 | Paper 1's Summary: {paper_1['summary']}
63 |
64 |
65 | For each of the two papers, you are provided with a summary of their motivations and novelties. Output an approximately paragraph-long comparative summary of the similarities and differences between the two papers. Loosely structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes each paper unique), strictly based on the information discussed within the input titles and summaries. Focus more on the differences than the similarities. Do not refer to the papers as Paper 0 or 1 in your output summary.
66 |
67 | Format your output in the following JSON schema:
68 | {{
69 | "summary": <5-20 sentence string to summarize the similarities and differences between the two papers>
70 | }}"""
71 |
72 |
73 | def run_baseline_code(args, paper_0, paper_1, model_server):
74 | if args.experiment == 'single':
75 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=model_server.llm_engine)
76 | sampling_params = SamplingParams(max_tokens=3000, temperature=0.4, top_p=0.99,logits_processors=[logits_processor])
77 | output = model_server.generate([single_stage_prompt(paper_0, paper_1)], sampling_params=sampling_params, use_tqdm=False)[0].outputs[0].text
78 | summary = json.loads(output)['summary']
79 |
80 | with open(f'{args.log_dir}/{args.experiment}_summary.txt', 'w') as f:
81 | f.write(summary)
82 |
83 | elif args.experiment == 'two':
84 | # generate individual summaries
85 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=model_server.llm_engine)
86 | sampling_params = SamplingParams(max_tokens=3000, temperature=0.4, top_p=0.99,logits_processors=[logits_processor])
87 |
88 | stage_a = model_server.generate([two_stage_prompt_a(paper_0), two_stage_prompt_a(paper_1)],
89 | sampling_params=sampling_params, use_tqdm=False)
90 | paper_0['summary'] = json.loads(stage_a[0].outputs[0].text)['summary']
91 | paper_1['summary'] = json.loads(stage_a[1].outputs[0].text)['summary']
92 |
93 | # generate joint comparative summary
94 | stage_b = model_server.generate([two_stage_prompt_b(paper_0, paper_1)], sampling_params=sampling_params, use_tqdm=False)[0].outputs[0].text
95 | summary = json.loads(stage_b)['summary']
96 |
97 | with open(f'{args.log_dir}/{args.experiment}_summary.txt', 'w', encoding="utf-8") as f:
98 | f.write(summary)
99 |
100 | else:
101 | print("invalid experiment!")
--------------------------------------------------------------------------------
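Because decoding is constrained to `summary_schema`, outputs can equally be parsed through pydantic rather than raw `json.loads`; a small sketch (pydantic v2 API):

```python
# Sketch: validate constrained LLM output against the decoding schema,
# so a malformed or too-short summary fails loudly.
from baselines.run_baseline import summary_schema

raw = '{"summary": "Both papers address the same task but differ substantially in their methodology."}'
parsed = summary_schema.model_validate_json(raw)
print(parsed.summary)
```
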
/no_delib/debate.py:
--------------------------------------------------------------------------------
1 | from no_delib.persona import PaperAuthor
2 | from typing import List
3 | from collections import defaultdict
4 | import os
5 |
6 | def collect_arguments(arguments):
7 | text_args = ""
8 | counter = 1
9 | for args in arguments:
10 | for a in args['argument_list']:
11 |             text_args += f"{counter}. {a['argument_title']}. "; counter += 1
12 | return text_args
13 |
14 | def topic_dict_to_str(topic):
15 | if topic['topic_title'] == topic['topic_description']:
16 | return topic['topic_title']
17 | else:
18 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
19 |
20 | class DebateNode:
21 | def __init__(self, round_topic: str, parent=None) -> None:
22 | self.children = []
23 | self.content = []
24 |
25 | self.self_delib = {} # paper_id: [] (format for all below)
26 | self.evidence = {}
27 | self.preemption = {}
28 | self.topics = {}
29 | self.init_arguments = {}
30 | self.response = {}
31 | self.final_arguments = {}
32 |
33 | self.parent = parent
34 | self.round_topic = round_topic
35 |
36 |
37 | def __repr__(self):
38 | return topic_dict_to_str(self.round_topic)
39 |
40 | def conduct_self_deliberation(self, topic, paper_authors: List[PaperAuthor], moderator, log=None, num_evidence=5, num_arg=3):
41 | focus_paper = None
42 | for paper_author in paper_authors:
43 |
44 | # develop k arguments
45 | if paper_author.id not in self.self_delib.keys(): self.self_delib[paper_author.id] = []
46 | author_args = paper_author.generate_arguments(topic, k=num_arg)
47 | self.self_delib[paper_author.id].extend(author_args)
48 |
49 | # check if paper is the focus
50 | if paper_author.focus:
51 | focus_paper = paper_author
52 |
53 | # logging
54 | if log is not None:
55 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
56 | f.write(f'Topic: {topic}\n\n')
57 |
58 | f.write(f'Develop Arguments:\n\n')
59 | temp = ""
60 | for i, arg in enumerate(author_args):
61 | temp += f"Argument #{i+1} - {arg['argument_title']}.\n\t{arg['description']}\n"
62 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
63 |
64 | self.topics = moderator.generate_topics(round=self, parent_topic=topic, paper_authors=paper_authors)
65 |
66 | for subtopic in self.topics:
67 | self.children.append(DebateNode(subtopic, parent=self))
68 | return self.children
69 |
70 |
71 | def conduct_debate(self, paper_authors: List[PaperAuthor]):
72 |
73 | convo_history = f"Debate Topic Information:\n\t- Topic: {self.round_topic['topic_title']}\n\t- Topic Description: {self.round_topic['topic_description']}\n\n"
74 |
75 | # each paper presents their arguments
76 |         convo_history += "Debate History:\n\n"
77 | for author in paper_authors:
78 | opposition = paper_authors[1-author.id]
79 |
80 | print(f"\nPRESENT ARGUMENT FOR AUTHOR {author.id}:\n")
81 | author_arg = author.present_argument(debate_node=self, parent_debate_node=self.parent, opposition=opposition)
82 | self.init_arguments[author.id] = author_arg
83 | convo_history += f"\t-Author {author.id}: I argue that {author_arg['argument_title'].lower()}. {author_arg['description']}\n"
84 |
85 | convo_history += "\n"
86 | # each paper responds to opposing side's arguments
87 | for author in paper_authors:
88 | opposition = paper_authors[1-author.id]
89 |
90 | print(f"\nRESPOND ARGUMENT FOR AUTHOR {author.id}:\n")
91 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
92 |
93 | author_response = author.respond_to_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
94 | self.response[author.id] = author_response
95 | convo_history += f"\t-Author {author.id}: {author_response['author_response']}\n"
96 |
97 | convo_history += "\n"
98 | # each paper revises their arguments
99 | for author in paper_authors:
100 | opposition = paper_authors[1-author.id]
101 |
102 | print(f"\nREVISE ARGUMENT FOR AUTHOR {author.id}:\n")
103 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
104 | author_revision = author.revise_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
105 | self.final_arguments[author.id] = author_revision
106 | convo_history += f"\t-Author {author.id}: I argue that {author_revision['revised_argument_title'].lower()}. {author_revision['revised_argument_description']}\n"
107 |
108 | self.conversation_history = convo_history
109 | return convo_history
110 |
111 | def expand_node(self, parent_node, new_node):
112 | parent_node.children.append(new_node)
113 |         new_node.parent = parent_node
114 |
115 |
--------------------------------------------------------------------------------
/no_delib/.ipynb_checkpoints/debate-checkpoint.py:
--------------------------------------------------------------------------------
1 | from persona import PaperAuthor
2 | from typing import List
3 | from collections import defaultdict
4 | import os
5 |
6 | def collect_arguments(arguments):
7 | text_args = ""
8 | counter = 1
9 | for args in arguments:
10 | for a in args['argument_list']:
11 |             text_args += f"{counter}. {a['argument_title']}. "; counter += 1
12 | return text_args
13 |
14 | def topic_dict_to_str(topic):
15 | if topic['topic_title'] == topic['topic_description']:
16 | return topic['topic_title']
17 | else:
18 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
19 |
20 | class DebateNode:
21 | def __init__(self, round_topic: str, parent=None) -> None:
22 | self.children = []
23 | self.content = []
24 |
25 | self.self_delib = {} # paper_id: [] (format for all below)
26 | self.evidence = {}
27 | self.preemption = {}
28 | self.topics = {}
29 | self.init_arguments = {}
30 | self.response = {}
31 | self.final_arguments = {}
32 |
33 | self.parent = parent
34 | self.round_topic = round_topic
35 |
36 |
37 | def __repr__(self):
38 | return topic_dict_to_str(self.round_topic)
39 |
40 | def conduct_self_deliberation(self, topic, paper_authors: List[PaperAuthor], moderator, log=None, num_evidence=5, num_arg=3):
41 | focus_paper = None
42 | for paper_author in paper_authors:
43 |
44 | # develop k arguments
45 | if paper_author.id not in self.self_delib.keys(): self.self_delib[paper_author.id] = []
46 | author_args = paper_author.generate_arguments(topic, k=num_arg)
47 | self.self_delib[paper_author.id].extend(author_args)
48 |
49 | # check if paper is the focus
50 | if paper_author.focus:
51 | focus_paper = paper_author
52 |
53 | # logging
54 | if log is not None:
55 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
56 | f.write(f'Topic: {topic}\n\n')
57 |
58 | f.write(f'Develop Arguments:\n\n')
59 | temp = ""
60 | for i, arg in enumerate(author_args):
61 | temp += f"Argument #{i+1} - {arg['argument_title']}.\n\t{arg['description']}\n"
62 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
63 |
64 | self.topics = moderator.generate_topics(round=self, parent_topic=topic, paper_authors=paper_authors)
65 |
66 | for subtopic in self.topics:
67 | self.children.append(DebateNode(subtopic, parent=self))
68 | return self.children
69 |
70 |
71 | def conduct_debate(self, paper_authors: List[PaperAuthor]):
72 |
73 | convo_history = f"Debate Topic Information:\n\t- Topic: {self.round_topic['topic_title']}\n\t- Topic Description: {self.round_topic['topic_description']}\n\n"
74 |
75 | # each paper presents their arguments
76 |         convo_history += "Debate History:\n\n"
77 | for author in paper_authors:
78 | opposition = paper_authors[1-author.id]
79 |
80 | print(f"\nPRESENT ARGUMENT FOR AUTHOR {author.id}:\n")
81 | author_arg = author.present_argument(debate_node=self, parent_debate_node=self.parent, opposition=opposition)
82 | self.init_arguments[author.id] = author_arg
83 | convo_history += f"\t-Author {author.id}: I argue that {author_arg['argument_title'].lower()}. {author_arg['description']}\n"
84 |
85 | convo_history += "\n"
86 | # each paper responds to opposing side's arguments
87 | for author in paper_authors:
88 | opposition = paper_authors[1-author.id]
89 |
90 | print(f"\nRESPOND ARGUMENT FOR AUTHOR {author.id}:\n")
91 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
92 |
93 | author_response = author.respond_to_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
94 | self.response[author.id] = author_response
95 | convo_history += f"\t-Author {author.id}: {author_response['author_response']}\n"
96 |
97 | convo_history += "\n"
98 | # each paper revises their arguments
99 | for author in paper_authors:
100 | opposition = paper_authors[1-author.id]
101 |
102 | print(f"\nREVISE ARGUMENT FOR AUTHOR {author.id}:\n")
103 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
104 | author_revision = author.revise_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
105 | self.final_arguments[author.id] = author_revision
106 | convo_history += f"\t-Author {author.id}: I argue that {author_revision['revised_argument_title'].lower()}. {author_revision['revised_argument_description']}\n"
107 |
108 | self.conversation_history = convo_history
109 | return convo_history
110 |
111 | def expand_node(self, parent_node, new_node):
112 | parent_node.children.append(new_node)
113 |         new_node.parent = parent_node
114 |
115 |
--------------------------------------------------------------------------------
/tod_no_deliberation/debate.py:
--------------------------------------------------------------------------------
1 | from persona import PaperAuthor
2 | from typing import List
3 | from collections import defaultdict
4 | import os
5 |
6 | def collect_arguments(arguments):
7 | text_args = ""
8 | counter = 1
9 | for args in arguments:
10 | for a in args['argument_list']:
11 |             text_args += f"{counter}. {a['argument_title']}. "; counter += 1
12 | return text_args
13 |
14 | def topic_dict_to_str(topic):
15 | if topic['topic_title'] == topic['topic_description']:
16 | return topic['topic_title']
17 | else:
18 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
19 |
20 | class DebateNode:
21 | def __init__(self, round_topic: str, parent=None) -> None:
22 | self.children = []
23 | self.content = []
24 |
25 | self.self_delib = {} # paper_id: [] (format for all below)
26 | self.evidence = {}
27 | self.preemption = {}
28 | self.topics = {}
29 | self.init_arguments = {}
30 | self.response = {}
31 | self.final_arguments = {}
32 | self.conversation_history = None
33 |
34 | self.parent = parent
35 | self.round_topic = round_topic
36 |
37 |
38 | def __repr__(self):
39 | return topic_dict_to_str(self.round_topic)
40 |
41 | def conduct_self_deliberation(self, topic, paper_authors: List[PaperAuthor], moderator, log=None, num_evidence=5, num_arg=3):
42 | focus_paper = None
43 |
44 | for paper_author in paper_authors:
45 | # develop k arguments
46 | if paper_author.id not in self.self_delib.keys(): self.self_delib[paper_author.id] = []
47 | author_args = paper_author.generate_arguments(topic, k=num_arg)
48 | self.self_delib[paper_author.id].extend(author_args)
49 |
50 | # check if paper is the focus
51 | if paper_author.focus:
52 | focus_paper = paper_author
53 |
54 | # logging
55 | if log is not None:
56 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
57 | f.write(f'Topic: {topic}\n\n')
58 |
59 | f.write(f'Develop Arguments:\n\n')
60 | temp = ""
61 | for i, arg in enumerate(author_args):
62 | temp += f"Argument #{i+1} - {arg['argument_title']}.\n\t{arg['description']}\n"
63 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
64 |
65 | self.topics = moderator.generate_topics(round=self, parent_topic=topic, paper_authors=paper_authors)
66 |
67 | for subtopic in self.topics:
68 | self.children.append(DebateNode(subtopic, parent=self))
69 | return self.children
70 |
71 |
72 | def conduct_debate(self, paper_authors: List[PaperAuthor]):
73 |
74 | convo_history = f"Debate Topic Information:\n\t- Topic: {self.round_topic['topic_title']}\n\t- Topic Description: {self.round_topic['topic_description']}\n\n"
75 |
76 | # each paper presents their arguments
77 |         convo_history += "Debate History:\n\n"
78 | for author in paper_authors:
79 | opposition = paper_authors[1-author.id]
80 |
81 | print(f"\nPRESENT ARGUMENT FOR AUTHOR {author.id}:\n")
82 | author_arg = author.present_argument(debate_node=self, parent_debate_node=self.parent, opposition=opposition)
83 | self.init_arguments[author.id] = author_arg
84 | convo_history += f"\t-Author {author.id}: I argue that {author_arg['argument_title'].lower()}. {author_arg['description']}\n"
85 |
86 | convo_history += "\n"
87 | # each paper responds to opposing side's arguments
88 | for author in paper_authors:
89 | opposition = paper_authors[1-author.id]
90 |
91 | print(f"\nRESPOND ARGUMENT FOR AUTHOR {author.id}:\n")
92 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
93 |
94 | author_response = author.respond_to_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
95 | self.response[author.id] = author_response
96 | convo_history += f"\t-Author {author.id}: {author_response['author_response']}\n"
97 |
98 | convo_history += "\n"
99 | # each paper revises their arguments
100 | for author in paper_authors:
101 | opposition = paper_authors[1-author.id]
102 |
103 | print(f"\nREVISE ARGUMENT FOR AUTHOR {author.id}:\n")
104 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
105 | author_revision = author.revise_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
106 | self.final_arguments[author.id] = author_revision
107 | convo_history += f"\t-Author {author.id}: I argue that {author_revision['revised_argument_title'].lower()}. {author_revision['revised_argument_description']}\n"
108 |
109 | self.conversation_history = convo_history
110 | return convo_history
111 |
112 | def expand_node(self, parent_node, new_node):
113 | parent_node.children.append(new_node)
114 |         new_node.parent = parent_node
115 |
116 |
--------------------------------------------------------------------------------
/tree_of_debate.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import os
3 | # os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3"
4 | from debate import DebateNode
5 | from paper_details import Paper
6 | from persona import PaperAuthor
7 | from moderator import Moderator
8 | import argparse
9 | from typing import List
10 | from vllm import LLM
11 | import os
12 | import json
13 | from data_pairer import parse_papers_url
14 |
15 | def print_path(node: DebateNode, prefix=""):
16 | if len(node.children) == 0:
17 | # return prefix + node.round_topic['topic_title']
18 | out = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
19 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
20 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
21 |
22 | """
23 | node_dict = {'topic':node.round_topic['topic_title'],
24 | 'description':node.round_topic['topic_description'],
25 | 'author_0_argument': f"{node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}",
26 | 'author_1_argument': f"{node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}"}
27 | return out, node_dict
28 |
29 | elif node.parent is None:
30 | node_dict = {'topic':node.round_topic['topic_title'], 'children':[]}
31 | path = prefix + node.round_topic['topic_title'] + "\n\n"
32 |
33 | else:
34 | path = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
35 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
36 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
37 |
38 | """
39 | node_dict = {'topic':node.round_topic['topic_title'],
40 | 'description':node.round_topic['topic_description'],
41 | 'author_0_argument': f"{node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}",
42 | 'author_1_argument': f"{node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}",
43 | 'children':[]}
44 |
45 | for child in node.children:
46 | child_path, child_dict = print_path(child, prefix + "\t")
47 | path += child_path + "\n"
48 | node_dict['children'].append(child_dict)
49 |
50 | return path, node_dict
51 |
52 | def topic_dict_to_str(topic):
53 | if topic['topic_title'] == topic['topic_description']:
54 | return topic['topic_title']
55 | else:
56 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
57 | # return topic['argument_title']
58 |
59 | def collect_evidence(evidence, subtrees):
60 | for c in subtrees:
61 | for author_id, e in c.evidence.items():
62 | evidence[author_id].extend(e)
63 | return evidence
64 |
65 |
66 | def run_code(args, f_pap, c_pap, model_server):
67 |
68 | focus_paper = PaperAuthor(
69 | model = model_server,
70 | paper = Paper(f_pap),
71 | focus=True,
72 | id=0,
73 | log_dir=args.log_dir
74 | )
75 |
76 | cited_paper = PaperAuthor(
77 | model = model_server,
78 | paper = Paper(c_pap),
79 | focus=False,
80 | id=1,
81 | log_dir=args.log_dir
82 | )
83 |
84 | moderator = Moderator(model_server, args.log_dir)
85 |
86 | paper_authors = [focus_paper, cited_paper]
87 | leaf_node_label = {'topic_title': args.topic, 'topic_description': args.topic}
88 |
89 | if args.log_dir != "":
90 | with open(os.path.join(args.log_dir, 'self_deliberation.txt'), 'w') as f:
91 | f.write(f'Topic: {args.topic}\n\n')
92 |
93 | with open(f'{args.log_dir}/llm_calls.txt', 'w') as f:
94 | f.write(f'LLM Calls:\n')
95 |
96 | # each node has a topic
97 | root_node = DebateNode(leaf_node_label)
98 | all_evidence = {p.id:[] for p in paper_authors}
99 |
100 | subtrees = root_node.conduct_self_deliberation(leaf_node_label, paper_authors, moderator, log=args.log_dir) # k new, finer topics to discuss
101 | all_evidence = collect_evidence(all_evidence, subtrees)
102 |
103 | conversation_history = []
104 |
105 | queue_of_rounds: List[DebateNode] = []
106 | queue_of_rounds.extend(subtrees)
107 |
108 | debated_rounds = [root_node]
109 |
110 | depth = 0
111 | max_depth = 3
112 |
113 | while len(queue_of_rounds) > 0:
114 | round = queue_of_rounds.pop(0)
115 | debated_rounds.append(round)
116 | conversation = round.conduct_debate([focus_paper, cited_paper])
117 | conversation_history.append(conversation)
118 | if moderator.is_expand(round, conversation) and depth < max_depth:
119 | new_subtrees = round.conduct_self_deliberation(round.round_topic, paper_authors, moderator)
120 | all_evidence = collect_evidence(all_evidence, new_subtrees)
121 | queue_of_rounds.extend(new_subtrees)
122 |             depth += 1  # counts expansions performed; max_depth bounds total expansions, not tree depth
123 |
124 | conversation_history = ''.join(conversation_history)
125 | with open(f'{args.log_dir}/conversation_history.txt', 'w') as f:
126 | f.write(conversation_history)
127 |
128 | with open(f'{args.log_dir}/evidence.txt', 'w') as f:
129 | for author_id, e in all_evidence.items():
130 | unique_e = list(set(e))
131 | f.write(str(unique_e))
132 | f.write('\n')
133 |
134 | paths, tree_dict = print_path(root_node)
135 | with open(f'{args.log_dir}/path.txt', 'w') as f:
136 | f.write("\n\n\n\n\n")
137 | f.write("PATHS:\n")
138 | f.write(paths)
139 |
140 |
141 | with open(f'{args.log_dir}/tree.json', 'w', encoding='utf-8') as f:
142 | json.dump(tree_dict, f, ensure_ascii=False, indent=4)
143 |
144 | path_summary = moderator.summarize_path_debate(paper_authors, leaf_node_label, json.dumps(tree_dict, indent=2))
145 | with open(f'{args.log_dir}/path_summary.txt', 'w') as f:
146 | f.write(path_summary)
--------------------------------------------------------------------------------
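For orientation, a small sketch that walks the `tree.json` written above and prints the debated topic hierarchy; the log path is a placeholder:

```python
# Sketch: print the topic hierarchy from a saved Tree-of-Debate run.
import json

def walk(node, depth=0):
    print("  " * depth + node["topic"])        # every node carries a topic
    for child in node.get("children", []):     # leaf nodes have no 'children' key
        walk(child, depth + 1)

with open("logs/demo/tod/tree.json") as f:     # placeholder log path
    walk(json.load(f))
```
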
/debate.py:
--------------------------------------------------------------------------------
1 | from persona import PaperAuthor
2 | from typing import List
3 | from collections import defaultdict
4 | import os
5 |
6 | def collect_arguments(arguments):
7 | text_args = ""
8 |     # flatten the nested argument lists so each argument gets a sequential number
9 |     flat_args = [a for args in arguments for a in args['argument_list']]
10 |     for counter, a in enumerate(flat_args, start=1):
11 |         text_args += f"{counter}. {a['argument_title']}. "
12 | return text_args
13 |
14 | def topic_dict_to_str(topic):
15 | if topic['topic_title'] == topic['topic_description']:
16 | return topic['topic_title']
17 | else:
18 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
19 |
20 | class DebateNode:
21 |     def __init__(self, round_topic: dict, parent=None) -> None:  # round_topic is a {'topic_title', 'topic_description'} dict
22 | self.children = []
23 | self.content = []
24 |
25 | self.self_delib = {} # paper_id: [] (format for all below)
26 | self.evidence = {}
27 | self.preemption = {}
28 | self.topics = {}
29 | self.init_arguments = {}
30 | self.response = {}
31 | self.final_arguments = {}
32 | self.conversation_history = None
33 |
34 | self.parent = parent
35 | self.round_topic = round_topic
36 |
37 |
38 | def __repr__(self):
39 | return topic_dict_to_str(self.round_topic)
40 |
41 | def conduct_self_deliberation(self, topic, paper_authors: List[PaperAuthor], moderator, log=None, num_evidence=5, num_arg=3):
42 | focus_paper = None
43 | for paper_author in paper_authors:
44 | # gather evidence
45 | evidence, scores = paper_author.gather_evidence(topic_dict_to_str(topic), k=num_evidence, return_scores=True)
46 |
47 | if paper_author.id not in self.evidence.keys(): self.evidence[paper_author.id] = []
48 | self.evidence[paper_author.id].extend(evidence)
49 |
50 | # develop k arguments
51 | if paper_author.id not in self.self_delib.keys(): self.self_delib[paper_author.id] = []
52 | author_args = paper_author.generate_arguments(topic, evidence, k=num_arg)
53 | self.self_delib[paper_author.id].extend(author_args)
54 |
55 | # check if paper is the focus
56 | if paper_author.focus:
57 | focus_paper = paper_author
58 |
59 | # logging
60 | if log is not None:
61 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
62 | f.write(f'Topic: {topic}\n\n')
63 | f.write(f'Gather Evidence:\n\n')
64 | temp = "\n".join([f'{s} - {e}' for s, e in zip(scores, evidence)])
65 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
66 |
67 | f.write(f'Develop Arguments:\n\n')
68 | temp = ""
69 | for i, arg in enumerate(author_args):
70 | temp += f"Argument #{i+1} - {arg['argument_title']}.\n\t{arg['description']}\n\t{arg['evidence']}\n"
71 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
72 |
73 | # preemption
74 | ## for each other paper author, collect their respective arguments/evidence
75 | for i in range(len(paper_authors)):
76 | other_arguments = []
77 | for j in range(len(paper_authors)):
78 | if j != i:
79 | other_arguments.extend([a for a in self.self_delib[paper_authors[j].id]])
80 |
81 | if paper_authors[i].id not in self.preemption.keys(): self.preemption[paper_authors[i].id] = {}
82 | self.preemption[paper_authors[i].id].update(paper_authors[i].preempt_arguments(other_arguments))
83 |
84 | # logging
85 | if log is not None:
86 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
87 | f.write(f'Preemption:\n\n')
88 | temp = ""
89 | for key in self.preemption.keys():
90 | temp += f"\t{key}\n"
91 | for j, p in enumerate(self.preemption[key]):
92 | temp += f"\t\tPreemptive Arg #{j+1}: {p}\n"
93 | temp += "\n"
94 | f.write(f'{paper_authors[i].focus} paper:\n{temp}\n\n')
95 |
96 | self.topics = moderator.generate_topics(round=self, parent_topic=topic, paper_authors=paper_authors)
97 |
98 | for subtopic in self.topics:
99 | self.children.append(DebateNode(subtopic, parent=self))
100 | return self.children
101 |
102 |
103 | def conduct_debate(self, paper_authors: List[PaperAuthor]):
104 |
105 | convo_history = f"Debate Topic Information:\n\t- Topic: {self.round_topic['topic_title']}\n\t- Topic Description: {self.round_topic['topic_description']}\n\n"
106 |
107 | # each paper presents their arguments
108 |         convo_history += "Debate History:\n\n"  # append, so the topic header built above is preserved
109 | for author in paper_authors:
110 | opposition = paper_authors[1-author.id]
111 |
112 | print(f"\nPRESENT ARGUMENT FOR AUTHOR {author.id}:\n")
113 | author_arg = author.present_argument(debate_node=self, parent_debate_node=self.parent, opposition=opposition)
114 | self.init_arguments[author.id] = author_arg
115 | convo_history += f"\t-Author {author.id}: I argue that {author_arg['argument_title'].lower()}. {author_arg['description']}\n"
116 |
117 | convo_history += "\n"
118 | # each paper responds to opposing side's arguments
119 | for author in paper_authors:
120 | opposition = paper_authors[1-author.id]
121 |
122 | print(f"\nRESPOND ARGUMENT FOR AUTHOR {author.id}:\n")
123 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
124 |
125 | author_response = author.respond_to_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
126 | self.response[author.id] = author_response
127 | convo_history += f"\t-Author {author.id}: {author_response['author_response']}\n"
128 |
129 | convo_history += "\n"
130 | # each paper revises their arguments
131 | for author in paper_authors:
132 | opposition = paper_authors[1-author.id]
133 |
134 | print(f"\nREVISE ARGUMENT FOR AUTHOR {author.id}:\n")
135 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
136 | author_revision = author.revise_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
137 | self.final_arguments[author.id] = author_revision
138 | convo_history += f"\t-Author {author.id}: I argue that {author_revision['revised_argument_title'].lower()}. {author_revision['revised_argument_description']}\n"
139 |
140 | self.conversation_history = convo_history
141 | return convo_history
142 |
143 | def expand_node(self, parent_node, new_node):
144 | parent_node.children.append(new_node)
145 |         new_node.parent = parent_node  # the attribute defined in __init__ is 'parent'
146 |
147 |
--------------------------------------------------------------------------------
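A hypothetical offline harness for the DebateNode class above (not part of the repository): it duck-types the three PaperAuthor methods that conduct_debate calls, so a debate round can be smoke-tested without loading an LLM. StubAuthor and its canned return values are illustrative assumptions.

class StubAuthor:
    """Minimal stand-in for PaperAuthor; implements only what conduct_debate touches."""
    def __init__(self, id):
        self.id = id
        self.focus = (id == 0)

    def present_argument(self, **kwargs):
        return {'argument_title': f"Stub claim {self.id}", 'description': "Canned description."}

    def respond_to_argument(self, history, **kwargs):
        return {'author_response': f"Stub response {self.id}."}

    def revise_argument(self, history, **kwargs):
        return {'revised_argument_title': f"Stub revision {self.id}",
                'revised_argument_description': "Canned revision."}

node = DebateNode({'topic_title': 'demo topic', 'topic_description': 'demo description'})
print(node.conduct_debate([StubAuthor(0), StubAuthor(1)]))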
/no_tree/debate.py:
--------------------------------------------------------------------------------
1 | from no_tree.persona import PaperAuthor
2 | from typing import List
3 | from collections import defaultdict
4 | import os
5 |
6 | def collect_arguments(arguments):
7 | text_args = ""
8 |     # flatten the nested argument lists so each argument gets a sequential number
9 |     flat_args = [a for args in arguments for a in args['argument_list']]
10 |     for counter, a in enumerate(flat_args, start=1):
11 |         text_args += f"{counter}. {a['argument_title']}. "
12 | return text_args
13 |
14 | def topic_dict_to_str(topic):
15 | if topic['topic_title'] == topic['topic_description']:
16 | return topic['topic_title']
17 | else:
18 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
19 |
20 | class DebateNode:
21 |     def __init__(self, round_topic: dict, parent=None) -> None:  # round_topic is a {'topic_title', 'topic_description'} dict
22 | self.children = []
23 | self.content = []
24 |
25 | self.self_delib = {} # paper_id: [] (format for all below)
26 | self.evidence = {}
27 | self.preemption = {}
28 | self.topics = {}
29 | self.init_arguments = {}
30 | self.response = {}
31 | self.final_arguments = {}
32 |
33 | self.parent = parent
34 | self.round_topic = round_topic
35 |
36 |
37 | def __repr__(self):
38 | return topic_dict_to_str(self.round_topic)
39 |
40 | def conduct_self_deliberation(self, topic, paper_authors: List[PaperAuthor], moderator, log=None, num_evidence=5, num_arg=3):
41 | focus_paper = None
42 | for paper_author in paper_authors:
43 | # gather evidence
44 | evidence, scores = paper_author.gather_evidence(topic_dict_to_str(topic), k=num_evidence, return_scores=True)
45 |
46 | if paper_author.id not in self.evidence.keys(): self.evidence[paper_author.id] = []
47 | self.evidence[paper_author.id].extend(evidence)
48 |
49 | # develop k arguments
50 | if paper_author.id not in self.self_delib.keys(): self.self_delib[paper_author.id] = []
51 | author_args = paper_author.generate_arguments(topic, evidence, k=num_arg)
52 | self.self_delib[paper_author.id].extend(author_args)
53 |
54 | # check if paper is the focus
55 | if paper_author.focus:
56 | focus_paper = paper_author
57 |
58 | # logging
59 | if log is not None:
60 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
61 | f.write(f'Topic: {topic}\n\n')
62 | f.write(f'Gather Evidence:\n\n')
63 | temp = "\n".join([f'{s} - {e}' for s, e in zip(scores, evidence)])
64 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
65 |
66 | f.write(f'Develop Arguments:\n\n')
67 | temp = ""
68 | for i, arg in enumerate(author_args):
69 | temp += f"Argument #{i+1} - {arg['argument_title']}.\n\t{arg['description']}\n\t{arg['evidence']}\n"
70 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
71 |
72 | # preemption
73 | ## for each other paper author, collect their respective arguments/evidence
74 | for i in range(len(paper_authors)):
75 | other_arguments = []
76 | for j in range(len(paper_authors)):
77 | if j != i:
78 | other_arguments.extend([a for a in self.self_delib[paper_authors[j].id]])
79 |
80 | if paper_authors[i].id not in self.preemption.keys(): self.preemption[paper_authors[i].id] = {}
81 | self.preemption[paper_authors[i].id].update(paper_authors[i].preempt_arguments(other_arguments))
82 |
83 | # logging
84 | if log is not None:
85 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
86 | f.write(f'Preemption:\n\n')
87 | temp = ""
88 | for key in self.preemption.keys():
89 | temp += f"\t{key}\n"
90 | for j, p in enumerate(self.preemption[key]):
91 | temp += f"\t\tPreemptive Arg #{j+1}: {p}\n"
92 | temp += "\n"
93 | f.write(f'{paper_authors[i].focus} paper:\n{temp}\n\n')
94 |
95 | self.topics = moderator.generate_topics(round=self, parent_topic=topic, paper_authors=paper_authors)
96 |
97 | # for subtopic in self.topics:
98 | # self.children.append(DebateNode(subtopic, parent=self))
99 |
100 |
101 | self.children.append(DebateNode(self.topics, parent=self))
102 | return self.children
103 |
104 |
105 | def conduct_debate(self, paper_authors: List[PaperAuthor]):
106 |
107 | convo_history = f"Debate Topic Information:\n\t- Topics: {self.round_topic['topic_title']}\n\t- Topic Description: {self.round_topic['topic_description']}\n\n"
108 |
109 | # each paper presents their arguments
110 |         convo_history += "Debate History:\n\n"  # append, so the topic header built above is preserved
111 | for author in paper_authors:
112 | opposition = paper_authors[1-author.id]
113 |
114 | print(f"\nPRESENT ARGUMENT FOR AUTHOR {author.id}:\n")
115 | author_arg = author.present_argument(debate_node=self, parent_debate_node=self.parent, opposition=opposition)
116 | self.init_arguments[author.id] = author_arg
117 | convo_history += f"\t-Author {author.id}: I argue that {author_arg['argument_title'].lower()}. {author_arg['description']}\n"
118 |
119 | convo_history += "\n"
120 | # each paper responds to opposing side's arguments
121 | for author in paper_authors:
122 | opposition = paper_authors[1-author.id]
123 |
124 | print(f"\nRESPOND ARGUMENT FOR AUTHOR {author.id}:\n")
125 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
126 |
127 | author_response = author.respond_to_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
128 | self.response[author.id] = author_response
129 | convo_history += f"\t-Author {author.id}: {author_response['author_response']}\n"
130 |
131 | convo_history += "\n"
132 | # each paper revises their arguments
133 | for author in paper_authors:
134 | opposition = paper_authors[1-author.id]
135 |
136 | print(f"\nREVISE ARGUMENT FOR AUTHOR {author.id}:\n")
137 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
138 | author_revision = author.revise_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
139 | self.final_arguments[author.id] = author_revision
140 | convo_history += f"\t-Author {author.id}: I argue that {author_revision['revised_argument_title'].lower()}. {author_revision['revised_argument_description']}\n"
141 |
142 | return convo_history
143 |
144 | def expand_node(self, parent_node, new_node):
145 | parent_node.children.append(new_node)
146 |         new_node.parent = parent_node  # the attribute defined in __init__ is 'parent'
147 |
148 |
--------------------------------------------------------------------------------
/retrieval/e5_model.py:
--------------------------------------------------------------------------------
1 | import re
2 | import tqdm
3 | import torch
4 | import warnings
5 | import torch.nn.functional as F
6 |
7 | from math import ceil
8 | from tqdm import tqdm
9 | from torch import Tensor
10 | from transformers import AutoTokenizer, AutoModel
11 | from adapters import AutoAdapterModel
12 | from typing import List
13 |
14 | from joblib import Memory
15 | memory = Memory(location=".cache", verbose=0)
16 |
17 | warnings.filterwarnings("ignore", category=FutureWarning)
18 |
19 | class E5:
20 | def __init__(self):
21 |
22 | # Select the most available GPU
23 | # available_gpus = GPUtil.getAvailable(order="memory", limit=1, maxMemory=0.8)
24 | # if available_gpus:
25 | # self.device = torch.device(f"cuda:{available_gpus[0]}")
26 | # print(f"E5Embedder using GPU: cuda:{available_gpus[0]}")
27 | # else:
28 | # self.device = torch.device("cpu")
29 | # print("No GPU available, E5Embedder using CPU.")
30 |
31 | # self.tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
32 | self.tokenizer = AutoTokenizer.from_pretrained('allenai/specter2_base')
33 | # self.model = AutoModel.from_pretrained('intfloat/e5-large-v2').to('cuda')
34 | self.model = AutoModel.from_pretrained('allenai/specter2_base').to('cuda')
35 | self.device = 'cuda'
36 | self.model.eval()
37 |
38 | @staticmethod
39 | def tokenization(tokenizer, text):
40 | '''
41 | Different tokenization procedures based on different models.
42 |
43 | Input: text as list of strings, if cpu option then list has length 1.
44 | Return: tokenized inputs, could be dictionary for BERT models.
45 | '''
46 | inputs = tokenizer(text, padding=True, truncation=True, max_length=512, return_tensors="pt")
47 | return inputs
48 |
49 | @staticmethod
50 | def preprocessing(text):
51 | '''
52 | Note, this preprocessing function should be applied to any text before getting its embedding.
53 | Input: text as string
54 |
55 | Output: removing newline, latex $$, and common website urls.
56 | '''
57 |         pattern = r"(((https?|ftp)://)|(www\.))[^\s/$.?#].[^\s]*"  # escape the '.' in 'www.' so it only matches a literal dot
58 | text = re.sub(pattern, '', text)
59 | return text.replace("\n", " ").replace("$","")
60 |
61 | @staticmethod
62 | def encoding(model, inputs, device):
63 | '''
64 | Different encoding procedures based on different models.
65 | Input: inputs are tokenized inputs in specific form
66 | Return: a numpy ndarray embedding on cpu.
67 |
68 | '''
69 | def average_pool(last_hidden_states: Tensor,
70 | attention_mask: Tensor) -> Tensor:
71 | last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
72 | return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
73 |
74 | with torch.no_grad():
75 | input_ids = inputs['input_ids'].to(device)
76 | assert input_ids.shape[1]<=512
77 | token_type_ids = inputs['token_type_ids'].to(device)
78 | attention_mask = inputs['attention_mask'].to(device)
79 |
80 | new_batch_dict={}
81 | new_batch_dict["input_ids"] = input_ids
82 | new_batch_dict["token_type_ids"] = token_type_ids
83 | new_batch_dict["attention_mask"] = attention_mask
84 |
85 | outputs = model(**new_batch_dict)
86 | embeddings = average_pool(outputs.last_hidden_state, new_batch_dict['attention_mask'])
87 | embeddings = F.normalize(embeddings, p=2, dim=1)
88 |
89 | output = embeddings.detach().cpu()
90 |
91 | del input_ids, token_type_ids, attention_mask, new_batch_dict, outputs, embeddings
92 | torch.cuda.empty_cache()
93 |
94 | return output.numpy()
95 |
96 |
97 | def __call__(self, text, batch_size=64):
98 | """
99 |         text: a list of strings, each string is either a query or an abstract
100 |         batch_size: number of strings embedded per forward pass (default 64);
101 |                     larger values are faster but may run into OOM errors on the GPU.
102 |         Return: the embedding dictionary, where the key is a string (e.g. an abstract,
103 |                 query/subquery) and the value is an np.ndarray of the embedding vector,
104 |                 usually 1 or 2 dimensions.
105 | """
106 | ret = {}
107 | length = ceil(len(text)/batch_size)
108 | for i in tqdm(range(length), desc = "Begin Embedding...", leave = False):
109 | curr_batch = text[i*batch_size:(i+1)*batch_size]
110 | curr_batch_cleaned = [self.preprocessing(t) for t in curr_batch]
111 | inputs = self.tokenization(self.tokenizer, curr_batch_cleaned)
112 | embedding = self.encoding(self.model, inputs, self.device)
113 | for t, v in zip(curr_batch, embedding):
114 | ret[t] = v
115 | del inputs
116 | torch.cuda.empty_cache()
117 | return ret
118 |
119 | # @memory.cache
120 | def e5_embed(text_list: List[str], batch_size=64):
121 | # e5 = E5()
122 | # res = e5(text_list)
123 | # return res
124 |
125 | # embedding_model_name = 'allenai/specter2_base'
126 | embedding_model_name = 'BAAI/bge-large-en-v1.5'
127 | embedding_tokenizer = AutoTokenizer.from_pretrained(embedding_model_name)
128 | embedding_tokenizer.max_subtokens_sequence_length = 512
129 | embedding_tokenizer.model_max_length = 512
130 | # embedding_model = AutoAdapterModel.from_pretrained('allenai/specter2_base')
131 | # embedding_model.load_adapter("allenai/specter2_adhoc_query", source="hf", load_as="specter2_adhoc_query", set_active=True, device_map='auto')
132 | embedding_model = AutoModel.from_pretrained(embedding_model_name, device_map='auto')
133 | embedding_model.eval()
134 |
135 | if embedding_tokenizer.pad_token is None:
136 | embedding_tokenizer.add_special_tokens({'pad_token': '[PAD]'})
137 | embedding_model.resize_token_embeddings(len(embedding_tokenizer))
138 |
139 | sentence_embeddings = []
140 | for i in tqdm(range(0, len(text_list), batch_size)):
141 | encoded_input = embedding_tokenizer(text_list[i:i+batch_size], padding=True, truncation=True, return_tensors='pt').to(embedding_model.device)
142 | with torch.no_grad():
143 | model_output = embedding_model(**encoded_input)
144 | sentence_embedding = model_output[0][:, 0]
145 | sentence_embedding = torch.nn.functional.normalize(sentence_embedding, p=2, dim=1).squeeze().detach().cpu().numpy()
146 | if len(sentence_embedding.shape) == 1:
147 | sentence_embedding = [sentence_embedding]
148 | sentence_embeddings.extend(sentence_embedding)
149 |
150 |
151 | embeddings_dicts = {}
152 | for sentence, embedding in zip(text_list, sentence_embeddings):
153 | embeddings_dicts[sentence] = embedding
154 | return embeddings_dicts
155 |
156 | if __name__ == "__main__":
157 | texts = ["Hello", "World", "Python"]
158 | print(e5_embed(texts))
--------------------------------------------------------------------------------
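A hypothetical usage sketch for e5_embed above (not part of the repository; the query and passages are invented, and it assumes the repo root is on PYTHONPATH with the embedding weights downloadable). Despite its name, e5_embed currently loads BAAI/bge-large-en-v1.5, and since every returned vector is L2-normalized, cosine similarity reduces to a dot product:

import numpy as np
from retrieval.e5_model import e5_embed

query = "wireless sensing of human activity"
passages = [
    "We propose a WiFi-based system for recognizing human activities.",
    "This paper studies gradient compression for distributed training.",
]

embeddings = e5_embed([query] + passages, batch_size=4)
scores = {p: float(np.dot(embeddings[query], embeddings[p])) for p in passages}
for passage, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {passage}")  # higher dot product = more relevant to the query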
/no_tree/.ipynb_checkpoints/debate-checkpoint.py:
--------------------------------------------------------------------------------
1 | from no_tree.persona import PaperAuthor
2 | from typing import List
3 | from collections import defaultdict
4 | import os
5 |
6 | def collect_arguments(arguments):
7 | text_args = ""
8 |     # flatten the nested argument lists so each argument gets a sequential number
9 |     flat_args = [a for args in arguments for a in args['argument_list']]
10 |     for counter, a in enumerate(flat_args, start=1):
11 |         text_args += f"{counter}. {a['argument_title']}. "
12 | return text_args
13 |
14 | def topic_dict_to_str(topic):
15 | if topic['topic_title'] == topic['topic_description']:
16 | return topic['topic_title']
17 | else:
18 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
19 |
20 | class DebateNode:
21 |     def __init__(self, round_topic: dict, parent=None) -> None:  # round_topic is a {'topic_title', 'topic_description'} dict
22 | self.children = []
23 | self.content = []
24 |
25 | self.self_delib = {} # paper_id: [] (format for all below)
26 | self.evidence = {}
27 | self.preemption = {}
28 | self.topics = {}
29 | self.init_arguments = {}
30 | self.response = {}
31 | self.final_arguments = {}
32 |
33 | self.parent = parent
34 | self.round_topic = round_topic
35 |
36 |
37 | def __repr__(self):
38 | return topic_dict_to_str(self.round_topic)
39 |
40 | def conduct_self_deliberation(self, topic, paper_authors: List[PaperAuthor], moderator, log=None, num_evidence=5, num_arg=3):
41 | focus_paper = None
42 | for paper_author in paper_authors:
43 | # gather evidence
44 | evidence, scores = paper_author.gather_evidence(topic_dict_to_str(topic), k=num_evidence, return_scores=True)
45 |
46 | if paper_author.id not in self.evidence.keys(): self.evidence[paper_author.id] = []
47 | self.evidence[paper_author.id].extend(evidence)
48 |
49 | # develop k arguments
50 | if paper_author.id not in self.self_delib.keys(): self.self_delib[paper_author.id] = []
51 | author_args = paper_author.generate_arguments(topic, evidence, k=num_arg)
52 | self.self_delib[paper_author.id].extend(author_args)
53 |
54 | # check if paper is the focus
55 | if paper_author.focus:
56 | focus_paper = paper_author
57 |
58 | # logging
59 | if log is not None:
60 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
61 | f.write(f'Topic: {topic}\n\n')
62 | f.write(f'Gather Evidence:\n\n')
63 | temp = "\n".join([f'{s} - {e}' for s, e in zip(scores, evidence)])
64 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
65 |
66 | f.write(f'Develop Arguments:\n\n')
67 | temp = ""
68 | for i, arg in enumerate(author_args):
69 | temp += f"Argument #{i+1} - {arg['argument_title']}.\n\t{arg['description']}\n\t{arg['evidence']}\n"
70 | f.write(f'{paper_author.focus} paper:\n{temp}\n\n')
71 |
72 | # preemption
73 | ## for each other paper author, collect their respective arguments/evidence
74 | for i in range(len(paper_authors)):
75 | other_arguments = []
76 | for j in range(len(paper_authors)):
77 | if j != i:
78 | other_arguments.extend([a for a in self.self_delib[paper_authors[j].id]])
79 |
80 | if paper_authors[i].id not in self.preemption.keys(): self.preemption[paper_authors[i].id] = {}
81 | self.preemption[paper_authors[i].id].update(paper_authors[i].preempt_arguments(other_arguments))
82 |
83 | # logging
84 | if log is not None:
85 | with open(os.path.join(log, 'self_deliberation.txt'), 'a') as f:
86 | f.write(f'Preemption:\n\n')
87 | temp = ""
88 | for key in self.preemption.keys():
89 | temp += f"\t{key}\n"
90 | for j, p in enumerate(self.preemption[key]):
91 | temp += f"\t\tPreemptive Arg #{j+1}: {p}\n"
92 | temp += "\n"
93 | f.write(f'{paper_authors[i].focus} paper:\n{temp}\n\n')
94 |
95 | self.topics = moderator.generate_topics(round=self, parent_topic=topic, paper_authors=paper_authors)
96 |
97 | # for subtopic in self.topics:
98 | # self.children.append(DebateNode(subtopic, parent=self))
99 |
100 |
101 | self.children.append(DebateNode(self.topics, parent=self))
102 | return self.children
103 |
104 |
105 | def conduct_debate(self, paper_authors: List[PaperAuthor]):
106 |
107 | convo_history = f"Debate Topic Information:\n\t- Topics: {self.round_topic['topic_title']}\n\t- Topic Description: {self.round_topic['topic_description']}\n\n"
108 |
109 | # each paper presents their arguments
110 |         convo_history += "Debate History:\n\n"  # append, so the topic header built above is preserved
111 | for author in paper_authors:
112 | opposition = paper_authors[1-author.id]
113 |
114 | print(f"\nPRESENT ARGUMENT FOR AUTHOR {author.id}:\n")
115 | author_arg = author.present_argument(debate_node=self, parent_debate_node=self.parent, opposition=opposition)
116 | self.init_arguments[author.id] = author_arg
117 | convo_history += f"\t-Author {author.id}: I argue that {author_arg['argument_title'].lower()}. {author_arg['description']}\n"
118 |
119 | convo_history += "\n"
120 | # each paper responds to opposing side's arguments
121 | for author in paper_authors:
122 | opposition = paper_authors[1-author.id]
123 |
124 | print(f"\nRESPOND ARGUMENT FOR AUTHOR {author.id}:\n")
125 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
126 |
127 | author_response = author.respond_to_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
128 | self.response[author.id] = author_response
129 | convo_history += f"\t-Author {author.id}: {author_response['author_response']}\n"
130 |
131 | convo_history += "\n"
132 | # each paper revises their arguments
133 | for author in paper_authors:
134 | opposition = paper_authors[1-author.id]
135 |
136 | print(f"\nREVISE ARGUMENT FOR AUTHOR {author.id}:\n")
137 | author_history = convo_history.replace(f'Author {author.id}:', "You:").replace(f'Author {1-author.id}:', "Opposition:")
138 | author_revision = author.revise_argument(author_history, debate_node=self, parent_debate_node=self.parent, opposition=opposition)
139 | self.final_arguments[author.id] = author_revision
140 | convo_history += f"\t-Author {author.id}: I argue that {author_revision['revised_argument_title'].lower()}. {author_revision['revised_argument_description']}\n"
141 |
142 | return convo_history
143 |
144 | def expand_node(self, parent_node, new_node):
145 | parent_node.children.append(new_node)
146 |         new_node.parent = parent_node  # the attribute defined in __init__ is 'parent'
147 |
148 |
--------------------------------------------------------------------------------
/tree_of_debate_no_tree.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import os
3 | # os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3"
4 | from no_tree.debate import DebateNode
5 | from no_tree.paper_details import Paper
6 | from no_tree.persona import PaperAuthor
7 | from no_tree.moderator import Moderator
8 | import argparse
9 | from typing import List
10 | from vllm import LLM
11 | import os
12 | import json
13 | from no_tree.data_pairer import parse_papers, parse_papers_url
14 |
15 | def print_path(node: DebateNode, prefix=""):
16 | if len(node.children) == 0:
17 | # return prefix + node.round_topic['topic_title']
18 | out = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
19 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
20 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
21 |
22 | """
23 | node_dict = {'topic':node.round_topic['topic_title'],
24 | 'description':node.round_topic['topic_description'],
25 | 'author_0_argument': f"{node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}",
26 | 'author_1_argument': f"{node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}"}
27 | return out, node_dict
28 |
29 | elif node.parent is None:
30 | node_dict = {'topic':node.round_topic['topic_title'], 'children':[]}
31 | path = prefix + node.round_topic['topic_title'] + "\n\n"
32 |
33 | else:
34 | path = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
35 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
36 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
37 |
38 | """
39 | node_dict = {'topic':node.round_topic['topic_title'],
40 | 'description':node.round_topic['topic_description'],
41 | 'author_0_argument': f"{node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}",
42 | 'author_1_argument': f"{node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}",
43 | 'children':[]}
44 |
45 | for child in node.children:
46 | child_path, child_dict = print_path(child, prefix + "\t")
47 | path += child_path + "\n"
48 | node_dict['children'].append(child_dict)
49 |
50 | return path, node_dict
51 |
52 | def topic_dict_to_str(topic):
53 | if topic['topic_title'] == topic['topic_description']:
54 | return topic['topic_title']
55 | else:
56 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
57 | # return topic['argument_title']
58 |
59 | def collect_evidence(evidence, subtrees):
60 | for c in subtrees:
61 | for author_id, e in c.evidence.items():
62 | evidence[author_id].extend(e)
63 | return evidence
64 |
65 |
66 | def run_no_tree_code(args, f_pap, c_pap, model_server):
67 |
68 | focus_paper = PaperAuthor(
69 | model = model_server,
70 | paper = Paper(f_pap),
71 | focus=True,
72 | id=0,
73 | log_dir=args.log_dir
74 | )
75 |
76 | cited_paper = PaperAuthor(
77 | model = model_server,
78 | paper = Paper(c_pap),
79 | focus=False,
80 | id=1,
81 | log_dir=args.log_dir
82 | )
83 |
84 | moderator = Moderator(model_server, args.log_dir)
85 |
86 | paper_authors = [focus_paper, cited_paper]
87 | leaf_node_label = {'topic_title': args.topic, 'topic_description': args.topic}
88 |
89 | if args.log_dir != "":
90 | with open(os.path.join(args.log_dir, 'self_deliberation.txt'), 'w') as f:
91 | f.write(f'Topic: {args.topic}\n\n')
92 |
93 | # each node has a topic
94 | root_node = DebateNode(leaf_node_label)
95 | all_evidence = {p.id:[] for p in paper_authors}
96 |
97 | subtrees = root_node.conduct_self_deliberation(leaf_node_label, paper_authors, moderator, log=args.log_dir) # k new, finer topics to discuss
98 | all_evidence = collect_evidence(all_evidence, subtrees)
99 |
100 | conversation_history = []
101 |
102 | queue_of_rounds: List[DebateNode] = []
103 | queue_of_rounds.extend(subtrees)
104 |
105 | debated_rounds = [root_node]
106 |
107 | depth = 0
108 | max_depth = 5
109 |
110 | while len(queue_of_rounds) > 0:
111 | round = queue_of_rounds.pop(0)
112 | debated_rounds.append(round)
113 | conversation = round.conduct_debate([focus_paper, cited_paper])
114 | conversation_history.append(conversation)
115 | if moderator.is_expand(round, conversation) and depth < max_depth:
116 | new_subtrees = round.conduct_self_deliberation(round.round_topic, paper_authors, moderator)
117 | all_evidence = collect_evidence(all_evidence, new_subtrees)
118 | queue_of_rounds.extend(new_subtrees)
119 | depth += 1
120 |
121 | conversation_history = ''.join(conversation_history)
122 | with open(f'{args.log_dir}/{args.experiment}_conversation_history.txt', 'w') as f:
123 | f.write(conversation_history)
124 |
125 | with open(f'{args.log_dir}/{args.experiment}_evidence.txt', 'w') as f:
126 | for author_id, e in all_evidence.items():
127 | unique_e = list(set(e))
128 | f.write(str(unique_e))
129 | f.write('\n')
130 |
131 | # with open('temp.pkl', 'rb') as f:
132 | # queue_of_rounds, debated_rounds, conversation_history, root_node = pickle.load(f)
133 |
134 | # similarities, differences = [], []
135 | # debated_rounds.extend(queue_of_rounds)
136 | # for round in debated_rounds:
137 | # if len(round.children) > 0:
138 | # similarities.append(topic_dict_to_str(round.round_topic))
139 | # else:
140 | # differences.append(topic_dict_to_str(round.round_topic))
141 |
142 |
143 | # summary = moderator.summarize_debate(conversation_history, similarities, differences)
144 |
145 | # with open(f'{args.log_dir}/summary.txt', 'w') as f:
146 | # f.write(summary)
147 |
148 | paths, tree_dict = print_path(root_node)
149 | with open(f'{args.log_dir}/{args.experiment}_path.txt', 'w') as f:
150 | f.write("\n\n\n\n\n")
151 | f.write("PATHS:\n")
152 | f.write(paths)
153 |
154 |
155 | with open(f'{args.log_dir}/{args.experiment}_tree.json', 'w', encoding='utf-8') as f:
156 | json.dump(tree_dict, f, ensure_ascii=False, indent=4)
157 |
158 | path_summary = moderator.summarize_path_debate(paper_authors, leaf_node_label, json.dumps(tree_dict, indent=2))
159 | with open(f'{args.log_dir}/{args.experiment}_summary.txt', 'w') as f:
160 | f.write(path_summary)
161 |
162 | # with open('temp.pkl', 'wb+') as f:
163 | # pickle.dump([queue_of_rounds, debated_rounds, conversation_history, root_node, similarities, differences], f)
164 |
165 |
166 | if __name__ == '__main__':
167 | parser = argparse.ArgumentParser()
168 | parser.add_argument("--focus_paper", default="2406_11709")
169 | parser.add_argument("--cited_paper", default="2310_10648")
170 | parser.add_argument("--topic", default="wireless sensing of human activity")
171 | parser.add_argument("--log_dir", default="logs")
172 | parser.add_argument("--download_dir", default="/")
173 | parser.add_argument('--no_arxiv', action=argparse.BooleanOptionalAction)
174 |
175 | args = parser.parse_args()
176 |     args.experiment = "no_tree"  # output-file prefix used below; the parser above does not define --experiment
177 | if args.no_arxiv:
178 | args.log_dir = f"{args.log_dir}/{args.focus_paper.split('.json')[0]}-{args.cited_paper.split('.json')[0]}"
179 | else:
180 | args.log_dir = f"{args.log_dir}/{args.focus_paper.split('.json')[0]}-{args.cited_paper.split('.json')[0]}"
181 |
182 | if not os.path.exists(args.log_dir):
183 | os.makedirs(args.log_dir)
184 |
185 | parse_papers(args.focus_paper, args.cited_paper)
186 |
187 | with open('data.json', 'r') as file:
188 | data = json.load(file)
189 |
190 | model_server = LLM(model="nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",tensor_parallel_size=4,max_num_seqs=256,enable_prefix_caching=True)
191 | # model_server = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct",tensor_parallel_size=2,max_num_seqs=100,enable_prefix_caching=True)
192 |
193 | for item in data:
194 |         run_no_tree_code(args, item['focus'], item['cited'], model_server)
195 |
196 |
--------------------------------------------------------------------------------
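A hypothetical helper (not part of the repository) for consuming the tree_dict that print_path above writes to tree.json: leaf nodes omit the 'children' key and carry both authors' final revised arguments, so a recursive walk can flatten the debate tree into rows.

def leaf_arguments(node, path=()):
    """Yield (topic path, author 0's argument, author 1's argument) for each leaf."""
    children = node.get('children', [])
    here = path + (node['topic'],)
    if not children:
        yield (" > ".join(here),
               node.get('author_0_argument', ''),
               node.get('author_1_argument', ''))
    for child in children:
        yield from leaf_arguments(child, here)

# e.g., after tree_dict = json.load(open('logs/<pair>/tree.json')):
#     for topic, arg0, arg1 in leaf_arguments(tree_dict):
#         print(topic)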
/tree_of_debate_no_delib.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import os
3 | # os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3"
4 | from no_delib.debate import DebateNode
5 | from no_delib.paper_details import Paper
6 | from no_delib.persona import PaperAuthor
7 | from no_delib.moderator import Moderator
8 | import argparse
9 | from typing import List
10 | from vllm import LLM
11 | import os
12 | import json
13 | from no_delib.data_pairer import parse_papers
14 |
15 | def print_path_old(node: DebateNode, prefix=""):
16 | if len(node.children) == 0:
17 | # return prefix + node.round_topic['topic_title']
18 | out = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
19 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
20 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
21 |
22 | """
23 | return out
24 | elif node.parent is None:
25 | path = prefix + node.round_topic['topic_title'] + "\n\n"
26 | else:
27 | path = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
28 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
29 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
30 |
31 | """
32 | for child in node.children:
33 |             path += print_path_old(child, prefix + "\t") + "\n"  # recurse with the old printer, which returns a plain string
34 |
35 | return path
36 |
37 | def print_path(node: DebateNode, prefix=""):
38 | if len(node.children) == 0:
39 | # return prefix + node.round_topic['topic_title']
40 | out = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
41 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
42 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
43 |
44 | """
45 | node_dict = {'topic':node.round_topic['topic_title'],
46 | 'description':node.round_topic['topic_description'],
47 | 'author_0_argument': f"{node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}",
48 | 'author_1_argument': f"{node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}"}
49 | return out, node_dict
50 |
51 | elif node.parent is None:
52 | node_dict = {'topic':node.round_topic['topic_title'], 'children':[]}
53 | path = prefix + node.round_topic['topic_title'] + "\n\n"
54 |
55 | else:
56 | path = f"""{prefix}{node.round_topic['topic_title']}: {node.round_topic['topic_description']}
57 | {prefix}Author 0's Argument: {node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}
58 | {prefix}Author 1's Argument: {node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}
59 |
60 | """
61 | node_dict = {'topic':node.round_topic['topic_title'],
62 | 'description':node.round_topic['topic_description'],
63 | 'author_0_argument': f"{node.final_arguments[0]['revised_argument_title']}. {node.final_arguments[0]['revised_argument_description']}",
64 | 'author_1_argument': f"{node.final_arguments[1]['revised_argument_title']}. {node.final_arguments[1]['revised_argument_description']}",
65 | 'children':[]}
66 |
67 | for child in node.children:
68 | child_path, child_dict = print_path(child, prefix + "\t")
69 | path += child_path + "\n"
70 | node_dict['children'].append(child_dict)
71 |
72 | return path, node_dict
73 |
74 | def topic_dict_to_str(topic):
75 | if topic['topic_title'] == topic['topic_description']:
76 | return topic['topic_title']
77 | else:
78 | return f"{topic['topic_title']}: {topic['topic_description']}." # keeping this in case we want to represent topics with the title and description
79 | # return topic['argument_title']
80 |
81 | def collect_evidence(evidence, subtrees):
82 | for c in subtrees:
83 | for author_id, e in c.evidence.items():
84 | evidence[author_id].extend(e)
85 | return evidence
86 |
87 |
88 | def run_no_delib_code(args, f_pap, c_pap, model_server):
89 |
90 | focus_paper = PaperAuthor(
91 | model = model_server,
92 | paper = Paper(f_pap),
93 | focus=True,
94 | id=0,
95 | log_dir=args.log_dir
96 | )
97 |
98 | cited_paper = PaperAuthor(
99 | model = model_server,
100 | paper = Paper(c_pap),
101 | focus=False,
102 | id=1,
103 | log_dir=args.log_dir
104 | )
105 |
106 | moderator = Moderator(model_server, args.log_dir)
107 |
108 | paper_authors = [focus_paper, cited_paper]
109 | leaf_node_label = {'topic_title': args.topic, 'topic_description': args.topic}
110 |
111 | if args.log_dir != "":
112 | with open(os.path.join(args.log_dir, f'self_deliberation.txt'), 'w') as f:
113 | f.write(f'Topic: {args.topic}\n\n')
114 |
115 | # each node has a topic
116 | root_node = DebateNode(leaf_node_label)
117 | all_evidence = {p.id:[] for p in paper_authors}
118 |
119 | subtrees = root_node.conduct_self_deliberation(leaf_node_label, paper_authors, moderator, log=args.log_dir) # k new, finer topics to discuss
120 | all_evidence = collect_evidence(all_evidence, subtrees)
121 |
122 | conversation_history = []
123 |
124 | queue_of_rounds: List[DebateNode] = []
125 | queue_of_rounds.extend(subtrees)
126 |
127 | debated_rounds = [root_node]
128 |
129 | depth = 0
130 | max_depth = 3
131 |
132 | while len(queue_of_rounds) > 0:
133 | round = queue_of_rounds.pop(0)
134 | debated_rounds.append(round)
135 | conversation = round.conduct_debate([focus_paper, cited_paper])
136 | conversation_history.append(conversation)
137 | if moderator.is_expand(round, conversation) and depth < max_depth:
138 | new_subtrees = round.conduct_self_deliberation(round.round_topic, paper_authors, moderator)
139 | all_evidence = collect_evidence(all_evidence, new_subtrees)
140 | queue_of_rounds.extend(new_subtrees)
141 | depth += 1
142 |
143 | conversation_history = ''.join(conversation_history)
144 | with open(f'{args.log_dir}/{args.experiment}_conversation_history.txt', 'w') as f:
145 | f.write(conversation_history)
146 |
147 | with open(f'{args.log_dir}/{args.experiment}_evidence.txt', 'w') as f:
148 | for author_id, e in all_evidence.items():
149 | unique_e = list(set(e))
150 | f.write(str(unique_e))
151 | f.write('\n')
152 |
153 | paths, tree_dict = print_path(root_node)
154 | with open(f'{args.log_dir}/{args.experiment}_path.txt', 'w') as f:
155 | f.write("\n\n\n\n\n")
156 | f.write("PATHS:\n")
157 | f.write(paths)
158 |
159 | path_summary = moderator.summarize_path_debate(paper_authors, leaf_node_label, json.dumps(tree_dict, indent=2))
160 | with open(f'{args.log_dir}/{args.experiment}_summary.txt', 'w') as f:
161 | f.write(path_summary)
162 |
163 | # with open('temp.pkl', 'wb+') as f:
164 | # pickle.dump([queue_of_rounds, debated_rounds, conversation_history, root_node, similarities, differences], f)
165 |
166 |
167 | if __name__ == '__main__':
168 | parser = argparse.ArgumentParser()
169 | parser.add_argument("--focus_paper", default="2406_11709")
170 | parser.add_argument("--cited_paper", default="2310_10648")
171 | parser.add_argument("--topic", default="helping students fix their mistakes")
172 | parser.add_argument("--log_dir", default="logs")
173 | parser.add_argument("--download_dir", default="/")
174 | args = parser.parse_args()
175 |     args.experiment = "no_delib"  # output-file prefix used below; the parser above does not define --experiment
176 | args.log_dir = f"{args.log_dir}/no_delib/{args.focus_paper.split('.json')[0]}-{args.cited_paper.split('.json')[0]}"
177 |
178 | if not os.path.exists(args.log_dir):
179 | os.makedirs(args.log_dir)
180 |
181 | # if not os.path.exists("data.json"):
182 |
183 | parse_papers(args.focus_paper, args.cited_paper)
184 |
185 | with open('data.json', 'r') as file:
186 | data = json.load(file)
187 |
188 | model_server = LLM(model="nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",tensor_parallel_size=4,max_num_seqs=256,enable_prefix_caching=True)
189 | # model_server = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct",tensor_parallel_size=2,max_num_seqs=100,enable_prefix_caching=True)
190 |
191 | for item in data:
192 |         run_no_delib_code(args, item['focus'], item['cited'], model_server)  # the function defined above also requires the model server
--------------------------------------------------------------------------------
/no_delib/.ipynb_checkpoints/moderator-checkpoint.py:
--------------------------------------------------------------------------------
1 | from vllm import SamplingParams
2 | from outlines.serve.vllm import JSONLogitsProcessor
3 | from unidecode import unidecode
4 | from persona import log_llm
5 | import json
6 | from pydantic import BaseModel, StringConstraints, conlist
7 | from typing_extensions import Annotated
8 | from debate import DebateNode
9 |
10 | class summary_schema(BaseModel):
11 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
12 |
13 | class expansion_schema(BaseModel):
14 | explanation: Annotated[str, StringConstraints(strip_whitespace=True)]
15 | progression_of_arguments: Annotated[str, StringConstraints(strip_whitespace=True)]
16 | meaningful_questions: Annotated[str, StringConstraints(strip_whitespace=True)]
17 | clear_winner: Annotated[str, StringConstraints(strip_whitespace=True)]
18 |
19 | class subtopic_schema(BaseModel):
20 | topic_title: Annotated[str, StringConstraints(strip_whitespace=True)]
21 | topic_description: Annotated[str, StringConstraints(strip_whitespace=True)]
22 | author_0_relevant_contributions: conlist(int, min_length=0,max_length=10)
23 | author_1_relevant_contributions: conlist(int, min_length=0,max_length=10)
24 |
25 |
26 | class subtopic_list_schema(BaseModel):
27 | subtopic_list : conlist(subtopic_schema, min_length=1, max_length=10)
28 |
29 | def arg_dict_to_str(args, arg_type=True):
30 | arguments = ""
31 | for i, key in enumerate(args.keys()):
32 | if arg_type:
33 | arguments += f"Author {(i)}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
34 | else:
35 | arguments += f"Author {(i)}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
36 |
37 | return arguments.strip()
38 |
39 | def format_self_deliberation(debate_node, paper_authors):
40 | out = ""
41 | for author in paper_authors:
42 | out += f"Author {author.id} Paper Title: {author.paper.title}\n"
43 | out += f"Author {author.id} Paper Abstract: {author.paper.abstract}\n\n"
44 | out += f"Author {author.id} Paper Introduction: {author.paper.introduction}\n\n"
45 |
46 | for no, arg in enumerate(debate_node.self_delib[author.id]):
47 | out += f"Author {author.id} Paper's Contribution #{no+1}: {arg['argument_title']}: {arg['description']}\n"
48 |
49 | return out
50 |
51 | class Moderator:
52 | def __init__(self, model, log_dir):
53 | self.model = model # define model - Llama 3.
54 | self.log_dir = log_dir
55 |
56 | def generate_topics(self, round: DebateNode, parent_topic, paper_authors, k=5, temperature=0.3, top_p=0.99):
57 | topic_title = parent_topic['topic_title']
58 | prompt = f"""You are a fair and balanced moderator of a debate between two authors determining their respective novel contributions towards the following topic:
59 | Topic: {parent_topic['topic_title']}
60 | Topic Description: {parent_topic['topic_description']}
61 |
62 | Here are the two papers and their claimed novel contributions:
63 |
64 | {format_self_deliberation(round, paper_authors)}
65 |
66 | Based on each of the authors' claimed novelties, you must determine the most meaningful, diverse set of subtopics within the parent topic, {topic_title}, which best cover the types of contributions each of the papers makes. Remember that for each of your selected topics, the papers will be debating which of them makes the better contribution towards the topic. Hence, for each of your subtopics, cite the integer IDs of any relevant contributions from Author 0 (author_0_relevant_contributions) or Author 1 (author_1_relevant_contributions). At least one of these lists should be non-empty. Overall, our goal is to identify how novel Author 0's paper's contributions towards topic {topic_title} are by individually considering their contributions towards your subtopics.
67 |
68 | Output your subtopics (up to {k}) in the following JSON format:
69 | {{"subtopic_list":
70 | [
71 | {{
72 |         "topic_title": <short string title of the subtopic>,
73 |         "topic_description": <1-2 sentence string explaining the subtopic and what you feel would be most helpful for the papers to debate within the subtopic>,
74 |         "author_0_relevant_contributions": <list of integer IDs of Author 0's relevant contributions>,
75 |         "author_1_relevant_contributions": <list of integer IDs of Author 1's relevant contributions>
76 | }},
77 | ...
78 | ]
79 | }}
80 |
81 | """
82 | logits_processor = JSONLogitsProcessor(schema=subtopic_list_schema, llm=self.model.llm_engine)
83 | sampling_params = SamplingParams(max_tokens=2000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
84 |
85 | outputs = unidecode(self.model.generate(prompt,
86 | sampling_params=sampling_params,
87 | use_tqdm=False)[0].outputs[0].text)
88 |
89 | log_llm(self.log_dir, prompt, outputs)
90 | outputs = json.loads(outputs)
91 |
92 | return outputs['subtopic_list']
93 |
94 | def is_expand(self, round: DebateNode, history, temperature=0.1, top_p=0.99):
95 | """
96 | Based on the previous arguments and the new arguments, determine if any progress has been made.
97 | """
98 | prev_args = arg_dict_to_str(round.init_arguments, True)
99 | new_args = arg_dict_to_str(round.final_arguments, False)
100 | logits_processor = JSONLogitsProcessor(schema=expansion_schema, llm=self.model.llm_engine)
101 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
102 |
103 | round_topic = round.round_topic
104 |
105 |         prompt = f"""You are a moderator facilitating a debate in which two papers are debating who makes the better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}\n
106 | -------
107 |
108 | {history}
109 |
110 | -------
111 |
112 | Below, you are given the previous set of arguments and the current set of arguments.
113 |
114 | \"previous arguments\":\n{prev_args}
115 |
116 | \"current arguments\":\n{new_args}
117 |
118 | -------
119 |
120 | You must determine whether progress is being made. DO NOT focus on the language being used. Focus on the content of the arguments. Specifically, determine the following (answer "Yes" or "No" for each):
121 | 1. progression_of_arguments: Are these arguments sufficiently different to necessitate further debate? Are there new, deeper concepts being discussed between the two sets of arguments? Return "Yes" or "No".
122 | 2. meaningful_questions: Within the debate history, each author acknowledges the other's arguments and may ask clarifying questions accordingly. Do you believe that the clarifying questions have not been sufficiently addressed already and would be important to answer through further debate? If no questions are raised in the debate history by either author, return "No"; otherwise "Yes".
123 | 3. clear_winner: Do you believe that one author has clearly won the debate, so that it does not need to be further deconstructed (in order to determine which components within each author's contributions are truly better)? Return "Yes" or "No".
124 |
125 | Output your argument in the following JSON format:
126 | {{
127 |     "explanation": <2-5 sentence string to explain your reasoning about whether further debate is necessary>,
128 |     "progression_of_arguments": <Yes or No>,
129 |     "meaningful_questions": <Yes or No>,
130 |     "clear_winner": <Yes or No>
131 | }}
132 | """
133 | outputs = unidecode(self.model.generate(prompt,
134 | sampling_params=sampling_params,
135 | use_tqdm=False)[0].outputs[0].text).lower()
136 | print(f'IS EXPAND {outputs}')
137 | log_llm(self.log_dir, prompt, outputs)
138 | outputs = json.loads(outputs)
139 |
140 | return (("yes" in outputs['progression_of_arguments']) or ("yes" in outputs['meaningful_questions'])) and ("no" in outputs['clear_winner'])
141 |
142 | def summarize_path_debate(self, paper_authors, root_topic, path, temperature=0.4, top_p=0.99):
143 |
144 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
145 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
146 |
147 | prompt = f"""Two authors are debating their respective novelties with respect to the following topic:
148 | Topic: {root_topic['topic_title']}
149 |
150 | Author 0's paper title is: {paper_authors[0].paper.title}
151 | Author 1's paper title is: {paper_authors[1].paper.title}
152 |
153 | Here is a breakdown of their debates in tree format. At each tree node, we provide the topic_title:topic description, Author 0's corresponding argument and Author 1's corresponding argument:
154 |
155 | {path}
156 |
157 | Based on the debate breakdown, output a paragraph-long synthesis of the debate which summarizes the similarities and differences between the papers. Structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes each paper unique). Focus more on the differences than the similarities.
158 |
159 | Format your output in the following JSON schema:
160 | {{
161 | "summary": <5-10 sentence string to summarize the similarities and differences between the two papers>
162 | }}
163 | """
164 | # conversation = history.extend(conversation)
165 | outputs = unidecode(self.model.generate(prompt,
166 | sampling_params=sampling_params,
167 | use_tqdm=False)[0].outputs[0].text)
168 | log_llm(self.log_dir, prompt, outputs)
169 | outputs = json.loads(outputs)['summary']
170 | return outputs
171 |
--------------------------------------------------------------------------------
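The expansion rule inside Moderator.is_expand (identical in the checkpoint above and the module below) can be read as a small pure function over the parsed, lower-cased JSON verdict. A hypothetical restatement for clarity, not part of the repository:

def should_expand(verdict: dict) -> bool:
    """Expand a round iff arguments are progressing or questions remain open, and no author has clearly won."""
    progressing = "yes" in verdict['progression_of_arguments']
    open_questions = "yes" in verdict['meaningful_questions']
    undecided = "no" in verdict['clear_winner']
    return (progressing or open_questions) and undecided

assert should_expand({'progression_of_arguments': 'yes', 'meaningful_questions': 'no', 'clear_winner': 'no'})
assert not should_expand({'progression_of_arguments': 'yes', 'meaningful_questions': 'yes', 'clear_winner': 'yes'})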
/no_delib/moderator.py:
--------------------------------------------------------------------------------
1 | from vllm import SamplingParams
2 | from outlines.serve.vllm import JSONLogitsProcessor
3 | from unidecode import unidecode
4 | from no_delib.persona import log_llm
5 | import json
6 | from pydantic import BaseModel, StringConstraints, conlist
7 | from typing_extensions import Annotated
8 | from no_delib.debate import DebateNode
9 |
10 | class summary_schema(BaseModel):
11 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
12 |
13 | class expansion_schema(BaseModel):
14 | explanation: Annotated[str, StringConstraints(strip_whitespace=True)]
15 | progression_of_arguments: Annotated[str, StringConstraints(strip_whitespace=True)]
16 | meaningful_questions: Annotated[str, StringConstraints(strip_whitespace=True)]
17 | clear_winner: Annotated[str, StringConstraints(strip_whitespace=True)]
18 |
19 | class subtopic_schema(BaseModel):
20 | topic_title: Annotated[str, StringConstraints(strip_whitespace=True)]
21 | topic_description: Annotated[str, StringConstraints(strip_whitespace=True)]
22 | author_0_relevant_contributions: conlist(int, min_length=0,max_length=10)
23 | author_1_relevant_contributions: conlist(int, min_length=0,max_length=10)
24 |
25 |
26 | class subtopic_list_schema(BaseModel):
27 | subtopic_list : conlist(subtopic_schema, min_length=1, max_length=10)
28 |
29 | def arg_dict_to_str(args, arg_type=True):
30 | arguments = ""
31 | for i, key in enumerate(args.keys()):
32 | if arg_type:
33 | arguments += f"Author {(i)}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
34 | else:
35 | arguments += f"Author {(i)}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
36 |
37 | return arguments.strip()
38 |
39 | def format_self_deliberation(debate_node, paper_authors):
40 | out = ""
41 | for author in paper_authors:
42 | out += f"Author {author.id} Paper Title: {author.paper.title}\n"
43 | out += f"Author {author.id} Paper Abstract: {author.paper.abstract}\n\n"
44 | out += f"Author {author.id} Paper Introduction: {author.paper.introduction}\n\n"
45 |
46 | for no, arg in enumerate(debate_node.self_delib[author.id]):
47 | out += f"Author {author.id} Paper's Contribution #{no+1}: {arg['argument_title']}: {arg['description']}\n"
48 |
49 | return out
50 |
51 | class Moderator:
52 | def __init__(self, model, log_dir):
53 | self.model = model # define model - Llama 3.
54 | self.log_dir = log_dir
55 |
56 | def generate_topics(self, round: DebateNode, parent_topic, paper_authors, k=5, temperature=0.3, top_p=0.99):
57 | topic_title = parent_topic['topic_title']
58 | prompt = f"""You are a fair and balanced moderator of a debate between two authors determining their respective novel contributions towards the following topic:
59 | Topic: {parent_topic['topic_title']}
60 | Topic Description: {parent_topic['topic_description']}
61 |
62 | Here are the two papers and their claimed novel contributions:
63 |
64 | {format_self_deliberation(round, paper_authors)}
65 |
66 | Based on each of the authors' claimed novelties, you must determine the most meaningful, diverse set of subtopics within the parent topic, {topic_title}, which best cover the types of contributions each of the papers makes. Remember that for each of your selected topics, the papers will be debating which of them makes the better contribution towards the topic. Hence, for each of your subtopics, cite the integer IDs of any relevant contributions from Author 0 (author_0_relevant_contributions) or Author 1 (author_1_relevant_contributions). At least one of these lists should be non-empty. Overall, our goal is to identify how novel Author 0's paper's contributions towards topic {topic_title} are by individually considering their contributions towards your subtopics.
67 |
68 | Output your subtopics (up to {k}) in the following JSON format:
69 | {{"subtopic_list":
70 | [
71 | {{
72 |         "topic_title": <short string title of the subtopic>,
73 |         "topic_description": <1-2 sentence string explaining the subtopic and what you feel would be most helpful for the papers to debate within the subtopic>,
74 |         "author_0_relevant_contributions": <list of integer IDs of Author 0's relevant contributions>,
75 |         "author_1_relevant_contributions": <list of integer IDs of Author 1's relevant contributions>
76 | }},
77 | ...
78 | ]
79 | }}
80 |
81 | """
82 | logits_processor = JSONLogitsProcessor(schema=subtopic_list_schema, llm=self.model.llm_engine)
83 | sampling_params = SamplingParams(max_tokens=2000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
84 |
85 | outputs = unidecode(self.model.generate(prompt,
86 | sampling_params=sampling_params,
87 | use_tqdm=False)[0].outputs[0].text)
88 |
89 | log_llm(self.log_dir, prompt, outputs)
90 | outputs = json.loads(outputs)
91 |
92 | return outputs['subtopic_list']
93 |
94 | def is_expand(self, round: DebateNode, history, temperature=0.1, top_p=0.99):
95 | """
96 | Based on the previous arguments and the new arguments, determine if any progress has been made.
97 | """
98 | prev_args = arg_dict_to_str(round.init_arguments, True)
99 | new_args = arg_dict_to_str(round.final_arguments, False)
100 | logits_processor = JSONLogitsProcessor(schema=expansion_schema, llm=self.model.llm_engine)
101 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
102 |
103 | round_topic = round.round_topic
104 |
105 | prompt = f"""You are a moderator facilitating a debate in which two papers are debating which makes the better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}\n
106 | -------
107 |
108 | {history}
109 |
110 | -------
111 |
112 | Below, you are given the previous set of arguments and the current set of arguments.
113 |
114 | \"previous arguments\":\n{prev_args}
115 |
116 | \"current arguments\":\n{new_args}
117 |
118 | -------
119 |
120 | You must determine whether progress is being made. DO NOT focus on the language being used. Focus on the content of the arguments. Specifically, determine the following (answer "Yes" or "No" for each):
121 | 1. progression_of_arguments: Are these arguments sufficiently different to necessitate further debate? Are there new, deeper concepts being discussed between the two sets of arguments? Return "Yes" or "No".
122 | 2. meaningful_questions: Within the debate history, each author acknowledges the other's arguments and may ask clarifying questions accordingly. Do you believe that the clarifying questions have not yet been sufficiently addressed and would be important to answer through further debate? If no questions are raised in the debate history by either author, return "No"; otherwise, return "Yes".
123 | 3. clear_winner: Do you believe that one author has clearly won the debate, such that it does not need to be deconstructed further (in order to determine which components within each author's contributions are truly better)? Return "Yes" or "No".
124 |
125 | Output your argument in the following JSON format:
126 | {{
127 | "explanation": <2-5 sentence string to explain your reasoning about whether further debate is necessary>,
128 | "progression_of_arguments": ,
129 | "meaningful_questions": ,
130 | "clear_winner":
131 | }}
132 | """
133 | outputs = unidecode(self.model.generate(prompt,
134 | sampling_params=sampling_params,
135 | use_tqdm=False)[0].outputs[0].text).lower()
136 | print(f'IS EXPAND {outputs}')
137 | log_llm(self.log_dir, prompt, outputs)
138 | outputs = json.loads(outputs)
139 |
140 | return (("yes" in outputs['progression_of_arguments']) or ("yes" in outputs['meaningful_questions'])) and ("no" in outputs['clear_winner'])
141 |
142 | def summarize_path_debate(self, paper_authors, root_topic, tree, temperature=0.4, top_p=0.99):
143 |
144 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
145 | sampling_params = SamplingParams(max_tokens=2048, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
146 |
147 | prompt = f"""Two authors are debating their respective novelties with respect to the following topic:
148 | Topic: {root_topic['topic_title']}
149 |
150 | Author 0's paper title is: {paper_authors[0].paper.title}
151 | Author 1's paper title is: {paper_authors[1].paper.title}
152 |
153 | Here is a breakdown of their debates in a tree dictionary format. The debate tree can be interpreted as a debate starting from the root node topic and branching out into different child nodes based on different arguments brought up by each author. A debate path ends when either there is no progression in the authors' arguments or an author has clearly won with respect to novelty. At each tree node, we provide the topic, the 'description' of the topic, Author 0's corresponding argument (author_0_argument), and Author 1's corresponding argument (author_1_argument) regarding the topic:
154 |
155 | {tree}
156 |
157 | Based on the debate breakdown, output an approximately paragraph-long synthesis of the debate which summarizes the similarities and differences between the papers. Loosely structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes the papers unique), strictly based on the information discussed within the debate. Focus more on the differences than the similarities.
158 |
159 | Format your output in the following JSON schema:
160 | {{
161 | "summary": <5-20 sentence string to summarize the similarities and differences between the two papers identified within the debate tree>
162 | }}
163 | """
164 | # conversation = history.extend(conversation)
165 | outputs = unidecode(self.model.generate(prompt,
166 | sampling_params=sampling_params,
167 | use_tqdm=False)[0].outputs[0].text)
168 | log_llm(self.log_dir, prompt, outputs)
169 | outputs = json.loads(outputs)['summary']
170 | return outputs
171 |
--------------------------------------------------------------------------------
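A note on the expansion rule: Moderator.is_expand above reduces the three lower-cased verdict fields to a single boolean. A minimal, self-contained sketch of that predicate (the helper name should_expand is hypothetical, not part of the codebase):

    # Hypothetical helper mirroring the return expression of Moderator.is_expand:
    # expand only if the arguments progressed or open questions remain,
    # and no author has clearly won yet (fields are lower-cased "yes"/"no").
    def should_expand(verdict: dict) -> bool:
        progressed = "yes" in verdict["progression_of_arguments"]
        open_questions = "yes" in verdict["meaningful_questions"]
        undecided = "no" in verdict["clear_winner"]
        return (progressed or open_questions) and undecided

    assert should_expand({"progression_of_arguments": "yes",
                          "meaningful_questions": "no",
                          "clear_winner": "no"})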
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/moderator.py:
--------------------------------------------------------------------------------
1 | from vllm import SamplingParams
2 | from outlines.serve.vllm import JSONLogitsProcessor
3 | from unidecode import unidecode
4 | from persona import log_llm
5 | import json
6 | from pydantic import BaseModel, StringConstraints, conlist
7 | from typing_extensions import Annotated
8 | from debate import DebateNode
9 |
10 | class summary_schema(BaseModel):
11 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
12 |
13 | # class expansion_schema(BaseModel):
14 | # explanation: Annotated[str, StringConstraints(strip_whitespace=True, min_length=5)]
15 | # is_expand: bool
16 |
17 | class expansion_schema(BaseModel):
18 | explanation: Annotated[str, StringConstraints(strip_whitespace=True)]
19 | progression_of_arguments: Annotated[str, StringConstraints(strip_whitespace=True)]
20 | meaningful_questions: Annotated[str, StringConstraints(strip_whitespace=True)]
21 | clear_winner: Annotated[str, StringConstraints(strip_whitespace=True)]
22 |
23 | class subtopic_schema(BaseModel):
24 | topic_title: Annotated[str, StringConstraints(strip_whitespace=True)]
25 | topic_description: Annotated[str, StringConstraints(strip_whitespace=True)]
26 | author_0_relevant_contributions: conlist(int, min_length=0,max_length=10)
27 | author_1_relevant_contributions: conlist(int, min_length=0,max_length=10)
28 |
29 |
30 | class subtopic_list_schema(BaseModel):
31 | subtopic_list: conlist(subtopic_schema, min_length=1, max_length=10)
32 |
33 | def arg_dict_to_str(args, arg_type=True):
34 | arguments = ""
35 | for i, key in enumerate(args.keys()):
36 | if arg_type:
37 | arguments += f"Author {i}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
38 | else:
39 | arguments += f"Author {i}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
40 |
41 | return arguments.strip()
42 |
43 | def format_evidence(list_evi, author, ids=None):
44 | text_evi = ""
45 | idx = 1
46 | for counter, item in enumerate(list_evi):
47 | if (ids is None) or ((counter + 1) in ids):
48 | text_evi += f"\t- Author {author.id}'s Supporting Evidence #{idx}. {item}\n"
49 | idx += 1
50 | return text_evi
51 |
52 | def format_preemption(author, list_pre):
53 | text_pre = f"\t- Author {1-author.id}'s relevant evidence to potentially counter the novelty of this contribution:\n"
54 | for e_id, e in enumerate(list_pre):
55 | text_pre += f"\t\t- Author {1-author.id}'s Counter Evidence #{e_id+1}: The opposition states, \"{e}\"\n"
56 | return text_pre
57 |
58 | def format_self_deliberation(debate_node, paper_authors):
59 | out = ""
60 | for author in paper_authors:
61 | out += f"Author {author.id} Paper Title: {author.paper.title}\n"
62 | out += f"Author {author.id} Paper Abstract: {author.paper.abstract}\n\n"
63 |
64 | for no, arg in enumerate(debate_node.self_delib[author.id]):
65 | out += f"Author {author.id} Paper's Contribution #{no+1}: {arg['argument_title']}: {arg['description']}\n"
66 | out += f"{format_evidence(debate_node.evidence[author.id], author, arg['evidence'])}"
67 | arg_key = f"{arg['argument_title']}: {arg['description']}"
68 | out += f"{format_preemption(author, debate_node.preemption[1-author.id][arg_key])}\n"
69 |
70 | return out
71 |
72 | class Moderator:
73 | def __init__(self, model, log_dir):
74 | self.model = model  # underlying LLM (e.g., Llama 3)
75 | self.log_dir = log_dir
76 |
77 | def generate_topics(self, round: DebateNode, parent_topic, paper_authors, k=5, temperature=0.3, top_p=0.99):
78 | topic_title = parent_topic['topic_title']
79 | prompt = f"""You are a fair and balanced moderator of a debate between two authors determining their respective novel contributions towards the following topic:
80 | Topic: {parent_topic['topic_title']}
81 | Topic Description: {parent_topic['topic_description']}
82 |
83 | Here are the two papers and their claimed novel contributions with corresponding evidence:
84 |
85 | {format_self_deliberation(round, paper_authors)}
86 |
87 | Based on each author's claimed novelties, evidence, and counter-evidence to the other's arguments, you must determine the most meaningful, diverse set of subtopics within the parent topic, {topic_title}, which best cover the types of contributions each paper makes. Remember that for each of your selected subtopics, the papers will debate which of them makes the better contribution towards that subtopic. Hence, for each of your subtopics, cite the integer IDs of any relevant contributions from Author 0 (author_0_relevant_contributions) or Author 1 (author_1_relevant_contributions). At least one of these lists should be non-empty. Overall, our goal is to identify how novel Author 0's paper's contributions towards topic {topic_title} are by individually considering their contributions towards your subtopics.
88 |
89 | Output your subtopics (up to {k}) in the following JSON format:
90 | {{"subtopic_list":
91 | [
92 | {{
93 | "topic_title": ,
94 | "topic_description": <1-2 sentence string explaining the subtopic and what you feel would be most helpful for the papers to debate within the subtopic>,
95 | "author_0_relevant_contributions": ,
96 | "author_1_relevant_contributions":
97 | }},
98 | ...
99 | ]
100 | }}
101 |
102 | """
103 | logits_processor = JSONLogitsProcessor(schema=subtopic_list_schema, llm=self.model.llm_engine)
104 | sampling_params = SamplingParams(max_tokens=2000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
105 |
106 | outputs = unidecode(self.model.generate(prompt,
107 | sampling_params=sampling_params,
108 | use_tqdm=False)[0].outputs[0].text)
109 |
110 | log_llm(self.log_dir, prompt, outputs)
111 | outputs = json.loads(outputs)
112 |
113 | return outputs['subtopic_list']
114 |
115 | def is_expand(self, round: DebateNode, history, temperature=0.1, top_p=0.99):
116 | """
117 | Based on the previous arguments and the new arguments, determine if any progress has been made.
118 | """
119 | prev_args = arg_dict_to_str(round.init_arguments, True)
120 | new_args = arg_dict_to_str(round.final_arguments, False)
121 | logits_processor = JSONLogitsProcessor(schema=expansion_schema, llm=self.model.llm_engine)
122 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
123 |
124 | round_topic = round.round_topic
125 |
126 | prompt = f"""You are a moderator facilitating a debate in which two papers are debating which makes the better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}\n
127 | -------
128 |
129 | {history}
130 |
131 | -------
132 |
133 | Below, you are given the previous set of arguments and the current set of arguments.
134 |
135 | \"previous arguments\":\n{prev_args}
136 |
137 | \"current arguments\":\n{new_args}
138 |
139 | -------
140 |
141 | You must determine whether progress is being made. DO NOT focus on the language being used. Focus on the content of the arguments. Specifically, determine the following (answer "Yes" or "No" for each):
142 | 1. progression_of_arguments: Are these arguments sufficiently different to necessitate further debate? Are there new, deeper concepts being discussed between the two sets of arguments? Return "Yes" or "No".
143 | 2. meaningful_questions: Within the debate history, each author acknowledges the other's arguments and may ask clarifying questions accordingly. Do you believe that the clarifying questions have not yet been sufficiently addressed and would be important to answer through further debate? If no questions are raised in the debate history by either author, return "No"; otherwise, return "Yes".
144 | 3. clear_winner: Do you believe that one author has clearly won the debate, such that it does not need to be deconstructed further (in order to determine which components within each author's contributions are truly better)? Return "Yes" or "No".
145 |
146 | Output your argument in the following JSON format:
147 | {{
148 | "explanation": <2-5 sentence string to explain your reasoning about whether further debate is necessary>,
149 | "progression_of_arguments": ,
150 | "meaningful_questions": ,
151 | "clear_winner":
152 | }}
153 | """
154 | outputs = unidecode(self.model.generate(prompt,
155 | sampling_params=sampling_params,
156 | use_tqdm=False)[0].outputs[0].text).lower()
157 | print(f'IS EXPAND {outputs}')
158 | log_llm(self.log_dir, prompt, outputs)
159 | outputs = json.loads(outputs)
160 |
161 | return (("yes" in outputs['progression_of_arguments']) or ("yes" in outputs['meaningful_questions'])) and ("no" in outputs['clear_winner'])
162 |
163 | def summarize_path_debate(self, paper_authors, root_topic, tree, temperature=0.4, top_p=0.99):
164 |
165 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
166 | sampling_params = SamplingParams(max_tokens=2000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
167 |
168 | prompt = f"""Two authors are debating their respective novelties with respect to the following topic:
169 | Topic: {root_topic['topic_title']}
170 |
171 | Author 0's paper title is: {paper_authors[0].paper.title}
172 | Author 1's paper title is: {paper_authors[1].paper.title}
173 |
174 | Here is a breakdown of their debates in a tree dictionary format. The debate tree can be interpreted as a debate starting from the root node topic and branching out into different child nodes based on different arguments brought up by each author. A debate path ends when either there is no progression in the authors' arguments or an author has clearly won with respect to novelty. At each tree node, we provide the topic, the 'description' of the topic, Author 0's corresponding argument (author_0_argument), and Author 1's corresponding argument (author_1_argument) regarding the topic:
175 |
176 | {tree}
177 |
178 | Based on the debate breakdown, output an approximately paragraph-long synthesis of the debate which summarizes the similarities and differences between the papers. Loosely structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes the papers unique), strictly based on the information discussed within the debate. Focus more on the differences than the similarities. ENSURE that your output summary is specific and detailed: no high-level, loose claims unsupported by evidence. Write it as if you were an expert on the topic.
179 |
180 | Format your output in the following JSON schema:
181 | {{
182 | "summary": <5-20 sentence string to summarize the similarities and differences between the two papers identified within the debate tree>
183 | }}
184 | """
185 | # conversation = history.extend(conversation)
186 | outputs = unidecode(self.model.generate(prompt,
187 | sampling_params=sampling_params,
188 | use_tqdm=False)[0].outputs[0].text)
189 | log_llm(self.log_dir, prompt, outputs)
190 | outputs = json.loads(outputs)['summary']
191 | return outputs
--------------------------------------------------------------------------------
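A note on the constrained decoding: JSONLogitsProcessor forces generations to match the pydantic schemas above, so the raw model text can be validated and parsed directly. A minimal sketch using the pydantic v2 API (the sample string and the ExpansionVerdict name are illustrative; the real schema is expansion_schema above):

    import json
    from pydantic import BaseModel, StringConstraints
    from typing_extensions import Annotated

    # Same shape as expansion_schema in moderator.py.
    class ExpansionVerdict(BaseModel):
        explanation: Annotated[str, StringConstraints(strip_whitespace=True)]
        progression_of_arguments: str
        meaningful_questions: str
        clear_winner: str

    # Hypothetical (well-formed) model output.
    sample = json.dumps({"explanation": "The revised arguments introduce new evaluation criteria.",
                         "progression_of_arguments": "Yes",
                         "meaningful_questions": "No",
                         "clear_winner": "No"})
    verdict = ExpansionVerdict.model_validate_json(sample)  # raises ValidationError if malformed
    print(verdict.clear_winner)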
/no_tree/moderator.py:
--------------------------------------------------------------------------------
1 | from vllm import SamplingParams
2 | from outlines.serve.vllm import JSONLogitsProcessor
3 | from unidecode import unidecode
4 | from no_tree.persona import log_llm
5 | import json
6 | from pydantic import BaseModel, StringConstraints, conlist
7 | from typing_extensions import Annotated
8 | from no_tree.debate import DebateNode
9 |
10 | class summary_schema(BaseModel):
11 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
12 |
13 | # class expansion_schema(BaseModel):
14 | # explanation: Annotated[str, StringConstraints(strip_whitespace=True, min_length=5)]
15 | # is_expand: bool
16 |
17 | class expansion_schema(BaseModel):
18 | explanation: Annotated[str, StringConstraints(strip_whitespace=True)]
19 | progression_of_arguments: Annotated[str, StringConstraints(strip_whitespace=True)]
20 | meaningful_questions: Annotated[str, StringConstraints(strip_whitespace=True)]
21 | clear_winner: Annotated[str, StringConstraints(strip_whitespace=True)]
22 |
23 | class subtopic_schema(BaseModel):
24 | topic_title: Annotated[str, StringConstraints(strip_whitespace=True)]
25 | topic_description: Annotated[str, StringConstraints(strip_whitespace=True)]
26 | author_0_relevant_contributions: conlist(int, min_length=0,max_length=10)
27 | author_1_relevant_contributions: conlist(int, min_length=0,max_length=10)
28 |
29 | def arg_dict_to_str(args, arg_type=True):
30 | arguments = ""
31 | for i, key in enumerate(args.keys()):
32 | if arg_type:
33 | arguments += f"Author {i}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
34 | else:
35 | arguments += f"Author {i}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
36 |
37 | return arguments.strip()
38 |
39 | def format_evidence(list_evi, author, ids=None):
40 | text_evi = ""
41 | idx = 1
42 | for counter, item in enumerate(list_evi):
43 | if (ids is None) or ((counter + 1) in ids):
44 | text_evi += f"\t- Author {author.id}'s Supporting Evidence #{idx}. {item}\n"
45 | idx += 1
46 | return text_evi
47 |
48 | def format_preemption(author, list_pre):
49 | text_pre = f"\t- Author {1-author.id}'s relevant evidence to potentially counter the novelty of this contribution:\n"
50 | for e_id, e in enumerate(list_pre):
51 | text_pre += f"\t\t- Author {1-author.id}'s Counter Evidence #{e_id+1}: The opposition states, \"{e}\"\n"
52 | return text_pre
53 |
54 | def format_self_deliberation(debate_node, paper_authors):
55 | out = ""
56 | for author in paper_authors:
57 | out += f"Author {author.id} Paper Title: {author.paper.title}\n"
58 | out += f"Author {author.id} Paper Abstract: {author.paper.abstract}\n\n"
59 |
60 | for no, arg in enumerate(debate_node.self_delib[author.id]):
61 | out += f"Author {author.id} Paper's Contribution #{no+1}: {arg['argument_title']}: {arg['description']}\n"
62 | out += f"{format_evidence(debate_node.evidence[author.id], author, arg['evidence'])}"
63 | arg_key = f"{arg['argument_title']}: {arg['description']}"
64 | out += f"{format_preemption(author, debate_node.preemption[1-author.id][arg_key])}\n"
65 |
66 | return out
67 |
68 | class Moderator:
69 | def __init__(self, model, log_dir):
70 | self.model = model  # underlying LLM (e.g., Llama 3)
71 | self.log_dir = log_dir
72 |
73 | def generate_topics(self, round: DebateNode, parent_topic, paper_authors, k=5, temperature=0.3, top_p=0.99):
74 | topic_title = parent_topic['topic_title']
75 | prompt = f"""You are a fair and balanced moderator of a debate between two authors determining their respective novel contributions towards the following topic:
76 | Topic: {parent_topic['topic_title']}
77 | Topic Description: {parent_topic['topic_description']}
78 |
79 | Here are the two papers and their claimed novel contributions with corresponding evidence:
80 |
81 | {format_self_deliberation(round, paper_authors)}
82 |
83 | Based on each author's claimed novelties, evidence, and counter-evidence to the other's arguments, you must determine the most meaningful, diverse set of subtopics within the parent topic, {topic_title}, which best cover the types of contributions each paper makes. Remember that for each of your selected subtopics, the papers will debate which of them makes the better contribution towards that subtopic. Hence, for each of your subtopics, cite the integer IDs of any relevant contributions from Author 0 (author_0_relevant_contributions) or Author 1 (author_1_relevant_contributions). At least one of these lists should be non-empty. Overall, our goal is to identify how novel Author 0's paper's contributions towards topic {topic_title} are by individually considering their contributions towards your subtopics.
84 |
85 | Output your subtopics (up to {k}) in the following JSON format:
86 | {{
87 | "topic_title": ,
88 | "topic_description": <1-2 sentence string explaining the subtopics and what you feel would be most helpful for the papers to debate within the subtopics>,
89 | "author_0_relevant_contributions": ,
90 | "author_1_relevant_contributions":
91 | }}
92 |
93 | """
94 | logits_processor = JSONLogitsProcessor(schema=subtopic_schema, llm=self.model.llm_engine)
95 | sampling_params = SamplingParams(max_tokens=3000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
96 |
97 | outputs = unidecode(self.model.generate(prompt,
98 | sampling_params=sampling_params,
99 | use_tqdm=False)[0].outputs[0].text)
100 |
101 | log_llm(self.log_dir, prompt, outputs)
102 | outputs = json.loads(outputs)
103 |
104 | return outputs
105 |
106 | def is_expand(self, round: DebateNode, history, temperature=0.1, top_p=0.99):
107 | """
108 | Based on the previous arguments and the new arguments, determine if any progress has been made.
109 | """
110 | prev_args = arg_dict_to_str(round.init_arguments, True)
111 | new_args = arg_dict_to_str(round.final_arguments, False)
112 | logits_processor = JSONLogitsProcessor(schema=expansion_schema, llm=self.model.llm_engine)
113 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
114 |
115 | round_topic = round.round_topic
116 |
117 | prompt = f"""You are a moderator facilitating a debate in which two papers are debating which makes the better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}\n
118 | -------
119 |
120 | {history}
121 |
122 | -------
123 |
124 | Below, you are given the previous set of arguments and the current set of arguments.
125 |
126 | \"previous arguments\":\n{prev_args}
127 |
128 | \"current arguments\":\n{new_args}
129 |
130 | -------
131 |
132 | You must determine whether progress is being made. DO NOT focus on the language being used. Focus on the content of the arguments. Specifically, determine the following (answer "Yes" or "No" for each):
133 | 1. progression_of_arguments: Are these arguments sufficiently different to necessitate further debate? Are there new, deeper concepts being discussed between the two sets of arguments? Return "Yes" or "No".
134 | 2. meaningful_questions: Within the debate history, each author acknowledges the other's arguments and may ask clarifying questions accordingly. Do you believe that the clarifying questions have not yet been sufficiently addressed and would be important to answer through further debate? If no questions are raised in the debate history by either author, return "No"; otherwise, return "Yes".
135 | 3. clear_winner: Do you believe that one author has clearly won the debate, such that it does not need to be deconstructed further (in order to determine which components within each author's contributions are truly better)? Return "Yes" or "No".
136 |
137 | Output your argument in the following JSON format:
138 | {{
139 | "explanation": <2-5 sentence string to explain your reasoning about whether further debate is necessary>,
140 | "progression_of_arguments": ,
141 | "meaningful_questions": ,
142 | "clear_winner":
143 | }}
144 | """
145 | outputs = unidecode(self.model.generate(prompt,
146 | sampling_params=sampling_params,
147 | use_tqdm=False)[0].outputs[0].text).lower()
148 | print(f'IS EXPAND {outputs}')
149 | log_llm(self.log_dir, prompt, outputs)
150 | outputs = json.loads(outputs)
151 |
152 | return (("yes" in outputs['progression_of_arguments']) or ("yes" in outputs['meaningful_questions'])) and ("no" in outputs['clear_winner'])
153 |
154 | def summarize_debate(self, conversation_history, similarities, differences, temperature=0.4, top_p=0.99):
155 | similarities = ",".join(similarities)
156 | differences = ",".join(differences)
157 |
158 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
159 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
160 |
161 | prompt = f"""The authors of two papers have debated about the similarities and differences between their papers. Author 0 is the author of the main paper, while Author 1 is the author of the paper being compared to the main paper. Below, you are given the \"conversation_history\" between the authors, and the specific similarities and differences. The similarities and differences are from the point-of-view of Author 0.
162 |
163 | \"conversation_history\":\n{conversation_history}
164 |
165 | \"similarities\": {similarities}
166 |
167 | \"differences\": {differences}
168 |
169 | Your task is to write a synthesis of the debate that summarizes the similarities and differences between the papers. Focus more on the differences than the similarities. Format your output in the following JSON schema:
170 | {{
171 | "summary": <5-10 sentence string to summarize the similarities and differences between the two papers>
172 | }}
173 | """
174 | # conversation = history.extend(conversation)
175 | outputs = unidecode(self.model.generate(prompt,
176 | sampling_params=sampling_params,
177 | use_tqdm=False)[0].outputs[0].text)
178 | log_llm(self.log_dir, prompt, outputs)
179 | outputs = json.loads(outputs)['summary']
180 | return outputs
181 |
182 | def summarize_path_debate(self, paper_authors, root_topic, tree, temperature=0.4, top_p=0.99):
183 |
184 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
185 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
186 |
187 | prompt = f"""Two authors are debating their respective novelties with respect to the following topic:
188 | Topic: {root_topic['topic_title']}
189 |
190 | Author 0's paper title is: {paper_authors[0].paper.title}
191 | Author 1's paper title is: {paper_authors[1].paper.title}
192 |
193 | Here is a breakdown of their debate. The debate ends when either there is no progression in the authors' arguments or an author has clearly won with respect to novelty. At each debate round, we provide the topic, the 'description' of the topic, Author 0's corresponding argument (author_0_argument), and Author 1's corresponding argument (author_1_argument) regarding the topic:
194 |
195 | {tree}
196 |
197 | Based on the debate, output an approximately paragraph-long synthesis of the debate which summarizes the similarities and differences between the papers. Loosely structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes the papers unique), strictly based on the information discussed within the debate. Focus more on the differences than the similarities.
198 |
199 | Format your output in the following JSON schema:
200 | {{
201 | "summary": <5-20 sentence string to summarize the similarities and differences between the two papers identified within the debate>
202 | }}
203 | """
204 | # conversation = history.extend(conversation)
205 | outputs = unidecode(self.model.generate(prompt,
206 | sampling_params=sampling_params,
207 | use_tqdm=False)[0].outputs[0].text)
208 | log_llm(self.log_dir, prompt, outputs)
209 | outputs = json.loads(outputs)['summary']
210 | return outputs
211 |
--------------------------------------------------------------------------------
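A note on the evidence formatter: format_evidence filters a 1-indexed evidence pool down to the cited IDs and renumbers the kept items consecutively; it only needs an object with an id attribute. A minimal sketch reusing the definition from /no_tree/moderator.py (the stub author and sample excerpts are hypothetical):

    from types import SimpleNamespace

    def format_evidence(list_evi, author, ids=None):  # as defined above
        text_evi = ""
        idx = 1
        for counter, item in enumerate(list_evi):
            if (ids is None) or ((counter + 1) in ids):
                text_evi += f"\t- Author {author.id}'s Supporting Evidence #{idx}. {item}\n"
                idx += 1
        return text_evi

    author_0 = SimpleNamespace(id=0)
    # Keep only evidence items #1 and #3; note the kept items are renumbered #1 and #2.
    print(format_evidence(["excerpt A", "excerpt B", "excerpt C"], author_0, ids=[1, 3]))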
/no_tree/.ipynb_checkpoints/moderator-checkpoint.py:
--------------------------------------------------------------------------------
1 | from vllm import SamplingParams
2 | from outlines.serve.vllm import JSONLogitsProcessor
3 | from unidecode import unidecode
4 | from no_tree.persona import log_llm
5 | import json
6 | from pydantic import BaseModel, StringConstraints, conlist
7 | from typing_extensions import Annotated
8 | from no_tree.debate import DebateNode
9 |
10 | class summary_schema(BaseModel):
11 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
12 |
13 | # class expansion_schema(BaseModel):
14 | # explanation: Annotated[str, StringConstraints(strip_whitespace=True, min_length=5)]
15 | # is_expand: bool
16 |
17 | class expansion_schema(BaseModel):
18 | explanation: Annotated[str, StringConstraints(strip_whitespace=True)]
19 | progression_of_arguments: Annotated[str, StringConstraints(strip_whitespace=True)]
20 | meaningful_questions: Annotated[str, StringConstraints(strip_whitespace=True)]
21 | clear_winner: Annotated[str, StringConstraints(strip_whitespace=True)]
22 |
23 | class subtopic_schema(BaseModel):
24 | topic_title: Annotated[str, StringConstraints(strip_whitespace=True)]
25 | topic_description: Annotated[str, StringConstraints(strip_whitespace=True)]
26 | author_0_relevant_contributions: conlist(int, min_length=0,max_length=10)
27 | author_1_relevant_contributions: conlist(int, min_length=0,max_length=10)
28 |
29 | def arg_dict_to_str(args, arg_type=True):
30 | arguments = ""
31 | for i, key in enumerate(args.keys()):
32 | if arg_type:
33 | arguments += f"Author {i}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
34 | else:
35 | arguments += f"Author {i}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
36 |
37 | return arguments.strip()
38 |
39 | def format_evidence(list_evi, author, ids=None):
40 | text_evi = ""
41 | idx = 1
42 | for counter, item in enumerate(list_evi):
43 | if (ids is None) or ((counter + 1) in ids):
44 | text_evi += f"\t- Author {author.id}'s Supporting Evidence #{idx}. {item}\n"
45 | idx += 1
46 | return text_evi
47 |
48 | def format_preemption(author, list_pre):
49 | text_pre = f"\t- Author {1-author.id}'s relevant evidence to potentially counter the novelty of this contribution:\n"
50 | for e_id, e in enumerate(list_pre):
51 | text_pre += f"\t\t- Author {1-author.id}'s Counter Evidence #{e_id+1}: The opposition states, \"{e}\"\n"
52 | return text_pre
53 |
54 | def format_self_deliberation(debate_node, paper_authors):
55 | out = ""
56 | for author in paper_authors:
57 | out += f"Author {author.id} Paper Title: {author.paper.title}\n"
58 | out += f"Author {author.id} Paper Abstract: {author.paper.abstract}\n\n"
59 |
60 | for no, arg in enumerate(debate_node.self_delib[author.id]):
61 | out += f"Author {author.id} Paper's Contribution #{no+1}: {arg['argument_title']}: {arg['description']}\n"
62 | out += f"{format_evidence(debate_node.evidence[author.id], author, arg['evidence'])}"
63 | arg_key = f"{arg['argument_title']}: {arg['description']}"
64 | out += f"{format_preemption(author, debate_node.preemption[1-author.id][arg_key])}\n"
65 |
66 | return out
67 |
68 | class Moderator:
69 | def __init__(self, model, log_dir):
70 | self.model = model  # underlying LLM (e.g., Llama 3)
71 | self.log_dir = log_dir
72 |
73 | def generate_topics(self, round: DebateNode, parent_topic, paper_authors, k=5, temperature=0.3, top_p=0.99):
74 | topic_title = parent_topic['topic_title']
75 | prompt = f"""You are a fair and balanced moderator of a debate between two authors determining their respective novel contributions towards the following topic:
76 | Topic: {parent_topic['topic_title']}
77 | Topic Description: {parent_topic['topic_description']}
78 |
79 | Here are the two papers and their claimed novel contributions with corresponding evidence:
80 |
81 | {format_self_deliberation(round, paper_authors)}
82 |
83 | Based on each author's claimed novelties, evidence, and counter-evidence to the other's arguments, you must determine the most meaningful, diverse set of subtopics within the parent topic, {topic_title}, which best cover the types of contributions each paper makes. Remember that for each of your selected subtopics, the papers will debate which of them makes the better contribution towards that subtopic. Hence, for each of your subtopics, cite the integer IDs of any relevant contributions from Author 0 (author_0_relevant_contributions) or Author 1 (author_1_relevant_contributions). At least one of these lists should be non-empty. Overall, our goal is to identify how novel Author 0's paper's contributions towards topic {topic_title} are by individually considering their contributions towards your subtopics.
84 |
85 | Output your subtopics (up to {k}) in the following JSON format:
86 | {{
87 | "topic_title": ,
88 | "topic_description": <1-2 sentence string explaining the subtopics and what you feel would be most helpful for the papers to debate within the subtopics>,
89 | "author_0_relevant_contributions": ,
90 | "author_1_relevant_contributions":
91 | }}
92 |
93 | """
94 | logits_processor = JSONLogitsProcessor(schema=subtopic_schema, llm=self.model.llm_engine)
95 | sampling_params = SamplingParams(max_tokens=3000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
96 |
97 | outputs = unidecode(self.model.generate(prompt,
98 | sampling_params=sampling_params,
99 | use_tqdm=False)[0].outputs[0].text)
100 |
101 | log_llm(self.log_dir, prompt, outputs)
102 | outputs = json.loads(outputs)
103 |
104 | return outputs
105 |
106 | def is_expand(self, round: DebateNode, history, temperature=0.1, top_p=0.99):
107 | """
108 | Based on the previous arguments and the new arguments, determine if any progress has been made.
109 | """
110 | prev_args = arg_dict_to_str(round.init_arguments, True)
111 | new_args = arg_dict_to_str(round.final_arguments, False)
112 | logits_processor = JSONLogitsProcessor(schema=expansion_schema, llm=self.model.llm_engine)
113 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
114 |
115 | round_topic = round.round_topic
116 |
117 | prompt = f"""You are a moderator facilitating a debate in which two papers are debating which makes the better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}\n
118 | -------
119 |
120 | {history}
121 |
122 | -------
123 |
124 | Below, you are given the previous set of arguments and the current set of arguments.
125 |
126 | \"previous arguments\":\n{prev_args}
127 |
128 | \"current arguments\":\n{new_args}
129 |
130 | -------
131 |
132 | You must determine whether progress is being made. DO NOT focus on the language being used. Focus on the content of the arguments. Specifically, determine the following (answer "Yes" or "No" for each):
133 | 1. progression_of_arguments: Are these arguments sufficiently different to necessitate further debate? Are there new, deeper concepts being discussed between the two sets of arguments? Return "Yes" or "No".
134 | 2. meaningful_questions: Within the debate history, each author acknowledges the other's arguments and may ask clarifying questions accordingly. Do you believe that the clarifying questions have not yet been sufficiently addressed and would be important to answer through further debate? If no questions are raised in the debate history by either author, return "No"; otherwise, return "Yes".
135 | 3. clear_winner: Do you believe that one author has clearly won the debate, such that it does not need to be deconstructed further (in order to determine which components within each author's contributions are truly better)? Return "Yes" or "No".
136 |
137 | Output your argument in the following JSON format:
138 | {{
139 | "explanation": <2-5 sentence string to explain your reasoning about whether further debate is necessary>,
140 | "progression_of_arguments": ,
141 | "meaningful_questions": ,
142 | "clear_winner":
143 | }}
144 | """
145 | outputs = unidecode(self.model.generate(prompt,
146 | sampling_params=sampling_params,
147 | use_tqdm=False)[0].outputs[0].text).lower()
148 | print(f'IS EXPAND {outputs}')
149 | log_llm(self.log_dir, prompt, outputs)
150 | outputs = json.loads(outputs)
151 |
152 | return (("yes" in outputs['progression_of_arguments']) or ("yes" in outputs['meaningful_questions'])) and ("no" in outputs['clear_winner'])
153 |
154 | def summarize_debate(self, conversation_history, similarities, differences, temperature=0.4, top_p=0.99):
155 | similarities = ",".join(similarities)
156 | differences = ",".join(differences)
157 |
158 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
159 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
160 |
161 | prompt = f"""The authors of two papers have debated about the similarities and differences between their papers. Author 0 is the author of the main paper, while Author 1 is the author of the paper being compared to the main paper. Below, you are given the \"conversation_history\" between the authors, and the specific similarities and differences. The similarities and differences are from the point-of-view of Author 0.
162 |
163 | \"conversation_history\":\n{conversation_history}
164 |
165 | \"similarities\": {similarities}
166 |
167 | \"differences\": {differences}
168 |
169 | Your task is to write a synthesis of the debate that summarizes the similarities and differences between the papers. Focus more on the differences than the similarities. Format your output in the following JSON schema:
170 | {{
171 | "summary": <5-10 sentence string to summarize the similarities and differences between the two papers>
172 | }}
173 | """
174 | # conversation = history.extend(conversation)
175 | outputs = unidecode(self.model.generate(prompt,
176 | sampling_params=sampling_params,
177 | use_tqdm=False)[0].outputs[0].text)
178 | log_llm(self.log_dir, prompt, outputs)
179 | outputs = json.loads(outputs)['summary']
180 | return outputs
181 |
182 | def summarize_path_debate(self, paper_authors, root_topic, tree, temperature=0.4, top_p=0.99):
183 |
184 | logits_processor = JSONLogitsProcessor(schema=summary_schema, llm=self.model.llm_engine)
185 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
186 |
187 | prompt = f"""Two authors are debating their respective novelties with respect to the following topic:
188 | Topic: {root_topic['topic_title']}
189 |
190 | Author 0's paper title is: {paper_authors[0].paper.title}
191 | Author 1's paper title is: {paper_authors[1].paper.title}
192 |
193 | Here is a breakdown of their debate. The debate ends when either there is no progression in the authors' arguments or an author has clearly won with respect to novelty. At each debate round, we provide the topic, the 'description' of the topic, Author 0's corresponding argument (author_0_argument), and Author 1's corresponding argument (author_1_argument) regarding the topic:
194 |
195 | {tree}
196 |
197 | Based on the debate, output an approximately paragraph-long synthesis of the debate which summarizes the similarities and differences between the papers. Loosely structure your summary to move from their similarities (which ideas/aspects overlap between the two papers?) to their differences in novelty (what makes the papers unique), strictly based on the information discussed within the debate. Focus more on the differences than the similarities.
198 |
199 | Format your output in the following JSON schema:
200 | {{
201 | "summary": <5-20 sentence string to summarize the similarities and differences between the two papers identified within the debate>
202 | }}
203 | """
204 | # conversation = history.extend(conversation)
205 | outputs = unidecode(self.model.generate(prompt,
206 | sampling_params=sampling_params,
207 | use_tqdm=False)[0].outputs[0].text)
208 | log_llm(self.log_dir, prompt, outputs)
209 | outputs = json.loads(outputs)['summary']
210 | return outputs
211 |
--------------------------------------------------------------------------------
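A note on the shared argument formatter: arg_dict_to_str, duplicated across every moderator variant above, flattens the per-author argument dictionaries into prompt text, with the arg_type flag selecting initial vs. revised fields. A minimal sketch (the sample dictionary is hypothetical):

    def arg_dict_to_str(args, arg_type=True):  # as defined above
        arguments = ""
        for i, key in enumerate(args.keys()):
            if arg_type:
                arguments += f"Author {i}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
            else:
                arguments += f"Author {i}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
        return arguments.strip()

    init_args = {"a0": {"argument_title": "Better Efficiency", "description": "Trains in half the steps."},
                 "a1": {"argument_title": "Broader Coverage", "description": "Handles three new tasks."}}
    print(arg_dict_to_str(init_args, arg_type=True))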
/no_delib/persona.py:
--------------------------------------------------------------------------------
1 | from no_delib.paper_details import Paper
2 | from vllm import SamplingParams
3 | from outlines.serve.vllm import JSONLogitsProcessor
4 | from unidecode import unidecode
5 | from pydantic import BaseModel, StringConstraints, conlist
6 | from typing_extensions import Annotated
7 | import json
8 | import re
9 |
10 | def format_debate_context(you, opposition, parent_debate_node, debate_node):
11 | out = ""
12 |
13 | your_contributions = debate_node.round_topic['author_0_relevant_contributions'] if you.id == 0 else debate_node.round_topic['author_1_relevant_contributions']
14 | if len(your_contributions) > 0:
15 | total_conts = len(parent_debate_node.self_delib[you.id])
16 | out += f"""Here are your (Author {you.id}) claimed contributions towards the topic:\n"""
17 | for idx, cont in enumerate(your_contributions):
18 | if (cont - 1) < total_conts:
19 | arg = parent_debate_node.self_delib[you.id][cont-1]
20 | out += f"Author {you.id} Paper's Contribution #{idx+1}: {arg['argument_title']}: {arg['description']}\n"
21 | else:
22 | out += f"""Here is your paper's introduction, which may help you:\n{you.paper.introduction}\n"""
23 |
24 | opp_contributions = debate_node.round_topic['author_0_relevant_contributions'] if opposition.id == 0 else debate_node.round_topic['author_1_relevant_contributions']
25 | if len(opp_contributions) > 0:
26 | total_conts = len(parent_debate_node.self_delib[opposition.id])
27 | out += f"""Here are your opposition's (Author {opposition.id}) claimed contributions towards the topic:\n"""
28 | for idx, cont in enumerate(opp_contributions):
29 | if (cont - 1) < total_conts:
30 | arg = parent_debate_node.self_delib[opposition.id][cont-1]
31 | out += f"Author {opposition.id} Paper's Contribution #{idx+1}: {arg['argument_title']}: {arg['description']}\n"
32 | else:
33 | out += f"""Here is your opposition's introduction, which may help you:\n{opposition.paper.introduction}\n"""
34 |
35 | return out
36 |
37 | def format_args(list_arg):
38 | text_args = ""
39 | for counter, item in enumerate(list_arg):
40 | text_args += f"\t- Argument #{counter+1}. {item['argument_title']}: {item['description']}\n"
41 | return text_args
42 |
43 | class revise_schema(BaseModel):
44 | revised_argument_title: Annotated[str, StringConstraints(strip_whitespace=True)]
45 | revised_argument_description: Annotated[str, StringConstraints(strip_whitespace=True, min_length=5)]
46 |
47 | class argument_schema(BaseModel):
48 | argument_title: Annotated[str, StringConstraints(strip_whitespace=True)]
49 | description: Annotated[str, StringConstraints(strip_whitespace=True)]
50 |
51 | class response_schema(BaseModel):
52 | author_response: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
53 |
54 | class gen_argument_schema(BaseModel):
55 | argument_title: Annotated[str, StringConstraints(strip_whitespace=True)]
56 | description: Annotated[str, StringConstraints(strip_whitespace=True)]
57 |
58 | class argument_list_schema(BaseModel):
59 | argument_list : conlist(gen_argument_schema, min_length=1,max_length=5)
60 |
61 | def log_llm(log_dir, prompt, output):
62 | with open(f'{log_dir}/llm_calls.txt', 'a+', encoding="utf-8") as f:
63 | f.write('--------------------------------------------\n')
64 | f.write(f'PROMPT: {prompt}\n')
65 | f.write(f'OUTPUT: {output}\n')
66 | f.write('--------------------------------------------\n\n')
67 |
68 | class PaperAuthor:
69 | def __init__(self, model, id, paper: Paper, focus, log_dir):
70 | self.model = model  # underlying LLM (e.g., Llama 3.1)
71 | self.paper = paper
72 | self.focus = focus
73 | self.id = id
74 | self.log_dir = log_dir
75 |
76 | def generate_arguments(self, topic, evidence=False, temperature=0.3, top_p=0.99, k=3):
77 | """
78 | Given topic and evidence, generate k arguments.
79 | If the paper is a focus paper, and the debate round is round #1, the topic should be "I am great".
80 | If the paper is NOT a focus paper, the topic should be the focus paper's arguments.
81 | """
82 | logits_processor = JSONLogitsProcessor(schema=argument_list_schema, llm=self.model.llm_engine)
83 | sampling_params = SamplingParams(max_tokens=3000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
84 |
85 | prompt = f"You are the author of the paper, '{self.paper.title}'. The abstract of your work is:\n{self.paper.abstract}.\n The introduction of your work is: {self.paper.introduction}.\n\nYou are debating another author on the novel contributions that your work makes towards the following topic:\n"
86 |
87 | if topic['topic_title'] == topic['topic_description']:
88 | # if it's the root node:
89 | prompt += f"{topic['topic_title']}\n"
90 | else:
91 | # if it's a debate node, then topic is the focus paper's claim
92 | prompt += f"{topic['topic_title']}: {topic['topic_description']}\n"
93 |
94 | prompt += f"""Based on your paper's title, abstract, and introduction, output a list of 1 to {k} DIVERSE, specific arguments for your position. Each argument should have a corresponding "argument_title", which is a brief statement of your argument (e.g., Better Efficiency for Training) and a "description" explaining your argument and mentioning specific excerpts from your papers. Each argument should make a unique point.
95 |
96 | Output your list of arguments in the following JSON format:
97 | {{
98 | "argument_list":
99 | [
100 | {{
101 | "argument_title": ,
102 | "description": <1-2 sentence string explaining the argument, including specific excerpts from the evidence pool>
103 | }}
104 | ]
105 | }}
106 | """
107 | raw_out = self.model.generate(prompt, sampling_params=sampling_params)
108 | outputs = raw_out[0].outputs[0].text
109 | log_llm(self.log_dir, prompt, outputs)
110 |
111 | try:
112 | out = json.loads(outputs)['argument_list']
113 | for arg_id, arg in enumerate(out):
114 | out[arg_id]["argument_title"] = unidecode(arg["argument_title"]).replace("\"", "'")
115 | out[arg_id]["description"] = unidecode(arg["description"]).replace("\"", "'")
116 | except Exception as e:
117 | raise Exception(f"JSON ISSUE:\n\n{outputs}") from e
118 | return out
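# Hypothetical call sketch (the `author` variable is an assumption): at the root
# node the title and description are identical, e.g. the focus paper's "I am great":
#
#   args = author.generate_arguments({"topic_title": "I am great",
#                                     "topic_description": "I am great"}, k=3)
#   # -> list of 1-3 dicts, each with "argument_title" and "description" keys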
119 |
120 | def present_argument(self, debate_node, parent_debate_node, opposition, temperature=0.1, top_p=0.99):
121 | """
122 | Generate an argument based on your claims and evidences and other paper's claims and evidences.
123 | """
124 | logits_processor = JSONLogitsProcessor(schema=argument_schema, llm=self.model.llm_engine)
125 | sampling_params = SamplingParams(max_tokens=3000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
126 |
127 | round_topic = debate_node.round_topic
128 |
129 | prompt = f"You are the author of the paper, '{self.paper.title}'. The abstract of your work is:\n{self.paper.abstract}\n\nHere is the introduction to your paper:\n{self.paper.introduction}\n\nYou are debating another author (Opposition), whose work is titled, '{opposition.paper.title}', and abstract is:\n{opposition.paper.abstract}\n\n"
130 |
131 | prompt += f"""You are debating the other author on how and why your paper makes a better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}
132 |
133 | {format_debate_context(self, opposition, parent_debate_node, debate_node)}
134 |
135 | Given the above, make an argument for a specific reason why your contributions towards the topic, {round_topic['topic_title']}, are better than the opposition's. If you feel that you do not contribute to the given topic or your contributions ARE NOT better than the opposition's, then state so by conceding to the opposition (e.g., 'I do not believe my paper makes a better contribution than yours') and explain why.
136 |
137 | Output your argument in the following JSON format:
138 |
139 | {{
140 | "argument_title": ,
141 | "description": <2-3 sentence string explaining your argument>
142 | }}
143 | """
144 | print(prompt + '\n\n')
145 | outputs = unidecode(self.model.generate(prompt,
146 | sampling_params=sampling_params,
147 | use_tqdm=False)[0].outputs[0].text)
148 | log_llm(self.log_dir, prompt, outputs)
149 |
150 | return json.loads(outputs)
151 |
152 |
153 | def respond_to_argument(self, history, debate_node, parent_debate_node, opposition, temperature=0.4, top_p=0.99):
154 | """
155 | Respond to the paper given the current state of debate.
156 | """
157 |
158 | logits_processor = JSONLogitsProcessor(schema=response_schema, llm=self.model.llm_engine)
159 | sampling_params = SamplingParams(max_tokens=3000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
160 |
161 | last_occurrence_index = history.rfind("-Opposition:")
162 |
163 | # If "-Opposition:" exists, wrap the opposition's last turn in <opposition> tags so the prompt can reference it
164 | if last_occurrence_index != -1:
165 | history = history[:last_occurrence_index] + "\n<opposition>" + history[last_occurrence_index:] + "</opposition>"
166 | print(f'HISTORY {history}')
167 |
168 | round_topic = debate_node.round_topic
169 |
170 | prompt = f"You are the author of the paper, '{self.paper.title}'. The abstract of your work is:\n{self.paper.abstract}\n\nYou are debating another author (Opposition), whose work is titled, '{opposition.paper.title}', and abstract is:\n{opposition.paper.abstract}\n\n"
171 |
172 | prompt += f"""You are debating the other author on how and why your paper makes a better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}
173 |
174 | {format_debate_context(self, opposition, parent_debate_node, debate_node)}
175 | """
176 |
177 | prompt+= f"""Here is your conversation debate history with the opposition paper. You must respond to the last argument presented by your opposition in debate (tagged between and ). A response may consist of (1) an acknowledgement of the opposition's previous response, (2) answering any of the questions about your paper brought up by the opposition, (3) asking any clarifying questions based on the opposition's claims and reasoning, (4) any clarifications of your own presented arguments based on the opposition, and/or (5) if you feel that the opposition's claim is strong and you do not have sufficient grounds to refute it, then a concession to your opposition.\n\n""" + history
178 |
179 | prompt += f"""\nOutput your new response in the following JSON format:
180 | {{
181 | "author_response": <2-3 sentence string response to the opposition's last turn with an explanation behind your reasoning (tagged between and )>
182 | }}
183 | """
184 | print(prompt + '\n\n')
185 | # conversation = history.extend(conversation)
186 | outputs = unidecode(self.model.generate(prompt,
187 | sampling_params=sampling_params,
188 | use_tqdm=False)[0].outputs[0].text)
189 |
190 | log_llm(self.log_dir, prompt, outputs)
191 | return json.loads(outputs)
192 |
193 | def revise_argument(self, history, debate_node, parent_debate_node, opposition, temperature=0.45, top_p=0.99):
194 | """
195 | Strengthen the final argument at the debate node for a paper.
196 | """
197 |
198 | logits_processor = JSONLogitsProcessor(schema=revise_schema, llm=self.model.llm_engine)
199 | sampling_params = SamplingParams(max_tokens=3000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
200 |
201 | round_topic = debate_node.round_topic
202 |
203 | prompt = f"You are the author of the paper, '{self.paper.title}'. The abstract of your work is:\n{self.paper.abstract}\n\nYou are debating another author (Opposition), whose work is titled, '{opposition.paper.title}', and abstract is:\n{opposition.paper.abstract}\n\n"
204 |
205 | prompt += f"""You are debating the other author on how and why your paper makes a better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}
206 |
207 | {format_debate_context(self, opposition, parent_debate_node, debate_node)}
208 | """
209 |
210 | prompt+= f"""Based on the debate history and your/your opposition's arguments and evidence, you must construct a new, stronger argument related to the topic. This consists of an argument that addresses/is robust to any doubts or clarifying questions made by the opposition which you feel are valid. If based on the debate, you feel that you do not contribute to the given topic or your contributions ARE NOT better than the opposition's, then state so by conceding to the opposition (e.g., 'I do not believe my paper makes a better contribution than yours') and explain why. \n\n""" + history
211 |
212 | prompt += f"""\nOutput your new, revised argument in the following JSON format:
213 | {{
214 | "revised_argument_title": ,
215 | "revised_argument_description": <2-3 sentence string explaining your new argument>
216 | }}
217 | """
218 | print(prompt + '\n\n')
219 |
220 | # conversation = history.extend(conversation)
221 | outputs = unidecode(self.model.generate(prompt,
222 | sampling_params=sampling_params,
223 | use_tqdm=False)[0].outputs[0].text)
224 |
225 | log_llm(self.log_dir, prompt, outputs)
226 | return json.loads(outputs)
227 |
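# Minimal end-to-end sketch for one debate turn; `llm`, `paper_a`, `paper_b`,
# `node`, `parent_node`, and `history` are assumed to exist (not verbatim repo code):
#
#   author0 = PaperAuthor(llm, 0, paper_a, focus=True, log_dir="logs/run0")
#   author1 = PaperAuthor(llm, 1, paper_b, focus=False, log_dir="logs/run0")
#   arg = author0.present_argument(node, parent_node, author1)
#   reply = author1.respond_to_argument(history, node, parent_node, author0)
#   final = author0.revise_argument(history, node, parent_node, author1)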
--------------------------------------------------------------------------------
/tod_no_deliberation/moderator.py:
--------------------------------------------------------------------------------
1 | from vllm import SamplingParams
2 | from outlines.serve.vllm import JSONLogitsProcessor
3 | from unidecode import unidecode
4 | from persona import log_llm
5 | import json
6 | from pydantic import BaseModel, StringConstraints, conlist
7 | from typing_extensions import Annotated
8 | from debate import DebateNode
9 |
10 | class summary_schema(BaseModel):
11 | summary: Annotated[str, StringConstraints(strip_whitespace=True, min_length=50)]
12 |
13 | # class expansion_schema(BaseModel):
14 | # explanation: Annotated[str, StringConstraints(strip_whitespace=True, min_length=5)]
15 | # is_expand: bool
16 |
17 | class expansion_schema(BaseModel):
18 | explanation: Annotated[str, StringConstraints(strip_whitespace=True, min_length=5)]
19 | progression_of_arguments: bool
20 | meaningful_questions: bool
21 | clear_winner: bool
22 |
23 | class subtopic_schema(BaseModel):
24 | topic_title: Annotated[str, StringConstraints(strip_whitespace=True)]
25 | topic_description: Annotated[str, StringConstraints(strip_whitespace=True)]
26 | author_0_relevant_contributions: conlist(int, min_length=0,max_length=10)
27 | author_1_relevant_contributions: conlist(int, min_length=0,max_length=10)
28 |
29 |
30 | class subtopic_list_schema(BaseModel):
31 | subtopic_list: conlist(subtopic_schema, min_length=1, max_length=10)
32 |
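# Illustrative instance (made-up values) that validates against subtopic_list_schema:
#
#   {"subtopic_list": [{"topic_title": "Training Efficiency",
#                       "topic_description": "Which paper reduces training cost more?",
#                       "author_0_relevant_contributions": [1, 2],
#                       "author_1_relevant_contributions": [1]}]}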
33 |
34 | class sim_schema(BaseModel):
35 | similarities: Annotated[str, StringConstraints(strip_whitespace=True)]
36 | description: Annotated[str, StringConstraints(strip_whitespace=True)]
37 |
38 | class diff_schema(BaseModel):
39 | differences: Annotated[str, StringConstraints(strip_whitespace=True)]
40 | description: Annotated[str, StringConstraints(strip_whitespace=True)]
41 |
42 | def arg_dict_to_str(args, arg_type=True):
43 | arguments = ""
44 | for i, key in enumerate(args.keys()):
45 | if arg_type:
46 | arguments += f"Author {(i)}'s Initial Argument: {args[key]['argument_title']}: {args[key]['description']}\n\n"
47 | else:
48 | arguments += f"Author {(i)}'s Revised Argument: {args[key]['revised_argument_title']}: {args[key]['revised_argument_description']}\n\n"
49 |
50 | return arguments.strip()
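# NOTE: arg_dict_to_str uses the enumerate index i as the author id, so it assumes
# the dict is keyed with Author 0's entry inserted first (Python 3.7+ dicts
# preserve insertion order).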
51 |
52 | def format_evidence(list_evi, author, ids=None):
53 | text_evi = ""
54 | idx = 1
55 | for counter, item in enumerate(list_evi):
56 | if (ids is None) or ((counter + 1) in ids):
57 | text_evi += f"\t- Author {author.id}'s Supporting Evidence #{idx}. {item}\n"
58 | idx += 1
59 | return text_evi
60 |
61 | def format_self_deliberation(debate_node, paper_authors):
62 | out = ""
63 | for author in paper_authors:
64 | out += f"Author {author.id} Paper Title: {author.paper.title}\n"
65 | out += f"Author {author.id} Paper Abstract: {author.paper.abstract}\n\n"
66 | out += f"Author {author.id} Paper Introduction: {author.paper.introduction}\n\n"
67 |
68 | for no, arg in enumerate(debate_node.self_delib[author.id]):
69 | out += f"Author {author.id} Paper's Contribution #{no+1}: {arg['argument_title']}: {arg['description']}\n"
70 |
71 | return out
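# Illustrative output shape: for each author, the title, abstract, and introduction,
# followed by one numbered line per self-deliberation contribution, e.g.:
#   Author 0 Paper's Contribution #1: Better Efficiency: Trains in half the steps.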
72 |
73 | class Moderator:
74 | def __init__(self, model, log_file):
75 | self.model = model # define model - Llama 3.1
76 | self.log_file = log_file
77 |
78 | def generate_topics(self, round: DebateNode, parent_topic, paper_authors, k=3, temperature=0.3, top_p=0.99):
79 | topic_title = parent_topic['topic_title']
80 | prompt = f"""You are a fair and balanced moderator of a debate between two authors determining their respective novel contributions towards the following topic:
81 | Topic: {parent_topic['topic_title']}
82 | Topic Description: {parent_topic['topic_description']}
83 |
84 | Here are the two papers and their claimed novel contributions with corresponding evidence:
85 |
86 | {format_self_deliberation(round, paper_authors)}
87 |
88 | Based on each author's claimed novelties, you must determine the most meaningful, diverse set of subtopics within the parent topic, {topic_title}, which best cover the types of contributions each paper makes. Remember that for each of your selected subtopics, the papers will debate which of them makes the better contribution towards that subtopic. Hence, for each of your subtopics, cite the integer IDs of any relevant contributions from Author 0 (author_0_relevant_contributions) or Author 1 (author_1_relevant_contributions). At least one of these lists should be non-empty. Overall, the goal is to identify how novel Author 0's contributions towards the topic {topic_title} are by individually considering its contributions towards your subtopics.
89 |
90 | Output your subtopics (up to {k}) in the following JSON format:
91 | {{"subtopic_list":
92 | [
93 | {{
94 | "topic_title": ,
95 | "topic_description": <1-2 sentence string explaining the subtopic and what you feel would be most helpful for the papers to debate within the subtopic>,
96 | "author_0_relevant_contributions": ,
97 | "author_1_relevant_contributions":
98 | }},
99 | ...
100 | ]
101 | }}
102 |
103 | """
104 | logits_processor = JSONLogitsProcessor(schema=subtopic_list_schema, llm=self.model.llm_engine)
105 | sampling_params = SamplingParams(max_tokens=2000, logits_processors=[logits_processor], temperature=temperature, top_p=top_p)
106 |
107 | outputs = unidecode(self.model.generate(prompt,
108 | sampling_params=sampling_params,
109 | use_tqdm=False)[0].outputs[0].text)
110 |
111 | log_llm(self.log_file, prompt, outputs)
112 | outputs = json.loads(outputs)
113 |
114 | return outputs['subtopic_list']
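# Hypothetical consumption sketch (variable names are assumptions):
#
#   subtopics = moderator.generate_topics(round_node, parent_topic,
#                                         [author0, author1], k=3)
#   for st in subtopics:
#       print(st["topic_title"], st["author_0_relevant_contributions"])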
115 |
116 | def is_expand(self, round: DebateNode, history):
117 | """
118 | Based on the previous arguments and the new arguments, determine if any progress has been made.
119 | """
120 | prev_args = arg_dict_to_str(round.init_arguments, True)
121 | new_args = arg_dict_to_str(round.final_arguments, False)
122 | logits_processor = JSONLogitsProcessor(schema=expansion_schema, llm=self.model.llm_engine)
123 | sampling_params = SamplingParams(max_tokens=1024, logits_processors=[logits_processor])
124 |
125 | round_topic = round.round_topic
126 |
127 | prompt = f"""You are a moderator faciliating a debate in which two paper are debating who makes the better contribution towards the following topic:\n\t- Topic: {round_topic['topic_title']}\n\t- Topic Description: {round_topic['topic_description']}\n
128 | -------
129 |
130 | {history}
131 |
132 | -------
133 |
134 | Below, you are given the previous set of arguments and the current set of arguments.
135 |
136 | \"previous arguments\":\n{prev_args}
137 |
138 | \"current arguments\":\n{new_args}
139 |
140 | -------
141 |
142 | You must determine whether progress is being made. DO NOT focus on the language being used. Focus on the content of the arguments. Specifically, determine the following (True or False for each):
143 | 1. progression_of_arguments: Are these arguments sufficiently different to necessitate further debate? Are there new, deeper concepts being discussed between the two sets of arguments?
144 | 2. meaningful_questions: Within the debate history, each author acknowledges the other's arguments and may ask clarifying questions. Are there clarifying questions that have not yet been sufficiently addressed and that would be important to answer through further debate? If neither author raised any questions in the debate history, return False.
145 | 3. clear_winner: Do you believe it is clear that one author has won the debate, such that the topic does not need to be deconstructed further (to determine which components of each author's contributions are truly better)?
146 |
147 | Output your determination in the following JSON format:
148 | {{
149 | "explanation": <2-5 sentence string to explain your reasoning about whether further debate is necessary when comparing the \"previous arguments\" and the \"current arguments\">,
150 | "progression_of_arguments":