├── .gitignore
├── Home.py
├── LICENSE
├── README.md
├── config.py
├── demographics_dict.py
├── docs
│   ├── chat_summary.txt
│   ├── final_analysis.md
│   └── personas.json
├── pages
│   ├── 1 Run_Virtual_Focus_Group.py
│   └── Analyze_Final_Results.py
├── persona_handler.py
├── requirements.txt
└── virtual_group.png
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/#use-with-ide
110 | .pdm.toml
111 |
112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
113 | __pypackages__/
114 |
115 | # Celery stuff
116 | celerybeat-schedule
117 | celerybeat.pid
118 |
119 | # SageMath parsed files
120 | *.sage.py
121 |
122 | # Environments
123 | .env
124 | .venv
125 | env/
126 | venv/
127 | ENV/
128 | env.bak/
129 | venv.bak/
130 |
131 | # Spyder project settings
132 | .spyderproject
133 | .spyproject
134 |
135 | # Rope project settings
136 | .ropeproject
137 |
138 | # mkdocs documentation
139 | /site
140 |
141 | # mypy
142 | .mypy_cache/
143 | .dmypy.json
144 | dmypy.json
145 |
146 | # Pyre type checker
147 | .pyre/
148 |
149 | # pytype static type analyzer
150 | .pytype/
151 |
152 | # Cython debug symbols
153 | cython_debug/
154 |
155 | # PyCharm
156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
158 | # and can be added to the global gitignore or merged into this file. For a more nuclear
159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
160 | #.idea/
161 |
--------------------------------------------------------------------------------
/Home.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from streamlit_extras.stylable_container import stylable_container
3 | import demographics_dict as dd
4 | import json
5 |
6 | st.set_page_config(page_title="Virtual Focus Group", page_icon=":tada:", layout="wide")
7 | with stylable_container(
8 | key="container_with_border",
9 | css_styles="""
10 | {
11 | border: 2px solid rgba(49, 51, 63, 0.2);
12 | background-color: lightteal;
13 | border-radius: 25px;
14 | padding: calc(1em - 1px);
15 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
16 | text-align: center;
17 | }
18 | """,
19 | ):
20 | st.markdown("<h1>Build Your Personas</h1>", unsafe_allow_html=True)
21 | st.markdown("<p>Select the number of Personas to create, add details, then click Submit Personas.</p>", unsafe_allow_html=True)
22 |
23 |
24 |
25 | def save_personas(personas):
26 | persona_data = {}
27 | for i, persona in enumerate(personas):
28 | persona_name = f"Persona {i + 1}"
29 | persona_data[persona_name] = persona
30 | with open('docs/personas.json', 'w') as file:
31 | json.dump(persona_data, file, indent=4)
32 |
33 |
34 | def main():
35 | col1, col2, col3, col4 = st.columns([.5, 1, .5, .5])
36 | with col1:
37 | num_personas = st.slider("Number of Personas to Create: ", min_value=1, max_value=5, step=1, value=1)
38 | with col2:
39 | st.empty()
40 | with stylable_container(
41 | key="interior_container",
42 | css_styles="""
43 | {
44 | border: 1px solid rgba(49, 51, 63, 0.2);
45 | border-radius: 0.5rem;
46 | padding: calc(1em - 1px);
47 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
48 | }
49 | """,
50 | ):
51 |
52 | columns = st.columns(num_personas)
53 |
54 |
55 | # Create an empty list to store the personas
56 | personas = []
57 |
58 | # Allow users to build multiple personas
59 | for persona_id in range(num_personas):
60 | with columns[persona_id]:
61 | st.subheader(f"Persona {persona_id + 1}")
62 |
63 | # Create input fields for each demographic attribute
64 | name = st.text_input(f"Name (Persona {persona_id + 1})", key=f"name_{persona_id}")
65 | age = st.selectbox(f"Age (Persona {persona_id + 1})", dd.age_groups, key=f"age_{persona_id}")
66 | gender = st.selectbox(f"Gender (Persona {persona_id + 1})", dd.gender_groups, key=f"gender_{persona_id}")
67 | location = st.selectbox(f"Geographic Location (Persona {persona_id + 1})", dd.geographic_location, key=f"location_{persona_id}")
68 | education = st.selectbox(f"Education Level (Persona {persona_id + 1})", dd.education_levels, key=f"education_{persona_id}")
69 | employment = st.selectbox(f"Employment Status (Persona {persona_id + 1})", dd.employment_status, key=f"employment_{persona_id}")
70 | income = st.selectbox(f"Income Level (Persona {persona_id + 1})", dd.income_levels, key=f"income_{persona_id}")
71 | marital = st.selectbox(f"Marital Status (Persona {persona_id + 1})", dd.marital_status, key=f"marital_{persona_id}")
72 | children = st.selectbox(f"Number of Children (Persona {persona_id + 1})", dd.number_of_children, key=f"children_{persona_id}")
73 | job = st.selectbox(f"Occupation (Persona {persona_id + 1})", dd.occupation, key=f"job_{persona_id}")
74 | hobby = st.multiselect(f"Hobbies (Persona {persona_id + 1})", dd.hobbies, key=f"hobby_{persona_id}")
75 | backstory = st.text_area(f"Key Traits and Miscellaneous Data: (Persona {persona_id + 1})", key=f"backstory_{persona_id}")
76 |
77 | # Create a dictionary to represent the persona
78 | persona = {
79 | "Name": name,
80 | "Age": age,
81 | "Gender": gender,
82 | "Location": location,
83 | "Education": education,
84 | "Employment": employment,
85 | "Income": income,
86 | "Marital Status": marital,
87 | "Children": children,
88 | "Occupation": job,
89 | "Hobbies": hobby,
90 | "Backstory": backstory
91 | }
92 |
93 | personas.append(persona)
94 |
95 | # Add the persona to the list of personas
96 | with col3:
97 | with stylable_container(
98 | key="green_button",
99 | css_styles="""
100 | button {
101 | background-color: teal;
102 | color: white;
103 | box-shadow: 2px 0 7px 0 grey;
104 | margin-top: 20px;
105 | }""",
106 | ):
107 | if st.button("Submit Personas", key="submit_personas"):
108 | save_personas(personas)
109 | st.success(f"Personas 1 - {num_personas} saved successfully!")
110 | with col4:
111 | with stylable_container(
112 | key="launch_button",
113 | css_styles="""
114 | button {
115 | background-color: teal;
116 | color: white;
117 | box-shadow: 2px 0 7px 0 grey;
118 |
119 | }
120 | """,
121 | ):
122 | launch_focus_group = st.button("Launch Focus Group", key="launch_focus_group")
123 | if launch_focus_group:
124 | st.switch_page("pages/1 Run_Virtual_Focus_Group.py")
125 |
126 | if __name__ == "__main__":
127 | main()
128 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 msamylea
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # AutoGen Virtual Focus Group
3 |
4 |
5 | 
6 |
7 |
8 | Virtual focus group with multiple custom personas, product details, and final analysis created with AutoGen, Ollama/Llama3, and Streamlit.
9 |
10 | Uses a custom GroupChat and a custom GroupChatManager to render the conversation in Streamlit as an organized, clean chat: blank messages are removed and each message is prefixed with the sender's name.
11 |
12 | Create up to 5 Personas (you can change the data used in `demographics_dict.py`). They are saved to `docs/personas.json`.
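The saved file maps `Persona N` keys to flat attribute dicts (see `docs/personas.json` for a complete entry). A minimal sketch of reading the file back the way the focus-group page does, using illustrative sample data:

```python
import json

# Illustrative sample matching the shape Home.py writes to docs/personas.json
sample = {
    "Persona 1": {"Name": "Diana", "Age": "25-34", "Hobbies": ["Games", "Exercise"]},
    "Persona 2": {"Name": "Marcus", "Age": "45-54", "Hobbies": ["Reading"]},
}

with open("personas.json", "w") as f:
    json.dump(sample, f, indent=4)

# The focus-group page loads this file and builds one agent per entry
with open("personas.json") as f:
    personas = json.load(f)

names = [p["Name"] for p in personas.values()]
print(names)  # -> ['Diana', 'Marcus']
```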
13 |
14 | To analyze the discussion, run the analysis from the Analyze Final Results page.
15 |
16 | The TERMINATE check does not trigger reliably with this code and Llama3, so if you're able to fix that part, let me know how you did it.
17 |
18 | ## Operational Customization
19 | Run a virtual focus group with the personas by entering a topic of discussion and kicking it off.
20 | * Customize the demographic options used to build personas in `demographics_dict.py`, or reuse the defaults.
21 | * To change the discussion length, edit `max_round` in the `groupchat` definition in `./pages/1 Run_Virtual_Focus_Group.py`.
22 | * The final chat will be saved to `./docs/chat_summary.txt`.
23 | * The final summary will be saved to `./docs/final_analysis.md`.
24 |
25 |
26 | ## Model Selection
27 |
28 | By default, the project uses `llama3:latest` (the model is set once in `config.py` for the entire project), but you can customize it with the following steps.
29 |
30 | * The `l3custom` model mentioned in `config.py` is just the standard Ollama `llama3:latest` rebuilt with this Modelfile:
31 |
32 | ```dockerfile
33 | FROM llama3:latest
34 | TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
35 |
36 | {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
37 |
38 | {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
39 |
40 | {{ .Response }}<|eot_id|>"""
41 | PARAMETER num_keep 24
42 | PARAMETER stop "<|start_header_id|>"
43 | PARAMETER stop "<|end_header_id|>"
44 | PARAMETER stop "<|eot_id|>"
45 | PARAMETER stop Human:
46 | PARAMETER stop Assistant:
47 | ```
48 |
49 | ## Dependencies
50 |
51 | In a clean environment, run `pip install -r requirements.txt` to install all dependencies.
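For example, with a standard `venv` (assumes Python 3 and a local Ollama server already running on port 11434):

```shell
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
streamlit run Home.py       # Home.py is the multipage app entry point
```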
--------------------------------------------------------------------------------
/config.py:
--------------------------------------------------------------------------------
1 | from langchain_openai import ChatOpenAI
2 | from openai import OpenAI
3 |
4 | llama_model = "llama3:latest" # l3custom (see README for tuning options)
5 |
6 | completions_model = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
7 |
8 | model = ChatOpenAI(model="llama3:latest", base_url="http://localhost:11434/v1", api_key="ollama")
--------------------------------------------------------------------------------
/demographics_dict.py:
--------------------------------------------------------------------------------
1 | age_groups = ['<18', '18-24', '25-34', '35-44', '45-54', '55-64', '65+']
2 |
3 | gender_groups = ['male', 'female', 'non-binary']
4 |
5 | geographic_location = ['North Eastern US', 'Central US', 'South East US', 'West US']
6 |
7 | education_levels = ['High School', 'Some College', 'Associates Degree', 'Bachelors Degree', 'Masters Degree', 'Doctoral Degree']
8 |
9 | employment_status = ['Employed', 'Unemployed', 'Student', 'Retired', 'Other']
10 |
11 | income_levels = ['<20k', '20k-40k', '40k-60k', '60k-80k', '80k-100k', '100k-120k', '120k-140k', '140k-160k', '160k-180k', '180k-200k', '200k+']
12 |
13 | marital_status = ['Married', 'Single', 'Divorced', 'Widowed']
14 |
15 | number_of_children = ['0', '1', '2', '3', '4+']
16 |
17 | occupation = ['Management', 'Sales', 'Clerical', 'Service', 'Production', 'Technical', 'Other']
18 |
19 | hobbies = ['Sports', 'Music', 'Reading', 'Cooking', 'Traveling', 'Exercise', 'Pets', 'Shopping', 'Movies', 'Gardening', 'Games', 'Other']
20 |
21 |
--------------------------------------------------------------------------------
/docs/chat_summary.txt:
--------------------------------------------------------------------------------
1 | chat summary stored here
2 |
--------------------------------------------------------------------------------
/docs/final_analysis.md:
--------------------------------------------------------------------------------
1 | Final summary will go here...
2 |
--------------------------------------------------------------------------------
/docs/personas.json:
--------------------------------------------------------------------------------
1 | {
2 | "Persona 1": {
3 | "Name": "Diana",
4 | "Age": "25-34",
5 | "Gender": "female",
6 | "Location": "West US",
7 | "Education": "Masters Degree",
8 | "Employment": "Employed",
9 | "Income": "200k+",
10 | "Marital Status": "Single",
11 | "Children": "0",
12 | "Occupation": "Technical",
13 | "Hobbies": [
14 | "Games",
15 | "Exercise",
16 | "Traveling"
17 | ],
18 | "Backstory": "Highly opinionated and well educated. Very well spoken. Wants to know all the technical details of products and doesn't shy away from demanding more information when she feels like the advertising is false or misleading."
19 | }
20 | }
21 |
--------------------------------------------------------------------------------
/pages/1 Run_Virtual_Focus_Group.py:
--------------------------------------------------------------------------------
1 | import time
2 | import sys
3 | import os
4 | sys.path.append(os.path.dirname(os.path.dirname(__file__)))
5 | import streamlit as st
6 | from streamlit_extras.stylable_container import stylable_container
7 | from typing import Union, Literal
8 | import json
9 | import autogen
10 | from autogen import AssistantAgent, UserProxyAgent, Agent
11 | import persona_handler as ph
12 | import random
13 |
14 | import config as cfg
15 |
16 | with open('./docs/personas.json', 'r') as f:
17 | personas = json.load(f)
18 |
19 | llm_config={
20 | "config_list": [
21 | {
22 | "model": cfg.llama_model,
23 | "api_key": "ollama",
24 | "base_url": "http://localhost:11434/v1",
25 | "max_tokens": 8192,
26 | }
27 | ],
28 | "cache_seed": None,
29 | }
30 |
31 | # setup page title and description
32 | st.set_page_config(page_title="Virtual Focus Group", page_icon="🤖", layout="wide")
33 | with stylable_container(
34 | key="outer_container",
35 | css_styles="""
36 | {
37 | border: 2px solid rgba(49, 51, 63, 0.2);
38 | border-radius: 0.5rem;
39 | padding: calc(1em - 1px);
40 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
41 | }
42 | """,
43 | ):
44 |
45 | st.markdown("<p>To begin, describe your product in detail and explain the type of feedback you are looking for from the group.</p>", unsafe_allow_html=True)
46 | st.markdown("<p>The focus group will consist of a moderator and a group of personas. The moderator will guide the discussion, while the personas will provide feedback based on their unique characteristics and perspectives.</p>", unsafe_allow_html=True)
47 |
48 | class CustomGroupChatManager(autogen.GroupChatManager):
49 | def _process_received_message(self, message, sender, silent):
50 | formatted_message = "" # Initialize formatted_message as an empty string
51 | with stylable_container(
52 | key="container_with_border",
53 | css_styles="""
54 | {
55 | border: 1px solid rgba(49, 51, 63, 0.2);
56 | border-radius: 0.5rem;
57 | padding: calc(1em - 1px);
58 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
59 | }
60 | """,
61 | ):
62 | # Handle the case when message is a dictionary
63 | if isinstance(message, dict):
64 | if message.get('content') and message['content'].strip():
65 | formatted_message = f"**{sender.name}**: {message['content']}"
66 | st.session_state.setdefault("displayed_messages", []).append(message['content'])
67 | else:
68 | return super()._process_received_message(message, sender, silent)
69 | # Handle the case when message is a string
70 | elif isinstance(message, str) and message.strip():
71 | formatted_message = f"**{sender.name}**: {message}"
72 | st.session_state.setdefault("displayed_messages", []).append(message)
73 | else:
74 | return super()._process_received_message(message, sender, silent)
75 |
76 | # Only format and display the message if the sender is not the manager
77 | if sender != manager and formatted_message:
78 | with st.chat_message(sender.name):
79 | st.markdown(formatted_message + "\n")
80 | time.sleep(2)
81 |
82 | filename = "./docs/chat_summary.txt"
83 |
84 | with open(filename, 'a') as f:
85 | f.write(formatted_message + "\n")
86 | return super()._process_received_message(message, sender, silent)
87 |
88 | class CustomGroupChat(autogen.GroupChat):
89 | @staticmethod
90 | def custom_speaker_selection_func(last_speaker: Agent, groupchat: autogen.GroupChat) -> Union[Agent, Literal['auto', 'manual', 'random', 'round_robin'], None]:
91 |
92 | if last_speaker == moderator_agent:
93 | return random.choice(personas_agents)
94 | else:
95 | return random.choice([moderator_agent] + personas_agents)
96 | select_speaker_message_template = """You are in a focus group. The following roles are available:
97 | {roles}.
98 | Read the following conversation.
99 | Then select the next role from {agentlist} to play. Only return the role."""
100 |
101 | personas_agents = []
102 | for persona_key, persona_data in personas.items():
103 | persona_name = persona_data['Name']
104 | # Append this persona's name and data to the shared prompt template so the agent knows who it is
105 | persona_prompt = ph.persona_prompt + f"\nYour name is {persona_name}. Your demographics, traits, and background:\n{json.dumps(persona_data, indent=2)}"
106 | persona_agent = AssistantAgent(
107 | name=persona_name,
108 | system_message=persona_prompt,
109 | llm_config=llm_config,
110 | human_input_mode="NEVER",
111 | description=f"A virtual focus group participant named {persona_name}. They do not know anything about the product beyond what they are told. They should be called on to give opinions.",
112 | )
113 | personas_agents.append(persona_agent)
114 |
115 |
116 | moderator_agent = AssistantAgent(
117 | name="Moderator",
118 | system_message='''
119 | You keep the conversation flowing between group members.
120 | Do not reply more than once before another group member speaks again.
121 | You can answer group members questions, but you do not offer additional information.
122 | Do not offer opinions about the topic or user_input, only moderate the conversation.
123 | Do not say thank you or the end.''',
124 | default_auto_reply="Reply `TERMINATE` if the task is done.",
125 | llm_config=llm_config,
126 | description="A Focus Group moderator.",
127 | is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
128 | human_input_mode="NEVER",
129 | )
130 |
131 | user_proxy = UserProxyAgent(
132 | name="Admin",
133 | human_input_mode= "NEVER",
134 | system_message="Human Admin for the Focus Group.",
135 | max_consecutive_auto_reply=5,
136 | default_auto_reply="Reply `TERMINATE` if the task is done.",
137 | is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
138 | code_execution_config={"use_docker":False}
139 | )
140 |
141 |
142 | groupchat = CustomGroupChat(agents=[user_proxy, moderator_agent] + personas_agents, messages=[], speaker_selection_method=CustomGroupChat.custom_speaker_selection_func, max_round=20, select_speaker_message_template=CustomGroupChat.select_speaker_message_template)
143 |
144 |
145 | manager = CustomGroupChatManager(groupchat=groupchat, llm_config=llm_config)
146 | with stylable_container(
147 | key="chat_container",
148 | css_styles="""
149 | {
150 | border: 2px solid rgba(49, 51, 63, 0.2);
151 | border-radius: 0.5rem;
152 | padding: calc(1em - 1px);
153 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
154 | }
155 | """,
156 | ):
157 | with st.container(height=800):
158 | user_input = st.text_area("Describe your product and the topic of discussion to the group:")
159 | with stylable_container(
160 | key="green_button",
161 | css_styles="""
162 | button {
163 | box-shadow: 2px 0 7px 0 grey;
164 | }
165 | """,
166 | ):
167 | kickoff = st.button("Start Group Chat")
168 |
169 | if kickoff:
170 |
182 |
183 |
184 | if "chat_initiated" not in st.session_state:
185 | st.session_state.chat_initiated = False
186 | if not st.session_state.chat_initiated:
187 | moderator_agent.initiate_chat(
188 | manager,
189 | message=user_input,
190 | )
191 | st.session_state.chat_initiated = True
192 |
193 |
194 | st.stop()
195 |
--------------------------------------------------------------------------------
/pages/Analyze_Final_Results.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from streamlit_extras.stylable_container import stylable_container
3 |
4 | import sys
5 | import os
6 | sys.path.append(os.path.dirname(os.path.dirname(__file__)))
7 |
8 | import config as cfg
9 |
10 | st.set_page_config(page_title="Virtual Focus Group", page_icon=":tada:", layout="wide")
11 |
12 | with open("./docs/chat_summary.txt", 'r') as f:
13 | summary = f.read()
14 |
15 | with stylable_container(
16 | key="green_button",
17 | css_styles="""
18 | button {
19 | box-shadow: 2px 0 7px 0 grey;
20 | }
21 | """,
22 | ):
23 | submit = st.button("Generate Analysis of Focus Group")
24 |
25 |
26 |
27 | if submit:
28 | if not summary:
29 | st.error("No chat data available. Please run a focus group before generating an analysis.")
30 | else:
31 | with st.spinner("Processing Analysis..."):
32 | with stylable_container(
33 | key="title_container",
34 | css_styles="""
35 | {
36 | border: 2px solid rgba(49, 51, 63, 0.2);
37 | border-radius: 0.5rem;
38 | padding: calc(1em - 1px);
39 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
40 | }
41 | """,
42 | ):
43 | st.markdown("<h1>Analysis of Group Chat</h1>", unsafe_allow_html=True)
44 | st.markdown("<p>The following is a summary of the focus group chat.</p>", unsafe_allow_html=True)
45 |
46 | llm = cfg.completions_model
47 |
48 | response = llm.chat.completions.create(
49 | messages = [
50 | {"role": "system",
51 | "content": f"Analyze the focus group chat and provide a detailed summary and analysis of the discussion in markdown format. Chat: {summary}"},
52 | ],
53 | model=cfg.llama_model
54 | )
55 |
56 | analysis = response.choices[0].message.content
57 |
58 | with stylable_container(
59 | key="outer_container",
60 | css_styles="""
61 | {
62 | border: 2px solid rgba(49, 51, 63, 0.2);
63 | border-radius: 0.5rem;
64 | padding: calc(1em - 1px);
65 | box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
66 | }
67 | """,
68 | ):
69 | with st.container(height=800):
70 |
71 | st.markdown(analysis, unsafe_allow_html=True)
72 |
73 | with open("./docs/final_analysis.md", 'w') as f:
74 | f.write(analysis)
75 |
--------------------------------------------------------------------------------
/persona_handler.py:
--------------------------------------------------------------------------------
1 | # Generic system prompt shared by all focus-group personas.
2 | # The caller appends each persona's name and demographic data
3 | # (loaded from docs/personas.json) to this base prompt.
4 | persona_prompt = """
5 | You are a member of a virtual focus group. Your role is to participate in a discussion about a given product or topic.
6 | In this focus group, you have never seen the product before and should give your opinions on the positive and negative aspects. Ask any questions
7 | needed to understand the product better.
8 | You always have an opinion to share. If you do not have children or a partner/spouse, do not mention children or a partner/spouse.
9 |
10 | When responding, make sure to:
11 | 1. Take your time and consider the topic carefully. Before replying, know your persona and how they would feel. Adhere strictly to your persona and do not act otherwise.
12 | 2. Remember that you are a participant. You know nothing about the product beyond what was told to you by the Admin or Moderator.
13 | 3. Act and speak in a way that is consistent with your demographics and traits. For example, if you are considered stubborn or shy, reflect that in your responses. Avoid being witty or using humor if your persona is serious or formal.
14 | 4. Provide opinions, insights, and reactions based on your persona's perspective.
15 | 5. Do not make up facts about your life or background that are not provided in the persona description.
16 |
17 | Remember to stay in character throughout the conversation and provide responses that align with your persona's background and traits.
18 | """
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | pyautogen
2 | streamlit
3 | streamlit_extras
4 | langchain_openai
5 |
--------------------------------------------------------------------------------
/virtual_group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/msamylea/autogen_focus_group/4f5323c593866646270f004d72394d780b6d2b05/virtual_group.png
--------------------------------------------------------------------------------