├── images
│   ├── user.png
│   ├── banner.png
│   ├── assistant2.png
│   ├── iconshere.md
│   ├── gemma2-2b-myGGUF.png
│   └── Gemma-2-Banner.original.jpg
├── Gemma2b-chatTurns.gif
├── model
│   └── yourGGUFhere.md
├── README.md
├── insrtuctions.txt
├── stappFULL.py
├── stappFULL-GPU.py
└── 95II7_log.txt
/images/user.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fabiomatricardi/Gemma2-2b-it-chatbot/main/images/user.png
--------------------------------------------------------------------------------
/images/banner.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fabiomatricardi/Gemma2-2b-it-chatbot/main/images/banner.png
--------------------------------------------------------------------------------
/Gemma2b-chatTurns.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fabiomatricardi/Gemma2-2b-it-chatbot/main/Gemma2b-chatTurns.gif
--------------------------------------------------------------------------------
/images/assistant2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fabiomatricardi/Gemma2-2b-it-chatbot/main/images/assistant2.png
--------------------------------------------------------------------------------
/images/iconshere.md:
--------------------------------------------------------------------------------
1 | ```
2 | assistant2.png
3 | banner.png
4 | Gemma-2-Banner.original.jpg
5 | user.png
6 |
7 | ```
8 |
--------------------------------------------------------------------------------
/images/gemma2-2b-myGGUF.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fabiomatricardi/Gemma2-2b-it-chatbot/main/images/gemma2-2b-myGGUF.png
--------------------------------------------------------------------------------
/images/Gemma-2-Banner.original.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fabiomatricardi/Gemma2-2b-it-chatbot/main/images/Gemma-2-Banner.original.jpg
--------------------------------------------------------------------------------
/model/yourGGUFhere.md:
--------------------------------------------------------------------------------
1 | GGUF file from:
2 | ```
3 | https://huggingface.co/bartowski/gemma-2-2b-it-GGUF
4 | ```
5 |
6 | Main repo:
7 | ```
8 | https://huggingface.co/google/gemma-2-2b-it
9 | ```
10 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Gemma2-2b-it-chatbot
4 | Repo of the code from the Medium article about running Gemma 2 2B locally
5 |
6 | > works with Python 3.11+
7 |
8 | ### Packages and dependencies
9 | Create a virtual environment
10 | ```
11 | python -m venv venv
12 | ```
13 |
14 | Install with pip
15 | ```
16 | pip install streamlit==1.36.0 llama-cpp-python==0.2.85 tiktoken
17 | ```
18 |
19 |
20 | ### Download the model file into the model subdirectory
21 | ```
22 | Quantization: gemma-2-2b-it-Q5_K_M.gguf
23 | # PowerShell syntax (wget is an alias of Invoke-WebRequest there); with GNU wget use -O instead of -OutFile
24 | wget https://huggingface.co/bartowski/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it-Q5_K_M.gguf -OutFile model/gemma-2-2b-it-Q5_K_M.gguf
25 |
26 | ```
27 |
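The same file can also be fetched from Python with the `huggingface_hub` library (a minimal sketch, assuming `huggingface_hub` is installed; the repo itself just uses the command above):
```
from huggingface_hub import hf_hub_download

# download gemma-2-2b-it-Q5_K_M.gguf into the model/ subdirectory
hf_hub_download(
    repo_id="bartowski/gemma-2-2b-it-GGUF",
    filename="gemma-2-2b-it-Q5_K_M.gguf",
    local_dir="model",
)
```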
28 |
29 | ### Model Card
30 | ```
31 | NCTX = 8192
32 | CHAT TEMPLATE YES
33 | SYSTEM MESSAGE NOT SUPPORTED
34 | ```
35 |
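Since the chat template raises an exception on a `system` role, a common workaround (shown here as a hypothetical helper, not something these scripts do) is to fold the system instructions into the first user message:
```
# hypothetical helper: Gemma 2 rejects the system role,
# so prepend the system text to the first user turn instead
def fold_system(system_text, user_text):
    return [{"role": "user", "content": f"{system_text}\n\n{user_text}"}]
```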
36 | Available chat formats from metadata: chat_template.default
37 | ```
38 | Using gguf chat template: {{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '
39 | ' + message['content'] | trim + '<end_of_turn>
40 | ' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model
41 | '}}{% endif %}
42 | Using chat eos_token: <eos>
43 | Using chat bos_token: <bos>
44 | ```
45 |
46 | #### Prompt format
47 | ```
48 | <bos><start_of_turn>user
49 | {prompt}<end_of_turn>
50 | <start_of_turn>model
51 | <end_of_turn>
52 | <start_of_turn>model
53 | ```
54 |
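There is no need to build this prompt string by hand: llama-cpp-python reads the template embedded in the GGUF metadata and applies it in `create_chat_completion`, which is how `stappFULL.py` calls the model. A minimal non-streaming sketch:
```
from llama_cpp import Llama

llm = Llama(model_path="model/gemma-2-2b-it-Q5_K_M.gguf", n_ctx=8192, verbose=False)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is an AI model?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```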
55 | ### Create a BAT file
56 | Point the path to your project directory (in my case it is `C:\Users\FabioMatricardi\Documents\DEV\PortableLLMS\Gemma2-2b`)
57 | ```
58 | C:
59 | cd C:\Users\FabioMatricardi\Documents\DEV\PortableLLMS\Gemma2-2b
60 | call .\venv\Scripts\activate.bat
61 | streamlit run .\stapp.py
62 |
63 | PAUSE
64 |
65 | ```
66 |
67 | Run the `.bat` file or, with the venv activated, run
68 | ```
69 | streamlit run .\stapp.py
70 | ```
71 |
72 |
73 | ### Turn-based chat
74 | Added a toggle to enable turn-based chat
75 | - it always considers only the last 5 messages (tunable); see the sketch below
76 | ```
77 | streamlit run .\stappFULL.py
78 | ```
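The trimming itself is a single slice in `stappFULL.py`: once the history grows beyond `maxTurns`, only the most recent messages are kept as context:
```
# from stappFULL.py: keep at most the last maxTurns messages
if st.session_state.numOfTurns > st.session_state.maxTurns:
    conv_messages = st.session_state.messages[-st.session_state.maxTurns:]
else:
    conv_messages = st.session_state.messages
```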
79 |
80 |
81 |
82 |
83 |
84 |
85 |
--------------------------------------------------------------------------------
/insrtuctions.txt:
--------------------------------------------------------------------------------
1 | GGUF file from:
2 | https://huggingface.co/bartowski/gemma-2-2b-it-GGUF
3 |
4 | Main repo:
5 | https://huggingface.co/google/gemma-2-2b-it
6 |
7 |
8 | pip install streamlit==1.36.0 llama-cpp-python==0.2.85 tiktoken
9 |
10 | MODEL CARD
11 | NCTX = 8192
12 | CHAT TEMPLATE YES
13 | SYSTEM MESSAGE NOT SUPPORTED
14 |
15 | >>> Llama(model_path='model/gemma-2-2b-it-Q5_K_M.gguf')
16 | llama_model_loader: loaded meta data with 39 key-value pairs and 288 tensors from model/gemma-2-2b-it-Q5_K_M.gguf (version GGUF V3 (latest))
17 | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
18 | llama_model_loader: - kv 0: general.architecture str = gemma2
19 | llama_model_loader: - kv 1: general.type str = model
20 | llama_model_loader: - kv 2: general.name str = Gemma 2 2b It
21 | llama_model_loader: - kv 3: general.finetune str = it
22 | llama_model_loader: - kv 4: general.basename str = gemma-2
23 | llama_model_loader: - kv 5: general.size_label str = 2B
24 | llama_model_loader: - kv 6: general.license str = gemma
25 | llama_model_loader: - kv 7: general.tags arr[str,2] = ["conversational", "text-generation"]
26 | llama_model_loader: - kv 8: gemma2.context_length u32 = 8192
27 |
28 | llama_model_loader: - kv 32: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol...
29 | llama_model_loader: - type f32: 105 tensors
30 | llama_model_loader: - type q5_K: 156 tensors
31 | llama_model_loader: - type q6_K: 27 tensors
32 | llm_load_vocab: special tokens cache size = 249
33 | llm_load_vocab: token to piece cache size = 1.6014 MB
34 | llm_load_print_meta: format = GGUF V3 (latest)
35 | llm_load_print_meta: arch = gemma2
36 | llm_load_print_meta: vocab type = SPM
37 | llm_load_print_meta: n_vocab = 256000
38 | llm_load_print_meta: n_merges = 0
39 | llm_load_print_meta: vocab_only = 0
40 | llm_load_print_meta: n_ctx_train = 8192
41 | llm_load_print_meta: model type = 2B
42 | llm_load_print_meta: model ftype = Q5_K - Medium
43 | llm_load_print_meta: model params = 2.61 B
44 | llm_load_print_meta: model size = 1.79 GiB (5.87 BPW)
45 | llm_load_print_meta: general.name = Gemma 2 2b It
46 | llm_load_print_meta: BOS token = 2 '<bos>'
47 | llm_load_print_meta: EOS token = 1 '<eos>'
48 | llm_load_print_meta: UNK token = 3 '<unk>'
49 | llm_load_print_meta: PAD token = 0 '<pad>'
50 | llm_load_print_meta: LF token = 227 '<0x0A>'
51 | llm_load_print_meta: EOT token = 107 '<end_of_turn>'
52 | llm_load_print_meta: max token length = 48
53 | llm_load_tensors: ggml ctx size = 0.13 MiB
54 | llm_load_tensors: CPU buffer size = 1828.42 MiB
55 |
56 | Available chat formats from metadata: chat_template.default
57 | Using gguf chat template: {{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '
58 | ' + message['content'] | trim + '<end_of_turn>
59 | ' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model
60 | '}}{% endif %}
61 | Using chat eos_token: <eos>
62 | Using chat bos_token: <bos>
63 |
64 |
65 | Prompt format
66 | <bos><start_of_turn>user
67 | {prompt}<end_of_turn>
68 | <start_of_turn>model
69 | <end_of_turn>
70 | <start_of_turn>model
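71 |
72 | The template above is applied automatically by llama-cpp-python, e.g. (a minimal sketch):
>>> llm = Llama(model_path='model/gemma-2-2b-it-Q5_K_M.gguf', n_ctx=8192)
>>> out = llm.create_chat_completion(messages=[{'role': 'user', 'content': 'Hi there!'}])
>>> print(out['choices'][0]['message']['content'])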
--------------------------------------------------------------------------------
/stappFULL.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from llama_cpp import Llama
3 | import warnings
4 | warnings.filterwarnings(action='ignore')
5 | import datetime
6 | import random
7 | import string
8 | from time import sleep
9 | import tiktoken
10 |
11 | # for counting the tokens in the prompt and in the result
12 | #context_count = len(encoding.encode(yourtext))
13 | encoding = tiktoken.get_encoding("r50k_base")
14 |
15 | verbosity = False
16 | nCTX = 8192
17 | sTOPS = ['<eos>'] # stop generation at Gemma's end-of-sequence token
18 | modelname = "Gemma2-2B-it"
19 | # Set the webpage title
20 | st.set_page_config(
21 | page_title=f"Your LocalGPT ✨ with {modelname}",
22 | page_icon="🌟",
23 | layout="wide")
24 |
25 | if "hf_model" not in st.session_state:
26 | st.session_state.hf_model = "Gemma2-2B-it"
27 | # Initialize chat history for the LLM
28 | if "messages" not in st.session_state:
29 | st.session_state.messages = []
30 |
31 | # Initialize the chat messages used for visualization only
32 | if "chatMessages" not in st.session_state:
33 | st.session_state.chatMessages = []
34 |
35 | if "repeat" not in st.session_state:
36 | st.session_state.repeat = 1.35
37 |
38 | if "temperature" not in st.session_state:
39 | st.session_state.temperature = 0.1
40 |
41 | if "maxlength" not in st.session_state:
42 | st.session_state.maxlength = 500
43 |
44 | if "speed" not in st.session_state:
45 | st.session_state.speed = 0.0
46 |
47 | if "numOfTurns" not in st.session_state:
48 | st.session_state.numOfTurns = 0
49 |
50 | if "maxTurns" not in st.session_state:
51 |     st.session_state.maxTurns = 5 # must be an odd number, greater than or equal to 5
52 |
53 | def writehistory(filename,text):
54 |     with open(filename, 'a', encoding='utf-8') as f:
55 |         f.write(text)
56 |         f.write('\n')
57 |     # the with-block closes the file automatically
58 |
59 | def genRANstring(n):
60 | """
61 |     n = number of random characters to generate
62 | """
63 | N = n
64 | res = ''.join(random.choices(string.ascii_uppercase +
65 | string.digits, k=N))
66 | return res
67 | #
68 |
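# st.cache_resource keeps a single Llama instance alive across Streamlit reruns,
# so the GGUF file is loaded from disk only once.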
69 | @st.cache_resource
70 | def create_chat():
71 |     # Load the local GGUF model with llama-cpp-python
72 | from llama_cpp import Llama
73 | client = Llama(
74 | model_path='model/gemma-2-2b-it-Q5_K_M.gguf',
75 | #n_gpu_layers=-1, #enable GPU
76 | temperature=0.24,
77 | n_ctx=nCTX,
78 | max_tokens=600,
79 | repeat_penalty=1.176,
80 | stop=sTOPS,
81 | verbose=verbosity,
82 | )
83 | print('loading gemma-2-2b-it-Q5_K_M.gguf with LlamaCPP...')
84 | return client
85 |
86 |
87 | # Create the session states
88 | if "logfilename" not in st.session_state:
89 | ## Logger file
90 | logfile = f'{genRANstring(5)}_log.txt'
91 | st.session_state.logfilename = logfile
92 |     # Write the first two entries to the history log
93 | writehistory(st.session_state.logfilename,f'{str(datetime.datetime.now())}\n\nYour own LocalGPT with 🌀 {modelname}\n---\n🧠🫡: You are a helpful assistant.')
94 | writehistory(st.session_state.logfilename,f'🌀: How may I help you today?')
95 |
96 |
97 | #AVATARS
98 | av_us = 'images/user.png' # './man.png' #"🦖" #A single emoji, e.g. "🧑💻", "🤖", "🦖". Shortcodes are not supported.
99 | av_ass = 'images/assistant2.png' #'./robot.png'
100 |
101 | ### START STREAMLIT UI
102 | # Create a header element
103 | st.image('images/Gemma-2-Banner.original.jpg',use_column_width=True)
104 | mytitle = f'> *🌟 {modelname} with a {nCTX}-token context window* - Turn-based chat available with a maximum capacity of :orange[**{st.session_state.maxTurns} messages**].'
105 | st.markdown(mytitle, unsafe_allow_html=True)
106 | #st.markdown('> Local Chat ')
107 | #st.markdown('---')
108 |
109 | # CREATE THE SIDEBAR
110 | with st.sidebar:
111 | st.image('images/banner.png', use_column_width=True)
112 | st.session_state.temperature = st.slider('Temperature:', min_value=0.0, max_value=1.0, value=0.65, step=0.01)
113 | st.session_state.maxlength = st.slider('Length reply:', min_value=150, max_value=2000,
114 | value=550, step=50)
115 | st.session_state.repeat = st.slider('Repeat Penalty:', min_value=0.0, max_value=2.0, value=1.176, step=0.02)
116 | st.session_state.turns = st.toggle('Turn based', value=False, help='Activate Conversational Turn Chat with History',
117 | disabled=False, label_visibility="visible")
118 | st.markdown(f"*Number of Max Turns*: {st.session_state.maxTurns}")
119 |     actualTurns = st.markdown(f"*Chat History Length*: :green[Good]")
120 | statspeed = st.markdown(f'💫 speed: {st.session_state.speed} t/s')
121 | btnClear = st.button("Clear History",type="primary", use_container_width=True)
122 | st.markdown(f"**Logfile**: {st.session_state.logfilename}")
123 |
124 | llm = create_chat()
125 |
126 | # Display chat messages from history on app rerun
127 | for message in st.session_state.chatMessages:
128 | if message["role"] == "user":
129 | with st.chat_message(message["role"],avatar=av_us):
130 | st.markdown(message["content"])
131 | else:
132 | with st.chat_message(message["role"],avatar=av_ass):
133 | st.markdown(message["content"])
134 | # Accept user input
135 | if myprompt := st.chat_input("What is an AI model?"):
136 | # Add user message to chat history
137 | st.session_state.messages.append({"role": "user", "content": myprompt})
138 | st.session_state.chatMessages.append({"role": "user", "content": myprompt})
139 | st.session_state.numOfTurns = len(st.session_state.messages)
140 | # Display user message in chat message container
141 | with st.chat_message("user", avatar=av_us):
142 | st.markdown(myprompt)
143 | usertext = f"user: {myprompt}"
144 | writehistory(st.session_state.logfilename,usertext)
145 | # Display assistant response in chat message container
146 | with st.chat_message("assistant",avatar=av_ass):
147 | message_placeholder = st.empty()
148 | with st.spinner("Thinking..."):
149 | start = datetime.datetime.now()
150 | response = ''
151 | conv_messages = []
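            # Turn-based mode sends a rolling window of history (at most maxTurns
            # messages); with the toggle off, only the latest user prompt is sent,
            # so each reply is stateless.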
152 | if st.session_state.turns:
153 | if st.session_state.numOfTurns > st.session_state.maxTurns:
154 | conv_messages = st.session_state.messages[-st.session_state.maxTurns:]
155 |                     actualTurns.markdown(f"*Chat History Length*: :red[Trimmed]")
156 | else:
157 | conv_messages = st.session_state.messages
158 | else:
159 | conv_messages.append(st.session_state.messages[-1])
160 | full_response = ""
161 | for chunk in llm.create_chat_completion(
162 | messages=conv_messages,
163 | temperature=st.session_state.temperature,
164 | repeat_penalty= st.session_state.repeat,
165 | stop=sTOPS,
166 | max_tokens=st.session_state.maxlength,
167 | stream=True,):
168 | try:
169 | if chunk["choices"][0]["delta"]["content"]:
170 | full_response += chunk["choices"][0]["delta"]["content"]
171 | message_placeholder.markdown(full_response + "🟡")
172 | delta = datetime.datetime.now() -start
173 | totalseconds = delta.total_seconds()
174 | prompttokens = len(encoding.encode(myprompt))
175 | assistanttokens = len(encoding.encode(full_response))
176 | totaltokens = prompttokens + assistanttokens
177 | st.session_state.speed = totaltokens/totalseconds
178 | statspeed.markdown(f'💫 speed: {st.session_state.speed:.2f} t/s')
179 |                 except (KeyError, ZeroDivisionError):
180 |                     pass # the first stream chunk carries only the role, not a 'content' delta
181 |
182 | delta = datetime.datetime.now() - start
183 | totalseconds = delta.total_seconds()
184 | prompttokens = len(encoding.encode(myprompt))
185 | assistanttokens = len(encoding.encode(full_response))
186 | totaltokens = prompttokens + assistanttokens
187 | st.session_state.speed = totaltokens/totalseconds
188 | statspeed.markdown(f'💫 speed: {st.session_state.speed:.2f} t/s')
189 | toregister = full_response + f"""
190 | ```
191 | 🧾 prompt tokens: {prompttokens}
192 | 📈 generated tokens: {assistanttokens}
193 | ⏳ generation time: {delta}
194 | 💫 speed: {st.session_state.speed:.3f} t/s
195 | ```"""
196 | message_placeholder.markdown(toregister)
197 | asstext = f"assistant: {toregister}"
198 | writehistory(st.session_state.logfilename,asstext)
199 | st.session_state.messages.append({"role": "assistant", "content": full_response})
200 | st.session_state.chatMessages.append({"role": "assistant", "content": toregister})
201 | st.session_state.numOfTurns = len(st.session_state.messages)
--------------------------------------------------------------------------------
/stappFULL-GPU.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from llama_cpp import Llama
3 | import warnings
4 | warnings.filterwarnings(action='ignore')
5 | import datetime
6 | import random
7 | import string
8 | from time import sleep
9 | import tiktoken
10 |
11 | # for counting the tokens in the prompt and in the result
12 | #context_count = len(encoding.encode(yourtext))
13 | encoding = tiktoken.get_encoding("r50k_base")
14 |
15 | verbosity = False
16 | nCTX = 8192
17 | sTOPS = ['<eos>'] # stop generation at Gemma's end-of-sequence token
18 | modelname = "Gemma2-2B-it"
19 | # Set the webpage title
20 | st.set_page_config(
21 | page_title=f"Your LocalGPT ✨ with {modelname}",
22 | page_icon="🌟",
23 | layout="wide")
24 |
25 | if "hf_model" not in st.session_state:
26 | st.session_state.hf_model = "Gemma2-2B-it"
27 | # Initialize chat history for the LLM
28 | if "messages" not in st.session_state:
29 | st.session_state.messages = []
30 |
31 | # Initialize the chat messages used for visualization only
32 | if "chatMessages" not in st.session_state:
33 | st.session_state.chatMessages = []
34 |
35 | if "repeat" not in st.session_state:
36 | st.session_state.repeat = 1.35
37 |
38 | if "temperature" not in st.session_state:
39 | st.session_state.temperature = 0.1
40 |
41 | if "maxlength" not in st.session_state:
42 | st.session_state.maxlength = 500
43 |
44 | if "speed" not in st.session_state:
45 | st.session_state.speed = 0.0
46 |
47 | if "numOfTurns" not in st.session_state:
48 | st.session_state.numOfTurns = 0
49 |
50 | if "maxTurns" not in st.session_state:
51 |     st.session_state.maxTurns = 5 # must be an odd number, greater than or equal to 5
52 |
53 | def writehistory(filename,text):
54 |     with open(filename, 'a', encoding='utf-8') as f:
55 |         f.write(text)
56 |         f.write('\n')
57 |     # the with-block closes the file automatically
58 |
59 | def genRANstring(n):
60 | """
61 |     n = number of random characters to generate
62 | """
63 | N = n
64 | res = ''.join(random.choices(string.ascii_uppercase +
65 | string.digits, k=N))
66 | return res
67 | #
68 |
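# st.cache_resource keeps a single Llama instance alive across Streamlit reruns,
# so the GGUF file is loaded from disk only once.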
69 | @st.cache_resource
70 | def create_chat():
71 |     # Load the local GGUF model with llama-cpp-python
72 | from llama_cpp import Llama
73 | client = Llama(
74 | model_path='model/gemma-2-2b-it-Q5_K_M.gguf',
75 |         n_gpu_layers=-1, # offload all layers to the GPU
76 | temperature=0.24,
77 | n_ctx=nCTX,
78 | max_tokens=600,
79 | repeat_penalty=1.176,
80 | stop=sTOPS,
81 | verbose=verbosity,
82 | )
83 | print('loading gemma-2-2b-it-Q5_K_M.gguf with LlamaCPP...')
84 | return client
85 |
86 |
87 | # Create the session states
88 | if "logfilename" not in st.session_state:
89 | ## Logger file
90 | logfile = f'{genRANstring(5)}_log.txt'
91 | st.session_state.logfilename = logfile
92 |     # Write the first two entries to the history log
93 | writehistory(st.session_state.logfilename,f'{str(datetime.datetime.now())}\n\nYour own LocalGPT with 🌀 {modelname}\n---\n🧠🫡: You are a helpful assistant.')
94 | writehistory(st.session_state.logfilename,f'🌀: How may I help you today?')
95 |
96 |
97 | #AVATARS
98 | av_us = 'images/user.png' # './man.png' #"🦖" #A single emoji, e.g. "🧑💻", "🤖", "🦖". Shortcodes are not supported.
99 | av_ass = 'images/assistant2.png' #'./robot.png'
100 |
101 | ### START STREAMLIT UI
102 | # Create a header element
103 | st.image('images/Gemma-2-Banner.original.jpg',use_column_width=True)
104 | mytitle = f'> *🌟 {modelname} with a {nCTX}-token context window* - Turn-based chat available with a maximum capacity of :orange[**{st.session_state.maxTurns} messages**].'
105 | st.markdown(mytitle, unsafe_allow_html=True)
106 | #st.markdown('> Local Chat ')
107 | #st.markdown('---')
108 |
109 | # CREATE THE SIDEBAR
110 | with st.sidebar:
111 | st.image('images/banner.png', use_column_width=True)
112 | st.session_state.temperature = st.slider('Temperature:', min_value=0.0, max_value=1.0, value=0.65, step=0.01)
113 | st.session_state.maxlength = st.slider('Length reply:', min_value=150, max_value=2000,
114 | value=550, step=50)
115 | st.session_state.repeat = st.slider('Repeat Penalty:', min_value=0.0, max_value=2.0, value=1.176, step=0.02)
116 | st.session_state.turns = st.toggle('Turn based', value=False, help='Activate Conversational Turn Chat with History',
117 | disabled=False, label_visibility="visible")
118 | st.markdown(f"*Number of Max Turns*: {st.session_state.maxTurns}")
119 |     actualTurns = st.markdown(f"*Chat History Length*: :green[Good]")
120 | statspeed = st.markdown(f'💫 speed: {st.session_state.speed} t/s')
121 | btnClear = st.button("Clear History",type="primary", use_container_width=True)
122 | st.markdown(f"**Logfile**: {st.session_state.logfilename}")
123 |
124 | llm = create_chat()
125 |
126 | # Display chat messages from history on app rerun
127 | for message in st.session_state.chatMessages:
128 | if message["role"] == "user":
129 | with st.chat_message(message["role"],avatar=av_us):
130 | st.markdown(message["content"])
131 | else:
132 | with st.chat_message(message["role"],avatar=av_ass):
133 | st.markdown(message["content"])
134 | # Accept user input
135 | if myprompt := st.chat_input("What is an AI model?"):
136 | # Add user message to chat history
137 | st.session_state.messages.append({"role": "user", "content": myprompt})
138 | st.session_state.chatMessages.append({"role": "user", "content": myprompt})
139 | st.session_state.numOfTurns = len(st.session_state.messages)
140 | # Display user message in chat message container
141 | with st.chat_message("user", avatar=av_us):
142 | st.markdown(myprompt)
143 | usertext = f"user: {myprompt}"
144 | writehistory(st.session_state.logfilename,usertext)
145 | # Display assistant response in chat message container
146 | with st.chat_message("assistant",avatar=av_ass):
147 | message_placeholder = st.empty()
148 | with st.spinner("Thinking..."):
149 | start = datetime.datetime.now()
150 | response = ''
151 | conv_messages = []
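            # Turn-based mode sends a rolling window of history (at most maxTurns
            # messages); with the toggle off, only the latest user prompt is sent,
            # so each reply is stateless.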
152 | if st.session_state.turns:
153 | if st.session_state.numOfTurns > st.session_state.maxTurns:
154 | conv_messages = st.session_state.messages[-st.session_state.maxTurns:]
155 |                     actualTurns.markdown(f"*Chat History Length*: :red[Trimmed]")
156 | else:
157 | conv_messages = st.session_state.messages
158 | else:
159 | conv_messages.append(st.session_state.messages[-1])
160 | full_response = ""
161 | for chunk in llm.create_chat_completion(
162 | messages=conv_messages,
163 | temperature=st.session_state.temperature,
164 | repeat_penalty= st.session_state.repeat,
165 | stop=sTOPS,
166 | max_tokens=st.session_state.maxlength,
167 | stream=True,):
168 | try:
169 | if chunk["choices"][0]["delta"]["content"]:
170 | full_response += chunk["choices"][0]["delta"]["content"]
171 | message_placeholder.markdown(full_response + "🟡")
172 | delta = datetime.datetime.now() -start
173 | totalseconds = delta.total_seconds()
174 | prompttokens = len(encoding.encode(myprompt))
175 | assistanttokens = len(encoding.encode(full_response))
176 | totaltokens = prompttokens + assistanttokens
177 | st.session_state.speed = totaltokens/totalseconds
178 | statspeed.markdown(f'💫 speed: {st.session_state.speed:.2f} t/s')
179 |                 except (KeyError, ZeroDivisionError):
180 |                     pass # the first stream chunk carries only the role, not a 'content' delta
181 |
182 | delta = datetime.datetime.now() - start
183 | totalseconds = delta.total_seconds()
184 | prompttokens = len(encoding.encode(myprompt))
185 | assistanttokens = len(encoding.encode(full_response))
186 | totaltokens = prompttokens + assistanttokens
187 | st.session_state.speed = totaltokens/totalseconds
188 | statspeed.markdown(f'💫 speed: {st.session_state.speed:.2f} t/s')
189 | toregister = full_response + f"""
190 | ```
191 | 🧾 prompt tokens: {prompttokens}
192 | 📈 generated tokens: {assistanttokens}
193 | ⏳ generation time: {delta}
194 | 💫 speed: {st.session_state.speed:.3f} t/s
195 | ```"""
196 | message_placeholder.markdown(toregister)
197 | asstext = f"assistant: {toregister}"
198 | writehistory(st.session_state.logfilename,asstext)
199 | st.session_state.messages.append({"role": "assistant", "content": full_response})
200 | st.session_state.chatMessages.append({"role": "assistant", "content": toregister})
201 | st.session_state.numOfTurns = len(st.session_state.messages)
202 |
203 |
--------------------------------------------------------------------------------
/95II7_log.txt:
--------------------------------------------------------------------------------
1 | 2024-08-07 15:00:19.594594
2 |
3 | Your own LocalGPT with 🌀 Gemma2-2B-it
4 | ---
5 | 🧠🫡: You are a helpful assistant.
6 | 🌀: How may I help you today?
7 | user: Hi there!
8 | assistant: Hello! 👋 How can I help you today? 😊
9 |
10 | ```
11 |
12 | 🧾 prompt tokens: 3
13 | 📈 generated tokens: 15
14 | ⏳ generation time: 0:00:01.764205
15 | 💫 speed: 10.202895921959183 t/s
16 | ```
17 | user: I am Fabio Matricardi, a senior process control system engineer in the oil and gas industries
18 | assistant: It's nice to meet you, Fabio! 😊
19 |
20 | That sounds like an exciting field. I understand that working as a Process Control System Engineer in the Oil & Gas industry requires expertise in complex systems and regulations.
21 |
22 | What kind of projects have you been involved with? Are there any particular challenges or technologies you find particularly interesting within this sector? Perhaps we could discuss some recent advancements, best practices, or even brainstorm ideas for future applications!
23 |
24 |
25 | Let me know what's on your mind. I'm eager to learn more about the work you do and contribute in a way that might be helpful. 😄 👍
26 |
27 | ```
28 |
29 | 🧾 prompt tokens: 20
30 | 📈 generated tokens: 132
31 | ⏳ generation time: 0:00:14.682562
32 | 💫 speed: 10.352416696759052 t/s
33 | ```
34 | user: do you know what my job is about?
35 | assistant: Yes, I do! 😊 As a Process Control System Engineer in the Oil and Gas industry, your work involves ensuring safe and efficient operations of complex processes.
36 |
37 | Here's a general overview:
38 |
39 | * **Designing and implementing control systems:** You create algorithms that monitor and adjust various parameters like flow rates, pressure, temperature, etc., to maintain optimal process conditions.
40 | * **Troubleshooting and problem-solving:** You identify issues with the system's performance and design solutions using your expertise in instrumentation, automation, and data analysis.
41 | * **Safety & Regulatory Compliance:** You ensure that all systems adhere to strict safety protocols and environmental regulations, such as those related to flaring, emissions control, or hazardous material handling.
42 | * **Data Analysis & Optimization:** You utilize sensor data from various sources to optimize processes for efficiency, reduce waste, and improve overall profitability.
43 |
44 | It's a highly technical field that combines engineering principles with an understanding of the specific challenges of the oil and gas industry!
45 |
46 |
47 | What specifically are you working on currently? I can share information about related technologies or topics if you have any specific areas in mind.
48 |
49 |
50 |
51 | ```
52 | 🧾 prompt tokens: 40
53 | 📈 generated tokens: 187
54 | ⏳ generation time: 0:00:23.965224
55 | 💫 speed: 10.61900824982064 t/s
56 | ```
57 | ```
58 |
59 | 🧾 prompt tokens: 9
60 | 📈 generated tokens: 308
61 | ⏳ generation time: 0:00:33.272206
62 | 💫 speed: 9.527471668094385 t/s
63 | ```
64 | user: I am working on a FLNG project. what can you tell me about FLNG?
65 | assistant: Ah, that's exciting! You're diving into the world of Floating Liquefied Natural Gas (FLNG). It's a fascinating and complex area.
66 |
67 | Here’s an overview to get us started:
68 |
69 |
70 | **What is FLNG?**
71 |
72 | * **It's for remote gas fields:** Imagine those vast offshore natural gas reserves, where traditional drilling methods are difficult or too expensive. FLNG comes in handy! It allows production from these locations without having the need for a fixed platform (like an LNG plant).
73 | * **Floating processing unit:** Instead of building a permanent facility on land, you have a large floating vessel equipped with all the technology needed to process natural gas extracted from offshore fields.
74 |
75 | **How it Works:**
76 |
77 | 1. **Gas Extraction:** The FLNG starts by extracting natural gas from wells in the deep ocean or remote locations.
78 | 2. **Liquefaction & Separation:** The extracted gas is then processed on board, where it's liquefied (cooled) to make it easier for transport and storage.
79 | 3. **Ship Loading:** Once liquefied, the LNG is loaded onto a ship that will take it to a receiving terminal for further processing or export.
80 |
81 | **Advantages of FLNG:**
82 |
83 | * **Accessibility:** It allows access to previously inaccessible gas fields where building fixed infrastructure would be costly.
84 | * **Lower Costs:** FLNG can significantly reduce costs compared to traditional LNG plants due to its simpler construction and fewer engineering challenges.
85 | * **Flexibility:** The modular design allows for customization, making it adaptable to various types of natural gas reserves and offshore conditions.
86 |
87 | **Challenges of FLNG:**
88 |
89 | * **High Initial Capital Costs**: Building an FLNG vessel is a very expensive undertaking compared to traditional LNG plants due to its complexity.
90 | * **Environmental Impact:** While FLNG aims for reduced emissions compared to land-based LNG, there are still environmental concerns related to its construction and potential leaks during operation.
91 | * **Complex Operations:** FLNG requires skilled personnel with experience in offshore operations, complex engineering systems, and strict safety protocols.
92 |
93 | **Where it's used?**
94 | * FLNG is increasingly popular for developing countries where traditional LNG infrastructure can be challenging or expensive to build.
95 | * The technology has also been employed in remote regions of the world like:
96 | - Southeast Asia (Indonesia)
97 | - Australia
98 | -
99 | ```
100 |
101 | 🧾 prompt tokens: 18
102 | 📈 generated tokens: 549
103 | ⏳ generation time: 0:01:14.747818
104 | 💫 speed: 7.585505706668254 t/s
105 | ```
106 | user: continue
107 | assistant: Here are some more insights into FLNG, geared towards someone working on a project like yours:
108 |
109 | **Technical Aspects:**
110 |
111 | * **Advanced Control Systems:** The heart of an FLNG is its complex control system. You'll need deep understanding of process automation (PLC systems), data acquisition and analysis for real-time monitoring, as well as advanced algorithms for optimal gas processing under varying conditions.
112 | * **Offshore Engineering Expertise:** The vessel itself needs to withstand harsh weather, corrosion from seawater, and potential seismic activity. Your knowledge of offshore platforms, mooring systems, and structural integrity is crucial.
113 | * **Safety Protocols & Regulations:** FLNG operations require stringent safety measures due to the risks associated with working at sea. You'll need to understand regulations like MARPOL (Marine Pollution Prevention) for minimizing environmental impact and ensuring safe handling of hazardous materials.
114 |
115 | **Operational Considerations:**
116 |
117 | * **Gas Handling Capacity:** FLNGs come in various sizes, each designed for different gas reserves. Your understanding of flow rates, pressure management, and temperature control is essential to ensure smooth operation.
118 | * **Ship Loading & Transportation:** Efficient loading procedures on the ship are critical for maximizing efficiency and minimizing delays during LNG transfer.
119 | * **Project Management and Collaboration:** Managing a complex project like FLNG requires strong communication with contractors, engineers from various disciplines (mechanical, electrical, civil), shipping companies, and regulatory agencies.
120 |
121 | **Current Trends & Future Developments:**
122 |
123 | * **Smart Technologies:** FLNGs are increasingly incorporating advanced sensors, AI-powered analytics, and predictive maintenance to optimize operations and reduce downtime.
124 | * **Modular Design:** The FLNG industry is moving towards modular units that can be easily assembled, reducing construction time and costs compared to traditional ship building methods.
125 | * **Green Technology Integration:** There’s a growing push for more sustainable solutions in FLNG design, such as using renewable energy sources or minimizing emissions from the vessel's operations.
126 |
127 |
128 |
129 | **If you have specific questions about your project or any of these aspects, don't hesitate to ask! I can provide further information and details based on what you need.**
130 |
131 | Let me know if you want to explore something in particular like:
132 |
133 |
134 | * **Specific challenges related to your FLNG project?**
135 | * **Technical specifications for different types of FLNGs?**
136 | * **Case studies of successful FLNG projects?**
137 |
138 |
139 |
140 |
141 | ```
142 |
143 | 🧾 prompt tokens: 1
144 | 📈 generated tokens: 534
145 | ⏳ generation time: 0:01:26.082927
146 | 💫 speed: 6.214937370798277 t/s
147 | ```
148 | user: Case studies of successful FLNG projects?
149 | assistant: You're in for a fascinating journey! Here are some notable case studies of successful FLNG projects, each offering insights into different aspects of the technology:
150 |
151 | **1. Shell Prelude Offshore Floating Liquefied Natural Gas (FLNG) Project:**
152 | * **Location:** Browse Basin offshore Western Australia
153 | * **Significance:** This is one of the largest and most complex FLNG operations in the world, boasting a massive liquefaction capacity. It's currently under construction with a projected startup date in late 2023.
154 | * **Key Points to Note:** The Prelude project highlights the challenges of building such large-scale floating facilities while ensuring environmental safety, especially when dealing with high levels of production and complex operations at sea.
155 |
156 | **2. Petronas FLNG Malaysia (FLNG) Project:**
157 | * **Location:** Sabah offshore Malaysian waters
158 | * **Significance:** This project is a prime example of the success story of FLNG in Southeast Asia. It's been operational since late 2017, producing LNG from the nearby Kimanis Gas Field.
159 | * **Key Points to Note:** Petronas’ experience with FLNG provides valuable insights into cost-effective operation and efficient gas processing for a specific region like Malaysia.
160 |
161 | **3. Petróleo Brasileiro ( Petrobras) FLNG Project in Brazil:**
162 | * **Location:** Offshore fields off the coast of Rio de Janeiro, Brazil.
163 | * **Significance:** This project demonstrates the ability to utilize FLNG technology even in challenging environments with complex geological conditions and high levels of marine traffic near the production area.
164 |
165 | **Key Takeaways from These Projects:**
166 |
167 |
168 | * **Technological Advancements:** Each project showcases advancements in materials science (corrosion-resistant alloys), engineering design for harsh ocean conditions, and sophisticated control systems that optimize gas liquefaction, storage, and transportation.
169 | * **Economic Viability:** FLNG projects have proven to be economically viable options in remote regions with limited infrastructure or high water depth where traditional LNG plants would be prohibitively expensive.
170 | * **Environmental Responsibility:** While FLNG still has room for improvement regarding environmental impact, these projects demonstrate commitment to reducing emissions and minimizing the footprint of operations at sea through technologies like scrubbers that reduce sulfur dioxide pollution from exhaust gases.
171 |
172 | **Where you can find more information:**
173 |
174 |
175 | * **Industry Publications:** Look for articles in journals such as "LNG Journal," "The Offshore Engineer
176 | ```
177 |
178 | 🧾 prompt tokens: 8
179 | 📈 generated tokens: 552
180 | ⏳ generation time: 0:01:31.446989
181 | 💫 speed: 6.123766415097603 t/s
182 | ```
183 |
--------------------------------------------------------------------------------