├── README.md └── llm_dialogue_dataset_full_version.ipynb /README.md: -------------------------------------------------------------------------------- 1 | # ai-twinkle-llm-lab-llm-dialogue-dataset 2 | -------------------------------------------------------------------------------- /llm_dialogue_dataset_full_version.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "id": "view-in-github", 7 | "colab_type": "text" 8 | }, 9 | "source": [ 10 | "\"Open" 11 | ] 12 | }, 13 | { 14 | "cell_type": "markdown", 15 | "source": [ 16 | "# 使用 API 建立對話式訓練資料集(Colab 實作)\n", 17 | "\n", 18 | "此 Colab 實作將會完整處理實作對話資料生成工作\n", 19 | "\n", 20 | "註明:本 Colab 是由 Simon Liu 根據 [Twinkle AI - 使用 Gemma-3-12B-it API 建立對話式訓練資料集(Colab 實作)](https://github.com/ai-twinkle/llm-lab/blob/main/courses/2025-08-llm-dialogue-dataset/00_setup_and_api_call.ipynb) 編修完成" 21 | ], 22 | "metadata": { 23 | "id": "2TLRvB-1RJrm" 24 | }, 25 | "id": "2TLRvB-1RJrm" 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "source": [ 30 | "# I am Simon\n", 31 | "\n", 32 | "大家好,我是 Simon 劉育維,是一位 AI 領域解決方案專家,目前也擔任 Google GenAI 領域開發者專家 (GDE),期待能夠幫助企業導入人工智慧相關技術解決問題。如果這篇文章對您有幫助,請在 Medium 上按一下鼓勵,並追蹤我的個人帳號,這樣您就可以隨時閱讀我所撰寫的文章。歡迎在我的 Linkedin 上留言提供意見,並與我一起討論有關人工智慧的主題,期待能夠對大家有所幫助!" 33 | ], 34 | "metadata": { 35 | "id": "QjRAgnGbT76D" 36 | }, 37 | "id": "QjRAgnGbT76D" 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "id": "4df8ae2d-0c56-48f2-bef6-d7798800bfd5", 42 | "metadata": { 43 | "id": "4df8ae2d-0c56-48f2-bef6-d7798800bfd5" 44 | }, 45 | "source": [ 46 | "## 1 對話資料生成 & 對話集格式介紹\n", 47 | "\n", 48 | "在這個章節,目標是建立一份「可持續擴充」的對話資料集。主要的步驟如下:\n", 49 | "\n", 50 | "1. 使用 OpenAI SDK 連 API\n", 51 | "2. 介紹對話資料的常見格式:**Alpaca**, **ShareGPT**,以及 **OpenAI** 格式(我們採用後者)\n", 52 | "3. 
探討 `.jsonl` 格式與 `.parquet` 格式的優缺點,並說明 HF Hub 對 parquet 的轉換支援\n", 53 | " (上傳 parquet 時 HF 會自動生成 `.parquet` 分支與 viewer)" 54 | ] 55 | }, 56 | { 57 | "cell_type": "markdown", 58 | "source": [ 59 | "### 1.1 初始化 OpenAI API 環境參數" 60 | ], 61 | "metadata": { 62 | "id": "Cp6NGO-N6IpY" 63 | }, 64 | "id": "Cp6NGO-N6IpY" 65 | }, 66 | { 67 | "cell_type": "code", 68 | "source": [ 69 | "# 取得 Colab 金鑰環境變數\n", 70 | "\n", 71 | "from google.colab import userdata" 72 | ], 73 | "metadata": { 74 | "id": "MpqGewAZCGJi" 75 | }, 76 | "id": "MpqGewAZCGJi", 77 | "execution_count": 1, 78 | "outputs": [] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "source": [ 83 | "# 初始化 OpenAI 套件設定\n", 84 | "# @markdown 請設定以下 OpenAI Compatible 的變數數值\n", 85 | "\n", 86 | "\n", 87 | "from openai import OpenAI\n", 88 | "\n", 89 | "# 請去申請 Google API Key ,然後放在 Colab 左邊側邊欄,「鑰匙」的地方,保護你的 Key\n", 90 | "API_KEY = userdata.get('GOOGLE_API_KEY') #@param {type:\"string\"}\n", 91 | "BASE_URL = \"https://generativelanguage.googleapis.com/v1beta/openai/\" #@param {type:\"string\"}\n", 92 | "MODEL = \"gemini-2.0-flash\" #@param {type:\"string\"}\n", 93 | "\n", 94 | "client = OpenAI(\n", 95 | " api_key=API_KEY,\n", 96 | " base_url=BASE_URL\n", 97 | ")\n", 98 | "\n", 99 | "print(\"API client 已初始化\")" 100 | ], 101 | "metadata": { 102 | "colab": { 103 | "base_uri": "https://localhost:8080/" 104 | }, 105 | "id": "CoHUqXvH299O", 106 | "outputId": "b3de8d22-ec1c-4e7a-f70d-095f8346af1d" 107 | }, 108 | "id": "CoHUqXvH299O", 109 | "execution_count": 2, 110 | "outputs": [ 111 | { 112 | "output_type": "stream", 113 | "name": "stdout", 114 | "text": [ 115 | "API client 已初始化\n" 116 | ] 117 | } 118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "source": [ 123 | "# 測試 OpenAI 套件設定\n", 124 | "\n", 125 | "try:\n", 126 | " resp = client.chat.completions.create(\n", 127 | " model=MODEL,\n", 128 | " messages=[\n", 129 | " {\"role\": \"system\", \"content\": \"你是專業的助理,使用繁體中文回答。\"},\n", 130 | " {\"role\": \"user\", \"content\": \"請用一句話介紹什麼是大型語言模型(LLM)。\"}\n", 131 | " ],\n", 132 | " temperature=0.7,\n", 133 | " max_tokens=256,\n", 134 | " )\n", 135 | " print(\"✅ 呼叫成功\")\n", 136 | "except Exception as e:\n", 137 | " print(\"❌ 呼叫失敗,請檢查 API Key / base_url / 模型名稱是否正確。\")\n", 138 | " raise e\n", 139 | "\n", 140 | "if resp.choices:\n", 141 | " print(\"=== Model Output ===\")\n", 142 | " print(resp.choices[0].message.content)\n", 143 | "else:\n", 144 | " import json\n", 145 | " print(\"⚠️ 非預期回傳格式:\")\n", 146 | " print(json.dumps(resp.model_dump(), ensure_ascii=False, indent=2))" 147 | ], 148 | "metadata": { 149 | "colab": { 150 | "base_uri": "https://localhost:8080/" 151 | }, 152 | "id": "ZSGiTu4BS1MF", 153 | "outputId": "2fb7563e-af7e-4e37-ef30-d84e43a9dd0d" 154 | }, 155 | "id": "ZSGiTu4BS1MF", 156 | "execution_count": 3, 157 | "outputs": [ 158 | { 159 | "output_type": "stream", 160 | "name": "stdout", 161 | "text": [ 162 | "✅ 呼叫成功\n", 163 | "=== Model Output ===\n", 164 | "大型語言模型 (LLM) 是一種經過大量文本數據訓練的人工智慧模型,能夠理解、生成和預測人類語言。\n", 165 | "\n" 166 | ] 167 | } 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "id": "5ffa29ba-2e60-4041-a21f-c8f328f61304", 173 | "metadata": { 174 | "id": "5ffa29ba-2e60-4041-a21f-c8f328f61304" 175 | }, 176 | "source": [ 177 | "### 1.2 常見對話資料集格式比較\n", 178 | "\n" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "id": "241fddab-ede4-4d95-86b7-569bee685087", 184 | "metadata": { 185 | "id": "241fddab-ede4-4d95-86b7-569bee685087" 186 | }, 187 | "source": [ 188 | "
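Before looking at the figure below, it helps to see the three record layouts named above side by side. The following sketch renders one invented single-turn QA pair as an Alpaca record, a ShareGPT record, and an OpenAI-messages record (the layout this notebook adopts); the field names follow common community conventions, and the sample text is made up purely for illustration.

```python
# The same toy QA pair expressed in the three common dataset layouts.
# Illustrative only; field names follow common community conventions.

question = "請用一句話介紹什麼是大型語言模型(LLM)。"
answer = "大型語言模型是以大量文本訓練、能理解與生成自然語言的 AI 模型。"

# Alpaca: flat instruction / input / output fields
alpaca_record = {
    "instruction": question,
    "input": "",
    "output": answer,
}

# ShareGPT: a conversations list with from/value keys
sharegpt_record = {
    "conversations": [
        {"from": "human", "value": question},
        {"from": "gpt", "value": answer},
    ],
}

# OpenAI messages: role/content pairs (the format used throughout this notebook)
openai_record = {
    "messages": [
        {"role": "system", "content": "你是專業的助理,使用繁體中文回答。"},
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ],
}
```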

\n", 189 | "
\n", 190 | " 圖 1:Wiki 對話格式示意圖\n", 191 | "

" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "id": "cd084c2b-1741-4a4d-8932-5e6dfdfafcfa", 197 | "metadata": { 198 | "id": "cd084c2b-1741-4a4d-8932-5e6dfdfafcfa" 199 | }, 200 | "source": [ 201 | "### 1.3 JSONL vs Parquet 比較\n", 202 | "\n", 203 | "| 格式 | 優點 | 缺點 |\n", 204 | "|----------|-------------------------------|------------------------------|\n", 205 | "| `.jsonl` | 易讀、輕量、開發友善 | 檔案大、大量數據讀取效率較低 |\n", 206 | "| `.parquet` | 壓縮效果好、查詢效能高、支援 HF 轉換 | 不易直接閱讀,需使用工具處理 |\n", 207 | "\n", 208 | "注意:即使你上傳 `.jsonl`,HF Hub 也可能幫你生成 `.parquet` 分支,方便瀏覽與載入。" 209 | ] 210 | }, 211 | { 212 | "cell_type": "markdown", 213 | "id": "e0a123e6-20c2-41d3-a235-7cfc8974c969", 214 | "metadata": { 215 | "id": "e0a123e6-20c2-41d3-a235-7cfc8974c969" 216 | }, 217 | "source": [ 218 | "

\n", 219 | "
\n", 220 | " 圖 2:HF Hub 自動生成的 .parquet 分支\n", 221 | "

" 222 | ] 223 | }, 224 | { 225 | "cell_type": "markdown", 226 | "id": "911efc9a-b8b9-4b28-92d8-c6d405ce31e3", 227 | "metadata": { 228 | "id": "911efc9a-b8b9-4b28-92d8-c6d405ce31e3" 229 | }, 230 | "source": [ 231 | "### 1.4 Reference-Free vs Reference-Based\n", 232 | "\n", 233 | "- **Reference-Free(無參考)**:用一些 seed prompt 引導模型生成。最早出自 [Self-Instruct: Aligning Language Models with Self-Generated Instructions\n", 234 | "](https://arxiv.org/abs/2212.10560)。\n", 235 | "- **Reference-Based(參考內容)**:使用真實資料片段(例如 Wiki 條目)作 prompt 佐料,讓生成內容更 grounded。" 236 | ] 237 | }, 238 | { 239 | "cell_type": "markdown", 240 | "id": "582b3052-568d-4ab3-aa56-cc5fe2c942ab", 241 | "metadata": { 242 | "id": "582b3052-568d-4ab3-aa56-cc5fe2c942ab" 243 | }, 244 | "source": [ 245 | "#### 1.4.1 Reference-Free 實作\n", 246 | "\n", 247 | "在 Reference-Free 的情境下,我們並不依賴任何外部知識庫或文件,而是透過 **seed 任務 (seed task)** 來驅動模型自行生成資料。 \n", 248 | "這些 seed 任務通常包含一個 **instruction(指令)**,加上少量的 **instance(範例輸入/輸出對)**,作為模型模仿與延伸的起點。 \n", 249 | "\n", 250 | "這種方法的代表性工作是 *Self-Instruct*,它透過人工設計的一些高品質種子指令,讓模型去「舉一反三」產生更多指令和對應答案,最終建立出龐大的資料集。\n", 251 | "\n", 252 | "以下是一個取自 [self-instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl) seed 範例,主題是「早餐建議」。 \n", 253 | "\n", 254 | "```json\n", 255 | "{\n", 256 | " \"id\": \"seed_task_0\",\n", 257 | " \"name\": \"breakfast_suggestion\",\n", 258 | " \"instruction\": \"Is there anything I can eat for a breakfast that doesn't include eggs, yet includes protein, and has roughly 700-1000 calories?\",\n", 259 | " \"instances\": [\n", 260 | " {\n", 261 | " \"input\": \"\",\n", 262 | " \"output\": \"Yes, you can have 1 oatmeal banana protein shake and 4 strips of bacon. The oatmeal banana protein shake may contain 1/2 cup oatmeal, 60 grams whey protein powder, 1/2 medium banana, 1 tbsp flaxseed oil and 1/2 cup water, totaling about 550 calories. 
The 4 strips of bacon contains about 200 calories.\"\n", 263 | " }\n", 264 | " ],\n", 265 | " \"is_classification\": false\n", 266 | "}\n", 267 | "```\n", 268 | "說明:\n", 269 | "- id:任務的唯一識別碼。\n", 270 | "- name:任務名稱,方便辨識。\n", 271 | "- instruction:給模型的主要問題或指令。\n", 272 | "- instances:包含輸入/輸出對,本例中 input 為空,代表模型直接依 instruction 回答;output 是一個可能的解答。\n", 273 | "- is_classification:標記此任務是否為分類型問題(此例為否)。" 274 | ] 275 | }, 276 | { 277 | "cell_type": "markdown", 278 | "id": "52128cae-647b-43da-913d-04aed64fc783", 279 | "metadata": { 280 | "id": "52128cae-647b-43da-913d-04aed64fc783" 281 | }, 282 | "source": [ 283 | "在實務中,我們會設計數十到數百個 seed 任務,涵蓋不同領域與指令型態,作為 Reference-Free 資料生成的核心基礎。\n", 284 | "\n", 285 | "不過,我們的作法並**不完全等同於 Self-Instruct**。\n", 286 | "相較於 Self-Instruct 的完整 pipeline(如:過濾、去重、迭代擴展),我們傾向採用更簡單直接的方式:\n", 287 | "\t1.\t人工撰寫少量高品質 seed 指令。\n", 288 | "\t2.\t要求模型基於這些 seed 產生新的 seed 指令(但僅限輸出 seed 本文,避免雜訊)。\n", 289 | "\t3.\t再利用這些新 seed 指令,由模型生成單輪問答配對。\n", 290 | "\n", 291 | "這樣的流程更輕量,雖然缺少複雜的篩選與多輪迭代,但對於課程實作與教學目標而言,已經能清楚展現 Reference-Free 的核心精神。" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "source": [ 297 | "base_seed = \"\"\"Is there anything I can eat for a breakfast that doesn't include eggs, yet includes protein, and has roughly 700-1000 calories?\"\"\"" 298 | ], 299 | "metadata": { 300 | "id": "Zhfi1rGdVo7i" 301 | }, 302 | "id": "Zhfi1rGdVo7i", 303 | "execution_count": 4, 304 | "outputs": [] 305 | }, 306 | { 307 | "cell_type": "code", 308 | "execution_count": 5, 309 | "id": "727cbf67-aca6-4e63-854f-d08a889ea711", 310 | "metadata": { 311 | "colab": { 312 | "base_uri": "https://localhost:8080/" 313 | }, 314 | "id": "727cbf67-aca6-4e63-854f-d08a889ea711", 315 | "outputId": "82e43e30-b32d-4ac9-c9b8-bf36d066d007" 316 | }, 317 | "outputs": [ 318 | { 319 | "output_type": "stream", 320 | "name": "stdout", 321 | "text": [ 322 | "🔹 原始 seed: Is there anything I can eat for a breakfast that doesn't include eggs, yet includes protein, and has roughly 700-1000 calories?\n", 323 | "🔸 新的 seed: 推薦一份不含乳製品、高纖且富含健康脂肪的午餐食譜,熱量約 500 大卡。\n" 324 | ] 325 | } 326 | ], 327 | "source": [ 328 | "# Step 1: 以既有 seed 為出發點,要求 LLM 產生「不同但相關」的新 seed。\n", 329 | "# 重要:嚴格要求只輸出 seed 文字本身,不要任何多餘說明、標籤或引號。\n", 330 | "\n", 331 | "from openai import OpenAI\n", 332 | "import re\n", 333 | "\n", 334 | "seed_gen_messages = [\n", 335 | " {\n", 336 | " \"role\": \"system\",\n", 337 | " \"content\": (\n", 338 | " \"你是一個資料生成器。你的任務是『根據給定 seed,產生一則不同但主題相關的 seed 指令』。\\n\"\n", 339 | " \"務必遵守:\\n\"\n", 340 | " \"1) 僅輸出新的 seed 指令本身(繁體中文)。\\n\"\n", 341 | " \"2) 不要加任何解釋、前後文、引號、標點裝飾或標籤。\\n\"\n", 342 | " \"3) 一至兩句話,清楚可執行。\\n\"\n", 343 | " \"4) 避免重複與原 seed 完全相同的限制條件或措辭,但主題需相關。\\n\"\n", 344 | " )\n", 345 | " },\n", 346 | " {\n", 347 | " \"role\": \"user\",\n", 348 | " \"content\": (\n", 349 | " f\"這是原始 seed:\\n{base_seed}\\n\\n\"\n", 350 | " \"請依規則產生一個新的 seed 指令(繁體中文)。只輸出新 seed 本文,其他一律不要。\"\n", 351 | " )\n", 352 | " },\n", 353 | "]\n", 354 | "\n", 355 | "resp_seed = client.chat.completions.create(\n", 356 | " model=MODEL,\n", 357 | " messages=seed_gen_messages,\n", 358 | " temperature=0.9,\n", 359 | " max_tokens=200,\n", 360 | ")\n", 361 | "\n", 362 | "new_seed_instruction_raw = resp_seed.choices[0].message.content.strip()\n", 363 | "\n", 364 | "# 基本清理:移除常見多餘字樣(保險)\n", 365 | "def sanitize_seed(text: str) -> str:\n", 366 | " text = text.strip()\n", 367 | " # 移除可能的程式碼圍欄或引號\n", 368 | " text = re.sub(r\"^```.*?\\n|\\n```$\", \"\", text, flags=re.DOTALL) # 去掉 ``` 區塊\n", 369 | " text = text.strip(\"「」\\\"'` \\n\\t\")\n", 370 | " # 去掉可能的前綴\n", 371 | " text = 
re.sub(r\"^(新的?seed指令[::]\\s*|seed[::]\\s*|新指令[::]\\s*)\", \"\", text, flags=re.IGNORECASE)\n", 372 | " return text.strip()\n", 373 | "\n", 374 | "new_seed_instruction = sanitize_seed(new_seed_instruction_raw)\n", 375 | "\n", 376 | "print(\"🔹 原始 seed:\", base_seed)\n", 377 | "print(\"🔸 新的 seed:\", new_seed_instruction)" 378 | ] 379 | }, 380 | { 381 | "cell_type": "code", 382 | "execution_count": 6, 383 | "id": "93ef39f8-4d25-4404-bcc4-59c4da85a9a9", 384 | "metadata": { 385 | "colab": { 386 | "base_uri": "https://localhost:8080/" 387 | }, 388 | "id": "93ef39f8-4d25-4404-bcc4-59c4da85a9a9", 389 | "outputId": "3c622424-7d24-4484-96e0-ecfc1d3fdbd0" 390 | }, 391 | "outputs": [ 392 | { 393 | "output_type": "stream", 394 | "name": "stdout", 395 | "text": [ 396 | "✅ 已生成單輪 QA 並寫入: outputs/datasets.jsonl\n", 397 | "\n", 398 | "=== 回答預覽 ===\n", 399 | " 好的,這是一份不含乳製品、高纖且富含健康脂肪,熱量約 500 大卡的午餐食譜,並提供詳細的說明和建議,讓你輕鬆準備:\n", 400 | "\n", 401 | "**午餐食譜:酪梨藜麥沙拉佐烤鮭魚**\n", 402 | "\n", 403 | "**熱量估算:** 約 480-520 大卡 (依食材份量微調)\n", 404 | "\n", 405 | "**食材:**\n", 406 | "\n", 407 | "* **烤鮭魚 (約 120 克):** 提供優質蛋白質和 Omega-3 脂肪酸 (約 200 大卡)\n", 408 | " * 鮭魚片:120 克\n", 409 | " * 橄欖油:1 茶匙\n", 410 | " * 鹽:少許\n", 411 | " * 黑胡椒:少許\n", 412 | " * 檸檬汁:少許 (可選)\n", 413 | "* **藜麥 (煮熟後約 1 杯):** 提供豐富纖維和植物性蛋白質 (約 220 大卡)\n", 414 | " * 乾燥藜麥:1/2 杯\n", 415 | " * 水:1 杯\n", 416 | "* **酪梨 (1/4 個):** 提供健康脂肪和纖維 (約 80 大卡)\n", 417 | "* **蔬菜 (總量約 1 杯):** 提供纖維、維生素和礦物質 (約 20 大卡)\n", 418 | " * 小黃瓜:1/4 根 (切丁)\n", 419 | " * 紅蘿蔔:1/4 根 (切丁)\n", 420 | " * 甜椒 (任何顏色):1/4 個 (切丁)\n", 421 | " * 芝麻葉或其他綠葉蔬菜:適量\n", 422 | "* **堅果和種子 (約 1 湯匙):** 提供健康脂肪和礦物質 (約 50 大卡)\n", 423 | " * 南瓜籽、葵花籽或杏仁片:1 湯匙 (混合或單選)\n", 424 | "* **醬汁:**\n", 425 | " * 檸檬汁:1 湯匙\n", 426 | " * 橄欖油:1 茶匙\n", 427 | " * 第戎芥末:1/2 茶匙 (可選)\n", 428 | " * 鹽:少許\n", 429 | " * 黑胡椒:少許\n", 430 | "\n", 431 | "**烹飪步驟:**\n", 432 | "\n", 433 | "1. 
**準備藜麥:**\n", 434 | " * 將藜麥用清水沖洗乾淨。\n", 435 | " * 將藜麥和水放入鍋中煮沸,然後轉小火煮約 15 分鐘,或直到藜麥煮熟且水分被吸收。\n", 436 | " \n" 437 | ] 438 | } 439 | ], 440 | "source": [ 441 | "# Step 2: 以「新的 seed 指令」當作 user 提問,生成單輪回答(assistant 一次回覆)。\n", 442 | "# 產出為 OpenAI messages 格式,可直接累積進 datasets.jsonl。\n", 443 | "\n", 444 | "import json\n", 445 | "from uuid import uuid4\n", 446 | "from pathlib import Path\n", 447 | "\n", 448 | "qa_messages = [\n", 449 | " {\"role\": \"system\", \"content\": \"你是一位營養與飲食規劃的專家,請使用繁體中文,給出明確、可執行的建議。\"},\n", 450 | " {\"role\": \"user\", \"content\": new_seed_instruction},\n", 451 | "]\n", 452 | "\n", 453 | "resp_qa = client.chat.completions.create(\n", 454 | " model=MODEL,\n", 455 | " messages=qa_messages,\n", 456 | " temperature=0.7,\n", 457 | " max_tokens=600,\n", 458 | ")\n", 459 | "\n", 460 | "answer = resp_qa.choices[0].message.content\n", 461 | "\n", 462 | "example = {\n", 463 | " \"id\": str(uuid4()),\n", 464 | " \"type\": \"reference_free\",\n", 465 | " \"seed\": new_seed_instruction,\n", 466 | " \"messages\": [\n", 467 | " qa_messages[0], # system\n", 468 | " qa_messages[1], # user(新的 seed)\n", 469 | " {\"role\": \"assistant\", \"content\": answer}, # 單輪回答\n", 470 | " ]\n", 471 | "}\n", 472 | "\n", 473 | "# ✅ 可選:追加寫入 datasets.jsonl(供下一章節 QC 使用)\n", 474 | "out_path = Path(\"outputs/datasets.jsonl\")\n", 475 | "out_path.parent.mkdir(parents=True, exist_ok=True)\n", 476 | "with out_path.open(\"a\", encoding=\"utf-8\") as f:\n", 477 | " f.write(json.dumps(example, ensure_ascii=False) + \"\\n\")\n", 478 | "\n", 479 | "print(\"✅ 已生成單輪 QA 並寫入:\", out_path)\n", 480 | "print(\"\\n=== 回答預覽 ===\\n\", answer[:800])" 481 | ] 482 | }, 483 | { 484 | "cell_type": "markdown", 485 | "id": "60cc5f17-dc9f-400b-9a1b-94f7975ac569", 486 | "metadata": { 487 | "id": "60cc5f17-dc9f-400b-9a1b-94f7975ac569" 488 | }, 489 | "source": [ 490 | "#### 1.4.2 Reference-based 資料生成\n", 491 | "\n", 492 | "在 Reference-based 的情境下,我們會使用一段外部文本作為依據,並在其上生成問答資料。\n", 493 | "這種方式常見於知識型 QA 系統(例如 Wikipedia 問答),其核心原則是:\n", 494 | "- 問題(Question)必須來自於文本\n", 495 | "- 答案(Answer)必須完全依照文本,不可超出文本範圍\n", 496 | "\n", 497 | "這樣生成的資料,可以幫助模型學會「根據參考內容回答」,而非憑空想像。" 498 | ] 499 | }, 500 | { 501 | "cell_type": "code", 502 | "source": [ 503 | "article_context = \"\"\"\n", 504 | "[ 開源模型 ] Google Gemma 3 270M 介紹\n", 505 | "Simon Liu\n", 506 | "\n", 507 | "[ 開源模型 ] Google Gemma 3 270M\n", 508 | "Google 官方部落格介紹:連結\n", 509 | "\n", 510 | "Gemma 3 270M 是 Google DeepMind 於 2025 年 8 月正式推出的一款極致輕量化、大幅降低運算成本的開源語言模型。其設計理念側重於高能效、可在邊緣設備上直接運行 (on-device),並且能迅速完成特定任務的微調 (fine-tuning),以達到成本效益最佳化。\n", 511 | "\n", 512 | "I. 核心技術特點與差異化優勢\n", 513 | "1. 模型規模與架構設計\n", 514 | "總參數量為 2.7 億個參數,其中約 1.7 億個參數是 embedding 層權重,剩下則是 transformer 模組,屬於 Gemma 3 家族中的最小版本,採用 decoder‐only Transformer 架構。\n", 515 | "\n", 516 | "2. 能源效率極佳\n", 517 | "透過 INT4 量化後,根據官方的說法,在 Pixel 9 Pro SoC 上進行 25 次對話測試僅消耗 0.75% 電量,展現出極低耗電特性。\n", 518 | "\n", 519 | "3. 出色的 instruction-following 能力\n", 520 | "即使在未經複雜調校下,依然具備強大的「依指令執行」能力,於 IFEval 基準測試中,Gemma 3 270M 取得約 51.2% 的分數,超越多數更大模型。\n", 521 | "\n", 522 | "4. 
支援量化自覺訓練 (QAT),便於部署\n", 523 | "提供可用於 INT4 推論的 QAT 檢查點,確保在極度壓縮下仍維持足夠性能,適合資源受限的執行環境。\n", 524 | "\n", 525 | "\n", 526 | "實際在 Samsung S24 Plus 測試 Google Gemma 3 270M INT8 實測結果,手機離線運算速度表現真的很漂亮,知識能力就不為難他了,或許可以 Fine-Tune 成文字對分類的AI模型\n", 527 | "應用場景與部署策略\n", 528 | "適用任務類型\n", 529 | "適合高頻、明確定義片段任務,例如:情緒分析 (sentiment analysis)、實體擷取 (entity extraction)、查詢 (query routing)、結構化文本生成、創意寫作、遵從性檢查等。\n", 530 | "\n", 531 | "快速微調與部署\n", 532 | "模型尺寸小,可在數小時內完成 fine‑tuning,極速部署原型,且可在輕量基礎設施或裝置端運行,提高開發效率並降低成本。\n", 533 | "\n", 534 | "隱私與使用者控制\n", 535 | "可完全本地化部署,避免資料往返雲端,提升敏感資料保護及隱私控制。\n", 536 | "\n", 537 | "建構專責微模型 (fleet of specialized models)\n", 538 | "利用其小巧、效率高的特性,可同時維運多個專門優化的任務模型,實現模組化、效能優化與成本最小化。\n", 539 | "\n", 540 | "比較分析與風險考量\n", 541 | "成本效益 VS 通用能力\n", 542 | "相較大模型,其推論成本與能耗極低;但在通用性、複雜對話或生成能力方面仍有限制,應視任務選擇。\n", 543 | "\n", 544 | "推論性能 VS 訓練性能\n", 545 | "雖然適合地端部署和快速微調,蛋 Google Gemma 的 Fine-Tuning 仍會建議在 GPU 或者 TPU 上的完成,並非本地端的設備進行處理。\n", 546 | "\n", 547 | "結論\n", 548 | "Gemma 3 270M 是典型的在資源受限環境中,以最低成本、最快部署速度能夠完成高效能任務,兼顧能效與靈活性。適用於邊緣部署、快速開發與特定功能場景,如客服分類、自動標註與本地化創意應用。\n", 549 | "\n", 550 | "若企業目標是打造輕量、可擴展且具隱私保障 Edge 端的 AI 解決方案,Gemma 3 270M 是值得納入模型庫的優選選項。\n", 551 | "\"\"\"" 552 | ], 553 | "metadata": { 554 | "id": "XKPF6HVH7sLk" 555 | }, 556 | "id": "XKPF6HVH7sLk", 557 | "execution_count": 7, 558 | "outputs": [] 559 | }, 560 | { 561 | "cell_type": "code", 562 | "source": [ 563 | "# @markdown 請設定以下 想產生幾組 QA 的變數數值\n", 564 | "\n", 565 | "NUM_QA = 10 # @param {type:\"string\"}" 566 | ], 567 | "metadata": { 568 | "id": "nPnLQcxDpLhu" 569 | }, 570 | "id": "nPnLQcxDpLhu", 571 | "execution_count": 8, 572 | "outputs": [] 573 | }, 574 | { 575 | "cell_type": "code", 576 | "execution_count": 9, 577 | "id": "b98dab6b-5a35-4ef2-9cd5-638988ee81a6", 578 | "metadata": { 579 | "id": "b98dab6b-5a35-4ef2-9cd5-638988ee81a6" 580 | }, 581 | "outputs": [], 582 | "source": [ 583 | "# ==== 產生「只有問題」→ 再逐題回答(Reference-based)====\n", 584 | "import json, re\n", 585 | "from typing import List\n", 586 | "from uuid import uuid4\n", 587 | "from pathlib import Path\n", 588 | "\n", 589 | "# ---------- (A) 用 Structured Outputs 產生「問題清單」 ----------\n", 590 | "# 參考:OpenAI Structured Outputs / responses.parse(若端點不支援,會自動 fallback)\n", 591 | "# Docs: platform.openai.com/docs/guides/structured-outputs & responses.parse\n", 592 | "from pydantic import BaseModel, Field, conlist\n", 593 | "\n", 594 | "class QuestionItem(BaseModel):\n", 595 | " question: str = Field(..., min_length=4, description=\"依據給定文本可直接回答的問題(繁體中文)\")\n", 596 | "\n", 597 | "class QuestionList(BaseModel):\n", 598 | " items: List[QuestionItem]\n", 599 | "\n", 600 | "def generate_questions_from_context(context: str, n_pairs: int = 4) -> List[str]:\n", 601 | " sys_rules = (\n", 602 | " \"你是資料標註助理,請使用繁體中文設計問題。\\n\"\n", 603 | " f\"請產生 {n_pairs} 題問題,不要提供答案。\\n\"\n", 604 | " \"原則:\\n\"\n", 605 | " \"1) 問題必須可由【文本】直接回答,或能忠實改寫自其中資訊。\\n\"\n", 606 | " \"2) 禁止加入【文本】以外的知識。\\n\"\n", 607 | " \"3) 問題要清楚、具體,答案可在 1–2 句內表達。\\n\"\n", 608 | " \"4) 若【文本】不足以支撐問題,請產生需要使用者進一步釐清的問題(單一句)。\\n\"\n", 609 | " \"5) 問題要自然,不要暴露有任何【文本】或外部資料存在。\\n\"\n", 610 | " \"6) 只輸出 JSON,格式固定為:{\\\"items\\\":[{\\\"question\\\":\\\"...\\\"}, ...]}。\"\n", 611 | " )\n", 612 | " user_rules = (\n", 613 | " \"請根據以下【文本】設計問題:\\n\\n\"\n", 614 | " f\"{context}\\n\\n\"\n", 615 | " \"⚠️ 僅輸出 JSON,格式:{\\\"items\\\":[{\\\"question\\\":\\\"...\\\"}, ...]},\"\n", 616 | " \"不得有額外說明/Markdown/前後綴。\"\n", 617 | " )\n", 618 | "\n", 619 | " # ---- 路徑 1:responses.parse(支援時最穩定)----\n", 620 | " try:\n", 621 | " parsed = client.beta.chat.completions.parse(\n", 622 | " 
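                # Structured-outputs path: passing the Pydantic class as `response_format`
                # lets the SDK build a JSON schema for the model and hand back typed objects
                # via `choices[0].message.parsed`.
                # Note (assumption): this requires a recent openai SDK and an endpoint that
                # supports structured outputs; OpenAI-compatible proxies without it will raise
                # here, which is exactly the case the JSON-mode fallback below covers.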
model=MODEL,\n", 623 | " messages=[{\"role\": \"system\", \"content\": sys_rules},\n", 624 | " {\"role\": \"user\", \"content\": user_rules}],\n", 625 | " response_format=QuestionList, # 注意:這裡是一個 Pydantic Model class\n", 626 | " )\n", 627 | " # 取得結構化結果(關鍵)\n", 628 | " items = parsed.choices[0].message.parsed.items\n", 629 | " questions = [it.question.strip() for it in items if it.question.strip()]\n", 630 | " return questions[:n_pairs]\n", 631 | "\n", 632 | " except Exception:\n", 633 | " # ---- 路徑 2:Chat Completions + JSON(相容端常用)----\n", 634 | " fallback_sys = (\n", 635 | " \"你是資料標註助理。請只輸出 JSON,不要任何解釋或 Markdown。\\n\"\n", 636 | " '格式:[{\"question\":\"...\"}, {\"question\":\"...\"}]'\n", 637 | " )\n", 638 | " fallback_user = (\n", 639 | " f\"{sys_rules}\\n\\n\"\n", 640 | " \"請輸出 JSON 陣列,每個物件僅含 question 欄位。\\n\\n\"\n", 641 | " f\"【文本】\\n{context}\"\n", 642 | " )\n", 643 | " resp = client.chat.completions.create(\n", 644 | " model=MODEL,\n", 645 | " messages=[{\"role\": \"system\", \"content\": fallback_sys},\n", 646 | " {\"role\": \"user\", \"content\": fallback_user}],\n", 647 | " # 部分代理不支援 JSON mode;若報錯就移除此參數\n", 648 | " response_format={\"type\": \"json_object\"},\n", 649 | " temperature=0.2,\n", 650 | " max_tokens=800,\n", 651 | " )\n", 652 | " raw = resp.choices[0].message.content.strip()\n", 653 | " txt = re.sub(r\"^```json\\s*|\\s*```$\", \"\", raw, flags=re.IGNORECASE).strip()\n", 654 | " data = json.loads(txt)\n", 655 | "\n", 656 | " # 接受 [{\"question\": \"...\"}] 或 {\"items\":[...]}\n", 657 | " items = data.get(\"items\") if isinstance(data, dict) and \"items\" in data else data\n", 658 | " if not isinstance(items, list):\n", 659 | " raise ValueError(\"模型輸出不是問題清單 JSON 陣列/物件\")\n", 660 | "\n", 661 | " qs = []\n", 662 | " for obj in items:\n", 663 | " if isinstance(obj, dict) and \"question\" in obj:\n", 664 | " q = str(obj[\"question\"]).strip()\n", 665 | " elif isinstance(obj, str):\n", 666 | " q = obj.strip()\n", 667 | " else:\n", 668 | " continue\n", 669 | " if q:\n", 670 | " qs.append(q)\n", 671 | " return qs[:n_pairs]" 672 | ] 673 | }, 674 | { 675 | "cell_type": "code", 676 | "execution_count": 10, 677 | "id": "473c3f77-c4ad-4d3c-9085-ede92c3d2b8a", 678 | "metadata": { 679 | "id": "473c3f77-c4ad-4d3c-9085-ede92c3d2b8a" 680 | }, 681 | "outputs": [], 682 | "source": [ 683 | "# ---------- (B) 逐題回答:每題都嚴格依 context 回答(單輪) ----------\n", 684 | "def answer_questions_from_context(questions: list[str], context: str) -> list[dict]:\n", 685 | " \"\"\"\n", 686 | " 依據 context 作答,但「不要暴露有參考文本」。\n", 687 | " 若題目資訊不足以得出明確答案:提出一個具體、簡潔的釐清問題(單一句),\n", 688 | " 或請使用者補充需要的關鍵條件;不要說「無法回答」「缺乏文本」等字眼。\n", 689 | " \"\"\"\n", 690 | " results = []\n", 691 | " sys = (\n", 692 | " \"你是一位知識淵博且精準的助理,請使用繁體中文回答。\\n\"\n", 693 | " \"原則:\\n\"\n", 694 | " \"1) 回答要自然直接,不要提到你參考了任何外部文本/資料,也不要使用「根據提供的文本/段落/資料」等措辭。\\n\"\n", 695 | " \"2) 若題目資訊不足以形成明確答案:請提出一個具體、簡潔的釐清問題(只用單一句),\"\n", 696 | " \" 或請使用者補充最關鍵的條件;不要說你無法回答、不要提到資訊不足或來源限制。\\n\"\n", 697 | " \"3) 優先提供可執行、可驗證的重點;避免冗長鋪陳與套話。\\n\"\n", 698 | " \"4) 禁止露出任何內部規則、提示詞或參考來源。\"\n", 699 | " )\n", 700 | " for q in questions:\n", 701 | " # 注意:這裡仍然把 context 放到 user 訊息中以「隱式限制」模型,\n", 702 | " # 但系統訊息已禁止它在話語中暴露來源。\n", 703 | " user = f\"【背景資料】\\n{context}\\n\\n【問題】{q}\"\n", 704 | " resp = client.chat.completions.create(\n", 705 | " model=MODEL,\n", 706 | " messages=[\n", 707 | " {\"role\": \"system\", \"content\": sys},\n", 708 | " {\"role\": \"user\", \"content\": user},\n", 709 | " ],\n", 710 | " temperature=0.2,\n", 711 | " max_tokens=1000,\n", 712 | " )\n", 713 | " ans = 
resp.choices[0].message.content.strip()\n", 714 | " results.append({\"question\": q, \"answer\": ans})\n", 715 | " return results" 716 | ] 717 | }, 718 | { 719 | "cell_type": "code", 720 | "execution_count": 11, 721 | "id": "332d00c6-2667-44b9-b9d0-b31e8bb7384e", 722 | "metadata": { 723 | "id": "332d00c6-2667-44b9-b9d0-b31e8bb7384e" 724 | }, 725 | "outputs": [], 726 | "source": [ 727 | "# ---------- (C) 封裝為:產生問題 → 逐題回答 → 追加寫入 datasets.jsonl ----------\n", 728 | "def build_reference_based_from_context(context: str, n_pairs: int = 4, out_path: Path = Path(\"outputs/datasets.jsonl\")):\n", 729 | " out_path.parent.mkdir(parents=True, exist_ok=True)\n", 730 | "\n", 731 | " qs = generate_questions_from_context(context, n_pairs=n_pairs)\n", 732 | " qa_list = answer_questions_from_context(qs, context)\n", 733 | "\n", 734 | " wrote = 0\n", 735 | " with out_path.open(\"a\", encoding=\"utf-8\") as f:\n", 736 | " for qa in qa_list:\n", 737 | " rec = {\n", 738 | " \"id\": str(uuid4()),\n", 739 | " \"type\": \"reference_based\",\n", 740 | " \"seed\": context,\n", 741 | " \"context\": context, # 保留 context 供審核/教學;若不需要可移除\n", 742 | " \"messages\": [\n", 743 | " {\"role\": \"system\", \"content\": \"請嚴格依據提供的文本回答問題,使用繁體中文。\"},\n", 744 | " {\"role\": \"user\", \"content\": qa[\"question\"]},\n", 745 | " {\"role\": \"assistant\", \"content\": qa[\"answer\"]},\n", 746 | " ],\n", 747 | " }\n", 748 | " f.write(json.dumps(rec, ensure_ascii=False) + \"\\n\")\n", 749 | " wrote += 1\n", 750 | "\n", 751 | " print(f\"✅ 已新增 {wrote} 筆 reference-based QA 至 {out_path}\")\n", 752 | " return qa_list" 753 | ] 754 | }, 755 | { 756 | "cell_type": "code", 757 | "source": [ 758 | "import re\n", 759 | "\n", 760 | "def split_markdown_by_headers(markdown_text):\n", 761 | " \"\"\"Splits a markdown string by headers (#, ##, ###).\"\"\"\n", 762 | " # Use regex to find all headers and their positions\n", 763 | " # This regex looks for lines starting with 1 to 3 '#' characters, followed by a space\n", 764 | " # and captures the header line and the content that follows until the next header\n", 765 | " segments = []\n", 766 | " # Find all matches of headers and their starting positions\n", 767 | " matches = list(re.finditer(r\"^(#+\\s.*)$\", markdown_text, re.MULTILINE))\n", 768 | "\n", 769 | " if not matches:\n", 770 | " # If no headers are found, return the entire text as a single segment\n", 771 | " return [markdown_text]\n", 772 | "\n", 773 | " # Add content before the first header if it exists\n", 774 | " if matches[0].start() > 0:\n", 775 | " segments.append(markdown_text[:matches[0].start()].strip())\n", 776 | "\n", 777 | " # Iterate through the matches to extract segments\n", 778 | " for i in range(len(matches)):\n", 779 | " start_pos = matches[i].start()\n", 780 | " # The end position is the start of the next header, or the end of the text\n", 781 | " end_pos = matches[i+1].start() if i+1 < len(matches) else len(markdown_text)\n", 782 | " segment = markdown_text[start_pos:end_pos].strip()\n", 783 | " if segment:\n", 784 | " segments.append(segment)\n", 785 | "\n", 786 | " return segments\n", 787 | "\n", 788 | "# Split the article_context\n", 789 | "wiki_segments = split_markdown_by_headers(article_context)\n", 790 | "\n", 791 | "# Print the number of segments and the first few\n", 792 | "print(f\"Split article_context into {len(wiki_segments)} segments.\")\n", 793 | "for i, segment in enumerate(wiki_segments):\n", 794 | " print(f\"\\n--- Segment {i+1} ---\")\n", 795 | " print(segment[:500] + ('...' 
if len(segment) > 500 else ''))" 796 | ], 797 | "metadata": { 798 | "colab": { 799 | "base_uri": "https://localhost:8080/" 800 | }, 801 | "id": "EoiTAHOb9N3p", 802 | "outputId": "992265f9-900c-4b24-d954-10979e1f344a" 803 | }, 804 | "id": "EoiTAHOb9N3p", 805 | "execution_count": 12, 806 | "outputs": [ 807 | { 808 | "output_type": "stream", 809 | "name": "stdout", 810 | "text": [ 811 | "Split article_context into 1 segments.\n", 812 | "\n", 813 | "--- Segment 1 ---\n", 814 | "\n", 815 | "[ 開源模型 ] Google Gemma 3 270M 介紹\n", 816 | "Simon Liu\n", 817 | "\n", 818 | "[ 開源模型 ] Google Gemma 3 270M\n", 819 | "Google 官方部落格介紹:連結\n", 820 | "\n", 821 | "Gemma 3 270M 是 Google DeepMind 於 2025 年 8 月正式推出的一款極致輕量化、大幅降低運算成本的開源語言模型。其設計理念側重於高能效、可在邊緣設備上直接運行 (on-device),並且能迅速完成特定任務的微調 (fine-tuning),以達到成本效益最佳化。\n", 822 | "\n", 823 | "I. 核心技術特點與差異化優勢\n", 824 | "1. 模型規模與架構設計\n", 825 | "總參數量為 2.7 億個參數,其中約 1.7 億個參數是 embedding 層權重,剩下則是 transformer 模組,屬於 Gemma 3 家族中的最小版本,採用 decoder‐only Transformer 架構。\n", 826 | "\n", 827 | "2. 能源效率極佳\n", 828 | "透過 INT4 量化後,根據官方的說法,在 Pixel 9 Pro SoC 上進行 25 次對話測試僅消耗 0.75% 電量,展現出極低耗電特性。\n", 829 | "\n", 830 | "3. 出色的 instruction-following...\n" 831 | ] 832 | } 833 | ] 834 | }, 835 | { 836 | "cell_type": "code", 837 | "source": [ 838 | "print(\"* Context information: \\n\")\n", 839 | "\n", 840 | "for index, context in enumerate(wiki_segments):\n", 841 | " print(f\"===== context: {index} =====\")\n", 842 | " print(\"length of context: \" + str(len(context)))\n", 843 | " print(\"Suggestion QA about this context: \" + str(int(len(context)/100)+1))\n", 844 | " print(\"Preview the content: \\n\\n\" + context[:30])\n", 845 | " print(f\"======================\")" 846 | ], 847 | "metadata": { 848 | "colab": { 849 | "base_uri": "https://localhost:8080/" 850 | }, 851 | "id": "lPl422189trw", 852 | "outputId": "f067509d-dfc2-445e-a696-f91520791a44" 853 | }, 854 | "id": "lPl422189trw", 855 | "execution_count": 13, 856 | "outputs": [ 857 | { 858 | "output_type": "stream", 859 | "name": "stdout", 860 | "text": [ 861 | "* Context information: \n", 862 | "\n", 863 | "===== context: 0 =====\n", 864 | "length of context: 1428\n", 865 | "Suggestion QA about this context: 15\n", 866 | "Preview the content: \n", 867 | "\n", 868 | "\n", 869 | "[ 開源模型 ] Google Gemma 3 270M \n", 870 | "======================\n" 871 | ] 872 | } 873 | ] 874 | }, 875 | { 876 | "cell_type": "code", 877 | "source": [ 878 | "import time\n", 879 | "\n", 880 | "for context in wiki_segments:\n", 881 | " if NUM_QA is None:\n", 882 | " n_pair = int(NUM_QA/len(wiki_segments))\n", 883 | " else:\n", 884 | " n_pair = NUM_QA\n", 885 | "\n", 886 | " _qa_preview = build_reference_based_from_context(context, n_pairs = NUM_QA)\n", 887 | " print(\"\\n--- 產生預覽 ---\")\n", 888 | " for i, qa in enumerate(_qa_preview, 1):\n", 889 | " print(f\"Q{i}: {qa['question']}\")\n", 890 | " print(f\"A{i}: {qa['answer'][:200]}{'...' 
if len(qa['answer'])>200 else ''}\\n\")\n", 891 | " time.sleep(5)" 892 | ], 893 | "metadata": { 894 | "colab": { 895 | "base_uri": "https://localhost:8080/" 896 | }, 897 | "id": "4mRbvk1Z92rz", 898 | "outputId": "1a4db710-4938-41c0-dce4-9a52d2941eb0" 899 | }, 900 | "id": "4mRbvk1Z92rz", 901 | "execution_count": 14, 902 | "outputs": [ 903 | { 904 | "output_type": "stream", 905 | "name": "stdout", 906 | "text": [ 907 | "✅ 已新增 10 筆 reference-based QA 至 outputs/datasets.jsonl\n", 908 | "\n", 909 | "--- 產生預覽 ---\n", 910 | "Q1: Google Gemma 3 270M 是由哪個機構推出的?\n", 911 | "A1: Google DeepMind。\n", 912 | "\n", 913 | "Q2: Gemma 3 270M 的設計理念是什麼?\n", 914 | "A2: Gemma 3 270M 的設計理念側重於高能效,使其能在邊緣設備上直接運行,並迅速完成特定任務的微調,以達到成本效益最佳化。\n", 915 | "\n", 916 | "Q3: Gemma 3 270M 總共有多少個參數?\n", 917 | "A3: Gemma 3 270M 總共有 2.7 億個參數。\n", 918 | "\n", 919 | "Q4: Gemma 3 270M 在 Pixel 9 Pro SoC 上進行 25 次對話測試消耗多少電量?\n", 920 | "A4: Gemma 3 270M 在 Pixel 9 Pro SoC 上進行 25 次對話測試僅消耗 0.75% 電量。\n", 921 | "\n", 922 | "Q5: Gemma 3 270M 在 IFEval 基準測試中取得了約多少分數?\n", 923 | "A5: Gemma 3 270M 在 IFEval 基準測試中取得約 51.2% 的分數。\n", 924 | "\n", 925 | "Q6: Gemma 3 270M 支援哪種量化訓練方式,以方便部署?\n", 926 | "A6: Gemma 3 270M 支援量化自覺訓練 (QAT),以便於 INT4 推論的部署。\n", 927 | "\n", 928 | "Q7: Gemma 3 270M 適合哪些類型的高頻任務?\n", 929 | "A7: Gemma 3 270M 適合情緒分析、實體擷取、查詢、結構化文本生成、創意寫作和遵從性檢查等高頻、明確定義片段的任務。\n", 930 | "\n", 931 | "Q8: 使用 Gemma 3 270M 進行 fine-tuning 通常建議在哪種硬體上完成?\n", 932 | "A8: 建議在 GPU 或 TPU 上完成 Google Gemma 的 Fine-Tuning。\n", 933 | "\n", 934 | "Q9: Gemma 3 270M 的哪些特性使其適用於邊緣部署?\n", 935 | "A9: Gemma 3 270M 適用於邊緣部署的特性包含:\n", 936 | "\n", 937 | "* **能源效率極佳**:INT4 量化後耗電量極低。\n", 938 | "* **模型尺寸小**:參數少,可在輕量基礎設施或裝置端運行。\n", 939 | "* **快速微調與部署**:可在數小時內完成 fine‑tuning,極速部署原型。\n", 940 | "* **隱私與使用者控制**:可完全本地化部署,避免資料往返雲端。\n", 941 | "\n", 942 | "Q10: 相較於大型模型,Gemma 3 270M 在通用能力方面有何限制?\n", 943 | "A10: 在通用性、複雜對話或生成能力方面仍有限制。\n", 944 | "\n" 945 | ] 946 | } 947 | ] 948 | }, 949 | { 950 | "cell_type": "markdown", 951 | "metadata": { 952 | "id": "3014b7ac-7748-46f6-965e-8f92b57377cf" 953 | }, 954 | "source": [ 955 | "## 2 資料品質檢查與過濾(Quality Checks)\n", 956 | "\n", 957 | "\n", 958 | "目標:\n", 959 | "- 載入 `raw.jsonl`\n", 960 | "- 規則式檢查:敏感詞 / 結構完整 / 長度門檻 / 不含 placeholder\n", 961 | "- 產出 `clean.jsonl`\n", 962 | "- 生成摘要報表(通過/剔除統計、剔除原因分佈)" 963 | ], 964 | "id": "3014b7ac-7748-46f6-965e-8f92b57377cf" 965 | }, 966 | { 967 | "cell_type": "markdown", 968 | "metadata": { 969 | "id": "50b8ced3-2e81-4f19-af3f-99c98c5efbd8" 970 | }, 971 | "source": [ 972 | "> 註:不論如何,禁用 [opencc-python](https://github.com/yichen0831/opencc-python) 做任何轉換\n", 973 | "\n", 974 | "> 雖然 OpenCC 的簡轉繁功能很方便,但它只是機械式轉換,繁體字有時會被誤判或錯轉,導致語意錯誤或不符合在地用法,因此並不適合需要精準繁體輸出的情境。" 975 | ], 976 | "id": "50b8ced3-2e81-4f19-af3f-99c98c5efbd8" 977 | }, 978 | { 979 | "cell_type": "markdown", 980 | "metadata": { 981 | "id": "8aed0c38-dc62-44ef-aa93-dc043781f5c8" 982 | }, 983 | "source": [ 984 | "### 2.1 準備路徑與依賴" 985 | ], 986 | "id": "8aed0c38-dc62-44ef-aa93-dc043781f5c8" 987 | }, 988 | { 989 | "cell_type": "code", 990 | "execution_count": 15, 991 | "metadata": { 992 | "id": "92d2e6e0-336f-4893-88b9-2b88dc67c79d", 993 | "colab": { 994 | "base_uri": "https://localhost:8080/" 995 | }, 996 | "outputId": "dc0afc41-7b90-4ff2-c66f-79e360a1938d" 997 | }, 998 | "outputs": [ 999 | { 1000 | "output_type": "stream", 1001 | "name": "stdout", 1002 | "text": [ 1003 | "✅ 讀取來源: outputs/datasets.jsonl\n", 1004 | "✅ 乾淨輸出: outputs/clean.jsonl\n" 1005 | ] 1006 | } 1007 | ], 1008 | "source": [ 1009 | "from pathlib import Path\n", 1010 | "import json, re, statistics\n", 1011 | "from 
collections import Counter, defaultdict\n", 1012 | "\n", 1013 | "INPUT_PATH = Path(\"outputs/datasets.jsonl\")\n", 1014 | "\n", 1015 | "OUTPUT_DIR = Path(\"outputs\")\n", 1016 | "OUTPUT_DIR.mkdir(parents=True, exist_ok=True)\n", 1017 | "\n", 1018 | "OUTPUT_CLEAN = OUTPUT_DIR / \"clean.jsonl\"\n", 1019 | "OUTPUT_REPORT = OUTPUT_DIR / \"qc_report.json\"\n", 1020 | "\n", 1021 | "\n", 1022 | "print(\"✅ 讀取來源:\", INPUT_PATH)\n", 1023 | "print(\"✅ 乾淨輸出:\", OUTPUT_CLEAN)" 1024 | ], 1025 | "id": "92d2e6e0-336f-4893-88b9-2b88dc67c79d" 1026 | }, 1027 | { 1028 | "cell_type": "markdown", 1029 | "metadata": { 1030 | "id": "3926d664-eddd-4303-9207-b3a2260afaf3" 1031 | }, 1032 | "source": [ 1033 | "### 2.2 載入資料\n", 1034 | "\n", 1035 | "逐行讀取 JSONL,存到 list。這裡不做任何變形,只檢視基本鍵值。" 1036 | ], 1037 | "id": "3926d664-eddd-4303-9207-b3a2260afaf3" 1038 | }, 1039 | { 1040 | "cell_type": "code", 1041 | "execution_count": 16, 1042 | "metadata": { 1043 | "id": "a26301db-4c15-4580-a84b-5f6a4687685b", 1044 | "colab": { 1045 | "base_uri": "https://localhost:8080/" 1046 | }, 1047 | "outputId": "40f39a5e-e115-4b88-a4cf-167067eda1f3" 1048 | }, 1049 | "outputs": [ 1050 | { 1051 | "output_type": "stream", 1052 | "name": "stdout", 1053 | "text": [ 1054 | "Number of records: 11\n" 1055 | ] 1056 | } 1057 | ], 1058 | "source": [ 1059 | "records = []\n", 1060 | "with INPUT_PATH.open(\"r\", encoding=\"utf-8\") as f:\n", 1061 | " for line in f:\n", 1062 | " try:\n", 1063 | " records.append(json.loads(line))\n", 1064 | " except Exception as e:\n", 1065 | " # 若出現無法解析的行,記錄並跳過\n", 1066 | " print(\"⚠️ 無法解析的行,已略過:\", e)\n", 1067 | "\n", 1068 | "print(f\"Number of records: {len(records)}\")" 1069 | ], 1070 | "id": "a26301db-4c15-4580-a84b-5f6a4687685b" 1071 | }, 1072 | { 1073 | "cell_type": "markdown", 1074 | "metadata": { 1075 | "id": "9b9753e1-25c2-4fce-9faa-c6266f19f43c" 1076 | }, 1077 | "source": [ 1078 | "### 2.3 品質規則定義\n", 1079 | "\n", 1080 | "本課採「規則式(rule-based)」檢查以快速過濾:\n", 1081 | "1. **結構**:`messages` 至少包含 `system`、`user`、`assistant` 三則;且對話文本不為空。\n", 1082 | "2. **多輪性**:對話需包含至少 3 輪(可鬆綁為 1 輪以上,但本課先採至少 3 輪)。\n", 1083 | "3. **長度**:合併文本長度至少 80 字(避免過短)。\n", 1084 | "4. **敏感詞**:過濾個資或敏感詞(示例黑名單)。\n", 1085 | "5. 
**Placeholder**:不得包含 `XXX`、`<填充>` 類佔位符。" 1086 | ], 1087 | "id": "9b9753e1-25c2-4fce-9faa-c6266f19f43c" 1088 | }, 1089 | { 1090 | "cell_type": "code", 1091 | "execution_count": 17, 1092 | "metadata": { 1093 | "id": "4c0923e3-6fa0-41bb-97a4-653a036c72d5" 1094 | }, 1095 | "outputs": [], 1096 | "source": [ 1097 | "# 1) 結構/角色檢查\n", 1098 | "def has_min_roles(msgs):\n", 1099 | " roles = [m.get(\"role\") for m in msgs]\n", 1100 | " return {\"system\", \"user\", \"assistant\"}.issubset(set(roles))\n", 1101 | "\n", 1102 | "# 2) 多輪性(這裡以訊息數 >= 3 視為最低門檻;若需要更嚴謹可解析回合)\n", 1103 | "def has_min_turns(msgs, min_msgs=3):\n", 1104 | " return len(msgs) >= min_msgs\n", 1105 | "\n", 1106 | "# 3) 長度門檻\n", 1107 | "def meet_min_length(msgs, min_chars=80):\n", 1108 | " total = sum(len((m.get(\"content\") or \"\").strip()) for m in msgs)\n", 1109 | " return total >= min_chars\n", 1110 | "\n", 1111 | "# 4) 敏感詞(示例):身分證/電話/地址/Email/信用卡/生日\n", 1112 | "SENSITIVE_PATTERNS = [\n", 1113 | " r\"\\b[A-Z][12]\\d{8}\\b\", # 台灣身分證格式\n", 1114 | " r\"\\b09\\d{8}\\b|\\b0\\d{1,2}-\\d{6,8}\\b\", # 手機或市話\n", 1115 | " r\"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}\", # email\n", 1116 | " r\"\\b\\d{4}[- ]?\\d{4}[- ]?\\d{4}[- ]?\\d{4}\\b\", # 信用卡 16 碼\n", 1117 | " r\"\\b(19|20)\\d{2}[/-]\\d{1,2}[/-]\\d{1,2}\\b\", # 西元生日 yyyy/mm/dd 或 yyyy-mm-dd\n", 1118 | "]\n", 1119 | "\n", 1120 | "def has_sensitive(text):\n", 1121 | " return any(re.search(p, text) for p in SENSITIVE_PATTERNS)\n", 1122 | "\n", 1123 | "# 5) Placeholder 過濾\n", 1124 | "PLACEHOLDER_PATTERNS = [r\"XXX\", r\"<填充>\", r\"\\[PLACEHOLDER\\]\"]\n", 1125 | "\n", 1126 | "def has_placeholder(text):\n", 1127 | " return any(re.search(p, text, flags=re.IGNORECASE) for p in PLACEHOLDER_PATTERNS)" 1128 | ], 1129 | "id": "4c0923e3-6fa0-41bb-97a4-653a036c72d5" 1130 | }, 1131 | { 1132 | "cell_type": "markdown", 1133 | "metadata": { 1134 | "id": "80e5a6cd-4b31-451f-a1c8-c34437a11560" 1135 | }, 1136 | "source": [ 1137 | "### 2.4 單筆檢查與原因標註\n", 1138 | "\n", 1139 | "輸入一筆記錄,回傳 (是否通過, 剔除原因集合)。" 1140 | ], 1141 | "id": "80e5a6cd-4b31-451f-a1c8-c34437a11560" 1142 | }, 1143 | { 1144 | "cell_type": "code", 1145 | "execution_count": 18, 1146 | "metadata": { 1147 | "id": "495390d6-59f4-4ec5-a7e7-1b236a5a1ef7" 1148 | }, 1149 | "outputs": [], 1150 | "source": [ 1151 | "def join_text_by_roles(msgs, roles=(\"assistant\",)):\n", 1152 | " return \"\\n\".join((m.get(\"content\") or \"\").strip()\n", 1153 | " for m in msgs if m.get(\"role\") in roles)\n", 1154 | "\n", 1155 | "def quality_check(record):\n", 1156 | " reasons = []\n", 1157 | "\n", 1158 | " msgs = record.get(\"messages\", [])\n", 1159 | " if not isinstance(msgs, list) or not msgs:\n", 1160 | " return False, {\"bad_structure\"}\n", 1161 | "\n", 1162 | " if not has_min_roles(msgs):\n", 1163 | " reasons.append(\"missing_roles\")\n", 1164 | "\n", 1165 | " if not has_min_turns(msgs, min_msgs=3):\n", 1166 | " reasons.append(\"too_few_messages\")\n", 1167 | "\n", 1168 | " # ⬇️ 只看 assistant 文字,避免掃到 user 提示內的「例如 身分證/電話…」\n", 1169 | " text = join_text_by_roles(msgs, roles=(\"assistant\",))\n", 1170 | "\n", 1171 | " if not meet_min_length(msgs, min_chars=80):\n", 1172 | " reasons.append(\"too_short\")\n", 1173 | "\n", 1174 | " if has_sensitive(text):\n", 1175 | " reasons.append(\"sensitive_content\")\n", 1176 | "\n", 1177 | " if has_placeholder(text):\n", 1178 | " reasons.append(\"placeholder_found\")\n", 1179 | "\n", 1180 | " return (len(reasons) == 0), set(reasons)" 1181 | ], 1182 | "id": "495390d6-59f4-4ec5-a7e7-1b236a5a1ef7" 1183 | }, 1184 | { 1185 | 
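Before running the full filter in the next cell, a quick spot check of `quality_check` on a hand-made record helps confirm the rules behave as intended. The sample below is invented for illustration: it is deliberately short and puts an email address in the assistant turn, so it should be rejected with both `too_short` and `sensitive_content`.

```python
# Sanity check of quality_check() on an invented record (illustration only).
demo_record = {
    "id": "demo-001",
    "messages": [
        {"role": "system", "content": "你是專業的助理。"},
        {"role": "user", "content": "請問客服的聯絡方式?"},
        {"role": "assistant", "content": "請寄信到 test@example.com,我們會盡快回覆。"},
    ],
}

ok, reasons = quality_check(demo_record)
print("pass:", ok)          # expected: False
print("reasons:", reasons)  # expected: {'too_short', 'sensitive_content'}
```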
"cell_type": "markdown", 1186 | "metadata": { 1187 | "id": "943576f6-5c01-488e-8e71-f305afae9dd1" 1188 | }, 1189 | "source": [ 1190 | "### 2.5 執行過濾並輸出 `clean.jsonl`" 1191 | ], 1192 | "id": "943576f6-5c01-488e-8e71-f305afae9dd1" 1193 | }, 1194 | { 1195 | "cell_type": "code", 1196 | "execution_count": 19, 1197 | "metadata": { 1198 | "id": "71a2ae66-e807-4ecd-a4bf-577825c339d2", 1199 | "colab": { 1200 | "base_uri": "https://localhost:8080/" 1201 | }, 1202 | "outputId": "762bac96-7b6c-46a5-ab08-5a4100f33388" 1203 | }, 1204 | "outputs": [ 1205 | { 1206 | "output_type": "stream", 1207 | "name": "stdout", 1208 | "text": [ 1209 | "✅ 通過:8 筆\n", 1210 | "❌ 剔除:3 筆\n" 1211 | ] 1212 | } 1213 | ], 1214 | "source": [ 1215 | "kept, dropped = [], []\n", 1216 | "for rec in records:\n", 1217 | " ok, reasons = quality_check(rec)\n", 1218 | " if ok:\n", 1219 | " kept.append(rec)\n", 1220 | " else:\n", 1221 | " dropped.append((rec.get(\"id\"), reasons))\n", 1222 | "\n", 1223 | "with OUTPUT_CLEAN.open(\"w\", encoding=\"utf-8\") as f:\n", 1224 | " for r in kept:\n", 1225 | " f.write(json.dumps(r, ensure_ascii=False) + \"\\n\")\n", 1226 | "\n", 1227 | "print(f\"✅ 通過:{len(kept)} 筆\")\n", 1228 | "print(f\"❌ 剔除:{len(dropped)} 筆\")" 1229 | ], 1230 | "id": "71a2ae66-e807-4ecd-a4bf-577825c339d2" 1231 | }, 1232 | { 1233 | "cell_type": "markdown", 1234 | "metadata": { 1235 | "id": "ae5ec9a4-9eea-4a5a-8af4-0cdbead0a49a" 1236 | }, 1237 | "source": [ 1238 | "### 2.6 產出品質報表\n", 1239 | "\n", 1240 | "統計剔除原因分佈、長度分佈(通過者),並輸出 `qc_report.json` 方便保存與追蹤。" 1241 | ], 1242 | "id": "ae5ec9a4-9eea-4a5a-8af4-0cdbead0a49a" 1243 | }, 1244 | { 1245 | "cell_type": "code", 1246 | "execution_count": 20, 1247 | "metadata": { 1248 | "id": "91348e15-d898-4b96-a6f4-8e19fa1080ae", 1249 | "colab": { 1250 | "base_uri": "https://localhost:8080/" 1251 | }, 1252 | "outputId": "b57ad11f-987f-41e2-ed68-41d7ec56f1c8" 1253 | }, 1254 | "outputs": [ 1255 | { 1256 | "output_type": "stream", 1257 | "name": "stdout", 1258 | "text": [ 1259 | "{\n", 1260 | " \"input_total\": 11,\n", 1261 | " \"kept\": 8,\n", 1262 | " \"dropped\": 3,\n", 1263 | " \"drop_reasons\": {\n", 1264 | " \"too_short\": 3\n", 1265 | " },\n", 1266 | " \"length_stats_kept\": {\n", 1267 | " \"min\": 95,\n", 1268 | " \"max\": 927,\n", 1269 | " \"mean\": 224.5,\n", 1270 | " \"median\": 109.0\n", 1271 | " }\n", 1272 | "}\n" 1273 | ] 1274 | } 1275 | ], 1276 | "source": [ 1277 | "# 剔除原因分佈\n", 1278 | "reason_counter = Counter()\n", 1279 | "for _id, reasons in dropped:\n", 1280 | " reason_counter.update(reasons)\n", 1281 | "\n", 1282 | "# 通過資料長度(字元計)分佈\n", 1283 | "lengths = []\n", 1284 | "for r in kept:\n", 1285 | " lengths.append(sum(len((m.get(\"content\") or \"\").strip()) for m in r[\"messages\"]))\n", 1286 | "\n", 1287 | "report = {\n", 1288 | " \"input_total\": len(records),\n", 1289 | " \"kept\": len(kept),\n", 1290 | " \"dropped\": len(dropped),\n", 1291 | " \"drop_reasons\": dict(reason_counter),\n", 1292 | " \"length_stats_kept\": {\n", 1293 | " \"min\": min(lengths) if lengths else 0,\n", 1294 | " \"max\": max(lengths) if lengths else 0,\n", 1295 | " \"mean\": round(statistics.mean(lengths), 2) if lengths else 0,\n", 1296 | " \"median\": statistics.median(lengths) if lengths else 0,\n", 1297 | " },\n", 1298 | "}\n", 1299 | "\n", 1300 | "with OUTPUT_REPORT.open(\"w\", encoding=\"utf-8\") as f:\n", 1301 | " json.dump(report, f, ensure_ascii=False, indent=2)\n", 1302 | "\n", 1303 | "print(json.dumps(report, ensure_ascii=False, indent=2))" 1304 | ], 1305 | "id": 
"91348e15-d898-4b96-a6f4-8e19fa1080ae" 1306 | }, 1307 | { 1308 | "cell_type": "markdown", 1309 | "metadata": { 1310 | "id": "fbb83d71-8c3f-41d6-adb7-3ff00e181cfb" 1311 | }, 1312 | "source": [ 1313 | "### 2.7. 抽樣檢視通過樣本(前 2 筆)\n", 1314 | "\n", 1315 | "確認清洗後的資料結構與內容是否符合預期。" 1316 | ], 1317 | "id": "fbb83d71-8c3f-41d6-adb7-3ff00e181cfb" 1318 | }, 1319 | { 1320 | "cell_type": "code", 1321 | "execution_count": 21, 1322 | "metadata": { 1323 | "id": "1f78a29c-74fa-465c-a16d-1ba8f406406a", 1324 | "colab": { 1325 | "base_uri": "https://localhost:8080/" 1326 | }, 1327 | "outputId": "d47b52a1-1658-4cfe-a35c-6263e957d222" 1328 | }, 1329 | "outputs": [ 1330 | { 1331 | "output_type": "stream", 1332 | "name": "stdout", 1333 | "text": [ 1334 | "\n", 1335 | "--- Clean Sample 1 / topic=None ---\n", 1336 | "好的,這是一份不含乳製品、高纖且富含健康脂肪,熱量約 500 大卡的午餐食譜,並提供詳細的說明和建議,讓你輕鬆準備:\n", 1337 | "\n", 1338 | "**午餐食譜:酪梨藜麥沙拉佐烤鮭魚**\n", 1339 | "\n", 1340 | "**熱量估算:** 約 480-520 大卡 (依食材份量微調)\n", 1341 | "\n", 1342 | "**食材:**\n", 1343 | "\n", 1344 | "* **烤鮭魚 (約 120 克):** 提供優質蛋白質和 Omega-3 脂肪酸 (約 200 大卡)\n", 1345 | " * 鮭魚片:120 克\n", 1346 | " * 橄欖油:1 茶匙\n", 1347 | " * 鹽:少許\n", 1348 | " * 黑胡椒:少許\n", 1349 | " * 檸檬汁:少許 (可選)\n", 1350 | "* **藜麥 (煮熟後約 1 杯):** 提供豐富纖維和植物性蛋白質 (約 220 大卡)\n", 1351 | " * 乾燥藜麥:1/2 杯\n", 1352 | " * 水:1 杯\n", 1353 | "* **酪梨 (1/4 個):** 提供健康脂肪和纖維 (約 80 大卡)\n", 1354 | "* **蔬菜 (總量約 1 杯):** 提供纖維、維生素和礦物質 (約 20 大卡)\n", 1355 | " * 小黃瓜:1/4 根 (切丁)\n", 1356 | " * 紅蘿蔔:1/4 根 (切丁)\n", 1357 | " * 甜椒 (任何顏色):1/4 個 (切...\n", 1358 | "\n", 1359 | "--- Clean Sample 2 / topic=None ---\n", 1360 | "Gemma 3 270M 的設計理念側重於高能效,使其能在邊緣設備上直接運行,並迅速完成特定任務的微調,以達到成本效益最佳化。\n" 1361 | ] 1362 | } 1363 | ], 1364 | "source": [ 1365 | "preview = []\n", 1366 | "with OUTPUT_CLEAN.open(\"r\", encoding=\"utf-8\") as f:\n", 1367 | " for i, line in enumerate(f):\n", 1368 | " if i >= 2:\n", 1369 | " break\n", 1370 | " preview.append(json.loads(line))\n", 1371 | "\n", 1372 | "for i, s in enumerate(preview, 1):\n", 1373 | " print(f\"\\n--- Clean Sample {i} / topic={s.get('topic')} ---\")\n", 1374 | " text = s[\"messages\"][-1][\"content\"]\n", 1375 | " print(text[:500] + (\"...\" if len(text) > 500 else \"\"))" 1376 | ], 1377 | "id": "1f78a29c-74fa-465c-a16d-1ba8f406406a" 1378 | }, 1379 | { 1380 | "cell_type": "markdown", 1381 | "metadata": { 1382 | "id": "52ebdc34-7526-4679-bf09-b9f868311a92" 1383 | }, 1384 | "source": [ 1385 | "### 2.8(可選)LLM 輔助檢查(實務建議)\n", 1386 | "> 所謂的 LLM-as-Judge\n", 1387 | "\n", 1388 | "在規則式檢查後,可抽樣使用 LLM 來做語義層面的檢查(如:是否符合主題、語氣、是否含危險建議等)。 \n", 1389 | "以下為示意程式(預設註解,不影響主流程)。" 1390 | ], 1391 | "id": "52ebdc34-7526-4679-bf09-b9f868311a92" 1392 | }, 1393 | { 1394 | "cell_type": "code", 1395 | "source": [ 1396 | "len(preview)" 1397 | ], 1398 | "metadata": { 1399 | "colab": { 1400 | "base_uri": "https://localhost:8080/" 1401 | }, 1402 | "id": "V1MswHRmqxSa", 1403 | "outputId": "318ccb70-aecb-4250-8c52-812fa49ceb15" 1404 | }, 1405 | "id": "V1MswHRmqxSa", 1406 | "execution_count": 22, 1407 | "outputs": [ 1408 | { 1409 | "output_type": "execute_result", 1410 | "data": { 1411 | "text/plain": [ 1412 | "2" 1413 | ] 1414 | }, 1415 | "metadata": {}, 1416 | "execution_count": 22 1417 | } 1418 | ] 1419 | }, 1420 | { 1421 | "cell_type": "code", 1422 | "execution_count": 23, 1423 | "metadata": { 1424 | "id": "88b8958b-6f3a-4417-9ba2-e13a4a5500fd", 1425 | "colab": { 1426 | "base_uri": "https://localhost:8080/" 1427 | }, 1428 | "outputId": "7843492d-a550-4bc7-f8cd-995b2c768785" 1429 | }, 1430 | "outputs": [ 1431 | { 1432 | "output_type": "stream", 1433 | "name": 
"stdout", 1434 | "text": [ 1435 | "LLM QC -> PASS\n", 1436 | "LLM QC -> PASS\n" 1437 | ] 1438 | } 1439 | ], 1440 | "source": [ 1441 | "def llm_qc_judgement(text: str) -> bool:\n", 1442 | " \"\"\"回傳 True 視為通過;False 視為不通過\"\"\"\n", 1443 | " prompt = f\"請閱讀以下對話是否符合:主題連貫、語氣正式友善、無敏感資料、無危險建議。\\n\\n{text}\\n\\n請只回答:PASS 或 FAIL。\"\n", 1444 | " resp = client.chat.completions.create(\n", 1445 | " model=\"gemma-3-12b-it\",\n", 1446 | " messages=[{\"role\":\"user\",\"content\": prompt}],\n", 1447 | " temperature=0.0,\n", 1448 | " max_tokens=10,\n", 1449 | " )\n", 1450 | " ans = resp.choices[0].message.content.strip().upper()\n", 1451 | " return ans.startswith(\"PASS\")\n", 1452 | "\n", 1453 | "# 示例(只檢查前 3 筆)\n", 1454 | "for s in preview:\n", 1455 | " ok = llm_qc_judgement(\"\\n\".join(m[\"content\"] for m in s[\"messages\"]))\n", 1456 | " print(\"LLM QC ->\", \"PASS\" if ok else \"FAIL\")" 1457 | ], 1458 | "id": "88b8958b-6f3a-4417-9ba2-e13a4a5500fd" 1459 | }, 1460 | { 1461 | "cell_type": "markdown", 1462 | "metadata": { 1463 | "id": "e2bfbf3a-5157-4bab-826b-64c510619a31" 1464 | }, 1465 | "source": [ 1466 | "### 2.9.(可選)如果生成資料集一直沒通過" 1467 | ], 1468 | "id": "e2bfbf3a-5157-4bab-826b-64c510619a31" 1469 | }, 1470 | { 1471 | "cell_type": "code", 1472 | "execution_count": 24, 1473 | "metadata": { 1474 | "id": "b5d49ad4-a02e-4416-81a3-22bdd0b10ab5" 1475 | }, 1476 | "outputs": [], 1477 | "source": [ 1478 | "# 🔍 Debug:逐筆列出命中的敏感詞 / Placeholder(含前後文)\n", 1479 | "import re\n", 1480 | "\n", 1481 | "def _ctx(text: str, start: int, end: int, width: int = 50) -> str:\n", 1482 | " s = max(0, start - width)\n", 1483 | " e = min(len(text), end + width)\n", 1484 | " return text[s:start] + \"【\" + text[start:end] + \"】\" + text[end:e]\n", 1485 | "\n", 1486 | "def debug_scan_record(rec: dict, show_only_hits: bool = True):\n", 1487 | " rid = rec.get(\"id\", \"\")\n", 1488 | " topic = rec.get(\"topic\", \"\")\n", 1489 | " msgs = rec.get(\"messages\", [])\n", 1490 | "\n", 1491 | " # 🔑 只掃 assistant(模型輸出)\n", 1492 | " text = \"\\n\".join((m.get(\"content\") or \"\") for m in msgs if m.get(\"role\") == \"assistant\")\n", 1493 | "\n", 1494 | " sens_hits = []\n", 1495 | " for p in SENSITIVE_PATTERNS:\n", 1496 | " for m in re.finditer(p, text, flags=re.IGNORECASE):\n", 1497 | " sens_hits.append((p, m.start(), m.end(), m.group(0)))\n", 1498 | "\n", 1499 | " ph_hits = []\n", 1500 | " for p in PLACEHOLDER_PATTERNS:\n", 1501 | " for m in re.finditer(p, text, flags=re.IGNORECASE):\n", 1502 | " ph_hits.append((p, m.start(), m.end(), m.group(0)))\n", 1503 | "\n", 1504 | " if sens_hits or ph_hits or not show_only_hits:\n", 1505 | " print(f\"\\n=== Record id={rid} | topic={topic} ===\")\n", 1506 | " if sens_hits:\n", 1507 | " print(f\"Sensitive matches ({len(sens_hits)}):\")\n", 1508 | " for p, s, e, g in sens_hits:\n", 1509 | " print(f\" - pattern: {p} | match: {g!r}\")\n", 1510 | " print(\" ...\", _ctx(text, s, e), \"...\")\n", 1511 | " if ph_hits:\n", 1512 | " print(f\"Placeholder matches ({len(ph_hits)}):\")\n", 1513 | " for p, s, e, g in ph_hits:\n", 1514 | " print(f\" - pattern: {p} | match: {g!r}\")\n", 1515 | " print(\" ...\", _ctx(text, s, e), \"...\")\n", 1516 | " return bool(sens_hits), bool(ph_hits)\n", 1517 | "\n", 1518 | "def debug_scan_all(recs: list[dict], limit: int | None = None):\n", 1519 | " n = 0\n", 1520 | " total_sens = total_ph = 0\n", 1521 | " for rec in recs:\n", 1522 | " sens, ph = debug_scan_record(rec)\n", 1523 | " total_sens += int(sens)\n", 1524 | " total_ph += int(ph)\n", 1525 | " n += 1\n", 1526 | " if limit 
and n >= limit:\n", 1527 | " break\n", 1528 | " print(f\"\\nSummary: scanned {n} records | with_sensitive={total_sens} | with_placeholder={total_ph}\")" 1529 | ], 1530 | "id": "b5d49ad4-a02e-4416-81a3-22bdd0b10ab5" 1531 | }, 1532 | { 1533 | "cell_type": "code", 1534 | "execution_count": 25, 1535 | "metadata": { 1536 | "id": "f4161db3-b9e8-430e-b8b7-a5492edc3491", 1537 | "colab": { 1538 | "base_uri": "https://localhost:8080/" 1539 | }, 1540 | "outputId": "f128761d-faf4-444b-faff-7dc14faa79d2" 1541 | }, 1542 | "outputs": [ 1543 | { 1544 | "output_type": "stream", 1545 | "name": "stdout", 1546 | "text": [ 1547 | "\n", 1548 | "Summary: scanned 11 records | with_sensitive=0 | with_placeholder=0\n" 1549 | ] 1550 | } 1551 | ], 1552 | "source": [ 1553 | "# 假設你已在前面載入 records = [...](從 raw.jsonl)\n", 1554 | "debug_scan_all(records) # 掃全部\n", 1555 | "# 或只看前 10 筆\n", 1556 | "# debug_scan_all(records, limit=10)" 1557 | ], 1558 | "id": "f4161db3-b9e8-430e-b8b7-a5492edc3491" 1559 | }, 1560 | { 1561 | "cell_type": "markdown", 1562 | "metadata": { 1563 | "id": "74e85e7f-dce9-4f7a-a73f-033328c2e549" 1564 | }, 1565 | "source": [ 1566 | "# 3 將資料集上傳到 Hugging Face Hub(Dataset Repo)\n", 1567 | "\n", 1568 | "本章目標:\n", 1569 | "1. 準備要上傳的檔案(預期:`outputs/datasets.jsonl`)\n", 1570 | "2. 使用 `huggingface_hub` 建立或覆用 **Dataset repo**\n", 1571 | "3. 上傳 `data/train.jsonl`(選配:同時上傳 `train.parquet`)\n", 1572 | "4. 建立 / 更新 Dataset Card(`README.md`)" 1573 | ], 1574 | "id": "74e85e7f-dce9-4f7a-a73f-033328c2e549" 1575 | }, 1576 | { 1577 | "cell_type": "code", 1578 | "execution_count": 26, 1579 | "metadata": { 1580 | "id": "544a0f13-b9e4-4ada-99ff-6bbdff9715d6", 1581 | "colab": { 1582 | "base_uri": "https://localhost:8080/" 1583 | }, 1584 | "outputId": "e0a997b1-949c-45d6-a7d6-79a42fea6e1b" 1585 | }, 1586 | "outputs": [ 1587 | { 1588 | "output_type": "stream", 1589 | "name": "stdout", 1590 | "text": [ 1591 | "Repo: Simon-Liu/gemma-270m-medium-qa\n", 1592 | "Local file: /content/outputs/datasets.jsonl\n" 1593 | ] 1594 | } 1595 | ], 1596 | "source": [ 1597 | "from huggingface_hub import HfApi, create_repo, upload_file, upload_folder\n", 1598 | "from huggingface_hub import login as hf_login\n", 1599 | "from pathlib import Path\n", 1600 | "import json, os, time\n", 1601 | "from google.colab import userdata\n", 1602 | "\n", 1603 | "# @markdown 請設定以下 HuggingFace 專案資訊 的變數數值\n", 1604 | "\n", 1605 | "# @markdown > HuggingFace Token 可以設定在 Google Colab 左邊的金鑰區域\n", 1606 | "\n", 1607 | "# === 基本設定(請依實際調整) ===\n", 1608 | "HF_TOKEN = userdata.get(\"HF_TOKEN\") # @param {type:\"string\"}\n", 1609 | "ORG_OR_USER = \"Simon-Liu\" # @param {type:\"string\"}\n", 1610 | "DATASET_NAME = \"gemma-270m-medium-qa\" # @param {type:\"string\"}\n", 1611 | "REPO_ID = f\"{ORG_OR_USER}/{DATASET_NAME}\"\n", 1612 | "\n", 1613 | "LOCAL_JSONL = Path(\"outputs/datasets.jsonl\")\n", 1614 | "assert LOCAL_JSONL.exists(), f\"找不到 {LOCAL_JSONL},請先完成前面章節生成資料\"\n", 1615 | "\n", 1616 | "# 可選:是否也上傳 Parquet(HF Hub 也會在後台自動生成 parquet 分支,但這裡示範手動輸出一次)\n", 1617 | "ALSO_UPLOAD_PARQUET = True # @param {type:\"string\"}\n", 1618 | "\n", 1619 | "print(\"Repo:\", REPO_ID)\n", 1620 | "print(\"Local file:\", LOCAL_JSONL.resolve())" 1621 | ], 1622 | "id": "544a0f13-b9e4-4ada-99ff-6bbdff9715d6" 1623 | }, 1624 | { 1625 | "cell_type": "code", 1626 | "execution_count": 27, 1627 | "metadata": { 1628 | "id": "1d9493e6-8eb4-4ea0-a678-0ce92d029b95", 1629 | "colab": { 1630 | "base_uri": "https://localhost:8080/" 1631 | }, 1632 | "outputId": "2abcc60b-f690-43d3-a66e-e194d57b5fef" 1633 | }, 1634 | 
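For reference, steps 2 and 3 of this chapter reduce to a couple of `huggingface_hub` calls. The sketch below assumes the variables defined in the configuration cell above (`HF_TOKEN`, `REPO_ID`, `LOCAL_JSONL`, `ALSO_UPLOAD_PARQUET`) and uses the `datasets` package for the optional Parquet copy; it is a minimal outline, not a substitute for the notebook's own upload cells.

```python
# Minimal sketch of the repo-creation + upload flow (assumes HF_TOKEN, REPO_ID,
# LOCAL_JSONL and ALSO_UPLOAD_PARQUET from the configuration cell above).
from huggingface_hub import create_repo, upload_file

# 1) Create the dataset repo, or reuse it if it already exists.
create_repo(repo_id=REPO_ID, repo_type="dataset", token=HF_TOKEN, exist_ok=True)

# 2) Upload the JSONL as data/train.jsonl.
upload_file(
    path_or_fileobj=str(LOCAL_JSONL),
    path_in_repo="data/train.jsonl",
    repo_id=REPO_ID,
    repo_type="dataset",
    token=HF_TOKEN,
)

# 3) Optional Parquet copy (requires the `datasets` package).
if ALSO_UPLOAD_PARQUET:
    from datasets import load_dataset
    ds = load_dataset("json", data_files=str(LOCAL_JSONL), split="train")
    ds.to_parquet("outputs/train.parquet")
    upload_file(
        path_or_fileobj="outputs/train.parquet",
        path_in_repo="data/train.parquet",
        repo_id=REPO_ID,
        repo_type="dataset",
        token=HF_TOKEN,
    )
```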
"outputs": [ 1635 | { 1636 | "output_type": "stream", 1637 | "name": "stdout", 1638 | "text": [ 1639 | "✅ 產生 Dataset Card: /content/outputs/README.md\n" 1640 | ] 1641 | } 1642 | ], 1643 | "source": [ 1644 | "CARD_PATH = Path(\"outputs/README.md\")\n", 1645 | "CARD_PATH.parent.mkdir(parents=True, exist_ok=True)\n", 1646 | "\n", 1647 | "# 注意:HF 會讀取 README.md 頂端的 YAML 區塊作為中繼資料\n", 1648 | "card_md = f\"\"\"---\n", 1649 | "pretty_name: {REPO_ID} (Gemma-3-27B-it, ADK Reference)\n", 1650 | "tags:\n", 1651 | "- dialog\n", 1652 | "- instruction-tuning\n", 1653 | "- sft\n", 1654 | "- openai-messages\n", 1655 | "- reference-based\n", 1656 | "- reference-free\n", 1657 | "license: cc-by-4.0\n", 1658 | "task_categories:\n", 1659 | "- text-generation\n", 1660 | "language:\n", 1661 | "- zh\n", 1662 | "---\n", 1663 | "\n", 1664 | "本資料集包含由 ** {MODEL} ** 生成的對話資料,採用 **OpenAI Chat Messages** 格式(`.jsonl`)。資料來源結合:\n", 1665 | "- **Reference-free**:由 seed 派生的單輪問答。\n", 1666 | "- **Reference-based**:依據參考文本生成單輪問答。\n", 1667 | "\n", 1668 | "> 檔案路徑:`data/train.jsonl`(選配:`data/train.parquet`)\n", 1669 | "\n", 1670 | "## 結構說明\n", 1671 | "- 每列為一筆樣本:`{{\"id\": \"...\", \"type\": \"...\", \"seed\": \"...\", \"context\": \"...\", \"messages\": [{{\"role\":\"user\",\"content\":\"...\"}}, {{\"role\":\"assistant\",\"content\":\"...\"}}]}}`\n", 1672 | "- `type` 欄位標示資料來源:`reference_free` 或 `reference_based`。\n", 1673 | "- `seed` 欄位儲存 Reference-free 的原始 seed 指令,或 Reference-based 的參考文本片段。\n", 1674 | "- `context` 欄位僅在 `reference_based` 資料中包含完整的參考文本片段。\n", 1675 | "- 訓練時可直接使用 `messages` 欄位的對話格式進行訓練。\n", 1676 | "\n", 1677 | "## 來源與限制\n", 1678 | "- Model: {MODEL}\n", 1679 | "- 語言:繁體中文(生成內容),部分參考文本為英文。\n", 1680 | "- 使用情境:教學示範用;不代表專業意見。\n", 1681 | "- **重要**:Reference-based 資料的問題和答案均從參考文本中生成,答案不應超出參考文本範圍。\n", 1682 | "\n", 1683 | "## 授權\n", 1684 | "- 建議使用 **CC BY 4.0**;若另有需求請調整 `license` 欄位。\n", 1685 | "\"\"\"\n", 1686 | "\n", 1687 | "CARD_PATH.write_text(card_md, encoding=\"utf-8\")\n", 1688 | "print(\"✅ 產生 Dataset Card:\", CARD_PATH.resolve())" 1689 | ], 1690 | "id": "1d9493e6-8eb4-4ea0-a678-0ce92d029b95" 1691 | }, 1692 | { 1693 | "cell_type": "code", 1694 | "execution_count": 28, 1695 | "metadata": { 1696 | "id": "9e517999-70ed-4814-8fe7-b486c360fa6d", 1697 | "colab": { 1698 | "base_uri": "https://localhost:8080/", 1699 | "height": 67, 1700 | "referenced_widgets": [ 1701 | "c91e25ccc6034a7c94f52d676c44b94e", 1702 | "13731b55ad004f798665cdc8af1cfff3", 1703 | "cac6255099324943afc2b4fb4e0df034", 1704 | "b2657e8be0c644558ac96a766b85e29c", 1705 | "6d6a9866108f4b248808096ef0dfbeea", 1706 | "c5d372f7efc840aca0e64f3b2a4350fb", 1707 | "9ccd9d2c4a6246a2994be3f1eea8be37", 1708 | "01bb49ccf71e4555a06a6b1933e7c016", 1709 | "c6d75ee7208d4e29b3723889751a6284", 1710 | "c58b638d61764d94a15bc15271296320", 1711 | "850eda4161184dd5b7925955ebcd320e" 1712 | ] 1713 | }, 1714 | "outputId": "9bc2aad1-70db-4e8e-9f33-1461e78d5b73" 1715 | }, 1716 | "outputs": [ 1717 | { 1718 | "output_type": "display_data", 1719 | "data": { 1720 | "text/plain": [ 1721 | "Creating parquet from Arrow format: 0%| | 0/1 [00:00