├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── Notebooks │   └── evaluating-large-language-models-using-llm-as-a-judge-with-amazon-bedrock.ipynb └── README.md /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated.
Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT-0 license 2 | 3 | MIT No Attribution 4 | 5 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 6 | 7 | Permission is hereby granted, free of charge, to any person obtaining a copy of 8 | this software and associated documentation files (the "Software"), to deal in 9 | the Software without restriction, including without limitation the rights to 10 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 11 | the Software, and to permit persons to whom the Software is furnished to do so. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 15 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 16 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 17 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 18 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
19 | -------------------------------------------------------------------------------- /Notebooks/evaluating-large-language-models-using-llm-as-a-judge-with-amazon-bedrock.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "3c825696-de59-4e70-8461-af86a02d812b", 6 | "metadata": { 7 | "tags": [] 8 | }, 9 | "source": [ 10 | "# Evaluating Large Language Models using LLM-as-a-Judge with Amazon Bedrock" 11 | ] 12 | }, 13 | { 14 | "cell_type": "markdown", 15 | "id": "90832ab8-1405-4c45-b185-e65cf8b3b13e", 16 | "metadata": {}, 17 | "source": [ 18 | "This notebook serves as a base for evaluating Large Language Models using LLM-as-a-Judge with Amazon Bedrock. \n", 19 | "\n", 20 | "> This notebook should work well with the Data Science 3.0 kernel in SageMaker Studio\n", 21 | "\n", 22 | "\n", 23 | "Evaluating large language models (LLMs) is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, strong LLMs are used as judges to evaluate these models on more open-ended questions. The agreement between LLM judges and human preferences has been verified by introducing two benchmarks: [Multi Turn (MT)-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge/data/mt_bench), a multi-turn question set, and [Chatbot Arena](https://arena.lmsys.org/), a crowdsourced battle platform. The results reveal that strong LLM judges can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement as between humans.
This makes LLM-as-a-judge a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain.\n", 24 | "\n", 25 | "> ℹ️ **Note:** The evaluation steps in this lab are based on the paper [Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena](https://arxiv.org/pdf/2306.05685.pdf).\n", 26 | "\n", 27 | "This lab addresses this challenge by providing a practical solution for evaluating LLMs using LLM-as-a-Judge with Amazon Bedrock. This is relevant for developers and researchers working on evaluating LLM-based applications. The notebook guides you through using MT-Bench questions to generate test answers and evaluating them with single-answer grading using the Bedrock API, Python, and LangChain. For demonstration purposes in this lab, Claude Instant is evaluated and Claude 3 Sonnet is used as the strong LLM judge. The notebook consists of the following chapters: \n", 28 | "\n", 29 | "1) [Setup of the environment](#1.-Setup-of-the-environment)\n", 30 | "2) [Load MT-Bench questions](#2.-Load-MT-Bench-questions)\n", 31 | "3) [Generate test answers from LLM which should be evaluated](#3.-Generate-test-answers-from-LLM-which-should-be-evaluated)\n", 32 | "4) [Evaluate answers with strong LLM-as-a-judge](#4.-Evaluate-answers-with-strong-LLM-as-a-judge)\n", 33 | "5) [Generate explanation for average rating score](#5.-Generate-explanation-for-average-rating-score)" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "id": "8c45dfee-35e1-467e-b831-f3b72f97f017", 39 | "metadata": {}, 40 | "source": [ 41 | "## 1. Setup of the environment" 42 | ] 43 | }, 44 | { 45 | "cell_type": "markdown", 46 | "id": "cd06d4c4-500f-4dd0-ba19-6559f85fde9b", 47 | "metadata": {}, 48 | "source": [ 49 | "We start by installing the required libraries.
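Before running the install cell, it can help to see which of the lab's dependencies are already present in the kernel. This pre-flight check is not part of the original notebook; it is a minimal sketch using only the standard library:

```python
# Optional pre-flight check (not part of the original notebook): report which
# of the lab's dependencies are already installed in the current kernel.
from importlib.metadata import version, PackageNotFoundError

def get_installed_version(package: str):
    """Return the installed version string for `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ["langchain", "boto3", "tqdm"]:
    v = get_installed_version(pkg)
    print(f"{pkg}: {v or 'NOT INSTALLED'}")
```

If a package is missing or older than the pinned version, the `%pip install` cell that follows brings the kernel up to date.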
50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 2, 55 | "id": "bb52e396-d5b3-4a6c-bc85-5a577efd33cf", 56 | "metadata": {}, 57 | "outputs": [], 58 | "source": [ 59 | "%%capture \n", 60 | "%pip install langchain==0.1.10 boto3 tqdm" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "id": "6f81966d-ec79-42f1-9d05-446b5814b42f", 66 | "metadata": {}, 67 | "source": [ 68 | "## 2. Load MT-Bench questions" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "id": "eb150ca6-5373-48d9-8472-2266c1468e30", 74 | "metadata": {}, 75 | "source": [ 76 | "This lab uses the MT-Bench question set, which consists of 80 high-quality multi-turn questions. They are designed to test multi-turn conversation and instruction-following ability, covering common use cases and focusing on challenging questions to differentiate models.\n", 77 | "\n", 78 | "To evaluate custom applications or fine-tuned LLMs, questions should be adjusted or created according to the use cases. They should focus on covering common use cases and usage patterns.\n", 79 | "\n", 80 | "We download the questions to use them for evaluation.
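The download loop that follows expects each line of `question.jsonl` to be a standalone JSON object with a `question_id`, a `category`, and a `turns` list holding the two prompts of a conversation. A self-contained sketch of that record layout and the line-by-line parsing; the sample texts here are paraphrased stand-ins, not the real benchmark questions:

```python
# Sketch of the MT-Bench JSONL record layout and line-by-line parsing.
# The sample turn texts are paraphrased stand-ins for illustration only.
import json

sample_jsonl = "\n".join([
    '{"question_id": 81, "category": "writing", "turns": ["Compose a short travel blog post.", "Rewrite it so every sentence starts with the letter A."]}',
    '{"question_id": 104, "category": "reasoning", "turns": ["Solve this short logic puzzle.", "Now add a twist to the puzzle and solve it again."]}',
])

# Same pattern as the download cell: split on newlines, skip empty lines,
# parse each remaining line as one JSON object.
questions = [json.loads(line) for line in sample_jsonl.split("\n") if line]
print(len(questions), questions[0]["category"], len(questions[0]["turns"]))  # 2 writing 2
```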
81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 3, 86 | "id": "c19cfdf0-b995-4898-9d93-032c714bd645", 87 | "metadata": { 88 | "tags": [] 89 | }, 90 | "outputs": [], 91 | "source": [ 92 | "# Import necessary libraries\n", 93 | "import requests\n", 94 | "import json\n", 95 | "\n", 96 | "# Download MT-Bench questions\n", 97 | "url = \"https://raw.githubusercontent.com/lm-sys/FastChat/main/fastchat/llm_judge/data/mt_bench/question.jsonl\"\n", 98 | "response = requests.get(url)\n", 99 | "lines = response.text.split(\"\\n\") \n", 100 | "\n", 101 | "# Iterate through lines and append them to questions array as json\n", 102 | "questions = []\n", 103 | "for line in lines:\n", 104 | " if line:\n", 105 | " questions.append(json.loads(line))" 106 | ] 107 | }, 108 | { 109 | "cell_type": "markdown", 110 | "id": "4133b71d-9ee9-4808-85f3-f096f5e7e0fe", 111 | "metadata": {}, 112 | "source": [ 113 | "## 3. Generate test answers from LLM which should be evaluated" 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "id": "ded3bed1-f6b7-4775-9f8b-6bbee2b962eb", 119 | "metadata": {}, 120 | "source": [ 121 | "Now that we have the questions stored, we use the LLM which should be evaluated to generate the answers to these questions. First we create a prompt template which we use in a second step to generate each answer." 
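The evaluation step later reads each record's generated turns as `question['answers'][i]['input']` and `question['answers'][i]['response']`. A minimal sketch of a per-question flow that produces exactly that structure, with a hypothetical stub standing in for the Bedrock-backed conversation chain:

```python
# Sketch only: `stub_predict` is a hypothetical stand-in for the Bedrock-backed
# chain; the notebook answers each turn with the LLM under evaluation instead.
def stub_predict(turn: str, history: list) -> str:
    return f"Stub answer to: {turn}"

def answer_question(question: dict, predict=stub_predict) -> dict:
    """Answer both turns in order, carrying the conversation history, and
    store {'input', 'response'} pairs under question['answers']."""
    history, answers = [], []
    for turn in question["turns"]:
        response = predict(turn, history)
        history.append((turn, response))
        answers.append({"input": turn, "response": response})
    question["answers"] = answers
    return question

q = answer_question({"question_id": 1, "turns": ["First turn", "Second turn"]})
print(q["answers"][1]["input"])  # Second turn
```

Keeping the raw turn alongside the model response is what lets the judge see the full two-turn exchange later without re-querying the evaluated model.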
122 | ] 123 | }, 124 | { 125 | "cell_type": "code", 126 | "execution_count": 4, 127 | "id": "d8896ed0-4d83-4774-bb9f-bd4d6b5fb088", 128 | "metadata": {}, 129 | "outputs": [], 130 | "source": [ 131 | "# Import necessary libraries\n", 132 | "from langchain.prompts import PromptTemplate\n", 133 | "from langchain_community.chat_models import BedrockChat\n", 134 | "import boto3\n", 135 | "\n", 136 | "# Create bedrock client\n", 137 | "boto3_bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')" 138 | ] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "execution_count": 5, 143 | "id": "3ff03e7e-31c2-4090-a3f1-b2bd013138a0", 144 | "metadata": { 145 | "tags": [] 146 | }, 147 | "outputs": [], 148 | "source": [ 149 | "# Create a prompt template used to answer each open-ended question from the end-user\n", 150 | "initial_question_prompt_template = PromptTemplate(\n", 151 | " input_variables=[\"input\",\"history\"],\n", 152 | " template=\"\"\"HUMAN:\n", 153 | " You are an artificial intelligence assistant and answer questions from a curious user.\n", 154 | " Give helpful, detailed, and polite answers to the user's question \n", 155 | " \n", 156 | " Current conversation:\n", 157 | " \n", 158 | " {history}\n", 159 | " \n", 160 | " \n", 161 | " Here is the human's next reply:\n", 162 | " \n", 163 | " {input}\n", 164 | " \n", 165 | "\n", 166 | " ANSWER:\"\"\")" 167 | ] 168 | }, 169 | { 170 | "cell_type": "code", 171 | "execution_count": 6, 172 | "id": "e20f225d-8821-4524-bf35-791ebf07caff", 173 | "metadata": { 174 | "tags": [] 175 | }, 176 | "outputs": [], 177 | "source": [ 178 | "# For each model provider there are different parameters to define when inferencing against the model.
These depend on the use case.\n", 179 | "inference_modifier = {\n", 180 | " \"temperature\": 0.5,\n", 181 | " \"top_k\": 250,\n", 182 | " \"top_p\": 1,\n", 183 | " }\n", 184 | " \n", 185 | "\n", 186 | "evaluate_llm = BedrockChat(model_id = \"anthropic.claude-instant-v1\",\n", 187 | " client = boto3_bedrock, \n", 188 | " model_kwargs = inference_modifier \n", 189 | " )" 190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": 7, 195 | "id": "92bc75f6-8dcb-488b-954a-c8bd1611e1da", 196 | "metadata": { 197 | "tags": [] 198 | }, 199 | "outputs": [], 200 | "source": [ 201 | "# Import necessary libraries\n", 202 | "from langchain.memory import ConversationBufferMemory\n", 203 | "from langchain.chains import ConversationChain\n", 204 | "from tqdm.auto import tqdm" 205 | ] 206 | }, 207 | { 208 | "cell_type": "markdown", 209 | "id": "7f67a224-e317-42d6-82a4-9af60b033210", 210 | "metadata": {}, 211 | "source": [ 212 | "ℹ️ **Note:** The next step takes several minutes to complete." 213 | ] 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": 8, 218 | "id": "d81da9c9-dddd-43e7-a268-d7d4d26ed49a", 219 | "metadata": { 220 | "tags": [] 221 | }, 222 | "outputs": [ 223 | { 224 | "data": { 225 | "application/vnd.jupyter.widget-view+json": { 226 | "model_id": "86dfc4c88a7e4c289ae35ca1377a03bf", 227 | "version_major": 2, 228 | "version_minor": 0 229 | }, 230 | "text/plain": [ 231 | " 0%| | 0/80 [00:00Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below.\n", 287 | " Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of the response. Your evaluation should focus on the assistant's answer to the second user question. Begin your evaluation by providing a short explanation. Be as objective as possible.
After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \\\"<rating></rating>\\\", for example: \\\"Rating: <rating>5</rating>\\\" \n", 288 | " \n", 289 | " \n", 290 | " Human:\n", 291 | " \n", 292 | " {question1}\n", 293 | " \n", 294 | "\n", 295 | " Assistant:\n", 296 | " \n", 297 | " {answer1}\n", 298 | " \n", 299 | "\n", 300 | " Human:\n", 301 | " \n", 302 | " {question2}\n", 303 | " \n", 304 | "\n", 305 | " Assistant:\n", 306 | " \n", 307 | " {answer2}\n", 308 | " \n", 309 | " \n", 310 | "\n", 311 | " ANSWER:\"\"\")" 312 | ] 313 | }, 314 | { 315 | "cell_type": "code", 316 | "execution_count": 10, 317 | "id": "63d18812-ed8b-48b2-a520-448b74a99e03", 318 | "metadata": { 319 | "tags": [] 320 | }, 321 | "outputs": [], 322 | "source": [ 323 | "# For each model provider there are different parameters to define when inferencing against the model. These depend on the use case.\n", 324 | "eval_inference_modifier = {\n", 325 | " \"temperature\": 0.5,\n", 326 | " \"top_k\": 250,\n", 327 | " \"top_p\": 1,\n", 328 | " }\n", 329 | " \n", 330 | "\n", 331 | "eval_llm = BedrockChat(model_id = \"anthropic.claude-3-sonnet-20240229-v1:0\",\n", 332 | " client = boto3_bedrock, \n", 333 | " model_kwargs = eval_inference_modifier \n", 334 | " )" 335 | ] 336 | }, 337 | { 338 | "cell_type": "markdown", 339 | "id": "e0e0ac00-172c-4760-93ca-9b8de10345bd", 340 | "metadata": {}, 341 | "source": [ 342 | "ℹ️ **Note:** The next step takes several minutes to complete.
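The evaluation loop that follows extracts the numeric score from the judge's reply with a regular expression. A self-contained sketch of that extraction, assuming the judge is asked to wrap its score in a `<rating>` tag; the tag name is an assumption here, so adjust it to match whatever format your judge prompt actually requests:

```python
# Sketch of score extraction from a judge reply. The <rating> tag name is an
# assumption and should mirror the format requested in the judge prompt.
import re

reg_str = r"<rating>(.*?)</rating>"

def extract_rating(judge_output: str):
    """Return the judge's score as an int, or None if no rating tag is found."""
    match = re.search(reg_str, judge_output)
    return int(match.group(1)) if match else None

sample = "The answer is accurate and well structured. <rating>9</rating>"
print(extract_rating(sample))          # 9
print(extract_rating("no tag here"))   # None
```

Returning `None` for unparseable replies lets the caller skip them instead of crashing, which matters because the judge occasionally omits the requested format.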
343 | ] 344 | }, 345 | { 346 | "cell_type": "code", 347 | "execution_count": 11, 348 | "id": "90722ec0-9a6c-4345-a4f4-1c41cb029afb", 349 | "metadata": { 350 | "tags": [] 351 | }, 352 | "outputs": [ 353 | { 354 | "data": { 355 | "application/vnd.jupyter.widget-view+json": { 356 | "model_id": "7b44f895bfcd4623a52d933260af4053", 357 | "version_major": 2, 358 | "version_minor": 0 359 | }, 360 | "text/plain": [ 361 | " 0%| | 0/80 [00:00<?, ?it/s]" 362 | ] 363 | }, 364 | "metadata": {}, 365 | "output_type": "display_data" 366 | } 367 | ], 368 | "source": [ 369 | "import re\n", 370 | "\n", 371 | "amount_questions = 0\n", 372 | "ratings_add_up = 0\n", 373 | "reg_str = r\"<rating>(.*?)</rating>\"\n", 374 | "explanation_rating = []\n", 375 | "for question in tqdm(questions):\n", 376 | " question1 = question['answers'][0]['input']\n", 377 | " question2 = question['answers'][1]['input']\n", 378 | " answer1 = question['answers'][0]['response']\n", 379 | " answer2 = question['answers'][1]['response']\n", 380 | " question['rating_text'] = eval_llm.invoke(eval_prompt_template.format(question1 = question1, answer1 = answer1, question2=question2, answer2=answer2)).content\n", 381 | " tag_value = re.search(reg_str, question['rating_text'])\n", 382 | " if tag_value: \n", 383 | " question['rating_score'] = tag_value.group(1)\n", 384 | " explanation_rating.append(question['rating_text'])\n", 385 | " amount_questions = amount_questions + 1\n", 386 | " ratings_add_up = ratings_add_up + int(question['rating_score'])" 387 | ] 388 | }, 389 | { 390 | "cell_type": "code", 391 | "execution_count": 12, 392 | "id": "ad0bf06d-dd94-4cdc-9855-41a596328302", 393 | "metadata": { 394 | "tags": [] 395 | }, 396 | "outputs": [ 397 | { 398 | "name": "stdout", 399 | "output_type": "stream", 400 | "text": [ 401 | "The average rating score is: 8.6125\n" 402 | ] 403 | } 404 | ], 405 | "source": [ 406 | "# Calculate the average rating score\n", 407 | "average_rating = ratings_add_up/amount_questions\n", 408 | "print(\"The average rating score is: {}\".format(average_rating))" 409 | ] 410 | }, 411 | { 412 | "cell_type": "code", 413 | "execution_count": 13, 414 | "id": "7917b47a-4c7c-4ec2-94fb-f290a1d2addb", 415 | "metadata": {}, 416 | "outputs": [ 417 | { 418 |
"data": { 419 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjMAAAGwCAYAAABcnuQpAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/H5lhTAAAACXBIWXMAAA9hAAAPYQGoP6dpAABF50lEQVR4nO3deVxVdeL/8fcV5QqIuLMoCgqa+4YaaqOWG6ZpNmajuWQ1lZoio341W9AUzMklx8nSTMkybb6uWSloittYuJB+zUgNl0piMhISRYXz+8Ofd7qyCIgdjr6ej8d9PLifc+49b67CffM5555jMwzDEAAAgEWVMTsAAADAraDMAAAAS6PMAAAAS6PMAAAAS6PMAAAAS6PMAAAAS6PMAAAASytrdoDbLScnRz/++KM8PT1ls9nMjgMAAArBMAxlZGTIz89PZcoUPPdyx5eZH3/8Uf7+/mbHAAAAxXDmzBnVqlWrwHXu+DLj6ekp6dqLUbFiRZPTAABKu5zMTB2770+SpOCdO1TG3d3kRHen9PR0+fv7O97HC3LHl5nru5YqVqxImQEA3FRO2bKq4OIi6dp7B2XGXIU5RIQDgAEAgKVRZgAAgKVRZgAAgKXd8cfMFFZ2drauXLlidgzgjuPq6nrTj1UCwK2468uMYRhKSUnRr7/+anYU4I5UpkwZBQYGytXV1ewoAO5Qd32ZuV5katSoIXd3d06sB5Sg6yetPHv2rGrXrs3PF4Db4q4uM9nZ2Y4iU7VqVbPjAHek6tWr68cff9TVq1dVrlw5s+MAuAPd1Tuyrx8j4845BIDb5vrupezsbJOTALhT3dVl5jqmvoHbh58vALcbZQYAAFgaZQYAAFia6WXmhx9+0OOPP66qVavK3d1dLVq00P79+x3LDcNQZGSk/Pz85Obmps6dO+vIkSMmJsadYNmyZapUqZLZMQAAJcDUMpOWlqYOHTqoXLly+uyzz/T1119r9uzZTm8ys2bN0pw5c7RgwQIlJCTIx8dH3bp1U0ZGhnnBS4k9e/bIxcVFPXv2NDtKqRYQEKB58+Y5jQ0cOFDffvutOYEAACXK1I9mv/baa/L399fSpUsdYwEBAY6vDcPQvHnzNGXKFPXv31+SFBMTI29vb61YsULPPPPMHx25VHn33Xf1/PPP65133tHp06dVu3bt27at7Oxs2Wy2UnMmV8MwlJ2drbJli/df2M3NTW5ubiWcyly3+poAgFWZ+s60YcMGhYSEaMCAAapRo4ZatmypxYsXO5YnJycrJSVF3bt3d4zZ7XZ16tRJe/bsyfM5s7KylJ6e7nQrCsMwlJOZ+YffDMMoUs4LFy7oo48+0nPPPafevXtr2bJljmWhoaGaNGmS0/r/+c9/VK5cOW3btk2SdPnyZU2cOFE1a9aUh4eH2rVrp+3btzvWv74bZuPGjWrUqJHsdrtOnTqlhIQEdevWTdWqVZOXl5c6deqkAwcOOG3rm2++UceOHVW+fHk1atRIW7Zskc1m07p16xzr/PDDDxo4cKAqV66sqlWrqm/fvjp58mS+3+/27dtls9m0efNmhYSEyG63a+fOnTpx4oT69u0rb29vVahQQW3atNGWLVscj+vcubNOnTqlcePGyWazOT5Zc+NupsjISLVo0ULLly9XQECAvLy89NhjjznNAGZkZGjw4MHy8PCQr6+v5s6dq86dOys8PDzf3F999ZW6dOkiT09PVaxYUa1bt9a+ffscy3fv3q1OnTrJ3d1dlStXVo8ePZSWlibp2v/lMWPGqEaNGipfvrw6duyohISEm74mhmFo1qxZqlu3rtzc3NS8eXP97//+b74ZAcDqTP0T7rvvvtPChQsVERGhF154QV9++aXGjBkju92uoUOHKiUlRZLk7e3t9Dhvb2+dOnUqz+eMjo7W1KlTi53JuHhRSa1aF/vxxdXgwH7ZinC+m1WrVqlBgwZq0KCBHn/8cT3//PN66aWXZL
PZNHjwYP39739XdHS048171apV8vb2VqdOnSRJTzzxhE6ePKmVK1fKz89Pa9euVc+ePXX48GEFBwdLkjIzMxUdHa133nlHVatWVY0aNZScnKxhw4Zp/vz5kqTZs2erV69eOnbsmDw9PZWTk6N+/fqpdu3a+uKLL5SRkaG//e1vTtkzMzPVpUsX3XfffdqxY4fKli2r6dOnq2fPnjp06FCBp72fOHGiXn/9ddWtW1eVKlXS999/r169emn69OkqX768YmJi1KdPHyUlJal27dpas2aNmjdvrr/+9a96+umnC3xNT5w4oXXr1mnjxo1KS0vTo48+qpkzZ2rGjBmSpIiICO3evVsbNmyQt7e3Xn75ZR04cEAtWrTI9zkHDx6sli1bauHChXJxcVFiYqLjxHGJiYl64IEHNGLECM2fP19ly5bVtm3bHOdjmThxolavXq2YmBjVqVNHs2bNUo8ePXT8+HFVqVIl39fkxRdf1Jo1a7Rw4UIFBwdrx44devzxx1W9enXHvz+Awmn48iZllbWbtv2TMx80bdtWYmqZycnJUUhIiKKioiRJLVu21JEjR7Rw4UINHTrUsd6N56kwDCPfc1dMnjxZERERjvvp6eny9/e/DenNtWTJEj3++OOSpJ49e+q3337T1q1b1bVrVw0cOFDjxo3Trl27dN9990mSVqxYoUGDBqlMmTI6ceKEPvzwQ33//ffy8/OTJI0fP16bNm3S0qVLHf8eV65c0ZtvvqnmzZs7tnv//fc75Xj77bdVuXJlxcfHq3fv3oqNjdWJEye0fft2+fj4SJJmzJihbt26OR6zcuVKlSlTRu+8847j33Hp0qWqVKmStm/f7jQTd6Np06Y5PVfVqlWd8k2fPl1r167Vhg0bNHr0aFWpUkUuLi7y9PR05MlPTk6Oli1bJk9PT0nSkCFDtHXrVs2YMUMZGRmKiYnRihUr9MADDzgyX3/98nP69GlNmDBB99xzjyQ5iqJ07XiwkJAQvfnmm46xxo0bS7o287Zw4UItW7ZMYWFhkqTFixcrLi5OS5Ys0YQJE/J8TS5cuKA5c+bo888/V2hoqCSpbt262rVrl95++23KDIA7kqllxtfXV40aNXIaa9iwoVavXi1JjjeflJQU+fr6OtZJTU3NNVtznd1ul91e/BZtc3NTgwP7b75iCbMV4fiNpKQkffnll1qzZo0kqWzZsho4cKDeffddde3aVdWrV1e3bt30wQcf6L777lNycrL+/e9/a+HChZKkAwcOyDAM1a9f3+l5s7KynC7r4OrqqmbNmjmtk5qaqpdfflmff/65fvrpJ2VnZyszM1OnT592ZPP393cqDm3btnV6jv379+v48eOO0nDdpUuXdOLEiQK/95CQEKf7Fy5c0NSpU7Vx40bHKfMvXrzoyFMUAQEBTpl8fX2Vmpoq6dos4pUrV5y+Fy8vLzVo0KDA54yIiNBTTz2l5cuXq2vXrhowYIDq1asn6drMzIABA/J83IkTJ3TlyhV16NDBMVauXDm1bdtWR48edVr396/J119/rUuXLjkVPunabsWWLVsWmBUArMrUMtOhQwclJSU5jX377beqU6eOJCkwMFA+Pj6Ki4tz/CK+fPmy4uPj9dprr92WTDabrUi7e8ywZMkSXb16VTVr1nSMGYahcuXKKS0tTZUrV9bgwYM1duxY/eMf/9CKFSvUuHFjxwxGTk6OXFxctH//frm4uDg9d4UKFRxfu7m55ZoBGz58uP7zn/9o3rx5qlOnjux2u0JDQ3X58mVHjpud8TUnJ0etW7fWBx98kGtZ9erVC3ysh4eH0/0JEyZo8+bNev311xUUFCQ3Nzf9+c9/duQpihuvG2Sz2ZSTkyNJjmOa8polLEhkZKQGDRqkTz75RJ999pleeeUVrVy5Ug8//HCBByAXtL0bx37/mlzP+8knnzj9/5B0SyUfAEozUw8AHjdunPbu3auoqCgdP35cK1as0KJFizRq1ChJ136Rh4eHKyoqSm
vXrtX//d//afjw4XJ3d9egQYPMjG6aq1ev6r333tPs2bOVmJjouH311VeqU6eOoyD069dPly5d0qZNm7RixQrHLinp2u687OxspaamKigoyOl2s10xO3fu1JgxY9SrVy81btxYdrtdP//8s2P5Pffco9OnT+unn35yjP3+oFVJatWqlY4dO6YaNWrk2r6Xl1eRXo+dO3dq+PDhevjhh9W0aVP5+PjkOpDY1dX1lq8LVK9ePZUrV05ffvmlYyw9PV3Hjh276WPr16+vcePGKTY2Vv3793d8eq9Zs2baunVrno8JCgqSq6urdu3a5Ri7cuWK9u3bp4YNG+a7resHa58+fTrXa3sn7m4FAMnkMtOmTRutXbtWH374oZo0aaJXX31V8+bN0+DBgx3rTJw4UeHh4Ro5cqRCQkL0ww8/KDY2NtcuirvF9YNTn3zySTVp0sTp9uc//1lLliyRdO2v9b59++qll17S0aNHncpf/fr1NXjwYA0dOlRr1qxRcnKyEhIS9Nprr+nTTz8tcPtBQUFavny5jh49qi+++EKDBw92mmHo1q2b6tWrp2HDhunQoUPavXu3pkyZIum/swyDBw9WtWrV1LdvX+3cuVPJycmKj4/X2LFj9f333xfp9QgKCtKaNWschW7QoEGO2YnrAgICtGPHDv3www9OxasoPD09NWzYME2YMEHbtm3TkSNHNGLECJUpUybfmaiLFy9q9OjR2r59u06dOqXdu3crISHBUUYmT56shIQEjRw5UocOHdI333yjhQsX6ueff5aHh4eee+45TZgwQZs2bdLXX3+tp59+WpmZmXryyScLzDl+/HiNGzdOMTExOnHihA4ePKh//vOfiomJKdb3DgClneknDendu7cOHz6sS5cu6ejRo7k+cWKz2RQZGamzZ8/q0qVLio+PV5MmTUxKa74lS5aoa9euec5gPPLII0pMTHR8VHrw4MH66quvdN999+U6B83SpUs1dOhQ/e1vf1ODBg300EMP6YsvvrjpX+/vvvuu0tLS1LJlSw0ZMsTx0eHrXFxctG7dOv32229q06aNnnrqKb344ouSpPLly0u6dpXyHTt2qHbt2urfv78aNmyoESNG6OLFi6pYsWKRXo+5c+eqcuXKat++vfr06aMePXqoVatWTutMmzZNJ0+eVL169W66G6sgc+bMUWhoqHr37q2uXbuqQ4cOatiwoeP7upGLi4vOnTunoUOHqn79+nr00UcVFhbm+LRd/fr1FRsbq6+++kpt27ZVaGio1q9f7zhPzMyZM/XII49oyJAhatWqlY4fP67NmzercuXKBeZ89dVX9fLLLys6OloNGzZUjx499PHHHyswMLDY3zsAlGY2o6gnOLGY9PR0eXl56fz587neKC9duqTk5GQFBgbm+4aEW7d792517NhRx48fdxz8eie4cOGCatasqdmzZxc4W3K34+cMVpOTmek4RUe/3jP4aLZJCnr/vhGnCkWJW7t2rSpUqKDg4GAdP35cY8eOVYcOHSxfZA4ePKhvvvlGbdu21fnz5zVt2jRJUt++fU1OBgB3N8oMSlxGRoYmTpyoM2fOqFq1auratatmz55tdqwS8frrryspKUmurq5q3bq1du7cqWrVqpkdCwDuapQZlLihQ4c6nfTwTtGyZUunK7oDAEoH0w8ABgAAuBWUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUmTuczWbTunXrSvx5O3furPDwcMf9gIAAzZs3r8S3k9e2rGDZsmWqVKmS2TEA4K7AR7PzETDpkz9sW0U9w+Pw4cMd19kpW7asqlSpombNmukvf/mLhg8frjJl/ttRz549e9PT319ns9m0du1a9evX76brrlmzJtdVpm/V9u3b1aVLF6WlpTkVgduxrZIUEBCg8PBwp8I1cOBA9erVy7xQAHAXYWbGonr27KmzZ8/q5MmT+uyzz9SlSxeNHTtWvXv31tWrVx3r+fj4yG4vuVNxX7lyRZJUpUqVP+
xin3/ktq4zDMPpdSwqNzc3p2tW3Qlu9TUBgNuFMmNRdrtdPj4+qlmzplq1aqUXXnhB69ev12effaZly5Y51vv9bqbLly9r9OjR8vX1Vfny5RUQEKDo6GhJ12YXJOnhhx+WzWZz3I+MjFSLFi307rvvqm7durLb7TIMI89dPxkZGRo0aJAqVKggPz8//eMf/3AsO3nypGw2mxITEx1jv/76q2w2m7Zv366TJ0+qS5cukqTKlSvLZrNp+PDhknLvZkpLS9PQoUNVuXJlubu7KywsTMeOHXMsv76LZ/PmzWrYsKEqVKjgKH/52b59u2w2mzZv3qyQkBDZ7Xbt3LlTJ06cUN++feXt7a0KFSqoTZs22rJli+NxnTt31qlTpzRu3DjZbDbHFbRv3M10/XVcvny5AgIC5OXlpccee0wZGRlOr9/gwYPl4eEhX19fzZ0796a72L766it16dJFnp6eqlixolq3bq19+/Y5lu/evVudOnWSu7u7KleurB49eigtLU2SlJWV5bhQaPny5dWxY0clJCTc9DUxDEOzZs1S3bp15ebmpubNm+t///d/880IALcbZeYOcv/996t58+Zas2ZNnsvnz5+vDRs26KOPPlJSUpLef/99R2m5/ia2dOlSnT171ulN7fjx4/roo4+0evVqpzJyo7///e9q1qyZDhw4oMmTJ2vcuHGKi4srVHZ/f3+tXr1akpSUlKSzZ8/qjTfeyHPd4cOHa9++fdqwYYP+/e9/yzAM9erVyzFrJEmZmZl6/fXXtXz5cu3YsUOnT5/W+PHjb5pj4sSJio6O1tGjR9WsWTP99ttv6tWrl7Zs2aKDBw+qR48e6tOnj06fPi3p2i6wWrVqadq0aTp79myBhenEiRNat26dNm7cqI0bNyo+Pl4zZ850LI+IiNDu3bu1YcMGxcXFaefOnY4roOdn8ODBqlWrlhISErR//35NmjTJsUsuMTFRDzzwgBo3bqx///vf2rVrl/r06aPs7GzH97p69WrFxMTowIEDCgoKUo8ePfTLL78U+Jq8+OKLWrp0qRYuXKgjR45o3LhxevzxxxUfH3/T1xcAbgeOmbnD3HPPPTp06FCey06fPq3g4GB17NhRNptNderUcSyrXr26JKlSpUry8fFxetzly5e1fPlyxzr56dChgyZNmiRJql+/vnbv3q25c+eqW7duN83t4uKiKlWqSJJq1KiR78Gzx44d04YNG7R79261b99ekvTBBx/I399f69at04ABAyRd2x321ltvOS5uOXr0aMeFIQsybdo0p7xVq1ZV8+bNHfenT5+utWvXasOGDRo9erSqVKkiFxcXeXp65nrdbpSTk6Nly5Y5dpkNGTJEW7du1YwZM5SRkaGYmBitWLFCDzzwgKRrxdLPz6/A5zx9+rQmTJige+65R5IUHBzsWDZr1iyFhITozTffdIw1btxY0rUrfi9cuFDLli1TWFiYJGnx4sWKi4vTkiVLNGHChDxfkwsXLmjOnDn6/PPPFRoaKkmqW7eudu3apbfffludOnUqMC8A3A7MzNxhDMNw7Oq40fDhw5WYmKgGDRpozJgxio2NLdRz1qlT56ZFRpLjze33948ePVqobRTW0aNHVbZsWbVr184xVrVqVTVo0MBpW+7u7k5X6fb19VVqaupNnz8kJMTp/oULFzRx4kQ1atRIlSpVUoUKFfTNN984ZmaKIiAgwOnYn99n+u6773TlyhW1bdvWsdzLy0sNGjQo8DkjIiL01FNPqWvXrpo5c6ZOnDjhWHZ9ZiYvJ06c0JUrV9ShQwfHWLly5dS2bdtc/2a/f02+/vprXbp0Sd26dVOFChUct/fee89p2wDwR6LM3GGOHj2qwMDAPJe1atVKycnJevXVV3Xx4kU9+uij+vOf/3zT5/Tw8Ch2nuvF6vonrAzDcCz7/W6hwvr9428c/32Ju/HTTzabLd/H/t6N3+uECRO0evVqzZgxQzt37lRiYqKaNm2qy5cvFzl7Xplycn
Ic+a+P/d7NMkdGRurIkSN68MEH9fnnn6tRo0Zau3atpGsHIeenoO3dOPb71+R63k8++USJiYmO29dff81xMwBMQ5m5g3z++ec6fPiwHnnkkXzXqVixogYOHKjFixdr1apVWr16teMYiXLlyjmOpyiOvXv35rp/fffH9Zmd3x9TcuPxN66urpJUYIZGjRrp6tWr+uKLLxxj586d07fffquGDRsWO3t+du7cqeHDh+vhhx9W06ZN5ePjo5MnT+bKfSuvmyTVq1dP5cqV05dffukYS09PdzqwOT/169fXuHHjFBsbq/79+2vp0qWSpGbNmmnr1q15PiYoKEiurq7atWuXY+zKlSvat29fga9jo0aNZLfbdfr0aQUFBTnd/P39C/vtAkCJ4pgZi8rKylJKSoqys7P1008/adOmTYqOjlbv3r01dOjQPB8zd+5c+fr6qkWLFipTpoz+9a9/ycfHx3F8SkBAgLZu3aoOHTrIbrcX+vw01+3evVuzZs1Sv379FBcXp3/961/65JNr5+txc3PTvffeq5kzZyogIEA///yzXnzxRafH16lTRzabTRs3blSvXr3k5uamChUqOK0THBysvn376umnn9bbb78tT09PTZo0STVr1lTfvn2LlLcwgoKCtGbNGvXp00c2m00vvfSSY3biuoCAAO3YsUOPPfaY7Ha7qlWrVuTteHp6atiwYZowYYKqVKmiGjVq6JVXXlGZMmXy3W148eJFTZgwQX/+858VGBio77//XgkJCY4yO3nyZDVt2lQjR47Us88+K1dXV23btk0DBgxQtWrV9Nxzzzm2V7t2bc2aNUuZmZl68sknC8w5fvx4jRs3Tjk5OerYsaPS09O1Z88eVahQQcOGDSvy9w4At4qZGYvatGmTfH19FRAQoJ49e2rbtm2aP3++1q9fLxcXlzwfU6FCBb322msKCQlRmzZtdPLkSX366aeOXUCzZ89WXFyc/P391bJlyyJn+tvf/qb9+/erZcuWevXVVzV79mz16NHDsfzdd9/VlStXFBISorFjx2r69OlOj69Zs6amTp2qSZMmydvbW6NHj85zO0uXLlXr1q3Vu3dvhYaGyjAMffrpp7flxHpz585V5cqV1b59e/Xp00c9evRQq1atnNaZNm2aTp48qXr16hXq2KL8zJkzR6Ghoerdu7e6du2qDh06qGHDhipfvnye67u4uOjcuXMaOnSo6tevr0cffVRhYWGaOnWqpGszNrGxsfrqq6/Utm1bhYaGav369Spb9trfMDNnztQjjzyiIUOGqFWrVjp+/Lg2b9580xL76quv6uWXX1Z0dLQaNmyoHj166OOPP8539yYA3G42ozAHElhYenq6vLy8dP78eVWsWNFp2aVLl5ScnKzAwMB83zAAs1y4cEE1a9bU7NmzC5wtKe34OYPV5GRmKqlVa0lSv94zlFW25E48WlRFPUP8naSg9+8bsZsJKCUOHjyob775Rm3bttX58+cdHyW/HbvPAOBOQpkBSpHXX39dSUlJcnV1VevWrbVz585iHYMDAHcTygxQSrRs2VL79+83OwYAWA4HAAMAAEujzOjmJyYDUHz8fAG43e7qMnP9o7yZmZkmJwHuXNfPlpzfKQMA4Fbd1cfMuLi4qFKlSo7r47i7u+d7gjIARZeTk6P//Oc/cnd3d5zfBgBK2l3/2+X6lY4LcxFCAEVXpkwZ1a5dmz8UANw2d32Zsdls8vX1VY0aNYp14UMABXN1dXWcZRoAboe7vsxc5+Liwj59AAAsiD+XAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApZlaZiIjI2Wz2ZxuPj4+juWGYSgyMlJ+fn5yc3NT586ddeTIERMTAwCA0sb0mZnGjRvr7Nmzjtvhw4cdy2bNmqU5c+ZowY
IFSkhIkI+Pj7p166aMjAwTEwMAgNKkrOkBypZ1mo25zjAMzZs3T1OmTFH//v0lSTExMfL29taKFSv0zDPP5Pl8WVlZysrKctxPT0+/PcEBAECpYPrMzLFjx+Tn56fAwEA99thj+u677yRJycnJSklJUffu3R3r2u12derUSXv27Mn3+aKjo+Xl5eW4+fv73/bvAQAAmMfUMtOuXTu999572rx5sxYvXqyUlBS1b99e586dU0pKiiTJ29vb6THe3t6OZXmZPHmyzp8/77idOXPmtn4PAADAXKbuZgoLC3N83bRpU4WGhqpevXqKiYnRvffeK0my2WxOjzEMI9fY79ntdtnt9tsTGAAAlDqm72b6PQ8PDzVt2lTHjh1zHEdz4yxMampqrtkaAABw9ypVZSYrK0tHjx6Vr6+vAgMD5ePjo7i4OMfyy5cvKz4+Xu3btzcxJQAAKE1M3c00fvx49enTR7Vr11ZqaqqmT5+u9PR0DRs2TDabTeHh4YqKilJwcLCCg4MVFRUld3d3DRo0yMzYAACgFDG1zHz//ff6y1/+op9//lnVq1fXvffeq71796pOnTqSpIkTJ+rixYsaOXKk0tLS1K5dO8XGxsrT09PM2AAAoBQxtcysXLmywOU2m02RkZGKjIz8YwIBAADLKVXHzAAAABQVZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFgaZQYAAFhaqSkz0dHRstlsCg8Pd4wZhqHIyEj5+fnJzc1NnTt31pEjR8wLCQAASp1SUWYSEhK0aNEiNWvWzGl81qxZmjNnjhYsWKCEhAT5+PioW7duysjIMCkpAAAobUwvM7/99psGDx6sxYsXq3Llyo5xwzA0b948TZkyRf3791eTJk0UExOjzMxMrVixwsTEAACgNDG9zIwaNUoPPvigunbt6jSenJyslJQUde/e3TFmt9vVqVMn7dmzJ9/ny8rKUnp6utMNAADcucqaufGVK1fqwIEDSkhIyLUsJSVFkuTt7e007u3trVOnTuX7nNHR0Zo6dWrJBgUAAKWWaTMzZ86c0dixY/X++++rfPny+a5ns9mc7huGkWvs9yZPnqzz5887bmfOnCmxzAAAoPQxbWZm//79Sk1NVevWrR1j2dnZ2rFjhxYsWKCkpCRJ12ZofH19Heukpqbmmq35PbvdLrvdfvuCAwCAUsW0mZkHHnhAhw8fVmJiouMWEhKiwYMHKzExUXXr1pWPj4/i4uIcj7l8+bLi4+PVvn17s2IDAIBSxrSZGU9PTzVp0sRpzMPDQ1WrVnWMh4eHKyoqSsHBwQoODlZUVJTc3d01aNAgMyIDAIBSyNQDgG9m4sSJunjxokaOHKm0tDS1a9dOsbGx8vT0NDsaAAAoJUpVmdm+fbvTfZvNpsjISEVGRpqSBwAAlH6mn2cGAADgVlBmAACApVFmAACApVFmAACApVFmAACApVFmAACApVFmAACApRWrzPz0008aMmSI/Pz8VLZsWbm4uDjdAAAA/ijFOmne8OHDdfr0ab300kvy9fUt8CrWAAAAt1OxysyuXbu0c+dOtWjRooTjAAAAFE2xdjP5+/vLMIySzgIAAFBkxSoz8+bN06RJk3Ty5MkSjgMAAFA0xdrNNHDgQGVmZqpevXpyd3dXuXLlnJb/8ssvJRIOAADgZopVZubNm1
fCMQAAAIqnWGVm2LBhJZ0DAACgWIpVZiQpOztb69at09GjR2Wz2dSoUSM99NBDnGcGAAD8oYpVZo4fP65evXrphx9+UIMGDWQYhr799lv5+/vrk08+Ub169Uo6JwAAQJ6K9WmmMWPGqF69ejpz5owOHDiggwcP6vTp0woMDNSYMWNKOiMAAEC+ijUzEx8fr71796pKlSqOsapVq2rmzJnq0KFDiYUDAAC4mWLNzNjtdmVkZOQa/+233+Tq6nrLoQAAAAqrWGWmd+/e+utf/6ovvvhChmHIMAzt3btXzz77rB566KGSzggAAJCvYpWZ+fPnq169egoNDVX58uVVvnx5dejQQUFBQXrjjTdKOiMAAEC+inXMTKVKlbR+/XodO3ZM33zzjQzDUKNGjRQUFFTS+QAAAApU7PPMSFJwcLCCg4NLKgsAAECRFbrMRERE6NVXX5WHh4ciIiIKXHfOnDm3HAwAAKAwCl1mDh48qCtXrji+BgAAKA0KXWa2bduW59cAAABmKtanmUaMGJHneWYuXLigESNG3HIoAACAwipWmYmJidHFixdzjV+8eFHvvffeLYcCAAAorCJ9mik9Pd1xkryMjAyVL1/esSw7O1uffvqpatSoUeIhAQAA8lOkMlOpUiXZbDbZbDbVr18/13KbzaapU6eWWDgAAICbKVKZ2bZtmwzD0P3336/Vq1c7XWjS1dVVderUkZ+fX4mHBAAAyE+RykynTp0kScnJyapdu7ZsNtttCQUAAFBYxToD8KlTp3Tq1Kl8l//pT38qdiAAAICiKFaZ6dy5c66x38/SZGdnFzsQAABAURTro9lpaWlOt9TUVG3atElt2rRRbGxsSWcEAADIV7FmZry8vHKNdevWTXa7XePGjdP+/ftvORgAAEBhFGtmJj/Vq1dXUlJSST4lAABAgYo1M3Po0CGn+4Zh6OzZs5o5c6aaN29eIsEAAAAKo1hlpkWLFrLZbDIMw2n83nvv1bvvvlsiwQAAAAqjWGUmOTnZ6X6ZMmVUvXp1p8sbAAAA/BGKVWbq1KlT0jkAAACKpdgHAG/dulW9e/dWvXr1FBQUpN69e2vLli0lmQ0AAOCmilVmFixYoJ49e8rT01Njx47VmDFjVLFiRfXq1UsLFiwo6YwAAAD5KtZupujoaM2dO1ejR492jI0ZM0YdOnTQjBkznMYBAABup2LNzKSnp6tnz565xrt376709PRbDgUAAFBYxSozDz30kNauXZtrfP369erTp88thwIAACisQu9mmj9/vuPrhg0basaMGdq+fbtCQ0MlSXv37tXu3bv1t7/9reRTAgAA5MNm3Hjmu3wEBgYW7gltNn333XeFWnfhwoVauHChTp48KUlq3LixXn75ZYWFhUm6dmbhqVOnatGiRUpLS1O7du30z3/+U40bNy7U80vXdol5eXnp/PnzqlixYqEfBwC4O+VkZiqpVWtJUr/eM5RV1m5alpMzHzRt22Yryvt3oWdmbjxRXkmoVauWZs6cqaCgIElSTEyM+vbtq4MHD6px48aaNWuW5syZo2XLlql+/fqaPn26unXrpqSkJHl6epZ4HgAAYD0leqHJourTp4969eql+vXrq379+poxY4YqVKigvXv3yjAMzZs3T1OmTFH//v3VpEkTxcTEKDMzUytWrDAzNgAAKEUKPTMTERGhV199VR4eHoqIiChw3Tlz5hQ5SHZ2tv71r3/pwoULCg0NVXJyslJSUtS9e3fHOna7XZ06ddKePXv0zDPP5Pk8WVlZysrKctzn01UAANzZCl1mDh48qCtXrkiSDhw4IJvNlud6+Y3n5/DhwwoNDdWlS5dUoUIFrV27Vo0aNdKePXskSd7e3k7re3t769SpU/k+X3R0tKZOnVqkDAAAwLoKXWa2bdvm+Hr79u0lFqBBgwZKTEzUr7/+qtWrV2vYsGGKj493LL+xHBmGUWBhmjx5stPMUXp6uvz9/UssLwAAKF2KfMzM1atXVbZsWf3f//
1fiQRwdXVVUFCQQkJCFB0drebNm+uNN96Qj4+PJCklJcVp/dTU1FyzNb9nt9tVsWJFpxsAALhzFbnMlC1bVnXq1FF2dvbtyCPDMJSVlaXAwED5+PgoLi7Osezy5cuKj49X+/btb8u2AQCA9RTr2kwvvviiJk+erPfff19VqlQp9sZfeOEFhYWFyd/fXxkZGVq5cqW2b9+uTZs2yWazKTw8XFFRUQoODlZwcLCioqLk7u6uQYMGFXubAADgzlKsMjN//nwdP35cfn5+qlOnjjw8PJyWHzhwoFDP89NPP2nIkCE6e/asvLy81KxZM23atEndunWTJE2cOFEXL17UyJEjHSfNi42N5RwzAADAoVhlpm/fvkX+1FJelixZUuBym82myMhIRUZG3vK2AADAnalYZYZyAQAASotinQG4bt26OnfuXK7xX3/9VXXr1r3lUAAAAIVVrDJz8uTJPD/NlJWVpe+///6WQwEAABRWkXYzbdiwwfH15s2b5eXl5bifnZ2trVu3Fvrq2gAAACWhSGWmX79+kq4dmDts2DCnZeXKlVNAQIBmz55dYuEAAABupkhlJicnR5IUGBiohIQEVatW7baEAgAAKKxifZopOTk519ivv/6qSpUq3WoeAACAIinWAcCvvfaaVq1a5bg/YMAAValSRTVr1tRXX31VYuEAAABuplhl5u2333ZciTouLk5btmzRpk2bFBYWpgkTJpRoQAAAgIIUazfT2bNnHWVm48aNevTRR9W9e3cFBASoXbt2JRoQAACgIMWamalcubLOnDkjSdq0aZO6du0q6doVr2/X1bQBAADyUqyZmf79+2vQoEEKDg7WuXPnFBYWJklKTExUUFBQiQYEAAAoSLHKzNy5cxUQEKAzZ85o1qxZqlChgqRru59GjhxZogEBAAAKUqwyU65cOY0fPz7XeHh4+K3mAQAAKJJCl5kNGzYoLCxM5cqVc7qsQV4eeuihWw4GAABQGIUuM/369VNKSopq1KjhuKxBXmw2GwcBAwCAP0yhy8z1Sxnc+DUAAICZinzMTE5OjpYtW6Y1a9bo5MmTstlsqlu3rh555BENGTJENpvtduQEAADIU5HOM2MYhh566CE99dRT+uGHH9S0aVM1btxYJ0+e1PDhw/Xwww/frpwAAAB5KtLMzLJly7Rjxw5t3bpVXbp0cVr2+eefq1+/fnrvvfc0dOjQEg0JAACQnyLNzHz44Yd64YUXchUZSbr//vs1adIkffDBByUWDgAA4GaKVGYOHTqknj175rs8LCyMq2YDAIA/VJHKzC+//CJvb+98l3t7eystLe2WQwEAABRWkcpMdna2ypbN/zAbFxcXXb169ZZDAQAAFFaRDgA2DEPDhw+X3W7Pc3lWVlaJhAIAACisIpWZYcOG3XQdPskEAAD+SEUqM0uXLr1dOQAAAIqlSMfMAAAAlDaUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmUGQAAYGmmlpno6Gi1adNGnp6eqlGjhvr166ekpCSndQzDUGRkpPz8/OTm5qbOnTvryJEjJiUGAACljallJj4+XqNGjdLevXsVFxenq1evqnv37rpw4YJjnVmzZmnOnDlasGCBEhIS5OPjo27duikjI8PE5AAAoLQoa+bGN23a5HR/6dKlqlGjhvbv368//elPMgxD8+bN05QpU9S/f39JUkxMjLy9vbVixQo988wzuZ4zKytLWVlZjvvp6em395sAAACmKlXHzJw/f16SVKVKFUlScnKyUlJS1L17d8c6drtdnTp10p49e/J8jujoaHl5eTlu/v7+tz84AAAwTakpM4ZhKCIiQh07dlSTJk0kSSkpKZIkb29vp3W9vb0dy240efJknT9/3nE7c+bM7Q
0OAABMZepupt8bPXq0Dh06pF27duVaZrPZnO4bhpFr7Dq73S673X5bMgIAgNKnVMzMPP/889qwYYO2bdumWrVqOcZ9fHwkKdcsTGpqaq7ZGgAAcHcytcwYhqHRo0drzZo1+vzzzxUYGOi0PDAwUD4+PoqLi3OMXb58WfHx8Wrfvv0fHRcAAJRCpu5mGjVqlFasWKH169fL09PTMQPj5eUlNzc32Ww2hYeHKyoqSsHBwQoODlZUVJTc3d01aNAgM6MDAIBSwtQys3DhQklS586dncaXLl2q4cOHS5ImTpyoixcvauTIkUpLS1O7du0UGxsrT0/PPzgtAAAojUwtM4Zh3HQdm82myMhIRUZG3v5AAADAckrFAcAAAADFRZkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWRpkBAACWVtbsAACAu0vApE/MjqCTMx80OwJKEDMzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0igzAADA0kwtMzt27FCfPn3k5+cnm82mdevWOS03DEORkZHy8/OTm5ubOnfurCNHjpgTFgAAlEqmlpkLFy6oefPmWrBgQZ7LZ82apTlz5mjBggVKSEiQj4+PunXrpoyMjD84KQAAKK3KmrnxsLAwhYWF5bnMMAzNmzdPU6ZMUf/+/SVJMTEx8vb21ooVK/TMM8/8kVEBAEApVWqPmUlOTlZKSoq6d+/uGLPb7erUqZP27NmT7+OysrKUnp7udAMAAHeuUltmUlJSJEne3t5O497e3o5leYmOjpaXl5fj5u/vf1tzAgAAc5XaMnOdzWZzum8YRq6x35s8ebLOnz/vuJ05c+Z2RwQAACYy9ZiZgvj4+Ei6NkPj6+vrGE9NTc01W/N7drtddrv9tucDAAClQ6mdmQkMDJSPj4/i4uIcY5cvX1Z8fLzat29vYjIAAFCamDoz89tvv+n48eOO+8nJyUpMTFSVKlVUu3ZthYeHKyoqSsHBwQoODlZUVJTc3d01aNAgE1MDAIDSxNQys2/fPnXp0sVxPyIiQpI0bNgwLVu2TBMnTtTFixc1cuRIpaWlqV27doqNjZWnp6dZkQEAQCljapnp3LmzDMPId7nNZlNkZKQiIyP/uFAAAMBSSu0xMwAAAIVBmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZGmQEAAJZW1uwAAICSETDpE7Mj6OTMB82OgLsQMzMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSKDMAAMDSypodALguYNInZkfQyZkPmh0BAFBEzMwAAABLs0SZefPNNxUYGKjy5curdevW2rlzp9mRAABAKVHqy8yqVasUHh6uKVOm6ODBg7rvvvsUFham06dPmx0NAACUAqW+zMyZM0dPPvmknnrqKTVs2FDz5s2Tv7+/Fi5caHY0AABQCpTqA4AvX76s/fv3a9KkSU7j3bt31549e/J8TFZWlrKysh
z3z58/L0lKT0+/fUFRInKyMs2OwP8TWJpVfoZKe86czEz9lp0tScrOylTO///aDHfz76Tr37thGDddt1SXmZ9//lnZ2dny9vZ2Gvf29lZKSkqej4mOjtbUqVNzjfv7+9+WjLizeM0zOwFgbVb5GSp0zgVDbmeMm7LK63k7ZWRkyMvLq8B1SnWZuc5mszndNwwj19h1kydPVkREhON+Tk6OfvnlF1WtWjXfx5glPT1d/v7+OnPmjCpWrGh2nHyRs2RZIacVMkrkLGlWyGmFjBI5S4JhGMrIyJCfn99N1y3VZaZatWpycXHJNQuTmpqaa7bmOrvdLrvd7jRWqVKl2xWxRFSsWLHU/SfKCzlLlhVyWiGjRM6SZoWcVsgokfNW3WxG5rpSfQCwq6urWrdurbi4OKfxuLg4tW/f3qRUAACgNCnVMzOSFBERoSFDhigkJEShoaFatGiRTp8+rWeffdbsaAAAoBQo9WVm4MCBOnfunKZNm6azZ8+qSZMm+vTTT1WnTh2zo90yu92uV155JddusdKGnCXLCjmtkFEiZ0mzQk4rZJTI+UezGYX5zBMAAEApVaqPmQEAALgZygwAALA0ygwAALA0ygwAALA0yoyJ3nzzTQUGBqp8+fJq3bq1du7caXYkJzt27FCfPn3k5+cnm82mdevWmR0pl+joaLVp00aenp6qUaOG+vXrp6SkJLNj5bJw4UI1a9bMcWKq0NBQffbZZ2bHuqno6GjZbDaFh4ebHcVJZGSkbDab083Hx8fsWLn88MMPevzxx1W1alW5u7urRYsW2r9/v9mxnAQEBOR6LW02m0aNGmV2NCdXr17Viy++qMDAQLm5ualu3bqaNm2acnJyzI7mJCMjQ+Hh4apTp47c3NzUvn17JSQkmJrpZr/LDcNQZGSk/Pz85Obmps6dO+vIkSPmhC0myoxJVq1apfDwcE2ZMkUHDx7Ufffdp7CwMJ0+fdrsaA4XLlxQ8+bNtWDBArOj5Cs+Pl6jRo3S3r17FRcXp6tXr6p79+66cOGC2dGc1KpVSzNnztS+ffu0b98+3X///erbt2+p/oWRkJCgRYsWqVmzZmZHyVPjxo119uxZx+3w4cNmR3KSlpamDh06qFy5cvrss8/09ddfa/bs2aXujOQJCQlOr+P1k5QOGDDA5GTOXnvtNb311ltasGCBjh49qlmzZunvf/+7/vGPf5gdzclTTz2luLg4LV++XIcPH1b37t3VtWtX/fDDD6Zlutnv8lmzZmnOnDlasGCBEhIS5OPjo27duikjI+MPTnoLDJiibdu2xrPPPus0ds899xiTJk0yKVHBJBlr1641O8ZNpaamGpKM+Ph4s6PcVOXKlY133nnH7Bh5ysjIMIKDg424uDijU6dOxtixY82O5OSVV14xmjdvbnaMAv3P//yP0bFjR7NjFNnYsWONevXqGTk5OWZHcfLggw8aI0aMcBrr37+/8fjjj5uUKLfMzEzDxcXF2Lhxo9N48+bNjSlTppiUytmNv8tzcnIMHx8fY+bMmY6xS5cuGV5eXsZbb71lQsLiYWbGBJcvX9b+/fvVvXt3p/Hu3btrz549JqW6M5w/f16SVKVKFZOT5C87O1srV67UhQsXFBoaanacPI0aNUoPPvigunbtanaUfB07dkx+fn4KDAzUY489pu+++87sSE42bNigkJAQDRgwQDVq1FDLli21ePFis2MV6PLly3r//fc1YsSIUndh3o4dO2rr1q369ttvJUlfffWVdu3apV69epmc7L+uXr2q7OxslS9f3mnczc1Nu3btMilVwZKTk5WSkuL0fmS329WpUydLvR+V+jMA34l+/vlnZWdn57pYpre3d66LaqLwDMNQRESEOnbsqCZNmpgdJ5fDhw8rNDRUly5dUoUKFbR27Vo1atTI7Fi5rFy5UgcOHDB9P39B2rVrp/fee0/169fXTz/9pOnTp6t9+/Y6cuSIqlatanY8SdJ3332nhQsXKiIiQi+88IK+/P
JLjRkzRna7XUOHDjU7Xp7WrVunX3/9VcOHDzc7Si7/8z//o/Pnz+uee+6Ri4uLsrOzNWPGDP3lL38xO5qDp6enQkND9eqrr6phw4by9vbWhx9+qC+++ELBwcFmx8vT9fecvN6PTp06ZUakYqHMmOjGv3wMwyh1fw1ZyejRo3Xo0KFS+xdQgwYNlJiYqF9//VWrV6/WsGHDFB8fX6oKzZkzZzR27FjFxsbm+uuyNAkLC3N83bRpU4WGhqpevXqKiYlRRESEicn+KycnRyEhIYqKipIktWzZUkeOHNHChQtLbZlZsmSJwsLC5OfnZ3aUXFatWqX3339fK1asUOPGjZWYmKjw8HD5+flp2LBhZsdzWL58uUaMGKGaNWvKxcVFrVq10qBBg3TgwAGzoxXI6u9HlBkTVKtWTS4uLrlmYVJTU3O1YxTO888/rw0bNmjHjh2qVauW2XHy5OrqqqCgIElSSEiIEhIS9MYbb+jtt982Odl/7d+/X6mpqWrdurVjLDs7Wzt27NCCBQuUlZUlFxcXExPmzcPDQ02bNtWxY8fMjuLg6+ubq6g2bNhQq1evNilRwU6dOqUtW7ZozZo1ZkfJ04QJEzRp0iQ99thjkq6V2FOnTik6OrpUlZl69eopPj5eFy5cUHp6unx9fTVw4EAFBgaaHS1P1z8FmJKSIl9fX8e41d6POGbGBK6urmrdurXjUwPXxcXFqX379ialsibDMDR69GitWbNGn3/+ean9hZEXwzCUlZVldgwnDzzwgA4fPqzExETHLSQkRIMHD1ZiYmKpLDKSlJWVpaNHjzr9MjZbhw4dcp0m4Ntvvy21F8ldunSpatSooQcffNDsKHnKzMxUmTLOb1kuLi6l7qPZ13l4eMjX11dpaWnavHmz+vbta3akPAUGBsrHx8fp/ejy5cuKj4+31PsRMzMmiYiI0JAhQxQSEqLQ0FAtWrRIp0+f1rPPPmt2NIfffvtNx48fd9xPTk5WYmKiqlSpotq1a5uY7L9GjRqlFStWaP369fL09HTMdnl5ecnNzc3kdP/1wgsvKCwsTP7+/srIyNDKlSu1fft2bdq0yexoTjw9PXMdb+Th4aGqVauWquOQxo8frz59+qh27dpKTU3V9OnTlZ6eXqr+Qh83bpzat2+vqKgoPfroo/ryyy+1aNEiLVq0yOxoueTk5Gjp0qUaNmyYypYtnW8Lffr00YwZM1S7dm01btxYBw8e1Jw5czRixAizoznZvHmzDMNQgwYNdPz4cU2YMEENGjTQE088YVqmm/0uDw8PV1RUlIKDgxUcHKyoqCi5u7tr0KBBpmUuMjM/SnW3++c//2nUqVPHcHV1NVq1alXqPk68bds2Q1Ku27Bhw8yO5pBXPknG0qVLzY7mZMSIEY5/6+rVqxsPPPCAERsba3asQimNH80eOHCg4evra5QrV87w8/Mz+vfvbxw5csTsWLl8/PHHRpMmTQy73W7cc889xqJFi8yOlKfNmzcbkoykpCSzo+QrPT3dGDt2rFG7dm2jfPnyRt26dY0pU6YYWVlZZkdzsmrVKqNu3bqGq6ur4ePjY4waNcr49ddfTc10s9/lOTk5xiuvvGL4+PgYdrvd+NOf/mQcPnzY1MxFZTMMw/jDGxQAAEAJ4ZgZAABgaZQZAABgaZQZAABgaZQZAABgaZQZAABgaZQZAABgaZQZAABgaZQZAABgaZQZAH+4kydPymazKTEx0ewoAO4AlBkAeRo+fLhsNptsNpvKli2r2rVr67nnnlNaWlqRn6dfv35OY/7+/jp79myput4TAOuizADIV8+ePXX27FmdPHlS77zzjj7++GONHDnylp/XxcVFPj4+pfaihsV15coVsyMAdyXKDIB82e12+fj4qFatWurevbsGDhyo2NhYx/Ls7Gw9+eSTCgwMlJubmxo0aKA33njDsTwyMlIxMTFav369Y5Zn+/btuXYzbd++XTabTVu3blVISIjc3d3Vvn17JSUlOeWZPn26at
SoIU9PTz311FOaNGmSWrRokW/+tLQ0DR48WNWrV5ebm5uCg4O1dOlSx/Lvv/9ejz32mKpUqSIPDw+FhIToiy++cCxfuHCh6tWrJ1dXVzVo0EDLly93en6bzaa33npLffv2lYeHh6ZPny5J+vjjj9W6dWuVL19edevW1dSpU3X16tUiv/4ACufO+rMIwG3z3XffadOmTSpXrpxjLCcnR7Vq1dJHH32katWqac+ePfrrX/8qX19fPfrooxo/fryOHj2q9PR0R4moUqWKfvzxxzy3MWXKFM2ePVvVq1fXs88+qxEjRmj37t2SpA8++EAzZszQm2++qQ4dOmjlypWaPXu2AgMD88380ksv6euvv9Znn32matWq6fjx47p48aIk6bffflOnTp1Us2ZNbdiwQT4+Pjpw4IBycnIkSWvXrtXYsWM1b948de3aVRs3btQTTzyhWrVqqUuXLo5tvPLKK4qOjtbcuXPl4uKizZs36/HHH9f8+fN133336cSJE/rrX//qWBfAbWD2ZbsBlE7Dhg0zXFxcDA8PD6N8+fKGJEOSMWfOnAIfN3LkSOORRx5xep6+ffs6rZOcnGxIMg4ePGgYhmFs27bNkGRs2bLFsc4nn3xiSDIuXrxoGIZhtGvXzhg1apTT83To0MFo3rx5vln69OljPPHEE3kue/vttw1PT0/j3LlzeS5v37698fTTTzuNDRgwwOjVq5fjviQjPDzcaZ377rvPiIqKchpbvny54evrm29OALeG3UwA8tWlSxclJibqiy++0PPPP68ePXro+eefd1rnrbfeUkhIiKpXr64KFSpo8eLFOn36dLG216xZM8fXvr6+kqTU1FRJUlJSktq2beu0/o33b/Tcc89p5cqVatGihSZOnKg9e/Y4liUmJqply5aqUqVKno89evSoOnTo4DTWoUMHHT161GksJCTE6f7+/fs1bdo0VahQwXF7+umndfbsWWVmZhaYF0DxUGYA5MvDw0NBQUFq1qyZ5s+fr6ysLE2dOtWx/KOPPtK4ceM0YsQIxcbGKjExUU888YQuX75crO39fheWzWaTJMdun9+PXWcYRoHPFxYWplOnTik8PFw//vijHnjgAY0fP16S5ObmdtM8eW3vxjEPDw+n+zk5OZo6daoSExMdt8OHD+vYsWMqX778TbcJoOgoMwAK7ZVXXtHrr7/uOOZl586dat++vUaOHKmWLVsqKChIJ06ccHqMq6ursrOzb3nbDRo00Jdffuk0tm/fvps+rnr16ho+fLjef/99zZs3T4sWLZJ0bRYoMTFRv/zyS56Pa9iwoXbt2uU0tmfPHjVs2LDA7bVq1UpJSUkKCgrKdStThl+5wO3ATxaAQuvcubMaN26sqKgoSVJQUJD27dunzZs369tvv9VLL72khIQEp8cEBATo0KFDSkpK0s8//1zsjy8///zzWrJkiWJiYnTs2DFNnz5dhw4dyjVT8nsvv/yy1q9fr+PHj+vIkSPauHGjo4z85S9/kY+Pj/r166fdu3fru+++0+rVq/Xvf/9bkjRhwgQtW7ZMb731lo4dO6Y5c+ZozZo1jpmdgrb53nvvKTIyUkeOHNHRo0e1atUqvfjii8X6vgHcHGUGQJFERERo8eLFOnPmjJ599ln1799fAwcOVLt27XTu3Llc56F5+umn1aBBA8dxNdc/nVRUgwcP1uTJkzV+/Hi1atVKycnJGj58eIG7blxdXTV58mQ1a9ZMf/rTn+Ti4qKVK1c6lsXGxqpGjRrq1auXmjZtqpkzZ8rFxUWS1K9fP73xxhv6+9//rsaNG+vtt9/W0qVL1blz5wJz9ujRQxs3blRcXJzatGmje++9V3PmzFGdOnWK9X0DuDmbcbOdzgBQSnXr1k0+Pj65zv8C4O7CeWYAWEJmZqbeeust9ejRQy4uLvrwww+1ZcsWxcXFmR0NgMmYmQFgCRcvXlSfPn104MABZWVlqUGDBnrxxRfVv39/s6MBMBllBgAAWBoHAAMAAE
ujzAAAAEujzAAAAEujzAAAAEujzAAAAEujzAAAAEujzAAAAEujzAAAAEv7f0Pjb3cRu7FsAAAAAElFTkSuQmCC", 420 | "text/plain": [ 421 | "
" 422 | ] 423 | }, 424 | "metadata": {}, 425 | "output_type": "display_data" 426 | } 427 | ], 428 | "source": [ 429 | "# Diplay rating scores in bar chart\n", 430 | "import matplotlib.pyplot as plt\n", 431 | "from operator import countOf\n", 432 | "\n", 433 | "rating_scores = []\n", 434 | "for question in questions:\n", 435 | " if 'rating_score' in question: \n", 436 | " rating_scores.append(question['rating_score'])\n", 437 | "\n", 438 | "bar_labels = []\n", 439 | "rating_scores_count = []\n", 440 | "\n", 441 | "for x in range(11):\n", 442 | " bar_labels.append(str(x))\n", 443 | " rating_scores_count.append(countOf(rating_scores,str(x)))\n", 444 | "\n", 445 | "fig, ax = plt.subplots()\n", 446 | "ax.bar(bar_labels, rating_scores_count)\n", 447 | "ax.set_ylabel('Distribution')\n", 448 | "ax.set_xlabel('Rating score')\n", 449 | "plt.axvline(x=average_rating, color='tab:red')\n", 450 | "plt.legend(['Average rating score','Distribution rating score'])\n", 451 | "plt.show()\n" 452 | ] 453 | }, 454 | { 455 | "cell_type": "markdown", 456 | "id": "2e4c565e-ca5e-4a7a-9587-7d036228c609", 457 | "metadata": {}, 458 | "source": [ 459 | "## 5. Generate explanation for average rating score" 460 | ] 461 | }, 462 | { 463 | "cell_type": "markdown", 464 | "id": "019df54a-1c4d-4096-9508-f4ee356feb08", 465 | "metadata": {}, 466 | "source": [ 467 | "To explain the average rating score, each rating explanation can be used to create a summary and identiy areas for improvements to further optimize the application." 
468 | ] 469 | }, 470 | { 471 | "cell_type": "code", 472 | "execution_count": 14, 473 | "id": "bfa61bdc-e5dc-4ccf-833f-2c07414d8b9e", 474 | "metadata": {}, 475 | "outputs": [], 476 | "source": [ 477 | "# Define prompt to summarize all ratings to explain the given average rating\n", 478 | "summary_prompt_template = \"\"\"HUMAN:\n", 479 | "Please act as an impartial summarizer and condense the following explanations from an LLM-as-a-judge into one single statement.\n", 480 | "Explain the main areas for improvement. Also, write a concise summary of the following explanations from an LLM-as-a-judge to explain the average rating it gave.\n", 481 | "\n", 482 | "{average_rating}\n", 483 | "{explanations}\n", 484 | "ANSWER:\"\"\"\n", 485 | "summary_prompt = PromptTemplate.from_template(summary_prompt_template)" 486 | ] 487 | }, 488 | { 489 | "cell_type": "code", 490 | "execution_count": 15, 491 | "id": "fa67becf-81e3-4ae3-aab5-b5dcf5094863", 492 | "metadata": {}, 493 | "outputs": [], 494 | "source": [ 495 | "explanation_avg_rating = eval_llm.invoke(summary_prompt.format(average_rating=average_rating, explanations=explanation_rating)).content" 496 | ] 497 | }, 498 | { 499 | "cell_type": "code", 500 | "execution_count": 16, 501 | "id": "cf44bb82-9c23-49d2-b2fe-c4041d030c6a", 502 | "metadata": {}, 503 | "outputs": [ 504 | { 505 | "name": "stdout", 506 | "output_type": "stream", 507 | "text": [ 508 | "Based on the explanations provided, the main areas for improvement seem to be:\n", 509 | "\n", 510 | "1. Providing more concise summaries or conclusions at times to reinforce the key points.\n", 511 | "2. Expanding on certain aspects with additional details, examples, or context where relevant.\n", 512 | "3. 
Analyzing complexities like time/space complexity, alternative approaches, or potential limitations in more depth for some technical responses.\n", 513 | "\n", 514 | "As for the average rating of 8.6125 given by the LLM judge, the explanations suggest that the assistant's responses were generally of high quality, demonstrating strong understanding, accuracy, relevance, and helpfulness in addressing the given tasks or questions. The judge commended the assistant's creativity, level of detail, clear explanations, and ability to provide well-reasoned and insightful solutions across a diverse range of topics and scenarios. However, there were occasional opportunities for improvement in areas like conciseness, depth of analysis, and considering additional nuances or perspectives, which likely prevented some responses from achieving a perfect 10/10 rating.\n" 515 | ] 516 | } 517 | ], 518 | "source": [ 519 | "print(explanation_avg_rating)" 520 | ] 521 | }, 522 | { 523 | "cell_type": "markdown", 524 | "id": "cea3c480-9d28-4d75-9248-abcae16c45c2", 525 | "metadata": {}, 526 | "source": [ 527 | "## Conclusion" 528 | ] 529 | }, 530 | { 531 | "cell_type": "markdown", 532 | "id": "a3f5d0d6-4a5a-49fe-8ff6-8d5f6ddc3fea", 533 | "metadata": {}, 534 | "source": [ 535 | "The lab demonstrates a practical approach for evaluating large language models (LLMs) using the LLM-as-a-Judge technique with Amazon Bedrock. This method addresses the challenges in evaluating LLMs due to their broad capabilities and the limitations of existing benchmarks in measuring human preferences.\n", 536 | "\n", 537 | "By leveraging strong LLM judges, such as Claude 3 Sonnet in this notebook, the lab showcases how to assess the performance of LLMs like Claude Instant on the Multi Turn (MT)-Bench, a benchmark designed to measure alignment with human preferences. 
\n", 538 | "\n", 539 | "This approach makes LLM-as-a-judge a scalable and explainable way to approximate human preferences, which are otherwise very costly to obtain. The notebook provides a step-by-step guide on setting up the environment, loading the MT-Bench questions, generating test answers from the LLM under evaluation, leveraging the Bedrock API to assess the answers using the LLM judge, and visualize the results.\n", 540 | "\n", 541 | "The successful demonstration of this LLM-as-a-Judge methodology with Amazon Bedrock demonstrates the potential for developers and researchers working on LLM-based applications to adopt this innovative evaluation technique. By understanding the alignment of their models with human preferences, they can make more informed decisions and continue to improve the capabilities of these powerful language models." 542 | ] 543 | } 544 | ], 545 | "metadata": { 546 | "availableInstances": [ 547 | { 548 | "_defaultOrder": 0, 549 | "_isFastLaunch": true, 550 | "category": "General purpose", 551 | "gpuNum": 0, 552 | "hideHardwareSpecs": false, 553 | "memoryGiB": 4, 554 | "name": "ml.t3.medium", 555 | "vcpuNum": 2 556 | }, 557 | { 558 | "_defaultOrder": 1, 559 | "_isFastLaunch": false, 560 | "category": "General purpose", 561 | "gpuNum": 0, 562 | "hideHardwareSpecs": false, 563 | "memoryGiB": 8, 564 | "name": "ml.t3.large", 565 | "vcpuNum": 2 566 | }, 567 | { 568 | "_defaultOrder": 2, 569 | "_isFastLaunch": false, 570 | "category": "General purpose", 571 | "gpuNum": 0, 572 | "hideHardwareSpecs": false, 573 | "memoryGiB": 16, 574 | "name": "ml.t3.xlarge", 575 | "vcpuNum": 4 576 | }, 577 | { 578 | "_defaultOrder": 3, 579 | "_isFastLaunch": false, 580 | "category": "General purpose", 581 | "gpuNum": 0, 582 | "hideHardwareSpecs": false, 583 | "memoryGiB": 32, 584 | "name": "ml.t3.2xlarge", 585 | "vcpuNum": 8 586 | }, 587 | { 588 | "_defaultOrder": 4, 589 | "_isFastLaunch": true, 590 | "category": "General purpose", 591 | "gpuNum": 0, 592 | 
"hideHardwareSpecs": false, 593 | "memoryGiB": 8, 594 | "name": "ml.m5.large", 595 | "vcpuNum": 2 596 | }, 597 | { 598 | "_defaultOrder": 5, 599 | "_isFastLaunch": false, 600 | "category": "General purpose", 601 | "gpuNum": 0, 602 | "hideHardwareSpecs": false, 603 | "memoryGiB": 16, 604 | "name": "ml.m5.xlarge", 605 | "vcpuNum": 4 606 | }, 607 | { 608 | "_defaultOrder": 6, 609 | "_isFastLaunch": false, 610 | "category": "General purpose", 611 | "gpuNum": 0, 612 | "hideHardwareSpecs": false, 613 | "memoryGiB": 32, 614 | "name": "ml.m5.2xlarge", 615 | "vcpuNum": 8 616 | }, 617 | { 618 | "_defaultOrder": 7, 619 | "_isFastLaunch": false, 620 | "category": "General purpose", 621 | "gpuNum": 0, 622 | "hideHardwareSpecs": false, 623 | "memoryGiB": 64, 624 | "name": "ml.m5.4xlarge", 625 | "vcpuNum": 16 626 | }, 627 | { 628 | "_defaultOrder": 8, 629 | "_isFastLaunch": false, 630 | "category": "General purpose", 631 | "gpuNum": 0, 632 | "hideHardwareSpecs": false, 633 | "memoryGiB": 128, 634 | "name": "ml.m5.8xlarge", 635 | "vcpuNum": 32 636 | }, 637 | { 638 | "_defaultOrder": 9, 639 | "_isFastLaunch": false, 640 | "category": "General purpose", 641 | "gpuNum": 0, 642 | "hideHardwareSpecs": false, 643 | "memoryGiB": 192, 644 | "name": "ml.m5.12xlarge", 645 | "vcpuNum": 48 646 | }, 647 | { 648 | "_defaultOrder": 10, 649 | "_isFastLaunch": false, 650 | "category": "General purpose", 651 | "gpuNum": 0, 652 | "hideHardwareSpecs": false, 653 | "memoryGiB": 256, 654 | "name": "ml.m5.16xlarge", 655 | "vcpuNum": 64 656 | }, 657 | { 658 | "_defaultOrder": 11, 659 | "_isFastLaunch": false, 660 | "category": "General purpose", 661 | "gpuNum": 0, 662 | "hideHardwareSpecs": false, 663 | "memoryGiB": 384, 664 | "name": "ml.m5.24xlarge", 665 | "vcpuNum": 96 666 | }, 667 | { 668 | "_defaultOrder": 12, 669 | "_isFastLaunch": false, 670 | "category": "General purpose", 671 | "gpuNum": 0, 672 | "hideHardwareSpecs": false, 673 | "memoryGiB": 8, 674 | "name": "ml.m5d.large", 675 | "vcpuNum": 2 
676 | }, 677 | { 678 | "_defaultOrder": 13, 679 | "_isFastLaunch": false, 680 | "category": "General purpose", 681 | "gpuNum": 0, 682 | "hideHardwareSpecs": false, 683 | "memoryGiB": 16, 684 | "name": "ml.m5d.xlarge", 685 | "vcpuNum": 4 686 | }, 687 | { 688 | "_defaultOrder": 14, 689 | "_isFastLaunch": false, 690 | "category": "General purpose", 691 | "gpuNum": 0, 692 | "hideHardwareSpecs": false, 693 | "memoryGiB": 32, 694 | "name": "ml.m5d.2xlarge", 695 | "vcpuNum": 8 696 | }, 697 | { 698 | "_defaultOrder": 15, 699 | "_isFastLaunch": false, 700 | "category": "General purpose", 701 | "gpuNum": 0, 702 | "hideHardwareSpecs": false, 703 | "memoryGiB": 64, 704 | "name": "ml.m5d.4xlarge", 705 | "vcpuNum": 16 706 | }, 707 | { 708 | "_defaultOrder": 16, 709 | "_isFastLaunch": false, 710 | "category": "General purpose", 711 | "gpuNum": 0, 712 | "hideHardwareSpecs": false, 713 | "memoryGiB": 128, 714 | "name": "ml.m5d.8xlarge", 715 | "vcpuNum": 32 716 | }, 717 | { 718 | "_defaultOrder": 17, 719 | "_isFastLaunch": false, 720 | "category": "General purpose", 721 | "gpuNum": 0, 722 | "hideHardwareSpecs": false, 723 | "memoryGiB": 192, 724 | "name": "ml.m5d.12xlarge", 725 | "vcpuNum": 48 726 | }, 727 | { 728 | "_defaultOrder": 18, 729 | "_isFastLaunch": false, 730 | "category": "General purpose", 731 | "gpuNum": 0, 732 | "hideHardwareSpecs": false, 733 | "memoryGiB": 256, 734 | "name": "ml.m5d.16xlarge", 735 | "vcpuNum": 64 736 | }, 737 | { 738 | "_defaultOrder": 19, 739 | "_isFastLaunch": false, 740 | "category": "General purpose", 741 | "gpuNum": 0, 742 | "hideHardwareSpecs": false, 743 | "memoryGiB": 384, 744 | "name": "ml.m5d.24xlarge", 745 | "vcpuNum": 96 746 | }, 747 | { 748 | "_defaultOrder": 20, 749 | "_isFastLaunch": false, 750 | "category": "General purpose", 751 | "gpuNum": 0, 752 | "hideHardwareSpecs": true, 753 | "memoryGiB": 0, 754 | "name": "ml.geospatial.interactive", 755 | "supportedImageNames": [ 756 | "sagemaker-geospatial-v1-0" 757 | ], 758 | "vcpuNum": 0 
759 | }, 760 | { 761 | "_defaultOrder": 21, 762 | "_isFastLaunch": true, 763 | "category": "Compute optimized", 764 | "gpuNum": 0, 765 | "hideHardwareSpecs": false, 766 | "memoryGiB": 4, 767 | "name": "ml.c5.large", 768 | "vcpuNum": 2 769 | }, 770 | { 771 | "_defaultOrder": 22, 772 | "_isFastLaunch": false, 773 | "category": "Compute optimized", 774 | "gpuNum": 0, 775 | "hideHardwareSpecs": false, 776 | "memoryGiB": 8, 777 | "name": "ml.c5.xlarge", 778 | "vcpuNum": 4 779 | }, 780 | { 781 | "_defaultOrder": 23, 782 | "_isFastLaunch": false, 783 | "category": "Compute optimized", 784 | "gpuNum": 0, 785 | "hideHardwareSpecs": false, 786 | "memoryGiB": 16, 787 | "name": "ml.c5.2xlarge", 788 | "vcpuNum": 8 789 | }, 790 | { 791 | "_defaultOrder": 24, 792 | "_isFastLaunch": false, 793 | "category": "Compute optimized", 794 | "gpuNum": 0, 795 | "hideHardwareSpecs": false, 796 | "memoryGiB": 32, 797 | "name": "ml.c5.4xlarge", 798 | "vcpuNum": 16 799 | }, 800 | { 801 | "_defaultOrder": 25, 802 | "_isFastLaunch": false, 803 | "category": "Compute optimized", 804 | "gpuNum": 0, 805 | "hideHardwareSpecs": false, 806 | "memoryGiB": 72, 807 | "name": "ml.c5.9xlarge", 808 | "vcpuNum": 36 809 | }, 810 | { 811 | "_defaultOrder": 26, 812 | "_isFastLaunch": false, 813 | "category": "Compute optimized", 814 | "gpuNum": 0, 815 | "hideHardwareSpecs": false, 816 | "memoryGiB": 96, 817 | "name": "ml.c5.12xlarge", 818 | "vcpuNum": 48 819 | }, 820 | { 821 | "_defaultOrder": 27, 822 | "_isFastLaunch": false, 823 | "category": "Compute optimized", 824 | "gpuNum": 0, 825 | "hideHardwareSpecs": false, 826 | "memoryGiB": 144, 827 | "name": "ml.c5.18xlarge", 828 | "vcpuNum": 72 829 | }, 830 | { 831 | "_defaultOrder": 28, 832 | "_isFastLaunch": false, 833 | "category": "Compute optimized", 834 | "gpuNum": 0, 835 | "hideHardwareSpecs": false, 836 | "memoryGiB": 192, 837 | "name": "ml.c5.24xlarge", 838 | "vcpuNum": 96 839 | }, 840 | { 841 | "_defaultOrder": 29, 842 | "_isFastLaunch": true, 843 | 
"category": "Accelerated computing", 844 | "gpuNum": 1, 845 | "hideHardwareSpecs": false, 846 | "memoryGiB": 16, 847 | "name": "ml.g4dn.xlarge", 848 | "vcpuNum": 4 849 | }, 850 | { 851 | "_defaultOrder": 30, 852 | "_isFastLaunch": false, 853 | "category": "Accelerated computing", 854 | "gpuNum": 1, 855 | "hideHardwareSpecs": false, 856 | "memoryGiB": 32, 857 | "name": "ml.g4dn.2xlarge", 858 | "vcpuNum": 8 859 | }, 860 | { 861 | "_defaultOrder": 31, 862 | "_isFastLaunch": false, 863 | "category": "Accelerated computing", 864 | "gpuNum": 1, 865 | "hideHardwareSpecs": false, 866 | "memoryGiB": 64, 867 | "name": "ml.g4dn.4xlarge", 868 | "vcpuNum": 16 869 | }, 870 | { 871 | "_defaultOrder": 32, 872 | "_isFastLaunch": false, 873 | "category": "Accelerated computing", 874 | "gpuNum": 1, 875 | "hideHardwareSpecs": false, 876 | "memoryGiB": 128, 877 | "name": "ml.g4dn.8xlarge", 878 | "vcpuNum": 32 879 | }, 880 | { 881 | "_defaultOrder": 33, 882 | "_isFastLaunch": false, 883 | "category": "Accelerated computing", 884 | "gpuNum": 4, 885 | "hideHardwareSpecs": false, 886 | "memoryGiB": 192, 887 | "name": "ml.g4dn.12xlarge", 888 | "vcpuNum": 48 889 | }, 890 | { 891 | "_defaultOrder": 34, 892 | "_isFastLaunch": false, 893 | "category": "Accelerated computing", 894 | "gpuNum": 1, 895 | "hideHardwareSpecs": false, 896 | "memoryGiB": 256, 897 | "name": "ml.g4dn.16xlarge", 898 | "vcpuNum": 64 899 | }, 900 | { 901 | "_defaultOrder": 35, 902 | "_isFastLaunch": false, 903 | "category": "Accelerated computing", 904 | "gpuNum": 1, 905 | "hideHardwareSpecs": false, 906 | "memoryGiB": 61, 907 | "name": "ml.p3.2xlarge", 908 | "vcpuNum": 8 909 | }, 910 | { 911 | "_defaultOrder": 36, 912 | "_isFastLaunch": false, 913 | "category": "Accelerated computing", 914 | "gpuNum": 4, 915 | "hideHardwareSpecs": false, 916 | "memoryGiB": 244, 917 | "name": "ml.p3.8xlarge", 918 | "vcpuNum": 32 919 | }, 920 | { 921 | "_defaultOrder": 37, 922 | "_isFastLaunch": false, 923 | "category": "Accelerated 
computing", 924 | "gpuNum": 8, 925 | "hideHardwareSpecs": false, 926 | "memoryGiB": 488, 927 | "name": "ml.p3.16xlarge", 928 | "vcpuNum": 64 929 | }, 930 | { 931 | "_defaultOrder": 38, 932 | "_isFastLaunch": false, 933 | "category": "Accelerated computing", 934 | "gpuNum": 8, 935 | "hideHardwareSpecs": false, 936 | "memoryGiB": 768, 937 | "name": "ml.p3dn.24xlarge", 938 | "vcpuNum": 96 939 | }, 940 | { 941 | "_defaultOrder": 39, 942 | "_isFastLaunch": false, 943 | "category": "Memory Optimized", 944 | "gpuNum": 0, 945 | "hideHardwareSpecs": false, 946 | "memoryGiB": 16, 947 | "name": "ml.r5.large", 948 | "vcpuNum": 2 949 | }, 950 | { 951 | "_defaultOrder": 40, 952 | "_isFastLaunch": false, 953 | "category": "Memory Optimized", 954 | "gpuNum": 0, 955 | "hideHardwareSpecs": false, 956 | "memoryGiB": 32, 957 | "name": "ml.r5.xlarge", 958 | "vcpuNum": 4 959 | }, 960 | { 961 | "_defaultOrder": 41, 962 | "_isFastLaunch": false, 963 | "category": "Memory Optimized", 964 | "gpuNum": 0, 965 | "hideHardwareSpecs": false, 966 | "memoryGiB": 64, 967 | "name": "ml.r5.2xlarge", 968 | "vcpuNum": 8 969 | }, 970 | { 971 | "_defaultOrder": 42, 972 | "_isFastLaunch": false, 973 | "category": "Memory Optimized", 974 | "gpuNum": 0, 975 | "hideHardwareSpecs": false, 976 | "memoryGiB": 128, 977 | "name": "ml.r5.4xlarge", 978 | "vcpuNum": 16 979 | }, 980 | { 981 | "_defaultOrder": 43, 982 | "_isFastLaunch": false, 983 | "category": "Memory Optimized", 984 | "gpuNum": 0, 985 | "hideHardwareSpecs": false, 986 | "memoryGiB": 256, 987 | "name": "ml.r5.8xlarge", 988 | "vcpuNum": 32 989 | }, 990 | { 991 | "_defaultOrder": 44, 992 | "_isFastLaunch": false, 993 | "category": "Memory Optimized", 994 | "gpuNum": 0, 995 | "hideHardwareSpecs": false, 996 | "memoryGiB": 384, 997 | "name": "ml.r5.12xlarge", 998 | "vcpuNum": 48 999 | }, 1000 | { 1001 | "_defaultOrder": 45, 1002 | "_isFastLaunch": false, 1003 | "category": "Memory Optimized", 1004 | "gpuNum": 0, 1005 | "hideHardwareSpecs": false, 1006 | 
"memoryGiB": 512, 1007 | "name": "ml.r5.16xlarge", 1008 | "vcpuNum": 64 1009 | }, 1010 | { 1011 | "_defaultOrder": 46, 1012 | "_isFastLaunch": false, 1013 | "category": "Memory Optimized", 1014 | "gpuNum": 0, 1015 | "hideHardwareSpecs": false, 1016 | "memoryGiB": 768, 1017 | "name": "ml.r5.24xlarge", 1018 | "vcpuNum": 96 1019 | }, 1020 | { 1021 | "_defaultOrder": 47, 1022 | "_isFastLaunch": false, 1023 | "category": "Accelerated computing", 1024 | "gpuNum": 1, 1025 | "hideHardwareSpecs": false, 1026 | "memoryGiB": 16, 1027 | "name": "ml.g5.xlarge", 1028 | "vcpuNum": 4 1029 | }, 1030 | { 1031 | "_defaultOrder": 48, 1032 | "_isFastLaunch": false, 1033 | "category": "Accelerated computing", 1034 | "gpuNum": 1, 1035 | "hideHardwareSpecs": false, 1036 | "memoryGiB": 32, 1037 | "name": "ml.g5.2xlarge", 1038 | "vcpuNum": 8 1039 | }, 1040 | { 1041 | "_defaultOrder": 49, 1042 | "_isFastLaunch": false, 1043 | "category": "Accelerated computing", 1044 | "gpuNum": 1, 1045 | "hideHardwareSpecs": false, 1046 | "memoryGiB": 64, 1047 | "name": "ml.g5.4xlarge", 1048 | "vcpuNum": 16 1049 | }, 1050 | { 1051 | "_defaultOrder": 50, 1052 | "_isFastLaunch": false, 1053 | "category": "Accelerated computing", 1054 | "gpuNum": 1, 1055 | "hideHardwareSpecs": false, 1056 | "memoryGiB": 128, 1057 | "name": "ml.g5.8xlarge", 1058 | "vcpuNum": 32 1059 | }, 1060 | { 1061 | "_defaultOrder": 51, 1062 | "_isFastLaunch": false, 1063 | "category": "Accelerated computing", 1064 | "gpuNum": 1, 1065 | "hideHardwareSpecs": false, 1066 | "memoryGiB": 256, 1067 | "name": "ml.g5.16xlarge", 1068 | "vcpuNum": 64 1069 | }, 1070 | { 1071 | "_defaultOrder": 52, 1072 | "_isFastLaunch": false, 1073 | "category": "Accelerated computing", 1074 | "gpuNum": 4, 1075 | "hideHardwareSpecs": false, 1076 | "memoryGiB": 192, 1077 | "name": "ml.g5.12xlarge", 1078 | "vcpuNum": 48 1079 | }, 1080 | { 1081 | "_defaultOrder": 53, 1082 | "_isFastLaunch": false, 1083 | "category": "Accelerated computing", 1084 | "gpuNum": 4, 1085 | 
"hideHardwareSpecs": false, 1086 | "memoryGiB": 384, 1087 | "name": "ml.g5.24xlarge", 1088 | "vcpuNum": 96 1089 | }, 1090 | { 1091 | "_defaultOrder": 54, 1092 | "_isFastLaunch": false, 1093 | "category": "Accelerated computing", 1094 | "gpuNum": 8, 1095 | "hideHardwareSpecs": false, 1096 | "memoryGiB": 768, 1097 | "name": "ml.g5.48xlarge", 1098 | "vcpuNum": 192 1099 | }, 1100 | { 1101 | "_defaultOrder": 55, 1102 | "_isFastLaunch": false, 1103 | "category": "Accelerated computing", 1104 | "gpuNum": 8, 1105 | "hideHardwareSpecs": false, 1106 | "memoryGiB": 1152, 1107 | "name": "ml.p4d.24xlarge", 1108 | "vcpuNum": 96 1109 | }, 1110 | { 1111 | "_defaultOrder": 56, 1112 | "_isFastLaunch": false, 1113 | "category": "Accelerated computing", 1114 | "gpuNum": 8, 1115 | "hideHardwareSpecs": false, 1116 | "memoryGiB": 1152, 1117 | "name": "ml.p4de.24xlarge", 1118 | "vcpuNum": 96 1119 | }, 1120 | { 1121 | "_defaultOrder": 57, 1122 | "_isFastLaunch": false, 1123 | "category": "Accelerated computing", 1124 | "gpuNum": 0, 1125 | "hideHardwareSpecs": false, 1126 | "memoryGiB": 32, 1127 | "name": "ml.trn1.2xlarge", 1128 | "vcpuNum": 8 1129 | }, 1130 | { 1131 | "_defaultOrder": 58, 1132 | "_isFastLaunch": false, 1133 | "category": "Accelerated computing", 1134 | "gpuNum": 0, 1135 | "hideHardwareSpecs": false, 1136 | "memoryGiB": 512, 1137 | "name": "ml.trn1.32xlarge", 1138 | "vcpuNum": 128 1139 | }, 1140 | { 1141 | "_defaultOrder": 59, 1142 | "_isFastLaunch": false, 1143 | "category": "Accelerated computing", 1144 | "gpuNum": 0, 1145 | "hideHardwareSpecs": false, 1146 | "memoryGiB": 512, 1147 | "name": "ml.trn1n.32xlarge", 1148 | "vcpuNum": 128 1149 | } 1150 | ], 1151 | "instance_type": "ml.t3.medium", 1152 | "kernelspec": { 1153 | "display_name": "Python 3 (ipykernel)", 1154 | "language": "python", 1155 | "name": "python3" 1156 | }, 1157 | "language_info": { 1158 | "codemirror_mode": { 1159 | "name": "ipython", 1160 | "version": 3 1161 | }, 1162 | "file_extension": ".py", 1163 | 
"mimetype": "text/x-python", 1164 | "name": "python", 1165 | "nbconvert_exporter": "python", 1166 | "pygments_lexer": "ipython3", 1167 | "version": "3.10.13" 1168 | } 1169 | }, 1170 | "nbformat": 4, 1171 | "nbformat_minor": 5 1172 | } 1173 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | **Update** Amazon Bedrock now supports LLM-as-a-judge capabilities - [AWS Blog Post](https://aws.amazon.com/blogs/aws/new-rag-evaluation-and-llm-as-a-judge-capabilities-in-amazon-bedrock/) 2 | 3 | # Evaluating Large Language Models using LLM-as-a-Judge 4 | 5 | Evaluating large language models (LLM) is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, strong LLMs are used as judges to evaluate these models on more open-ended questions. The agreement between LLM judges and human preferences has been verified by introducing two benchmarks: Multi Turn (MT)-bench, a multi-turn question set, and Chatbot Arena, a crowdsourced battle platform. The results reveal that strong LLM judges can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement between humans This makes LLM-as-a-judge a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. 6 | 7 | > ℹ️ **Note:** The evaluation steps in this lab are based on the paper [Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena](https://arxiv.org/pdf/2306.05685.pdf). 8 | 9 | This lab addresses this challenge by providing a practical solution for evaluating LLMs using LLM-as-a-Judge with Amazon Bedrock. This is relevant for developers and researchers working on evaluating LLM based applications. 
The notebook guides you through using MT-Bench questions to generate test answers and evaluate them with single-answer grading, using the Bedrock API, Python, and LangChain. The notebook consists of the following chapters: 10 | 11 | 1) Set up the environment 12 | 2) Load MT-Bench questions 13 | 3) Generate test answers from the LLM to be evaluated 14 | 4) Evaluate the answers with a strong LLM-as-a-judge 15 | 5) Generate an explanation for the average rating score 16 | 17 | 18 | ## Getting started 19 | 20 | ### Choose a notebook environment 21 | 22 | This lab is presented as a **Python notebook**, which you can run from the environment of your choice: 23 | 24 | - [SageMaker Studio](https://aws.amazon.com/sagemaker/studio/) is a web-based integrated development environment (IDE) for machine learning. To get started quickly, refer to the [instructions for domain quick setup](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html). 25 | - A [SageMaker Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-create-ws.html) is a machine learning (ML) compute instance running the Jupyter Notebook App. 26 | - To use your existing (local or other) notebook environment, make sure it has [credentials for calling AWS](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). 27 | 28 | 29 | ### Enable AWS IAM permissions for Bedrock 30 | 31 | The AWS identity you assume from your notebook environment (the [*Studio/notebook execution role*](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) in SageMaker, or a role or IAM user for self-managed notebooks) must have sufficient [AWS IAM permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) to call the Amazon Bedrock service.
32 | 33 | To grant Bedrock access to your identity: 34 | 35 | - Open the [AWS IAM Console](https://us-east-1.console.aws.amazon.com/iam/home?#) 36 | - Find your [Role](https://us-east-1.console.aws.amazon.com/iamv2/home?#/roles) (if using SageMaker or otherwise assuming an IAM role), or else your [User](https://us-east-1.console.aws.amazon.com/iamv2/home?#/users) 37 | - Select *Add Permissions > Create Inline Policy* to attach new inline permissions, open the *JSON* editor, and paste in the example policy below: 38 | 39 | ```json 40 | { 41 | "Version": "2012-10-17", 42 | "Statement": { 43 | "Sid": "AllowInference", 44 | "Effect": "Allow", 45 | "Action": [ 46 | "bedrock:InvokeModel" 47 | ], 48 | "Resource": "arn:aws:bedrock:*::foundation-model/*" 49 | } 50 | } 51 | ``` 52 | 53 | > ℹ️ **Note:** With Amazon SageMaker, your notebook execution role is typically *separate* from the user or role that you log in to the AWS Console with. If you want to explore the AWS Console for Amazon Bedrock, you also need to grant permissions to your Console user/role. You can run the notebooks anywhere as long as you have access to the Amazon Bedrock service and appropriate credentials. 54 | 55 | For more information on the fine-grained action and resource permissions in Bedrock, check out the Bedrock Developer Guide. 56 | 57 | 58 | ### Clone and use the notebooks 59 | 60 | > ℹ️ **Note:** In SageMaker Studio, you can open a "System Terminal" to run these commands by clicking *File > New > Terminal*. 61 | 62 | Once your notebook environment is set up, clone this workshop repository into it: 63 | 64 | ```sh 65 | sudo yum install -y unzip 66 | git clone git@github.com:aws-samples/evaluating-large-language-models-using-llm-as-a-judge.git 67 | cd evaluating-large-language-models-using-llm-as-a-judge 68 | ``` 69 | 70 | You're now ready to explore the lab notebook! You will be guided through connecting the notebook to Amazon Bedrock for large language model access.
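Before opening the notebook, you can sanity-check that your environment can reach Bedrock from Python. The sketch below is illustrative only: it assumes boto3 is installed and AWS credentials are configured, and the Anthropic-style request body and model ID shown are examples, not requirements of the lab.

```python
# Minimal Bedrock invocation sketch. The request-body shape follows the
# Anthropic messages format used by Claude models on Bedrock; the model ID
# is illustrative and may need to be changed for your account/region.
import json

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Build an Anthropic-messages-style request body for Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str, model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
    import boto3  # deferred so build_request works even without boto3 installed
    client = boto3.client("bedrock-runtime")  # region taken from your AWS config
    response = client.invoke_model(modelId=model_id, body=build_request(prompt))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

If `invoke("Reply with the single word OK.")` returns text, the `bedrock:InvokeModel` permission is in place and the notebook's Bedrock calls should succeed.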
71 | 72 | 73 | ## Contributing 74 | 75 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information. 76 | 77 | ## License 78 | This library is licensed under the MIT-0 License. See the [LICENSE](LICENSE) file. 79 | 80 | --------------------------------------------------------------------------------