├── AskWiki
│   └── Pre_Trained_Models_and_Semantic_Search_Ask_Wiki_Notebook.ipynb
├── Databricks_demo
│   └── ML Quickstart_ Model Training with HuggingFace.ipynb
├── DocumentAI
│   └── Fine_tune_LayoutLMv3_on_CORD.ipynb
├── Enterprise_AI_intro
│   ├── Transformation_AI_Slides.pdf
│   └── Transforming_AI_Notebook_by_Raj.ipynb
├── FinBERT
│   ├── FinBERT talk.pdf
│   └── Financial_Demo_Using_Gradio_&_Hugging_Face.ipynb
├── FoodApp
│   ├── Build AI Apps Minutes July 2022.pdf
│   ├── Indian_Food_Translator.ipynb
│   └── Indian_food_image_classification_fine-tuning.ipynb
├── LICENSE
├── README.md
├── SetFit
│   └── SetFit.ipynb
└── explainability
    └── Transformers_Interpretability_Explainability.ipynb

/AskWiki/Pre_Trained_Models_and_Semantic_Search_Ask_Wiki_Notebook.ipynb:
--------------------------------------------------------------------------------
1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "id": "gh6QOr-qO4Ym" 7 | }, 8 | "source": [ 9 | "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/02_ml-demos-with-gradio.ipynb)\n", 10 | "\n", 11 | "\n", 12 | "Transformers Pipeline Demo Notebook\n", 13 | "\n", 14 | "💡 **Welcome!**\n", 15 | "\n", 16 | "This notebook is a self-contained way to start using widely known open-source technologies (`transformers`, `gradio`, etc.). The core use case is building an application to search Wikipedia. 
This notebook can be found at [https://bit.ly/raj_askwiki](https://bit.ly/raj_askwiki), and the accompanying slides are at [https://github.com/rajshah4/huggingface-demos/tree/main/FinBERT](https://github.com/rajshah4/huggingface-demos/tree/main/FinBERT).\n" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "metadata": { 23 | "id": "lb65KY8VcSV8" 24 | }, 25 | "outputs": [], 26 | "source": [ 27 | "!pip install --quiet transformers\n", 28 | "!pip install --quiet gradio\n", 29 | "!pip install --quiet sentence-transformers\n", 30 | "!pip install --quiet datasets\n", 31 | "import os\n", 32 | "import gzip\n", 33 | "import json" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": { 39 | "id": "NkJmA-r5L0EB" 40 | }, 41 | "source": [ 42 | "# Tutorial: Using Pretrained Models and Building Demos with Gradio ⚡ & Hugging Face 🤗 " 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": { 48 | "id": "D_Iv1CJZPekG" 49 | }, 50 | "source": [ 51 | "**Learning goals:** The goal of this tutorial is to learn how to:\n", 52 | "\n", 53 | "1. Use pre-trained models from the `transformers` library that are available on the Hugging Face Hub\n", 54 | "2. Extract information from pre-trained models\n", 55 | "3. Do a similarity search\n", 56 | "4. Build a web demo, [Ask Wiki](https://huggingface.co/spaces/rajistics/Ask-Wiki)\n", 57 | "\n", 58 | "**Duration**: 60 minutes\n", 59 | "\n", 60 | "**Prerequisites:** Knowledge of Python and basic familiarity with machine learning \n", 61 | "\n", 62 | "**Author**: [Rajiv Shah](https://twitter.com/rajistics) (feel free to ping me with any questions about this tutorial) \n", 63 | "\n", 64 | "All of these steps can be done for free! 
All you need is an Internet browser and a place where you can write Python 👩‍💻" 65 | ] 66 | }, 67 | { 68 | "cell_type": "markdown", 69 | "metadata": { 70 | "id": "_-wtA32Tbfbd" 71 | }, 72 | "source": [ 73 | "# Let's Start by Exploring Pre-trained Models on the Hugging Face Hub\n", 74 | "[Hugging Face Hub](https://hf.co) \n", 75 | "\n", 76 | "**Voice:** Automatic Speech Recognition [Facebook's Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)\n", 77 | "\n", 78 | "**Image:** Object Detection [DETR End-to-End Object Detection model with ResNet-50 backbone](https://huggingface.co/facebook/detr-resnet-50)" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": { 84 | "id": "Vqq4sZ4Qb8me" 85 | }, 86 | "source": [ 87 | "## Let's Run Pretrained Models for Predictions" 88 | ] 89 | }, 90 | { 91 | "cell_type": "markdown", 92 | "metadata": { 93 | "id": "DpvE1ionWmLy" 94 | }, 95 | "source": [ 96 | "The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) supports many common tasks out-of-the-box:\n", 97 | "\n", 98 | "**Text**:\n", 99 | "* Sentiment analysis: classify the polarity of a given text.\n", 100 | "* Text generation (in English): generate text from a given input.\n", 101 | "* Named entity recognition (NER): label each word with the entity it represents (person, date, location, etc.).\n", 102 | "* Question answering: extract the answer from the context, given some context and a question.\n", 103 | "* Fill-mask: fill in the blank given a text with masked words.\n", 104 | "* Summarization: generate a summary of a long sequence of text or document.\n", 105 | "* Translation: translate text into another language.\n", 106 | "* Feature extraction: create a tensor representation of the text.\n", 107 | "\n", 108 | "**Image**:\n", 109 | "* Image classification: classify an image.\n", 110 | "* Image segmentation: classify every pixel in an image.\n", 111 | "* Object detection: detect objects within an 
image.\n", 112 | "\n", 113 | "**Audio**:\n", 114 | "* Audio classification: assign a label to a given segment of audio.\n", 115 | "* Automatic speech recognition (ASR): transcribe audio data into text.\n", 116 | "\n", 117 | "\n", 118 | "\n", 119 | "For more details about the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) and associated tasks, refer to the documentation [here](https://huggingface.co/docs/transformers/main/en/./main_classes/pipelines).\n" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": { 125 | "id": "wBYp9IOZDWNB" 126 | }, 127 | "source": [ 128 | "###Text" 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": null, 134 | "metadata": { 135 | "id": "BKkkyOlfcing" 136 | }, 137 | "outputs": [], 138 | "source": [ 139 | "#Sentiment Analysis\n", 140 | "from transformers import pipeline\n", 141 | "sent_classifier = pipeline(\"sentiment-analysis\")\n", 142 | "sent_classifier(\"I was pretty happy with the sneakers\")" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": null, 148 | "metadata": { 149 | "id": "J_J-cpP7C5Rr" 150 | }, 151 | "outputs": [], 152 | "source": [ 153 | "ARTICLE = \"\"\" New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.\n", 154 | "A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.\n", 155 | "Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared \"I do\" five more times, sometimes only within two weeks of each other.\n", 156 | "In 2010, she married once more, this time in the Bronx. 
In an application for a marriage license, she stated it was her \"first and only\" marriage.\n", 157 | "Barrientos, now 39, is facing two criminal counts of \"offering a false instrument for filing in the first degree,\" referring to her false statements on the\n", 158 | "2010 marriage license application, according to court documents.\n", 159 | "Prosecutors said the marriages were part of an immigration scam.\n", 160 | "On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.\n", 161 | "After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective\n", 162 | "Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.\n", 163 | "All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.\n", 164 | "Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.\n", 165 | "Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.\n", 166 | "The case was referred to the Bronx District Attorney\\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\\'s\n", 167 | "Investigation Division. Seven of the men are from so-called \"red-flagged\" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.\n", 168 | "Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.\n", 169 | "If convicted, Barrientos faces up to four years in prison. 
Her next court appearance is scheduled for May 18.\n", 170 | "\"\"\"" 171 | ] 172 | }, 173 | { 174 | "cell_type": "code", 175 | "execution_count": null, 176 | "metadata": { 177 | "id": "pJezrNIEBh6s" 178 | }, 179 | "outputs": [], 180 | "source": [ 181 | "summarizer = pipeline(\"summarization\")" 182 | ] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "execution_count": null, 187 | "metadata": { 188 | "colab": { 189 | "base_uri": "https://localhost:8080/" 190 | }, 191 | "id": "yOeWt-OxC5OV", 192 | "outputId": "a9284cdd-69a7-405c-e1b4-656cfaf26583" 193 | }, 194 | "outputs": [ 195 | { 196 | "data": { 197 | "text/plain": [ 198 | "[{'summary_text': ' Liana Barrientos has been married 10 times, nine of them between 1999 and 2002 . She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say . She pleaded not guilty at State Supreme Court in the Bronx on Friday .'}]" 199 | ] 200 | }, 201 | "execution_count": 7, 202 | "metadata": {}, 203 | "output_type": "execute_result" 204 | } 205 | ], 206 | "source": [ 207 | "summarizer(ARTICLE, max_length=130, min_length=30, do_sample=True)" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "execution_count": null, 213 | "metadata": { 214 | "id": "SaEWTfmjcysx" 215 | }, 216 | "outputs": [], 217 | "source": [ 218 | "#Zero Shot Learning\n", 219 | "zero_shot = pipeline('zero-shot-classification')\n", 220 | "zero_shot(\n", 221 | " \"This is a course about the Transformers library\",\n", 222 | " candidate_labels=[\"education\", \"politics\", \"business\"],\n", 223 | ")" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": { 229 | "id": "F3iwLYgdDd4-" 230 | }, 231 | "source": [ 232 | "###Image" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": null, 238 | "metadata": { 239 | "id": "4uImYTfZC5Kr" 240 | }, 241 | "outputs": [], 242 | "source": [ 243 | "vision_classifier = pipeline(task=\"image-classification\")\n", 244 | "result 
= vision_classifier(\n", 245 | " images=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg\"\n", 246 | ")\n", 247 | "print(\"\\n\".join([f\"Class {d['label']} with score {round(d['score'], 4)}\" for d in result]))" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "metadata": { 253 | "id": "-RZA0oOhmsh5" 254 | }, 255 | "source": [ 256 | "## Fine-tuned FinBERT for Forward-Looking Statements\n", 257 | "\n", 258 | "[FinBERT-FLS](https://huggingface.co/yiyanghkust/finbert-fls) is a model developed for identifying forward-looking statements (FLS). These statements inform investors of managers’ beliefs and opinions about a firm's future events or results. Identifying forward-looking statements from corporate reports can assist investors in financial analysis. FinBERT-FLS is a FinBERT model fine-tuned on 3,500 manually annotated sentences from the Management Discussion and Analysis section of annual reports of Russell 3000 firms." 259 | ] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "execution_count": null, 264 | "metadata": { 265 | "id": "rGyamloQmdxI" 266 | }, 267 | "outputs": [], 268 | "source": [ 269 | "from transformers import BertTokenizer, BertForSequenceClassification, pipeline\n", 270 | "\n", 271 | "finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-fls',num_labels=3)\n", 272 | "tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-fls')\n", 273 | "nlp = pipeline(\"text-classification\", model=finbert, tokenizer=tokenizer)\n", 274 | "results = nlp('In the past, the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.')\n", 275 | "print(results) " 276 | ] 277 | }, 278 | { 279 | "cell_type": "markdown", 280 | "metadata": { 281 | "id": "QYA6nfG9BKPB" 282 | }, 283 | "source": [ 284 | "## Inference Pipelines from Hugging Face" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "metadata": { 290 | "id": "wZhAaT_PZ6EF" 291 | 
}, 292 | "source": [ 293 | "For production use, you can use the [inference API](https://huggingface.co/inference-api) to get predictions via simple API calls. To get the snippet, just go here\n", 294 | "![](https://i.ibb.co/P9yyTHg/Screen-Shot-2022-07-01-at-10-30-20-AM.png)" 295 | ] 296 | }, 297 | { 298 | "cell_type": "code", 299 | "execution_count": null, 300 | "metadata": { 301 | "id": "fNWp4t5UBIc_" 302 | }, 303 | "outputs": [], 304 | "source": [ 305 | "# Example Snippet\n", 306 | "import requests\n", 307 | "\n", 308 | "API_URL = \"https://api-inference.huggingface.co/models/yiyanghkust/finbert-fls\"\n", 309 | "headers = {\"Authorization\": \"Bearer {API_TOKEN}\"} ###Add your API Key here after Bearer\n", 310 | "\n", 311 | "def query(payload):\n", 312 | "\tresponse = requests.post(API_URL, headers=headers, json=payload)\n", 313 | "\treturn response.json()\n", 314 | "\t\n", 315 | "output = query({\n", 316 | "\t\"inputs\": \"In the past, the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.\",\n", 317 | "})" 318 | ] 319 | }, 320 | { 321 | "cell_type": "markdown", 322 | "metadata": { 323 | "id": "I_8Q4GglDub2" 324 | }, 325 | "source": [ 326 | "# Extracting Information from Pre-trained Models" 327 | ] 328 | }, 329 | { 330 | "cell_type": "markdown", 331 | "metadata": { 332 | "id": "946BuwY9eLtE" 333 | }, 334 | "source": [ 335 | "[Embedding Model Used in the Presentation](https://huggingface.co/obrizum/all-MiniLM-L6-v2)" 336 | ] 337 | }, 338 | { 339 | "cell_type": "code", 340 | "execution_count": null, 341 | "metadata": { 342 | "id": "BpdMF-fmBsdt" 343 | }, 344 | "outputs": [], 345 | "source": [ 346 | "sentences = [\"This is an example sentence\", \"Each sentence is converted\"]\n", 347 | "\n", 348 | "from sentence_transformers import SentenceTransformer, util, CrossEncoder\n", 349 | "\n", 350 | "model = SentenceTransformer('obrizum/all-MiniLM-L6-v2')\n", 351 | "embeddings = model.encode(sentences)\n", 352 | 
"print(embeddings)" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "metadata": { 358 | "id": "MdCNeJZSBtit" 359 | }, 360 | "source": [ 361 | "# Similarity Search\n" 362 | ] 363 | }, 364 | { 365 | "cell_type": "markdown", 366 | "metadata": { 367 | "id": "rhj65Rwae0l5" 368 | }, 369 | "source": [ 370 | "A great visualization tool for embeddings is the [Tensorflow Projector](https://projector.tensorflow.org/)" 371 | ] 372 | }, 373 | { 374 | "cell_type": "markdown", 375 | "metadata": { 376 | "id": "aHf7NRNmf6mP" 377 | }, 378 | "source": [ 379 | "[Similarity Model Used in the Presentation](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1)" 380 | ] 381 | }, 382 | { 383 | "cell_type": "markdown", 384 | "metadata": { 385 | "id": "PXTaUijqgOVA" 386 | }, 387 | "source": [ 388 | "Let's use the concept of embeddings now to match a query with the best related passage in some documents. This demo will use an extract of Wikipedia as the passages to search." 389 | ] 390 | }, 391 | { 392 | "cell_type": "code", 393 | "execution_count": null, 394 | "metadata": { 395 | "id": "OJRD8mzZD_ss" 396 | }, 397 | "outputs": [], 398 | "source": [ 399 | "## Get Wikipedia data\n", 400 | "wikipedia_filepath = 'simplewiki-2020-11-01.jsonl.gz'\n", 401 | "\n", 402 | "if not os.path.exists(wikipedia_filepath):\n", 403 | " util.http_get('http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz', wikipedia_filepath)" 404 | ] 405 | }, 406 | { 407 | "cell_type": "code", 408 | "execution_count": null, 409 | "metadata": { 410 | "colab": { 411 | "base_uri": "https://localhost:8080/" 412 | }, 413 | "id": "nmFzmOJnD_qd", 414 | "outputId": "c41df69e-eba9-4b6a-d4e3-f0fc1573cac0", 415 | "collapsed": true 416 | }, 417 | "outputs": [ 418 | { 419 | "name": "stdout", 420 | "output_type": "stream", 421 | "text": [ 422 | "Passages: 169597\n" 423 | ] 424 | } 425 | ], 426 | "source": [ 427 | "passages = []\n", 428 | "with gzip.open(wikipedia_filepath, 'rt', encoding='utf8') as fIn:\n", 
429 | " for line in fIn:\n", 430 | " data = json.loads(line.strip())\n", 431 | "\n", 432 | " #Add all paragraphs\n", 433 | " #passages.extend(data['paragraphs'])\n", 434 | "\n", 435 | " #Only add the first paragraph\n", 436 | " passages.append(data['paragraphs'][0])\n", 437 | "\n", 438 | "print(\"Passages:\", len(passages))\n" 439 | ] 440 | }, 441 | { 442 | "cell_type": "code", 443 | "execution_count": null, 444 | "metadata": { 445 | "colab": { 446 | "base_uri": "https://localhost:8080/" 447 | }, 448 | "id": "n0j-U9BkE0-2", 449 | "outputId": "786c858b-8de3-41a3-c77d-af6c0009d783" 450 | }, 451 | "outputs": [ 452 | { 453 | "data": { 454 | "text/plain": [ 455 | "{'id': '798870',\n", 456 | " 'paragraphs': ['The Seminole bat (\"Lasiurus seminolus\") is a type of bat in the family Vespertilionidae.',\n", 457 | " 'The Seminole bat is often confused with the red bat. The Seminole bat has a mahogany color with a frosted look because to white tipped dorsal hairs. They weigh around 12 grams. Females are larger than males.',\n", 458 | " 'The Seminole bat is found in the Southeastern United States. This includes Louisiana, Georgia, Alabama, Mississippi, South Carolina and parts of Texas, Tennessee, Arkansas and North Carolina. It has also been seen as far as Mexico. It is a migratory species. In the winter, it lives along the Gulf Coast, North and South Carolina, and southern Arkansas. In the summer, they migrate as far north as Missouri and Kentucky.',\n", 459 | " 'It prefers to live in forested areas. In winter, they are found to use leaf litter and Spanish moss as insulation in their roost sites.',\n", 460 | " 'Seminole bats are insectivores. They eat large amounts of Hymenoptera (ants, bees and wasps), Coleoptera (beetles), Lepidoptera (moths). 
They have also been shown to eat smaller amounts of Homoptera (cicadas) and Diptera (flies).'],\n", 461 | " 'title': 'Seminole bat'}" 462 | ] 463 | }, 464 | "execution_count": 15, 465 | "metadata": {}, 466 | "output_type": "execute_result" 467 | } 468 | ], 469 | "source": [ 470 | "data" 471 | ] 472 | }, 473 | { 474 | "cell_type": "code", 475 | "execution_count": null, 476 | "metadata": { 477 | "colab": { 478 | "base_uri": "https://localhost:8080/" 479 | }, 480 | "id": "g6ZduxGEE4bq", 481 | "outputId": "a4cbfb44-d31d-4218-dab6-2704cf55050a" 482 | }, 483 | "outputs": [ 484 | { 485 | "data": { 486 | "text/plain": [ 487 | "['Ted Cassidy (July 31, 1932 - January 16, 1979) was an American actor. He was best known for his roles as Lurch and Thing on \"The Addams Family\".',\n", 488 | " 'Aileen Carol Wuornos Pralle (born Aileen Carol Pittman; February 29, 1956\\xa0– October 9, 2002) was an American serial killer. She was born in Rochester, Michigan. She confessed to killing six men in Florida and was executed in Florida State Prison by lethal injection for the murders. Wuornos said that the men she killed had raped her or tried to rape her while she was working as a prostitute.',\n", 489 | " \"A crater is a round dent on a planet. They are usually shaped like a circle or an oval. They are usually made by something like a meteor hitting the surface of a planet. Underground activity such as volcanoes or explosions can also cause them but it's not as likely.\",\n", 490 | " 'Store has several meanings:',\n", 491 | " 'Chinese New Year, known in China as the SpringFestival and in Singapore as the LunarNewYear, is a holiday on and around the new moon on the first day of the year in the traditional Chinese calendar. This calendar is based on the changes in the moon and is only sometimes changed to fit the seasons of the year based on how the Earth moves around the sun. Because of this, Chinese New Year is never on January1. 
It moves around between January21 and February20.']" 492 | ] 493 | }, 494 | "execution_count": 16, 495 | "metadata": {}, 496 | "output_type": "execute_result" 497 | } 498 | ], 499 | "source": [ 500 | "passages[0:5]" 501 | ] 502 | }, 503 | { 504 | "cell_type": "code", 505 | "execution_count": null, 506 | "metadata": { 507 | "id": "Pa5O7L2GD4b2" 508 | }, 509 | "outputs": [], 510 | "source": [ 511 | "#We use the Bi-Encoder to encode all passages, so that we can use it with semantic search\n", 512 | "bi_encoder = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')\n", 513 | "bi_encoder.max_seq_length = 256 #Truncate long passages to 256 tokens\n", 514 | "top_k = 32 #Number of passages we want to retrieve with the bi-encoder\n" 515 | ] 516 | }, 517 | { 518 | "cell_type": "code", 519 | "execution_count": null, 520 | "metadata": { 521 | "id": "7Bff7SgeEXS1" 522 | }, 523 | "outputs": [], 524 | "source": [ 525 | "\n", 526 | "# We encode all passages into our vector space. This takes about 5 minutes (depending on your GPU speed)\n", 527 | "corpus_embeddings = bi_encoder.encode(passages, convert_to_tensor=True, show_progress_bar=True)" 528 | ] 529 | }, 530 | { 531 | "cell_type": "code", 532 | "execution_count": null, 533 | "metadata": { 534 | "id": "bH3wxiOPFt1n" 535 | }, 536 | "outputs": [], 537 | "source": [ 538 | "query=\"What is the capital of the United States?\"" 539 | ] 540 | }, 541 | { 542 | "cell_type": "code", 543 | "execution_count": null, 544 | "metadata": { 545 | "id": "36bOslhVD_mc" 546 | }, 547 | "outputs": [], 548 | "source": [ 549 | "# Encode the query using the bi-encoder and find potentially relevant passages\n", 550 | "question_embedding = bi_encoder.encode(query, convert_to_tensor=True)\n", 551 | "question_embedding = question_embedding.cuda()\n", 552 | "hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)\n", 553 | "hits = hits[0] # Get the hits for the first query" 554 | ] 555 | }, 556 | { 557 | "cell_type": "code", 558 | 
"execution_count": null, 559 | "metadata": { 560 | "id": "N3pBMwLIF3us" 561 | }, 562 | "outputs": [], 563 | "source": [ 564 | "print(\"\\n-------------------------\\n\")\n", 565 | "print(\"Top-3 Bi-Encoder Retrieval hits\")\n", 566 | "hits = sorted(hits, key=lambda x: x['score'], reverse=True)\n", 567 | "for hit in hits[0:3]:\n", 568 | " print(\"\\t{:.3f}\\t{}\".format(hit['score'], passages[hit['corpus_id']].replace(\"\\n\", \" \")))\n" 569 | ] 570 | }, 571 | { 572 | "cell_type": "markdown", 573 | "metadata": { 574 | "id": "DojWEZjZi8XC" 575 | }, 576 | "source": [ 577 | "For a more advanced similarity search, let's add a cross encoder for reranking. If you want to see a more thorough comparison against lexicon searching, check out [https://bit.ly/raj_semantic/](https://bit.ly/raj_semantic/)\n" 578 | ] 579 | }, 580 | { 581 | "cell_type": "code", 582 | "execution_count": null, 583 | "metadata": { 584 | "id": "afVCdmhPD_eY" 585 | }, 586 | "outputs": [], 587 | "source": [ 588 | "##Go add a Cross Encoder for Reranking\n", 589 | "cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')\n", 590 | "\n", 591 | "##### Re-Ranking #####\n", 592 | "# Now, score all retrieved passages with the cross_encoder\n", 593 | "cross_inp = [[query, passages[hit['corpus_id']]] for hit in hits]\n", 594 | "cross_scores = cross_encoder.predict(cross_inp)\n", 595 | "\n", 596 | "# Sort results by the cross-encoder scores\n", 597 | "for idx in range(len(cross_scores)):\n", 598 | " hits[idx]['cross-score'] = cross_scores[idx]\n", 599 | "\n", 600 | "# Output of top-5 hits from re-ranker\n", 601 | "print(\"\\n-------------------------\\n\")\n", 602 | "print(\"Top-3 Cross-Encoder Re-ranker hits\")\n", 603 | "hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)\n", 604 | "for hit in hits[0:3]:\n", 605 | " print(\"\\t{:.3f}\\t{}\".format(hit['cross-score'], passages[hit['corpus_id']].replace(\"\\n\", \" \")))" 606 | ] 607 | }, 608 | { 609 | "cell_type": "markdown", 610 | 
"metadata": { 611 | "id": "PR9faV2NWTrG" 612 | }, 613 | "source": [ 614 | "# Let's Build a Demo\n", 615 | "\n", 616 | "**Demos** of machine learning models are an increasingly important part of machine learning. Demos allow:\n", 617 | "\n", 618 | "* model developers to easily **present** their work to a wide audience\n", 619 | "* researchers to increase the **reproducibility** of machine learning research\n", 620 | "* diverse users to more easily **identify and debug** failure points of models\n", 621 | "\n", 622 | "\n", 623 | "As a quick example of what we would like to build, check out the [Keras Org on Hugging Face](https://huggingface.co/keras-io), which includes a description card and a collection of Models and Spaces built by the Keras community. Any Space can be opened in your browser and you can use the model immediately, as shown here: \n", 624 | "\n", 625 | "![](https://i.ibb.co/7y6DGjB/ezgif-5-cc52b7e590.gif)\n", 626 | "\n", 627 | "\n" 628 | ] 629 | }, 630 | { 631 | "cell_type": "markdown", 632 | "metadata": { 633 | "id": "g0KzbU4lQtv3" 634 | }, 635 | "source": [ 636 | "## 1. Build Quick ML Demos in Python Using the Gradio Library" 637 | ] 638 | }, 639 | { 640 | "cell_type": "markdown", 641 | "metadata": { 642 | "id": "rlSs72oUQ1VW" 643 | }, 644 | "source": [ 645 | "`gradio` is a handy Python library that lets you build web demos simply by specifying the list of input and output **components** expected by your machine learning model. \n", 646 | "\n", 647 | "For more detail, [see the docs](https://gradio.app/docs/).\n", 648 | "\n", 649 | "In addition to the input and output types, Gradio expects a third parameter, which is the prediction function itself. This parameter can be ***any* regular Python function** that takes in parameter(s) corresponding to the input component(s) and returns value(s) corresponding to the output component(s).\n", 650 | "\n", 651 | "Enough words. Let's see some code!" 
652 | ] 653 | }, 654 | { 655 | "cell_type": "code", 656 | "execution_count": null, 657 | "metadata": { 658 | "id": "aE8aAbTL0uab" 659 | }, 660 | "outputs": [], 661 | "source": [ 662 | "#from sentence_transformers import SentenceTransformer, CrossEncoder, util\n", 663 | "#import torch\n", 664 | "##import pickle\n", 665 | "#import pandas as pd\n", 666 | "#import gradio as gr" 667 | ] 668 | }, 669 | { 670 | "cell_type": "code", 671 | "execution_count": null, 672 | "metadata": { 673 | "colab": { 674 | "background_save": true, 675 | "base_uri": "https://localhost:8080/", 676 | "height": 592 677 | }, 678 | "id": "Uc_Q6IK-0uWs", 679 | "outputId": "81f5999f-ab0b-46fa-c42c-3a894b185144" 680 | }, 681 | "outputs": [ 682 | { 683 | "name": "stdout", 684 | "output_type": "stream", 685 | "text": [ 686 | "Colab notebook detected. This cell will run indefinitely so that you can see errors and logs. To turn off, set debug=False in launch().\n", 687 | "Running on public URL: https://58549.gradio.app\n", 688 | "\n", 689 | "This share link expires in 72 hours. For free permanent hosting, check out Spaces (https://huggingface.co/spaces)\n" 690 | ] 691 | }, 692 | { 693 | "data": { 694 | "text/html": [ 695 | "
" 696 | ], 697 | "text/plain": [ 698 | "" 699 | ] 700 | }, 701 | "metadata": {}, 702 | "output_type": "display_data" 703 | } 704 | ], 705 | "source": [ 706 | "import gradio as gr\n", 707 | "\n", 708 | "def search(query,top_k=100):\n", 709 | " ans=[]\n", 710 | " ##### Semantic Search #####\n", 711 | " # Encode the query using the bi-encoder and find potentially relevant passages\n", 712 | " question_embedding = bi_encoder.encode(query, convert_to_tensor=True)\n", 713 | " hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)\n", 714 | " hits = hits[0] # Get the hits for the first query\n", 715 | " ##### Re-Ranking #####\n", 716 | " # Now, score all retrieved passages with the cross_encoder\n", 717 | " cross_inp = [[query, passages[hit['corpus_id']]] for hit in hits]\n", 718 | " cross_scores = cross_encoder.predict(cross_inp)\n", 719 | " # Sort results by the cross-encoder scores\n", 720 | " for idx in range(len(cross_scores)):\n", 721 | " hits[idx]['cross-score'] = cross_scores[idx]\n", 722 | " hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)\n", 723 | " for idx, hit in enumerate(hits[0:3]):\n", 724 | " ans.append(passages[hit['corpus_id']])\n", 725 | " return ans[0],ans[1],ans[2]\n", 726 | "\n", 727 | "iface = gr.Interface(fn=search, \n", 728 | " inputs=gr.Textbox(label=\"Query\"), \n", 729 | " outputs=[gr.Textbox(label=\"Answer 1\"),gr.Textbox(label=\"Answer 2\"),gr.Textbox(label=\"Answer 3\")])\n", 730 | "\n", 731 | "iface.launch(debug=True)" 732 | ] 733 | }, 734 | { 735 | "cell_type": "markdown", 736 | "metadata": { 737 | "id": "b6Ek7cORgDkQ" 738 | }, 739 | "source": [ 740 | "## 2. Host the Demo (for free) on Hugging Face Spaces\n", 741 | "\n", 742 | "Once you have made a Gradio demo, you can host it permanently on Hugging Face Spaces very easily:\n", 743 | "\n", 744 | "Here are the steps (shown in the GIF below):\n", 745 | "\n", 746 | "A. 
First, create a Hugging Face account if you do not already have one by visiting https://huggingface.co/ and clicking \"Sign Up\".\n", 747 | "\n", 748 | "B. Once you are logged in, click on your profile picture and then click on \"New Space\" underneath it to get to this page: https://huggingface.co/new-space\n", 749 | "\n", 750 | "C. Give your Space a name and a license. Select \"Gradio\" as the Space SDK, and then choose \"Public\" if you are fine with everyone accessing your Space and the underlying code.\n", 751 | "\n", 752 | "D. Then you will find a page that provides instructions on how to upload your files into the Git repository for that Space. You may also need to add a `requirements.txt` file to specify any Python package dependencies.\n", 753 | "\n", 754 | "E. Once you have pushed your files, that's it! Spaces will automatically build your Gradio demo, allowing you to share it with anyone, anywhere!\n", 755 | "\n", 756 | "![GIF](https://huggingface.co/blog/assets/28_gradio-spaces/spaces-demo-finalized.gif)\n", 757 | "\n", 758 | "\n", 759 | "\n" 760 | ] 761 | }, 762 | { 763 | "cell_type": "markdown", 764 | "metadata": { 765 | "id": "ZmMd2HDSj0st" 766 | }, 767 | "source": [ 768 | "Check out [Ask Wiki](https://huggingface.co/spaces/rajistics/Ask-Wiki) by rajistics \n" 769 | ] 770 | } 771 | ], 772 | "metadata": { 773 | "accelerator": "GPU", 774 | "colab": { 775 | "collapsed_sections": [], 776 | "name": "Pre-Trained Models and Semantic Search - Ask Wiki Notebook", 777 | "provenance": [] 778 | }, 779 | "gpuClass": "standard", 780 | "kernelspec": { 781 | "display_name": "Python 3", 782 | "language": "python", 783 | "name": "python3" 784 | }, 785 | "language_info": { 786 | "codemirror_mode": { 787 | "name": "ipython", 788 | "version": 3 789 | }, 790 | "file_extension": ".py", 791 | "mimetype": "text/x-python", 792 | "name": "python", 793 | "nbconvert_exporter": "python", 794 | "pygments_lexer": "ipython3", 795 | "version": "3.9.5" 796 | } 797 | }, 798 | 
"nbformat": 4, 799 | "nbformat_minor": 0 800 | } -------------------------------------------------------------------------------- /Databricks_demo/ML Quickstart_ Model Training with HuggingFace.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"cell_type":"markdown","source":["# Databricks & Hugging Face ML Quickstart: Model Training\n\nThis notebook provides a quick overview of machine learning model training on Databricks using Hugging Face transformers. The notebook also shows how to use MLflow to track the trained models.\n\nThis tutorial covers:\n- Part 1: Training a text classification transformer model with MLflow tracking\n\n### Requirements\n- Cluster running Databricks Runtime 7.5 ML or above\n- A GPU attached to the cluster (training is extremely slow without one)"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"528c6d2b-3c5a-4236-ac61-fa8cc8d4d323"}}},{"cell_type":"markdown","source":["### Libraries\nImport the necessary libraries. 
These libraries are preinstalled on Databricks Runtime for Machine Learning ([AWS](https://docs.databricks.com/runtime/mlruntime.html)|[Azure](https://docs.microsoft.com/azure/databricks/runtime/mlruntime)|[GCP](https://docs.gcp.databricks.com/runtime/mlruntime.html)) clusters and are tuned for compatibility and performance."],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"8738a402-24dc-4074-bebb-b51bec8e74db"}}},{"cell_type":"code","source":["%pip install transformers datasets mlflow torch"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"7c554af6-20e3-44ec-a8e1-b1e3411ab169"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"Python interpreter will be restarted.\nCollecting transformers\n Downloading transformers-4.20.1-py3-none-any.whl (4.4 MB)\nCollecting datasets\n Downloading datasets-2.3.2-py3-none-any.whl (362 kB)\nCollecting mlflow\n Downloading mlflow-1.27.0-py3-none-any.whl (17.9 MB)\nCollecting torch\n Downloading torch-1.12.0-cp38-cp38-manylinux1_x86_64.whl (776.3 MB)\nCollecting xxhash\n Downloading xxhash-3.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (212 kB)\nCollecting aiohttp\n Downloading aiohttp-3.8.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.3 MB)\nCollecting fsspec[http]>=2021.05.0\n Downloading fsspec-2022.5.0-py3-none-any.whl (140 kB)\nCollecting dill<0.3.6\n Downloading dill-0.3.5.1-py2.py3-none-any.whl (95 kB)\nRequirement already satisfied: numpy>=1.17 in /databricks/python3/lib/python3.8/site-packages (from datasets) (1.20.1)\nCollecting multiprocess\n Downloading multiprocess-0.70.13-py38-none-any.whl (131 kB)\nCollecting responses<0.19\n Downloading responses-0.18.0-py3-none-any.whl (38 kB)\nCollecting huggingface-hub<1.0.0,>=0.1.0\n Downloading 
huggingface_hub-0.8.1-py3-none-any.whl (101 kB)\nCollecting pyarrow>=6.0.0\n Downloading pyarrow-8.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (29.4 MB)\nRequirement already satisfied: packaging in /databricks/python3/lib/python3.8/site-packages (from datasets) (20.9)\nRequirement already satisfied: pandas in /databricks/python3/lib/python3.8/site-packages (from datasets) (1.2.4)\nCollecting tqdm>=4.62.1\n Downloading tqdm-4.64.0-py2.py3-none-any.whl (78 kB)\nRequirement already satisfied: requests>=2.19.0 in /databricks/python3/lib/python3.8/site-packages (from datasets) (2.25.1)\nCollecting pyyaml>=5.1\n Downloading PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (701 kB)\nRequirement already satisfied: filelock in /usr/local/lib/python3.8/dist-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.6.0)\nCollecting typing-extensions>=3.7.4.3\n Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)\nRequirement already satisfied: pyparsing>=2.0.2 in /databricks/python3/lib/python3.8/site-packages (from packaging->datasets) (2.4.7)\nRequirement already satisfied: certifi>=2017.4.17 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (2020.12.5)\nRequirement already satisfied: chardet<5,>=3.0.2 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (4.0.0)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (1.25.11)\nRequirement already satisfied: idna<3,>=2.5 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (2.10)\nRequirement already satisfied: pytz in /databricks/python3/lib/python3.8/site-packages (from mlflow) (2020.5)\nCollecting Flask\n Downloading Flask-2.1.3-py3-none-any.whl (95 kB)\nRequirement already satisfied: scipy in /databricks/python3/lib/python3.8/site-packages (from mlflow) 
(1.6.2)\nCollecting alembic\n Downloading alembic-1.8.1-py3-none-any.whl (209 kB)\nCollecting sqlparse>=0.3.1\n Downloading sqlparse-0.4.2-py3-none-any.whl (42 kB)\nRequirement already satisfied: protobuf>=3.12.0 in /databricks/python3/lib/python3.8/site-packages (from mlflow) (3.17.2)\nCollecting gitpython>=2.1.0\n Downloading GitPython-3.1.27-py3-none-any.whl (181 kB)\nCollecting databricks-cli>=0.8.7\n Downloading databricks-cli-0.17.0.tar.gz (81 kB)\nCollecting cloudpickle\n Downloading cloudpickle-2.1.0-py3-none-any.whl (25 kB)\nCollecting click>=7.0\n Downloading click-8.1.3-py3-none-any.whl (96 kB)\nCollecting sqlalchemy>=1.4.0\n Downloading SQLAlchemy-1.4.39-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)\nCollecting gunicorn\n Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB)\nCollecting docker>=4.0.0\n Downloading docker-5.0.3-py2.py3-none-any.whl (146 kB)\nCollecting querystring-parser\n Downloading querystring_parser-1.2.4-py2.py3-none-any.whl (7.9 kB)\nCollecting importlib-metadata!=4.7.0,>=3.7.0\n Downloading importlib_metadata-4.12.0-py3-none-any.whl (21 kB)\nRequirement already satisfied: entrypoints in /databricks/python3/lib/python3.8/site-packages (from mlflow) (0.3)\nCollecting prometheus-flask-exporter\n Downloading prometheus_flask_exporter-0.20.2-py3-none-any.whl (18 kB)\nCollecting pyjwt>=1.7.0\n Downloading PyJWT-2.4.0-py3-none-any.whl (18 kB)\nCollecting oauthlib>=3.1.0\n Downloading oauthlib-3.2.0-py3-none-any.whl (151 kB)\nCollecting tabulate>=0.7.7\n Downloading tabulate-0.8.10-py3-none-any.whl (29 kB)\nRequirement already satisfied: six>=1.10.0 in /databricks/python3/lib/python3.8/site-packages (from databricks-cli>=0.8.7->mlflow) (1.15.0)\nCollecting websocket-client>=0.32.0\n Downloading websocket_client-1.3.3-py3-none-any.whl (54 kB)\nCollecting gitdb<5,>=4.0.1\n Downloading gitdb-4.0.9-py3-none-any.whl (63 kB)\nCollecting smmap<6,>=3.0.1\n Downloading 
smmap-5.0.0-py3-none-any.whl (24 kB)\nCollecting zipp>=0.5\n Downloading zipp-3.8.1-py3-none-any.whl (5.6 kB)\nCollecting greenlet!=0.4.17\n Downloading greenlet-1.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (156 kB)\nCollecting regex!=2019.12.17\n Downloading regex-2022.7.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (765 kB)\nCollecting tokenizers!=0.11.3,<0.13,>=0.11.1\n Downloading tokenizers-0.12.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)\nCollecting async-timeout<5.0,>=4.0.0a3\n Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)\nCollecting charset-normalizer<3.0,>=2.0\n Downloading charset_normalizer-2.1.0-py3-none-any.whl (39 kB)\nCollecting frozenlist>=1.1.1\n Downloading frozenlist-1.3.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (158 kB)\nCollecting multidict<7.0,>=4.5\n Downloading multidict-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (121 kB)\nRequirement already satisfied: attrs>=17.3.0 in /databricks/python3/lib/python3.8/site-packages (from aiohttp->datasets) (20.3.0)\nCollecting yarl<2.0,>=1.0\n Downloading yarl-1.7.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (308 kB)\nCollecting aiosignal>=1.1.2\n Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB)\nCollecting importlib-resources\n Downloading importlib_resources-5.8.0-py3-none-any.whl (28 kB)\nCollecting Mako\n Downloading Mako-1.2.1-py3-none-any.whl (78 kB)\nCollecting Werkzeug>=2.0\n Downloading Werkzeug-2.1.2-py3-none-any.whl (224 kB)\nCollecting itsdangerous>=2.0\n Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)\nCollecting Jinja2>=3.0\n Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)\nRequirement already satisfied: MarkupSafe>=2.0 in /databricks/python3/lib/python3.8/site-packages (from Jinja2>=3.0->Flask->mlflow) (2.0.1)\nRequirement already satisfied: setuptools>=3.0 in 
/usr/local/lib/python3.8/dist-packages (from gunicorn->mlflow) (52.0.0)\nRequirement already satisfied: python-dateutil>=2.7.3 in /databricks/python3/lib/python3.8/site-packages (from pandas->datasets) (2.8.1)\nRequirement already satisfied: prometheus-client in /databricks/python3/lib/python3.8/site-packages (from prometheus-flask-exporter->mlflow) (0.10.1)\nBuilding wheels for collected packages: databricks-cli\n Building wheel for databricks-cli (setup.py): started\n Building wheel for databricks-cli (setup.py): finished with status 'done'\n Created wheel for databricks-cli: filename=databricks_cli-0.17.0-py3-none-any.whl size=141932 sha256=bb09e2cf09646974e0569af11512120f854d104b5284c4656f865a660e821cc9\n Stored in directory: /root/.cache/pip/wheels/bc/ef/2a/18885b70c6b78d4b9612ef2bf4bfdc7325f43db9d817d20f3f\nSuccessfully built databricks-cli\nInstalling collected packages: zipp, multidict, frozenlist, yarl, Werkzeug, smmap, Jinja2, itsdangerous, importlib-metadata, greenlet, click, charset-normalizer, async-timeout, aiosignal, websocket-client, typing-extensions, tqdm, tabulate, sqlalchemy, pyyaml, pyjwt, oauthlib, Mako, importlib-resources, gitdb, fsspec, Flask, dill, aiohttp, xxhash, tokenizers, sqlparse, responses, regex, querystring-parser, pyarrow, prometheus-flask-exporter, multiprocess, huggingface-hub, gunicorn, gitpython, docker, databricks-cli, cloudpickle, alembic, transformers, torch, mlflow, datasets\n Attempting uninstall: Jinja2\n Found existing installation: Jinja2 2.11.3\n Not uninstalling jinja2 at /databricks/python3/lib/python3.8/site-packages, outside environment /local_disk0/.ephemeral_nfs/envs/pythonEnv-bab12bc9-22d1-4101-97b5-6ae403f8662e\n Can't uninstall 'Jinja2'. 
No files were found to uninstall.\n Attempting uninstall: pyarrow\n Found existing installation: pyarrow 4.0.0\n Not uninstalling pyarrow at /databricks/python3/lib/python3.8/site-packages, outside environment /local_disk0/.ephemeral_nfs/envs/pythonEnv-bab12bc9-22d1-4101-97b5-6ae403f8662e\n Can't uninstall 'pyarrow'. No files were found to uninstall.\nSuccessfully installed Flask-2.1.3 Jinja2-3.1.2 Mako-1.2.1 Werkzeug-2.1.2 aiohttp-3.8.1 aiosignal-1.2.0 alembic-1.8.1 async-timeout-4.0.2 charset-normalizer-2.1.0 click-8.1.3 cloudpickle-2.1.0 databricks-cli-0.17.0 datasets-2.3.2 dill-0.3.5.1 docker-5.0.3 frozenlist-1.3.0 fsspec-2022.5.0 gitdb-4.0.9 gitpython-3.1.27 greenlet-1.1.2 gunicorn-20.1.0 huggingface-hub-0.8.1 importlib-metadata-4.12.0 importlib-resources-5.8.0 itsdangerous-2.1.2 mlflow-1.27.0 multidict-6.0.2 multiprocess-0.70.13 oauthlib-3.2.0 prometheus-flask-exporter-0.20.2 pyarrow-8.0.0 pyjwt-2.4.0 pyyaml-6.0 querystring-parser-1.2.4 regex-2022.7.9 responses-0.18.0 smmap-5.0.0 sqlalchemy-1.4.39 sqlparse-0.4.2 tabulate-0.8.10 tokenizers-0.12.1 torch-1.12.0 tqdm-4.64.0 transformers-4.20.1 typing-extensions-4.3.0 websocket-client-1.3.3 xxhash-3.0.0 yarl-1.7.2 zipp-3.8.1\nPython interpreter will be restarted.\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["Python interpreter will be restarted.\nCollecting transformers\n Downloading transformers-4.20.1-py3-none-any.whl (4.4 MB)\nCollecting datasets\n Downloading datasets-2.3.2-py3-none-any.whl (362 kB)\nCollecting mlflow\n Downloading mlflow-1.27.0-py3-none-any.whl (17.9 MB)\nCollecting torch\n Downloading torch-1.12.0-cp38-cp38-manylinux1_x86_64.whl (776.3 MB)\nCollecting xxhash\n Downloading xxhash-3.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (212 kB)\nCollecting aiohttp\n Downloading aiohttp-3.8.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.3 
MB)\nCollecting fsspec[http]>=2021.05.0\n Downloading fsspec-2022.5.0-py3-none-any.whl (140 kB)\nCollecting dill<0.3.6\n Downloading dill-0.3.5.1-py2.py3-none-any.whl (95 kB)\nRequirement already satisfied: numpy>=1.17 in /databricks/python3/lib/python3.8/site-packages (from datasets) (1.20.1)\nCollecting multiprocess\n Downloading multiprocess-0.70.13-py38-none-any.whl (131 kB)\nCollecting responses<0.19\n Downloading responses-0.18.0-py3-none-any.whl (38 kB)\nCollecting huggingface-hub<1.0.0,>=0.1.0\n Downloading huggingface_hub-0.8.1-py3-none-any.whl (101 kB)\nCollecting pyarrow>=6.0.0\n Downloading pyarrow-8.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (29.4 MB)\nRequirement already satisfied: packaging in /databricks/python3/lib/python3.8/site-packages (from datasets) (20.9)\nRequirement already satisfied: pandas in /databricks/python3/lib/python3.8/site-packages (from datasets) (1.2.4)\nCollecting tqdm>=4.62.1\n Downloading tqdm-4.64.0-py2.py3-none-any.whl (78 kB)\nRequirement already satisfied: requests>=2.19.0 in /databricks/python3/lib/python3.8/site-packages (from datasets) (2.25.1)\nCollecting pyyaml>=5.1\n Downloading PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (701 kB)\nRequirement already satisfied: filelock in /usr/local/lib/python3.8/dist-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.6.0)\nCollecting typing-extensions>=3.7.4.3\n Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)\nRequirement already satisfied: pyparsing>=2.0.2 in /databricks/python3/lib/python3.8/site-packages (from packaging->datasets) (2.4.7)\nRequirement already satisfied: certifi>=2017.4.17 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (2020.12.5)\nRequirement already satisfied: chardet<5,>=3.0.2 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (4.0.0)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 
in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (1.25.11)\nRequirement already satisfied: idna<3,>=2.5 in /databricks/python3/lib/python3.8/site-packages (from requests>=2.19.0->datasets) (2.10)\nRequirement already satisfied: pytz in /databricks/python3/lib/python3.8/site-packages (from mlflow) (2020.5)\nCollecting Flask\n Downloading Flask-2.1.3-py3-none-any.whl (95 kB)\nRequirement already satisfied: scipy in /databricks/python3/lib/python3.8/site-packages (from mlflow) (1.6.2)\nCollecting alembic\n Downloading alembic-1.8.1-py3-none-any.whl (209 kB)\nCollecting sqlparse>=0.3.1\n Downloading sqlparse-0.4.2-py3-none-any.whl (42 kB)\nRequirement already satisfied: protobuf>=3.12.0 in /databricks/python3/lib/python3.8/site-packages (from mlflow) (3.17.2)\nCollecting gitpython>=2.1.0\n Downloading GitPython-3.1.27-py3-none-any.whl (181 kB)\nCollecting databricks-cli>=0.8.7\n Downloading databricks-cli-0.17.0.tar.gz (81 kB)\nCollecting cloudpickle\n Downloading cloudpickle-2.1.0-py3-none-any.whl (25 kB)\nCollecting click>=7.0\n Downloading click-8.1.3-py3-none-any.whl (96 kB)\nCollecting sqlalchemy>=1.4.0\n Downloading SQLAlchemy-1.4.39-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)\nCollecting gunicorn\n Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB)\nCollecting docker>=4.0.0\n Downloading docker-5.0.3-py2.py3-none-any.whl (146 kB)\nCollecting querystring-parser\n Downloading querystring_parser-1.2.4-py2.py3-none-any.whl (7.9 kB)\nCollecting importlib-metadata!=4.7.0,>=3.7.0\n Downloading importlib_metadata-4.12.0-py3-none-any.whl (21 kB)\nRequirement already satisfied: entrypoints in /databricks/python3/lib/python3.8/site-packages (from mlflow) (0.3)\nCollecting prometheus-flask-exporter\n Downloading prometheus_flask_exporter-0.20.2-py3-none-any.whl (18 kB)\nCollecting pyjwt>=1.7.0\n Downloading PyJWT-2.4.0-py3-none-any.whl (18 kB)\nCollecting oauthlib>=3.1.0\n 
Downloading oauthlib-3.2.0-py3-none-any.whl (151 kB)\nCollecting tabulate>=0.7.7\n Downloading tabulate-0.8.10-py3-none-any.whl (29 kB)\nRequirement already satisfied: six>=1.10.0 in /databricks/python3/lib/python3.8/site-packages (from databricks-cli>=0.8.7->mlflow) (1.15.0)\nCollecting websocket-client>=0.32.0\n Downloading websocket_client-1.3.3-py3-none-any.whl (54 kB)\nCollecting gitdb<5,>=4.0.1\n Downloading gitdb-4.0.9-py3-none-any.whl (63 kB)\nCollecting smmap<6,>=3.0.1\n Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\nCollecting zipp>=0.5\n Downloading zipp-3.8.1-py3-none-any.whl (5.6 kB)\nCollecting greenlet!=0.4.17\n Downloading greenlet-1.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (156 kB)\nCollecting regex!=2019.12.17\n Downloading regex-2022.7.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (765 kB)\nCollecting tokenizers!=0.11.3,<0.13,>=0.11.1\n Downloading tokenizers-0.12.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)\nCollecting async-timeout<5.0,>=4.0.0a3\n Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)\nCollecting charset-normalizer<3.0,>=2.0\n Downloading charset_normalizer-2.1.0-py3-none-any.whl (39 kB)\nCollecting frozenlist>=1.1.1\n Downloading frozenlist-1.3.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (158 kB)\nCollecting multidict<7.0,>=4.5\n Downloading multidict-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (121 kB)\nRequirement already satisfied: attrs>=17.3.0 in /databricks/python3/lib/python3.8/site-packages (from aiohttp->datasets) (20.3.0)\nCollecting yarl<2.0,>=1.0\n Downloading yarl-1.7.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (308 kB)\nCollecting aiosignal>=1.1.2\n Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB)\nCollecting importlib-resources\n Downloading importlib_resources-5.8.0-py3-none-any.whl (28 kB)\nCollecting Mako\n Downloading 
Mako-1.2.1-py3-none-any.whl (78 kB)\nCollecting Werkzeug>=2.0\n Downloading Werkzeug-2.1.2-py3-none-any.whl (224 kB)\nCollecting itsdangerous>=2.0\n Downloading itsdangerous-2.1.2-py3-none-any.whl (15 kB)\nCollecting Jinja2>=3.0\n Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)\nRequirement already satisfied: MarkupSafe>=2.0 in /databricks/python3/lib/python3.8/site-packages (from Jinja2>=3.0->Flask->mlflow) (2.0.1)\nRequirement already satisfied: setuptools>=3.0 in /usr/local/lib/python3.8/dist-packages (from gunicorn->mlflow) (52.0.0)\nRequirement already satisfied: python-dateutil>=2.7.3 in /databricks/python3/lib/python3.8/site-packages (from pandas->datasets) (2.8.1)\nRequirement already satisfied: prometheus-client in /databricks/python3/lib/python3.8/site-packages (from prometheus-flask-exporter->mlflow) (0.10.1)\nBuilding wheels for collected packages: databricks-cli\n Building wheel for databricks-cli (setup.py): started\n Building wheel for databricks-cli (setup.py): finished with status 'done'\n Created wheel for databricks-cli: filename=databricks_cli-0.17.0-py3-none-any.whl size=141932 sha256=bb09e2cf09646974e0569af11512120f854d104b5284c4656f865a660e821cc9\n Stored in directory: /root/.cache/pip/wheels/bc/ef/2a/18885b70c6b78d4b9612ef2bf4bfdc7325f43db9d817d20f3f\nSuccessfully built databricks-cli\nInstalling collected packages: zipp, multidict, frozenlist, yarl, Werkzeug, smmap, Jinja2, itsdangerous, importlib-metadata, greenlet, click, charset-normalizer, async-timeout, aiosignal, websocket-client, typing-extensions, tqdm, tabulate, sqlalchemy, pyyaml, pyjwt, oauthlib, Mako, importlib-resources, gitdb, fsspec, Flask, dill, aiohttp, xxhash, tokenizers, sqlparse, responses, regex, querystring-parser, pyarrow, prometheus-flask-exporter, multiprocess, huggingface-hub, gunicorn, gitpython, docker, databricks-cli, cloudpickle, alembic, transformers, torch, mlflow, datasets\n Attempting uninstall: Jinja2\n Found existing installation: Jinja2 2.11.3\n Not 
uninstalling jinja2 at /databricks/python3/lib/python3.8/site-packages, outside environment /local_disk0/.ephemeral_nfs/envs/pythonEnv-bab12bc9-22d1-4101-97b5-6ae403f8662e\n Can't uninstall 'Jinja2'. No files were found to uninstall.\n Attempting uninstall: pyarrow\n Found existing installation: pyarrow 4.0.0\n Not uninstalling pyarrow at /databricks/python3/lib/python3.8/site-packages, outside environment /local_disk0/.ephemeral_nfs/envs/pythonEnv-bab12bc9-22d1-4101-97b5-6ae403f8662e\n Can't uninstall 'pyarrow'. No files were found to uninstall.\nSuccessfully installed Flask-2.1.3 Jinja2-3.1.2 Mako-1.2.1 Werkzeug-2.1.2 aiohttp-3.8.1 aiosignal-1.2.0 alembic-1.8.1 async-timeout-4.0.2 charset-normalizer-2.1.0 click-8.1.3 cloudpickle-2.1.0 databricks-cli-0.17.0 datasets-2.3.2 dill-0.3.5.1 docker-5.0.3 frozenlist-1.3.0 fsspec-2022.5.0 gitdb-4.0.9 gitpython-3.1.27 greenlet-1.1.2 gunicorn-20.1.0 huggingface-hub-0.8.1 importlib-metadata-4.12.0 importlib-resources-5.8.0 itsdangerous-2.1.2 mlflow-1.27.0 multidict-6.0.2 multiprocess-0.70.13 oauthlib-3.2.0 prometheus-flask-exporter-0.20.2 pyarrow-8.0.0 pyjwt-2.4.0 pyyaml-6.0 querystring-parser-1.2.4 regex-2022.7.9 responses-0.18.0 smmap-5.0.0 sqlalchemy-1.4.39 sqlparse-0.4.2 tabulate-0.8.10 tokenizers-0.12.1 torch-1.12.0 tqdm-4.64.0 transformers-4.20.1 typing-extensions-4.3.0 websocket-client-1.3.3 xxhash-3.0.0 yarl-1.7.2 zipp-3.8.1\nPython interpreter will be restarted.\n"]}}],"execution_count":0},{"cell_type":"markdown","source":["### Install Git LFS"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"8a18992e-ce3f-4b09-a6a1-ddd867006afa"}}},{"cell_type":"code","source":["%sh\ncurl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\nsudo apt-get install 
git-lfs"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"27caf826-3804-40f1-8cd8-bc72077ceeb0"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"Detected operating system as Ubuntu/focal.\nChecking for curl...\nDetected curl...\nChecking for gpg...\nDetected gpg...\nRunning apt-get update... done.\nInstalling apt-transport-https... done.\nInstalling /etc/apt/sources.list.d/github_git-lfs.list...done.\nImporting packagecloud gpg key... done.\nRunning apt-get update... done.\n\nThe repository is setup! You can now install packages.\nReading package lists...\nBuilding dependency tree...\nReading state information...\nThe following NEW packages will be installed:\n git-lfs\n0 upgraded, 1 newly installed, 0 to remove and 68 not upgraded.\nNeed to get 7,168 kB of archives.\nAfter this operation, 15.6 MB of additional disk space will be used.\nGet:1 https://packagecloud.io/github/git-lfs/ubuntu focal/main amd64 git-lfs amd64 3.2.0 [7,168 kB]\ndebconf: delaying package configuration, since apt-utils is not installed\nFetched 7,168 kB in 0s (15.4 MB/s)\nSelecting previously unselected package git-lfs.\n(Reading database ... \n(Reading database ... 5%\n(Reading database ... 10%\n(Reading database ... 15%\n(Reading database ... 20%\n(Reading database ... 25%\n(Reading database ... 30%\n(Reading database ... 35%\n(Reading database ... 40%\n(Reading database ... 45%\n(Reading database ... 50%\n(Reading database ... 55%\n(Reading database ... 60%\n(Reading database ... 65%\n(Reading database ... 70%\n(Reading database ... 75%\n(Reading database ... 80%\n(Reading database ... 85%\n(Reading database ... 90%\n(Reading database ... 95%\n(Reading database ... 100%\n(Reading database ... 
92257 files and directories currently installed.)\nPreparing to unpack .../git-lfs_3.2.0_amd64.deb ...\nUnpacking git-lfs (3.2.0) ...\nSetting up git-lfs (3.2.0) ...\nGit LFS initialized.\nProcessing triggers for man-db (2.9.1-1) ...\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["Detected operating system as Ubuntu/focal.\nChecking for curl...\nDetected curl...\nChecking for gpg...\nDetected gpg...\nRunning apt-get update... done.\nInstalling apt-transport-https... done.\nInstalling /etc/apt/sources.list.d/github_git-lfs.list...done.\nImporting packagecloud gpg key... done.\nRunning apt-get update... done.\n\nThe repository is setup! You can now install packages.\nReading package lists...\nBuilding dependency tree...\nReading state information...\nThe following NEW packages will be installed:\n git-lfs\n0 upgraded, 1 newly installed, 0 to remove and 68 not upgraded.\nNeed to get 7,168 kB of archives.\nAfter this operation, 15.6 MB of additional disk space will be used.\nGet:1 https://packagecloud.io/github/git-lfs/ubuntu focal/main amd64 git-lfs amd64 3.2.0 [7,168 kB]\ndebconf: delaying package configuration, since apt-utils is not installed\nFetched 7,168 kB in 0s (15.4 MB/s)\nSelecting previously unselected package git-lfs.\n(Reading database ... \n(Reading database ... 5%\n(Reading database ... 10%\n(Reading database ... 15%\n(Reading database ... 20%\n(Reading database ... 25%\n(Reading database ... 30%\n(Reading database ... 35%\n(Reading database ... 40%\n(Reading database ... 45%\n(Reading database ... 50%\n(Reading database ... 55%\n(Reading database ... 60%\n(Reading database ... 65%\n(Reading database ... 70%\n(Reading database ... 75%\n(Reading database ... 80%\n(Reading database ... 85%\n(Reading database ... 90%\n(Reading database ... 95%\n(Reading database ... 100%\n(Reading database ... 
92257 files and directories currently installed.)\nPreparing to unpack .../git-lfs_3.2.0_amd64.deb ...\nUnpacking git-lfs (3.2.0) ...\nSetting up git-lfs (3.2.0) ...\nGit LFS initialized.\nProcessing triggers for man-db (2.9.1-1) ...\n"]}}],"execution_count":0},{"cell_type":"code","source":["import mlflow\nimport torch\n#from hyperopt import fmin, tpe, hp, SparkTrials, Trials, STATUS_OK\n#from hyperopt.pyll import scope\nfrom datasets import load_dataset, load_metric\nfrom huggingface_hub import notebook_login\nfrom transformers import (\n AutoModelForSequenceClassification,\n AutoTokenizer,\n Trainer,\n TrainingArguments,\n)"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"cec61d6d-ea1a-4d2f-9ee6-625393a24aa5"}},"outputs":[],"execution_count":0},{"cell_type":"markdown","source":["### Log into Hugging Face Hub"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"094921b5-f746-4303-af4b-7a4d61b3b48a"}}},{"cell_type":"markdown","source":["This uses the command line to log in to the Hugging Face Hub. 
If the Hugging Face Hub is private, specify its location using the \"HF_ENDPOINT\" environment variable."],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"4af8d844-c5f0-45c0-80c8-c0b74e55abe9"}}},{"cell_type":"code","source":["from huggingface_hub.commands.user import _login\nfrom huggingface_hub import HfApi\napi = HfApi()\n_login(hf_api = api, token = \"API Token\")"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"896afdc7-85b9-4ad0-ac9e-2a68def84532"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"Login successful\nYour token has been saved to /root/.huggingface/token\n\u001B[1m\u001B[31mAuthenticated through git-credential store but this isn't the helper defined on your machine.\nYou might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default\n\ngit config --global credential.helper store\u001B[0m\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["Login successful\nYour token has been saved to /root/.huggingface/token\n\u001B[1m\u001B[31mAuthenticated through git-credential store but this isn't the helper defined on your machine.\nYou might have to re-authenticate when pushing to the Hugging Face Hub. 
Run the following command in your terminal in case you want to set this credential helper as the default\n\ngit config --global credential.helper store\u001B[0m\n"]}}],"execution_count":0},{"cell_type":"code","source":["#Verify Login\n!huggingface-cli whoami"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"0a3fc9dd-7a03-41c4-a43a-6a2a3ee610bd"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"rajistics\r\n\u001B[1morgs: \u001B[0m huggingface,spaces-explorers,demo-org,HF-test-lab,qualitydatalab\r\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["rajistics\r\n\u001B[1morgs: \u001B[0m huggingface,spaces-explorers,demo-org,HF-test-lab,qualitydatalab\r\n"]}}],"execution_count":0},{"cell_type":"markdown","source":["### Load data\nThe tutorial uses the IMDb dataset of movie reviews. The complete [dataset card](https://huggingface.co/datasets/imdb) can be found on Hugging Face with details on the dataset. \n\nThe goal is to classify reviews as positive or negative. \n\nThe dataset is loaded using the Hugging Face `datasets` package."],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"b2f67ffe-ad1b-49a1-a7cf-603daa8c9890"}}},{"cell_type":"code","source":["# Load and preprocess data\ntrain_dataset, test_dataset = load_dataset(\"imdb\", split=[\"train\", \"test\"])"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"51fd8a9f-bef3-4fbd-90ce-8531f5f71205"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":{"text/plain":"Downloading builder script: 0%| | 0.00/1.79k [00:00 of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. 
Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["Parameter 'function'= of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\n"]}},{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":{"text/plain":" 0%| | 0/25 [00:00\n .ansiout {\n display: block;\n unicode-bidi: embed;\n white-space: pre-wrap;\n word-wrap: break-word;\n word-break: break-all;\n font-family: \"Source Code Pro\", \"Menlo\", monospace;;\n font-size: 13px;\n color: #555;\n margin-left: 4px;\n line-height: 19px;\n }\n"]}}],"execution_count":0},{"cell_type":"code","source":["mlflow.end_run()\ntrainer.push_to_hub()"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"12d375d9-a641-4b64-a6d1-38a01c095070"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"Saving model checkpoint to ./output\nConfiguration saved in ./output/config.json\nModel weights saved in 
./output/pytorch_model.bin\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["Saving model checkpoint to ./output\nConfiguration saved in ./output/config.json\nModel weights saved in ./output/pytorch_model.bin\n"]}},{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":{"text/plain":"Upload file pytorch_model.bin: 0%| | 32.0k/251M [00:00 main\n\nDropping the following result as it does not have all the necessary fields:\n{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'dataset': {'name': 'imdb', 'type': 'imdb', 'args': 'plain_text'}}\nTo https://huggingface.co/rajistics/distilbert-imdb-mlflow\n 565ce9d..2e139c6 main -> main\n\nOut[12]: 'https://huggingface.co/rajistics/distilbert-imdb-mlflow/commit/565ce9de2a3bf303432d5ca277711f8237b8097c'","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["To https://huggingface.co/rajistics/distilbert-imdb-mlflow\n 11f9d35..565ce9d main -> main\n\nDropping the following result as it does not have all the necessary fields:\n{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'dataset': {'name': 'imdb', 'type': 'imdb', 'args': 'plain_text'}}\nTo https://huggingface.co/rajistics/distilbert-imdb-mlflow\n 565ce9d..2e139c6 main -> main\n\nOut[12]: 'https://huggingface.co/rajistics/distilbert-imdb-mlflow/commit/565ce9de2a3bf303432d5ca277711f8237b8097c'"]}}],"execution_count":0},{"cell_type":"code","source":["from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline\n# Model Card: https://huggingface.co/lvwerra/distilbert-imdb\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-cased\")\nmodel = AutoModelForSequenceClassification.from_pretrained(\"rajistics/distilbert-imdb-mlflow\")\nmoviereview = pipeline(\"text-classification\", model 
= model, tokenizer = tokenizer)"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"5bcdb820-3b8f-43f0-8c86-ff34dc5a8bc7"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a\nModel config DistilBertConfig {\n \"_name_or_path\": \"distilbert-base-cased\",\n \"activation\": \"gelu\",\n \"attention_dropout\": 0.1,\n \"dim\": 768,\n \"dropout\": 0.1,\n \"hidden_dim\": 3072,\n \"initializer_range\": 0.02,\n \"max_position_embeddings\": 512,\n \"model_type\": \"distilbert\",\n \"n_heads\": 12,\n \"n_layers\": 6,\n \"output_past\": true,\n \"pad_token_id\": 0,\n \"qa_dropout\": 0.1,\n \"seq_classif_dropout\": 0.2,\n \"sinusoidal_pos_embds\": false,\n \"tie_weights_\": true,\n \"transformers_version\": \"4.20.1\",\n \"vocab_size\": 28996\n}\n\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/ba377304984dc63e3ede0e23a938bbbf04d5c3835b66d5bb48343aecca188429.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/acb5c2138c1f8c84f074b86dafce3631667fccd6efcb1a7ea1320cf75c386a36.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/added_tokens.json from cache at None\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/special_tokens_map.json from cache at None\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer_config.json 
from cache at /root/.cache/huggingface/transformers/81e970e5e6ec68be12da0f8f3b2f2469c78d579282299a2ea65b4b7441719107.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f\nloading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a\nModel config DistilBertConfig {\n \"_name_or_path\": \"distilbert-base-cased\",\n \"activation\": \"gelu\",\n \"attention_dropout\": 0.1,\n \"dim\": 768,\n \"dropout\": 0.1,\n \"hidden_dim\": 3072,\n \"initializer_range\": 0.02,\n \"max_position_embeddings\": 512,\n \"model_type\": \"distilbert\",\n \"n_heads\": 12,\n \"n_layers\": 6,\n \"output_past\": true,\n \"pad_token_id\": 0,\n \"qa_dropout\": 0.1,\n \"seq_classif_dropout\": 0.2,\n \"sinusoidal_pos_embds\": false,\n \"tie_weights_\": true,\n \"transformers_version\": \"4.20.1\",\n \"vocab_size\": 28996\n}\n\nloading configuration file https://huggingface.co/rajistics/distilbert-imdb-mlflow/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ef11e290776d32ed8373fb83bdc594abae0602cc3d4a6530f3ed9533e98aac64.a8ccf646a0873f3d60805d2779a0b1caf3ca469ffd72e36224dee22566738d73\nModel config DistilBertConfig {\n \"_name_or_path\": \"rajistics/distilbert-imdb-mlflow\",\n \"activation\": \"gelu\",\n \"architectures\": [\n \"DistilBertForSequenceClassification\"\n ],\n \"attention_dropout\": 0.1,\n \"dim\": 768,\n \"dropout\": 0.1,\n \"hidden_dim\": 3072,\n \"initializer_range\": 0.02,\n \"max_position_embeddings\": 512,\n \"model_type\": \"distilbert\",\n \"n_heads\": 12,\n \"n_layers\": 6,\n \"output_past\": true,\n \"pad_token_id\": 0,\n \"problem_type\": \"single_label_classification\",\n \"qa_dropout\": 0.1,\n \"seq_classif_dropout\": 0.2,\n \"sinusoidal_pos_embds\": false,\n \"tie_weights_\": true,\n \"torch_dtype\": \"float32\",\n 
\"transformers_version\": \"4.20.1\",\n \"vocab_size\": 28996\n}\n\nhttps://huggingface.co/rajistics/distilbert-imdb-mlflow/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpmdxxho8z\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a\nModel config DistilBertConfig {\n \"_name_or_path\": \"distilbert-base-cased\",\n \"activation\": \"gelu\",\n \"attention_dropout\": 0.1,\n \"dim\": 768,\n \"dropout\": 0.1,\n \"hidden_dim\": 3072,\n \"initializer_range\": 0.02,\n \"max_position_embeddings\": 512,\n \"model_type\": \"distilbert\",\n \"n_heads\": 12,\n \"n_layers\": 6,\n \"output_past\": true,\n \"pad_token_id\": 0,\n \"qa_dropout\": 0.1,\n \"seq_classif_dropout\": 0.2,\n \"sinusoidal_pos_embds\": false,\n \"tie_weights_\": true,\n \"transformers_version\": \"4.20.1\",\n \"vocab_size\": 28996\n}\n\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/ba377304984dc63e3ede0e23a938bbbf04d5c3835b66d5bb48343aecca188429.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/acb5c2138c1f8c84f074b86dafce3631667fccd6efcb1a7ea1320cf75c386a36.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/added_tokens.json from cache at None\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/special_tokens_map.json 
from cache at None\nloading file https://huggingface.co/distilbert-base-cased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/81e970e5e6ec68be12da0f8f3b2f2469c78d579282299a2ea65b4b7441719107.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f\nloading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a\nModel config DistilBertConfig {\n \"_name_or_path\": \"distilbert-base-cased\",\n \"activation\": \"gelu\",\n \"attention_dropout\": 0.1,\n \"dim\": 768,\n \"dropout\": 0.1,\n \"hidden_dim\": 3072,\n \"initializer_range\": 0.02,\n \"max_position_embeddings\": 512,\n \"model_type\": \"distilbert\",\n \"n_heads\": 12,\n \"n_layers\": 6,\n \"output_past\": true,\n \"pad_token_id\": 0,\n \"qa_dropout\": 0.1,\n \"seq_classif_dropout\": 0.2,\n \"sinusoidal_pos_embds\": false,\n \"tie_weights_\": true,\n \"transformers_version\": \"4.20.1\",\n \"vocab_size\": 28996\n}\n\nloading configuration file https://huggingface.co/rajistics/distilbert-imdb-mlflow/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ef11e290776d32ed8373fb83bdc594abae0602cc3d4a6530f3ed9533e98aac64.a8ccf646a0873f3d60805d2779a0b1caf3ca469ffd72e36224dee22566738d73\nModel config DistilBertConfig {\n \"_name_or_path\": \"rajistics/distilbert-imdb-mlflow\",\n \"activation\": \"gelu\",\n \"architectures\": [\n \"DistilBertForSequenceClassification\"\n ],\n \"attention_dropout\": 0.1,\n \"dim\": 768,\n \"dropout\": 0.1,\n \"hidden_dim\": 3072,\n \"initializer_range\": 0.02,\n \"max_position_embeddings\": 512,\n \"model_type\": \"distilbert\",\n \"n_heads\": 12,\n \"n_layers\": 6,\n \"output_past\": true,\n \"pad_token_id\": 0,\n \"problem_type\": \"single_label_classification\",\n \"qa_dropout\": 0.1,\n 
\"seq_classif_dropout\": 0.2,\n \"sinusoidal_pos_embds\": false,\n \"tie_weights_\": true,\n \"torch_dtype\": \"float32\",\n \"transformers_version\": \"4.20.1\",\n \"vocab_size\": 28996\n}\n\nhttps://huggingface.co/rajistics/distilbert-imdb-mlflow/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpmdxxho8z\n"]}},{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":{"text/plain":"Downloading: 0%| | 0.00/251M [00:00\n\nYou can then click the experiment page icon to display the more detailed MLflow experiment page ([AWS](https://docs.databricks.com/applications/mlflow/tracking.html#notebook-experiments)|[Azure](https://docs.microsoft.com/azure/databricks/applications/mlflow/tracking#notebook-experiments)|[GCP](https://docs.gcp.databricks.com/applications/mlflow/tracking.html#notebook-experiments)). This page allows you to compare runs and view details for specific runs.\n\n"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"70e02a64-6878-4b9b-9297-5390c9e19ddc"}}},{"cell_type":"code","source":["runs = mlflow.search_runs(\"3759898664210413\")"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"00ff072f-fccd-41ae-ba16-60340e4b6379"}},"outputs":[],"execution_count":0},{"cell_type":"code","source":["import pandas\nruns.to_csv(\"output/mlflow_runs.csv\")"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"be4baec5-089b-4537-a53d-dfe183f2c68b"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"fatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): 
.git\r\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["fatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): .git\r\n"]}}],"execution_count":0},{"cell_type":"code","source":["%sh\ncd output\ngit add mlflow_runs.csv\ngit commit -m \"Add MLFlow results\"\ngit push"],"metadata":{"application/vnd.databricks.v1+cell":{"title":"","showTitle":false,"inputWidgets":{},"nuid":"410156e8-69a3-4adc-84f2-72df68582c81"}},"outputs":[{"output_type":"display_data","metadata":{"application/vnd.databricks.v1+output":{"datasetInfos":[],"data":"[main 7d3e3d7] Add MLFlow results\n 1 file changed, 4 insertions(+)\n create mode 100644 mlflow_runs.csv\nTo https://huggingface.co/rajistics/distilbert-imdb-mlflow\n 2e139c6..7d3e3d7 main -> main\n","removedWidgets":[],"addedWidgets":{},"metadata":{},"type":"ansi","arguments":{}}},"output_type":"display_data","data":{"text/plain":["[main 7d3e3d7] Add MLFlow results\n 1 file changed, 4 insertions(+)\n create mode 100644 mlflow_runs.csv\nTo https://huggingface.co/rajistics/distilbert-imdb-mlflow\n 2e139c6..7d3e3d7 main -> main\n"]}}],"execution_count":0}],"metadata":{"application/vnd.databricks.v1+notebook":{"notebookName":"ML Quickstart: Model Training with HuggingFace","dashboards":[],"notebookMetadata":{"pythonIndentUnit":2},"language":"python","widgets":{},"notebookOrigID":3759898664210413}},"nbformat":4,"nbformat_minor":0} 2 | -------------------------------------------------------------------------------- /Enterprise_AI_intro/Transformation_AI_Slides.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rajshah4/huggingface-demos/3be106ea99e8949c98e514a4f763693ca4e4e117/Enterprise_AI_intro/Transformation_AI_Slides.pdf 
-------------------------------------------------------------------------------- /FinBERT/FinBERT talk.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rajshah4/huggingface-demos/3be106ea99e8949c98e514a4f763693ca4e4e117/FinBERT/FinBERT talk.pdf -------------------------------------------------------------------------------- /FoodApp/Build AI Apps Minutes July 2022.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rajshah4/huggingface-demos/3be106ea99e8949c98e514a4f763693ca4e4e117/FoodApp/Build AI Apps Minutes July 2022.pdf -------------------------------------------------------------------------------- /FoodApp/Indian_Food_Translator.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "id": "gh6QOr-qO4Ym" 7 | }, 8 | "source": [ 9 | "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/02_ml-demos-with-gradio.ipynb)\n", 10 | "\n", 11 | "\n", 12 | "Transformers Pipeline Demo Notebook\n", 13 | "\n", 14 | "💡 **Welcome!**\n", 15 | "\n", 16 | "This notebook is a self-contained way to start using widely known Open Source technologies (`transformers`, `gradio`, etc). The core use case is getting familiar with Transformers and building a web application. \n", 17 | "- This notebook can be found at [https://bit.ly/raj_food](https://bit.ly/raj_food)\n", 18 | "- The accompanying slides are at my github for [huggingface-demos](https://github.com/rajshah4/huggingface-demos/tree/main/FoodApp). 
\n", 19 | "- A version of the app in spaces, [Indian Food Translator app](https://huggingface.co/spaces/rajistics/Indian_food_translator).\n" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": { 26 | "id": "lb65KY8VcSV8" 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "!pip install --quiet git+https://github.com/huggingface/transformers.git \n", 31 | "!pip install --quiet gradio\n", 32 | "!pip install --quiet datasets\n", 33 | "import os" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": { 39 | "id": "NkJmA-r5L0EB" 40 | }, 41 | "source": [ 42 | "# Tutorial: Using Pretrained Models and Building Demos with Gradio ⚡ & Hugging Face 🤗 " 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": { 48 | "id": "D_Iv1CJZPekG" 49 | }, 50 | "source": [ 51 | "**Learning goals:** The goal of this tutorial is to learn How To\n", 52 | "\n", 53 | "1. Use pre-trained models from the transformers library and that are available on the Hugging Face Hub\n", 54 | "2. Building a web demo, [Indian Food Detector](https://huggingface.co/spaces/rajistics/Indian_food_translator)\n", 55 | "\n", 56 | "**Duration**: 45 minutes\n", 57 | "\n", 58 | "**Prerequisites:** Knowledge of Python and basic familiarity with machine learning \n", 59 | "\n", 60 | "**Author**: [Rajiv Shah](https://twitter.com/rajistics) (feel free to ping me with any questions about this tutorial) \n", 61 | "\n", 62 | "All of these steps can be done for free! 
All you need is an Internet browser and a place where you can write Python 👩‍💻" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": { 68 | "id": "_-wtA32Tbfbd" 69 | }, 70 | "source": [ 71 | "# Let's Start by Exploring Tasks at Hugging Face Hub\n", 72 | "[Hugging Face Tasks](https://hf.co/tasks) \n", 73 | "\n", 74 | "---\n", 75 | "\n", 76 | "[Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition)\n", 77 | "\n", 78 | "[Object Detection](https://huggingface.co/tasks/object-detection)\n", 79 | "\n", 80 | "[Summarization](https://huggingface.co/tasks/summarization)\n" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": { 86 | "id": "Vqq4sZ4Qb8me" 87 | }, 88 | "source": [ 89 | "## Let's Run Pretrained Models using Python" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": { 95 | "id": "DpvE1ionWmLy" 96 | }, 97 | "source": [ 98 | "The [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) supports many common tasks out-of-the-box:\n", 99 | "\n", 100 | "**Text**:\n", 101 | "* Sentiment analysis: classify the polarity of a given text.\n", 102 | "* Text generation (in English): generate text from a given input.\n", 103 | "* Named entity recognition (NER): label each word with the entity it represents (person, date, location, etc.).\n", 104 | "* Question answering: extract the answer from the context, given some context and a question.\n", 105 | "* Fill-mask: fill in the blank given a text with masked words.\n", 106 | "* Summarization: generate a summary of a long sequence of text or document.\n", 107 | "* Translation: translate text into another language.\n", 108 | "* Feature extraction: create a tensor representation of the text.\n", 109 | "\n", 110 | "**Image**:\n", 111 | "* Image classification: classify an image.\n", 112 | "* Image segmentation: classify every pixel in an image.\n", 113 | "* Object detection: detect objects within an 
image.\n", 114 | "\n", 115 | "**Audio**:\n", 116 | "* Audio classification: assign a label to a given segment of audio.\n", 117 | "* Automatic speech recognition (ASR): transcribe audio data into text.\n", 118 | "\n", 119 | "\n", 120 | "\n", 121 | "For more details about the [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline) and associated tasks, refer to the documentation [here](https://huggingface.co/docs/transformers/main/en/./main_classes/pipelines).\n" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": { 127 | "id": "wBYp9IOZDWNB" 128 | }, 129 | "source": [ 130 | "###Text" 131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "execution_count": null, 136 | "metadata": { 137 | "id": "BKkkyOlfcing" 138 | }, 139 | "outputs": [], 140 | "source": [ 141 | "#Sentiment Analysis\n", 142 | "from transformers import pipeline\n", 143 | "sent_classifier = pipeline(\"sentiment-analysis\")\n", 144 | "sent_classifier(\"I was pretty happy with the sneakers\")" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "metadata": { 151 | "id": "pJezrNIEBh6s" 152 | }, 153 | "outputs": [], 154 | "source": [ 155 | "#Text Generation\n", 156 | "generator = pipeline(\"text-generation\")\n", 157 | "#generator(\"In this course, we will teach you how to\")\n", 158 | "generator(\"Once upon a time,\")" 159 | ] 160 | }, 161 | { 162 | "cell_type": "markdown", 163 | "metadata": { 164 | "id": "F3iwLYgdDd4-" 165 | }, 166 | "source": [ 167 | "###Image" 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": null, 173 | "metadata": { 174 | "id": "4uImYTfZC5Kr" 175 | }, 176 | "outputs": [], 177 | "source": [ 178 | "vision_classifier = pipeline(task=\"image-classification\")\n", 179 | "result = vision_classifier(\n", 180 | " images=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg\"\n", 181 | ")\n", 182 | 
"print(\"\\n\".join([f\"Class {d['label']} with score {round(d['score'], 4)}\" for d in result]))" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "source": [ 188 | "### Translation Model" 189 | ], 190 | "metadata": { 191 | "id": "7A2paz6EOZr9" 192 | } 193 | }, 194 | { 195 | "cell_type": "code", 196 | "execution_count": null, 197 | "metadata": { 198 | "id": "SaEWTfmjcysx" 199 | }, 200 | "outputs": [], 201 | "source": [ 202 | "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline\n", 203 | "#Background on the model: https://huggingface.co/facebook/nllb-200-distilled-600M\n", 204 | "#Get list of language codes: https://github.com/facebookresearch/flores/tree/main/flores200#languages-in-flores-200\n", 205 | "\n", 206 | "model = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/nllb-200-distilled-600M\")\n", 207 | "tokenizer = AutoTokenizer.from_pretrained(\"facebook/nllb-200-distilled-600M\")\n", 208 | "\n", 209 | "translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang=\"eng_Latn\", tgt_lang='guj_Gujr')\n", 210 | "\n", 211 | "translator(\"UN Chief says there is no military solution in Syria\")" 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "source": [ 217 | "Language Codes: \n", 218 | "ace_Arab - Acehnese (Arabic script)\n", 219 | "\n", 220 | "asm_Beng - Assamese\n", 221 | "\n", 222 | "bel_Cyrl - Belarusian\n", 223 | "\n", 224 | "bod_Tibt - Standard Tibetan\n", 225 | "\n", 226 | "guj_Gujr - Gujarati\n", 227 | "\n", 228 | "kmr_Latn - Northern Kurdish\n", 229 | "\n", 230 | "tir_Ethi - Tigrinya\n", 231 | "\n", 232 | "yue_Hant - Yue Chinese\n" 233 | ], 234 | "metadata": { 235 | "id": "a3zpz5eilPeQ" 236 | } 237 | }, 238 | { 239 | "cell_type": "markdown", 240 | "source": [ 241 | "## Fine Tuned Pretrained Models\n", 242 | "\n", 243 | "There are thousands of models available that are trained for specific tasks. 
You can also [fine tune](https://huggingface.co/docs/transformers/training) pre-trained models on your own data to solve your particular tasks." 244 | ], 245 | "metadata": { 246 | "id": "gxyM_ztwdqx8" 247 | } 248 | }, 249 | { 250 | "cell_type": "markdown", 251 | "metadata": { 252 | "id": "-RZA0oOhmsh5" 253 | }, 254 | "source": [ 255 | "### Finetuned FinBERT for Forward Looking Statements\n", 256 | "\n", 257 | "[FinBERT-FLS](https://huggingface.co/yiyanghkust/finbert-fls) is a model developed for identifying Forward-looking statements (FLS). These statements inform investors of managers’ beliefs and opinions about a firm's future events or results. Identifying forward-looking statements from corporate reports can assist investors in financial analysis. FinBERT-FLS is a FinBERT model fine-tuned on 3,500 manually annotated sentences from the Management Discussion and Analysis section of annual reports of Russell 3000 firms." 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": null, 263 | "metadata": { 264 | "id": "rGyamloQmdxI" 265 | }, 266 | "outputs": [], 267 | "source": [ 268 | "from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline\n", 269 | "\n", 270 | "tokenizer = AutoTokenizer.from_pretrained(\"yiyanghkust/finbert-fls\")\n", 271 | "\n", 272 | "model = AutoModelForSequenceClassification.from_pretrained(\"yiyanghkust/finbert-fls\")\n", 273 | "\n", 274 | "\n", 275 | "nlp = pipeline(\"text-classification\", model=model, tokenizer=tokenizer)\n", 276 | "results = nlp('In the past, the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.')\n", 277 | "print(results) " 278 | ] 279 | }, 280 | { 281 | "cell_type": "markdown", 282 | "metadata": { 283 | "id": "QYA6nfG9BKPB" 284 | }, 285 | "source": [ 286 | "## Inference Pipelines from Hugging Face" 287 | ] 288 | }, 289 | { 290 | "cell_type": "markdown", 291 | "metadata": { 292 | "id": "wZhAaT_PZ6EF" 293 | }, 294 | "source": [ 295 | "For production 
use, you can use the [inference API](https://huggingface.co/inference-api) to get predictions via simple API calls. To get the snippet, just go here:\n", 296 | "![](https://i.ibb.co/P9yyTHg/Screen-Shot-2022-07-01-at-10-30-20-AM.png)" 297 | ] 298 | }, 299 | { 300 | "cell_type": "code", 301 | "execution_count": null, 302 | "metadata": { 303 | "id": "fNWp4t5UBIc_" 304 | }, 305 | "outputs": [], 306 | "source": [ 307 | "# Example Snippet\n", 308 | "import requests\n", 309 | "\n", 310 | "API_URL = \"https://api-inference.huggingface.co/models/yiyanghkust/finbert-fls\"\n", 311 | "headers = {\"Authorization\": \"Bearer {API_TOKEN}\"} ###Add your API Key here after Bearer\n", 312 | "\n", 313 | "def query(payload):\n", 314 | "\tresponse = requests.post(API_URL, headers=headers, json=payload)\n", 315 | "\treturn response.json()\n", 316 | "\t\n", 317 | "output = query({\n", 318 | "\t\"inputs\": \"In the past, the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.\",\n", 319 | "})" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "source": [ 325 | "### Image Classification for Indian Food\n", 326 | "\n", 327 | "[Indian Food Detector](https://huggingface.co/rajistics/finetuned-indian-food) is a pretrained model available on the Hugging Face Hub. It was fine-tuned using the [indian food dataset](https://huggingface.co/datasets/rajistics/indian_food_images)." 
328 | ], 329 | "metadata": { 330 | "id": "c87wQvZ7hCaW" 331 | } 332 | }, 333 | { 334 | "cell_type": "code", 335 | "source": [ 336 | "from transformers import AutoFeatureExtractor, AutoModelForImageClassification\n", 337 | "extractor = AutoFeatureExtractor.from_pretrained(\"rajistics/finetuned-indian-food\")\n", 338 | "model = AutoModelForImageClassification.from_pretrained(\"rajistics/finetuned-indian-food\")" 339 | ], 340 | "metadata": { 341 | "id": "gU8SipmIeSvn" 342 | }, 343 | "execution_count": null, 344 | "outputs": [] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "source": [ 349 | "#let's get an image to test with using streaming option\n", 350 | "from datasets import load_dataset\n", 351 | "dataset = load_dataset(\"rajistics/indian_food_images\",split='test', streaming=True)" 352 | ], 353 | "metadata": { 354 | "id": "Y6nNEldEf30H" 355 | }, 356 | "execution_count": null, 357 | "outputs": [] 358 | }, 359 | { 360 | "cell_type": "code", 361 | "source": [ 362 | "imagepic = next(iter(dataset))['image']\n", 363 | "imagepic" 364 | ], 365 | "metadata": { 366 | "id": "tKpcmmf8hY8S" 367 | }, 368 | "execution_count": null, 369 | "outputs": [] 370 | }, 371 | { 372 | "cell_type": "code", 373 | "source": [ 374 | "from PIL import Image\n", 375 | "\n", 376 | "inputs = extractor(images=imagepic, return_tensors=\"pt\")\n", 377 | "outputs = model(**inputs)\n", 378 | "logits = outputs.logits\n", 379 | "predicted_class_idx = logits.argmax(-1).item()\n", 380 | "print(\"Predicted class:\", model.config.id2label[predicted_class_idx])" 381 | ], 382 | "metadata": { 383 | "id": "93jCa4v7e8NR" 384 | }, 385 | "execution_count": null, 386 | "outputs": [] 387 | }, 388 | { 389 | "cell_type": "code", 390 | "source": [ 391 | "import requests\n", 392 | "im_url = Image.open(requests.get('https://huggingface.co/rajistics/finetuned-indian-food/resolve/main/126.jpg', stream=True).raw)\n", 393 | "im_url" 394 | ], 395 | "metadata": { 396 | "id": "SgrNwGBA7ETt" 397 | }, 398 | "execution_count": 
null, 399 | "outputs": [] 400 | }, 401 | { 402 | "cell_type": "code", 403 | "source": [ 404 | "inputs = extractor(images=im_url, return_tensors=\"pt\")\n", 405 | "outputs = model(**inputs)\n", 406 | "logits = outputs.logits\n", 407 | "predicted_class_idx = logits.argmax(-1).item()\n", 408 | "print(\"Predicted class:\", model.config.id2label[predicted_class_idx])" 409 | ], 410 | "metadata": { 411 | "id": "bHQCWfxr7rTR" 412 | }, 413 | "execution_count": null, 414 | "outputs": [] 415 | }, 416 | { 417 | "cell_type": "markdown", 418 | "metadata": { 419 | "id": "PR9faV2NWTrG" 420 | }, 421 | "source": [ 422 | "# Let's Build a Demo\n", 423 | "\n", 424 | "**Demos** of machine learning models are an increasingly important part of machine learning. Demos allow:\n", 425 | "\n", 426 | "* model developers to easily **present** their work to a wide audience\n", 427 | "* increased **reproducibility** of machine learning research\n", 428 | "* diverse users to more easily **identify and debug** failure points of models\n", 429 | "\n", 430 | "\n", 431 | "As a quick example of what we would like to build, check out the [Keras Org on Hugging Face](https://huggingface.co/keras-io), which includes a description card and a collection of Models and Spaces built by the Keras community. Any Space can be opened in your browser and you can use the model immediately, as shown here: \n", 432 | "\n", 433 | "![](https://i.ibb.co/7y6DGjB/ezgif-5-cc52b7e590.gif)\n", 434 | "\n", 435 | "\n" 436 | ] 437 | }, 438 | { 439 | "cell_type": "markdown", 440 | "metadata": { 441 | "id": "g0KzbU4lQtv3" 442 | }, 443 | "source": [ 444 | "## 1. Build Quick ML Demos in Python Using the Gradio Library" 445 | ] 446 | }, 447 | { 448 | "cell_type": "markdown", 449 | "metadata": { 450 | "id": "rlSs72oUQ1VW" 451 | }, 452 | "source": [ 453 | "`gradio` is a handy Python library that lets you build web demos simply by specifying the list of input and output **components** expected by your machine learning model. 
\n", 454 | "\n", 455 | "For more detail [see the docs](https://gradio.app/docs/)\n", 456 | "\n", 457 | "In addition to the input and output types, Gradio expects a third parameter, which is the prediction function itself. This parameter can be ***any* regular Python function** that takes in parameter(s) corresponding to the input component(s) and returns value(s) corresponding to the output component(s)\n", 458 | "\n", 459 | "Enough words. Let's see some code!" 460 | ] 461 | }, 462 | { 463 | "cell_type": "markdown", 464 | "source": [ 465 | "###Creating an Interface" 466 | ], 467 | "metadata": { 468 | "id": "lES_Lp2fjSyn" 469 | } 470 | }, 471 | { 472 | "cell_type": "code", 473 | "source": [ 474 | "import gradio as gr\n", 475 | "def greet(name):\n", 476 | " return \"Hello \" + name + \"!!\"\n", 477 | "\n", 478 | "demo = gr.Interface(fn=greet, inputs=\"text\", outputs=\"text\")\n", 479 | "demo.launch()" 480 | ], 481 | "metadata": { 482 | "id": "SyTIJSxZjNfv" 483 | }, 484 | "execution_count": null, 485 | "outputs": [] 486 | }, 487 | { 488 | "cell_type": "code", 489 | "source": [ 490 | "import gradio as gr\n", 491 | "\n", 492 | "gr.Interface.load(\"huggingface/facebook/fastspeech2-en-ljspeech\").launch();\n", 493 | "\n" 494 | ], 495 | "metadata": { 496 | "id": "SHpXFv-DjUu9" 497 | }, 498 | "execution_count": null, 499 | "outputs": [] 500 | }, 501 | { 502 | "cell_type": "code", 503 | "source": [ 504 | "import gradio as gr\n", 505 | "from PIL import Image\n", 506 | "\n", 507 | "##Image Classification\n", 508 | "from transformers import AutoFeatureExtractor, AutoModelForImageClassification\n", 509 | "extractor = AutoFeatureExtractor.from_pretrained(\"rajistics/finetuned-indian-food\")\n", 510 | "model = AutoModelForImageClassification.from_pretrained(\"rajistics/finetuned-indian-food\")\n", 511 | "\n", 512 | "def image_to_text(imagepic):\n", 513 | " inputs = extractor(images=imagepic, return_tensors=\"pt\")\n", 514 | " outputs = model(**inputs)\n", 515 | " logits = 
outputs.logits\n", 516 | "  predicted_class_idx = logits.argmax(-1).item()\n", 517 | "  return (model.config.id2label[predicted_class_idx])\n", 518 | "\n", 519 | "##Translation\n", 520 | "#from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline\n", 521 | "#Get list of language codes: https://github.com/facebookresearch/flores/tree/main/flores200#languages-in-flores-200\n", 522 | "#modelt = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/nllb-200-distilled-600M\")\n", 523 | "#tokenizert = AutoTokenizer.from_pretrained(\"facebook/nllb-200-distilled-600M\")\n", 524 | "\n", 525 | "#def translation(text):\n", 526 | "#  translator = pipeline('translation', model=modelt, tokenizer=tokenizert, src_lang=\"eng_Latn\", tgt_lang='ron_Latn')\n", 527 | "#  output = translator(text)\n", 528 | "#  return (output[0]['translation_text'])\n", 529 | "\n", 530 | "##Gradio Demo\n", 531 | "demo = gr.Blocks()\n", 532 | "with demo:\n", 533 | "  image_file = gr.Image(type=\"pil\")\n", 534 | "  b1 = gr.Button(\"Recognize Image\")\n", 535 | "  text = gr.Textbox()\n", 536 | "  b1.click(image_to_text, inputs=image_file, outputs=text)\n", 537 | "  #b2 = gr.Button(\"Translation\")\n", 538 | "  #out1 = gr.Textbox()\n", 539 | "  #b2.click(translation, inputs=text, outputs=out1)\n", 540 | "demo.launch()" 541 | ], 542 | "metadata": { 543 | "id": "YdGvfJH8jUno" 544 | }, 545 | "execution_count": null, 546 | "outputs": [] 547 | }, 548 | { 549 | "cell_type": "markdown", 550 | "metadata": { 551 | "id": "b6Ek7cORgDkQ" 552 | }, 553 | "source": [ 554 | "## 2. Host the Demo (for free) on Hugging Face Spaces\n", 555 | "\n", 556 | "Once you have made a Gradio demo, you can host it permanently on Hugging Face Spaces very easily.\n", 557 | "\n", 558 | "Here are the steps (shown in the GIF below):\n", 559 | "\n", 560 | "A. First, create a Hugging Face account if you do not already have one, by visiting https://huggingface.co/ and clicking \"Sign Up\"\n", 561 | "\n", 562 | "B. 
Once you are logged in, click on your profile picture and then click on \"New Space\" underneath it to get to this page: https://huggingface.co/new-space\n", 563 | "\n", 564 | "C. Give your Space a name and a license. Select \"Gradio\" as the Space SDK, and then choose \"Public\" if you are fine with everyone accessing your Space and the underlying code\n", 565 | "\n", 566 | "D. Then you will find a page that provides you instructions on how to upload your files into the Git repository for that Space. You may also need to add a `requirements.txt` file to specify any Python package dependencies.\n", 567 | "\n", 568 | "E. Once you have pushed your files, that's it! Spaces will automatically build your Gradio demo allowing you to share it with anyone, anywhere!\n", 569 | "\n", 570 | "![GIF](https://huggingface.co/blog/assets/28_gradio-spaces/spaces-demo-finalized.gif)\n", 571 | "\n", 572 | "\n", 573 | "\n" 574 | ] 575 | }, 576 | { 577 | "cell_type": "markdown", 578 | "source": [ 579 | "Check out the final version of the app that identifies Indian food in pictures and then translates the result into other languages, [Indian Food Translator](https://huggingface.co/spaces/rajistics/Indian_food_translator)" 580 | ], 581 | "metadata": { 582 | "id": "pQ6kH4Cf8j8-" 583 | } 584 | }, 585 | { 586 | "cell_type": "code", 587 | "source": [ 588 | "" 589 | ], 590 | "metadata": { 591 | "id": "4_2ZN1B98jkY" 592 | }, 593 | "execution_count": null, 594 | "outputs": [] 595 | } 596 | ], 597 | "metadata": { 598 | "accelerator": "GPU", 599 | "colab": { 600 | "collapsed_sections": [], 601 | "name": "Indian Food Translator", 602 | "provenance": [] 603 | }, 604 | "gpuClass": "standard", 605 | "kernelspec": { 606 | "display_name": "Python 3", 607 | "language": "python", 608 | "name": "python3" 609 | }, 610 | "language_info": { 611 | "codemirror_mode": { 612 | "name": "ipython", 613 | "version": 3 614 | }, 615 | "file_extension": ".py", 616 | "mimetype": "text/x-python", 617 | "name": "python", 
618 | "nbconvert_exporter": "python", 619 | "pygments_lexer": "ipython3", 620 | "version": "3.9.5" 621 | } 622 | }, 623 | "nbformat": 4, 624 | "nbformat_minor": 0 625 | } -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # huggingface-demos 2 | 3 | You can find public demos I have created in this repo. 4 | 5 | - FinBERT - [Colab notebook](https://bit.ly/raj_finbert) 6 | - AskWiki - [Colab notebook](https://bit.ly/raj_askwiki) 7 | - FoodApp - [Colab notebook](https://bit.ly/raj_foodapp) 8 | 9 | --------------------------------------------------------------------------------
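The AskWiki demo ranks Wikipedia passages by embedding similarity before answering a question. The ranking step reduces to cosine similarity over embedding vectors; a minimal sketch is below. The tiny 2-d vectors here are placeholders for illustration only — in the notebook the embeddings come from a `sentence-transformers` model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_passages(query_vec, passage_vecs):
    """Return (index, score) pairs sorted by similarity to the query, best first."""
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(passage_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy 2-d "embeddings"; real ones would be produced by an encoder model.
query = [1.0, 0.1]
passages = [[0.0, 1.0], [0.9, 0.2], [-1.0, 0.0]]
print(rank_passages(query, passages))  # passage 1 ranks first
```

The top-ranked passage(s) are then what the demo feeds to a question-answering pipeline.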