├── .gitattributes ├── LICENSE ├── README.md ├── azure-custom-speech ├── README.md └── sampledata │ ├── InsuranceLanguage-Data1.txt │ ├── b0013.wav │ ├── test_audio.wav │ └── test_audio_stereo.wav ├── azure-speech-streaming-reactjs ├── .gitignore ├── README.md ├── common │ └── images │ │ ├── sampleoutputrealtimetranscription.PNG │ │ └── speechstreamingdiagram.PNG ├── front-end-ui │ ├── .env │ ├── .env.desktop │ ├── .env.hosted.microsoft │ ├── .env.hosted.progger │ ├── .github │ │ └── workflows │ │ │ └── azure-static-web-apps-wonderful-wave-0f526c610.yml │ ├── .gitignore │ ├── build │ │ ├── asset-manifest.json │ │ ├── favicon.ico │ │ ├── index.html │ │ ├── logo192.png │ │ ├── logo512.png │ │ ├── manifest.json │ │ ├── robots.txt │ │ └── static │ │ │ ├── css │ │ │ ├── 2.8c0e6b67.chunk.css │ │ │ └── 2.8c0e6b67.chunk.css.map │ │ │ └── js │ │ │ ├── 2.cd9c6385.chunk.js │ │ │ ├── 2.cd9c6385.chunk.js.LICENSE.txt │ │ │ ├── 2.cd9c6385.chunk.js.map │ │ │ ├── main.a03b50d2.chunk.js │ │ │ ├── main.a03b50d2.chunk.js.map │ │ │ ├── runtime-main.4452493b.js │ │ │ └── runtime-main.4452493b.js.map │ ├── package-lock.json │ ├── package.json │ ├── public │ │ ├── favicon.ico │ │ ├── index.html │ │ ├── logo192.png │ │ ├── logo512.png │ │ ├── manifest.json │ │ └── robots.txt │ └── src │ │ ├── App.css │ │ ├── App.jsx │ │ ├── authConfig.js │ │ ├── components │ │ ├── Dashboard.jsx │ │ ├── InitializeStream.jsx │ │ ├── OutputWindows.jsx │ │ ├── PageLayout.jsx │ │ ├── Profile.jsx │ │ ├── ProfileData.jsx │ │ ├── SignInButton.jsx │ │ ├── SignOutButton.jsx │ │ └── Splashscreen.jsx │ │ ├── config.json │ │ ├── graph.js │ │ ├── index.js │ │ ├── styles │ │ └── theme.js │ │ └── token_util.js ├── speechexpressbackend │ ├── .env │ ├── .gitignore │ ├── .vscode │ │ └── settings.json │ ├── config.json │ ├── package-lock.json │ ├── package.json │ └── serverapp.js └── speechreactfrontend │ ├── .env │ ├── .gitignore │ ├── .vscode │ └── settings.json │ ├── README.md │ ├── package-lock.json │ ├── package.json │ ├── public │ ├── favicon.ico │ ├── index.html │ ├── logo192.png │ ├── logo512.png │ ├── manifest.json │ └── robots.txt │ └── src │ ├── App.css │ ├── App.js │ ├── index.js │ └── token_util.js ├── call-batch-analytics ├── README.md ├── batch-analytics-arm-template-servicebus.json └── sampledata │ └── SampleData-SiriAzureGoogleTalk1.wav ├── common ├── Call Center Intelligence Sample Demo Script.docx └── images │ ├── batchanalyticsarchitecture.png │ ├── cover.png │ ├── custom-speech-overview.png │ ├── customspeechendpointid.PNG │ ├── dbCreds.png │ ├── deploycustomspeechmodel.PNG │ ├── enterInfo.png │ ├── highleveloverview.JPG │ ├── highleveloverview.PNG │ ├── image003.png │ ├── image005.png │ ├── image007.png │ ├── image009.png │ ├── image011.png │ ├── image013.png │ ├── image015.png │ ├── image016.png │ ├── image017.png │ ├── loadingPBI.png │ ├── refreshDB.png │ ├── sqlInfo.png │ └── uploadcustomspeechdata.PNG └── powerbi ├── README.md ├── SentimentInsights.pbit └── SpeechInsights.pbit /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 amulchapla 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the 
"Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Call Center Intelligence: Powered by Azure AI 2 | 3 | This is a sample solution for Call Center Intelligence powered by Azure AI. It shows how Azure AI services could be used both in real-time and batch scenarios for an Intelligent Contact Center. 4 | 5 | Below diagram depicts key components and Azure services used in this sample solution. 6 | 7 | 8 | 9 | ## Contents 10 | 11 | Outline the file contents of the repository. It helps users navigate the codebase, build configuration and any related assets. 12 | 13 | | Folder | Description | 14 | |-------------------|--------------------------------------------| 15 | | [azure-custom-speech](azure-custom-speech) | Sample data and instructions to create custom transcription model using Azure Speech service (Step 1 in above diagram). This step produces a sample custom speech model. This step also enables speech logging to capture real-time call audio. | 16 | | [azure-speech-streaming-reactjs](azure-speech-streaming-reactjs) | Java Script applications that simulates real-time call intelligence (Step 2 in above diagram). This application also captures audio conversation that could be used in the next step for batch call analytics. | 17 | | [call-batch-analytics](call-batch-analytics) | ARM template file and deployment guide for performing ingestion & batch analytics of calls using various Azure AI services (Step 3 in above diagram). This part of the solution can be used either with data output from step 2 OR using sample call recordings (if you have that). | 18 | | [powerbi](powerbi) | Template files and deployment guide for visualizing call insights using Power BI (Step 4 in above diagram). | 19 | 20 | 21 | ## Prerequisites 22 | 23 | * An existing [Azure Account](https://azure.microsoft.com/free/) 24 | * Ensure you have [Node.js](https://nodejs.org/en/download/) installed. Required for `Step 2` only. 25 | * Ensure you have [Power BI](https://powerbi.microsoft.com/en-us/downloads/) installed. Required for `Step 4` only. 26 | 27 | 28 | ## Dependencies 29 | 30 | This solution is modular and some part of the solution can be used independently and some components depends on other steps to be completed. In summary, real-time and batch call analytics can be used independently. Below is a list of dependencies: 31 | * Step 2 depends on Step 1 to be completed. Custom Speech model created in Step 1 is used in Step 2. 
Step 2 can be used without Step 1 with minor code modifications (for advanced users only). 32 | * Step 3 and Step 4 can be used independently, if you have sample call recordings. If you don't have sample call recordings, then use Step 2 to simulate business conversations and capture the recording that you can use in this step. 33 | 34 | ## Getting started 35 | 36 | Follow the individual instructions for each step of the solution provided within the `Folders` listed above. 37 | 38 | -------------------------------------------------------------------------------- /azure-custom-speech/README.md: -------------------------------------------------------------------------------- 1 | ## What is Custom Speech? 2 | 3 | Azure [Custom Speech](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/custom-speech-overview) is a set of online tools that allow you to evaluate and improve the Microsoft speech-to-text accuracy for your applications, tools, and products. This powerful feature is used in this sample to customize the Azure speech model for insurance language - this helps the model more accurately transcribe insurance words and phrases. Similarly, you could use Custom Speech to customize the model for different business scenarios across industries - medical, finance, manufacturing, retail, etc. 4 | 5 | # Steps to create & deploy a Custom Speech model 6 | 7 | Before you can do anything with Custom Speech, you'll need an Azure account and a Speech service subscription. After you have an account, you can prep your data, train and test your models, inspect recognition quality, evaluate accuracy, and ultimately deploy and use the custom speech-to-text model. 8 | 9 | This diagram highlights the pieces that make up the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech). Use the links below to learn more about each step. 10 | 11 | ![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](../common/images/custom-speech-overview.png) 12 | 13 | 1. The [Bring your own storage (BYOS) for Speech logging](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest#bring-your-own-storage-byos-for-customization-and-logging) feature will be used to enable speech logging. The Azure Speech service supports automatically logging audio content to a customer-managed Azure storage account. This is very useful when you are using speech for real-time interaction and want the service to capture audio for post-analysis, archival, compliance, etc. This allows us to capture real-time audio for batch analytics. In this pattern, the Speech service returns the transcription to the client in real time and also captures the audio in the Azure storage account specified here. `You might have to request access to this preview feature. Follow the instructions [here](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest#bring-your-own-storage-byos-for-customization-and-logging).` 14 | 15 | 2. [Create a Speech service resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices). Create a Speech service resource using the Azure portal. Select “Yes” for the bring-your-own-storage option when creating a new Azure Speech service resource. `NOTE: You need to create a Speech Resource with a paid (S0) key. The free key account will not work for the batch analytics step.` 16 | 17 | 3.
`Create a new custom speech project` on [Speech Studio](https://speech.microsoft.com/customspeech) using the newly created speech resource from the previous step. Content like data, models, tests, and endpoints is organized into *projects* in the [Speech Studio](https://speech.microsoft.com/customspeech). Each project is specific to a domain and country/language. For example, you might create a project for call centers that use English in the United States. 18 | 19 | To create your first project, select **Speech-to-text/Custom speech**, and then select **New Project**. Follow the instructions provided by the wizard to create your project. After you create a project, you should see four tabs: **Data**, **Testing**, **Training**, and **Deployment**. 20 | 21 | 4. Upload training data to customize the speech model for business language and terms. You can use the sample language adaptation data provided under `azure-custom-speech\sampledata\InsuranceLanguage-Data1.txt`. Select "Plain text" as the data type when uploading the txt file as shown below. 22 | 23 | 24 | 25 | 5. `Train a model`. Improve the accuracy of your speech-to-text model by providing related text (<200 MB). This data helps to train the speech-to-text model. 26 | 27 | 6. `Deploy a model`. After training, if you're satisfied with the result, you can deploy your model to a custom endpoint. Create a new custom endpoint and select “Enable content logging (audio and diagnostic information) for this endpoint.” as shown in the screenshot below. 28 | 29 | 30 | 31 | 7. `Get Custom Speech Model Endpoint ID` - Once you have successfully deployed a custom speech model, click on the deployed model and get the "Endpoint ID" as shown below. You will need the "Endpoint ID", "Service region" and "Subscription Key" in the next step. 32 | 33 | 34 | -------------------------------------------------------------------------------- /azure-custom-speech/sampledata/InsuranceLanguage-Data1.txt: -------------------------------------------------------------------------------- 1 | The insurance industry safeguards the assets of its policyholders by transferring 2 | risk from an individual or business to an insurance company. Insurance companies act as financial intermediaries in that they invest the premiums they collect 3 | for providing this service. Insurance company size is usually measured by net 4 | premiums written, that is, premium revenues less amounts paid for reinsurance. 5 | There are three main insurance sectors: property/casualty, life/health and health 6 | insurance. Property/casualty (P/C) consists mainly of auto, home and commercial insurance. Life/health (L/H) consists mainly of life insurance and annuity 7 | products. Health insurance is offered by private health insurance companies 8 | and some L/H and P/C insurers, as well as by government programs such as 9 | Medicare. 10 | 11 | All types of insurance are regulated by the states, with each state having its 12 | own set of statutes and rules. State insurance departments oversee insurer solvency, market conduct and, to a greater or lesser degree, review and rule on 13 | requests for rate increases for coverage. The National Association of Insurance 14 | Commissioners develops model rules and regulations for the industry, many 15 | of which must be approved by state legislatures. The McCarran-Ferguson Act, 16 | passed by Congress in 1945, refers to continued state regulation of the insurance 17 | industry as being in the public interest.
Under the 1999 Gramm-Leach-Bliley 18 | Financial Services Modernization Act, insurance activities—whether conducted 19 | by banks, broker-dealers or insurers—are regulated by the states. However, there 20 | have been, and continue to be, challenges to state regulation from some segments of the federal government as well as from some financial services firms. 21 | 22 | Insurers are required to use statutory accounting principles (SAP) when filing 23 | annual financial reports with state regulators and the Internal Revenue Service. SAP, 24 | which evolved to enhance the industry’s financial stability, is more conservative 25 | than the generally accepted accounting principles (GAAP), established by the independent Financial Accounting Standards Board (FASB). The Securities and Exchange 26 | Commission (SEC) requires publicly owned companies to report their financial 27 | results using GAAP rules. Insurers outside the United States use standards that differ from SAP and GAAP. As global markets developed, the need for more uniform 28 | accounting standards became clear. In 2001 the International Accounting Standards 29 | Board (IASB), an independent international accounting standards setting organization, began work on a set of standards, called International Financial Reporting 30 | Standards (IFRS) that it hopes will be used around the world. Since 2001 over 100 31 | countries have required or permitted the use of IFRS. 32 | In 2007 the SEC voted to stop requiring non-U.S. companies that use IFRS 33 | to re-issue their financial reports for U.S. investors using GAAP. In 2008 the 34 | National Association of Insurance Commissioners began to explore ways to 35 | move from statutory accounting principles to IFRS. Also in 2008, the FASB and 36 | IASB undertook a joint project to develop a common and improved framework 37 | for financial reporting. 38 | 39 | Property/casualty and life insurance policies were once sold almost exclusively 40 | by agents—either by captive agents, representing one insurance company, or 41 | by independent agents, representing several companies. Insurance companies 42 | selling through captive agents and/or by mail, telephone or via the Internet 43 | are called “direct writers.” However, the distinctions between direct writers and 44 | independent agency companies have been blurring since the 1990s, when insurers began to use multiple channels to reach potential customers. In addition, in 45 | the 1980s banks began to explore the possibility of selling insurance through 46 | independent agents, usually buying agencies for that purpose. Other distribution channels include sales through professional organizations and through 47 | workplaces. 48 | 49 | Auto insurance protects against financial loss in the event of an accident. It is a 50 | contract between the policyholder and the insurance company. The policyholder agrees to pay the premium and the insurance company agrees to pay losses as 51 | defined in the policy. 52 | Auto insurance provides property, liability and medical coverage: 53 | Property coverage pays for damage to, or theft of, the car. 54 | Liability coverage pays for the policyholder’s legal responsibility to 55 | others for bodily injury or property damage. 56 | Medical coverage pays for the cost of treating injuries, rehabilitation 57 | and sometimes lost wages and funeral expenses. 58 | Most states require drivers to have auto liability insurance before they can legally drive a car. 
(Liability insurance pays the other driver’s medical, car repair and 59 | other costs when the policyholder is at fault in an auto accident.) All states have 60 | laws that set the minimum amounts of insurance or other financial security 61 | drivers have to pay for the harm caused by their negligence behind the wheel if 62 | an accident occurs. Most auto policies are for six months to a year. A basic auto 63 | insurance policy is comprised of six different kinds of coverage, each of which is 64 | priced separately (see below). 65 | Bodily Injury Liability 66 | This coverage applies to injuries that the policyholder and family members listed on the policy cause to someone else. These individuals are also covered when 67 | driving other peoples’ cars with permission. As motorists in serious accidents 68 | may be sued for large amounts, drivers can opt to buy more than the staterequired minimum to protect personal assets such as homes and savings. 69 | Medical Payments or Personal Injury Protection (PIP) 70 | This coverage pays for the treatment of injuries to the driver and passengers 71 | of the policyholder’s car. At its broadest, PIP can cover medical payments, 72 | lost wages and the cost of replacing services normally performed by someone 73 | injured in an auto accident. It may also cover funeral costs. 74 | Property Damage Liability 75 | This coverage pays for damage policyholders (or someone driving the car with 76 | their permission) may cause to someone else’s property. Usually, this means 77 | damage to someone else’s car, but it also includes damage to lamp posts, telephone poles, fences, buildings or other structures hit in an accident. 78 | 79 | This coverage pays for damage to the policyholder’s car resulting from a collision with another car, an object or as a result of flipping over. It also covers 80 | damage caused by potholes. Collision coverage is generally sold with a deductible of $250 to $1,000—the higher the deductible, the lower the premium. Even 81 | if policyholders are at fault for an accident, collision coverage will reimburse 82 | them for the costs of repairing the car, minus the deductible. If the policyholder 83 | is not at fault, the insurance company may try to recover the amount it paid 84 | from the other driver’s insurance company, a process known as subrogation. If 85 | the company is successful, policyholders will also be reimbursed for the deductible. 86 | 87 | This coverage reimburses for loss due to theft or damage caused by something 88 | other than a collision with another car or object, such as fire, falling objects, 89 | missiles, explosions, earthquakes, windstorms, hail, flood, vandalism and riots, 90 | or contact with animals such as birds or deer. Comprehensive insurance is usually sold with a $100 to $300 deductible, though policyholders may opt for a 91 | higher deductible as a way of lowering their premium. Comprehensive insurance may also reimburse the policyholder if a windshield is cracked or shattered. 92 | Some companies offer separate glass coverage with or without a deductible. 93 | States do not require the purchase of collision or comprehensive coverage, but 94 | lenders may insist borrowers carry it until a car loan is paid off. It may also be a 95 | requirement of some dealerships if a car is leased. 96 | 6. 
Uninsured and Underinsured Motorist Coverage 97 | Uninsured motorist coverage will reimburse the policyholder, a member of the 98 | family or a designated driver if one of them is hit by an uninsured or a hit-andrun driver. Underinsured motorist coverage comes into play when an at-fault 99 | driver has insufficient insurance to pay for the other driver’s total loss. This coverage will also protect a policyholder who is hit while a pedestrian. 100 | 101 | Homeowners insurance provides financial protection against disasters. It is a package policy, which means that it covers both damage to property and liability, or 102 | legal responsibility, for any injuries and property damage policyholders or their 103 | families cause to other people. This includes damage caused by household pets. 104 | Damage caused by most disasters is covered but there are exceptions. Standard 105 | homeowners policies do not cover flooding, earthquakes or poor maintenance. 106 | Flood coverage, however, is available in the form of a separate policy both from 107 | the National Flood Insurance Program (NFIP) and from a few private insurers. Earthquake coverage is available either in the form of an endorsement or 108 | as a separate policy. Most maintenance-related problems are the homeowners’ 109 | responsibility. 110 | A standard homeowners insurance policy includes four essential types of 111 | coverage. They include: 112 | Coverage for the Structure of the Home 113 | This part of a policy pays to repair or rebuild a home if it is damaged or 114 | destroyed by fire, hurricane, hail, lightning or other disaster listed in the policy. 115 | It will not pay for damage caused by a flood, earthquake or routine wear and 116 | tear. Most standard policies also cover structures that are not attached to a 117 | house such as a garage, tool shed or gazebo. Generally, these structures are covered for about 10 percent of the total amount of insurance on the structure of 118 | the home. 119 | Coverage for Personal Belongings 120 | Furniture, clothes, sports equipment and other personal items are covered if 121 | they are stolen or destroyed by fire, hurricane or other insured disaster. Most 122 | companies provide coverage for 50 to 70 percent of the amount of insurance on 123 | the structure of a home. This part of the policy includes off-premises coverage. 124 | This means that belongings are covered anywhere in the world, unless the policyholder has decided against off-premises coverage. Expensive items like jewelry, 125 | furs and silverware are covered, but there are usually dollar limits if they are stolen. To insure these items to their full value, individuals can purchase a special 126 | personal property endorsement or floater and insure the item for its appraised 127 | value. 128 | Trees, plants and shrubs are also covered under standard homeowners insurance—generally up to about $500 per item. Perils covered are theft, fire, lightning, explosion, vandalism, riot and even falling aircraft. They are not covered 129 | for damage by wind or disease 130 | 131 | Liability coverage protects against the cost of lawsuits for bodily injury or property damage that policyholders or family members cause to other people. It also 132 | pays for damage caused by pets. The liability portion of the policy pays for both 133 | the cost of defending the policyholder in court and any court awards—up to the 134 | limit of the policy. Coverage is not just in the home but extends to anywhere 135 | in the world. 
Liability limits generally start at about $100,000. However, experts 136 | recommend that homeowners purchase at least $300,000 worth of protection. 137 | An umbrella or excess liability policy, which provides broader coverage, including claims for libel and slander, as well as higher liability limits, can be added to 138 | the policy. Generally, umbrella policies cost between $200 to $350 for $1 million of additional liability protection. 139 | Homeowners policies also provide no-fault medical coverage. In the event 140 | that someone is injured in a policyholder’s home, the injured person can simply submit medical bills to the policyholder’s insurance company. In this way 141 | expenses are paid without a liability claim being filed. This coverage, however, 142 | does not pay the medical bills for the policyholder’s own family or pets. 143 | 4. Additional Living Expenses 144 | This pays the additional costs of living away from home if a house is inhabitable due to damage from a fire, storm or other insured disaster. It covers hotel 145 | bills, restaurant meals and other extra living expenses incurred while the home 146 | is being rebuilt. Coverage for additional living expenses differs from company to 147 | company. Many policies provide coverage for about 20 percent of the insurance 148 | on a house. The coverage can be increased for an additional premium. Some 149 | companies sell a policy that provides an unlimited amount of loss-of-use coverage, but for a limited amount of time. 150 | Additional living expense coverage also reimburses homeowners who rent 151 | out part of their home for the rent that would have been collected from a tenant if the home had not been destroyed. 152 | Types of Homeowners Insurance Policies 153 | There are several types of homeowners insurance policies that differ in the amount 154 | of insurance coverage they provide. The different types are fairly standard throughout the country. However, individual states and companies may offer policies that 155 | are slightly different or go by other names such as “standard” or “deluxe.” People 156 | who rent the homes they live in have specific renters policies. 
157 | -------------------------------------------------------------------------------- /azure-custom-speech/sampledata/b0013.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-custom-speech/sampledata/b0013.wav -------------------------------------------------------------------------------- /azure-custom-speech/sampledata/test_audio.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-custom-speech/sampledata/test_audio.wav -------------------------------------------------------------------------------- /azure-custom-speech/sampledata/test_audio_stereo.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-custom-speech/sampledata/test_audio_stereo.wav -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/.gitignore: -------------------------------------------------------------------------------- 1 | # See https://help.github.com/articles/ignoring-files/ for more about ignoring files. 2 | 3 | # dependencies 4 | /speechexpressbackend/node_modules 5 | /speechreactfrontend/node_modules 6 | /front-end-ui/node_modules 7 | /node_modules 8 | /.pnp 9 | .pnp.js 10 | 11 | # testing 12 | /coverage 13 | 14 | # production 15 | /build 16 | 17 | # misc 18 | .DS_Store 19 | .env.local 20 | .env.development.local 21 | .env.test.local 22 | .env.production.local 23 | 24 | npm-debug.log* 25 | yarn-debug.log* 26 | yarn-error.log* 27 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/README.md: -------------------------------------------------------------------------------- 1 | # Real-time Transcription using Azure Speech in ReactJS 2 | 3 | This sample simulates call center intelligence in real-time using Azure AI services. The code also records the conversation to Azure storage using the conversation ID provided by the user on the web UI. 4 | 5 | This sample shows design pattern examples for authentication token exchange and management, as well as capturing audio from a microphone or file for speech-to-text conversions. 6 | 7 | The architecture diagram below depicts the key components and the API/communication sequence used in this sample. 8 | 9 | 10 | This sample uses the Express.js backend framework, which allows you to make HTTP calls from any front end. ReactJS is used for the frontend app. *NOTE*: This sample only uses the Azure Speech SDK - it does not use the Azure Bot Service or the Direct Line Speech channel. 11 | 12 | * **Express.js**: Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. It facilitates the rapid development of Node-based web applications. 13 | 14 | * **React.js**, often referred to as React or ReactJS, is a JavaScript library responsible for building a hierarchy of UI components - in other words, for rendering UI components. It provides support for both the frontend and the server side. 15 | 16 | ## Prerequisites 17 | 18 | 1. This article assumes that you have an Azure account and a Speech service subscription.
If you don't have an account and subscription, [try the Speech service for free](https://docs.microsoft.com/azure/cognitive-services/speech-service/overview#try-the-speech-service-for-free). 19 | 1. Ensure you have [Node.js](https://nodejs.org/en/download/) installed. 20 | 21 | ## How to run the app 22 | 23 | 1. Clone this repo. This repo has two apps as shown in the architecture diagram above: 24 | * the speechreactfrontend folder is for the "ReactJS Frontend" component and 25 | * the speechexpressbackend folder is for the "ExpressJS Backend" component 26 | 27 | 28 | 2. **Prepare and run the Speech service Express.js backend** 29 | - Go to the speechexpressbackend directory and run `npm install -all` to install dependencies. 30 | - Update the “.env” file with your Azure Speech service key and Azure region. Azure region value examples: “eastus2”, “westus” 31 | - Start the Speech service backend app by running `npm start` 32 | - If you are running this locally, try accessing the URLs below from a browser to verify that the backend component is working as expected: 33 | * `http://localhost:8080/api/sayhello` 34 | * `http://localhost:8080/api/get-speech-token` 35 | - If you have deployed the speechexpressbackend app to Azure App Service (as per the instructions below), then you can verify using the following URLs from a browser: 36 | * `https://<>/api/sayhello` 37 | * `https://<>/api/get-speech-token` 38 | 3. **Prepare and run the Speech client React.js frontend** 39 | + Go to the speechreactfrontend directory and run `npm install -all` to install dependencies. 40 | + Update “package.json” as follows. Set the value of “proxy” depending on where your Express.js backend is running. 41 | + If the Express.js backend “speechexpressbackend” is running on your local machine, then use `"proxy": "http://localhost:8080"` 42 | + If the Express.js backend “speechexpressbackend” is running on Azure, use `"proxy": https://<>.azurewebsites.net` 43 | + Open a browser and go to `http://localhost:3000` to access the app. Click on the microphone icon on the web page and start talking. You should see the transcription displayed on the web page in real time (an example is shown below). 44 | 45 | 46 | 47 | 48 | 49 | 50 | + If you have also deployed the frontend ReactJS app to Azure App Service, then use the deployed App Service URL, which you can find in the Azure portal for your App Service. Example: `https://myspeechreactfrontend.azurewebsites.net` 51 | 52 | 53 | 54 | ## Deploying sample code to Azure App Service 55 | You can deploy your Node.js app using VS Code and the Azure App Service extension. Follow the instructions in [Deploy NodeJS using Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/quickstart-nodejs?pivots=platform-linux#deploy-to-azure), which explain how to deploy any Node app to Azure App Service. 56 | 57 | * To deploy **speechexpressbackend** to Azure App Service, select "speechexpressbackend" as the root folder when prompted in VS Code. 58 | - Validate that your ExpressJS backend is successfully deployed by trying to access one of the two APIs hosted by your backend: 59 | - `https://<>/api/sayhello` 60 | - `https://<>/api/get-speech-token` 61 | 62 | * Similarly, you can deploy **speechreactfrontend** to another Azure App Service instance by selecting the root folder for this app. This sample assumes that you are deploying the frontend and the backend app on **separate** App Service instances. 63 | - Before deploying your “speechreactfrontend”, update “package.json”. Set the value of “proxy” to point to the “speechexpressbackend” App Service URL.
Use `"proxy": https://<>.azurewebsites.net` 64 | - Deploy your frontend after updating package.json. 65 | - You should now be able to access the web app and do real-time transcription from a browser from your mobile phone or any other device that can access the app service url. 66 | 67 | ## Issues and resolutions 68 | 69 | | Issues/Errors | Resolutions | 70 | | :-------------| :-----------| 71 | | **Invalid Host Header** error in the browser when running the React Front end | Add DANGEROUSLY_DISABLE_HOST_CHECK=true in the .env for the front end. This solution is not recommended for production deployment. This is to enable a quick demonstration of real-time speech streaming capability using the web browser. | 72 | |Express.js backend API not accessible when deployed to Azure app service. | Verify that the port used by the express backend (in serverapp.js) is using value ‘process.env.WEB_PORT || 8080’ | 73 | 74 | 75 | ## Change recognition language 76 | 77 | To change the source recognition language, change the locale strings in `App.js` lines **32** and **66**, which sets the recognition language property on the `SpeechConfig` object. 78 | 79 | ```javascript 80 | speechConfig.speechRecognitionLanguage = 'en-US' 81 | ``` 82 | 83 | For a full list of supported locales, see the [language support article](https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support#speech-to-text). 84 | 85 | ## Speech-to-text from microphone 86 | 87 | To convert speech-to-text using a microphone, run the app and then click **Convert speech to text from your mic.**. This will prompt you for access to your microphone, and then listen for you to speak. The following function `sttFromMic` in `App.js` contains the implementation. 88 | 89 | ```javascript 90 | async sttFromMic() { 91 | const tokenObj = await getTokenOrRefresh(); 92 | const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region); 93 | speechConfig.speechRecognitionLanguage = 'en-US'; 94 | 95 | const audioConfig = speechsdk.AudioConfig.fromDefaultMicrophoneInput(); 96 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 97 | 98 | this.setState({ 99 | displayText: 'speak into your microphone...' 100 | }); 101 | 102 | recognizer.recognizeOnceAsync(result => { 103 | let displayText; 104 | if (result.reason === ResultReason.RecognizedSpeech) { 105 | displayText = `RECOGNIZED: Text=${result.text}` 106 | } else { 107 | displayText = 'ERROR: Speech was cancelled or could not be recognized. Ensure your microphone is working properly.'; 108 | } 109 | 110 | this.setState({ 111 | displayText: displayText 112 | }); 113 | }); 114 | } 115 | ``` 116 | 117 | Running speech-to-text from a microphone is done by creating an `AudioConfig` object and using it with the recognizer. 118 | 119 | ```javascript 120 | const audioConfig = speechsdk.AudioConfig.fromDefaultMicrophoneInput(); 121 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 122 | ``` 123 | 124 | ## Speech-to-text from file 125 | 126 | To convert speech-to-text from an audio file, run the app and then click **Convert speech to text from an audio file.**. This will open a file browser and allow you to select an audio file. The following function `fileChange` is bound to an event handler that detects the file change. 
127 | 128 | ```javascript 129 | async fileChange(event) { 130 | const audioFile = event.target.files[0]; 131 | console.log(audioFile); 132 | const fileInfo = audioFile.name + ` size=${audioFile.size} bytes `; 133 | 134 | this.setState({ 135 | displayText: fileInfo 136 | }); 137 | 138 | const tokenObj = await getTokenOrRefresh(); 139 | const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region); 140 | speechConfig.speechRecognitionLanguage = 'en-US'; 141 | 142 | const audioConfig = speechsdk.AudioConfig.fromWavFileInput(audioFile); 143 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 144 | 145 | recognizer.recognizeOnceAsync(result => { 146 | let displayText; 147 | if (result.reason === ResultReason.RecognizedSpeech) { 148 | displayText = `RECOGNIZED: Text=${result.text}` 149 | } else { 150 | displayText = 'ERROR: Speech was cancelled or could not be recognized. Ensure your microphone is working properly.'; 151 | } 152 | 153 | this.setState({ 154 | displayText: fileInfo + displayText 155 | }); 156 | }); 157 | } 158 | ``` 159 | 160 | You need the audio file as a JavaScript [`File`](https://developer.mozilla.org/en-US/docs/Web/API/File) object, so you can grab it directly off the event target using `const audioFile = event.target.files[0];`. Next, you use the file to create the `AudioConfig` and then pass it to the recognizer. 161 | 162 | ```javascript 163 | const audioConfig = speechsdk.AudioConfig.fromWavFileInput(audioFile); 164 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 165 | ``` 166 | 167 | ## Token exchange process 168 | 169 | This sample application shows an example design pattern for retrieving and managing tokens, a common task when using the Speech JavaScript SDK in a browser environment. A simple Express back-end is implemented in the same project under `server/index.js`, which abstracts the token retrieval process. 170 | 171 | The reason for this design is to prevent your speech key from being exposed on the front-end, since it can be used to make calls directly to your subscription. By using an ephemeral token, you are able to protect your speech key from being used directly. To get a token, you use the Speech REST API and make a call using your speech key and region. In the Express part of the app, this is implemented in `index.js` behind the endpoint `/api/get-speech-token`, which the front-end uses to get tokens. 
172 | 173 | ```javascript 174 | app.get('/api/get-speech-token', async (req, res, next) => { 175 | res.setHeader('Content-Type', 'application/json'); 176 | const speechKey = process.env.SPEECH_KEY; 177 | const speechRegion = process.env.SPEECH_REGION; 178 | 179 | if (speechKey === 'paste-your-speech-key-here' || speechRegion === 'paste-your-speech-region-here') { 180 | res.status(400).send('You forgot to add your speech key or region to the .env file.'); 181 | } else { 182 | const headers = { 183 | headers: { 184 | 'Ocp-Apim-Subscription-Key': speechKey, 185 | 'Content-Type': 'application/x-www-form-urlencoded' 186 | } 187 | }; 188 | 189 | try { 190 | const tokenResponse = await axios.post(`https://${speechRegion}.api.cognitive.microsoft.com/sts/v1.0/issueToken`, null, headers); 191 | res.send({ token: tokenResponse.data, region: speechRegion }); 192 | } catch (err) { 193 | res.status(401).send('There was an error authorizing your speech key.'); 194 | } 195 | } 196 | }); 197 | ``` 198 | 199 | In the request, you create a `Ocp-Apim-Subscription-Key` header, and pass your speech key as the value. Then you make a request to the **issueToken** endpoint for your region, and an authorization token is returned. In a production application, this endpoint returning the token should be *restricted by additional user authentication* whenever possible. 200 | 201 | On the front-end, `token_util.js` contains the helper function `getTokenOrRefresh` that is used to manage the refresh and retrieval process. 202 | 203 | ```javascript 204 | export async function getTokenOrRefresh() { 205 | const cookie = new Cookie(); 206 | const speechToken = cookie.get('speech-token'); 207 | 208 | if (speechToken === undefined) { 209 | try { 210 | const res = await axios.get('/api/get-speech-token'); 211 | const token = res.data.token; 212 | const region = res.data.region; 213 | cookie.set('speech-token', region + ':' + token, {maxAge: 540, path: '/'}); 214 | 215 | console.log('Token fetched from back-end: ' + token); 216 | return { authToken: token, region: region }; 217 | } catch (err) { 218 | console.log(err.response.data); 219 | return { authToken: null, error: err.response.data }; 220 | } 221 | } else { 222 | console.log('Token fetched from cookie: ' + speechToken); 223 | const idx = speechToken.indexOf(':'); 224 | return { authToken: speechToken.slice(idx + 1), region: speechToken.slice(0, idx) }; 225 | } 226 | } 227 | ``` 228 | 229 | This function uses the `universal-cookie` library to store and retrieve the token from local storage. It first checks to see if there is an existing cookie, and in that case it returns the token without hitting the Express back-end. If there is no existing cookie for a token, it makes the call to `/api/get-speech-token` to fetch a new one. Since we need both the token and its corresponding region later, the cookie is stored in the format `token:region` and upon retrieval is spliced into each value. 230 | 231 | Tokens for the service expire after 10 minutes, so the sample uses the `maxAge` property of the cookie to act as a trigger for when a new token needs to be generated. It is reccommended to use 9 minutes as the expiry time to act as a buffer, so we set `maxAge` to **540 seconds**. 232 | 233 | In `App.js` you use `getTokenOrRefresh` in the functions for speech-to-text from a microphone, and from a file. Finally, use the `SpeechConfig.fromAuthorizationToken` function to create an auth context using the token. 
234 | 235 | ```javascript 236 | const tokenObj = await getTokenOrRefresh(); 237 | const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region); 238 | ``` 239 | 240 | In many other Speech service samples, you will see the function `SpeechConfig.fromSubscription` used instead of `SpeechConfig.fromAuthorizationToken`, but by **avoiding the usage** of `fromSubscription` on the front-end, you prevent your speech subscription key from becoming exposed, and instead utilize the token authentication process. `fromSubscription` is safe to use in a Node.js environment, or in other Speech SDK programming languages when the call is made on a back-end, but it is best to avoid using in a browser-based JavaScript environment. -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/common/images/sampleoutputrealtimetranscription.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/common/images/sampleoutputrealtimetranscription.PNG -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/common/images/speechstreamingdiagram.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/common/images/speechstreamingdiagram.PNG -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/.env: -------------------------------------------------------------------------------- 1 | REACT_APP_PLATFORM=desktop 2 | REACT_APP_BACKEND_API="http://localhost:8080" -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/.env.desktop: -------------------------------------------------------------------------------- 1 | REACT_APP_PLATFORM=desktop 2 | REACT_APP_BACKEND_API="http://localhost:8080" -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/.env.hosted.microsoft: -------------------------------------------------------------------------------- 1 | REACT_APP_PLATFORM=hosted 2 | REACT_APP_BACKEND_API="https://speechexpressbackendamc.azurewebsites.net" 3 | REACT_APP_CLIENT_ID=6798e375-a31f-48b0-abcb-87f06d70d0b6 4 | REACT_APP_REDIRECT_URI=https://speechreactfrontendamc.azurewebsites.net/ 5 | REACT_APP_POST_LOGOUT_REDIRECT_URI=http://localhost:3000/ 6 | REACT_APP_TENANT_ID=72f988bf-86f1-41af-91ab-2d7cd011db47 -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/.env.hosted.progger: -------------------------------------------------------------------------------- 1 | REACT_APP_PLATFORM=hosted 2 | REACT_APP_BACKEND_API="https://speechexpressbackendamc.azurewebsites.net" 3 | REACT_APP_CLIENT_ID=6798e375-a31f-48b0-abcb-87f06d70d0b6 4 | REACT_APP_REDIRECT_URI=http://localhost:3000/ 5 | REACT_APP_POST_LOGOUT_REDIRECT_URI=http://localhost:3000/ 6 | REACT_APP_TENANT_ID=6e8d4ac4-6168-4505-ae45-4206d841b472 -------------------------------------------------------------------------------- 
/azure-speech-streaming-reactjs/front-end-ui/.github/workflows/azure-static-web-apps-wonderful-wave-0f526c610.yml: -------------------------------------------------------------------------------- 1 | name: Azure Static Web Apps CI/CD 2 | 3 | on: 4 | push: 5 | branches: 6 | - master 7 | pull_request: 8 | types: [opened, synchronize, reopened, closed] 9 | branches: 10 | - master 11 | 12 | jobs: 13 | build_and_deploy_job: 14 | if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') 15 | runs-on: ubuntu-latest 16 | name: Build and Deploy Job 17 | steps: 18 | - uses: actions/checkout@v2 19 | with: 20 | submodules: true 21 | - name: Build And Deploy 22 | id: builddeploy 23 | uses: Azure/static-web-apps-deploy@v1 24 | with: 25 | azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_WONDERFUL_WAVE_0F526C610 }} 26 | repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments) 27 | action: "upload" 28 | ###### Repository/Build Configurations - These values can be configured to match your app requirements. ###### 29 | # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig 30 | app_location: "/" # App source code path 31 | api_location: "" # Api source code path - optional 32 | output_location: "" # Built app content directory - optional 33 | ###### End of Repository/Build Configurations ###### 34 | 35 | close_pull_request_job: 36 | if: github.event_name == 'pull_request' && github.event.action == 'closed' 37 | runs-on: ubuntu-latest 38 | name: Close Pull Request Job 39 | steps: 40 | - name: Close Pull Request 41 | id: closepullrequest 42 | uses: Azure/static-web-apps-deploy@v1 43 | with: 44 | azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_WONDERFUL_WAVE_0F526C610 }} 45 | action: "close" 46 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/.gitignore: -------------------------------------------------------------------------------- 1 | node_modules 2 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/asset-manifest.json: -------------------------------------------------------------------------------- 1 | { 2 | "files": { 3 | "main.js": "/static/js/main.a03b50d2.chunk.js", 4 | "main.js.map": "/static/js/main.a03b50d2.chunk.js.map", 5 | "runtime-main.js": "/static/js/runtime-main.4452493b.js", 6 | "runtime-main.js.map": "/static/js/runtime-main.4452493b.js.map", 7 | "static/css/2.8c0e6b67.chunk.css": "/static/css/2.8c0e6b67.chunk.css", 8 | "static/js/2.cd9c6385.chunk.js": "/static/js/2.cd9c6385.chunk.js", 9 | "static/js/2.cd9c6385.chunk.js.map": "/static/js/2.cd9c6385.chunk.js.map", 10 | "index.html": "/index.html", 11 | "static/css/2.8c0e6b67.chunk.css.map": "/static/css/2.8c0e6b67.chunk.css.map", 12 | "static/js/2.cd9c6385.chunk.js.LICENSE.txt": "/static/js/2.cd9c6385.chunk.js.LICENSE.txt" 13 | }, 14 | "entrypoints": [ 15 | "static/js/runtime-main.4452493b.js", 16 | "static/css/2.8c0e6b67.chunk.css", 17 | "static/js/2.cd9c6385.chunk.js", 18 | "static/js/main.a03b50d2.chunk.js" 19 | ] 20 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/favicon.ico: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/build/favicon.ico -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/index.html: -------------------------------------------------------------------------------- 1 | Realtime Call Intelligence-Azure AI
-------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/logo192.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/build/logo192.png -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/logo512.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/build/logo512.png -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/manifest.json: -------------------------------------------------------------------------------- 1 | { 2 | "short_name": "React App", 3 | "name": "Create React App Sample", 4 | "icons": [ 5 | { 6 | "src": "favicon.ico", 7 | "sizes": "64x64 32x32 24x24 16x16", 8 | "type": "image/x-icon" 9 | }, 10 | { 11 | "src": "logo192.png", 12 | "type": "image/png", 13 | "sizes": "192x192" 14 | }, 15 | { 16 | "src": "logo512.png", 17 | "type": "image/png", 18 | "sizes": "512x512" 19 | } 20 | ], 21 | "start_url": ".", 22 | "display": "standalone", 23 | "theme_color": "#000000", 24 | "background_color": "#ffffff" 25 | } 26 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/robots.txt: -------------------------------------------------------------------------------- 1 | # https://www.robotstxt.org/robotstxt.html 2 | User-agent: * 3 | Disallow: 4 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/static/js/2.cd9c6385.chunk.js.LICENSE.txt: -------------------------------------------------------------------------------- 1 | /* 2 | object-assign 3 | (c) Sindre Sorhus 4 | @license MIT 5 | */ 6 | 7 | /*! 8 | Copyright (c) 2018 Jed Watson. 9 | Licensed under the MIT License (MIT), see 10 | http://jedwatson.github.io/classnames 11 | */ 12 | 13 | /*! 14 | * The buffer module from node.js, for the browser. 15 | * 16 | * @author Feross Aboukhadijeh 17 | * @license MIT 18 | */ 19 | 20 | /*! 21 | * cookie 22 | * Copyright(c) 2012-2014 Roman Shtylman 23 | * Copyright(c) 2015 Douglas Christopher Wilson 24 | * MIT Licensed 25 | */ 26 | 27 | /*! ***************************************************************************** 28 | Copyright (c) Microsoft Corporation. 29 | 30 | Permission to use, copy, modify, and/or distribute this software for any 31 | purpose with or without fee is hereby granted. 32 | 33 | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH 34 | REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY 35 | AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, 36 | INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM 37 | LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR 38 | OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR 39 | PERFORMANCE OF THIS SOFTWARE. 40 | ***************************************************************************** */ 41 | 42 | /*! 
@azure/msal-browser v2.19.0 2021-11-02 */ 43 | 44 | /*! @azure/msal-common v5.1.0 2021-11-02 */ 45 | 46 | /*! ieee754. BSD-3-Clause License. Feross Aboukhadijeh */ 47 | 48 | /** 49 | * React Router v6.0.2 50 | * 51 | * Copyright (c) Remix Software Inc. 52 | * 53 | * This source code is licensed under the MIT license found in the 54 | * LICENSE.md file in the root directory of this source tree. 55 | * 56 | * @license MIT 57 | */ 58 | 59 | /** @license React v0.20.2 60 | * scheduler.production.min.js 61 | * 62 | * Copyright (c) Facebook, Inc. and its affiliates. 63 | * 64 | * This source code is licensed under the MIT license found in the 65 | * LICENSE file in the root directory of this source tree. 66 | */ 67 | 68 | /** @license React v17.0.2 69 | * react-dom.production.min.js 70 | * 71 | * Copyright (c) Facebook, Inc. and its affiliates. 72 | * 73 | * This source code is licensed under the MIT license found in the 74 | * LICENSE file in the root directory of this source tree. 75 | */ 76 | 77 | /** @license React v17.0.2 78 | * react-jsx-runtime.production.min.js 79 | * 80 | * Copyright (c) Facebook, Inc. and its affiliates. 81 | * 82 | * This source code is licensed under the MIT license found in the 83 | * LICENSE file in the root directory of this source tree. 84 | */ 85 | 86 | /** @license React v17.0.2 87 | * react.production.min.js 88 | * 89 | * Copyright (c) Facebook, Inc. and its affiliates. 90 | * 91 | * This source code is licensed under the MIT license found in the 92 | * LICENSE file in the root directory of this source tree. 93 | */ 94 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/static/js/main.a03b50d2.chunk.js: -------------------------------------------------------------------------------- 1 | (this.webpackJsonpspeechreactfrontend=this.webpackJsonpspeechreactfrontend||[]).push([[0],{101:function(e,t){},201:function(e,t){},203:function(e,t){},205:function(e,t){},206:function(e,t){},207:function(e,t){},208:function(e,t){},275:function(e,t,n){"use strict";n.r(t);n(224);var r=n(4),c=n.n(r),a=n(70),i=n.n(a),s=n(2),o=n.n(s),d=n(57),u=n(0),l=n(1),h=n(47),b=n(7),j=n(8),p=n(117),x=window.navigator.userAgent,g=x.indexOf("MSIE "),O=x.indexOf("Trident/"),f=g>0||O>0,k=x.indexOf("Edge/")>0,m=x.indexOf("Firefox")>0,y={auth:{clientId:"276f60f6-4d80-4c78-95c4-15defb2e6861",authority:"https://login.microsoftonline.com/".concat("6e8d4ac4-6168-4505-ae45-4206d841b472"),redirectUri:"http://localhost:3000/",postLogoutRedirectUri:"http://localhost:3000/"},cache:{cacheLocation:"localStorage",storeAuthStateInCookie:f||k||m},system:{loggerOptions:{loggerCallback:function(e,t,n){if(!n)switch(e){case p.a.Error:return void console.error(t);case p.a.Info:return void console.info(t);case p.a.Verbose:return void console.debug(t);case p.a.Warning:return void console.warn(t);default:return}}}}},S={scopes:["User.Read"]},v=n(49),w=n(160),A=n(118),C=n(91),T=n(13),R=function(){var e=Object(v.e)().instance;return Object(T.jsx)(A.a,{variant:"secondary",className:"ml-auto",drop:"left",title:"Sign In",children:Object(T.jsx)(C.a.Item,{as:"button",onClick:function(){var t;"popup"===(t="redirect")?e.loginPopup(S).catch((function(e){console.log(e)})):"redirect"===t&&e.loginRedirect(S).catch((function(e){console.log(e)}))},children:"Sign in using Redirect"})})},E=function(){var e=Object(v.e)().instance;return Object(T.jsx)(A.a,{variant:"secondary",className:"ml-auto",drop:"left",title:"Sign 
Out",children:Object(T.jsx)(C.a.Item,{as:"button",onClick:function(){var t;"popup"===(t="redirect")?e.logoutPopup({postLogoutRedirectUri:"/",mainWindowRedirectUri:"/"}):"redirect"===t&&e.logoutRedirect({postLogoutRedirectUri:"/"})},children:"Sign out using Redirect"})})},M=n(293),P=function(e){var t=Object(v.d)(),n=!0;function r(){return Object(T.jsx)(T.Fragment,{children:e.children})}function c(){return Object(T.jsxs)(T.Fragment,{children:[Object(T.jsxs)(M.a,{fluid:!0,children:[n?Object(T.jsxs)(w.a,{bg:"dark",variant:"dark",children:[Object(T.jsx)(w.a.Brand,{children:"AI-Powered Call Center"}),t?Object(T.jsx)(E,{}):Object(T.jsx)(R,{})]}):Object(T.jsx)("p",{}),Object(T.jsx)("br",{})]}),t?Object(T.jsx)(T.Fragment,{children:e.children}):Object(T.jsx)("p",{})]})}function a(){return n?Object(T.jsx)(c,{}):Object(T.jsx)(r,{})}return Object(T.jsx)(a,{})},I=n(41),N=n(200),L=n(136),D=n(100),F=function(e){return Object(T.jsxs)(N.a,{fluid:!0,children:[Object(T.jsxs)(L.a,{children:[Object(T.jsx)(D.a,{children:Object(T.jsx)(I.a,{bg:"secondary",border:"primary",style:{height:"275px"},children:Object(T.jsxs)(I.a.Body,{children:[Object(T.jsx)(I.a.Header,{children:"Profile:"}),Object(T.jsx)(I.a.Text,{children:e.profile})]})})}),Object(T.jsx)(D.a,{children:Object(T.jsx)(I.a,{bg:"secondary",border:"primary",style:{height:"275px"},children:Object(T.jsxs)(I.a.Body,{children:[Object(T.jsx)(I.a.Header,{children:"Dashboard"}),Object(T.jsx)(I.a.Text,{children:e.dashboard})]})})})]}),Object(T.jsx)("br",{}),Object(T.jsxs)(L.a,{children:[Object(T.jsx)(D.a,{children:Object(T.jsx)(I.a,{text:"success",bg:"dark",border:"primary",style:{height:"600px"},children:Object(T.jsxs)(I.a.Body,{children:[Object(T.jsx)(I.a.Header,{children:"Transcription Output Window:"}),Object(T.jsx)(I.a.Text,{children:e.text})]})})}),Object(T.jsx)(D.a,{children:Object(T.jsx)(I.a,{text:"success",bg:"dark",border:"primary",style:{height:"600px"},children:Object(T.jsxs)(I.a.Body,{children:[Object(T.jsx)(I.a.Header,{children:"NLP Output Window:"}),Object(T.jsx)(I.a.Text,{children:e.nlpOutput})]})})})]}),Object(T.jsx)(L.a,{children:Object(T.jsx)(D.a,{children:Object(T.jsx)(I.a,{text:"success",bg:"dark",border:"danger",style:{height:"200px"},children:Object(T.jsxs)(I.a.Body,{children:[Object(T.jsx)(I.a.Header,{children:"Debug Console Window:"}),Object(T.jsx)(I.a.Text,{children:e.debugData})]})})})})]})},U=n(137),z=n.n(U),B=n(213),W="https://aipoweredcallcenter.azurewebsites.net";function q(e){return H.apply(this,arguments)}function H(){return(H=Object(d.a)(o.a.mark((function e(t){var n,r,c,a,i,s,d,u;return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:if(n=new B.a,void 0!==(r=n.get("speech-token"))){e.next=24;break}return e.prev=3,console.log("Try getting token from the express backend"),c={"Content-Type":"application/json",Authorization:"Bearer ".concat(t),"Access-Control-Allow-Origin":"*"},console.log(c),e.next=9,z.a.get(W+"/api/get-speech-token",{headers:c});case 9:return a=e.sent,i=a.data.token,s=a.data.region,d=a.data.endpoint_id,n.set("speech-token",s+":"+i,{maxAge:540,path:"/"}),console.log("Token fetched from back-end: "+i),e.abrupt("return",{authToken:i,region:s,endpoint_id:d});case 18:return e.prev=18,e.t0=e.catch(3),console.log(e.t0.response.data),e.abrupt("return",{authToken:null,error:e.t0.response.data});case 22:e.next=27;break;case 24:return console.log("Token fetched from cookie: "+r),u=r.indexOf(":"),e.abrupt("return",{authToken:r.slice(u+1),region:r.slice(0,u)});case 27:case"end":return 
e.stop()}}),e,null,[[3,18]])})))).apply(this,arguments)}function _(e,t){return J.apply(this,arguments)}function J(){return(J=Object(d.a)(o.a.mark((function e(t,n){var r,c,a;return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.prev=0,r={transcript:t},c={"Content-Type":"application/json",Authorization:"Bearer ".concat(n)},e.next=5,z.a.post(W+"/api/ta-key-phrases",r,{headers:c});case 5:return a=e.sent,e.abrupt("return",a.data);case 9:return e.prev=9,e.t0=e.catch(0),e.abrupt("return",{keyPhrasesExtracted:"NoKP",entityExtracted:"NoEnt"});case 12:case"end":return e.stop()}}),e,null,[[0,9]])})))).apply(this,arguments)}var G=n(33),K=n(94),Q=n(45),V=n(79),X=function(e){var t="enabled.",n="enable",c=Object(v.e)(),a=c.instance,i=c.accounts,s=Object(r.useState)(null),o=Object(Q.a)(s,2),d=o[0],u=o[1];e.AudioEnabled?(t="enabled.",n="disable"):(t="disabled.",n="enable");var l="Click here to "+n+".",h="Start Mic Streaming",b="primary";return e.isStreaming?(h="Stop Mic Streaming",b="danger"):(h="Start Mic Streaming",b="primary"),Object(r.useEffect)((function(){d&&e.onMicRecordClick(d)}),[d]),Object(T.jsx)(T.Fragment,{children:Object(T.jsxs)("table",{children:[Object(T.jsxs)("tr",{height:"100",children:[Object(T.jsx)("td",{children:Object(T.jsxs)("strong",{children:["Audio Recording is ",t]})}),Object(T.jsx)("td",{children:Object(T.jsx)(V.a,{onClick:e.onToggleClick,children:l})})]}),Object(T.jsxs)("tr",{height:"100",children:[Object(T.jsx)("td",{children:Object(T.jsx)("strong",{children:"Start Or Stop Streaming"})}),Object(T.jsx)("td",{children:Object(T.jsx)(V.a,{variant:b,onClick:function(){(function(){var e=Object(K.a)(Object(K.a)({},S),{},{account:i[0],scopes:["api://6798e375-a31f-48b0-abcb-87f06d70d0b6/user_impersonation"]});a.acquireTokenSilent(e).then((function(e){u(e.accessToken)})).catch((function(t){a.acquireTokenPopup(e).then((function(e){u(e.accessToken)}))}))})()},children:h})})]})]})})},Y=function(e){var t=Object(v.e)(),n=t.instance,c=t.accounts,a=Object(r.useState)(null),i=Object(Q.a)(a,2),s=i[0],o=i[1];return!0?Object(T.jsx)(T.Fragment,{children:Object(T.jsxs)("table",{children:[Object(T.jsxs)("h5",{className:"card-title",children:["Welcome ",c[0].name]}),s?Object(T.jsx)("p",{children:"Access Token Acquired!"}):Object(T.jsx)(V.a,{variant:"secondary",onClick:function(){var e=Object(K.a)(Object(K.a)({},S),{},{account:c[0]});n.acquireTokenSilent(e).then((function(e){o(e.accessToken)})).catch((function(t){n.acquireTokenPopup(e).then((function(e){o(e.accessToken)}))}))},children:"Request Access Token"})]})}):Object(T.jsx)(T.Fragment,{children:Object(T.jsx)("table",{children:Object(T.jsx)("h5",{className:"card-title",children:"Welcome, Demo User!"})})})},Z=n(197),$=function(e){Object(b.a)(n,e);var t=Object(j.a)(n);function n(e){var r;return Object(u.a)(this,n),(r=t.call(this,e)).handleAudioRecordingSwitch=function(){r.state.AudioRecordingEnabled?r.setState({AudioRecordingEnabled:!1}):r.setState({AudioRecordingEnabled:!0})},r.handleMicRecorderClick=r.handleMicRecorderClick.bind(Object(h.a)(r)),r.handleAudioRecordingSwitch=r.handleAudioRecordingSwitch.bind(Object(h.a)(r)),r.state={accessToken:null,AudioRecordingEnabled:!0,isStreaming:!1,isHosted:!1,color:"white",value:"",displayText:"Transcribed text will show here when streaming.",displayNLPOutput:"This windows will display detected entities.",debugConsole:"Debug logs will be displayed here."},r}return Object(l.a)(n,[{key:"handleMicRecorderClick",value:function(){var e=Object(d.a)(o.a.mark((function e(t){var n,r;return 
o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:if(!this.state.isStreaming){e.next=7;break}return this.setState({debugConsole:"Stop Mic Event Received"}),e.next=4,this.setState({isStreaming:!1});case 4:return e.abrupt("return",null);case 7:return e.next=9,this.setState({isStreaming:!0});case 9:return n=function(e){return new Promise((function(t){return setTimeout(t,e)}))},e.next=12,this.InitializeStream(t);case 12:return r=e.sent,e.next=15,this.sttFromMic(r,t);case 15:return e.next=17,n(2e3);case 17:return this.setState({debugConsole:"Mic is listening for audio."}),this.setState({debugConsole:"Will check every 2 seconds for stop event"}),e.next=21,n(2e3);case 21:if(this.state.isStreaming){e.next=17;break}case 22:return e.next=24,this.stopMicStream(r);case 24:case"end":return e.stop()}}),e,this)})));return function(t){return e.apply(this,arguments)}}()},{key:"componentDidMount",value:function(){var e=Object(d.a)(o.a.mark((function e(){return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:case"end":return e.stop()}}),e)})));return function(){return e.apply(this,arguments)}}()},{key:"InitializeStream",value:function(){var e=Object(d.a)(o.a.mark((function e(t){var n,r,c,a,i;return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.next=2,q(t);case 2:return n=e.sent,r=n.endpoint_id,c=Z.SpeechConfig.fromAuthorizationToken(n.authToken,n.region),this.state.AudioRecordingEnabled&&(c.endpointId=r,c.setServiceProperty("clientConnectionId",this.state.value,Z.ServicePropertyChannel.UriQueryParameter)),c.speechRecognitionLanguage="en-US",a=Z.AudioConfig.fromDefaultMicrophoneInput(),i=new Z.SpeechRecognizer(c,a),e.abrupt("return",i);case 10:case"end":return e.stop()}}),e,this)})));return function(t){return e.apply(this,arguments)}}()},{key:"stopMicStream",value:function(){var e=Object(d.a)(o.a.mark((function e(t){return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.next=2,t.stopContinuousRecognitionAsync();case 2:return e.next=4,this.setState({isStreaming:!1});case 4:this.setState({debugConsole:"Mic stopped listening"});case 5:case"end":return e.stop()}}),e,this)})));return function(t){return e.apply(this,arguments)}}()},{key:"sttFromMic",value:function(){var e=Object(d.a)(o.a.mark((function e(t,n){var r,c,a=this;return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return r="",c=" ",t.sessionStarted=function(e,t){r="Session ID: "+t.sessionId,a.setState({displayText:r})},t.recognized=function(){var e=Object(d.a)(o.a.mark((function e(t,i){var s,d,u;return o.a.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:if(i.result.reason!==G.a.RecognizedSpeech){e.next=12;break}return r+="\n".concat(i.result.text),a.setState({displayText:r}),e.next=5,_(i.result.text,n);case 5:s=e.sent,(d=JSON.stringify(s.keyPhrasesExtracted)).length>15&&(c+="\n"+d,a.setState({displayNLPOutput:c})),(u=JSON.stringify(s.entityExtracted)).length>12&&(c+="\n"+u,a.setState({displayNLPOutput:c.replace("
","\n")})),e.next=13;break;case 12:i.result.reason===G.a.NoMatch&&(r+="\n");case 13:case"end":return e.stop()}}),e)})));return function(t,n){return e.apply(this,arguments)}}(),e.next=6,t.startContinuousRecognitionAsync();case 6:return e.next=8,this.setState({isStreaming:!0});case 8:case"end":return e.stop()}}),e,this)})));return function(t,n){return e.apply(this,arguments)}}()},{key:"render",value:function(){return Object(T.jsx)(T.Fragment,{children:Object(T.jsxs)(P,{children:[Object(T.jsx)(v.a,{children:Object(T.jsx)(F,{profile:Object(T.jsx)(Y,{processAccessToken:this.processAccessToken}),debugData:this.state.debugConsole,nlpOutput:this.state.displayNLPOutput,text:this.state.displayText,dashboard:Object(T.jsx)(X,{isStreaming:this.state.isStreaming,AudioEnabled:this.state.AudioRecordingEnabled,onToggleClick:this.handleAudioRecordingSwitch,onMicRecordClick:this.handleMicRecorderClick})})}),Object(T.jsx)(v.c,{children:Object(T.jsx)(F,{profile:Object(T.jsx)(Y,{}),debugData:this.state.debugConsole,nlpOutput:this.state.displayNLPOutput,text:this.state.displayText,dashboard:Object(T.jsx)(X,{isStreaming:this.state.isStreaming,AudioEnabled:this.state.AudioRecordingEnabled,onToggleClick:this.handleAudioRecordingSwitch,onMicRecordClick:this.handleMicRecorderClick})})})]})})}}]),n}(r.Component),ee=n(212),te=n(292),ne=n(291),re=n(210),ce=n.n(re),ae=Object(ne.a)({palette:{primary:{main:"#556cd6"},secondary:{main:"#19857b"},error:{main:ce.a.A400},background:{default:"#fff"}}}),ie=n(290),se=n(20),oe=new ie.a(y);!oe.getActiveAccount()&&oe.getAllAccounts().length>0&&oe.setActiveAccount(oe.getAllAccounts()[0]),oe.enableAccountStorageEvents(),oe.addEventCallback((function(e){if(e.eventType===se.a.LOGIN_SUCCESS&&e.payload.account){var t=e.payload.account;oe.setActiveAccount(t)}})),i.a.render(Object(T.jsx)(c.a.StrictMode,{children:Object(T.jsx)(ee.a,{children:Object(T.jsx)(te.a,{theme:ae,children:Object(T.jsx)(v.b,{instance:oe,children:Object(T.jsx)($,{})})})})}),document.getElementById("root"))},85:function(e,t){}},[[275,1,2]]]); 2 | //# sourceMappingURL=main.a03b50d2.chunk.js.map -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/static/js/main.a03b50d2.chunk.js.map: -------------------------------------------------------------------------------- 1 | 
{"version":3,"sources":["authConfig.js","components/SignInButton.jsx","components/SignOutButton.jsx","components/PageLayout.jsx","components/OutputWindows.jsx","token_util.js","components/Dashboard.jsx","components/Profile.jsx","App.jsx","styles/theme.js","index.js"],"names":["ua","window","navigator","userAgent","msie","indexOf","msie11","isIE","isEdge","isFirefox","msalConfig","auth","clientId","process","authority","redirectUri","postLogoutRedirectUri","cache","cacheLocation","storeAuthStateInCookie","system","loggerOptions","loggerCallback","level","message","containsPii","LogLevel","Error","console","error","Info","info","Verbose","debug","Warning","warn","loginRequest","scopes","SignInButton","instance","useMsal","DropdownButton","variant","className","drop","title","Dropdown","Item","as","onClick","loginType","loginPopup","catch","e","log","loginRedirect","SignOutButton","logoutType","logoutPopup","mainWindowRedirectUri","logoutRedirect","PageLayout","props","isAuthenticated","useIsAuthenticated","isHosted","DesktopLayout","children","HostedLayout","Container","fluid","Navbar","bg","Brand","Layout","OutputWindows","Row","Col","Card","border","style","height","Body","Header","Text","profile","dashboard","text","nlpOutput","debugData","BACKEND_API","getTokenOrRefresh","accessToken","a","cookie","Cookie","undefined","speechToken","get","headers","axios","res","token","data","region","endpoint_id","set","maxAge","path","authToken","response","idx","slice","getKeyPhrases","requestText","transcript","post","keyPhrasesExtracted","entityExtracted","Dashboard","record_status","verb","accounts","useState","setAccessToken","AudioEnabled","button_message","button_variant","isStreaming","useEffect","onMicRecordClick","Button","onToggleClick","request","account","acquireTokenSilent","then","acquireTokenPopup","RequestAccessToken","ProfileContent","name","speechsdk","require","App","handleAudioRecordingSwitch","state","AudioRecordingEnabled","setState","handleMicRecorderClick","bind","color","value","displayText","displayNLPOutput","debugConsole","this","delay","ms","Promise","setTimeout","InitializeStream","recognizer","sttFromMic","stopMicStream","tokenObj","customSpeechEndpoint","speechConfig","SpeechConfig","fromAuthorizationToken","endpointId","setServiceProperty","ServicePropertyChannel","UriQueryParameter","speechRecognitionLanguage","audioConfig","AudioConfig","fromDefaultMicrophoneInput","SpeechRecognizer","stopContinuousRecognitionAsync","resultText","nlpText","sessionStarted","s","sessionId","recognized","result","reason","ResultReason","RecognizedSpeech","nlpObj","keyPhraseText","JSON","stringify","length","entityText","replace","NoMatch","startContinuousRecognitionAsync","processAccessToken","Component","theme","createMuiTheme","palette","primary","main","secondary","red","A400","background","default","msalInstance","PublicClientApplication","getActiveAccount","getAllAccounts","setActiveAccount","enableAccountStorageEvents","addEventCallback","event","eventType","EventType","LOGIN_SUCCESS","payload","ReactDOM","render","StrictMode","ThemeProvider","document","getElementById"],"mappings":"gYAIMA,EAAKC,OAAOC,UAAUC,UACtBC,EAAOJ,EAAGK,QAAQ,SAClBC,EAASN,EAAGK,QAAQ,YAGpBE,EAAOH,EAAO,GAAKE,EAAS,EAC5BE,EAHSR,EAAGK,QAAQ,SAGF,EAClBI,EAHUT,EAAGK,QAAQ,WAGC,EAGfK,EAAa,CACtBC,KAAM,CACFC,SAAUC,uCACVC,UAAU,qCAAD,OAAuCD,wCAChDE,YAAaF,yBACbG,sBAAuBH,0BAE3BI,MAAO,CACHC,cAAe,eACfC,uBAAwBZ,GAAQC,GAAUC,GAE9CW,OAAQ,CACJC,cAAe,CACXC,eAAgB,SAACC,EAAOC,EAASC,GAC7B,IAAIA,EAGJ,OAAQF,GACJ,KAAKG,IAASC,MAEV,YADAC,QA
AQC,MAAML,GAElB,KAAKE,IAASI,KAEV,YADAF,QAAQG,KAAKP,GAEjB,KAAKE,IAASM,QAEV,YADAJ,QAAQK,MAAMT,GAElB,KAAKE,IAASQ,QAEV,YADAN,QAAQO,KAAKX,GAEjB,QACI,YAQXY,EAAe,CACxBC,OAAQ,CAAC,c,0CC7CAC,EAAe,WACxB,IAAQC,EAAaC,cAAbD,SAaR,OACI,cAACE,EAAA,EAAD,CAAgBC,QAAQ,YAAYC,UAAU,UAAUC,KAAK,OAAOC,MAAM,UAA1E,SAEI,cAACC,EAAA,EAASC,KAAV,CAAeC,GAAG,SAASC,QAAS,WAdxB,IAACC,EACC,WADDA,EAcyC,YAZtDX,EAASY,WAAWf,GAAcgB,OAAM,SAAAC,GACpCzB,QAAQ0B,IAAID,MAEK,aAAdH,GACPX,EAASgB,cAAcnB,GAAcgB,OAAM,SAAAC,GACvCzB,QAAQ0B,IAAID,OAOhB,uCClBCG,EAAgB,WACzB,IAAQjB,EAAaC,cAAbD,SAcR,OACI,cAACE,EAAA,EAAD,CAAgBC,QAAQ,YAAYC,UAAU,UAAUC,KAAK,OAAOC,MAAM,WAA1E,SAEI,cAACC,EAAA,EAASC,KAAV,CAAeC,GAAG,SAASC,QAAS,WAfvB,IAACQ,EACC,WADDA,EAeyC,YAbvDlB,EAASmB,YAAY,CACjB1C,sBAAuB,IACvB2C,sBAAuB,MAEL,aAAfF,GACPlB,EAASqB,eAAe,CACpB5C,sBAAuB,OAO3B,wC,SCdC6C,EAAa,SAACC,GACvB,IAAMC,EAAkBC,cAIhBC,GAAW,EAKnB,SAASC,IACL,OAAO,mCAAGJ,EAAMK,WAGpB,SAASC,IAEL,OACI,qCACA,eAACC,EAAA,EAAD,CAAWC,OAAK,EAAhB,UACEL,EAAW,eAACM,EAAA,EAAD,CAAQC,GAAG,OAAO9B,QAAQ,OAA1B,UACT,cAAC6B,EAAA,EAAOE,MAAR,qCACEV,EAAkB,cAAC,EAAD,IAAoB,cAAC,EAAD,OAChC,sBACZ,0BAEEA,EAAkB,mCAAGD,EAAMK,WAAe,yBAKpD,SAASO,IAEL,OAAIT,EACO,cAACG,EAAD,IAEA,cAACF,EAAD,IAIf,OACI,cAACQ,EAAD,K,mCC9CKC,EAAgB,SAACb,GAC1B,OACE,eAAC,IAAD,CAAWQ,OAAK,EAAhB,UACA,eAACM,EAAA,EAAD,WACE,cAACC,EAAA,EAAD,UACE,cAACC,EAAA,EAAD,CAAMN,GAAG,YAAYO,OAAO,UAAUC,MAAO,CAAEC,OAAQ,SAAvD,SACE,eAACH,EAAA,EAAKI,KAAN,WACA,cAACJ,EAAA,EAAKK,OAAN,uBACA,cAACL,EAAA,EAAKM,KAAN,UAAYtB,EAAMuB,iBAItB,cAACR,EAAA,EAAD,UACE,cAACC,EAAA,EAAD,CAAMN,GAAG,YAAYO,OAAO,UAAUC,MAAO,CAAEC,OAAQ,SAAvD,SACE,eAACH,EAAA,EAAKI,KAAN,WACA,cAACJ,EAAA,EAAKK,OAAN,wBACA,cAACL,EAAA,EAAKM,KAAN,UAAYtB,EAAMwB,sBAKxB,uBACA,eAACV,EAAA,EAAD,WACE,cAACC,EAAA,EAAD,UACA,cAACC,EAAA,EAAD,CAAMS,KAAK,UAAUf,GAAG,OAAOO,OAAO,UAAUC,MAAO,CAAEC,OAAQ,SAAjE,SACE,eAACH,EAAA,EAAKI,KAAN,WACA,cAACJ,EAAA,EAAKK,OAAN,2CACA,cAACL,EAAA,EAAKM,KAAN,UAAYtB,EAAMyB,cAIpB,cAACV,EAAA,EAAD,UACA,cAACC,EAAA,EAAD,CAAMS,KAAK,UAAUf,GAAG,OAAOO,OAAO,UAAUC,MAAO,CAAEC,OAAQ,SAAjE,SACE,eAACH,EAAA,EAAKI,KAAN,WACA,cAACJ,EAAA,EAAKK,OAAN,iCACA,cAACL,EAAA,EAAKM,KAAN,UAAYtB,EAAM0B,sBAKtB,cAACZ,EAAA,EAAD,UACE,cAACC,EAAA,EAAD,UACA,cAACC,EAAA,EAAD,CAAMS,KAAK,UAAUf,GAAG,OAAOO,OAAO,SAASC,MAAO,CAAEC,OAAQ,SAAhE,SACE,eAACH,EAAA,EAAKI,KAAN,WACA,cAACJ,EAAA,EAAKK,OAAN,oCACA,cAACL,EAAA,EAAKM,KAAN,UAAYtB,EAAM2B,yB,2BCjDtBC,EAAc7E,gDAEb,SAAe8E,EAAtB,kC,4CAAO,WAAiCC,GAAjC,6BAAAC,EAAA,yDACGC,EAAS,IAAIC,SAICC,KAHdC,EAAcH,EAAOI,IAAI,iBAF5B,iCAQKtE,QAAQ0B,IAAI,8CACN6C,EAAU,CAAC,eAAgB,mBAAoB,cAAgB,UAAhB,OAA2BP,GAAe,8BAA+B,KAC9HhE,QAAQ0B,IAAI6C,GAVjB,SAWuBC,IAAMF,IAAIR,EAAc,wBAAyB,CAACS,YAXzE,cAWWE,EAXX,OAYWC,EAAQD,EAAIE,KAAKD,MACjBE,EAASH,EAAIE,KAAKC,OAClBC,EAAcJ,EAAIE,KAAKE,YAC7BX,EAAOY,IAAI,eAAgBF,EAAS,IAAMF,EAAO,CAACK,OAAQ,IAAKC,KAAM,MAErEhF,QAAQ0B,IAAI,gCAAkCgD,GAjBnD,kBAkBY,CAAEO,UAAWP,EAAOE,OAAQA,EAAQC,YAAaA,IAlB7D,yCAoBK7E,QAAQ0B,IAAI,KAAIwD,SAASP,MApB9B,kBAqBY,CAAEM,UAAW,KAAMhF,MAAO,KAAIiF,SAASP,OArBnD,uCAwBC3E,QAAQ0B,IAAI,8BAAgC2C,GACtCc,EAAMd,EAAY5F,QAAQ,KAzBjC,kBA0BQ,CAAEwG,UAAWZ,EAAYe,MAAMD,EAAM,GAAIP,OAAQP,EAAYe,MAAM,EAAGD,KA1B9E,2D,sBA8BA,SAAeE,EAAtB,oC,4CAAO,WAA6BC,EAAatB,GAA1C,mBAAAC,EAAA,sEAGOU,EAAO,CAACY,WAAYD,GACpBf,EAAU,CAAE,eAAgB,mBAAoB,cAAgB,UAAhB,OAA2BP,IAJlF,SAKmBQ,IAAMgB,KAAK1B,EAAc,sBAAuBa,EAAM,CAACJ,YAL1E,cAKOE,EALP,yBAOQA,EAAIE,MAPZ,yDAUQ,CAACc,oBAAqB,OAAQC,gBAAiB,UAVvD,0D,0DC7BMC,EAAY,SAACzD,GACtB,IAAI0D,EAAgB,WAChBC,EAAO,SACX,EAA+BjF,cAAvBD,EAAR,EAAQA,SAAUmF,EAAlB,EAAkBA,SAClB,EAAsCC,mBAAS,MAA/C,mBAAO/B,EAAP,KAAoBgC,EAApB,KAEI9D,EAAM+D,cACNL,EAAgB,WAChBC,EAAO,YAEPD,EAAgB,YAChBC,EAAO,UAGX,IAAIjG,EAAU,iBAAmBiG,EAAO,IACpCK,EAAi
B,sBACjBC,EAAiB,UAuCrB,OAtCIjE,EAAMkE,aACNF,EAAiB,qBACjBC,EAAiB,WAEjBD,EAAiB,sBACjBC,EAAiB,WA2BrBE,qBAAW,WACHrC,GACA9B,EAAMoE,iBAAiBtC,KAE5B,CAACA,IAGA,mCACI,kCACI,qBAAIX,OAAO,MAAX,UACI,6BAAI,yDAA4BuC,OAChC,6BAAI,cAACW,EAAA,EAAD,CAAQlF,QAASa,EAAMsE,cAAvB,SAAuC5G,SAE/C,qBAAIyD,OAAO,MAAX,UACI,6BAAI,+DACJ,6BAAI,cAACkD,EAAA,EAAD,CAAQzF,QAASqF,EAAgB9E,QAvBrD,YAhBA,WACI,IAAMoF,EAAO,2BACNjG,GADM,IAETkG,QAASZ,EAAS,GAClBrF,OAAQ,CAAE,mEAGdE,EAASgG,mBAAmBF,GAASG,MAAK,SAAC1B,GACvCc,EAAed,EAASlB,gBACzBxC,OAAM,SAACC,GACNd,EAASkG,kBAAkBJ,GAASG,MAAK,SAAC1B,GACtCc,EAAed,EAASlB,oBAOhC8C,IAqBgB,SAAyDZ,eChEpEa,EAAiB,SAAC7E,GAC3B,MAA+BtB,cAAvBD,EAAR,EAAQA,SAAUmF,EAAlB,EAAkBA,SAClB,EAAsCC,mBAAS,MAA/C,mBAAO/B,EAAP,KAAoBgC,EAApB,KA0BA,OAtBmB,EAwBX,mCACI,kCACE,qBAAIjF,UAAU,aAAd,qBAAoC+E,EAAS,GAAGkB,QAC/ChD,EACC,uDAEA,cAACuC,EAAA,EAAD,CAAQzF,QAAQ,YAAYO,QAzB5C,WACI,IAAMoF,EAAO,2BACNjG,GADM,IAETkG,QAASZ,EAAS,KAGtBnF,EAASgG,mBAAmBF,GAASG,MAAK,SAAC1B,GACvCc,EAAed,EAASlB,gBACzBxC,OAAM,SAACC,GACNd,EAASkG,kBAAkBJ,GAASG,MAAK,SAAC1B,GACtCc,EAAed,EAASlB,oBAepB,uCAOR,mCACI,gCACI,oBAAIjD,UAAU,aAAd,sCClCdkG,EAAYC,EAAQ,KAILC,E,kDACnB,WAAYjF,GAAQ,IAAD,8BACf,cAAMA,IA6CVkF,2BAA6B,WAEvB,EAAKC,MAAMC,sBACb,EAAKC,SAAS,CAACD,uBAAuB,IAEtC,EAAKC,SAAS,CAACD,uBAAuB,KAhDtC,EAAKE,uBAAyB,EAAKA,uBAAuBC,KAA5B,gBAC9B,EAAKL,2BAA6B,EAAKA,2BAA2BK,KAAhC,gBAElC,EAAKJ,MAAQ,CACXrD,YAAa,KACbsD,uBAAuB,EACvBlB,aAAa,EACb/D,UAAU,EACVqF,MAAO,QACPC,MAAO,GACPC,YAAa,kDACbC,iBAAkB,+CAClBC,aAAc,sCAfD,E,iGAmBnB,WAA6B9D,GAA7B,iBAAAC,EAAA,0DAKM8D,KAAKV,MAAMjB,YALjB,uBAMI2B,KAAKR,SAAS,CAACO,aAAe,4BANlC,SAOUC,KAAKR,SAAS,CAACnB,aAAa,IAPtC,gCAQW,MARX,uBAUU2B,KAAKR,SAAS,CAACnB,aAAa,IAVtC,cAaQ4B,EAAQ,SAAAC,GAAE,OAAI,IAAIC,SAAQ,SAAAzD,GAAG,OAAI0D,WAAW1D,EAAKwD,OAbzD,UAc6BF,KAAKK,iBAAiBpE,GAdnD,eAcUqE,EAdV,iBAeUN,KAAKO,WAAWD,EAAYrE,GAftC,yBAgBUgE,EAAM,KAhBhB,eAkBMD,KAAKR,SAAS,CAACO,aAAe,gCAC9BC,KAAKR,SAAS,CAACO,aAAe,8CAnBpC,UAoBYE,EAAM,KApBlB,WAsBWD,KAAKV,MAAMjB,YAtBtB,0CAuBU2B,KAAKQ,cAAcF,GAvB7B,iD,6HAqCA,sBAAApE,EAAA,0F,2HAIF,WAAuBD,GAAvB,uBAAAC,EAAA,sEAC2BF,EAAkBC,GAD7C,cACUwE,EADV,OAEUC,EAAuBD,EAAS3D,YAChC6D,EAAezB,EAAU0B,aAAaC,uBAAuBJ,EAASvD,UAAWuD,EAAS5D,QAC5FmD,KAAKV,MAAMC,wBAGboB,EAAaG,WAAaJ,EAG1BC,EAAaI,mBAAmB,qBAAsBf,KAAKV,MAAMM,MAAOV,EAAU8B,uBAAuBC,oBAG3GN,EAAaO,0BAA4B,QACnCC,EAAcjC,EAAUkC,YAAYC,6BACpCf,EAAa,IAAIpB,EAAUoC,iBAAiBX,EAAcQ,GAfpE,kBAgBWb,GAhBX,iD,yHAoBE,WAAoBA,GAApB,SAAApE,EAAA,sEACQoE,EAAWiB,iCADnB,uBAEQvB,KAAKR,SAAS,CAACnB,aAAc,IAFrC,OAGE2B,KAAKR,SAAS,CAACO,aAAe,0BAHhC,gD,sHAOA,WAAiBO,EAAYrE,GAA7B,wBAAAC,EAAA,6DAEMsF,EAAa,GACbC,EAAU,IAEdnB,EAAWoB,eAAiB,SAACC,EAAGjI,GAC9B8H,EAAa,eAAiB9H,EAAEkI,UAChC,EAAKpC,SAAS,CACZK,YAAa2B,KAIjBlB,EAAWuB,WAAX,uCAAwB,WAAOF,EAAGjI,GAAV,mBAAAwC,EAAA,yDAEnBxC,EAAEoI,OAAOC,SAAWC,IAAaC,iBAFd,wBAIdT,GAAU,YAAS9H,EAAEoI,OAAOlG,MAC5B,EAAK4D,SAAS,CACZK,YAAa2B,IAND,SAUOlE,EAAc5D,EAAEoI,OAAOlG,KAAMK,GAVpC,OAURiG,EAVQ,QAaRC,EAAgBC,KAAKC,UAAUH,EAAOxE,sBAE3B4E,OAAS,KACtBb,GAAW,KAAOU,EAClB,EAAK3C,SAAS,CAAEM,iBAAkB2B,MAIhCc,EAAaH,KAAKC,UAAUH,EAAOvE,kBAE3B2E,OAAS,KACnBb,GAAW,KAAOc,EAClB,EAAK/C,SAAS,CAAEM,iBAAkB2B,EAAQe,QAAQ,QAAS,SAzBjD,wBA6BT9I,EAAEoI,OAAOC,SAAWC,IAAaS,UAEtCjB,GAAU,MA/BI,4CAAxB,wDAZF,SA8CQlB,EAAWoC,kCA9CnB,uBA+CQ1C,KAAKR,SAAS,CAACnB,aAAc,IA/CrC,gD,6EAkDA,WACE,OACE,mCACA,eAAC,EAAD,WACE,cAAC,IAAD,UACE,cAAC,EAAD,CAAe3C,QAAS,cAAC,EAAD,CAAgBiH,mBAAoB3C,KAAK2C,qBAAuB7G,UAAWkE,KAAKV,MAAMS,aAAclE,UAAWmE,KAAKV,MAAMQ,iBAAkBlE,KAAMoE,KAAKV,MAAMO,YAAalE,UAAW,cAAC,EAAD,CAAW0C,YAAa2B,KAAKV,MAAMjB,YAAaH,aAAc8B,KAAKV,MAAMC,sBAAuBd,cAAeuB,KAAKX,2BAA4Bd,iBAAkByB,KAAKP,6BAEtX,cAAC,IAAD,UACE,cAAC,EAAD,CAAe/D,QAAS,cAAC,EAAD,IAAmBI,UAAWkE,KA
AKV,MAAMS,aAAclE,UAAWmE,KAAKV,MAAMQ,iBAAkBlE,KAAMoE,KAAKV,MAAMO,YAAalE,UAAW,cAAC,EAAD,CAAW0C,YAAa2B,KAAKV,MAAMjB,YAAaH,aAAc8B,KAAKV,MAAMC,sBAAuBd,cAAeuB,KAAKX,2BAA4Bd,iBAAkByB,KAAKP,sC,GAlJhTmD,a,mDChBpBC,GAAQC,aAAe,CAClCC,QAAS,CACPC,QAAS,CACPC,KAAM,WAERC,UAAW,CACTD,KAAM,WAER/K,MAAO,CACL+K,KAAME,KAAIC,MAEZC,WAAY,CACVC,QAAS,W,mBCDTC,GAAe,IAAIC,KAAwBzM,IAG5CwM,GAAaE,oBAAsBF,GAAaG,iBAAiBpB,OAAS,GAE7EiB,GAAaI,iBAAiBJ,GAAaG,iBAAiB,IAI9DH,GAAaK,6BAEbL,GAAaM,kBAAiB,SAACC,GAC7B,GAAIA,EAAMC,YAAcC,KAAUC,eAAiBH,EAAMI,QAAQvF,QAAS,CACxE,IAAMA,EAAUmF,EAAMI,QAAQvF,QAC9B4E,GAAaI,iBAAiBhF,OAKlCwF,IAASC,OACP,cAAC,IAAMC,WAAP,UACE,cAAC,KAAD,UACE,cAACC,GAAA,EAAD,CAAezB,MAAOA,GAAtB,SACE,cAAC,IAAD,CAAcjK,SAAU2K,GAAxB,SACA,cAAC,EAAD,YAKNgB,SAASC,eAAe,U","file":"static/js/main.a03b50d2.chunk.js","sourcesContent":["import { LogLevel } from \"@azure/msal-browser\";\r\n// Browser check variables\r\n// If you support IE, our recommendation is that you sign-in using Redirect APIs\r\n// If you as a developer are testing using Edge InPrivate mode, please add \"isEdge\" to the if check\r\nconst ua = window.navigator.userAgent;\r\nconst msie = ua.indexOf(\"MSIE \");\r\nconst msie11 = ua.indexOf(\"Trident/\");\r\nconst msedge = ua.indexOf(\"Edge/\");\r\nconst firefox = ua.indexOf(\"Firefox\");\r\nconst isIE = msie > 0 || msie11 > 0;\r\nconst isEdge = msedge > 0;\r\nconst isFirefox = firefox > 0; // Only needed if you need to support the redirect flow in Firefox incognito\r\n\r\n// Config object to be passed to Msal on creation\r\nexport const msalConfig = {\r\n auth: {\r\n clientId: process.env.REACT_APP_CLIENT_ID,\r\n authority: `https://login.microsoftonline.com/${process.env.REACT_APP_TENANT_ID}`,\r\n redirectUri: process.env.REACT_APP_REDIRECT_URI,\r\n postLogoutRedirectUri: process.env.REACT_APP_POST_LOGOUT_REDIRECT_URI\r\n },\r\n cache: {\r\n cacheLocation: \"localStorage\",\r\n storeAuthStateInCookie: isIE || isEdge || isFirefox\r\n },\r\n system: {\r\n loggerOptions: {\r\n loggerCallback: (level, message, containsPii) => {\r\n if (containsPii) {\t\r\n return;\t\r\n }\r\n switch (level) {\t\r\n case LogLevel.Error:\t\r\n console.error(message);\t\r\n return;\t\r\n case LogLevel.Info:\t\r\n console.info(message);\t\r\n return;\t\r\n case LogLevel.Verbose:\t\r\n console.debug(message);\t\r\n return;\t\r\n case LogLevel.Warning:\t\r\n console.warn(message);\t\r\n return;\t\r\n default:\r\n return;\r\n }\r\n }\r\n }\r\n }\r\n};\r\n\r\n// Add here scopes for id token to be used at MS Identity Platform endpoints.\r\nexport const loginRequest = {\r\n scopes: [\"User.Read\"]\r\n};\r\n\r\n// Add here the endpoints for MS Graph API services you would like to use.\r\nexport const graphConfig = {\r\n graphMeEndpoint: \"https://graph.microsoft.com/v1.0/me\"\r\n};","import React from \"react\";\r\nimport { useMsal } from \"@azure/msal-react\";\r\nimport { loginRequest } from \"../authConfig\";\r\nimport DropdownButton from \"react-bootstrap/DropdownButton\";\r\nimport Dropdown from \"react-bootstrap/esm/Dropdown\";\r\n\r\n/**\r\n * Renders a drop down button with child buttons for logging in with a popup or redirect\r\n */\r\nexport const SignInButton = () => {\r\n const { instance } = useMsal();\r\n\r\n const handleLogin = (loginType) => {\r\n if (loginType === \"popup\") {\r\n instance.loginPopup(loginRequest).catch(e => {\r\n console.log(e);\r\n });\r\n } else if (loginType === \"redirect\") {\r\n instance.loginRedirect(loginRequest).catch(e => {\r\n console.log(e);\r\n });\r\n }\r\n }\r\n return (\r\n \r\n {/* 
handleLogin(\"popup\")}>Sign in using Popup */}\r\n handleLogin(\"redirect\")}>Sign in using Redirect\r\n \r\n )\r\n}","import React from \"react\";\r\nimport { useMsal } from \"@azure/msal-react\";\r\nimport DropdownButton from \"react-bootstrap/DropdownButton\";\r\nimport Dropdown from \"react-bootstrap/esm/Dropdown\";\r\n\r\n/**\r\n * Renders a sign-out button\r\n */\r\nexport const SignOutButton = () => {\r\n const { instance } = useMsal();\r\n\r\n const handleLogout = (logoutType) => {\r\n if (logoutType === \"popup\") {\r\n instance.logoutPopup({\r\n postLogoutRedirectUri: \"/\",\r\n mainWindowRedirectUri: \"/\"\r\n });\r\n } else if (logoutType === \"redirect\") {\r\n instance.logoutRedirect({\r\n postLogoutRedirectUri: \"/\",\r\n });\r\n }\r\n }\r\n return (\r\n \r\n {/* handleLogout(\"popup\")}>Sign out using Popup */}\r\n handleLogout(\"redirect\")}>Sign out using Redirect\r\n \r\n )\r\n}","import React from \"react\";\r\nimport Navbar from \"react-bootstrap/Navbar\";\r\nimport { useIsAuthenticated } from \"@azure/msal-react\";\r\nimport { SignInButton } from \"./SignInButton\";\r\nimport { SignOutButton } from \"./SignOutButton\";\r\nimport { Container } from \"reactstrap\";\r\n// import { ConsoleLoggingListener } from \"microsoft-cognitiveservices-speech-sdk/distrib/lib/src/common.browser/ConsoleLoggingListener\";\r\n\r\n/**\r\n * Renders the navbar component with a sign-in or sign-out button depending on whether or not a user is authenticated\r\n * @param props \r\n */\r\nexport const PageLayout = (props) => {\r\n const isAuthenticated = useIsAuthenticated();\r\n const { REACT_APP_PLATFORM } = process.env;\r\n\r\n if (REACT_APP_PLATFORM === \"hosted\") {\r\n var isHosted = true;\r\n } else {\r\n var isHosted = false;\r\n }\r\n\r\n function DesktopLayout() {\r\n return <>{props.children};\r\n }\r\n\r\n function HostedLayout() {\r\n \r\n return (\r\n <>\r\n \r\n { isHosted ? \r\n AI-Powered Call Center\r\n { isAuthenticated ? : }\r\n :

}\r\n
\r\n
\r\n { isAuthenticated ? <>{props.children} :

}\r\n \r\n )\r\n }\r\n\r\n function Layout() {\r\n\r\n if (isHosted) {\r\n return \r\n } else {\r\n return \r\n }\r\n }\r\n\r\n return (\r\n \r\n );\r\n};\r\n","import React from \"react\";\r\nimport Card from 'react-bootstrap/Card';\r\nimport Container from 'react-bootstrap/Container';\r\nimport Row from 'react-bootstrap/Row';\r\nimport Col from 'react-bootstrap/Col';\r\n\r\nexport const OutputWindows = (props) => {\r\n return (\r\n \r\n \r\n \r\n \r\n \r\n Profile:\r\n {props.profile}\r\n \r\n \r\n \r\n \r\n \r\n \r\n Dashboard\r\n {props.dashboard}\r\n \r\n \r\n \r\n \r\n
\r\n \r\n \r\n \r\n \r\n Transcription Output Window:\r\n {props.text}\r\n \r\n \r\n \r\n \r\n \r\n \r\n NLP Output Window:\r\n {props.nlpOutput}\r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n Debug Console Window:\r\n {props.debugData}\r\n \r\n \r\n \r\n \r\n
\r\n );\r\n};","import axios from 'axios';\r\nimport Cookie from 'universal-cookie';\r\nconst BACKEND_API = process.env.REACT_APP_BACKEND_API\r\n\r\nexport async function getTokenOrRefresh(accessToken) {\r\n const cookie = new Cookie();\r\n const speechToken = cookie.get('speech-token');\r\n \r\n\r\n if (speechToken === undefined) {\r\n try {\r\n\r\n console.log('Try getting token from the express backend');\r\n const headers = {'Content-Type': 'application/json', 'Authorization': `Bearer ${accessToken}`, 'Access-Control-Allow-Origin': '*'}\r\n console.log(headers)\r\n const res = await axios.get(BACKEND_API + '/api/get-speech-token', {headers});\r\n const token = res.data.token;\r\n const region = res.data.region;\r\n const endpoint_id = res.data.endpoint_id\r\n cookie.set('speech-token', region + ':' + token, {maxAge: 540, path: '/'});\r\n\r\n console.log('Token fetched from back-end: ' + token);\r\n return { authToken: token, region: region, endpoint_id: endpoint_id };\r\n } catch (err) {\r\n console.log(err.response.data);\r\n return { authToken: null, error: err.response.data };\r\n }\r\n } else {\r\n console.log('Token fetched from cookie: ' + speechToken);\r\n const idx = speechToken.indexOf(':');\r\n return { authToken: speechToken.slice(idx + 1), region: speechToken.slice(0, idx) };\r\n }\r\n}\r\n\r\nexport async function getKeyPhrases(requestText, accessToken) { \r\n try{\r\n //Key Phrase extraction\r\n const data = {transcript: requestText};\r\n const headers = { 'Content-Type': 'application/json', 'Authorization': `Bearer ${accessToken}`};\r\n const res = await axios.post(BACKEND_API + '/api/ta-key-phrases', data, {headers}); \r\n \r\n return res.data;\r\n //return {keyPhrasesExtracted: keyPhrasesExtracted};\r\n } catch (err) { \r\n return {keyPhrasesExtracted: \"NoKP\", entityExtracted: \"NoEnt\"};\r\n }\r\n\r\n}\r\n\r\nexport async function getKeyPhrasesOld(requestText, accessToken) { \r\n\r\n try{\r\n //Key Phrase extraction\r\n const data = {transcript: requestText};\r\n const headers = { 'Content-Type': 'application/json', 'Authorization': `Bearer ${accessToken}`};\r\n \r\n const res = await axios.post('/api/ta-key-phrases', data, {headers}); \r\n //const keyPhrasesExtracted = JSON.stringify(res.body.keyPhraseResponse); \r\n \r\n return res.data;\r\n //return {keyPhrasesExtracted: keyPhrasesExtracted};\r\n } catch (err) { \r\n return {keyPhrasesExtracted: \"None\"};\r\n }\r\n}","import Button from 'react-bootstrap/Button';\r\nimport { loginRequest } from \"../authConfig\";\r\nimport { useMsal} from \"@azure/msal-react\";\r\nimport { useState, useEffect } from \"react\";\r\n\r\nexport const Dashboard = (props) => {\r\n let record_status = \"enabled.\"\r\n let verb = \"enable\"\r\n const { instance, accounts } = useMsal();\r\n const [accessToken, setAccessToken] = useState(null);\r\n\r\n if (props.AudioEnabled) {\r\n record_status = \"enabled.\"\r\n verb = \"disable\"\r\n } else {\r\n record_status = \"disabled.\"\r\n verb = \"enable\"\r\n }\r\n\r\n let message = \"Click here to \" + verb + \".\"\r\n let button_message = \"Start Mic Streaming\"\r\n let button_variant = \"primary\"\r\n if (props.isStreaming) {\r\n button_message = \"Stop Mic Streaming\"\r\n button_variant = \"danger\"\r\n } else {\r\n button_message = \"Start Mic Streaming\"\r\n button_variant = \"primary\"\r\n }\r\n\r\n function RequestAccessToken() {\r\n const request = {\r\n ...loginRequest,\r\n account: accounts[0],\r\n scopes: [ \"api://6798e375-a31f-48b0-abcb-87f06d70d0b6/user_impersonation\" ]\r\n 
};\r\n //silently acquire an access token\r\n instance.acquireTokenSilent(request).then((response) => {\r\n setAccessToken(response.accessToken);\r\n }).catch((e) => {\r\n instance.acquireTokenPopup(request).then((response) => {\r\n setAccessToken(response.accessToken);\r\n });\r\n });\r\n }\r\n\r\n function ClickHandler() {\r\n if (process.env.REACT_APP_PLATFORM === \"hosted\") {\r\n RequestAccessToken()\r\n } else {\r\n setAccessToken(\"fake_token\")\r\n }\r\n }\r\n\r\n useEffect( () => {\r\n if (accessToken) {\r\n props.onMicRecordClick(accessToken)\r\n } \r\n }, [accessToken])\r\n\r\n return(\r\n <>\r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n
Audio Recording is {record_status}
Start Or Stop Streaming
\r\n \r\n )\r\n};","import { useMsal} from \"@azure/msal-react\";\r\nimport Button from \"react-bootstrap/Button\";\r\nimport { loginRequest } from \"../authConfig\";\r\nimport { useState, useEffect } from \"react\";\r\n\r\nexport const ProfileContent = (props) => {\r\n const { instance, accounts } = useMsal();\r\n const [accessToken, setAccessToken] = useState(null);\r\n const { REACT_APP_PLATFORM } = process.env;\r\n\r\n if (REACT_APP_PLATFORM === \"hosted\") {\r\n var isHosted = true;\r\n } else {\r\n var isHosted = false;\r\n }\r\n\r\n function RequestAccessToken() {\r\n const request = {\r\n ...loginRequest,\r\n account: accounts[0]\r\n };\r\n //silently acquire an access token\r\n instance.acquireTokenSilent(request).then((response) => {\r\n setAccessToken(response.accessToken);\r\n }).catch((e) => {\r\n instance.acquireTokenPopup(request).then((response) => {\r\n setAccessToken(response.accessToken);\r\n });\r\n });\r\n\r\n }\r\n\r\n\r\n if (isHosted) {\r\n return (\r\n <>\r\n \r\n
Welcome {accounts[0].name}
\r\n {accessToken ? \r\n

Access Token Acquired!

\r\n :\r\n \r\n }\r\n
\r\n \r\n );\r\n } else {\r\n return (\r\n <>\r\n \r\n
Welcome, Demo User!
\r\n
\r\n \r\n )\r\n }\r\n };","import React, { Component } from 'react';\r\nimport { loginRequest } from \"./authConfig\";\r\nimport { useState, useEffect } from \"react\";\r\nimport { useMsal} from \"@azure/msal-react\";\r\nimport { AuthenticatedTemplate, UnauthenticatedTemplate } from \"@azure/msal-react\";\r\nimport { PageLayout } from \"./components/PageLayout\";\r\nimport { OutputWindows } from \"./components/OutputWindows\";\r\nimport { getKeyPhrases, getTokenOrRefresh } from './token_util.js';\r\nimport { ResultReason } from 'microsoft-cognitiveservices-speech-sdk';\r\nimport { Dashboard } from \"./components/Dashboard.jsx\";\r\n// import { ConsoleLoggingListener } from 'microsoft-cognitiveservices-speech-sdk/distrib/lib/src/common.browser/ConsoleLoggingListener';\r\nimport { ProfileContent } from \"./components/Profile.jsx\";\r\n\r\n\r\n//Set Config\r\n// let config = require('./config.json')\r\nconst speechsdk = require('microsoft-cognitiveservices-speech-sdk')\r\n\r\n\r\n// Start App\r\nexport default class App extends Component {\r\n constructor(props) {\r\n super(props);\r\n\r\n this.handleMicRecorderClick = this.handleMicRecorderClick.bind(this);\r\n this.handleAudioRecordingSwitch = this.handleAudioRecordingSwitch.bind(this);\r\n\r\n this.state = {\r\n accessToken: null,\r\n AudioRecordingEnabled: true,\r\n isStreaming: false,\r\n isHosted: false,\r\n color: 'white',\r\n value: '', \r\n displayText: 'Transcribed text will show here when streaming.',\r\n displayNLPOutput: 'This windows will display detected entities.',\r\n debugConsole: 'Debug logs will be displayed here.'\r\n };\r\n }\r\n\r\n async handleMicRecorderClick(accessToken) {\r\n\r\n // event.preventDefault();\r\n\r\n //flip toggle\r\n if (this.state.isStreaming) {\r\n this.setState({debugConsole : 'Stop Mic Event Received'})\r\n await this.setState({isStreaming: false})\r\n return null\r\n } else {\r\n await this.setState({isStreaming: true})\r\n }\r\n \r\n const delay = ms => new Promise(res => setTimeout(res, ms));\r\n const recognizer = await this.InitializeStream(accessToken);\r\n await this.sttFromMic(recognizer, accessToken);\r\n await delay(2000);\r\n do {\r\n this.setState({debugConsole : \"Mic is listening for audio.\"})\r\n this.setState({debugConsole : \"Will check every 2 seconds for stop event\"})\r\n await delay(2000);\r\n }\r\n while (this.state.isStreaming);\r\n await this.stopMicStream(recognizer);\r\n }\r\n\r\n\r\n handleAudioRecordingSwitch = () => {\r\n\r\n if (this.state.AudioRecordingEnabled) {\r\n this.setState({AudioRecordingEnabled: false})\r\n } else {\r\n this.setState({AudioRecordingEnabled: true})\r\n }\r\n\r\n }\r\n\r\n async componentDidMount() {\r\n }\r\n\r\n\r\nasync InitializeStream(accessToken) {\r\n const tokenObj = await getTokenOrRefresh(accessToken);\r\n const customSpeechEndpoint = tokenObj.endpoint_id\r\n const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region);\r\n if (this.state.AudioRecordingEnabled) {\r\n\r\n //Setting below specifies custom speech model ID that is created using Speech Studio\r\n speechConfig.endpointId = customSpeechEndpoint;\r\n\r\n //Setting below allows specifying custom GUID that can be used to correlate audio captured by Speech Logging\r\n speechConfig.setServiceProperty(\"clientConnectionId\", this.state.value, speechsdk.ServicePropertyChannel.UriQueryParameter);\r\n \r\n }\r\n speechConfig.speechRecognitionLanguage = 'en-US';\r\n const audioConfig = speechsdk.AudioConfig.fromDefaultMicrophoneInput();\r\n 
const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig);\r\n return recognizer\r\n \r\n };\r\n\r\n async stopMicStream(recognizer) {\r\n await recognizer.stopContinuousRecognitionAsync();\r\n await this.setState({isStreaming : false});\r\n this.setState({debugConsole : 'Mic stopped listening'});\r\n }\r\n\r\n\r\n async sttFromMic(recognizer, accessToken) {\r\n\r\n let resultText = \"\";\r\n let nlpText = \" \";\r\n\r\n recognizer.sessionStarted = (s, e) => {\r\n resultText = \"Session ID: \" + e.sessionId;\r\n this.setState({\r\n displayText: resultText\r\n });\r\n };\r\n\r\n recognizer.recognized = async (s, e) => {\r\n\r\n if(e.result.reason === ResultReason.RecognizedSpeech){\r\n //Display continuous transcript\r\n resultText += `\\n${e.result.text}`; \r\n this.setState({\r\n displayText: resultText\r\n }); \r\n \r\n //Perform continuous NLP\r\n const nlpObj = await getKeyPhrases(e.result.text, accessToken); \r\n \r\n //Display extracted Key Phrases \r\n const keyPhraseText = JSON.stringify(nlpObj.keyPhrasesExtracted);\r\n \r\n if(keyPhraseText.length > 15){\r\n nlpText += \"\\n\" + keyPhraseText;\r\n this.setState({ displayNLPOutput: nlpText }); \r\n } \r\n\r\n //Display extracted entities\r\n const entityText = JSON.stringify(nlpObj.entityExtracted); \r\n\r\n if(entityText.length > 12){\r\n nlpText += \"\\n\" + entityText;\r\n this.setState({ displayNLPOutput: nlpText.replace('
', '\\n') });\r\n } \r\n }\r\n\r\n else if (e.result.reason === ResultReason.NoMatch) {\r\n //resultText += `\\nNo Match`\r\n resultText += `\\n`\r\n } \r\n };\r\n await recognizer.startContinuousRecognitionAsync();\r\n await this.setState({isStreaming : true});\r\n}\r\n\r\n render() {\r\n return (\r\n <>\r\n \r\n \r\n } debugData={this.state.debugConsole} nlpOutput={this.state.displayNLPOutput} text={this.state.displayText} dashboard={} />\r\n \r\n \r\n } debugData={this.state.debugConsole} nlpOutput={this.state.displayNLPOutput} text={this.state.displayText} dashboard={} />\r\n \r\n \r\n \r\n );\r\n }\r\n}\r\n","import { unstable_createMuiStrictModeTheme as createMuiTheme } from '@material-ui/core/styles';\r\nimport red from '@material-ui/core/colors/red';\r\n\r\n// Create a theme instance.\r\nexport const theme = createMuiTheme({\r\n palette: {\r\n primary: {\r\n main: '#556cd6',\r\n },\r\n secondary: {\r\n main: '#19857b',\r\n },\r\n error: {\r\n main: red.A400,\r\n },\r\n background: {\r\n default: '#fff',\r\n },\r\n },\r\n});","import 'bootstrap/dist/css/bootstrap.css';\r\nimport React from 'react';\r\nimport ReactDOM from 'react-dom';\r\nimport App from \"./App.jsx\";\r\nimport { BrowserRouter as Router } from \"react-router-dom\";\r\nimport { ThemeProvider } from '@material-ui/core/styles';\r\nimport { theme } from \"./styles/theme\";\r\nimport { MsalProvider } from \"@azure/msal-react\";\r\n\r\n\r\n// MSAL Imports\r\nimport { PublicClientApplication, EventType } from \"@azure/msal-browser\";\r\nimport { msalConfig } from \"./authConfig\";\r\n\r\n// MSAL configuration\r\nconst msalInstance = new PublicClientApplication(msalConfig);\r\n\r\n// Default to using the first account if no account is active on page load\r\nif (!msalInstance.getActiveAccount() && msalInstance.getAllAccounts().length > 0) {\r\n // Account selection logic is app dependent. 
Adjust as needed for different use cases.\r\n msalInstance.setActiveAccount(msalInstance.getAllAccounts()[0]);\r\n}\r\n\r\n// Optional - This will update account state if a user signs in from another tab or window\r\nmsalInstance.enableAccountStorageEvents();\r\n\r\nmsalInstance.addEventCallback((event) => {\r\n if (event.eventType === EventType.LOGIN_SUCCESS && event.payload.account) {\r\n const account = event.payload.account;\r\n msalInstance.setActiveAccount(account);\r\n }\r\n});\r\n\r\n\r\nReactDOM.render(\r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n ,\r\n document.getElementById('root')\r\n);\r\n\r\n\r\n"],"sourceRoot":""} -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/build/static/js/runtime-main.4452493b.js: -------------------------------------------------------------------------------- 1 | !function(e){function r(r){for(var n,f,l=r[0],a=r[1],c=r[2],i=0,s=[];i0.2%", 46 | "not dead", 47 | "not op_mini all" 48 | ], 49 | "development": [ 50 | "last 1 chrome version", 51 | "last 1 firefox version", 52 | "last 1 safari version" 53 | ] 54 | }, 55 | "proxy": "http://localhost:8080" 56 | } 57 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/public/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/public/favicon.ico -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/public/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 12 | 13 | 17 | 18 | 27 | 28 | Realtime Call Intelligence-Azure AI 29 | 30 | 31 | 32 |
33 | 43 | 44 | 45 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/public/logo192.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/public/logo192.png -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/public/logo512.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/public/logo512.png -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/public/manifest.json: -------------------------------------------------------------------------------- 1 | { 2 | "short_name": "React App", 3 | "name": "Create React App Sample", 4 | "icons": [ 5 | { 6 | "src": "favicon.ico", 7 | "sizes": "64x64 32x32 24x24 16x16", 8 | "type": "image/x-icon" 9 | }, 10 | { 11 | "src": "logo192.png", 12 | "type": "image/png", 13 | "sizes": "192x192" 14 | }, 15 | { 16 | "src": "logo512.png", 17 | "type": "image/png", 18 | "sizes": "512x512" 19 | } 20 | ], 21 | "start_url": ".", 22 | "display": "standalone", 23 | "theme_color": "#000000", 24 | "background_color": "#ffffff" 25 | } 26 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/public/robots.txt: -------------------------------------------------------------------------------- 1 | # https://www.robotstxt.org/robotstxt.html 2 | User-agent: * 3 | Disallow: 4 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/App.css: -------------------------------------------------------------------------------- 1 | .main-container { 2 | margin-top: 100px; 3 | } 4 | 5 | .fa-microphone { 6 | color: #14A76C; 7 | } 8 | 9 | .fa-microphone:hover { 10 | color: #d44916; 11 | } 12 | 13 | .fa-file-audio { 14 | color: #FF652F; 15 | } 16 | 17 | .fa-file-audio:hover { 18 | color: #d44916; 19 | } 20 | 21 | .output-display { 22 | background-color: #f9f6fa; 23 | color: white; 24 | height: 700px; 25 | } 26 | 27 | .nlpoutput-display { 28 | background-color: #72e7ce; 29 | color: white; 30 | height: 700px; 31 | } 32 | 33 | .fas, .fa { 34 | cursor: pointer; 35 | } 36 | 37 | .background { 38 | background-color: #272727; 39 | color: #747474; 40 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/App.jsx: -------------------------------------------------------------------------------- 1 | import React, { Component } from 'react'; 2 | import { AuthenticatedTemplate, UnauthenticatedTemplate } from "@azure/msal-react"; 3 | import { PageLayout } from "./components/PageLayout"; 4 | import { OutputWindows } from "./components/OutputWindows"; 5 | import { getKeyPhrases, getTokenOrRefresh } from './token_util.js'; 6 | import { ResultReason } from 'microsoft-cognitiveservices-speech-sdk'; 7 | import { Dashboard } from "./components/Dashboard.jsx"; 8 | import { ProfileContent } from "./components/Profile.jsx"; 9 | 10 | //Set Config 11 | const speechsdk = 
require('microsoft-cognitiveservices-speech-sdk') 12 | 13 | 14 | // Start App 15 | export default class App extends Component { 16 | constructor(props) { 17 | super(props); 18 | 19 | this.handleMicRecorderClick = this.handleMicRecorderClick.bind(this); 20 | this.handleAudioRecordingSwitch = this.handleAudioRecordingSwitch.bind(this); 21 | 22 | this.state = { 23 | accessToken: null, 24 | AudioRecordingEnabled: true, 25 | isStreaming: false, 26 | isHosted: false, 27 | color: 'white', 28 | value: '', 29 | displayText: 'Transcribed text will show here when streaming.', 30 | displayNLPOutput: 'This windows will display detected entities.', 31 | debugConsole: 'Debug logs will be displayed here.' 32 | }; 33 | } 34 | 35 | async handleMicRecorderClick(accessToken) { 36 | 37 | if (this.state.isStreaming) { 38 | this.setState({debugConsole : 'Stop Mic Event Received'}) 39 | await this.setState({isStreaming: false}) 40 | return null 41 | } else { 42 | await this.setState({isStreaming: true}) 43 | } 44 | 45 | const delay = ms => new Promise(res => setTimeout(res, ms)); 46 | const recognizer = await this.InitializeStream(accessToken); 47 | await this.sttFromMic(recognizer, accessToken); 48 | await delay(2000); 49 | do { 50 | this.setState({debugConsole : "Mic is listening for audio."}) 51 | this.setState({debugConsole : "Will check every 2 seconds for stop event"}) 52 | await delay(2000); 53 | } 54 | while (this.state.isStreaming); 55 | await this.stopMicStream(recognizer); 56 | } 57 | 58 | 59 | handleAudioRecordingSwitch = () => { 60 | 61 | if (this.state.AudioRecordingEnabled) { 62 | this.setState({AudioRecordingEnabled: false}) 63 | } else { 64 | this.setState({AudioRecordingEnabled: true}) 65 | } 66 | 67 | } 68 | 69 | async componentDidMount() { 70 | } 71 | 72 | 73 | async InitializeStream(accessToken) { 74 | const tokenObj = await getTokenOrRefresh(accessToken); 75 | const customSpeechEndpoint = tokenObj.endpoint_id 76 | const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region); 77 | if (this.state.AudioRecordingEnabled) { 78 | 79 | //Setting below specifies custom speech model ID that is created using Speech Studio 80 | speechConfig.endpointId = customSpeechEndpoint; 81 | 82 | //Setting below allows specifying custom GUID that can be used to correlate audio captured by Speech Logging 83 | speechConfig.setServiceProperty("clientConnectionId", this.state.value, speechsdk.ServicePropertyChannel.UriQueryParameter); 84 | 85 | } 86 | speechConfig.speechRecognitionLanguage = 'en-US'; 87 | const audioConfig = speechsdk.AudioConfig.fromDefaultMicrophoneInput(); 88 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 89 | return recognizer 90 | 91 | }; 92 | 93 | async stopMicStream(recognizer) { 94 | await recognizer.stopContinuousRecognitionAsync(); 95 | await this.setState({isStreaming : false}); 96 | this.setState({debugConsole : 'Mic stopped listening'}); 97 | } 98 | 99 | 100 | async sttFromMic(recognizer, accessToken) { 101 | 102 | let resultText = ""; 103 | let nlpText = " "; 104 | 105 | recognizer.sessionStarted = (s, e) => { 106 | resultText = "Session ID: " + e.sessionId; 107 | this.setState({ 108 | displayText: resultText 109 | }); 110 | }; 111 | 112 | recognizer.recognized = async (s, e) => { 113 | 114 | if(e.result.reason === ResultReason.RecognizedSpeech){ 115 | //Display continuous transcript 116 | resultText += `\n${e.result.text}`; 117 | this.setState({ 118 | displayText: resultText 119 | }); 120 | 121 | //Perform 
continuous NLP 122 | const nlpObj = await getKeyPhrases(e.result.text, accessToken); 123 | 124 | //Display extracted Key Phrases 125 | const keyPhraseText = JSON.stringify(nlpObj.keyPhrasesExtracted); 126 | 127 | if(keyPhraseText.length > 15){ 128 | nlpText += "\n" + keyPhraseText; 129 | this.setState({ displayNLPOutput: nlpText }); 130 | } 131 | 132 | //Display extracted entities 133 | const entityText = JSON.stringify(nlpObj.entityExtracted); 134 | 135 | if(entityText.length > 12){ 136 | nlpText += "\n" + entityText; 137 | this.setState({ displayNLPOutput: nlpText.replace('
', '\n') }); 138 | } 139 | } 140 | 141 | else if (e.result.reason === ResultReason.NoMatch) { 142 | //resultText += `\nNo Match` 143 | resultText += `\n` 144 | } 145 | }; 146 | await recognizer.startContinuousRecognitionAsync(); 147 | await this.setState({isStreaming : true}); 148 | } 149 | 150 | render() { 151 | return ( 152 | <> 153 | 154 | 155 | } debugData={this.state.debugConsole} nlpOutput={this.state.displayNLPOutput} text={this.state.displayText} dashboard={} /> 156 | 157 | 158 | } debugData={this.state.debugConsole} nlpOutput={this.state.displayNLPOutput} text={this.state.displayText} dashboard={} /> 159 | 160 | 161 | 162 | ); 163 | } 164 | } 165 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/authConfig.js: -------------------------------------------------------------------------------- 1 | import { LogLevel } from "@azure/msal-browser"; 2 | // Browser check variables 3 | // If you support IE, our recommendation is that you sign-in using Redirect APIs 4 | // If you as a developer are testing using Edge InPrivate mode, please add "isEdge" to the if check 5 | const ua = window.navigator.userAgent; 6 | const msie = ua.indexOf("MSIE "); 7 | const msie11 = ua.indexOf("Trident/"); 8 | const msedge = ua.indexOf("Edge/"); 9 | const firefox = ua.indexOf("Firefox"); 10 | const isIE = msie > 0 || msie11 > 0; 11 | const isEdge = msedge > 0; 12 | const isFirefox = firefox > 0; // Only needed if you need to support the redirect flow in Firefox incognito 13 | 14 | // Config object to be passed to Msal on creation 15 | export const msalConfig = { 16 | auth: { 17 | clientId: process.env.REACT_APP_CLIENT_ID, 18 | authority: `https://login.microsoftonline.com/${process.env.REACT_APP_TENANT_ID}`, 19 | redirectUri: process.env.REACT_APP_REDIRECT_URI, 20 | postLogoutRedirectUri: process.env.REACT_APP_POST_LOGOUT_REDIRECT_URI 21 | }, 22 | cache: { 23 | cacheLocation: "localStorage", 24 | storeAuthStateInCookie: isIE || isEdge || isFirefox 25 | }, 26 | system: { 27 | loggerOptions: { 28 | loggerCallback: (level, message, containsPii) => { 29 | if (containsPii) { 30 | return; 31 | } 32 | switch (level) { 33 | case LogLevel.Error: 34 | console.error(message); 35 | return; 36 | case LogLevel.Info: 37 | console.info(message); 38 | return; 39 | case LogLevel.Verbose: 40 | console.debug(message); 41 | return; 42 | case LogLevel.Warning: 43 | console.warn(message); 44 | return; 45 | default: 46 | return; 47 | } 48 | } 49 | } 50 | } 51 | }; 52 | 53 | // Add here scopes for id token to be used at MS Identity Platform endpoints. 54 | export const loginRequest = { 55 | scopes: ["User.Read"] 56 | }; 57 | 58 | // Add here the endpoints for MS Graph API services you would like to use. 59 | export const graphConfig = { 60 | graphMeEndpoint: "https://graph.microsoft.com/v1.0/me" 61 | }; -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/Dashboard.jsx: -------------------------------------------------------------------------------- 1 | import Button from 'react-bootstrap/Button'; 2 | import { loginRequest } from "../authConfig"; 3 | import { useMsal} from "@azure/msal-react"; 4 | import { useState, useEffect } from "react"; 5 | 6 | export const Dashboard = (props) => { 7 | let record_status = "enabled." 
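  // record_status and verb only drive the labels rendered in the dashboard table below;
  // the underlying audio-logging flag (AudioRecordingEnabled) lives in App state and is
  // flipped through props.onToggleClick. Clicking the streaming button acquires an access
  // token first: via MSAL when running hosted (acquireTokenSilent with an acquireTokenPopup
  // fallback), or a placeholder token otherwise. The useEffect below then forwards that
  // token to props.onMicRecordClick, which starts or stops the microphone stream.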
8 | let verb = "enable" 9 | const { instance, accounts } = useMsal(); 10 | const [accessToken, setAccessToken] = useState(null); 11 | 12 | if (props.AudioEnabled) { 13 | record_status = "enabled." 14 | verb = "disable" 15 | } else { 16 | record_status = "disabled." 17 | verb = "enable" 18 | } 19 | 20 | let message = "Click here to " + verb + "." 21 | let button_message = "Start Mic Streaming" 22 | let button_variant = "primary" 23 | if (props.isStreaming) { 24 | button_message = "Stop Mic Streaming" 25 | button_variant = "danger" 26 | } else { 27 | button_message = "Start Mic Streaming" 28 | button_variant = "primary" 29 | } 30 | 31 | function RequestAccessToken() { 32 | const request = { 33 | ...loginRequest, 34 | account: accounts[0], 35 | scopes: [ "api://6798e375-a31f-48b0-abcb-87f06d70d0b6/user_impersonation" ] 36 | }; 37 | instance.acquireTokenSilent(request).then((response) => { 38 | setAccessToken(response.accessToken); 39 | }).catch((e) => { 40 | instance.acquireTokenPopup(request).then((response) => { 41 | setAccessToken(response.accessToken); 42 | }); 43 | }); 44 | } 45 | 46 | function ClickHandler() { 47 | if (process.env.REACT_APP_PLATFORM === "hosted") { 48 | RequestAccessToken() 49 | } else { 50 | setAccessToken("fake_token") 51 | } 52 | } 53 | 54 | useEffect( () => { 55 | if (accessToken) { 56 | props.onMicRecordClick(accessToken) 57 | } 58 | }, [accessToken]) 59 | 60 | return( 61 | <> 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 |
Audio Recording is {record_status}
Start Or Stop Streaming
72 | 73 | ) 74 | }; -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/InitializeStream.jsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/front-end-ui/src/components/InitializeStream.jsx -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/OutputWindows.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import Card from 'react-bootstrap/Card'; 3 | import Container from 'react-bootstrap/Container'; 4 | import Row from 'react-bootstrap/Row'; 5 | import Col from 'react-bootstrap/Col'; 6 | 7 | export const OutputWindows = (props) => { 8 | return ( 9 | 10 | 11 | 12 | 13 | 14 | Profile: 15 | {props.profile} 16 | 17 | 18 | 19 | 20 | 21 | 22 | Dashboard 23 | {props.dashboard} 24 | 25 | 26 | 27 | 28 |
29 | 30 | 31 | 32 | 33 | Transcription Output Window: 34 | {props.text} 35 | 36 | 37 | 38 | 39 | 40 | 41 | NLP Output Window: 42 | {props.nlpOutput} 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | Debug Console Window: 52 | {props.debugData} 53 | 54 | 55 | 56 | 57 |
58 | ); 59 | }; -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/PageLayout.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import Navbar from "react-bootstrap/Navbar"; 3 | import { useIsAuthenticated } from "@azure/msal-react"; 4 | import { SignInButton } from "./SignInButton"; 5 | import { SignOutButton } from "./SignOutButton"; 6 | import { Container } from "reactstrap"; 7 | // import { ConsoleLoggingListener } from "microsoft-cognitiveservices-speech-sdk/distrib/lib/src/common.browser/ConsoleLoggingListener"; 8 | 9 | /** 10 | * Renders the navbar component with a sign-in or sign-out button depending on whether or not a user is authenticated 11 | * @param props 12 | */ 13 | export const PageLayout = (props) => { 14 | const isAuthenticated = useIsAuthenticated(); 15 | const { REACT_APP_PLATFORM } = process.env; 16 | 17 | if (REACT_APP_PLATFORM === "hosted") { 18 | var isHosted = true; 19 | } else { 20 | var isHosted = false; 21 | } 22 | 23 | function DesktopLayout() { 24 | return <>{props.children}; 25 | } 26 | 27 | function HostedLayout() { 28 | 29 | return ( 30 | <> 31 | 32 | { isHosted ? 33 | AI-Powered Call Center 34 | { isAuthenticated ? : } 35 | :

} 36 |
37 |
38 | { isAuthenticated ? <>{props.children} :

} 39 | 40 | ) 41 | } 42 | 43 | function Layout() { 44 | 45 | if (isHosted) { 46 | return 47 | } else { 48 | return 49 | } 50 | } 51 | 52 | return ( 53 | 54 | ); 55 | }; 56 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/Profile.jsx: -------------------------------------------------------------------------------- 1 | import { useMsal} from "@azure/msal-react"; 2 | import Button from "react-bootstrap/Button"; 3 | import { loginRequest } from "../authConfig"; 4 | import { useState, useEffect } from "react"; 5 | 6 | export const ProfileContent = (props) => { 7 | const { instance, accounts } = useMsal(); 8 | const [accessToken, setAccessToken] = useState(null); 9 | const { REACT_APP_PLATFORM } = process.env; 10 | 11 | if (REACT_APP_PLATFORM === "hosted") { 12 | var isHosted = true; 13 | } else { 14 | var isHosted = false; 15 | } 16 | 17 | function RequestAccessToken() { 18 | const request = { 19 | ...loginRequest, 20 | account: accounts[0] 21 | }; 22 | //silently acquire an access token 23 | instance.acquireTokenSilent(request).then((response) => { 24 | setAccessToken(response.accessToken); 25 | }).catch((e) => { 26 | instance.acquireTokenPopup(request).then((response) => { 27 | setAccessToken(response.accessToken); 28 | }); 29 | }); 30 | 31 | } 32 | 33 | 34 | if (isHosted) { 35 | return ( 36 | <> 37 | 38 |
Welcome {accounts[0].name}
39 | {accessToken ? 40 |

Access Token Acquired!

41 | : 42 | 43 | } 44 |
45 | 46 | ); 47 | } else { 48 | return ( 49 | <> 50 | 51 |
Welcome, Demo User!
52 |
53 | 54 | ) 55 | } 56 | }; -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/ProfileData.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | 3 | /** 4 | * Renders information about the user obtained from MS Graph 5 | * @param props 6 | */ 7 | export const ProfileData = (props) => { 8 | console.log(props.graphData); 9 | 10 | return ( 11 |
12 |

First Name: {props.graphData.givenName}

13 | {/*

Last Name: {props.graphData.surname}

14 |

Email: {props.graphData.userPrincipalName}

15 |

Id: {props.graphData.id}

*/} 16 |
17 | ); 18 | }; -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/SignInButton.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import { useMsal } from "@azure/msal-react"; 3 | import { loginRequest } from "../authConfig"; 4 | import DropdownButton from "react-bootstrap/DropdownButton"; 5 | import Dropdown from "react-bootstrap/esm/Dropdown"; 6 | 7 | /** 8 | * Renders a drop down button with child buttons for logging in with a popup or redirect 9 | */ 10 | export const SignInButton = () => { 11 | const { instance } = useMsal(); 12 | 13 | const handleLogin = (loginType) => { 14 | if (loginType === "popup") { 15 | instance.loginPopup(loginRequest).catch(e => { 16 | console.log(e); 17 | }); 18 | } else if (loginType === "redirect") { 19 | instance.loginRedirect(loginRequest).catch(e => { 20 | console.log(e); 21 | }); 22 | } 23 | } 24 | return ( 25 | 26 | {/* handleLogin("popup")}>Sign in using Popup */} 27 | handleLogin("redirect")}>Sign in using Redirect 28 | 29 | ) 30 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/SignOutButton.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import { useMsal } from "@azure/msal-react"; 3 | import DropdownButton from "react-bootstrap/DropdownButton"; 4 | import Dropdown from "react-bootstrap/esm/Dropdown"; 5 | 6 | /** 7 | * Renders a sign-out button 8 | */ 9 | export const SignOutButton = () => { 10 | const { instance } = useMsal(); 11 | 12 | const handleLogout = (logoutType) => { 13 | if (logoutType === "popup") { 14 | instance.logoutPopup({ 15 | postLogoutRedirectUri: "/", 16 | mainWindowRedirectUri: "/" 17 | }); 18 | } else if (logoutType === "redirect") { 19 | instance.logoutRedirect({ 20 | postLogoutRedirectUri: "/", 21 | }); 22 | } 23 | } 24 | return ( 25 | 26 | {/* handleLogout("popup")}>Sign out using Popup */} 27 | handleLogout("redirect")}>Sign out using Redirect 28 | 29 | ) 30 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/components/Splashscreen.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | 3 | export const SplashScreen = (props) => { 4 | console.log("in splashscreen") 5 | return( 6 | <>

Hello World

7 | ); 8 | }; -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/config.json: -------------------------------------------------------------------------------- 1 | { 2 | 3 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/graph.js: -------------------------------------------------------------------------------- 1 | import { graphConfig } from "./authConfig"; 2 | 3 | /** 4 | * Attaches a given access token to a MS Graph API call. Returns information about the user 5 | * @param accessToken 6 | */ 7 | export async function callMsGraph(accessToken) { 8 | const headers = new Headers(); 9 | const bearer = `Bearer ${accessToken}`; 10 | 11 | headers.append("Authorization", bearer); 12 | 13 | const options = { 14 | method: "GET", 15 | headers: headers 16 | }; 17 | 18 | return fetch(graphConfig.graphMeEndpoint, options) 19 | .then(response => response.json()) 20 | .catch(error => console.log(error)); 21 | } 22 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/index.js: -------------------------------------------------------------------------------- 1 | import 'bootstrap/dist/css/bootstrap.css'; 2 | import React from 'react'; 3 | import ReactDOM from 'react-dom'; 4 | import App from "./App.jsx"; 5 | import { BrowserRouter as Router } from "react-router-dom"; 6 | import { ThemeProvider } from '@material-ui/core/styles'; 7 | import { theme } from "./styles/theme"; 8 | import { MsalProvider } from "@azure/msal-react"; 9 | 10 | 11 | // MSAL Imports 12 | import { PublicClientApplication, EventType } from "@azure/msal-browser"; 13 | import { msalConfig } from "./authConfig"; 14 | 15 | // MSAL configuration 16 | const msalInstance = new PublicClientApplication(msalConfig); 17 | 18 | // Default to using the first account if no account is active on page load 19 | if (!msalInstance.getActiveAccount() && msalInstance.getAllAccounts().length > 0) { 20 | // Account selection logic is app dependent. Adjust as needed for different use cases. 21 | msalInstance.setActiveAccount(msalInstance.getAllAccounts()[0]); 22 | } 23 | 24 | // Optional - This will update account state if a user signs in from another tab or window 25 | msalInstance.enableAccountStorageEvents(); 26 | 27 | msalInstance.addEventCallback((event) => { 28 | if (event.eventType === EventType.LOGIN_SUCCESS && event.payload.account) { 29 | const account = event.payload.account; 30 | msalInstance.setActiveAccount(account); 31 | } 32 | }); 33 | 34 | 35 | ReactDOM.render( 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | , 45 | document.getElementById('root') 46 | ); 47 | 48 | 49 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/styles/theme.js: -------------------------------------------------------------------------------- 1 | import { unstable_createMuiStrictModeTheme as createMuiTheme } from '@material-ui/core/styles'; 2 | import red from '@material-ui/core/colors/red'; 3 | 4 | // Create a theme instance. 
5 | export const theme = createMuiTheme({ 6 | palette: { 7 | primary: { 8 | main: '#556cd6', 9 | }, 10 | secondary: { 11 | main: '#19857b', 12 | }, 13 | error: { 14 | main: red.A400, 15 | }, 16 | background: { 17 | default: '#fff', 18 | }, 19 | }, 20 | }); -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/front-end-ui/src/token_util.js: -------------------------------------------------------------------------------- 1 | import axios from 'axios'; 2 | import Cookie from 'universal-cookie'; 3 | const BACKEND_API = process.env.REACT_APP_BACKEND_API 4 | 5 | export async function getTokenOrRefresh(accessToken) { 6 | const cookie = new Cookie(); 7 | const speechToken = cookie.get('speech-token'); 8 | 9 | 10 | if (speechToken === undefined) { 11 | try { 12 | 13 | console.log('Try getting token from the express backend'); 14 | const headers = {'Content-Type': 'application/json', 'Authorization': `Bearer ${accessToken}`, 'Access-Control-Allow-Origin': '*'} 15 | console.log(headers) 16 | const res = await axios.get(BACKEND_API + '/api/get-speech-token', {headers}); 17 | const token = res.data.token; 18 | const region = res.data.region; 19 | const endpoint_id = res.data.endpoint_id 20 | cookie.set('speech-token', region + ':' + token, {maxAge: 540, path: '/'}); 21 | 22 | console.log('Token fetched from back-end: ' + token); 23 | return { authToken: token, region: region, endpoint_id: endpoint_id }; 24 | } catch (err) { 25 | console.log(err.response.data); 26 | return { authToken: null, error: err.response.data }; 27 | } 28 | } else { 29 | console.log('Token fetched from cookie: ' + speechToken); 30 | const idx = speechToken.indexOf(':'); 31 | return { authToken: speechToken.slice(idx + 1), region: speechToken.slice(0, idx) }; 32 | } 33 | } 34 | 35 | export async function getKeyPhrases(requestText, accessToken) { 36 | try{ 37 | //Key Phrase extraction 38 | const data = {transcript: requestText}; 39 | const headers = { 'Content-Type': 'application/json', 'Authorization': `Bearer ${accessToken}`}; 40 | const res = await axios.post(BACKEND_API + '/api/ta-key-phrases', data, {headers}); 41 | 42 | return res.data; 43 | //return {keyPhrasesExtracted: keyPhrasesExtracted}; 44 | } catch (err) { 45 | return {keyPhrasesExtracted: "NoKP", entityExtracted: "NoEnt"}; 46 | } 47 | 48 | } 49 | 50 | export async function getKeyPhrasesOld(requestText, accessToken) { 51 | 52 | try{ 53 | //Key Phrase extraction 54 | const data = {transcript: requestText}; 55 | const headers = { 'Content-Type': 'application/json', 'Authorization': `Bearer ${accessToken}`}; 56 | 57 | const res = await axios.post('/api/ta-key-phrases', data, {headers}); 58 | //const keyPhrasesExtracted = JSON.stringify(res.body.keyPhraseResponse); 59 | 60 | return res.data; 61 | //return {keyPhrasesExtracted: keyPhrasesExtracted}; 62 | } catch (err) { 63 | return {keyPhrasesExtracted: "None"}; 64 | } 65 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechexpressbackend/.env: -------------------------------------------------------------------------------- 1 | SPEECH_KEY=speechkey 2 | SPEECH_REGION=centralus 3 | TEXTANALYTICS_KEY=testapikey 4 | TEXTANALYTICS_ENDPOINT=https://callcenterai.cognitiveservices.azure.com/ 5 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechexpressbackend/.gitignore: 
-------------------------------------------------------------------------------- 1 | # See https://help.github.com/articles/ignoring-files/ for more about ignoring files. 2 | 3 | # dependencies 4 | /node_modules 5 | /.pnp 6 | .pnp.js 7 | 8 | # testing 9 | /coverage 10 | 11 | # production 12 | /build 13 | 14 | # misc 15 | .DS_Store 16 | .env.local 17 | .env.development.local 18 | .env.test.local 19 | .env.production.local 20 | 21 | npm-debug.log* 22 | yarn-debug.log* 23 | yarn-error.log* 24 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechexpressbackend/.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "appService.deploySubpath": "." 3 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechexpressbackend/config.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "name": "speech", 4 | "subscription_key": "2e7b1e3a316a4d5dac86a7d4460f4c80", 5 | "region": "centralus", 6 | "endpoint_id": "036d9808-81bd-40d4-bf82-b46e131da519", 7 | "text_analytics_key": "4ae0acaad9624f9e8c5a6d67af4393c5", 8 | "text_analytics_endpoint": "https://callcenterai.cognitiveservices.azure.com/", 9 | "web_port": "8080" 10 | } 11 | ] -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechexpressbackend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "speechexpressbackend", 3 | "version": "1.0.0", 4 | "description": "Express backend to enable audio streaming using Azure Speech using token", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1", 8 | "start": "node serverapp.js", 9 | "server": "nodemon serverapp.js" 10 | }, 11 | "keywords": [ 12 | "Azure", 13 | "Speech" 14 | ], 15 | "license": "MIT", 16 | "dependencies": { 17 | "@azure/ai-text-analytics": "^5.1.0", 18 | "axios": "^0.21.1", 19 | "cors": "^2.8.5", 20 | "dotenv": "^10.0.0", 21 | "express": "^4.17.1" 22 | }, 23 | "devDependencies": { 24 | "nodemon": "^2.0.12" 25 | } 26 | } 27 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechexpressbackend/serverapp.js: -------------------------------------------------------------------------------- 1 | require('dotenv').config() 2 | const express = require('express') 3 | const cors = require('cors'); 4 | const axios = require('axios'); 5 | const app = express(); 6 | app.use(cors()); 7 | 8 | // get config 9 | const config = require('./config.json') 10 | const port = config[0].web_port 11 | const speechKey = config[0].subscription_key; 12 | const speechRegion = config[0].region; 13 | const endpoint_id = config[0].endpoint_id; 14 | const textAnalyticsKey = config[0].text_analytics_key; 15 | const textAnalyticsEndpoint = config[0].text_analytics_endpoint; 16 | 17 | 18 | //"use strict"; 19 | const { TextAnalyticsClient, AzureKeyCredential } = require("@azure/ai-text-analytics"); 20 | const { json } = require('express'); 21 | 22 | app.use(express.json()); 23 | 24 | app.get('/api/sayhello', (req, res) => { 25 | res.send('Hello World from the backend!') 26 | }); 27 | 28 | 29 | app.get('/api/get-speech-token', async (req, res, next) => { 30 | res.setHeader('Content-Type', 'application/json'); 31 | 32 | 33 | if (speechKey === 
'paste-your-speech-key-here' || speechRegion === 'paste-your-speech-region-here') { 34 | res.status(400).send('You forgot to add your speech key or region to the .env file.'); 35 | } else { 36 | const headers = { 37 | headers: { 38 | 'Ocp-Apim-Subscription-Key': speechKey, 39 | 'Content-Type': 'application/x-www-form-urlencoded' 40 | } 41 | }; 42 | 43 | try { 44 | console.log(`this is the api request`) 45 | console.log(req.headers) 46 | console.log(`Speechkey loaded for speech region ${speechRegion}. Getting token.`) 47 | const tokenResponse = await axios.post(`https://${speechRegion}.api.cognitive.microsoft.com/sts/v1.0/issueToken`, null, headers); 48 | res.send({ token: tokenResponse.data, region: speechRegion, endpoint_id: endpoint_id }); 49 | } catch (err) { 50 | res.status(401).send('There was an error authorizing your speech key.'); 51 | } 52 | } 53 | }); 54 | 55 | 56 | app.post('/api/ta-key-phrases', async (req, res) => { 57 | const requestJSON = JSON.stringify(req.body); 58 | //console.log('JSON string request body ' + requestJSON); 59 | 60 | const requestText = JSON.stringify(req.body.transcript); 61 | //console.log('Received transcription text : ' + requestText); 62 | 63 | try { 64 | const keyPhrasesInput = [ 65 | requestText, 66 | ]; 67 | const textAnalyticsClient = new TextAnalyticsClient(textAnalyticsEndpoint, new AzureKeyCredential(textAnalyticsKey)); 68 | 69 | let keyPhrasesText = "KEY PHRASES: "; 70 | const keyPhraseResult = await textAnalyticsClient.extractKeyPhrases(keyPhrasesInput); 71 | keyPhraseResult.forEach(document => { 72 | keyPhraseResponse = document.keyPhrases; 73 | keyPhrasesText += document.keyPhrases; 74 | }); 75 | 76 | let entityText = "ENTITIES: "; 77 | const entityResults = await textAnalyticsClient.recognizeEntities(keyPhrasesInput); 78 | entityResults.forEach(document => { 79 | //console.log(`Document ID: ${document.id}`); 80 | document.entities.forEach(entity => { 81 | if(entity.confidenceScore > 0.5){ 82 | //console.log(`\tName: ${entity.text} \tCategory: ${entity.category} \tSubcategory: ${entity.subCategory ? entity.subCategory : "N/A"}`); 83 | const currentEntity = entity.category + ": " + entity.text; 84 | entityText += " " + currentEntity; 85 | //console.log(`\tScore: ${entity.confidenceScore}`); 86 | } 87 | }); 88 | }); 89 | 90 | let piiText = "PII Redacted Text: "; 91 | const piiResults = await textAnalyticsClient.recognizePiiEntities(keyPhrasesInput, "en"); 92 | for (const result of piiResults) { 93 | if (result.error === undefined) { 94 | if(result.redactedText.indexOf('*') > -1){ 95 | //console.log("Redacted Text: ", result.redactedText); 96 | piiText += result.redactedText; 97 | //console.log(" -- Recognized PII entities for input", result.id, "--"); 98 | } 99 | 100 | for (const entity of result.entities) { 101 | //console.log(entity.text, ":", entity.category, "(Score:", entity.confidenceScore, ")"); 102 | const currentEntity = entity.category + ": " + entity.text; 103 | piiText += currentEntity; 104 | } 105 | } else { 106 | console.error("Encountered an error:", result.error); 107 | } 108 | } 109 | 110 | const headers = { 'Content-Type': 'application/json' }; 111 | res.headers = headers; 112 | //res.send({ keyPhrasesExtracted: keyPhraseResponse, entityExtracted: entityResults, piiExtracted: piiResults }); 113 | res.send({ keyPhrasesExtracted: keyPhrasesText, entityExtracted: entityText, piiExtracted: piiText }); 114 | } catch (err) { 115 | console.log(err); 116 | res.status(401).send('There was an error authorizing your text analytics key. 
Check your text analytics service key or endpoint to the .env file.'); 117 | } 118 | }); 119 | 120 | app.post('/api/ta-key-phrases-old', async (req, res) => { 121 | //You can find your key and endpoint in the resource's key and endpoint page, under resource management. 122 | const textAnalyticsKey = config[0].text_analytics_key; 123 | const textAnalyticsEndpoint = config[0].text_analytics_endpoint; 124 | const requestJSON = JSON.stringify(req.body); 125 | //console.log('JSON string request body ' + requestJSON); 126 | const requestText = JSON.stringify(req.body.transcript); 127 | console.log('Received transcription text : ' + requestText); 128 | 129 | try { 130 | const keyPhrasesInput = [ 131 | requestText, 132 | ]; 133 | const textAnalyticsClient = new TextAnalyticsClient(textAnalyticsEndpoint, new AzureKeyCredential(textAnalyticsKey)); 134 | const keyPhraseResult = await textAnalyticsClient.extractKeyPhrases(keyPhrasesInput); 135 | /*keyPhraseResult.forEach(document => { 136 | console.log(`ID: ${document.id}`); 137 | keyPhraseResponse = document.keyPhrases; 138 | console.log(`\tDocument Key Phrases: ${keyPhraseResponse}`); 139 | });*/ 140 | const headers = { 'Content-Type': 'application/json' }; 141 | res.headers = headers; 142 | res.send({ keyPhrasesExtracted: keyPhraseResponse }); 143 | } catch (err) { 144 | console.log(err); 145 | res.status(401).send('There was an error authorizing your text analytics key. Check your text analytics service key or endpoint to the .env file.'); 146 | } 147 | }); 148 | 149 | 150 | 151 | 152 | app.listen(port, () => { 153 | console.log(`Express backend app listening on port ${port}`) 154 | }) -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/.env: -------------------------------------------------------------------------------- 1 | DANGEROUSLY_DISABLE_HOST_CHECK=true 2 | CUSTOM_SPEECH_ENDPOINT_ID=5c0e6aec-f9b6-4da5-9228-a02b17d7a749 3 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/.gitignore: -------------------------------------------------------------------------------- 1 | # See https://help.github.com/articles/ignoring-files/ for more about ignoring files. 2 | 3 | # dependencies 4 | /node_modules 5 | /.pnp 6 | .pnp.js 7 | 8 | # testing 9 | /coverage 10 | 11 | # production 12 | /build 13 | 14 | # misc 15 | .DS_Store 16 | .env.local 17 | .env.development.local 18 | .env.test.local 19 | .env.production.local 20 | 21 | npm-debug.log* 22 | yarn-debug.log* 23 | yarn-error.log* 24 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "appService.defaultWebAppToDeploy": "/subscriptions/636c0f7e-d103-4a9c-a904-f929d2a83c88/resourceGroups/learn-2696d155-aa17-4e66-b705-20560ca1e7/providers/Microsoft.Web/sites/speechreactfrontendamc", 3 | "appService.deploySubpath": "." 4 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/README.md: -------------------------------------------------------------------------------- 1 | # Getting Started with Create React App 2 | 3 | This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). 
4 | 5 | ## Available Scripts 6 | 7 | In the project directory, you can run: 8 | 9 | ### `npm start` 10 | 11 | Runs the app in the development mode.\ 12 | Open [http://localhost:3000](http://localhost:3000) to view it in the browser. 13 | 14 | The page will reload if you make edits.\ 15 | You will also see any lint errors in the console. 16 | 17 | ### `npm test` 18 | 19 | Launches the test runner in the interactive watch mode.\ 20 | See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. 21 | 22 | ### `npm run build` 23 | 24 | Builds the app for production to the `build` folder.\ 25 | It correctly bundles React in production mode and optimizes the build for the best performance. 26 | 27 | The build is minified and the filenames include the hashes.\ 28 | Your app is ready to be deployed! 29 | 30 | See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. 31 | 32 | ### `npm run eject` 33 | 34 | **Note: this is a one-way operation. Once you `eject`, you can’t go back!** 35 | 36 | If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. 37 | 38 | Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. 39 | 40 | You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. 41 | 42 | ## Learn More 43 | 44 | You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). 45 | 46 | To learn React, check out the [React documentation](https://reactjs.org/). 
47 | 48 | ### Code Splitting 49 | 50 | This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) 51 | 52 | ### Analyzing the Bundle Size 53 | 54 | This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) 55 | 56 | ### Making a Progressive Web App 57 | 58 | This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) 59 | 60 | ### Advanced Configuration 61 | 62 | This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) 63 | 64 | ### Deployment 65 | 66 | This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) 67 | 68 | ### `npm run build` fails to minify 69 | 70 | This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify) 71 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "speechreactfrontend", 3 | "version": "0.1.0", 4 | "private": true, 5 | "dependencies": { 6 | "@fortawesome/fontawesome-svg-core": "^1.2.35", 7 | "@fortawesome/free-solid-svg-icons": "^5.15.3", 8 | "@fortawesome/react-fontawesome": "^0.1.14", 9 | "@material-ui/core": "^4.12.3", 10 | "@testing-library/jest-dom": "^5.14.1", 11 | "@testing-library/react": "^11.2.7", 12 | "@testing-library/user-event": "^12.8.3", 13 | "all": "0.0.0", 14 | "axios": "^0.21.1", 15 | "bootstrap": "^5.0.2", 16 | "microsoft-cognitiveservices-speech-sdk": "^1.18.0", 17 | "react": "^17.0.2", 18 | "react-bootstrap": "^1.6.1", 19 | "react-dom": "^17.0.2", 20 | "react-scripts": "4.0.3", 21 | "reactstrap": "^8.9.0", 22 | "universal-cookie": "^4.0.4", 23 | "web-vitals": "^1.1.2" 24 | }, 25 | "scripts": { 26 | "start": "react-scripts start", 27 | "build": "react-scripts build", 28 | "test": "react-scripts test", 29 | "eject": "react-scripts eject" 30 | }, 31 | "eslintConfig": { 32 | "extends": [ 33 | "react-app", 34 | "react-app/jest" 35 | ] 36 | }, 37 | "browserslist": { 38 | "production": [ 39 | ">0.2%", 40 | "not dead", 41 | "not op_mini all" 42 | ], 43 | "development": [ 44 | "last 1 chrome version", 45 | "last 1 firefox version", 46 | "last 1 safari version" 47 | ] 48 | }, 49 | "proxy": "http://localhost:8080" 50 | } 51 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/public/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/speechreactfrontend/public/favicon.ico -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/public/index.html: 
-------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 12 | 13 | 17 | 18 | 27 | 28 | Realtime Call Intelligence-Azure AI 29 | 30 | 31 | 32 |
33 | 43 | 44 | 45 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/public/logo192.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/speechreactfrontend/public/logo192.png -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/public/logo512.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/azure-speech-streaming-reactjs/speechreactfrontend/public/logo512.png -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/public/manifest.json: -------------------------------------------------------------------------------- 1 | { 2 | "short_name": "React App", 3 | "name": "Create React App Sample", 4 | "icons": [ 5 | { 6 | "src": "favicon.ico", 7 | "sizes": "64x64 32x32 24x24 16x16", 8 | "type": "image/x-icon" 9 | }, 10 | { 11 | "src": "logo192.png", 12 | "type": "image/png", 13 | "sizes": "192x192" 14 | }, 15 | { 16 | "src": "logo512.png", 17 | "type": "image/png", 18 | "sizes": "512x512" 19 | } 20 | ], 21 | "start_url": ".", 22 | "display": "standalone", 23 | "theme_color": "#000000", 24 | "background_color": "#ffffff" 25 | } 26 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/public/robots.txt: -------------------------------------------------------------------------------- 1 | # https://www.robotstxt.org/robotstxt.html 2 | User-agent: * 3 | Disallow: 4 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/src/App.css: -------------------------------------------------------------------------------- 1 | .main-container { 2 | margin-top: 100px; 3 | } 4 | 5 | .fa-microphone { 6 | color: #14A76C; 7 | } 8 | 9 | .fa-microphone:hover { 10 | color: #d44916; 11 | } 12 | 13 | .fa-file-audio { 14 | color: #FF652F; 15 | } 16 | 17 | .fa-file-audio:hover { 18 | color: #d44916; 19 | } 20 | 21 | .output-display { 22 | background-color: #f9f6fa; 23 | color: white; 24 | height: 700px; 25 | } 26 | 27 | .nlpoutput-display { 28 | background-color: #72e7ce; 29 | color: white; 30 | height: 700px; 31 | } 32 | 33 | .fas, .fa { 34 | cursor: pointer; 35 | } 36 | 37 | .background { 38 | background-color: #272727; 39 | color: #747474; 40 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/src/App.js: -------------------------------------------------------------------------------- 1 | import React, { Component } from 'react'; 2 | import { Container } from 'reactstrap'; 3 | //import axios from 'axios'; 4 | import { getKeyPhrases, getTokenOrRefresh } from './token_util.js'; 5 | import { ResultReason } from 'microsoft-cognitiveservices-speech-sdk'; 6 | import './App.css'; 7 | 8 | const speechsdk = require('microsoft-cognitiveservices-speech-sdk') 9 | 10 | export default class App extends Component { 11 | constructor(props) { 12 | super(props); 13 | 14 | this.state = {color: 'white', value: '' }; 15 | 16 | 
this.handleChange = this.handleChange.bind(this); 17 | this.handleSubmit = this.handleSubmit.bind(this); 18 | 19 | this.state = { 20 | color: 'white', 21 | displayText: 'INITIALIZED: ready to test speech...', 22 | displayNLPOutput: 'NLP Output: ...' 23 | }; 24 | } 25 | 26 | handleChange(event) { 27 | this.setState({value: event.target.value}); 28 | } 29 | 30 | handleSubmit(event) { 31 | alert('Your conversation will be saved with name : ' + this.state.value + ' Submit a different name to change it.'); 32 | event.preventDefault(); 33 | } 34 | 35 | async componentDidMount() { 36 | // check for valid speech key/region 37 | const tokenRes = await getTokenOrRefresh(); 38 | if (tokenRes.authToken === null) { 39 | this.setState({ 40 | displayText: 'FATAL_ERROR amc: ' + tokenRes.error 41 | }); 42 | } 43 | } 44 | 45 | async sttFromMic() { 46 | const tokenObj = await getTokenOrRefresh(); 47 | //const customSpeechEndpoint = process.env.CUSTOM_SPEECH_ENDPOINT_ID; 48 | const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region); 49 | speechConfig.speechRecognitionLanguage = 'en-US'; 50 | 51 | //Setting below specifies custom speech model ID that is created using Speech Studio 52 | speechConfig.endpointId = 'd26026b7-aaa0-40bf-84e7-35054451a3f4'; 53 | 54 | //Setting below allows specifying custom GUID that can be used to correlnpate audio captured by Speech Logging 55 | speechConfig.setServiceProperty("clientConnectionId", this.state.value, speechsdk.ServicePropertyChannel.UriQueryParameter); 56 | 57 | const audioConfig = speechsdk.AudioConfig.fromDefaultMicrophoneInput(); 58 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 59 | 60 | this.setState({ 61 | displayText: 'Speak into your microphone to start conversation...' + this.state.value 62 | }); 63 | 64 | let resultText = ""; 65 | let nlpText = " "; 66 | recognizer.sessionStarted = (s, e) => { 67 | resultText = "Session ID: " + e.sessionId; 68 | 69 | this.setState({ 70 | displayText: resultText 71 | }); 72 | }; 73 | 74 | recognizer.recognized = async (s, e) => { 75 | if(e.result.reason === ResultReason.RecognizedSpeech){ 76 | 77 | //Display continuous transcript 78 | resultText += `\n${e.result.text}`; 79 | this.setState({ 80 | displayText: resultText 81 | }); 82 | 83 | //Perform continuous NLP 84 | const nlpObj = await getKeyPhrases(e.result.text); 85 | 86 | //Display extracted Key Phrases 87 | const keyPhraseText = JSON.stringify(nlpObj.keyPhrasesExtracted); 88 | if(keyPhraseText.length > 15){ 89 | //nlpText += "\n" + keyPhraseText; 90 | //this.setState({ displayNLPOutput: nlpText }); 91 | } 92 | 93 | //Display extracted entities 94 | const entityText = JSON.stringify(nlpObj.entityExtracted); 95 | if(entityText.length > 12){ 96 | nlpText += "\n" + entityText; 97 | this.setState({ displayNLPOutput: nlpText.replace('
', '\n') }); 98 | } 99 | 100 | //Display PII Detected 101 | const piiText = JSON.stringify(nlpObj.piiExtracted); 102 | if(piiText.length > 21){ 103 | nlpText += "\n" + piiText; 104 | this.setState({ displayNLPOutput: nlpText.replace('
', '\n') }); 105 | } 106 | } 107 | else if (e.result.reason === ResultReason.NoMatch) { 108 | //resultText += `\nNo Match` 109 | resultText += `\n` 110 | } 111 | 112 | }; 113 | 114 | recognizer.startContinuousRecognitionAsync(); 115 | } 116 | 117 | async fileChange(event) { 118 | const audioFile = event.target.files[0]; 119 | console.log(audioFile); 120 | const fileInfo = audioFile.name + ` size=${audioFile.size} bytes `; 121 | 122 | this.setState({ 123 | displayText: fileInfo 124 | }); 125 | 126 | const tokenObj = await getTokenOrRefresh(); 127 | const speechConfig = speechsdk.SpeechConfig.fromAuthorizationToken(tokenObj.authToken, tokenObj.region); 128 | speechConfig.speechRecognitionLanguage = 'en-US'; 129 | 130 | const audioConfig = speechsdk.AudioConfig.fromWavFileInput(audioFile); 131 | const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig); 132 | 133 | recognizer.recognizeOnceAsync(result => { 134 | let displayText; 135 | if (result.reason === ResultReason.RecognizedSpeech) { 136 | displayText = `RECOGNIZED: Text=${result.text}` 137 | } else { 138 | displayText = 'ERROR: Speech was cancelled or could not be recognized. Ensure your microphone is working properly.'; 139 | } 140 | 141 | this.setState({ 142 | displayText: fileInfo + displayText 143 | }); 144 | }); 145 | } 146 | 147 | render() { 148 | return ( 149 | 150 | 151 |
Realtime Call Intelligence - powered by Azure AI
152 |
NOTE: This conversation will be recorded for demo purposes.
153 |
-----------------------------------------------------------
154 | 155 |
156 | 160 | 161 |
162 | 163 | 164 |
165 | this.sttFromMic()}> 166 | STEP 2 - Click on Microphone and start talking for real-time insights.. 167 |
168 | 169 |
----- Speech-to-text Output ---------------------------------------------------- AI-powered Call Insights ------
170 | 171 |
172 |
173 | {this.state.displayText} 174 |
175 |
176 | {this.state.displayNLPOutput} 177 |
178 |
179 | 180 |
181 | ); 182 | } 183 | } -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/src/index.js: -------------------------------------------------------------------------------- 1 | import 'bootstrap/dist/css/bootstrap.css'; 2 | import React from 'react'; 3 | import ReactDOM from 'react-dom'; 4 | import App from './App'; 5 | 6 | ReactDOM.render( 7 | 8 | 9 | , 10 | document.getElementById('root') 11 | ); 12 | 13 | 14 | -------------------------------------------------------------------------------- /azure-speech-streaming-reactjs/speechreactfrontend/src/token_util.js: -------------------------------------------------------------------------------- 1 | import axios from 'axios'; 2 | import Cookie from 'universal-cookie'; 3 | 4 | export async function getTokenOrRefresh() { 5 | const cookie = new Cookie(); 6 | const speechToken = cookie.get('speech-token'); 7 | 8 | if (speechToken === undefined) { 9 | try { 10 | console.log('Try getting token from the express backend'); 11 | const res = await axios.get('/api/get-speech-token'); 12 | const token = res.data.token; 13 | const region = res.data.region; 14 | cookie.set('speech-token', region + ':' + token, {maxAge: 540, path: '/'}); 15 | 16 | console.log('Token fetched from back-end: ' + token); 17 | return { authToken: token, region: region }; 18 | } catch (err) { 19 | console.log(err.response.data); 20 | return { authToken: null, error: err.response.data }; 21 | } 22 | } else { 23 | console.log('Token fetched from cookie: ' + speechToken); 24 | const idx = speechToken.indexOf(':'); 25 | return { authToken: speechToken.slice(idx + 1), region: speechToken.slice(0, idx) }; 26 | } 27 | } 28 | 29 | export async function getKeyPhrases(requestText) { 30 | 31 | try{ 32 | //Key Phrase extraction 33 | const data = {transcript: requestText}; 34 | const headers = { 'Content-Type': 'application/json' }; 35 | 36 | const res = await axios.post('/api/ta-key-phrases', data, {headers}); 37 | 38 | return res.data; 39 | //return {keyPhrasesExtracted: keyPhrasesExtracted}; 40 | } catch (err) { 41 | return {keyPhrasesExtracted: "NoKP", entityExtracted: "NoEnt"}; 42 | } 43 | } 44 | 45 | export async function getKeyPhrasesOld(requestText) { 46 | 47 | try{ 48 | //Key Phrase extraction 49 | const data = {transcript: requestText}; 50 | const headers = { 'Content-Type': 'application/json' }; 51 | 52 | const res = await axios.post('/api/ta-key-phrases', data, {headers}); 53 | //const keyPhrasesExtracted = JSON.stringify(res.body.keyPhraseResponse); 54 | 55 | return res.data; 56 | //return {keyPhrasesExtracted: keyPhrasesExtracted}; 57 | } catch (err) { 58 | return {keyPhrasesExtracted: "None"}; 59 | } 60 | } -------------------------------------------------------------------------------- /call-batch-analytics/README.md: -------------------------------------------------------------------------------- 1 | # Getting started with the Call Batch Analytics 2 | 3 | This call batch analytics component is using [Ingestion Client](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/ingestion) that helps transcribe your audio files without any development effort. The Ingestion Client monitors your dedicated Azure Storage container so that new audio files are transcribed automatically as soon as they land. 4 | 5 | This tool uses multiple Azure Cognitive Services to get insights from call recordings. 
It uses Azure Speech to transcribe calls and then Azure Text Analytics for various analytics tasks (including sentiment analysis, PII detection/redaction, etc.). Insights extracted by these Azure AI services are then stored in an Azure SQL database for further analysis and visualization (using Power BI or other tools). 6 | 7 | Think of this tool as an automated & scalable transcription solution for all audio files in your Azure Storage. This tool is a quick and effortless way to transcribe your audio files or just explore transcription. 8 | 9 | Using an ARM template deployment, all the resources necessary to seamlessly process your audio files are configured and turned on. 10 | 11 | # Architecture 12 | 13 | The Ingestion Client is optimized to use the capabilities of the Azure Speech infrastructure. It uses Azure resources to orchestrate transcription requests to the Azure Speech service using audio files as they appear in your dedicated storage containers. 14 | 15 | The following diagram shows the structure of this tool as defined by the ARM template. 16 | 17 | ![Architecture](../common/images/batchanalyticsarchitecture.png) 18 | 19 | When a file lands in a storage container, the Event Grid event indicates the completed upload of the file. The file is filtered and pushed to a Service Bus topic. Code in Azure Functions triggered by a Service Bus message picks up the event and creates a transcription request using the Azure Speech services batch pipeline. When the transcription request is complete, an event is placed in another queue in the same Service Bus resource. A different Azure Function triggered by the completion event starts monitoring transcription completion status. When transcription completes, the Azure Function copies the transcript into the same container where the audio file was obtained. 20 | 21 | 22 | # Setup Guide 23 | 24 | Follow these steps to set up and run the tool using ARM templates. 25 | 26 | ## Prerequisites 27 | 28 | An [Azure Account](https://azure.microsoft.com/free/), an [Azure Speech services key](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices), and an [Azure Text Analytics service key](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalyticsis) are needed prior to deploying this tool using the ARM template. 29 | 30 | > **_NOTE:_** You need to create a Speech Resource with a paid (S0) key. The free key account will not work. 31 | 32 | 33 | ### Operating Mode 34 | 35 | Audio files can be processed either by the [Speech to Text API v3.0](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) for batch processing, or our [Speech SDK](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk) for real-time processing. We will be using Batch Mode and make use of the `Diarization` feature offered in Batch Mode. 36 | 37 | 38 | ## Setup Instructions 39 | 40 | Follow the instructions below to deploy the resources from the ARM template. 41 | 42 | 1. In the [Azure portal](https://portal.azure.com), click **Create a Resource**. In the search box, type **template deployment**, and select the **Template deployment** resource. 43 | 44 | 2. On the screen that appears, click the **Create** button. 45 | 46 | ![Create template](../common/images/image003.png) 47 | 48 | 3. You will be creating Azure resources from the ARM template we provide. Click the **Build your own template in the editor** link. 49 | 50 | ![Create template2](../common/images/image005.png) 51 | 52 | 4.
Load the template by clicking **Load file**. Use the `call-batch-analytics\batch-analytics-arm-template-servicebus.json` file in this step. Alternatively, 53 | you could copy/paste the template in the editor. 54 | 55 | ![Load template](../common/images/image007.png) 56 | 57 | 5. Once the template text is loaded you will be able to read and edit the template. Do 58 | **NOT** attempt any edits at this stage. You need to save the template you loaded, so click the **Save** button. 59 | 60 | ![Save template](../common/images/image009.png) 61 | 62 | Saving the template will result in the screen below. You will need to fill in the form provided. 63 | 64 | **The following settings are required in this step.** 65 | 1. Specify a `Resource group` name. It is recommended to create a new resource group. 66 | 2. Specify a `Storage Account` name. It is recommended to create a new storage account. 67 | 3. Provide an existing `Azure Speech Services Key`. 68 | 4. Select the `Azure Speech Services Region` that corresponds to your Azure Speech Services Key. 69 | 5. Provide an existing `Text Analytics Key`. 70 | 6. Select the `Azure Text Analytics Region` that corresponds to your Text Analytics Key. 71 | 7. Provide a `Sql Administrator Login` and `Sql Administrator Login Password`. Make a note of these as we will need them in future steps. 72 | 73 | Other settings are optional. You can change them if you wish. 74 | 75 | 76 | It is important that all the information is correct. Let us look at the form and go through each field. 77 | 78 | ![form template](../common/images/image011.png) 79 | 80 | > **_NOTE:_** Use short, descriptive names in the form for your resource group. Long resource group names can result in deployment errors. 81 | 82 | * Pick the Azure Subscription Id where you will create the resources. 83 | 84 | * Either pick or create a resource group. (It would be better to have all the Ingestion Client 85 | resources within the same resource group, so we suggest you create a new resource group.) 86 | 87 | * Pick a region. This can be the `same region as your Azure Speech key`. 88 | 89 | The following settings all relate to the resources and their attributes: 90 | 91 | * Give your storage account a name. You will be using a new storage 92 | account rather than an existing one. 93 | 94 | The following 2 steps are optional. If you omit them, the tool will use the base model to obtain 95 | transcripts. If you have created a custom Speech model, then enter it here. 96 | 97 | Transcripts are obtained by polling the service. We acknowledge that there is a cost related to that. 98 | So, the following setting gives you the option to limit that cost by telling your Azure Function how 99 | often you want it to fire. 100 | 101 | * Enter the polling frequency. There are many scenarios where this would only need to be 102 | done a couple of times a day. 103 | 104 | * Enter the locale of the audio. You need to tell us what language model we need to use to 105 | transcribe your audio. 106 | 107 | * Enter your Azure Speech subscription key and Locale information. 108 | 109 | The rest of the settings relate to the transcription request. You can read more about those in [How to use batch transcription](https://docs.microsoft.com/azure/cognitive-services/speech-service/batch-transcription). 110 | 111 | 112 | * Select a profanity option. 113 | 114 | * Select a punctuation option. 115 | 116 | * Select whether to Add Diarization [all locales]. 117 | 118 | * Select whether to Add Word level Timestamps [all locales].
119 | 120 | 121 | If you want to perform Text Analytics, add those credentials. 122 | 123 | 124 | * Add Text analytics key 125 | 126 | * Add Text analytics region 127 | 128 | * Add Sentiment 129 | 130 | * Add Personally Identifiable Information (PII) Redaction 131 | 132 | > **_NOTE:_** The ARM template also allows you to customize the PII categories through the PiiCategories variable (e.g., to only redact person names and organizations set the value to "Person,Organization"). A full list of all supported categories can be found in the [PII Entity Categories](https://docs.microsoft.com/azure/cognitive-services/text-analytics/named-entity-types?tabs=personal). The ARM template also allows you to set a minimum confidence for redaction through the PiiMinimumPrecision value; the value must be between 0.0 and 1.0. More details can be found in the [Pii Detection Documentation](https://docs.microsoft.com/azure/search/cognitive-search-skill-pii-detection). 133 | 134 | If you want further analytics, we can map the transcript JSON we produce to a DB schema. 135 | 136 | * Enter the SQL DB credential login 137 | 138 | * Enter the SQL DB credential password 139 | 140 | 141 | Press **Create** to create the resources. It typically takes 1-2 mins. The resources 142 | are listed below. 143 | 144 | ![resources](../common/images/image013.png) 145 | 146 | If a Consumption Plan (Y1) was selected for the Azure Functions, make sure that the functions are synced with the other resources (see [Trigger syncing](https://docs.microsoft.com/azure/azure-functions/functions-deployment-technologies#trigger-syncing) for further details). 147 | 148 | To do so, click on your **StartTranscription** function in the portal and wait until your function shows up: 149 | 150 | ![resources](../common/images/image016.png) 151 | 152 | Do the same for the **FetchTranscription** function: 153 | 154 | ![resources](../common/images/image017.png) 155 | 156 | > **_Important:_** Until you restart both Azure functions you may see errors. 157 | 158 | ## Running Batch Analytics on Call Recordings 159 | 160 | 1. **Upload audio files to the newly created audio-input container**. 161 | 162 | 163 | Use the Azure Portal or [Microsoft Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to upload call audio files to the audio-input container in your new storage account. The process of transcription is asynchronous. Transcription usually takes half the time of the audio track to complete. The structure of your newly created storage account will look like the picture below. 164 | 165 | ![containers](../common/images/image015.png) 166 | 167 | There are several containers to distinguish between the various outputs. We suggest (for the sake of keeping things tidy) following the pattern and using the audio-input container as the only container for uploading your audio. 168 | 169 | 2. **Check results of batch analytics**: Once the batch process finishes, results are added to the json-result-output and test-results-output containers in the same storage account.
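For scripted or bulk uploads, the same step can also be done from code. The snippet below is a minimal sketch (it is not part of this repo) using the `@azure/storage-blob` package; it assumes that package is installed and that the storage account connection string is available in an `AZURE_STORAGE_CONNECTION_STRING` environment variable (both are assumptions; adjust to your environment). The `audio-input` container is the one created by the ARM template above.

```javascript
// uploadRecording.js - minimal sketch for pushing a call recording into the audio-input container.
// Assumptions: `npm install @azure/storage-blob` has been run and
// AZURE_STORAGE_CONNECTION_STRING holds the storage account connection string.
const path = require('path');
const { BlobServiceClient } = require('@azure/storage-blob');

async function uploadRecording(localFilePath) {
    const connectionString = process.env.AZURE_STORAGE_CONNECTION_STRING;
    const blobServiceClient = BlobServiceClient.fromConnectionString(connectionString);

    // "audio-input" is the container the Ingestion Client watches for new audio files.
    const containerClient = blobServiceClient.getContainerClient('audio-input');

    // Use the local file name as the blob name.
    const blobName = path.basename(localFilePath);
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);

    await blockBlobClient.uploadFile(localFilePath);
    console.log(`Uploaded ${blobName} to the audio-input container.`);
}

// Example: upload the sample recording shipped with this repo.
uploadRecording('./sampledata/SampleData-SiriAzureGoogleTalk1.wav').catch(console.error);
```

Once the blob lands in the container, the Event Grid subscription and Service Bus topic created by the ARM template pick it up automatically; no extra call is needed to start transcription.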
-------------------------------------------------------------------------------- /call-batch-analytics/sampledata/SampleData-SiriAzureGoogleTalk1.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/call-batch-analytics/sampledata/SampleData-SiriAzureGoogleTalk1.wav -------------------------------------------------------------------------------- /common/Call Center Intelligence Sample Demo Script.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/Call Center Intelligence Sample Demo Script.docx -------------------------------------------------------------------------------- /common/images/batchanalyticsarchitecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/batchanalyticsarchitecture.png -------------------------------------------------------------------------------- /common/images/cover.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/cover.png -------------------------------------------------------------------------------- /common/images/custom-speech-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/custom-speech-overview.png -------------------------------------------------------------------------------- /common/images/customspeechendpointid.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/customspeechendpointid.PNG -------------------------------------------------------------------------------- /common/images/dbCreds.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/dbCreds.png -------------------------------------------------------------------------------- /common/images/deploycustomspeechmodel.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/deploycustomspeechmodel.PNG -------------------------------------------------------------------------------- /common/images/enterInfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/enterInfo.png -------------------------------------------------------------------------------- /common/images/highleveloverview.JPG: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/highleveloverview.JPG -------------------------------------------------------------------------------- /common/images/highleveloverview.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/highleveloverview.PNG -------------------------------------------------------------------------------- /common/images/image003.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image003.png -------------------------------------------------------------------------------- /common/images/image005.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image005.png -------------------------------------------------------------------------------- /common/images/image007.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image007.png -------------------------------------------------------------------------------- /common/images/image009.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image009.png -------------------------------------------------------------------------------- /common/images/image011.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image011.png -------------------------------------------------------------------------------- /common/images/image013.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image013.png -------------------------------------------------------------------------------- /common/images/image015.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image015.png -------------------------------------------------------------------------------- /common/images/image016.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image016.png -------------------------------------------------------------------------------- /common/images/image017.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/image017.png 
-------------------------------------------------------------------------------- /common/images/loadingPBI.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/loadingPBI.png
-------------------------------------------------------------------------------- /common/images/refreshDB.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/refreshDB.png
-------------------------------------------------------------------------------- /common/images/sqlInfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/sqlInfo.png
-------------------------------------------------------------------------------- /common/images/uploadcustomspeechdata.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/common/images/uploadcustomspeechdata.PNG
-------------------------------------------------------------------------------- /powerbi/README.md: --------------------------------------------------------------------------------
1 | 
2 | # Visualizing the Batch Analytics output in Power BI
3 | 
4 | ## Setup Guide
5 | 
6 | This guide describes how to create Power BI reports from the provided templates so you can explore the output of the call batch analytics pipeline.
7 | 
8 | ## Prerequisites
9 | 
10 | A working deployment of [call-batch-analytics](../call-batch-analytics/README.md), as well as [Power BI Desktop](https://aka.ms/PowerBIDownload) installed (free) on your machine.
11 | 
12 | Make sure you have also downloaded a copy of the report template you would like to use:
13 | 
14 | - [Speech Insights](SpeechInsights.pbit)
15 | - [Sentiment Insights](SentimentInsights.pbit)
16 | 
17 | ## Power BI Desktop Setup Instructions
18 | 
19 | 1. Ensure that you have downloaded and installed Power BI Desktop before beginning this guide. Navigate to and double-click the .pbit file you downloaded. If Power BI Desktop is installed correctly, the file opens in it automatically.
20 | 
21 | ![Loading Power BI Desktop](../common/images/loadingPBI.png)
22 | 
23 | 2. After Power BI Desktop has finished loading, it will prompt you for the SQL server and database names. These are the values you declared during the ARM template deployment; they can also be found on the Overview page of the SQL database in the Azure portal. Enter these values and click Load.
24 | 
25 | ![Power BI Desktop asking for information](../common/images/enterInfo.png)
26 | 
27 | Enter the Azure SQL Database server details. You can get these from the Azure portal:
28 | 
29 | ![Finding SQL server and database names](../common/images/sqlInfo.png)
30 | 
31 | 3. Power BI Desktop will then display a pop-up showing a refresh of the SQL database in progress. After a few seconds, another window will prompt you for credentials to access your SQL database. Select Database as the credential type, enter the username and password you specified during the ARM template deployment of the accelerator, then click Connect and wait for the refresh to complete. (If you would like to verify these connection details outside Power BI first, see the optional check at the end of this guide.)
32 | 
33 | ![Power BI Desktop shows data refreshing](../common/images/refreshDB.png)
34 | 
35 | Enter the Azure SQL Database credentials. You can get these from the Azure portal:
36 | 
37 | ![Power BI Desktop prompting for credentials](../common/images/dbCreds.png)
38 | 
39 | 4. You should now be looking at the Cover page of the Power BI report template you opened. You can navigate across pages using the tabs at the bottom, or simply Ctrl+click the boxes on the right-hand side. Feel free to customize visuals, add new pages, and change the look and feel to match your organization. Enjoy!
40 | 
41 | ![Power BI report cover page](../common/images/cover.png)
42 | 
43 | 
44 | 
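## Optional: Verify the SQL Connection Details

If you want to confirm the server name, database name, and credentials before entering them in Power BI Desktop, the short Python sketch below performs a simple connection test. This script is not part of the accelerator; it assumes you have `pyodbc` installed (`pip install pyodbc`) along with the Microsoft ODBC Driver 18 for SQL Server, and every value in it is a placeholder to replace with the values from your own deployment.

```python
# Standalone sanity check for the Azure SQL connection details used by the
# Power BI templates. All values below are placeholders, not values shipped
# with this repository.
import pyodbc

SERVER = "your-sql-server.database.windows.net"  # placeholder: SQL server name from the ARM deployment
DATABASE = "your-database-name"                  # placeholder: database name from the ARM deployment
USERNAME = "your-sql-admin-user"                 # placeholder: SQL admin username
PASSWORD = "your-sql-admin-password"             # placeholder: SQL admin password

connection_string = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={SERVER};DATABASE={DATABASE};UID={USERNAME};PWD={PASSWORD};"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

# If this prints a SQL Server version string, Power BI Desktop should be able
# to connect with the same server, database, username, and password.
with pyodbc.connect(connection_string) as connection:
    version = connection.cursor().execute("SELECT @@VERSION;").fetchone()[0]
    print(version)
```

If the connection fails, double-check the values against the SQL database's Overview page in the Azure portal and make sure your client IP address is allowed through the SQL server firewall.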
-------------------------------------------------------------------------------- /powerbi/SentimentInsights.pbit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/powerbi/SentimentInsights.pbit
-------------------------------------------------------------------------------- /powerbi/SpeechInsights.pbit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amulchapla/CallCenterIntelligenceAzureAI/f5a6952db758d1f325ea1f112624d2135a7bc8f0/powerbi/SpeechInsights.pbit
--------------------------------------------------------------------------------