├── .gitbook.yaml ├── docs ├── mycroft-technologies │ ├── mycroft-core │ │ ├── enclosure.md │ │ ├── services │ │ │ ├── enclosure.md │ │ │ ├── skills-service.md │ │ │ ├── voice-service.md │ │ │ ├── README.md │ │ │ └── audio-service.md │ │ ├── skills-service.md │ │ ├── voice-service.md │ │ ├── legacy-repo.md │ │ ├── testing.md │ │ ├── plugins │ │ │ ├── wake-word-plugins.md │ │ │ ├── stt.md │ │ │ ├── audioservice.md │ │ │ ├── tts.md │ │ │ └── README.md │ │ ├── audio-service.md │ │ └── README.md │ ├── mycroft-gui │ │ ├── README.md │ │ └── remote-stt-and-tts.md │ ├── mycroft-skills-manager.md │ ├── mimic-overview │ │ └── mimic-recording-studio.md │ ├── adapt │ │ ├── adapt-examples.md │ │ └── adapt-tutorial.md │ ├── mycroft-roadmap.md │ ├── platforms.md │ ├── mimic-tts │ │ └── README.md │ └── precise.md ├── using-mycroft-ai │ ├── customizations │ │ ├── README.md │ │ ├── languages │ │ │ └── francais-french.md │ │ └── config-manager.md │ ├── troubleshooting │ │ ├── README.md │ │ ├── support-skill.md │ │ ├── getting-more-support.md │ │ └── wake-word-troubleshooting.md │ ├── get-mycroft │ │ ├── mark-ii │ │ │ └── README.md │ │ ├── android.md │ │ └── README.md │ └── installing-new-skills.md ├── skill-development │ ├── skill-types │ │ ├── README.md │ │ └── fallback-skill.md │ ├── displaying-information │ │ ├── README.md │ │ ├── mycroft-gui-on-a-pc.md │ │ ├── show-simple-content.md │ │ └── mark-1-display.md │ ├── marketplace-submission │ │ ├── README.md │ │ ├── skills-acceptance-process │ │ │ ├── skills-review-information-template.md │ │ │ ├── skills-review-code-template.md │ │ │ └── skills-review-functional-template.md │ │ └── skill-readme-md.md │ ├── voice-user-interface-design-guidelines │ │ ├── interactions-and-guidelines │ │ │ ├── README.md │ │ │ ├── intents.md │ │ │ ├── confirmations.md │ │ │ └── statements-and-prompts.md │ │ ├── README.md │ │ ├── error-handling.md │ │ ├── design-to-development.md │ │ ├── design-process.md │ │ ├── prototyping.md │ │ ├── voice-assistant-personas.md │ │ ├── what-can-a-skill-do.md │ │ └── conversations.md │ ├── user-interaction │ │ ├── confirmations.md │ │ ├── README.md │ │ ├── parsing.md │ │ ├── intents │ │ │ └── README.md │ │ └── converse.md │ ├── intents │ │ └── intent-service.md │ ├── voight-kampff │ │ ├── README.md │ │ ├── reviewing-the-report.md │ │ └── scenario-outlines.md │ ├── faq.md │ ├── skill-structure │ │ ├── dependencies │ │ │ ├── requirements-files.md │ │ │ ├── README.md │ │ │ └── manifest-yml.md │ │ ├── api.md │ │ └── lifecycle-methods.md │ ├── introduction │ │ ├── python-resources.md │ │ ├── your-first-skill.md │ │ └── README.md │ └── mycroft-skills-manager │ │ └── README.md ├── .gitbook │ └── assets │ │ ├── gpio-pins.png │ │ ├── Mimic_color.png │ │ ├── Date-Time-Settings.png │ │ ├── Mimic3-intro-banner.png │ │ ├── Date-Time-Settings (1).png │ │ ├── Mimic3-intro-banner_v2.png │ │ ├── mycroft-gui-screenshot.jpg │ │ ├── Mimic3-intro-banner_v2.1.png │ │ └── Mimic3-intro-banner_v2.3.png ├── README.md └── about-mycroft-ai │ ├── glossary.md │ └── why-use-mycroft.md ├── img ├── message-tester.png ├── git-pull-request.png ├── Date-Time-Settings.png ├── mimic3-architecture.png ├── mimic3-voice_parts.png ├── Date-Time-Settings (1).png ├── mimic3-server_screenshot.jpg ├── mycroft-core-start-debug.png ├── CLI-showing-version-number.png ├── default-mycroft-voice-settings.png ├── mark1-utterances-from-command-line.png └── skills-forking-mycroft-skills-repo.png ├── .gitbook └── assets │ └── message-tester.png ├── redirects ├── README.md └── redirect-mapping.csv ├── 
SUPPORT.md ├── CONTRIBUTING.md ├── CODE_OF_CONDUCT.md ├── TODO.md └── README.md /.gitbook.yaml: -------------------------------------------------------------------------------- 1 | root: ./docs/ 2 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/enclosure.md: -------------------------------------------------------------------------------- 1 | # Enclosure 2 | 3 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/customizations/README.md: -------------------------------------------------------------------------------- 1 | # Customizations 2 | 3 | -------------------------------------------------------------------------------- /docs/skill-development/skill-types/README.md: -------------------------------------------------------------------------------- 1 | # Advanced Skill Types 2 | 3 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/services/enclosure.md: -------------------------------------------------------------------------------- 1 | # Enclosure 2 | 3 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/skills-service.md: -------------------------------------------------------------------------------- 1 | # Skills Service 2 | 3 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/voice-service.md: -------------------------------------------------------------------------------- 1 | # Voice Service 2 | 3 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/services/skills-service.md: -------------------------------------------------------------------------------- 1 | # Skills Service 2 | 3 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/services/voice-service.md: -------------------------------------------------------------------------------- 1 | # Voice Service 2 | 3 | -------------------------------------------------------------------------------- /docs/skill-development/displaying-information/README.md: -------------------------------------------------------------------------------- 1 | # Displaying information 2 | 3 | -------------------------------------------------------------------------------- /img/message-tester.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/message-tester.png -------------------------------------------------------------------------------- /img/git-pull-request.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/git-pull-request.png -------------------------------------------------------------------------------- /img/Date-Time-Settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/Date-Time-Settings.png -------------------------------------------------------------------------------- /img/mimic3-architecture.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/mimic3-architecture.png -------------------------------------------------------------------------------- /img/mimic3-voice_parts.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/mimic3-voice_parts.png -------------------------------------------------------------------------------- /img/Date-Time-Settings (1).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/Date-Time-Settings (1).png -------------------------------------------------------------------------------- /img/mimic3-server_screenshot.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/mimic3-server_screenshot.jpg -------------------------------------------------------------------------------- /img/mycroft-core-start-debug.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/mycroft-core-start-debug.png -------------------------------------------------------------------------------- /.gitbook/assets/message-tester.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/.gitbook/assets/message-tester.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/gpio-pins.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/gpio-pins.png -------------------------------------------------------------------------------- /img/CLI-showing-version-number.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/CLI-showing-version-number.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/Mimic_color.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Mimic_color.png -------------------------------------------------------------------------------- /img/default-mycroft-voice-settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/default-mycroft-voice-settings.png -------------------------------------------------------------------------------- /img/mark1-utterances-from-command-line.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/mark1-utterances-from-command-line.png -------------------------------------------------------------------------------- /img/skills-forking-mycroft-skills-repo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/img/skills-forking-mycroft-skills-repo.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/Date-Time-Settings.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Date-Time-Settings.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/Mimic3-intro-banner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Mimic3-intro-banner.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/Date-Time-Settings (1).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Date-Time-Settings (1).png -------------------------------------------------------------------------------- /docs/.gitbook/assets/Mimic3-intro-banner_v2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Mimic3-intro-banner_v2.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/mycroft-gui-screenshot.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/mycroft-gui-screenshot.jpg -------------------------------------------------------------------------------- /docs/.gitbook/assets/Mimic3-intro-banner_v2.1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Mimic3-intro-banner_v2.1.png -------------------------------------------------------------------------------- /docs/.gitbook/assets/Mimic3-intro-banner_v2.3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MycroftAI/documentation/HEAD/docs/.gitbook/assets/Mimic3-intro-banner_v2.3.png -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/services/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: 'Work In Progress: This section will detail the Services that make up Mycroft.' 3 | --- 4 | 5 | # Services 6 | 7 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/troubleshooting/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Having trouble with Mycroft? Here are solutions to common problems, and ways 4 | to get further support. 5 | --- 6 | 7 | # Troubleshooting 8 | 9 | -------------------------------------------------------------------------------- /docs/skill-development/marketplace-submission/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Want to share your new creation with the world? To have your Skill listed in 4 | the Marketplace, we need to ensure it has all the information required and 5 | works as expected. 
6 | --- 7 | 8 | # Marketplace Submission 9 | 10 | -------------------------------------------------------------------------------- /redirects/README.md: -------------------------------------------------------------------------------- 1 | # Redirects 2 | This directory contains the table of contents from our old documentation and a CSV of the redirects that need to be maintained to ensure old links will continue to function as expected. 3 | 4 | This is important as our long history of community support provides a significant portion of traffic to the documentation. 5 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/interactions-and-guidelines/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | In this section, you will learn about the interaction between users and 4 | Mycroft. You will learn the terms used to describe the different parts of the 5 | interaction, and best practices. 6 | --- 7 | 8 | # Interactions 9 | 10 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/customizations/languages/francais-french.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: How to configure Mycroft for use in French. 3 | --- 4 | 5 | # Français (French) 6 | 7 | ### Example configuration 8 | 9 | ``` 10 | { 11 | "lang": "fr-fr", 12 | "stt": { 13 | "module": "mycroft", 14 | "mycroft": { 15 | "lang": "fr-fr" 16 | } 17 | }, 18 | "tts": { 19 | "module": "google", 20 | "google": { 21 | "lang": "fr" 22 | } 23 | } 24 | } 25 | ``` 26 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-gui/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: Mycroft-GUI is an open source visual and display framework for Mycroft. 3 | --- 4 | 5 | # Mycroft GUI 6 | 7 | It runs on top of KDE Plasma technology and is built using Kirigami - a lightweight user interface framework for convergent applications powered by Qt. 8 | 9 | ### Debugging 10 | 11 | The QML layer logs to `wayland-session.log`. 12 | 13 | On the Mark II this is located at: 14 | `~/.local/share/sddm/wayland-session.log` 15 | 16 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/get-mycroft/mark-ii/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | The Mark II is our next-generation smart speaker designed for you and your 4 | family, whatever your level of technical expertise. The Dev Kit is open 5 | hardware and software to enable development. 6 | --- 7 | 8 | # Mark II 9 | 10 | For the production Mark II shipping now, please see the dedicated documentation at: 11 | 12 | {% embed url="https://mycroft-ai.gitbook.io/mark-ii/" %} 13 | 14 | For the Mark II Dev Kit, including the software that was initially distributed with those kits, jump to the next page: 15 | -------------------------------------------------------------------------------- /docs/skill-development/user-interaction/confirmations.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Confirmations are used to verify that the input from the User was understood 4 | correctly. These may be verbal or non-verbal.
5 | --- 6 | 7 | # Confirmations 8 | 9 | Confirmations may be verbal or non-verbal. See the [Voice User Interface Design Guidelines section on Confirmations](https://mycroft-ai.gitbook.io/docs/skill-development/voice-user-interface-design-guidelines/interactions-and-guidelines/confirmations). 10 | 11 | ## Non-verbal Confirmation 12 | 13 | See [`acknowledge()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.html#mycroft.MycroftSkill.acknowledge) 14 | 15 | -------------------------------------------------------------------------------- /docs/skill-development/intents/intent-service.md: -------------------------------------------------------------------------------- 1 | # Intent Service 2 | 3 | Mycroft supports both Padatious and Adapt intent handlers. Mycroft Core has an intent service that decides which intent will be triggered by a particular utterance. This is based on the confidence score of each potential intent match. 4 | 5 | ## Intent Confidence 6 | 7 | ## Order of Priority 8 | 1. Active skills attempt to handle using `converse()` 9 | 2. Padatious high match intents (confidence > 0.95) 10 | 3. Adapt intent handlers 11 | 4. Fallbacks: 12 | - Padatious near match intents (confidence > 0.8) 13 | - General fallbacks 14 | - Padatious loose match intents (confidence > 0.5) 15 | - Unknown intent handler -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-skills-manager.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Mycroft Skills Manager (msm) is a command line tool used to add, manage and 4 | remove Skills on any Mycroft installation. 5 | --- 6 | 7 | # Mycroft Skills Manager 8 | 9 | Mycroft Skills Manager \(MSM\) is a command line tool used to add, manage and remove Skills on any Mycroft installation. It can install any Skill listed on the [Mycroft Skills Repository](https://github.com/MycroftAI/mycroft-skills) or from any GitHub repository. `msm` is a handy tool for Developers and those who may frequently reconfigure, install and uninstall Skills. 10 | 11 | ## Using Mycroft Skills Manager 12 | 13 | See the Skill development section for general MSM usage instructions. 14 | 15 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mimic-overview/mimic-recording-studio.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Mimic Recording Studio simplifies the collection of voice training 4 | data from individuals by allowing you to speak and record phrases from a 5 | corpus. 6 | --- 7 | 8 | # Mimic Recording Studio 9 | 10 | Mimic Recording Studio is a Docker-based web application built with React, Flask and Gunicorn, backed by an SQLite database. 11 | 12 | Mimic Recording Studio simplifies the collection of voice training data from individuals by allowing you to speak and record phrases from a _corpus_. The recordings can then be input into machine learning algorithms to produce a distinct voice for Mimic or other Text to Speech engines. 13 | 14 | [You can download Mimic Recording Studio on GitHub](https://github.com/MycroftAI/mimic-recording-studio).
15 | -------------------------------------------------------------------------------- /SUPPORT.md: -------------------------------------------------------------------------------- 1 | # How to get support with Mycroft software, hardware and products 2 | 3 | There are multiple ways to seek support with Mycroft software, hardware and products. 4 | 5 | ## Forum 6 | 7 | We maintain a [Forum](https://community.mycroft.ai) which is regularly monitored. 8 | Feel free to post questions, bugs, and requests for assistance in the relevant Forum Topic. 9 | 10 | ## Chat 11 | 12 | Mycroft staff are regularly available in our [Chat](https://chat.mycroft.ai) platform. 13 | There are specific rooms available for different projects and products. 14 | 15 | ## Contact 16 | 17 | You can contact us via [our online form](https://mycroft.ai/contact), or give us a call. 18 | 19 | ## GitHub 20 | 21 | We welcome you raising Issues and Pull Requests on our public GitHub repositories. 22 | See the [CONTRIBUTING.md](CONTRIBUTING.md) file for more information. 23 | 24 | ## Helping us to help you 25 | 26 | Our [documentation](https://mycroft.ai/documentation/troubleshooting/) contains troubleshooting information, and information on log files and other files that we may need in order to help you. 27 | 28 | 29 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Learn about the Design process for planning a Skill, the basics of Voice 4 | Interaction, and best practices. 5 | --- 6 | 7 | # Voice User Interface Design Guidelines 8 | 9 | Through these guidelines you will learn how to use principles of Voice User Interface Design to build more effective skills. These tools will help define and validate the features of the skill before diving deep into development. 10 | 11 | This guide will cover what a Skill can do, Voice Assistant Personas, the Design process we recommend, some Voice Interaction definitions, and best practices. We will also cover some methods that can help you plan, prototype and test your skill during the early design stages. 12 | 13 | To work well with Mycroft's processes, and accelerate your skill into the Marketplace, it is best to take a [Test Driven Development approach](https://behave.readthedocs.io/en/latest/philosophy.html). By the end of this process you should have all the tools you need to begin creating a Behave test for your skill. How to create the Behave test will be covered in the next section. 14 | 15 | 16 | 17 | 18 | 19 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/error-handling.md: -------------------------------------------------------------------------------- 1 | # Error Handling 2 | 3 | Inevitably, the user will say something that your skill can’t handle. It’s best not to think of these as errors on the part of the user; remember, there aren’t really errors in conversations. Conversations are naturally cooperative, with both parties seeking to continue to a successful outcome. Keeping that in mind, you should do your best to avoid Error Handling that is a dead-end.
4 | 5 | **Avoid** 6 | 7 | | Speaker | | 8 | | :--- | :--- | 9 | | **User** | _Timer_ | 10 | | **Mycroft** | _I'm sorry I can't help you with that_ | 11 | 12 | **Better** 13 | 14 | | **Speaker** | | 15 | | :--- | :--- | 16 | | **User** | _Timer_ | 17 | | **Mycroft** | _A timer for how long?_ | 18 | | **User** | _5 minutes_ | 19 | 20 | In the first example, Mycroft does not give the user any options to finish the job they set out to do. Try to avoid situations where the user has to start the conversation over. 21 | 22 | ## Help, Cancel, and Stop 23 | 24 | When designing your skill it's best to think about the universal utterances: help, cancel and stop. At any point in the interaction, the user should be able to say "help" to get assistance using the skill, even if that help is quite simple. In practice, even basic guidance is better than leaving the user at a dead-end. 25 | 26 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-gui/remote-stt-and-tts.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: Utilize the STT and TTS services of a remotely hosted mycroft-core instance. 3 | --- 4 | 5 | # Remote STT and TTS 6 | 7 | {% hint style="warning" %} 8 | This feature is experimental. 9 | {% endhint %} 10 | 11 | Enabling "Remote TTS" and "Remote STT" in the options makes Mycroft-GUI act as a routing client for sending and receiving raw audio. This is intended to be used with a remotely hosted mycroft-core instance on the local network. 12 | 13 | Please note, this is completely experimental. It is being developed to run Mycroft-GUI on hardware with limited on-board processing power, such as the PinePhone; however, it should work on any Linux system. 14 | 15 | ### Requirements 16 | 17 | You must have mycroft-core hosted on a different local machine. On the device running mycroft-core, you need: 18 | 19 | * [Wave Client](https://github.com/AIIX/wave-client): should be installed in /mycroft/clients/wave/ and entries added to start it up in `start-mycroft.sh` and `stop-mycroft.sh` 20 | * [Remote-STT Skill](https://github.com/AIIX/remote-stt) 21 | * [Remote-TTS Skill](https://github.com/AIIX/remote-tts) 22 | * Changes to your `mycroft.conf`: `{ "remote": true, "remote-server": "your.local.ipaddress.here" }` 23 | 24 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/design-to-development.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Use a Behavior Driven approach and write tests in a natural language style 4 | before any development begins. This improves communication within your team, 5 | and sets your Skill up for success. 6 | --- 7 | 8 | # Design to Development 9 | 10 | ## Creating Behave Test files based on your Design 11 | 12 | For Skills to enter our official Marketplace, they need to be testable. We use a Behavior Driven approach and write tests in a natural language style before any development begins. After the skill is developed, the Behave tests will need to be updated with the appropriate Dialog files, and in many cases Steps will need to be created to incorporate the tests.
13 | 14 | A small portion of an example Feature file: 15 | 16 | ```text 17 | Feature: mycroft-moon-phase 18 | 19 | Scenario Outline: The user asks for the current moon phase 20 | Given an english speaking user 21 | When the user asks for "<the current moon phase>" 22 | Then "skill-moon-phase" should reply with "Today’s moon is Waning Crescent with 55% illumination" 23 | 24 | Examples: 25 | | the current moon phase | 26 | | what’s the moon phase | 27 | | what's the phase of the moon | 28 | | what is the moon phase today | 29 | ``` 30 | 31 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/troubleshooting/support-skill.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: The easiest way to find out what's happening on your device. 3 | --- 4 | 5 | # Support Skill 6 | 7 | ## Using the Support Skill 8 | 9 | As of December 2017, we have implemented a [Support Skill](https://github.com/MycroftAI/skill-support) as a core \(default\) **Skill** on all Mycroft **Devices**. This means that you can generate information about what the **Device** is doing. This information is then emailed to you, so you can review it for privacy reasons, and then if you choose, forward it to Mycroft for support. 10 | 11 | To generate information for a Support Ticket, say any of these phrases: 12 | 13 | > "Create a support ticket" "You're not working!" "Send me debug info" 14 | 15 | Mycroft will then ask: 16 | 17 | `"Please give me a brief description of the problem"` 18 | 19 | Tell Mycroft what the problem was, or what you were doing. Mycroft will then compile information and send an email to the address registered to your [home.mycroft.ai](https://home.mycroft.ai) account. The email will contain a link to [0x0.st](https://0x0.st), with information about your Device, settings, and Skills. This information can be used for troubleshooting. 20 | 21 | ![Support information generated by Support Skill](https://mycroft.ai/wp-content/uploads/2018/01/support-skill-email.png) 22 | 23 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/interactions-and-guidelines/intents.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | An intent is how the User starts an interaction with Mycroft. It represents 4 | the job they are trying to accomplish. 5 | --- 6 | 7 | # Intents 8 | 9 | Let's start with an example. A user in Melbourne, Australia might want to know about the weather. To ask for this information, they might say: 10 | 11 | > "Hey Mycroft, what's today's weather like?" 12 | > 13 | > "Hey Mycroft, what's the weather like in Melbourne?" 14 | > 15 | > "Hey Mycroft, weather" 16 | 17 | Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects Mycroft to respond with today's weather for their current location. 18 | 19 | It is up to us as Skill creators to teach Mycroft the variety of ways that a user might express the same intent. This is a key part of the design process. It is the key difference between a Skill that kind of works if you know what to say, and a Skill that feels intuitive and natural to talk to.
20 | 21 | This is handled by an intent parser whose job it is to learn from your Skill what intents it can handle, and to extract from the user's speech any key information that might be useful for your Skill. In this case it might include the specified date and location. 22 | 23 | For technical details on using Intents in your Mycroft Skill, see: 24 | 25 | {% page-ref page="../../user-interaction/intents/" %} 26 | 27 | 28 | 29 | -------------------------------------------------------------------------------- /docs/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Step by step instructions to get you started using Mycroft AI, developing your 4 | own Voice Skills, or learning about the technology that makes it all happen. 5 | --- 6 | 7 | # Documentation 8 | 9 | Want to give Mycroft a try? Mycroft is open source software that runs where you want it to, whether it’s on your own hardware or one of our dedicated devices. 10 | 11 | {% page-ref page="using-mycroft-ai/get-mycroft/" %} 12 | 13 | Having trouble? See our range of troubleshooting guides and common issues. 14 | 15 | {% page-ref page="using-mycroft-ai/troubleshooting/" %} 16 | 17 | Want to develop your own Voice Skill? Whether you are new to coding, or a master level programmer, creating your first Mycroft Voice Skill is easy! 18 | 19 | {% page-ref page="skill-development/introduction/" %} 20 | 21 | Interested in the technologies that make up Mycroft AI? We have developed a range of open source technologies to bring Mycroft to life. They are all open source, and can be used together, or independently. 22 | 23 | {% page-ref page="mycroft-technologies/overview.md" %} 24 | 25 | Have further questions? Mycroft has a wonderful open source Community. Join us on the Community [Forums](https://community.mycroft.ai) or [Chat](https://chat.mycroft.ai). 26 | 27 | Our documentation, just like our source code, is open source, and you can contribute to improving it [on GitHub](https://github.com/mycroftai/documentation). Alternatively, please rate our documentation and leave comments on how we can improve. 28 | 29 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/installing-new-skills.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Skills are like apps for a voice assistant. They give Mycroft access to new 4 | information and new abilities. 5 | --- 6 | 7 | # Installing New Skills 8 | 9 | {% hint style="info" %} 10 | The [Skills Marketplace](https://market.mycroft.ai/skills) is a curated list of Voice Skills that have been tested and approved by the Mycroft Community. 11 | {% endhint %} 12 | 13 | ## Voice Installation 14 | 15 | The simplest way to install a new Skill is to ask Mycroft to do it for you by saying: 16 | 17 | > Hey Mycroft, install {skill name} 18 | 19 | This will search the Skills Marketplace and install the matching Skill on your device. 20 | 21 | ## Removing a Skill 22 | 23 | To remove a Skill, you can say: 24 | 25 | > Hey Mycroft, uninstall {skill name} 26 | 27 | ## Command Line 28 | 29 | If you prefer to use the command line, you can directly use the [Mycroft Skills Manager](../skill-development/mycroft-skills-manager/).
30 | 31 | To install a Skill run: 32 | 33 | ```text 34 | mycroft-msm install skill-name 35 | ``` 36 | 37 | To remove a Skill run: 38 | 39 | ```text 40 | mycroft-msm remove skill-name 41 | ``` 42 | 43 | For more detail on command line usage see: 44 | 45 | {% page-ref page="../skill-development/mycroft-skills-manager/" %} 46 | 47 | ## Troubleshooting 48 | 49 | If you are having trouble using the Mycroft Skills Manager, or the `mycroft-msm` command cannot be found, see our dedicated troubleshooting guide: 50 | 51 | {% page-ref page="../skill-development/mycroft-skills-manager/msm-troubleshooting.md" %} 52 | 53 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/legacy-repo.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: Maintaining the history of Mycroft contributions to our legacy repo. 3 | --- 4 | 5 | # Legacy Repo 6 | 7 | ## Legacy repo shout-out 8 | 9 | Prior to open-sourcing Mycroft in the mycroft-core.git repo, a number of folks were working on the mycroft.git repo. The commit history was messy, there were some API keys checked in, and we felt it was time for a fresh start. The biggest drawback is the loss of commit/contributor history. This page will at least resolve the latter. Thank you to everyone who helped us get this far! 10 | 11 | {% hint style="info" %} 12 | Please note that below is a percentage of commits, and not a percentage of contribution value. Everyone below is awesome. 13 | {% endhint %} 14 | 15 | ## Git Summary: Mycroft Legacy Repo, 20 May 2016 16 | 17 | ```text 18 | seanfitz@r2d2:~/development/mycroft (master u=)$ git summary 19 | 20 | project : mycroft 21 | repo age : 8 months 22 | active : 205 days 23 | commits : 549 24 | files : 259 25 | authors : 26 | 218 Sean Fitzgerald 39.7% 27 | 188 Jonathan D'Orleans 34.2% 28 | 71 Ryan Sipes 12.9% 29 | 37 Wolfgange3311999 6.7% 30 | 18 aatchison 3.3% 31 | 4 Arron Atchison 0.7% 32 | 4 Nicholas Peterson 0.7% 33 | 2 ChristopherRios7 0.4% 34 | 2 Daniel Foré 0.4% 35 | 2 Matthew D. Scholefield 0.4% 36 | 1 Jordan Sipes 0.2% 37 | 1 Nathan Dyer 0.2% 38 | 1 Åke Forslund 0.2% 39 | ``` 40 | 41 | -------------------------------------------------------------------------------- /docs/skill-development/user-interaction/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Now that you have the basics, learn how to get input and create 4 | meaningful interactions with Users of your Skill. 5 | --- 6 | 7 | # User interaction 8 | 9 | ## Intents 10 | 11 | The initial interactions with your Skill will start with a general utterance from the User. Your Skill must register intents that it can handle and Mycroft will then determine which Skill can respond to the User's request. 12 | 13 | {% page-ref page="intents/" %} 14 | 15 | ## Statements 16 | 17 | Mycroft can speak any information that you need to communicate to the User. 18 | 19 | {% page-ref page="statements.md" %} 20 | 21 | ## Prompts 22 | 23 | If you need to prompt the User for information, you can use a range of methods to ask different types of questions. 24 | 25 | {% page-ref page="prompts.md" %} 26 | 27 | ## Parsing Utterances 28 | 29 | A common process within a Skill is to parse the contents of an utterance. Mycroft provides a range of easy to use methods for this task.
30 | 31 | {% page-ref page="parsing.md" %} 32 | 33 | ## Confirmations 34 | 35 | Confirmations are used to verify that the input from the User was understood correctly. These may be verbal or non-verbal. 36 | 37 | {% page-ref page="confirmations.md" %} 38 | 39 | ## Conversational Context 40 | 41 | For more complex cases, a Skill can add context to the Intent Parser to create more natural conversational interaction. 42 | 43 | {% page-ref page="conversational-context.md" %} 44 | 45 | ## Converse 46 | 47 | Converse allows a recently active Skill to inspect utterances before the normal intent handling process. 48 | 49 | {% page-ref page="converse.md" %} 50 | 51 | -------------------------------------------------------------------------------- /docs/skill-development/displaying-information/mycroft-gui-on-a-pc.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | During development, it can be useful to emulate the Mark II display on your 4 | primary machine. 5 | --- 6 | 7 | # Mycroft-GUI on a PC 8 | 9 | ## Installation 10 | 11 | The [Mycroft-GUI](https://github.com/mycroftAI/mycroft-gui) package requires some newer packages, so a newer distribution such as KDE Neon or K/Ubuntu 20.04 is recommended. 12 | 13 | On supported systems, the installation is equivalent to a development installation of mycroft-core: 14 | 15 | ```bash 16 | git clone https://github.com/mycroftAI/mycroft-gui 17 | cd mycroft-gui 18 | ./dev_setup.sh 19 | ``` 20 | 21 | {% hint style="info" %} 22 | For more detail see the [Getting Started](https://github.com/mycroftAI/mycroft-gui#getting-started) section of the mycroft-gui repository 23 | {% endhint %} 24 | 25 | ## Running the application 26 | 27 | Running the Mycroft GUI is as simple as launching: 28 | 29 | ```bash 30 | mycroft-gui-app 31 | ``` 32 | 33 | If mycroft-core is not already running, all Mycroft services will be started. 34 | 35 | By default, the GUI window includes a text entry box to send utterances to Mycroft. 36 | 37 | ![Mycroft-GUI application](../../.gitbook/assets/mycroft-gui-screenshot.jpg) 38 | 39 | In the bottom-left corner of the window is a hamburger-style menu. Here you can find hints for using Mycroft, settings for the mycroft-gui application, and licensing information. 40 | 41 | ### Emulating a Mark II 42 | 43 | In order to emulate the display of a Mycroft Mark II, or any other device, you can set the dimensions of the screen and hide the text input bar when launching the application. 44 | 45 | ```bash 46 | mycroft-gui-app --width=800 --height=480 --hideTextInput 47 | ``` 48 | 49 | -------------------------------------------------------------------------------- /docs/skill-development/user-interaction/parsing.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Mycroft provides a range of easy to use methods to parse the contents of 4 | utterances from Users. 5 | --- 6 | 7 | # Parsing Utterances 8 | 9 | ## Extracting data 10 | 11 | ### Datetime 12 | 13 | Extracts date and time information from a sentence. 14 | 15 | See [`extract_datetime()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.parse.html#mycroft.util.parse.extract_datetime) 16 | 17 | ### Duration 18 | 19 | Converts an English phrase into a number of seconds.
20 | 21 | See [`extract_duration()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.parse.html#mycroft.util.parse.extract_duration) 22 | 23 | ### Numbers 24 | 25 | [`extract_number()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.parse.html#mycroft.util.parse.extract_number) takes in a string and extracts a single number. 26 | 27 | [`extract_numbers()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.parse.html#mycroft.util.parse.extract_numbers) takes in a string and extracts a list of numbers. 28 | 29 | ## Matching 30 | 31 | ### Vocab Matching 32 | 33 | [`voc_match()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.html#mycroft.MycroftSkill.voc_match) determines if the given utterance contains the vocabulary provided. 34 | 35 | ### Fuzzy Matching 36 | 37 | [`fuzzy_match()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.parse.html#mycroft.util.parse.fuzzy_match) performs a ‘fuzzy’ comparison between two strings. 38 | 39 | ### Match One 40 | 41 | [`match_one()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.parse.html#mycroft.util.parse.match_one) finds the best match from a list or dictionary given an input. 42 | 43 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/adapt/adapt-examples.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | There are several **Intent Parser** examples for Adapt on GitHub, and this 4 | page walks you through what they are, and where you might use them. 5 | --- 6 | 7 | # Adapt Examples 8 | 9 | {% hint style="info" %} 10 | If you are looking to use Adapt in a Mycroft Skill, please see [Skill Development > Intents](../../skill-development/user-interaction/intents/) 11 | {% endhint %} 12 | 13 | ## Single Intent Parser 14 | 15 | [See the Single Intent Parser code on GitHub](https://github.com/MycroftAI/adapt/blob/master/examples/single_intent_parser.py) 16 | 17 | This sample **Intent Parser** uses a fixed vocabulary to extract entities for an **Intent**. This type of intent parser is useful when the parameters of the **Intent** are well-bounded; that is, the possible **Utterances** are well known. 18 | 19 | ## Multi-intent Parser 20 | 21 | [See the Multi-intent Parser code on GitHub](https://github.com/MycroftAI/adapt/blob/master/examples/multi_intent_parser.py) 22 | 23 | This sample **Intent Parser** uses multiple **Intents**, and disambiguates between them using intent confidence. For example, if one **Intent** has a higher confidence than the second, the **Intent Parser** will assume the first **Intent**. 24 | 25 | ## Regular Expression \(regex\) Intent Parser 26 | 27 | [See the Regex Intent Parser code on GitHub](https://github.com/MycroftAI/adapt/blob/master/examples/regex_intent_parser.py) 28 | 29 | This sample **Intent Parser** uses a regular expression entity to extract location from a query. This type of **Intent Parser** is useful where a parameter needs to be extracted from a query and it takes a form that can be expressed in regex. Other parameters that are often expressed in regex are IP addresses, names of people and colors. 30 | 31 | ...
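To give a feel for the API these examples share, here is a minimal sketch of a single intent parser, closely following the example in the Adapt repository. The `WeatherKeyword` entity and `WeatherIntent` name are illustrative choices for this sketch, not names fixed by Adapt itself.

```python
# A minimal single intent parser using Adapt's public API.
# The vocabulary and intent names are illustrative only.
from adapt.intent import IntentBuilder
from adapt.engine import IntentDeterminationEngine

engine = IntentDeterminationEngine()

# Register a fixed, well-bounded vocabulary as an entity
for keyword in ["weather", "forecast"]:
    engine.register_entity(keyword, "WeatherKeyword")

# Build an intent that requires that entity, and register it
weather_intent = IntentBuilder("WeatherIntent") \
    .require("WeatherKeyword") \
    .build()
engine.register_intent_parser(weather_intent)

# Parse an utterance and print any sufficiently confident match
for intent in engine.determine_intent("what is the weather like"):
    if intent and intent.get("confidence") > 0:
        print(intent)
```

Because the vocabulary is fixed and well-bounded, this is the single intent parser pattern described above; the multi-intent and regex examples extend the same engine with additional intents and regex entities.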
32 | 33 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/testing.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: An overview of the test frameworks used by mycroft-core. 3 | --- 4 | 5 | # Testing 6 | 7 | ## Unit Tests 8 | 9 | Mycroft-core unit tests are located in `mycroft-core/test/unittests`. 10 | 11 | These are created using the [`unittest`](https://docs.python.org/3/library/unittest.html) package from the Python standard library. 12 | 13 | ### Test runner 14 | 15 | A helper command is provided in `mycroft-core/bin` to enable quick and simple access to the test runner. It is assumed that this directory is on your `$PATH`. 16 | 17 | If that is not the case, prepend all `mycroft-*` commands with the path to your `mycroft-core/bin` directory or run the tests manually. 18 | 19 | #### Execute all tests 20 | 21 | To execute all tests run: 22 | 23 | ```text 24 | mycroft-start unittest 25 | ``` 26 | 27 | #### Execute single test 28 | 29 | To execute a single test file run: 30 | 31 | ```text 32 | mycroft-start singleunittest path/to/test/file.py 33 | ``` 34 | 35 | The second argument points to the test file you wish to execute. Note that this is relative to your `mycroft-core` installation. 36 | 37 | If I were working on the audio utilities, I might run: 38 | 39 | ```text 40 | mycroft-start singleunittest test/unittests/audio/test_utils.py 41 | ``` 42 | 43 | #### Manually run tests 44 | 45 | The `mycroft-start` helper commands are a convenience wrapper. They activate the Python virtual environment and run pytest. 46 | 47 | The equivalent commands to run all tests would be: 48 | 49 | ```text 50 | cd ~/mycroft-core 51 | source .venv/bin/activate 52 | pytest 53 | ``` 54 | 55 | ## Integration Tests 56 | 57 | The Mycroft integration test suite is called Voight Kampff. Currently this does not explicitly cover mycroft-core; however, Skills are tested, which indirectly tests a range of Mycroft's technologies. 58 | 59 | For more detail on Voight Kampff see: 60 | 61 | {% page-ref page="../../skill-development/voight-kampff/" %} 62 | 63 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/design-process.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | At Mycroft we are advocates for a User Centered Design or Design Thinking 4 | approach. 5 | --- 6 | 7 | # Design Process 8 | 9 | The basic Design Thinking process is: 10 | 11 | 1. Empathize 12 | 2. Define 13 | 3. Ideate 14 | 4. Prototype 15 | 5. Test 16 | 6. Then rinse and repeat until your product adequately meets the user's needs. 17 | 18 | You can learn more about the Design Thinking process at [interaction-design](https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process). Don't be intimidated by Prototyping and Testing. The great thing about Voice Interactions is that the prototypes can be very low-fidelity with no programming necessary. All you need to do is find a few willing participants to act out the interaction of your skill. You can read more about prototype testing later in this guide. 19 | 20 | Once you have decided what problem you want to address with your skill, it's best to start thinking about the user's jobs to be done. Job Stories are similar to User Stories, but we find them to be a little more streamlined.
The basic problem is defined in three parts: the Situation, the Motivation and Goal, and the Intended Outcome. They can be written like this: 21 | 22 | > When \_\_\_\_, I want to \_\_\_\_, so I can \_\_\_\_. 23 | 24 | Throughout the Voice User Interface Design Guidelines we will be taking a look at example work from a Moon Phase Skill that is designed to give the user information about the Moon phase. Below is an example Job Story from the Moon Phase Skill. 25 | 26 | > When I'm planning moon photography, I want to know what day the next full moon will be, so I can plan on taking photos of the moon that night. 27 | 28 | The great part about Job Stories is that they do not dictate a solution. For example, this job story could be resolved with a traditional mobile app. However, using a voice interaction is probably quicker than launching an app. Thinking about your user's needs in terms of Job Stories helps you determine whether or not a voice interaction is the best solution. 29 | 30 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Mycroft Documentation 2 | 3 | Mycroft is intentionally open source software, and we subscribe to the collaborative approaches that are the hallmark of the open source community. For this reason, everyone is welcome to contribute to Mycroft Documentation. 4 | 5 | ## Process flow 6 | 7 | ### If you are non-technical 8 | 9 | If you would like to contribute to the Mycroft Documentation and are non-technical, please [make contact](https://mycroft.ai/contact) and we will provide a non-technical mechanism to contribute. 10 | 11 | ### If you are technical, and/or you know Git and GitHub 12 | 13 | If you would like to contribute to Mycroft Documentation and are technically-minded, you may: 14 | 15 | * Raise an Issue on this repo in GitHub 16 | * Fork this repo into your own GitHub account, make changes and raise a Pull Request 17 | 18 | ### Documentation Team 19 | 20 | A Documentation Team is responsible for the guardianship of Mycroft Documentation. This team comprises both Mycroft Staff and Mycroft community members. The Documentation Team is responsible for additions and deletions to the Team itself. To chat with the Documentation Team, please join the [Mycroft Chat facility](https://chat.mycroft.ai), specifically the ~documentation channel. 21 | 22 | ### Review of Issues and Pull Requests 23 | 24 | All Issues and Pull Requests will be reviewed by a Documentation Team member within a reasonable timeframe, and feedback will be provided. 25 | 26 | Pull Requests will be reviewed for elements such as: 27 | 28 | * technical accuracy 29 | * tone of voice 30 | * grammar 31 | * formatting 32 | 33 | Changes may be requested. 34 | 35 | If no correspondence on an Issue or Pull Request is received within eight (8) weeks, the Documentation Team reserves the right to close the Issue or Pull Request. 36 | 37 | If a Pull Request satisfies review criteria, it will be merged to the appropriate branch. 38 | 39 | ### Dispute resolution 40 | 41 | If at any time you believe the Documentation Team has acted unfairly, you may [contact Mycroft](https://mycroft.ai/contact) and request review and/or dispute resolution. This process will be conducted in a professional manner.
42 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/prototyping.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | One way to prototype your skill is to get willing test participants to act as 4 | the User of the skill. These tests can be done in person or over a video 5 | conferencing tool. 6 | --- 7 | 8 | # Prototyping 9 | 10 | As the skill designer, you will act as the Voice Assistant and respond to the user's input with pre-written [Example Interactions](example-dialogs.md) that you have created. You can think of it like a table read for a screenplay, except the user is ad-libbing. Make sure you ask your participants if it is alright to record the sessions so that you can take better notes after the test is complete. 11 | 12 | Playing the part of the Voice Assistant, you want to stick to the script, but you are actually hoping the tester says something that you haven't thought of yet. The goal of these tests isn't just to validate your assumptions; it is to observe interactions that did not occur to you at first. 13 | 14 | To begin the test, give the user a task. For example, with the Moon Phase skill, we can pull the task directly from our Jobs to be done work. 15 | 16 | **Task Example:** 17 | 18 | > _You're thinking about taking some photos of the moon tonight, and you want to know what phase the moon will be in. What would you ask Mycroft to help you plan your photo session?_ 19 | 20 | The first off-the-cuff responses from your testers are generally the best, but it's always good to ask your testers if they can provide any variations on the way they might ask the question. 21 | 22 | Be considerate of your tester's time. A usability test should never exceed 90 minutes; after that, [the test subject will likely get tired](https://www.nngroup.com/articles/time-budgets-for-usability-sessions/). We think you can get valuable feedback in a 30 to 45 minute session. 23 | 24 | If you are creating a Skill as a hobby or as an individual, you may not have the resources to provide an incentive to your testers. In that case, you might be limited to friends and family for testing. If you are creating a skill for a larger organisation, make sure that you think about possible incentives and check if your organisation requires a legal waiver for user testing. 25 | 26 | 27 | 28 | 29 | 30 | -------------------------------------------------------------------------------- /docs/about-mycroft-ai/glossary.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Use our Glossary to learn more about the specialist terms that we use in 4 | natural language processing generally, and more specifically with Mycroft 5 | software and hardware. 6 | --- 7 | 8 | # Glossary of terms 9 | 10 | * **Device** - the platform you have Mycroft installed on. This might be using an **Enclosure**, like the Mark 1, or on a Raspberry Pi, or it might be on a Linux desktop. Each **Device** needs to be **Paired** with your Mycroft Home account before it can be used. 11 | * **Enclosure** - a hardware implementation of Mycroft. Examples include the Mark 1 and Picroft. 12 | * **Fallback** - a **Skill** that is designated to be a 'catch-all' when Mycroft cannot interpret the **Intent** from an **Utterance**.
13 | * **Intent** - when a user speaks an **Utterance** to Mycroft, Mycroft tries to interpret the **Intent** of the **Utterance**, and match the **Intent** with a **Skill**. 14 | * **Mark 1** - The Mark 1 device is an **Enclosure** for Mycroft, based on Raspberry Pi. It is designed as a reference hardware device, and is able to be easily modified, hacked and extended. 15 | * **Pairing** - The process of registering a **Device** to home.mycroft.ai. Each **Device** needs to be **Paired** before it can be used. 16 | * **Registration Code** - a 6-character code that Mycroft issues to you during setup of the Device. This code is needed to [add a Device](http://mycroft.ai/documentation/home-mycroft-ai-pairing/). 17 | * **Skill** - when Mycroft hears the **Wake Word**, then an **Utterance**, Mycroft will try to find a **Skill** that matches the **Utterance**. The **Skill** might fetch some data, or play some audio, or speak, or display some information. If Mycroft can't find a **Skill** that matches the Utterance, he will tell you he doesn't understand. 18 | * **Utterance** - an Utterance is how you interact with Mycroft. An **Utterance** is a command or question - like "What's the weather like in Kansas City?" or "Tell me about the Pembroke Welsh Corgi". 19 | * **Wake Word** - the Wake Word is the phrase you use to tell Mycroft you're about to issue a command. You can configure your own **Wake Word** in your [Mycroft Home](https://home.mycroft.ai) account. 20 | 21 | Are we missing a word? Let us know below. 22 | 23 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/plugins/wake-word-plugins.md: -------------------------------------------------------------------------------- 1 | # Wake Word Plugins 2 | 3 | All Mycroft Wake Word Plugins need to provide a class derived from the HotWordEngine base class in `mycroft.client.speech.hotword_factory`. 4 | 5 | When the `__init__()` method of the base class is run, the config for that module will be loaded and available through `self.config`. Mycroft's selected language will also be available through `self.lang`. 6 | 7 | For example, the following [Mycroft configuration](../../../using-mycroft-ai/customizations/config-manager.md): 8 | 9 | ```javascript 10 | { 11 | "listener": { 12 | "wake_word": "my wake word" 13 | }, 14 | "hotwords": { 15 | "my wake word": { 16 | "module": "example_wake_word_plugin", 17 | "custom_attribute": "some value" 18 | } 19 | } 20 | } 21 | ``` 22 | 23 | will load the `"my wake word"` structure from the `"hotwords"` section using the `"example_wake_word_plugin"` module. The defined `custom_attribute` will then be available within the plugin through `self.config`. 24 | 25 | ### found\_wake\_word\(\) 26 | 27 | Each Wake Word plugin must define the `found_wake_word()` method taking one argument: 28 | 29 | * `frame_data` - this is the audio data that needs to be checked for a wake word. You can process audio here or return a result previously determined in the `update()` method. 30 | 31 | ### update\(\) 32 | 33 | The `update()` method is optional and takes one argument: 34 | 35 | * `chunk` - live audio chunks allowing for streaming predictions. Results must be returned in the `found_wake_word()` method. 36 | 37 | ### stop\(\) 38 | 39 | The `stop()` method is optional and takes no arguments. It should be used to perform any actions needed to shut down the hot word engine, such as unloading data or shutting down external processes.
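Putting these pieces together, a skeleton plugin class might look like the following. This is a sketch only: the detection logic is a placeholder for a real model, and `custom_attribute` assumes the example configuration shown earlier.

```python
# A skeleton Wake Word plugin. The detection logic is a placeholder;
# a real engine would run its model over the incoming audio instead.
from mycroft.client.speech.hotword_factory import HotWordEngine


class MyWakeWordEngine(HotWordEngine):
    def __init__(self, key_phrase="my wake word", config=None, lang="en-us"):
        super().__init__(key_phrase, config, lang)
        # Values from the matching "hotwords" config section
        self.custom_attribute = self.config.get("custom_attribute")
        self.detected = False

    def update(self, chunk):
        # Receive streaming audio chunks and remember the prediction
        # so that found_wake_word() can report it
        self.detected = self.detected or self._predict(chunk)

    def found_wake_word(self, frame_data):
        # Report, then reset, the result of the streaming detection
        detected, self.detected = self.detected, False
        return detected

    def stop(self):
        # Unload data or shut down external processes here
        pass

    def _predict(self, chunk):
        # Stand-in for a real wake word model
        return False
```

The class name and module layout are up to you; what matters is that the entry point described below resolves to this class.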
40 | 
41 | ## Entry point
42 | 
43 | To make the class detectable as a Wake Word plugin, the package needs to provide an entry point under the `mycroft.plugin.wake_word` namespace.
44 | 
45 | ```python
46 | setup([...],
47 |     entry_points = {'mycroft.plugin.wake_word': 'example_wake_word_plugin = my_example_ww:myWakeWordEngine'}
48 | )
49 | ```
50 | 
51 | Where:
52 | 
53 | * `example_wake_word_plugin` is the Wake Word module name for the plugin;
54 | * `my_example_ww` is the Python module; and
55 | * `myWakeWordEngine` is the class in the module to return.
56 | 
57 | -------------------------------------------------------------------------------- /docs/about-mycroft-ai/why-use-mycroft.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Why would you choose to use Mycroft over alternatives such as Amazon™ Alexa™,
4 | Google Assistant™, Microsoft Cortana™ or Apple's Siri™?
5 | ---
6 | 
7 | # Why use Mycroft AI?
8 | 
9 | Why would you choose to use Mycroft over alternatives such as Amazon™ Alexa™, Google Assistant™, Microsoft Cortana™ or Apple's Siri™?
10 | 
11 | Great question!
12 | 
13 | While alternative tools are readily available, and easy to use, they come with some pretty big downsides.
14 | 
15 | ## Open source
16 | 
17 | Mycroft is open source. This means that the code used by Mycroft can be inspected, copied, modified, and contributed back to the Mycroft community for everyone to enjoy.
18 | 
19 | Tools like Amazon™ Alexa™, Google Assistant™, Microsoft Cortana™ or Apple's Siri™ are _black boxes_. That is, you can't look inside them and see what they do - or importantly - _how_ they do it. And while they may allow you some flexibility to create new **Skills**, these are usually strictly controlled. Mycroft gives you freedom, flexibility and control over how you want _your_ voice assistant to work. Mycroft is AI for _everyone_.
20 | 
21 | Mycroft code is licensed under the [Apache 2 open source license](https://en.wikipedia.org/wiki/Apache_License).
22 | 
23 | ## Respecting your privacy
24 | 
25 | Mycroft uses _opt-in privacy_. This means we will only record what you say to Mycroft with your explicit permission. Don't want us to record your voice? No problem! If you'd like to help Mycroft become more accurate, you can opt in to have your voice anonymously recorded.
26 | 
27 | You're welcome to view our [privacy policy](https://mycroft.ai/privacy-policy/).
28 | 
29 | ## More hardware, more choices
30 | 
31 | Mycroft runs on a wide range of software platforms and hardware - meaning that you can run Mycroft on the **Devices** _you_ want. Try Mycroft on a Raspberry Pi, on an Android device - or on your Linux desktop.
32 | 
33 | ## Lightweight
34 | 
35 | Mycroft is lightweight, and will run on low-powered hardware, such as the Raspberry Pi 3. So if you're a Hacker, Maker or Hobbyist, it's the perfect platform for low-cost experimentation.
36 | 
37 | ## Community
38 | 
39 | Mycroft has a vibrant, engaged and helpful community.
You can interact with the community via:
40 | 
41 | * Chat - [https://chat.mycroft.ai](https://chat.mycroft.ai)
42 | * Forum - [https://community.mycroft.ai](https://community.mycroft.ai)
43 | 
44 | -------------------------------------------------------------------------------- /docs/skill-development/marketplace-submission/skills-acceptance-process/skills-review-information-template.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Template for the Information Review component of the Skills Acceptance
4 | Process.
5 | ---
6 | 
7 | # Information Review Template
8 | 
9 | ```text
10 | ## Meta
11 | 
12 | * Who: [@your-github-handle](https://github.com/your-github-handle) / @your-mycroft-chat-handle
13 | * Datestamp: run `date +%Y-%m-%d_%H:%M:%S_%Z`
14 | * Language and dialect of tester:
15 | 
16 | ## 2. Information Review - accurate and understandable
17 | 
18 | This review checks the README for completeness - does it follow the README template and include all the relevant sections, such as Intents, known issues, dependencies and so on?
19 | 
20 | * [ ] **Icon**
21 | 
22 | Does the Skill have an appropriate icon that is either included in the repo or linked from an appropriate place \(e.g. raw.githack.com, not privateicons.com or rawgit.com\)?
23 | 
24 | >
25 | 
26 | * [ ] **Description**
27 | 
28 | Are the title, short description, and long description \(under 'About'\) clear and concise? Do they provide enough information to understand what the Skill does? Does the title include the word "skill"? \(Including "skill" in the title is strongly discouraged.\)
29 | 
30 | >
31 | 
32 | * [ ] **Examples**
33 | 
34 | Do the examples give you a clear understanding of how you can use the Skill? Is there a single example per dot-point?
35 | 
36 | >
37 | 
38 | * [ ] **Supported Devices**
39 | 
40 | If relevant, are the supported devices listed? An example might be a Skill that requires the screen of the Mark II. If the section is not present, all devices are considered supported.
41 | 
42 | >
43 | 
44 | * [ ] **Categories**
45 | 
46 | Is there at least one category listed? At least one category must be selected for the Skill to be displayed correctly in the [Mycroft Marketplace](https://market.mycroft.ai).
47 | 
48 | Is the bolded category the most appropriate for this Skill? The bold category is considered the primary category and will be used for display in the [Marketplace](https://market.mycroft.ai/); all others are secondary and used for search results.
49 | 
50 | >
51 | 
52 | * [ ] **Tags**
53 | 
54 | Are the listed tags appropriate for this Skill?
55 | 
56 | >
57 | 
58 | * [ ] **License**
59 | 
60 | Is an appropriate LICENSE file used in the repo - such as Apache-2.0 or GPL-3.0?
61 | 
62 | >
63 | 
64 | ### Actions Required
65 | 
66 | \*
67 | ```
68 | 
69 | -------------------------------------------------------------------------------- /docs/skill-development/marketplace-submission/skills-acceptance-process/skills-review-code-template.md: --------------------------------------------------------------------------------
1 | ---
2 | description: Template for the Code Review component of the Skills Acceptance Process.
3 | ---
4 | 
5 | # Code Review Template
6 | 
7 | ```text
8 | ## Meta
9 | 
10 | * Platform:
11 | * Mycroft-core version:
12 | * Who: [@your-github-handle](https://github.com/your-github-handle) / @your-mycroft-chat-handle
13 | * Datestamp: run `date +%Y-%m-%d_%H:%M:%S_%Z`
14 | * Language and dialect of tester:
15 | 
16 | ## 0. Automated tests
17 | 
18 | Are all automated tests passing?
19 | 
20 | * [ ] Skill tester - Jenkins
21 | * [ ] Continuous Integration - Travis-CI
22 | 
23 | ## 1. Code Review - secure and stable
24 | 
25 | * [ ] **Code Quality**
26 | 
27 | Can you understand what the code is doing? Is there inline documentation? Do you have any concerns about this code running on your machine? Are there any performance issues, such as nested or infinite loops? Do you have significant concerns about the overall code quality?
28 | 
29 | _NOTE: We do not enforce PEP8 checks on Skills_
30 | 
31 | >
32 | 
33 | * [ ] **Error Handling**
34 | 
35 | Are there any specific checks we make for error handling or graceful degradation?
36 | 
37 | >
38 | 
39 | * [ ] **Libraries**
40 | 
41 | Does the Skill include the correct libraries? Does it use too many libraries or dependencies?
42 | 
43 | >
44 | 
45 | * [ ] **Required Dependencies**
46 | 
47 | Check `requirements.txt` and `requirements.sh` - are the required dependencies listed? If `requirements.sh` is used, is some form of conditional processing done to match against multiple distros? Often Skill Authors will pin a requirement as “library==1.x.x” instead of using “library>=1.x.x”. Check to make sure that there is an equal-to-or-greater-than in the requirements to help future-proof the Skill, unless a _specific_ version is needed.
48 | 
49 | >
50 | 
51 | * [ ] **Settings**
52 | 
53 | Is the `settingsmeta` file well laid out? If settings are not used, has the default file been deleted? If it is the default file, the first setting section will be called "Options << Name of section".
54 | 
55 | >
56 | 
57 | * [ ] **Integration Tests**
58 | 
59 | Does the skill include sufficient integration tests, included in the `test` folder?
60 | 
61 | >
62 | 
63 | * [ ] **Other Files**
64 | 
65 | Are there any other files included that are unnecessary, or whose function you are unsure of?
66 | 
67 | >
68 | 
69 | ### Actions Required:
70 | 
71 | \*
72 | ```
73 | 
74 | -------------------------------------------------------------------------------- /docs/skill-development/voight-kampff/README.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Integration testing ensures your Skill works as expected when it is running
4 | inside Mycroft in a realistic environment alongside other Skills.
5 | ---
6 | 
7 | # Integration Tests
8 | 
9 | Mycroft has recently adopted a new [Behavior Driven Development \(BDD\)](https://en.wikipedia.org/wiki/Behavior_Driven_Development) framework called [Behave](https://behave.readthedocs.io/) for integration testing. Mycroft's implementation has been code-named Voight Kampff.
10 | 
11 | ## Why have integration tests
12 | 
13 | Writing tests for your Skill has benefits both for you and for Mycroft as a whole. For the project, it allows us to automatically check that the Skill is working as intended, and that no other Skill is causing a conflict.
14 | 
15 | Test-driven development also allows you to work more efficiently and find bugs faster. We are currently working on a Skill Design Guide that outlines the process we go through to design new voice Skills.
16 | 
17 | ## Test Format
18 | 
19 | Voight Kampff tests are split into `Features`. Each `Feature` may have one or more `Scenarios`, and each `Scenario` will have multiple `Steps`.
20 | 
21 | To see it in action, let's look at a short example for a Weather Skill.
22 | 
23 | ```gherkin
24 | Feature: current-weather
25 |   Scenario: current local weather
26 |     Given an English speaking user
27 |     When the user says "tell me the weather"
28 |     Then "my-weather-skill" should reply with "Right now, it's overcast clouds and 32 degrees."
29 | ```
30 | 
31 | In this example, we started by defining the name of the Feature, in this case "current-weather", then added a single Scenario, "current local weather". Within this Scenario we have three Steps:
32 | 
33 | * `Given` steps define the state of the Scenario,
34 | * `When` steps describe actions that are taken, and
35 | * `Then` steps observe the outcome of the test.
36 | 
37 | When this specific test is run with the provided Steps, the system will:
38 | 
39 | 1. Ensure that it is in the appropriate state, awaiting an utterance in English.
40 | 2. Send the given utterance to Mycroft as if it had been spoken by a user.
41 | 3. Observe Mycroft's response and check that the correct Skill responded with appropriate dialog.
42 | 
43 | ### File location
44 | 
45 | Each `Feature` we define for our Skill's test suite should be placed in its own file inside the `test/behave` directory of our Skill. For this example we will save the Feature file in:
46 | 
47 | ```text
48 | /opt/mycroft/skills/my-weather-skill/test/behave/current-weather.feature
49 | ```
50 | 
51 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-roadmap.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | A list of roadmaps to help you see where Mycroft's technology is headed. Note
4 | that with the pace of change, these documents can become dated quickly.
5 | ---
6 | 
7 | # Roadmap
8 | 
9 | A new [Master Roadmap 2020](https://docs.google.com/document/d/1XCSdcszcuWhZsIHpkVrmHhFGHLdx7xqvd_BYELcXqTY/edit) has been created. This should always start with the highest-priority project, followed by individual roadmaps for secondary priorities.
10 | 
11 | Each individual Roadmap will include the following sections:
12 | 
13 | * Next - Work that is needed most urgently.
14 | * Longer term - Work that is needed but can wait until “Next” items are finished.
15 | * Nice to have - Features that can be added at any time but will not be prioritized by the internal Mycroft team.
16 | 
17 | The Roadmaps themselves may either be collaborative documents like the [Master Roadmap](https://docs.google.com/document/d/1XCSdcszcuWhZsIHpkVrmHhFGHLdx7xqvd_BYELcXqTY/edit?usp=sharing) or, where a Roadmap directly relates to a single technology, be hosted as a Github project board. Each of the collaborative documents is set to allow comments, so please jump in and have your say on the direction of Mycroft.
18 | 
19 | ## Archive
20 | 
21 | {% hint style="info" %}
22 | The following Roadmaps have been kept for archive purposes only.
23 | {% endhint %}
24 | 
25 | We now have both a **Master Roadmap**, and associated voice stack Roadmaps to help you see where Mycroft's technology is headed.
Your comments are warmly welcomed in the documents below:
26 | 
27 | * [Master Roadmap](https://docs.google.com/document/d/1QntesVNTGx-DlSK45gjFi5ToDJtDuMwYY_C-u9C3fFI/edit)
28 | * [Mycroft Marketplace Roadmap](https://docs.google.com/document/d/1Ai2LcJRYUmzw8L6BK8hKWckFYfAAwzuAUYtw9uRuXuc/edit#heading=h.ck6l3z2vlex4)
29 | * [Speech to Text \(STT\) Roadmap](https://docs.google.com/document/d/1o0b5r8qcxan59TxSeJxBX5x1ehcT1xWiX7b_mN6IfQw/edit#heading=h.ck6l3z2vlex4)
30 | * [Text to Speech \(TTS\) Roadmap](https://docs.google.com/document/d/1gSXU5-UaXSOOA7T9UWz1hT779o36pIYOard3iGXlHCA/edit#heading=h.ck6l3z2vlex4)
31 | * [Wake Word Roadmap](https://docs.google.com/document/d/1UsN9BLuHeiuuP92LEm17GSfufNcRyP1j8HB_W5FUHcE/edit)
32 | * [`mycroft-core` Roadmap](https://docs.google.com/document/d/1GvhoqC9mUTEDLrwXVSEQ4JecuUMKaTpE1l1FNE6mMTA/edit#heading=h.ck6l3z2vlex4)
33 | * [Languages Roadmap](https://docs.google.com/document/d/1otSj9YBEASCPAgD7MZmUR4m5Nal5O0fjN67y3gJRfNg/edit#heading=h.ck6l3z2vlex4)
34 | * [Personal / Standalone Server Roadmap](https://docs.google.com/document/d/1VJsk9xmFig1qAP-wH_SAB-3xyKRx28Z47IwlJWJMqOM/edit#heading=h.ck6l3z2vlex4)
35 | 
36 | -------------------------------------------------------------------------------- /docs/skill-development/faq.md: --------------------------------------------------------------------------------
1 | # FAQ
2 | 
3 | ## How do I disable a Skill?
4 | 
5 | During Skill development you may have reason to disable one or more Skills. Rather than constantly installing or uninstalling them via voice, or adding and removing them from `/opt/mycroft/skills/`, you can disable them in [the `mycroft.conf` file](https://mycroft.ai/documentation/mycroft-conf/).
6 | 
7 | First, identify the name of the Skill. The name of the Skill is the `path` attribute in the [`.gitmodules`](https://github.com/MycroftAI/mycroft-skills/blob/master/.gitmodules) file.
8 | 
9 | To disable one or more Skills on a Mycroft Device, find where your `mycroft.conf` file is stored, then edit it using an editor like `nano` or `vi`.
10 | 
11 | Search for the string `blacklisted` in the file. Then, edit the line below to include the Skill you wish to disable, and save the file. You will then need to reboot, or restart the `mycroft` services on the Device.
12 | 
13 | ```text
14 | "skills": {
15 |     "blacklisted_skills": ["skill-media", "send_sms", "skill-wolfram-alpha", "YOUR_SKILL"]
16 | }
17 | ```
18 | 
19 | ## How do I increase the priority of Skills during loading?
20 | 
21 | During Skill development, you may wish to increase the priority of your Skill loading during the startup process. This allows you to start using the Skill as soon as possible.
22 | 
23 | First, identify the name of the Skill. The name of the Skill is the `path` attribute in the [`.gitmodules`](https://github.com/MycroftAI/mycroft-skills/blob/master/.gitmodules) file.
24 | 
25 | To prioritize loading one or more Skills on a Mycroft Device, find where your [`mycroft.conf` file](https://mycroft.ai/documentation/mycroft-conf/) is stored, then edit it using an editor like `nano` or `vi`.
26 | 
27 | Search for the string `priority` in the file. Then, edit the line below to include the Skill you wish to prioritize, and save the file. You will then need to reboot, or restart the `mycroft` services on the Device.
28 | 
29 | ```text
30 | "priority_skills": ["skill-pairing", "YOUR_SKILL"],
31 | ```
32 | 
33 | ## How do I find more information on Mycroft functions?
34 | 
35 | You can find documentation on Mycroft functions and helper methods in the [Mycroft Core API documentation](https://mycroft-core.readthedocs.io/en/master/).
36 | 
37 | ## Need more help?
38 | 
39 | If something isn't working as expected, please join us in the [~Skills channel of Mycroft Chat](https://chat.mycroft.ai/community/channels/skills).
40 | 
41 | It's also really helpful for us if you add an issue to our [documentation repo](https://github.com/MycroftAI/documentation/issues). This means we can make sure it gets covered for all developers in the future.
42 | 
43 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/voice-assistant-personas.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | A Voice Assistant Persona is the personality that the user attributes to the
4 | voice assistant.
5 | ---
6 | 
7 | # Voice Assistant Personas
8 | 
9 | The user will automatically create a mental persona of the assistant; it's human nature, so it's best to be deliberate and intentionally define the persona as much as possible. Many factors contribute to the persona of a voice assistant. The chosen Voice \(Text-To-Speech\), the dialog, the pace, and the tone are all factors that come together to create a Persona.
10 | 
11 | We encourage developers to choose between making their skills fit the Mycroft Assistant Persona, or designing their own Persona.
12 | 
13 | ## Mycroft’s Persona
14 | 
15 | By default Mycroft has a young-sounding voice with a slightly lower register. Mycroft is a curious digital assistant with a positive attitude. Mycroft always strives to be proper and polite; however, occasionally Mycroft may seem unrefined because of his love for new solutions from the open-source world. Mycroft doesn’t try to be 100% human, but he attempts to speak as naturally as possible. He is playful in tone, and will joke with you when appropriate. If Mycroft seems abrupt and to the point at times, don’t feel like he’s being rude; he’s just being efficient.
16 | 
17 | **Character Traits**
18 | 
19 | * He’s more like Commander Data than the Enterprise Computer
20 | * Polite
21 | * Helpful
22 | * Positive
23 | * Efficient
24 | * Speaks in the present tense
25 | * Speaks in the first person
26 | 
27 | ## How do you decide which Persona to use?
28 | 
29 | Does your skill perform a similar function to one of Mycroft’s Default Skills?
30 | 
31 | * Media Playback
32 | * IoT
33 | * General Information
34 | * Lists
35 | 
36 | If your skill fits these criteria, it’s best to use the Mycroft Assistant Persona to achieve a more unified and consistent experience.
37 | 
38 | Is your skill more unique, and could it benefit from its own personality?
39 | 
40 | * A Branded Experience, for example “Hello, I’m Mr. Peanutbutter and welcome to Hollywood Stars and Celebrities: What Do They Know? Do They Know Things?? Let's Find Out!!”
41 | * Trivia game or other game skill
42 | * Joke Skill
43 | * Storytelling Skill
44 | 
45 | **Mycroft Persona Example Interactions**
46 | 
47 | | Speaker | |
48 | | :--- | :--- |
49 | | **User** | _What's the weather like?_ |
50 | | **Mycroft** | _It’s currently a clear sky and 65 degrees Fahrenheit.
Today’s forecast is for a high of 68 and a low of 56._ |
51 | | **User** | _Set a Timer for 20 minutes_ |
52 | | **Mycroft** | _Okay, I’m starting a timer for 20 minutes_ |
53 | 
54 | -------------------------------------------------------------------------------- /docs/skill-development/skill-structure/dependencies/requirements-files.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | A Skill's `requirements.txt` and `requirements.sh` files provide an
4 | alternative method to define the dependencies of a Mycroft Skill.
5 | ---
6 | 
7 | # Requirements files
8 | 
9 | The older method of defining requirements is still available, and is present in the majority of Skills available in the Marketplace. This uses a `requirements.txt` and/or `requirements.sh` file.
10 | 
11 | ## requirements.txt
12 | 
13 | The `requirements.txt` file can only be used to define Python package dependencies. It uses the [standard Python PIP format](https://pip.readthedocs.io/en/1.1/requirements.html) to list Python packages from [PyPI](https://pypi.org/) to install. Each line in the file represents a separate package to install and must match the title provided by PyPI.
14 | 
15 | The following example will install the latest available versions of the [`requests`](https://pypi.org/project/requests/) and [`gensim`](https://pypi.org/project/gensim/) packages.
16 | 
17 | ```text
18 | requests
19 | gensim
20 | ```
21 | 
22 | If specific versions of a package are required, we can use comparison operators to indicate which version.
23 | 
24 | * `requests==2.22.0` The package must be version `2.22.0`.
25 | * `requests>=2.22.0` The package must be version `2.22.0` or higher.
26 | * `requests<=2.22.0` The package must be version `2.22.0` or lower.
27 | 
28 | It is strongly recommended to only use these operators when required. If submitting a Skill to the Marketplace, you will be asked to provide reasoning as to why a specific version of a package is necessary.
29 | 
30 | ### Examples of requirements.txt
31 | 
32 | * [Weather Skill](https://github.com/MycroftAI/skill-weather/blob/19.08/requirements.txt)
33 | * [Wiki Skill](https://github.com/MycroftAI/skill-wiki/blob/19.08/requirements.txt)
34 | 
35 | ## requirements.sh
36 | 
37 | The `requirements.sh` file may contain a shell script that is run during installation of the Skill. Shell scripting is beyond the scope of these documents; however, there are many tutorials available online.
38 | 
39 | Prior to the `manifest.yml` file, this was the only method available to install system packages. If you are only installing packages, using the [`manifest.yml`](manifest-yml.md) file instead is recommended.
40 | 
41 | The contents of this file will be checked carefully if a Skill is submitted for inclusion in the [Skills Marketplace](https://market.mycroft.ai).
42 | 
43 | ### Examples of requirements.sh
44 | 
45 | * [Zork \(adventure game\)](https://github.com/forslund/white-house-adventure/blob/6eba5df187bc8a7735b05e93a28a6390b8c6f40c/requirements.sh)
46 | 
47 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/troubleshooting/getting-more-support.md: --------------------------------------------------------------------------------
1 | ---
2 | description: I've tried everything! Where to get further support.
3 | ---
4 | 
5 | # Getting more support
6 | 
7 | ## Does an answer already exist?
8 | 
9 | Before asking a new question, have you:
10 | 
11 | * Searched our documentation?
12 | * Searched the [Community Forum](https://community.mycroft.ai/c/Help-with-Mycroft-related-issues)?
13 | 
14 | ## Asking for help
15 | 
16 | ### What you need
17 | 
18 | Help us to help you. When you ask for support, please provide as much detail as possible.
19 | 
20 | The following information is very useful for anyone trying to help:
21 | 
22 | * What type of Mycroft device you're using, such as a Mark 1, Picroft, KDE Plasmoid, or regular Linux desktop
23 | * What steps you took leading up to the issue
24 | * What happened, compared to what you expected to happen
25 | * Information from your [Mycroft logs](log-files.md)
26 | * Information generated from the [Support Skill](support-skill.md)
27 | 
28 | ### Where to ask
29 | 
30 | #### Community Forums
31 | 
32 | The best place to post most support questions is the [Community Forums](https://community.mycroft.ai/c/Help-with-Mycroft-related-issues). This enables the many experienced members of our Community to assist you in a timely manner. Once a solution is found, it also means that others who may face the same problem in the future can benefit from your experience.
33 | 
34 | [Check out the Support Category of the Community Forums.](https://community.mycroft.ai/c/Help-with-Mycroft-related-issues)
35 | 
36 | #### Community Chat
37 | 
38 | The Community also has a real-time Chat service. This is useful for having a direct discussion about your issue. Please be mindful that we are a global Community and someone may not respond immediately to your message. Be sure to provide enough detail so that when someone does come online they can understand what is happening.
39 | 
40 | [Join the Troubleshooting channel in Chat.](https://chat.mycroft.ai/community/channels/troubleshooting)
41 | 
42 | #### Github Issues
43 | 
44 | If you have discovered a technical issue and are aware of which Mycroft component it relates to, these can be logged as issues on [Github](https://github.com/MycroftAI). If you are unsure, you can always ask in the [Community Chat](https://chat.mycroft.ai/community/channels/troubleshooting).
45 | 
46 | #### Mycroft Team
47 | 
48 | Where needed, the Mycroft Team are happy to help. You can email queries to support@mycroft.ai and a team member will respond when available.
49 | 
50 | Please note that we are a small team and our primary focus is on improving Mycroft for everyone's benefit. It may take time for us to respond to an email.
51 | 
52 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/get-mycroft/android.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Mycroft for Android is a Community led project. It is a proof of concept and
4 | is not ready for general use. There is no .apk available.
5 | ---
6 | 
7 | # Android
8 | 
9 | There are two components to Mycroft for Android:
10 | 
11 | 1. The Android companion app. The app works by opening a websocket connection to the `mycroft-core` messagebus, and sending and receiving messages from there. It implements voice recognition and Text to Speech \(TTS\) via Google APIs at the moment.
12 | 2. Mycroft Core. Mycroft for Android requires that you already have `mycroft-core` installed and working on a machine that the Android application can access via a websocket connection \(a minimal sketch of such a connection follows below\).
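To give a feel for what travels over that websocket, here is a minimal sketch that connects to a running `mycroft-core` messagebus and injects an utterance, much as the companion app does. It assumes the default messagebus address \(port `8181`, route `/core`\) and uses the third-party `websocket-client` Python package; adjust the host for your setup.

```python
import json

import websocket  # pip install websocket-client

# Default mycroft-core messagebus endpoint; replace localhost with the
# address of the machine running mycroft-core
ws = websocket.create_connection("ws://localhost:8181/core")

# Inject an utterance as if the user had spoken it
ws.send(json.dumps({
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["what time is it"]},
    "context": {},
}))
ws.close()
```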
13 | 
14 | ## Prerequisites
15 | 
16 | This section of documentation assumes the following:
17 | 
18 | * That you already have [Android Studio](https://developer.android.com/studio/index.html) installed, and are familiar with using Android Studio. If not, [this introduction](https://developer.android.com/studio/intro/index.html) is a good starting point.
19 | * That you already have an Android device, and that you're comfortable loading Android applications on to the device from Android Studio.
20 | 
21 | ## Getting Started
22 | 
23 | First, you will need to clone the `git` repo and import it into your IDE.
24 | 
25 | ```text
26 | $ git clone https://github.com/MycroftAI/Mycroft-Android
27 | Cloning into 'Mycroft-Android'...
28 | remote: Counting objects: 1381, done.
29 | remote: Total 1381 (delta 0), reused 0 (delta 0), pack-reused 1381
30 | Receiving objects: 100% (1381/1381), 538.46 KiB | 210.00 KiB/s, done.
31 | Resolving deltas: 100% (648/648), done.
32 | Checking connectivity... done.
33 | ```
34 | 
35 | From Android Studio, choose `File -> Open file or Project` and select the directory you cloned `Mycroft-Android` into. Android Studio will attempt to load and build the project using [Gradle](https://gradle.org/). You may be prompted to install additional components.
36 | 
37 | ![Android Studio with Mycroft-Android loaded](https://mycroft.ai/wp-content/uploads/2017/12/android-studio-with-mycroft-for-android-loaded.png)
38 | 
39 | ## Connecting to a `mycroft-core` instance
40 | 
41 | @TODO not sure how this works
42 | 
43 | ## Pairing Mycroft for Android
44 | 
45 | There is no need to pair Mycroft for Android, as the companion app connects to your `mycroft-core` instance, and uses the pairing from that.
46 | 
47 | ### Keeping Mycroft for Android updated
48 | 
49 | The easiest way to keep Mycroft for Android updated is to clone the `git` repo again when you want to build a new image to deploy to your Android phone.
50 | 
51 | ## Common issues with Mycroft for Android
52 | 
53 | @TODO link to Troubleshooting
54 | 
55 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/plugins/stt.md: --------------------------------------------------------------------------------
1 | ---
2 | description: STT plugins handle converting Speech to Text.
3 | ---
4 | 
5 | # STT Plugins
6 | 
7 | All Mycroft STT Plugins need to provide a class derived from the STT base class in `mycroft.stt`.
8 | 
9 | When the `__init__()` method of the base class is run, the config for that module will be loaded and available through `self.config`. Mycroft's selected language will also be available through `self.lang`.
10 | 
11 | For example, the following [Mycroft configuration](../../../using-mycroft-ai/customizations/config-manager.md):
12 | 
13 | ```javascript
14 | "stt": {
15 |     "module": "example_stt",
16 |     "example_stt": {
17 |         "server": "https://my_server.com",
18 |         "samplerate": 16000
19 |     }
20 | }
21 | ```
22 | 
23 | will load the `"example_stt"` structure from the `"stt"` section and provide that in `self.config`.
24 | 
25 | ## execute\(\)
26 | 
27 | Each STT plugin class needs to define the `execute()` method taking two arguments:
28 | 
29 | * `audio` \([AudioData](https://github.com/Uberi/speech_recognition/blob/master/reference/library-reference.rst#audiodataframe_data-bytes-sample_rate-int-sample_width-int---audiodata) object\) - the audio data to be transcribed.
30 | * `lang` \(str\) - _optional_ - the BCP-47 language code \(currently not used in core\).
31 | 
32 | The bare minimum STT class will look something like:
33 | 
34 | ```python
35 | from mycroft.stt import STT
36 | 
37 | class MySTT(STT):
38 |     def execute(self, audio, language=None):
39 |         # Handle audio data and return transcribed text
40 |         [...]
41 |         return text
42 | ```
43 | 
44 | ## STT base class
45 | 
46 | The STT system has a couple of base classes specified.
47 | 
48 | ### `STT`
49 | 
50 | The base STT. This handles the audio in "batch mode", taking a complete audio file and returning the complete transcription.
51 | 
52 | ### `StreamingSTT`
53 | 
54 | A more advanced STT class for streaming data to the STT. This will receive chunks of audio data as they become available, and they are streamed to an STT engine.
55 | 
56 | The plugin author needs to implement the `create_streaming_thread()` method, creating a thread for handling data sent through `self.queue`. The thread this method creates should be based on the [StreamThread class](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/stt/__init__.py#L325). The `handle_audio_data()` method also needs to be implemented.
57 | 
58 | ## Entry point
59 | 
60 | To make the class detectable as an STT plugin, the package needs to provide an entry point under the `mycroft.plugin.stt` namespace.
61 | 
62 | ```python
63 | setup([...],
64 |     entry_points = {'mycroft.plugin.stt': 'example_stt = my_stt:mySTT'}
65 | )
66 | ```
67 | 
68 | Where `example_stt` is the STT module name for the plugin, `my_stt` is the Python module, and `mySTT` is the class in the module to return.
69 | 
70 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/platforms.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | A list of Mycroft hardware devices and software platforms and the repositories
4 | specific to them.
5 | ---
6 | 
7 | # Platforms
8 | 
9 | Mycroft has a number of devices and software platforms. Most have some unique code or may have dedicated information stored in a repository.
10 | 
11 | ## Mark II
12 | 
13 | * [Mark II hardware](https://github.com/MycroftAI/hardware-mycroft-mark-II) - all of the designs and technical details of the Mycroft Mark II Prototypes.
14 | * [mycroft-gui](http://github.com/MycroftAI/mycroft-gui) - Qt GUI Framework used by the Mark II and other projects.
15 | * [mycroft-gui-mark-2](https://github.com/MycroftAI/mycroft-gui-mark-2) - Plasma Shell components for the Mark II, including the pull-down menu and virtual keyboard.
16 | * [skill-mark-2](https://github.com/MycroftAI/skill-mark-2) - Skill providing user interaction for device-specific actions. Much of this will be moved to the Mark II enclosure.
17 | * [skill-wifi-connect](https://github.com/MycroftAI/skill-wifi-connect) - Audio and visual prompts for the Pantacor Wifi Connect system.
18 | * [mycroft-core\#feature/mark-2](https://github.com/mycroftai/mycroft-core/tree/feature/mark-2) - Feature branch of mycroft-core containing Mark II development code. This was branched to allow the development team to rapidly iterate. It will be merged back into the mainline project shortly.
19 | 
20 | ### Deprecated
21 | 
22 | The following repositories were used by previous versions of the Mark II.
23 | 
24 | * [enclosure-mark2](https://github.com/MycroftAI/enclosure-mark2)
25 | * [skill-mark-2-pi](https://github.com/MycroftAI/skill-mark-2-pi)
26 | 
27 | ## Mark 1
28 | 
29 | * [hardware-mycroft-mark-1](https://github.com/MycroftAI/hardware-mycroft-mark-1) - all of the designs and technical details of the Mycroft Mark 1.
30 | * [enclosure-mark-1](https://github.com/MycroftAI/enclosure-mark1) - the code run on the Arduino within a Mark 1 unit. It manages the eyes, the mouth, and the button.
31 | * [mycroft-mark-1](https://github.com/MycroftAI/mycroft-mark-1) - Skill providing user interaction for device-specific actions.
32 | * [skill-mark1-demo](https://github.com/MycroftAI/skill-mark1-demo) - A Skill to showcase the features of the Mark 1.
33 | * [mark-1-update-test](https://github.com/MycroftAI/mark-1-update-test) - This will mount a Mark 1 production image and run a test script inside of it with qemu.
34 | * [skill-platform-patch](https://github.com/MycroftAI/skill-platform-patch) - Updates original Mark 1 devices running mycroft-core versions less than 0.9.0, allowing them to upgrade to the latest version of Mycroft.
35 | 
36 | ## Picroft
37 | 
38 | * [enclosure-picroft](https://github.com/MycroftAI/enclosure-picroft) - Build recipe and assets for creating Picroft images.
39 | 
40 | ## Snapcraft
41 | 
42 | * [snapcraft-mycroft-core](https://github.com/MycroftAI/snapcraft-mycroft-core) - Snap build configuration for Mycroft.
43 | 
44 | -------------------------------------------------------------------------------- /docs/skill-development/user-interaction/intents/README.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | An intent is the task the user intends to accomplish when they say something.
4 | The role of the intent parser is to extract from the user's speech key data
5 | elements that specify their intent.
6 | ---
7 | 
8 | # Intents
9 | 
10 | A user can accomplish the same task by expressing their intent in multiple ways. The role of the intent parser is to extract from the user's speech key data elements that specify their intent in more detail. This data can then be passed to other services, such as Skills, to help the user accomplish their intended task.
11 | 
12 | _Example_: Julie wants to know about today's weather in her current location, which is Melbourne, Australia.
13 | 
14 | > "hey mycroft, what's today's weather like?"
15 | >
16 | > "hey mycroft, what's the weather like in Melbourne?"
17 | >
18 | > "hey mycroft, weather"
19 | 
20 | Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects Mycroft to respond with today's weather for their current location. The role of an intent parser is to determine what this intent is.
21 | 
22 | In the example above, we might extract data elements like:
23 | 
24 | * **weather** - we know that Julie wants to know about the weather, but she has not been specific about the type of weather, such as _wind_, _precipitation_, _snowfall_ or the risk of _fire danger_ from bushfires. Melbourne, Australia rarely experiences snowfall, but falls under bushfire risk every summer.
25 | * **location** - Julie has stipulated her location as Melbourne, but she does not state that she means Melbourne, Australia. How do we distinguish this from Melbourne, Florida, United States?
26 | * **date** - Julie has been specific about the _timeframe_ she wants weather data for - today. But how do we know what today means in Julie's timezone? Melbourne, Australia is between 14-18 hours ahead of the United States. We don't want to give Julie yesterday's weather, particularly as Melbourne is renowned for having changeable weather. \(A sketch of how a Skill receives these extracted elements follows below.\)
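As a rough illustration of how those extracted data elements reach a Skill, here is a minimal sketch of an intent handler. The intent file name and the `location` and `date` entity names are illustrative only, not part of an existing Skill; `self.location_pretty` is the device's configured city name.

```python
from mycroft import MycroftSkill, intent_handler


class WeatherSkill(MycroftSkill):
    # 'weather.intent' would contain sample phrases with {location} and
    # {date} placeholders, e.g. "what's the weather in {location} {date}"
    @intent_handler('weather.intent')
    def handle_weather(self, message):
        # Entities the parser managed to extract; fall back to sensible
        # defaults when the user did not say them
        location = message.data.get('location') or self.location_pretty
        date = message.data.get('date') or 'today'
        self.speak_dialog('weather', {'location': location, 'date': date})


def create_skill():
    return WeatherSkill()
```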
27 | 
28 | ## Mycroft's Intent Parsing Engines
29 | 
30 | Mycroft has two separate Intent parsing engines, each with its own strengths. Either can be used in most situations; however, they will process the utterance in different ways.
31 | 
32 | [**Padatious**](../../../mycroft-technologies/padatious.md) is a light-weight neural network that is trained on whole phrases. Padatious intents are generally more accurate; however, they require you to include sample phrases that cover the breadth of ways that a User may ask about something.
33 | 
34 | [**Adapt**](../../../mycroft-technologies/adapt/) is a keyword-based parser. It is more flexible, as it detects the presence of one or more keywords in an utterance; however, this can result in false matches.
35 | 
36 | We will now look at each in more detail, including how to use them in a Mycroft Skill.
37 | 
38 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/plugins/audioservice.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | An Audioservice Backend plugin adds an interface to a media player, allowing
4 | Mycroft to play new types of media files or play on remote devices.
5 | ---
6 | 
7 | # Audioservice Plugins
8 | 
9 | ## AudioBackend
10 | 
11 | Each Audioservice plugin must implement a class derived from `AudioBackend`, found in `mycroft.audio.service`. This class implements all basic commands and will be called by Mycroft. For the complete specification, see the [docstrings](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/audio/services/__init__.py) for the class.
12 | 
13 | Apart from the expected commands \(`play`, `pause`, etc\), there are a couple of important methods to mention.
14 | 
15 | The `supported_uris()` method is used to determine if the service backend can handle the given uri type \(https://, file://, etc\). A basic implementation will return an iterable with uri types: `('file', 'http', 'https')`
16 | 
17 | The playlist handling methods are `clear_list()` and `add_list()`, of which the first removes all items from the current list, and the second _appends_ a list of URIs to the list.
18 | 
19 | ### RemoteAudioBackend
20 | 
21 | If the implemented audio backend plays media on a device other than the one Mycroft is running on, this base class should be used. It is mainly used to sort generic playback queries, defaulting to a local audio backend unless otherwise specified.
22 | 
23 | ## Instantiation
24 | 
25 | An audioservice can instantiate a number of backend objects, so an instantiation function called `load_service()` needs to exist in the module. It will be called with the Mycroft audio configuration and a connection to the messagebus. The method should return a tuple with the audio backends included in the file.
26 | 
27 | The `load_service()` function can also be used to scan the local network for things like Chromecasts and create an audioservice backend for each found device.
28 | 
29 | The basic layout of an Audioservice Plugin will be:
30 | 
31 | ```python
32 | from mycroft.audio.service import AudioBackend
33 | 
34 | class MyAudioBackend(AudioBackend):
35 |     # lots of implementation here
36 |     [...]
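# load_service() below is called by Mycroft with the audio configuration and
# a messagebus connection; it returns a tuple of the backends in this file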
37 | 
38 | 
39 | def load_service(config, bus):
40 |     return (MyAudioBackend(config, bus), )
41 | ```
42 | 
43 | Example Audioservices can be found [here](https://github.com/MycroftAI/mycroft-core/tree/97ff8a4708cb91dd6661091e935c46753f2aa4d9/mycroft/audio/services).
44 | 
45 | ### Entry point
46 | 
47 | To make the audioservice detectable as an Audioservice Backend Plugin, the package needs to provide an entry point under the `mycroft.plugin.audioservice` namespace.
48 | 
49 | ```python
50 | setup([...],
51 |     entry_points = {'mycroft.plugin.audioservice': 'example_audiobackend = my_audiobackend'}
52 | )
53 | ```
54 | 
55 | Where `example_audiobackend` is the audioservice module name for the plugin, and `my_audiobackend` is the audioservice module containing the `load_service()` function.
56 | 
57 | Note that this differs a fair bit from the TTS and STT Plugin entry points, which point to a single class.
58 | 
59 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/what-can-a-skill-do.md: --------------------------------------------------------------------------------
1 | ---
2 | description: You can think of skills as the apps of the Mycroft system.
3 | ---
4 | 
5 | # What can a Skill do?
6 | 
7 | ## What can a Skill do?
8 | 
9 | Skills give Mycroft the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let’s talk about the best use cases for a voice assistant, and what types of things Mycroft can do.
10 | 
11 | Mycroft can run on a variety of platforms, from the Linux Desktop to our dedicated Smart Speakers, the Mark I and Mark II. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen, and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands free.
12 | 
13 | ### Basic functions
14 | 
15 | We cover a lot of the basics with our Default Skills - things like Timers, Alarms, Weather, Time and Date, and more.
16 | 
17 | ### Information
18 | 
19 | We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant - questions like “who was the 32nd President of the United States?”, or “how tall is the Eiffel Tower?” Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill, such as Science, Academics, Movie Info, TV Info, and Music Info.
20 | 
21 | ### Media
22 | 
23 | One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy - all you have to do is say “Hey Mycroft, play the Beatles,” and you can be enjoying music without having to reach for a phone or remote. Within our Default Skills we support Spotify and Pandora. However, it would be great if Mycroft supported as many streaming services as possible. In addition to listening to music, there are skills that handle podcasts as well.
24 | 
25 | ### News
26 | 
27 | Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. Mycroft supports multiple news feeds, and has the ability to support multiple news skills.
28 | 
29 | ### Smart Home
30 | 
31 | Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within our Marketplace there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have skills for yet. The open source community has been enthusiastically expanding Mycroft's ability to voice control all kinds of smart home products.
32 | 
33 | ### Games
34 | 
35 | Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available in the Mycroft Marketplace: there is a port of the popular text adventure game Zork, a Crystal Ball game, and a Number Guessing game. When working on a game skill, it might be worth creating a new Persona to be the host of your game.
36 | 
37 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/conversations.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Conversations are made up of a series of individual interactions, but are more
4 | than the sum of their parts. For more complicated Skills, conversations are a
5 | necessity.
6 | ---
7 | 
8 | # Conversations
9 | 
10 | Any time the user needs to input a lot of information, or needs to sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks. For example, many personal navigation devices \(GPS units\) require users to input the street number, street name, city, and state separately, like this:
11 | 
12 | | Speaker | |
13 | | :--- | :--- |
14 | | **Navigation Device** | _What is the street number_ |
15 | | **User** | _123_ |
16 | | **Navigation Device** | _What is the street name_ |
17 | | **User** | _Main Street_ |
18 | | **Navigation Device** | _What is the city_ |
19 | | **User** | _Springfield_ |
20 | | **Navigation Device** | _What is the state_ |
21 | | **User** | _Oregon_ |
22 | 
23 | This was a limitation of the technology: neither the STT nor the intent parser was capable of handling an entire address. With Mycroft you do not need to separate the address into multiple inputs. It’s also easy to think like a traditional GUI and translate what are typically separate fields into prompts for the user, but that will only slow down the interaction.
24 | 
25 | The best solution is to allow the user to input information both ways - all at once, or in pieces. For example, the Mycroft Alarm skill allows the user to set a complicated alarm all in one go, or in steps.
26 | 
27 | **In Steps:**
28 | 
29 | | Speaker | |
30 | | :--- | :--- |
31 | | **User** | _Hey Mycroft, set a recurring alarm_ |
32 | | **Mycroft** | _For what day?_ |
33 | | **User** | _weekdays_ |
34 | | **Mycroft** | _For what time?_ |
35 | | **User** | _7 am_ |
36 | | **Mycroft** | _I've set a recurring Alarm for 7 am on weekdays_ |
37 | 
38 | **All at once:**
39 | 
40 | | Speaker | |
41 | | :--- | :--- |
42 | | **User** | _Hey Mycroft, set a recurring alarm for weekdays at 7 am_ |
43 | | **Mycroft** | _I've set a recurring Alarm for 7 am on weekdays_ |
44 | 
45 | ### Context
46 | 
47 | Conversational context is a given when speaking to another person, but not always when interacting with a computer-based system.
Many skills are designed with only a quick back-and-forth interaction in mind, and can operate well without conversational context, but conversational context can be a powerful addition to a skill and can make the interaction much more lifelike. Here are some examples of how to use context in conversation.
48 | 
49 | | Speaker | |
50 | | :--- | :--- |
51 | | **User** | _Who is Shaquille O'Neal_ |
52 | | **Mycroft** | _Shaquille O'Neal is an American former professional basketball player who is a sports analyst on the television program Inside the NBA on TNT. He is considered one of the greatest players in National Basketball Association history_ |
53 | | **User** | _How tall is he?_ |
54 | | **Mycroft** | _seven foot 1 inches tall_ |
55 | | **User** | _when did he retire_ |
56 | | **Mycroft** | _2011_ |
57 | 
58 | To learn more about the technical aspects of Conversational Context, see:
59 | 
60 | {% page-ref page="../user-interaction/conversational-context.md" %}
61 | 
62 | 
63 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/audio-service.md: --------------------------------------------------------------------------------
1 | ---
2 | description: The audio service handles playback and queueing of tracks.
3 | ---
4 | 
5 | # Audio Service
6 | 
7 | The `mycroft-core` distribution of Mycroft includes a _Playback Skill_ which can be used to control playback after it has been started. This means that playback only needs to be started in the **Skill**. Controlling playback can then be done through the _Playback Skill_.
8 | 
9 | ## How to set up the Audio Service
10 | 
11 | First, import the `AudioService` class.
12 | 
13 | ```python
14 | from mycroft.skills.audioservice import AudioService
15 | ```
16 | 
17 | Then in the `initialize()` method of your **Skill**, instantiate an `AudioService` object:
18 | 
19 | ```python
20 | def initialize(self):
21 |     self.audio_service = AudioService(self.bus)
22 | 
23 |     # Other initialize code
24 |     [...]
25 | ```
26 | 
27 | ## Starting playback
28 | 
29 | Once the `AudioService` instance is created, you can start playback by simply calling the `play()` method with a track URI:
30 | 
31 | ```python
32 | self.audio_service.play('file:///path/to/my/track.mp3')
33 | ```
34 | 
35 | or with a list of tracks:
36 | 
37 | ```python
38 | self.audio_service.play(['file:///path/to/my/track.mp3', 'http://tracks-online.com/my/track.mp3'])
39 | ```
40 | 
41 | See the [AudioService Plugins documentation](https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/plugins/audioservice#audiobackend) for information about configuring supported URIs.
42 | 
43 | The play method has an optional second argument to further process the user's **Utterance**. Currently this can only be used to select the backend \(where you want to send the audio\), but in the future it will be able to handle requests like:
44 | 
45 | > Hey Mycroft, play Hello Nasty by the Beastie Boys at half volume. We don't want to wake the neighbours
46 | 
47 | To use this feature, the **Utterance** received from the intent service must be passed:
48 | 
49 | ```python
50 | def play_playlist_handler(self, message):
51 |     self.audioservice.play(PLAYLIST, message.data['utterance'])
52 | ```
53 | 
54 | ## More technical information
55 | 
56 | ### The backends
57 | 
58 | The default backend is still mpg123 for mp3 files, which is very limited but is the most generally available on multiple platforms.
59 | 
60 | Included in this release there are also:
61 | 
62 | * `VLC` \(a very general purpose media player\)
63 | * `mopidy` \(a common audio server in the Raspberry Pi community\)
64 | * `chromecast` \(experimental\)
65 | 
66 | These haven't been extensively tested on the Mark 1 yet.
67 | 
68 | ## PulseAudio features
69 | 
70 | The audio service hooks into the PulseAudio controls and can mute playback of sound streams beyond Mycroft's control. This is currently deactivated by default, but can be enabled by changing the `mycroft.conf` configuration found in `mycroft/configuration/mycroft.conf`:
71 | 
72 | ```javascript
73 | "play_wav_cmdline": "paplay %1 --stream-name=mycroft-voice",
74 | "Audio": {
75 |     "pulseaudio": "mute"
76 | }
77 | ```
78 | 
79 | ## AudioService Technical Documentation
80 | 
81 | More information on AudioService methods can be found in the [Mycroft Technical Documentation](https://mycroft-core.readthedocs.io/en/master/source/mycroft.html#audioservice-class).
82 | 
83 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/services/audio-service.md: --------------------------------------------------------------------------------
1 | ---
2 | description: The audio service handles playback and queueing of tracks.
3 | ---
4 | 
5 | # Audio Service
6 | 
7 | The `mycroft-core` distribution of Mycroft includes a _Playback Skill_ which can be used to control playback after it has been started. This means that playback only needs to be started in the **Skill**. Controlling playback can then be done through the _Playback Skill_.
8 | 
9 | ## How to set up the Audio Service
10 | 
11 | First, import the `AudioService` class.
12 | 
13 | ```python
14 | from mycroft.skills.audioservice import AudioService
15 | ```
16 | 
17 | Then in the `initialize()` method of your **Skill**, instantiate an `AudioService` object:
18 | 
19 | ```python
20 | def initialize(self):
21 |     self.audio_service = AudioService(self.bus)
22 | 
23 |     # Other initialize code
24 |     [...]
25 | ```
26 | 
27 | ## Starting playback
28 | 
29 | Once the `AudioService` instance is created, you can start playback by simply calling the `play()` method with a track URI:
30 | 
31 | ```python
32 | self.audio_service.play('file:///path/to/my/track.mp3')
33 | ```
34 | 
35 | or with a list of tracks:
36 | 
37 | ```python
38 | self.audio_service.play(['file:///path/to/my/track.mp3', 'http://tracks-online.com/my/track.mp3'])
39 | ```
40 | 
41 | See the [AudioService Plugins documentation](https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/plugins/audioservice#audiobackend) for information about configuring supported URIs.
42 | 
43 | The play method has an optional second argument to further process the user's **Utterance**. Currently this can only be used to select the backend \(where you want to send the audio\), but in the future it will be able to handle requests like:
44 | 
45 | > Hey Mycroft, play Hello Nasty by the Beastie Boys at half volume. We don't want to wake the neighbours
46 | 
47 | To use this feature, the **Utterance** received from the intent service must be passed:
48 | 
49 | ```python
50 | def play_playlist_handler(self, message):
51 |     self.audioservice.play(PLAYLIST, message.data['utterance'])
52 | ```
53 | 
54 | ## More technical information
55 | 
56 | ### The backends
57 | 
58 | The default backend is still mpg123 for mp3 files, which is very limited but is the most generally available on multiple platforms.
59 | 
60 | Included in this release there are also:
61 | 
62 | * `VLC` \(a very general purpose media player\)
63 | * `mopidy` \(a common audio server in the Raspberry Pi community\)
64 | * `chromecast` \(experimental\)
65 | 
66 | These haven't been extensively tested on the Mark 1 yet.
67 | 
68 | ## PulseAudio features
69 | 
70 | The audio service hooks into the PulseAudio controls and can mute playback of sound streams beyond Mycroft's control. This is currently deactivated by default, but can be enabled by changing the `mycroft.conf` configuration found in `mycroft/configuration/mycroft.conf`:
71 | 
72 | ```javascript
73 | "play_wav_cmdline": "paplay %1 --stream-name=mycroft-voice",
74 | "Audio": {
75 |     "pulseaudio": "mute"
76 | }
77 | ```
78 | 
79 | ## AudioService Technical Documentation
80 | 
81 | More information on AudioService methods can be found in the [Mycroft Technical Documentation](https://mycroft-core.readthedocs.io/en/master/source/mycroft.html#audioservice-class).
82 | 
83 | -------------------------------------------------------------------------------- /docs/using-mycroft-ai/get-mycroft/README.md: --------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Want to give Mycroft a try? Mycroft is open source software that runs where
4 | you want it to, whether it’s on your own hardware or one of our dedicated
5 | devices.
6 | ---
7 | 
8 | # Get Mycroft
9 | 
10 | You can purchase a dedicated Mycroft device, or download Mycroft and run it on your own hardware, be it a Raspberry Pi, computer, or an Android phone.
11 | 
12 | ## Download
13 | 
14 | You can download Mycroft and run it on your own hardware, be it a Raspberry Pi, computer, or an Android phone.
15 | 
16 | ### Picroft
17 | 
18 | Picroft is specifically packaged to run on a Raspberry Pi 3 or 3B+. It is built on top of the official Raspbian Stretch Lite and is available as a disk image ready to be burned to a Micro SD card. For Mycroft to hear you and respond to you, just connect a speaker and a USB microphone to your Pi.
19 | 
20 | {% content-ref url="picroft.md" %}
21 | [picroft.md](picroft.md)
22 | {% endcontent-ref %}
23 | 
24 | ### Mycroft for Linux
25 | 
26 | If you are a technical user, run Mycroft on your favorite Linux distro, including Ubuntu/Debian, Arch, and Fedora. Install it using either a shell script or a standalone installer.
27 | 
28 | {% content-ref url="linux.md" %}
29 | [linux.md](linux.md)
30 | {% endcontent-ref %}
31 | 
32 | ### Windows and Mac
33 | 
34 | We’re not quite ready to release Mycroft natively for Windows and macOS. If you are keen to help and have technical experience on these platforms, please let us know!
35 | 
36 | However, you can run Mycroft on macOS and Windows inside a virtual machine using VirtualBox.
37 | 
38 | {% content-ref url="macos-and-windows-with-virtualbox.md" %}
39 | [macos-and-windows-with-virtualbox.md](macos-and-windows-with-virtualbox.md)
40 | {% endcontent-ref %}
41 | 
42 | ### Mycroft for Docker
43 | 
44 | Run Mycroft in a Docker container. This can be installed using our [pre-built image from Docker Hub](https://hub.docker.com/r/mycroftai/docker-mycroft/), or by building the image manually.
45 | 
46 | {% content-ref url="docker.md" %}
47 | [docker.md](docker.md)
48 | {% endcontent-ref %}
49 | 
50 | ### Mycroft for Android
51 | 
52 | Build Mycroft in Android Studio and deploy it to your Android devices.
We have source code available if you’re comfortable with Android Studio and have the skill set to compile it into an .apk and load it onto an Android device.

{% content-ref url="android.md" %}
[android.md](android.md)
{% endcontent-ref %}

## Buy a device

Get a dedicated device, a voice assistant with no hassles. Plug it in, connect to WiFi, and go.

### Mark II

The Mark II is our next-generation smart speaker designed for you and your family, whatever your level of technical expertise. At the same time, it is open hardware and software for developers, makers and hackers.

{% content-ref url="mark-ii/" %}
[mark-ii](mark-ii/)
{% endcontent-ref %}

### Mark 1

Mark 1 is a no-fuss way to get started. It has been designed specifically to work with Mycroft’s capabilities as a standalone device. Great for those who aren’t super technical, but it can be easily extended, altered, modified, and hacked.

_The Mark 1 is currently sold out._

{% content-ref url="mark-1.md" %}
[mark-1.md](mark-1.md)
{% endcontent-ref %}

-------------------------------------------------------------------------------- /docs/skill-development/marketplace-submission/skills-acceptance-process/skills-review-functional-template.md: --------------------------------------------------------------------------------
---
description: Template for the Functional Review component of the Skills Acceptance Process.
---

# Functional Review Template

```text
## Meta

* Who: [@your-github-handle](https://github.com/your-github-handle) / @your-mycroft-chat-handle
* Datestamp: run `date +%Y-%m-%d_%H:%M:%S_%Z`
* Language and dialect of tester:

## 3. Functional Review - intuitive and expected

* [ ] **Installation**

  Check that the Skill installs using voice commands. Mycroft will get the user to confirm which Skill should be installed if there is ambiguity in Skill names - such as duplicate names. If possible, name the Skill so that there is minimal duplication and/or conflict. You should also verify that the Skill name can be verbally pronounced by speaking the Skill name into the Mycroft command line several times, and reading the resulting transcriptions. Suggest alternative Skill names if the Skill name is difficult to pronounce. Please provide confirmation that the Skill was successfully installed and by what means \(voice or `mycroft-msm install`\), as well as what utterance was detected when invoking the install voice command.

  > Install method:
  > Output:
  >
  >
  >
  > Checking that STT transcribes correctly:
  >
  >

* [ ] **Settings**

  If the Skill includes a `settingsmeta` file - are the settings well laid out? Does the placeholder text make sense? This can also be checked on [home.mycroft.ai/\#/skill](https://home.mycroft.ai/#/skill)

  >

* [ ] **Dialog**

  Check the `dialog` directory to ensure that, from a voice user interface perspective, the dialogs read well. Always play every `dialog` phrase on the command line using `say` so that you can check how the `dialog` is spoken. It is a good idea to run the `dialog` phrases through [mimic](https://mycroft.ai/documentation/mimic/).

  Sometimes the `dialog` files will need a small tweak such as a space between words, or extra vowel sounds, to sound realistic.
Sometimes words don't render as expected and alternative wording should be used. Some of the tricks you might need to use include separating words with a space - such as `sub sonic` instead of `subsonic`, or `broad cast` instead of `broadcast`.

  >

#### Skill Functions

For each function of the Skill, add a new checkbox with the utterance used to invoke the functionality. Confirm the output and behaviour of each. If any setup is required to perform these tests, please indicate this directly before the test is described.

* [ ] **"Are there unread messages on Mycroft Chat"**

  >

* [ ] **"Name Mycroft Chat channels with unread messages"**

  >

* [ ] **"Read all unread Mycroft Chat messages"**

  >

* [ ] **"Read messages for the channel {name}"**

  >

* [ ] **"Begin monitoring of Mycroft Chat"**

  >

* [ ] **"Stop monitoring of Mycroft Chat"**

  >

### Actions Required

A short list of any _Actions Required_. It is also great to provide a short statement of your impressions from using the Skill. \*
```

-------------------------------------------------------------------------------- /docs/skill-development/skill-types/fallback-skill.md: --------------------------------------------------------------------------------
---
description: >-
  A Fallback Skill is a Skill that will be called if no Intent is matched to the
  Utterance.
---

# Fallback Skill

## Fallback **Skill** order of precedence

The Fallback **Skills** all have a priority and will be checked in order from low priority value to high priority value. If a Fallback **Skill** can handle the **Utterance**, it will create a response and return `True`. After this, no other Fallback **Skills** are tried. This means the priority value for Fallbacks that can handle a _broad_ range of queries should be _high_ \(80-100\), and for Fallbacks that only respond to a very specific range of queries it should be _lower_ \(20-80\). The more specific the Fallback, the lower the priority value.

## Creating a Fallback **Skill**

Import the `FallbackSkill` base class:

```python
from mycroft import FallbackSkill
```

Create a derived class:

```python
class MeaningFallback(FallbackSkill):
    """
    A Fallback skill to answer the question about the
    meaning of life, the universe and everything.
    """
    def __init__(self):
        super(MeaningFallback, self).__init__(name='My Fallback Skill')
        # Add your own initialization code here
```

Register the handler with the fallback system.

_Note: a `FallbackSkill` can register any number of fallback handlers_

```python
def initialize(self):
    """
    Registers the fallback handler
    """
    self.register_fallback(self.handle_fallback, 10)
    # Any other initialize code you like can be placed here
```

Implement the fallback handler \(the method that will be called to potentially handle the **Utterance**\). The method implements logic to determine if the **Utterance** can be handled, and shall output speech if it can handle the query. It shall return Boolean `True` if the **Utterance** was handled and Boolean `False` if not.

```python
def handle_fallback(self, message):
    """
    Answers question about the meaning of life, the universe
    and everything.
    """
    utterance = message.data.get("utterance")
    if ('what' in utterance
            and 'meaning' in utterance
            and ('life' in utterance
                 or 'universe' in utterance
                 or 'everything' in utterance)):
        self.speak('42')
        return True
    else:
        return False
```

Finally, the **Skill** creator must make sure the skill handler is removed when the **Skill** is shut down by the system.

```python
def shutdown(self):
    """
    Remove this skill from list of fallback skills.
    """
    self.remove_fallback(self.handle_fallback)
    super(MeaningFallback, self).shutdown()
```

And as with a normal **Skill**, the function `create_skill()` needs to be in the file to instantiate the skill.

```python
def create_skill():
    return MeaningFallback()
```

The above example can be found [here](https://github.com/forslund/fallback-meaning).

-------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: --------------------------------------------------------------------------------
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at kris.gesling@mycroft.ai. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

-------------------------------------------------------------------------------- /TODO.md: --------------------------------------------------------------------------------
# Documentation TODO

The objective of this file is to document the items flagged as @TODO as I go through the documentation, so I can come back to them and ensure they're resolved before the documentation is put live.

### About Mycroft

- [ ] Release notes - need to find a way to aggregate them from GitHub. This will need the GitHub API.
- [ ] Overview - put in a link to the Precise documentation when it is done.

### Your home.mycroft.ai account

- [ ] Document where different settings are located - device vs Skills

### Updating

- [ ] Add or update instructions on how to update each device

### Mark 1 Display

- [ ] Mark 1 - should the information on how to control the display for the Mark 1 Enclosure go under the Skills section instead?
- [ ] Mark 1 - how many characters can the Mark 1 display show before text has to scroll?
- [ ] Mark 1 - the mouth_display_png() optional parameters need to be checked - I've guessed what each of them means, but this needs to be checked.
- [ ] Mark 1 - the pieces about how to draw to the **Enclosure** - I think this needs an explanatory graphic.

### Picroft

- [ ] Picroft - how to keep updated - is burning a new disk image to SD card the best approach?

### Mycroft for Linux

- [ ] Linux - should people be connecting headphones and microphones through USB or Bluetooth? Is there any difference or preference? Is this information the same as for Picroft?
- [ ] Linux - what are the other installation options for Mycroft on Linux - there are the installers that @AIIX provided, and I think there's an `apt-get` package. Will need to clarify.
- [ ] Not sure what the easiest way is to keep Mycroft for Linux updated - is it as simple as doing a `git pull` from the remote repo?
- [ ] What are the common issues with Mycroft for Linux?
### Mycroft for Android

- [ ] Registration code for Android - not sure how this works or how Android provides a Registration Code

### Padatious

- [ ] Why is having Intents independent of each other a benefit - what does this mean - need to explain it in a bit more detail

### Mycroft Skills

- [ ] Need more documentation on `msm` - what it does, what commands are available, and what **Skills** developers need to know about it.
- [ ] What are the common issues in **Skills** development - i.e. the common problems that Developers encounter - so we can address them.
- [ ] How are Skills developed for platforms like Alexa repurposed for Mycroft?
- [ ] We need a guide or similar for `__init__.py` that covers items like how to inherit from the `MycroftSkill` class, and the sorts of functions that are used. How are APIs used? What if your **Skill** needs to use another service?

### Mycroft language support

- [ ] The introductory text is available in multiple languages - are there any key languages that I'm missing?
- [ ] If I find an STT engine in the language I want to support, what do I do with it? The documentation on the Languages wiki isn't clear on how all the elements come together.

### Precise

NIL

### Adapt

- [ ] Are there minimum requirements for installing Adapt - both software and hardware?

### Audio service

- [ ] What URI schemes are supported by the Playback **Skill**?

-------------------------------------------------------------------------------- /docs/skill-development/voight-kampff/reviewing-the-report.md: --------------------------------------------------------------------------------
# Reviewing the Report

## Test Output

The details of each test scenario are printed to the terminal \(stdout\) as it is run, and at the end of the test run a summary is printed.

```text
Failing scenarios:
  features/hello-world.feature:3  Hello world

1 feature passed, 1 failed, 0 skipped
8 scenarios passed, 1 failed, 0 skipped
26 steps passed, 1 failed, 0 skipped, 0 undefined
Took 0m14.191s
```

In this example we can see that a scenario called "Hello world" from the `hello-world.feature` file failed. It also points us to the line number where this scenario is defined in that file - line 3.

The summary also outlines how many features, scenarios and steps have passed, failed or been skipped, followed by how long all of the tests took to run.

Let's now go take a look at that failing test.

## Failing tests

If a scenario fails, the failing `Then` statement will be highlighted in red, and a detailed description of the failure is provided. By default, the test runner will automatically retry the failing scenario a second time before moving on.

```bash
2020-03-20 19:11:14,392 | Voight Kampff | INFO | Starting tests for mycroft-hello-world
Feature: mycroft-hello-world # features/hello-world.feature:1

  Scenario: Hello world # features/hello-world.feature:3
    Given an english speaking user # features/steps/utterance_responses.py:108 0.001s
    When the user says "Hello world" # features/steps/utterance_responses.py:113 0.001s
    Then "mycroft-hello-world" should reply with "I'm doing very well" # features/steps/utterance_responses.py:137 10.019s
      Assertion Failed: Mycroft responded with: mycroft-hello-world by mycroftai
      Mycroft: hello.world.dialog(HelloWorldSkill)
               "Hi to you too"

      Captured stdout:
      Matching with the dialog file: how.are.you.dialog

      Captured logging:
      INFO:msm.mycroft_skills_manager:Best match (1.0): mycroft-hello-world by mycroftai
```

The example output above shows a test that has failed. The line containing the `Then` statement would be highlighted in red, and a description of the error is provided. In this case, Mycroft responded with "Hi to you too", which is in the `hello.world.dialog` file of the `mycroft-hello-world` Skill. The output then indicates that the response string provided in the test, "I'm doing very well", was actually found in the `how.are.you.dialog` file.

Looks like we have an error in our test. Mycroft should respond to "Hello World" with "Hi to you too" or another phrase from `hello.world.dialog`.

## Allure Reporting Framework

The test results are also written to JSON files that can be used with [Allure](http://allure.qatools.ru/), an open source test reporting framework.

## Help

For further assistance with Skill testing, please post your question on the [Community Forums](https://community.mycroft.ai/) or in the [Skills channel on Mycroft Chat](https://chat.mycroft.ai/community/channels/skills).

See our [tips for how to ask the best questions](../../using-mycroft-ai/troubleshooting/getting-more-support.md). This helps you get a more complete response faster.

-------------------------------------------------------------------------------- /docs/skill-development/skill-structure/api.md: --------------------------------------------------------------------------------
---
description: >-
  The Skill API offers a simple and convenient way to use other Skills' methods
  and export your own to help other Skill creators.
---

# Skill API

The Skill API uses the Mycroft Message Bus to communicate between Skills, and wraps the interaction in simple Python objects, making them easy to use.

## Making a method available through the Skill API

A method can be tagged with the `skill_api_method` decorator. This will handle all the basics of making the method available to other Skills over the Message Bus.

```python
@skill_api_method
def my_exported_method(self, my_arg, my_other_arg):
    """My skill api method documentation
    """
```

The decorator will generate everything needed for accessing the method over the Message Bus and extract the associated docstring.

### Limitations

The Skill API works over the Message Bus. This requires that the return values are JSON serializable. All common Python builtin types \(such as list, str, None, etc.\) work well; however, custom classes are not currently supported.
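
If you do need to share richer data, one workaround is to flatten it into builtin types before returning it. A minimal sketch, assuming a hypothetical `_fetch_forecast()` helper that returns a custom object:

```python
@skill_api_method
def get_forecast(self):
    """Return today's forecast using only JSON-serializable builtins."""
    # _fetch_forecast() is a hypothetical helper returning a custom object
    forecast = self._fetch_forecast()
    # Flatten the custom object into a plain dict before returning it
    return {
        'temperature': forecast.temperature,
        'condition': forecast.condition
    }
```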

### Example

```python
from mycroft.skills import MycroftSkill, skill_api_method

class RobberSkill(MycroftSkill):
    @skill_api_method
    def robber_lang(self, sentence):
        """Encode a sentence to "Rövarspråket".

        Each consonant gets converted to consonant + "o" + consonant,
        vowels are left as is.

        Returns: (str) sentence in the robber language.
        """
        vowels = "aeiouyåäö"
        tokens = []
        for char in sentence.lower():
            if char.isalpha() and char not in vowels:
                tokens.append(char + 'o' + char)
            else:
                tokens.append(char)
        return ' '.join(tokens)


def create_skill():
    return RobberSkill()
```

## Using another Skill's API

If you want to make use of exported functionality from another Skill, you must fetch that Skill's `SkillApi`. This will give you a small class with the target Skill's exported methods. These methods are nothing special and can be called like any other class's methods.

To access the `robber_lang()` method we created above, we could write:

```python
from mycroft.skills import MycroftSkill
from mycroft.skills.api import SkillApi

class NewRobberSkill(MycroftSkill):
    def initialize(self):
        self.robber = SkillApi.get('robber-skill.forslund')
        self.speak(self.robber.robber_lang('hello world'))


def create_skill():
    return NewRobberSkill()
```

When the `NewRobberSkill` is initialized, it will assign the API from the Skill `robber-skill.forslund` to `self.robber`. We then run the exported method `robber_lang()`, passing the argument `'hello world'`.

Our `NewRobberSkill` will therefore speak something like "hoh e lol lol o wow o ror lol dod".

## Getting information on a Skill's exported API

The Mycroft CLI has an `:api` command for exploring Skill APIs.

```text
:api robber-skill.forslund
```

will show any exported method from the `robber-skill.forslund` Skill. Each exported method's docstring will automatically be extracted and presented, providing information on how each method is intended to be used.

-------------------------------------------------------------------------------- /docs/skill-development/introduction/python-resources.md: --------------------------------------------------------------------------------
---
description: >-
  Python is an object-oriented, interpreted, high-level programming language. It
  is known for being easy to read and learn.
---

# Python Resources

If you are new to Python or programming in general, there are lots of resources online catering to all experience levels. Once you have the basics, a simple Skill can be a great way to try them out in a real project.

Mycroft uses Python 3 and as of December 2019 supports Python 3.5-3.7.

The following are a collection of resources recommended by members of our Community. If you know of other great resources, please let us know in the [Dev channel of Mycroft Chat](https://chat.mycroft.ai/community/channels/dev).

## Video Tutorials

### Python Virtual Environments

{% embed url="https://www.youtube.com/watch?v=QXJZRDYjo7E" %}
Video Tutorial: Python Virtual Environments
{% endembed %}

### Python Package Management

{% embed url="https://www.youtube.com/watch?v=noKRb5uoE-Q" %}
Video Tutorial: Python Package Management
{% endembed %}

## Books

### Think Python

"[Think Python](https://greenteapress.com/wp/think-python-2e/) is an introduction to Python programming for beginners. It starts with basic concepts of programming, and is carefully designed to define all terms when they are first used and to develop each new concept in a logical progression. Larger pieces, like recursion and object-oriented programming are divided into a sequence of smaller steps and introduced over the course of several chapters."

The book is available to purchase through normal retailers, or to download for free from [Green Tea Press](https://greenteapress.com/wp/think-python-2e/)

### Automate the Boring Stuff with Python

"In [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/), you'll learn how to use Python to write programs that do in minutes what would take you hours to do by hand - no prior programming experience required. Once you've mastered the basics of programming, you'll create Python programs that effortlessly perform useful and impressive feats of automation."

## Courses

### FreeCodeCamp.org Video Tutorial

[This video tutorial](https://www.youtube.com/watch?v=rfscVS0vtbw) will give you a full introduction to all of the core concepts in Python. The entire video runs for almost four and a half hours, so you'd better grab a coffee before you get started!

### LearnPython.org by DataCamp

[The LearnPython.org interactive tutorial](https://www.learnpython.org/) walks you through all the basics of Python and provides interactive coding challenges to put it into practice.

### TutorialsPoint Python Tutorial

[A comprehensive written tutorial from TutorialsPoint.com](https://www.tutorialspoint.com/python) covering both the basics and more advanced topics.

## Documentation

Books and courses are great to get started, but once you have the basics you often just need good documentation to point you in the right direction. The official Python documentation can be found at: [https://docs.python.org/3/](https://docs.python.org/3/)

For an alternative view of this same documentation, try [DevDocs.io](https://devdocs.io/python\~3.7/).

If you're a little more visual, check out this [Python Cheat Sheet](https://websitesetup.org/python-cheat-sheet/) that covers all the basics.
-------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/plugins/tts.md: --------------------------------------------------------------------------------
---
description: TTS plugins handle converting Text to Speech.
---

# TTS Plugins

All Mycroft TTS plugins need to define a class based on the TTS base class from `mycroft.tts`.

```python
from mycroft.tts import TTS

class myTTS(TTS):
    def __init__(self, lang, config):
        super().__init__(lang, config, validator, audio_ext='wav',
                         phonetic_spelling=False, ssml_tags=None)
        # Any specific init code goes here
```

The `super()` call does some setup, adding specific options for how Mycroft will preprocess the sentence:

* `audio_ext`: filetype of output, possible options are 'wav' and 'mp3'.
* `phonetic_spelling`: True if Mycroft should preprocess some difficult-to-pronounce words \(e.g. Spotify\), False to provide the raw text to the TTS.
* `ssml_tags`: list of valid SSML tags for the TTS if any, otherwise None.
* `validator`: a special class that verifies that the TTS is working in the current configuration.

It also registers the module's config from the [Mycroft configuration](../../../using-mycroft-ai/customizations/config-manager.md) in `self.config`, as well as the current language in `self.lang`.

For the following config snippet

```javascript
"tts": {
    "module": "example_tts",
    "example_tts": {
        ...
    }
}
```

Mycroft will register the `"example_tts"` part in the TTS's `self.config`.

## `get_tts()`

The `get_tts()` method will be called by Mycroft to generate audio and \(optionally\) phonemes. This is the main method that the plugin creator needs to implement. It is called with:

* `sentence` \(str\): a piece of text to turn into audio.
* `wav_file` \(str\): where the plugin should store the generated audio data.

This method should generate audio data and return a Tuple `(wav_file, visemes)`:

* `wav_file` \(str\): path to written data \(generally the input argument\)
* `visemes` \(list\): viseme list for the synthesized audio

As an example, see the [`get_tts` method for Mimic2](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/tts/mimic2_tts.py#L225).

## TTS Validator

To check if the TTS can be used, a validator class is needed. This should inherit from `mycroft.tts.TTSValidator`. It will be called with the TTS class as argument and will store it in `self.tts`.

The following is the bare minimum implementation:

```python
class MyValidator(TTSValidator):
    def get_tts_class(self):
        # Should return a reference to the TTS class it's intended to validate.

    def validate_lang(self):
        # Raise exception if `self.tts.lang` is not supported.

    def validate_connection(self):
        # Check that the software needed for the TTS is reachable,
        # be it a local executable, python module or remote server, and
        # if not available raise an exception.
```

## Entry point

To make the class detectable as a TTS plugin, the package needs to provide an entry point under the `mycroft.plugin.tts` namespace.

```python
setup([...],
      entry_points={'mycroft.plugin.tts': 'example_tts = my_tts:myTTS'}
      )
```

Where `example_tts` is the TTS module name for the plugin, `my_tts` is the Python module, and `myTTS` is the class in the module to return.

-------------------------------------------------------------------------------- /docs/skill-development/skill-structure/dependencies/README.md: --------------------------------------------------------------------------------
---
description: >-
  Mycroft Skills are powerful because we can make use of external packages and
  applications, or add voice interfaces to existing tools.
---

# Dependencies

There are three main categories of dependencies:

* Python packages sourced from [PyPI](https://pypi.org/).
* Linux system packages sourced from the repositories available on the Mycroft device.
* Other Mycroft Skills sourced from the [Skills Marketplace](https://market.mycroft.ai/).

Some of these may already be installed on a User's device; however, some may not. To make sure a system has everything that your Skill needs, we can define the dependencies or requirements of the Skill. During installation the Mycroft Skills Manager will then check that they are installed, and if not, attempt to install them.

For more information on Python package management and Python virtual environments, see our general Python Resources:

{% page-ref page="../../introduction/python-resources.md" %}

There are three files that we can use to define these dependencies.

## Recommended

`manifest.yml` is the default method. This can include all three types of dependencies, including variations for different operating systems if required.

{% page-ref page="manifest-yml.md" %}

## Alternatives

`requirements.txt` can be used only for Python packages.

`requirements.sh` is used to run a custom script during installation.

{% page-ref page="requirements-files.md" %}

Whichever file you choose to use, it must be located in the root directory of your Skill.

There is no limit to the number of packages you can install; however, these are reviewed during the [Skills Acceptance Process](../../marketplace-submission/skills-acceptance-process/) to ensure they are appropriate for the Skill being installed and do not pose a security concern for Users.

## Manual installation

The files outlined above ensure that dependencies are available on devices when a Skill is being installed by the Mycroft Skills Manager. If you are developing the Skill on your own machine, you may need to install these dependencies manually.

System packages can be installed using your standard package manager, for example:

```bash
apt install system-package-name
```

Mycroft Skills can be installed using the Mycroft Skills Manager:

```bash
mycroft-msm install required-skill-name
```

Python packages must be installed in the Mycroft virtual environment. The simplest way to do this is using the helper command `mycroft-pip` located in `mycroft-core/bin/`.

During installation you may have selected to add this directory to your PATH, in which case you can run it from anywhere.

```bash
mycroft-pip install python-package-name
```

If you don’t want to use the helper commands, you can activate the virtual environment and install the packages using pip:

```bash
cd ~/mycroft-core  # or wherever you cloned mycroft-core
source venv-activate.sh  # activate the virtual environment
pip install python-package-name
deactivate  # to exit the virtual environment again
```

If you have already defined your Python package dependencies, you can use the pip `-r` flag to install all of these at once:

```bash
cd /opt/mycroft/skills/my-skill
mycroft-pip install -r requirements.txt
```

-------------------------------------------------------------------------------- /docs/using-mycroft-ai/customizations/config-manager.md: --------------------------------------------------------------------------------
---
description: >-
  The mycroft-config utility simplifies management of the various
  configuration files that control Mycroft. Commands include: edit, show, get,
  set, reload.
---

# Configuration Manager

## About Mycroft Configurations

Mycroft configurations are stored in [mycroft.conf](mycroft-conf.md) files. These exist in four locations:

* Default - mycroft-core/mycroft/configuration/mycroft.conf
* Remote - Home.Mycroft.ai
* System - /etc/mycroft/mycroft.conf
* User - $HOME/.mycroft/mycroft.conf

When the configuration loader starts, it looks in these locations in this order, and loads **all** configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value.

For example, if your account settings at Home.Mycroft.ai set the `date_format` variable to `MDY`, but the `User` configuration file on your device contains a `date_format` value of `DMY`, the loaded configuration will be `DMY` and dates will be provided in the format "29-11-2018".

This process results in a minimal amount of configuration being written for a specific device and user, without modifying default distribution files.

## Why a configuration manager?

When directly editing configuration files, it is very easy for even the most careful person to make a minor syntax error. In the past, this has been an extremely common cause of problems. A small error such as a comma in the wrong place breaks your configuration and results in "Mycroft not working".

{% hint style="warning" %}
We strongly recommend that you use the configuration manager rather than directly editing mycroft.conf files.
{% endhint %}

## Available commands

### Get a configuration value

```text
mycroft-config get
```

To display the requested value, this command first loads the mycroft.conf stack, then outputs the effective value\(s\).

The variable is specified using jq-style specifiers. For example:

**Display a specific value**
`mycroft-config get location.timezone.code`

**Display all values in the location block**
`mycroft-config get location`

**Display all loaded configuration values**
`mycroft-config get`

### Set a configuration value

```text
mycroft-config set
```

Set the given variable in the `User` configuration file.
For example:

**Set language to "es-es"**
`mycroft-config set lang "es-es"`

**Set the Mimic1 voice to "Alan Pope" aka British Male**
`mycroft-config set tts.mimic.voice "ap"`

### Edit a configuration file

```text
mycroft-config edit (default|remote|system|user)
```

Open the requested file in an editor \(nano by default\), performing JSON validation and showing warnings to minimize accidental edit errors.

### Show the contents of a configuration file

```text
mycroft-config show (default|remote|system|user)
```

Display the contents of the selected configuration file.

### Reload configuration values

```text
mycroft-config reload
```

Sends a signal on the messagebus telling services to reload the configuration. This automatically occurs after a `set` or `edit` command.

## Further information

For more information on the configuration options available, continue to the [mycroft.conf documentation](mycroft-conf.md).

-------------------------------------------------------------------------------- /docs/mycroft-technologies/mimic-tts/README.md: --------------------------------------------------------------------------------
---
description: Mimic is Mycroft AI's family of Text to Speech engines.
---

# Mimic TTS

![](../../.gitbook/assets/Mimic3-intro-banner\_v2.3.png)

Over the years, Mimic, like other Mycroft components, has gotten clearer, faster, and more flexible.

You can hear samples from all three Mimic systems below, speaking the same [sentence](https://clagnut.com/blog/2380/#English\_phonetic\_pangrams).

| Name | Sample | Local | Technology |
| ---- | ------ | ----- | ---------- |
| [Mimic 1](../mimic-overview/) | [Mimic 1 sample](https://mycroft.ai/wp-content/uploads/2022/04/mimic1\_pangram.wav) | Yes | [Festival Lite (flite)](http://www.festvox.org/flite) |
| [Mimic 2](mimic-2.md) | [Mimic 2 sample](https://mycroft.ai/wp-content/uploads/2022/04/mimic2\_pangram.wav) | No | [Tacotron](https://github.com/keithito/tacotron) |
| [Mimic 3](mimic-3.md) | [Mimic 3 sample](https://mycroft.ai/wp-content/uploads/2022/04/mimic3\_pangram.wav) | Yes | [VITS: Conditional Variational Autoencoder with Adversarial Learning](https://github.com/jaywalnut310/vits) |

### COMING SOON: Mimic 3

Mimic 3 is a privacy-focused open-source neural Text to Speech (TTS) engine that can run faster than real-time on low-end devices like the Raspberry Pi 4. In human terms that means it sounds great, it can run completely offline or in the cloud, and you can verifiably trust it.

{% content-ref url="mimic-3.md" %}
[mimic-3.md](mimic-3.md)
{% endcontent-ref %}

### Mimic 2

Mimic 2 is our older machine learning TTS engine designed to run in the cloud. It has been the default voice for most installations of Mycroft for many years.

{% content-ref url="mimic-2.md" %}
[mimic-2.md](mimic-2.md)
{% endcontent-ref %}

### Mimic 1

Mimic 1 is a fast, light-weight TTS engine based on Carnegie Mellon University's FLITE software. It concatenates speech sounds to create complete phrases.

{% content-ref url="../mimic-overview/" %}
[mimic-overview](../mimic-overview/)
{% endcontent-ref %}

### Mimic Recording Studio

Mimic Recording Studio simplifies the collection of voice training data from individuals by allowing you to speak and record phrases from a _corpus_. This data can then be used to train a voice model using Mimic 3, Mimic 2, or another machine learning TTS engine.

{% content-ref url="../mimic-overview/mimic-recording-studio.md" %}
[mimic-recording-studio.md](../mimic-overview/mimic-recording-studio.md)
{% endcontent-ref %}

-------------------------------------------------------------------------------- /docs/skill-development/introduction/your-first-skill.md: --------------------------------------------------------------------------------
---
description: >-
  Ready to create your first Skill? See how easy it is to get a new Skill up and
  running, then we will step through the basic anatomy of a Mycroft Skill.
---

# Your First Skill

## Prerequisites

If you haven't already, check out our [Introduction to Skill Development](./). This walk through assumes you:

* have some basic knowledge of [Python programming](https://www.python.org/),
* have an account on [Github.com](https://github.com), and
* have a [working version of Mycroft](../../using-mycroft-ai/get-mycroft/).

## Understand the flow of your Skill

It's a good idea to start by writing down how your Skill will work, including:

* What words will the User speak to activate the Skill?
* What will Mycroft speak in response?
* What data will you need to deliver the Skill?
* Will you need any additional packages or dependencies?

Once you've given these some thought, you can get started.

## Mycroft Skills Kit (MSK)

{% embed url="https://www.youtube.com/watch?v=YNoWpRyx26A" %}
Video Tutorial: Mycroft Skills Kit
{% endembed %}

To set up the foundations of your Skill, we will use the [Mycroft Skills Kit (MSK)](../../mycroft-technologies/mycroft-skills-kit.md) that comes installed with Mycroft. If you chose the defaults during installation, you can run MSK from your Terminal using the command: `mycroft-msk`. Running this command without any arguments will provide a brief overview of what you can do with MSK.

If you receive a "command not found" error, you will need to run `msk` manually from your `mycroft-core` directory instead; anywhere you see `mycroft-msk` in our documentation, substitute the equivalent manual `msk` command.

### MSK Create

`mycroft-msk create` is an interactive script that asks you a few questions and generates a new Skill template. This template can immediately be used as a Skill, however you will most likely want to extend its functionality.

To create your first Skill, you will be asked for a:

1. **Name**: To be readable within the space available on the [Mycroft Skills Marketplace](https://market.mycroft.ai), the name should be short, generally under 22 characters in length. The name must also be unique.
You can check the [Marketplace](https://market.mycroft.ai) to see what other Skills already exist.
2. **Example phrases (known as utterances)**: Utterances that you expect Users to say to Mycroft, and that your Skill will respond to.
3. **Response dialog**: The dialog that your Skill will respond with.
4. **Short description**: A one-line description, less than 40 characters long.
5. **Long description**: This can be as short or as long as you like.
6. **Author**: This is most often your name and/or GitHub @username.
7. **Categories**: The [Mycroft Skills Marketplace](https://market.mycroft.ai) categories your Skill belongs to. It's important to note that the first category you select will be set as the default category. This is where your Skill will most often appear in the Marketplace.
8. **Tags**: Tags provide an additional means for Users to search for or discover relevant Skills. Unlike categories, you can set your tags to anything you like.

After inputting this data, you will be asked if you would like a GitHub repo created for your Skill. This provides an easy way to store your Skill, and will be required if you choose to [publish your Skill in the Marketplace](../marketplace-submission/).

If you have completed all of these steps, your Skill will have been created in the `/opt/mycroft/skills` directory on your device.

In the next section, we will explore each component that makes up your Skill.

-------------------------------------------------------------------------------- /docs/using-mycroft-ai/troubleshooting/wake-word-troubleshooting.md: --------------------------------------------------------------------------------
---
description: Having trouble triggering the wake word?
---

# Wake Word Troubleshooting

> The microphone is working but Mycroft does not trigger when I say the Wake Word

To verify that it is the Wake Word and not the microphone causing the issue, we will get Mycroft to ask us a question that we can respond to.

In the Mycroft CLI, type an utterance like "set timer". Mycroft will then ask how long of a timer you would like, play the activation sound, and turn on the microphone. Speaking now should result in your utterance being transcribed.

If your response is successfully transcribed, it is most likely the Wake Word engine causing the problem.

## Precise

### Check for AVX support on your device

Our Precise wake word engine relies upon TensorFlow. For x86 Intel processors this requires the AVX \(Advanced Vector Extensions\) instruction set. To ensure your system supports AVX, open a terminal and run:

```text
grep avx /proc/cpuinfo
```

AVX should be listed under the flags for each CPU core. If nothing is returned, it is most likely that your system does not support AVX.

The ARM architecture has a similar requirement called SIMD \(Single Instruction, Multiple Data\). This has been available since ARMv7, covering the Cortex A7 used by the OrangePi as well as newer cores like the Cortex A53 used by the Raspberry Pi.

#### Resolution

If your device does not have AVX or SIMD support, then our Precise wake word engine will not work. Technical users may be able to build an older version of TensorFlow \(1.13\) that does not have this requirement from source, using the instructions provided on the TensorFlow website.

You may still use Mycroft with the PocketSphinx wake word engine.
See [Using a custom wake word](../customizations/wake-word.md) for how to configure this.

### Check microphone input level

The Wake Word engine can have trouble detecting the specified term if the mic level is set too high.

#### Resolution

Reduce the mic level by roughly 25-50% and attempt to speak again.

If you are on a laptop, this can be done in your system's sound settings or volume control panel. Alternatively, you can use `alsamixer` on the command line.

We have a range of Alsamixer tips in the [Audio Troubleshooting Guide](audio-troubleshooting.md)

### Test Wake Word detection using another voice

Whilst we continue to improve our wake word models, they can still have trouble detecting the specified term from some voices.

If you think this might be the case, you can play a video from our [Youtube channel](https://www.youtube.com/channel/UC1dlmB1lup9RwFQBSGnhA-g) that uses the phrase "Hey Mycroft". Playing this through speakers near the connected microphone should trigger Mycroft.

#### Resolution

You may still be able to use Mycroft with the PocketSphinx wake word engine. See [Using a custom wake word](../customizations/wake-word.md) for how to configure this.

## PocketSphinx

If you are using the PocketSphinx wake word engine, please see [Using a custom wake word](../customizations/wake-word.md) to examine the configuration options available.

PocketSphinx is an attractive option for setting a custom wake word, as a new term requires only a simple text configuration. However, the result is less reliable than a well-trained Precise Wake Word.

## Further Information and Support

You're welcome to post questions and queries to the [Mycroft Forum](https://community.mycroft.ai/c/Help-with-Mycroft-related-issues) or join our [Wake Word channel on Mycroft Chat](https://chat.mycroft.ai/community/channels/wake-word) to connect with others in the Community.

-------------------------------------------------------------------------------- /docs/skill-development/displaying-information/show-simple-content.md: --------------------------------------------------------------------------------
---
description: >-
  The Mycroft GUI Framework has a number of common methods for displaying
  standard simple content types.
---

# Show Simple Content

### Text

Display simple strings of text.

```python
self.gui.show_text(text, title=None, override_idle=None, override_animations=False)
```

Arguments:

* text \(str\): Main text content. It will auto-paginate.
* title \(str\): A title to display above the text content.
* override\_idle \(boolean, int\):
  * True: Takes over the resting page indefinitely.
  * \(int\): Delays the resting page for the specified number of seconds.
* override\_animations \(boolean\):
  * True: Disables showing all platform skill animations.
  * False: 'Default' always show animations.

### Static Image

Display a static image such as a jpeg or png.

```python
self.gui.show_image(url, caption=None, title=None, fill=None, override_idle=None, override_animations=False)
```

Arguments:

* url \(str\): Pointer to the image
* caption \(str\): A caption to show under the image
* title \(str\): A title to display above the image content
* fill \(str\): Fill type - supports:
  * 'PreserveAspectFit',
  * 'PreserveAspectCrop',
  * 'Stretch'
* override\_idle \(boolean, int\):
  * True: Takes over the resting page indefinitely
  * \(int\): Delays the resting page for the specified number of seconds.
* override\_animations \(boolean\):
  * True: Disables showing all platform skill animations.
  * False: 'Default' always show animations.

### Animated Image

Display an animated image such as a gif.

```python
self.gui.show_animated_image(url, caption=None, title=None, fill=None, override_idle=None, override_animations=False)
```

Arguments:

* url \(str\): Pointer to the .gif image
* caption \(str\): A caption to show under the image
* title \(str\): A title to display above the image content
* fill \(str\): Fill type - supports:
  * 'PreserveAspectFit',
  * 'PreserveAspectCrop',
  * 'Stretch'
* override\_idle \(boolean, int\):
  * True: Takes over the resting page indefinitely
  * \(int\): Delays the resting page for the specified number of seconds.
* override\_animations \(boolean\):
  * True: Disables showing all platform skill animations.
  * False: 'Default' always show animations.

### HTML Page

Display a local HTML page.

```python
self.gui.show_html(html, resource_url=None, override_idle=None, override_animations=False)
```

Arguments:

* html \(str\): HTML text to display
* resource\_url \(str\): Pointer to HTML resources
* override\_idle \(boolean, int\):
  * True: Takes over the resting page indefinitely
  * \(int\): Delays the resting page for the specified number of seconds.
* override\_animations \(boolean\):
  * True: Disables showing all platform skill animations.
  * False: 'Default' always show animations.

### Remote URL

Display a webpage.

```python
self.gui.show_url(url, override_idle=None, override_animations=False)
```

Arguments:

* url \(str\): URL to render
* override\_idle \(boolean, int\):
  * True: Takes over the resting page indefinitely
  * \(int\): Delays the resting page for the specified number of seconds.
* override\_animations \(boolean\):
  * True: Disables showing all platform skill animations.
  * False: 'Default' always show animations.

-------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/plugins/README.md: --------------------------------------------------------------------------------
---
description: >-
  Mycroft plugins allow developers to create and distribute new features for
  some of Mycroft's main systems.
---

# Plugins

## Mycroft plugins

Plugins extend the functionality of Mycroft without needing to modify the source code of Mycroft-core. They can be used to add support for new speech recognition or synthesis engines, or to provide additional audio backends to play media files.

The intention behind the plugin system is to allow the community to build and publish new functionality, adding support for new services and making Mycroft more easily customizable. The Mycroft Community has grown tremendously over the last few years and Mycroft is being used in many different ways. This gives developers the freedom to make updates at their own pace, without needing to wait for a Mycroft core developer to verify their code contribution \(and then wait for the next release\). It also helps to keep Mycroft-core light and flexible, not requiring everything for every system to be bundled into the one code base.

### Pluggable systems

The plugins are normal pip-installable Python modules specifying specially named "entry points". Entry points are a Python packaging mechanism for creating common APIs and making them discoverable. For more info see the [Python Packaging Guide](https://packaging.python.org/guides/creating-and-discovering-plugins/)

Mycroft supports plugins for:

* [Speech to Text \(STT\)](stt.md)
* [Text to Speech \(TTS\)](tts.md)
* [Audioservice backends](audioservice.md)
* [Wake word engines](wake-word-plugins.md)

These are accessible through the following entry point types:

* STT Plugin: `mycroft.plugin.stt`
* TTS Plugin: `mycroft.plugin.tts`
* Audioservice Plugin: `mycroft.plugin.audioservice`
* Wake Word Plugin: `mycroft.plugin.wake_word`

### Plugin setup.py

A pip-installable package always contains a `setup.py` file with install instructions. This will not be described in detail on this page, but python-packaging has a [great guide](https://python-packaging.readthedocs.io/en/latest/).

To make a plugin discoverable, an `entry_point` must be added to the setup call in `setup.py`.

```python
entry_points={'entrypointtype': 'my_plugin_name = my_module:myPluginClass'}
```

`my_plugin_name` will be the module identifier that can be used in the Mycroft configuration to reference a plugin. That plugin name will refer to the class \(or module\) specified by the string to the right of the equals sign. In the example above it will refer to the `myPluginClass` in the module `my_module`.

A complete `setup.py` example:

```python
#!/usr/bin/env python3
from setuptools import setup

PLUGIN_ENTRY_POINT = 'example_plug = mycroft_example_plugin:ExamplePlugin'
setup(
    name='mycroft-example-plugin',
    version='1',
    description='A TTS plugin for mycroft',
    author='Mike Roft',
    author_email='mike@email.com',
    packages=['mycroft_example_plugin'],
    keywords='mycroft plugin tts',
    entry_points={'mycroft.plugin.tts': PLUGIN_ENTRY_POINT}
)
```

This will add a TTS module called _example\_plug_ which will refer to a TTS class called `ExamplePlugin` in the module `mycroft_example_plugin`.

In the config this will be selected by setting the tts module to `example_plug`.

```javascript
"tts": {
    "module": "example_plug"
}
```

A complete example plugin re-implementing Google TTS can be found [here](https://github.com/forslund/mycroft-tts-plugin-gtts). A template for creating the `setup.py` can be found [here](https://gist.github.com/forslund/8e51cba0ffd4e671dfc188e4e33fdbd7).
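
Once a plugin package like the one above is published, using it is typically a matter of installing it into Mycroft's virtual environment and pointing the relevant module setting at its entry point name. A minimal sketch, reusing the example plugin from this page:

```bash
# Install the plugin package into Mycroft's virtual environment
mycroft-pip install mycroft-example-plugin

# Point the TTS module setting at the plugin's entry point name
mycroft-config set tts.module example_plug
```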

-------------------------------------------------------------------------------- /docs/skill-development/voight-kampff/scenario-outlines.md: --------------------------------------------------------------------------------
---
description: >-
  One of the simplest, yet most useful features of Voight Kampff is the ability
  to provide a list of examples that will all be tested within one Scenario.
---

# Scenario Outlines

We can do this using Scenario Outlines. Think of these as a template that we set up and can then feed data into to run multiple tests.

Let's revisit the example from the [previous page](https://github.com/MycroftAI/documentation/tree/384c23ed6db63685b60d97645b47e1ff882cefba/docs/skill-development/introduction.md).

```yaml
Feature: current-weather
  Scenario: current local weather
    Given an English speaking user
    When the user says "tell me the weather"
    Then "my-weather-skill" should reply with "Right now, it's overcast clouds and 32 degrees."
```

In this Scenario, when the user says "tell me the weather", our Weather Skill should reply with an appropriate dialog. However, we know that there are many ways to ask about the weather. Rather than copying and pasting this Scenario for each utterance, we can instead use a `Scenario Outline`.

```yaml
Feature: current-weather
  Scenario Outline: current local weather
    Given an English speaking user
    When the user says "<current local weather>"
    Then "mycroft-weather" should reply with "Right now, it's overcast clouds and 32 degrees."

  Examples: local weather questions     # Table heading
    | current local weather   |         # Column heading
    | tell me the weather     |         # First value
    | what's the weather like |         # Second value
    | current weather         |         # Third value
```

The most obvious change is the new block of `Examples`. This is a table of data that we want to use in our Scenario. We give that table a title, "local weather questions", and for each column in the table we provide a column heading, "current local weather", followed by one or more values.

To use this data, we must first explicitly change from using a `Scenario` to using a `Scenario Outline`. We are then able to use the `Examples` data in our Steps by including the Column Heading surrounded by angle brackets. In this case we have added `<current local weather>` to the `When` Step on line 4.

Running the above Scenario Outline will test three versions of this Scenario - "tell me the weather", "what's the weather like", and "current weather". It is important to note that "current local weather" is a heading, not a value, and it will not be tested.

## Multiple dependent variables

So far we have only used the Scenario Outline to take in a list of utterances. The same format can be used with tables of data.
46 | 47 | ```yaml 48 | Feature: Pokemon abilities 49 | Scenario Outline: list abilities 50 | Given an English speaking user 51 | When the user says "What abilities does <Name> have" 52 | Then "pokemon.retrodaredevil" should reply with "<Name> has <Abilities>" 53 | 54 | Examples: Pokemon data # Table heading 55 | | Name | Abilities | # Column heading 56 | | Bulbasaur | chlorophyll and overgrow | # First value 57 | | Charmander | solar-power and blaze | # Second value 58 | | Squirtle | rain-dish and torrent | # Third value 59 | ``` 60 | 61 | In our new `Examples` block, you can see we have expanded this into a two-column table containing the names of some Pokemon and the abilities they have. Using this data we have done two things. 62 | 63 | 1. The Pokemon `Name` is used in both the question and the response. 64 | 2. The Abilities that are reported back are directly related to the Name from the same table row. If a user were to say "What abilities does Bulbasaur have", and Mycroft replied with "Bulbasaur has solar-power and blaze", then the Scenario would be considered a fail. 65 | 66 | -------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/interactions-and-guidelines/confirmations.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Confirmations are used to verify that the input from the user was understood 4 | correctly. 5 | --- 6 | 7 | # Confirmations 8 | 9 | Confirmation approaches can also be defined by Statements or Prompts, but when we talk about them in the context of confirmations we call them Implicit and Explicit. 10 | 11 | ### Implicit Confirmation 12 | 13 | This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of situations. For example, Mycroft’s Timer Skill uses implicit confirmation when creating new timers. 14 | 15 | **Timer Skill** 16 | 17 | | Speaker | | 18 | | :--- | :--- | 19 | | **User** | _Hey Mycroft set a timer for 5 minutes_ | 20 | | **Mycroft** | _Alright, I’m starting a 5 minute timer_ | 21 | 22 | **Avoid** 23 | 24 | | Speaker | | 25 | | :--- | :--- | 26 | | **User** | _Hey Mycroft, add bananas to my shopping list_ | 27 | | **Mycroft** | _done_ | 28 | 29 | **Better** 30 | 31 | | Speaker | | 32 | | :--- | :--- | 33 | | **User** | _Hey Mycroft, add bananas to my shopping list_ | 34 | | **Mycroft** | _Adding bananas_ | 35 | 36 | ### Explicit Confirmation 37 | 38 | This type of confirmation requires an input from the user to verify everything is correct. Only use explicit confirmation when needed, as it can be tedious for the user to always confirm their input. This strategy is best when there are multiple steps, and it would be difficult to fix a misstep without starting over, or when deleting or modifying an existing multi-step input.
39 | 40 | **Avoid** 41 | 42 | | **Speaker** | | 43 | | :--- | :--- | 44 | | **User** | _Hey Mycroft, set an alarm for 9:00 am tomorrow_ | 45 | | **Mycroft** | _Ok, I’ve set an alarm for 9:00 am tomorrow, is that correct?_ | 46 | | **User** | _Yes_ | 47 | | **Mycroft** | _9:00 am alarm set_ | 48 | 49 | **Better** 50 | 51 | | Speaker | | 52 | | :--- | :--- | 53 | | **User** | _Hey Mycroft, set an alarm for 9:00 am tomorrow_ | 54 | | **Mycroft** | _I've set an alarm for tomorrow at 9:00 am, 16 hours from now_ | 55 | 56 | The first example requires the user to confirm the alarm. Since the interaction is quite short, it isn’t necessary to use explicit confirmation. 57 | 58 | **Avoid** 59 | 60 | | **Speaker** | | 61 | | :--- | :--- | 62 | | **User** | _Hey Mycroft, set a calendar event for a meeting with Chris Stevens_ | 63 | | **Mycroft** | _When is the event?_ | 64 | | **User** | _Tomorrow at 11:00 am_ | 65 | | **Mycroft** | _Should I send an invite to Chris Stevens?_ | 66 | | **User** | _Yes_ | 67 | | **Mycroft** | _Alright, I’ve created a meeting for Tomorrow at 11:00 am with Chris Stevens_ | 69 | | **User** | _Cancel my 11:00 am meeting tomorrow_ | 70 | | **Mycroft** | _Done_ | 71 | 72 | **Better** 73 | 74 | | Speaker | | 75 | | :--- | :--- | 76 | | **User** | _Hey Mycroft, set a calendar event for a meeting with Chris Stevens_ | 77 | | **Mycroft** | _When is the event?_ | 78 | | **User** | _Tomorrow at 11:00 am_ | 79 | | **Mycroft** | _Should I send an invite to Chris Stevens?_ | 80 | | **User** | _Yes_ | 81 | | **Mycroft** | _Alright, I’ve created a meeting for Tomorrow at 11:00 am with Chris Stevens_ | 82 | | **User** | _Cancel my 11:00 am meeting tomorrow_ | 83 | | **Mycroft** | _Are you sure you want to delete your meeting tomorrow at 11:00 am?_ | 84 | | **User** | _Yes_ | 85 | 86 | Both examples use explicit and implicit confirmation correctly until the end of the interaction when the user wants to cancel the event. Unlike setting a simple timer or an alarm, creating this calendar event requires multiple steps and would require reasonable effort to recreate. There may also be significant consequences if the request was misinterpreted and the meeting was cancelled unintentionally. 87 | 88 | -------------------------------------------------------------------------------- /docs/skill-development/skill-structure/dependencies/manifest-yml.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | A Skill's `manifest.yml` file is the default method for defining the 4 | dependencies of a Mycroft Skill. 5 | --- 6 | 7 | # Manifest.yml 8 | 9 | In this file we can include Python packages, Linux applications, or other Mycroft Skills that are required for our own Skill to function properly. 10 | 11 | {% hint style="info" %} 12 | [YAML](https://en.wikipedia.org/wiki/YAML) is a language commonly used for configuration files. It uses indentation rather than brackets or parentheses to define the structure or hierarchy of its contents. 13 | {% endhint %} 14 | 15 | ## File contents 16 | 17 | We start the `manifest.yml` by defining a top-level key of `dependencies` and the type of dependency we want to include. 18 | 19 | ```yaml 20 | dependencies: 21 | python: 22 | ``` 23 | 24 | ### Python Packages 25 | 26 | Here we can see a simple example that defines the `requests` and `gensim` Python packages as required dependencies.
27 | 28 | ```yaml 29 | dependencies: 30 | python: 31 | - requests 32 | - gensim 33 | ``` 34 | 35 | When a Skill with this `manifest.yml` file is being installed, Mycroft will check for, and if required install, both packages from [PyPI](https://pypi.org/) using the pip installer. 36 | 37 | There is no limit to the number of packages you can install; however, these are reviewed during the [Skills Acceptance Process](../../marketplace-submission/skills-acceptance-process/) to ensure they are appropriate for the Skill being installed. 38 | 39 | ### Linux System Packages 40 | 41 | Linux packages are defined under the `system` key. As Mycroft can be installed on many different Linux distributions, support is provided for a range of package managers. 42 | 43 | For packages that have consistent naming across package managers, we can use `all`. 44 | 45 | ```yaml 46 | dependencies: 47 | system: 48 | all: pianobar piano-dev 49 | ``` 50 | 51 | If the package has a different name on specific platforms, we can define that using the name of the package manager as a key. In the following example, we want to use the `libpiano-dev` package when using the `APT` package manager on Debian, Ubuntu and other related distributions. 52 | 53 | ```yaml 54 | dependencies: 55 | system: 56 | all: pianobar piano-dev 57 | apt-get: pianobar libpiano-dev 58 | ``` 59 | 60 | Finally, we can check that certain executables are available for the install to succeed. This is done by checking the [PATH environment variable](http://www.linfo.org/path_env_var.html). 61 | 62 | ```yaml 63 | dependencies: 64 | system: 65 | all: pianobar piano-dev 66 | apt-get: pianobar libpiano-dev 67 | 68 | exes: 69 | - pianobar 70 | ``` 71 | 72 | Here we install a number of `pianobar` packages, and then verify that the `pianobar` executable is available for our Skill. 73 | 74 | ### Other Mycroft Skills 75 | 76 | A Skill may even require that other Mycroft Skills are installed rather than duplicating their functionality. Here we can see that the Cocktails Skill and the Mozilla Webthings Gateway are listed as dependencies. 77 | 78 | ```yaml 79 | dependencies: 80 | skill: 81 | - cocktails 82 | - webthings-gateway 83 | ``` 84 | 85 | Anything listed in this section will be passed to the [Mycroft Skills Manager](https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/msm) for installation. A feature of MSM is that it will search for Skills that most closely match the given string. So even though the `mozilla-webthings-gateway` Skill has not been perfectly entered, it will still match with a high enough confidence to install the Skill. 86 | 87 | ## Example files 88 | 89 | A complete `manifest.yml` example can be found in the [official Template Skill on Github](https://github.com/MycroftAI/mycroft-skills/blob/19.08/00__skill_template/manifest.yml). 90 | 91 | A simple example from a real Skill can be found in the [Desktop Launcher Skill](https://github.com/MycroftAI/skill-desktop-launcher/blob/19.08/manifest.yml). 92 | 93 | -------------------------------------------------------------------------------- /docs/skill-development/user-interaction/converse.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Converse allows a recently active Skill to inspect utterances before the 4 | normal intent handling process. It allows for Skills to respond to 5 | unstructured follow up utterances. 6 | --- 7 | 8 | # Converse 9 | 10 | Each Skill may define a `converse()` method.
This method will be called anytime the Skill has been recently active and a new utterance is processed. 11 | 12 | The converse method expects a single argument, which is a standard Mycroft Message object. This is the same object an intent handler receives. 13 | 14 | Converse methods must return a Boolean value: True if an utterance was handled, otherwise False. 15 | 16 | ### Basic usage 17 | 18 | Let's use a version of the Ice Cream Skill we've been building up and add a converse method to catch any brief statements of thanks that might directly follow an order. 19 | 20 | ```python 21 | from mycroft import MycroftSkill, intent_handler 22 | 23 | 24 | class IceCreamSkill(MycroftSkill): 25 | def __init__(self): 26 | MycroftSkill.__init__(self) 27 | self.flavors = ['vanilla', 'chocolate', 'mint'] 28 | 29 | @intent_handler('request.icecream.intent') 30 | def handle_request_icecream(self): 31 | self.speak_dialog('welcome') 32 | selection = self.ask_selection(self.flavors, 'what.flavor') 33 | self.speak_dialog('coming-right-up', {'flavor': selection}) 34 | 35 | def converse(self, message): 36 | if self.voc_match(message.data['utterances'][0], 'Thankyou'): 37 | self.speak_dialog("you-are-welcome") 38 | return True 39 | return False 40 | 41 | def create_skill(): 42 | return IceCreamSkill() 43 | ``` 44 | 45 | In this example: 46 | 47 | 1. A User might request an ice cream, which is handled by `handle_request_icecream()`. 48 | 2. The Skill would be added to the system Active Skill list for up to 5 minutes. 49 | 3. Any utterance received by Mycroft would trigger this Skill's converse method whilst it is considered active. 50 | 4. If the User followed up with a pleasantry such as "Hey Mycroft, thanks", the converse method would match this vocab against the `Thankyou.voc` file in the Skill and speak the contents of the `you-are-welcome.dialog` file. The method would return `True` and the utterance would be consumed, meaning the intent parsing service would never be triggered. 51 | 5. Any utterance that did not match would be silently ignored and allowed to continue on to other converse methods and finally to the intent parsing service. 52 | 53 | ### Active Skill List 54 | 55 | A Skill is considered active if it has been called in the last 5 minutes. 56 | 57 | Skills are called in order of when they were last active. For example, if a user spoke the following commands: 58 | 59 | > Hey Mycroft, set a timer for 10 minutes 60 | > 61 | > Hey Mycroft, what's the weather 62 | 63 | Then the utterance "what's the weather" would first be sent to the Timer Skill's `converse()` method, then to the intent service for normal handling where the Weather Skill would be called. 64 | 65 | As the Weather Skill was called, it has now been added to the front of the Active Skills List. Hence, the next utterance received will be directed to: 66 | 67 | 1. `WeatherSkill.converse()` 68 | 2. `TimerSkill.converse()` 69 | 3. Normal intent parsing service 70 | 71 | ### Making a Skill Active 72 | 73 | There are occasions where a Skill has not been triggered by the User but it should still be considered "Active". 74 | 75 | In the case of our Ice Cream Skill, we might have a function that will execute when the customer's order is ready. At this point, we also want to be responsive to the customer's thanks, so we call `self.make_active()` to manually add our Skill to the front of the Active Skills List.
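As a minimal sketch, such a function might look like the following. The `handle_order_ready` method and the `order-ready` dialog file are hypothetical, not part of the Skill shown above:

```python
def handle_order_ready(self):
    """Hypothetical callback fired when the customer's order is ready."""
    self.speak_dialog('order-ready')
    # Move this Skill to the front of the Active Skills List so that
    # converse() will receive the customer's follow-up utterances.
    self.make_active()
```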
76 | 77 | {% hint style="warning" %} 78 | **Please note:** Any use of this method will be closely scrutinized during the Skills Marketplace submission process. 79 | 80 | Skills that attempt to use the Converse method or the Active Skills List in ways that a normal user would not intend or appreciate will not be approved. 81 | {% endhint %} 82 | 83 | ### Further information 84 | 85 | See the [`converse()` method documentation](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.html#mycroft.MycroftSkill.converse). 86 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/mycroft-core/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Mycroft Core sits at the centre of all Mycroft installations. The code itself 4 | includes anything that is generic to all instances of Mycroft. 5 | --- 6 | 7 | # Mycroft Core 8 | 9 | ## Mycroft Services 10 | 11 | Mycroft Core includes the following distinct Services: 12 | 13 | * [MessageBus](message-bus.md) 14 | * [Skills](https://github.com/MycroftAI/documentation/tree/4a8ffa3702e64c9411fb0ba4239a61d1cca506ab/docs/mycroft-technologies/mycroft-core/skills-service.md) 15 | * [Audio](services/audio-service.md) 16 | * [Voice](https://github.com/MycroftAI/documentation/tree/4a8ffa3702e64c9411fb0ba4239a61d1cca506ab/docs/mycroft-technologies/mycroft-core/voice-service.md) 17 | * [Enclosure](https://github.com/MycroftAI/documentation/tree/4a8ffa3702e64c9411fb0ba4239a61d1cca506ab/docs/mycroft-technologies/mycroft-core/enclosures.md) 18 | 19 | Each of these is started in its own process, and communicates with other processes by emitting and listening to messages on the MessageBus. 20 | 21 | ### Process Status 22 | 23 | Each Service includes an instance of the ProcessStatus class. This provides a consistent interface to set and query the state of a Service. 24 | 25 | It is constructed with the following signature: 26 | 27 | ```python 28 | status = ProcessStatus(name, bus, on_started=None, on_alive=None, on_ready=None, 29 | on_stopping=None, on_error=None) 30 | ``` 31 | 32 | **Arguments** 33 | 34 | `name` \(str\): process name, used to create the messagebus message type "mycroft.{name}..." 35 | `bus` \(MessageBusClient\): connection to the Mycroft messagebus. 36 | `on_started` \(callable\): callback to execute when process is started 37 | `on_alive` \(callable\): callback to execute when minimum operation is reached. 38 | `on_ready` \(callable\): callback to execute when loading is 100% complete. 39 | `on_stopping` \(callable\): callback to execute when shutdown of process starts. 40 | `on_error` \(callable\): callback to execute when an unrecoverable error renders the process unusable. 41 | 42 | #### State and Events 43 | 44 | Each process tracks its state as one of: 45 | 46 | * NOT\_STARTED 47 | * STARTED 48 | * ERROR 49 | * STOPPING 50 | * ALIVE 51 | * READY 52 | 53 | These states are defined in a comparable order. This means if a Process has the state `ALIVE`, then it must also be `STARTED`. Likewise, if a Process is `READY`, it must also be `ALIVE` and `STARTED`. 54 | 55 | These states can be set by calling the relevant method from the ProcessStatus class we initialized above: 56 | 57 | * `status.set_started()` 58 | * `status.set_error()` 59 | * `status.set_alive()` 60 | * `status.set_stopping()` 61 | * `status.set_ready()` 62 | 63 | It is up to each Service to determine its readiness and set the state accordingly.
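For illustration, a service might drive these states in order during startup. This is a minimal sketch following the constructor described above; the import paths are assumptions and may vary between mycroft-core releases:

```python
from mycroft.messagebus.client import MessageBusClient
from mycroft.util.process_utils import ProcessStatus

# Connect to the messagebus so state can be queried by other processes
bus = MessageBusClient()
bus.run_in_thread()

status = ProcessStatus('example', bus,
                       on_ready=lambda: print('example service is ready'))

status.set_started()   # the process has launched
# ... perform the minimum setup required to operate ...
status.set_alive()     # minimum operation reached
# ... finish loading everything else ...
status.set_ready()     # loading is 100% complete
```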
64 | 65 | #### Querying Status 66 | 67 | The status of a Service can be queried via the MessageBus. 68 | 69 | The request Message should be in the form `mycroft.{service_name}.is_{state}`. For example, we can check if the Audio Service is `ALIVE` by emitting: 70 | 71 | ```python 72 | response = self.bus.wait_for_response(Message( 73 | 'mycroft.audio.is_alive')) 74 | ``` 75 | 76 | The response message will include a `status` boolean in the data payload: 77 | 78 | ```python 79 | >>> print(response.data) 80 | {'status': True} 81 | ``` 82 | 83 | ## Proposed Addition 84 | 85 | If you have thoughts or comments on the following status proposals, please [let us know in the Community Forums](https://community.mycroft.ai/t/dev-sync-2020-08-18/9283/4). 86 | 87 | ### Status/Watchdog Service 88 | 89 | The status/watchdog service would be a new systemd service running on devices that run Mycroft-core. The functionality of this service could include, but would not be limited to: 90 | 91 | * Monitoring various aspects of the health of each core service \(is service running, healthy, ready, etc.\). 92 | * Heartbeats could be emitted from each service, or the new service could check the pulse of each service on regular intervals. 93 | * Actions could be taken if a service becomes unhealthy, such as attempting to restart a crashed service. 94 | * Specific checks, such as the existing microphone watchdog, could be included in this service. 95 | * Responsible for restarting services on a daily basis. 96 | * If a user opts in to data sharing, information about exceptions or crashes could be sent to Mycroft for diagnostic purposes. 97 | 98 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/adapt/adapt-tutorial.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | A step by step tutorial in using Adapt Intent Parser, using the 4 | `single_intent_parser.py` file as a starting point. 5 | --- 6 | 7 | # Adapt Tutorial 8 | 9 | ## Prerequisites 10 | 11 | To complete this tutorial, you should have some basic knowledge of the Python programming language, and be comfortable executing commands on a Linux command line. 12 | 13 | {% hint style="info" %} 14 | If you are looking to use Adapt in a Mycroft Skill, please see [Skill Development > Intents](../../skill-development/user-interaction/intents/) 15 | {% endhint %} 16 | 17 | ## Step 1 - Sample Intent 18 | 19 | This is the sample **Intent** around which the tutorial is based. 20 | 21 | ```bash 22 | A sample intent that uses a fixed vocabulary to extract entities for an intent 23 | 24 | try with the following: 25 | PYTHONPATH=. python examples/single_intent_parser.py "what's the weather like in tokyo" 26 | ``` 27 | 28 | ## Step 2 - Import Libraries 29 | 30 | First, we need to import `json` for serializing the Adapt Intent Parser output, and `sys` for reading in command line arguments. 31 | 32 | ```python 33 | import json 34 | import sys 35 | ``` 36 | 37 | Next, we import the `IntentBuilder` and `IntentDeterminationEngine` classes. 38 | 39 | ```python 40 | from adapt.intent import IntentBuilder 41 | from adapt.engine import IntentDeterminationEngine 42 | ``` 43 | 44 | Next, we instantiate an `IntentDeterminationEngine` object. 45 | 46 | ```python 47 | engine = IntentDeterminationEngine() 48 | ``` 49 | 50 | Next, we declare a collection of weather **Keywords** as a Python list.
These **Keywords** act as _hints_ to the Adapt Intent Parser about which intent context is being referenced by an **Utterance**. 51 | 52 | ```python 53 | weather_keyword = [ 54 | "weather" 55 | ] 56 | ``` 57 | 58 | Register each **Keyword** with the `engine`. 59 | 60 | ```python 61 | for wk in weather_keyword: 62 | engine.register_entity(wk, "WeatherKeyword") 63 | ``` 64 | 65 | Next, we declare a `collection` of weather types. These act as a query parameter on a Weather **Intent**. 66 | 67 | For example, in the sentence: 68 | 69 | `Will it rain in Seattle tomorrow?` 70 | 71 | the `collection` of weather types can then be used to determine whether that weather type is occurring in Seattle. 72 | 73 | Next, we declare the weather types and register each with the `engine`. 74 | 75 | ```python 76 | weather_types = ["snow", "rain", "wind", "sleet", "sun"] 77 | for wt in weather_types: 78 | engine.register_entity(wt, "WeatherType") 79 | ``` 80 | Next, a `collection` of locations is declared. These also act as a query parameter on a Weather **Intent**, and can be used in combination with the weather type `collection`. 81 | 82 | ```python 83 | locations = [ 84 | "Seattle", 85 | "San Francisco", 86 | "Tokyo" 87 | ] 88 | ``` 89 | 90 | Next, each location is registered with the `engine`. 91 | 92 | ```python 93 | for loc in locations: 94 | engine.register_entity(loc, "Location") 95 | ``` 96 | 97 | Next, we construct an `intent parser`. The `intent parser` is named `WeatherIntent` and requires both a `WeatherKeyword` and `Location`, and can optionally include a `WeatherType`. 98 | 99 | ```python 100 | weather_intent = IntentBuilder("WeatherIntent")\ 101 |     .require("WeatherKeyword")\ 102 |     .optionally("WeatherType")\ 103 |     .require("Location")\ 104 |     .build() 105 | ``` 106 | 107 | Next, we register the `intent parser` with the `engine`. 108 | 109 | ```python 110 | engine.register_intent_parser(weather_intent) 111 | ``` 112 | 113 | We then declare an entry point for the script. This standard Python construct ensures the code below it only runs when the file is executed directly, rather than when it is imported as a module. 114 | 115 | ```python 116 | if __name__ == "__main__": 117 | ``` 118 | 119 | Next, pass the command line arguments to this script as an **Utterance** into `engine.determine_intent()`. This function returns a generator, and we then use the generator to iterate through the results. 120 | 121 | ```python 122 | for intent in engine.determine_intent(' '.join(sys.argv[1:])): 123 | ``` 124 | 125 | If the confidence is >0, this is a valid **Intent**. 126 | 127 | ```python 128 | if intent.get('confidence') > 0: 129 | ``` 130 | 131 | Next, serialize the **Intent** and print it to `stdout`. 132 | 133 | ```python 134 | print(json.dumps(intent, indent=4)) 135 | ``` 136 | 137 | Of course, you don't just have to output the **Intent** to `stdout` - you can use it to build all sorts of tools. 138 | 139 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | Mycroft Documentation 3 | [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](http://makeapullrequest.com) 4 | [![Join chat](https://img.shields.io/badge/Mattermost-join_chat-brightgreen.svg)](https://chat.mycroft.ai/community/channels/documentation) 5 | [![CLA Required](https://img.shields.io/badge/CLA%3F-Required-blue.svg)](https://mycroft.ai/cla) 6 | ========== 7 | 8 | Head to https://mycroft.ai/documentation for all public facing documentation. 9 | The website is automatically updated from this repo.
10 | 11 | ## Table of Contents 12 | - [Objective](#objective) 13 | - [Project lead and key contact](#project-lead-and-key-contact) 14 | - [Branches](#branches) 15 | - [Documentation standards](#documentation-standards) 16 | * [Conventions](#conventions) 17 | + [Styling conventions](#styling-conventions) 18 | - [How voice interactions are presented in documentation](#how-voice-interactions-are-presented-in-documentation) 19 | - [How to present a User Speaking](#how-to-present-a-user-speaking) 20 | * [Tone of voice](#tone-of-voice) 21 | * [Documentation checklist](#documentation-checklist) 22 | - [Contributing](#contributing) 23 | - [License](#license) 24 | 25 | ## Objective 26 | The purpose of this repo is to unify all the documentation for the public-facing components of Mycroft.AI, and to reduce the fragmentation of documentation. 27 | 28 | It is intended that the Markdown in this repo will be consumed by transformation layers or integrations to present the documentation in a user-friendly way. 29 | 30 | ## Project lead and key contact 31 | 32 | @krisgesling - Kris Gesling [kris.gesling@mycroft.ai](mailto:kris.gesling@mycroft.ai) 33 | 34 | ## Branches 35 | 36 | At the time of writing, the Docs Repository has only a single branch, `master`. This is the default branch for the repo. 37 | 38 | ## Documentation standards 39 | 40 | ### Conventions 41 | 42 | The following conventions are observed in Mycroft documentation: 43 | 44 | * All code is presented `like this` 45 | * All Mycroft-specific terms are **bolded** 46 | * Information is presented in 'inverted pyramid format'. This means an overview is given first, then more detailed information, then more granular information. 47 | * Human-written documentation is done in Markdown, on GitHub 48 | * Documentation of code is done through Google DocStrings format https://google.github.io/styleguide/pyguide.html#Comments and this is then passed to documentation generators. 49 | 50 | 51 | #### Styling conventions 52 | 53 | The following conventions are used for styling technical terms and related phrases. 54 | 55 | * **internet** is always written in lowercase, never as Internet unless at the start of a sentence. 56 | * **Mark 1** is always styled as such 57 | * **Metadata** is always styled as one word 58 | * **Micro SD card** is always styled as such, not as micro sd card, micro SD card or alternatives 59 | * **Picroft** is always styled as such 60 | * **PocketSphinx** is always styled as such 61 | * **Skill Author** is always used to refer to the developer, writer or creator of a **Skill**. 62 | * **Speak** is always capitalized given the importance of Speaking in a voice interface. 63 | * **User** is always capitalized as a sign of respect for our Users. 64 | 65 | ##### How voice interactions are presented in documentation 66 | 67 | To make documentation easier to read, voice interactions should be presented in a consistent style. 68 | 69 | ##### How to present a User Speaking 70 | 71 | When a User Speaks, it is always presented in the style of a quote: 72 | 73 | > Hey Mycroft, what time is it?
74 | 75 | When Mycroft Speaks, it is always presented in quote marks in preformatted style: 76 | 77 | `"The time is 12.30"` 78 | 79 | ### Tone of voice 80 | 81 | Mycroft documentation is written with the following tone of voice: 82 | 83 | * Knowledgeable but never arrogant 84 | * Helpful but encouraging of self-discovery 85 | * Easily readable, but not dumbed down 86 | 87 | ### Documentation checklist 88 | 89 | Use this handy checklist when creating or modifying documentation: 90 | 91 | - [ ] Have you checked for duplicate documentation? 92 | - [ ] Has the documentation been verified for accuracy? 93 | - [ ] Does the documentation comply with the Style Guide? 94 | - [ ] Does the documentation comply with tone of voice? 95 | 96 | ## Contributing 97 | [See the Contributing Guide for Mycroft Documentation](CONTRIBUTING.md) 98 | 99 | ## License 100 | [See the License for Mycroft Documentation](LICENSE.md) 101 | -------------------------------------------------------------------------------- /docs/skill-development/introduction/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Explore the fundamental building blocks of a Skill, and the knowledge required 4 | to create meaningful and engaging voice interactions. 5 | --- 6 | 7 | # Development Setup 8 | 9 | Mycroft Skills are the voice applications that provide different functionality for users. To create a Skill requires at least basic technical experience, a Mycroft installation or device, and an idea of what your Skill will do and how people will use it. 10 | 11 | ## Technical Requirements 12 | 13 | ### Python programming language 14 | 15 | Skills for Mycroft are written using the [Python programming language](https://www.python.org/). A simple Skill can be a great way for new developers to try Python out in a real project, whilst experienced programmers will quickly see the powerful possibilities available in a well crafted Skill. 16 | 17 | If you aren't familiar with the basics of Python, check out our [list of Python tutorials and resources](python-resources.md) to get you started. If you've programmed in other object-oriented languages, like JavaScript or C\#, then you'll be able to pick it up, but if you're totally new to programming, you might want to look at an [introductory programming course](https://www.edx.org/course/introduction-computer-science-mitx-6-00-1x-11). 18 | 19 | ### Github account 20 | 21 | Skills are hosted on [Github](https://github.com), so you will need to create an account there if you don't already have one. It is good to have an understanding of the [GitHub basics](https://guides.github.com/activities/hello-world/); however, the [Mycroft Skills Kit](../../mycroft-technologies/mycroft-skills-kit.md) also automates some of the more complex aspects of managing a Github repository \(or repo\). 22 | 23 | ### Running Mycroft 24 | 25 | To test your Skill out, you will need to [set up a Mycroft device](../../using-mycroft-ai/get-mycroft/). Mycroft can be installed on your computer, a Raspberry Pi using Picroft, or a dedicated device like the Mycroft Mark 1. Mycroft also comes with a number of helpful tools to create new Skills, host them on Github, run integration tests, and submit them for inclusion in the [Mycroft Marketplace](https://market.mycroft.ai).
If you aren't yet familiar with how Mycroft works, check out the [overview of Mycroft components](http://mycroft.ai/documentation/mycroft-software-hardware/) to understand the many technologies that come together to provide an intelligent voice assistant. 26 | 27 | ## What makes a good Skill? 28 | 29 | ### Fulfilling a need the user has 30 | 31 | Good Skills meet one or more of the user's needs. Popular Skills are popular because people use them frequently - for instance, to set alarms, reminders, or to identify the time in other time zones. On the other hand, a Skill that, say, recites π to 100 digits might be pretty cool, but when was the last time you needed to know π to 100 digits? Contrast that with the last time you set a reminder on your phone. 32 | 33 | ### Having an easy to use voice interface 34 | 35 | Just like a web page with a thoughtfully-designed interface is much more pleasant to use, a Skill with a well designed voice interface is a delight, not a chore, to use. You should anticipate the task the user is trying to accomplish, and how to make that as straightforward as possible. 36 | 37 | If you haven't already, be sure to read our Voice User Interface Design Guidelines: 38 | 39 | {% page-ref page="../voice-user-interface-design-guidelines/" %} 40 | 41 | If you have an idea for a Skill, it's worth joining [Mycroft Chat](https://chat.mycroft.ai), specifically the [~skills](https://chat.mycroft.ai/community/channels/skills) channel, and sharing what your plans are. You'll be able to get constructive and helpful feedback on your Skill from an experienced community. 42 | 43 | ## Skill terminology 44 | 45 | You'll notice some new terms as you start to develop Skills. 46 | 47 | * **utterance** - An utterance is a phrase spoken by the User, after the User says the Wake Word. `what's the weather like in Toronto?` is an utterance. 48 | * **dialog** - A dialog is a phrase that is spoken by Mycroft. Different Skills will have different dialogs, depending on what the Skill does. For example, in a _weather_ Skill, a dialog might be `the.maximum.temperature.is.dialog`. 49 | * **intent** - Mycroft matches utterances that a User speaks with a Skill by determining an intent from the utterance. For example, if a User speaks `Hey Mycroft, what's the weather like in Toronto?` then the intent will be identified as _weather_ and matched with the _Weather Skill_. When you develop new Skills, you need to define new intents. 50 | 51 | If you encounter anything else you're not familiar with, check out the [Mycroft Glossary](../../about-mycroft-ai/glossary.md). 52 | 53 | -------------------------------------------------------------------------------- /docs/skill-development/mycroft-skills-manager/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Mycroft Skills Manager (msm) is a command line tool used to add, manage and 4 | remove Skills on any Mycroft installation. 5 | --- 6 | 7 | # Mycroft Skills Manager 8 | 9 | Mycroft Skills Manager (MSM) is a command line tool used to add, manage and remove Skills on any Mycroft installation. It can install any Skill listed on the [Mycroft Skills Repository](https://github.com/MycroftAI/mycroft-skills) or from any Github repository. It is a handy tool for Developers and those who may frequently reconfigure, install and uninstall Skills.
10 | 11 | ## Video Tutorial 12 | 13 | {% embed url="https://www.youtube.com/watch?v=jJjLro140Go" %} 14 | Video Tutorial: Mycroft Skills Manager 15 | {% endembed %} 16 | 17 | ## Information 18 | 19 | ### List all Skills 20 | 21 | To list all Skills available in the Marketplace run: 22 | 23 | ``` 24 | mycroft-msm list 25 | ``` 26 | 27 | ### Search for a Skill 28 | 29 | To search the available Skills run: 30 | 31 | ``` 32 | mycroft-msm search <skill> 33 | ``` 34 | 35 | ### Show information 36 | 37 | To show all available information on a Skill you can use the name or repository url: 38 | 39 | ``` 40 | mycroft-msm info <skill> 41 | ``` 42 | 43 | ## Install 44 | 45 | ### From the Marketplace 46 | 47 | To install a Skill from the Mycroft Marketplace you can provide a rough title and MSM will return the most likely match. 48 | 49 | ``` 50 | mycroft-msm install cocktail 51 | ``` 52 | 53 | ### From a Specific Author 54 | 55 | If you know the author of the Skill, particularly where multiple Skills with similar names exist, the `install` command has an additional optional parameter. 56 | 57 | If I wanted a Dice Skill, but only if it was authored by Forslund, I would run: 58 | 59 | ``` 60 | mycroft-msm install dice forslund 61 | ``` 62 | 63 | Such a Skill doesn't exist, so MSM returns a list of alternative options from the same author: 64 | 65 | ``` 66 | INFO - building SkillEntry objects for all skills 67 | INFO - Best match (0.34): cocktails by forslund 68 | MultipleSkillMatches: cocktails, youtube-music-skill.forslund, mycroft-spotify, game-zork, fallback-aiml 69 | ``` 70 | 71 | ### From a Github repository 72 | 73 | {% hint style="warning" %} 74 | Installing Skills from outside the Mycroft Marketplace is done at your own risk. To be included in the Marketplace, all Skills are reviewed to ensure they contain no malicious or otherwise harmful code. We strongly recommend reviewing any code you are loading onto your device. 75 | {% endhint %} 76 | 77 | To install a Skill that is not included in the Mycroft Marketplace use the url of the repository. 78 | 79 | ``` 80 | mycroft-msm install https://github.com/krisgesling/tea-skill 81 | ``` 82 | 83 | ### From a Marketplace branch 84 | 85 | To install a Skill from a different branch of the Mycroft-Skills Marketplace repository, use the `-b` flag: 86 | 87 | ``` 88 | mycroft-msm -b dev install weather 89 | ``` 90 | 91 | ### From an Alternate Marketplace 92 | 93 | To install a Skill from an independently managed listing of Skills, you can specify the url for that Marketplace using the `-u` flag: 94 | 95 | ``` 96 | mycroft-msm -u https://github.com/krisgesling/mycroft-skills install cocktails 97 | ``` 98 | 99 | ## Updating 100 | 101 | ### All installed Skills 102 | 103 | To update all Skills installed on a device run: 104 | 105 | ``` 106 | mycroft-msm update 107 | ``` 108 | 109 | Note that if Mycroft detects that any file in a Skill has been modified, it will not update that Skill. This is to prevent loss of any work you are doing in that directory. See [MSM Troubleshooting > Uncommitted Changes](msm-troubleshooting.md#uncommitted-changes) for tips to resolve this.
110 | 111 | ### Install and update defaults 112 | 113 | To install all default Skills for the device, as well as update all Skills, run: 114 | 115 | ``` 116 | mycroft-msm default 117 | ``` 118 | 119 | ## Removal 120 | 121 | To uninstall or remove any Skill from the device run: 122 | 123 | ``` 124 | mycroft-msm remove <skill> 125 | ``` 126 | 127 | Note that default Skills will be reinstalled by the system unless they are explicitly blacklisted. See ["How do I disable a Skill?"](../faq.md#how-do-i-disable-a-skill) for more info. 128 | 129 | ### Specifying author 130 | 131 | The `remove` command also has an optional `author` parameter that can be used to specify which Skill you intend to uninstall. 132 | 133 | ``` 134 | mycroft-msm remove <skill> <author> 135 | ``` 136 | 137 | ## Show Help Information 138 | 139 | To show help information describing how to use MSM's different commands, use the `-h` flag. 140 | 141 | For a complete overview run: 142 | 143 | ``` 144 | mycroft-msm -h 145 | ``` 146 | 147 | Or for command specific information include the command. 148 | 149 | ``` 150 | mycroft-msm install -h 151 | ``` 152 | -------------------------------------------------------------------------------- /docs/skill-development/displaying-information/mark-1-display.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: How to make use of the enclosure display on the Mark 1. 3 | --- 4 | 5 | # Mark 1 Display 6 | 7 | ## About **Enclosures** 8 | 9 | The Mark 1 **Device** is housed in an **Enclosure**. An Enclosure is a shell that houses a **Device** that runs Mycroft. In the future, different **Enclosures** will have different capabilities. 10 | 11 | ## The Mark 1 Enclosure capabilities 12 | 13 | The Mark 1 _mouth_ and _eyes_ can be controlled by **Skills** using the `self.enclosure` object inherited from the `MycroftSkill` base class. This object acts as an interface to the **Enclosure** and allows the Skill creator to draw to the _mouth_ display. 14 | 15 | This is how the _mouth_ and _eyes_ are made to change during operations such as audio playback. 16 | 17 | ### Drawing to the _mouth_ display 18 | 19 | #### Drawing text to the _mouth_ display 20 | 21 | Text can be sent to the display using the `mouth_text()` method of the `enclosure` object. 22 | 23 | ```python 24 | self.enclosure.mouth_text('The meaning of life, the universe and everything is 42') 25 | ``` 26 | 27 | If the text is too long to fit on the display, the text will scroll. 28 | 29 | @TODO how many characters will fit on the display before it will scroll? 30 | 31 | #### Drawing images to the _mouth_ display 32 | 33 | **Clearing an existing image from the mouth display** 34 | 35 | Before writing an image to the _mouth_ display, you should clear any previous image. 36 | 37 | ```python 38 | self.enclosure.mouth_display(img_code="HIAAAAAAAAAAAAAA", refresh=False) 39 | self.enclosure.mouth_display(img_code="HIAAAAAAAAAAAAAA", x=24, refresh=False) 40 | ``` 41 | 42 | _Copied from the skill-date-time_ 43 | 44 | #### How images are drawn on the _mouth_ display 45 | 46 | The _mouth_ display is a grid, 32 pixels wide and 8 pixels high. 47 | 48 | There are two ways to draw an image on the _mouth_ display. 49 | 50 | **Addressing each pixel using a string encoding** 51 | 52 | You can draw an image to the _mouth_ display by binary encoding pixel information in a string. 53 | 54 | The binary encoding is straightforward value substitution.
55 | 56 | | Letter Value | Pixel value | 57 | | :--- | :--- | 58 | | A | 0 | 59 | | B | 1 | 60 | | C | 2 | 61 | | D | 3 | 62 | | E | 4 | 63 | | F | 5 | 64 | | G | 6 | 65 | | H | 7 | 66 | | I | 8 | 67 | 68 | and so on. 69 | 70 | ```python 71 | self.enclosure.mouth_display(img_code="HIAAAAAAAAAAAAAA", refresh=False) 72 | self.enclosure.mouth_display(img_code="HIAAAAAAAAAAAAAA", x=24, refresh=False) 73 | ``` 74 | 75 | The code above clears the image by sending a string consisting of `HI`, which stands for a width of 7 and a height of 8, with each A standing for a segment of 4 pixels in the off state. 76 | 77 | @TODO I kinda get this, but I think we really need a grid image here to show how it works - to make it easier to understand. 78 | 79 | **Sending a PNG image to the mouth display** 80 | 81 | Another way to draw an image on the _mouth_ display is to create a PNG-formatted image with a width of 32 pixels and a height of 8 pixels, then use the `mouth_display_png()` method of the `enclosure` object. 82 | 83 | The image should be black and white, with white meaning a dark pixel, and black indicating an illuminated pixel. 84 | 85 | `mouth_display_png()` expects the first argument to be the image absolute path. Optional arguments are: 86 | 87 | * `threshold`: The value at which a pixel should be considered 'dark' or 'illuminated' 88 | * `invert`: Treat white in the image as illuminated pixels, and black as dark pixels 89 | * `x`: The x position \(horizontal\) at which the image should be displayed, in pixels 90 | * `y`: The y position \(vertical\) at which the image should be displayed, in pixels 91 | * `refresh`: clear the display before writing to it 92 | 93 | @TODO all the above needs to be validated - the information is educated guesswork 94 | 95 | ```python 96 | self.enclosure.mouth_display_png('/path/to/image.png', threshold=70, invert=False, x=0, y=0, refresh=True) 97 | ``` 98 | 99 | Example image: ![A note symbol](https://akeiexil.files.wordpress.com/2017/08/note1.png) 100 | 101 | **Tools for converting PNG image representations to string representations** 102 | 103 | If you don't want to convert PNG files at runtime \(for example when creating simple animations\) [this short python script](https://gist.github.com/forslund/9d8805fd7adb9e74ec3ea321e1676a21) will convert PNG files to strings compatible with the `img_code` of `self.enclosure.mouth_display()`. 104 | 105 | #### Resetting the display to the default state 106 | 107 | When the **Skill** is finished, you should reset the **Enclosure** to the default state using 108 | 109 | ```python 110 | self.enclosure.reset() 111 | ``` 112 | 113 | This will clear the screen and blink the Mark 1's _eyes_ once. 114 | 115 | -------------------------------------------------------------------------------- /docs/skill-development/skill-structure/lifecycle-methods.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Mycroft Skills provide a number of methods to perform actions at different 4 | points during the lifecycle of the Class instance. 5 | --- 6 | 7 | # Lifecycle Methods 8 | 9 | The MycroftSkill class that all Skills inherit from contains a number of methods that can be overridden by an instance of the Class. This enables a Skill to execute code at specific points in the lifecycle of a Skill. Each of these is optional, meaning none are required to be defined in your Skill. 10 | 11 | ## **\_\_init\_\_** 12 | 13 | The `__init__` method is called when the Skill is first constructed.
It is often used to declare variables or perform setup actions; however, it cannot utilize other MycroftSkill methods and properties, as the Skill has not yet been fully constructed. This includes `self.bus` and `self.settings`, which must instead be called from your Skill's `initialize` method. 14 | 15 | The `__init__` method is optional, but if used, the `__init__` method from the Super Class \(MycroftSkill\) must be called. 16 | 17 | In the following example, we assign a variable `learning` to be `True`. The variable is appended to the instance using `self` so that we can access this variable in any part of our Skill. 18 | 19 | ```python 20 | def __init__(self): 21 | super().__init__() 22 | self.learning = True 23 | ``` 24 | 25 | ## Initialize 26 | 27 | The `initialize` method is called after the Skill is fully constructed and registered with the system. It is used to perform any final setup for the Skill, including accessing Skill settings. 28 | 29 | In the following example, we access the `my_setting` value that would have been defined in the Skill's [`settingsmeta.json`](skill-settings.md). We use the `get` method in case the variable `my_setting` is undefined. 30 | 31 | ```python 32 | def initialize(self): 33 | my_setting = self.settings.get('my_setting') 34 | ``` 35 | 36 | ## Converse 37 | 38 | The `converse` method can be used to handle follow up utterances prior to the normal intent handling process. It can be useful for handling utterances from a User that do not make sense as a standalone [intent](../user-interaction/intents/). 39 | 40 | The method receives two arguments: 41 | 42 | * `utterances` \(list\): The utterances from the user. If there are multiple utterances, consider them all to be transcription possibilities. Commonly, the first entry is the raw utterance and the second is a `normalized` version of the first utterance. 43 | * `lang` \(string\): The language the utterance is in. This defaults to None. 44 | 45 | Once the Skill has initially been triggered by the User, the `converse` method will be called each time an utterance is received. It is therefore important to check the contents of the utterance to ensure it matches what you expected. 46 | 47 | If the utterance is handled by the converse method, we return `True` to indicate that the utterance should not be passed onto the normal intent matching service and no other action is required by the system. If the utterance was not handled, we return `False` and the utterance is passed on first to other `converse` methods, and then to the normal intent matching service. 48 | 49 | In the following example, we check that `utterances` is not empty and that the first utterance matches vocabulary from `understood.voc`. If the user has understood, we speak a line from `great.dialog` and return `True` to indicate the utterance has been handled. If the vocabulary does not match, then we return `False`, as the utterance should be passed to the normal intent matching service. 50 | 51 | ```python 52 | def converse(self, utterances, lang): 53 | if utterances and self.voc_match(utterances[0], 'understood'): 54 | self.speak_dialog('great') 55 | return True 56 | else: 57 | return False 58 | ``` 59 | 60 | ## Stop 61 | 62 | The `stop` method is called anytime a User says "Stop" or a similar command. It is useful for stopping any output or process that a User might want to end, such as media playback or an expired alarm notification, without needing to issue a Skill-specific utterance.
63 | 64 | In the following example, we call a method `stop_beeping` to end a notification that our Skill has created. 65 | 66 | ```python 67 | def stop(self): 68 | self.stop_beeping() 69 | ``` 70 | 71 | ## Shutdown 72 | 73 | The `shutdown` method is called during Skill process termination. It is used to perform any final actions to ensure all processes and operations in execution are stopped safely. This might be particularly useful for Skills that have scheduled future events, may be writing to a file or database, or that have initiated new processes. 74 | 75 | In the following example, we cancel a scheduled event and call a method in our Skill to stop a subprocess we initiated. 76 | 77 | ```python 78 | def shutdown(self): 79 | self.cancel_scheduled_event('my_event') 80 | self.stop_my_subprocess() 81 | ``` 82 | 83 | -------------------------------------------------------------------------------- /docs/skill-development/marketplace-submission/skill-readme-md.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | The README.md file in each Skill repository is used to provide an overview of 4 | the Skill and its functionality. 5 | --- 6 | 7 | # Skill README.md 8 | 9 | This file is used by the [Skills Marketplace](https://market.mycroft.ai/) to display the appropriate information including the [Card view](https://market.mycroft.ai/skills) and [Details view](https://market.mycroft.ai/skill/mycroft-weather). 10 | 11 | When creating your Skill, the [Mycroft Skills Kit](https://mycroft.ai/documentation/skills/msk/) will automatically generate your first README.md for you. Alternatively you can use the [Skills Meta Editor](https://mycroft.ai/skill-meta-editor) to generate a compatible README.md and ensure all the relevant information is included. 12 | 13 | ## Top tips for a great README 14 | 15 | * The title should not include the word Skill 16 | * There is limited space on the Skill Card in the Marketplace. Keep Skill titles, one-line descriptions and intent examples clear and concise, as longer strings of text will be truncated \(cut off\). Try to keep: 17 | * Titles under 22 characters 18 | * One-line descriptions under 50 characters 19 | * First example under 40 characters \(this will be shown on the Card view and should make sense as a single phrase\) 20 | * All other examples under 50 characters 21 | * Note: The one-line description is the text between the Skill title and the "About" section. 22 | * The primary category, being the category in bold, is where the Skill will be displayed in the Marketplace by default. 23 | 24 | ## Suggesting changes to an existing README 25 | 26 | A GitHub account is required to submit changes; please register for an account now if necessary. Then head to the Skill's GitHub repository \(repo\) to get started. You can always find a link to a Skill's repo in the Marketplace. Simply click on the Skill, and look for a GitHub link on the right. 27 | 28 | ### Edit and make a pull request \(PR\) 29 | 30 | This is the simplest and most direct method for Skill Authors if you have a clear idea of what changes need to be made. 31 | 32 | README.md files are written using the Markdown syntax, which is a way to style text on the web. If you aren't familiar with Markdown formatting, GitHub has an excellent [3 minute guide](https://guides.github.com/features/mastering-markdown/) to get you started. 33 | 34 | #### How to 35 | 36 | **1.
Edit the file** 37 | 38 | * Select the README.md file 39 | * Then the pencil icon to edit the file 40 | 41 | **2. Review your changes using the** [**Skills Meta Editor**](https://mycroft.ai/skill-meta-editor) 42 | 43 | * GitHub provides the preview tab for quickly checking your formatting. However, in the Mycroft Marketplace, there are several ways a Skill might be shown. 44 | * Copy and paste the file contents into the README.md tab of the [Meta Editor](https://mycroft.ai/skill-meta-editor) to see a preview of the Card and Detail views. 45 | * Please be aware that after importing text into this tool, it may be modified to fit the standard features of a Mycroft Skill's README. 46 | 47 | **3. Propose the changes** 48 | 49 | * At the bottom of the edit page on GitHub will be a short "Propose file change" form. Submitting this form will automatically create a [fork](https://guides.github.com/activities/forking/) of the repo in your account, as well as a [pull request](https://guides.github.com/activities/forking/#making-a-pull-request) to the main project. 50 | * This provides the Skill Author with a list of any proposed changes, as well as the message you include in this form. 51 | * Please try to be clear and concise in your message as to what has been changed and why. Be aware that we have developers from across the world, and English may not be their first language. 52 | 53 | ### Create a new 'issue' 54 | 55 | This is the best option if you are unsure about what the final text should look like. 56 | 57 | #### How to 58 | 59 | **1. Create a 'new issue'** 60 | 61 | * From the Skills GitHub repo, select the 'Issues' tab 62 | * Then the green 'New issue' button on the right. 63 | 64 | **2. Write a clear and concise issue message** 65 | 66 | * The title of an issue should be a very brief overview of the changes you are suggesting. If the changes are limited to a specific section or for a particular reason, say that here. An example might be, "Suggested changes to description for readability". 67 | * In the main comment area, please detail the changes you are suggesting, and try to be descriptive about which section you are referring to. Instead of "there's a typo in skill", we might write "In the first intent example there is a typo: 'skil' should be 'skill'." 68 | * If you are proposing multiple changes, consider grouping them under headings. 69 | * If you are proposing a block of text, consider using the 'quote' formatting to highlight it. 70 | 71 | **3. Preview and Submit** 72 | 73 | * Just underneath the title field is a 'Preview' tab. This lets you check the formatting of your message. 74 | * Once you're happy, hit submit. 75 | 76 | -------------------------------------------------------------------------------- /docs/mycroft-technologies/precise.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: >- 3 | Precise is a Wake Word Listener based on a neural network trained on sound 4 | samples. 5 | --- 6 | 7 | # Precise 8 | 9 | As its name suggests, a Wake Word Listener's job is to continually listen to sounds and speech around the Device, and activate when the sounds or speech match a Wake Word. Unlike other hotword detection products, Mycroft Precise is fully open source. 10 | 11 | ## Precise vs PocketSphinx 12 | 13 | Precise is based on a neural network that is trained on _sound patterns_ rather than _word patterns_. This reduces the dependence it has on particular languages or accents.
14 | 15 | [PocketSphinx](https://github.com/cmusphinx/pocketsphinx) is an alternative to Precise. Unlike Precise, PocketSphinx recognizes **Wake Words** based on the [CMU Flite dictionary of sounds](http://www.speech.cs.cmu.edu/cgi-bin/cmudict). 16 | 17 | Precise is the default **Wake Word** Listener for the "Hey Mycroft" wake word; PocketSphinx provides a fallback if Precise is unavailable. 18 | 19 | ### How do I change the **Wake Word Listener** to PocketSphinx? 20 | 21 | To change the **Wake Word Listener** to PocketSphinx if it has been set to Precise, Speak: 22 | 23 | > Hey Mycroft, set the Listener to default 24 | 25 | or 26 | 27 | > Hey Mycroft, set the Listener to PocketSphinx 28 | 29 | Mycroft will respond: 30 | 31 | `"I've set the Listener to PocketSphinx"` 32 | 33 | To return to Precise, speak: 34 | 35 | > Hey Mycroft, set the Listener to Precise 36 | 37 | Mycroft will respond: 38 | 39 | `"I've set the Listener to Precise"` 40 | 41 | ### How do I tell which **Wake Word Listener** my Mycroft **Device** is using? 42 | 43 | To find out which **Wake Word Listener** is active for the Mycroft **Device** you are using, simply Speak: 44 | 45 | > Hey Mycroft, what is the Listener? 46 | 47 | or 48 | 49 | > Hey Mycroft, tell me what Listener you are using 50 | 51 | If you are using Precise, Mycroft will respond: 52 | 53 | `"The current Listener is Precise"` 54 | 55 | ### How do I install Precise as my **Wake Word Listener**? 56 | 57 | If Precise is not already installed, speak to your **Device**: 58 | 59 | > Hey Mycroft, set the Listener to Precise 60 | 61 | Mycroft will respond: 62 | 63 | `"Downloading the new listener, this will take a bit and you won't be able to use me until it completes. Give me a minute before attempting to activate me again."` 64 | 65 | _NOTE: As Precise is installing, it will download a 40MB file to your **Device**. During this initial download period, your **Device** will not be able to respond to a **Wake Word**._ 66 | 67 | ## Other hotword detection technologies 68 | 69 | Unlike the other accurate hotword detection products compared below, Mycroft Precise is fully open source.
70 | 71 | | | Open Source | Accurate | Languages | Model Trainer | Commercial Use | GitHub | 72 | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | 73 | | **Precise** | ![](https://images2.imgbox.com/c7/63/UVtCmPfa_o.png) | ![](https://images2.imgbox.com/c7/63/UVtCmPfa_o.png) | Python | [Open Source Script](https://github.com/MycroftAI/mycroft-precise/wiki/Training-your-own-wake-word) | Permitted | [MycroftAI/mycroft-precise](https://github.com/MycroftAI/mycroft-precise) | 74 | | **Snowboy** | ![](https://images2.imgbox.com/99/47/lm3vjOzO_o.png) | ![](https://images2.imgbox.com/c7/63/UVtCmPfa_o.png) | Node, Java, Python, GO, Perl, iOS, Android | [Web API](http://docs.kitt.ai/snowboy/#restful-api) | [License Fee](https://github.com/Kitt-AI/snowboy/blob/master/README_commercial.md#evaluation-license-faq) | [Kitt-AI/snowboy](https://github.com/kitt-ai/snowboy) | 75 | | **Porcupine** | ![](https://images2.imgbox.com/99/47/lm3vjOzO_o.png) | ![](https://images2.imgbox.com/c7/63/UVtCmPfa_o.png) | C, Python, Android | [Closed Binary](https://github.com/Picovoice/Porcupine/tree/master/tools/optimizer) | [License Fee](https://github.com/Picovoice/Porcupine#license) | [Picovoice/Porcupine](https://github.com/Picovoice/Porcupine) | 76 | | **PocketSphinx** | ![](https://images2.imgbox.com/c7/63/UVtCmPfa_o.png) | ![](https://images2.imgbox.com/99/47/lm3vjOzO_o.png) | Almost all | [Open Source Script](https://cmusphinx.github.io/wiki/tutorialam/) | Permitted | [cmusphinx/pocketsphinx](https://github.com/cmusphinx/pocketsphinx) | 77 | 78 | ## Training Custom Wake Words 79 | 80 | Training your own custom Wake Word model for Precise requires at least functional experience using the Linux command line and an understanding of basic machine learning concepts. It requires time and training data. 81 | 82 | An instructional overview is available on the [Precise repository on Github](https://github.com/MycroftAI/mycroft-precise/wiki/Training-your-own-wake-word#how-to-train-your-own-wake-word). Community member El-tocino has also provided a short write-up of their [tips for getting the best result](https://github.com/el-tocino/localcroft/blob/master/precise/Precise.md). 83 | 84 | ## Additional Support 85 | 86 | If you have questions, you can reach out to others in the Community via the [Troubleshooting channel in Chat](https://chat.mycroft.ai/community/channels/troubleshooting).
## Additional Support

If you have questions, you can reach out to others in the Community via the [Troubleshooting channel in Chat](https://chat.mycroft.ai/community/channels/troubleshooting).

-------------------------------------------------------------------------------- /redirects/redirect-mapping.csv: --------------------------------------------------------------------------------

^/documentation/mycroft-software-hardware/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/overview$,1
^/documentation/contributing/?$,https://mycroft-ai.gitbook.io/docs/contributing$,1
^/documentation/mycroft-roadmap/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/mycroft-roadmap$,1
^/documentation/glossary/?$,https://mycroft-ai.gitbook.io/docs/glossary$,1
^/documentation/getting-mycroft/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/README$,1
^/documentation/home-mycroft-ai-pairing/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/pairing-your-device$,1
^/documentation/basic-commands/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/basic-commands$,1
^/documentation/mark-1/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/mark-1$,1
^/documentation/picroft/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/picroft$,1
^/documentation/picroft/picroft-audio/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/picroft$,1
^/documentation/picroft/picroft-wifi/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/picroft#connecting-picroft-to-a-wired-or-wifi-network$,1
^/documentation/picroft/picroft-automatic-update/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/picroft$,1
^/documentation/linux/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/linux$,1
^/documentation/mycroft-for-macos-and-windows-with-virtualbox/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/macos-and-windows-with-virtualbox$,1
^/documentation/docker/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/docker$,1
^/documentation/android/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/get-mycroft/android$,1
^/documentation/mycroft-conf/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/customizations/mycroft-conf$,1
^/documentation/logs/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/troubleshooting/log-files$,1
^/documentation/skills/?$,https://market.mycroft.ai/skills$,1
^/documentation/skills/developing-skills/?$,https://mycroft-ai.gitbook.io/docs/skill-development/introduction$,1
^/documentation/skills/introduction-developing-skills/?$,https://mycroft-ai.gitbook.io/docs/skill-development/introduction$,1
^/documentation/skills/automatic-testing/?$,https://mycroft-ai.gitbook.io/docs/skill-development/next-steps/automatic-testing$,1
^/documentation/skills/skills-acceptance-process/?$,https://mycroft-ai.gitbook.io/docs/skill-development/marketplace-submission/skills-acceptance-process/README$,1
^/documentation/skills/skill-submission/?$,https://mycroft-ai.gitbook.io/docs/skill-development/marketplace-submission/README$,1
^/documentation/skills/msk/?$,https://mycroft-ai.gitbook.io/docs/skill-development/mycroft-skills-kit$,1
^/documentation/skills/skill-settings/?$,https://mycroft-ai.gitbook.io/docs/skill-development/next-steps/skill-settings$,1
^/documentation/skills/common-issues-in-skills-development/?$,https://mycroft-ai.gitbook.io/docs/skill-development/common-issues$,1
^/documentation/skills/repurposing-skills/?$,https://mycroft-ai.gitbook.io/docs/skill-development/repurposing-skills$,1
^/documentation/skills/conversational-context/?$,https://mycroft-ai.gitbook.io/docs/skill-development/next-steps/conversational-context$,1
^/documentation/skills/audio-service/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/audio-service$,1
^/documentation/skills/display-control/?$,https://mycroft-ai.gitbook.io/docs/skill-development/next-steps/display-control$,1
^/documentation/skills/fallback-skill/?$,https://mycroft-ai.gitbook.io/docs/skill-development/skill-types/fallback-skill$,1
^/documentation/skills/common-play-framework/?$,https://mycroft-ai.gitbook.io/docs/skill-development/skill-types/common-play-framework$,1
^/documentation/msm/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/msm$,1
^/documentation/message-bus/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mycroft-core/message-bus$,1
^/documentation/languages?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/customizations/languages$,1
^/documentation/adapt/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/adapt/README$,1
^/documentation/adapt/adapt-examples/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/adapt/adapt-examples$,1
^/documentation/adapt/adapt-tutorial/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/adapt/adapt-tutorial$,1
^/documentation/mimic/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mimic/mimic-overview$,1
^/documentation/mimic-recording-studio/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/mimic/mimic-recording-studio$,1
^/documentation/precise/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/precise$,1
^/documentation/padatious/?$,https://mycroft-ai.gitbook.io/docs/mycroft-technologies/padatious$,1
^/documentation/troubleshooting/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/troubleshooting/general-troubleshooting$,1
^/documentation/troubleshooting/audio-troubleshooting/?$,https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/troubleshooting/audio-troubleshooting$,1

-------------------------------------------------------------------------------- /docs/skill-development/voice-user-interface-design-guidelines/interactions-and-guidelines/statements-and-prompts.md: --------------------------------------------------------------------------------

---
description: >-
  There are two basic ways for Mycroft to interact with the user: Statements
  and Prompts.
---

# Statements and Prompts

You can think of **Prompts** as questions and **Statements** as information provided to the user that needs no follow-up response. For example, a weather forecast like this would be considered a Statement:

> Today’s forecast is sunny with a high of 60 and a low of 45.

### Statements

For a lot of skills the conversation might end with a simple Statement from Mycroft, and no further action is necessary. Try to imagine what the user is trying to accomplish: if a simple Statement gets the job done, there is no reason to keep the conversation rolling. In fact, a follow-up might annoy the user with unnecessary interaction. A minimal example of a Statement in skill code is sketched below.
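As a rough illustration, here is how a Statement might look in skill code: the skill speaks once and the interaction ends. The skill name, intent file, and dialog file below are hypothetical placeholders, not part of any official skill.

```python
# Minimal sketch of a Statement: the skill speaks once and the
# interaction ends. File names below are hypothetical placeholders.
from mycroft import MycroftSkill, intent_file_handler

class ForecastSkill(MycroftSkill):
    @intent_file_handler('forecast.intent')
    def handle_forecast(self, message):
        # Renders a line such as "Today's forecast is sunny with a high
        # of 60 and a low of 45." from forecast.dialog -- no follow-up.
        self.speak_dialog('forecast',
                          {'condition': 'sunny', 'high': 60, 'low': 45})
```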
### Prompts

It may be tempting to always give users specific instructions like traditional automated phone systems \(Interactive Voice Response\). Many phone systems are notorious for being too verbose and difficult to follow.

With Mycroft we’re trying to break that mold and make the interaction natural. If you follow the phone system method you may be giving the user the exact phrasing to say, but you’re also taking up valuable time and training them to think the system is very narrow in capability. In the event that the user does give a response that your skill cannot handle, create follow-up prompts that steer the user back on track.

Remember, there are no errors in a cooperative conversation.

#### Avoid

| Speaker | |
| :--- | :--- |
| **Mycroft** | _How many people are playing? For example, you can say 2 players._ |

#### Better

| Speaker | |
| :--- | :--- |
| **Mycroft** | _How many players?_ |
| **User** | _My brother and myself_ |
| **Mycroft** | _I’m sorry, what was the number of players?_ |
| **User** | _Two_ |

In the first example the user is told explicitly what they can say, but the prompt is unnatural; we don’t typically suggest responses to a conversation partner in real life. These long-winded prompts can become tiresome if they are used repeatedly throughout the skill. Remember the phone system example: poorly designed automated phone systems typically inundate the user with many options and additional instructions at every step of the interaction.

In the second example we see a better prompt. Although the user gives a response that is easy for a human to understand, it is more difficult to design a skill to understand it. Instead, the skill designer can apply a re-prompt strategy that steers the user back on track and doesn’t require them to start the interaction over \(a code sketch of this strategy appears below\). In this case the re-prompt changes the wording slightly to make it clear that the user needs to say a number. The next time the user interacts with this Skill, they will likely say a number in their first interaction.

Determining whether to respond to the user with a Statement or a Prompt can be a bit tricky. When the user is somewhat vague it is tempting to assume the user’s intent in order to speed along the interaction. Sometimes this is the right approach and sometimes it is not. If you are very confident in the user’s input then it is alright to assume, and respond with a Statement. For example, in the Wikipedia Skill, one of the Mycroft Essential Skills, the Wikipedia API is used to confidently select the best response.

**Mycroft Wikipedia Skill**

| Speaker | |
| :--- | :--- |
| **User** | _Tell me about Abraham Lincoln_ |
| **Mycroft** | _Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the United States from 1861 to 1865. Lincoln led the nation through its greatest moral, constitutional, and political crisis in the American Civil War_. |
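Returning to the number-of-players example, below is a minimal sketch of the re-prompt strategy using `MycroftSkill.get_response`, which speaks a prompt, listens for a reply, validates it, and re-prompts on failure. The intent and dialog file names are hypothetical placeholders.

```python
# Minimal sketch of a Prompt with a re-prompt: get_response speaks a
# dialog, listens, validates the reply, and re-prompts on failure.
# Intent and dialog file names are hypothetical placeholders.
from mycroft import MycroftSkill, intent_file_handler
from mycroft.util.parse import extract_number

class GameSkill(MycroftSkill):
    @intent_file_handler('start.game.intent')
    def handle_start_game(self, message):
        reply = self.get_response(
            'how.many.players',                     # "How many players?"
            validator=lambda utt: extract_number(utt) is not False,
            on_fail='what.was.the.number',          # "I'm sorry, what was the number of players?"
            num_retries=2)
        if reply is not None:                       # None means the retries were exhausted
            players = int(extract_number(reply))
            self.speak_dialog('starting.game', {'players': players})
```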
In contrast, let’s look at an example where a follow-up Prompt is a better approach. You’re working on a skill that gives important information about birds, such as wingspan, laden and unladen airspeed, etc.

**Avoid**

| Speaker | |
| :--- | :--- |
| **User** | _What is the airspeed velocity of an unladen Swallow?_ |
| **Mycroft** | _The unladen airspeed of the European Swallow is 20.1 miles per hour._ |

**Better**

| Speaker | |
| :--- | :--- |
| **User** | _What is the airspeed velocity of an unladen Swallow?_ |
| **Mycroft** | _What do you mean, African or European Swallow?_ |
| **User** | _European Swallow._ |
| **Mycroft** | _The unladen airspeed of the European Swallow is 20.1 miles per hour._ |

In the first example Mycroft assumes the user is referring to the European Swallow; however, there is nothing to give confidence to that assumption. It is dangerous to assume the user meant the European Swallow, since it is just as likely they meant the African Swallow. When there is no confidence in one answer over another, Mycroft should follow up with a Prompt, as demonstrated in the second example.

Another approach to solving this problem is to respond with a Statement that offers the most likely answer and informs the user that other answers are available.

--------------------------------------------------------------------------------