├── .github └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── NOTICE ├── README.md ├── buildspec.yml ├── makefile └── modules ├── assets ├── Amazon_Cake.mp4 ├── alexaCake_960x960.png ├── confetti_1280x800.png ├── confetti_1920x1080.png ├── lights_1280x800.png └── lights_1920x1080.png ├── code ├── module2 │ ├── helloWorld.json │ └── helloWorldData.json ├── module3 │ ├── documents │ │ ├── launchDocument.json │ │ └── launchDocumentDatasource.json │ ├── en-US.json │ ├── index.js │ ├── package.json │ └── util.js ├── module4 │ ├── CORSConfiguration.xml │ ├── documents │ │ ├── birthdayDocument.json │ │ ├── birthdayDocumentDatasource.json │ │ ├── launchDocument.json │ │ ├── launchDocumentCountdownDatasource.json │ │ ├── launchDocumentDatasource.json │ │ └── my-caketime-apl-package.json │ ├── en-US.json │ ├── index.js │ ├── package.json │ └── util.js └── module5 │ ├── documents │ ├── birthdayDocument.json │ ├── birthdayDocumentDatasource.json │ ├── launchDocument.json │ ├── launchDocumentData.json │ └── my-cakewalk-apl-package-v2.json │ ├── en-US.json │ ├── index.js │ ├── package.json │ └── util.js ├── home.adoc ├── images ├── APLSkillFlowDiagram.png ├── AuthoringToolLandingPage.png ├── EnterSkill.png ├── FinalLaunchScreen.gif ├── MakeCakeTime.gif ├── NewExperienceDialog.png ├── S3Access.png ├── S3Provision.png ├── WelcomeToCakeTime.png ├── authoringToolWithBirthdayImage.png ├── birthdayVideo.gif ├── brokenHelloSpot.png ├── createLaunchScreenJSON.gif ├── definedEasingCurves.png ├── finalBirthdayScreen.png ├── finalCountdownScreen.png ├── finalHelloAPL.png ├── finalNoContextScreen.png ├── firstHelloWorld.gif ├── firstHelloWorld.png ├── helloCakeTimeGUI.png ├── interfacesClick.png ├── saveLaunchDocument.png └── toggleAPL.png ├── module1.adoc ├── module2.adoc ├── module3.adoc ├── module4.adoc ├── module5.adoc ├── module6.adoc └── quick-start.adoc /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | *Issue #, if available:* 2 | 3 | *Description of changes:* 4 | 5 | 6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | *.html 3 | *.pdf 4 | build/ 5 | 6 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 
5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check [existing open](https://github.com/alexa/skill-sample-nodejs-first-apl-skill/issues), or [recently closed](https://github.com/alexa/skill-sample-nodejs-first-apl-skill/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/alexa/skill-sample-nodejs-first-apl-skill/labels/help%20wanted) issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 
60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Amazon Software License 1.0 2 | 3 | This Amazon Software License ("License") governs your use, reproduction, and 4 | distribution of the accompanying software as specified below. 5 | 6 | 1. Definitions 7 | 8 | "Licensor" means any person or entity that distributes its Work. 9 | 10 | "Software" means the original work of authorship made available under this 11 | License. 12 | 13 | "Work" means the Software and any additions to or derivative works of the 14 | Software that are made available under this License. 15 | 16 | The terms "reproduce," "reproduction," "derivative works," and 17 | "distribution" have the meaning as provided under U.S. copyright law; 18 | provided, however, that for the purposes of this License, derivative works 19 | shall not include works that remain separable from, or merely link (or bind 20 | by name) to the interfaces of, the Work. 21 | 22 | Works, including the Software, are "made available" under this License by 23 | including in or with the Work either (a) a copyright notice referencing the 24 | applicability of this License to the Work, or (b) a copy of this License. 25 | 26 | 2. License Grants 27 | 28 | 2.1 Copyright Grant. Subject to the terms and conditions of this License, 29 | each Licensor grants to you a perpetual, worldwide, non-exclusive, 30 | royalty-free, copyright license to reproduce, prepare derivative works of, 31 | publicly display, publicly perform, sublicense and distribute its Work and 32 | any resulting derivative works in any form. 33 | 34 | 2.2 Patent Grant. Subject to the terms and conditions of this License, each 35 | Licensor grants to you a perpetual, worldwide, non-exclusive, royalty-free 36 | patent license to make, have made, use, sell, offer for sale, import, and 37 | otherwise transfer its Work, in whole or in part. The foregoing license 38 | applies only to the patent claims licensable by Licensor that would be 39 | infringed by Licensor's Work (or portion thereof) individually and 40 | excluding any combinations with any other materials or technology. 41 | 42 | 3. Limitations 43 | 44 | 3.1 Redistribution. You may reproduce or distribute the Work only if 45 | (a) you do so under this License, (b) you include a complete copy of this 46 | License with your distribution, and (c) you retain without modification 47 | any copyright, patent, trademark, or attribution notices that are present 48 | in the Work. 49 | 50 | 3.2 Derivative Works. You may specify that additional or different terms 51 | apply to the use, reproduction, and distribution of your derivative works 52 | of the Work ("Your Terms") only if (a) Your Terms provide that the use 53 | limitation in Section 3.3 applies to your derivative works, and (b) you 54 | identify the specific derivative works that are subject to Your Terms. 55 | Notwithstanding Your Terms, this License (including the redistribution 56 | requirements in Section 3.1) will continue to apply to the Work itself. 57 | 58 | 3.3 Use Limitation. The Work and any derivative works thereof only may be 59 | used or intended for use with the web services, computing platforms or 60 | applications provided by Amazon.com, Inc. or its affiliates, including 61 | Amazon Web Services, Inc. 
62 | 63 | 3.4 Patent Claims. If you bring or threaten to bring a patent claim against 64 | any Licensor (including any claim, cross-claim or counterclaim in a 65 | lawsuit) to enforce any patents that you allege are infringed by any Work, 66 | then your rights under this License from such Licensor (including the 67 | grants in Sections 2.1 and 2.2) will terminate immediately. 68 | 69 | 3.5 Trademarks. This License does not grant any rights to use any 70 | Licensor's or its affiliates' names, logos, or trademarks, except as 71 | necessary to reproduce the notices described in this License. 72 | 73 | 3.6 Termination. If you violate any term of this License, then your rights 74 | under this License (including the grants in Sections 2.1 and 2.2) will 75 | terminate immediately. 76 | 77 | 4. Disclaimer of Warranty. 78 | 79 | THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 80 | EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF 81 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR 82 | NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER 83 | THIS LICENSE. SOME STATES' CONSUMER LAWS DO NOT ALLOW EXCLUSION OF AN 84 | IMPLIED WARRANTY, SO THIS DISCLAIMER MAY NOT APPLY TO YOU. 85 | 86 | 5. Limitation of Liability. 87 | 88 | EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL 89 | THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE 90 | SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, 91 | INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR 92 | RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK (INCLUDING 93 | BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS 94 | OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER COMMERCIAL DAMAGES 95 | OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF 96 | SUCH DAMAGES. 97 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | Skill Sample Nodejs First Apl Skill 2 | 3 | Editor's Note: We have changed the name of the Alexa skill in our beginner tutorial from Cake Walk to Cake Time given the term's racially insensitive history. 4 | 5 | Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Skill Sample Nodejs First Apl Skill 2 | 3 | This Alexa Skills Kit sample skill extends the original simple skill that counts down the number of days until the customer's birthday with visuals built using the Alexa Presentation Language. 4 | 5 | [Start Here](modules/home.adoc) 6 | 7 | ## Structure 8 | 9 | The .adoc files are AsciiDoc source for the course website. 10 | 11 | To find the code for each module, go to modules > code > the module# you want the final code for. Each one of these directories contains the final code for that particular module. APL documents are under the documents directory within that code module directory. 12 | 13 | modules > assets contains the assets for the course. 14 | 15 | ## License 16 | 17 | This library is licensed under the Amazon Software License.
18 | -------------------------------------------------------------------------------- /buildspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | #env: 4 | #variables: 5 | # ruby: 2.6 6 | # key: "value" 7 | #parameter-store: 8 | # key: "value" 9 | # key: "value" 10 | #git-credential-helper: yes 11 | 12 | phases: 13 | install: 14 | #If you use the Ubuntu standard image 2.0 or later, you must specify runtime-versions. 15 | #If you specify runtime-versions and use an image other than Ubuntu standard image 2.0, the build fails. 16 | runtime-versions: 17 | ruby: 2.6 18 | # name: version 19 | commands: 20 | - gem install asciidoctor 21 | - gem install zip 22 | #pre_build: 23 | #commands: 24 | # - command 25 | # - command 26 | build: 27 | commands: 28 | - mkdir $CODEBUILD_SRC_DIR/build/ 29 | - mkdir $CODEBUILD_SRC_DIR/build/optical-cupcake 30 | - mkdir $CODEBUILD_SRC_DIR/build/optical-cupcake/modules 31 | - mkdir $CODEBUILD_SRC_DIR/build/optical-cupcake/modules/images 32 | - cp $CODEBUILD_SRC_DIR/modules/images/* $CODEBUILD_SRC_DIR/build/optical-cupcake/modules/images 33 | - cd $CODEBUILD_SRC_DIR/ 34 | - find . -name '*.adoc' -exec asciidoctor --destination-dir ./build/optical-cupcake/modules {} \; 35 | - zip -r ./build/assets.zip modules/assets/* 36 | - cd $CODEBUILD_SRC_DIR/build 37 | - ls 38 | #post_build: 39 | #commands: 40 | # - command 41 | # - command 42 | artifacts: 43 | #files: 44 | # - $CODEBUILD_SRC_DIR/build/build.zip 45 | #discard-paths: yes 46 | #name: $(date +%Y-%m-%d) 47 | 48 | base-directory: $CODEBUILD_SRC_DIR/build 49 | files: 50 | - '**/*' 51 | # - location 52 | #discard-paths: yes 53 | #base-directory: location 54 | #cache: 55 | #paths: 56 | # - paths -------------------------------------------------------------------------------- /makefile: -------------------------------------------------------------------------------- 1 | # -------------------------------------------------------------------------- 2 | # Make file for building this package. Needs Asciidoctor installed. 3 | # https://asciidoctor.org/docs/user-manual/ 4 | # 5 | # asciidoctor-pdf 6 | # https://asciidoctor.org/docs/asciidoctor-pdf/ 7 | # Server and open commands assume MacOS 8 | # This relies on Asciidoctor installed to build it. 9 | # See: https://asciidoctor.org/docs/user-manual/ 10 | # 11 | # For the PDFs, you need the pdf package. 12 | # See: https://asciidoctor.org/docs/asciidoctor-pdf/ 13 | # 14 | # For gifs to not give warnings in PDF mode, you need prawn-gmagick which relies on GraphicsMagick 15 | # brew install GraphicsMagick 16 | # Then 17 | # sudo gem install prawn-gmagick 18 | # 19 | ## Usage Instructions 20 | # To compile everything and open a specific page on your local (Apache) server, run something like: 21 | # make all page=module# 22 | # 23 | # Where module# is the file name without the .adoc. 24 | # 25 | # To just build, run: 26 | # make cleanbuild 27 | # 28 | # Cleanbuild target will remove old artifacts and force a new build. 29 | # 30 | # To open a page on your local server (Assumes Apache), simply: 31 | # make open page=module# 32 | # -------------------------------------------------------------------------- 33 | 34 | IP_ADDRESS = `ipconfig getifaddr en0` 35 | 36 | #Puts artifacts into build directory. and the sites dir 37 | build: 38 | mkdir -p ~/Sites/optical-cupcake/modules/images 39 | cp ./modules/images/* ~/Sites/optical-cupcake/modules/images/ 40 | find . 
-name '*.adoc' -exec asciidoctor -a icons=font --destination-dir ~/Sites/optical-cupcake/modules {} \; 41 | # find . -name '*.adoc' -exec asciidoctor-pdf -a icons=font --destination-dir ./build {} \; 42 | 43 | clean: 44 | rm -rf build/ 45 | rm -rf ~/Sites/optical-cupcake/ 46 | 47 | cleanbuild: clean build 48 | 49 | start-server: 50 | sudo apachectl start 51 | echo $(IP_ADDRESS) 52 | 53 | stop-server: 54 | sudo apachectl stop 55 | 56 | restart-server: stop-server start-server 57 | 58 | open: 59 | open http://$(IP_ADDRESS)/~muoioj/optical-cupcake/modules/$(page).html 60 | 61 | zipall: 62 | zip -r ./build/assets.zip modules/assets/* 63 | 64 | all: cleanbuild zipall open 65 | -------------------------------------------------------------------------------- /modules/assets/Amazon_Cake.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/assets/Amazon_Cake.mp4 -------------------------------------------------------------------------------- /modules/assets/alexaCake_960x960.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/assets/alexaCake_960x960.png -------------------------------------------------------------------------------- /modules/assets/confetti_1280x800.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/assets/confetti_1280x800.png -------------------------------------------------------------------------------- /modules/assets/confetti_1920x1080.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/assets/confetti_1920x1080.png -------------------------------------------------------------------------------- /modules/assets/lights_1280x800.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/assets/lights_1280x800.png -------------------------------------------------------------------------------- /modules/assets/lights_1920x1080.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/assets/lights_1920x1080.png -------------------------------------------------------------------------------- /modules/code/module2/helloWorld.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "alexa-layouts", 9 | "version": "1.1.0" 10 | } 11 | ], 12 | "resources": [], 13 | "styles": { 14 | "bigText": { 15 | "values": [ 16 | { 17 | "fontSize": "72dp", 18 | "textAlign": "center" 19 | } 20 | ] 21 | } 22 | }, 23 | "onMount": [], 24 | "graphics": {}, 25 | "commands": {}, 26 | "layouts": {}, 27 | "mainTemplate": { 28 | "parameters": [ 29 | "text", 30 | "assets" 31 | ], 32 | 
"items": [ 33 | { 34 | "type": "Container", 35 | "when":"${@viewportProfile != @hubRoundSmall}", 36 | "items": [ 37 | { 38 | "type": "Text", 39 | "style": "bigText", 40 | "paddingTop": "12dp", 41 | "paddingBottom": "12dp", 42 | "text": "${text.start}" 43 | }, 44 | { 45 | "type": "Text", 46 | "style": "bigText", 47 | "paddingTop": "12dp", 48 | "paddingBottom": "12dp", 49 | "text": "${text.middle}" 50 | }, 51 | { 52 | "type": "Text", 53 | "style": "bigText", 54 | "paddingTop": "12dp", 55 | "paddingBottom": "12dp", 56 | "text": "${text.end}" 57 | }, 58 | { 59 | "type": "AlexaImage", 60 | "alignSelf": "center", 61 | "imageSource": "${assets.cake}", 62 | "imageRoundedCorner": false, 63 | "imageScale": "best-fill", 64 | "imageHeight":"40vh", 65 | "imageAspectRatio": "square", 66 | "imageBlurredBackground": false 67 | } 68 | ], 69 | "height": "100%", 70 | "width": "100%" 71 | }, 72 | { 73 | "type": "Container", 74 | "when":"${@viewportProfile == @hubRoundSmall}", 75 | "items": [ 76 | { 77 | "type": "Text", 78 | "style": "bigText", 79 | "paddingTop": "75dp", 80 | "text": "${text.start}" 81 | }, 82 | { 83 | "type": "Text", 84 | "style": "bigText", 85 | "text": "${text.middle}" 86 | }, 87 | { 88 | "type": "Text", 89 | "style": "bigText", 90 | "text": "${text.end}" 91 | } 92 | ], 93 | "height": "100%", 94 | "width": "100%" 95 | } 96 | ] 97 | } 98 | } -------------------------------------------------------------------------------- /modules/code/module2/helloWorldData.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": "Welcome", 4 | "middle": "to", 5 | "end": "Cake Time!" 6 | }, 7 | "assets": { 8 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true" 9 | } 10 | } -------------------------------------------------------------------------------- /modules/code/module3/documents/launchDocument.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "alexa-layouts", 9 | "version": "1.1.0" 10 | } 11 | ], 12 | "resources": [], 13 | "styles": { 14 | "bigText": { 15 | "values": [ 16 | { 17 | "fontSize": "72dp", 18 | "color": "black", 19 | "textAlign": "center" 20 | } 21 | ] 22 | } 23 | }, 24 | "onMount": [], 25 | "graphics": {}, 26 | "commands": {}, 27 | "layouts": {}, 28 | "mainTemplate": { 29 | "parameters": [ 30 | "text", 31 | "assets" 32 | ], 33 | "items": [ 34 | { 35 | "type": "Container", 36 | "items": [ 37 | { 38 | "type": "AlexaBackground", 39 | "backgroundImageSource": "${assets.backgroundURL}" 40 | }, 41 | { 42 | "type": "Text", 43 | "style": "bigText", 44 | "text": "${text.start}" 45 | }, 46 | { 47 | "type": "Text", 48 | "style": "bigText", 49 | "text": "${text.middle}" 50 | }, 51 | { 52 | "type": "Text", 53 | "style": "bigText", 54 | "text": "${text.end}" 55 | }, 56 | { 57 | "type": "AlexaImage", 58 | "alignSelf": "center", 59 | "imageSource": "${assets.cake}", 60 | "imageRoundedCorner": false, 61 | "imageScale": "best-fill", 62 | "imageHeight":"40vh", 63 | "imageAspectRatio": "square", 64 | "imageBlurredBackground": false 65 | } 66 | ], 67 | "height": "100%", 68 | "width": "100%", 69 | "when": "${@viewportProfile != @hubRoundSmall}" 70 | }, 71 | { 72 | "type": "Container", 73 | "items": [ 74 | { 75 | "type": "AlexaBackground", 76 | "backgroundImageSource": "${assets.backgroundURL}" 77 | }, 78 | { 79 | 
"type": "Text", 80 | "style": "bigText", 81 | "paddingTop": "75dp", 82 | "text": "${text.start}" 83 | }, 84 | { 85 | "type": "Text", 86 | "style": "bigText", 87 | "text": "${text.middle}" 88 | }, 89 | { 90 | "type": "Text", 91 | "style": "bigText", 92 | "text": "${text.end}" 93 | } 94 | ], 95 | "height": "100%", 96 | "width": "100%", 97 | "when": "${@viewportProfile == @hubRoundSmall}" 98 | } 99 | ] 100 | } 101 | } -------------------------------------------------------------------------------- /modules/code/module3/documents/launchDocumentDatasource.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": "Welcome", 4 | "middle": "to", 5 | "end": "Cake Time!" 6 | }, 7 | "assets": { 8 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 9 | "backgroundURL": "https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/assets/lights_1920x1080.png" 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /modules/code/module3/en-US.json: -------------------------------------------------------------------------------- 1 | { 2 | "interactionModel": { 3 | "languageModel": { 4 | "invocationName": "cake time", 5 | "intents": [ 6 | { 7 | "name": "AMAZON.CancelIntent", 8 | "samples": [] 9 | }, 10 | { 11 | "name": "AMAZON.HelpIntent", 12 | "samples": [] 13 | }, 14 | { 15 | "name": "AMAZON.StopIntent", 16 | "samples": [] 17 | }, 18 | { 19 | "name": "AMAZON.NavigateHomeIntent", 20 | "samples": [] 21 | }, 22 | { 23 | "name": "CaptureBirthdayIntent", 24 | "slots": [ 25 | { 26 | "name": "month", 27 | "type": "AMAZON.Month" 28 | }, 29 | { 30 | "name": "day", 31 | "type": "AMAZON.Ordinal" 32 | }, 33 | { 34 | "name": "year", 35 | "type": "AMAZON.FOUR_DIGIT_NUMBER" 36 | } 37 | ], 38 | "samples": [ 39 | "{month} {day}", 40 | "{month} {day} {year}", 41 | "{month} {year}", 42 | "I was born on {month} {day} ", 43 | "I was born on {month} {day} {year}", 44 | "I was born on {month} {year}" 45 | ] 46 | } 47 | ], 48 | "types": [] 49 | }, 50 | "dialog": { 51 | "intents": [ 52 | { 53 | "name": "CaptureBirthdayIntent", 54 | "confirmationRequired": false, 55 | "prompts": {}, 56 | "slots": [ 57 | { 58 | "name": "month", 59 | "type": "AMAZON.Month", 60 | "confirmationRequired": false, 61 | "elicitationRequired": true, 62 | "prompts": { 63 | "elicitation": "Elicit.Slot.303899476312.795077103633" 64 | } 65 | }, 66 | { 67 | "name": "day", 68 | "type": "AMAZON.Ordinal", 69 | "confirmationRequired": false, 70 | "elicitationRequired": true, 71 | "prompts": { 72 | "elicitation": "Elicit.Slot.303899476312.985837334781" 73 | } 74 | }, 75 | { 76 | "name": "year", 77 | "type": "AMAZON.FOUR_DIGIT_NUMBER", 78 | "confirmationRequired": false, 79 | "elicitationRequired": true, 80 | "prompts": { 81 | "elicitation": "Elicit.Slot.303899476312.27341833344" 82 | } 83 | } 84 | ] 85 | } 86 | ], 87 | "delegationStrategy": "ALWAYS" 88 | }, 89 | "prompts": [ 90 | { 91 | "id": "Elicit.Slot.303899476312.795077103633", 92 | "variations": [ 93 | { 94 | "type": "PlainText", 95 | "value": "I was born in November. When what were you born?" 96 | }, 97 | { 98 | "type": "PlainText", 99 | "value": "What month were you born?" 100 | } 101 | ] 102 | }, 103 | { 104 | "id": "Elicit.Slot.303899476312.985837334781", 105 | "variations": [ 106 | { 107 | "type": "PlainText", 108 | "value": "I was born on the sixth. What day were you born?" 
109 | } 110 | ] 111 | }, 112 | { 113 | "id": "Elicit.Slot.303899476312.27341833344", 114 | "variations": [ 115 | { 116 | "type": "PlainText", 117 | "value": "I was born in two thousand fifteen, what year were you born?" 118 | } 119 | ] 120 | } 121 | ] 122 | } 123 | } -------------------------------------------------------------------------------- /modules/code/module3/index.js: -------------------------------------------------------------------------------- 1 | // This sample demonstrates handling intents from an Alexa skill using the Alexa Skills Kit SDK (v2). 2 | // Please visit https://alexa.design/cookbook for additional examples on implementing slots, dialog management, 3 | // session persistence, api calls, and more. 4 | const Alexa = require('ask-sdk-core'); 5 | const persistenceAdapter = require('ask-sdk-s3-persistence-adapter'); 6 | const launchDocument = require('./documents/launchDocument.json'); 7 | const util = require('./util'); 8 | 9 | const HasBirthdayLaunchRequestHandler = { 10 | canHandle(handlerInput) { 11 | console.log(JSON.stringify(handlerInput.requestEnvelope.request)); 12 | const attributesManager = handlerInput.attributesManager; 13 | const sessionAttributes = attributesManager.getSessionAttributes() || {}; 14 | 15 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 16 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 17 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 18 | 19 | return handlerInput.requestEnvelope.request.type === 'LaunchRequest' && 20 | year && 21 | month && 22 | day; 23 | }, 24 | async handle(handlerInput) { 25 | 26 | const serviceClientFactory = handlerInput.serviceClientFactory; 27 | const deviceId = handlerInput.requestEnvelope.context.System.device.deviceId; 28 | 29 | const attributesManager = handlerInput.attributesManager; 30 | const sessionAttributes = attributesManager.getSessionAttributes() || {}; 31 | 32 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 33 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 34 | const day = sessionAttributes.hasOwnProperty('day') ? 
sessionAttributes.day : 0; 35 | 36 | let userTimeZone; 37 | try { 38 | const upsServiceClient = serviceClientFactory.getUpsServiceClient(); 39 | userTimeZone = await upsServiceClient.getSystemTimeZone(deviceId); 40 | } catch (error) { 41 | if (error.name !== 'ServiceError') { 42 | return handlerInput.responseBuilder.speak("There was a problem connecting to the service.").getResponse(); 43 | } 44 | console.log('error', error.message); 45 | } 46 | console.log('userTimeZone', userTimeZone); 47 | 48 | const oneDay = 24*60*60*1000; 49 | 50 | // getting the current date with the time 51 | const currentDateTime = new Date(new Date().toLocaleString("en-US", {timeZone: userTimeZone})); 52 | // removing the time from the date because it affects our difference calculation 53 | const currentDate = new Date(currentDateTime.getFullYear(), currentDateTime.getMonth(), currentDateTime.getDate()); 54 | let currentYear = currentDate.getFullYear(); 55 | 56 | console.log('currentDateTime:', currentDateTime); 57 | console.log('currentDate:', currentDate); 58 | 59 | // getting the next birthday 60 | let nextBirthday = Date.parse(`${month} ${day}, ${currentYear}`); 61 | 62 | // adjust the nextBirthday by one year if the current date is after their birthday 63 | if (currentDate.getTime() > nextBirthday) { 64 | nextBirthday = Date.parse(`${month} ${day}, ${currentYear + 1}`); 65 | currentYear++; 66 | } 67 | 68 | // setting the default speakOutput to Happy xth Birthday!! 69 | // Alexa will automatically correct the ordinal for you. 70 | // no need to worry about when to use st, th, rd 71 | const yearsOld = currentYear - year; 72 | let speakOutput = `Happy ${yearsOld}th birthday!`; 73 | let isBirthday = true; 74 | 75 | if (currentDate.getTime() !== nextBirthday) { 76 | isBirthday = false; 77 | const diffDays = Math.round(Math.abs((currentDate.getTime() - nextBirthday)/oneDay)); 78 | speakOutput = `Welcome back. It looks like there are ${diffDays} days until your ${currentYear - year}th birthday.` 79 | } 80 | 81 | return handlerInput.responseBuilder 82 | .speak(speakOutput) 83 | .getResponse(); 84 | } 85 | }; 86 | 87 | const LaunchRequestHandler = { 88 | canHandle(handlerInput) { 89 | return handlerInput.requestEnvelope.request.type === 'LaunchRequest'; 90 | }, 91 | handle(handlerInput) { 92 | const speakOutput = 'Hello! Welcome to Cake Time. What is your birthday?'; 93 | const repromptOutput = 'I was born Nov. 6th, 2015. When were you born?'; 94 | 95 | const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope); 96 | const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/lights_1920x1080.png" : "Media/lights_1280x800.png"; 97 | 98 | // Add APL directive to response 99 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 100 | // Create Render Directive 101 | handlerInput.responseBuilder.addDirective({ 102 | type: 'Alexa.Presentation.APL.RenderDocument', 103 | document: launchDocument, 104 | datasources: { 105 | text: { 106 | type: 'object', 107 | start: "Welcome", 108 | middle: "to", 109 | end: "Cake Time!" 
110 | }, 111 | assets: { 112 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 113 | backgroundURL: util.getS3PreSignedUrl('Media/lights_1920x1080.png') 114 | } 115 | } 116 | }); 117 | } 118 | 119 | return handlerInput.responseBuilder 120 | .speak(speakOutput) 121 | .reprompt(repromptOutput) 122 | .getResponse(); 123 | } 124 | }; 125 | const BirthdayIntentHandler = { 126 | canHandle(handlerInput) { 127 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 128 | && handlerInput.requestEnvelope.request.intent.name === 'CaptureBirthdayIntent'; 129 | }, 130 | async handle(handlerInput) { 131 | const year = handlerInput.requestEnvelope.request.intent.slots.year.value; 132 | const month = handlerInput.requestEnvelope.request.intent.slots.month.value; 133 | const day = handlerInput.requestEnvelope.request.intent.slots.day.value; 134 | 135 | const attributesManager = handlerInput.attributesManager; 136 | 137 | const birthdayAttributes = { 138 | "year": year, 139 | "month": month, 140 | "day": day 141 | 142 | }; 143 | attributesManager.setPersistentAttributes(birthdayAttributes); 144 | await attributesManager.savePersistentAttributes(); 145 | 146 | const speakOutput = `Thanks, I'll remember that you were born ${month} ${day} ${year}.`; 147 | 148 | return handlerInput.responseBuilder 149 | .speak(speakOutput) 150 | .withShouldEndSession(true) 151 | .getResponse(); 152 | } 153 | }; 154 | 155 | const HelpIntentHandler = { 156 | canHandle(handlerInput) { 157 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 158 | && handlerInput.requestEnvelope.request.intent.name === 'AMAZON.HelpIntent'; 159 | }, 160 | handle(handlerInput) { 161 | const speakOutput = 'You can say hello to me! How can I help?'; 162 | 163 | return handlerInput.responseBuilder 164 | .speak(speakOutput) 165 | .reprompt(speakOutput) 166 | .getResponse(); 167 | } 168 | }; 169 | const CancelAndStopIntentHandler = { 170 | canHandle(handlerInput) { 171 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 172 | && (handlerInput.requestEnvelope.request.intent.name === 'AMAZON.CancelIntent' 173 | || handlerInput.requestEnvelope.request.intent.name === 'AMAZON.StopIntent'); 174 | }, 175 | handle(handlerInput) { 176 | const speakOutput = 'Goodbye!'; 177 | return handlerInput.responseBuilder 178 | .speak(speakOutput) 179 | .getResponse(); 180 | } 181 | }; 182 | const SessionEndedRequestHandler = { 183 | canHandle(handlerInput) { 184 | return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest'; 185 | }, 186 | handle(handlerInput) { 187 | // Any cleanup logic goes here. 188 | return handlerInput.responseBuilder.getResponse(); 189 | } 190 | }; 191 | 192 | // The intent reflector is used for interaction model testing and debugging. 193 | // It will simply repeat the intent the user said. You can create custom handlers 194 | // for your intents by defining them above, then also adding them to the request 195 | // handler chain below. 
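// For illustration only (hypothetical, not part of this skill): a minimal custom handler
// for an intent named MyCustomIntent would follow the same shape described above, and would
// be passed to addRequestHandlers() below, ahead of IntentReflectorHandler so the reflector
// does not answer for it.
// const MyCustomIntentHandler = {
//     canHandle(handlerInput) {
//         return handlerInput.requestEnvelope.request.type === 'IntentRequest'
//             && handlerInput.requestEnvelope.request.intent.name === 'MyCustomIntent';
//     },
//     handle(handlerInput) {
//         return handlerInput.responseBuilder
//             .speak('You reached MyCustomIntent.')
//             .getResponse();
//     }
// };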
196 | const IntentReflectorHandler = { 197 | canHandle(handlerInput) { 198 | return handlerInput.requestEnvelope.request.type === 'IntentRequest'; 199 | }, 200 | handle(handlerInput) { 201 | const intentName = handlerInput.requestEnvelope.request.intent.name; 202 | const speakOutput = `You just triggered ${intentName}`; 203 | 204 | return handlerInput.responseBuilder 205 | .speak(speakOutput) 206 | //.reprompt('add a reprompt if you want to keep the session open for the user to respond') 207 | .getResponse(); 208 | } 209 | }; 210 | 211 | // Generic error handling to capture any syntax or routing errors. If you receive an error 212 | // stating the request handler chain is not found, you have not implemented a handler for 213 | // the intent being invoked or included it in the skill builder below. 214 | const ErrorHandler = { 215 | canHandle() { 216 | return true; 217 | }, 218 | handle(handlerInput, error) { 219 | console.log(`~~~~ Error handled: ${error.message}`); 220 | const speakOutput = `Sorry, I couldn't understand what you said. Please try again.`; 221 | 222 | return handlerInput.responseBuilder 223 | .speak(speakOutput) 224 | .reprompt(speakOutput) 225 | .getResponse(); 226 | } 227 | }; 228 | 229 | const LoadBirthdayInterceptor = { 230 | async process(handlerInput) { 231 | const attributesManager = handlerInput.attributesManager; 232 | const sessionAttributes = await attributesManager.getPersistentAttributes() || {}; 233 | 234 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 235 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 236 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 237 | 238 | if (year && month && day) { 239 | attributesManager.setSessionAttributes(sessionAttributes); 240 | } 241 | } 242 | } 243 | 244 | // This handler acts as the entry point for your skill, routing all request and response 245 | // payloads to the handlers above. Make sure any new handlers or interceptors you've 246 | // defined are included below. The order matters - they're processed top to bottom. 
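// For example, HasBirthdayLaunchRequestHandler and LaunchRequestHandler both match a
// LaunchRequest; listing the more specific birthday handler first means it runs when a
// saved birthday exists, and the generic welcome handler runs otherwise.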
247 | exports.handler = Alexa.SkillBuilders.custom() 248 | .withPersistenceAdapter( 249 | new persistenceAdapter.S3PersistenceAdapter({bucketName:process.env.S3_PERSISTENCE_BUCKET}) 250 | ) 251 | .addRequestHandlers( 252 | HasBirthdayLaunchRequestHandler, 253 | LaunchRequestHandler, 254 | BirthdayIntentHandler, 255 | HelpIntentHandler, 256 | CancelAndStopIntentHandler, 257 | SessionEndedRequestHandler, 258 | IntentReflectorHandler) // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers 259 | .addErrorHandlers( 260 | ErrorHandler) 261 | .addRequestInterceptors( 262 | LoadBirthdayInterceptor 263 | ) 264 | .withApiClient(new Alexa.DefaultApiClient()) 265 | .lambda(); 266 | -------------------------------------------------------------------------------- /modules/code/module3/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "cake-time", 3 | "version": "0.9.0", 4 | "description": "alexa utility for quickly building skills", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "Amazon Alexa", 10 | "license": "ISC", 11 | "dependencies": { 12 | "ask-sdk-core": "^2.0.7", 13 | "ask-sdk-model": "^1.4.1", 14 | "aws-sdk": "^2.326.0", 15 | "ask-sdk-s3-persistence-adapter": "^2.0.0" 16 | } 17 | } -------------------------------------------------------------------------------- /modules/code/module3/util.js: -------------------------------------------------------------------------------- 1 | const AWS = require('aws-sdk'); 2 | 3 | const s3SigV4Client = new AWS.S3({ 4 | signatureVersion: 'v4' 5 | }); 6 | 7 | module.exports.getS3PreSignedUrl = function getS3PreSignedUrl(s3ObjectKey) { 8 | 9 | const bucketName = process.env.S3_PERSISTENCE_BUCKET; 10 | const s3PreSignedUrl = s3SigV4Client.getSignedUrl('getObject', { 11 | Bucket: bucketName, 12 | Key: s3ObjectKey, 13 | Expires: 60*1 // the Expires is capped for 1 minute 14 | }); 15 | console.log(`Util.s3PreSignedUrl: ${s3ObjectKey} URL ${s3PreSignedUrl}`); 16 | return s3PreSignedUrl; 17 | 18 | } -------------------------------------------------------------------------------- /modules/code/module4/CORSConfiguration.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | * 5 | GET 6 | 7 | 8 | -------------------------------------------------------------------------------- /modules/code/module4/documents/birthdayDocument.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "my-caketime-apl-package", 9 | "version": "1.0", 10 | "source": "https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module4/documents/my-caketime-apl-package.json" 11 | }, 12 | { 13 | "name": "alexa-layouts", 14 | "version": "1.1.0" 15 | } 16 | ], 17 | "resources": [], 18 | "styles": { 19 | "bigText": { 20 | "values": [ 21 | { 22 | "fontSize": "72dp", 23 | "color": "black", 24 | "textAlign": "center" 25 | } 26 | ] 27 | } 28 | }, 29 | "onMount": [], 30 | "graphics": {}, 31 | "commands": {}, 32 | "layouts": {}, 33 | "mainTemplate": { 34 | "parameters": [ 35 | "text", 36 | "assets" 37 | ], 38 | "items": [ 39 | { 40 | "type": "Container", 41 | "items": [ 42 | { 43 | "type": "AlexaBackground", 44 | "backgroundImageSource": "${assets.backgroundURL}" 45 | }, 46 | { 47 | "type": "Container", 48 
| "paddingTop":"3vh", 49 | "alignItems": "center", 50 | "items": [{ 51 | "type": "Video", 52 | "height": "85vh", 53 | "width":"90vw", 54 | "source": "${assets.video}", 55 | "autoplay": true 56 | }] 57 | } 58 | ], 59 | "height": "100%", 60 | "width": "100%", 61 | "when": "${@viewportProfile != @hubRoundSmall}" 62 | }, 63 | { 64 | "type": "Container", 65 | "paddingTop": "75dp", 66 | "items": [ 67 | { 68 | "type": "AlexaBackground", 69 | "backgroundImageSource": "${assets.backgroundURL}" 70 | }, 71 | { 72 | "type": "cakeTimeText", 73 | "startText":"${text.start}", 74 | "middleText":"${text.middle}", 75 | "endText":"${text.end}" 76 | } 77 | ], 78 | "height": "100%", 79 | "width": "100%", 80 | "when": "${@viewportProfile == @hubRoundSmall}" 81 | } 82 | ] 83 | } 84 | } -------------------------------------------------------------------------------- /modules/code/module4/documents/birthdayDocumentDatasource.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": "Happy Birthday!", 4 | "middle": "From,", 5 | "end": "Alexa <3" 6 | }, 7 | "assets": { 8 | "video": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/Amazon_Cake.mp4?raw=true", 9 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 10 | "backgroundURL": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/lights_1280x800.png?raw=true" 11 | } 12 | } -------------------------------------------------------------------------------- /modules/code/module4/documents/launchDocument.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "my-caketime-apl-package", 9 | "version": "1.0", 10 | "source": "https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module4/documents/my-caketime-apl-package.json" 11 | }, 12 | { 13 | "name": "alexa-layouts", 14 | "version": "1.1.0" 15 | } 16 | ], 17 | "resources": [], 18 | "styles": {}, 19 | "onMount": [ 20 | { 21 | "type": "AnimateItem", 22 | "easing": "path(0.25, 0.2, 0.5, 0.5, 0.75, 0.8)", 23 | "duration": 2000, 24 | "componentId": "image", 25 | "value": [ 26 | { 27 | "property": "opacity", 28 | "to": 1 29 | }, 30 | { 31 | "property": "transform", 32 | "from": [ 33 | { 34 | "scale": 0.01 35 | }, 36 | { 37 | "rotate": 0 38 | } 39 | ], 40 | "to": [ 41 | { 42 | "scale": 1 43 | }, 44 | { 45 | "rotate": 360 46 | } 47 | ] 48 | } 49 | ] 50 | } 51 | ], 52 | "graphics": {}, 53 | "commands": {}, 54 | "layouts": {}, 55 | "mainTemplate": { 56 | "parameters": [ 57 | "text", 58 | "assets" 59 | ], 60 | "items": [ 61 | { 62 | "type": "Container", 63 | "items": [ 64 | { 65 | "type": "AlexaBackground", 66 | "backgroundImageSource": "${assets.backgroundURL}" 67 | }, 68 | { 69 | "type": "cakeTimeText", 70 | "startText":"${text.start}", 71 | "middleText":"${text.middle}", 72 | "endText":"${text.end}" 73 | }, 74 | { 75 | "type": "AlexaImage", 76 | "alignSelf": "center", 77 | "imageSource": "${assets.cake}", 78 | "imageRoundedCorner": false, 79 | "imageScale": "best-fill", 80 | "imageHeight":"40vh", 81 | "imageAspectRatio": "square", 82 | "imageBlurredBackground": false 83 | } 84 | ], 85 | "height": "100%", 86 | "width": "100%", 87 | "when": "${@viewportProfile != @hubRoundSmall}" 88 | }, 89 | { 90 | "type": "Container", 91 | 
"paddingTop": "75dp", 92 | "items": [ 93 | { 94 | "type": "AlexaBackground", 95 | "backgroundImageSource": "${assets.backgroundURL}" 96 | }, 97 | { 98 | "type": "cakeTimeText", 99 | "startText":"${text.start}", 100 | "middleText":"${text.middle}", 101 | "endText":"${text.end}" 102 | } 103 | ], 104 | "height": "100%", 105 | "width": "100%", 106 | "when": "${@viewportProfile == @hubRoundSmall}" 107 | } 108 | ] 109 | } 110 | } -------------------------------------------------------------------------------- /modules/code/module4/documents/launchDocumentCountdownDatasource.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": "Your birthday", 4 | "middle": "is in", 5 | "end": "123 days!" 6 | }, 7 | "assets": { 8 | "video": "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4", 9 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 10 | "backgroundURL": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/lights_1280x800.png?raw=true" 11 | } 12 | } -------------------------------------------------------------------------------- /modules/code/module4/documents/launchDocumentDatasource.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": "Welcome", 4 | "middle": "to", 5 | "end": "Cake Time!" 6 | }, 7 | "assets": { 8 | "video": "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4", 9 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 10 | "backgroundURL": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/lights_1280x800.png?raw=true" 11 | } 12 | } -------------------------------------------------------------------------------- /modules/code/module4/documents/my-caketime-apl-package.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "alexa-layouts", 9 | "version": "1.1.0" 10 | } 11 | ], 12 | "resources": [], 13 | "styles": { 14 | "bigText": { 15 | "values": [ 16 | { 17 | "fontSize": "72dp", 18 | "color": "black", 19 | "textAlign": "center" 20 | } 21 | ] 22 | } 23 | }, 24 | "onMount": [], 25 | "graphics": {}, 26 | "commands": {}, 27 | "layouts": { 28 | "cakeTimeText": { 29 | "description": "A basic text layout with start, middle and end. Ids to target each individual layout (for commands) will be textTop, textMiddle, and textBottom, respectively.", 30 | "parameters":[ 31 | { 32 | "name": "startText", 33 | "type": "string" 34 | }, 35 | { 36 | "name": "middleText", 37 | "type": "string" 38 | }, 39 | { 40 | "name": "endText", 41 | "type": "string" 42 | }, 43 | { 44 | "name": "topId", 45 | "type": "string", 46 | "default": "textTop" 47 | }, 48 | { 49 | "name": "middleId", 50 | "type": "string", 51 | "default": "textMiddle" 52 | }, 53 | { 54 | "name": "bottomId", 55 | "type": "string", 56 | "default": "textBottom" 57 | } 58 | ], 59 | "items": [ 60 | { 61 | "type": "Container", 62 | "items": [ 63 | { 64 | "type": "Text", 65 | "style": "bigText", 66 | "paddingTop":"${@viewportProfile == @hubRoundSmall ? 
75dp : 0dp}", 67 | "text": "${startText}", 68 | "id":"${topId}" 69 | }, 70 | { 71 | "type": "Text", 72 | "style": "bigText", 73 | "text": "${middleText}", 74 | "id": "${middleId}" 75 | }, 76 | { 77 | "type": "Text", 78 | "style": "bigText", 79 | "text": "${endText}", 80 | "id": "${bottomId}" 81 | } 82 | ] 83 | } 84 | ] 85 | } 86 | } 87 | } 88 | -------------------------------------------------------------------------------- /modules/code/module4/en-US.json: -------------------------------------------------------------------------------- 1 | { 2 | "interactionModel": { 3 | "languageModel": { 4 | "invocationName": "cake time", 5 | "intents": [ 6 | { 7 | "name": "AMAZON.CancelIntent", 8 | "samples": [] 9 | }, 10 | { 11 | "name": "AMAZON.HelpIntent", 12 | "samples": [] 13 | }, 14 | { 15 | "name": "AMAZON.StopIntent", 16 | "samples": [] 17 | }, 18 | { 19 | "name": "AMAZON.NavigateHomeIntent", 20 | "samples": [] 21 | }, 22 | { 23 | "name": "CaptureBirthdayIntent", 24 | "slots": [ 25 | { 26 | "name": "month", 27 | "type": "AMAZON.Month" 28 | }, 29 | { 30 | "name": "day", 31 | "type": "AMAZON.Ordinal" 32 | }, 33 | { 34 | "name": "year", 35 | "type": "AMAZON.FOUR_DIGIT_NUMBER" 36 | } 37 | ], 38 | "samples": [ 39 | "{month} {day}", 40 | "{month} {day} {year}", 41 | "{month} {year}", 42 | "I was born on {month} {day} ", 43 | "I was born on {month} {day} {year}", 44 | "I was born on {month} {year}" 45 | ] 46 | } 47 | ], 48 | "types": [] 49 | }, 50 | "dialog": { 51 | "intents": [ 52 | { 53 | "name": "CaptureBirthdayIntent", 54 | "confirmationRequired": false, 55 | "prompts": {}, 56 | "slots": [ 57 | { 58 | "name": "month", 59 | "type": "AMAZON.Month", 60 | "confirmationRequired": false, 61 | "elicitationRequired": true, 62 | "prompts": { 63 | "elicitation": "Elicit.Slot.303899476312.795077103633" 64 | } 65 | }, 66 | { 67 | "name": "day", 68 | "type": "AMAZON.Ordinal", 69 | "confirmationRequired": false, 70 | "elicitationRequired": true, 71 | "prompts": { 72 | "elicitation": "Elicit.Slot.303899476312.985837334781" 73 | } 74 | }, 75 | { 76 | "name": "year", 77 | "type": "AMAZON.FOUR_DIGIT_NUMBER", 78 | "confirmationRequired": false, 79 | "elicitationRequired": true, 80 | "prompts": { 81 | "elicitation": "Elicit.Slot.303899476312.27341833344" 82 | } 83 | } 84 | ] 85 | } 86 | ], 87 | "delegationStrategy": "ALWAYS" 88 | }, 89 | "prompts": [ 90 | { 91 | "id": "Elicit.Slot.303899476312.795077103633", 92 | "variations": [ 93 | { 94 | "type": "PlainText", 95 | "value": "I was born in November. When what were you born?" 96 | }, 97 | { 98 | "type": "PlainText", 99 | "value": "What month were you born?" 100 | } 101 | ] 102 | }, 103 | { 104 | "id": "Elicit.Slot.303899476312.985837334781", 105 | "variations": [ 106 | { 107 | "type": "PlainText", 108 | "value": "I was born on the sixth. What day were you born?" 109 | } 110 | ] 111 | }, 112 | { 113 | "id": "Elicit.Slot.303899476312.27341833344", 114 | "variations": [ 115 | { 116 | "type": "PlainText", 117 | "value": "I was born in two thousand fifteen, what year were you born?" 118 | } 119 | ] 120 | } 121 | ] 122 | } 123 | } -------------------------------------------------------------------------------- /modules/code/module4/index.js: -------------------------------------------------------------------------------- 1 | // This sample demonstrates handling intents from an Alexa skill using the Alexa Skills Kit SDK (v2). 
2 | // Please visit https://alexa.design/cookbook for additional examples on implementing slots, dialog management, 3 | // session persistence, api calls, and more. 4 | const Alexa = require('ask-sdk-core'); 5 | const persistenceAdapter = require('ask-sdk-s3-persistence-adapter'); 6 | const launchDocument = require('./documents/launchDocument.json'); 7 | const birthdayDocument = require("./documents/birthdayDocument.json"); 8 | 9 | const util = require('./util'); 10 | 11 | const HasBirthdayLaunchRequestHandler = { 12 | canHandle(handlerInput) { 13 | console.log(JSON.stringify(handlerInput.requestEnvelope.request)); 14 | const attributesManager = handlerInput.attributesManager; 15 | const sessionAttributes = attributesManager.getSessionAttributes() || {}; 16 | 17 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 18 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 19 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 20 | 21 | return handlerInput.requestEnvelope.request.type === 'LaunchRequest' && 22 | year && 23 | month && 24 | day; 25 | }, 26 | async handle(handlerInput) { 27 | 28 | const serviceClientFactory = handlerInput.serviceClientFactory; 29 | const deviceId = handlerInput.requestEnvelope.context.System.device.deviceId; 30 | 31 | const attributesManager = handlerInput.attributesManager; 32 | const sessionAttributes = attributesManager.getSessionAttributes() || {}; 33 | 34 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 35 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 36 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 37 | 38 | let userTimeZone; 39 | try { 40 | const upsServiceClient = serviceClientFactory.getUpsServiceClient(); 41 | userTimeZone = await upsServiceClient.getSystemTimeZone(deviceId); 42 | } catch (error) { 43 | if (error.name !== 'ServiceError') { 44 | return handlerInput.responseBuilder.speak("There was a problem connecting to the service.").getResponse(); 45 | } 46 | console.log('error', error.message); 47 | } 48 | console.log('userTimeZone', userTimeZone); 49 | 50 | const oneDay = 24*60*60*1000; 51 | 52 | // getting the current date with the time 53 | const currentDateTime = new Date(new Date().toLocaleString("en-US", {timeZone: userTimeZone})); 54 | // removing the time from the date because it affects our difference calculation 55 | const currentDate = new Date(currentDateTime.getFullYear(), currentDateTime.getMonth(), currentDateTime.getDate()); 56 | let currentYear = currentDate.getFullYear(); 57 | 58 | console.log('currentDateTime:', currentDateTime); 59 | console.log('currentDate:', currentDate); 60 | 61 | // getting the next birthday 62 | let nextBirthday = Date.parse(`${month} ${day}, ${currentYear}`); 63 | 64 | // adjust the nextBirthday by one year if the current date is after their birthday 65 | if (currentDate.getTime() > nextBirthday) { 66 | nextBirthday = Date.parse(`${month} ${day}, ${currentYear + 1}`); 67 | currentYear++; 68 | } 69 | 70 | // setting the default speakOutput to Happy xth Birthday!! 71 | // Alexa will automatically correct the ordinal for you. 
72 | // no need to worry about when to use st, th, rd 73 | const yearsOld = currentYear - year; 74 | let speakOutput = `Happy ${yearsOld}th birthday!`; 75 | let isBirthday = true; 76 | const diffDays = Math.round(Math.abs((currentDate.getTime() - nextBirthday)/oneDay)); 77 | 78 | if (currentDate.getTime() !== nextBirthday) { 79 | isBirthday = false; 80 | speakOutput = `Welcome back. It looks like there are ${diffDays} days until your ${currentYear - year}th birthday.` 81 | } 82 | 83 | const numberDaysString = diffDays === 1 ? "1 day": diffDays + " days"; 84 | // Add APL directive to response 85 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 86 | if (currentDate.getTime() !== nextBirthday) { 87 | // Create Render Directive 88 | handlerInput.responseBuilder.addDirective({ 89 | type: 'Alexa.Presentation.APL.RenderDocument', 90 | document: launchDocument, 91 | datasources: { 92 | text: { 93 | type: 'object', 94 | start: "Your Birthday", 95 | middle: "is in", 96 | end: numberDaysString 97 | }, 98 | assets: { 99 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 100 | backgroundURL: getBackgroundURL(handlerInput, "lights") 101 | } 102 | } 103 | }); 104 | } else { 105 | // Create Render Directive 106 | handlerInput.responseBuilder.addDirective({ 107 | type: 'Alexa.Presentation.APL.RenderDocument', 108 | document: birthdayDocument, 109 | datasources: { 110 | text: { 111 | type: 'object', 112 | start: "Happy Birthday!", 113 | middle: "From,", 114 | end: "Alexa <3" 115 | }, 116 | assets: { 117 | video: "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4", 118 | backgroundURL: getBackgroundURL(handlerInput, "confetti") 119 | } 120 | } 121 | }); 122 | } 123 | } 124 | 125 | return handlerInput.responseBuilder 126 | .speak(speakOutput) 127 | .getResponse(); 128 | } 129 | }; 130 | 131 | const LaunchRequestHandler = { 132 | canHandle(handlerInput) { 133 | return handlerInput.requestEnvelope.request.type === 'LaunchRequest'; 134 | }, 135 | handle(handlerInput) { 136 | const speakOutput = 'Hello! Welcome to Cake Time. What is your birthday?'; 137 | const repromptOutput = 'I was born Nov. 6th, 2015. When were you born?'; 138 | 139 | // Add APL directive to response 140 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 141 | // Create Render Directive 142 | handlerInput.responseBuilder.addDirective({ 143 | type: 'Alexa.Presentation.APL.RenderDocument', 144 | document: launchDocument, 145 | datasources: { 146 | text: { 147 | type: 'object', 148 | start: "Welcome", 149 | middle: "to", 150 | end: "Cake Time!" 
151 | }, 152 | assets: { 153 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 154 | backgroundURL: getBackgroundURL(handlerInput, "lights") 155 | } 156 | } 157 | }); 158 | } 159 | 160 | return handlerInput.responseBuilder 161 | .speak(speakOutput) 162 | .reprompt(repromptOutput) 163 | .getResponse(); 164 | } 165 | }; 166 | const BirthdayIntentHandler = { 167 | canHandle(handlerInput) { 168 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 169 | && handlerInput.requestEnvelope.request.intent.name === 'CaptureBirthdayIntent'; 170 | }, 171 | async handle(handlerInput) { 172 | const year = handlerInput.requestEnvelope.request.intent.slots.year.value; 173 | const month = handlerInput.requestEnvelope.request.intent.slots.month.value; 174 | const day = handlerInput.requestEnvelope.request.intent.slots.day.value; 175 | 176 | const attributesManager = handlerInput.attributesManager; 177 | 178 | const birthdayAttributes = { 179 | "year": year, 180 | "month": month, 181 | "day": day 182 | 183 | }; 184 | attributesManager.setPersistentAttributes(birthdayAttributes); 185 | await attributesManager.savePersistentAttributes(); 186 | 187 | const headerMessage = "CaptureBirthdayIntent"; 188 | const hintString = "This is my hint"; 189 | const speakOutput = `Thanks, I'll remember that you were born ${month} ${day} ${year}.`; 190 | 191 | return handlerInput.responseBuilder 192 | .speak(speakOutput) 193 | .withShouldEndSession(true) 194 | .getResponse(); 195 | } 196 | }; 197 | 198 | const HelpIntentHandler = { 199 | canHandle(handlerInput) { 200 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 201 | && handlerInput.requestEnvelope.request.intent.name === 'AMAZON.HelpIntent'; 202 | }, 203 | handle(handlerInput) { 204 | const speakOutput = 'You can say hello to me! How can I help?'; 205 | 206 | return handlerInput.responseBuilder 207 | .speak(speakOutput) 208 | .reprompt(speakOutput) 209 | .getResponse(); 210 | } 211 | }; 212 | const CancelAndStopIntentHandler = { 213 | canHandle(handlerInput) { 214 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 215 | && (handlerInput.requestEnvelope.request.intent.name === 'AMAZON.CancelIntent' 216 | || handlerInput.requestEnvelope.request.intent.name === 'AMAZON.StopIntent'); 217 | }, 218 | handle(handlerInput) { 219 | const speakOutput = 'Goodbye!'; 220 | return handlerInput.responseBuilder 221 | .speak(speakOutput) 222 | .getResponse(); 223 | } 224 | }; 225 | const SessionEndedRequestHandler = { 226 | canHandle(handlerInput) { 227 | return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest'; 228 | }, 229 | handle(handlerInput) { 230 | // Any cleanup logic goes here. 231 | return handlerInput.responseBuilder.getResponse(); 232 | } 233 | }; 234 | 235 | // The intent reflector is used for interaction model testing and debugging. 236 | // It will simply repeat the intent the user said. You can create custom handlers 237 | // for your intents by defining them above, then also adding them to the request 238 | // handler chain below. 
239 | const IntentReflectorHandler = { 240 | canHandle(handlerInput) { 241 | return handlerInput.requestEnvelope.request.type === 'IntentRequest'; 242 | }, 243 | handle(handlerInput) { 244 | const intentName = handlerInput.requestEnvelope.request.intent.name; 245 | const speakOutput = `You just triggered ${intentName}`; 246 | 247 | return handlerInput.responseBuilder 248 | .speak(speakOutput) 249 | //.reprompt('add a reprompt if you want to keep the session open for the user to respond') 250 | .getResponse(); 251 | } 252 | }; 253 | 254 | // Generic error handling to capture any syntax or routing errors. If you receive an error 255 | // stating the request handler chain is not found, you have not implemented a handler for 256 | // the intent being invoked or included it in the skill builder below. 257 | const ErrorHandler = { 258 | canHandle() { 259 | return true; 260 | }, 261 | handle(handlerInput, error) { 262 | console.log(`~~~~ Error handled: ${error.message}`); 263 | const speakOutput = `Sorry, I couldn't understand what you said. Please try again.`; 264 | 265 | return handlerInput.responseBuilder 266 | .speak(speakOutput) 267 | .reprompt(speakOutput) 268 | .getResponse(); 269 | } 270 | }; 271 | 272 | const LoadBirthdayInterceptor = { 273 | async process(handlerInput) { 274 | const attributesManager = handlerInput.attributesManager; 275 | const sessionAttributes = await attributesManager.getPersistentAttributes() || {}; 276 | 277 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 278 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 279 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 280 | 281 | if (year && month && day) { 282 | attributesManager.setSessionAttributes(sessionAttributes); 283 | } 284 | } 285 | } 286 | 287 | function getBackgroundURL(handlerInput, fileNamePrefix) { 288 | const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope); 289 | const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/"+fileNamePrefix+"_1920x1080.png" : "Media/"+fileNamePrefix+"_1280x800.png"; 290 | return util.getS3PreSignedUrl(backgroundKey); 291 | } 292 | 293 | // This handler acts as the entry point for your skill, routing all request and response 294 | // payloads to the handlers above. Make sure any new handlers or interceptors you've 295 | // defined are included below. The order matters - they're processed top to bottom. 
296 | exports.handler = Alexa.SkillBuilders.custom() 297 | .withPersistenceAdapter( 298 | new persistenceAdapter.S3PersistenceAdapter({bucketName:process.env.S3_PERSISTENCE_BUCKET}) 299 | ) 300 | .addRequestHandlers( 301 | HasBirthdayLaunchRequestHandler, 302 | LaunchRequestHandler, 303 | BirthdayIntentHandler, 304 | HelpIntentHandler, 305 | CancelAndStopIntentHandler, 306 | SessionEndedRequestHandler, 307 | IntentReflectorHandler) // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers 308 | .addErrorHandlers( 309 | ErrorHandler) 310 | .addRequestInterceptors( 311 | LoadBirthdayInterceptor 312 | ) 313 | .withApiClient(new Alexa.DefaultApiClient()) 314 | .lambda(); 315 | -------------------------------------------------------------------------------- /modules/code/module4/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "cake-time", 3 | "version": "0.9.0", 4 | "description": "alexa utility for quickly building skills", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "Amazon Alexa", 10 | "license": "ISC", 11 | "dependencies": { 12 | "ask-sdk-core": "^2.0.7", 13 | "ask-sdk-model": "^1.4.1", 14 | "aws-sdk": "^2.326.0", 15 | "ask-sdk-s3-persistence-adapter": "^2.0.0" 16 | } 17 | } -------------------------------------------------------------------------------- /modules/code/module4/util.js: -------------------------------------------------------------------------------- 1 | const AWS = require('aws-sdk'); 2 | 3 | const s3SigV4Client = new AWS.S3({ 4 | signatureVersion: 'v4' 5 | }); 6 | 7 | module.exports.getS3PreSignedUrl = function getS3PreSignedUrl(s3ObjectKey) { 8 | 9 | const bucketName = process.env.S3_PERSISTENCE_BUCKET; 10 | const s3PreSignedUrl = s3SigV4Client.getSignedUrl('getObject', { 11 | Bucket: bucketName, 12 | Key: s3ObjectKey, 13 | Expires: 60*1 // the Expires is capped for 1 minute 14 | }); 15 | console.log(`Util.s3PreSignedUrl: ${s3ObjectKey} URL ${s3PreSignedUrl}`); 16 | return s3PreSignedUrl; 17 | 18 | } -------------------------------------------------------------------------------- /modules/code/module5/documents/birthdayDocument.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "my-caketime-apl-package-v2", 9 | "version": "1.0", 10 | "source": "https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module5/documents/my-caketime-apl-package-v2.json" 11 | }, 12 | { 13 | "name": "alexa-layouts", 14 | "version": "1.1.0" 15 | } 16 | ], 17 | "resources": [], 18 | "styles": { 19 | "bigText": { 20 | "values": [ 21 | { 22 | "fontSize": "72dp", 23 | "color": "black", 24 | "textAlign": "center" 25 | } 26 | ] 27 | } 28 | }, 29 | "onMount": [{ 30 | "commands": [ 31 | { 32 | "componentId": "textTop", 33 | "duration": 2000, 34 | "easing": "ease-in-out", 35 | "type": "AnimateItem", 36 | "value": [ 37 | { 38 | "property": "opacity", 39 | "to": 1 40 | }, 41 | { 42 | "from": [ 43 | { 44 | "translateX": 800 45 | } 46 | ], 47 | "property": "transform", 48 | "to": [ 49 | { 50 | "translateX": 0 51 | } 52 | ] 53 | } 54 | ] 55 | }, 56 | { 57 | "componentId": "textMiddle", 58 | "duration": 2000, 59 | "easing": "ease-in-out", 60 | "type": "AnimateItem", 61 | "value": [ 62 | { 63 | "property": "opacity", 64 | "to": 1 65 | }, 66 | { 
67 | "from": [ 68 | { 69 | "translateX": -400 70 | } 71 | ], 72 | "property": "transform", 73 | "to": [ 74 | { 75 | "translateX": 0 76 | } 77 | ] 78 | } 79 | ] 80 | }, 81 | { 82 | "componentId": "textBottom", 83 | "duration": 2000, 84 | "easing": "ease-in-out", 85 | "type": "AnimateItem", 86 | "value": [ 87 | { 88 | "property": "opacity", 89 | "to": 1 90 | }, 91 | { 92 | "from": [ 93 | { 94 | "translateY": 1200 95 | } 96 | ], 97 | "property": "transform", 98 | "to": [ 99 | { 100 | "translateX": 0 101 | } 102 | ] 103 | } 104 | ] 105 | } 106 | ], 107 | "type": "Parallel" 108 | }], 109 | "graphics": {}, 110 | "commands": {}, 111 | "layouts": {}, 112 | "mainTemplate": { 113 | "parameters": [ 114 | "text", 115 | "assets" 116 | ], 117 | "items": [ 118 | { 119 | "type": "Container", 120 | "items": [ 121 | { 122 | "type": "AlexaBackground", 123 | "backgroundImageSource": "${assets.backgroundURL}" 124 | }, 125 | { 126 | "type": "Container", 127 | "paddingTop":"3vh", 128 | "alignItems": "center", 129 | "items": [{ 130 | "type": "Video", 131 | "height": "85vh", 132 | "width":"90vw", 133 | "id":"birthdayVideo", 134 | "source": "${assets.video}", 135 | "autoplay": false, 136 | "onPause": [{ 137 | "type": "SetState", 138 | "componentId": "playPauseToggleButtonId", 139 | "state": "checked", 140 | "value": true 141 | }], 142 | "onPlay": [{ 143 | "type": "SetState", 144 | "componentId": "playPauseToggleButtonId", 145 | "state": "checked", 146 | "value": false 147 | }] 148 | }, 149 | { 150 | "primaryControlSize": 50, 151 | "secondaryControlSize": 0, 152 | "autoplay": false, 153 | "mediaComponentId": "birthdayVideo", 154 | "type": "AlexaTransportControls" 155 | } 156 | ] 157 | } 158 | ], 159 | "height": "100%", 160 | "width": "100%", 161 | "when": "${@viewportProfile != @hubRoundSmall}" 162 | }, 163 | { 164 | "type": "Container", 165 | "paddingTop": "75dp", 166 | "items": [ 167 | { 168 | "type": "AlexaBackground", 169 | "backgroundImageSource": "${assets.backgroundURL}" 170 | }, 171 | { 172 | "type": "cakeTimeText", 173 | "startText":"${text.start}", 174 | "middleText":"${text.middle}", 175 | "endText":"${text.end}" 176 | } 177 | ], 178 | "height": "100%", 179 | "width": "100%", 180 | "when": "${@viewportProfile == @hubRoundSmall}" 181 | } 182 | ] 183 | } 184 | } -------------------------------------------------------------------------------- /modules/code/module5/documents/birthdayDocumentDatasource.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": "Happy Birthday!", 4 | "middle": "From,", 5 | "end": "Alexa <3" 6 | }, 7 | "assets": { 8 | "video": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/Amazon_Cake.mp4?raw=true", 9 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 10 | "backgroundURL": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/lights_1280x800.png?raw=true" 11 | } 12 | } -------------------------------------------------------------------------------- /modules/code/module5/documents/launchDocument.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "my-caketime-apl-package-v2", 9 | "version": "1.0", 10 | "source": 
"https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module5/documents/my-caketime-apl-package-v2.json" 11 | }, 12 | { 13 | "name": "alexa-layouts", 14 | "version": "1.1.0" 15 | } 16 | ], 17 | "resources": [], 18 | "styles": {}, 19 | "onMount": [ 20 | { 21 | "commands": [ 22 | { 23 | "componentId": "textTop", 24 | "duration": 2000, 25 | "easing": "ease-in-out", 26 | "type": "AnimateItem", 27 | "value": [ 28 | { 29 | "property": "opacity", 30 | "to": 1 31 | }, 32 | { 33 | "from": [ 34 | { 35 | "translateX": 800 36 | } 37 | ], 38 | "property": "transform", 39 | "to": [ 40 | { 41 | "translateX": 0 42 | } 43 | ] 44 | } 45 | ] 46 | }, 47 | { 48 | "componentId": "textMiddle", 49 | "duration": 2000, 50 | "easing": "ease-in-out", 51 | "type": "AnimateItem", 52 | "value": [ 53 | { 54 | "property": "opacity", 55 | "to": 1 56 | }, 57 | { 58 | "from": [ 59 | { 60 | "translateX": -400 61 | } 62 | ], 63 | "property": "transform", 64 | "to": [ 65 | { 66 | "translateX": 0 67 | } 68 | ] 69 | } 70 | ] 71 | }, 72 | { 73 | "componentId": "textBottom", 74 | "duration": 2000, 75 | "easing": "ease-in-out", 76 | "type": "AnimateItem", 77 | "value": [ 78 | { 79 | "property": "opacity", 80 | "to": 1 81 | }, 82 | { 83 | "from": [ 84 | { 85 | "translateY": 1200 86 | } 87 | ], 88 | "property": "transform", 89 | "to": [ 90 | { 91 | "translateX": 0 92 | } 93 | ] 94 | } 95 | ] 96 | }, 97 | { 98 | "componentId": "image", 99 | "duration": 2000, 100 | "easing": "path(0.25, 0.2, 0.5, 0.5, 0.75, 0.8)", 101 | "type": "AnimateItem", 102 | "value": [ 103 | { 104 | "property": "opacity", 105 | "to": 1 106 | }, 107 | { 108 | "from": [ 109 | { 110 | "scale": 0.01 111 | }, 112 | { 113 | "rotate": 0 114 | } 115 | ], 116 | "property": "transform", 117 | "to": [ 118 | { 119 | "scale": 1 120 | }, 121 | { 122 | "rotate": 360 123 | } 124 | ] 125 | } 126 | ] 127 | } 128 | ], 129 | "type": "Parallel" 130 | } 131 | ], 132 | "graphics": {}, 133 | "commands": {}, 134 | "layouts": {}, 135 | "mainTemplate": { 136 | "parameters": [ 137 | "text", 138 | "assets" 139 | ], 140 | "items": [ 141 | { 142 | "type": "Container", 143 | "items": [ 144 | { 145 | "type": "AlexaBackground", 146 | "backgroundImageSource": "${assets.backgroundURL}" 147 | }, 148 | { 149 | "type": "cakeTimeText", 150 | "startText":"${text.start}", 151 | "middleText":"${text.middle}", 152 | "endText":"${text.end}" 153 | }, 154 | { 155 | "type": "AlexaImage", 156 | "alignSelf": "center", 157 | "imageSource": "${assets.cake}", 158 | "imageRoundedCorner": false, 159 | "imageScale": "best-fill", 160 | "imageHeight":"40vh", 161 | "imageAspectRatio": "square", 162 | "imageBlurredBackground": false 163 | } 164 | ], 165 | "height": "100%", 166 | "width": "100%", 167 | "when": "${@viewportProfile != @hubRoundSmall}" 168 | }, 169 | { 170 | "type": "Container", 171 | "paddingTop": "75dp", 172 | "items": [ 173 | { 174 | "type": "AlexaBackground", 175 | "backgroundImageSource": "${assets.backgroundURL}" 176 | }, 177 | { 178 | "type": "cakeTimeText", 179 | "startText":"${text.start}", 180 | "middleText":"${text.middle}", 181 | "endText":"${text.end}" 182 | } 183 | ], 184 | "height": "100%", 185 | "width": "100%", 186 | "when": "${@viewportProfile == @hubRoundSmall}" 187 | } 188 | ] 189 | } 190 | } -------------------------------------------------------------------------------- /modules/code/module5/documents/launchDocumentData.json: -------------------------------------------------------------------------------- 1 | { 2 | "text": { 3 | "start": 
"Welcome", 4 | "middle": "to", 5 | "end": "Cake Time!" 6 | }, 7 | "assets": { 8 | "video": "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4", 9 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 10 | "backgroundURL": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/lights_1280x800.png?raw=true" 11 | } 12 | } -------------------------------------------------------------------------------- /modules/code/module5/documents/my-cakewalk-apl-package-v2.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "APL", 3 | "version": "1.1", 4 | "settings": {}, 5 | "theme": "dark", 6 | "import": [ 7 | { 8 | "name": "alexa-layouts", 9 | "version": "1.1.0" 10 | } 11 | ], 12 | "resources": [], 13 | "styles": { 14 | "bigText": { 15 | "values": [ 16 | { 17 | "fontSize": "72dp", 18 | "color": "black", 19 | "textAlign": "center" 20 | } 21 | ] 22 | } 23 | }, 24 | "onMount": [{ 25 | "type": "Parallel", 26 | "commands": [ 27 | { 28 | "type": "AnimateItem", 29 | "easing": "ease-in-out", 30 | "duration": 2000, 31 | "componentId": "textTop", 32 | "value": [ 33 | { 34 | "property": "opacity", 35 | "to": 1 36 | }, 37 | { 38 | "property": "transform", 39 | "from": [ 40 | { 41 | "translateX": 800 42 | } 43 | ], 44 | "to": [ 45 | { 46 | "translateX": 0 47 | } 48 | ] 49 | } 50 | ] 51 | }, 52 | { 53 | "type": "AnimateItem", 54 | "easing": "ease-in-out", 55 | "duration": 2000, 56 | "componentId": "textMiddle", 57 | "value": [ 58 | { 59 | "property": "opacity", 60 | "to": 1 61 | }, 62 | { 63 | "property": "transform", 64 | "from": [ 65 | { 66 | "translateX": -400 67 | } 68 | ], 69 | "to": [ 70 | { 71 | "translateX": 0 72 | } 73 | ] 74 | } 75 | ] 76 | }, 77 | { 78 | "type": "AnimateItem", 79 | "easing": "ease-in-out", 80 | "duration": 2000, 81 | "componentId": "textBottom", 82 | "value": [ 83 | { 84 | "property": "opacity", 85 | "to": 1 86 | }, 87 | { 88 | "property": "transform", 89 | "from": [ 90 | { 91 | "translateY": 1200 92 | } 93 | ], 94 | "to": [ 95 | { 96 | "translateX": 0 97 | } 98 | ] 99 | } 100 | ] 101 | } 102 | ] 103 | }], 104 | "graphics": {}, 105 | "commands": {}, 106 | "layouts": { 107 | "cakeTimeText": { 108 | "description": "A basic text layout with start, middle and end.", 109 | "parameters":[ 110 | { 111 | "name": "startText", 112 | "type": "string" 113 | }, 114 | { 115 | "name": "middleText", 116 | "type": "string" 117 | }, 118 | { 119 | "name": "endText", 120 | "type": "string" 121 | } 122 | ], 123 | "items": [ 124 | { 125 | "type": "Container", 126 | "items": [ 127 | { 128 | "type": "Text", 129 | "style": "bigText", 130 | "text": "${startText}", 131 | "id":"textTop" 132 | }, 133 | { 134 | "type": "Text", 135 | "style": "bigText", 136 | "text": "${middleText}", 137 | "id": "textMiddle" 138 | }, 139 | { 140 | "type": "Text", 141 | "style": "bigText", 142 | "text": "${endText}", 143 | "id": "textBottom" 144 | } 145 | ] 146 | } 147 | ] 148 | } 149 | } 150 | } -------------------------------------------------------------------------------- /modules/code/module5/en-US.json: -------------------------------------------------------------------------------- 1 | { 2 | "interactionModel": { 3 | "languageModel": { 4 | "invocationName": "cake time", 5 | "intents": [ 6 | { 7 | "name": "AMAZON.CancelIntent", 8 | "samples": [] 9 | }, 10 | { 11 | "name": "AMAZON.HelpIntent", 12 | "samples": [] 13 | }, 14 | { 15 | "name": 
"AMAZON.StopIntent", 16 | "samples": [] 17 | }, 18 | { 19 | "name": "AMAZON.NavigateHomeIntent", 20 | "samples": [] 21 | }, 22 | { 23 | "name": "CaptureBirthdayIntent", 24 | "slots": [ 25 | { 26 | "name": "month", 27 | "type": "AMAZON.Month" 28 | }, 29 | { 30 | "name": "day", 31 | "type": "AMAZON.Ordinal" 32 | }, 33 | { 34 | "name": "year", 35 | "type": "AMAZON.FOUR_DIGIT_NUMBER" 36 | } 37 | ], 38 | "samples": [ 39 | "{month} {day}", 40 | "{month} {day} {year}", 41 | "{month} {year}", 42 | "I was born on {month} {day} ", 43 | "I was born on {month} {day} {year}", 44 | "I was born on {month} {year}" 45 | ] 46 | } 47 | ], 48 | "types": [] 49 | }, 50 | "dialog": { 51 | "intents": [ 52 | { 53 | "name": "CaptureBirthdayIntent", 54 | "confirmationRequired": false, 55 | "prompts": {}, 56 | "slots": [ 57 | { 58 | "name": "month", 59 | "type": "AMAZON.Month", 60 | "confirmationRequired": false, 61 | "elicitationRequired": true, 62 | "prompts": { 63 | "elicitation": "Elicit.Slot.303899476312.795077103633" 64 | } 65 | }, 66 | { 67 | "name": "day", 68 | "type": "AMAZON.Ordinal", 69 | "confirmationRequired": false, 70 | "elicitationRequired": true, 71 | "prompts": { 72 | "elicitation": "Elicit.Slot.303899476312.985837334781" 73 | } 74 | }, 75 | { 76 | "name": "year", 77 | "type": "AMAZON.FOUR_DIGIT_NUMBER", 78 | "confirmationRequired": false, 79 | "elicitationRequired": true, 80 | "prompts": { 81 | "elicitation": "Elicit.Slot.303899476312.27341833344" 82 | } 83 | } 84 | ] 85 | } 86 | ], 87 | "delegationStrategy": "ALWAYS" 88 | }, 89 | "prompts": [ 90 | { 91 | "id": "Elicit.Slot.303899476312.795077103633", 92 | "variations": [ 93 | { 94 | "type": "PlainText", 95 | "value": "I was born in November. When what were you born?" 96 | }, 97 | { 98 | "type": "PlainText", 99 | "value": "What month were you born?" 100 | } 101 | ] 102 | }, 103 | { 104 | "id": "Elicit.Slot.303899476312.985837334781", 105 | "variations": [ 106 | { 107 | "type": "PlainText", 108 | "value": "I was born on the sixth. What day were you born?" 109 | } 110 | ] 111 | }, 112 | { 113 | "id": "Elicit.Slot.303899476312.27341833344", 114 | "variations": [ 115 | { 116 | "type": "PlainText", 117 | "value": "I was born in two thousand fifteen, what year were you born?" 118 | } 119 | ] 120 | } 121 | ] 122 | } 123 | } -------------------------------------------------------------------------------- /modules/code/module5/index.js: -------------------------------------------------------------------------------- 1 | // This sample demonstrates handling intents from an Alexa skill using the Alexa Skills Kit SDK (v2). 2 | // Please visit https://alexa.design/cookbook for additional examples on implementing slots, dialog management, 3 | // session persistence, api calls, and more. 4 | const Alexa = require('ask-sdk-core'); 5 | const persistenceAdapter = require('ask-sdk-s3-persistence-adapter'); 6 | const launchDocument = require('./documents/launchDocument.json'); 7 | const birthdayDocument = require('./documents/birthdayDocument.json'); 8 | 9 | const util = require('./util'); 10 | 11 | const HasBirthdayLaunchRequestHandler = { 12 | canHandle(handlerInput) { 13 | console.log(JSON.stringify(handlerInput.requestEnvelope.request)); 14 | const attributesManager = handlerInput.attributesManager; 15 | const sessionAttributes = attributesManager.getSessionAttributes() || {}; 16 | 17 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 18 | const month = sessionAttributes.hasOwnProperty('month') ? 
sessionAttributes.month : 0; 19 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 20 | 21 | return handlerInput.requestEnvelope.request.type === 'LaunchRequest' && 22 | year && 23 | month && 24 | day; 25 | }, 26 | async handle(handlerInput) { 27 | 28 | const serviceClientFactory = handlerInput.serviceClientFactory; 29 | const deviceId = handlerInput.requestEnvelope.context.System.device.deviceId; 30 | 31 | const attributesManager = handlerInput.attributesManager; 32 | const sessionAttributes = attributesManager.getSessionAttributes() || {}; 33 | 34 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 35 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 36 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 37 | 38 | let userTimeZone; 39 | try { 40 | const upsServiceClient = serviceClientFactory.getUpsServiceClient(); 41 | userTimeZone = await upsServiceClient.getSystemTimeZone(deviceId); 42 | } catch (error) { 43 | if (error.name !== 'ServiceError') { 44 | return handlerInput.responseBuilder.speak("There was a problem connecting to the service.").getResponse(); 45 | } 46 | console.log('error', error.message); 47 | } 48 | console.log('userTimeZone', userTimeZone); 49 | 50 | const oneDay = 24*60*60*1000; 51 | 52 | // getting the current date with the time 53 | const currentDateTime = new Date(new Date().toLocaleString("en-US", {timeZone: userTimeZone})); 54 | // removing the time from the date because it affects our difference calculation 55 | const currentDate = new Date(currentDateTime.getFullYear(), currentDateTime.getMonth(), currentDateTime.getDate()); 56 | let currentYear = currentDate.getFullYear(); 57 | 58 | console.log('currentDateTime:', currentDateTime); 59 | console.log('currentDate:', currentDate); 60 | 61 | // getting the next birthday 62 | let nextBirthday = Date.parse(`${month} ${day}, ${currentYear}`); 63 | 64 | // adjust the nextBirthday by one year if the current date is after their birthday 65 | if (currentDate.getTime() > nextBirthday) { 66 | nextBirthday = Date.parse(`${month} ${day}, ${currentYear + 1}`); 67 | currentYear++; 68 | } 69 | 70 | // setting the default speakOutput to Happy xth Birthday!! 71 | // Alexa will automatically correct the ordinal for you. 72 | // no need to worry about when to use st, th, rd 73 | const yearsOld = currentYear - year; 74 | let speakOutput = `Happy ${yearsOld}th birthday!`; 75 | let isBirthday = true; 76 | const diffDays = Math.round(Math.abs((currentDate.getTime() - nextBirthday)/oneDay)); 77 | 78 | if (currentDate.getTime() !== nextBirthday) { 79 | isBirthday = false; 80 | speakOutput = `Welcome back. It looks like there are ${diffDays} days until your ${currentYear - year}th birthday.` 81 | } 82 | 83 | const numberDaysString = diffDays === 1 ? 
"1 day": diffDays + " days"; 84 | // Add APL directive to response 85 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 86 | if (currentDate.getTime() !== nextBirthday) { 87 | // Create Render Directive 88 | handlerInput.responseBuilder.addDirective({ 89 | type: 'Alexa.Presentation.APL.RenderDocument', 90 | token: '', 91 | document: launchDocument, 92 | datasources: { 93 | text: { 94 | type: 'object', 95 | start: "Your Birthday", 96 | middle: "is in", 97 | end: numberDaysString 98 | }, 99 | assets: { 100 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 101 | backgroundURL: getBackgroundURL(handlerInput, "lights") 102 | } 103 | } 104 | }); 105 | } else { 106 | // Create Render Directive 107 | handlerInput.responseBuilder.addDirective({ 108 | type: 'Alexa.Presentation.APL.RenderDocument', 109 | token: 'birthdayToken', 110 | document: birthdayDocument, 111 | datasources: { 112 | text: { 113 | type: 'object', 114 | start: "Happy Birthday!", 115 | middle: "From,", 116 | end: "Alexa <3" 117 | }, 118 | assets: { 119 | video: "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4", 120 | backgroundURL: getBackgroundURL(handlerInput, "confetti") 121 | } 122 | } 123 | }).addDirective({ 124 | type: "Alexa.Presentation.APL.ExecuteCommands", 125 | token: "birthdayToken", 126 | commands: [{ 127 | type: "ControlMedia", 128 | componentId: "birthdayVideo", 129 | command: "play" 130 | }] 131 | }); 132 | } 133 | } 134 | 135 | return handlerInput.responseBuilder 136 | .speak(speakOutput) 137 | .getResponse(); 138 | } 139 | }; 140 | 141 | const LaunchRequestHandler = { 142 | canHandle(handlerInput) { 143 | return handlerInput.requestEnvelope.request.type === 'LaunchRequest'; 144 | }, 145 | handle(handlerInput) { 146 | const speakOutput = 'Hello! Welcome to Cake Time. What is your birthday?'; 147 | const repromptOutput = 'I was born Nov. 6th, 2015. When were you born?'; 148 | 149 | // Add APL directive to response 150 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 151 | // Create Render Directive 152 | handlerInput.responseBuilder.addDirective({ 153 | type: 'Alexa.Presentation.APL.RenderDocument', 154 | document: launchDocument, 155 | datasources: { 156 | text: { 157 | type: 'object', 158 | start: "Welcome", 159 | middle: "to", 160 | end: "Cake Time!" 
161 | }, 162 | assets: { 163 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 164 | backgroundURL: getBackgroundURL(handlerInput, "lights") 165 | } 166 | } 167 | }); 168 | } 169 | 170 | const headerMessage = "Welcome to Cake Time!"; 171 | 172 | return handlerInput.responseBuilder 173 | .speak(speakOutput) 174 | .reprompt(repromptOutput) 175 | .getResponse(); 176 | } 177 | }; 178 | const BirthdayIntentHandler = { 179 | canHandle(handlerInput) { 180 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 181 | && handlerInput.requestEnvelope.request.intent.name === 'CaptureBirthdayIntent'; 182 | }, 183 | async handle(handlerInput) { 184 | const year = handlerInput.requestEnvelope.request.intent.slots.year.value; 185 | const month = handlerInput.requestEnvelope.request.intent.slots.month.value; 186 | const day = handlerInput.requestEnvelope.request.intent.slots.day.value; 187 | 188 | const attributesManager = handlerInput.attributesManager; 189 | 190 | const birthdayAttributes = { 191 | "year": year, 192 | "month": month, 193 | "day": day 194 | 195 | }; 196 | attributesManager.setPersistentAttributes(birthdayAttributes); 197 | await attributesManager.savePersistentAttributes(); 198 | 199 | const speakOutput = `Thanks, I'll remember that you were born ${month} ${day} ${year}.`; 200 | 201 | return handlerInput.responseBuilder 202 | .speak(speakOutput) 203 | .withShouldEndSession(true) 204 | .getResponse(); 205 | } 206 | }; 207 | 208 | const HelpIntentHandler = { 209 | canHandle(handlerInput) { 210 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 211 | && handlerInput.requestEnvelope.request.intent.name === 'AMAZON.HelpIntent'; 212 | }, 213 | handle(handlerInput) { 214 | const speakOutput = 'You can say hello to me! How can I help?'; 215 | 216 | return handlerInput.responseBuilder 217 | .speak(speakOutput) 218 | .reprompt(speakOutput) 219 | .getResponse(); 220 | } 221 | }; 222 | const CancelAndStopIntentHandler = { 223 | canHandle(handlerInput) { 224 | return handlerInput.requestEnvelope.request.type === 'IntentRequest' 225 | && (handlerInput.requestEnvelope.request.intent.name === 'AMAZON.CancelIntent' 226 | || handlerInput.requestEnvelope.request.intent.name === 'AMAZON.StopIntent'); 227 | }, 228 | handle(handlerInput) { 229 | const speakOutput = 'Goodbye!'; 230 | return handlerInput.responseBuilder 231 | .speak(speakOutput) 232 | .getResponse(); 233 | } 234 | }; 235 | const SessionEndedRequestHandler = { 236 | canHandle(handlerInput) { 237 | return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest'; 238 | }, 239 | handle(handlerInput) { 240 | // Any cleanup logic goes here. 241 | return handlerInput.responseBuilder.getResponse(); 242 | } 243 | }; 244 | 245 | // The intent reflector is used for interaction model testing and debugging. 246 | // It will simply repeat the intent the user said. You can create custom handlers 247 | // for your intents by defining them above, then also adding them to the request 248 | // handler chain below. 
249 | const IntentReflectorHandler = { 250 | canHandle(handlerInput) { 251 | return handlerInput.requestEnvelope.request.type === 'IntentRequest'; 252 | }, 253 | handle(handlerInput) { 254 | const intentName = handlerInput.requestEnvelope.request.intent.name; 255 | const speakOutput = `You just triggered ${intentName}`; 256 | 257 | return handlerInput.responseBuilder 258 | .speak(speakOutput) 259 | //.reprompt('add a reprompt if you want to keep the session open for the user to respond') 260 | .getResponse(); 261 | } 262 | }; 263 | 264 | // Generic error handling to capture any syntax or routing errors. If you receive an error 265 | // stating the request handler chain is not found, you have not implemented a handler for 266 | // the intent being invoked or included it in the skill builder below. 267 | const ErrorHandler = { 268 | canHandle() { 269 | return true; 270 | }, 271 | handle(handlerInput, error) { 272 | console.log(`~~~~ Error handled: ${error.message}`); 273 | const speakOutput = `Sorry, I couldn't understand what you said. Please try again.`; 274 | 275 | return handlerInput.responseBuilder 276 | .speak(speakOutput) 277 | .reprompt(speakOutput) 278 | .getResponse(); 279 | } 280 | }; 281 | 282 | const LoadBirthdayInterceptor = { 283 | async process(handlerInput) { 284 | const attributesManager = handlerInput.attributesManager; 285 | const sessionAttributes = await attributesManager.getPersistentAttributes() || {}; 286 | 287 | const year = sessionAttributes.hasOwnProperty('year') ? sessionAttributes.year : 0; 288 | const month = sessionAttributes.hasOwnProperty('month') ? sessionAttributes.month : 0; 289 | const day = sessionAttributes.hasOwnProperty('day') ? sessionAttributes.day : 0; 290 | 291 | if (year && month && day) { 292 | attributesManager.setSessionAttributes(sessionAttributes); 293 | } 294 | } 295 | } 296 | 297 | function getBackgroundURL(handlerInput, fileNamePrefix) { 298 | const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope); 299 | const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/"+fileNamePrefix+"_1920x1080.png" : "Media/"+fileNamePrefix+"_1280x800.png"; 300 | return util.getS3PreSignedUrl(backgroundKey); 301 | } 302 | 303 | // This handler acts as the entry point for your skill, routing all request and response 304 | // payloads to the handlers above. Make sure any new handlers or interceptors you've 305 | // defined are included below. The order matters - they're processed top to bottom. 
306 | exports.handler = Alexa.SkillBuilders.custom() 307 | .withPersistenceAdapter( 308 | new persistenceAdapter.S3PersistenceAdapter({bucketName:process.env.S3_PERSISTENCE_BUCKET}) 309 | ) 310 | .addRequestHandlers( 311 | HasBirthdayLaunchRequestHandler, 312 | LaunchRequestHandler, 313 | BirthdayIntentHandler, 314 | HelpIntentHandler, 315 | CancelAndStopIntentHandler, 316 | SessionEndedRequestHandler, 317 | IntentReflectorHandler) // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers 318 | .addErrorHandlers( 319 | ErrorHandler) 320 | .addRequestInterceptors( 321 | LoadBirthdayInterceptor 322 | ) 323 | .withApiClient(new Alexa.DefaultApiClient()) 324 | .lambda(); 325 | -------------------------------------------------------------------------------- /modules/code/module5/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "cake-time", 3 | "version": "0.9.0", 4 | "description": "alexa utility for quickly building skills", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "Amazon Alexa", 10 | "license": "ISC", 11 | "dependencies": { 12 | "ask-sdk-core": "^2.0.7", 13 | "ask-sdk-model": "^1.4.1", 14 | "aws-sdk": "^2.326.0", 15 | "ask-sdk-s3-persistence-adapter": "^2.0.0" 16 | } 17 | } -------------------------------------------------------------------------------- /modules/code/module5/util.js: -------------------------------------------------------------------------------- 1 | const AWS = require('aws-sdk'); 2 | 3 | const s3SigV4Client = new AWS.S3({ 4 | signatureVersion: 'v4' 5 | }); 6 | 7 | module.exports.getS3PreSignedUrl = function getS3PreSignedUrl(s3ObjectKey) { 8 | 9 | const bucketName = process.env.S3_PERSISTENCE_BUCKET; 10 | const s3PreSignedUrl = s3SigV4Client.getSignedUrl('getObject', { 11 | Bucket: bucketName, 12 | Key: s3ObjectKey, 13 | Expires: 60*1 // the Expires is capped for 1 minute 14 | }); 15 | console.log(`Util.s3PreSignedUrl: ${s3ObjectKey} URL ${s3PreSignedUrl}`); 16 | return s3PreSignedUrl; 17 | 18 | } -------------------------------------------------------------------------------- /modules/home.adoc: -------------------------------------------------------------------------------- 1 | :link-caketime: https://developer.amazon.com/en-US/docs/alexa/workshops/build-an-engaging-skill/get-started/index.html[Workshop: Build an Engaging Alexa Skill, window=_blank] 2 | :link-quick-setup: link:quick-start.adoc[quick setup instructions, window=_blank] 3 | 4 | = Welcome 5 | 6 | {blank} 7 | 8 | Welcome to How to Build a Multimodal Alexa Skill, a continuation of the deprecated Cake Time tutorial. To use this tutorial, see the {link-quick-setup} to get the code and overview of the Cake Time Skill for use in this tutorial. If you need to learn the basics of building a skill, refer to the new {link-caketime}. After taking this course, you will have the information needed to create rich multimodal experiences for Alexa using Alexa Presentation Language (APL). The course will teach you the following: 9 | 10 | * link:module1.adoc[Module 1]: Why Build Multimodal Alexa Skills? 
11 | * link:module2.adoc[Module 2]: Build Your First Visuals with the APL Authoring Tool 12 | * link:module3.adoc[Module 3]: Modify Your Backend to Add LaunchRequest Visuals 13 | * link:module4.adoc[Module 4]: Create More Visuals with Less 14 | * link:module5.adoc[Module 5]: Add Animation and Video Control 15 | * link:module6.adoc[Module 6]: Wrapping Up & Resources 16 | 17 | == What is Cake Time? 18 | The Cake Time course is an introductory skill building course where you can build a simple Alexa skill (Cake Time skill) which remembers your birthday. On repeated usage of the Cake Time skill, you'll either hear a Happy Birthday message from Alexa, or a countdown of days until it is your birthday. In this course, we are going to add visuals and animations to teach the fundamentals of https://developer.amazon.com/docs/alexa-presentation-language/understand-apl.html[building with APL, window=_blank]. If you are new to skill building, we recommend you check out the {link-caketime}, but if you are already building Alexa skills or are already familiar with the concepts, then keep reading! Code for the Cake Time skill and instructions to set the skill up will be provided later in the course if you need it. 19 | -------------------------------------------------------------------------------- /modules/images/APLSkillFlowDiagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/APLSkillFlowDiagram.png -------------------------------------------------------------------------------- /modules/images/AuthoringToolLandingPage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/AuthoringToolLandingPage.png -------------------------------------------------------------------------------- /modules/images/EnterSkill.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/EnterSkill.png -------------------------------------------------------------------------------- /modules/images/FinalLaunchScreen.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/FinalLaunchScreen.gif -------------------------------------------------------------------------------- /modules/images/MakeCakeTime.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/MakeCakeTime.gif -------------------------------------------------------------------------------- /modules/images/NewExperienceDialog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/NewExperienceDialog.png -------------------------------------------------------------------------------- /modules/images/S3Access.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/S3Access.png -------------------------------------------------------------------------------- /modules/images/S3Provision.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/S3Provision.png -------------------------------------------------------------------------------- /modules/images/WelcomeToCakeTime.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/WelcomeToCakeTime.png -------------------------------------------------------------------------------- /modules/images/authoringToolWithBirthdayImage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/authoringToolWithBirthdayImage.png -------------------------------------------------------------------------------- /modules/images/birthdayVideo.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/birthdayVideo.gif -------------------------------------------------------------------------------- /modules/images/brokenHelloSpot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/brokenHelloSpot.png -------------------------------------------------------------------------------- /modules/images/createLaunchScreenJSON.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/createLaunchScreenJSON.gif -------------------------------------------------------------------------------- /modules/images/definedEasingCurves.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/definedEasingCurves.png -------------------------------------------------------------------------------- /modules/images/finalBirthdayScreen.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/finalBirthdayScreen.png -------------------------------------------------------------------------------- /modules/images/finalCountdownScreen.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/finalCountdownScreen.png 
-------------------------------------------------------------------------------- /modules/images/finalHelloAPL.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/finalHelloAPL.png -------------------------------------------------------------------------------- /modules/images/finalNoContextScreen.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/finalNoContextScreen.png -------------------------------------------------------------------------------- /modules/images/firstHelloWorld.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/firstHelloWorld.gif -------------------------------------------------------------------------------- /modules/images/firstHelloWorld.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/firstHelloWorld.png -------------------------------------------------------------------------------- /modules/images/helloCakeTimeGUI.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/helloCakeTimeGUI.png -------------------------------------------------------------------------------- /modules/images/interfacesClick.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/interfacesClick.png -------------------------------------------------------------------------------- /modules/images/saveLaunchDocument.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/saveLaunchDocument.png -------------------------------------------------------------------------------- /modules/images/toggleAPL.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexa-samples/skill-sample-nodejs-first-apl-skill/56118004646dff87a209b469ab6459d43c7f0666/modules/images/toggleAPL.png -------------------------------------------------------------------------------- /modules/module1.adoc: -------------------------------------------------------------------------------- 1 | 2 | :imagesdir: ../modules/images 3 | :toc: 4 | 5 | = Why Build Multimodal Alexa Skills? 6 | 7 | {blank} 8 | 9 | Multimodal is the addition of other forms of communication, such as visual aids, to the voice experience that Alexa already provides. While Alexa is always voice first, adding visuals as a secondary mode can greatly enrich your customer's experience for Alexa-enabled devices with screens. 
There are more than 100 million Alexa devices, including Echo devices such as the Echo Spot and Echo Show, FireTV, Fire tablets, and devices from other manufacturers with Alexa built-in, like the Lenovo Smart Tab devices and LG TVs. Adding another mode to your Alexa skill can enhance the experience for your customers on these devices in the following ways.
10 |
11 | == Increase the level of detail in your skill's response
12 | Multimodal skills can provide more information through visual aids, leading to a better customer experience. Voice is a fantastic way to interact with users because it's intuitive and efficient; however, it's not a great choice for presenting complex or large amounts of information at once. Instead, keep the voice response as succinct as possible while providing more detailed information in your visuals.
13 | For example, a weather skill that provides the daily forecast could tell you the temperature and precipitation while, on a screen, it displays a breakdown of richer weather data such as hour-by-hour temperature, humidity, and wind speed. All of that information is potentially important, but it would take far too long for Alexa to read out every stat without many customers getting frustrated and bored.
14 | If the requesting device does not have a screen, more information such as humidity and wind speed can be read out to the customer. In both cases, you would want to allow the customer to ask for more details. Even if a user has a multimodal device, we can't assume they're always looking at the screen. Later in this course, we will cover the specifics of how to determine whether the requesting Alexa device has a screen.
15 |
16 | == Complementary visual aids
17 | Multimodal devices provide an opportunity to add visual branding to your experience. You can add your own brand logo, color palette, and styling to create a unique visual experience for your customers. For example, if you have a business skill, you might want to show a sales summary chart with your skill's logo at the top and set colors representing metrics such as gains and losses.
18 | High-quality visuals improve your skill's brand image in addition to creating a more well-rounded customer experience. For instance, it's common for music skills to display visuals such as album art on devices with a screen while playing music. If the customer looks at the screen, they immediately know which service is playing the music as well as the artist, album, and song. Music skills run for long periods of time, so a visual aid can act as a constant, non-obtrusive reminder of the experience to the customer.
19 | High-quality visuals also provide an important way to teach your customers how to interact with your skill. One way is to use hints: a short, one-sentence message at the bottom of the screen that describes how to perform a specific action. Hint messages can be specific to the interaction the customer is currently having. For example, in a weather skill you may want to provide a hint that describes how the customer can ask for a breakdown of today's wind speed by hour. By using the screen to convey tangential information, you are not impeding the customer or overloading them with information by voice alone.
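For illustration only, here is a minimal sketch of how a hint could be rendered using the pre-built `AlexaFooter` responsive component from the `alexa-layouts` package (which the documents in this course import); the hint text and placement here are hypothetical and not part of the Cake Time skill:

    {
        "type": "AlexaFooter",
        "hintText": "Try, \"Alexa, how windy will it be at noon?\""
    }

Placed as the last item in a document's mainTemplate, a footer like this keeps the suggested follow-up visible without interrupting the voice response.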
20 |
21 | == Rich media experiences
22 | While you always need a voice experience, some skills are used primarily as a way to showcase media. Multimodal interfaces give you the ability to provide videos, images, and animations in conjunction with voice. Consider a skill which displays a user's photos, videos, and photo metadata from the user's hosted account. On a device without a screen, you'd only have it read out the metadata or play the audio portion of a video file. However, on a device with a screen you'd be able to display, play, and search for visual content easily. Multimodal devices give you an opportunity to do this that does not exist otherwise.
23 |
24 | == Alexa Presentation Language
25 | The https://developer.amazon.com/docs/alexa-presentation-language/understand-apl.html[Alexa Presentation Language] (APL) is designed for rendering visuals across the ever-increasing category of Alexa-enabled multimodal devices, allowing you to add graphics, images, slideshows, video, and animations to create a consistent visual experience for your skill. APL gives you reach with a design language that scales across the wide range of Alexa-enabled device types without individually targeting devices. In addition to screens, https://developer.amazon.com/docs/alexa-presentation-language/apl-reference-character-displays.html[APL has variations] to target non-screen multimodal devices such as the https://www.amazon.com/dp/B07N8RPRF7/[Echo Dot with clock]. As the set of device types grows, APL grows with it. By understanding and learning to use Alexa Presentation Language, you will have the knowledge and tools needed to reach the broadest range of multimodal devices.
26 |
27 | === APL skill flow
28 |
29 | image:APLSkillFlowDiagram.png[]
30 |
31 | All Alexa skills follow a flow similar to the diagram above.
32 |
33 | 1. Customers talk to their Alexa-enabled device.
34 | 2. The speech is sent to the Alexa service in the cloud, which translates the voice into text, and the text into an intent.
35 | 3. The intent (with slots) goes to the correct backend, which formulates an appropriate speakOutput along with optional additional directives. These directives each tell the device to perform an action.
36 | 4. The response passes back through the Alexa service, which performs text-to-speech on the skill's speakOutput and renders visuals on the device if the appropriate directive (RenderDocument) is sent.
37 |
38 | NOTE: Some events can trigger requests to the skill's backend in response to user actions. These are similar to the handlers created for an intent or launch request.
39 |
40 | === APL documents
41 |
42 | An APL document holds all of the definitions of UI elements and their visual hierarchy in a RenderDocument directive. It also holds styles associated with those components and points where you can bind data. The visual elements on the screen are made up of APL components and layouts. https://developer.amazon.com/docs/alexa-presentation-language/apl-component.html[Components] are the most basic building blocks of APL and represent small, self-contained UI elements which display on the https://en.wikipedia.org/wiki/Viewport[viewport]. https://developer.amazon.com/docs/alexa-presentation-language/apl-layout.html[Layouts] are used like components, but are not primitive elements. Rather, they combine other layouts and primitive components to create a UI pattern. You can create your own layouts or import pre-defined layouts from other sources.
43 | Every APL document has a "mainTemplate".
This simply represents the start state of the APL screen to be rendered. Here is an example blank APL document with all of its top-level properties: 44 | 45 | { 46 | "type": "APL", 47 | "version": "1.1", 48 | "settings": {}, 49 | "theme": "dark", 50 | "import": [], 51 | "resources": [], 52 | "styles": {}, 53 | "onMount": [], 54 | "graphics": {}, 55 | "commands": {}, 56 | "layouts": {}, 57 | "mainTemplate": { 58 | "parameters": [ 59 | "payload" 60 | ], 61 | "items": [] 62 | } 63 | } 64 | 65 | As you progress through the course, we'll cover the parts of an APL document in more detail. Continue to section 2 where you'll use the APL authoring tool to create a simple APL document for the Cake Time skill's LaunchRequest. 66 | 67 | link:module2.adoc[Next Module (2)] 68 | -------------------------------------------------------------------------------- /modules/module2.adoc: -------------------------------------------------------------------------------- 1 | :imagesdir: ../modules/images 2 | :authoringToolLink: https://developer.amazon.com/alexa/console/ask/displays 3 | :sectnums: 4 | :toc: 5 | 6 | = Build Your First Visuals with the APL Authoring Tool 7 | 8 | {blank} 9 | 10 | In this section, we are going to learn about the authoring tool and use this to create an APL LaunchRequest response document for our Cake Time skill. By the end of this section, you'll simply have a screen that says, "Welcome to Cake Time!," with a small image of a cake. To make this screen, you'll learn about the following: 11 | 12 | - APL image and text components 13 | - Styles 14 | - Data binding basics 15 | - Responsive documents 16 | 17 | Let's get started. 18 | 19 | NOTE: We suggest using Firefox or Chrome for this course. Safari is not supported in the authoring tool. 20 | 21 | == Hello Cake Time 22 | 23 | A. Click {authoringToolLink}[here to go to the authoring tool]. Save this link as a bookmark, since we will be referencing it throughout the course. 24 | B. You will be greeted with a request to see the new experience. Click *Yes*. 25 | + 26 | image:NewExperienceDialog.png[] 27 | + 28 | The new experience will allow you to save your APL Documents to your skill. 29 | C. Find the skill you are using in the dropdown and click *Select*. 30 | + 31 | image:EnterSkill.png[] 32 | + 33 | D. Click the *Create Template*. Now, you'll see a number of different templates. 34 | + 35 | image:AuthoringToolLandingPage.png[] 36 | + 37 | E. We are going to *Start from scratch*. This will give us a basic APL document without any components added to the mainTemplate. 38 | F. In the authoring tool, make sure *GUI* is selected on the left side. This will give us the graphical representation of the structure of the document (under *Layouts*) as well as some text boxes where properties can be filled in. On the right side you will see the components you can drag and drop into place. Now drag in a text component. You'll see something like this. 39 | + 40 | image::firstHelloWorld.gif[] 41 | + 42 | G. Let's add some text, "Hello Cake Time!". Click on the text component under the layouts section. 43 | H. Scroll down the middle section until you find the *text* property of the text component. Change this message to `Hello Cake Time!`. 44 | I. Above this, find the fontSize property and change this to 72dp. We want to increase the text size to account for our customers which are looking at the device from far away. 45 | Now the text is outside of the blue bounding box...Uh oh! 
Because the text is bound to such a small rendering box, it no longer displays the text properly with such a large font. Let's fix this. 46 | J. Scroll down to the *Width* section and find the *width* property of our Text component. Delete the value in the width text box, so it will default to auto. 47 | K. Do the same with *height* under the *Height* section. Auto is a better setting to use since it will automatically scale the size of the bounding box for the text to match the content. This will help us later if we choose to modify the font size in the future as we did above. 48 | 49 | You should now see something like this. 50 | 51 | image::helloCakeTimeGUI.png[] 52 | 53 | Congratulations! You have a functioning hello cake time document. You can see the document output under the *APL* tab in the sidebar. 54 | 55 | For reference here is the JSON that you should have. 56 | 57 | { 58 | "type": "APL", 59 | "version": "1.1", 60 | "settings": {}, 61 | "theme": "dark", 62 | "import": [], 63 | "resources": [], 64 | "styles": {}, 65 | "onMount": [], 66 | "graphics": {}, 67 | "commands": {}, 68 | "layouts": {}, 69 | "mainTemplate": { 70 | "parameters": [ 71 | "payload" 72 | ], 73 | "items": [ 74 | { 75 | "type": "Container", 76 | "items": [ 77 | { 78 | "type": "Text", 79 | "paddingTop": "12dp", 80 | "paddingBottom": "12dp", 81 | "fontSize": "72dp", 82 | "text": "Hello Cake Time!" 83 | } 84 | ], 85 | "height": "100%", 86 | "width": "100%" 87 | } 88 | ] 89 | } 90 | } 91 | 92 | We created a text component and then modified some properties on it using the GUI. To see the full set of text component properties, https://developer.amazon.com/docs/alexa-presentation-language/apl-text.html[check out the text component docs, window=_blank]. If you looked at that page, you will see that paddingTop and paddingBottom are missing. That is because all components also inherit properties from their parents. Here is the https://developer.amazon.com/docs/alexa-presentation-language/apl-component.html[full list of style properties, window=_blank] that all components share. 93 | 94 | The authoring tool also created a container for us. https://developer.amazon.com/docs/alexa-presentation-language/apl-container.html[Containers, window=_blank] are components that house other components in an alignment of "column" or "row". Since containers are also components, they have the basic component properties. Containers are essential for building responsive APL documents. Complex documents will use a mix of row and column containers to properly lay out components in a way which scales nicely across devices. 95 | 96 | == Hello Cake Time with style 97 | 98 | Some of the component properties can be defined using https://developer.amazon.com/docs/alexa-presentation-language/apl-style-definition-and-evaluation.html[styles, window=_blank]. Styles are a named set of properties that can be reused across components. 99 | 100 | NOTE: Not all properties can be styled on all components. https://developer.amazon.com/docs/alexa-presentation-language/apl-styled-properties.html[Here is the full list of style-able properties for each component type., window=_blank] 101 | 102 | A. Click the *styles* sidebar. You'll see an empty set of parenthesis. Now, replace the empty set with this: 103 | + 104 | { 105 | "bigText": { 106 | "values": [ 107 | { 108 | "fontSize": "72dp" 109 | } 110 | ] 111 | } 112 | } 113 | + 114 | B. 
Click back into the APL tab in the sidebar and you will see your document has been updated with the styles added to the styles section. 115 | C. Now, let's modify the Text item to delete the `fontSize` property and add the following: 116 | + 117 | "style": "bigText" 118 | + 119 | You will see that the properties are still observed since this is now pulling from the style you defined. You can test this by changing the fontSize property in the style block. 120 | Your APL code will now look like this: 121 | + 122 | image::finalHelloAPL.png[] 123 | + 124 | Let's take this a step further and center our text using styles. 125 | D. In the styles section, let's add the https://developer.amazon.com/docs/alexa-presentation-language/apl-text.html#textalign[textAlign, window=_blank] property and set this to centered. 126 | + 127 | "textAlign": "center" 128 | + 129 | This will leave you with a style blob looking like: 130 | + 131 | { 132 | "bigText": { 133 | "values": [ 134 | { 135 | "fontSize": "72dp", 136 | "textAlign": "center" 137 | } 138 | ] 139 | } 140 | } 141 | + 142 | Even though you have not changed the actual text component, since it is using the bigText style, this is now applied to the Text component. 143 | 144 | == Data binding 145 | 146 | Did you notice the *Data* button? This simulates the data source that can be a part of the `Alexa.Presentation.APL.RenderDocument` directive which is what you send from your skill backend to render the document. We'll come back to that later, but first, let's look at how to build our document with data sources. 147 | 148 | To reference data in a data source, you will first need to pass a parameter into your APL document. In earlier versions of APL, the entire data source was bound to a single parameter which defaulted to the name, "payload". Now, however, you can pass in multiple parameters which are defined in your data source as long as none of the parameters are called "payload". Using a parameter named "payload" reverts to this old behavior for backwards-compatibility reasons, but it is not recommended to use the legacy naming. If you look at your current APL document, you will see the default authoring tool parameter name of "payload". We will need to change this to match our data parameters. Let's add and use a simple data source. 149 | 150 | A. Inside the `mainTemplate.parameters` array, replace the word "payload" with "text". This will leave you with: 151 | + 152 | "mainTemplate": { 153 | "parameters": [ 154 | "text" 155 | ] 156 | ... 157 | } 158 | + 159 | Now that we have our parameter passed to the document, we can reference it. The data source is a JSON representation of key value pairs. We can nest this object however makes sense for our application. Now, let's add another Text component which will use a data source and the style we defined. To reference the data, you will write an expression like, `${parameterName.YourDefinedObject}`. Let's modify our *APL* JSON. 160 | + 161 | B. Add the following inside the container's items array, underneath the existing text object: 162 | + 163 | { 164 | "type": "Text", 165 | "style": "bigText", 166 | "text": "${text.middle}" 167 | }, 168 | { 169 | "type": "Text", 170 | "style": "bigText", 171 | "text": "${text.end}" 172 | } 173 | + 174 | C. While we're at it, let's change the text data in our very first text component to `${text.start}`. 175 | Wait a minute... Where did that go? The text disappeared because we have no data in the data source we are referencing. Let's fix this using that *Data* tab. 
176 | D. After clicking the *Data* button, you'll see an empty dataset `{}`. We'll need to add data which follows the structure we set with the parameter we named "text". So we have a "text" object with "start", "middle", and "end" fields. 177 | E. Add the following to the *Data* section of the authoring tool: 178 | + 179 | { 180 | "text": { 181 | "start": "Welcome", 182 | "middle": "to", 183 | "end": "Cake Time!" 184 | } 185 | } 186 | 187 | The data JSON object represents variable data in the document. We are going to reuse this layout later to render similarly structured text with new data. This technique will allow you to more easily localize the skill since all of the localization logic can live in the backend. In addition, we are going to leverage this functionality to reuse our APL document. You'll see the following: 188 | 189 | image::WelcomeToCakeTime.png[] 190 | 191 | Now, we have a set of reusable styles across this APL document, and we learned about making a screen using data binding. Let's add an image of a birthday cake. 192 | 193 | == Adding a birthday cake 194 | 195 | We'll need to add an image component and use data binding. Image components use a URL pointing to the resource that is storing the image. However, Image is a primitive component. Scaling an image across all of the viewport sizes would take a lot of effort and multiple image resolutions since it does not auto-scale. Instead, use the https://developer.amazon.com/docs/alexa-presentation-language/apl-alexa-image-layout.html[AlexaImage, window=_blank] responsive component so we can use a single image that will scale across all device resolutions. 196 | 197 | To use the AlexaImage component, we'll need to add an import. Imports allow you to reference layouts, styles, and resources defined in other https://developer.amazon.com/docs/alexa-presentation-language/apl-package.html[packages, window=_blank]. We are going to use a standard package called https://developer.amazon.com/docs/alexa-presentation-language/apl-layouts-overview.html#import-the-alexa-layouts-package[`alexa-layouts`, window=_blank]. The import looks like this: 198 | 199 | { 200 | "name": "alexa-layouts", 201 | "version": "1.1.0" 202 | } 203 | 204 | A. Add the import object above to your import list in your APL document's import section. Afterwards, this will look like: 205 | + 206 | { 207 | "type": "APL", 208 | "version": "1.1", 209 | "settings": {}, 210 | "theme": "dark", 211 | "import": [ 212 | { 213 | "name": "alexa-layouts", 214 | "version": "1.1.0" 215 | } 216 | ], 217 | ... 218 | } 219 | + 220 | The alexa-layouts package is important for creating https://developer.amazon.com/docs/alexa-presentation-language/apl-build-responsive-apl-documents.html[responsive layouts, window=_blank]. The AlexaImage component has https://developer.amazon.com/docs/alexa-presentation-language/apl-alexa-image-layout.html#alexaimage-parameters[many parameters, window=_blank], most of which are optional. 221 | B. Add the following image block underneath the last text component. It should be nested within the existing Container, so be sure to put it in the same "items" array as your text components.
222 | + 223 | { 224 | "type": "AlexaImage", 225 | "alignSelf": "center", 226 | "imageSource": "${assets.cake}", 227 | "imageRoundedCorner": false, 228 | "imageScale": "best-fill", 229 | "imageHeight":"40vh", 230 | "imageAspectRatio": "square", 231 | "imageBlurredBackground": false 232 | } 233 | + 234 | Let's break this down: 235 | + 236 | - For the fields we are using in the AlexaImage, imageSource is important since it specifies the URL where the image is hosted. 237 | - We give it a square aspect ratio to match the source image, so we maintain our image resolution. 238 | - When the image scales, it will use the best-fill strategy. 239 | - To control the size, we are using the imageHeight property and set it to 40% of the viewport height. 240 | + 241 | To learn more about each of these, check out the parameters in https://developer.amazon.com/docs/alexa-presentation-language/apl-alexa-image-layout.html#alexaimage-parameters[the AlexaImage tech doc, window=_blank]. 242 | If you look at the tech docs, you'll notice no reference to alignSelf. This property exists and works because the component is a child component of a container. alignSelf will override the container's alignment for that child only. There are https://developer.amazon.com/docs/alexa-presentation-language/apl-container.html#container-children[some other properties, window=_blank] that are added since this is a child of a container, too. 243 | This relies on a new "assets" object (with a "cake" key) being added to the data section. The new data section will look like: 244 | + 245 | { 246 | "text": { 247 | "start": "Welcome", 248 | "middle": "to", 249 | "end": "Cake Time!" 250 | }, 251 | "assets": { 252 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true" 253 | } 254 | } 255 | + 256 | C. Go to the *Data* tab and update your data with the new "assets" object. 257 | D. The last step is to add our new mainTemplate parameter, "assets". Go back to the *APL* tab and add this to the mainTemplate.parameters list, leaving you with: 258 | + 259 | "mainTemplate": { 260 | "parameters": [ 261 | "text", 262 | "assets" 263 | ] 264 | ... 265 | } 266 | + 267 | Then, you'll see: 268 | + 269 | image::authoringToolWithBirthdayImage.png[] 270 | 271 | How does it look? Delicious!? This is starting to look more like a birthday-themed skill. Let's make this work for the other viewport profiles, too. 272 | 273 | == Making a responsive document 274 | 275 | Below the simulator screen that we have been viewing our changes in, you'll see some Echo devices with screens. We have been using the "Medium Hub" device (which matches the Echo Show screen parameters) so far, but there are many other supported devices. Now, let's try out our document on other screens. 276 | 277 | A. Click the various symbols on the top and take note of any issues you find. 278 | + 279 | .The simulator device types 280 | * Small Hub [Round] (480x480) 281 | * Small Hub [Landscape] (960x480) 282 | * Medium Hub (1024x600) 283 | * Large Hub (1280x800) 284 | * Extra Large TV (1920x1080) 285 | * Add Custom Device (any x any) 286 | + 287 | The last option gives you the ability to simulate rendering at whichever screen resolution you want. 288 | + 289 | WARNING: spoiler below 290 | + 291 | .Well, that doesn't look quite right... 292 | image::brokenHelloSpot.png[Broken Spot Image] 293 | + 294 | B. Our wording is cut off on the Small Hub (Round) device screen.
Let's fix this using the https://developer.amazon.com/docs/alexa-presentation-language/apl-component.html#when[when, window=_blank] property. This property allows for boolean evaluation. If true, it will show a component and its children; if false, it will not. 295 | In addition to `when`, we will be using https://developer.amazon.com/docs/alexa-presentation-language/apl-resources.html[Resources, window=_blank] from the alexa-layouts import. Resources are simply named constants which are referenced with `@`. This time, we will use the alexa-layouts package's definitions of constants representing the above device types and viewport profiles. It allows you to create statements with predefined viewport-specific constants such as: 296 | + 297 | ${@viewportProfile == @hubLandscapeLarge} 298 | + 299 | rather than 300 | + 301 | ${viewport.width == "1280dp"} 302 | + 303 | There is no difference between these statements for an Echo Show (2nd generation) device request. But, let's consider there is a new device with a 1300dp-wide screen. Should we add another statement to this conditional? What about for a third device in a similar class? 304 | By using the Amazon-defined resources, we will have better-scaling APL documents without even knowing all the possible screen size permutations. This is because `@hubLandscapeLarge` represents screens between 1280 and 1920 wide, so it encompasses more devices of that class. With the hard-coded width check, a device in that same class whose screen does not exactly match the width we are checking would not render anything. 305 | C. Since our document looks good on all devices except for the round small hub device, let's add in a new set of components for that one. Click on the Small Round Hub icon. 306 | D. Since a false evaluation will lead to no child components displaying, let's add the following statement at the top of our first container. 307 | + 308 | "when":"${@viewportProfile != @hubRoundSmall}" 309 | + 310 | E. You should see a black screen! Check it out on the rectangular screens and your components will render. Since we excluded the @hubRoundSmall class from this container and its children, we will need to make a new container which will render when we are on a @hubRoundSmall device. 311 | F. Now, under that first container, duplicate the container and its child Text components and add the copy to the items list of the mainTemplate. You'll want to add the inverse of the statement above to this block: 312 | + 313 | "when":"${@viewportProfile == @hubRoundSmall}" 314 | + 315 | G. Now, we'll fix the display. This can be achieved just by adding some padding to the top of the first text component. 316 | + 317 | "paddingTop": "75dp", 318 | + 319 | H. Next, remove all of the other padding values from the text components in this container. 320 | I. Then, remove the cake image. 321 | Now your display should look correct on each of the device types. Check your work across the different classes to make sure it looks right to you. 322 | J. Save your APL document as `launchDocument`. We will use this JSON in the next section. 323 | + 324 | image:saveLaunchDocument.png[] 325 | 326 | As an aside, there are a number of different ways we could have fixed this document for the small round hub profile. We could just keep the image and drop the text, or move the image to the background of the small round hub. In terms of structure, we could keep everything in one container and conditionally add the padding and hide the image to provide the same experience.
The benefit to this technical approach is that we will not get newly added components by default in the future. Which also means as we iterate and change the rectangular hubs, we will not be modifying the structure of our small round hub screens. Since the screen is fundamentally different from others especially in our design, we forked it. Feel free to take a different approach for other skills if it suits your designs better! 327 | 328 | The final APL Document JSON for reference: 329 | 330 | { 331 | "type": "APL", 332 | "version": "1.1", 333 | "settings": {}, 334 | "theme": "dark", 335 | "import": [ 336 | { 337 | "name": "alexa-layouts", 338 | "version": "1.1.0" 339 | } 340 | ], 341 | "resources": [], 342 | "styles": { 343 | "bigText": { 344 | "values": [ 345 | { 346 | "fontSize": "72dp", 347 | "textAlign": "center" 348 | } 349 | ] 350 | } 351 | }, 352 | "onMount": [], 353 | "graphics": {}, 354 | "commands": {}, 355 | "layouts": {}, 356 | "mainTemplate": { 357 | "parameters": [ 358 | "text", 359 | "assets" 360 | ], 361 | "items": [ 362 | { 363 | "type": "Container", 364 | "when":"${@viewportProfile != @hubRoundSmall}", 365 | "items": [ 366 | { 367 | "type": "Text", 368 | "style": "bigText", 369 | "paddingTop": "12dp", 370 | "paddingBottom": "12dp", 371 | "text": "${text.start}" 372 | }, 373 | { 374 | "type": "Text", 375 | "style": "bigText", 376 | "paddingTop": "12dp", 377 | "paddingBottom": "12dp", 378 | "text": "${text.middle}" 379 | }, 380 | { 381 | "type": "Text", 382 | "style": "bigText", 383 | "paddingTop": "12dp", 384 | "paddingBottom": "12dp", 385 | "text": "${text.end}" 386 | }, 387 | { 388 | "type": "AlexaImage", 389 | "alignSelf": "center", 390 | "imageSource": "${assets.cake}", 391 | "imageRoundedCorner": false, 392 | "imageScale": "best-fill", 393 | "imageHeight":"40vh", 394 | "imageAspectRatio": "square", 395 | "imageBlurredBackground": false 396 | } 397 | ], 398 | "height": "100%", 399 | "width": "100%" 400 | }, 401 | { 402 | "type": "Container", 403 | "when":"${@viewportProfile == @hubRoundSmall}", 404 | "items": [ 405 | { 406 | "type": "Text", 407 | "style": "bigText", 408 | "paddingTop": "75dp", 409 | "text": "${text.start}" 410 | }, 411 | { 412 | "type": "Text", 413 | "style": "bigText", 414 | "text": "${text.middle}" 415 | }, 416 | { 417 | "type": "Text", 418 | "style": "bigText", 419 | "text": "${text.end}" 420 | } 421 | ], 422 | "height": "100%", 423 | "width": "100%" 424 | } 425 | ] 426 | } 427 | } 428 | 429 | Let's put this document to use in the next section. 
430 | 431 | https://github.com/alexa/skill-sample-nodejs-first-apl-skill/tree/master/modules/code/module2[Complete code in Github, window=_blank] 432 | 433 | link:module1.adoc[Previous Module (1)] 434 | link:module3.adoc[Next Module (3)] 435 | -------------------------------------------------------------------------------- /modules/module3.adoc: -------------------------------------------------------------------------------- 1 | 2 | :link-caketime: https://developer.amazon.com/en-US/docs/alexa/workshops/build-an-engaging-skill/get-started/index.html[Workshop: Build an Engaging Alexa Skill, window=_blank] 3 | :link-quick-setup: link:quick-start.adoc[quick setup instructions, window=_blank] 4 | :link-S3-assets: https://github.com/alexa/skill-sample-nodejs-first-apl-skill/tree/master/modules/assets[from this page., window=_blank] 5 | :authoringToolLink: https://developer.amazon.com/alexa/console/ask/displays 6 | :sectnums: 7 | :toc: 8 | 9 | :imagesdir: ../modules/images 10 | 11 | = Modify Your Backend to Add LaunchRequest Visuals 12 | 13 | {blank} 14 | 15 | In this section, you will learn how to modify your backend to safely add APL visuals to your response. At the end, you will have enhanced your Cake Time Launch request with the APL visuals you built in the last section. In addition, you will learn how to use S3 in the Alexa Hosted environment to host your own assets and use this to add a background image to your skill. 16 | 17 | If you are a new Alexa developer, check out the intro {link-caketime}, now, then come back to this module. 18 | 19 | The deprecated version of the Cake Time code (not the link above) will be used in this section. Follow the {link-quick-setup} to get the deprecated code and overview of the Skill for this guide. 20 | 21 | == Adding visuals to your skill 22 | 23 | For an Alexa device to safely render your APL document, you have to respond with a render directive only when the device supports APL. This directive takes the form of `Alexa.Presentation.APL.RenderDocument`. You can add a directive using the nodeJS SDK easily by calling `handlerInput.responseBuilder.addDirective({...});` 24 | 25 | Even though you can add a render directive to every response, not all devices can react to this. In order to safely respond with the `Alexa.Presentation.APL.RenderDocument`, you must first make sure the calling device sends the proper request object. This information is contained in the supportedInterfaces object found like `context.System.device.supportedInterfaces`. For example, your sample request might look like: 26 | 27 | { 28 | ... 29 | "context": { 30 | "System": { 31 | ... 32 | "device": { 33 | "deviceId": "amzn1.ask.device.1...", 34 | "supportedInterfaces": { 35 | "Alexa.Presentation.APL": { 36 | "runtime": { 37 | "maxVersion": "1.1" 38 | } 39 | } 40 | } 41 | } 42 | ... 43 | } 44 | } 45 | } 46 | 47 | If you are using the SDK, checking for the supported interfaces is simple. 48 | 49 | A. In your `LaunchRequestHandler.handle()` before the response is created, add: 50 | + 51 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 52 | // Create Render Directive. 53 | } 54 | + 55 | This "if" statement is going to check if the APL interface is sent in the request envelope. Only then, do we want to add the response. Now, let's add the response. 56 | B. 
Underneath the `// Create Render Directive` comment, add: 57 | + 58 | handlerInput.responseBuilder.addDirective({ 59 | type: 'Alexa.Presentation.APL.RenderDocument', 60 | document: launchDocument, 61 | datasources: { 62 | text: { 63 | type: 'object', 64 | start: "Welcome", 65 | middle: "to", 66 | end: "Cake Time!" 67 | } 68 | } 69 | }); 70 | + 71 | Notice, we left out information about the image. There are some additional steps we need in order to serve the image from our Alexa hosted environment, so we'll come back to this later. 72 | Now, let's import our launchDocument. We will be naming our document launchDocument.json since it represents the APL Document for the generic Launch Requests. 73 | + 74 | image:createLaunchScreenJSON.gif[] 75 | + 76 | C. On the left, select the *lambda* folder. Right-click and select *Create Folder*. Call this folder `documents`. Add a new file inside the new folder called `launchDocument.json`. 77 | D. Inside this new file, add the JSON from the authoring tool we created in module 2 (the full JSON document is at the end of module 2). 78 | E. Then, import this at the top of your `index.js` like so: 79 | + 80 | const launchDocument = require('./documents/launchDocument.json'); 81 | + 82 | The launchDocument variable in your javascript code represents the contents of your json file. 83 | F. Now, deploy the code. Remember to test it! 84 | 85 | == Using the developer console simulator to test 86 | 87 | Did you have issues testing your APL additions? Before we can test our document changes, we have to enable APL in the Build tab. 88 | 89 | A. Click on the build Tab. You will need to click on Interfaces on the left side. 90 | + 91 | image::interfacesClick.png[] 92 | + 93 | B. Scroll down and toggle the Alexa Presentation Language and you will see the block expand. Select all of the boxes. 94 | + 95 | Hub Round, Hub Landscape, and TV Fullscreen are selected by default and must be supported. Each of these are screens you will be officially supporting. They will not scale your content, so whatever assets and APL layout you send in the response will be shown as is. This also means whatever viewport profiles are not supported will scale your content from the closest sized viewport you do support. Select them all because we will be verifying and testing each of these profiles. By selecting these, you are saying you tested your layout on each of these profiles. To learn more about selecting viewport profiles, https://developer.amazon.com/docs/alexa-presentation-language/apl-select-the-viewport-profiles-your-skill-supports.html[check out the tech docs., window=_blank] 96 | + 97 | image::toggleAPL.png[] 98 | + 99 | C. After you toggle APL on and select all of the viewport profiles, you must build the interaction model. 100 | Note: this also means changes to the supported profiles need a new skill certification round in order to go live. 101 | + 102 | D. After building, you are ready to test. Go to the test tab and make sure "Device Display" is checked. Now if you scroll down past the request/response json, you can select the device you want to try. 103 | Uncheck the *Skill I/O* so you do not need to scroll down to see visuals! 104 | + 105 | TIP: Remember to delete your user data if you have any! Since we modified the no context launch handler, you must delete your user data from S3 to make sure there is actually no context in order to test this new screen. 106 | + 107 | E. 
Test your screen on a medium hub by triggering the launch handler for instance by typing, `open cake time`. 108 | + 109 | F. Repeat this test on all the other screen types. Remember, we did not include the birthday cake image, yet, so don't worry if you do not see it! 110 | 111 | == Uploading assets to S3 112 | Before we can make use of image assets, we will need to host these images. Ideally, you will use a https://en.wikipedia.org/wiki/Content_delivery_network[CDN, window=_blank] (such as https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/[Amazon Cloudfront in conjunction with S3, window=_blank]) to serve your assets to locations closer to your users, but for this exercise, we will use S3 since this is provided by Alexa hosted. 113 | 114 | A. To upload the S3 Assets, access your S3 provision under the *Code* tab at the very bottom on the left side. 115 | + 116 | image::S3Access.png[] 117 | + 118 | B. In your S3 provision, open the *media* page. 119 | C. In here, upload all of the assets {link-S3-assets}. For your convenience, we have the https://optical-cupcake-build.s3.amazonaws.com/OpticalCupcake/assets.zip[assets zipped here, window=_blank]. We will use all of these over the course of the module. After you upload, you will see all of the assets on the page in S3. 120 | + 121 | image::S3Provision.png[] 122 | 123 | Now that your assets are uploaded, we can update our Launch Request with more images. 124 | 125 | == Updating our launch request handler 126 | 127 | Now that we have our no context launch request working, it is time to add a background image rather than using the default background. 128 | 129 | A. To do so, first open up the {authoringToolLink}[authoring tool] and paste in the launchDocument.json information. 130 | B. In the data section of the authoring tool, use: 131 | + 132 | { 133 | "text": { 134 | "start": "Welcome", 135 | "middle": "to", 136 | "end": "Cake Time!" 137 | }, 138 | "assets": { 139 | "cake":"https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/alexaCake_960x960.png?raw=true", 140 | "backgroundURL": "https://github.com/alexa/skill-sample-nodejs-first-apl-skill/blob/master/modules/assets/lights_1920x1080.png?raw=true" 141 | } 142 | } 143 | + 144 | You may notice a new field in our data, backgroundURL, under the assets object. This represents where the device will fetch a background image from. We will use the Github repo for hosting it for now while we develop the screen since this is a public link, but our actual code will use the S3 presigned link util function. The presigned link utility is needed to generate a short-lived public URL to the private bucket you uploaded the assets to. Now, we need to add our background component. 145 | C. Go back to the *APL* tab in the authoring tool. 146 | D. We are going to add the https://developer.amazon.com/docs/alexa-presentation-language/apl-alexa-background-layout.html[AlexaBackground responsive component, window=_blank]. To use this, you need the alexa-layouts package which we already have! Using the AlexaBackground is easy; just add the following to the top of both of your containers in the items array of each: 147 | + 148 | { 149 | "type": "AlexaBackground", 150 | "backgroundImageSource": "${assets.backgroundURL}" 151 | }, 152 | + 153 | You should see the background light up... Er... See the lights in the background. 154 | E. Now that we are using a background Image, we want to modify the text color. 
Since we have a style for all of our text objects, this is easy! Simply add `"color": "black",` as a new property in our `bigText` style. This will give you: 155 | + 156 | "bigText": { 157 | "values": [ 158 | { 159 | "fontSize": "72dp", 160 | "color": "black", 161 | "textAlign": "center" 162 | } 163 | ] 164 | } 165 | + 166 | F. Apply the same changes to the `@hubRoundSmall` variation and ensure it works in the authoring tool. 167 | 168 | NOTE: Make sure the AlexaBackground responsive component is above the other components, otherwise it will occlude them! 169 | 170 | You may notice we are using a single 1920x1080 png for each of the devices and it scales pretty well. We want to use the highest possible resolution to account for FireTV devices; scaling a large image down produces a better-quality result than scaling a small one up. The tradeoff is that smaller-resolution devices which do not support this quality level will download unnecessary data. The best course of action would be to provide two or more different image resolutions for different device classes. We will see how to do this in the next section. 171 | 172 | == Applying our images 173 | 174 | A. Now that we have our document ready, replace the launchDocument.json contents in your *Code* tab with the JSON from the authoring tool. 175 | B. Go back to the index.js. Since we are adding in links to our private S3 instance, we will need to import the util module. At the top of this file, add in another import: 176 | + 177 | const util = require('./util'); 178 | + 179 | C. Add the new data sources to our code in the index.js. Since our images are in the non-public S3 bucket, we are going to be using the util function to get a short-lived public URL to each asset. The S3 object keys in this case are going to be of the form `'Media/imageName.png'`. Let's add our images inside the APL render block's data. Our `datasources` block will now look like: 180 | + 181 | datasources: { 182 | text: { 183 | type: 'object', 184 | start: "Welcome", 185 | middle: "to", 186 | end: "Cake Time!" 187 | }, 188 | assets: { 189 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 190 | backgroundURL: util.getS3PreSignedUrl('Media/lights_1920x1080.png') 191 | } 192 | } 193 | + 194 | D. Deploy and test your new document on each of the screen sizes. 195 | E. Working? Well, we aren't fully done yet! We are going to add the optimization we mentioned in the last section. This requires another asset. We have already uploaded the lights_1280x800.png. We need to change our image references to conditionally pull the right asset. Replace the value for our backgroundURL with: 196 | + 197 | util.getS3PreSignedUrl(backgroundKey) 198 | + 199 | F. To conditionally set the proper `backgroundKey`, we need to use the ask-sdk-core, which we already have imported as `Alexa`. To get the viewport profile, inside your APL conditional, add: 200 | + 201 | const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope); 202 | + 203 | G. We can implement this logic with the statement added below the viewportProfile statement: 204 | + 205 | const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/lights_1920x1080.png" : "Media/lights_1280x800.png"; 206 | + 207 | H. Test this out, making sure to use the TV and the hub devices in the test console. You may not notice much of a difference. If you want to verify this is working, check out the *Skill I/O* section in the test console and make sure you have the correct assets served when you use a TV vs a smaller-resolution device.
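
For reference, here is a rough sketch of how the APL block inside `LaunchRequestHandler.handle()` might read once the steps in this module are applied. It simply assembles the snippets above into one place; the rest of your handler (speech output and response building) stays as it was.

    // Inside LaunchRequestHandler.handle(), before returning the response
    if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
        // Pick the background asset that best matches the device resolution.
        const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope);
        const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/lights_1920x1080.png" : "Media/lights_1280x800.png";
        // Create Render Directive.
        handlerInput.responseBuilder.addDirective({
            type: 'Alexa.Presentation.APL.RenderDocument',
            document: launchDocument,
            datasources: {
                text: {
                    type: 'object',
                    start: "Welcome",
                    middle: "to",
                    end: "Cake Time!"
                },
                assets: {
                    cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'),
                    backgroundURL: util.getS3PreSignedUrl(backgroundKey)
                }
            }
        });
    }
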
208 | 209 | Since our frontend is scaling properly by using the responsive components, we are done! Let's head to the next module and learn about some more advanced APL document concepts. 210 | 211 | https://github.com/alexa/skill-sample-nodejs-first-apl-skill/tree/master/modules/code/module3[Complete code in Github, window=_blank] 212 | 213 | link:module2.adoc[Previous Module (2)] 214 | link:module4.adoc[Next Module (4)] 215 | -------------------------------------------------------------------------------- /modules/module4.adoc: -------------------------------------------------------------------------------- 1 | :imagesdir: ../modules/images 2 | :sectnums: 3 | :toc: 4 | 5 | = Create More Visuals With Less 6 | 7 | {blank} 8 | 9 | Since we now have a functioning document for our no-context launch request, let's make our other screens. In this section, you will complete the visuals for both the countdown and birthday screens while learning how to create a reusable layout and APL package. In addition, we will use the video component to deliver a special video on your birthday. 10 | 11 | == Layouts 12 | 13 | So far we have made this screen: 14 | 15 | image::finalNoContextScreen.png[] 16 | 17 | And we want to make these screens: 18 | 19 | image::finalCountdownScreen.png[] 20 | image::finalBirthdayScreen.png[] 21 | 22 | See the similarities? We can already make the countdown screen with our current document! And, we can almost make the birthday screen. There are only two differences between the birthday screen and our current launch screen: the birthday screen has a video instead of an image, and the background asset is different. Let's start by optimizing our current document using https://developer.amazon.com/docs/alexa-presentation-language/apl-layout.html[layouts, window=_blank]. 23 | 24 | Layouts are composite components; they are components made out of other components and layouts. We have already been using layouts (responsive components are layouts defined by Amazon in the APL package "alexa-layouts"), but now it is time to define our own layout based on the patterns we see in the screens for our experience. 25 | 26 | Layouts are also defined in JSON. They have a description, accepted parameters, and an items section which is composed of other components. The https://developer.amazon.com/docs/alexa-presentation-language/apl-layout.html#parameters[parameters, window=_blank] can have default values in case there is no parameter value provided, as well as a specific type (or "any"). We can extract the text parts of our document and create a layout out of them. Let's do so now. 27 | 28 | A. Open the https://developer.amazon.com/alexa/console/ask/displays[APL authoring tool., window=_blank] 29 | B. Add the layout of the repeated patterns to the `layouts` section of the document.
The reusable text layout looks like: 30 | + 31 | "cakeTimeText": { 32 | "description": "A basic text layout with start, middle and end.", 33 | "parameters":[ 34 | { 35 | "name": "startText", 36 | "type": "string" 37 | }, 38 | { 39 | "name": "middleText", 40 | "type": "string" 41 | }, 42 | { 43 | "name": "endText", 44 | "type": "string" 45 | } 46 | ], 47 | "items": [ 48 | { 49 | "type": "Container", 50 | "items": [ 51 | { 52 | "type": "Text", 53 | "style": "bigText", 54 | "text": "${startText}" 55 | }, 56 | { 57 | "type": "Text", 58 | "style": "bigText", 59 | "text": "${middleText}" 60 | }, 61 | { 62 | "type": "Text", 63 | "style": "bigText", 64 | "text": "${endText}" 65 | } 66 | ] 67 | } 68 | ] 69 | } 70 | + 71 | This can simplify our already existing document and will allow us to get rid of duplicate JSON. Note, there is one difference here between the text components for Echo spot and the rest of the devices. We will resolve this with a conditional statement inside of the paddingTop property instead of on the container. This greatly simplifies our whole JSON document! 72 | + 73 | C. Remove the redundant code (all of the duplicate code moved to the layout on each section) and replace it with the new "cakeTimeText" layout. We would use this layout like so: 74 | + 75 | { 76 | "type": "cakeTimeText", 77 | "startText":"${text.start}", 78 | "middleText":"${text.middle}", 79 | "endText":"${text.end}" 80 | } 81 | + 82 | When we remove the redundant code and replace it with references to our layout, we get: 83 | + 84 | { 85 | "type": "APL", 86 | "version": "1.1", 87 | "settings": {}, 88 | "theme": "dark", 89 | "import": [ 90 | { 91 | "name": "alexa-layouts", 92 | "version": "1.1.0" 93 | } 94 | ], 95 | "resources": [], 96 | "styles": { 97 | "bigText": { 98 | "values": [ 99 | { 100 | "fontSize": "72dp", 101 | "color": "black", 102 | "textAlign": "center" 103 | } 104 | ] 105 | } 106 | }, 107 | "onMount": [], 108 | "graphics": {}, 109 | "commands": {}, 110 | "layouts": { 111 | "cakeTimeText": { 112 | "description": "A basic text layout with start, middle and end.", 113 | "parameters":[ 114 | { 115 | "name": "startText", 116 | "type": "string" 117 | }, 118 | { 119 | "name": "middleText", 120 | "type": "string" 121 | }, 122 | { 123 | "name": "endText", 124 | "type": "string" 125 | } 126 | ], 127 | "items": [ 128 | { 129 | "type": "Container", 130 | "items": [ 131 | { 132 | "type": "Text", 133 | "style": "bigText", 134 | "paddingTop":"${@viewportProfile == @hubRoundSmall ? 
75dp : 0dp}", 135 | "text": "${startText}" 136 | }, 137 | { 138 | "type": "Text", 139 | "style": "bigText", 140 | "text": "${middleText}" 141 | }, 142 | { 143 | "type": "Text", 144 | "style": "bigText", 145 | "text": "${endText}" 146 | } 147 | ] 148 | } 149 | ] 150 | } 151 | }, 152 | "mainTemplate": { 153 | "parameters": [ 154 | "text", 155 | "assets" 156 | ], 157 | "items": [ 158 | { 159 | "type": "Container", 160 | "items": [ 161 | { 162 | "type": "AlexaBackground", 163 | "backgroundImageSource": "${assets.backgroundURL}" 164 | }, 165 | { 166 | "type": "cakeTimeText", 167 | "startText":"${text.start}", 168 | "middleText":"${text.middle}", 169 | "endText":"${text.end}" 170 | }, 171 | { 172 | "type": "AlexaImage", 173 | "alignSelf": "center", 174 | "imageSource": "${assets.cake}", 175 | "imageRoundedCorner": false, 176 | "imageScale": "best-fill", 177 | "imageHeight":"40vh", 178 | "imageAspectRatio": "square", 179 | "imageBlurredBackground": false 180 | } 181 | ], 182 | "height": "100%", 183 | "width": "100%", 184 | "when": "${@viewportProfile != @hubRoundSmall}" 185 | }, 186 | { 187 | "type": "Container", 188 | "items": [ 189 | { 190 | "type": "AlexaBackground", 191 | "backgroundImageSource": "${assets.backgroundURL}" 192 | }, 193 | { 194 | "type": "cakeTimeText", 195 | "startText":"${text.start}", 196 | "middleText":"${text.middle}", 197 | "endText":"${text.end}" 198 | } 199 | ], 200 | "height": "100%", 201 | "width": "100%", 202 | "when": "${@viewportProfile == @hubRoundSmall}" 203 | } 204 | ] 205 | } 206 | } 207 | + 208 | But wait... This isn't that much simpler- it looks longer now! Let's simplify our document further by breaking out the layout and styles into it's own APL package. 209 | 210 | == Hosting your own APL package 211 | 212 | Once you have your components rendering and the launch screen looks the same, it is time to host our layout so it can be used in more than one document. Layouts, styles, and resources can all be hosted in an https://developer.amazon.com/docs/alexa-presentation-language/apl-package.html[APL package, window=_blank]. In fact, an APL package has the same format of an APL document except, there is no mainTemplate. This is a great way to share resources, styles, or your own custom responsive components or UI patterns across multiple APL documents. You can even create your own documents to share with the rest of the Alexa developer community! 213 | 214 | We want to host both our style and our layout. To do so we will use our S3 bucket on our backend. Unfortunately, since we are using the Alexa hosted environment, we cannot modify permissions on the S3 provision we are given. Alexa devices and simulators need the header, `Access-Control-Allow-Origin` to be set and allow *.amazon.com. To learn more about Cross-Origin Resource Sharing, https://developer.amazon.com/docs/alexa-presentation-language/apl-support-for-your-skill.html#support-cors[check out the tech docs, window=_blank]. Also, the link must be public which we cannot do with Alexa hosted. But for this exercise, we will use https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module4/documents/my-caketime-apl-package.json[this GitHub link, window=_blank] to host our JSON APL package. Note: Github supports CORS on all domains. 215 | 216 | NOTE: If you are using another service to host your assets that service must also send the appropriate headers. 217 | 218 | Our package will be just the reusable set of properties from our APL document. 
This includes the layouts and the styles. We also will need the import section because we rely on alexa-layouts in order to render our layout. Imports are transitive in APL. This is basically everything except for the `mainTemplate`. Our package will be: 219 | 220 | { 221 | "type": "APL", 222 | "version": "1.1", 223 | "settings": {}, 224 | "theme": "dark", 225 | "import": [ 226 | { 227 | "name": "alexa-layouts", 228 | "version": "1.1.0" 229 | } 230 | ], 231 | "resources": [], 232 | "styles": { 233 | "bigText": { 234 | "values": [ 235 | { 236 | "fontSize": "72dp", 237 | "color": "black", 238 | "textAlign": "center" 239 | } 240 | ] 241 | } 242 | }, 243 | "onMount": [], 244 | "graphics": {}, 245 | "commands": {}, 246 | "layouts": { 247 | "cakeTimeText": { 248 | "description": "A basic text layout with start, middle and end.", 249 | "parameters":[ 250 | { 251 | "name": "startText", 252 | "type": "string" 253 | }, 254 | { 255 | "name": "middleText", 256 | "type": "string" 257 | }, 258 | { 259 | "name": "endText", 260 | "type": "string" 261 | } 262 | ], 263 | "items": [ 264 | { 265 | "type": "Container", 266 | "items": [ 267 | { 268 | "type": "Text", 269 | "style": "bigText", 270 | "text": "${startText}" 271 | }, 272 | { 273 | "type": "Text", 274 | "style": "bigText", 275 | "text": "${middleText}" 276 | }, 277 | { 278 | "type": "Text", 279 | "style": "bigText", 280 | "text": "${endText}" 281 | } 282 | ] 283 | } 284 | ] 285 | } 286 | } 287 | } 288 | 289 | Now in our main document, we can remove everything except for the mainTemplate blob, and add in a new import for our package. In the authoring tool, we can use https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module4/documents/my-caketime-apl-package.json[this public link, window=_blank] to test. 290 | 291 | A. Add this import to your document: 292 | + 293 | { 294 | "name": "my-caketime-apl-package", 295 | "version": "1.0", 296 | "source": "https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module4/documents/my-caketime-apl-package.json" 297 | } 298 | + 299 | With this import, we can now reference the values from our custom style (bigText) and layout (cakeTimeText). 
Our document is now significantly smaller and easier to read since we can remove our layout and style: 300 | + 301 | { 302 | "type": "APL", 303 | "version": "1.1", 304 | "settings": {}, 305 | "theme": "dark", 306 | "import": [ 307 | { 308 | "name": "my-caketime-apl-package", 309 | "version": "1.0", 310 | "source": "https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-apl-skill/master/modules/code/module4/documents/my-caketime-apl-package.json" 311 | }, 312 | { 313 | "name": "alexa-layouts", 314 | "version": "1.1.0" 315 | } 316 | ], 317 | "resources": [], 318 | "styles": {}, 319 | "onMount": [], 320 | "graphics": {}, 321 | "commands": {}, 322 | "layouts": {}, 323 | "mainTemplate": { 324 | "parameters": [ 325 | "text", 326 | "assets" 327 | ], 328 | "items": [ 329 | { 330 | "type": "Container", 331 | "items": [ 332 | { 333 | "type": "AlexaBackground", 334 | "backgroundImageSource": "${assets.backgroundURL}" 335 | }, 336 | { 337 | "type": "cakeTimeText", 338 | "startText":"${text.start}", 339 | "middleText":"${text.middle}", 340 | "endText":"${text.end}" 341 | }, 342 | { 343 | "type": "AlexaImage", 344 | "alignSelf": "center", 345 | "imageSource": "${assets.cake}", 346 | "imageRoundedCorner": false, 347 | "imageScale": "best-fill", 348 | "imageHeight":"40vh", 349 | "imageAspectRatio": "square", 350 | "imageBlurredBackground": false 351 | } 352 | ], 353 | "height": "100%", 354 | "width": "100%", 355 | "when": "${@viewportProfile != @hubRoundSmall}" 356 | }, 357 | { 358 | "type": "Container", 359 | "paddingTop": "75dp", 360 | "items": [ 361 | { 362 | "type": "AlexaBackground", 363 | "backgroundImageSource": "${assets.backgroundURL}" 364 | }, 365 | { 366 | "type": "cakeTimeText", 367 | "startText":"${text.start}", 368 | "middleText":"${text.middle}", 369 | "endText":"${text.end}" 370 | } 371 | ], 372 | "height": "100%", 373 | "width": "100%", 374 | "when": "${@viewportProfile == @hubRoundSmall}" 375 | } 376 | ] 377 | } 378 | } 379 | + 380 | NOTE: Why are we still importing alexa-layouts if imports are transitive? In general, it's best to explicitly declare the dependencies you are directly using. If the caketime apl package decides to no longer use alexa-layouts, your document will break! Therefore, your document also has a dependency on alexa-layouts 381 | + 382 | B. Open the developer portal to your Cake Time skill. 383 | C. Save over your current launchDocument.json with this new document in the code tab of your skill. 384 | 385 | Now, let's make our final document. 386 | 387 | == Adding a Special Birthday Video 388 | 389 | Our birthdayDocument will use a full screen video instead of the image component with the text removed, making sure to provide the same layout on a spot. The https://developer.amazon.com/docs/alexa-presentation-language/apl-video.html[video component, window=_blank] has a simple structure for our use case. 390 | 391 | A. In the code tab, create a new document called `birthdayDocument.json` and make it a copy of our old document. 392 | B. Replace the image component in your birthdayDocument.json with the following video component inside a container. 393 | + 394 | { 395 | "type": "Container", 396 | "paddingTop":"3vh", 397 | "alignItems": "center", 398 | "items": [{ 399 | "type": "Video", 400 | "height": "85vh", 401 | "width":"90vw", 402 | "source": "${assets.video}", 403 | "autoplay": true 404 | }] 405 | } 406 | + 407 | We want to add this container so that we can center the component in our APL Document. 
The padding is so we see some of the background at the top of the viewport. Notice, we also removed the text object on this first container! We want the video to be front and center. 408 | The video we will be using is of an animated birthday cake with Alexa Singing in the background. It looks like this: 409 | + 410 | video::https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4[width=640] 411 | + 412 | This video really wants to be played fullscreen which is why we made our height 85% of the viewport height and width 90% of the viewport. However, now our text is no longer wanted when we display the video. 413 | C. Remove the CakeTimeText component in the first container (when `${@viewportProfile != @hubRoundSmall}`). 414 | D. Save this in your skill code as a new file, birthdayDocument.json. 415 | 416 | == Wiring up the Backend 417 | 418 | Let's swap back over to the index.js file and wire up our other APL screens. 419 | 420 | The only difference in our current launch document and our known birthday document is the content. Let's start to modify our `HasBirthdayLaunchRequestHandler` to conditionally use our launchDocument.json file or our birthdayDocument.json file depending on the situation. We want it to look like this: 421 | 422 | image::finalCountdownScreen.png[] 423 | 424 | A. We want to avoid duplicating code, so let's make a helper function to get the background image based upon our key. This will also be used to fetch a new background image for the alternate document. In addition, we will use it to contain our device screen size to asset size logic. Add this helper function to your index.js anywhere outside of another function or object: 425 | + 426 | function getBackgroundURL(handlerInput, fileNamePrefix) { 427 | const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope); 428 | const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/"+fileNamePrefix+"_1920x1080.png" : "Media/"+fileNamePrefix+"_1280x800.png"; 429 | return util.getS3PreSignedUrl(backgroundKey); 430 | } 431 | + 432 | This is beneficial since it provides a single place where our assumptions of filenames live. If we want to add more viewport profile detection or we decide to change hosting from our S3 bucket, we have a single place to do so. 433 | B. We now need to refactor our LaunchRequestHandler.handle() code to use the new function. Our new data source will now have a new value for the backgroundURL key: 434 | + 435 | backgroundURL: getBackgroundURL(handlerInput, "lights") 436 | + 437 | And you can delete the lines: 438 | + 439 | const viewportProfile = Alexa.getViewportProfile(handlerInput.requestEnvelope); 440 | const backgroundKey = viewportProfile === 'TV-LANDSCAPE-XLARGE' ? "Media/lights_1920x1080.png" : "Media/lights_1280x800.png"; 441 | + 442 | C. Since we will be using the same launch doc, we already have the import to the JSON representing our document. Add a block in `HasBirthdayLaunchRequestHandler` similar to our LaunchRequestHandler directly before the return statement. 443 | + 444 | // Add APL directive to response 445 | if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) { 446 | // Create Render Directive 447 | } 448 | + 449 | D. We are going to define a variable to use in our data source, `numberDaysString`. This is a variable string which will be something like "1 day" or "234 days". You can represent this by the expression: 450 | + 451 | const numberDaysString = diffDays === 1 ? 
"1 day": diffDays + " days"; 452 | + 453 | Add this variable just below our `// Add APL directive to response` comment. 454 | E. Now, underneath `// Create Render Directive`, add our directive: 455 | + 456 | handlerInput.responseBuilder.addDirective({ 457 | type: 'Alexa.Presentation.APL.RenderDocument', 458 | document: launchDocument, 459 | datasources: { 460 | text: { 461 | type: 'object', 462 | start: "Your Birthday", 463 | middle: "is in", 464 | end: numberDaysString 465 | }, 466 | assets: { 467 | cake: util.getS3PreSignedUrl('Media/alexaCake_960x960.png'), 468 | backgroundURL: getBackgroundURL(handlerInput, "lights") 469 | } 470 | } 471 | }); 472 | + 473 | Notice we are now using the numberDaysString in our data source, so this will change based on our input and the day of the year. In addition, we are making use of our helper function to construct the lights url to fetch the proper signed URL. 474 | F. Test it out! You will have to go through the whole flow to enter your month, day, and year of birth before you can see this screen. 475 | 476 | == Wiring up the birthday video 477 | 478 | A. Once you have verified this is working, let's build the other path for when it is your birthday. This will make use of our new document, `birthdayDocument.json`, so let's start by importing that as birthdayDocument at the top. 479 | + 480 | const birthdayDocument = require('./documents/birthdayDocument.json'); 481 | + 482 | B. Now, we will need to add some conditional logic to our new code to switch between APL documents depending on if it is our birthday or not. Underneath the comment, `// Create Render Directive`, inside the `HasBirthdayLaunchRequestHandler` handle method, add 483 | + 484 | if (currentDate.getTime() !== nextBirthday) { 485 | //TODO Move the old directive here. 486 | } else { 487 | //TODO Write a birthday specific directive here. 488 | } 489 | + 490 | C. Cut the `handlerInput.responseBuilder.addDirectiveReplace({...})` you added in the last section and replace the comment, `//TODO Move the old directive here.` with this. 491 | D. Inside the else block we can add our new directive using the `birthdayDocument` you imported above. We will be using the `"confetti"` picture. Add this full directive in the else block: 492 | + 493 | // Create Render Directive 494 | handlerInput.responseBuilder.addDirective({ 495 | type: 'Alexa.Presentation.APL.RenderDocument', 496 | document: birthdayDocument, 497 | datasources: { 498 | text: { 499 | type: 'object', 500 | start: "Happy Birthday!", 501 | middle: "From,", 502 | end: "Alexa <3" 503 | }, 504 | assets: { 505 | video: "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4", 506 | backgroundURL: getBackgroundURL(handlerInput, "confetti") 507 | } 508 | } 509 | }); 510 | + 511 | This new directive differs in the data provided for the text object, the image is replaced with a video, and the background uses the confetti asset. We still need to input start, middle, and end text because our variant for the Hub Round Small device uses this. 512 | E. Now, Test your skill. Clear your user data in S3 and lie so today is your birthday! If you aren't lying, well... Happy Birthday! :) 513 | 514 | When this is working, you can go to the final module to learn about commands. 
515 | 516 | https://github.com/alexa/skill-sample-nodejs-first-apl-skill/tree/master/modules/code/module4[Complete code in Github, window=_blank] 517 | 518 | link:module3.adoc[Previous Module (3)] 519 | link:module5.adoc[Next Module (5)] 520 | -------------------------------------------------------------------------------- /modules/module5.adoc: -------------------------------------------------------------------------------- 1 | 2 | :imagesdir: ../modules/images 3 | :sectnums: 4 | :toc: 5 | 6 | = Add Animation and Video Control 7 | 8 | In this section, you will learn about APL commands and how to add them to your skill. We will animate the text components and images across all of our APL screens and add controls to our video component. 9 | 10 | == What are APL commands? 11 | 12 | https://developer.amazon.com/docs/alexa-presentation-language/apl-commands.html[APL Commands, window=_blank] are messages which change the visual or audio presentation of the content delivered to your customer. There are a few different ways you can use commands. The first is by using the directive `Alexa.Presentation.APL.ExecuteCommands`. There are also different handler properties in some components and even in the APL document. For example, the https://developer.amazon.com/docs/alexa-presentation-language/apl-touchwrapper.html#onpress[TouchWrapper component, window=_blank] has an onPress property which takes in a list of commands. Today, we will be using a similar mechanism within the APL document: the `onMount` property, which executes commands immediately upon inflating the document. This is useful for the animation commands we are using in our introduction. 13 | 14 | Now, let's animate some text objects. This is what we want to create: 15 | 16 | image:FinalLaunchScreen.gif[] 17 | 18 | == Updating our APL documents 19 | 20 | First, we will need to update our APL documents to be able to use commands by adding an https://developer.amazon.com/docs/alexa-presentation-language/apl-component.html#id[id property, window=_blank] to our components. Since we want the commands to execute immediately when the APL screen loads, we will be putting the commands inside the onMount property of our launch document. 21 | In order to target our components with the commands, we'll need to add some logical ids for our text components. Hold up though! These are defined in the APL package. If you check the hosted version, you will see there are already component ids added by default! These are `textTop, textMiddle, textBottom`, corresponding to the top, middle, and bottom text components, so there is no need to add anything. However, if you would like to add different ids, each text component id is exposed as a parameter in the APL package. 22 | 23 | A. In your launch document (launchDocument.json file), add `"id": "image",` to the AlexaImage component. 24 | B. In your birthday visual (birthdayDocument.json file), add `"id": "birthdayVideo",` to your video component. 25 | 26 | == Adding commands to our document 27 | 28 | The command we will be using is the https://developer.amazon.com/docs/alexa-presentation-language/apl-standard-commands.html#animate_item_command[AnimateItem command, window=_blank]. This command was added in APL 1.1, so make sure that your APL document declares version 1.1, which will look like `"version": "1.1"`. The AnimateItem command runs a fixed-duration animation sequence on one or more properties of a single component. We are going to modify the `transform` property.
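
As an aside, the commands we are about to build do not have to live in `onMount`. The same command objects can also be sent from your skill backend with the `Alexa.Presentation.APL.ExecuteCommands` directive mentioned in the previous section, for example in response to an intent after the document has been rendered. Here is a minimal, hypothetical sketch of that approach; it assumes you also supply a matching `token` when you send the RenderDocument directive, and the token value and opacity animation below are purely for illustration.

    // Hypothetical: trigger a command from the backend after the document is on screen.
    // The token must match the token sent with the RenderDocument directive.
    handlerInput.responseBuilder.addDirective({
        type: 'Alexa.Presentation.APL.ExecuteCommands',
        token: 'launchToken',
        commands: [
            {
                type: 'AnimateItem',
                componentId: 'image',
                duration: 2000,
                value: [
                    { property: 'opacity', from: 0, to: 1 }
                ]
            }
        ]
    });

In this course, though, we will stick with `onMount` so the animations run as soon as the screen loads.
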
29 | 
30 | The basic structure of our AnimateItem commands will be:
31 | 
32 |     {
33 |         "type": "AnimateItem",
34 |         "easing": "ease-in-out",
35 |         "duration": 2000,
36 |         "componentId": "textTop",
37 |         "value": [
38 |             {
39 |                 "property": "transform",
40 |                 "from": [
41 |                     {
42 |                         "translateX": 1200
43 |                     }
44 |                 ],
45 |                 "to": [
46 |                     {
47 |                         "translateX": 0
48 |                     }
49 |                 ]
50 |             }
51 |         ]
52 |     }
53 | 
54 | The transform property requires you to specify where the component is moving to and from. All transformations in the "to" and "from" arrays must match types; in this case, the type is `translateX`. For more information on valid values, see https://developer.amazon.com/docs/alexa-presentation-language/apl-standard-commands.html#animate_item_command_value_property[the animate command values tech doc]. We are going to use the https://developer.amazon.com/docs/alexa-presentation-language/apl-standard-commands.html#parallel-command[Parallel command] in order to run all of our text animations in parallel. This command has a list of commands to execute, called child commands, and all child commands execute simultaneously.
55 | 
56 |     {
57 |         "type": "Parallel",
58 |         "delay": 500,
59 |         "commands": [
60 | 
61 |         ]
62 |     }
63 | 
64 | Now that we understand the structure of our commands, we need to create the JSON representing the commands inside the `onMount` property of our launchDocument.json file. Let's add some commands. To start, we will animate the text components so they all slide in from different directions, as in the gif. We want the commands to execute at the same time, so the three text AnimateItem commands will be contained within the Parallel command. To arrive at the values for each of these translations, we'll have to understand the viewport coordinate system.
65 | 
66 | The viewport is drawn with coordinate (0,0) at the upper-left corner. In addition, all of the 'to' properties are relative to the object's position in the APL document we built, not to the viewport coordinates. For the first object, we'll translate from 1200 (off the right edge) to 0 on the X axis. The middle text will come from the left, so we will translate from -400 (so we can ensure the text cannot be seen before the animation) to 0 on the X axis, and the last item will come from below, which will use 'translateY' from 1200 to 0, since Y is positive going down.
67 | 
68 | A. Add the set of commands to your `onMount` property in your launchDocument.json document.
Putting the above inside the Parallel command will leave us with:
69 | +
70 |     [
71 |         {
72 |             "type": "Parallel",
73 |             "commands": [
74 |                 {
75 |                     "type": "AnimateItem",
76 |                     "easing": "ease-in-out",
77 |                     "duration": 2000,
78 |                     "componentId": "textTop",
79 |                     "value": [
80 |                         {
81 |                             "property": "transform",
82 |                             "from": [
83 |                                 {
84 |                                     "translateX": 1200
85 |                                 }
86 |                             ],
87 |                             "to": [
88 |                                 {
89 |                                     "translateX": 0
90 |                                 }
91 |                             ]
92 |                         }
93 |                     ]
94 |                 },
95 |                 {
96 |                     "type": "AnimateItem",
97 |                     "easing": "ease-in-out",
98 |                     "duration": 2000,
99 |                     "componentId": "textMiddle",
100 |                     "value": [
101 |                         {
102 |                             "property": "transform",
103 |                             "from": [
104 |                                 {
105 |                                     "translateX": -400
106 |                                 }
107 |                             ],
108 |                             "to": [
109 |                                 {
110 |                                     "translateX": 0
111 |                                 }
112 |                             ]
113 |                         }
114 |                     ]
115 |                 },
116 |                 {
117 |                     "type": "AnimateItem",
118 |                     "easing": "ease-in-out",
119 |                     "duration": 2000,
120 |                     "componentId": "textBottom",
121 |                     "value": [
122 |                         {
123 |                             "property": "transform",
124 |                             "from": [
125 |                                 {
126 |                                     "translateY": 1200
127 |                                 }
128 |                             ],
129 |                             "to": [
130 |                                 {
131 |                                     "translateY": 0
132 |                                 }
133 |                             ]
134 |                         }
135 |                     ]
136 |                 }
137 |             ]
138 |         }
139 |     ]
140 | +
141 | Once that is working, let's make the more complex animation for the image component. Looking at how this animation runs, we need to scale our image from a very small scale (0.01) to 1 (full size). We are also rotating it from 0 to 360 degrees over the same duration, which will be 3 seconds. You will notice the path it takes is not quite linear and is different from the other animations. This is because it uses a custom-defined easing curve. You do not have to stick to the https://developer.amazon.com/docs/alexa-presentation-language/apl-standard-commands.html#animate_item_command_easing_property[defined easing curves] in the chart below, but can define your own curve with https://en.wikipedia.org/wiki/B%C3%A9zier_curve#Cubic_B%C3%A9zier_curves[cubic-bezier curves] or a linear path. In fact, the named curves all have mathematical definitions listed in the chart below. The coordinates start at (0,0) and go to (1,1); think of the X coordinate as time and Y as the magnitude of the change. Here is the curve I defined: `"easing": "path(0.25, 0.2, 0.5, 0.5, 0.75, 0.8)"`. But if you want to write your own, feel free!
142 | +
143 | image:definedEasingCurves.png[]
144 | +
145 | B. Putting this all together for the image command gives us:
146 | +
147 |     {
148 |         "type": "AnimateItem",
149 |         "easing": "path(0.25, 0.2, 0.5, 0.5, 0.75, 0.8)",
150 |         "duration": 3000,
151 |         "componentId": "image",
152 |         "value": [
153 |             {
154 |                 "property": "transform",
155 |                 "from": [
156 |                     {
157 |                         "scale": 0.01
158 |                     },
159 |                     {
160 |                         "rotate": 0
161 |                     }
162 |                 ],
163 |                 "to": [
164 |                     {
165 |                         "scale": 1
166 |                     },
167 |                     {
168 |                         "rotate": 360
169 |                     }
170 |                 ]
171 |             }
172 |         ]
173 |     }
174 | +
175 | Add this in your `launchDocument.json` file inside the `onMount` command list.
176 | C. Now test it out!
177 | D. Once that is working, enter your birthday and test the launch handler with context when it is not your birthday. You should see the commands applied to this screen as well.
178 | We are not quite done, though. What about animations when it is your birthday? That experience is defined in the birthdayDocument.json file, which we have not added commands to yet. Let's fix this.
179 | 
180 | == Adding video control
181 | 
182 | Did you notice the other change in the above gif?
There is a new component added to the birthdayDocument.json document: the https://developer.amazon.com/docs/alexa-presentation-language/apl-transport-controls-layout.html[AlexaTransportControls responsive component]. You should always have an on-screen control for your video, or the skill may not pass certification. Let's add this. This component is also part of the alexa-layouts package.
183 | 
184 | A. Add the AlexaTransportControls component to the container with the video component inside birthdayDocument.json, since we want this aligned to the center, too. This should go after the video component in the items list.
185 | +
186 |     {
187 |         "type": "Container",
188 |         "alignItems": "center",
189 |         "items": [
190 |             ......
191 |             {
192 |                 "primaryControlSize": 50,
193 |                 "secondaryControlSize": 0,
194 |                 "mediaComponentId": "birthdayVideo",
195 |                 "type": "AlexaTransportControls"
196 |             }
197 |         ]
198 |     }
199 | +
200 | Our component has a `secondaryControlSize` of 0 because we do not want to show the secondary control buttons; these are the skip and rewind buttons you would use if you were playing a series of videos. The `primaryControlSize` is the size of the play button. The `mediaComponentId` must reference the Video component defined earlier in the document.
201 | B. Save and deploy these changes and test the birthday scenario. Make sure the button is functioning: it should stop and replay the video when toggled.
202 | 
203 | Did you notice the clipping on the audio response from Alexa? You may not notice this if your birthday is close enough, but Alexa's voice response is getting cut off when the video starts to play. To fix this, we will need to use the ExecuteCommands directive.
204 | 
205 | == ExecuteCommands directive
206 | 
207 | Alexa is cut off from speaking when the video starts to play. We want Alexa to finish speaking, then start the video automatically. We need to turn off autoplay to fix this, but it does not make sense for our customers to have to tell the video to start. We'll use commands to solve this.
208 | 
209 | To fix the audio, we are going to add the https://developer.amazon.com/docs/alexa-presentation-language/apl-execute-command-directive.html[ExecuteCommands directive] to our backend, along with a payload for it. The ExecuteCommands directive executes the list of provided commands after Alexa is done speaking. It looks like this:
210 | 
211 |     {
212 |         "type" : "Alexa.Presentation.APL.ExecuteCommands",
213 |         "token": "[SkillProvidedToken]",
214 |         "commands": [
215 | 
216 |         ]
217 |     }
218 | 
219 | For our usage, we will need a skill-provided token for the ExecuteCommands directive to target; this can be `"birthdayToken"`. Without it, our commands will not know which rendered document to execute against.
220 | 
221 | A. Add a new `token` field to the APL RenderDocument directive with the value `birthdayToken`. Your `addDirective(...)` will now look like:
222 | +
223 |     // Create Render Directive
224 |     handlerInput.responseBuilder.addDirective({
225 |         type: 'Alexa.Presentation.APL.RenderDocument',
226 |         token: 'birthdayToken',
227 |         document: birthdayDocument,
228 |         datasources: {
229 |             ... Omitted for brevity...
230 |         }
231 |     });
232 | +
233 | B. In the else block in our `HasBirthdayLaunchRequestHandler`, we will need to add another directive. This can be chained onto our current render directive. Add the code below to the `handlerInput.responseBuilder`.
234 | +
235 |     .addDirective({
236 |         type: "Alexa.Presentation.APL.ExecuteCommands",
237 |         token: "birthdayToken",
238 |         commands: [
239 | 
240 |         ]
241 |     });
242 | +
243 | C. Fill in the empty `commands` array with our command list. This is simply going to be a single command to start the video. Since this happens once Alexa is done speaking, we get the behavior we want! The command looks like this:
244 | +
245 |     {
246 |         type: "ControlMedia",
247 |         componentId: "birthdayVideo",
248 |         command: "play"
249 |     }
250 | +
251 | You'll end up with APL directive code that looks like this:
252 | +
253 |     // Create Render Directive
254 |     handlerInput.responseBuilder.addDirective({
255 |         type: 'Alexa.Presentation.APL.RenderDocument',
256 |         token: 'birthdayToken',
257 |         document: birthdayDocument,
258 |         datasources: {
259 |             text: {
260 |                 type: 'object',
261 |                 start: "Happy Birthday!",
262 |                 middle: "From,",
263 |                 end: "Alexa <3"
264 |             },
265 |             assets: {
266 |                 video: "https://public-pics-muoio.s3.amazonaws.com/video/Amazon_Cake.mp4",
267 |                 backgroundURL: getBackgroundURL(handlerInput, "confetti")
268 |             }
269 |         }
270 |     }).addDirective({
271 |         type: "Alexa.Presentation.APL.ExecuteCommands",
272 |         token: "birthdayToken",
273 |         commands: [{
274 |             type: "ControlMedia",
275 |             componentId: "birthdayVideo",
276 |             command: "play"
277 |         }]
278 |     });
279 | +
280 | D. Save, deploy, and test this out now.
281 | 
282 | That's a cool animation, isn't it? Great work on expanding your Cake Time skill with images, text, video, and animations!
283 | 
284 | https://github.com/alexa/skill-sample-nodejs-first-apl-skill/tree/master/modules/code/module5[Complete code in Github, window=_blank]
285 | 
286 | link:module4.adoc[Previous Module (4)]
287 | link:module6.adoc[Wrap Up & Extra Credit]
288 | 
--------------------------------------------------------------------------------
/modules/module6.adoc:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | = Wrapping Up & Resources
4 | 
5 | Thank you for joining our introductory APL course! We hope you've learned a lot. For more information about Alexa Multimodal, visit https://developer.amazon.com/en-US/alexa/alexa-skills-kit/multimodal
6 | 
7 | 
8 | == Food for thought
9 | 
10 | There are a few ways you can improve this skill and add a better visual experience. Think about the following:
11 | 
12 | 1. Design a custom GUI for FireTV.
13 | 2. Add a screen for the intents we did not cover. For instance, when we successfully get the birthday, you could add visuals of the year, month, and day to that screen (text and a calendar, perhaps?).
14 | 3. Vector art! You can use the https://developer.amazon.com/docs/alexa-presentation-language/apl-avg-format.html[AVG vector format]. There are https://svgtoavg.glitch.me/[some tools] to help convert SVGs.
15 | 4. Make a better fix for the Spot device that includes the image. Instead of displaying the text only, add the image behind the text (and make it look good).
16 | 5. Build the commands in a more reusable way. Define a slideIn command with a direction and apply it to each text element (a minimal sketch follows this list).
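For idea 5, one lightweight approach is to build the AnimateItem JSON on the backend with a small helper instead of repeating it three times. The sketch below is illustrative only (the `slideIn` helper and its parameters are not part of the course code); it simply parameterizes the same AnimateItem structure used in module 5, and you could pass the resulting array in an `Alexa.Presentation.APL.ExecuteCommands` directive or use the same idea to generate the document's `onMount` array.

    // Hypothetical helper: builds the AnimateItem command from module 5,
    // parameterized by the target component and its starting offset.
    const slideIn = (componentId, fromX = 0, fromY = 0) => ({
        type: 'AnimateItem',
        easing: 'ease-in-out',
        duration: 2000,
        componentId: componentId,
        value: [{
            property: 'transform',
            from: [{ translateX: fromX }, { translateY: fromY }],
            to: [{ translateX: 0 }, { translateY: 0 }]
        }]
    });

    // One Parallel command that slides all three text components in at once.
    const slideInCommands = [{
        type: 'Parallel',
        commands: [
            slideIn('textTop', 1200, 0),
            slideIn('textMiddle', -400, 0),
            slideIn('textBottom', 0, 1200)
        ]
    }];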
17 | 
18 | == Advanced resources:
19 | 
20 | * Technical documentation
21 | ** https://developer.amazon.com/docs/alexa-presentation-language/apl-alexa-packages-overview.html[Alexa Design System for APL]
22 | ** https://developer.amazon.com/docs/alexa-design/apl.html[APL Alexa Design Guide]
23 | ** https://developer.amazon.com/docs/alexa-presentation-language/aplt-interface.html[APL for text-only devices]
24 | ** https://developer.amazon.com/docs/alexa-presentation-language/apl-avg-format.html[Using vector graphics with APL]
25 | * Code samples
26 | ** https://github.com/alexa/skill-sample-nodejs-first-apl-skill[APL Cake Time Repo]
27 | ** https://github.com/alexa/skill-sample-nodejs-premium-hello-world[Premium Hello World]
28 | ** https://github.com/alexa/skill-sample-nodejs-fact-in-skill-purchases[Fact In-skill Purchases]
29 | ** https://github.com/alexa-labs/skill-sample-nodejs-sauce-boss[Sauce Boss]
30 | ** Pager Karaoke: https://github.com/alexa-labs/skill-sample-python-pager-karaoke[Python], https://github.com/alexa-labs/skill-sample-java-pager-karaoke[Java], https://github.com/alexa-labs/skill-sample-nodejs-firetv-vlogs[NodeJS]
31 | * Training
32 | ** https://developer.amazon.com/en-US/docs/alexa/workshops/build-an-engaging-skill/get-started/index.html[Workshop: Build an Engaging Alexa Skill]
33 | ** https://developer.amazon.com/en-US/alexa/alexa-skills-kit/resources/training-resources/design-for-in-skill-purchasing[How to Design for In-Skill Purchasing]
34 | * Design
35 | ** https://developer.amazon.com/alexa/console/ask/displays[APL Authoring tool]
36 | ** https://developer.amazon.com/en-US/alexa/alexa-skills-kit/situational-design[Learn to Design Complex Interactions with Situational Design]
37 | ** https://developer.amazon.com/blogs/alexa/author/Jaime+Radwan[Design blog posts on APL]
38 | * Support
39 | ** https://forums.developer.amazon.com/topics/apl.html[APL developer forums topic]
40 | ** https://forums.developer.amazon.com/articles/193931/apl-known-issues-and-bugs-2.html[Known issues in APL]
41 | 
42 | == Continue the conversation:
43 | * Reach out to us on Twitter: https://twitter.com/alexadevs[@alexadevs]
44 | * Visit us on https://www.twitch.tv/amazonalexa[Twitch], where we dive into advanced voice design concepts and new features of the ASK SDK on a regular basis!
45 | 
46 | == Give us feedback
47 | 
48 | We're committed to developing online and in-person learning material to aid designers and developers in learning how to build compelling voice experiences.
49 | 
50 | We'd love to hear from you in regard to:
51 | 
52 | * This course. Let us know what you liked, didn't like, and what we can do to improve!
53 | ** Tweet at us about what you liked and what can be improved. Use https://twitter.com/search?q=APLCakeTime[#APLCakeTime].
54 | ** We take https://github.com/alexa/skill-sample-nodejs-first-apl-skill/pulls[pull requests], too :)
55 | * Have a feature request for APL or general Alexa Skill Development? You have a voice: http://alexa.uservoice.com[uservoice].
56 | 
--------------------------------------------------------------------------------
/modules/quick-start.adoc:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | :imagesdir: ../modules/images
4 | 
5 | = Cake Time Quick Start Instructions
6 | 
7 | These are the instructions for skill builders who have never worked with the Cake Time skill.
8 | 
9 | == Setup Cake Time
10 | 
11 | Cake Time is a skill that celebrates your birthday! Tell it your birthday to have it count down the days.
Interact with the skill on your special day to hear a happy birthday message.
12 | 
13 | When you open Cake Time after telling Alexa your birthday, you are either greeted with a countdown to your birthday or, if it is your birthday, with a birthday message. To set up your own Cake Time skill, follow these steps:
14 | 
15 | A. Open the https://developer.amazon.com/alexa/console/ask[developer console to the ASK page, window=_blank]. Sign in or create an Alexa developer account if you do not yet have one.
16 | B. Click *Create skill*.
17 | C. Enter the skill name as "Cake Time". Make sure to scroll down and click *Alexa-Hosted (Node.js)* below. Using the Alexa-hosted environment means we do not need an AWS account.
18 | +
19 | image:MakeCakeTime.gif[]
20 | +
21 | D. Upload the interaction model https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-skill/master/final/en-US.json[from here, window=_blank] to your skill. This is easy to do from the *Build* tab. Copy and paste the raw GitHub JSON into the "JSON Editor", which you can find on the left side.
22 | E. Build your interaction model with the *Build Model* button.
23 | F. For the code, switch to the *Code* tab in the developer console.
24 | G. Replace the contents of index.js with the https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-skill/master/final/index.js[contents in Github from here, window=_blank].
25 | H. Replace the contents of the package.json file with the https://raw.githubusercontent.com/alexa/skill-sample-nodejs-first-skill/master/final/package.json[contents in Github from here, window=_blank].
26 | I. Save and deploy the code.
27 | J. Go to the *Test* tab and enable the skill in the "Development" stage.
28 | K. Test the skill by typing `open cake time` into the Alexa Simulator text box.
29 | L. When this is all working (you will see JSON in the request and response), you have finished setting up Cake Time!
30 | 
31 | == Testing Cake Time
32 | 
33 | Cake Time has two different LaunchRequest handlers: one is invoked when there is no context, while the other is invoked when your birthday is known. Our skill knows the birthday once it has been saved to S3.
34 | 
35 | To delete what is in S3:
36 | 
37 | A. Access the S3 provision for your Alexa-hosted skill by going to the *Code* tab.
38 | B. On the left you will see *Media storage: S3...*. Click this.
39 | +
40 | image::S3Access.png[]
41 | +
42 | C. On the S3 page, click the skill ID: *amzn1-ask-skill-...*.
43 | D. Delete the document that starts with "amzn1.ask.account...".
44 | 
45 | Perform these steps now if you have already told Alexa your birthday, since we will be building a visual for the LaunchRequest with no context first.
46 | 
47 | TIP: An alternative way to test without deleting user data is to fire a one-shot intent to reset the day, month, and year values. For instance, say "Alexa, tell cake time that I was born on" followed by the date you want to test with. This will let you easily set the birthday to today to test when it is your birthday, or to a different day to test the non-birthday scenario. We will be building different visuals for each.
48 | 
--------------------------------------------------------------------------------