├── .gitignore
├── README.md
├── cart
├── CartData.csv
├── ShoppingCart.json
├── addgsi.sh
├── batchget.sh
├── cartcustomer.json
├── cartproduct.json
├── delete.sh
├── get.sh
├── load.js
├── newitem.json
├── put.sh
├── query.sh
├── queryfilter.sh
├── querygsi.sh
├── querysortkey.sh
├── recreate.sh
├── scan.sh
├── scanfilter.sh
├── scangsi.sh
├── table.json
└── update.sh
├── ocean
├── README.md
└── ocean_surface_temps.json
├── reportmgmt
├── query_gsi_by_owner_status_sortdate.sh
├── report_mgmt.json
└── update_report_statusdate.sh
└── tx
├── OnlineBank.json
├── balance_transfer.sh
├── transfer1-allowed.input
├── transfer2-underfunded.input
├── transfer3-txidused.input
└── transfer4-allowed.input
/.gitignore:
--------------------------------------------------------------------------------
1 | .idea/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # efficiencydemos
2 |
3 | A set of DynamoDB demo scripts and sample data that illustrate the read and write cost of various data access patterns.
4 |
5 | ## Intro
6 |
7 | Amazon DynamoDB is a powerful serverless database that offers virtually unlimited scale when used efficiently.
8 | The table structure and access patterns you choose have a big impact on DynamoDB's efficiency, and ultimately the read and write consumption that you are billed for.
9 | Knowing the best access patterns takes practice and experimentation.
10 | In these labs, you can run a series of DynamoDB data operations using the AWS Command Line Interface,
11 | and get immediate visibility into the cost and effectiveness of your calls.
12 |
13 | ## Pre-requisites
14 |
15 | * An AWS Account with administrator access
16 | * The [AWS CLI](https://aws.amazon.com/cli/) setup and configured
17 | * [Node.JS](https://nodejs.org/en/download/) for loading the sample data
18 | * A bash command-line environment such as Mac Terminal or Windows 10 bash shell
19 | * The [NoSQL Workbench for Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.html)
20 |
21 | *If you do not have an AWS account, you can run [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html) on port 8000 of your laptop instead. Within each script, locate the line ```# ENDPOINTURL=http://localhost:8000```, remove its leading pound sign, and comment out the cloud endpoint line instead.*
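For example, the endpoint lines at the top of each script would change from the first form to the second:

```
# Cloud endpoint (the default in each script):
# ENDPOINTURL=http://localhost:8000
ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com

# DynamoDB Local:
ENDPOINTURL=http://localhost:8000
# ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
```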
22 |
23 |
24 | ## Consumed Capacity
25 |
26 | 
27 |
28 | A nice feature of calls to DynamoDB is that you can request a summary of the capacity consumed by your call.
29 | Even if your call is only sending or receiving a small amount of data, it may be consuming a much larger amount of Read Capacity or Write Capacity.
30 |
31 | Capacity is measured in Read Units and Write Units (sometimes called RCUs and WCUs).
32 | Each read unit represents up to 4 KB of data read with strong consistency (an eventually consistent read consumes half a read unit). Each write unit represents up to 1 KB of data written. Your actual consumption is rounded up to the nearest unit.
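For a quick sketch of the arithmetic, consider an item of roughly 20 KB, like the large-Description items used in this repository:

```
# Reading a ~20 KB item:
#   Strongly consistent GetItem:    ceil(20 / 4)     = 5 read units
#   Eventually consistent GetItem:  ceil(20 / 4) / 2 = 2.5 read units
# Writing the same ~20 KB item:
#   PutItem / UpdateItem:           ceil(20 / 1)     = 20 write units
```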
33 |
34 | Data operations to DynamoDB have an optional "Return Consumed Capacity" parameter, where you can specify either TOTAL or INDEXES. Specifying INDEXES will provide a total along with a breakdown of capacity consumed by indexes.
35 |
36 | For reads, you will also see two counts returned: the Scanned Count and the Returned Count (```ScannedCount``` and ```Count``` in the API response).
37 | The Scanned Count is the total number of items (rows) read by the DynamoDB engine, while the Returned Count is the number of items returned to the caller.
38 |
39 |
40 | ## Scenario 1 - Shopping Cart
41 |
42 | We are consultants who have been hired to build a shopping cart for an E-commerce website.
43 | Each cart that is created has a unique ID, such as Cart1 or Cart2. Within each cart, one or many products can exist.
44 | Each product is identified via IDs such as Product100, Product200, etc.
45 | A one-to-many pattern of cart to products is modeled in a DynamoDB table called **ShoppingCart**.
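Concretely, the table uses generic ```PK``` (partition key) and ```SK``` (sort key) attributes, so a single cart partition can hold many product items. The key values below are taken from the demo data:

```
PK      SK
Cart1   Product100
Cart1   Product400
Cart2   Product200
```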
46 |
47 | 
48 |
49 | This table also contains other types of items, such as Customer details and Product details.
50 | The Description attribute for many of these items is a large string of 20,000 bytes, representing a typical JSON document payload.
51 |
52 | 
53 |
54 | 
55 |
56 |
57 | We will take a tour of the DynamoDB read and write operations using simple shell scripts that access this table.
58 |
59 |
60 | ### Setup Steps
61 |
62 | 1. Clone this repository to a folder on your laptop, or download to a working folder via the green button above.
63 | 1. From your command prompt, navigate to the cart folder: ```cd cart```
64 | 1. You may wish to run ```export PATH=$PATH:$(pwd)``` so that scripts in your current folder will be added to your path.
65 | 1. Verify the AWS CLI is set up and working by running ```aws sts get-caller-identity```
66 | and ```aws dynamodb describe-limits```. You should see no errors.
67 | 1. Verify your AWS CLI is pointing to default region **us-east-1** (N. Virginia) by running ```aws configure``` and pressing Enter four times. If the third prompt (the default region name, shown below) is not ```us-east-1```, type it there.
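The four prompts look like this (the bracketed values are illustrative placeholders):

```
AWS Access Key ID [****************ABCD]:
AWS Secret Access Key [****************WXYZ]:
Default region name [us-east-1]:
Default output format [json]:
```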
68 |
69 |
70 | #### Creating the ShoppingCart table
71 | 1. Run ```recreate.sh``` to create your table. Ignore the ResourceNotFoundException that appears the first time you run it: the script first tries to delete the table in case it already exists.
72 | 1. Run ```node load``` which will write the data to your new **ShoppingCart** table from the [CartData.csv](./cart/CartData.csv) file.
73 |
74 | *If you don't have Node.JS, you may use the NoSQL Workbench to deploy this table using the provided [ShoppingCart.json](./cart/ShoppingCart.json) file.*
75 |
76 | #### Return Consumed Capacity
77 | 1. Review the rest of the shell scripts in this [/cart](./cart/) folder.
78 | 1. Open the [scan.sh](./cart/scan.sh) script in your text editor.
79 | 1. Notice the final four lines. We include the option ```--return-consumed-capacity 'TOTAL'``` to request additional information about the cost of our operation.
80 |
81 | The AWS CLI offers its own ```--query``` option to apply a final client-side format to the data returned by your DynamoDB API calls.
82 | We have chosen to comment out the display of the returned ```Items[*]``` data array, focusing instead on three consumed capacity stats.
83 | *Learn more about the AWS CLI data formatting options via the [CLI Documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output.html).*
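For example, uncommenting the alternate ```--query``` in [scan.sh](./cart/scan.sh) switches the output from the capacity stats to the item data itself:

```
aws dynamodb scan --region us-east-1 \
    --table-name ShoppingCart \
    --return-consumed-capacity 'TOTAL' \
    --output json \
    --query 'Items[*][PK,SK,Qty]'
```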
84 |
85 | ### Shopping Cart Demo
86 |
87 | This sequence of commands illustrates the pros and cons of various access patterns.
88 | Before you run each command, try to estimate the capacity it will consume.
89 |
90 |
91 | | Demo | Command |
92 | | --- | --- |
93 | | Scan | ```scan.sh``` |
94 | | Scan with filter | ```scanfilter.sh Cart1``` |
95 | | GetItem Eventual Consistency | ```get.sh Cart1 Product100``` |
96 | | GetItem with projection expression | ```get.sh Cart1 Product100 Price``` |
97 | | GetItem with Strong Consistency | ```get.sh Cart1 Product100 Price STRONG``` |
98 | | BatchGetItem | ```batchget.sh``` |
99 | | Query | ```query.sh Cart1``` |
100 | | Query with Filter | ```queryfilter.sh Cart1 Orange``` |
101 | | Query with Sort Key Expression | ```querysortkey.sh Cart1 Product400``` |
102 | | Delete Item | ```delete.sh Cart1 Product400``` |
103 | | Put Item | ```put.sh``` <br> *writes new item Cart7 Product700* |
104 | | UpdateItem | 1. ```update.sh Cart7 Product700 Price 22.33``` <br> 2. ```update.sh Cart7 Product700 CustomerDescription MuchSmaller``` <br> 3. ```update.sh Cart7 Product700 Price 44.55``` |
105 | | Add New GSI (ProductName + Price) | ```addgsi.sh``` <br> Wait a couple of minutes for **GSI-ProductPrice** to be created. |
106 | | UpdateItem GSI key attribute | ```update.sh Cart1 Product100 Price 12``` |
107 | | UpdateItem non-GSI key attribute | ```update.sh Cart1 Product100 Qty 5``` |
108 | | UpdateItem on a large item, non-GSI attribute | ```update.sh Customer John Address 100MainStreet``` |
109 | | Scan a GSI (works the same as base table) | ```scangsi.sh GSI-ProductPrice``` |
110 | | Query a GSI (works the same as base table) | ```querygsi.sh GSI-ProductPrice Turnip``` |
111 | | Write to a GSI | *N/A - you can only write to a base table!* |
112 | | Get Item from GSI | *N/A - you can only get-item from a base table!* |
113 |
114 |
115 |
116 | ---
117 |
118 | ## Scenario 2 - Transactions
119 | 1. Navigate to the project's *tx* folder.
120 | 2. Import model [OnlineBank.json](./tx/OnlineBank.json) into NoSQL Workbench for Amazon DynamoDB.
121 | 3. Use NoSQL Workbench Visualizer to "commit" the model to your AWS account in *us-east-1*.
122 |
123 | You now have two tables called *Accounts* and *Transactions*.
124 |
125 | Scan the tables to look at the existing data: accounts with balances, and one transaction
126 | already recorded.
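A quick way to do this from the command line:

```
aws dynamodb scan --table-name Accounts --region us-east-1 --output json
aws dynamodb scan --table-name Transactions --region us-east-1 --output json
```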
127 |
128 | ### Transactions Demo
129 | 1. Perform a transfer.
130 |
131 | ```balance_transfer.sh c3e67497-fcb0-4881-8477-b0cbedab7240 transfer1-allowed```
132 |
133 | This succeeds - all conditions are met. Notice the consumed writes, and re-scan the tables to see the new and changed items.
134 |
135 |
136 | 2. Attempt a transfer where the payer has insufficient funds.
137 |
138 | ```balance_transfer.sh 4510ba8a-518b-4701-88b5-3db78e618f71 transfer2-underfunded```
139 |
140 | This fails - the condition is not met on the first action, which is to verify
141 | adequate funding in the source account. Writes are consumed anyway.
142 |
143 | 3. Attempt the same transfer again using the same idempotency key.
144 |
145 | ```balance_transfer.sh 4510ba8a-518b-4701-88b5-3db78e618f71 transfer2-underfunded```
146 |
147 | Because the prior attempt failed due to a condition exception, the idempotency
148 | token is not tracked by DynamoDB. We try again, get the same exception, and
149 | we consume the same writes.
150 |
151 | 4. Attempt a transaction that uses a *txid* that was used in the past
152 |
153 | ```balance_transfer.sh 7d622075-f2f1-4dd4-8aaf-fb29e87c2b9a transfer3-txidused```
154 |
155 | Now we try to make a transfer with a **txid** which matches the one that was
156 | already recorded some time ago - it was in our initial sample data. This fails
157 | because the third condition is not matched - that every new transaction must
158 | have its own unique txid. This consumes writes.
159 | The validation check against historical use of *txid* is part of our application's business logic,
160 | and does not involve the idempotency token (idempotency tokens are only tracked for around 10 minutes).
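Under the hood, ```balance_transfer.sh``` makes a single ```transact-write-items``` call. The sketch below shows the general shape of such a call; the actual actions, condition expressions, and the role of the ```*.input``` files are defined in the script itself, and ```transactitems.json``` here is a hypothetical stand-in:

```
# Sketch only: transactitems.json is a hypothetical file holding the
# TransactItems array (an Update on Accounts guarded by a balance condition,
# a Put on Transactions guarded by a txid-uniqueness condition, and so on).
# The UUID passed to the script serves as the idempotency token.
aws dynamodb transact-write-items \
    --client-request-token e896d9e5-818c-43b2-a139-59fd63fbcd12 \
    --transact-items file://transactitems.json \
    --return-consumed-capacity 'TOTAL' \
    --region us-east-1
```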
161 |
162 |
163 | 5. Perform a successful transfer, while using an idempotency token.
164 |
165 | ```balance_transfer.sh e896d9e5-818c-43b2-a139-59fd63fbcd12 transfer4-allowed```
166 |
167 | This is a successful transfer - see the writes consumed, balances updated
168 | and the new transaction recorded. But what if our client never received the 200
169 | response from DynamoDB saying the transaction was committed? It must retry, which
170 | could be a problem: the existing *txid* would raise the condition exception,
171 | preventing a repeat transfer, but to the client it would still look like a failure.
172 | The client has no way to know whether this is its own short-term retry colliding
173 | with itself, or a genuine *txid* clash from separate balance transfer requests. This sounds messy.
174 |
175 |
176 | 6. Repeat the transfer
177 |
178 | ```balance_transfer.sh e896d9e5-818c-43b2-a139-59fd63fbcd12 transfer4-allowed```
179 |
180 | Thankfully, if we retry within 10 minutes, DynamoDB will return a successful
181 | response code to the client, so it knows it actually succeeded. The
182 | transfer is not actually made again; no updates are made to the account balances and
183 | the transaction is not recorded again. You'll notice that capacity was
184 | consumed - but look carefully. It is read units. No writes were made,
185 | but read units are consumed in checking and confirming that the transaction was
186 | in fact already successfully committed. This adds a great deal of resilience
187 | and integrity. Clients can retry and ascertain the exact status of any
188 | transaction.
189 |
190 |
191 | ---
192 |
193 |
194 | ## Scenario 3 - Report Management
195 |
196 | 
197 |
198 | This is a view of the Global Secondary Index: **StatusDate-by-OwnerID**
199 |
200 | ### Setup Steps
201 | 1. Import model [report_mgmt.json](./reportmgmt/report_mgmt.json) into NoSQL Workbench for Amazon DynamoDB.
202 | 2. Use NoSQL Workbench Visualizer to "commit" the model to your AWS account in *us-east-1*.
203 |
204 | *This table is created with Provisioned Capacity mode, with 5 read and 5 write units.
205 | You have 25 such units free across all your tables. If you will be keeping the table around,
206 | consider switching into On-Demand mode from the Capacity tab in your DynamoDB console.
207 | See further pricing notes at the bottom.*
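If you prefer the CLI to the console, the equivalent switch looks like this (assuming your table is named *Reports*, as in the model):

```
aws dynamodb update-table --table-name Reports \
    --billing-mode PAY_PER_REQUEST --region us-east-1
```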
208 |
209 | ### Report Management Demo
210 | 1. cd into the [reportmgmt](./reportmgmt/) folder.
211 | 2. View the model and example data.
212 | 3. Run this script to update the status and date for one of the entries:
213 | ```
214 | update_report_statusdate.sh 9D2B 9D2B#meta Pending#2019-10-05
215 | ```
216 | Note the write throughput consumed - why did the secondary index consume 2 units?
217 |
218 | 4. Run this script to query the global secondary index selectively, retrieving
219 | matches for a particular OwnerID and Status, returned in Date sort order.
220 | ```
221 | query_gsi_by_owner_status_sortdate.sh Paola Pending
222 | ```
223 | Note the sorted result set and the read throughput consumed.
224 |
225 |
226 | ---
227 |
228 |
229 | ## Modeling Exercises
230 | To practice modeling DynamoDB tables using the NoSQL Workbench,
231 | please try the design challenges at:
232 |
233 | * [Ocean Surface Temperatures](./ocean/README.md)
234 | * [amazon-dynamodb-labs.com/scenarios.html](https://amazon-dynamodb-labs.com/scenarios.html)
235 |
236 |
237 | ## Next Steps
238 | The Shopping Cart table you created has 17 items and a size of 282 KB. It was created in **On Demand** capacity mode.
239 | The Reports table is under 2 KB and was created in Provisioned Capacity mode with a default of 5 Write Units and 5 Read Units.
240 | The [pricing page for DynamoDB](https://aws.amazon.com/dynamodb/pricing/) shows that you enjoy 25 GB of free-tier storage for your tables.
241 | In On-Demand mode, you are billed one penny for each 40,000 read units or 8,000 write units consumed.
242 | In Provisioned Capacity mode, your first 25 Write Units and 25 Read Units are always free.
243 | You can delete your tables if desired.
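As a rough, back-of-the-envelope sketch of what the demos above cost at those rates:

```
# Loading the ~282 KB of cart data: roughly 282+ write units
#   282 / 8,000 write units per penny   -> well under a penny
# One eventually consistent full-table scan:
#   ceil(282 / 4) / 2 = ~36 read units
#   36 / 40,000 read units per penny    -> a tiny fraction of a penny
```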
244 |
245 |
246 | Please contribute to this code sample by issuing a Pull Request or creating an Issue.
247 |
248 | Share your feedback at [@robmccauley](https://twitter.com/robmccauley)
249 |
250 |
--------------------------------------------------------------------------------
/cart/addgsi.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 |
6 | # ENDPOINTURL=http://localhost:8000
7 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
8 |
9 |
10 | aws dynamodb update-table --table-name $TABLENAME \
11 | --attribute-definitions AttributeName=ProductName,AttributeType=S AttributeName=Price,AttributeType=N \
12 | --global-secondary-index-updates \
13 | "[{\"Create\": \
14 | {\"IndexName\": \"GSI-ProductPrice\", \
15 | \"KeySchema\":[ \
16 | {\"AttributeName\":\"ProductName\",\"KeyType\":\"HASH\"}, \
17 | {\"AttributeName\":\"Price\",\"KeyType\":\"RANGE\"} \
18 | ], \
19 | \"Projection\":{\"ProjectionType\":\"ALL\"}}}]" \
20 | --region $REGION \
21 | --endpoint-url $ENDPOINTURL \
22 | --output json --query '{"New Table":TableDescription.TableName, "Status ":TableDescription.TableStatus }'
23 |
24 |
25 | aws dynamodb wait table-exists --table-name $TABLENAME --region $REGION --endpoint-url $ENDPOINTURL
26 |
27 |
--------------------------------------------------------------------------------
/cart/batchget.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 |
7 |
8 | # ENDPOINTURL=http://localhost:8000
9 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
10 |
11 |
12 | ITEMSFILE=$ARG1
13 |
14 | if [ -z "$ITEMSFILE" ]
15 | then
16 | ITEMSFILE="cartproduct.json"
17 |     echo Batch getting items from $TABLENAME using $ITEMSFILE
18 | fi
19 |
20 |
21 | aws dynamodb batch-get-item --region $REGION --endpoint-url $ENDPOINTURL \
22 | --request-items file://$ITEMSFILE \
23 | --return-consumed-capacity 'TOTAL' \
24 | --output json \
25 | --query '{"Item Keys": Responses.ShoppingCart[*].[*], "Consumed RCUs ":ConsumedCapacity}'
26 |
--------------------------------------------------------------------------------
/cart/cartcustomer.json:
--------------------------------------------------------------------------------
1 | {
2 | "ShoppingCart": {
3 | "Keys": [
4 | {
5 | "PK": {"S": "Cart3"},
6 | "SK": {"S": "Product300"}
7 | },
8 | {
9 | "PK": {"S": "Customer"},
10 | "SK": {"S": "John Stiles"}
11 | }
12 | ]
13 | }
14 | }
--------------------------------------------------------------------------------
/cart/cartproduct.json:
--------------------------------------------------------------------------------
1 | {
2 | "ShoppingCart": {
3 | "Keys": [
4 | {
5 | "PK": {"S": "Cart3"},
6 | "SK": {"S": "Product300"}
7 | },
8 | {
9 | "PK": {"S": "Product300"},
10 | "SK": {"S": "Product300"}
11 | }
12 | ]
13 | }
14 | }
--------------------------------------------------------------------------------
/cart/delete.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 | ARG2="$2"
7 | ARG3="$3"
8 |
9 |
10 | # ENDPOINTURL=http://localhost:8000
11 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
12 |
13 | PK=$ARG1
14 | SK=$ARG2
15 |
16 | if [ -z "$SK" ]
17 | then
18 | SK="Product500"
19 |
20 | if [ -z "$PK" ]
21 | then
22 | PK="Customer5"
23 | echo Deleting Item $PK:$SK
24 | fi
25 | fi
26 |
27 |
28 |
29 |
30 | aws dynamodb delete-item --region $REGION --endpoint-url $ENDPOINTURL \
31 | --table-name $TABLENAME \
32 | --key '{"PK":{"S":"'$PK'"},"SK":{"S":"'$SK'"}}' \
33 | --return-consumed-capacity 'TOTAL' \
34 | --output json \
35 | --query '{"Consumed WCUs ":ConsumedCapacity}'
36 |
37 |
--------------------------------------------------------------------------------
/cart/get.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 | ARG2="$2"
7 | ARG3="$3"
8 | ARG4="$4"
9 |
10 | # ENDPOINTURL=http://localhost:8000
11 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
12 |
13 |
14 | PK=$ARG1
15 | SK=$ARG2
16 | RETURNATTR=$ARG3
17 | CONSISTENCY=$ARG4
18 |
19 | PROJECTIONEXPRESSION=""
20 | CRFLAG="--no-consistent-read"
21 |
22 | if [ ! -z "$CONSISTENCY" ] && [ "STRONG" = "$CONSISTENCY" ]
23 | then
24 | CRFLAG="--consistent-read"
25 | fi
26 |
27 |
28 | if [ -z "$RETURNATTR" ] || [ "ALL" = $RETURNATTR ] || [ "All" = $RETURNATTR ] || [ "all" = $RETURNATTR ]
29 | then
30 |
31 | if [ -z "$SK" ]
32 | then
33 | SK="Product100"
34 |
35 | if [ -z "$PK" ]
36 | then
37 | PK="Cart1"
38 | echo Getting Item $PK:$SK
39 | fi
40 | fi
41 | else
42 | PROJECTIONEXPRESSION="--projection-expression $RETURNATTR"
43 | fi
44 |
45 |
46 | aws dynamodb get-item --region $REGION --endpoint-url $ENDPOINTURL \
47 | --table-name $TABLENAME \
48 | --key '{"PK":{"S":"'$PK'"},"SK":{"S":"'$SK'"}}' \
49 | $PROJECTIONEXPRESSION \
50 | $CRFLAG \
51 | --return-consumed-capacity 'TOTAL' \
52 | --output json \
53 | --query '{"Item": Item, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}'
54 |
55 |
--------------------------------------------------------------------------------
/cart/load.js:
--------------------------------------------------------------------------------
1 | //
2 |
3 |
4 | const AWS = require('aws-sdk');
5 | AWS.config.region = process.env.AWS_REGION || 'us-east-1';
6 |
7 | // AWS.config.endpoint = 'http://localhost:8000';
8 |
9 | const fs = require('fs');
10 |
11 | const DYNAMODB_TABLE = 'ShoppingCart';
12 | const DATA_FILE = 'CartData.csv';
13 |
14 | console.log('Loading file: ' + DATA_FILE + ' into table: ' + DYNAMODB_TABLE );
15 | console.log();
16 |
17 | const docClient = new AWS.DynamoDB.DocumentClient();
18 |
19 | let Item = {}; // this obj will be filled with key-value pairs from the data file
20 |
21 |
22 | fs.readFile(DATA_FILE, 'utf8', function(err, rawData) {
23 |     if(err) { console.error('Unable to read ' + DATA_FILE + ':', err.message); return; }
24 |     let newLineSignal = '\n';
25 |     if(rawData.indexOf('\r\n') !== -1) {
26 | newLineSignal = '\r\n';
27 | }
28 | const fileLines = rawData.split(newLineSignal);
29 |
30 | if(fileLines.length < 2) {
31 | console.log('The CSV file ' + DATA_FILE + ' should have column headers on line 1 and data starting on line 2');
32 |
33 | } else {
34 |
35 | const attrNamesQuoted = fileLines[0].match(/(".*?"|[^",\s]+)(?=\s*,|\s*$)/g);
36 |
37 | let attrNames = [];
38 | for(let a=0; a < attrNamesQuoted.length; a++) {
39 | // console.log(attrNamesQuoted[a]);
40 |
41 | attrNames.push(stripQuotes(attrNamesQuoted[a]));
42 | }
43 |
44 | const linesToProcess = fileLines.length;
45 | // const linesToProcess = 3;
46 |
47 | for(let i = 1; i < linesToProcess; i++) {
48 | Item = {};
49 |
50 | // const attrData = fileLines[i]
51 | // .replace(/,,/g, ',null,')
52 | // .match(/(".*?"|[^",]+)(?=\s*,|\s*$)/g);
53 |
54 | const attrData = fileLines[i].split(/,(?=(?:(?:[^"]*"){2})*[^"]*$)/);
55 |
56 | // console.log();
57 | // console.log( fileLines[i]);
58 | // console.log( attrData);
59 |
60 | if(attrData) {
61 | for(let j = 0; j < attrData.length; j++) {
62 |
63 | let attr = stripQuotes(attrData[j]);
64 |
65 | // console.log(attr + ' ' + typeof attr);
66 | if(attr.length > 0) {
67 |
68 | if(!isNaN(attr)) {
69 | attr = attr * 1; // convert to number
70 | Item[attrNames[j]] = attr;
71 |
72 | } else {
73 |
74 | if(attr.charAt(0) === '{' && attr.charAt(attr.length-1) === '}') {
75 | // JSON object within string
76 | const obj = JSON.parse(attr.replace(/""/g,'"'));
77 | console.log('\n^^^^ obj: ' + obj);
78 |
79 | Item[attrNames[j]] = obj;
80 |
81 | } else {
82 | Item[attrNames[j]] = attr;
83 | }
84 | }
85 |
86 | }
87 |
88 |
89 | }
90 | // console.log(JSON.stringify(Item));
91 |
92 | const paramsPut = {
93 | TableName: DYNAMODB_TABLE,
94 | Item: Item
95 | };
96 |
97 | console.log('\n***** paramsPut');
98 | console.log(paramsPut);
99 | console.log();
100 |
101 | docClient.put(paramsPut, function (err, data) {
102 | if (err) {
103 | console.error("Unable to put item. Error JSON:", JSON.stringify(err, null, 2));
104 | return 'error';
105 |
106 | } else {
107 | console.log("UpdateItem succeeded:", JSON.stringify(paramsPut.Item, null, 2));
108 |
109 | }
110 | });
111 |
112 | }
113 | }
114 | }
115 | });
116 |
117 | function stripQuotes(str) {
118 | return str.replace(/^"(.*)"$/, '$1'); // strip quotes
119 | }
120 |
121 |
122 |
--------------------------------------------------------------------------------
/cart/newitem.json:
--------------------------------------------------------------------------------
1 | {
2 | "ProductName": {
3 | "S": "Pear"
4 | },
5 | "Address": {
6 | "S": "122 Ash Street"
7 | },
8 | "Price": {
9 | "N": "47.50"
10 | },
11 | "Customer": {
12 | "S": "Jane Doe"
13 | },
14 | "Qty": {
15 | "N": "1"
16 | },
17 | "CustomerDescription": {
18 | "S": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec lacus magna, consectetur vitae faucibus cursus, volutpat sit amet neque. Etiam eu dolor tempor, porttitor risus at, tristique justo. Mauris sollicitudin gravidae diam vitae auctor. Donec velit nunc, semper at varius vel, ornare ac leo. Mauris ac porta arcu. Nam ullamcorper ac ligula ut lobortis. Quisque in molestie velit, ac rutrum arcu. Mauris em lacus, malesuada id mattis a, hendrerit et nunc. Ut pretium congue nisl molestie ornare. Etiam eget leo finibus, eleifend velit sit amet, condimentum ipsum. Aliquam quis nisi quis orci maximus laoreet id vel mi. Phasellus suscipit, leo sed ullamcorper cursus, est nisi fermentum magna, vitae placerat dui nibh eu ipsum. Phasellus faucibus a ex et tempus. Nulla consequat ornare dui sagittis dictum. Curabitur scelerisque malesuada turpis ac auctor. Suspendisse sit amet sapien ac eros viverra tempor. Nullam convallis velit ornare ante lacinia viverra eget in eros. Quisque et bibendum purus, vel consectetur ipsum. Pellentesque fringilla placerat erat. Fusce ac lacus luctus, porttitor neque a, sollicitudin dui. Aenean luctus eu mi vitae bibendum. Nam pulvinar leo in rhoncus posuere. Integer egestas iaculis tortor vitae mattis. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Etiam ullamcorper dapibus mi non mollis. Nam a risus dui. Duis ac varius dolor. Praesent sit amet orci vitae urna bibendum luctus. Nunc erat sapien, pellentesque nec quam a, placerat fringilla sem. Nullam a dolor sed mauris luctus gravida quis ac nibh. Donec vitae neque non eros tristique cursus at non ligula. Suspendisse feugiat lorem at mi tincidunt laoreet. Vivamus ut sodales mauris, eu ultricies mi. Curabitur eget mauris nec tellus vulputate tincidunt. Phasellus tempor mollis turpis vitae imperdiet. Donec ligula mauris, luctus eu ante ac, placerat hendrerit nisl. Donec pretium metus dolor, sit amet cursus tellus ullamcorper in. Aenean vitae leo felis. Nam ac orci est. Praesent imperdiet condimentum lorem vitae faucibus. Nulla porta eros sed erat ultricies pharetra. Nulla aliquet ornare congue. Curabitur malesuada bibendum sem, et luctus lectus accumsan eu. Ut pretium rutrum ultrices. Quisque dapibus diam id vulputate euismod. Mauris placerat mattis augue eget mollis. Aenean convallis neque id neque semper vestibulum. Proin bibendum velit vel velit varius scelerisque. Praesent consequat dapibus tincidunt. Morbi ornare sollicitudin massa, ut molestie enim blandit eu. Donec tincidunt tellus eget vestibulum faucibus. Mauris bibendum, libero vel sollicitudin eleifend, nibh mauris dapibus nisl, vel dapibus ante nisi et velit. Cras pharetra quam ut condimentum pretium. Etiam interdum eu massa hendrerit rutrum. Phasellus sodales lobortis justo. Pellentesque ornare turpis eu magna consectetur tempus. Suspendisse a tempus nunc. Maecenas tincidunt nisi mauris. Aliquam aliquam lorem est, vel placerat lectus feugiat nec. Integer interdum tellus a nunc aliquet, eu tempus nisl pretium. Phasellus tristique enim placerat, condimentum urna non, euismod purus. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Quisque molestie quis lectus sed pretium. Nam ut odio nunc. Cras et finibus est. Curabitur imperdiet leo nec diam convallis hendrerit. Suspendisse id pulvinar dui. Morbi laoreet dapibus scelerisque. Proin lectus dui, dictum vel metus sed, malesuada malesuada dolor. Nulla facilisi. Integer laoreet sit amet arcu ut facilisis. 
Cras eu mi lacus. Etiam suscipit, nibh vitae gravida finibus, arcu dolor pharetra nisl, non scelerisque mauris erat ac sapien. Duis gravida ligula ac maximus consequat. Nullam posuere finibus ligula, in elementum massa lacinia vel. Suspendisse potenti. Donec a ornare arcu. Vestibulum at tincidunt leo. Aliquam vitae dictum augue. Sed tempor nisl id turpis semper, id dignissim enim iaculis. Suspendisse luctus ipsum vel placerat pulvinar. Duis suscipit eleifend finibus. Proin finibus feugiat enim, et pulvinar nibh posuere non. Nullam at ante convallis, suscipit risus a, consequat leo. Aenean tortor massa, consequat sed convallis ac, interdum sed dolor. Pellentesque in quam elementum, mattis metus eget, mollis diam. Ut convallis sed mauris nec molestie. Mauris vestibulum, mauris et mattis blandit, felis diam iaculis libero, ut mollis mauris nisi et urna. Mauris quis pharetra purus. Sed faucibus est erat, eu volutpat felis fringilla at. Etiam vitae ipsum rutrum enim tempus tempor. Pellentesque porttitor tempor diam. In placerat sapien odio, at tincidunt ipsum porttitor nec. Vivamus nulla augue, gravida at rutrum id, cursus ut risus. Nulla porta nibh quis massa tempus, a semper ipsum malesuada. Nullam tempus ipsum eget purus gravida, sit amet lobortis lorem congue. Integer orci dolor, finibus et libero maximus, dapibus malesuada nulla. Pellentesque egestas et nulla vitae placerat. Integer in arcu vitae ex egestas pretium. Nunc pulvinar sit amet mauris ac fermentum. Sed tincidunt ante ac quam blandit, nec euismod arcu malesuada. Aenean viverra leo a augue consequat, eget ultrices quam sodales. Morbi fringilla lectus eget faucibus tempus. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Sed scelerisque arcu quis eros auctor dignissim. Etiam id urna sit amet elit faucibus molestie. Praesent sit amet lobortis nunc. Quisque tellus purus, consectetur ac pellentesque eu, tristique posuere sem. Suspendisse pellentesque ipsum ut enim pharetra volutpat. Nullam tempor eleifend ultricies. Donec et mollis odio, ac porttitor eros. Suspendisse commodo nisl sit amet risus rhoncus porta. Fusce luctus tristique quam, eget sodales leo maximus quis. Duis ornare mauris nec metus porttitor ultricies. Fusce molestie, lorem vitae consequat laoreet, mauris libero pellentesque turpis, ut condimentum est diam ac purus. Morbi enim elit, feugiat in risus ac, tempor auctor ipsum. Etiam eget dolor quis lectus lobortis tempus. Phasellus a dignissim ligula. Nam ac volutpat sapien, in sodales nulla. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Pellentesque massa risus, lobortis quis mollis ac, maximus nec tellus. Nunc faucibus ante id magna varius, eget pharetra lacus consequat. Donec porttitor mollis est non feugiat. Praesent a sem in nulla ultricies fermentum. Donec mollis lectus at urna vehicula, sit amet sollicitudin enim scelerisque. Nam rutrum nisl porttitor risus imperdiet, at eleifend turpis hendrerit. Curabitur non dapibus ligula, sit amet rhoncus arcu. Integer tincidunt tincidunt quam, non lacinia libero tempus sit amet. Integer sagittis, sapien a porta cursus, ante tellus sodales leo, vel venenatis nunc dui posuere libero. Morbi quis turpis in dui semper porttitor at a velit. Sed felis dui, porta ut orci non, tristique hendrerit mauris. Sed et posuere nunc. Maecenas sollicitudin ligula at tortor vulputate, a fermentum felis sodales. Fusce eget rhoncus risus. Praesent in rutrum libero, sed varius urna. 
Sed libero sapien, ultricies vel tempor vel, luctus a ante. Vivamus efficitur sem ac neque ornare tristique. Aliquam orci quam, finibus tincidunt ex eu, tincidunt scelerisque leo. Integer imperdiet augue eget maximus viverra. Mauris vel erat sed lacus sollicitudin interdum quis et enim. Cras blandit porta diam, nec posuere diam varius interdum. Praesent vitae condimentum erat, eget fringilla dolor. Mauris finibus nulla nec lorem luctus, eu mattis tellus egestas. Nam enim turpis, dapibus interdum diam a, tempor tincidunt diam. Maecenas eleifend nisl a nisi fringilla mollis. In hac habitasse platea dictumst. Proin lacinia justo orci, eu maximus magna molestie varius. Vestibulum pharetra ligula tortor, quis ullamcorper ligula suscipit et. Ut aliquet sagittis lectus vel tincidunt. Cras nisi neque, efficitur quis ornare quis, lobortis vel lectus. In commodo lacus nec elit convallis consequat. Sed ornare nibh vel eros faucibus, in eleifend ante lobortis. Etiam dolor tortor, porta eget fermentum at, semper eget erat. Vivamus volutpat mi pellentesque scelerisque congue. Fusce eget tortor et augue mollis condimentum laoreet et elit. Nunc vehicula accumsan augue a semper. In sagittis tortor quis diam porttitor pellentesque. Morbi sodales non nunc vel interdum. Aenean ut ornare diam, eget ultricies velit. Morbi id lectus euismod, cursus ipsum in, aliquam dolor. Donec eget dui velit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Sed quis interdum metus. In imperdiet eros in massa blandit sodales. Donec vel lorem malesuada, scelerisque purus eu, eleifend ligula. Curabitur mollis metus non felis scelerisque molestie. Aenean eget malesuada sapien. Aenean quis massa eros. Curabitur nunc eros, convallis ultricies rhoncus in, facilisis eu dolor. Quisque mauris lorem, pellentesque non orci a, venenatis dictum ipsum. Donec ut commodo velit, nec lacinia orci. In hac habitasse platea dictumst. Fusce a tellus velit. Aenean vel dui justo. Vestibulum semper, augue a auctor placerat, sapien massa molestie metus, ac maximus libero quam quis magna. Sed molestie quam mauris, in convallis neque gravida vel. Quisque risus lectus, bibendum vel felis quis, dictum lacinia purus. Nunc ac orci purus. Nam quis dui suscipit, fermentum augue in, tempus tellus. Duis quis tincidunt sapien. Pellentesque ultricies sem at neque convallis efficitur. Aliquam volutpat sollicitudin justo non scelerisque. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur a elementum nisi, sit amet rhoncus neque. Pellentesque condimentum leo a augue hendrerit, in sagittis metus vulputate. Phasellus eu neque quis eros efficitur malesuada. Integer facilisis odio orci, a interdum risus gravida nec. Fusce pellentesque mauris ac augue volutpat, id lobortis mi porta. Maecenas et mauris eu velit semper tincidunt nec at purus. Nulla quis rhoncus tellus, non vulputate felis. Donec molestie, magna eget dictum laoreet, mauris dui mollis nulla, non egestas elit sapien a diam. Proin convallis id odio non aliquam. Aliquam feugiat mauris a lectus luctus, sed accumsan justo vestibulum. Praesent gravida erat sed leo finibus cursus. Nunc sit amet lobortis dui, sagittis pretium leo. Suspendisse placerat euismod fringilla. Fusce viverra sem ut nisl sollicitudin tristique. Praesent fermentum odio nec tempus bibendum. Quisque porttitor porta tempor. Sed rutrum, neque in congue luctus, felis metus consequat neque, sed tincidunt libero elit at urna. Vivamus lobortis dignissim urna. 
Aliquam sit amet quam non elit finibus pellentesque. Nunc massa diam, hendrerit ultricies fringilla eu, posuere sed diam. Phasellus vel dolor eros. Sed pellentesque nibh enim, eu pretium ante cursus nec. Nulla nunc lorem, mattis a lectus ut, auctor facilisis lacus. Proin ut tortor at ligula commodo sodales. Praesent vel fringilla orci, in pellentesque nulla. Pellentesque id elit magna. Vestibulum non tempor quam. Quisque vel lacinia nibh, sed commodo dolor. Praesent ante ipsum, dignissim eget purus in, luctus consectetur turpis. Suspendisse ac pellentesque risus, nec varius eros. Vestibulum eget sagittis arcu, eget molestie erat. Sed eget purus arcu. Quisque porttitor, nulla id sagittis aliquam, dolor ante congue eros, in bibendum urna lorem at leo. Nullam vehicula interdum diam, tempor iaculis justo lobortis nec. Pellentesque porttitor magna sit amet massa elementum, a accumsan elit sodales. Etiam libero metus, suscipit quis lectus nec, congue semper libero. Suspendisse augue ex, venenatis nec felis vel, venenatis facilisis ex. Nam ultrices felis vitae euismod finibus. Ut non nisi in augue fermentum semper in non neque. Nunc dapibus sodales nisi, ac pellentesque libero ultrices sit amet. Suspendisse potenti. Pellentesque ut nulla nec purus scelerisque accumsan et sed metus. Pellentesque varius magna eu dapibus consectetur. Vivamus tristique tincidunt ultrices. Maecenas faucibus, sem eget porta dapibus, enim est dictum mauris, eget vehicula turpis arcu in diam. Vivamus aliquet, velit pharetra facilisis fermentum, nulla quam pretium justo, quis mattis ex purus molestie lectus. Etiam tincidunt elit dolor, fringilla pharetra ex convallis in. Cras interdum, neque nec vestibulum fringilla, sem ligula mollis massa, sit amet suscipit libero velit non ante. Cras nec arcu pharetra, tincidunt erat vitae, efficitur mauris. Ut fermentum orci tortor, ut feugiat libero aliquam eget. Donec pellentesque ullamcorper nibh, eu tincidunt nunc eleifend at. Proin a posuere quam. Etiam sem quam, dignissim id neque ac, semper gravida risus. Curabitur rhoncus felis ac eros tincidunt, eu malesuada lacus egestas. Praesent tortor lectus, convallis vitae velit eget, fermentum consectetur neque. Curabitur rutrum efficitur fringilla. Integer nec venenatis lorem, id egestas ligula. Phasellus feugiat scelerisque mi, egestas facilisis nulla auctor sed. Curabitur dignissim massa neque, a accumsan augue rutrum vel. Phasellus ultrices mi eu faucibus facilisis. Suspendisse ultrices vulputate ante quis sodales. Duis elit purus, sodales ut euismod nec, varius quis lorem. Pellentesque at est finibus, eleifend ligula vel, aliquet enim. Aliquam vulputate laoreet dui, non iaculis nulla mollis vitae. Mauris suscipit sagittis dui, non iaculis purus porttitor sit amet. Vestibulum convallis augue et lacus consectetur egestas vitae at ipsum. In a vehicula justo, et sagittis nulla. Vivamus bibendum ac mi nec maximus. Phasellus risus arcu, tristique et ex molestie, aliquam condimentum purus. Maecenas venenatis ac diam sit amet lobortis. Duis volutpat interdum ex ac consectetur. Maecenas congue ac lorem sit amet faucibus. Morbi at massa semper, tempor massa sit amet, imperdiet mi. Nunc eleifend dictum tristique. Aenean placerat magna quis mattis dapibus. Vestibulum malesuada lobortis quam, vitae viverra lectus volutpat vitae. Suspendisse sit amet mauris vitae tortor elementum pharetra nec porta mauris. Duis ornare, lacus in auctor euismod, lorem enim commodo nibh, a ultrices leo nisi ut ipsum. 
In feugiat, neque ac rutrum posuere, nisl eros fermentum tellus, eget pretium nisl leo quis ligula. Donec aliquet mauris eu leo iaculis elementum. Vivamus venenatis nibh odio, in vestibulum elit blandit at. Etiam mattis ultrices enim laoreet commodo. Nunc sed nisi mauris. Nam rutrum eleifend augue et consectetur. Aliquam vel lacus augue. Ut ac lorem urna. Duis et sem vitae nisl fermentum fringilla vestibulum luctus urna. Aenean sem risus, accumsan a ex eget, vehicula aliquet nunc. Maecenas interdum accumsan orci, vitae dapibus arcu. Morbi diam ipsum, malesuada eu ornare sed, maximus vel augue. Maecenas laoreet libero lacus, quis porta purus cursus ut. Suspendisse bibendum nunc non nibh consequat, nec luctus dolor vulputate. Sed tincidunt purus vitae eros condimentum iaculis. In at faucibus velit, ut bibendum tortor. Curabitur at porta ex, eget dignissim nunc. Praesent vel eleifend lacus. Nulla eros justo, vulputate sit amet eleifend vel, consectetur non ante. Suspendisse vulputate massa orci, ac tempus quam vulputate vitae. Integer tristique sem eget quam commodo auctor eu at orci. Vivamus hendrerit leo at lacus tempor venenatis. Suspendisse semper nunc in fermentum cursus. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Praesent eleifend tempus pretium. Vivamus ut magna id massa lacinia maximus. Nullam ut magna vestibulum, pretium sapien quis, venenatis ipsum. Mauris lacinia sem ac consequat bibendum. Aenean sagittis sit amet eros non fermentum. Fusce nec purus augue. Suspendisse vestibulum metus ullamcorper tortor sodales semper. Aenean cursus et mi sed tempus. Phasellus ac lorem eget ligula cursus varius. Duis faucibus sollicitudin fringilla. Sed blandit eleifend elit, ac scelerisque diam laoreet id. Phasellus lectus ipsum, elementum in mattis in, lacinia eu elit. Pellentesque at lectus lectus. Morbi volutpat vulputate nunc, in blandit massa luctus sit amet. Sed pharetra sagittis ante nec porttitor. Aenean luctus consectetur elit. Lorem ipsum dolor sit amet, consectetur adipiscing elit. In ac sagittis purus. Fusce tincidunt ornare quam, volutpat rutrum dolor scelerisque id. Aliquam erat volutpat. Praesent eu enim sit amet ligula ornare aliquet non non nisi. Integer pellentesque justo magna, non fermentum purus blandit rhoncus. Donec mollis mauris non lacinia laoreet. Quisque tincidunt risus a pretium pretium. Pellentesque molestie leo eu finibus mollis. Sed pretium lacinia dui, quis porttitor lorem egestas ac. Nunc diam nisi, auctor vitae auctor ac, lacinia nec diam. Duis aliquam diam urna, ultrices mollis erat vehicula vel. Praesent et iaculis nisl, eu viverra eros. Nunc suscipit, erat eu imperdiet auctor, leo nisi pharetra mauris, vitae convallis dolor leo id tortor. Duis a libero nisi. Pellentesque hendrerit, justo hendrerit congue pellentesque, magna risus consequat velit, eu suscipit nunc metus non orci. Sed tortor neque, dapibus ut scelerisque nec, faucibus eu risus. Vivamus faucibus accumsan ligula quis rhoncus. Aliquam vel metus eget erat tincidunt tincidunt eget eget justo. Curabitur sit amet erat tortor. Integer aliquam turpis eu orci venenatis varius. Ut commodo at lacus sit amet blandit. Pellentesque ac fermentum nisl. Vestibulum vulputate magna rhoncus lobortis auctor. Aliquam non dolor id eros dignissim ullamcorper sit amet vitae dui. Nam a suscipit velit, sit amet lobortis risus. Nam eu rutrum orci, congue commodo nisl. Vivamus convallis sem id ligula congue mollis. Mauris viverra nibh orci, id cursus quam rhoncus et. 
Mauris id eros quis metusmus convallis sem id ligula congue mollis. Mauris viverra nibh orci, id cursus quam rhoncus et. Mauris id eros quis metus gravida efficitur. Mauris aliquet felis mi, sit amet porta ex cursus in. Donec ultricies dui eu nisi porta, in commodo nisl ultricies. Nulla vel velit sit amet nisl faucibus tincidunt. Nam congue lacinia lacus in facilisis. Integer lobortis sem sed magna egestas bibendum. Aenean consectetur nulla vel metus placerat hendrerit. Vestibulum vehicula augue lacus, at rutrum mauris imperdiet sed. Mauris vitae luctus diam. Phasellus enim tellus, facilisis eget sollicitudin in, semper eu urna. Pellentesque laoreet felis quis ultrices semper. Duis lacinia ut est id venenatis. Praesent suscipit lacinia malesuada. Integer id mauris vitae augue pellentesque sodales. Nullam sed tortor vitae sapien convallis sagittis. Nam ullamcorper pellentesque nisl nec viverra. Fusce at suscipit ante, fringilla volutpat nisl. Aliquam mattis dapibus hendrerit. Nunc nisl enim, interdum id consectetur eu, varius eu odio. Praesent sagittis consectetur sapien non facilisis. Ut dolor enim, vestibulum ut ligula id, viverra sagittis turpis. Vivamus sit amet sapien viverra, semper ex sit amet, rutrum neque. Donec pellentesque dui vitae dui luctus elementum. Etiam luctus ante id ex ultricies sodales. In non tortor odio. Aliquam convallis urna non sem dapibus tincidunt. Donec tempus justo non tristique auctor. Quisque condimentum diam et turpis semper hendrerit. Morbi ut tortor ut enim feugiat efficitur. Donec at lacus eget est blandit commodo at sit amet sapien. Nulla nec vulputate libero, vel rutrum mi. Suspendisse imperdiet massa aliquam, varius quam in, malesuada justo. Nam viverra eros non justo luctus, a lobortis sapien ultricies. Suspendisse at risus risus. Aliquam a nunc suscipit nunc luctus maximus. Mauris nisl massa, pharetra sed scelerisque id, mollis et purus. Mauris ac viverra lectus. Nunc fringilla scelerisque quam ut finibus. Aenean rhoncus vel augue eleifend ornare. Mauris pretium nisl ullamcorper sem bibendum, at mollis velit efficitur. Nam mattis sollicitudin elit et scelerisque. Nam mollis ultrices lacus quis varius. Nunc id purus et nisl sollicitudin mollis id vel arcu. Phasellus luctus pretium urna, eleifend laoreet nibh lacinia ac. Aliquam dapibus pellentesque luctus. Aenean pharetra at libero sit amet pulvinar. Duis sollicitudin nec mauris aliquet sollicitudin. Vivamus quis arcu ultrices dolor suscipit vehicula. In quis donec."
19 | },
20 | "SK": {
21 | "S": "Product700"
22 | },
23 | "StateCityZip": {
24 | "S": "TX-Houston-77017"
25 | },
26 | "PK": {
27 | "S": "Cart7"
28 | },
29 | "DateAdded": {
30 | "S": "2020-07-01T09:45:30Z"
31 | }
32 |
33 | }
34 |
--------------------------------------------------------------------------------
/cart/put.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 |
6 |
7 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
8 | # ENDPOINTURL=http://localhost:8000
9 |
10 |
11 | aws dynamodb put-item --region $REGION --endpoint-url $ENDPOINTURL \
12 | --table-name $TABLENAME \
13 | --item file://newitem.json \
14 | --return-consumed-capacity 'TOTAL' \
15 | --output json \
16 | --query '{"Consumed WCUs ":ConsumedCapacity.CapacityUnits}'
17 |
18 |
--------------------------------------------------------------------------------
/cart/query.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 |
7 |
8 | # ENDPOINTURL=http://localhost:8000
9 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
10 |
11 |
12 | PK=$ARG1
13 |
14 |
15 | if [ -z "$ARG1" ]
16 | then
17 | PK="Cart1"
18 | echo Querying $TABLENAME for Partition Key $PK
19 | fi
20 |
21 |
22 | aws dynamodb query --region $REGION --endpoint-url $ENDPOINTURL \
23 | --table-name $TABLENAME \
24 | --key-condition-expression "#p = :p" \
25 | --expression-attribute-names '{"#p": "PK" }' \
26 | --expression-attribute-values '{":p" : {"S":"'$PK'"}}' \
27 | --return-consumed-capacity 'TOTAL' \
28 | --output json \
29 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
30 | # --query 'Items[*][PK,SK]'
31 |
32 |
33 |
--------------------------------------------------------------------------------
/cart/queryfilter.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 | ARG2="$2"
7 |
8 | # ENDPOINTURL=http://localhost:8000
9 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
10 |
11 |
12 | PK=$ARG1
13 | PRODUCTNAME=$ARG2
14 |
15 |
16 | if [ -z "$ARG2" ]
17 | then
18 | PRODUCTNAME="Orange"
19 |
20 | if [ -z "$ARG1" ]
21 | then
22 | PK="Cart1"
23 | echo Querying $TABLENAME for Partition Key $PK and filtering on $PRODUCTNAME
24 | fi
25 |
26 | fi
27 |
28 |
29 | aws dynamodb query --region $REGION --endpoint-url $ENDPOINTURL \
30 | --table-name $TABLENAME \
31 | --key-condition-expression "#p = :p" \
32 | --filter-expression "#s = :s" \
33 | --expression-attribute-names '{"#p": "PK", "#s": "ProductName" }' \
34 | --expression-attribute-values '{":p" : {"S":"'$PK'"}, ":s" : {"S":"'$PRODUCTNAME'"}}' \
35 | --return-consumed-capacity 'TOTAL' \
36 | --output json \
37 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
38 | # --query 'Items[*][PK,SK]'
39 |
40 |
41 |
--------------------------------------------------------------------------------
/cart/querygsi.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 | ARG2="$2"
7 |
8 | # ENDPOINTURL=http://localhost:8000
9 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
10 |
11 | INDEX=${ARG1:-GSI-ProductPrice}
12 | PK=$ARG2
13 |
14 |
15 | if [ -z "$ARG1" ]
16 | then
17 | PK="Cart1"
18 | echo Querying $TABLENAME for Partition Key $PK
19 | fi
20 |
21 |
22 | aws dynamodb query --region $REGION --endpoint-url $ENDPOINTURL \
23 | --table-name $TABLENAME \
24 | --index-name $INDEX \
25 | --key-condition-expression "#p = :p" \
26 | --expression-attribute-names '{"#p": "ProductName" }' \
27 | --expression-attribute-values '{":p" : {"S":"'$PK'"}}' \
28 | --return-consumed-capacity 'TOTAL' \
29 | --output json \
30 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
31 | # --query 'Items[*][PK,SK]'
32 |
33 |
34 |
--------------------------------------------------------------------------------
/cart/querysortkey.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ARG1="$1"
6 | ARG1="$2"
7 |
8 | # ENDPOINTURL=http://localhost:8000
9 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
10 |
11 |
12 | PK=$ARG1
13 | SK=$ARG2
14 |
15 |
16 | if [ -z "$ARG2" ]
17 | then
18 | SK="Product400"
19 |
20 | if [ -z "$ARG1" ]
21 | then
22 | PK="Cart1"
23 | echo Querying $TABLENAME for Partition Key $PK
24 | fi
25 |
26 | fi
27 |
28 |
29 | aws dynamodb query --region $REGION --endpoint-url $ENDPOINTURL \
30 | --table-name $TABLENAME \
31 | --key-condition-expression "#p = :p and #s = :s" \
32 | --expression-attribute-names '{"#p": "PK", "#s": "SK" }' \
33 | --expression-attribute-values '{":p" : {"S":"'$PK'"}, ":s" : {"S":"'$SK'"}}' \
34 | --return-consumed-capacity 'TOTAL' \
35 | --output json \
36 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
37 | # --query 'Items[*][PK,SK]'
38 |
39 |
40 |
--------------------------------------------------------------------------------
/cart/recreate.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 |
6 | # ENDPOINTURL=http://localhost:8000
7 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
8 |
9 | # read -n1 -r -p "Warning! You are going to delete your DynamoDB table $TABLENAME if it exists! press any key to continue..." key
10 |
11 | aws dynamodb delete-table --table-name $TABLENAME --region $REGION \
12 | --endpoint-url $ENDPOINTURL \
13 | --output json --query '{"Deleting ":TableDescription.TableName}'
14 |
15 | aws dynamodb wait table-not-exists --table-name $TABLENAME --region $REGION \
16 | --endpoint-url $ENDPOINTURL \
17 | --output json --query '{"Table ":TableDescription.TableName, "Status:":TableDescription.TableStatus }'
18 |
19 |
20 | aws dynamodb create-table --cli-input-json file://table.json --region $REGION \
21 | --endpoint-url $ENDPOINTURL \
22 | --output json --query '{"New Table":TableDescription.TableName, "Status ":TableDescription.TableStatus }'
23 |
24 | aws dynamodb wait table-exists --table-name $TABLENAME --region $REGION --endpoint-url $ENDPOINTURL
25 |
26 |
27 |
--------------------------------------------------------------------------------
/cart/scan.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 |
6 | # ENDPOINTURL=http://localhost:8000
7 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
8 |
9 |
10 | aws dynamodb scan --region $REGION --endpoint-url $ENDPOINTURL \
11 | --table-name $TABLENAME \
12 | --return-consumed-capacity 'TOTAL' \
13 | --output json \
14 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
15 | # --query 'Items[*][PK,SK,Qty]'
16 |
17 |
--------------------------------------------------------------------------------
/cart/scanfilter.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
6 | # ENDPOINTURL=http://localhost:8000
7 |
8 | ARG1="$1"
9 |
10 | PK=$ARG1
11 |
12 | if [ -z "$ARG1" ]
13 | then
14 | PK="Cart1"
15 | echo Scanning $TABLENAME with a filter on $PK
16 | fi
17 |
18 | aws dynamodb scan --region $REGION --endpoint-url $ENDPOINTURL \
19 | --table-name $TABLENAME \
20 | --filter-expression "#p = :c" \
21 | --expression-attribute-names '{"#p": "PK" }' \
22 | --expression-attribute-values '{":c" : {"S":"'$PK'"}}' \
23 | --return-consumed-capacity 'TOTAL' \
24 | --output json \
25 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
26 | # --query 'Items[*][PK,SK]'
27 |
28 |
--------------------------------------------------------------------------------
/cart/scangsi.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 |
6 | ARG1="$1"
7 |
8 | # ENDPOINTURL=http://localhost:8000
9 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
10 |
11 | INDEX=${ARG1:-GSI-ProductPrice}
12 |
13 |
14 | aws dynamodb scan --region $REGION --endpoint-url $ENDPOINTURL \
15 | --table-name $TABLENAME \
16 | --index-name $INDEX \
17 | --return-consumed-capacity 'TOTAL' \
18 | --output json \
19 | --query '{"Scanned Count":ScannedCount, "Returned Count":Count, "Consumed RCUs ":ConsumedCapacity.CapacityUnits}' \
20 | # --query 'Items[*][PK,SK,Qty]'
21 |
22 |
--------------------------------------------------------------------------------
/cart/table.json:
--------------------------------------------------------------------------------
1 | {
2 | "TableName": "ShoppingCart",
3 |
4 | "KeySchema": [
5 | { "AttributeName": "PK", "KeyType": "HASH" },
6 | { "AttributeName": "SK", "KeyType": "RANGE" }
7 | ],
8 |
9 | "AttributeDefinitions": [
10 | { "AttributeName": "PK", "AttributeType": "S" },
11 | { "AttributeName": "SK", "AttributeType": "S" }
12 | ],
13 |
14 | "BillingMode": "PAY_PER_REQUEST"
15 | }
16 |
17 |
18 |
--------------------------------------------------------------------------------
/cart/update.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=ShoppingCart
5 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
6 | # ENDPOINTURL=http://localhost:8000
7 |
8 | ARG1="$1"
9 | ARG2="$2"
10 | ARG3="$3"
11 | ARG4="$4"
12 |
13 | PK=$ARG1
14 | SK=$ARG2
15 | UPDATEKEY=$ARG3
16 | UPDATEVAL=$ARG4
17 |
18 | KEYTYPE="S"
19 |
20 | if [ -z "$UPDATEKEY" ]
21 | then
22 | UPDATEKEY="Qty"
23 | UPDATEVAL=$RANDOM
24 | fi
25 |
26 | re='^[0-9]+([.][0-9]+)?$'  # integers and decimals (like Price 22.33) are written as type N
27 | if [[ $UPDATEVAL =~ $re ]] ; then
28 | KEYTYPE="N"
29 |
30 | fi
31 |
32 |
33 | if [ -z "$SK" ]
34 | then
35 | SK="Product200"
36 |
37 | if [ -z "$PK" ]
38 | then
39 | PK="Cart2"
40 | echo Updating $TABLENAME for $PK:$SK with $UPDATEKEY = $UPDATEVAL
41 | fi
42 | fi
43 |
44 |
45 |
46 | aws dynamodb update-item --region $REGION --endpoint-url $ENDPOINTURL \
47 | --table-name $TABLENAME \
48 | --key '{"PK":{"S":"'$PK'"},"SK":{"S":"'$SK'"}}' \
49 | --update-expression "SET #q = :q " \
50 | --expression-attribute-names '{"#q": "'$UPDATEKEY'" }' \
51 | --expression-attribute-values '{":q" : {"'$KEYTYPE'":"'$UPDATEVAL'"}}' \
52 | --return-consumed-capacity 'INDEXES' \
53 | --output json \
54 | --query '{"Consumed WCUs ":ConsumedCapacity}'
55 |
--------------------------------------------------------------------------------
/ocean/README.md:
--------------------------------------------------------------------------------
1 | ## Ocean Surface Temperatures
2 |
3 | This example is intended as a learning opportunity. Given a scenario and
4 | a set of access patterns, review the initial data model provided using the
5 | NoSQL Workbench for Amazon DynamoDB. Identify any problems in the model -
6 | access patterns not covered, functional gaps, or sub-optimal efficiency.
7 |
8 | ### Pre-requisites
9 | * The [NoSQL Workbench for Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.html)
10 | * The [ocean_surface_temps.json](ocean_surface_temps.json) Workbench model file
11 |
12 | ### Scenario
13 | A global community project has begun, with the goal of recording a history of
14 | ocean surface temperatures from all over the world. There will be many
15 | participating organizations - oceanographic exploration companies, universities
16 | and other research institutions. A community defined standard design for the
17 | measurement device has been shared, and organizations are free to iterate on
18 | their own implementations. The project aims to provide standard tooling to
19 | help participating organizations manage and maintain their devices, to store the
20 | measurements, and to provide a web based UI for organizations to view and edit
21 | data from their devices.
22 |
23 | The project assigns a unique three-digit (zero-padded) numeric identifier to
24 | each participating organization, and each organization defines its own
25 | (unique within the org) five-digit (zero-padded) numeric identifier for each device. If a
26 | device is redeployed to a new location, it is assigned a new identifier.
27 |
28 | For each device, the org can store the install coordinates, most recent service
29 | date, hardware model/revision, active/inactive status, and an indicator
30 | present only when the device is considered to be in fault - needing service.
31 |
32 | The sensor devices generate a temperature (in Kelvin, with 2 decimal places)
33 | once each minute, and deliver it to a Kinesis Data Stream along with the
34 | assigned org id and device id. In addition, if the device self-assesses that
35 | it is in fault, it adds a fault indicator attribute to the records sent
36 | to the stream. Workers read from the stream in batches, and write the records
37 | to DynamoDB using BatchWriteItem. Some organizations have over 1,000 devices;
38 | the project has about 7,000 devices overall today and expects to exceed
39 | 10,000 at peak scale.
40 |
41 | Temperature readings should be kept for one year and then be deleted.
42 |
43 | ### Access Patterns
44 |
45 | The access patterns we need to support are:
46 |
47 | * CRUD organization information
48 | * CRUD device information
49 | * CRUD sensor reading
50 | * Retrieve sorted time range of temperature records for a device
51 | * Retrieve most recent 60 readings for a device
52 | * Retrieve the first temperature reading on record for a device
53 | * List all devices for an organization
54 | * Find all faulty readings (daily ETL)
55 | * List all devices and their metadata (weekly ETL)
56 | * Find all faulty readings for a particular organization in the last 30 minutes
57 | * Find all faulty readings for a particular device
58 | * Find all devices with a service date more than 1 year in the past
59 |
60 | ### Tasks
61 | We need to notify an organization contact when any of their devices changes
62 | fault status, and any time a fault-indicated temperature data point arrives,
63 | the status of the device record should change to indicate the fault.
64 |
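65 | When a worker sees a fault-indicated reading, it might flip the device's
66 | metadata record to a fault status with a conditional update along the lines
67 | of this hypothetical sketch (table and key names are taken from the provided
68 | model). A failed condition means the device was already marked as faulted;
69 | a successful write is exactly the status change that should trigger a
70 | notification:
71 | 
72 | ```bash
73 | aws dynamodb update-item \
74 |     --table-name oceantemps \
75 |     --key '{"organization_number": {"N": "713"}, "device#detail": {"S": "00094#metadata"}}' \
76 |     --update-expression "SET #s = :fault" \
77 |     --condition-expression "attribute_not_exists(#s) OR #s <> :fault" \
78 |     --expression-attribute-names '{"#s": "status"}' \
79 |     --expression-attribute-values '{":fault": {"S": "FAULT"}}' \
80 |     --return-consumed-capacity 'TOTAL'
81 | ```
82 | 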
83 | Some initial work has been done toward modeling this use case in the
84 | NoSQL Workbench for Amazon DynamoDB. Download the exported model, import it
85 | into the tool, review the model, and make any improvements you can think of.
86 | 
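87 | As one more starting point, here is a minimal, hypothetical sketch of how a
88 | worker might write a single reading with the one-year retention, again using
89 | names from the provided model (which may themselves need improvement). Note
90 | that DynamoDB's TTL feature requires the expiry attribute to be a Number
91 | holding an epoch timestamp in seconds:
92 | 
93 | ```bash
94 | #!/usr/bin/env bash
95 | # Hypothetical example values; real workers take these from the stream records.
96 | ORG="713"
97 | DEVICE="00094"
98 | TTL=$(( $(date +%s) + 365*24*60*60 ))   # expire the reading after one year
99 | DETAIL=$(date -u +%Y%m%d%H%M)           # matches the model's device#detail format
100 | 
101 | aws dynamodb put-item \
102 |     --table-name oceantemps \
103 |     --item '{
104 |         "organization_number": {"N": "'$ORG'"},
105 |         "device#detail": {"S": "'$DEVICE'#'$DETAIL'"},
106 |         "surface_temperature_kelvin": {"S": "293.55"},
107 |         "fault_indicated": {"BOOL": false},
108 |         "ttl": {"N": "'$TTL'"}
109 |     }' \
110 |     --return-consumed-capacity 'TOTAL'
111 | ```
112 | 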
113 | ---
114 | back to [HOME](../README.md)
--------------------------------------------------------------------------------
/ocean/ocean_surface_temps.json:
--------------------------------------------------------------------------------
1 | {
2 | "ModelName": "ocean surface temps",
3 | "ModelMetadata": {
4 | "Author": "pnnaylor@",
5 | "DateCreated": "May 26, 2020, 05:48 PM",
6 | "DateLastModified": "Jun 18, 2020, 10:43 AM",
7 | "Description": "Model supports an example for the data modeling session of dbCon 2020.",
8 | "Version": "1.0"
9 | },
10 | "DataModel": [
11 | {
12 | "TableName": "oceantemps",
13 | "KeyAttributes": {
14 | "PartitionKey": {
15 | "AttributeName": "organization_number",
16 | "AttributeType": "N"
17 | },
18 | "SortKey": {
19 | "AttributeName": "device#detail",
20 | "AttributeType": "S"
21 | }
22 | },
23 | "NonKeyAttributes": [
24 | {
25 | "AttributeName": "surface_temperature_kelvin",
26 | "AttributeType": "S"
27 | },
28 | {
29 | "AttributeName": "fault_indicated",
30 | "AttributeType": "BOOL"
31 | },
32 | {
33 | "AttributeName": "ttl",
34 | "AttributeType": "S"
35 | },
36 | {
37 | "AttributeName": "install_longitude",
38 | "AttributeType": "N"
39 | },
40 | {
41 | "AttributeName": "install_latitude",
42 | "AttributeType": "N"
43 | },
44 | {
45 | "AttributeName": "hardware_model_revision",
46 | "AttributeType": "S"
47 | },
48 | {
49 | "AttributeName": "status",
50 | "AttributeType": "S"
51 | },
52 | {
53 | "AttributeName": "serviced_date",
54 | "AttributeType": "S"
55 | }
56 | ],
57 | "TableData": [
58 | {
59 | "organization_number": {
60 | "N": "713"
61 | },
62 | "device#detail": {
63 | "S": "00094#202005262106"
64 | },
65 | "surface_temperature_kelvin": {
66 | "S": "293.55"
67 | },
68 | "fault_indicated": {
69 | "BOOL": false
70 | },
71 | "ttl": {
72 | "S": "1593224182"
73 | },
74 | "install_longitude": {
75 | "N": "-28.614913"
76 | },
77 | "install_latitude": {
78 | "N": "153.620450"
79 | }
80 | },
81 | {
82 | "organization_number": {
83 | "N": "713"
84 | },
85 | "device#detail": {
86 | "S": "00094#metadata"
87 | },
88 | "install_longitude": {
89 | "N": "-28.614913"
90 | },
91 | "install_latitude": {
92 | "N": "153.620450"
93 | }
94 | }
95 | ],
96 | "DataAccess": {
97 | "MySql": {}
98 | }
99 | }
100 | ]
101 | }
--------------------------------------------------------------------------------
/reportmgmt/query_gsi_by_owner_status_sortdate.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=reports
5 | INDEX=StatusDate-by-OwnerID
6 |
7 | # ENDPOINTURL=http://localhost:8000
8 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
9 |
10 | OWNER="$1"
11 | STATUS="$2"
12 | # Query the GSI for one owner's reports in a given status, sorted by date.
13 | # Example: ./query_gsi_by_owner_status_sortdate.sh Paola Pending
14 | aws dynamodb query --region $REGION --endpoint-url $ENDPOINTURL \
15 | --table-name $TABLENAME \
16 | --index-name $INDEX \
17 | --key-condition-expression "#p = :p and begins_with (#q,:q)" \
18 | --expression-attribute-names '{"#p": "OwnerID", "#q": "Status#Date" }' \
19 | --expression-attribute-values '{":p" : {"S":"'$OWNER'"}, ":q" : {"S":"'$STATUS'#" } }' \
20 | --return-consumed-capacity 'TOTAL' \
21 | --output json
22 |
23 |
--------------------------------------------------------------------------------
/reportmgmt/report_mgmt.json:
--------------------------------------------------------------------------------
1 | {
2 | "ModelName": "report_mgmt",
3 | "ModelMetadata": {
4 | "Author": "pnnaylor@",
5 | "DateCreated": "May 30, 2020, 03:25 PM",
6 | "DateLastModified": "May 30, 2020, 03:41 PM",
7 | "Description": "",
8 | "Version": "1.0"
9 | },
10 | "DataModel": [
11 | {
12 | "TableName": "reports",
13 | "KeyAttributes": {
14 | "PartitionKey": {
15 | "AttributeName": "ReportID",
16 | "AttributeType": "S"
17 | },
18 | "SortKey": {
19 | "AttributeName": "ReportDetails",
20 | "AttributeType": "S"
21 | }
22 | },
23 | "NonKeyAttributes": [
24 | {
25 | "AttributeName": "OwnerID",
26 | "AttributeType": "S"
27 | },
28 | {
29 | "AttributeName": "Summary",
30 | "AttributeType": "S"
31 | },
32 | {
33 | "AttributeName": "Status#Date",
34 | "AttributeType": "S"
35 | },
36 | {
37 | "AttributeName": "Document",
38 | "AttributeType": "S"
39 | }
40 | ],
41 | "GlobalSecondaryIndexes": [
42 | {
43 | "IndexName": "StatusDate-by-OwnerID",
44 | "KeyAttributes": {
45 | "PartitionKey": {
46 | "AttributeName": "OwnerID",
47 | "AttributeType": "S"
48 | },
49 | "SortKey": {
50 | "AttributeName": "Status#Date",
51 | "AttributeType": "S"
52 | }
53 | },
54 | "Projection": {
55 | "ProjectionType": "INCLUDE",
56 | "NonKeyAttributes": [
57 | "Summary",
58 | "ReportID"
59 | ]
60 | }
61 | }
62 | ],
63 | "TableData": [
64 | {
65 | "ReportID": {
66 | "S": "AFED"
67 | },
68 | "ReportDetails": {
69 | "S": "AFED#meta"
70 | },
71 | "OwnerID": {
72 | "S": "Paola"
73 | },
74 | "Summary": {
75 | "S": "{Summary: \"Descriptive Line\"}"
76 | },
77 | "Status#Date": {
78 | "S": "Pending#2019-10-02"
79 | }
80 | },
81 | {
82 | "ReportID": {
83 | "S": "AFED"
84 | },
85 | "ReportDetails": {
86 | "S": "AFED#report"
87 | },
88 | "Document": {
89 | "S": "{Data: Blob, Goes: Here}"
90 | }
91 | },
92 | {
93 | "ReportID": {
94 | "S": "3KF8"
95 | },
96 | "ReportDetails": {
97 | "S": "3KF8#meta"
98 | },
99 | "OwnerID": {
100 | "S": "Mike"
101 | },
102 | "Summary": {
103 | "S": "{Summary: \"Descriptive Line\"}"
104 | },
105 | "Status#Date": {
106 | "S": "Processed#2019-10-03"
107 | }
108 | },
109 | {
110 | "ReportID": {
111 | "S": "3KF8"
112 | },
113 | "ReportDetails": {
114 | "S": "3KF8#report"
115 | },
116 | "Document": {
117 | "S": "{Data: Blob, Goes: Here}"
118 | }
119 | },
120 | {
121 | "ReportID": {
122 | "S": "9D2B"
123 | },
124 | "ReportDetails": {
125 | "S": "9D2B#meta"
126 | },
127 | "OwnerID": {
128 | "S": "Paola"
129 | },
130 | "Summary": {
131 | "S": "{Summary: \"Descriptive Line\"}"
132 | },
133 | "Status#Date": {
134 | "S": "Pending#2019-10-04"
135 | }
136 | },
137 | {
138 | "ReportID": {
139 | "S": "9D2B"
140 | },
141 | "ReportDetails": {
142 | "S": "9D2B#report"
143 | },
144 | "Document": {
145 | "S": "{Data: Blob, Goes: Here}"
146 | }
147 | },
148 | {
149 | "ReportID": {
150 | "S": "CT7R"
151 | },
152 | "ReportDetails": {
153 | "S": "CT7R#meta"
154 | },
155 | "OwnerID": {
156 | "S": "Robert"
157 | },
158 | "Summary": {
159 | "S": "{Summary: \"Descriptive Line\"}"
160 | },
161 | "Status#Date": {
162 | "S": "Pending#2019-10-04"
163 | }
164 | },
165 | {
166 | "ReportID": {
167 | "S": "CT7R"
168 | },
169 | "ReportDetails": {
170 | "S": "CT7R#report"
171 | },
172 | "Document": {
173 | "S": "{Data: Blob, Goes: Here}"
174 | }
175 | },
176 | {
177 | "ReportID": {
178 | "S": "ZH2F"
179 | },
180 | "ReportDetails": {
181 | "S": "ZH2F#meta"
182 | },
183 | "OwnerID": {
184 | "S": "Paola"
185 | },
186 | "Summary": {
187 | "S": "{Summary: \"Descriptive Line\"}"
188 | },
189 | "Status#Date": {
190 | "S": "Processed#2019-10-01"
191 | }
192 | },
193 | {
194 | "ReportID": {
195 | "S": "ZH2F"
196 | },
197 | "ReportDetails": {
198 | "S": "ZH2F#report"
199 | },
200 | "Document": {
201 | "S": "{Data: Blob, Goes: Here}"
202 | }
203 | }
204 | ],
205 | "DataAccess": {
206 | "MySql": {}
207 | }
208 | }
209 | ]
210 | }
211 |
--------------------------------------------------------------------------------
/reportmgmt/update_report_statusdate.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | TABLENAME=reports
5 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
6 | # ENDPOINTURL=http://localhost:8000
7 | # Set a report's composite Status#Date attribute (also the GSI sort key).
8 | ARG1="$1"
9 | ARG2="$2"
10 | ARG3="$3"
11 |
12 | ID=$ARG1
13 | Details=$ARG2
14 | UPDATEVAL=$ARG3
15 | UPDATEKEY="Status#Date"
16 | # Example: ./update_report_statusdate.sh AFED AFED#meta Processed#2019-10-05
17 | aws dynamodb update-item --region $REGION --endpoint-url $ENDPOINTURL \
18 | --table-name $TABLENAME \
19 | --key '{"ReportID":{"S":"'$ID'"},"ReportDetails":{"S":"'$Details'"}}' \
20 | --update-expression "SET #q = :q " \
21 | --expression-attribute-names '{"#q": "'$UPDATEKEY'" }' \
22 | --expression-attribute-values '{":q" : {"S":"'$UPDATEVAL'"}}' \
23 | --return-consumed-capacity 'INDEXES' \
24 | --output json \
25 | --query '{"Consumed WCUs ":ConsumedCapacity}'
26 |
--------------------------------------------------------------------------------
/tx/OnlineBank.json:
--------------------------------------------------------------------------------
1 | {
2 | "ModelName": "OnlineBank",
3 | "ModelMetadata": {
4 | "Author": "pnnaylor@",
5 | "DateCreated": "May 31, 2020, 07:48 PM",
6 | "DateLastModified": "Jun 02, 2020, 07:09 AM",
7 | "Description": "",
8 | "Version": "1.0"
9 | },
10 | "DataModel": [
11 | {
12 | "TableName": "Accounts",
13 | "KeyAttributes": {
14 | "PartitionKey": {
15 | "AttributeName": "acct",
16 | "AttributeType": "N"
17 | }
18 | },
19 | "NonKeyAttributes": [
20 | {
21 | "AttributeName": "bal",
22 | "AttributeType": "N"
23 | }
24 | ],
25 | "TableData": [
26 | {
27 | "acct": {
28 | "N": "12345"
29 | },
30 | "bal": {
31 | "N": "543.55"
32 | }
33 | },
34 | {
35 | "acct": {
36 | "N": "54321"
37 | },
38 | "bal": {
39 | "N": "228.42"
40 | }
41 | }
42 | ],
43 | "DataAccess": {
44 | "MySql": {}
45 | }
46 | },
47 | {
48 | "TableName": "Transactions",
49 | "KeyAttributes": {
50 | "PartitionKey": {
51 | "AttributeName": "txid",
52 | "AttributeType": "S"
53 | }
54 | },
55 | "NonKeyAttributes": [
56 | {
57 | "AttributeName": "time",
58 | "AttributeType": "N"
59 | },
60 | {
61 | "AttributeName": "desc",
62 | "AttributeType": "S"
63 | }
64 | ],
65 | "TableData": [
66 | {
67 | "txid": {
68 | "S": "7d622075-f2f1-4dd4-8aaf-fb29e87c2b9a"
69 | },
70 | "time": {
71 | "N": "1590987629"
72 | },
73 | "desc": {
74 | "S": "$73.00 from 12345 to 54321"
75 | }
76 | }
77 | ],
78 | "DataAccess": {
79 | "MySql": {}
80 | }
81 | }
82 | ]
83 | }
--------------------------------------------------------------------------------
/tx/balance_transfer.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | REGION=us-east-1
4 | ENDPOINTURL=https://dynamodb.$REGION.amazonaws.com
5 | # ENDPOINTURL=http://localhost:8000
6 | # Usage: ./balance_transfer.sh <txid> <input basename, e.g. transfer1-allowed>
7 | TXID="$1"
8 | TXIN="$2"
9 | # The client request token makes retries of the same transfer idempotent.
10 | aws dynamodb transact-write-items --region $REGION --endpoint-url $ENDPOINTURL \
11 | --transact-items file://$TXIN.input \
12 | --client-request-token $TXID \
13 | --return-consumed-capacity 'INDEXES' \
14 | --output json
15 |
--------------------------------------------------------------------------------
/tx/transfer1-allowed.input:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "Update": {
4 | "Key": {
5 | "acct": {"N": "54321"}
6 | },
7 | "UpdateExpression": "SET bal = bal - :take",
8 | "ExpressionAttributeValues": {
9 | ":take": {"N": "12.72"}
10 | },
11 | "TableName": "Accounts",
12 | "ConditionExpression": "bal >= :take"
13 | }
14 | },
15 | {
16 | "Update": {
17 | "Key": {
18 | "acct": {"N": "12345"}
19 | },
20 | "UpdateExpression": "SET bal = bal + :give",
21 | "ExpressionAttributeValues": {
22 | ":give": {"N": "12.72"}
23 | },
24 | "TableName": "Accounts",
25 | "ConditionExpression": "attribute_exists(acct)"
26 | }
27 | },
28 | {
29 | "Put": {
30 | "Item": {
31 | "txid": {"S":"c3e67497-fcb0-4881-8477-b0cbedab7240"},
32 | "time": {"N":"1590988629"},
33 | "desc": {"S":"$12.72 from 54321 to 12345"}
34 | },
35 | "TableName": "Transactions",
36 | "ConditionExpression": "attribute_not_exists(txid)"
37 | }
38 | }
39 | ]
40 |
--------------------------------------------------------------------------------
/tx/transfer2-underfunded.input:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "Update": {
4 | "Key": {
5 | "acct": {"N": "54321"}
6 | },
7 | "UpdateExpression": "SET bal = bal - :take",
8 | "ExpressionAttributeValues": {
9 | ":take": {"N": "9512.72"}
10 | },
11 | "TableName": "Accounts",
12 | "ConditionExpression": "bal >= :take"
13 | }
14 | },
15 | {
16 | "Update": {
17 | "Key": {
18 | "acct": {"N": "12345"}
19 | },
20 | "UpdateExpression": "SET bal = bal + :give",
21 | "ExpressionAttributeValues": {
22 | ":give": {"N": "9512.72"}
23 | },
24 | "TableName": "Accounts",
25 | "ConditionExpression": "attribute_exists(acct)"
26 | }
27 | },
28 | {
29 | "Put": {
30 | "Item": {
31 | "txid": {"S":"4510ba8a-518b-4701-88b5-3db78e618f71"},
32 | "time": {"N":"1591088629"},
33 | "desc": {"S":"$9512.72 from 54321 to 12345"}
34 | },
35 | "TableName": "Transactions",
36 | "ConditionExpression": "attribute_not_exists(txid)"
37 | }
38 | }
39 | ]
40 |
--------------------------------------------------------------------------------
/tx/transfer3-txidused.input:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "Update": {
4 | "Key": {
5 | "acct": {"N": "12345"}
6 | },
7 | "UpdateExpression": "SET bal = bal - :take",
8 | "ExpressionAttributeValues": {
9 | ":take": {"N": "13.72"}
10 | },
11 | "TableName": "Accounts",
12 | "ConditionExpression": "bal >= :take"
13 | }
14 | },
15 | {
16 | "Update": {
17 | "Key": {
18 | "acct": {"N": "54321"}
19 | },
20 | "UpdateExpression": "SET bal = bal + :give",
21 | "ExpressionAttributeValues": {
22 | ":give": {"N": "13.72"}
23 | },
24 | "TableName": "Accounts",
25 | "ConditionExpression": "attribute_exists(acct)"
26 | }
27 | },
28 | {
29 | "Put": {
30 | "Item": {
31 | "txid": {"S":"7d622075-f2f1-4dd4-8aaf-fb29e87c2b9a"},
32 | "time": {"N":"1690988629"},
33 | "desc": {"S":"$13.72 from 12345 to 54321"}
34 | },
35 | "TableName": "Transactions",
36 | "ConditionExpression": "attribute_not_exists(txid)"
37 | }
38 | }
39 | ]
40 |
--------------------------------------------------------------------------------
/tx/transfer4-allowed.input:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "Update": {
4 | "Key": {
5 | "acct": {"N": "12345"}
6 | },
7 | "UpdateExpression": "SET bal = bal - :take",
8 | "ExpressionAttributeValues": {
9 | ":take": {"N": "42.42"}
10 | },
11 | "TableName": "Accounts",
12 | "ConditionExpression": "bal >= :take"
13 | }
14 | },
15 | {
16 | "Update": {
17 | "Key": {
18 | "acct": {"N": "54321"}
19 | },
20 | "UpdateExpression": "SET bal = bal + :give",
21 | "ExpressionAttributeValues": {
22 | ":give": {"N": "42.42"}
23 | },
24 | "TableName": "Accounts",
25 | "ConditionExpression": "attribute_exists(acct)"
26 | }
27 | },
28 | {
29 | "Put": {
30 | "Item": {
31 | "txid": {"S":"e896d9e5-818c-43b2-a139-59fd63fbcd12"},
32 | "time": {"N":"1590988629"},
33 | "desc": {"S":"$42.42 from 12345 to 54321"}
34 | },
35 | "TableName": "Transactions",
36 | "ConditionExpression": "attribute_not_exists(txid)"
37 | }
38 | }
39 | ]
40 |
--------------------------------------------------------------------------------