├── README.md ├── _config.yml ├── assets ├── MONGO_SE1.png ├── RDBMS_MongoDB_Mapping.jpg ├── StorageEngine.png ├── column-database.png ├── document-database.png ├── graph-database.png ├── horizontal-vs-vertical-scaling.png ├── journaling.png ├── key-value-database.png ├── mongoSplitting.svg ├── mongodb-compass.png ├── objectid.png ├── replica-set.png ├── sharding.png └── star.png ├── mongodb-code.md ├── mongodb-commands.md └── nodemongodb ├── ReadMe.md ├── controllers └── crud.js ├── index.js ├── models └── todo.js ├── package-lock.json ├── package.json ├── routes └── todo.js └── utils └── dbconfig.js /README.md: -------------------------------------------------------------------------------- 1 | # MongoDB Basics 2 | 3 | > *Click ★ if you like the project. Your contributions are heartily ♡ welcome.* 4 | 5 |
6 | 7 | ## Table of Contents 8 | 9 | * *[MongoDB Commands](mongodb-commands.md)* 10 | * *[MongoDB Coding Practice](mongodb-code.md)* 11 | 12 |
13 | 14 | ## Q. What are the different types of NoSQL databases? 15 | 16 | NoSQL is a non-relational DBMS that does not require a fixed schema, avoids joins, and is easy to scale. NoSQL databases are used for distributed data stores with very large data storage needs, such as big data and real-time web apps. 17 | 18 | **Types of NoSQL Databases:** 19 | 20 | * Document databases 21 | * Key-value stores 22 | * Column-oriented databases 23 | * Graph databases 24 | 25 | **1. Document databases:** 26 | 27 | A document database stores data in JSON, BSON, or XML documents. In a document database, documents can be nested, and particular elements can be indexed for faster querying. 28 | 29 | Documents can be stored and retrieved in a form that is much closer to the data objects used in applications, which means less translation is required to use the data in an application. SQL data must often be assembled and disassembled when moving back and forth between applications and storage. 30 | 31 | **Example:** Amazon SimpleDB, CouchDB, MongoDB, Riak, and Lotus Notes are popular document-oriented DBMSs. 32 | 33 |

34 | Document Databases 35 |

36 | 37 | **2. Key-value Stores:** 38 | 39 | Data is stored in key/value pairs. This model is designed to handle large amounts of data and heavy load. Key-value stores save data as a hash table where each key is unique, and the value can be JSON, a BLOB (Binary Large Object), a string, etc. 40 | 41 | **Examples** of key-value stores are Redis, Voldemort, Riak, and Amazon's DynamoDB. 42 | 43 |

44 | Key-value Stores 45 |

46 | 47 | **3. Column-Oriented Databases:** 48 | 49 | Column-oriented databases work on columns and are based on Google's BigTable paper. Every column is treated separately, and the values of a single column are stored contiguously. 50 | 51 | They deliver high performance on aggregation queries like SUM, COUNT, AVG, MIN, etc., as the data is readily available in a column. 52 | 53 | **Example:** Column-based NoSQL databases are widely used for data warehouses, business intelligence, CRM, and library card catalogs. HBase, Cassandra, and Hypertable are examples of column-based databases. 54 | 55 |

56 | Column-Oriented Databases 57 |

58 | 59 | **4. Graph Databases:** 60 | 61 | A graph database stores entities as well as the relationships among those entities. Each entity is stored as a node, with the relationships as edges. An edge gives a relationship between nodes, and every node and edge has a unique identifier. 62 | 63 | Compared to a relational database, where tables are loosely connected, a graph database is multi-relational in nature. Traversing relationships is fast because they are already captured in the database and do not need to be calculated at query time. 64 | 65 | Graph databases are mostly used for social networks, logistics, and spatial data. 66 | 67 | **Example:** Neo4J, Infinite Graph, OrientDB, and FlockDB are some popular graph-based databases. 68 | 69 |

70 | Graph Databases 71 |

72 | 73 |
74 | ↥ back to top 75 |
76 | 77 | ## Q. What is MongoDB? 78 | 79 | **MongoDB** is a document-oriented NoSQL database used for high volume data storage. Instead of using tables and rows as in traditional relational databases, MongoDB makes use of collections and documents. Documents consist of key-value pairs, which are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables. 80 | 81 | **Key Features:** 82 | 83 | * Document-oriented NoSQL database 84 | * Supports Aggregation 85 | * Uses BSON format 86 | * Sharding (helps in horizontal scalability) 87 | * Supports Ad Hoc Queries 88 | * Schema-less 89 | * Capped Collections 90 | * Indexing (any field in MongoDB can be indexed) 91 | * MongoDB Replica Set (provides high availability) 92 | * Supports Multiple Storage Engines 93 | 94 | **Key Components:** 95 | 96 | **1. _id**: The `_id` field represents a unique value in a MongoDB document. The `_id` field is like the document's primary key. If you create a new document without an `_id` field, MongoDB will automatically create the field. 97 | 98 | **2. Collection**: This is a grouping of MongoDB documents. A collection is the equivalent of a table in any other RDBMS such as Oracle. 99 | 100 | **3. Cursor**: This is a pointer to the result set of a query. Clients can iterate through a cursor to retrieve results. 101 | 102 | **4. Database**: This is a container for collections, just as an RDBMS database is a container for tables. Each database gets its own set of files on the file system. A MongoDB server can store multiple databases. 103 | 104 | **5. Document**: A record in a MongoDB collection is called a document. The document, in turn, consists of field names and values. 105 | 106 | **6. Field**: A name-value pair in a document. A document has zero or more fields. Fields are analogous to columns in relational databases. 107 | 108 | **Example:** 109 | 110 | Connecting MongoDB Cloud using MongoDB Compass 111 | 112 |

113 | MongoDB Compass 114 |

115 | 116 | **[[Read More](https://docs.mongodb.com/guides/)]** 117 | 118 |
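As a small, illustrative sketch of how these components fit together in the MongoDB shell (the `todosdb` database and `todos` collection names are assumptions):

```js
use todosdb                        // database: a container for collections

// collection: todos; document: the argument to insertOne()
db.todos.insertOne({
  task: "write docs",              // field: a name-value pair
  done: false
})
// MongoDB adds the _id primary key automatically:
// { acknowledged: true, insertedId: ObjectId("...") }

const cursor = db.todos.find()     // cursor: a pointer to the result set
cursor.forEach(doc => printjson(doc))
```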
119 | ↥ back to top 120 |
121 | 122 | ## Q. What are Indexes in MongoDB? 123 | 124 | Indexes support the efficient execution of queries in MongoDB. Without indexes, MongoDB must perform a collection scan, i.e. scan every document in a collection, to select those documents that match the query statement. If an appropriate index exists for a query, MongoDB can use the index to limit the number of documents it must inspect. 125 | 126 | Indexes are special data structures that store a small portion of the collection's data set in an easy-to-traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field. The ordering of the index entries supports efficient equality matches and range-based query operations. In addition, MongoDB can return sorted results by using the ordering in the index. 127 | 128 | **Example:** 129 | 130 | The `createIndex()` method only creates an index if an index of the same specification does not already exist. The following example ( using Node.js ) creates a single key descending index on the name field: 131 | 132 | ```js 133 | collection.createIndex( { name: -1 }, function(err, result) { 134 |   console.log(result); 135 | }); 136 | ``` 137 | 138 | 139 |
140 | ↥ back to top 141 |
142 | 143 | ## Q. What are the types of Indexes available in MongoDB? 144 | 145 | MongoDB supports the following types of indexes for running a query. 146 | 147 | **1. Single Field Index:** 148 | 149 | MongoDB supports user-defined indexes such as the single field index. A single field index is used to create an index on a single field of a document. With a single field index, MongoDB can traverse in ascending and descending order. By default, each collection has a single field index automatically created on the `_id` field, the primary key. 150 | 151 | **Example:** 152 | 153 | ```js 154 | { 155 |   "_id": 1, 156 |   "person": { name: "Alex", surname: "K" }, 157 |   "age": 29, 158 |   "city": "New York" 159 | } 160 | ``` 161 | 162 | We can define a single field index on the age field. 163 | 164 | ```js 165 | db.people.createIndex( { age: 1 } ) // creates an ascending index 166 | 167 | db.people.createIndex( { age: -1 } ) // creates a descending index 168 | ``` 169 | 170 | With this kind of index we can improve all the queries that filter on the age field, like the following: 171 | 172 | ```js 173 | db.people.find( { age: 20 } ) 174 | db.people.find( { name: "Alex", age: 30 } ) 175 | db.people.find( { age: { $gt: 25 } } ) 176 | ``` 177 | 178 | **2. Compound Index:** 179 | 180 | A compound index is an index on multiple fields. Using the same people collection we can create a compound index combining the city, age, and person.surname fields. 181 | 182 | ```js 183 | db.people.createIndex( { city: 1, age: 1, "person.surname": 1 } ) 184 | ``` 185 | 186 | In this case, we have created a compound index where the first entry is the value of the city field, the second is the value of the age field, and the third is the value of the person.surname field. All the fields here are defined in ascending order. 187 | 188 | Queries such as the following can benefit from the index: 189 | 190 | ```js 191 | db.people.find( { city: "Miami", age: { $gt: 50 } } ) 192 | db.people.find( { city: "Boston" } ) 193 | db.people.find( { city: "Atlanta", age: { $lt: 25 }, "person.surname": "Green" } ) 194 | ``` 195 | 196 | **3. Multikey Index:** 197 | 198 | This is the index type for arrays. When creating an index on an array, MongoDB will create an index entry for every element. 199 | 200 | **Example:** 201 | 202 | ```js 203 | { 204 |   "_id": 1, 205 |   "person": { name: "John", surname: "Brown" }, 206 |   "age": 34, 207 |   "city": "New York", 208 |   "hobbies": [ "music", "gardening", "skiing" ] 209 | } 210 | ``` 211 | 212 | The multikey index can be created as: 213 | 214 | ```js 215 | db.people.createIndex( { hobbies: 1 } ) 216 | ``` 217 | 218 | Queries such as these next examples will use the index: 219 | 220 | ```js 221 | db.people.find( { hobbies: "music" } ) 222 | db.people.find( { hobbies: { $all: [ "music", "gardening" ] } } ) 223 | ``` 224 | 225 | **4. Geospatial Index:** 226 | 227 | Geospatial indexes are a special index type that allow searching based on location, distance from a point, and many other features. To query geospatial data, MongoDB supports two types of indexes – `2d` indexes and `2dsphere` indexes. `2d` indexes use planar geometry and `2dsphere` indexes use spherical geometry to return results. 228 | 229 | **5. Text Index:** 230 | 231 | It is another type of index that is supported by MongoDB. A text index supports searching for string content in a collection. These index types do not store language-specific stop words (e.g. "the", "a", "or"). 
Text indexes store only root words, after suffix stemming. 232 | 233 | **Example:** 234 | 235 | Let's insert some sample documents. 236 | 237 | ```js 238 | var entries = db.getSiblingDB("blogs").entries; 239 | entries.insert( { 240 | title : "my blog post", 241 | text : "i am writing a blog. yay", 242 | site: "home", 243 | language: "english" }); 244 | entries.insert( { 245 | title : "my 2nd post", 246 | text : "this is a new blog i am typing. yay", 247 | site: "work", 248 | language: "english" }); 249 | entries.insert( { 250 | title : "knives are Fun", 251 | text : "this is a new blog i am writing. yay", 252 | site: "home", 253 | language: "english" }); 254 | ``` 255 | 256 | Let's create the text index. 257 | 258 | ```js 259 | var entries = db.getSiblingDB("blogs").entries; 260 | entries.createIndex({title: "text", text: "text"}, { weights: { 261 | title: 10, 262 | text: 5 263 | }, 264 | name: "TextIndex", 265 | default_language: "english", 266 | language_override: "language" }); 267 | ``` 268 | 269 | Queries such as these next examples will use the index: 270 | 271 | ```js 272 | var entries = db.getSiblingDB("blogs").entries; 273 | entries.find({$text: {$search: "blog"}, site: "home"}) 274 | ``` 275 | 276 | **6. Hashed Index:** 277 | 278 | MongoDB supports hash-based sharding and provides hashed indexes. These indexes store the hashes of the field values. Sharded clusters use hashed indexes, hashing the shard key's value to spread writes across the shard instances. 279 | 280 |
281 | ↥ back to top 282 |
283 | 284 | ## Q. Explain Index Properties in MongoDB? 285 | 286 | **1. TTL Indexes:** 287 | 288 | TTL ( **Time To Live** ) is a special option that we can apply only to a single field index to permit the automatic deletion of documents after a certain time. 289 | 290 | During index creation, we can define an expiration time. After that time, all the documents that are older than the expiration time will be removed from the collection. This kind of feature is very useful when we are dealing with data that doesn't need to persist in the database (e.g. `session data`). 291 | 292 | **Example:** 293 | 294 | ```js 295 | db.sessionlog.createIndex( { "lastUpdateTime": 1 }, { expireAfterSeconds: 1800 } ) 296 | ``` 297 | 298 | In this case, MongoDB will drop the documents from the collection automatically once half an hour (1800 seconds) has passed since the time stored in the **lastUpdateTime** field. 299 | 300 | **Restrictions:** 301 | 302 | * Only single field indexes can have the TTL option 303 | * The `_id` single field index cannot support the TTL option 304 | * The indexed field must be a date type 305 | * A capped collection cannot have a TTL index 306 | 307 | **2. Partial indexes:** 308 | 309 | A partial index is an index that contains only a subset of the values based on a filter rule. They are useful in cases where: 310 | 311 | * The index size can be reduced 312 | * We want to index the most relevant and used values in the query conditions 313 | * We want to index the most selective values of a field 314 | 315 | **Example:** 316 | 317 | ```js 318 | db.people.createIndex( 319 |   { "city": 1, "person.surname": 1 }, 320 |   { partialFilterExpression: { age: { $lt: 30 } } } 321 | ) 322 | ``` 323 | 324 | We have created a compound index on city and person.surname, but only for the documents with age less than 30. 325 | In order for the partial index to be used, the queries must contain a condition on the age field. 326 | 327 | ```js 328 | db.people.find( { city: "New York", age: { $eq: 20 } } ) 329 | ``` 330 | 331 | **3. Sparse indexes:** 332 | 333 | Sparse indexes are a subset of partial indexes. A sparse index only contains entries for the documents that have the indexed field, even if it is null. 334 | 335 | Since MongoDB is a schemaless database, the documents in a collection can have different fields, so an indexed field may not be present in some of them. 336 | 337 | **Example:** 338 | 339 | To create such an index use the sparse option: 340 | 341 | ```js 342 | db.people.createIndex( { city: 1 }, { sparse: true } ) 343 | ``` 344 | 345 | In this case, we are assuming there could be documents in the collection with the field city missing. Sparse indexes are based on the existence of a field in the documents and are useful to reduce the size of the index. 346 | 347 | **4. Unique indexes:** 348 | 349 | MongoDB can create an index as unique. An index defined this way cannot contain duplicate entries. 350 | 351 | **Example:** 352 | 353 | ```js 354 | db.people.createIndex( { city: 1 }, { unique: true } ) 355 | ``` 356 | 357 | Uniqueness can be defined for compound indexes too. 358 | 359 | ```js 360 | db.people.createIndex( { city: 1, "person.surname": 1 }, { unique: true } ) 361 | ``` 362 | 363 | By default, the index on `_id` is automatically created as unique. 364 | 365 |
366 | ↥ back to top 367 |
368 | 369 | ## Q. How many indexes does MongoDB create by default for a new collection? 370 | 371 | By default, MongoDB creates a unique index on the `_id` field during the creation of a collection. The `_id` index prevents clients from inserting two documents with the same value for the `_id` field. 372 | 373 |
374 | ↥ back to top 375 |
376 | 377 | ## Q. Can you create an index on an array field in MongoDB? 378 | 379 | Yes. To index a field that holds an array value, MongoDB creates an index key for each element in the array. Multikey indexes can be constructed over arrays that hold both scalar values (e.g. strings, numbers) and nested documents. MongoDB automatically creates a multikey index if any indexed field is an array. 380 | 381 | **Syntax:** 382 | 383 | ```js 384 | db.collection.createIndex( { <field>: <1 or -1> } ) 385 | ``` 386 | 387 | For example, consider an inventory collection that contains the following documents: 388 | 389 | ```js 390 | { _id: 10, type: "food", item: "aaa", ratings: [ 5, 8, 9 ] } 391 | { _id: 11, type: "food", item: "bbb", ratings: [ 5, 9 ] } 392 | { _id: 12, type: "food", item: "ccc", ratings: [ 9, 5, 8, 4, 7 ] } 393 | ``` 394 | 395 | The collection has a multikey index on the ratings field: 396 | 397 | ```js 398 | db.inventory.createIndex( { ratings: 1 } ) 399 | ``` 400 | 401 | The following query looks for documents where the ratings field is the array [ 5, 9 ]: 402 | 403 | ```js 404 | db.inventory.find( { ratings: [ 5, 9 ] } ) 405 | ``` 406 | 407 | MongoDB can use the multikey index to find documents that have 5 at any position in the ratings array. Then, MongoDB retrieves these documents and filters for documents whose ratings array equals the query array [ 5, 9 ]. 408 | 409 |
410 | ↥ back to top 411 |
412 | 413 | ## Q. Why is the Profiler used in MongoDB? 414 | 415 | The database profiler captures detailed information about read and write operations, cursor operations, and database commands. The database profiler writes data to the `system.profile` collection, which is a capped collection. 416 | 417 | The database profiler collects this information for database commands executed against a running mongod instance. This includes CRUD operations as well as configuration and administration commands. 418 | 419 | The profiler has 3 profiling levels: 420 | 421 | * **Level 0** - The profiler will not log any data 422 | * **Level 1** - The profiler will log only slow operations above some threshold 423 | * **Level 2** - The profiler will log all operations 424 | 425 | **1. To get the current profiling level:** 426 | 427 | ```js 428 | db.getProfilingLevel() 429 | 430 | // Output 431 | 0 432 | ``` 433 | 434 | **2. To check the current profiling status:** 435 | 436 | ```js 437 | db.getProfilingStatus() 438 | 439 | 440 | // Output 441 | { "was" : 0, "slowms" : 100 } 442 | ``` 443 | 444 | **3. To set the profiling level:** 445 | 446 | ```js 447 | db.setProfilingLevel(1, 40) 448 | 449 | // Output (returns the previous settings) 450 | { "was" : 0, "slowms" : 100, "ok" : 1 } 451 | ``` 452 | 453 |
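Once profiling is enabled, the recorded operations can be inspected by querying the `system.profile` collection directly. A small sketch (the thresholds below are illustrative):

```js
// enable level 1 profiling for operations slower than 100 ms
db.setProfilingLevel(1, 100)

// show the most recent profiled operations, newest first
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()

// show only profiled operations that took longer than one second
db.system.profile.find({ millis: { $gt: 1000 } })
```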
454 | ↥ back to top 455 |
456 | 457 | ## Q. How to remove an attribute from a MongoDB document? 458 | 459 | **$unset:** 460 | 461 | The `$unset` operator deletes a particular field. If the field does not exist, then `$unset` does nothing. When used with `$` to match an array element, `$unset` replaces the matching element with `null` rather than removing the matching element from the array. This behavior keeps the array size and element positions consistent. 462 | 463 | **Syntax:** 464 | 465 | ```js 466 | { $unset: { <field1>: "", ... } } 467 | ``` 468 | 469 | **Example:** 470 | 471 | Delete the `properties.service` attribute from all records in this collection. 472 | 473 | ```js 474 | db.collection.update( 475 | {}, 476 | { 477 | $unset : { 478 | "properties.service" : 1 479 | } 480 | }, 481 | { 482 | multi: true 483 | } 484 | ); 485 | ``` 486 | 487 | **To verify they have been deleted you can use:** 488 | 489 | ```js 490 | db.collection.find( 491 | { 492 | "properties.service" : { 493 | $exists : true 494 | } 495 | } 496 | ).count(true); 497 | ``` 498 |
500 | ↥ back to top 501 |
501 | 502 | ## Q. What is a "Namespace" in MongoDB? 503 | 504 | MongoDB stores BSON (Binary JSON) objects in collections. The concatenation of the database name and the collection name is called a namespace. For example, documents in the `posts` collection of the `blog` database belong to the `blog.posts` namespace. 505 | 506 |
508 | ↥ back to top 509 |
510 | 511 | ## Q. What is Replication in MongoDB? 512 | 513 | Replication exists primarily to offer data redundancy and high availability. It maintains the durability of data by keeping multiple copies, or replicas, of that data on physically isolated servers. Replication increases data availability, which is especially useful in the event of a server crash or hardware failure. 514 | 515 | With MongoDB, replication is achieved through a **Replica Set**. Write operations are sent to the primary server (node), which applies the operations across the secondary servers, replicating the data. If the primary server fails (through a crash or system failure), one of the secondary servers takes over and becomes the new primary node via an election. If the old primary comes back online, it rejoins as a secondary once it fully recovers, aiding the new primary node. 516 | 517 |
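As a minimal sketch, a three-member replica set can be initiated from the shell; the `rs0` set name and the hostnames below are assumptions for illustration:

```js
// run against one of the mongod instances started with --replSet rs0
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb0.example.net:27017" },
    { _id: 1, host: "mongodb1.example.net:27017" },
    { _id: 2, host: "mongodb2.example.net:27017" }
  ]
})

// check which member is primary and the state of the others
rs.status()
```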
518 | ↥ back to top 519 |
520 | 521 | ## Q. What is a Replica Set in MongoDB? 522 | 523 | It is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments. A replica set contains a primary node and multiple secondary nodes. 524 | 525 | The primary node receives all write operations. A replica set can have only one primary capable of confirming writes with `{ w: "majority" }` write concern; although in some circumstances, another mongod instance may transiently believe itself to also be primary. 526 | 527 | The secondaries replicate the primary's oplog and apply the operations to their data sets such that the secondaries' data sets reflect the primary's data set. If the primary is unavailable, an eligible secondary will hold an election to elect itself the new primary. 528 | 529 |

530 | Replica Set 531 |

532 | 533 |
534 | ↥ back to top 535 |
536 | 537 | ## Q. How does MongoDB ensure high availability? 538 | 539 | **High Availability (HA)** refers to the improvement of system and application availability by minimizing the downtime caused by routine maintenance operations (planned) and sudden system crashes (unplanned). 540 | 541 | **Replica Set:** 542 | 543 | The replica set mechanism of MongoDB has two main purposes: 544 | 545 | * One is data redundancy for failure recovery. When the hardware fails, or the node is down for other reasons, you can use a replica for recovery. 546 | * The other purpose is read-write splitting. It routes read requests to the replicas to reduce the read pressure on the primary node. 547 | 548 | MongoDB automatically maintains replica sets, multiple copies of data that are distributed across servers, racks and data centers. Replica sets help prevent database downtime using native replication and automatic failover. 549 | 550 | A replica set consists of multiple replica set members. At any given time, one member acts as the primary member, and the other members act as secondary members. If the primary member fails for any reason (e.g., hardware failure), one of the secondary members is automatically elected primary and begins to process all reads and writes. 551 | 552 |
553 | ↥ back to top 554 |
555 | 556 | ## Q. What is an Embedded MongoDB Document? 557 | 558 | An embedded, or nested, MongoDB document is a normal document that is nested inside another document within a MongoDB collection. Embedding connected data in a single document can reduce the number of read operations required to obtain data. In general, we should structure our schema so that the application receives all of its required information in a single read operation. 559 | 560 | **Example:** 561 | 562 | In the normalized data model, the address documents contain a reference to the patron document. 563 | 564 | ```js 565 | // patron document 566 | { 567 | _id: "joe", 568 | name: "Joe Bookreader" 569 | } 570 | 571 | // address documents 572 | { 573 | patron_id: "joe", // reference to patron document 574 | street: "123 Fake Street", 575 | city: "Faketon", 576 | state: "MA", 577 | zip: "12345" 578 | } 579 | 580 | { 581 | patron_id: "joe", 582 | street: "1 Some Other Street", 583 | city: "Boston", 584 | state: "MA", 585 | zip: "12345" 586 | } 587 | ``` 588 | 589 | Embedded documents are particularly useful when a **one-to-many** relationship exists between documents. In this example, a single customer has multiple addresses. Embedding the addresses makes it possible to retrieve the complete address information about this customer with just a single query. 590 |
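For contrast, the embedded data model keeps the addresses inside the patron document itself, so one read returns everything (a sketch based on the normalized example above):

```js
// patron document with embedded address documents
{
  _id: "joe",
  name: "Joe Bookreader",
  addresses: [
    { street: "123 Fake Street", city: "Faketon", state: "MA", zip: "12345" },
    { street: "1 Some Other Street", city: "Boston", state: "MA", zip: "12345" }
  ]
}
```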
592 | ↥ back to top 593 |
594 | 595 | ## Q. How can you achieve primary key - foreign key relationships in MongoDB? 596 | 597 | The primary key-foreign key relationship can be achieved by embedding one document inside another. As an example, a department document can have its employee document(s), as shown in the sketch below. 598 |
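A minimal sketch of this idea (the `department` collection and its field names are hypothetical):

```js
// a department document embedding its employee documents
db.department.insertOne({
  _id: "dept-001",
  name: "Engineering",
  employees: [
    { empId: 1, name: "Alice", role: "Developer" },
    { empId: 2, name: "Bob", role: "QA Engineer" }
  ]
})

// fetch the department together with all of its employees in one read
db.department.findOne({ _id: "dept-001" })
```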
600 | ↥ back to top 601 |
602 | 603 | ## Q. When should we embed one document within another in MongoDB? 604 | 605 | You should consider embedding documents for: 606 | 607 | * *contains* relationships between entities 608 | * One-to-many relationships 609 | * Performance reasons 610 | 611 |
612 | ↥ back to top 613 |
614 | 615 | ## Q. How is data stored in MongoDB? 616 | 617 | In MongoDB, data is stored as BSON documents (short for `Binary JSON`), a binary representation of JSON (JavaScript Object Notation). These documents support embedded fields, so related data and lists of data can be stored with the document instead of in an external table. Documents contain one or more fields, and each field contains a value of a specific data type, including arrays, binary data and sub-documents. Documents that tend to share a similar structure are organized as collections. 618 | 619 | JSON is formatted as name/value pairs. In JSON documents, field names and values are separated by a colon, field name and value pairs are separated by commas, and sets of fields are encapsulated in "curly braces" ({}). 620 | 621 | **Example:** 622 | 623 | ```js 624 | { 625 | "name": "notebook", 626 | "qty": 50, 627 | "rating": [ { "score": 8 }, { "score": 9 } ], 628 | "size": { "height": 11, "width": 8.5, "unit": "in" }, 629 | "status": "A", 630 | "tags": [ "college-ruled", "perforated"] 631 | } 632 | ``` 633 |
635 | ↥ back to top 636 |
637 | 638 | ## Q. What are the differences between MongoDB and SQL Server? 639 | 640 | * MongoDB stores data as JSON-like documents, while SQL Server stores data in tables. 641 | * MongoDB provides high performance, high availability, and easy scalability compared to SQL Server. 642 | * In MongoDB, we can change the structure simply by adding or removing fields from existing documents. 643 | 644 | **MongoDB and SQL Server Comparison Table:** 645 | 646 | |Base of Comparison | MS SQL Server | MongoDB | 647 | |--------------------|--------------------|---------------------| 648 | |Storage Model |RDBMS |Document-Oriented | 649 | |Joins |Yes |No | 650 | |Transaction |ACID |Multi-document ACID Transactions with snapshot isolation | 651 | |Agile practices |No |Yes | 652 | |Data Schema |Fixed |Dynamic | 653 | |Scalability |Vertical |Horizontal | 654 | |Map Reduce |No |Yes | 655 | |Language |SQL query language |JSON Query Language | 656 | |Secondary index |Yes |Yes | 657 | |Triggers |Yes |Yes | 658 | |Foreign Keys |Yes |No | 659 | |Concurrency |Yes |Yes | 660 | |XML Support |Yes |No | 661 | 662 |

663 | MongoDB & SQL Server 664 |

665 | 666 |
667 | ↥ back to top 668 |
668 | 669 | ## Q. How can you achieve transactions and locking in MongoDB? 670 | 671 | In MongoDB (4.2), an operation on a single document is atomic. For situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions. With distributed transactions, transactions can be used across multiple operations, collections, databases, documents, and shards. 672 | 673 | MongoDB allows multiple clients to read and write the same data. In order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously. 674 | 675 | MongoDB uses **multi-granularity locking** that allows operations to lock at the global, database or collection level, and allows for individual storage engines to implement their own concurrency control below the collection level (e.g., at the document level in WiredTiger). MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection. 676 | 677 | The lock modes are represented as follows: 678 | 679 | |Lock Mode |Description | 680 | |------------|----------------------------| 681 | |R |Represents Shared (S) lock. | 682 | |W |Represents Exclusive (X) lock.| 683 | |r |Represents Intent Shared (IS) lock.| 684 | |w |Represents Intent Exclusive (IX) lock.| 685 | 686 | **Example:** 687 | 688 | The following example highlights the key components of the transactions API: 689 | 690 | ```js 691 | const client = new MongoClient(uri); 692 | await client.connect(); 693 | 694 | // Prereq: Create collections. 695 | 696 | await client.db('mydb1').collection('foo').insertOne({ abc: 0 }, { w: 'majority' }); 697 | 698 | await client.db('mydb2').collection('bar').insertOne({ xyz: 0 }, { w: 'majority' }); 699 | 700 | // Step 1: Start a Client Session 701 | const session = client.startSession(); 702 | 703 | // Step 2: Optional. Define options to use for the transaction 704 | const transactionOptions = { 705 | readPreference: 'primary', 706 | readConcern: { level: 'local' }, 707 | writeConcern: { w: 'majority' } 708 | }; 709 | 710 | // Step 3: Use withTransaction to start a transaction, execute the callback, and commit (or abort on error) 711 | // Note: The callback for withTransaction MUST be async and/or return a Promise. 712 | try { 713 | await session.withTransaction(async () => { 714 | const coll1 = client.db('mydb1').collection('foo'); 715 | const coll2 = client.db('mydb2').collection('bar'); 716 | 717 | // Important: You must pass the session to each operation 718 | 719 | await coll1.insertOne({ abc: 1 }, { session }); 720 | await coll2.insertOne({ xyz: 999 }, { session }); 721 | }, transactionOptions); 722 | } finally { 723 | await session.endSession(); 724 | await client.close(); 725 | } 726 | ``` 727 | 728 |
730 | ↥ back to top 731 |
732 | 733 | ## Q. When to Use MongoDB Rather than MySQL? 734 | 735 | **1. MongoDB:** 736 | 737 | MongoDB is one of the most popular document-oriented NoSQL databases. It employs the format of key-value pairs, here called a document store. Documents in MongoDB are stored as BSON, a binary-encoded, slightly extended version of JSON, so all JSON types are supported. 738 | 739 | It offers efficiency and reliability that can meet your storage capacity and speed demands. The schema-free design of MongoDB eliminates the prerequisite of defining a fixed structure. This model allows hierarchical relationships to be represented and makes it easy to change the structure of a record. 740 | 741 | **Pros:** 742 | 743 | * MongoDB has a lower latency per query and spends less CPU time per query because it is doing a lot less work (e.g. no joins, no transactions). As a result, it can handle a higher load in terms of queries per second. 744 | * MongoDB is easier to shard (use in a cluster) because it doesn't have to worry about transactions and consistency. 745 | * MongoDB has a faster write speed because it does not have to worry about transactions or rollbacks (and thus does not have to worry about locking). 746 | * Features like automatic repair, easier data distribution, and simpler data models reduce administration and tuning requirements. 747 | * NoSQL databases are cheap and open source. 748 | * NoSQL databases support caching in system memory, which increases data output performance. 749 | 750 | **Cons:** 751 | 752 | * MongoDB did not support multi-document transactions before version 4.0. 753 | * In general, MongoDB creates more work (e.g. more CPU cost) for the client. For example, to join data one has to issue multiple queries and do the join on the client. 754 | * No stored procedures in MongoDB. 755 | 756 | **Reasons to Use a NoSQL Database:** 757 | 758 | * **Storing large volumes of data without structure**: A NoSQL database doesn't limit storable data types. Plus, you can add new types as business needs change. 759 | * **Using cloud computing and storage**: Cloud-based storage is a great solution, but it requires data to be easily spread across multiple servers for scaling. Using affordable hardware on-site for testing and then for production in the cloud is what NoSQL databases are designed for. 760 | * **Rapid development**: If you are developing using modern agile methodologies, a relational database will slow you down. A NoSQL database doesn't require the level of preparation typically needed for relational databases. 761 | 762 | **2. MySQL:** 763 | 764 | MySQL is a popular open-source relational database management system (RDBMS) that is developed, distributed and supported by Oracle Corporation. MySQL stores data in tables and uses structured query language (SQL) for database access, with commands such as `SELECT`, `UPDATE`, `INSERT` and `DELETE` to manage it. 765 | 766 | Related information is stored in different tables, but the concept of JOIN operations simplifies correlating it and performing queries across multiple tables while minimizing the chances of data duplication. MySQL follows the ACID (Atomic, Consistent, Isolated and Durable) model: once a transaction is complete, the data remains consistent and stable on disk, which may include multiple distinct memory locations. 
767 | 768 | **Pros:** 769 | 770 | * SQL databases are table-based databases. 771 | * Data is stored in rows and columns. 772 | * Each row contains a unique instance of data for the categories defined by the columns. 773 | * Provides a primary key facility to uniquely identify each row. 774 | 775 | **Cons:** 776 | 777 | * Users have to scale a relational database on powerful servers that are expensive and difficult to handle. To scale a relational database, it has to be distributed across multiple servers, and handling tables across different servers is difficult. 778 | * In SQL Server, data has to fit into tables. If your data doesn't fit into tables, you need to redesign your database structure, which will be complex and again difficult to handle. 779 |
781 | ↥ back to top 782 |
783 | 784 | ## Q. How does MongoDB support ACID transactions and locking functionalities? 785 | 786 | ACID means that any update is: 787 | 788 | * **Atomic:** it either fully completes or it does not 789 | * **Consistent:** no reader will see a "partially applied" update 790 | * **Isolated:** no reader will see a "dirty" read 791 | * **Durable:** once committed, it survives a crash (with the appropriate write concern) 792 | 793 | MongoDB has always supported ACID transactions on a single document and, when leveraging the document model appropriately, many applications don't need ACID guarantees across multiple documents. 794 | 795 | MongoDB is a document-based NoSQL database with a flexible schema. Transactions are not operations that should be executed for every write, since they incur a greater performance cost than single document writes. With a document-based structure and a denormalized data model, the need for transactions is minimized. Since MongoDB allows document embedding, you don't necessarily need a transaction to make a write operation atomic. 796 | 797 | MongoDB version 4.0 provides **multi-document transaction** support for replica set deployments, and version 4.2 extends that support to sharded deployments. 798 | 799 | **Example:** Multi-Document ACID Transactions in MongoDB 800 | 801 | These are multi-statement operations that need to be executed sequentially without affecting each other. For example, below we open one transaction that adds a user and updates another user with an age field (using the shell's default `test` database for illustration). 802 | 803 | ```js 804 | const session = db.getMongo().startSession(); 805 | session.startTransaction(); 806 | 807 | const users = session.getDatabase("test").users; 808 | users.insertOne({ _id: 6, name: "John" }); 809 | users.updateOne({ _id: 3 }, { $set: { age: 26 } }); 810 | 811 | session.commitTransaction(); 812 | ``` 813 | 814 | Transactions can be applied to operations against multiple documents contained in one or many collections/databases. Changes made inside a transaction do not impact the performance of workloads that do not use transactions. Until the transaction is committed, uncommitted writes are neither replicated to the secondary nodes nor readable outside the transaction. 815 |
816 | ↥ back to top 817 |
818 | 819 | ## Q. What are the best practices for MongoDB Transactions? 820 | 821 | Multi-document transactions are only supported by the `WiredTiger` storage engine. Performing an excessive number of operations in a single ACID transaction can put high pressure on the WiredTiger cache: the cache must maintain state for all subsequent writes since the oldest snapshot was created, so new writes accumulate in the cache throughout the duration of the transaction and are flushed only after the transactions currently running on old snapshots are committed or aborted. 822 | 823 | For the best database performance with transactions, developers should consider the following: 824 | 825 | 1. Always modify a small number of documents in a transaction. Otherwise, break the transaction into parts and process the documents in batches, handling at most 1000 documents at a time. 826 | 827 | 2. Temporary exceptions, such as waiting for a primary election or transient network hiccups, may cause the transaction to be aborted. Developers should establish logic to retry the transaction when such errors occur. 828 | 829 | 3. Configure an optimal duration for the execution of the transaction, starting from the default of 60 seconds provided by MongoDB. Also, employ indexing to allow fast data access within the transaction. 830 | 831 | 4. Decompose your transaction into a small set of operations so that it fits the 16MB size constraint. Otherwise, if the operations together with the oplog description exceed this limit, the transaction will be aborted. 832 | 833 | 5. All data relating to an entity should be stored in a single, rich document structure. This reduces the number of documents that need to be cached when different fields are changed. 834 | 835 |
836 | ↥ back to top 837 |
838 | 839 | ## Q. Explain the limitations of MongoDB Transactions? 840 | 841 | MongoDB transactions can exist only for relatively short time periods. By default, a transaction must span no more than one minute of clock time. This limitation results from the underlying MongoDB implementation: MongoDB uses MVCC, but unlike databases such as Oracle, the "older" versions of data are kept only in memory. 842 | 843 | 1. You cannot create or drop a collection inside a transaction. 844 | 2. Transactions cannot write to a capped collection. 845 | 3. Transactions add execution overhead and can slow the performance of the database. 846 | 4. Transaction size is limited to 16MB, requiring one to split any transaction that tends to exceed this size into smaller transactions. 847 | 5. Subjecting a large number of documents to a transaction may exert excessive pressure on the WiredTiger engine; since it relies on the snapshot capability, large unflushed operations are retained in memory. This imposes some performance cost on the database. 848 | 849 |
850 | ↥ back to top 851 |
852 | 853 | ## Q. Should I normalize my data before storing it in MongoDB? 854 | 855 | Data used by multiple documents can either be embedded (denormalized) or referenced (normalized). Normalization increases the complexity of the schema by splitting tables into multiple smaller ones to reduce data redundancy (1NF, 2NF, 3NF). 856 | 857 | MongoDB, however, follows the opposite approach of what we do with SQL: data normalization is not required. Instead, we denormalize the data and fit it into a collection of nested documents. 858 | 859 | **Example:** Let's say we have three tables 860 | 861 | * Table - 1 : ColumnA, ColumnB (primary key) 862 | * Table - 2 : ColumnC (foreign key), ColumnD (primary key) 863 | * Table - 3 : ColumnE (foreign key), ColumnF 864 | 865 | In this case, the MongoDB document structure could be as follows. 866 | 867 | ```js 868 | { 869 | ColumnA : ValueA, 870 | ColumnB : ValueB, 871 | Subset1 : [{ 872 | ColumnC : ValueC, 873 | ColumnD : ValueD, 874 | Subset2 : [{ 875 | ColumnE : ValueE, 876 | ColumnF : ValueF 877 | }] 878 | }] 879 | } 880 | ``` 881 |
883 | ↥ back to top 884 |
885 | 886 | ## Q. What is the upsert operation in MongoDB? 887 | 888 | An upsert operation in MongoDB is used to save a document into a collection. If the document matches the query criteria, an update operation is performed; otherwise, a new document is inserted into the collection. 889 | 890 | Upsert is useful when importing data from an external source: it updates existing documents if they match, otherwise it inserts new documents into the collection. 891 | 892 | **Example:** Upsert option set for update 893 | 894 | This operation first searches for the document; if it is not present, it inserts the new document into the database. 895 | 896 | ```js 897 | 898 | > db.car.update( 899 | ... { name: "Qualis" }, 900 | ... { 901 | ... name: "Qualis", 902 | ... speed: 50 903 | ... }, 904 | ... { upsert: true } 905 | ... ) 906 | WriteResult({ 907 | "nMatched" : 0, 908 | "nUpserted" : 1, 909 | "nModified" : 0, 910 | "_id" : ObjectId("548d3a955a5072e76925dc1c") 911 | }) 912 | ``` 913 | 914 | The car with the name "Qualis" is checked for existence, and since it does not exist, a document with car name "Qualis" and speed 50 is inserted into the database. The nUpserted value of "1" indicates that a new document was inserted. 915 |
917 | ↥ back to top 918 |
918 | 919 | ## Q. Is there an "upsert" option in the MongoDB insert command? 920 | 921 | The `db.collection.insert()` method provides no upsert option; it always inserts a new document into a collection. Upsert is only possible using `db.collection.update()` and `db.collection.save()`, as shown in the sketch below. 922 | 923 |
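A minimal sketch of the difference (the `cars` collection is an assumption for illustration):

```js
// insert() always creates a new document; running it twice creates duplicates
db.cars.insert({ name: "Qualis", speed: 50 })

// update() with { upsert: true } inserts only when no document matches;
// otherwise it updates the matching document in place
db.cars.update(
  { name: "Qualis" },
  { $set: { speed: 60 } },
  { upsert: true }
)
```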
925 | ↥ back to top 926 |
927 | 928 | ## Q. What is oplog? 929 | 930 | The OpLog (Operations Log) is a special capped collection that keeps a rolling record of all operations that modify the data stored in databases. 931 | 932 | MongoDB applies database operations on the primary and then records the operations on the primary\'s oplog. The secondary members then copy and apply these operations in an asynchronous process. All replica set members contain a copy of the oplog, in the **local.oplog.rs** collection, which allows them to maintain the current state of the database. 933 | 934 | Each operation in the oplog is idempotent. That is, oplog operations produce the same results whether applied once or multiple times to the target dataset. 935 | 936 | **Example:** Querying The OpLog 937 | 938 | ```js 939 | MongoDB shell version: 2.0.4 940 | connecting to: mongodb:27017/test 941 | PRIMARY> use local 942 | PRIMARY> db.oplog.rs.find() 943 | ``` 944 | 945 |
946 | ↥ back to top 947 |
948 | 949 | ## Q. Does MongoDB push writes to disk immediately or lazily? 950 | 951 | MongoDB pushes the data to disk lazily. Writes are recorded in the journal immediately, but writing the data from the journal to the data files happens lazily. 952 | 953 |
954 | ↥ back to top 955 |
956 | 957 | ## Q. How to perform a delete operation in MongoDB? 958 | 959 | MongoDB's `db.collection.deleteMany()` and `db.collection.deleteOne()` methods are used to delete documents from a collection. Delete operations do not drop indexes, even when deleting all documents from a collection. All write operations in MongoDB are atomic on the level of a single document. 960 | 961 | **Example:** 962 | 963 | ```js 964 | // Create Inventory Collection 965 | db.inventory.insertMany( [ 966 | { item: "journal", qty: 25, size: { h: 14, w: 21, uom: "cm" }, status: "A" }, 967 | { item: "notebook", qty: 50, size: { h: 8.5, w: 11, uom: "in" }, status: "P" }, 968 | { item: "paper", qty: 100, size: { h: 8.5, w: 11, uom: "in" }, status: "D" }, 969 | { item: "planner", qty: 75, size: { h: 22.85, w: 30, uom: "cm" }, status: "D" }, 970 | { item: "postcard", qty: 45, size: { h: 10, w: 15.25, uom: "cm" }, status: "A" }, 971 | ] ); 972 | 973 | 974 | 975 | // Delete Commands 976 | db.inventory.deleteMany({}) // Delete All Documents 977 | 978 | db.inventory.deleteMany({ status : "A" }) // Delete All Documents that Match a Condition 979 | 980 | db.inventory.deleteOne( { status: "D" } ) // Delete Only One Document that Matches a Condition 981 | ``` 982 |
984 | ↥ back to top 985 |
986 | 987 | ## Q. If you remove a document from database, does MongoDB remove it from disk? 988 | 989 | Yes. If you remove a document from database, MongoDB will remove it from disk too. 990 | 991 |
992 | ↥ back to top 993 |
994 | 995 | ## Q. Explain the structure of ObjectId in MongoDB? 996 | 997 | The **ObjectId** class is the default primary key type for a MongoDB document and is usually found in the `_id` field of an inserted document. Calling `ObjectId()` returns a new ObjectId value. The 12-byte ObjectId value consists of: 998 | 999 | * a 4-byte timestamp value, representing the ObjectId's creation, measured in seconds since the Unix epoch 1000 | * a 5-byte random value 1001 | * a 3-byte incrementing counter, initialized to a random value 1002 |

1004 | Document Databases 1005 |

1006 | 1007 | While the BSON format itself is little-endian, the timestamp and counter values are big-endian, with the most significant bytes appearing first in the byte sequence. 1008 | 1009 | **Create ObjectId:** 1010 | 1011 | To create a new ObjectId manually within MongoDB, we can call `ObjectId()` as a method. 1012 | 1013 | ```js 1014 | > newObjectId = ObjectId(); 1015 | 1016 | // Output 1017 | ObjectId("5349b4ddd2781d08c09890f3") 1018 | ``` 1019 | 1020 | **MongoDB provides three methods for ObjectId:** 1021 | 1022 | | Method |Description | 1023 | |-----------------------|------------------------| 1024 | |ObjectId.getTimestamp()|Returns the timestamp portion of the object as a Date.| 1025 | |ObjectId.toString() |Returns the JavaScript representation in the form of a string literal "ObjectId(...)".| 1026 | |ObjectId.valueOf() |Returns the representation of the object as a hexadecimal string.| 1027 | 1028 |
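A quick usage sketch of these methods in the shell, reusing the ObjectId created above:

```js
var id = ObjectId("5349b4ddd2781d08c09890f3");

id.getTimestamp();  // ISODate("2014-04-12T21:49:17Z"), decoded from the 4-byte timestamp
id.valueOf();       // "5349b4ddd2781d08c09890f3", the raw hexadecimal string
```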
1029 | ↥ back to top 1030 |
1031 | 1032 | ## Q. What is a covered query in MongoDB? 1033 | 1034 | A covered query is a query that can be satisfied entirely using an index, so MongoDB does not have to examine any documents. An index covers a query if it satisfies the following conditions: 1035 | 1036 | * All fields in the query are part of an index. 1037 | * All fields returned in the results are in the same index. 1038 | * No fields in the query are equal to null. 1039 | 1040 | Since all the fields present in the query are part of an index, MongoDB matches the query conditions and returns the result using the same index without actually looking inside the documents. 1041 | 1042 | **Example:** 1043 | 1044 | A collection inventory has the following index on the type and item fields: 1045 | 1046 | ```js 1047 | db.inventory.createIndex( { type: 1, item: 1 } ) 1048 | ``` 1049 | 1050 | This index will cover the following operation, which queries on the type and item fields and returns only the item field: 1051 | 1052 | ```js 1053 | db.inventory.find( 1054 | { type: "food", item:/^c/ }, 1055 | { item: 1, _id: 0 } 1056 | ) 1057 | ``` 1058 |
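Whether a query is actually covered can be verified with `explain()`: in the execution stats of a covered query, no documents are examined.

```js
db.inventory.find(
  { type: "food", item: /^c/ },
  { item: 1, _id: 0 }
).explain("executionStats")

// For a covered query the output reports, among other fields:
//   "totalKeysExamined" : <n>   (index keys were read)
//   "totalDocsExamined" : 0     (no documents were touched)
```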
1060 | ↥ back to top 1061 |
1062 | 1063 | ## Q. Why is MongoDB not preferred on a 32-bit system? 1064 | 1065 | When running a 32-bit build of MongoDB, the total storage size for the server, including data and indexes, is limited to 2 gigabytes. The reason for this is that the MongoDB storage engine uses memory-mapped files for performance, and a 32-bit address space cannot map more than that. 1066 | 1067 | If you are running a 64-bit build of MongoDB, there is virtually no limit to storage size. 1068 |
1070 | ↥ back to top 1071 |
1072 | 1073 | ## Q. Can one MongoDB operation lock more than one database? 1074 | 1075 | Yes. Operations like `db.copyDatabase()`, `db.repairDatabase()`, etc. can lock all of the databases involved. 1076 |
1078 | ↥ back to top 1079 |
1080 | 1081 | ## Q. What is Sharding in MongoDB? 1082 | 1083 | **Sharding** is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput operations. 1084 | 1085 | Database systems with large data sets or high throughput applications can challenge the capacity of a single server. For example, high query rates can exhaust the CPU capacity of the server, and working set sizes larger than the system's RAM stress the I/O capacity of disk drives. There are two methods for addressing system growth: vertical and horizontal scaling. 1086 | 1087 | **1. Vertical Scaling:** 1088 | 1089 | Vertical scaling involves increasing the capacity of a single server, such as using a more powerful CPU, adding more RAM, or increasing the amount of storage space. 1090 | 1091 | **2. Horizontal Scaling:** 1092 | 1093 | Horizontal scaling involves dividing the system dataset and load over multiple servers, adding additional servers to increase capacity as required. While the overall speed or capacity of a single machine may not be high, each machine handles a subset of the overall workload, potentially providing better efficiency than a single high-speed, high-capacity server. 1094 | 1095 |

1096 | Document Databases 1097 |

1098 | 1099 | MongoDB supports horizontal scaling through `sharding`. A MongoDB sharded cluster consists of the following components: 1100 | 1101 | * **Shards**: Each shard contains a subset of the sharded data. Each shard can be deployed as a replica set. 1102 | * **Mongos**: The mongos acts as a query router, providing an interface between client applications and the sharded cluster. Starting in MongoDB 4.4, mongos can support hedged reads to minimize latencies. 1103 | * **Config Servers**: Config servers store metadata and configuration settings for the cluster. 1104 | 1105 |
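As a minimal sketch, once the cluster components are running, sharding is enabled per database and per collection from the shell (the `mydb.users` namespace and the hashed `userId` shard key are assumptions for illustration):

```js
// enable sharding for the database
sh.enableSharding("mydb")

// shard the collection on a hashed key to spread writes evenly
sh.shardCollection("mydb.users", { userId: "hashed" })

// inspect how chunks are distributed across the shards
sh.status()
```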
1106 | ↥ back to top 1107 |
1108 | 1109 | ## Q. What is Aggregation in MongoDB? 1110 | 1111 | Aggregation in MongoDB is an operation used to process data and return computed results. Aggregation basically groups data from multiple documents and operates on the grouped data in many ways to return one combined result. 1112 | 1113 | An aggregate function groups the records in a collection and can be used to provide the total (sum), average, minimum, maximum, etc. of the group selected. In order to perform aggregation in MongoDB, the `aggregate()` function is used. 1114 | 1115 | **Syntax:** 1116 | 1117 | ```js 1118 | db.collection_name.aggregate(aggregate_operation) 1119 | ``` 1120 | 1121 | MongoDB provides three ways to perform aggregation: 1122 | 1123 | * the aggregation pipeline, 1124 | * the map-reduce function, 1125 | * and single purpose aggregation methods and commands. 1126 | 1127 | MongoDB's aggregation framework is modeled on the concept of data processing pipelines. Documents enter a multi-stage pipeline that transforms the documents into an aggregated result. 1128 | 1129 | **Example:** 1130 | 1131 | ```js 1132 | db.orders.aggregate([ 1133 | { $match: { status: "A" } }, 1134 | { $group: { _id: "$cust_id", total: { $sum: "$amount" } } } 1135 | ]); 1136 | ``` 1137 | 1138 | The `$match` stage filters the documents by the status field and passes to the next stage those documents that have status equal to "A". The `$group` stage groups the documents by the cust_id field to calculate the sum of the amount for each unique cust_id. 1139 | 1140 | **Expressions used by the aggregate function:** 1141 | 1142 | |Expression |Description | 1143 | |------------|----------------------------------------------------------------------| 1144 | |$sum |Sums the defined values from all the documents in a collection| 1145 | |$avg |Calculates the average of values from all the documents in a collection| 1146 | |$min |Returns the minimum of all values of documents in a collection| 1147 | |$max |Returns the maximum of all values of documents in a collection| 1148 | |$addToSet |Inserts values into an array, with no duplicates, in the resulting document| 1149 | |$push |Inserts values into an array in the resulting document| 1150 | |$first |Returns the first document from the source documents| 1151 | |$last |Returns the last document from the source documents| 1152 | 1153 |
1154 | ↥ back to top 1155 |
1156 | 1157 | ## Q. Why are MongoDB data files large in size? 1158 | 1159 | MongoDB (with the MMAPv1 storage engine) preallocates data files to reserve space and avoid file system fragmentation when you set up the server. 1160 |
1162 | ↥ back to top 1163 |
1164 | 1165 | ## Q. How can you isolate your cursors from intervening write operations? 1166 | 1167 | Because a cursor is not isolated during its lifetime, intervening write operations on a document may result in a cursor that returns the document more than once. The `snapshot()` method can be used on a cursor to isolate the operation for this specific case. `snapshot()` traverses the index on the `_id` field and guarantees that the query will return each document no more than once. 1168 | 1169 | **Restrictions:** 1170 | 1171 | * We cannot use `snapshot()` with sharded collections. 1172 | * We cannot use `snapshot()` with the `sort()` or `hint()` cursor methods. 1173 |
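In legacy versions of the mongo shell (the `snapshot()` cursor method and the underlying `$snapshot` option were removed in MongoDB 4.0), the usage looked like this sketch, with `inventory` as an assumed collection:

```js
// guarantees each document is returned at most once,
// at the cost of traversing the _id index
db.inventory.find({ qty: { $gt: 10 } }).snapshot()
```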
1175 | ↥ back to top 1176 |
1177 | 1178 | ## Q. At what interval does MongoDB write updates to the disk? 1179 | 1180 | With the default configuration, MongoDB writes updates to the disk every 60 seconds. However, this is configurable with the `commitIntervalMs` and `syncPeriodSecs` options. 1181 | 1182 |
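For instance, the data-file flush interval can be read and changed at runtime through the `syncdelay` server parameter (60 seconds is the default; the 120-second value below is purely illustrative):

```js
// read the current flush interval
db.adminCommand({ getParameter: 1, syncdelay: 1 })

// change the interval between fsyncs of the data files to 120 seconds
db.adminCommand({ setParameter: 1, syncdelay: 120 })
```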
1183 | ↥ back to top 1184 |
1185 | 1186 | ## Q. What happens if an index does not fit into RAM? 1187 | 1188 | If an index does not fit into RAM, MongoDB must read it from disk, which is much slower than reading from RAM. 1189 | 1190 | Indexes do not have to fit entirely into RAM in all cases. If the value of the indexed field increments with every insert, and most queries select recently added documents, then MongoDB only needs to keep the parts of the index that hold the most recent or "right-most" values in RAM. This allows for efficient index use for read and write operations and minimizes the amount of RAM required to support the index. 1191 | 1192 | **Example:** To check the size of indexes 1193 | 1194 | ```js 1195 | > db.collection.totalIndexSize() 1196 | 1197 | // Output (in bytes) 1198 | 4294976499 1199 | ``` 1200 |
1202 | ↥ back to top 1203 |
1204 | 1205 | ## Q. Does MongoDB provide a facility to do text search? 1206 | 1207 | Yes. MongoDB supports query operations that perform a text search of string content. To perform a text search, MongoDB uses a `text index` and the `$text` operator. 1208 | 1209 | **Example:** 1210 | 1211 | Consider a `stores` collection with the following documents: 1212 | 1213 | ```js 1214 | db.stores.insert( 1215 | [ 1216 | { _id: 1, name: "Java Hut", description: "Coffee and cakes" }, 1217 | { _id: 2, name: "Burger Buns", description: "Gourmet hamburgers" }, 1218 | { _id: 3, name: "Coffee Shop", description: "Just coffee" }, 1219 | { _id: 4, name: "Clothes Clothes Clothes", description: "Discount clothing" }, 1220 | { _id: 5, name: "Java Shopping", description: "Indonesian goods" } 1221 | ] 1222 | ) 1223 | ``` 1224 | 1225 | **1. Text Index:** 1226 | 1227 | MongoDB provides `text indexes` to support text search queries on string content. `text indexes` can include any field whose value is a string or an array of string elements. 1228 | 1229 | ```js 1230 | db.stores.createIndex( { name: "text", description: "text" } ) 1231 | ``` 1232 | 1233 | **2. $text Operator:** 1234 | 1235 | Use the `$text` query operator to perform text searches on a collection with a text index. `$text` will tokenize the search string using whitespace and most punctuation as delimiters, and perform a logical OR of all such tokens in the search string. 1236 | 1237 | **Example:** 1238 | 1239 | ```js 1240 | // Returns all stores containing any terms from the list "coffee", "shop", and "java" 1241 | db.stores.find( { $text: { $search: "java coffee shop" } } ) 1242 | 1243 | 1244 | // Returns all documents containing "coffee shop" 1245 | db.stores.find( { $text: { $search: "\"coffee shop\"" } } ) 1246 | 1247 | 1248 | // Returns all stores containing "java" or "shop" but not "coffee" 1249 | db.stores.find( { $text: { $search: "java shop -coffee" } } ) 1250 | ``` 1251 |
1253 | ↥ back to top 1254 |
1255 | 1256 | ## Q. How does Journaling work in MongoDB? 1257 | 1258 | With journaling, mongod first applies write operations in memory. Two memory views are mapped to the same data: the **shared view**, which is memory-mapped to the actual data files on disk, and the **private view**, where incoming writes are applied first. After a specified interval, called the "journal commit interval", the operations accumulated in the private view are written to the journal files in the journal directory (residing on disk). 1259 | 1260 | Once the journal commit happens, mongod applies the same changes to the shared view, from which they are written to the actual data directory in the background. The basic advantage is durability: the window of potential data loss shrinks from the 60-second data-file flush cycle to the journal commit interval of roughly 100 milliseconds. 1261 | 1262 | If mongod crashes, or the disk flush fails to happen for the last 59 seconds of writes, then the next time mongod starts it replays the write operations recorded in the journal and applies them to the actual data directory. (This shared/private view mechanism describes the legacy MMAPv1 engine; WiredTiger achieves the same guarantee through a write-ahead log instead.) 1263 | 1264 |

1265 | Journaling 1266 |

1267 | 1268 |
1269 | ↥ back to top 1270 |
1271 | 1272 | ## Q. Is MongoDB schema-less? 1273 | 1274 | As a NoSQL database, MongoDB is considered schemaless because it does not require a rigid, pre-defined schema like a relational database. The database management system (DBMS) only enforces a partial schema as data is written, explicitly tracking collections and indexes rather than the shape of documents. 1275 | 1276 | MongoDB is a document-based database: instead of tables and columns it uses documents and collections, and all the related data for a module can be stored together in one collection. Moreover, the BSON data structure used by MongoDB can easily hold varying sets of fields with different types from one document to the next. 1277 | 1278 | When we say **schemaless**, we actually mean a **dynamically typed schema**, as opposed to the statically typed schemas of RDBMS (SQL) databases. JSON is a completely schema-free data structure, as opposed to XML, which lets you specify an XSD if you need one. 1279 | 1280 |
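**Example:** a minimal illustration of the dynamic schema, using a hypothetical `products` collection; the two documents have different fields and types, and both inserts succeed:

```js
db.products.insert({ name: "pen", price: 1.5 })
db.products.insert({ name: "book", pages: 120, tags: ["paper", "A5"] })

// No pre-defined schema is required
db.products.find()
```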
1281 | ↥ back to top 1282 |
1283 | 1284 | ## Q. What is a Storage Engine in MongoDB? 1285 | 1286 | The storage engine is the component of the database that is responsible for managing how data is stored, both in memory and on disk. MongoDB supports multiple storage engines, as different engines perform better for specific workloads. 1287 | 1288 | **Example:** command to find storage engine 1289 | 1290 | ```js 1291 | > db.serverStatus().storageEngine 1292 | 1293 | // Output 1294 | { 1295 | "name" : "wiredTiger", 1296 | "supportsCommittedReads" : true, 1297 | "oldestRequiredTimestampForCrashRecovery" : Timestamp(0, 0), 1298 | "supportsPendingDrops" : true, 1299 | "dropPendingIdents" : NumberLong(0), 1300 | "supportsTwoPhaseIndexBuild" : true, 1301 | "supportsSnapshotReadConcern" : true, 1302 | "readOnly" : false, 1303 | "persistent" : true, 1304 | "backupCursorOpen" : false 1305 | } 1306 | ``` 1307 | 1308 |

1309 | Storage Engine 1310 |

1311 | 1312 | MongoDB mainly supports 3 storage engines, whose performance differs for specific workloads. The storage engines are: 1313 | 1314 | * WiredTiger Storage Engine 1315 | * In-Memory Storage Engine 1316 | * ~~MMAPv1 Storage Engine~~ (this storage engine was deprecated in MongoDB 4.0 and removed in 4.2) 1317 | 1318 | **1. WiredTiger Storage Engine** 1319 | 1320 | `WiredTiger` is the default storage engine starting in MongoDB 3.2. It is well-suited for most workloads and is recommended for new deployments. WiredTiger provides a document-level concurrency model, checkpointing, and compression, among other features. The WiredTiger library itself can be configured either as a `B-Tree` based engine or as a `Log Structured Merge Tree` based engine. 1321 | 1322 | **2. In-Memory Storage Engine** 1323 | 1324 | The In-Memory Storage Engine is available in MongoDB Enterprise. Rather than storing documents on disk, it retains them in memory for more predictable data latencies. 1325 | 1326 | **3. MMAPv1 Storage Engine** 1327 | 1328 | MMAPv1 is a B-tree based engine that delegates much of the work of storage interaction and memory management to the operating system. Its name comes from the fact that it uses memory-mapped files to access data: file contents are loaded into virtual memory and modified directly via the `mmap()` system call. 1329 | 1330 |

1331 | Storage Engine 1332 |

1333 | 1334 |
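**Example:** the storage engine can be selected when starting `mongod` (a sketch; the data path is illustrative, and WiredTiger is already the default):

```bash
# Start mongod explicitly with the WiredTiger storage engine
mongod --storageEngine wiredTiger --dbpath /var/lib/mongodb
```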
1335 | ↥ back to top 1336 |
1337 | 1338 | ## Q. How to condense large volumes of data in MongoDB? 1339 | 1340 | **compact:** 1341 | 1342 | Rewrites and defragments all data and indexes in a collection. On `WiredTiger` databases, this command releases unneeded disk space to the operating system. This command performs the compaction "in-line", i.e. within the existing data files. 1343 | 1344 | MongoDB compacts the data by: 1345 | 1346 | * copying the files to a new location 1347 | * looping through the documents and re-ordering / re-saving them 1348 | * replacing the original files with the new files 1349 | 1350 | **Syntax:** 1351 | 1352 | ```js 1353 | { compact: <collection name> } 1354 | ``` 1355 | 1356 |
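**Example:** a minimal sketch of compacting a hypothetical `movies` collection; the command must be run against the database that owns the collection:

```js
// Rewrites and defragments the collection and its indexes
db.runCommand({ compact: "movies" })
```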
1357 | ↥ back to top 1358 |
1359 | 1360 | ## Q. Explain relationships in MongoDB? 1361 | 1362 | Relationships in MongoDB are used to specify how one or more documents are related to each other. In MongoDB, relationships can be modelled either with the Embedded approach or with the Referenced approach. These relationships can be of the following forms: 1363 | 1364 | * One to One 1365 | * One to Many 1366 | * Many to Many 1367 | 1368 | **Example:** Let us consider the case of storing addresses for users. One user can have multiple addresses, making this a `1:N` relationship. 1369 | 1370 | User Collection 1371 | 1372 | ```js 1373 | { 1374 | "_id":ObjectId("52ffc33cd85242f436000001"), 1375 | "name": "Alex K", 1376 | "contact": "987654321", 1377 | "dob": "01-01-1990" 1378 | } 1379 | ``` 1380 | 1381 | Address Collection 1382 | 1383 | ```js 1384 | { 1385 | "_id":ObjectId("52ffc4a5d85242602e000000"), 1386 | "building": "22 A, Indiana Apt", 1387 | "pincode": 123456, 1388 | "city": "Los Angeles", 1389 | "state": "California" 1390 | } 1391 | ``` 1392 | 1393 | **1. Modeling Embedded Relationships** 1394 | 1395 | In the embedded approach, we will embed the address documents inside the user document. 1396 | 1397 | ```js 1398 | > db.users.insert( 1399 | { 1400 | "_id":ObjectId("52ffc33cd85242f436000001"), 1401 | "contact": "987654321", 1402 | "dob": "01-01-1991", 1403 | "name": "Alex K", 1404 | "address": [ 1405 | { 1406 | "building": "22 A, Indiana Apt", 1407 | "pincode": 123456, 1408 | "city": "Los Angeles", 1409 | "state": "California" 1410 | }, 1411 | { 1412 | "building": "170 A, Acropolis Apt", 1413 | "pincode": 456789, 1414 | "city": "Chicago", 1415 | "state": "Illinois" 1416 | } 1417 | ] 1418 | } 1419 | ) 1420 | ``` 1421 | 1422 | This approach maintains all the related data in a single document, which makes it easy to retrieve and maintain. The whole document can be retrieved in a single query such as − 1423 | 1424 | ```js 1425 | >db.users.findOne({"name":"Alex K"},{"address":1}) 1426 | ``` 1427 | 1428 | The drawback is that if the embedded document keeps growing too much in size, it can impact read/write performance. 1429 | 1430 | **2. Modeling Referenced Relationships:** 1431 | 1432 | This is the approach of designing a normalized relationship. In this approach, both the user and address documents are maintained separately, but the user document contains a field that references the address document's id field. 1433 | 1434 | ```js 1435 | { 1436 | "_id":ObjectId("52ffc33cd85242f436000001"), 1437 | "contact": "987654321", 1438 | "dob": "01-01-1991", 1439 | "name": "Alex K", 1440 | "address_ids": [ 1441 | ObjectId("52ffc4a5d85242602e000000"), 1442 | ObjectId("52ffc4a5d85242602e000001") 1443 | ] 1444 | } 1445 | ``` 1446 | 1447 | With this approach, we need two queries: first to fetch the `address_ids` field from the user document, and second to fetch these addresses from the address collection. 1448 | 1449 | ```js 1450 | >var result = db.users.findOne({"name":"Alex K"},{"address_ids":1}) 1451 | >var addresses = db.address.find({"_id":{"$in":result["address_ids"]}}) 1452 | ``` 1453 | 1454 |
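With MongoDB 3.2+, the referenced addresses can also be resolved in a single round trip using a `$lookup` aggregation (a sketch based on the two collections above):

```js
db.users.aggregate([
  { $match: { "name": "Alex K" } },
  { $lookup: {
      from: "address",           // the referenced collection
      localField: "address_ids", // array of ObjectIds on the user document
      foreignField: "_id",
      as: "addresses"            // resolved address documents
  } }
])
```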
1455 | ↥ back to top 1456 |
1457 | 1458 | ## Q. What is the use of capped collections in MongoDB? 1459 | 1460 | **Capped collections** are fixed-size collections that support high-throughput operations that insert and retrieve documents based on insertion order. Capped collections work in a way similar to `circular buffers`: once a collection fills its allocated space, it makes room for new documents by overwriting the oldest documents in the collection. 1461 | 1462 | Capped collections restrict updates to documents if the update results in an increased document size. Because capped collections store documents in their insertion order on disk, this guarantees that a document never grows beyond the space originally allocated for it. Capped collections are best for storing log information, cache data, or any other high-volume data. 1463 | 1464 | **Example:** 1465 | 1466 | ```js 1467 | >db.createCollection( "log", { capped: true, size: 100000 } ) 1468 | 1469 | 1470 | // specify a maximum number of documents for the collection 1471 | >db.createCollection("log", { capped: true, size: 5242880, max: 5000 } ) 1472 | 1473 | 1474 | // check whether a collection is capped or not 1475 | >db.cappedLogCollection.isCapped() 1476 | 1477 | 1478 | // convert an existing collection to capped 1479 | >db.runCommand({"convertToCapped": "posts", size: 10000}) 1480 | 1481 | 1482 | // Querying Capped Collection 1483 | >db.cappedLogCollection.find().sort({$natural: -1}) 1484 | ``` 1485 | 1486 |
1487 | ↥ back to top 1488 |
1489 | 1490 | ## Q. What is splitting in MongoDB? 1491 | 1492 | Splitting is a process that keeps chunks from growing too large. When a chunk grows beyond the specified chunk size, or if the number of documents in the chunk exceeds the maximum number of documents per chunk to migrate, MongoDB splits the chunk based on the shard key values the chunk represents. 1493 | 1494 |

1495 | Splitting 1496 |
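The chunk size that triggers splitting can be tuned by updating the `settings` collection of the `config` database (a sketch; the value is in megabytes, and 64 MB is the traditional default):

```js
use config
db.settings.save({ _id: "chunksize", value: 64 })
```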

1497 | 1498 | ## Q. Explain horizontal scalability in MongoDB? 1499 | 1500 | Horizontal Scaling involves dividing the system dataset and load over multiple servers, adding additional servers to increase capacity as required. While the overall speed or capacity of a single machine may not be high, each machine handles a subset of the overall workload, potentially providing better efficiency than a single high-speed, high-capacity server. Expanding the capacity of the deployment only requires adding servers as needed, which can be a lower overall cost than high-end hardware for a single machine. The trade-off is increased complexity in infrastructure and maintenance for the deployment. 1501 | 1502 |

1503 | Horizontal vs Vertical Scaling 1504 |

1505 | 1506 |
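In MongoDB, horizontal scaling is implemented through sharding. **Example:** a minimal sketch (the database, collection, and shard key are hypothetical), run against a `mongos` of a sharded cluster:

```js
sh.enableSharding("mydb")
sh.shardCollection("mydb.users", { userId: "hashed" })
```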
1507 | ↥ back to top 1508 |
1509 | -------------------------------------------------------------------------------- /_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-cayman -------------------------------------------------------------------------------- /assets/MONGO_SE1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/MONGO_SE1.png -------------------------------------------------------------------------------- /assets/RDBMS_MongoDB_Mapping.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/RDBMS_MongoDB_Mapping.jpg -------------------------------------------------------------------------------- /assets/StorageEngine.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/StorageEngine.png -------------------------------------------------------------------------------- /assets/column-database.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/column-database.png -------------------------------------------------------------------------------- /assets/document-database.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/document-database.png -------------------------------------------------------------------------------- /assets/graph-database.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/graph-database.png -------------------------------------------------------------------------------- /assets/horizontal-vs-vertical-scaling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/horizontal-vs-vertical-scaling.png -------------------------------------------------------------------------------- /assets/journaling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/journaling.png -------------------------------------------------------------------------------- /assets/key-value-database.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/key-value-database.png -------------------------------------------------------------------------------- /assets/mongoSplitting.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /assets/mongodb-compass.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/mongodb-compass.png -------------------------------------------------------------------------------- /assets/objectid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/objectid.png -------------------------------------------------------------------------------- /assets/replica-set.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/replica-set.png -------------------------------------------------------------------------------- /assets/sharding.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/sharding.png -------------------------------------------------------------------------------- /assets/star.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/learning-zone/mongodb-basics/67bca84d46d32b8adff12544e4859daf51b3fc9f/assets/star.png -------------------------------------------------------------------------------- /mongodb-code.md: -------------------------------------------------------------------------------- 1 | # MongoDB Coding Practice 2 | 3 | ## Q. ***Mention the command to insert a document in a database called company and collection called employee?*** 4 | 5 | ```js 6 | use company; 7 | db.employee.insert( { name: "John", email: "john.k@gmail.com" } ) 8 | ``` 9 | 10 |
11 | ↥ back to top 12 |
13 | 14 | ## Q. ***Mention the command to check whether you are on the master server or not?*** 15 | 16 | ```js 17 | db.isMaster() 18 | ``` 19 | 20 |
21 | ↥ back to top 22 |
23 | 24 | ## Q. ***How to combine data from multiple collections into one collection?*** 25 | 26 | **$lookup:** 27 | 28 | Performs a left outer join to an unsharded collection in the same database to filter in documents from the “joined” collection for processing. To each input document, the `$lookup` stage adds a new array field whose elements are the matching documents from the “joined” collection. The `$lookup` stage passes these reshaped documents to the next stage. 29 | 30 | **Syntax:** 31 | 32 | ```js 33 | { 34 | $lookup: 35 | { 36 | from: <collection to join>, 37 | localField: <field from the input documents>, 38 | foreignField: <field from the documents of the "from" collection>, 39 | as: <output array field> 40 | } 41 | } 42 | ``` 43 | 44 |
45 | ↥ back to top 46 |
47 | 48 | ## Q. ***How do I perform the SQL JOIN equivalent in MongoDB?*** 49 | 50 | ```js 51 | // Sample Records 52 | 53 | comments 54 | { uid: 12345, pid: 444, comment: "blah" } 55 | { uid: 12345, pid: 888, comment: "asdf" } 56 | { uid: 99999, pid: 444, comment: "qwer" } 57 | 58 | users 59 | { uid: 12345, name: "john" } 60 | { uid: 99999, name: "mia" } 61 | ``` 62 | 63 | **Answer** 64 | 65 | ```js 66 | db.comments.aggregate([ 67 | { 68 | $lookup: { 69 | from: "users", 70 | localField: "uid", 71 | foreignField: "uid", 72 | as: "user" 73 | } 74 | } 75 | ]) 76 | ``` 77 | ## Q. ***How to query MongoDB with "like"?*** 78 | 79 | ```js 80 | db.users.insert({name: 'paulo'}) 81 | db.users.insert({name: 'patric'}) 82 | db.users.insert({name: 'pedro'}) 83 | 84 | db.users.find({name: /a/}) //like '%a%' 85 | db.users.find({name: /^pa/}) //like 'pa%' 86 | db.users.find({name: /ro$/}) //like '%ro' 87 | ``` 88 | 89 |
90 | ↥ back to top 91 |
92 | 93 | ## Q. ***Find objects between two dates MongoDB?*** 94 | 95 | The `$gte` and `$lt` operators are used to find objects between two dates in MongoDB. 96 | 97 | **Example**: Creating a collection 98 | 99 | ```js 100 | >db.order.insert({"OrderId":1,"OrderAddress":"US","OrderDateTime":ISODate("2020-02-19")}); 101 | WriteResult({ "nInserted" : 1 }) 102 | 103 | >db.order.insert({"OrderId":2,"OrderAddress":"UK","OrderDateTime":ISODate("2020-02-26")}); 104 | WriteResult({ "nInserted" : 1 }) 105 | ``` 106 | 107 | Display all documents from the collection using the `find()` method. 108 | 109 | ```js 110 | > db.order.find().pretty(); 111 | 112 | // Output 113 | { 114 | "_id" : ObjectId("5c6c072068174aae23f5ef57"), 115 | "OrderId" : 1, 116 | "OrderAddress" : "US", 117 | "OrderDateTime" : ISODate("2020-02-19T00:00:00Z") 118 | } 119 | { 120 | "_id" : ObjectId("5c6c073568174aae23f5ef58"), 121 | "OrderId" : 2, 122 | "OrderAddress" : "UK", 123 | "OrderDateTime" : ISODate("2020-02-26T00:00:00Z") 124 | } 125 | ``` 126 | 127 | Here is the query to find objects between two dates: 128 | 129 | ```js 130 | > db.order.find({"OrderDateTime":{ $gte:ISODate("2020-02-10"), $lt:ISODate("2020-02-21") } 131 | }).pretty(); 132 | 133 | 134 | // Output 135 | { 136 | "_id" : ObjectId("5c6c072068174aae23f5ef57"), 137 | "OrderId" : 1, 138 | "OrderAddress" : "US", 139 | "OrderDateTime" : ISODate("2020-02-19T00:00:00Z") 140 | } 141 | ``` 142 | 143 |
144 | ↥ back to top 145 |
146 | 147 | ## Q. ***Is it possible to update MongoDB field using value of another field?*** 148 | 149 | Yes. Starting in MongoDB 4.2, update operations can accept an aggregation pipeline, which makes it possible to set a field using the value of another field. 150 | 151 | **Example** 152 | 153 | ```js 154 | db.collection.updateMany( 155 | {}, 156 | [ 157 | {"$set": {"name": { "$concat": ["$firstName", " ", "$lastName"]}}} 158 | ] 159 | ) 160 | ``` 161 | 162 |
163 | ↥ back to top 164 |
165 | 166 | ## Q. ***How to check if a field contains a substring?*** 167 | 168 | The `$regex` operator can be used to check if a field contains a string in MongoDB. 169 | 170 | ```js 171 | db.users.findOne({"username" : {$regex : ".*some_string.*"}}); 172 | ``` 173 | 174 |
175 | ↥ back to top 176 |
177 | 178 | ## Q. ***How to get the last N records from find?*** 179 | 180 | ```js 181 | // Syntax 182 | db.<collection>.find().sort({$natural:-1}).limit(<value>) 183 | 184 | 185 | // Example 186 | db.employee.find().sort({$natural:-1}).limit(100) 187 | ``` 188 | 189 |
190 | ↥ back to top 191 |
192 | 193 | ## Q. ***How to remove a field completely from a MongoDB document?*** 194 | 195 | How do I remove words completely from all the documents in this collection? 196 | 197 | ```js 198 | { 199 | name: 'book', 200 | tags: { 201 | words: ['abc','123'], // <-- remove it completely 202 | lat: 33, 203 | long: 22 204 | } 205 | } 206 | ``` 207 | 208 | **Answer** 209 | 210 | ```js 211 | db.example.update({}, {$unset: {'tags.words': 1}}, false, true); // false = upsert, true = multi 212 | ``` 213 | 214 |
215 | ↥ back to top 216 |
217 | 218 | ## Q. ***How to find MongoDB records where array field is not empty?*** 219 | 220 | ```js 221 | db.inventory.find({ pictures: { $exists: true, $ne: [] } }) 222 | ``` 223 | 224 |
225 | ↥ back to top 226 |
227 | 228 | ## Q. ***How to find document with array that contains a specific value?*** 229 | 230 | Populate the inventory collection 231 | 232 | ```js 233 | db.inventory.insertMany([ 234 | { item: "journal", qty: 25, tags: ["blank", "red"], dim_cm: [ 14, 21 ] }, 235 | { item: "notebook", qty: 50, tags: ["red", "blank"], dim_cm: [ 14, 21 ] }, 236 | { item: "paper", qty: 100, tags: ["red", "blank", "plain"], dim_cm: [ 14, 21 ] }, 237 | { item: "planner", qty: 75, tags: ["blank", "red"], dim_cm: [ 22.85, 30 ] }, 238 | { item: "postcard", qty: 45, tags: ["blue"], dim_cm: [ 10, 15.25 ] } 239 | ]); 240 | ``` 241 | 242 | To query if the array field contains at least one element with the specified value, use the filter `{ <field>: <value> }` where `<value>` is the element value. 243 | 244 | ```js 245 | db.inventory.find( { tags: "red" } ) 246 | ``` 247 | 248 |
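To require several values in the same array at once, the `$all` operator can be combined with the sample data above:

```js
// Matches documents whose tags array contains BOTH "red" and "blank"
db.inventory.find({ tags: { $all: ["red", "blank"] } })
```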
249 | ↥ back to top 250 |
251 | 252 | ## Q. ***Mention the command to remove indexes and list all the indexes on a particular collection?*** 253 | 254 | **List all Indexes on a Collection** 255 | 256 | ```js 257 | // To view all indexes on the people collection 258 | 259 | db.people.getIndexes() 260 | ``` 261 | 262 | **List all Indexes for a Database** 263 | 264 | ```js 265 | // To list all the collection indexes in a database 266 | 267 | db.getCollectionNames().forEach(function(collection) { 268 | indexes = db[collection].getIndexes(); 269 | print("Indexes for " + collection + ":"); 270 | printjson(indexes); 271 | }); 272 | ``` 273 | 274 | **Remove Indexes** 275 | 276 | MongoDB provides two methods for removing indexes from a collection: 277 | 278 | * `db.collection.dropIndex()` 279 | * `db.collection.dropIndexes()` 280 | 281 | **1. Remove Specific Index** 282 | 283 | ```js 284 | db.accounts.dropIndex( { "tax-id": 1 } ) 285 | 286 | 287 | // Output 288 | { "nIndexesWas" : 3, "ok" : 1 } 289 | ``` 290 | 291 | **2. Remove All Indexes** 292 | 293 | ```js 294 | // The following command removes all indexes from the accounts collection 295 | 296 | db.accounts.dropIndexes() 297 | ``` 298 | 299 |
300 | ↥ back to top 301 |
302 | 303 | ## Q. ***Select only specific fields from a given collection?*** 304 | 305 | Consider a ```books``` collection with the following document: 306 | 307 | ```js 308 | { 309 | "_id" : 1, 310 | title: "abc123", 311 | isbn: "0001122223334", 312 | author: { last: "zzz", first: "aaa" }, 313 | copies: 5 314 | } 315 | ``` 316 | 317 | The following ```$project``` stage adds the new fields isbn, lastName, and copiesSold: 318 | 319 | ```js 320 | db.books.aggregate([ 321 | { 322 | $project : { 323 | title:1, 324 | isbn: { 325 | prefix: { $substr: [ "$isbn", 0, 3 ] }, 326 | group: { $substr: [ "$isbn", 3, 2 ] }, 327 | publisher: { $substr: [ "$isbn", 5, 4 ] }, 328 | title: { $substr: [ "$isbn", 9, 3 ] }, 329 | checkDigit: { $substr: [ "$isbn", 12, 1] } 330 | }, 331 | lastName: "$author.last", 332 | copiesSold: "$copies" 333 | } 334 | } 335 | ]) 336 | ``` 337 | 338 | The operation results in the following document: 339 | 340 | ```js 341 | { 342 | "_id" : 1, 343 | "title" : "abc123", 344 | "isbn" : { 345 | "prefix" : "000", 346 | "group" : "11", 347 | "publisher" : "2222", 348 | "title" : "333", 349 | "checkDigit" : "4" 350 | }, 351 | "lastName" : "zzz", 352 | "copiesSold" : 5 353 | } 354 | ``` 355 | 356 |
357 | ↥ back to top 358 |
359 | 360 | ## Q. ***How to join 3 or more collections in MongoDB?*** 361 | 362 | Let's say we have 3 hypothetical collections in MongoDB: customers, orders, and orderItems. 363 | 364 | Each customer has multiple orders, and each order has multiple order items. 365 | 366 | **Example:** 367 | 368 | ```js 369 | // customers 370 | [ 371 | { 372 | customer_id: 1, 373 | name: "Jim Smith", 374 | email: "jim.smith@example.com" 375 | }, 376 | { 377 | customer_id: 2, 378 | name: "Bob Jones", 379 | email: "bob.jones@example.com" 380 | } 381 | ] 382 | 383 | 384 | // orders 385 | [ 386 | { 387 | order_id: 1, 388 | customer_id: 1 389 | }, 390 | { 391 | order_id: 2, 392 | customer_id: 1 393 | } 394 | ] 395 | 396 | 397 | // orderItems 398 | [ 399 | { 400 | order_item_id: 1, 401 | name: "Foo", 402 | price: 4.99, 403 | order_id: 1 404 | }, 405 | { 406 | order_item_id: 2, 407 | name: "Bar", 408 | price: 17.99, 409 | order_id: 1 410 | }, 411 | { 412 | order_item_id: 3, 413 | name: "baz", 414 | price: 24.99, 415 | order_id: 2 416 | } 417 | ] 418 | ``` 419 | 420 | **Desired Result:** 421 | 422 | ```js 423 | [ 424 | { 425 | customer_id: 1, 426 | name: "Jim Smith", 427 | email: "jim.smith@example.com", 428 | orders: [ 429 | { 430 | order_id: 1, 431 | items: [ 432 | { 433 | name: "Foo", 434 | price: 4.99 435 | }, 436 | { 437 | name: "Bar", 438 | price: 17.99 439 | } 440 | ] 441 | }, 442 | { 443 | order_id: 2, 444 | items: [ 445 | { 446 | name: "baz", 447 | price: 24.99 448 | } 449 | ] 450 | } 451 | ] 452 | }, 453 | { 454 | customer_id: 2, 455 | name: "Bob Jones", 456 | email: "bob.jones@example.com", 457 | orders: [] 458 | } 459 | ] 460 | ``` 461 | 462 | ### Answer 463 | 464 | Do a nested lookup using `$lookup` with a pipeline: 465 | 466 | 1. ```$lookup``` with the orders collection. 467 | 2. ```let``` defines a variable customer_id taken from the main collection; it is referenced inside the pipeline using ```$$```, as in ```$$customer_id```. 468 | 3. ```pipeline``` can contain pipeline stages, just as we do in a root-level pipeline. 469 | 4. ```$expr``` is required whenever we match on internal fields; ```$$customer_id``` is the parent collection's field declared in let, and ```$customer_id``` is the child (current) collection's field. 470 | 5. ```$lookup``` with the orderitems collection. 471 | 472 | ```js 473 | db.customers.aggregate([ 474 | { 475 | $lookup: { 476 | from: "orders", 477 | let: { customer_id: "$customer_id" }, 478 | pipeline: [ 479 | { $match: { $expr: { $eq: ["$$customer_id", "$customer_id"] } } }, 480 | { 481 | $lookup: { 482 | from: "orderitems", 483 | localField: "order_id", 484 | foreignField: "order_id", 485 | as: "items" 486 | } 487 | } 488 | ], 489 | as: "orders" 490 | } 491 | } 492 | ]) 493 | ``` 494 | 495 |
496 | ↥ back to top 497 |
498 | 499 | ## Q. ***How to validate data in mongodb?*** 500 | 501 | `db.collection.validate()` validates a collection. The method scans a collection's data and indexes for correctness and returns the result. 502 | 503 | **Syntax:** 504 | 505 | ```js 506 | db.collection.validate( { 507 | full: <boolean>, // Optional 508 | repair: <boolean> // Optional, added in MongoDB 5.0 509 | } ) 510 | ``` 511 | 512 | **Example:** 513 | 514 | ```js 515 | // validate a collection using the default validation setting 516 | db.myCollection.validate({ }) 517 | 518 | // perform a full validation of the collection 519 | db.myCollection.validate( { full: true } ) 520 | 521 | // repair the collection 522 | db.myCollection.validate( { repair: true } ) 523 | ``` 524 | 525 |
526 | ↥ back to top 527 |
528 | -------------------------------------------------------------------------------- /mongodb-commands.md: -------------------------------------------------------------------------------- 1 | # MongoDB Commands 2 | 3 | ### Show All Databases 4 | 5 | ```js 6 | show dbs 7 | ``` 8 | 9 | ### Show Current Database 10 | 11 | ```js 12 | db 13 | ``` 14 | 15 | ### Create Or Switch Database 16 | 17 | ```js 18 | use acme 19 | ``` 20 | 21 | ### Drop 22 | 23 | ```js 24 | db.dropDatabase() 25 | ``` 26 | 27 | ### Create Collection 28 | 29 | ```js 30 | db.createCollection('posts') 31 | ``` 32 | 33 | ### Show Collections 34 | 35 | ```js 36 | show collections 37 | ``` 38 | 39 | ### Insert Row 40 | 41 | ```js 42 | db.posts.insert({ 43 | title: 'Post One', 44 | body: 'Body of post one', 45 | category: 'News', 46 | tags: ['news', 'events'], 47 | user: { 48 | name: 'John Doe', 49 | status: 'author' 50 | }, 51 | date: Date() 52 | }) 53 | ``` 54 | 55 | ### Insert Multiple Rows 56 | 57 | ```js 58 | db.posts.insertMany([ 59 | { 60 | title: 'Post Two', 61 | body: 'Body of post two', 62 | category: 'Technology', 63 | date: Date() 64 | }, 65 | { 66 | title: 'Post Three', 67 | body: 'Body of post three', 68 | category: 'News', 69 | date: Date() 70 | }, 71 | { 72 | title: 'Post Four', 73 | body: 'Body of post three', 74 | category: 'Entertainment', 75 | date: Date() 76 | } 77 | ]) 78 | ``` 79 | 80 | ### Get All Rows 81 | 82 | ```js 83 | db.posts.find() 84 | ``` 85 | 86 | ### Get All Rows Formatted 87 | 88 | ```js 89 | db.posts.find().pretty() 90 | ``` 91 | 92 | ### Find Rows 93 | 94 | ```js 95 | db.posts.find({ category: 'News' }) 96 | ``` 97 | 98 | ### Sort Rows 99 | 100 | ```js 101 | // asc 102 | db.posts.find().sort({ title: 1 }).pretty() 103 | // desc 104 | db.posts.find().sort({ title: -1 }).pretty() 105 | ``` 106 | 107 | ### Count Rows 108 | 109 | ```js 110 | db.posts.find().count() 111 | db.posts.find({ category: 'News' }).count() 112 | ``` 113 | 114 | ### Limit Rows 115 | 116 | ```js 117 | db.posts.find().limit(2).pretty() 118 | ``` 119 | 120 | ### Chaining 121 | 122 | ```js 123 | db.posts.find().limit(2).sort({ title: 1 }).pretty() 124 | ``` 125 | 126 | ### Foreach 127 | 128 | ```js 129 | db.posts.find().forEach(function(doc) { 130 | print("Blog Post: " + doc.title) 131 | }) 132 | ``` 133 | 134 | ### Find One Row 135 | 136 | ```js 137 | db.posts.findOne({ category: 'News' }) 138 | ``` 139 | 140 | ### Find Specific Fields 141 | 142 | ```js 143 | db.posts.find({ title: 'Post One' }, { 144 | title: 1, 145 | author: 1 146 | }) 147 | ``` 148 | 149 | ### Update Row 150 | 151 | ```js 152 | db.posts.update({ title: 'Post Two' }, 153 | { 154 | title: 'Post Two', 155 | body: 'New body for post 2', 156 | date: Date() 157 | }, 158 | { 159 | upsert: true 160 | }) 161 | ``` 162 | 163 | ### Update Specific Field 164 | 165 | ```js 166 | db.posts.update({ title: 'Post Two' }, 167 | { 168 | $set: { 169 | body: 'Body for post 2', 170 | category: 'Technology' 171 | } 172 | }) 173 | ``` 174 | 175 | ### Increment Field (\$inc) 176 | 177 | ```js 178 | db.posts.update({ title: 'Post Two' }, 179 | { 180 | $inc: { 181 | likes: 5 182 | } 183 | }) 184 | ``` 185 | 186 | ### Rename Field 187 | 188 | ```js 189 | db.posts.update({ title: 'Post Two' }, 190 | { 191 | $rename: { 192 | likes: 'views' 193 | } 194 | }) 195 | ``` 196 | 197 | ### Delete Row 198 | 199 | ```js 200 | db.posts.remove({ title: 'Post Four' }) 201 | ``` 202 | 203 | ### Sub-Documents 204 | 205 | ```js 206 | db.posts.update({ title: 'Post One' }, 207 | { 208 | $set: { 209 |
comments: [ 210 | { 211 | body: 'Comment One', 212 | user: 'Mary Williams', 213 | date: Date() 214 | }, 215 | { 216 | body: 'Comment Two', 217 | user: 'Harry White', 218 | date: Date() 219 | } 220 | ] 221 | } 222 | }) 223 | ``` 224 | 225 | ### Find By Element in Array (\$elemMatch) 226 | 227 | ```js 228 | db.posts.find({ 229 | comments: { 230 | $elemMatch: { 231 | user: 'Mary Williams' 232 | } 233 | } 234 | } 235 | ) 236 | ``` 237 | 238 | ### Add Index 239 | 240 | ```js 241 | db.posts.createIndex({ title: 'text' }) 242 | ``` 243 | 244 | ### Text Search 245 | 246 | ```js 247 | db.posts.find({ 248 | $text: { 249 | $search: "\"Post O\"" 250 | } 251 | }) 252 | ``` 253 | 254 | ### Greater & Less Than 255 | 256 | ```js 257 | db.posts.find({ views: { $gt: 2 } }) 258 | db.posts.find({ views: { $gte: 7 } }) 259 | db.posts.find({ views: { $lt: 7 } }) 260 | db.posts.find({ views: { $lte: 7 } }) 261 | ``` 262 | 263 | ### List of the indexes in a collection 264 | 265 | ```js 266 | db.restaurants.getIndexes() 267 | [ 268 | { 269 | "v" : 2, 270 | "key" : { 271 | "_id" : 1 272 | }, 273 | "name" : "_id_", 274 | "ns" : "test.restaurants" 275 | }, 276 | { 277 | "v" : 2, 278 | "key" : { 279 | "borough" : 1 280 | }, 281 | "name" : "borough_1", 282 | "ns" : "test.restaurants" 283 | } 284 | ] 285 | ``` 286 | 287 | ### Drop an existing index 288 | 289 | ```js 290 | db.restaurants.dropIndex("cuisine_1_grades.score_1") 291 | { "nIndexesWas" : 4, "ok" : 1 } 292 | ``` 293 | 294 | ### CreateView 295 | Syntax 296 | ```js 297 | db.createView(<viewName>, <source>, <pipeline>, <options>) 298 | ``` 299 | 300 | Example 301 | ```js 302 | db.createView('postsTitles', 'posts',[{$project:{title:1}}]); 303 | { "ok" : 1 } 304 | 305 | 306 | db.postsTitles.find(); 307 | ``` 308 | 309 | 310 | 311 | 312 |
313 | ↥ back to top 314 |
315 | 316 | -------------------------------------------------------------------------------- /nodemongodb/ReadMe.md: -------------------------------------------------------------------------------- 1 | # Illustrating mongodb with the node js run time 2 | ## What is mongodb 3 | * it is a 4 | - non relational database that 5 | - stores data in form of documents and collections instead of rows and columns 6 | 7 | ## How do we access mongodb 8 | * First you have to ensure that you have Mongodb installed on your machine if you develop locally 9 | * Or you can use mongodb atlas if your development is online 10 | * The first option is the easiest 11 | 12 | 13 | ## To use mongo db in Node Js 14 | - we use the mongoose package to 15 | - make the database connection by passing the database uri 16 | - make the queries to the database 17 | 18 | ## In this folder we are using the Node Js runtime to interact with mongodb 19 | 20 | # You can follow the steps of creating a node js application here 21 | - [Node js Basics](https://github.com/junrdev/nodejs-basics/edit/master/README.md) 22 | 23 | # After successfully creating the Node Js Application 24 | 25 | ## 1. Define the folders structure 26 | ```bash 27 | cd myapp 28 | mkdir models controllers routes utils 29 | ``` 30 | - the ``models`` folder will store all the js files that define the collection(s) of the application 31 | - the ``controllers`` folder will contain files that will have the business logic of the application 32 | - the ``utils`` folder will contain all the helper function files of our app; this includes the file that makes the connection to the database 33 | - the ``routes`` folder will contain all the routing done in the application 34 | 35 | ## 2. We require mongoose to interact with mongodb 36 | 37 | - use the following command to install mongoose 38 | ```bash 39 | npm install mongoose 40 | ``` 41 | 42 | ## 3. Define the database connection 43 | ```bash 44 | cd utils 45 | touch dbconfig.js 46 | ``` 47 | - inside this file we create a function that will help make the database connection and export it inside the module exports object 48 | - this function takes one parameter, the database connection uri 49 | 50 | - we use mongoose to make the connection, so we require it first 51 | 52 | ```js 53 | const mongoose = require('mongoose') 54 | ``` 55 | 56 | ```js 57 | module.exports = async (dburi) => { 58 | try { 59 | await mongoose.connect(dburi).then(() => { 60 | console.log(`Database connection successful`) 61 | }) 62 | } catch (e) { 63 | throw e 64 | } 65 | } 66 | ``` 67 | 68 | ## 4. Define the app's models 69 | - we define our collection object; first we move to the file 70 | ``` 71 | models/todo.js 72 | ``` 73 | - require the mongoose package 74 | ```js 75 | const mongoose = require('mongoose') 76 | ``` 77 | 78 | - define the mongoose collection 79 | ```js 80 | const todoSchema = mongoose.Schema({ 81 | 82 | title: String, 83 | 84 | content: String, 85 | 86 | isDone: { 87 | type: Boolean, 88 | default: false 89 | }, 90 | 91 | dateCreated: { 92 | type: Date, 93 | default: Date.now // pass the function itself, so the default is computed per document 94 | } 95 | }); 96 | ``` 97 | 98 | - we use the mongoose object to define the schema of our object (todo) 99 | - the fields that will be contained by the todo collection are defined as follows 100 | 101 | - Field with only the type 102 | ```js 103 | //name_of_field:type 104 | title : String 105 | ``` 106 | - Field with parameters. This is used when the object defines multiple limitations e.g.
length, default values etc. 107 | ```js 108 | name_of_field:{ 109 | //requirements of the object 110 | } 111 | 112 | title:{ 113 | type:String, 114 | required:true, 115 | lowercase:true 116 | } 117 | ``` 118 | - thereafter we export the model to use in other parts of the application 119 | - this command creates a collection using the model above 120 | ```js 121 | mongoose.model('Todo', todoSchema); 122 | ``` 123 | - then we export it inside the module exports object 124 | ```js 125 | module.exports=mongoose.model('Todo', todoSchema); 126 | ``` 127 | - we can now use our model in any part of the application 128 | 129 | ## 5. Define the Business logic 130 | - we are going to define the CRUD operations on top of the Todo model above. first move to the controllers folder 131 | - ```bash 132 | cd controllers 133 | ``` 134 | - open the crud.js file 135 | 136 | - we require the model to interact with mongodb 137 | ```js 138 | const Todo = require('../models/todo'); 139 | ``` 140 | - ## Create 141 | - This is a request to the database to create an object of a certain collection and persist it to the database. 142 | - first we receive the todo fields from the request body 143 | ```js 144 | const {title, content} = req.body 145 | ``` 146 | - using the `create` method we create a new document in the Todo collection 147 | ```js 148 | try { 149 | const todo = await Todo.create({ title, content }); 150 | 151 | res.status(201).json({ 152 | success: true, 153 | todo 154 | }) 155 | } catch (e) { 156 | throw e 157 | } 158 | ``` 159 | - we wrap the whole process in a try catch block since errors might occur while creating the document 160 | - we must await the create method since it might take some time to create and save the document 161 | - we then return a response to the user with a 201 status code to indicate that a resource has been created 162 | 163 | 164 | - ## Read 165 | - involves the client querying for documents 166 | * ## First we query for all the todos in the Todo collection 167 | - using the `find` method we fetch all the documents in a certain collection 168 | ```js 169 | const todos = await Todo.find(); 170 | ``` 171 | 172 | - this returns a list of all available todos 173 | - Note : We can pass filters inside the find function to fetch only documents that match certain criteria; for example, to fetch all todos with a specific title we could use: 174 | - ```js 175 | const todos = await Todo.find({title:'some title'}); 176 | ``` 177 | - example response 178 | ```json 179 | { 180 | "success": true, 181 | "todos": [ 182 | { 183 | "_id": "644cf9423dce0406956e3829", 184 | "isDone": false, 185 | "dateCreated": "2023-04-29T11:02:11.515Z", 186 | "__v": 0 187 | }, 188 | { 189 | "_id": "644cf98f2d1344e7eb8ecd31", 190 | "title": "hello", 191 | "content": "Hello world", 192 | "isDone": false, 193 | "dateCreated": "2023-04-29T11:03:35.022Z", 194 | "__v": 0 195 | } 196 | ] 197 | } 198 | ``` 199 | * ## Alternatively we can use the object id to fetch the object 200 | - When a new document is created a field ``` _id ``` is added to it.
201 | - we can use the contents of this field to access it 202 | - passing the id through the url param 203 | ```html 204 | /todos/{id} 205 | ``` 206 | - Inside the getTodoById function we receive the id 207 | 208 | ```js 209 | const {id} = req.params 210 | ``` 211 | - then we use the readily available method of the mongoose package to fetch the document with the specific id 212 | ```js 213 | try { 214 | const todo = await Todo.findById(id) 215 | 216 | if (!todo) 217 | res.status(404).json({ 218 | success: 'failed', 219 | message: `Todo with id ${id} not found` 220 | }); 221 | 222 | res.status(200).json({ 223 | success: true, 224 | todo 225 | }) 226 | } catch (e) { 227 | throw e 228 | } 229 | ``` 230 | 231 | - the response is a single json object, since the _id field is unique for every document (it acts like the primary key) 232 | - url 233 | ```html 234 | http://localhost:9009/todos/644cf94c3dce0406956e382b 235 | ``` 236 | - example response 237 | ```json 238 | { 239 | "success": true, 240 | "todo": { 241 | "_id": "644cf98f2d1344e7eb8ecd31", 242 | "title": "hello", 243 | "content": "Hello world", 244 | "isDone": false, 245 | "dateCreated": "2023-04-29T11:03:35.022Z", 246 | "__v": 0 247 | } 248 | } 249 | ``` 250 | 251 | - ## Update 252 | - Updating involves altering the fields of already existing document(s) 253 | - If we are updating a single document we require a unique identifier that distinguishes it from the other documents. 254 | - for example it could be the _id field, or a unique title for the todos 255 | - in this folder we use the _id field. we pass it through the request url 256 | ```html 257 | /todos/{id} 258 | ``` 259 | - first inside our update function we check that the todo we are trying to update actually exists 260 | ```js 261 | //fetching the id from the url 262 | const { id } = req.params 263 | 264 | //getting the todo for the provided id 265 | const todo = await Todo.findById(id); 266 | 267 | //returning an error message 268 | if (!todo) 269 | res.status(404).json({ 270 | success: 'failed', 271 | message: `Todo with id ${id} not found` 272 | 273 | }); 274 | ``` 275 | - if not we return a 404 Not Found response 276 | - if it exists we call another method to find and update the todo. We pass the updated fields 277 | ```js 278 | await Todo.findByIdAndUpdate(id, { isDone, title, content }); 279 | ``` 280 | and await it to finish 281 | - then we return a success response to the user 282 | ```js 283 | res.status(201).json({ 284 | success: true, 285 | message: "update successful" 286 | }) 287 | ``` 288 | - We can also use a method that updates all the documents in a collection 289 | ```js 290 | await Todo.updateMany({}, { isDone: true }) 291 | ``` 292 | - we can choose to pass filters to update only documents that fit a certain category 293 | ```js 294 | await Todo.updateMany({ isDone: false }, { isDone: true }) 295 | //updates all todos that are not yet done 296 | ``` 297 | - ## Delete 298 | - the process of deleting is quite similar to updating, only this time we are removing the document permanently from the collection 299 | - deleting involves removing existing document(s) from the collection 300 | - If we are deleting a single document we require a unique identifier that distinguishes it from the other documents. 301 | - for example it could be the _id field, or a unique title for the todos 302 | - we use the _id field.
we pass it through the request url 303 | ```html 304 | /todos/{id} 305 | ``` 306 | - first inside our delete function we check that the todo we are trying to delete actually exists 307 | ```js 308 | //fetching the id from the url 309 | const { id } = req.params 310 | 311 | //getting the todo for the provided id 312 | const todo = await Todo.findById(id); 313 | 314 | //returning an error message 315 | if (!todo) 316 | res.status(404).json({ 317 | success: 'failed', 318 | message: `Todo with id ${id} not found` 319 | 320 | }); 321 | ``` 322 | - if not we return a 404 Not Found response 323 | - if it exists we call another method to find and delete the todo. We pass the id of the document to remove 324 | ```js 325 | await Todo.findByIdAndDelete(id); 326 | ``` 327 | and await it to finish 328 | - then we return a success ``204`` (No Content) response to the user 329 | ```js 330 | // a 204 No Content response carries no body 331 | res.status(204).json({}) 332 | 333 | 334 | ``` 335 | - We can also use a method that deletes all the documents in a collection 336 | ```js 337 | await Todo.deleteMany({}) 338 | ``` 339 | - we can choose to pass filters to delete documents that fit a certain category 340 | ```js 341 | await Todo.deleteMany({isDone:true}) 342 | //deletes all todos that are marked done 343 | ``` 344 | ## 6. Routing 345 | - Routing is providing a way through which we can access the resources offered by our node js application 346 | - We define the routes in the `routes` folder 347 | - Open the file 348 | ```bash 349 | todo.js 350 | ``` 351 | - first we require the express package 352 | ```js 353 | const express = require('express'); 354 | ``` 355 | - we will use this to create our router. By typing the following 356 | ```js 357 | const router = express.Router(); 358 | ``` 359 | this creates a router on which we can define the ``Http`` Methods ``Get, Post, Patch, Put, Delete`` 360 | - we define the methods by using the router object in the following syntax 361 | ```js 362 | router.<method>('<url>', <middleware(s)>, <handler>) 363 | ``` 364 | - for example in our file we have the method to get all todos 365 | ```js 366 | router.get('/', getTodos) 367 | ``` 368 | - since we have no middlewares we call the getTodos function directly.
To access this function we import it from the controllers/crud.js file 369 | ```js 370 | const { 371 | createTodo, getTodos, getTodoById, updateTodo, deleteTodo 372 | } = require('../controllers/crud.js') 373 | ``` 374 | - this includes all the methods (business logic) related to the Todo model 375 | - Now that we have this we can do the Mappings of the routes to the related business logic function 376 | ```js 377 | router.get("/all", getTodos) 378 | .post("/new", createTodo) 379 | .get("/:id", getTodoById) 380 | .patch("/:id", updateTodo) 381 | .delete("/:id", deleteTodo) 382 | ``` 383 | - the urls that have **/:id** define that a parameter, id, is required 384 | 385 | - Finally we export this router object 386 | ```js 387 | module.exports = router; 388 | ``` 389 | ## Final Step Our server file 390 | - move to the index.js file which is in the root of our application structure 391 | ```bash 392 | index.js 393 | ``` 394 | - In here we had defined our node js server and now we add the mongodb functionality 395 | - First import the helper function from `./utils` to access the function to connect to the database 396 | ```js 397 | const dbConnect = require('./utils/dbconfig') 398 | ``` 399 | - then import the routes from the `./routes/todo.js` 400 | ```js 401 | const router = require('./routes/todo') 402 | ``` 403 | - create the database connection by passing the connection string 404 | - The connection string can be defined in the `environmental variables` in a ``.env`` file. This will require the following 405 | - installing the dotenv package 406 | ```bash 407 | npm install dotenv 408 | ``` 409 | - defining our environmental variables 410 | ```env 411 | uri=mongodb://localhost:27017/todos 412 | ``` 413 | - configuring the application to recognize the environmental variable. Add this to the `index.js` file 414 | ```js 415 | require('dotenv/config') 416 | ``` 417 | - getting the environmental variables from our application 418 | ```js 419 | const uri = process.env.uri 420 | 421 | //connecting to db 422 | dbConnect(uri) 423 | ``` 424 | - Or you can just type it statically 425 | ```js 426 | const uri = "mongodb://localhost:27017/todos" 427 | dbConnect(uri) 428 | ``` 429 | - After connecting to db.
Map the root url to the router from the routes folder 430 | ```js 431 | //import the routes 432 | const router = require('./routes/todo'); 433 | 434 | //mapping all the todo urls to the todo route 435 | app.use("/todos", router) 436 | 437 | ``` 438 | - Now the application can be accessed by typing the root url `http://localhost:9009/todos` followed by any extension as defined in the routes file 439 | - for example 440 | ```html 441 | GET http://localhost:9009/todos/all - gets all todos 442 | ``` 443 | 444 | ```html 445 | GET http://localhost:9009/todos/644cf94c3dce0406956e382b 446 | - gets a todo with id 644cf94c3dce0406956e382b 447 | ``` 448 | 449 | ``` 450 | POST http://localhost:9009/todos/new - creates a todo from request body 451 | ``` 452 | 453 | ```html 454 | PATCH http://localhost:9009/todos/644cf94c3dce0406956e382b 455 | - updates todo with id 644cf94c3dce0406956e382b 456 | ``` 457 | ``` 458 | DELETE http://localhost:9009/todos/644cf94c3dce0406956e382b 459 | - deletes todo with id 644cf94c3dce0406956e382b 460 | ``` 461 | 462 | - Run the application by typing 463 | ```bash 464 | npm start 465 | ``` 466 | ## This documentation is open to PRs :heart: -------------------------------------------------------------------------------- /nodemongodb/controllers/crud.js: -------------------------------------------------------------------------------- 1 | /** 2 | This file contains all the functions required to handle the user request and return a response 3 | */ 4 | 5 | const Todo = require('../models/todo') 6 | 7 | exports.createTodo = async (req, res, next) => { 8 | 9 | if(!req.body){ 10 | return next(Error("Invalid request body")) 11 | } 12 | 13 | const { title, content } = req.body 14 | 15 | if(!title || !content) 16 | return next(new Error("Provide title and content")) 17 | 18 | try{ 19 | 20 | const todo = await Todo.create({ title, content }); 21 | 22 | res.status(201).json({ 23 | success: true, 24 | todo 25 | }) 26 | }catch (e) { 27 | throw e 28 | } 29 | 30 | } 31 | 32 | exports.getTodos = async (req, res, next) => { 33 | 34 | const todos = await Todo.find(); 35 | 36 | res.status(200).json({ 37 | success: true, 38 | todos 39 | }) 40 | 41 | } 42 | 43 | exports.getTodoById = async (req, res, next) => { 44 | const { id } = req.params 45 | 46 | if(!id){ 47 | throw Error("Please provide an id") 48 | process.exit(1) 49 | } 50 | 51 | const todo = await Todo.findById(id) 52 | 53 | if (!todo){ 54 | res.status(404).json({ 55 | success: 'failed', 56 | message: `Todo with id ${id} not found` 57 | }); 58 | return; 59 | } 60 | 61 | res.status(200).json({ 62 | success: true, 63 | todo 64 | }) 65 | } 66 | 67 | exports.updateTodo = async (req, res, next) => { 68 | 69 | const { isDone } = req.body 70 | 71 | const { id } = req.params 72 | 73 | try { 74 | const todo = await Todo.findById(id) 75 | 76 | if (!todo) 77 | return res.status(404).json({ 78 | success: 'failed', 79 | message: `Todo with id ${id} not found` 80 | 81 | }); 82 | 83 | await Todo.findByIdAndUpdate(id, { isDone }); 84 | // Todo.findByIdAndDelete() 85 | res.status(201).json({ 86 | success: true, 87 | message: "update successful" 88 | }) 89 | } catch (e) { 90 | throw e 91 | } 92 | 93 | 94 | } 95 | 96 | exports.deleteTodo = async (req, res, next) => { 97 | 98 | const { id } = req.params 99 | 100 | try { 101 | const todo = await Todo.findById(id) 102 | 103 | if (!todo){ 104 | res.status(404).json({ 105 | success: 'failed', 106 | message: `Todo with id ${id} not found` 107 | 108 | }); 109 | return; 110 | } 111 | 112 | await Todo.findByIdAndDelete(id); 113
| 114 | res.status(204).json({}) 115 | 116 | } catch (e) { 117 | throw e 118 | } 119 | } -------------------------------------------------------------------------------- /nodemongodb/index.js: -------------------------------------------------------------------------------- 1 | const express = require('express'); 2 | const dbConnect = require('./utils/dbconfig'); 3 | const router = require('./routes/todo'); 4 | 5 | 6 | const app = express(); 7 | 8 | //json body parsing 9 | app.use(express.json()) 10 | 11 | 12 | app.get('/', (req,res)=>{ 13 | console.log(req.body); 14 | res.send({msg:"done"}) 15 | }) 16 | 17 | //to access the todo resources we postfix the root url with this 18 | app.use("/todos", router) 19 | 20 | //database connection 21 | dbConnect("mongodb://localhost:27017/todos" ) 22 | 23 | app.listen(process.env.PORT || 9009, ()=>{ 24 | console.log("Server started"); 25 | }) -------------------------------------------------------------------------------- /nodemongodb/models/todo.js: -------------------------------------------------------------------------------- 1 | /** 2 | 3 | In here we use mongoose to define the schema of our Todo collection 4 | 5 | As in relational databases, collections are like tables 6 | 7 | the fields inside the schema map to attributes of the Todo collection just like columns of a table in relational databases 8 | */ 9 | 10 | const mongoose = require('mongoose'); 11 | 12 | const todoSchema = mongoose.Schema({ 13 | 14 | title: String, 15 | 16 | content: String, 17 | 18 | isDone: { 19 | type: Boolean, 20 | default: false 21 | }, 22 | 23 | dateCreated: { 24 | type: Date, 25 | default: Date.now // pass the function itself so each new document gets its own timestamp 26 | } 27 | 28 | }); 29 | 30 | module.exports = mongoose.model('Todo', todoSchema); -------------------------------------------------------------------------------- /nodemongodb/package-lock.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "nodemongodb", 3 | "version": "1.0.0", 4 | "lockfileVersion": 3, 5 | "requires": true, 6 | "packages": { 7 | "": { 8 | "name": "nodemongodb", 9 | "version": "1.0.0", 10 | "license": "ISC", 11 | "dependencies": { 12 | "express": "^4.18.2", 13 | "mongoose": "^7.1.0" 14 | }, 15 | "devDependencies": { 16 | "morgan": "^1.10.0" 17 | } 18 | }, 19 | "node_modules/@types/node": { 20 | "version": "18.16.3", 21 | "resolved": "https://registry.npmjs.org/@types/node/-/node-18.16.3.tgz", 22 | "integrity": "sha512-OPs5WnnT1xkCBiuQrZA4+YAV4HEJejmHneyraIaxsbev5yCEr6KMwINNFP9wQeFIw8FWcoTqF3vQsa5CDaI+8Q==" 23 | }, 24 | "node_modules/@types/webidl-conversions": { 25 | "version": "7.0.0", 26 | "resolved": "https://registry.npmjs.org/@types/webidl-conversions/-/webidl-conversions-7.0.0.tgz", 27 | "integrity": "sha512-xTE1E+YF4aWPJJeUzaZI5DRntlkY3+BCVJi0axFptnjGmAoWxkyREIh/XMrfxVLejwQxMCfDXdICo0VLxThrog==" 28 | }, 29 | "node_modules/@types/whatwg-url": { 30 | "version": "8.2.2", 31 | "resolved": "https://registry.npmjs.org/@types/whatwg-url/-/whatwg-url-8.2.2.tgz", 32 | "integrity": "sha512-FtQu10RWgn3D9U4aazdwIE2yzphmTJREDqNdODHrbrZmmMqI0vMheC/6NE/J1Yveaj8H+ela+YwWTjq5PGmuhA==", 33 | "dependencies": { 34 | "@types/node": "*", 35 | "@types/webidl-conversions": "*" 36 | } 37 | }, 38 | "node_modules/accepts": { 39 | "version": "1.3.8", 40 | "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz", 41 | "integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==", 42 | "dependencies": { 43 | "mime-types": "~2.1.34", 44 | "negotiator": "0.6.3" 45 | }, 46
| "engines": { 47 | "node": ">= 0.6" 48 | } 49 | }, 50 | "node_modules/array-flatten": { 51 | "version": "1.1.1", 52 | "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz", 53 | "integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==" 54 | }, 55 | "node_modules/basic-auth": { 56 | "version": "2.0.1", 57 | "resolved": "https://registry.npmjs.org/basic-auth/-/basic-auth-2.0.1.tgz", 58 | "integrity": "sha512-NF+epuEdnUYVlGuhaxbbq+dvJttwLnGY+YixlXlME5KpQ5W3CnXA5cVTneY3SPbPDRkcjMbifrwmFYcClgOZeg==", 59 | "dev": true, 60 | "dependencies": { 61 | "safe-buffer": "5.1.2" 62 | }, 63 | "engines": { 64 | "node": ">= 0.8" 65 | } 66 | }, 67 | "node_modules/basic-auth/node_modules/safe-buffer": { 68 | "version": "5.1.2", 69 | "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", 70 | "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", 71 | "dev": true 72 | }, 73 | "node_modules/body-parser": { 74 | "version": "1.20.1", 75 | "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.1.tgz", 76 | "integrity": "sha512-jWi7abTbYwajOytWCQc37VulmWiRae5RyTpaCyDcS5/lMdtwSz5lOpDE67srw/HYe35f1z3fDQw+3txg7gNtWw==", 77 | "dependencies": { 78 | "bytes": "3.1.2", 79 | "content-type": "~1.0.4", 80 | "debug": "2.6.9", 81 | "depd": "2.0.0", 82 | "destroy": "1.2.0", 83 | "http-errors": "2.0.0", 84 | "iconv-lite": "0.4.24", 85 | "on-finished": "2.4.1", 86 | "qs": "6.11.0", 87 | "raw-body": "2.5.1", 88 | "type-is": "~1.6.18", 89 | "unpipe": "1.0.0" 90 | }, 91 | "engines": { 92 | "node": ">= 0.8", 93 | "npm": "1.2.8000 || >= 1.4.16" 94 | } 95 | }, 96 | "node_modules/bson": { 97 | "version": "5.2.0", 98 | "resolved": "https://registry.npmjs.org/bson/-/bson-5.2.0.tgz", 99 | "integrity": "sha512-HevkSpDbpUfsrHWmWiAsNavANKYIErV2ePXllp1bwq5CDreAaFVj6RVlZpJnxK4WWDCJ/5jMUpaY6G526q3Hjg==", 100 | "engines": { 101 | "node": ">=14.20.1" 102 | } 103 | }, 104 | "node_modules/bytes": { 105 | "version": "3.1.2", 106 | "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", 107 | "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", 108 | "engines": { 109 | "node": ">= 0.8" 110 | } 111 | }, 112 | "node_modules/call-bind": { 113 | "version": "1.0.2", 114 | "resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz", 115 | "integrity": "sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==", 116 | "dependencies": { 117 | "function-bind": "^1.1.1", 118 | "get-intrinsic": "^1.0.2" 119 | }, 120 | "funding": { 121 | "url": "https://github.com/sponsors/ljharb" 122 | } 123 | }, 124 | "node_modules/content-disposition": { 125 | "version": "0.5.4", 126 | "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz", 127 | "integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==", 128 | "dependencies": { 129 | "safe-buffer": "5.2.1" 130 | }, 131 | "engines": { 132 | "node": ">= 0.6" 133 | } 134 | }, 135 | "node_modules/content-type": { 136 | "version": "1.0.5", 137 | "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", 138 | "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", 139 | "engines": { 140 | "node": ">= 0.6" 141 | } 142 | }, 143 | "node_modules/cookie": { 144 | "version": 
"0.5.0", 145 | "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.5.0.tgz", 146 | "integrity": "sha512-YZ3GUyn/o8gfKJlnlX7g7xq4gyO6OSuhGPKaaGssGB2qgDUS0gPgtTvoyZLTt9Ab6dC4hfc9dV5arkvc/OCmrw==", 147 | "engines": { 148 | "node": ">= 0.6" 149 | } 150 | }, 151 | "node_modules/cookie-signature": { 152 | "version": "1.0.6", 153 | "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", 154 | "integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==" 155 | }, 156 | "node_modules/debug": { 157 | "version": "2.6.9", 158 | "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", 159 | "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", 160 | "dependencies": { 161 | "ms": "2.0.0" 162 | } 163 | }, 164 | "node_modules/depd": { 165 | "version": "2.0.0", 166 | "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", 167 | "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", 168 | "engines": { 169 | "node": ">= 0.8" 170 | } 171 | }, 172 | "node_modules/destroy": { 173 | "version": "1.2.0", 174 | "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", 175 | "integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==", 176 | "engines": { 177 | "node": ">= 0.8", 178 | "npm": "1.2.8000 || >= 1.4.16" 179 | } 180 | }, 181 | "node_modules/ee-first": { 182 | "version": "1.1.1", 183 | "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", 184 | "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==" 185 | }, 186 | "node_modules/encodeurl": { 187 | "version": "1.0.2", 188 | "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", 189 | "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==", 190 | "engines": { 191 | "node": ">= 0.8" 192 | } 193 | }, 194 | "node_modules/escape-html": { 195 | "version": "1.0.3", 196 | "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", 197 | "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==" 198 | }, 199 | "node_modules/etag": { 200 | "version": "1.8.1", 201 | "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", 202 | "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", 203 | "engines": { 204 | "node": ">= 0.6" 205 | } 206 | }, 207 | "node_modules/express": { 208 | "version": "4.18.2", 209 | "resolved": "https://registry.npmjs.org/express/-/express-4.18.2.tgz", 210 | "integrity": "sha512-5/PsL6iGPdfQ/lKM1UuielYgv3BUoJfz1aUwU9vHZ+J7gyvwdQXFEBIEIaxeGf0GIcreATNyBExtalisDbuMqQ==", 211 | "dependencies": { 212 | "accepts": "~1.3.8", 213 | "array-flatten": "1.1.1", 214 | "body-parser": "1.20.1", 215 | "content-disposition": "0.5.4", 216 | "content-type": "~1.0.4", 217 | "cookie": "0.5.0", 218 | "cookie-signature": "1.0.6", 219 | "debug": "2.6.9", 220 | "depd": "2.0.0", 221 | "encodeurl": "~1.0.2", 222 | "escape-html": "~1.0.3", 223 | "etag": "~1.8.1", 224 | "finalhandler": "1.2.0", 225 | "fresh": "0.5.2", 226 | "http-errors": "2.0.0", 227 | "merge-descriptors": "1.0.1", 228 | "methods": "~1.1.2", 229 | "on-finished": "2.4.1", 230 | "parseurl": "~1.3.3", 231 | "path-to-regexp": "0.1.7", 232 | "proxy-addr": 
"~2.0.7", 233 | "qs": "6.11.0", 234 | "range-parser": "~1.2.1", 235 | "safe-buffer": "5.2.1", 236 | "send": "0.18.0", 237 | "serve-static": "1.15.0", 238 | "setprototypeof": "1.2.0", 239 | "statuses": "2.0.1", 240 | "type-is": "~1.6.18", 241 | "utils-merge": "1.0.1", 242 | "vary": "~1.1.2" 243 | }, 244 | "engines": { 245 | "node": ">= 0.10.0" 246 | } 247 | }, 248 | "node_modules/finalhandler": { 249 | "version": "1.2.0", 250 | "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.2.0.tgz", 251 | "integrity": "sha512-5uXcUVftlQMFnWC9qu/svkWv3GTd2PfUhK/3PLkYNAe7FbqJMt3515HaxE6eRL74GdsriiwujiawdaB1BpEISg==", 252 | "dependencies": { 253 | "debug": "2.6.9", 254 | "encodeurl": "~1.0.2", 255 | "escape-html": "~1.0.3", 256 | "on-finished": "2.4.1", 257 | "parseurl": "~1.3.3", 258 | "statuses": "2.0.1", 259 | "unpipe": "~1.0.0" 260 | }, 261 | "engines": { 262 | "node": ">= 0.8" 263 | } 264 | }, 265 | "node_modules/forwarded": { 266 | "version": "0.2.0", 267 | "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", 268 | "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", 269 | "engines": { 270 | "node": ">= 0.6" 271 | } 272 | }, 273 | "node_modules/fresh": { 274 | "version": "0.5.2", 275 | "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", 276 | "integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", 277 | "engines": { 278 | "node": ">= 0.6" 279 | } 280 | }, 281 | "node_modules/function-bind": { 282 | "version": "1.1.1", 283 | "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz", 284 | "integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==" 285 | }, 286 | "node_modules/get-intrinsic": { 287 | "version": "1.2.0", 288 | "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.0.tgz", 289 | "integrity": "sha512-L049y6nFOuom5wGyRc3/gdTLO94dySVKRACj1RmJZBQXlbTMhtNIgkWkUHq+jYmZvKf14EW1EoJnnjbmoHij0Q==", 290 | "dependencies": { 291 | "function-bind": "^1.1.1", 292 | "has": "^1.0.3", 293 | "has-symbols": "^1.0.3" 294 | }, 295 | "funding": { 296 | "url": "https://github.com/sponsors/ljharb" 297 | } 298 | }, 299 | "node_modules/has": { 300 | "version": "1.0.3", 301 | "resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz", 302 | "integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==", 303 | "dependencies": { 304 | "function-bind": "^1.1.1" 305 | }, 306 | "engines": { 307 | "node": ">= 0.4.0" 308 | } 309 | }, 310 | "node_modules/has-symbols": { 311 | "version": "1.0.3", 312 | "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.3.tgz", 313 | "integrity": "sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==", 314 | "engines": { 315 | "node": ">= 0.4" 316 | }, 317 | "funding": { 318 | "url": "https://github.com/sponsors/ljharb" 319 | } 320 | }, 321 | "node_modules/http-errors": { 322 | "version": "2.0.0", 323 | "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", 324 | "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", 325 | "dependencies": { 326 | "depd": "2.0.0", 327 | "inherits": "2.0.4", 328 | "setprototypeof": "1.2.0", 329 | "statuses": "2.0.1", 330 | "toidentifier": "1.0.1" 331 | }, 332 | "engines": { 333 | "node": ">= 0.8" 334 | } 335 | }, 
336 | "node_modules/iconv-lite": { 337 | "version": "0.4.24", 338 | "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", 339 | "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", 340 | "dependencies": { 341 | "safer-buffer": ">= 2.1.2 < 3" 342 | }, 343 | "engines": { 344 | "node": ">=0.10.0" 345 | } 346 | }, 347 | "node_modules/inherits": { 348 | "version": "2.0.4", 349 | "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", 350 | "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==" 351 | }, 352 | "node_modules/ip": { 353 | "version": "2.0.0", 354 | "resolved": "https://registry.npmjs.org/ip/-/ip-2.0.0.tgz", 355 | "integrity": "sha512-WKa+XuLG1A1R0UWhl2+1XQSi+fZWMsYKffMZTTYsiZaUD8k2yDAj5atimTUD2TZkyCkNEeYE5NhFZmupOGtjYQ==" 356 | }, 357 | "node_modules/ipaddr.js": { 358 | "version": "1.9.1", 359 | "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", 360 | "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", 361 | "engines": { 362 | "node": ">= 0.10" 363 | } 364 | }, 365 | "node_modules/kareem": { 366 | "version": "2.5.1", 367 | "resolved": "https://registry.npmjs.org/kareem/-/kareem-2.5.1.tgz", 368 | "integrity": "sha512-7jFxRVm+jD+rkq3kY0iZDJfsO2/t4BBPeEb2qKn2lR/9KhuksYk5hxzfRYWMPV8P/x2d0kHD306YyWLzjjH+uA==", 369 | "engines": { 370 | "node": ">=12.0.0" 371 | } 372 | }, 373 | "node_modules/media-typer": { 374 | "version": "0.3.0", 375 | "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", 376 | "integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==", 377 | "engines": { 378 | "node": ">= 0.6" 379 | } 380 | }, 381 | "node_modules/memory-pager": { 382 | "version": "1.5.0", 383 | "resolved": "https://registry.npmjs.org/memory-pager/-/memory-pager-1.5.0.tgz", 384 | "integrity": "sha512-ZS4Bp4r/Zoeq6+NLJpP+0Zzm0pR8whtGPf1XExKLJBAczGMnSi3It14OiNCStjQjM6NU1okjQGSxgEZN8eBYKg==", 385 | "optional": true 386 | }, 387 | "node_modules/merge-descriptors": { 388 | "version": "1.0.1", 389 | "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz", 390 | "integrity": "sha512-cCi6g3/Zr1iqQi6ySbseM1Xvooa98N0w31jzUYrXPX2xqObmFGHJ0tQ5u74H3mVh7wLouTseZyYIq39g8cNp1w==" 391 | }, 392 | "node_modules/methods": { 393 | "version": "1.1.2", 394 | "resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz", 395 | "integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==", 396 | "engines": { 397 | "node": ">= 0.6" 398 | } 399 | }, 400 | "node_modules/mime": { 401 | "version": "1.6.0", 402 | "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", 403 | "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", 404 | "bin": { 405 | "mime": "cli.js" 406 | }, 407 | "engines": { 408 | "node": ">=4" 409 | } 410 | }, 411 | "node_modules/mime-db": { 412 | "version": "1.52.0", 413 | "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", 414 | "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", 415 | "engines": { 416 | "node": ">= 0.6" 417 | } 418 | }, 419 | "node_modules/mime-types": { 420 | "version": "2.1.35", 421 | "resolved": 
"https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", 422 | "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", 423 | "dependencies": { 424 | "mime-db": "1.52.0" 425 | }, 426 | "engines": { 427 | "node": ">= 0.6" 428 | } 429 | }, 430 | "node_modules/mongodb": { 431 | "version": "5.3.0", 432 | "resolved": "https://registry.npmjs.org/mongodb/-/mongodb-5.3.0.tgz", 433 | "integrity": "sha512-Wy/sbahguL8c3TXQWXmuBabiLD+iVmz+tOgQf+FwkCjhUIorqbAxRbbz00g4ZoN4sXIPwpAlTANMaGRjGGTikQ==", 434 | "dependencies": { 435 | "bson": "^5.2.0", 436 | "mongodb-connection-string-url": "^2.6.0", 437 | "socks": "^2.7.1" 438 | }, 439 | "engines": { 440 | "node": ">=14.20.1" 441 | }, 442 | "optionalDependencies": { 443 | "saslprep": "^1.0.3" 444 | }, 445 | "peerDependencies": { 446 | "@aws-sdk/credential-providers": "^3.201.0", 447 | "mongodb-client-encryption": ">=2.3.0 <3", 448 | "snappy": "^7.2.2" 449 | }, 450 | "peerDependenciesMeta": { 451 | "@aws-sdk/credential-providers": { 452 | "optional": true 453 | }, 454 | "mongodb-client-encryption": { 455 | "optional": true 456 | }, 457 | "snappy": { 458 | "optional": true 459 | } 460 | } 461 | }, 462 | "node_modules/mongodb-connection-string-url": { 463 | "version": "2.6.0", 464 | "resolved": "https://registry.npmjs.org/mongodb-connection-string-url/-/mongodb-connection-string-url-2.6.0.tgz", 465 | "integrity": "sha512-WvTZlI9ab0QYtTYnuMLgobULWhokRjtC7db9LtcVfJ+Hsnyr5eo6ZtNAt3Ly24XZScGMelOcGtm7lSn0332tPQ==", 466 | "dependencies": { 467 | "@types/whatwg-url": "^8.2.1", 468 | "whatwg-url": "^11.0.0" 469 | } 470 | }, 471 | "node_modules/mongoose": { 472 | "version": "7.1.0", 473 | "resolved": "https://registry.npmjs.org/mongoose/-/mongoose-7.1.0.tgz", 474 | "integrity": "sha512-shoo9z/7o96Ojx69wpJn65+EC+Mt3q1SWTducW+F2Y4ieCXo0lZwpCZedgC841MIvJ7V8o6gmzoN1NfcnOTOuw==", 475 | "dependencies": { 476 | "bson": "^5.2.0", 477 | "kareem": "2.5.1", 478 | "mongodb": "5.3.0", 479 | "mpath": "0.9.0", 480 | "mquery": "5.0.0", 481 | "ms": "2.1.3", 482 | "sift": "16.0.1" 483 | }, 484 | "engines": { 485 | "node": ">=14.0.0" 486 | }, 487 | "funding": { 488 | "type": "opencollective", 489 | "url": "https://opencollective.com/mongoose" 490 | } 491 | }, 492 | "node_modules/mongoose/node_modules/ms": { 493 | "version": "2.1.3", 494 | "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", 495 | "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==" 496 | }, 497 | "node_modules/morgan": { 498 | "version": "1.10.0", 499 | "resolved": "https://registry.npmjs.org/morgan/-/morgan-1.10.0.tgz", 500 | "integrity": "sha512-AbegBVI4sh6El+1gNwvD5YIck7nSA36weD7xvIxG4in80j/UoK8AEGaWnnz8v1GxonMCltmlNs5ZKbGvl9b1XQ==", 501 | "dev": true, 502 | "dependencies": { 503 | "basic-auth": "~2.0.1", 504 | "debug": "2.6.9", 505 | "depd": "~2.0.0", 506 | "on-finished": "~2.3.0", 507 | "on-headers": "~1.0.2" 508 | }, 509 | "engines": { 510 | "node": ">= 0.8.0" 511 | } 512 | }, 513 | "node_modules/morgan/node_modules/on-finished": { 514 | "version": "2.3.0", 515 | "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz", 516 | "integrity": "sha512-ikqdkGAAyf/X/gPhXGvfgAytDZtDbr+bkNUJ0N9h5MI/dmdgCs3l6hoHrcUv41sRKew3jIwrp4qQDXiK99Utww==", 517 | "dev": true, 518 | "dependencies": { 519 | "ee-first": "1.1.1" 520 | }, 521 | "engines": { 522 | "node": ">= 0.8" 523 | } 524 | }, 525 | "node_modules/mpath": { 526 | "version": "0.9.0", 527 | "resolved": 
"https://registry.npmjs.org/mpath/-/mpath-0.9.0.tgz", 528 | "integrity": "sha512-ikJRQTk8hw5DEoFVxHG1Gn9T/xcjtdnOKIU1JTmGjZZlg9LST2mBLmcX3/ICIbgJydT2GOc15RnNy5mHmzfSew==", 529 | "engines": { 530 | "node": ">=4.0.0" 531 | } 532 | }, 533 | "node_modules/mquery": { 534 | "version": "5.0.0", 535 | "resolved": "https://registry.npmjs.org/mquery/-/mquery-5.0.0.tgz", 536 | "integrity": "sha512-iQMncpmEK8R8ncT8HJGsGc9Dsp8xcgYMVSbs5jgnm1lFHTZqMJTUWTDx1LBO8+mK3tPNZWFLBghQEIOULSTHZg==", 537 | "dependencies": { 538 | "debug": "4.x" 539 | }, 540 | "engines": { 541 | "node": ">=14.0.0" 542 | } 543 | }, 544 | "node_modules/mquery/node_modules/debug": { 545 | "version": "4.3.4", 546 | "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz", 547 | "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==", 548 | "dependencies": { 549 | "ms": "2.1.2" 550 | }, 551 | "engines": { 552 | "node": ">=6.0" 553 | }, 554 | "peerDependenciesMeta": { 555 | "supports-color": { 556 | "optional": true 557 | } 558 | } 559 | }, 560 | "node_modules/mquery/node_modules/ms": { 561 | "version": "2.1.2", 562 | "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", 563 | "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==" 564 | }, 565 | "node_modules/ms": { 566 | "version": "2.0.0", 567 | "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", 568 | "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==" 569 | }, 570 | "node_modules/negotiator": { 571 | "version": "0.6.3", 572 | "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz", 573 | "integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==", 574 | "engines": { 575 | "node": ">= 0.6" 576 | } 577 | }, 578 | "node_modules/object-inspect": { 579 | "version": "1.12.3", 580 | "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.3.tgz", 581 | "integrity": "sha512-geUvdk7c+eizMNUDkRpW1wJwgfOiOeHbxBR/hLXK1aT6zmVSO0jsQcs7fj6MGw89jC/cjGfLcNOrtMYtGqm81g==", 582 | "funding": { 583 | "url": "https://github.com/sponsors/ljharb" 584 | } 585 | }, 586 | "node_modules/on-finished": { 587 | "version": "2.4.1", 588 | "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", 589 | "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", 590 | "dependencies": { 591 | "ee-first": "1.1.1" 592 | }, 593 | "engines": { 594 | "node": ">= 0.8" 595 | } 596 | }, 597 | "node_modules/on-headers": { 598 | "version": "1.0.2", 599 | "resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.2.tgz", 600 | "integrity": "sha512-pZAE+FJLoyITytdqK0U5s+FIpjN0JP3OzFi/u8Rx+EV5/W+JTWGXG8xFzevE7AjBfDqHv/8vL8qQsIhHnqRkrA==", 601 | "dev": true, 602 | "engines": { 603 | "node": ">= 0.8" 604 | } 605 | }, 606 | "node_modules/parseurl": { 607 | "version": "1.3.3", 608 | "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", 609 | "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", 610 | "engines": { 611 | "node": ">= 0.8" 612 | } 613 | }, 614 | "node_modules/path-to-regexp": { 615 | "version": "0.1.7", 616 | "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz", 617 | "integrity": 
"sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ==" 618 | }, 619 | "node_modules/proxy-addr": { 620 | "version": "2.0.7", 621 | "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", 622 | "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", 623 | "dependencies": { 624 | "forwarded": "0.2.0", 625 | "ipaddr.js": "1.9.1" 626 | }, 627 | "engines": { 628 | "node": ">= 0.10" 629 | } 630 | }, 631 | "node_modules/punycode": { 632 | "version": "2.3.0", 633 | "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.0.tgz", 634 | "integrity": "sha512-rRV+zQD8tVFys26lAGR9WUuS4iUAngJScM+ZRSKtvl5tKeZ2t5bvdNFdNHBW9FWR4guGHlgmsZ1G7BSm2wTbuA==", 635 | "engines": { 636 | "node": ">=6" 637 | } 638 | }, 639 | "node_modules/qs": { 640 | "version": "6.11.0", 641 | "resolved": "https://registry.npmjs.org/qs/-/qs-6.11.0.tgz", 642 | "integrity": "sha512-MvjoMCJwEarSbUYk5O+nmoSzSutSsTwF85zcHPQ9OrlFoZOYIjaqBAJIqIXjptyD5vThxGq52Xu/MaJzRkIk4Q==", 643 | "dependencies": { 644 | "side-channel": "^1.0.4" 645 | }, 646 | "engines": { 647 | "node": ">=0.6" 648 | }, 649 | "funding": { 650 | "url": "https://github.com/sponsors/ljharb" 651 | } 652 | }, 653 | "node_modules/range-parser": { 654 | "version": "1.2.1", 655 | "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", 656 | "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", 657 | "engines": { 658 | "node": ">= 0.6" 659 | } 660 | }, 661 | "node_modules/raw-body": { 662 | "version": "2.5.1", 663 | "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.1.tgz", 664 | "integrity": "sha512-qqJBtEyVgS0ZmPGdCFPWJ3FreoqvG4MVQln/kCgF7Olq95IbOp0/BWyMwbdtn4VTvkM8Y7khCQ2Xgk/tcrCXig==", 665 | "dependencies": { 666 | "bytes": "3.1.2", 667 | "http-errors": "2.0.0", 668 | "iconv-lite": "0.4.24", 669 | "unpipe": "1.0.0" 670 | }, 671 | "engines": { 672 | "node": ">= 0.8" 673 | } 674 | }, 675 | "node_modules/safe-buffer": { 676 | "version": "5.2.1", 677 | "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", 678 | "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", 679 | "funding": [ 680 | { 681 | "type": "github", 682 | "url": "https://github.com/sponsors/feross" 683 | }, 684 | { 685 | "type": "patreon", 686 | "url": "https://www.patreon.com/feross" 687 | }, 688 | { 689 | "type": "consulting", 690 | "url": "https://feross.org/support" 691 | } 692 | ] 693 | }, 694 | "node_modules/safer-buffer": { 695 | "version": "2.1.2", 696 | "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", 697 | "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" 698 | }, 699 | "node_modules/saslprep": { 700 | "version": "1.0.3", 701 | "resolved": "https://registry.npmjs.org/saslprep/-/saslprep-1.0.3.tgz", 702 | "integrity": "sha512-/MY/PEMbk2SuY5sScONwhUDsV2p77Znkb/q3nSVstq/yQzYJOH/Azh29p9oJLsl3LnQwSvZDKagDGBsBwSooag==", 703 | "optional": true, 704 | "dependencies": { 705 | "sparse-bitfield": "^3.0.3" 706 | }, 707 | "engines": { 708 | "node": ">=6" 709 | } 710 | }, 711 | "node_modules/send": { 712 | "version": "0.18.0", 713 | "resolved": "https://registry.npmjs.org/send/-/send-0.18.0.tgz", 714 | "integrity": "sha512-qqWzuOjSFOuqPjFe4NOsMLafToQQwBSOEpS+FwEt3A2V3vKubTquT3vmLTQpFgMXp8AlFWFuP1qKaJZOtPpVXg==", 715 | 
"dependencies": { 716 | "debug": "2.6.9", 717 | "depd": "2.0.0", 718 | "destroy": "1.2.0", 719 | "encodeurl": "~1.0.2", 720 | "escape-html": "~1.0.3", 721 | "etag": "~1.8.1", 722 | "fresh": "0.5.2", 723 | "http-errors": "2.0.0", 724 | "mime": "1.6.0", 725 | "ms": "2.1.3", 726 | "on-finished": "2.4.1", 727 | "range-parser": "~1.2.1", 728 | "statuses": "2.0.1" 729 | }, 730 | "engines": { 731 | "node": ">= 0.8.0" 732 | } 733 | }, 734 | "node_modules/send/node_modules/ms": { 735 | "version": "2.1.3", 736 | "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", 737 | "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==" 738 | }, 739 | "node_modules/serve-static": { 740 | "version": "1.15.0", 741 | "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.15.0.tgz", 742 | "integrity": "sha512-XGuRDNjXUijsUL0vl6nSD7cwURuzEgglbOaFuZM9g3kwDXOWVTck0jLzjPzGD+TazWbboZYu52/9/XPdUgne9g==", 743 | "dependencies": { 744 | "encodeurl": "~1.0.2", 745 | "escape-html": "~1.0.3", 746 | "parseurl": "~1.3.3", 747 | "send": "0.18.0" 748 | }, 749 | "engines": { 750 | "node": ">= 0.8.0" 751 | } 752 | }, 753 | "node_modules/setprototypeof": { 754 | "version": "1.2.0", 755 | "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", 756 | "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==" 757 | }, 758 | "node_modules/side-channel": { 759 | "version": "1.0.4", 760 | "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.4.tgz", 761 | "integrity": "sha512-q5XPytqFEIKHkGdiMIrY10mvLRvnQh42/+GoBlFW3b2LXLE2xxJpZFdm94we0BaoV3RwJyGqg5wS7epxTv0Zvw==", 762 | "dependencies": { 763 | "call-bind": "^1.0.0", 764 | "get-intrinsic": "^1.0.2", 765 | "object-inspect": "^1.9.0" 766 | }, 767 | "funding": { 768 | "url": "https://github.com/sponsors/ljharb" 769 | } 770 | }, 771 | "node_modules/sift": { 772 | "version": "16.0.1", 773 | "resolved": "https://registry.npmjs.org/sift/-/sift-16.0.1.tgz", 774 | "integrity": "sha512-Wv6BjQ5zbhW7VFefWusVP33T/EM0vYikCaQ2qR8yULbsilAT8/wQaXvuQ3ptGLpoKx+lihJE3y2UTgKDyyNHZQ==" 775 | }, 776 | "node_modules/smart-buffer": { 777 | "version": "4.2.0", 778 | "resolved": "https://registry.npmjs.org/smart-buffer/-/smart-buffer-4.2.0.tgz", 779 | "integrity": "sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWLEs+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg==", 780 | "engines": { 781 | "node": ">= 6.0.0", 782 | "npm": ">= 3.0.0" 783 | } 784 | }, 785 | "node_modules/socks": { 786 | "version": "2.7.1", 787 | "resolved": "https://registry.npmjs.org/socks/-/socks-2.7.1.tgz", 788 | "integrity": "sha512-7maUZy1N7uo6+WVEX6psASxtNlKaNVMlGQKkG/63nEDdLOWNbiUMoLK7X4uYoLhQstau72mLgfEWcXcwsaHbYQ==", 789 | "dependencies": { 790 | "ip": "^2.0.0", 791 | "smart-buffer": "^4.2.0" 792 | }, 793 | "engines": { 794 | "node": ">= 10.13.0", 795 | "npm": ">= 3.0.0" 796 | } 797 | }, 798 | "node_modules/sparse-bitfield": { 799 | "version": "3.0.3", 800 | "resolved": "https://registry.npmjs.org/sparse-bitfield/-/sparse-bitfield-3.0.3.tgz", 801 | "integrity": "sha512-kvzhi7vqKTfkh0PZU+2D2PIllw2ymqJKujUcyPMd9Y75Nv4nPbGJZXNhxsgdQab2BmlDct1YnfQCguEvHr7VsQ==", 802 | "optional": true, 803 | "dependencies": { 804 | "memory-pager": "^1.0.2" 805 | } 806 | }, 807 | "node_modules/statuses": { 808 | "version": "2.0.1", 809 | "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", 810 | "integrity": 
"sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", 811 | "engines": { 812 | "node": ">= 0.8" 813 | } 814 | }, 815 | "node_modules/toidentifier": { 816 | "version": "1.0.1", 817 | "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", 818 | "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", 819 | "engines": { 820 | "node": ">=0.6" 821 | } 822 | }, 823 | "node_modules/tr46": { 824 | "version": "3.0.0", 825 | "resolved": "https://registry.npmjs.org/tr46/-/tr46-3.0.0.tgz", 826 | "integrity": "sha512-l7FvfAHlcmulp8kr+flpQZmVwtu7nfRV7NZujtN0OqES8EL4O4e0qqzL0DC5gAvx/ZC/9lk6rhcUwYvkBnBnYA==", 827 | "dependencies": { 828 | "punycode": "^2.1.1" 829 | }, 830 | "engines": { 831 | "node": ">=12" 832 | } 833 | }, 834 | "node_modules/type-is": { 835 | "version": "1.6.18", 836 | "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", 837 | "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", 838 | "dependencies": { 839 | "media-typer": "0.3.0", 840 | "mime-types": "~2.1.24" 841 | }, 842 | "engines": { 843 | "node": ">= 0.6" 844 | } 845 | }, 846 | "node_modules/unpipe": { 847 | "version": "1.0.0", 848 | "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", 849 | "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", 850 | "engines": { 851 | "node": ">= 0.8" 852 | } 853 | }, 854 | "node_modules/utils-merge": { 855 | "version": "1.0.1", 856 | "resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz", 857 | "integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==", 858 | "engines": { 859 | "node": ">= 0.4.0" 860 | } 861 | }, 862 | "node_modules/vary": { 863 | "version": "1.1.2", 864 | "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", 865 | "integrity": "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==", 866 | "engines": { 867 | "node": ">= 0.8" 868 | } 869 | }, 870 | "node_modules/webidl-conversions": { 871 | "version": "7.0.0", 872 | "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-7.0.0.tgz", 873 | "integrity": "sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==", 874 | "engines": { 875 | "node": ">=12" 876 | } 877 | }, 878 | "node_modules/whatwg-url": { 879 | "version": "11.0.0", 880 | "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-11.0.0.tgz", 881 | "integrity": "sha512-RKT8HExMpoYx4igMiVMY83lN6UeITKJlBQ+vR/8ZJ8OCdSiN3RwCq+9gH0+Xzj0+5IrM6i4j/6LuvzbZIQgEcQ==", 882 | "dependencies": { 883 | "tr46": "^3.0.0", 884 | "webidl-conversions": "^7.0.0" 885 | }, 886 | "engines": { 887 | "node": ">=12" 888 | } 889 | } 890 | } 891 | } 892 | -------------------------------------------------------------------------------- /nodemongodb/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "nodemongodb", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1", 8 | "start": "node index.js" 9 | }, 10 | "author": "", 11 | "license": "ISC", 12 | "dependencies": { 13 | "express": "^4.18.2", 14 | "mongoose": "^7.1.0" 15 | }, 16 | "devDependencies": { 17 | "morgan": "^1.10.0" 18 | } 19 | 
} 20 | -------------------------------------------------------------------------------- /nodemongodb/routes/todo.js: --------------------------------------------------------------------------------
1 | const express = require('express');
2 | const { getTodos, createTodo, getTodoById, updateTodo, deleteTodo } = require('../controllers/crud');
3 | 
4 | const router = express.Router();
5 | 
6 | // note: /all is registered before /:id so that a GET for "all" is matched
7 | // by the static route and not treated as an id parameter
8 | router.get("/all", getTodos)
9 |     .post("/new", createTodo)
10 |     .get("/:id", getTodoById)
11 |     .patch("/:id", updateTodo)
12 |     .delete("/:id", deleteTodo)
13 | 
14 | module.exports = router; -------------------------------------------------------------------------------- /nodemongodb/utils/dbconfig.js: --------------------------------------------------------------------------------
1 | /**
2 |  * This file is used to make the database connection.
3 |  *
4 |  * We are using MongoDB, so we use the mongoose library to create the connection.
5 |  */
6 | 
7 | const mongoose = require('mongoose');
8 | 
9 | /**
10 |  * We export the db connection function so that we may use it in other files.
11 |  * It takes a single parameter, which is the connection string.
12 |  */
13 | module.exports = async (dburi) => {
14 |     // await the promise directly (mixing await with .then() was redundant);
15 |     // connection errors propagate to the caller, which handles them in index.js
16 |     await mongoose.connect(dburi);
17 |     console.log("Database connection successful");
18 | } --------------------------------------------------------------------------------
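
**Example:** the `Todo` model exported from `models/todo.js` can also be used outside Express. The following standalone script is a minimal sketch, not part of the repo: it assumes a local MongoDB instance reachable at the same `mongodb://localhost:27017/todos` URI used in `index.js`, and the file name `try-model.js` is hypothetical.

```js
// try-model.js (hypothetical): exercising the Todo model directly,
// assuming MongoDB is running locally as in index.js.
const mongoose = require('mongoose');
const Todo = require('./models/todo');

async function main() {
    await mongoose.connect('mongodb://localhost:27017/todos');

    // isDone and dateCreated are filled in by the schema defaults
    const todo = await Todo.create({ title: 'Learn MongoDB', content: 'Work through mongodb-commands.md' });
    console.log(todo._id.toString(), todo.isDone, todo.dateCreated);

    // equivalent to db.todos.find({ isDone: false }) in the mongo shell
    const pending = await Todo.find({ isDone: false });
    console.log(`${pending.length} pending todo(s)`);

    await mongoose.disconnect();
}

main().catch(console.error);
```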
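
**Example:** with the server running (`npm start` inside `nodemongodb/`), the routes defined in `routes/todo.js` can be smoke-tested over HTTP. This sketch is not part of the repo; it assumes Node 18+ (for the built-in `fetch`), the default port `9009` from `index.js`, and that `createTodo` in `controllers/crud.js` responds with the created document as JSON — that controller is only partially shown here, so the response shapes are assumptions.

```js
// smoke-test.js (hypothetical): end-to-end check of the todos API.
const BASE = `http://localhost:${process.env.PORT || 9009}/todos`;

async function main() {
    // POST /todos/new — body fields match the schema in models/todo.js;
    // assumes the controller echoes back the created document as JSON
    const created = await fetch(`${BASE}/new`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ title: 'Ship it', content: 'Smoke-test the API' }),
    }).then((res) => res.json());

    // GET /todos/all — assuming an array response
    const all = await fetch(`${BASE}/all`).then((res) => res.json());
    console.log('todos in collection:', all.length);

    // PATCH /todos/:id — mark the new todo as done
    await fetch(`${BASE}/${created._id}`, {
        method: 'PATCH',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ isDone: true }),
    });

    // DELETE /todos/:id — crud.js replies with 204 No Content
    const res = await fetch(`${BASE}/${created._id}`, { method: 'DELETE' });
    console.log('delete status:', res.status); // expect 204
}

main().catch(console.error);
```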