├── .gitignore ├── Discovering-MongoDB.md ├── LICENSE ├── Learning-Trail.md ├── README.md ├── course-m101p ├── .gitignore ├── README.md ├── hw1-1.184820ec29b6.zip ├── hw1-2.21394489c9ad.py ├── hw1-3.e594fc84d4ac.py ├── hw2-1-answer.md ├── hw2-1-grades.ef42a2b3e7ff.json ├── hw2-2-answer.md ├── hw2-2-remove.py ├── hw2-3-answer.md ├── hw2-3-userDAO.py ├── hw2-3.4a405a23b442.zip ├── hw3-1-answer.md ├── hw3-1-remove.py ├── hw3-1-students.e7ed0a289cbe.json ├── hw3-2-answer.md ├── hw3-2-blogPostDAO.py ├── hw3-2and3-3.cb3a025ac81c.zip ├── hw3-3-answer.md ├── hw3-3-blogPostDAO.py ├── hw4-1-answer.md ├── hw4-2-answer.md ├── hw4-3-answer.md ├── hw4-3.f798e22df86d.zip ├── hw4-4-answer.md ├── hw4-4-sysprofile.acfbb9617420.json ├── hw5-3-grades.json ├── hw5-4-zips.4854d69c2ac3.json ├── week1-introduction.md ├── week2-crud.md ├── week3-schema.md ├── week4-performance.md ├── week5-aggregation-fw.md ├── week5-handout01-simple-example-data-products.ec1bc22f28be.js ├── week5-handout01-simple-example.4cb11c82cac4.js ├── week5-handout02-compound-grouping.31358db44867.js ├── week5-handout02-compoung-grouping-simple-example1.d2b6d05536bc.js ├── week5-handout03-using-sum-quiz-data-zips.json.zip ├── week5-handout03-using-sum-quiz-sum-by-state.357e89fd2088.js ├── week5-handout03-using-sum.de06d4138610.js ├── week5-handout04-using-avg.e128056b622a.js ├── week5-handout05-using-addToSet.a535135c35d5.js ├── week5-handout06-using-push.61b93aea9929.js ├── week5-handout07-using-max.0a8c88c4f6f7.js ├── week5-handout08-double-group.d75135079baa.js ├── week5-handout08-single-group.a2dcedc60ceb.js ├── week5-handout09-using-project-quiz.e4476d90db89.js ├── week5-handout09-using-project-reshape-products.51551839bd7a.js ├── week5-handout10-match-and-group.1b9ed10759ea.js ├── week5-handout10-match-group-and-project.19245fa529df.js ├── week5-handout10-match.deb14a3cf1ca.js ├── week5-handout11-sort.fd13909073dd.js ├── week5-handout12-skip-and-limit.6ece6f722d9b.js ├── 
week5-handout13-first-phase1.a83bcb182633.js ├── week5-handout13-first-phase2.281349fd65a4.js ├── week5-handout13-first-phase3.37e2f560bc6b.js ├── week5-handout13-first.bdf958d63359.js ├── week5-handout14-unwind-blog-tags.995fc80195d0.js ├── week5-handout14-unwind-quiz.2332e562e2ef.js ├── week5-handout15-double-unwind.97e478dd0b7c.js ├── week5-handout15-reversing-double-unwind.71fe17935216.js └── week5-handout15-reversing-double-unwind2.ca75fafef175.js └── webinar-build-an-app ├── .gitignore ├── 20140130-getting-started.md ├── 20140206-build-app-part1-getting-started.md ├── 20140206-build-app-part1-getting-started ├── Capture d’écran 2014-02-06 à 16.14.41.png ├── Capture d’écran 2014-02-06 à 16.37.38.png ├── Capture d’écran 2014-02-06 à 16.38.00.png ├── Capture d’écran 2014-02-06 à 16.50.37.png └── Capture d’écran 2014-02-06 à 16.53.31.png ├── 20140220-build-app-part2-interacting-database.md └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | *.sublime-* 2 | -------------------------------------------------------------------------------- /Discovering-MongoDB.md: -------------------------------------------------------------------------------- 1 | # Discovering MongoDB 2 | 3 | ### Documentation 4 | 5 | * [MongoDB 2.4 Manual](http://docs.mongodb.org) 6 | * [MongoDB Quick Reference Cards](https://www.mongodb.com/reference) 7 | 8 | ### Courses and Tutorials 9 | 10 | * [Course M101P «MongoDB for developers»](../tree/master/course-m101p) 11 | * [Webinar «Building an application with MongoDB»](../tree/master/webinar-build-an-app) 12 | 13 | ### Channels 14 | 15 | * [Geneva MongoDB User Group](http://genevamug.ch) 16 | * [MongoDB User Google Group](https://groups.google.com/forum/#!forum/mongodb-user) 17 | * Twitter: [#MongoDBBasics](https://twitter.com/search?q=%23MongoDBBasics) [@MongoDB](https://twitter.com/MongoDB) 18 | 19 | ### Articles 20 | 21 | * [Call me maybe: 
MongoDB](http://aphyr.com/posts/284-call-me-maybe-mongodb) 22 | * [MongoDB schema design pitfalls](https://blog.serverdensity.com/mongodb-schema-design-pitfalls/) 23 | * [Analyze Performance of Database Operations](http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/) enable with `db.setProfilingLevel(2)` and query with `db.system.profile.find( { millis : { $gt : 5 } } ).pretty()` for instance; `show profile` displays the five most recent events that took more than 1ms 24 | * [Measuring time and analyzing a query: explain()](http://docs.mongodb.org/manual/reference/method/cursor.explain/) `db.collection.find().explain().millis` gives the time taken by a given query 25 | * [Ways to implement data versioning in MongoDB](http://stackoverflow.com/questions/4185105/ways-to-implement-data-versioning-in-mongodb) StackOverflow 26 | * [Atomic Counters using MongoDB’s findAndModify with PHP](http://chemicaloliver.net/programming/atomic-counters-using-mongodbs-findandmodify-with-php/) 27 | * [How to apply automatic versioning to a C# class](http://stackoverflow.com/questions/20351698/how-to-apply-automatic-versioning-to-a-c-sharp-class) 28 | * [Data Models](http://docs.mongodb.org/manual/data-modeling) 29 | * [Use Case: Metadata and Asset Management](http://docs.mongodb.org/ecosystem/use-cases/metadata-and-asset-management/) describes the design and pattern of a content management system using MongoDB modeled on the popular Drupal CMS 30 | * [Use Case: Storing comments](http://docs.mongodb.org/ecosystem/use-cases/storing-comments/) outlines the basic patterns for storing user-submitted comments in a content management system (CMS). 
31 | * [Use Case: Storing Log Data](http://docs.mongodb.org/ecosystem/use-cases/storing-log-data/) outlines the basic patterns and principles for using MongoDB as a persistent storage engine for log data from servers and other machine data 32 | * [Use Case: Pre-Aggregated Reports](http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/) outlines the basic patterns and principles for using MongoDB as an engine for collecting and processing events in real time, for use in generating up-to-the-minute or up-to-the-second reports 33 | 34 | ### Some basic facts 35 | 36 | * Schema-less 37 | * Collections of Documents 38 | * A single JavaScript call will retrieve a document and all of its nested content; getting into nested content involves further operations 39 | * JSON output (data stored as BSON) 40 | * Object IDs: always 12 bytes, composed of a timestamp, client machine ID, client process ID, and a 3-byte incremented counter; this autonumbering scheme means that each process on every machine can handle its own ID generation without colliding with other _mongod_ instances (source: «7 databases in 7 weeks», p.138) 41 | * JavaScript for queries and manipulation 42 | * No server-side joins 43 | * Map-reduce queries 44 | 45 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Eclipse Public License - v 1.0 2 | 3 | THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC 4 | LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM 5 | CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. 6 | 7 | 1. 
DEFINITIONS 8 | 9 | "Contribution" means: 10 | 11 | a) in the case of the initial Contributor, the initial code and documentation 12 | distributed under this Agreement, and 13 | b) in the case of each subsequent Contributor: 14 | i) changes to the Program, and 15 | ii) additions to the Program; 16 | 17 | where such changes and/or additions to the Program originate from and are 18 | distributed by that particular Contributor. A Contribution 'originates' from 19 | a Contributor if it was added to the Program by such Contributor itself or 20 | anyone acting on such Contributor's behalf. Contributions do not include 21 | additions to the Program which: (i) are separate modules of software 22 | distributed in conjunction with the Program under their own license 23 | agreement, and (ii) are not derivative works of the Program. 24 | 25 | "Contributor" means any person or entity that distributes the Program. 26 | 27 | "Licensed Patents" mean patent claims licensable by a Contributor which are 28 | necessarily infringed by the use or sale of its Contribution alone or when 29 | combined with the Program. 30 | 31 | "Program" means the Contributions distributed in accordance with this Agreement. 32 | 33 | "Recipient" means anyone who receives the Program under this Agreement, 34 | including all Contributors. 35 | 36 | 2. GRANT OF RIGHTS 37 | a) Subject to the terms of this Agreement, each Contributor hereby grants 38 | Recipient a non-exclusive, worldwide, royalty-free copyright license to 39 | reproduce, prepare derivative works of, publicly display, publicly perform, 40 | distribute and sublicense the Contribution of such Contributor, if any, and 41 | such derivative works, in source code and object code form. 
42 | b) Subject to the terms of this Agreement, each Contributor hereby grants 43 | Recipient a non-exclusive, worldwide, royalty-free patent license under 44 | Licensed Patents to make, use, sell, offer to sell, import and otherwise 45 | transfer the Contribution of such Contributor, if any, in source code and 46 | object code form. This patent license shall apply to the combination of the 47 | Contribution and the Program if, at the time the Contribution is added by 48 | the Contributor, such addition of the Contribution causes such combination 49 | to be covered by the Licensed Patents. The patent license shall not apply 50 | to any other combinations which include the Contribution. No hardware per 51 | se is licensed hereunder. 52 | c) Recipient understands that although each Contributor grants the licenses to 53 | its Contributions set forth herein, no assurances are provided by any 54 | Contributor that the Program does not infringe the patent or other 55 | intellectual property rights of any other entity. Each Contributor 56 | disclaims any liability to Recipient for claims brought by any other entity 57 | based on infringement of intellectual property rights or otherwise. As a 58 | condition to exercising the rights and licenses granted hereunder, each 59 | Recipient hereby assumes sole responsibility to secure any other 60 | intellectual property rights needed, if any. For example, if a third party 61 | patent license is required to allow Recipient to distribute the Program, it 62 | is Recipient's responsibility to acquire that license before distributing 63 | the Program. 64 | d) Each Contributor represents that to its knowledge it has sufficient 65 | copyright rights in its Contribution, if any, to grant the copyright 66 | license set forth in this Agreement. 67 | 68 | 3. 
REQUIREMENTS 69 | 70 | A Contributor may choose to distribute the Program in object code form under its 71 | own license agreement, provided that: 72 | 73 | a) it complies with the terms and conditions of this Agreement; and 74 | b) its license agreement: 75 | i) effectively disclaims on behalf of all Contributors all warranties and 76 | conditions, express and implied, including warranties or conditions of 77 | title and non-infringement, and implied warranties or conditions of 78 | merchantability and fitness for a particular purpose; 79 | ii) effectively excludes on behalf of all Contributors all liability for 80 | damages, including direct, indirect, special, incidental and 81 | consequential damages, such as lost profits; 82 | iii) states that any provisions which differ from this Agreement are offered 83 | by that Contributor alone and not by any other party; and 84 | iv) states that source code for the Program is available from such 85 | Contributor, and informs licensees how to obtain it in a reasonable 86 | manner on or through a medium customarily used for software exchange. 87 | 88 | When the Program is made available in source code form: 89 | 90 | a) it must be made available under this Agreement; and 91 | b) a copy of this Agreement must be included with each copy of the Program. 92 | Contributors may not remove or alter any copyright notices contained within 93 | the Program. 94 | 95 | Each Contributor must identify itself as the originator of its Contribution, if 96 | any, in a manner that reasonably allows subsequent Recipients to identify the 97 | originator of the Contribution. 98 | 99 | 4. COMMERCIAL DISTRIBUTION 100 | 101 | Commercial distributors of software may accept certain responsibilities with 102 | respect to end users, business partners and the like. 
While this license is 103 | intended to facilitate the commercial use of the Program, the Contributor who 104 | includes the Program in a commercial product offering should do so in a manner 105 | which does not create potential liability for other Contributors. Therefore, if 106 | a Contributor includes the Program in a commercial product offering, such 107 | Contributor ("Commercial Contributor") hereby agrees to defend and indemnify 108 | every other Contributor ("Indemnified Contributor") against any losses, damages 109 | and costs (collectively "Losses") arising from claims, lawsuits and other legal 110 | actions brought by a third party against the Indemnified Contributor to the 111 | extent caused by the acts or omissions of such Commercial Contributor in 112 | connection with its distribution of the Program in a commercial product 113 | offering. The obligations in this section do not apply to any claims or Losses 114 | relating to any actual or alleged intellectual property infringement. In order 115 | to qualify, an Indemnified Contributor must: a) promptly notify the Commercial 116 | Contributor in writing of such claim, and b) allow the Commercial Contributor to 117 | control, and cooperate with the Commercial Contributor in, the defense and any 118 | related settlement negotiations. The Indemnified Contributor may participate in 119 | any such claim at its own expense. 120 | 121 | For example, a Contributor might include the Program in a commercial product 122 | offering, Product X. That Contributor is then a Commercial Contributor. If that 123 | Commercial Contributor then makes performance claims, or offers warranties 124 | related to Product X, those performance claims and warranties are such 125 | Commercial Contributor's responsibility alone. 
Under this section, the 126 | Commercial Contributor would have to defend claims against the other 127 | Contributors related to those performance claims and warranties, and if a court 128 | requires any other Contributor to pay any damages as a result, the Commercial 129 | Contributor must pay those damages. 130 | 131 | 5. NO WARRANTY 132 | 133 | EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN 134 | "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR 135 | IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, 136 | NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each 137 | Recipient is solely responsible for determining the appropriateness of using and 138 | distributing the Program and assumes all risks associated with its exercise of 139 | rights under this Agreement , including but not limited to the risks and costs 140 | of program errors, compliance with applicable laws, damage to or loss of data, 141 | programs or equipment, and unavailability or interruption of operations. 142 | 143 | 6. DISCLAIMER OF LIABILITY 144 | 145 | EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY 146 | CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, 147 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST 148 | PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 149 | STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 150 | OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS 151 | GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 152 | 153 | 7. 
GENERAL 154 | 155 | If any provision of this Agreement is invalid or unenforceable under applicable 156 | law, it shall not affect the validity or enforceability of the remainder of the 157 | terms of this Agreement, and without further action by the parties hereto, such 158 | provision shall be reformed to the minimum extent necessary to make such 159 | provision valid and enforceable. 160 | 161 | If Recipient institutes patent litigation against any entity (including a 162 | cross-claim or counterclaim in a lawsuit) alleging that the Program itself 163 | (excluding combinations of the Program with other software or hardware) 164 | infringes such Recipient's patent(s), then such Recipient's rights granted under 165 | Section 2(b) shall terminate as of the date such litigation is filed. 166 | 167 | All Recipient's rights under this Agreement shall terminate if it fails to 168 | comply with any of the material terms or conditions of this Agreement and does 169 | not cure such failure in a reasonable period of time after becoming aware of 170 | such noncompliance. If all Recipient's rights under this Agreement terminate, 171 | Recipient agrees to cease use and distribution of the Program as soon as 172 | reasonably practicable. However, Recipient's obligations under this Agreement 173 | and any licenses granted by Recipient relating to the Program shall continue and 174 | survive. 175 | 176 | Everyone is permitted to copy and distribute copies of this Agreement, but in 177 | order to avoid inconsistency the Agreement is copyrighted and may only be 178 | modified in the following manner. The Agreement Steward reserves the right to 179 | publish new versions (including revisions) of this Agreement from time to time. 180 | No one other than the Agreement Steward has the right to modify this Agreement. 181 | The Eclipse Foundation is the initial Agreement Steward. 
The Eclipse Foundation 182 | may assign the responsibility to serve as the Agreement Steward to a suitable 183 | separate entity. Each new version of the Agreement will be given a 184 | distinguishing version number. The Program (including Contributions) may always 185 | be distributed subject to the version of the Agreement under which it was 186 | received. In addition, after a new version of the Agreement is published, 187 | Contributor may elect to distribute the Program (including its Contributions) 188 | under the new version. Except as expressly stated in Sections 2(a) and 2(b) 189 | above, Recipient receives no rights or licenses to the intellectual property of 190 | any Contributor under this Agreement, whether expressly, by implication, 191 | estoppel or otherwise. All rights in the Program not expressly granted under 192 | this Agreement are reserved. 193 | 194 | This Agreement is governed by the laws of the State of New York and the 195 | intellectual property laws of the United States of America. No party to this 196 | Agreement will bring a legal action under this Agreement more than one year 197 | after the cause of action arose. Each party waives its rights to a jury trial in 198 | any resulting litigation. 
199 | -------------------------------------------------------------------------------- /Learning-Trail.md: -------------------------------------------------------------------------------- 1 | # Learning Trail -- MongoDB 2 | 3 | ### 18.03.2014 4 | 5 | * [M101P: MongoDB for Developers · Week 6: Application Engineering](course-m101p/week6-app-engineering.md) watched videos and submitted homework 6 | * [Geneva MongoDB User Group](http://genevamug.ch) Meetup at HEPIA; attended the conferences: 7 | * [MongoDB Aggregation Framework](http://bit.ly/1hxjLWA) by Alain Helaili (@AlainHelaili) 8 | * [Fuzzy Search in MongoDB](http://bit.ly/1fX1lkn) by John Page (@johnlpage); see [github.com/johnlpage/FuzzGo](https://github.com/johnlpage/FuzzGo) 9 | 10 | ### 11.03.2014 11 | 12 | * [M101P: MongoDB for Developers · Week 5: Aggregation Framework](course-m101p/week5-aggregation-fw.md) watched videos and submitted homework 13 | 14 | ### 04.03.2014 15 | 16 | * [M101P: MongoDB for Developers · Week 4: Performance](course-m101p/week4-performance.md) watched videos and submitted homework 17 | 18 | ### 25.02.2014 19 | 20 | * [M101P: MongoDB for Developers · Week 3: Schema Design](course-m101p/week3-schema.md) watched videos and submitted homework 21 | 22 | ### 20.02.2014 23 | 24 | * [Webinar: Build an Application Series · Session 3 · Part Two](webinar-build-an-app/20140220-build-app-part2-interacting-database.md) 25 | 26 | ### 18.02.2014 27 | 28 | * [M101P: MongoDB for Developers · Week 2: CRUD](course-m101p/week2-crud.md) watched videos and submitted homework 29 | 30 | ### 11.02.2014 31 | 32 | * [M101P: MongoDB for Developers · Week 1: Introduction](course-m101p/week1-introduction.md): watched videos and submitted homework 33 | 34 | ### 06.02.2014 35 | 36 | * [Webinar: Build an Application Series · Session 2 · Part One](webinar-build-an-app/20140206-build-app-part1-getting-started.md) 37 | 38 | ### 30.01.2014 39 | 40 | * [Webinar: Getting Started with MongoDB · Back to 
Basics](webinar-build-an-app/20140130-getting-started.md) 41 | * [Downloaded](http://www.mongodb.org/downloads) and installed MongoDB 2.4.9 _up & running in a few minutes: download, unpack, create a folder for the data and start the MongoDB server:_ `cd mongo; mkdir -p data/db; bin/mongod --dbpath data/db/` 42 | 43 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Learning MongoDB 2 | ================ 3 | 4 | Discovering and learning the MongoDB database -- personal notes. 5 | 6 | * [Learning trail](Learning-Trail.md) 7 | * [Discovering MongoDB](Discovering-MongoDB.md) _overview, articles, tutorials and resources_ 8 | * [Course M101P «MongoDB for developers»](https://github.com/olange/learning-mongodb/tree/master/course-m101p) _courseware, resources and homework_ 9 | * [Webinar «Building an application with MongoDB»](https://github.com/olange/learning-mongodb/tree/master/webinar-build-an-app) _notes about and transcripts of the webinar_ 10 | 11 | -------------------------------------------------------------------------------- /course-m101p/.gitignore: -------------------------------------------------------------------------------- 1 | data 2 | dump 3 | venv 4 | activate 5 | hw2-3 6 | hw3-2and3-3 7 | hw4-3 8 | *.pyc 9 | -------------------------------------------------------------------------------- /course-m101p/README.md: -------------------------------------------------------------------------------- 1 | # MongoDB Course M101P 2 | 3 | Courseware, homework and resources related to the course [M101P: MongoDB for Developers](https://education.mongodb.com/courses/10gen/M101P/2014_February/info). 
4 | 5 | ## Notes and homework 6 | 7 | * [Week 4: Performance](week4-performance.md) 8 | * [Week 3: Schema Design](week3-schema.md) 9 | * [Week 2: CRUD](week2-crud.md) 10 | * [Week 1: Introduction](week1-introduction.md) 11 | 12 | ## Courseware 13 | 14 | Access to these links requires an account and login to the MongoDB University website. 15 | 16 | * [Course info](https://education.mongodb.com/courses/10gen/M101P/2014_February/info) overview and news 17 | * [Course wiki](https://education.mongodb.com/courses/10gen/M101P/2014_February/wiki/M101P-Feb-2014/) links to all videos on YouTube 18 | * [Course forum](https://education.10gen.com/courses/10gen/M101P/2014_February/discussion/forum) 19 | 20 | ## Running 21 | 22 | Start MongoDB: 23 | 24 | $ cd course-m101p 25 | $ mongod --dbpath=data/db/ & 26 | 27 | Activate the Python Virtual Environment: 28 | 29 | $ source venv/bin/activate 30 | 31 | Start the Mongo shell: 32 | 33 | $ mongo 34 | 35 | To deactivate the Python Virtual Environment: 36 | 37 | $ deactivate 38 | 39 | ## Installing 40 | 41 | ### Generic Python install for Mac OS X 42 | 43 | Xcode Command Line Tools: 44 | 45 | $ xcode-select --install 46 | 47 | Mongo: 48 | 49 | $ brew install mongodb 50 | 51 | PIP: 52 | 53 | $ sudo easy_install pip 54 | 55 | Python Virtual Env: 56 | 57 | $ sudo pip install virtualenv 58 | $ virtualenv --distribute venv 59 | 60 | ### Course specific installs 61 | 62 | PyMongo and Bottle modules for Python: 63 | 64 | $ cd course-m101p 65 | $ source venv/bin/activate 66 | $ pip install pymongo 67 | $ pip install bottle 68 | 69 | Restoring the database from a dump: 70 | 71 | $ cd course-m101p 72 | $ wget https://education.mongodb.com/static/10gen_2014_M101P_February/handouts/hw1-1.184820ec29b6.zip 73 | $ unzip hw1-1.184820ec29b6.zip 74 | $ mongorestore dump 75 | 76 | Importing content of a database from a JSON file: 77 | 78 | $ mongoimport -d students -c grades < grades.ef42a2b3e7ff.json 79 | connected to: 127.0.0.1 80 | Tue Feb 18 
15:44:34.158 check 9 800 81 | Tue Feb 18 15:44:34.158 imported 800 objects 82 | -------------------------------------------------------------------------------- /course-m101p/hw1-1.184820ec29b6.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/course-m101p/hw1-1.184820ec29b6.zip -------------------------------------------------------------------------------- /course-m101p/hw1-2.21394489c9ad.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | import pymongo 4 | import sys 5 | 6 | 7 | # Copyright 2013, 10gen, Inc. 8 | # Author: Andrew Erlichson 9 | 10 | 11 | # connect to the db on the standard port 12 | connection = pymongo.MongoClient("mongodb://localhost") 13 | 14 | 15 | 16 | db = connection.m101 # attach to db 17 | collection = db.funnynumbers # specify the collection 18 | 19 | 20 | magic = 0 21 | 22 | try: 23 | iter = collection.find() 24 | for item in iter: 25 | if ((item['value'] % 3) == 0): 26 | magic = magic + item['value'] 27 | 28 | except: 29 | print "Error trying to read collection:", sys.exc_info()[0] 30 | 31 | 32 | print "The answer to Homework One, Problem 2 is " + str(int(magic)) 33 | 34 | 35 | -------------------------------------------------------------------------------- /course-m101p/hw1-3.e594fc84d4ac.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | import pymongo 4 | import bottle 5 | import sys 6 | 7 | 8 | # Copyright 2012, 10gen, Inc. 
9 | # Author: Andrew Erlichson 10 | 11 | 12 | @bottle.get("/hw1/<n>") 13 | def get_hw1(n): 14 | 15 | # connect to the db on the standard port 16 | connection = pymongo.MongoClient("mongodb://localhost") 17 | 18 | n = int(n) 19 | 20 | db = connection.m101 # attach to db 21 | collection = db.funnynumbers # specify the collection 22 | 23 | 24 | magic = 0 25 | 26 | try: 27 | iter = collection.find({},limit=1, skip=n).sort('value', direction=1) 28 | for item in iter: 29 | return str(int(item['value'])) + "\n" 30 | except: 31 | print "Error trying to read collection:", sys.exc_info()[0] 32 | 33 | 34 | bottle.debug(True) 35 | bottle.run(host='localhost', port=8080) 36 | 37 | 38 | -------------------------------------------------------------------------------- /course-m101p/hw2-1-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 2.1 2 | 3 | Analyzing a data set [Full description](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_2_CRUD/529396dee2d423246e7c43e6/) 4 | 5 | ## Attachments 6 | 7 | * [hw2-1-grades.ef42a2b3e7ff.json](hw2-1-grades.ef42a2b3e7ff.json) JSON file with the data 8 | 9 | ## Problem 10 | 11 | Now it’s your turn to analyze the data set. Find all exam scores greater than or equal to 65, and sort those scores from lowest to highest. What is the student_id of the lowest exam score above 65: 115, 22, 48, 67, 87 or 114? 12 | 13 | ## Solution 14 | 15 | db.grades.find( { "score": { $gte: 65}}).sort( { "score": 1}) 16 | { "_id" : ObjectId("50906d7fa3c412bb040eb5cf"), "student_id" : 22, "type" : "exam", "score" : 65.02518811936324 } 17 | { "_id" : ObjectId("50906d7fa3c412bb040eb70a"), "student_id" : 100, "type" : "homework", "score" : 65.29214756759019 } 18 | { "_id" : ObjectId("50906d7fa3c412bb040eb676"), "student_id" : 63, "type" : "homework", "score" : 65.31038121884853 } 19 | ... 
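The shell query above (filter on `score >= 65`, sort ascending, take the first document) can be mimicked in plain Python over an in-memory list — a sketch with made-up sample scores, not the course data set:

```python
# Sketch: reproduce find({"score": {"$gte": 65}}).sort({"score": 1})
# over an in-memory list of grade documents (sample data, not the
# hw2-1 data set).
grades = [
    {"student_id": 22, "type": "exam", "score": 65.02},
    {"student_id": 100, "type": "homework", "score": 65.29},
    {"student_id": 63, "type": "homework", "score": 65.31},
    {"student_id": 5, "type": "quiz", "score": 42.0},
]

# Filter on score >= 65, then sort ascending by score.
passing = sorted((g for g in grades if g["score"] >= 65),
                 key=lambda g: g["score"])

# The first document of the sorted result holds the lowest score >= 65.
lowest = passing[0]
print(lowest["student_id"])  # → 22
```

With real data the same two steps would run server-side via `find()` and `sort()`; the sketch only shows the semantics of the query.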
20 | 21 | ## Answer 22 | 23 | 22 24 | 25 | ## Setup 26 | 27 | (venv) course-m101p $ mongoimport -d students -c grades < grades.ef42a2b3e7ff.json 28 | connected to: 127.0.0.1 29 | Tue Feb 18 15:44:34.158 check 9 800 30 | Tue Feb 18 15:44:34.158 imported 800 objects 31 | 32 | (venv) course-m101p $ mongo 33 | MongoDB shell version: 2.4.9 34 | 35 | > use students 36 | switched to db students 37 | 38 | > db.grades.count() 39 | 800 40 | 41 | > db.grades.aggregate({'$group':{'_id':'$student_id', 'average':{$avg:'$score'}}}, {'$sort':{'average':-1}}, {'$limit':1}) 42 | { 43 | "result" : [ 44 | { 45 | "_id" : 164, 46 | "average" : 89.29771818263372 47 | } 48 | ], 49 | "ok" : 1 50 | } 51 | -------------------------------------------------------------------------------- /course-m101p/hw2-2-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 2.2 2 | 3 | [Full description](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_2_CRUD/52939732e2d423246e7c43e9/) 4 | 5 | ## Attachments 6 | 7 | * [hw2-2-remove.py](hw2-2-remove.py) 8 | 9 | ## Problem 10 | 11 | Write a program in the language of your choice that will remove the grade of type "homework" with the lowest score for each student from the dataset that you imported in HW 2.1. Since each document is one grade, it should remove one document per student. 12 | 13 | Provide the identity of the student with the highest average in the class, using the given query. The answer will appear in the _id field of the resulting document. 
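The removal logic can be sketched first in plain Python over an in-memory list — find the lowest "homework" grade per student, then drop exactly those documents — before running it against the live collection (the sample scores below are made up):

```python
# Sketch: drop the lowest "homework" score per student from an
# in-memory list, mirroring what the removal script does against
# the live collection (sample data, not the hw2-1 data set).
grades = [
    {"student_id": 0, "type": "homework", "score": 63.98},
    {"student_id": 0, "type": "homework", "score": 80.0},
    {"student_id": 0, "type": "exam", "score": 54.65},
    {"student_id": 1, "type": "homework", "score": 44.31},
    {"student_id": 1, "type": "homework", "score": 90.0},
]

# Pass 1: find, per student, the homework document with the lowest score.
lowest = {}
for g in grades:
    if g["type"] != "homework":
        continue
    sid = g["student_id"]
    if sid not in lowest or g["score"] < lowest[sid]["score"]:
        lowest[sid] = g

# Pass 2: keep every document except those lowest-homework ones.
kept = [g for g in grades if lowest.get(g["student_id"]) is not g]

print(len(kept))  # → 3: exactly one homework document removed per student
```

Against MongoDB the second pass becomes one `remove()` per selected `_id`, which is what the attached script does.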
14 | 15 | ## Solution 16 | 17 | Removal of the documents (see [hw2-2-remove.py](hw2-2-remove.py)): 18 | 19 | (venv) course-m101p $ python hw2-2-remove.py 20 | Removing {u'student_id': 0, u'_id': ObjectId('50906d7fa3c412bb040eb57a'), u'type': u'homework', u'score': 63.98402553675503} 21 | Removing {u'student_id': 1, u'_id': ObjectId('50906d7fa3c412bb040eb57e'), u'type': u'homework', u'score': 44.31667452616328} 22 | Removing {u'student_id': 2, u'_id': ObjectId('50906d7fa3c412bb040eb582'), u'type': u'homework', u'score': 97.75889721343528} 23 | Removing {u'student_id': 3, u'_id': ObjectId('50906d7fa3c412bb040eb586'), u'type': u'homework', u'score': 92.71871597581605} 24 | ... 25 | 26 | Document count after removal: 27 | 28 | (venv) course-m101p $ mongo students 29 | MongoDB shell version: 2.4.9 30 | connecting to: students 31 | 32 | > db.grades.count() 33 | 600 34 | 35 | Student who holds the 101st best grade across all grades: 36 | 37 | > db.grades.find().sort({'score':-1}).skip(100).limit(1) 38 | { "_id" : ObjectId("50906d7fa3c412bb040eb709"), "student_id" : 100, "type" : "homework", "score" : 88.50425479139126 } 39 | 40 | First five grade documents sorted by student_id and score: 41 | 42 | > db.grades.find({},{'student_id':1, 'type':1, 'score':1, '_id':0}).sort({'student_id':1, 'score':1, }).limit(5) 43 | { "student_id" : 0, "type" : "quiz", "score" : 31.95004496742112 } 44 | { "student_id" : 0, "type" : "exam", "score" : 54.6535436362647 } 45 | { "student_id" : 0, "type" : "homework", "score" : 63.98402553675503 } 46 | { "student_id" : 1, "type" : "homework", "score" : 44.31667452616328 } 47 | { "student_id" : 1, "type" : "exam", "score" : 74.20010837299897 } 48 | 49 | ## Answer 50 | 51 | Provide the identity of the student with the highest average in the class with the following query, which uses the aggregation framework. The answer will appear in the _id field of the resulting document. 
52 | 53 | > db.grades.aggregate({'$group':{'_id':'$student_id', 'average':{$avg:'$score'}}}, {'$sort':{'average':-1}}, {'$limit':1}) 54 | {"result" : [ {"_id" : 54, "average" : 96.19488173037341 } ], "ok" : 1 } 55 | 56 | --> 54 57 | -------------------------------------------------------------------------------- /course-m101p/hw2-2-remove.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: UTF8 -*- 3 | 4 | import pymongo 5 | import sys 6 | 7 | # Homework 2.2 · Course M101P 8 | # 9 | # Write a program in the language of your choice that will remove the grade 10 | # of type "homework" with the lowest score for each student from the dataset 11 | # that you imported in HW 2.1. Since each document is one grade, it should 12 | # remove one document per student. 13 | 14 | connection = pymongo.MongoClient("mongodb://localhost") 15 | db = connection.students 16 | grades = db.grades 17 | 18 | try: 19 | cursor = grades.find({"type": "homework"}) \ 20 | .sort([("student_id", pymongo.ASCENDING), \ 21 | ("score", pymongo.ASCENDING)]) 22 | except Exception: 23 | print "Unexpected error:", sys.exc_info()[0] 24 | sys.exit(1) 25 | 26 | previous_id = None 27 | 28 | for doc in cursor: 29 | student_id = doc['student_id'] 30 | if student_id != previous_id: 31 | previous_id = student_id 32 | print "Removing", doc 33 | grades.remove({'_id': doc['_id']}) 34 | -------------------------------------------------------------------------------- /course-m101p/hw2-3-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 2.3 2 | 3 | [Full description](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_2_CRUD/52939a93e2d423246e7c43f7/) 4 | 5 | ## Attachments 6 | 7 | * [hw2-3-userDAO.py](hw2-3-userDAO.py) modified script 8 | * [hw2-3.4a405a23b442.zip](hw2-3.4a405a23b442.zip) blog application provided by MongoDB 9 | 10 | ## Problem 11 |
12 | We have removed two pymongo statements from `userDAO.py` and marked the area where you need to work with XXX. You should not need to touch any other code. The pymongo statements that you are going to add will add a new user upon sign-up and validate a login by retrieving the right user document. 13 | 14 | ## Solution 15 | 16 | Validating the login: 17 | 18 | class UserDAO: 19 | ... 20 | 21 | def validate_login(self, username, password): 22 | ... 23 | try: 24 | # XXX HW 2.3 Students Work Here 25 | user = self.users.find_one({'_id': username}) 26 | ... 27 | 28 | Adding a user: 29 | 30 | def add_user(self, username, password, email): 31 | ... 32 | try: 33 | # XXX HW 2.3 Students work here 34 | self.users.insert(user) 35 | ... 36 | 37 | ## Answer 38 | 39 | (venv) course-m101p/hw2-3 $ python validate.py 40 | Welcome to the HW 2.3 validation tester 41 | Trying to create a test user aqOmuEl 42 | Found the test user aqOmuEl in the users collection 43 | User creation successful. 44 | Trying to login for test user aqOmuEl 45 | User login successful. 46 | Validation Code is jkfds9834j98fnm39njf0920fn2 47 | 48 | --> jkfds9834j98fnm39njf0920fn2 49 | -------------------------------------------------------------------------------- /course-m101p/hw2-3-userDAO.py: -------------------------------------------------------------------------------- 1 | # -*- coding: UTF8 -*- 2 | 3 | # 4 | # Copyright (c) 2008 - 2013 10gen, Inc. 5 | # 6 | # Licensed under the Apache License, Version 2.0 (the "License"); 7 | # you may not use this file except in compliance with the License. 8 | # You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, software 13 | # distributed under the License is distributed on an "AS IS" BASIS, 14 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 | # See the License for the specific language governing permissions and 16 | # limitations under the License. 17 | # 18 | # 19 | import hmac 20 | import random 21 | import string 22 | import hashlib 23 | import pymongo 24 | 25 | 26 | # The User Data Access Object handles all interactions with the User collection. 27 | class UserDAO: 28 | 29 | def __init__(self, db): 30 | self.db = db 31 | self.users = self.db.users 32 | self.SECRET = 'verysecret' 33 | 34 | # makes a little salt 35 | def make_salt(self): 36 | salt = "" 37 | for i in range(5): 38 | salt = salt + random.choice(string.ascii_letters) 39 | return salt 40 | 41 | # implement the function make_pw_hash(name, pw) that returns a hashed password 42 | # of the format: 43 | # HASH(pw + salt),salt 44 | # use sha256 45 | 46 | def make_pw_hash(self, pw, salt=None): 47 | if salt is None: 48 | salt = self.make_salt() 49 | return hashlib.sha256(pw + salt).hexdigest() + "," + salt 50 | 51 | # Validates a user login. Returns user record or None 52 | def validate_login(self, username, password): 53 | 54 | user = None 55 | try: 56 | # XXX HW 2.3 Students Work Here 57 | # you will need to retrieve the right document from the users collection. 58 | user = self.users.find_one({'_id': username}) 59 | except Exception: 60 | print "Unable to query database for user" 61 | 62 | if user is None: 63 | print "User not in database" 64 | return None 65 | 66 | salt = user['password'].split(',')[1] 67 | 68 | if user['password'] != self.make_pw_hash(password, salt): 69 | print "user password is not a match" 70 | return None 71 | 72 | # Looks good 73 | return user 74 | 75 | 76 | # creates a new user in the users collection 77 | def add_user(self, username, password, email): 78 | password_hash = self.make_pw_hash(password) 79 | 80 | user = {'_id': username, 'password': password_hash} 81 | if email != "": 82 | user['email'] = email 83 | 84 | try: 85 | # XXX HW 2.3 Students work here 86 | # You need to insert the user into the users collection.
87 | # Don't overthink this one, it's a straightforward insert. 88 | self.users.insert(user) 89 | 90 | except pymongo.errors.DuplicateKeyError: 91 | print "oops, username is already taken" 92 | return False 93 | except pymongo.errors.OperationFailure: 94 | print "oops, mongo error" 95 | return False 96 | 97 | return True 98 | 99 | 100 | -------------------------------------------------------------------------------- /course-m101p/hw2-3.4a405a23b442.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/course-m101p/hw2-3.4a405a23b442.zip -------------------------------------------------------------------------------- /course-m101p/hw3-1-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 3.1 2 | 3 | Remove the lowest homework score for each student. [Full description](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_3_Schema_Design/529dffcae2d42347509fb3a2/) 4 | 5 | ## Attachments 6 | 7 | * [hw3-1-remove.py](hw3-1-remove.py) completed script 8 | 9 | ## Problem 10 | 11 | Write a program in the language of your choice that will remove the lowest homework score for each student. Since there is a single document for each student containing an array of scores, you will need to update the scores array and remove the homework. Provide the identity of the student with the highest average in the class, using the given query. 12 | 13 | ## Solution 14 | 15 | Removing the scores (see [hw3-1-remove.py](hw3-1-remove.py)): 16 | 17 | (venv) course-m101p $ python hw3-1-remove.py 18 | ...
19 | updated doc 199: 20 | scores: [{u'score': 82.11742562118049, u'type': u'exam'}, {u'score': 49.61295450928224, u'type': u'quiz'}, {u'score': 28.86823689842918, u'type': u'homework'}, {u'score': 5.861613903793295, u'type': u'homework'}] 21 | new scores: [{u'score': 82.11742562118049, u'type': u'exam'}, {u'score': 49.61295450928224, u'type': u'quiz'}, {u'score': 28.86823689842918, u'type': u'homework'}] 22 | 23 | Number of documents after removal: 24 | 25 | (venv) course-m101p $ mongo school 26 | > db.students.count() 27 | 200 28 | 29 | Demarcus Audette's record: 30 | 31 | > db.students.find({_id:100}).pretty() 32 | { 33 | "_id" : 100, 34 | "name" : "Demarcus Audette", 35 | "scores" : [ 36 | {"score" : 47.42608580155614, "type" : "exam"}, 37 | {"score" : 44.83416623719906, "type" : "quiz"}, 38 | {"score" : 39.01726616178844, "type" : "homework"} 39 | ] 40 | } 41 | 42 | ## Answer 43 | 44 | Provide the identity of the student with the highest average in the class with the following query, which uses the aggregation framework. The answer will appear in the _id field of the resulting document.
45 | 46 | > db.students.aggregate({'$unwind':'$scores'},{'$group':{'_id':'$_id', 'average':{$avg:'$scores.score'}}}, {'$sort':{'average':-1}}, {'$limit':1}) 47 | 48 | { "result" : [ { "_id" : 13, "average" : 91.98315917172745 } ], "ok" : 1 } 49 | 50 | --> 13 51 | 52 | ## Preparation 53 | 54 | (venv) course-m101p $ mongoimport -d school -c students < hw3-1-students.e7ed0a289cbe.json 55 | connected to: 127.0.0.1 56 | Tue Feb 25 16:02:10.702 check 9 200 57 | Tue Feb 25 16:02:10.702 imported 200 objects 58 | -------------------------------------------------------------------------------- /course-m101p/hw3-1-remove.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: UTF8 -*- 3 | 4 | import pymongo 5 | import sys 6 | 7 | # Homework 3.1 · Course M101P 8 | # 9 | # Write a program in the language of your choice that will remove 10 | # the lowest homework score for each student. Since there is a single 11 | # document for each student containing an array of scores, you will 12 | # need to update the scores array and remove the homework. 13 | # 14 | # Note: when run twice, all homework scores will have been removed.
15 | 16 | connection = pymongo.MongoClient("mongodb://localhost") 17 | db = connection.school 18 | students = db.students 19 | 20 | def remove_lowest_homework_score(scores): 21 | homework_scores = [score for score in scores if score[u"type"] == u"homework"] 22 | lowest = min([score[u"score"] for score in homework_scores]) 23 | return [score for score in scores \ 24 | if (score[u"type"] != u"homework") \ 25 | or (score[u"type"] == u"homework" and score[u"score"] != lowest)] 26 | 27 | def update_scores(coll, doc_id, new_scores): 28 | try: 29 | coll.update({"_id": doc_id}, {"$set": {"scores": new_scores}}) 30 | except Exception: 31 | print "Unexpected error while updating:", sys.exc_info()[0] 32 | 33 | def main(argv): 34 | try: 35 | # All students having a score of type "homework" in their scores array 36 | cursor = students.find({'scores.type': 'homework'}) 37 | except Exception: 38 | print "Unexpected error while finding:", sys.exc_info()[0] 39 | sys.exit(1) 40 | for doc in cursor: 41 | doc_id = doc["_id"] 42 | scores = doc["scores"] 43 | updated_scores = remove_lowest_homework_score(scores) 44 | update_scores(students, doc_id, updated_scores) 45 | print "updated doc %s:\n scores: %s\nnew scores: %s" % (doc_id, scores, updated_scores) 46 | 47 | if __name__ == "__main__": 48 | main(sys.argv[1:]) 49 | -------------------------------------------------------------------------------- /course-m101p/hw3-1-students.e7ed0a289cbe.json: -------------------------------------------------------------------------------- 1 | { "_id" : 0, "name" : "aimee Zank", "scores" : [ { "type" : "exam", "score" : 1.463179736705023 }, { "type" : "quiz", "score" : 11.78273309957772 }, { "type" : "homework", "score" : 6.676176060654615 }, { "type" : "homework", "score" : 35.8740349954354 } ] } 2 | { "_id" : 1, "name" : "Aurelia Menendez", "scores" : [ { "type" : "exam", "score" : 60.06045071030959 }, { "type" : "quiz", "score" : 52.79790691903873 }, { "type" : "homework", "score" : 71.76133439165544 }, { "type" :
"homework", "score" : 34.85718117893772 } ] } 3 | { "_id" : 2, "name" : "Corliss Zuk", "scores" : [ { "type" : "exam", "score" : 67.03077096065002 }, { "type" : "quiz", "score" : 6.301851677835235 }, { "type" : "homework", "score" : 20.18160621941858 }, { "type" : "homework", "score" : 66.28344683278382 } ] } 4 | { "_id" : 3, "name" : "Bao Ziglar", "scores" : [ { "type" : "exam", "score" : 71.64343899778332 }, { "type" : "quiz", "score" : 24.80221293650313 }, { "type" : "homework", "score" : 1.694720653897219 }, { "type" : "homework", "score" : 42.26147058804812 } ] } 5 | { "_id" : 4, "name" : "Zachary Langlais", "scores" : [ { "type" : "exam", "score" : 78.68385091304332 }, { "type" : "quiz", "score" : 90.29631013680419 }, { "type" : "homework", "score" : 34.41620148042529 }, { "type" : "homework", "score" : 19.21886443577987 } ] } 6 | { "_id" : 5, "name" : "Wilburn Spiess", "scores" : [ { "type" : "exam", "score" : 44.87186330181261 }, { "type" : "quiz", "score" : 25.72395114668016 }, { "type" : "homework", "score" : 10.53058536508186 }, { "type" : "homework", "score" : 63.42288310628662 } ] } 7 | { "_id" : 6, "name" : "Jenette Flanders", "scores" : [ { "type" : "exam", "score" : 37.32285459166097 }, { "type" : "quiz", "score" : 28.32634976913737 }, { "type" : "homework", "score" : 16.58341639738951 }, { "type" : "homework", "score" : 81.57115318686338 } ] } 8 | { "_id" : 7, "name" : "Salena Olmos", "scores" : [ { "type" : "exam", "score" : 90.37826509157176 }, { "type" : "quiz", "score" : 42.48780666956811 }, { "type" : "homework", "score" : 67.18328596625217 }, { "type" : "homework", "score" : 96.52986171633331 } ] } 9 | { "_id" : 8, "name" : "Daphne Zheng", "scores" : [ { "type" : "exam", "score" : 22.13583712862635 }, { "type" : "quiz", "score" : 14.63969941335069 }, { "type" : "homework", "score" : 75.94123677556644 }, { "type" : "homework", "score" : 73.29753303407691 } ] } 10 | { "_id" : 9, "name" : "Sanda Ryba", "scores" : [ { "type" : "exam", "score" : 
97.00509953654694 }, { "type" : "quiz", "score" : 97.80449632538915 }, { "type" : "homework", "score" : 12.47568017314781 }, { "type" : "homework", "score" : 25.27368532432955 } ] } 11 | { "_id" : 10, "name" : "Denisha Cast", "scores" : [ { "type" : "exam", "score" : 45.61876862259409 }, { "type" : "quiz", "score" : 98.35723209418343 }, { "type" : "homework", "score" : 19.31113429145131 }, { "type" : "homework", "score" : 55.90835657173456 } ] } 12 | { "_id" : 11, "name" : "Marcus Blohm", "scores" : [ { "type" : "exam", "score" : 78.42617835651868 }, { "type" : "quiz", "score" : 82.58372817930675 }, { "type" : "homework", "score" : 87.49924733328717 }, { "type" : "homework", "score" : 15.81264595052612 } ] } 13 | { "_id" : 12, "name" : "Quincy Danaher", "scores" : [ { "type" : "exam", "score" : 54.29841278520669 }, { "type" : "quiz", "score" : 85.61270164694737 }, { "type" : "homework", "score" : 14.78936520432093 }, { "type" : "homework", "score" : 80.40732356118075 } ] } 14 | { "_id" : 13, "name" : "Jessika Dagenais", "scores" : [ { "type" : "exam", "score" : 90.47179954427436 }, { "type" : "quiz", "score" : 90.3001402468489 }, { "type" : "homework", "score" : 95.17753772405909 }, { "type" : "homework", "score" : 78.18795058912879 } ] } 15 | { "_id" : 14, "name" : "Alix Sherrill", "scores" : [ { "type" : "exam", "score" : 25.15924151998215 }, { "type" : "quiz", "score" : 68.64484047692098 }, { "type" : "homework", "score" : 13.66179556675781 }, { "type" : "homework", "score" : 24.68462152686763 } ] } 16 | { "_id" : 15, "name" : "Tambra Mercure", "scores" : [ { "type" : "exam", "score" : 69.15650225331579 }, { "type" : "quiz", "score" : 3.311794422000724 }, { "type" : "homework", "score" : 45.03178973642521 }, { "type" : "homework", "score" : 42.19409476640781 } ] } 17 | { "_id" : 16, "name" : "Dodie Staller", "scores" : [ { "type" : "exam", "score" : 7.772386442858281 }, { "type" : "quiz", "score" : 31.84300235104542 }, { "type" : "homework", "score" : 
80.52136407989194 }, { "type" : "homework", "score" : 70.3632405320854 } ] } 18 | { "_id" : 17, "name" : "Fletcher Mcconnell", "scores" : [ { "type" : "exam", "score" : 39.41011069729274 }, { "type" : "quiz", "score" : 81.13270307809924 }, { "type" : "homework", "score" : 31.15090466987088 }, { "type" : "homework", "score" : 97.70116640402922 } ] } 19 | { "_id" : 18, "name" : "Verdell Sowinski", "scores" : [ { "type" : "exam", "score" : 62.12870233109035 }, { "type" : "quiz", "score" : 84.74586220889356 }, { "type" : "homework", "score" : 81.58947824932574 }, { "type" : "homework", "score" : 69.09840625499065 } ] } 20 | { "_id" : 19, "name" : "Gisela Levin", "scores" : [ { "type" : "exam", "score" : 44.51211101958831 }, { "type" : "quiz", "score" : 0.6578497966368002 }, { "type" : "homework", "score" : 93.36341655949683 }, { "type" : "homework", "score" : 49.43132782777443 } ] } 21 | { "_id" : 20, "name" : "Tressa Schwing", "scores" : [ { "type" : "exam", "score" : 42.17439799514388 }, { "type" : "quiz", "score" : 71.99314840599558 }, { "type" : "homework", "score" : 81.23972632069464 }, { "type" : "homework", "score" : 48.33010230766873 } ] } 22 | { "_id" : 21, "name" : "Rosana Vales", "scores" : [ { "type" : "exam", "score" : 46.2289476258328 }, { "type" : "quiz", "score" : 98.34164225207036 }, { "type" : "homework", "score" : 11.61342324347139 }, { "type" : "homework", "score" : 36.18769746805938 } ] } 23 | { "_id" : 22, "name" : "Margart Vitello", "scores" : [ { "type" : "exam", "score" : 75.04996547553947 }, { "type" : "quiz", "score" : 10.23046475899236 }, { "type" : "homework", "score" : 96.72520512117761 }, { "type" : "homework", "score" : 6.488940333376703 } ] } 24 | { "_id" : 23, "name" : "Tamika Schildgen", "scores" : [ { "type" : "exam", "score" : 45.65432764125526 }, { "type" : "quiz", "score" : 64.32927049658846 }, { "type" : "homework", "score" : 83.53933351660562 }, { "type" : "homework", "score" : 64.53669892045296 } ] } 25 | { "_id" : 24, "name" : 
"Jesusa Rickenbacker", "scores" : [ { "type" : "exam", "score" : 86.03197021556829 }, { "type" : "quiz", "score" : 1.967495200433389 }, { "type" : "homework", "score" : 61.10861071547914 }, { "type" : "homework", "score" : 47.65460674687512 } ] } 26 | { "_id" : 25, "name" : "Rudolph Domingo", "scores" : [ { "type" : "exam", "score" : 74.75289335591543 }, { "type" : "quiz", "score" : 38.5413647805495 }, { "type" : "homework", "score" : 35.2554340953413 }, { "type" : "homework", "score" : 6.94324481190195 } ] } 27 | { "_id" : 26, "name" : "Jonie Raby", "scores" : [ { "type" : "exam", "score" : 19.17861192576963 }, { "type" : "quiz", "score" : 76.3890359749654 }, { "type" : "homework", "score" : 44.39605672647002 }, { "type" : "homework", "score" : 5.668263730201028 } ] } 28 | { "_id" : 27, "name" : "Edgar Sarkis", "scores" : [ { "type" : "exam", "score" : 8.606983261043888 }, { "type" : "quiz", "score" : 58.71180464203724 }, { "type" : "homework", "score" : 15.33726210596508 }, { "type" : "homework", "score" : 14.35947594380606 } ] } 29 | { "_id" : 28, "name" : "Laureen Salomone", "scores" : [ { "type" : "exam", "score" : 3.677565278992456 }, { "type" : "quiz", "score" : 7.119462599229987 }, { "type" : "homework", "score" : 82.87308922617427 }, { "type" : "homework", "score" : 7.697331666318108 } ] } 30 | { "_id" : 29, "name" : "Gwyneth Garling", "scores" : [ { "type" : "exam", "score" : 48.36644963899371 }, { "type" : "quiz", "score" : 10.37827022865908 }, { "type" : "homework", "score" : 22.00937866160616 }, { "type" : "homework", "score" : 93.26639335532833 } ] } 31 | { "_id" : 30, "name" : "Kaila Deibler", "scores" : [ { "type" : "exam", "score" : 15.89771199662455 }, { "type" : "quiz", "score" : 56.93965183412178 }, { "type" : "homework", "score" : 66.64493295066322 }, { "type" : "homework", "score" : 27.25353739456094 } ] } 32 | { "_id" : 31, "name" : "Tandra Meadows", "scores" : [ { "type" : "exam", "score" : 24.90138146001744 }, { "type" : "quiz", "score" : 
28.8266541837344 }, { "type" : "homework", "score" : 13.53683769386709 }, { "type" : "homework", "score" : 97.16831550665721 } ] } 33 | { "_id" : 32, "name" : "Gwen Honig", "scores" : [ { "type" : "exam", "score" : 87.14345376886205 }, { "type" : "quiz", "score" : 99.45824441135635 }, { "type" : "homework", "score" : 35.3788193463544 }, { "type" : "homework", "score" : 76.66460454219344 } ] } 34 | { "_id" : 33, "name" : "Sadie Jernigan", "scores" : [ { "type" : "exam", "score" : 73.15861249943812 }, { "type" : "quiz", "score" : 2.987718065941702 }, { "type" : "homework", "score" : 51.81946118482261 }, { "type" : "homework", "score" : 82.54104198590488 } ] } 35 | { "_id" : 34, "name" : "Carli Belvins", "scores" : [ { "type" : "exam", "score" : 7.112266875518214 }, { "type" : "quiz", "score" : 67.734668378287 }, { "type" : "homework", "score" : 32.25008698445316 }, { "type" : "homework", "score" : 88.99855402666871 } ] } 36 | { "_id" : 35, "name" : "Synthia Labelle", "scores" : [ { "type" : "exam", "score" : 27.22049103148209 }, { "type" : "quiz", "score" : 31.28760039265919 }, { "type" : "homework", "score" : 29.18912718046906 }, { "type" : "homework", "score" : 79.23285425688643 } ] } 37 | { "_id" : 36, "name" : "Eugene Magdaleno", "scores" : [ { "type" : "exam", "score" : 73.055900093666 }, { "type" : "quiz", "score" : 79.85621560462026 }, { "type" : "homework", "score" : 66.09143669040472 }, { "type" : "homework", "score" : 39.44092839803375 } ] } 38 | { "_id" : 37, "name" : "Meagan Oakes", "scores" : [ { "type" : "exam", "score" : 86.06759716616264 }, { "type" : "quiz", "score" : 79.45097452834857 }, { "type" : "homework", "score" : 28.41090281547689 }, { "type" : "homework", "score" : 25.31831395970249 } ] } 39 | { "_id" : 38, "name" : "Richelle Siemers", "scores" : [ { "type" : "exam", "score" : 34.64373397163318 }, { "type" : "quiz", "score" : 91.46799649446983 }, { "type" : "homework", "score" : 31.74461197651704 }, { "type" : "homework", "score" : 
56.12615074082559 } ] } 40 | { "_id" : 39, "name" : "Mariette Batdorf", "scores" : [ { "type" : "exam", "score" : 0.04381116979284005 }, { "type" : "quiz", "score" : 90.25774974259562 }, { "type" : "homework", "score" : 65.88612319625227 }, { "type" : "homework", "score" : 16.40598305673743 } ] } 41 | { "_id" : 40, "name" : "Rachell Aman", "scores" : [ { "type" : "exam", "score" : 84.53009035375172 }, { "type" : "quiz", "score" : 25.25568126160764 }, { "type" : "homework", "score" : 38.61623256739443 }, { "type" : "homework", "score" : 70.42062575402956 } ] } 42 | { "_id" : 41, "name" : "Aleida Elsass", "scores" : [ { "type" : "exam", "score" : 28.02518041693717 }, { "type" : "quiz", "score" : 95.25243105389065 }, { "type" : "homework", "score" : 68.05980405338909 }, { "type" : "homework", "score" : 19.39841999081942 } ] } 43 | { "_id" : 42, "name" : "Kayce Kenyon", "scores" : [ { "type" : "exam", "score" : 44.62441703708117 }, { "type" : "quiz", "score" : 27.38208798553111 }, { "type" : "homework", "score" : 97.43587143437509 }, { "type" : "homework", "score" : 56.66014675388287 } ] } 44 | { "_id" : 43, "name" : "Ernestine Macfarland", "scores" : [ { "type" : "exam", "score" : 15.29147856258362 }, { "type" : "quiz", "score" : 78.40698797039501 }, { "type" : "homework", "score" : 31.03031764716336 }, { "type" : "homework", "score" : 0.7058771516102902 } ] } 45 | { "_id" : 44, "name" : "Houston Valenti", "scores" : [ { "type" : "exam", "score" : 98.06441387027331 }, { "type" : "quiz", "score" : 0.8760893342659504 }, { "type" : "homework", "score" : 15.2177618920215 }, { "type" : "homework", "score" : 11.33991806278614 } ] } 46 | { "_id" : 45, "name" : "Terica Brugger", "scores" : [ { "type" : "exam", "score" : 42.1011312120801 }, { "type" : "quiz", "score" : 41.73654145887228 }, { "type" : "homework", "score" : 13.3625292343721 }, { "type" : "homework", "score" : 18.91287189072117 } ] } 47 | { "_id" : 46, "name" : "Lady Lefevers", "scores" : [ { "type" : "exam", 
"score" : 16.89237820123443 }, { "type" : "quiz", "score" : 65.97505910406456 }, { "type" : "homework", "score" : 48.42527123437286 }, { "type" : "homework", "score" : 21.7574602601663 } ] } 48 | { "_id" : 47, "name" : "Kurtis Jiles", "scores" : [ { "type" : "exam", "score" : 92.96916908741805 }, { "type" : "quiz", "score" : 22.86854192921203 }, { "type" : "homework", "score" : 12.10418920555013 }, { "type" : "homework", "score" : 31.89793879453222 } ] } 49 | { "_id" : 48, "name" : "Barbera Lippman", "scores" : [ { "type" : "exam", "score" : 35.43490750932609 }, { "type" : "quiz", "score" : 97.42074160188449 }, { "type" : "homework", "score" : 64.33897612104403 }, { "type" : "homework", "score" : 74.1092960902528 } ] } 50 | { "_id" : 49, "name" : "Dinah Sauve", "scores" : [ { "type" : "exam", "score" : 96.64807532447064 }, { "type" : "quiz", "score" : 14.56470882270576 }, { "type" : "homework", "score" : 72.00519420743191 }, { "type" : "homework", "score" : 68.40650882547841 } ] } 51 | { "_id" : 50, "name" : "Alica Pasley", "scores" : [ { "type" : "exam", "score" : 19.38544736721771 }, { "type" : "quiz", "score" : 88.70752686639557 }, { "type" : "homework", "score" : 60.62755218680213 }, { "type" : "homework", "score" : 51.04001259232448 } ] } 52 | { "_id" : 51, "name" : "Elizabet Kleine", "scores" : [ { "type" : "exam", "score" : 86.81245449846962 }, { "type" : "quiz", "score" : 36.196443334522 }, { "type" : "homework", "score" : 77.94001750905642 }, { "type" : "homework", "score" : 14.07686094256286 } ] } 53 | { "_id" : 52, "name" : "Tawana Oberg", "scores" : [ { "type" : "exam", "score" : 80.59006098671075 }, { "type" : "quiz", "score" : 93.28438118988183 }, { "type" : "homework", "score" : 93.12134003887978 }, { "type" : "homework", "score" : 68.64511133845058 } ] } 54 | { "_id" : 53, "name" : "Malisa Jeanes", "scores" : [ { "type" : "exam", "score" : 33.44580005842922 }, { "type" : "quiz", "score" : 7.172746439960975 }, { "type" : "homework", "score" : 
80.53328849494751 }, { "type" : "homework", "score" : 61.9194866926309 } ] } 55 | { "_id" : 54, "name" : "Joel Rueter", "scores" : [ { "type" : "exam", "score" : 87.53636893952853 }, { "type" : "quiz", "score" : 92.70974674256513 }, { "type" : "homework", "score" : 61.79032586247813 }, { "type" : "homework", "score" : 2.926650636099537 } ] } 56 | { "_id" : 55, "name" : "Tresa Sinha", "scores" : [ { "type" : "exam", "score" : 94.93136959210354 }, { "type" : "quiz", "score" : 72.32226123565266 }, { "type" : "homework", "score" : 4.988845385625684 }, { "type" : "homework", "score" : 77.24876881176699 } ] } 57 | { "_id" : 56, "name" : "Danika Loeffler", "scores" : [ { "type" : "exam", "score" : 21.54531707142236 }, { "type" : "quiz", "score" : 41.75962115078149 }, { "type" : "homework", "score" : 4.923819407716035 }, { "type" : "homework", "score" : 55.70195462204016 } ] } 58 | { "_id" : 57, "name" : "Chad Rahe", "scores" : [ { "type" : "exam", "score" : 40.84572027366789 }, { "type" : "quiz", "score" : 29.22733629679561 }, { "type" : "homework", "score" : 93.12112348179406 }, { "type" : "homework", "score" : 27.06916803280036 } ] } 59 | { "_id" : 58, "name" : "Joaquina Arbuckle", "scores" : [ { "type" : "exam", "score" : 28.66671659815553 }, { "type" : "quiz", "score" : 40.48858382583742 }, { "type" : "homework", "score" : 51.51393116681172 }, { "type" : "homework", "score" : 9.965515162346817 } ] } 60 | { "_id" : 59, "name" : "Vinnie Auerbach", "scores" : [ { "type" : "exam", "score" : 95.45508256300009 }, { "type" : "quiz", "score" : 7.512188017365151 }, { "type" : "homework", "score" : 28.5905754294006 }, { "type" : "homework", "score" : 23.91300715707971 } ] } 61 | { "_id" : 60, "name" : "Dusti Lemmond", "scores" : [ { "type" : "exam", "score" : 17.27725327681863 }, { "type" : "quiz", "score" : 83.24439414725833 }, { "type" : "homework", "score" : 81.84258722611811 }, { "type" : "homework", "score" : 24.70433764307496 } ] } 62 | { "_id" : 61, "name" : "Grady 
Zemke", "scores" : [ { "type" : "exam", "score" : 51.91561300267121 }, { "type" : "quiz", "score" : 50.08349374829509 }, { "type" : "homework", "score" : 95.34139273570386 }, { "type" : "homework", "score" : 34.13252938412605 } ] } 63 | { "_id" : 62, "name" : "Vina Matsunaga", "scores" : [ { "type" : "exam", "score" : 51.38190070034149 }, { "type" : "quiz", "score" : 34.63479282877322 }, { "type" : "homework", "score" : 46.27059093183421 }, { "type" : "homework", "score" : 19.16579262350994 } ] } 64 | { "_id" : 63, "name" : "Rubie Winton", "scores" : [ { "type" : "exam", "score" : 7.176062073558509 }, { "type" : "quiz", "score" : 46.32426882511162 }, { "type" : "homework", "score" : 19.24312817599633 }, { "type" : "homework", "score" : 0.6444077420911576 } ] } 65 | { "_id" : 64, "name" : "Whitley Fears", "scores" : [ { "type" : "exam", "score" : 89.61845831842888 }, { "type" : "quiz", "score" : 82.44879156010508 }, { "type" : "homework", "score" : 92.2308421188758 }, { "type" : "homework", "score" : 96.57912148645883 } ] } 66 | { "_id" : 65, "name" : "Gena Riccio", "scores" : [ { "type" : "exam", "score" : 67.58395308948619 }, { "type" : "quiz", "score" : 67.24135009515879 }, { "type" : "homework", "score" : 42.93471779899529 }, { "type" : "homework", "score" : 30.12776583664075 } ] } 67 | { "_id" : 66, "name" : "Kim Xu", "scores" : [ { "type" : "exam", "score" : 19.96531774799065 }, { "type" : "quiz", "score" : 17.52966217224916 }, { "type" : "homework", "score" : 49.42403230686458 }, { "type" : "homework", "score" : 57.32983091095816 } ] } 68 | { "_id" : 67, "name" : "Merissa Mann", "scores" : [ { "type" : "exam", "score" : 75.1949733626123 }, { "type" : "quiz", "score" : 52.56522605123723 }, { "type" : "homework", "score" : 83.8722548112793 }, { "type" : "homework", "score" : 94.67518167209815 } ] } 69 | { "_id" : 68, "name" : "Jenise Mcguffie", "scores" : [ { "type" : "exam", "score" : 40.15210496060384 }, { "type" : "quiz", "score" : 90.60219950183566 }, { 
"type" : "homework", "score" : 23.21791929422224 }, { "type" : "homework", "score" : 51.58720341010564 } ] } 70 | { "_id" : 69, "name" : "Cody Strouth", "scores" : [ { "type" : "exam", "score" : 4.784730508547719 }, { "type" : "quiz", "score" : 99.80348240553108 }, { "type" : "homework", "score" : 25.57926736852058 }, { "type" : "homework", "score" : 97.89665889862901 } ] } 71 | { "_id" : 70, "name" : "Harriett Velarde", "scores" : [ { "type" : "exam", "score" : 33.7733570443736 }, { "type" : "quiz", "score" : 96.05228578589255 }, { "type" : "homework", "score" : 46.24926696413032 }, { "type" : "homework", "score" : 21.38522866276249 } ] } 72 | { "_id" : 71, "name" : "Kam Senters", "scores" : [ { "type" : "exam", "score" : 81.56497719010976 }, { "type" : "quiz", "score" : 5.247410853581524 }, { "type" : "homework", "score" : 92.10078400854972 }, { "type" : "homework", "score" : 55.4770385973351 } ] } 73 | { "_id" : 72, "name" : "Leonida Lafond", "scores" : [ { "type" : "exam", "score" : 92.10605086888438 }, { "type" : "quiz", "score" : 32.66022211621239 }, { "type" : "homework", "score" : 38.22914395140913 }, { "type" : "homework", "score" : 82.15588797092647 } ] } 74 | { "_id" : 73, "name" : "Devorah Smartt", "scores" : [ { "type" : "exam", "score" : 69.60160495436016 }, { "type" : "quiz", "score" : 6.931507591998553 }, { "type" : "homework", "score" : 55.66005349294464 }, { "type" : "homework", "score" : 40.18626524256411 } ] } 75 | { "_id" : 74, "name" : "Leola Lundin", "scores" : [ { "type" : "exam", "score" : 31.62936464207764 }, { "type" : "quiz", "score" : 91.28658941188532 }, { "type" : "homework", "score" : 23.95163257932222 }, { "type" : "homework", "score" : 93.71671632774428 } ] } 76 | { "_id" : 75, "name" : "Tonia Surace", "scores" : [ { "type" : "exam", "score" : 80.93655069496523 }, { "type" : "quiz", "score" : 79.54620208144452 }, { "type" : "homework", "score" : 27.51843538237827 }, { "type" : "homework", "score" : 41.34308724166419 } ] } 77 | { 
"_id" : 76, "name" : "Adrien Renda", "scores" : [ { "type" : "exam", "score" : 57.24794864351232 }, { "type" : "quiz", "score" : 19.5118228072558 }, { "type" : "homework", "score" : 59.50508603413107 }, { "type" : "homework", "score" : 70.71043448913191 } ] } 78 | { "_id" : 77, "name" : "Efrain Claw", "scores" : [ { "type" : "exam", "score" : 55.41266579085205 }, { "type" : "quiz", "score" : 31.30359328252952 }, { "type" : "homework", "score" : 82.03870199947903 }, { "type" : "homework", "score" : 88.73134194093676 } ] } 79 | { "_id" : 78, "name" : "Len Treiber", "scores" : [ { "type" : "exam", "score" : 21.21850173315791 }, { "type" : "quiz", "score" : 13.2282768150266 }, { "type" : "homework", "score" : 16.68695227176766 }, { "type" : "homework", "score" : 82.49842801247594 } ] } 80 | { "_id" : 79, "name" : "Mariela Sherer", "scores" : [ { "type" : "exam", "score" : 61.20158144877323 }, { "type" : "quiz", "score" : 52.75657259917104 }, { "type" : "homework", "score" : 63.95107452142731 }, { "type" : "homework", "score" : 90.97004773806381 } ] } 81 | { "_id" : 80, "name" : "Echo Pippins", "scores" : [ { "type" : "exam", "score" : 27.77924608896123 }, { "type" : "quiz", "score" : 85.1861976198818 }, { "type" : "homework", "score" : 75.62132497619177 }, { "type" : "homework", "score" : 92.50671800180454 } ] } 82 | { "_id" : 81, "name" : "Linnie Weigel", "scores" : [ { "type" : "exam", "score" : 66.0349256424749 }, { "type" : "quiz", "score" : 67.57096025532985 }, { "type" : "homework", "score" : 28.82283586295377 }, { "type" : "homework", "score" : 38.33608066073369 } ] } 83 | { "_id" : 82, "name" : "Santiago Dollins", "scores" : [ { "type" : "exam", "score" : 33.48242310776701 }, { "type" : "quiz", "score" : 60.49199094204558 }, { "type" : "homework", "score" : 75.58039801610906 }, { "type" : "homework", "score" : 87.02564768982076 } ] } 84 | { "_id" : 83, "name" : "Tonisha Games", "scores" : [ { "type" : "exam", "score" : 29.13833807032966 }, { "type" : "quiz", 
"score" : 35.25054111123917 }, { "type" : "homework", "score" : 66.73047056293319 }, { "type" : "homework", "score" : 20.04018866516366 } ] } 85 | { "_id" : 84, "name" : "Timothy Harrod", "scores" : [ { "type" : "exam", "score" : 93.23020013495737 }, { "type" : "quiz", "score" : 49.06010347848443 }, { "type" : "homework", "score" : 74.00788699415295 }, { "type" : "homework", "score" : 43.46258375716373 } ] } 86 | { "_id" : 85, "name" : "Rae Kohout", "scores" : [ { "type" : "exam", "score" : 63.86894250781692 }, { "type" : "quiz", "score" : 55.81549538273672 }, { "type" : "homework", "score" : 59.13566011309437 }, { "type" : "homework", "score" : 5.956241581565125 } ] } 87 | { "_id" : 86, "name" : "Brain Lachapelle", "scores" : [ { "type" : "exam", "score" : 2.013473187690951 }, { "type" : "quiz", "score" : 45.01802394825918 }, { "type" : "homework", "score" : 63.74658824265818 }, { "type" : "homework", "score" : 88.04712649447521 } ] } 88 | { "_id" : 87, "name" : "Toshiko Sabella", "scores" : [ { "type" : "exam", "score" : 21.05570509531929 }, { "type" : "quiz", "score" : 26.43387483146958 }, { "type" : "homework", "score" : 18.96368757115227 }, { "type" : "homework", "score" : 42.80331214002496 } ] } 89 | { "_id" : 88, "name" : "Keesha Papadopoulos", "scores" : [ { "type" : "exam", "score" : 82.35397321850031 }, { "type" : "quiz", "score" : 3.064361273717464 }, { "type" : "homework", "score" : 98.46867828216399 }, { "type" : "homework", "score" : 92.99045377889979 } ] } 90 | { "_id" : 89, "name" : "Cassi Heal", "scores" : [ { "type" : "exam", "score" : 43.04310994985133 }, { "type" : "quiz", "score" : 0.006247360551892012 }, { "type" : "homework", "score" : 63.88558436723092 }, { "type" : "homework", "score" : 14.78122510144431 } ] } 91 | { "_id" : 90, "name" : "Osvaldo Hirt", "scores" : [ { "type" : "exam", "score" : 67.44931456608883 }, { "type" : "quiz", "score" : 41.77986504201782 }, { "type" : "homework", "score" : 76.30879472084027 }, { "type" : "homework", 
"score" : 63.0968728899343 } ] } 92 | { "_id" : 91, "name" : "Ty Barbieri", "scores" : [ { "type" : "exam", "score" : 38.43781607953586 }, { "type" : "quiz", "score" : 95.70340794272111 }, { "type" : "homework", "score" : 72.80272364761178 }, { "type" : "homework", "score" : 18.65883614099724 } ] } 93 | { "_id" : 92, "name" : "Ta Sikorski", "scores" : [ { "type" : "exam", "score" : 30.02140506101446 }, { "type" : "quiz", "score" : 23.89164976236439 }, { "type" : "homework", "score" : 31.70116444848026 }, { "type" : "homework", "score" : 61.82907698626848 } ] } 94 | { "_id" : 93, "name" : "Lucinda Vanderburg", "scores" : [ { "type" : "exam", "score" : 27.55843343656866 }, { "type" : "quiz", "score" : 11.45699271327768 }, { "type" : "homework", "score" : 75.53546873615787 }, { "type" : "homework", "score" : 3.316018745342064 } ] } 95 | { "_id" : 94, "name" : "Darby Wass", "scores" : [ { "type" : "exam", "score" : 6.867644836612586 }, { "type" : "quiz", "score" : 63.4908039680606 }, { "type" : "homework", "score" : 85.41865347441522 }, { "type" : "homework", "score" : 26.82623527074511 } ] } 96 | { "_id" : 95, "name" : "Omar Bowdoin", "scores" : [ { "type" : "exam", "score" : 8.58858127638702 }, { "type" : "quiz", "score" : 88.40377630359677 }, { "type" : "homework", "score" : 25.71387474240768 }, { "type" : "homework", "score" : 23.73786528217532 } ] } 97 | { "_id" : 96, "name" : "Milan Mcgavock", "scores" : [ { "type" : "exam", "score" : 69.11554341921843 }, { "type" : "quiz", "score" : 10.2027724707151 }, { "type" : "homework", "score" : 2.815944526266023 }, { "type" : "homework", "score" : 24.87545552041663 } ] } 98 | { "_id" : 97, "name" : "Maren Scheider", "scores" : [ { "type" : "exam", "score" : 94.4329121733663 }, { "type" : "quiz", "score" : 77.28263690107663 }, { "type" : "homework", "score" : 6.872536184428357 }, { "type" : "homework", "score" : 59.46326216544371 } ] } 99 | { "_id" : 98, "name" : "Carli Ector", "scores" : [ { "type" : "exam", "score" : 
88.18040268522668 }, { "type" : "quiz", "score" : 60.3111085581054 }, { "type" : "homework", "score" : 10.27537091986367 }, { "type" : "homework", "score" : 96.33612053785647 } ] } 100 | { "_id" : 99, "name" : "Jaclyn Morado", "scores" : [ { "type" : "exam", "score" : 70.27627082122453 }, { "type" : "quiz", "score" : 56.78470387064279 }, { "type" : "homework", "score" : 21.74758014553796 }, { "type" : "homework", "score" : 47.48518298423097 } ] } 101 | { "_id" : 100, "name" : "Demarcus Audette", "scores" : [ { "type" : "exam", "score" : 47.42608580155614 }, { "type" : "quiz", "score" : 44.83416623719906 }, { "type" : "homework", "score" : 19.85604968544429 }, { "type" : "homework", "score" : 39.01726616178844 } ] } 102 | { "_id" : 101, "name" : "Tania Hulett", "scores" : [ { "type" : "exam", "score" : 21.84617015735916 }, { "type" : "quiz", "score" : 53.8568257735492 }, { "type" : "homework", "score" : 14.77762216150431 }, { "type" : "homework", "score" : 79.60533635579307 } ] } 103 | { "_id" : 102, "name" : "Mercedez Garduno", "scores" : [ { "type" : "exam", "score" : 49.52877007656483 }, { "type" : "quiz", "score" : 44.55505066212384 }, { "type" : "homework", "score" : 12.14582444697345 }, { "type" : "homework", "score" : 81.50869746632009 } ] } 104 | { "_id" : 103, "name" : "Fleta Duplantis", "scores" : [ { "type" : "exam", "score" : 84.37799696030743 }, { "type" : "quiz", "score" : 15.95792143439528 }, { "type" : "homework", "score" : 77.80745176713172 }, { "type" : "homework", "score" : 14.24113076894851 } ] } 105 | { "_id" : 104, "name" : "Brittny Warwick", "scores" : [ { "type" : "exam", "score" : 69.54399888097534 }, { "type" : "quiz", "score" : 82.00469934215849 }, { "type" : "homework", "score" : 27.31675193678847 }, { "type" : "homework", "score" : 95.96446106607902 } ] } 106 | { "_id" : 105, "name" : "Shin Allbright", "scores" : [ { "type" : "exam", "score" : 62.28388941877533 }, { "type" : "quiz", "score" : 85.26863799439475 }, { "type" : "homework", 
"score" : 88.9947941542333 }, { "type" : "homework", "score" : 52.68629677727286 } ] } 107 | { "_id" : 106, "name" : "Karry Petrarca", "scores" : [ { "type" : "exam", "score" : 3.677125771067413 }, { "type" : "quiz", "score" : 40.39799056667404 }, { "type" : "homework", "score" : 14.38347127905983 }, { "type" : "homework", "score" : 0.3398440134644742 } ] } 108 | { "_id" : 107, "name" : "Beckie Millington", "scores" : [ { "type" : "exam", "score" : 69.52419218194589 }, { "type" : "quiz", "score" : 24.85411404016219 }, { "type" : "homework", "score" : 34.92039455520659 }, { "type" : "homework", "score" : 29.82403628902139 } ] } 109 | { "_id" : 108, "name" : "Mikaela Meidinger", "scores" : [ { "type" : "exam", "score" : 63.75595052560389 }, { "type" : "quiz", "score" : 59.52298111997963 }, { "type" : "homework", "score" : 88.66481441499843 }, { "type" : "homework", "score" : 0.08157369764142386 } ] } 110 | { "_id" : 109, "name" : "Flora Duell", "scores" : [ { "type" : "exam", "score" : 40.68238966626067 }, { "type" : "quiz", "score" : 46.77972040308903 }, { "type" : "homework", "score" : 65.90098502289938 }, { "type" : "homework", "score" : 69.29400057020965 } ] } 111 | { "_id" : 110, "name" : "Nobuko Linzey", "scores" : [ { "type" : "exam", "score" : 67.40792606687442 }, { "type" : "quiz", "score" : 58.58331128403415 }, { "type" : "homework", "score" : 47.44831568815929 }, { "type" : "homework", "score" : 19.27081566886746 } ] } 112 | { "_id" : 111, "name" : "Gennie Ratner", "scores" : [ { "type" : "exam", "score" : 62.74309964110307 }, { "type" : "quiz", "score" : 92.18013849235186 }, { "type" : "homework", "score" : 34.50565589246531 }, { "type" : "homework", "score" : 53.11174468047395 } ] } 113 | { "_id" : 112, "name" : "Myrtle Wolfinger", "scores" : [ { "type" : "exam", "score" : 73.93895528856032 }, { "type" : "quiz", "score" : 35.99397009906073 }, { "type" : "homework", "score" : 93.85826506506328 }, { "type" : "homework", "score" : 71.21962876453497 } ] } 
114 | { "_id" : 113, "name" : "", "scores" : [ { "type" : "exam", "score" : 77.57315913088024 }, { "type" : "quiz", "score" : 13.28135073340091 }, { "type" : "homework", "score" : 67.27527802263116 }, { "type" : "homework", "score" : 55.74198976046431 } ] } 115 | { "_id" : 114, "name" : "aimee Zank", "scores" : [ { "type" : "exam", "score" : 15.91636686717778 }, { "type" : "quiz", "score" : 96.12953798826392 }, { "type" : "homework", "score" : 18.92628947700149 }, { "type" : "homework", "score" : 18.52035674134503 } ] } 116 | { "_id" : 115, "name" : "Aurelia Menendez", "scores" : [ { "type" : "exam", "score" : 5.105728872755167 }, { "type" : "quiz", "score" : 7.375913405784407 }, { "type" : "homework", "score" : 19.64468598879037 }, { "type" : "homework", "score" : 92.62414866541212 } ] } 117 | { "_id" : 116, "name" : "Corliss Zuk", "scores" : [ { "type" : "exam", "score" : 76.45468797439878 }, { "type" : "quiz", "score" : 53.02642890026489 }, { "type" : "homework", "score" : 91.86573111689813 }, { "type" : "homework", "score" : 72.54824624119813 } ] } 118 | { "_id" : 117, "name" : "Bao Ziglar", "scores" : [ { "type" : "exam", "score" : 37.22753032391262 }, { "type" : "quiz", "score" : 52.75139192596129 }, { "type" : "homework", "score" : 64.06863625194231 }, { "type" : "homework", "score" : 33.15177269905534 } ] } 119 | { "_id" : 118, "name" : "Zachary Langlais", "scores" : [ { "type" : "exam", "score" : 62.20457822364115 }, { "type" : "quiz", "score" : 61.03733414415722 }, { "type" : "homework", "score" : 8.548735651522431 }, { "type" : "homework", "score" : 82.41688205392703 } ] } 120 | { "_id" : 119, "name" : "Wilburn Spiess", "scores" : [ { "type" : "exam", "score" : 52.36963021569788 }, { "type" : "quiz", "score" : 96.5715450678789 }, { "type" : "homework", "score" : 61.35034001494281 }, { "type" : "homework", "score" : 28.10477578379966 } ] } 121 | { "_id" : 120, "name" : "Jenette Flanders", "scores" : [ { "type" : "exam", "score" : 22.0445143239363 }, { 
"type" : "quiz", "score" : 22.43958080566196 }, { "type" : "homework", "score" : 48.18625673013256 }, { "type" : "homework", "score" : 63.38749542414235 } ] } 122 | { "_id" : 121, "name" : "Salena Olmos", "scores" : [ { "type" : "exam", "score" : 0.8007809823509016 }, { "type" : "quiz", "score" : 44.71135559183793 }, { "type" : "homework", "score" : 65.17342981800904 }, { "type" : "homework", "score" : 1.534599234830614 } ] } 123 | { "_id" : 122, "name" : "Daphne Zheng", "scores" : [ { "type" : "exam", "score" : 61.47626628718472 }, { "type" : "quiz", "score" : 21.99638326978255 }, { "type" : "homework", "score" : 88.2119997542672 }, { "type" : "homework", "score" : 14.73041165208587 } ] } 124 | { "_id" : 123, "name" : "Sanda Ryba", "scores" : [ { "type" : "exam", "score" : 10.62413290291121 }, { "type" : "quiz", "score" : 3.544356815821981 }, { "type" : "homework", "score" : 9.83679437441134 }, { "type" : "homework", "score" : 57.10297055409504 } ] } 125 | { "_id" : 124, "name" : "Denisha Cast", "scores" : [ { "type" : "exam", "score" : 2.723204808959712 }, { "type" : "quiz", "score" : 38.47056093169111 }, { "type" : "homework", "score" : 27.36840575569066 }, { "type" : "homework", "score" : 77.04035583743548 } ] } 126 | { "_id" : 125, "name" : "Marcus Blohm", "scores" : [ { "type" : "exam", "score" : 64.47719204148157 }, { "type" : "quiz", "score" : 23.68353886432903 }, { "type" : "homework", "score" : 31.19282777390965 }, { "type" : "homework", "score" : 48.87355812474999 } ] } 127 | { "_id" : 126, "name" : "Quincy Danaher", "scores" : [ { "type" : "exam", "score" : 40.53136904234401 }, { "type" : "quiz", "score" : 83.09270171511093 }, { "type" : "homework", "score" : 66.99694110405133 }, { "type" : "homework", "score" : 79.004550587978 } ] } 128 | { "_id" : 127, "name" : "Jessika Dagenais", "scores" : [ { "type" : "exam", "score" : 96.93459855769822 }, { "type" : "quiz", "score" : 95.67563715431869 }, { "type" : "homework", "score" : 70.78873021065969 }, { 
"type" : "homework", "score" : 66.55635416801556 } ] } 129 | { "_id" : 128, "name" : "Alix Sherrill", "scores" : [ { "type" : "exam", "score" : 43.67436243299881 }, { "type" : "quiz", "score" : 14.98112420690882 }, { "type" : "homework", "score" : 0.5302276471334189 }, { "type" : "homework", "score" : 23.62416821198536 } ] } 130 | { "_id" : 129, "name" : "Tambra Mercure", "scores" : [ { "type" : "exam", "score" : 62.61423873241083 }, { "type" : "quiz", "score" : 47.64776674251425 }, { "type" : "homework", "score" : 13.21061064113872 }, { "type" : "homework", "score" : 85.20578508528978 } ] } 131 | { "_id" : 130, "name" : "Dodie Staller", "scores" : [ { "type" : "exam", "score" : 52.16051124848157 }, { "type" : "quiz", "score" : 83.51563143820728 }, { "type" : "homework", "score" : 61.18853232500236 }, { "type" : "homework", "score" : 63.88857636557489 } ] } 132 | { "_id" : 131, "name" : "Fletcher Mcconnell", "scores" : [ { "type" : "exam", "score" : 24.98670635479149 }, { "type" : "quiz", "score" : 94.90809903126159 }, { "type" : "homework", "score" : 6.631220621711343 }, { "type" : "homework", "score" : 29.37194792367135 } ] } 133 | { "_id" : 132, "name" : "Verdell Sowinski", "scores" : [ { "type" : "exam", "score" : 20.1442549902647 }, { "type" : "quiz", "score" : 47.66457425945161 }, { "type" : "homework", "score" : 77.87844292368344 }, { "type" : "homework", "score" : 17.90304994248164 } ] } 134 | { "_id" : 133, "name" : "Gisela Levin", "scores" : [ { "type" : "exam", "score" : 15.88727528055548 }, { "type" : "quiz", "score" : 91.49884857295594 }, { "type" : "homework", "score" : 16.56032169309347 }, { "type" : "homework", "score" : 1.704262924559419 } ] } 135 | { "_id" : 134, "name" : "Tressa Schwing", "scores" : [ { "type" : "exam", "score" : 54.53947018434061 }, { "type" : "quiz", "score" : 22.26443529294689 }, { "type" : "homework", "score" : 89.29532364756331 }, { "type" : "homework", "score" : 73.46945821298166 } ] } 136 | { "_id" : 135, "name" : "Rosana 
Vales", "scores" : [ { "type" : "exam", "score" : 15.73156258820246 }, { "type" : "quiz", "score" : 33.70281347493842 }, { "type" : "homework", "score" : 62.79875994037851 }, { "type" : "homework", "score" : 26.00910752231358 } ] } 137 | { "_id" : 136, "name" : "Margart Vitello", "scores" : [ { "type" : "exam", "score" : 99.33685767140612 }, { "type" : "quiz", "score" : 1.25322762871457 }, { "type" : "homework", "score" : 66.22827571617455 }, { "type" : "homework", "score" : 32.11998011108809 } ] } 138 | { "_id" : 137, "name" : "Tamika Schildgen", "scores" : [ { "type" : "exam", "score" : 4.433956226109692 }, { "type" : "quiz", "score" : 65.50313785402548 }, { "type" : "homework", "score" : 89.5950384993947 }, { "type" : "homework", "score" : 54.75994689226145 } ] } 139 | { "_id" : 138, "name" : "Jesusa Rickenbacker", "scores" : [ { "type" : "exam", "score" : 15.6237624645333 }, { "type" : "quiz", "score" : 7.856092232737 }, { "type" : "homework", "score" : 92.06889864132863 }, { "type" : "homework", "score" : 42.60399593657424 } ] } 140 | { "_id" : 139, "name" : "Rudolph Domingo", "scores" : [ { "type" : "exam", "score" : 33.02956040417582 }, { "type" : "quiz", "score" : 35.99586495205484 }, { "type" : "homework", "score" : 19.00539011248412 }, { "type" : "homework", "score" : 91.06098699300175 } ] } 141 | { "_id" : 140, "name" : "Jonie Raby", "scores" : [ { "type" : "exam", "score" : 7.307863391324043 }, { "type" : "quiz", "score" : 21.72514968277675 }, { "type" : "homework", "score" : 51.52263848501853 }, { "type" : "homework", "score" : 73.8284408290604 } ] } 142 | { "_id" : 141, "name" : "Edgar Sarkis", "scores" : [ { "type" : "exam", "score" : 65.99888014434269 }, { "type" : "quiz", "score" : 58.75598946266268 }, { "type" : "homework", "score" : 37.45690195380933 }, { "type" : "homework", "score" : 75.06379354463246 } ] } 143 | { "_id" : 142, "name" : "Laureen Salomone", "scores" : [ { "type" : "exam", "score" : 42.54322973844196 }, { "type" : "quiz", "score" 
: 33.03152379449381 }, { "type" : "homework", "score" : 77.52357320933667 }, { "type" : "homework", "score" : 77.25060450002907 } ] } 144 | { "_id" : 143, "name" : "Gwyneth Garling", "scores" : [ { "type" : "exam", "score" : 44.29553481758053 }, { "type" : "quiz", "score" : 23.15599504527296 }, { "type" : "homework", "score" : 65.51373966284551 }, { "type" : "homework", "score" : 84.83695219376807 } ] } 145 | { "_id" : 144, "name" : "Kaila Deibler", "scores" : [ { "type" : "exam", "score" : 20.85988856264308 }, { "type" : "quiz", "score" : 73.51120532285645 }, { "type" : "homework", "score" : 6.35209942166356 }, { "type" : "homework", "score" : 88.72483530139125 } ] } 146 | { "_id" : 145, "name" : "Tandra Meadows", "scores" : [ { "type" : "exam", "score" : 19.07796402740767 }, { "type" : "quiz", "score" : 7.63846325490759 }, { "type" : "homework", "score" : 60.84655775785094 }, { "type" : "homework", "score" : 1.688891309809448 } ] } 147 | { "_id" : 146, "name" : "Gwen Honig", "scores" : [ { "type" : "exam", "score" : 35.99646382910844 }, { "type" : "quiz", "score" : 74.46323507534565 }, { "type" : "homework", "score" : 90.95590422002779 }, { "type" : "homework", "score" : 19.80158922408524 } ] } 148 | { "_id" : 147, "name" : "Sadie Jernigan", "scores" : [ { "type" : "exam", "score" : 6.14281392478545 }, { "type" : "quiz", "score" : 44.94102013771302 }, { "type" : "homework", "score" : 4.404845344749941 }, { "type" : "homework", "score" : 89.94407975401369 } ] } 149 | { "_id" : 148, "name" : "Carli Belvins", "scores" : [ { "type" : "exam", "score" : 84.43618167501189 }, { "type" : "quiz", "score" : 1.702113040528119 }, { "type" : "homework", "score" : 22.47397850465176 }, { "type" : "homework", "score" : 88.48032660881387 } ] } 150 | { "_id" : 149, "name" : "Synthia Labelle", "scores" : [ { "type" : "exam", "score" : 11.06312649271668 }, { "type" : "quiz", "score" : 89.27462706564148 }, { "type" : "homework", "score" : 19.79277113037917 }, { "type" : "homework", 
"score" : 41.1722010153017 } ] } 151 | { "_id" : 150, "name" : "Eugene Magdaleno", "scores" : [ { "type" : "exam", "score" : 69.64543341032858 }, { "type" : "quiz", "score" : 17.46202326917462 }, { "type" : "homework", "score" : 9.963963073384708 }, { "type" : "homework", "score" : 39.41502498794787 } ] } 152 | { "_id" : 151, "name" : "Meagan Oakes", "scores" : [ { "type" : "exam", "score" : 75.02808260234913 }, { "type" : "quiz", "score" : 35.45524188731927 }, { "type" : "homework", "score" : 75.84754202828454 }, { "type" : "homework", "score" : 32.8699662015027 } ] } 153 | { "_id" : 152, "name" : "Richelle Siemers", "scores" : [ { "type" : "exam", "score" : 52.0158789874646 }, { "type" : "quiz", "score" : 19.25549934746802 }, { "type" : "homework", "score" : 68.33217408510437 }, { "type" : "homework", "score" : 3.572967434977992 } ] } 154 | { "_id" : 153, "name" : "Mariette Batdorf", "scores" : [ { "type" : "exam", "score" : 91.38690728885123 }, { "type" : "quiz", "score" : 39.98831767858929 }, { "type" : "homework", "score" : 22.4218288031231 }, { "type" : "homework", "score" : 51.59702098442595 } ] } 155 | { "_id" : 154, "name" : "Rachell Aman", "scores" : [ { "type" : "exam", "score" : 94.50988306850947 }, { "type" : "quiz", "score" : 5.68414255121964 }, { "type" : "homework", "score" : 64.46720717616572 }, { "type" : "homework", "score" : 47.34684739970935 } ] } 156 | { "_id" : 155, "name" : "Aleida Elsass", "scores" : [ { "type" : "exam", "score" : 42.89558347656537 }, { "type" : "quiz", "score" : 94.10647660402866 }, { "type" : "homework", "score" : 9.36808988965816 }, { "type" : "homework", "score" : 30.56402201379193 } ] } 157 | { "_id" : 156, "name" : "Kayce Kenyon", "scores" : [ { "type" : "exam", "score" : 54.00824880446614 }, { "type" : "quiz", "score" : 19.20300722190935 }, { "type" : "homework", "score" : 71.57649363606814 }, { "type" : "homework", "score" : 61.11962751036258 } ] } 158 | { "_id" : 157, "name" : "Ernestine Macfarland", "scores" : [ { 
"type" : "exam", "score" : 9.666623747888858 }, { "type" : "quiz", "score" : 98.76040135775126 }, { "type" : "homework", "score" : 38.72724984277037 }, { "type" : "homework", "score" : 51.67453757397309 } ] } 159 | { "_id" : 158, "name" : "Houston Valenti", "scores" : [ { "type" : "exam", "score" : 68.36209185504055 }, { "type" : "quiz", "score" : 15.83819664395878 }, { "type" : "homework", "score" : 81.7258704821604 }, { "type" : "homework", "score" : 37.28292198755799 } ] } 160 | { "_id" : 159, "name" : "Terica Brugger", "scores" : [ { "type" : "exam", "score" : 97.82203054104301 }, { "type" : "quiz", "score" : 91.56280485763772 }, { "type" : "homework", "score" : 0.1196067678406854 }, { "type" : "homework", "score" : 62.01976292987356 } ] } 161 | { "_id" : 160, "name" : "Lady Lefevers", "scores" : [ { "type" : "exam", "score" : 89.14702404133767 }, { "type" : "quiz", "score" : 11.85715160788611 }, { "type" : "homework", "score" : 26.58243041911046 }, { "type" : "homework", "score" : 87.70817474845785 } ] } 162 | { "_id" : 161, "name" : "Kurtis Jiles", "scores" : [ { "type" : "exam", "score" : 38.84932631249875 }, { "type" : "quiz", "score" : 75.6856190089661 }, { "type" : "homework", "score" : 39.33927779851575 }, { "type" : "homework", "score" : 54.8262895255851 } ] } 163 | { "_id" : 162, "name" : "Barbera Lippman", "scores" : [ { "type" : "exam", "score" : 10.1210778879972 }, { "type" : "quiz", "score" : 57.39236107118298 }, { "type" : "homework", "score" : 56.36039761834183 }, { "type" : "homework", "score" : 50.67103627747744 } ] } 164 | { "_id" : 163, "name" : "Dinah Sauve", "scores" : [ { "type" : "exam", "score" : 9.660849614328693 }, { "type" : "quiz", "score" : 0.710026283123355 }, { "type" : "homework", "score" : 10.15169758905915 }, { "type" : "homework", "score" : 64.85706587155985 } ] } 165 | { "_id" : 164, "name" : "Alica Pasley", "scores" : [ { "type" : "exam", "score" : 41.3852820348269 }, { "type" : "quiz", "score" : 87.0183839032626 }, { "type" 
: "homework", "score" : 26.97036086827311 }, { "type" : "homework", "score" : 37.22917544696978 } ] } 166 | { "_id" : 165, "name" : "Elizabet Kleine", "scores" : [ { "type" : "exam", "score" : 23.35599596646158 }, { "type" : "quiz", "score" : 45.42989961046475 }, { "type" : "homework", "score" : 59.29421526983006 }, { "type" : "homework", "score" : 28.26581014556331 } ] } 167 | { "_id" : 166, "name" : "Tawana Oberg", "scores" : [ { "type" : "exam", "score" : 79.24755285478162 }, { "type" : "quiz", "score" : 97.28127199858804 }, { "type" : "homework", "score" : 45.37121216955846 }, { "type" : "homework", "score" : 67.0528222080174 } ] } 168 | { "_id" : 167, "name" : "Malisa Jeanes", "scores" : [ { "type" : "exam", "score" : 40.68676040665008 }, { "type" : "quiz", "score" : 52.60826688242043 }, { "type" : "homework", "score" : 94.67979508129564 }, { "type" : "homework", "score" : 56.90401843569644 } ] } 169 | { "_id" : 168, "name" : "Joel Rueter", "scores" : [ { "type" : "exam", "score" : 21.78981361637835 }, { "type" : "quiz", "score" : 1.182228345865832 }, { "type" : "homework", "score" : 23.12554402902722 }, { "type" : "homework", "score" : 43.70843975739338 } ] } 170 | { "_id" : 169, "name" : "Tresa Sinha", "scores" : [ { "type" : "exam", "score" : 52.22632020277269 }, { "type" : "quiz", "score" : 65.68701091428014 }, { "type" : "homework", "score" : 86.80410157346574 }, { "type" : "homework", "score" : 77.08119670943164 } ] } 171 | { "_id" : 170, "name" : "Danika Loeffler", "scores" : [ { "type" : "exam", "score" : 80.13802901122058 }, { "type" : "quiz", "score" : 9.613195588726075 }, { "type" : "homework", "score" : 88.15801147882929 }, { "type" : "homework", "score" : 19.395996312327 } ] } 172 | { "_id" : 171, "name" : "Chad Rahe", "scores" : [ { "type" : "exam", "score" : 81.24054522370292 }, { "type" : "quiz", "score" : 17.44929152365297 }, { "type" : "homework", "score" : 53.35511776938777 }, { "type" : "homework", "score" : 82.77870021356301 } ] } 173 | { 
"_id" : 172, "name" : "Joaquina Arbuckle", "scores" : [ { "type" : "exam", "score" : 35.43562368815135 }, { "type" : "quiz", "score" : 89.74640983145014 }, { "type" : "homework", "score" : 4.717041035570746 }, { "type" : "homework", "score" : 99.13868686848834 } ] } 174 | { "_id" : 173, "name" : "Vinnie Auerbach", "scores" : [ { "type" : "exam", "score" : 57.26312067710243 }, { "type" : "quiz", "score" : 20.63583040849144 }, { "type" : "homework", "score" : 77.02638482252677 }, { "type" : "homework", "score" : 53.31631231243156 } ] } 175 | { "_id" : 174, "name" : "Dusti Lemmond", "scores" : [ { "type" : "exam", "score" : 91.51968055194875 }, { "type" : "quiz", "score" : 50.37682668957234 }, { "type" : "homework", "score" : 51.53939113583016 }, { "type" : "homework", "score" : 34.42622923380332 } ] } 176 | { "_id" : 175, "name" : "Grady Zemke", "scores" : [ { "type" : "exam", "score" : 10.37320113489379 }, { "type" : "quiz", "score" : 10.51344428386458 }, { "type" : "homework", "score" : 6.53353459742646 }, { "type" : "homework", "score" : 85.47180043794621 } ] } 177 | { "_id" : 176, "name" : "Vina Matsunaga", "scores" : [ { "type" : "exam", "score" : 73.30054989074031 }, { "type" : "quiz", "score" : 4.21754550016783 }, { "type" : "homework", "score" : 8.217818112853726 }, { "type" : "homework", "score" : 56.31150858550771 } ] } 178 | { "_id" : 177, "name" : "Rubie Winton", "scores" : [ { "type" : "exam", "score" : 36.1767454709986 }, { "type" : "quiz", "score" : 89.39738121365069 }, { "type" : "homework", "score" : 90.83326208217305 }, { "type" : "homework", "score" : 90.05379113305032 } ] } 179 | { "_id" : 178, "name" : "Whitley Fears", "scores" : [ { "type" : "exam", "score" : 20.84454374176408 }, { "type" : "quiz", "score" : 57.14851257871499 }, { "type" : "homework", "score" : 99.77237745070993 }, { "type" : "homework", "score" : 97.95928979563497 } ] } 180 | { "_id" : 179, "name" : "Gena Riccio", "scores" : [ { "type" : "exam", "score" : 81.49070346172086 }, { 
"type" : "quiz", "score" : 23.12653402998139 }, { "type" : "homework", "score" : 96.54590960898932 }, { "type" : "homework", "score" : 2.928015597639488 } ] } 181 | { "_id" : 180, "name" : "Kim Xu", "scores" : [ { "type" : "exam", "score" : 29.1596029917098 }, { "type" : "quiz", "score" : 74.41836270655918 }, { "type" : "homework", "score" : 49.42399313436395 }, { "type" : "homework", "score" : 56.64965514703727 } ] } 182 | { "_id" : 181, "name" : "Merissa Mann", "scores" : [ { "type" : "exam", "score" : 0.7300279717432967 }, { "type" : "quiz", "score" : 39.49170592908128 }, { "type" : "homework", "score" : 58.58116890946589 }, { "type" : "homework", "score" : 60.49619334485811 } ] } 183 | { "_id" : 182, "name" : "Jenise Mcguffie", "scores" : [ { "type" : "exam", "score" : 83.68438201130127 }, { "type" : "quiz", "score" : 73.79931763764928 }, { "type" : "homework", "score" : 89.57200947426745 }, { "type" : "homework", "score" : 45.23301509928876 } ] } 184 | { "_id" : 183, "name" : "Cody Strouth", "scores" : [ { "type" : "exam", "score" : 32.99854612126559 }, { "type" : "quiz", "score" : 78.61720316992681 }, { "type" : "homework", "score" : 13.9668168128685 }, { "type" : "homework", "score" : 89.62847560459466 } ] } 185 | { "_id" : 184, "name" : "Harriett Velarde", "scores" : [ { "type" : "exam", "score" : 41.47988283148075 }, { "type" : "quiz", "score" : 95.69493673358075 }, { "type" : "homework", "score" : 80.97431054020797 }, { "type" : "homework", "score" : 83.03916048182315 } ] } 186 | { "_id" : 185, "name" : "Kam Senters", "scores" : [ { "type" : "exam", "score" : 49.8822537074033 }, { "type" : "quiz", "score" : 45.29515361387067 }, { "type" : "homework", "score" : 68.88048980292801 }, { "type" : "homework", "score" : 62.04872055245107 } ] } 187 | { "_id" : 186, "name" : "Leonida Lafond", "scores" : [ { "type" : "exam", "score" : 8.125073097960179 }, { "type" : "quiz", "score" : 0.2017888852605676 }, { "type" : "homework", "score" : 90.13081857264544 }, { 
"type" : "homework", "score" : 76.23920499960322 } ] } 188 | { "_id" : 187, "name" : "Devorah Smartt", "scores" : [ { "type" : "exam", "score" : 23.94616611315642 }, { "type" : "quiz", "score" : 13.27371116063025 }, { "type" : "homework", "score" : 48.88469934384718 }, { "type" : "homework", "score" : 63.17281121561749 } ] } 189 | { "_id" : 188, "name" : "Leola Lundin", "scores" : [ { "type" : "exam", "score" : 60.314725741828 }, { "type" : "quiz", "score" : 41.12327471818652 }, { "type" : "homework", "score" : 74.8699176311771 }, { "type" : "homework", "score" : 37.66083751900988 } ] } 190 | { "_id" : 189, "name" : "Tonia Surace", "scores" : [ { "type" : "exam", "score" : 67.93405589675187 }, { "type" : "quiz", "score" : 31.49721116485943 }, { "type" : "homework", "score" : 82.36495908047985 }, { "type" : "homework", "score" : 25.40065606052529 } ] } 191 | { "_id" : 190, "name" : "Adrien Renda", "scores" : [ { "type" : "exam", "score" : 64.16109192679477 }, { "type" : "quiz", "score" : 66.93730600935531 }, { "type" : "homework", "score" : 28.13821151700559 }, { "type" : "homework", "score" : 96.05603402270469 } ] } 192 | { "_id" : 191, "name" : "Efrain Claw", "scores" : [ { "type" : "exam", "score" : 94.67153825229884 }, { "type" : "quiz", "score" : 82.30087932110595 }, { "type" : "homework", "score" : 67.17105893041146 }, { "type" : "homework", "score" : 75.86075840047938 } ] } 193 | { "_id" : 192, "name" : "Len Treiber", "scores" : [ { "type" : "exam", "score" : 39.19832917406515 }, { "type" : "quiz", "score" : 98.71679252899352 }, { "type" : "homework", "score" : 29.99356456040445 }, { "type" : "homework", "score" : 44.8228929481132 } ] } 194 | { "_id" : 193, "name" : "Mariela Sherer", "scores" : [ { "type" : "exam", "score" : 47.67196715489599 }, { "type" : "quiz", "score" : 41.55743490493954 }, { "type" : "homework", "score" : 70.4612811769744 }, { "type" : "homework", "score" : 48.60803337116214 } ] } 195 | { "_id" : 194, "name" : "Echo Pippins", "scores" : 
[ { "type" : "exam", "score" : 18.09013691507853 }, { "type" : "quiz", "score" : 35.00306967250408 }, { "type" : "homework", "score" : 57.80044701194817 }, { "type" : "homework", "score" : 80.17965154316731 } ] } 196 | { "_id" : 195, "name" : "Linnie Weigel", "scores" : [ { "type" : "exam", "score" : 52.44578368517977 }, { "type" : "quiz", "score" : 90.7775054046383 }, { "type" : "homework", "score" : 11.75008382913026 }, { "type" : "homework", "score" : 11.27477189117074 } ] } 197 | { "_id" : 196, "name" : "Santiago Dollins", "scores" : [ { "type" : "exam", "score" : 52.04052571137036 }, { "type" : "quiz", "score" : 33.63300076481705 }, { "type" : "homework", "score" : 4.629511012591447 }, { "type" : "homework", "score" : 78.79257377604428 } ] } 198 | { "_id" : 197, "name" : "Tonisha Games", "scores" : [ { "type" : "exam", "score" : 38.51269589995049 }, { "type" : "quiz", "score" : 31.16287577231703 }, { "type" : "homework", "score" : 79.15856355963004 }, { "type" : "homework", "score" : 56.17504143517339 } ] } 199 | { "_id" : 198, "name" : "Timothy Harrod", "scores" : [ { "type" : "exam", "score" : 11.9075674046519 }, { "type" : "quiz", "score" : 20.51879961777022 }, { "type" : "homework", "score" : 55.85952928204192 }, { "type" : "homework", "score" : 64.85650354990375 } ] } 200 | { "_id" : 199, "name" : "Rae Kohout", "scores" : [ { "type" : "exam", "score" : 82.11742562118049 }, { "type" : "quiz", "score" : 49.61295450928224 }, { "type" : "homework", "score" : 28.86823689842918 }, { "type" : "homework", "score" : 5.861613903793295 } ] } 201 | -------------------------------------------------------------------------------- /course-m101p/hw3-2-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 3.2 2 | 3 | Making your blog accept posts [Full description](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_3_Schema_Design/529e0064e2d42347509fb3a6/) 4 | 5 | ## Attachments 6 | 7
| * [hw3-2-blogPostDAO.py](hw3-2-blogPostDAO.py) modified script 8 | * [hw3-2and3-3.cb3a025ac81c.zip](hw3-2and3-3.cb3a025ac81c.zip) blog application provided by MongoDB 9 | 10 | ## Problem 11 | 12 | In this homework you will be enhancing the blog project to insert entries into the posts collection. After this, the blog will work: it will allow you to add blog posts with a title, body and tags, and have them added to the posts collection properly. 13 | 14 | We have provided the code that creates users and allows you to log in (the assignment from last week). To get started, please download [hw3-2and3-3.cb3a025ac81c.zip](hw3-2and3-3.cb3a025ac81c.zip) from the Download Handout link and unpack it. You will be using these files for this homework and for [HW 3.3](hw3-3-answer.md). 15 | 16 | The areas where you need to add code are marked with XXX. You need only touch the [blogPostDAO.py](hw3-2-blogPostDAO.py) file. There are three locations for you to add code for this problem. Scan that file for `XXX` to see where to work.
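For reference, the document inserted into the posts collection carries a few more fields than just the title, body and tags. The sketch below is my own reconstruction from the assignment description, not the handout's exact code: `build_post` is a hypothetical helper (the handout does this work inline in `insert_entry`), and the permalink scheme (whitespace becomes underscores, other non-word characters are dropped) is an assumption.

```python
import datetime
import re

def build_post(title, body, tags, author):
    """Sketch of a blog post document for the posts collection.

    Hypothetical helper -- field names beyond title/body/tags are
    assumptions about the handout's schema.
    """
    # Derive the permalink from the title: trim, turn whitespace into
    # underscores, then drop any remaining non-word character.
    permalink = re.sub(r"\W", "", re.sub(r"\s", "_", title.strip()))
    return {
        "title": title,
        "author": author,
        "body": body,
        "permalink": permalink,
        "tags": tags,
        "date": datetime.datetime.utcnow(),  # lets get_posts sort newest first
    }

post = build_post("Hello, World of MongoDB!", "First post.", ["mongodb"], "andre")
print(post["permalink"])
# → Hello_World_of_MongoDB
```

With a document shaped like this, `find_one({"permalink": permalink})` can look a post up directly, and sorting on `date` descending returns the most recent posts first.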
17 | 18 | ## Résolution 19 | 20 | Ajouter un nouveau post: 21 | 22 | class BlogPostDAO: 23 | … 24 | 25 | def insert_entry( self, title, post, tags_array, author): 26 | … 27 | try: 28 | # XXX HW 3.2 Work Here to insert the post 29 | self.posts.insert( post, j=True) 30 | … 31 | 32 | Retourner 10 derniers posts: 33 | 34 | def get_posts( self, num_posts): 35 | # XXX HW 3.2 Work here to get the posts 36 | cursor = self.posts.find().sort([( "date", DESCENDING)]).limit( num_posts) 37 | … 38 | 39 | Retourner un post selon son permalink: 40 | 41 | def get_post_by_permalink( self, permalink): 42 | # XXX Work here to retrieve the specified post 43 | post = self.posts.find_one( { "permalink": permalink }) 44 | … 45 | 46 | ## Réponse 47 | 48 | (venv)~/Sources/learning-mongodb/course-m101p/hw3-2and3-3 $ python validate.py 49 | Welcome to the HW 3.2 and HW 3.3 validation tester 50 | Trying to create a test user MHHqhyv 51 | Found the test user MHHqhyv in the users collection 52 | User creation successful. 53 | Trying to login for test user MHHqhyv 54 | User login successful. 55 | Trying to submit a post with title zxCJIVOmrlwzSJPkcpOVOdKknvIzRW 56 | Submission of single post successful 57 | Trying to submit a post with title RUxxZqlZzumJLHumcfOhQquhpdGfVb 58 | Submission of second post successful 59 | Trying to grab the blog home page at url http://localhost:8082/ 60 | Block index looks good. 61 | Found blog post in posts collection 62 | Tests Passed for HW 3.2. 
Your HW 3.2 validation code is 89jklfsjrlk209jfks2j2ek 63 | Trying to submit a blog comment for post with title zxCJIVOmrlwzSJPkcpOVOdK 64 | Can't add blog comments (so HW 3.3 not yet complete) 65 | 66 | --> 89jklfsjrlk209jfks2j2ek 67 | 68 | ## Préparation 69 | 70 | Ajouter `reloader=True` à `bottle.run` dans `blog.py` (voir [Bottle Tutorial › Development › Autoreloading](http://bottlepy.org/docs/dev/tutorial.html#auto-reloading)): 71 | 72 | … 73 | bottle.debug( True) 74 | bottle.run( host='localhost', port=8082, reloader=True) 75 | 76 | Démarrer le serveur: 77 | 78 | (venv) course-m101p/hw3-2and3-3 $ python blog.py 79 | Bottle v0.12.3 server starting up (using WSGIRefServer())... 80 | Listening on http://localhost:8082/ 81 | Hit Ctrl-C to quit. 82 | -------------------------------------------------------------------------------- /course-m101p/hw3-2-blogPostDAO.py: -------------------------------------------------------------------------------- 1 | __author__ = 'aje' 2 | 3 | 4 | # 5 | # Copyright (c) 2008 - 2013 10gen, Inc. 6 | # 7 | # Licensed under the Apache License, Version 2.0 (the "License"); 8 | # you may not use this file except in compliance with the License. 9 | # You may obtain a copy of the License at 10 | # 11 | # http://www.apache.org/licenses/LICENSE-2.0 12 | # 13 | # Unless required by applicable law or agreed to in writing, software 14 | # distributed under the License is distributed on an "AS IS" BASIS, 15 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16 | # See the License for the specific language governing permissions and 17 | # limitations under the License. 
18 | # 19 | # 20 | 21 | import sys 22 | import re 23 | import datetime 24 | from pymongo import DESCENDING 25 | 26 | # The Blog Post Data Access Object handles interactions with the Posts collection 27 | class BlogPostDAO: 28 | 29 | # constructor for the class 30 | def __init__(self, database): 31 | self.db = database 32 | self.posts = database.posts 33 | 34 | # inserts the blog entry and returns a permalink for the entry 35 | def insert_entry(self, title, post, tags_array, author): 36 | print "inserting blog entry", title, post 37 | 38 | # fix up the permalink to not include whitespace 39 | 40 | exp = re.compile('\W') # match anything not alphanumeric 41 | whitespace = re.compile('\s') 42 | temp_title = whitespace.sub("_",title) 43 | permalink = exp.sub('', temp_title) 44 | 45 | # Build a new post 46 | post = {"title": title, 47 | "author": author, 48 | "body": post, 49 | "permalink":permalink, 50 | "tags": tags_array, 51 | "comments": [], 52 | "date": datetime.datetime.utcnow()} 53 | 54 | # now insert the post 55 | try: 56 | # XXX HW 3.2 Work Here to insert the post 57 | self.posts.insert( post, j=True) 58 | print "Inserting the post" 59 | except: 60 | print "Error inserting post" 61 | print "Unexpected error:", sys.exc_info()[0] 62 | 63 | return permalink 64 | 65 | # returns an array of num_posts posts, reverse ordered 66 | def get_posts(self, num_posts): 67 | 68 | # XXX HW 3.2 Work here to get the posts 69 | cursor = self.posts.find().sort([( "date", DESCENDING)]).limit( num_posts) 70 | 71 | l = [] 72 | 73 | for post in cursor: 74 | post['date'] = post['date'].strftime("%A, %B %d %Y at %I:%M%p") # fix up date 75 | if 'tags' not in post: 76 | post['tags'] = [] # fill it in if its not there already 77 | if 'comments' not in post: 78 | post['comments'] = [] 79 | 80 | l.append({'title':post['title'], 'body':post['body'], 'post_date':post['date'], 81 | 'permalink':post['permalink'], 82 | 'tags':post['tags'], 83 | 'author':post['author'], 84 | 
'comments':post['comments']}) 85 | 86 | return l 87 | 88 | 89 | # find a post corresponding to a particular permalink 90 | def get_post_by_permalink(self, permalink): 91 | 92 | # XXX Work here to retrieve the specified post 93 | post = self.posts.find_one( { "permalink": permalink }) 94 | 95 | if post is not None: 96 | # fix up date 97 | post['date'] = post['date'].strftime("%A, %B %d %Y at %I:%M%p") 98 | 99 | return post 100 | 101 | # add a comment to a particular blog post 102 | def add_comment(self, permalink, name, email, body): 103 | 104 | comment = {'author': name, 'body': body} 105 | 106 | if (email != ""): 107 | comment['email'] = email 108 | 109 | try: 110 | last_error = {'n':-1} # this is here so the code runs before you fix the next line 111 | # XXX HW 3.3 Work here to add the comment to the designated post 112 | 113 | 114 | return last_error['n'] # return the number of documents updated 115 | 116 | except: 117 | print "Could not update the collection, error" 118 | print "Unexpected error:", sys.exc_info()[0] 119 | return 0 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | -------------------------------------------------------------------------------- /course-m101p/hw3-2and3-3.cb3a025ac81c.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/course-m101p/hw3-2and3-3.cb3a025ac81c.zip -------------------------------------------------------------------------------- /course-m101p/hw3-3-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 3.3 2 | 3 | Making your blog accept comments [Description complète](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_3_Schema_Design/529e0109e2d42347509fb3aa/) 4 | 5 | ## Pièces jointes 6 | 7 | * [hw3-3-blogPostDAO.py](hw3-3-blogPostDAO.py) script modifié 8 | * 
[hw3-2and3-3.cb3a025ac81c.zip](hw3-2and3-3.cb3a025ac81c.zip) application blog fournie par MongoDB 9 | 10 | ## Problème 11 | 12 | In this homework you will add code to your blog so that it accepts comments. You will be using the same code as you downloaded for [HW 3.2](hw3-2-answer.md). 13 | 14 | Once again, the area where you need to work is marked with an `XXX` in the [blogPostDAO.py](hw3-3-blogPostDAO.py) file. There is one location. You don't need to figure out how to retrieve comments for this homework because the code you did in 3.2 already pulls the entire blog post (unless you specifically projected to eliminate the comments) and we gave you the code that pulls them out of the JSON document. 15 | 16 | This assignment has fairly little code, but it's a little more subtle than the previous assignment because you are going to be manipulating an array within the Mongo document. 17 | 18 | ## Résolution 19 | 20 | Ajouter un nouveau commentaire à un post: 21 | 22 | class BlogPostDAO: 23 | … 24 | 25 | def add_comment(self, permalink, name, email, body): 26 | … 27 | try: 28 | # XXX HW 3.3 Work here to add the comment to the designated post 29 | self.posts.update( { "permalink": permalink}, \ 30 | { "$push": { "comments": comment }}) 31 | last_error = self.db.command("getLastError") # PyMongo equivalent of the shell's db.runCommand 32 | return last_error['n'] # return the number of documents updated 33 | … 34 | 35 | ## Réponse 36 | 37 | (venv) course-m101p/hw3-2and3-3 $ python validate.py 38 | Welcome to the HW 3.2 and HW 3.3 validation tester 39 | … 40 | Trying to submit a blog comment for post with title ldXXwoAXbAeSPtVHXTWKOhNRBMrOWy 41 | Successfully added blog comments 42 | Tests Passed for HW 3.3. Your HW 3.3 validation code is jk1310vn2lkv0j2kf0jkfs 43 | 44 | --> jk1310vn2lkv0j2kf0jkfs 45 | 46 | ## Préparation 47 | 48 | Cf. «Préparation» dans réponse [HW 3.2](hw3-2-answer.md#préparation).
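The array manipulation above is the crux of this assignment: `$push` appends one element to the embedded `comments` array, and the `getLastError` document reports in `n` how many documents the update matched. A pure-Python sketch of that behaviour, requiring no MongoDB server (the in-memory `posts` list and the `push_comment` helper are illustrative, not part of the handout):

```python
# In-memory stand-in for the posts collection (illustrative only).
posts = [
    {"permalink": "US_Constitution", "title": "US Constitution", "comments": []},
    {"permalink": "Another_Post", "title": "Another Post", "comments": []},
]

def push_comment(posts, permalink, comment):
    """Emulate update({permalink: ...}, {$push: {comments: comment}}).

    Returns n, the number of documents matched, as getLastError would."""
    n = 0
    for post in posts:
        if post["permalink"] == permalink:
            post["comments"].append(comment)  # $push appends to the array
            n += 1
            break  # update() without multi=True modifies at most one document
    return n

n = push_comment(posts, "US_Constitution", {"author": "Olivier", "body": "Nice!"})
print(n)  # 1
```

A return value of `0` is how the DAO detects that no post carried the requested permalink.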
-------------------------------------------------------------------------------- /course-m101p/hw3-3-blogPostDAO.py: -------------------------------------------------------------------------------- 1 | __author__ = 'aje' 2 | 3 | 4 | # 5 | # Copyright (c) 2008 - 2013 10gen, Inc. 6 | # 7 | # Licensed under the Apache License, Version 2.0 (the "License"); 8 | # you may not use this file except in compliance with the License. 9 | # You may obtain a copy of the License at 10 | # 11 | # http://www.apache.org/licenses/LICENSE-2.0 12 | # 13 | # Unless required by applicable law or agreed to in writing, software 14 | # distributed under the License is distributed on an "AS IS" BASIS, 15 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16 | # See the License for the specific language governing permissions and 17 | # limitations under the License. 18 | # 19 | # 20 | 21 | import sys 22 | import re 23 | import datetime 24 | from pymongo import DESCENDING 25 | 26 | # The Blog Post Data Access Object handles interactions with the Posts collection 27 | class BlogPostDAO: 28 | 29 | # constructor for the class 30 | def __init__(self, database): 31 | self.db = database 32 | self.posts = database.posts 33 | 34 | # inserts the blog entry and returns a permalink for the entry 35 | def insert_entry(self, title, post, tags_array, author): 36 | print "inserting blog entry", title, post 37 | 38 | # fix up the permalink to not include whitespace 39 | 40 | exp = re.compile('\W') # match anything not alphanumeric 41 | whitespace = re.compile('\s') 42 | temp_title = whitespace.sub("_",title) 43 | permalink = exp.sub('', temp_title) 44 | 45 | # Build a new post 46 | post = {"title": title, 47 | "author": author, 48 | "body": post, 49 | "permalink":permalink, 50 | "tags": tags_array, 51 | "comments": [], 52 | "date": datetime.datetime.utcnow()} 53 | 54 | # now insert the post 55 | try: 56 | # XXX HW 3.2 Work Here to insert the post 57 | self.posts.insert( post, j=True) 
58 | print "Inserting the post" 59 | except: 60 | print "Error inserting post" 61 | print "Unexpected error:", sys.exc_info()[0] 62 | 63 | return permalink 64 | 65 | # returns an array of num_posts posts, reverse ordered 66 | def get_posts(self, num_posts): 67 | 68 | # XXX HW 3.2 Work here to get the posts 69 | cursor = self.posts.find().sort([( "date", DESCENDING)]).limit( num_posts) 70 | 71 | l = [] 72 | 73 | for post in cursor: 74 | post['date'] = post['date'].strftime("%A, %B %d %Y at %I:%M%p") # fix up date 75 | if 'tags' not in post: 76 | post['tags'] = [] # fill it in if its not there already 77 | if 'comments' not in post: 78 | post['comments'] = [] 79 | 80 | l.append({'title':post['title'], 'body':post['body'], 'post_date':post['date'], 81 | 'permalink':post['permalink'], 82 | 'tags':post['tags'], 83 | 'author':post['author'], 84 | 'comments':post['comments']}) 85 | 86 | return l 87 | 88 | 89 | # find a post corresponding to a particular permalink 90 | def get_post_by_permalink(self, permalink): 91 | 92 | # XXX Work here to retrieve the specified post 93 | post = self.posts.find_one( { "permalink": permalink }) 94 | 95 | if post is not None: 96 | # fix up date 97 | post['date'] = post['date'].strftime("%A, %B %d %Y at %I:%M%p") 98 | 99 | return post 100 | 101 | # add a comment to a particular blog post 102 | def add_comment(self, permalink, name, email, body): 103 | 104 | comment = {'author': name, 'body': body} 105 | 106 | if (email != ""): 107 | comment['email'] = email 108 | 109 | try: 110 | # XXX HW 3.3 Work here to add the comment to the designated post 111 | self.posts.update( { "permalink": permalink}, \ 112 | { "$push": { "comments": comment }}) 113 | last_error = self.db.command("getLastError") # PyMongo equivalent of the shell's db.runCommand 114 | return last_error['n'] # return the number of documents updated 115 | 116 | except: 117 | print "Could not update the collection, error" 118 | print "Unexpected error:", sys.exc_info()[0] 119 | return 0 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127
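For reference, the permalink logic in `insert_entry` above (substitute whitespace with `_`, then strip any remaining non-word character) can be exercised without a database. A minimal Python 3 transcription of that slug logic:

```python
import re

def make_permalink(title):
    """Mirror insert_entry's permalink logic: whitespace -> '_', then drop \\W."""
    temp_title = re.sub(r"\s", "_", title)  # whitespace becomes underscores
    return re.sub(r"\W", "", temp_title)    # \W strips anything outside [a-zA-Z0-9_]

print(make_permalink("Hello, World!"))  # Hello_World
```

Note that `\W` matches non-word characters, so underscores (including those just introduced) survive the second substitution.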
| 128 | -------------------------------------------------------------------------------- /course-m101p/hw4-1-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 4.1 2 | 3 | Which of the following queries can utilize an index? [Description complète](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_4_Performance/52aa2f48e2d4232c54a18ac9/) 4 | 5 | ## Problème 6 | 7 | Given a collection with the following indexes (voir [description complète](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_4_Performance/52aa2f48e2d4232c54a18ac9/), ou _Préparation_ ci-dessous), which of the following queries can utilize an index? 8 | 9 | 1. `db.products.find({'brand':"GE"})` 10 | 2. `db.products.find({'brand':"GE"}).sort({price:1})` 11 | 3. `db.products.find({$and:[{price:{$gt:30}},{price:{$lt:50}}]}).sort({brand:1})` 12 | 4. `db.products.find({brand:'GE'}).sort({category:1, brand:-1}).explain()` 13 | 14 | ## Résolution 15 | 16 | Première requête fait un full scan de la collection: 17 | 18 | > db.products.find({'brand':"GE"}).explain() 19 | { 20 | "cursor" : "BasicCursor", 21 | "isMultiKey" : false, 22 | "n" : 0, 23 | "nscannedObjects" : 0, 24 | "nscanned" : 0, 25 | "nscannedObjectsAllPlans" : 0, 26 | "nscannedAllPlans" : 0, 27 | "scanAndOrder" : false, 28 | "indexOnly" : false, 29 | "nYields" : 0, 30 | "nChunkSkips" : 0, 31 | "millis" : 0, 32 | "indexBounds" : { 33 | 34 | }, 35 | "server" : "QUARK-N.local:27017" 36 | } 37 | 38 | Deuxième requête utilise l'index sur le prix pour le tri (index inversé): 39 | 40 | > db.products.find({'brand':"GE"}).sort({price:1}).explain() 41 | { 42 | "cursor" : "BtreeCursor price_-1 reverse", 43 | "isMultiKey" : false, 44 | "n" : 0, 45 | "nscannedObjects" : 2, 46 | "nscanned" : 2, 47 | "nscannedObjectsAllPlans" : 4, 48 | "nscannedAllPlans" : 4, 49 | "scanAndOrder" : false, 50 | "indexOnly" : false, 51 | "nYields" : 0, 52 | "nChunkSkips" : 
0, 53 | "millis" : 0, 54 | "indexBounds" : { 55 | "price" : [ 56 | [ 57 | { 58 | "$minElement" : 1 59 | }, 60 | { 61 | "$maxElement" : 1 62 | } 63 | ] 64 | ] 65 | }, 66 | "server" : "QUARK-N.local:27017" 67 | } 68 | 69 | Troisième requête utilise index sur le prix pour la recherche: 70 | 71 | > db.products.find({$and:[{price:{$gt:30}},{price:{$lt:50}}]}).sort({brand:1}).explain() 72 | { 73 | "cursor" : "BtreeCursor price_-1", 74 | "isMultiKey" : false, 75 | "n" : 0, 76 | "nscannedObjects" : 0, 77 | "nscanned" : 0, 78 | "nscannedObjectsAllPlans" : 0, 79 | "nscannedAllPlans" : 1, 80 | "scanAndOrder" : true, 81 | "indexOnly" : false, 82 | "nYields" : 0, 83 | "nChunkSkips" : 0, 84 | "millis" : 0, 85 | "indexBounds" : { 86 | "price" : [ 87 | [ 88 | 50, 89 | 30 90 | ] 91 | ] 92 | }, 93 | "server" : "QUARK-N.local:27017" 94 | } 95 | 96 | Quatrième requête n'utilise aucun index, ni pour la recherche, ni pour le tri: 97 | 98 | > db.products.find({brand:'GE'}).sort({category:1, brand:-1}).explain() 99 | { 100 | "cursor" : "BasicCursor", 101 | "isMultiKey" : false, 102 | "n" : 0, 103 | "nscannedObjects" : 2, 104 | "nscanned" : 2, 105 | "nscannedObjectsAllPlans" : 2, 106 | "nscannedAllPlans" : 2, 107 | "scanAndOrder" : true, 108 | "indexOnly" : false, 109 | "nYields" : 0, 110 | "nChunkSkips" : 0, 111 | "millis" : 0, 112 | "indexBounds" : { 113 | 114 | }, 115 | "server" : "QUARK-N.local:27017" 116 | } 117 | 118 | ## Réponse 119 | 120 | 2. `db.products.find({'brand':"GE"}).sort({price:1})` utilise l'index `price_-1` pour le tri (reversed index) 121 | 3. 
`db.products.find({$and:[{price:{$gt:30}},{price:{$lt:50}}]}).sort({brand:1})` utilise l'index `price_-1` pour la recherche 122 | 123 | ## Préparation 124 | 125 | $ mongo test 126 | > db.products.insert({ sku: 123, price: 100.00, description: "Refrigerator H-100", category: "Fridge", brand: "Hoover", reviews: [ { author: "Olivier", comment: "Freeeezing"}]}) 127 | > db.products.insert({ sku: 236, price: 120.00, description: "Refrigerator E-100", category: "Fridge", brand: "GE", reviews: [ { author: "Olivier", comment: "Still freeeezing"}]}) 128 | > db.products.ensureIndex({ sku: 1},{ unique: true}) 129 | > db.products.ensureIndex({ price: -1}) 130 | > db.products.ensureIndex({ description: 1}) 131 | > db.products.ensureIndex({ category: 1, brand: 1}) 132 | > db.products.ensureIndex({ "reviews.author": 1}) 133 | > db.products.getIndexes() 134 | [ 135 | { 136 | "v" : 1, 137 | "key" : { 138 | "_id" : 1 139 | }, 140 | "ns" : "test.products", 141 | "name" : "_id_" 142 | }, 143 | { 144 | "v" : 1, 145 | "key" : { 146 | "sku" : 1 147 | }, 148 | "unique" : true, 149 | "ns" : "test.products", 150 | "name" : "sku_1" 151 | }, 152 | { 153 | "v" : 1, 154 | "key" : { 155 | "price" : -1 156 | }, 157 | "ns" : "test.products", 158 | "name" : "price_-1" 159 | }, 160 | { 161 | "v" : 1, 162 | "key" : { 163 | "description" : 1 164 | }, 165 | "ns" : "test.products", 166 | "name" : "description_1" 167 | }, 168 | { 169 | "v" : 1, 170 | "key" : { 171 | "category" : 1, 172 | "brand" : 1 173 | }, 174 | "ns" : "test.products", 175 | "name" : "category_1_brand_1" 176 | }, 177 | { 178 | "v" : 1, 179 | "key" : { 180 | "reviews.author" : 1 181 | }, 182 | "ns" : "test.products", 183 | "name" : "reviews.author_1" 184 | } 185 | ] 186 | -------------------------------------------------------------------------------- /course-m101p/hw4-2-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 4.2 2 | 3 | What can you infer from the following `explain` 
output? [Description complète](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_4_Performance/52aa32dfe2d4232c54a18ace/) 4 | 5 | ## Problème 6 | 7 | Suppose you have a collection called tweets whose documents contain information about the created_at time of the tweet and the user's followers_count at the time they issued the tweet. What can you infer from the following explain output? 8 | 9 | > db.tweets.find({"user.followers_count":{$gt:1000}}) 10 | .sort({"created_at" : 1 }) 11 | .limit(10).skip(5000) 12 | .explain() 13 | { 14 | "cursor" : "BtreeCursor created_at_-1 reverse", 15 | "isMultiKey" : false, 16 | "n" : 10, 17 | "nscannedObjects" : 46462, 18 | "nscanned" : 46462, 19 | "nscannedObjectsAllPlans" : 49763, 20 | "nscannedAllPlans" : 49763, 21 | "scanAndOrder" : false, 22 | "indexOnly" : false, 23 | "nYields" : 0, 24 | "nChunkSkips" : 0, 25 | "millis" : 205, 26 | "indexBounds" : { 27 | "created_at" : [ 28 | [ 29 | { 30 | "$minElement" : 1 31 | }, 32 | { 33 | "$maxElement" : 1 34 | } 35 | ] 36 | ] 37 | }, 38 | "server" : "localhost.localdomain:27017" 39 | } 40 | 41 | 1. This query performs a collection scan. 42 | 2. The query uses an index to determine the order in which to return result documents. 43 | 3. The query uses an index to determine which documents match. 44 | 4. The query returns 46462 documents. 45 | 5. The query visits 46462 documents. 46 | 6. The query is a "covered index query". 47 | 48 | ## Réponse 49 | 50 | 1. False ⟸ `"cursor" : "BtreeCursor created_at_-1 reverse"` indicates an index scan; a collection scan would report `BasicCursor` 51 | 2. True ⟸ `"cursor" : "BtreeCursor created_at_-1 reverse"` 52 | 3. False ⟸ index on field `created_at` was used for sorting, but none was used for searching 53 | 4. False ⟸ `"n" : 10` means 10 documents were returned, not 46462 54 | 5. True ⟸ `"nscannedObjects" : 46462` 55 | 6.
False ⟸ `"indexOnly" : false`; it would be true if it was a [covered index query](http://docs.mongodb.org/manual/reference/method/cursor.explain/#explain.indexOnly) 56 | -------------------------------------------------------------------------------- /course-m101p/hw4-3-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 4.3 2 | 3 | Making the blog fast [Description complète](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_4_Performance/52aa333ae2d4232c54a18ad2/) 4 | 5 | ## Pièces jointes 6 | 7 | * [hw4-3.f798e22df86d.zip](hw4-3.f798e22df86d.zip) application blog mise à dispo par Mongo 8 | 9 | ## Problème 10 | 11 | Your assignment is to make the following blog pages fast: 12 | 13 | * The blog home page 14 | * The page that displays blog posts by tag (http://localhost:8082/tag/whatever) 15 | * The page that displays a blog entry by permalink (http://localhost:8082/post/permalink) 16 | 17 | By fast, we mean that indexes should be in place to satisfy these queries such that we only need to scan the number of documents we are going to return. 18 | 19 | To figure out what queries you need to optimize, you can read the `blog.py` code and see what it does to display those pages. Isolate those queries and use `explain` to explore. 20 | 21 | Once you have added the indexes to make those pages fast run the following: 22 | 23 | python validate.py 24 | 25 | ## Résolution 26 | 27 | A quoi ressemblent les données? 
28 | 29 | (venv) course-m101p $ mongo 30 | > use blog 31 | > db.posts.findOne() 32 | { 33 | "_id" : ObjectId("50ab0f8bbcf1bfe2536dc3f8"), 34 | "body" : "We the People of the United States […]", 35 | "permalink" : "TqoHkbHyUgLyCKWgPLqm", 36 | "author" : "machine", 37 | "title" : "US Constitution", 38 | "tags" : [ "trade", "fowl", "forecast", "pest", "professor", 39 | "willow", "rise", "brace", "ink", "road" ], 40 | "comments" : [ 41 | { "body" : "Lorem ipsum […]", 42 | "email" : "LkvwxBUa@LOPSfuBf.com", 43 | "author" : "Tonia Surace" }, 44 | … ], 45 | "date" : ISODate("2012-11-20T05:05:15.229Z") 46 | } 47 | 48 | ### Blog home page 49 | 50 | Situation initiale: 51 | 52 | (venv) course-m101p/hw4-3 $ python validate.py 53 | … 54 | Sorry, executing the query to display the home page is too slow. 55 | We should be scanning no more than 10 documents. You scanned 1000 56 | here is the output from explain 57 | { … u'millis': 8, u'n': 10, … u'nscanned': 1000 … } 58 | Sorry, the query to display the blog home page is too slow. 59 | 60 | Dans le log de `mongod`, après accès à `http://localhost:8082/`: 61 | 62 | [conn2] query blog.posts query: { $query: {}, $orderby: { date: -1 } } ntoreturn:10 ntoskip:0 nscanned:1000 scanAndOrder:1 keyUpdates:0 locks(micros) W:2213 r:359909 nreturned:10 reslen:309772 360ms 63 | 64 | Ajout index inverse sur le champ `date`: 65 | 66 | (venv) course-m101p $ mongo blog 67 | > db.posts.ensureIndex({ date: -1}) 68 | 69 | (venv) course-m101p/hw4-3 $ python validate.py 70 | … Home page is super fast. Nice job. … 71 | 72 | ### Blog entry per permalink 73 | 74 | Situation initiale: 75 | 76 | (venv) course-m101p/hw4-3 $ python validate.py 77 | … 78 | Sorry, executing the query to retrieve a post by permalink is too slow 79 | We should be scanning no more than 1 documents. You scanned 1000 80 | here is the output from explain 81 | { … u'n': 1, u'nscanned': 1000, … } 82 | Sorry, the query to retrieve a blog post by permalink is too slow. 
83 | 84 | Dans le log de Mongo, après accès à `http://localhost:8082/post/TLxrBfyxTZjqOKqxgnUP`: 85 | 86 | [conn2] query blog.posts query: { permalink: "TLxrBfyxTZjqOKqxgnUP" } ntoreturn:1 ntoskip:0 nscanned:1000 keyUpdates:0 locks(micros) r:5522 nreturned:1 reslen:27991 5ms 87 | 88 | Ajout index sur le champ `permalink`: 89 | 90 | > db.posts.ensureIndex({ permalink: 1}, {unique: true}) 91 | 92 | (venv) course-m101p/hw4-3 $python validate.py 93 | … Home page is super fast. Nice job. … 94 | 95 | ### Blog posts per tag 96 | 97 | Situation initiale: 98 | 99 | (venv) course-m101p/hw4-3 $python validate.py 100 | … 101 | Sorry, executing the query to retrieve posts by tag is too slow. 102 | We should be scanning no more than 10 documents. You scanned 690 103 | here is the output from explain 104 | { … u'cursor': u'BtreeCursor date_-1', u'n': 10, … u'nscanned': 690, … } 105 | Sorry, the query to retrieve all posts with a certain tag is too slow 106 | 107 | Ajout index sur le champ `tags` 108 | 109 | > db.posts.ensureIndex({ tags: 1}) 110 | 111 | Première tentative, on a réduit à 13, mais cela ne suffit pas: 112 | 113 | (venv) course-m101p/hw4-3 $python validate.py 114 | … 115 | Sorry, executing the query to retrieve posts by tag is too slow. 116 | We should be scanning no more than 10 documents. 
You scanned 13 117 | here is the output from explain 118 | { … u'cursor': u'BtreeCursor tags_1', 119 | u'indexBounds': {u'tags': [[u'sphynx', u'sphynx']]}, 120 | u'indexOnly': False, u'isMultiKey': True, u'millis': 1, 121 | u'n': 10, … u'nscanned': 13, u'nscannedObjects': 13, … } 122 | Sorry, the query to retrieve all posts with a certain tag is too slow 123 | 124 | Il n'utilise en effet que l'index sur les tags, pour la recherche, mais pas celui sur la date, pour limiter les résultats retournés: 125 | 126 | > db.posts.find({ tags: 'sphynx'}).sort({ date: -1}).limit( 10).explain() 127 | { … "cursor" : "BtreeCursor tags_1", … "n" : 10, "nscannedObjects" : 13, … } 128 | 129 | Combien d'objets? 130 | 131 | > db.posts.count({ tags: 'sphynx'}) 132 | 13 133 | 134 | Dans le log de Mongo (la requête n'est pas claire): 135 | 136 | [conn5] command blog.$cmd command: { aggregate: "posts", pipeline: [ { $project: { tags: 1 } }, { $unwind: "$tags" }, { $group: { count: { $sum: 1 }, _id: "$tags" } }, { $sort: { count: -1 } }, { $limit: 10 } ] } ntoreturn:1 keyUpdates:0 locks(micros) r:12884 reslen:407 15ms 137 | 138 | Code source: 139 | 140 | (blog.py) 141 | … 142 | @bottle.route('/tag/') 143 | def posts_by_tag(tag="notfound"): 144 | … 145 | l = posts.get_posts_by_tag(tag, 10) 146 | … 147 | 148 | (blogPostDAO.py) 149 | … 150 | def get_posts_by_tag(self, tag, num_posts): 151 | cursor = self.posts.find({'tags':tag}).sort('date', direction=-1).limit(num_posts) 152 | … 153 | 154 | Ajout d'un hint? Ca ne joue pas avec les deux champs: 155 | 156 | > db.posts.find({ tags: 'sphynx'}).sort({ date: -1}).limit( 10) 157 | .hint({ tags: 1, date: -1}).explain() 158 | error: { "$err" : "bad hint", "code" : 10113 } 159 | 160 | Que veut dire cette erreur? 
En essayant la même chose dans le code source: 161 | 162 | (blogPostDAO.py) 163 | def get_posts_by_tag(self, tag, num_posts): 164 | cursor = self.posts.find({'tags':tag}).sort('date', direction=-1).limit(num_posts) 165 | cursor = cursor.hint( [ ("date", pymongo.DESCENDING), ("tags", pymongo.ASCENDING)]) 166 | 167 | … la validation montre qu'il envisage trois plans, mais n'utilise pas les index sur les tags et la date de façon combinée: 168 | 169 | (venv) course-m101p/hw4-3 $ python validate.py 170 | … 171 | Sorry, executing the query to retrieve posts by tag is too slow. 172 | We should be scanning no more than 10 documents. You scanned 13 173 | here is the output from explain 174 | {u'allPlans': [{u'cursor': u'BtreeCursor tags_1', … u'nscannedObjects': 13}, 175 | {u'cursor': u'BtreeCursor date_-1', … u'nscannedObjects': 13}, 176 | {u'cursor': u'BasicCursor', … u'nscannedObjects': 13}], 177 | 178 | ah! L'erreur obtenue signifie qu'il n'y a pas d'index pour satisfaire le hint. On ajoute l'index combiné et ça joue avec le hint: 179 | 180 | > db.posts.ensureIndex( { tags:1, date: -1}) 181 | > db.posts.find({ tags: 'sphynx'}).sort({ date: -1}).limit( 10) 182 | .hint({ tags: 1, date: -1}).explain() 183 | { "cursor" : "BtreeCursor tags_1_date_-1", … "n" : 10, "nscannedObjects" : 10 … } 184 | 185 | Ca joue d'ailleurs sans même le hint, l'index combiné était la clé: 186 | 187 | > db.posts.find({ tags: 'sphynx'}).sort({ date: -1}).limit( 10).explain() 188 | { "cursor" : "BtreeCursor tags_1_date_-1", … "n" : 10, "nscannedObjects" : 10 … } 189 | 190 | ## Réponse 191 | 192 | (venv) course-m101p/hw4-3 $ python validate.py 193 | Welcome to the HW 4.3 Checker. 
My job is to make sure you added the indexes 194 | that make the blog fast in the following three situations 195 | When showing the home page 196 | When fetching a particular post 197 | When showing all posts for a particular tag 198 | Data looks like it is properly loaded into the posts collection 199 | Home page is super fast. Nice job. 200 | 201 | Blog retrieval by permalink is super fast. Nice job. 202 | 203 | Blog retrieval by tag is super fast. Nice job. 204 | 205 | Tests Passed for HW 4.3. Your HW 4.3 validation code is 893jfns29f728fn29f20f2 206 | 207 | --> 893jfns29f728fn29f20f2 208 | 209 | ## Préparation 210 | 211 | Chargement des données 212 | 213 | (venv) course-m101p $ mongo 214 | > use blog 215 | > db.posts.drop() 216 | > exit 217 | 218 | (venv) course-m101p $ mongoimport -d blog -c posts < hw4-3/posts.json 219 | connected to: 127.0.0.1 220 | Tue Mar 4 18:36:12.354 500 166/second 221 | Tue Mar 4 18:36:15.268 1000 166/second 222 | Tue Mar 4 18:36:15.268 check 9 1000 223 | Tue Mar 4 18:36:15.268 imported 1000 objects 224 | 225 | Serveur Mongo en mode _profiling_ 226 | 227 | (venv) course-m101p $ tail -f /usr/local/var/log/mongodb/mongo.log & 228 | … 229 | (venv) course-m101p $ mongod --dbpath data/db --profile 1 --slowms 2 230 | … 231 | 232 | Application 233 | 234 | (venv) course-m101p $ python blog.py 235 | -------------------------------------------------------------------------------- /course-m101p/hw4-3.f798e22df86d.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/course-m101p/hw4-3.f798e22df86d.zip -------------------------------------------------------------------------------- /course-m101p/hw4-4-answer.md: -------------------------------------------------------------------------------- 1 | # Homework 4.4 2 | 3 | Analyze a profile log taken from a mongoDB instance [Description 
complète](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_4_Performance/52cf29bae2d423570a05b92d/) 4 | 5 | ## Pièces jointes 6 | 7 | * [hw4-4-sysprofile.acfbb9617420.json](hw4-4-sysprofile.acfbb9617420.json) jeu de données mis à dispo par Mongo 8 | 9 | ## Problème 10 | 11 | In this problem you will analyze a profile log taken from a mongoDB instance. Query the profile data, looking for all queries to the `students` collection in the database `school2`, sorted in order of decreasing latency. What is the latency of the longest running operation to the collection, in milliseconds? 12 | 13 | ## Résolution 14 | 15 | $ mongo 16 | > db.profile.find({ op:"query", ns:/school2.students/}) 17 | .sort({ millis: -1}).limit( 1) 18 | { "_id" : ObjectId("531604231f113c79eae77957"), "ts" : ISODate("2012-11-20T20:09:49.862Z"), "op" : "query", "ns" : "school2.students", "query" : { "student_id" : 80 }, "ntoreturn" : 0, "ntoskip" : 0, "nscanned" : 10000000, "keyUpdates" : 0, "numYield" : 5, "lockStats" : { "timeLockedMicros" : { "r" : 19776550, "w" : 0 }, "timeAcquiringMicros" : { "r" : 4134067, "w" : 5 } }, "nreturned" : 10, "responseLength" : 2350, "millis" : 15820, "client" : "127.0.0.1", "user" : "" } 19 | 20 | ## Réponse 21 | 22 | 15820 23 | 24 | ## Préparation 25 | 26 | (venv) course-m101p $ mongoimport -d m101 -c profile < hw4-4-sysprofile.acfbb9617420.json 27 | connected to: 127.0.0.1 28 | Tue Mar 4 17:49:39.868 check 9 1515 29 | Tue Mar 4 17:49:39.868 imported 1515 objects 30 | -------------------------------------------------------------------------------- /course-m101p/week1-introduction.md: -------------------------------------------------------------------------------- 1 | # M101P · Week 1 · Introduction 2 | 3 | Reading notes and homework of course [Week 1: Introduction](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_1_Introduction/). 
4 | 5 | ## Homework 6 | 7 | * Homework 1.1: Installed MongoDB, Python Virtual Environment, PyMongo driver and Bottle modules; restored the Mongo database `m101` from a [provided database dump](hw1-1.184820ec29b6.zip); performed a `db.hw1.findOne()`, returning a single document with the answer **42** 8 | * Homework 1.2: ran the Python script [hw1-2.py](hw1-2.21394489c9ad.py) that connected to and retrieved the value **1815** from the DB 9 | * Homework 1.3: ran the Python Bottle script [hw1-3.py](hw1-3.e594fc84d4ac.py) that did the same, delivering the result **53** as the answer to an HTTP GET request on `http://localhost:8080/hw1/50`. 10 | 11 | ## Recap 12 | 13 | Overview of MongoDB characteristics; install of MongoDB and intro to the Mongo shell; documents and intro to schema design; intro to the course project, a blog with posts, tags and comments; recap of JSON, the Python language (lists and dicts, iterating over those, functions, exceptions) and overview of the Bottle framework (see below); inserting a doc and handling exceptions with PyMongo. 14 | 15 | Remember that Python dictionaries do not retain the order of their keys, whereas Javascript dictionaries do. For that reason, PyMongo idioms use arrays and tuples in some cases (sorting), rather than dictionaries. 16 | 17 | Functions of the [Bottle Python framework](http://bottlepy.org/docs/dev/index.html) that were introduced: starting the listener ``bottle.debug`` and ``bottle.run``, URL handlers ``bottle.route/get/post``, templates with arguments ``bottle.template`` (arg list or dict), handling form content ``bottle.post`` and ``bottle.request.forms.get``, using cookies ``bottle.request.get_cookie`` and ``bottle.response.set_cookie`` and redirecting ``bottle.redirect``.
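The dictionary-ordering caveat above deserves a concrete illustration. Below is a pure-Python sketch (no MongoDB connection required; the sample documents are invented) of how a PyMongo-style sort specification, an ordered list of `(field, direction)` tuples, behaves:

```python
# Pure-Python sketch (not PyMongo itself) of a multi-key sort spec given as
# an ordered list of (field, direction) tuples, mirroring
# .sort([("student_id", ASCENDING), ("score", DESCENDING)]).
ASCENDING, DESCENDING = 1, -1  # same values as pymongo.ASCENDING/DESCENDING

def apply_sort_spec(docs, spec):
    """Sort a list of dicts by a PyMongo-style ordered sort spec."""
    # Apply the keys in reverse: Python's sort is stable, so the
    # first (field, direction) pair ends up dominating.
    result = list(docs)
    for field, direction in reversed(spec):
        result.sort(key=lambda d: d[field], reverse=(direction == DESCENDING))
    return result

docs = [
    {"student_id": 2, "score": 75},
    {"student_id": 1, "score": 99},
    {"student_id": 1, "score": 60},
]
ordered = apply_sort_spec(docs, [("student_id", ASCENDING), ("score", DESCENDING)])
print(ordered[0])  # {'student_id': 1, 'score': 99}
```

PyMongo of course performs the sort server-side; the point here is only that a list of tuples, unlike a dict, carries the precedence of the sort keys.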
18 | 19 | In the exercises, add `reloader=True` to the `bottle.run` call in `blog.py`, so that it automatically reloads changes to the scripts (see [Bottle Tutorial › Development › Autoreloading](http://bottlepy.org/docs/dev/tutorial.html#auto-reloading)): 20 | 21 | … 22 | bottle.debug( True) 23 | bottle.run( host='localhost', port=8082, reloader=True) 24 | -------------------------------------------------------------------------------- /course-m101p/week2-crud.md: -------------------------------------------------------------------------------- 1 | # M101P · Week 2 · CRUD 2 | 3 | Reading notes and homework related to [Week 2: CRUD](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_2_CRUD/). A hearty part of the course, about 5 hours of study and 45 min. of homework. 4 | 5 | ## Homework 6 | 7 | * [Homework 2.1](hw2-1-answer.md) Find all exam scores greater than or equal to 65, and sort those scores from lowest to highest `db.grades.find( { "score": { $gte: 65}}).sort( { "score": 1})` 8 | * [Homework 2.2](hw2-2-answer.md) Identity of the student with the highest average in the class 9 | * [Homework 2.3](hw2-3-answer.md) Adapt `userDAO.py` to add a new user upon sign-up and validate a login by retrieving the right user document. 10 | 11 | ## Recap 12 | 13 | ### CRUD, Mongo Shell and BSON types 14 | 15 | * CRUD is IFUR: `insert()`, `find()`, `update()`, `remove()` 16 | * 1. **sort** 2. **skip** 3.
**limit**; sort order is _lexicographic_, according to the binary UTF-8 encoding of strings 17 | * Recap on JS properties and dictionary lookup 18 | * Data is stored in [BSON](http://bsonspec.org) format in MongoDB, which is a superset of JSON, notably adding _ObjectId_, _date_, _integer 32/64_ and _binary_ types 19 | * MongoDB Shell command-line editing tips, namely history and autocomplete; the _shell_ has support for the various MongoDB types: `Date()` (gets converted to `ISODate()` automatically), `NumberInt()`, `NumberLong()` for instance 20 | * ObjectId is a UUID; each doc must have a PK, which is immutable in the DB 21 | 22 | ### Inserting 23 | 24 | * Inserting: `db.fruits.insert({ name: "apple", color: "red", shape: "round"})` 25 | 26 | ### Querying 27 | 28 | * `.find(…).pretty()` displays results in a more human-readable form 29 | * Finding: `find()`, `findOne()`, excluding fields from output, specifying query by example; for instance: `db.scores.find({ type: "essay", score: 50}, { student: true, _id: false})` 30 | * All search ops are strongly typed (no implicit type conversion), while field contents are dynamically typed; when using polymorphic field contents, beware!
such searches can be refined with `$type` (for instance, `.find( { name: { $type: 2 } })` with _type_ according to [BSON spec](http://bsonspec.org/#/specification) of element types; type 2 being a string) and `$exists` (for instance, `.find( { profession: { $exists: true | false }})`) 31 | * Querying for string patterns with regular expressions: `.find( { name: { $regex: "e$" }} )`; regular expressions tend not to be efficiently optimized, except in a few cases, such as expressions starting with a caret (for instance, `$regex: "^A"`) 32 | * Combining operators: `db.users.find({"name": {$regex: "q"}, "email": {$exists: true}})` 33 | * Combining expressions on the same field with `$or: [ …, …]` and `$and: [ …, …]`: `db.scores.find({ $or: [ { score: { $lt: 50}}, { score: { $gt: 90}} ] })`; if the expressions are bound to different fields, `$and` is not needed (comma-separated criteria are implicitly ANDed) 34 | * Beware! Javascript will silently parse the following, although it is probably an incomplete query expression: `db.scores.find( { score: { $lt: 50}, score: { $gt: 90}})` will return all documents with score greater than 90 – the second property definition replaces the first 35 | * Querying arrays: matching is polymorphic over arrays (top level only); `db.products.find( { tags: "shiny"})` would match `{ _id: 42, name: "wiz-o-matic", tags: [ "awesome", "shiny", "green"]}` as well as `{ _id: 42, name: "snap-o-lux", tags: "shiny"}` 36 | * Querying for all given values with `$all: [ …, …]`: `db.accounts.find( { favorites: { $all: [ "pretzels", "beer"] }})` the array values have to be a subset of the values of the field that is queried for, in any order 37 | * Querying for any of the given values with `$in: […, …]`: `db.accounts.find( { name: { $in: [ "Howard", "John" ]}})` 38 | * Combining both `$all` and `$in`: `db.users.find( { friends: { $all: [ "Joe" , "Bob" ] }, favorites: { $in: [ "running" , "pickles" ]}})` will match `{ name : "Cliff" , friends : [ "Pete" , "Joe" , "Tom" , "Bob" ] , favorites : [
"pickles", "cycling" ] }` 39 | * Querying on fields in embedded documents, beware: in `db.users.find( { email: { work: "richard@10gen.com", personal: "kreuter@home.org"}})` the order of the fields in the embedded document has to match the order of the fields in the database! subdocuments in queries by example are compared byte to byte to the database's contents. So the following query won't return results if all subdocs have a `"personal"` field: ``db.users.find( { email: { work: "richard@10gen.com"}})`` 40 | * Queries with Dot notation: use **dot notation** to recurse into subdocuments and look for **any** matching field: `db.users.find( { "email.work": "richard@10gen.com" })`; dot notation allows to reach fields inside a subdocument, without any knowledge of the surrounding fields; to query for all products that cost more than 10,000 and that have a rating of 5 or better: `db.catalog.find( { price: { $gt: 10000}, "reviews.rating": { $gte: 5}})` 41 | * Query cursors, to iterate programmatically: `cur = db.people.find(); null; while( cur.hasNext()) printjson( cur.next());`; cursors have many methods: `cur.limit(5)`, `cur.sort({ name: -1})`, `cur.skip(2)`; these modifiers return the cursor, so they can be chained: `cur.sort({ name: -1}).limit(3)`; must be called before retrieving any result 42 | * Query modifiers are executed in the server (not the client) in following order: 1. sort 2. skip 3. limit; `db.scores.find( { type: "exam"}).sort( { score: -1}).skip( 50).limit( 20)` retrieves exam documents, sorted by score in descending order, skipping the first 50 and showing only the next 20. 43 | 44 | ### Counting 45 | 46 | * Counting results of a query: use `db.coll.count( …)` with a query I would give to `db.coll.find( …)` 47 | 48 | ### Updating 49 | 50 | In the Mongo Shell, the API for `update()` does 4 different things: 51 | 52 | 1. 
**Wholesale updating** – a bit dangerous, completely replaces the document's content: `db.people.update( { name: "Smith"}, { name: "Thompson", salary: 50000})` will replace all documents matching the primary keys returned by the first query, discard their content and replace it with the second document – keeping only the primary key value 53 | 54 | 2. **Manipulating individual fields** with `$set`: `db.people.update( { name: "Alice"}, { $set: { age: 30}})` will define the field `age` if it doesn't exist, or modify its value if it exists. 55 | 56 | Similarly, `$inc` allows to modify a value, or define it if it doesn't exist, with the value of the increment step: `db.people.update( { name: "Alice"}, { $inc: { age: 1 }})`; these operations are efficient in MongoDB. 57 | 58 | Manipulating arrays in documents with `$push`, `$pop`, `$pull`, `$pushAll`, `$pullAll` and `$addToSet`; note that `$pop` removes one or more elements from the end of the array (or beginning if the given element count is negative), while `$pull` removes the actual given values; `$addToSet` considers the array as a set, rather than an ordered list: it will push a value if it doesn't exist, otherwise do nothing. 59 | 60 | For instance: `db.friends.update( { _id: "Mike" }, { $push: { interests : "skydiving" }})` will add "skydiving" to the right hand side of the array; 61 | `db.friends.update( { _id: "Mike" }, { $pop: { interests : -1 }})` will pop the leftmost element of the array; 62 | `db.friends.update( { _id: "Mike" }, { $addToSet: { interests : "skydiving" }})` would add "skydiving" to the array if it was not already there, otherwise would do nothing; 63 | `db.friends.update( { _id: "Mike" }, { $pushAll: { interests: [ "skydiving" , "skiing" ]}})` appends to the right hand side of the array, leaving us with possible duplicate values.
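As a plain-Python sketch of the array operator semantics just described (an illustration of the behaviour only, not of how MongoDB implements these operators):

```python
# Pure-Python sketch of $push, $pop, $pull and $addToSet semantics,
# applied to an "interests" array like Mike's in the examples above.
def push(arr, value):          # $push: append to the right-hand side
    arr.append(value)

def pop(arr, count):           # $pop: 1 removes the last element, -1 the first
    if count >= 0:
        arr.pop()
    else:
        arr.pop(0)

def pull(arr, value):          # $pull: remove all occurrences of the value
    arr[:] = [v for v in arr if v != value]

def add_to_set(arr, value):    # $addToSet: push only if not already present
    if value not in arr:
        arr.append(value)

interests = ["running", "skiing"]
push(interests, "skydiving")        # ["running", "skiing", "skydiving"]
add_to_set(interests, "skydiving")  # no-op: already present
pop(interests, -1)                  # removes "running" (leftmost)
pull(interests, "skiing")           # removes "skiing"
print(interests)                    # ['skydiving']
```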
64 | 65 | Removing a field and its value with `$unset`: `db.people.update({ name: "Jones"}, { $unset: { profession: 1}})` (some value must be specified, 1 in this example, but it is ignored); `$unset` may be used to change the schema: `db.users.update( {}, { $unset: { interests: 1}}, { multi: true})` (see multi-update below for the `multi` extra argument) 66 | 67 | 3. **Upserting** with the `.update( …, …, { upsert: true})` third optional argument: update a record if it already exists, otherwise insert a new document; a frequent use case when merging data from a data vendor: `db.users.update( { name: "George"}, { $set: { interests: [ "cat", "dog"]}}, { upsert: true})` will create a document `{ name: "George", interests: [ "cat", "dog"]}` if it doesn't already exist, otherwise update the existing document. 68 | 69 | 4. Updating **multiple documents** in a collection: default behavior of MongoDB is to update just one document in the collection; add the `{multi:true}` extra argument to update all matching documents: `db.scores.update( { score: {$lt: 70}}, {$inc: { score: 20}}, { multi:true})` will give every document with a score less than 70 an extra 20 points; `{multi:true}` is for Javascript – some drivers have a separate method for multi-updates. 70 | 71 | These multi-updates are atomically executed for each document; however, from the concurrency point of view, the server does not offer isolation: these updates are a sequential collection of updates, executed in a single thread, that might however be _yielded_ (paused) by the server to allow for other write and read operations.
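The upsert behaviour can likewise be sketched in plain Python (illustration only; the collection is modeled as a simple list of dicts, mirroring the George example above):

```python
# Plain-Python sketch of upsert semantics: update the first matching
# document, or insert a new one when nothing matches. With $set, the
# query fields become part of the inserted document; a plain replacement
# document is inserted as given.
def upsert(collection, query, update):
    for doc in collection:
        if all(doc.get(k) == v for k, v in query.items()):
            if "$set" in update:
                doc.update(update["$set"])   # manipulate individual fields
            else:
                doc.clear()                  # wholesale replacement
                doc.update(update)
            return
    new_doc = dict(query) if "$set" in update else {}
    new_doc.update(update.get("$set", update))
    collection.append(new_doc)

people = []
upsert(people, {"name": "George"}, {"$set": {"interests": ["cat", "dog"]}})
print(people)  # [{'name': 'George', 'interests': ['cat', 'dog']}]
upsert(people, {"name": "George"}, {"$set": {"interests": ["fish"]}})
print(people)  # [{'name': 'George', 'interests': ['fish']}]
```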
72 | 73 | ### Removing 74 | 75 | * Removing documents matching a query: use `db.coll.remove( …)` with a query I would give to `db.coll.find( …)` 76 | * With no arguments `db.coll.remove()`, all documents are removed from the collection, one by one (keeping the indexes) 77 | * It is more efficient to use `db.coll.drop()`, which removes the contents and the metadata (namely indexes) 78 | * Remember: `remove()` operations are not atomically isolated in a transaction; a concurrent read or write might see the state of the collection halfway thru the removal; the removal of each individual document is however atomic. 79 | 80 | ### Handling errors 81 | 82 | * To check the status of the last operation: `db.runCommand( { getLastError: 1})` allows to determine if an operation did succeed or fail 83 | * On successful operations, it can be used to determine how many documents were updated by a multi-update 84 | * On erroneous operations, it allows to discover what didn't work 85 | 86 | ### PyMongo 87 | 88 | * Selecting fields with `find( query, selector)`, `query = { "type": "exam"}`, `selector = {"student_id": 1, "_id": 0}` 89 | * Reading contents from a URL with `parsed_feed = json.loads( feed.read())` and `feed = urllib2.urlopen("http://…/feed.json")` (requires `import json` and `import urllib2`) 90 | * Using regular expressions with `find( query, …)` and `query = { "title": { "$regex": "Microsoft" }}` 91 | * Using Dot notation with `find( query, selector)`, `query = { "media.oembed.type": "video"}`, `selector = { "media.oembed.url": 1, "_id": 0}` (will find all documents that have these subdocuments `media` and `oembed`, with field `type` having value `video`; very flexible indeed) 92 | * When projecting onto a key that doesn't exist in the matching documents, MongoDB will return an empty document 93 | * Sort, skip and limit with `.sort("student_id", pymongo.ASCENDING).skip(4).limit(1)` 94 | * Sorting on multiple keys with tuples in an ordered array: `.sort( [ ( "student_id",
pymongo.ASCENDING), ( "score", pymongo.DESCENDING)])`, because Python does not retain the key order within its dictionaries (whereas in the Mongo shell it would be `.sort( { student_id: 1, score: -1})`, because Javascript does retain the order of keys within dictionaries) 95 | * Inserting with `.insert( doc)` 96 | * Updating with `.save( doc)` (_insert/update combo_), which inserts the document if there is no `_id` field, otherwise updates the whole document 97 | * _Wholesale update_ can also be done with `.update( query, doc)` (where the document can contain an `_id`, as long as it is equal to the one in the matching document) 98 | * _Selective update_ with `.update( { "student_id": 1, "type": "homework"}, { "$set": { "review_date": datetime.datetime.utcnow()}})` (`$set`, `$unset`, …, see _manipulating individual fields_ above) 99 | * Getting current time compatible with `ISODate()`: `datetime.datetime.utcnow()` 100 | * Upserting (insert/update combo) with `things.update({ "thing": "apple"}, { "$set": { "color": "green"}}, upsert = True)` and `things.update({ "thing": "pear"}, { "color": "green" }, upsert = True)`; beware: the second form will insert a new document with just the field `{ "color": "green" }` (as given, but one might want to have the `"thing": "pear"` field), whereas with `$set`, the query is part of what's inserted, so a document `{ "thing": "apple", "color": "green" }` would be inserted, if it didn't exist 101 | * Using `find_and_modify` to produce a sequence number: 102 | 103 | ```python 104 | def get_next_seq_value( seq_name): 105 | counter = counters.find_and_modify( query = { "name": seq_name}, 106 | update = { "$inc": { "value": 1}}, 107 | upsert = True, new = True) 108 | return counter[ "value"] 109 | ``` 110 | 111 | [`db.coll.findAndModify()`](http://docs.mongodb.org/manual/reference/method/db.collection.findAndModify/) atomically modifies and returns a single document; by default, the returned document does not include the modifications
made on the update; to return the document with the modifications made on the update, use the `new: true` option. 112 | -------------------------------------------------------------------------------- /course-m101p/week3-schema.md: -------------------------------------------------------------------------------- 1 | # M101P · Week 3 · Schema Design 2 | 3 | Notes and homework related to [Week 3: Schema Design](https://education.10gen.com/courses/10gen/M101P/2014_February/courseware/Week_3_Schema_Design/). 2 hours for the videos, 2 hours for the homework. 4 | 5 | ## Homework 6 | 7 | * [Homework 3.1](hw3-1-answer.md) Write a [program](hw3-1-remove.py) to remove the lowest homework score for each student 8 | * [Homework 3.2](hw3-2-answer.md) Adapt [blogPostDAO.py](hw3-2-blogPostDAO.py) to make your blog accept posts 9 | * [Homework 3.3](hw3-3-answer.md) Adapt [blogPostDAO.py](hw3-3-blogPostDAO.py) to make your blog accept comments 10 | 11 | ## Recap 12 | 13 | ### General considerations 14 | 15 | * Application-Driven Schema -- matching the data access patterns of the application (very different from normalization in relational databases, which tries to avoid bias toward any particular data access pattern) 16 | * Rich Documents 17 | * Pre Join / Embed Data -- always try to embed the data, which allows to keep the data consistent in some way (the foreign keys will be right) 18 | * No Mongo Joins -- if you have to read ids and seek in another collection, you're probably doing it the relational way, not the Mongo way 19 | * No Constraints 20 | * Atomic Operations within a single document -- not ACID: no transactions over multiple documents or tables (can be alleviated by restructuring data, or by implementing the transaction in software, such as when transferring money between two distinct banking systems); eventual inconsistency of state of data between multiple clients (which can be tolerated in most cases) 21 | * No declared schema 22 | 23 | ### Modeling relationships 24 |
25 | * **1:1 relationship**, such as Employee/Resume, Building/Floor plan, Patient/Medical history -- considerations for embedding or keeping in separate collections: 1. depending on frequency of access (if mostly updating Resume but leaving Employee as is, maybe better to have separate collections); 2. size of items (documents must be <16 MB) and working set size in memory (if embedded, both documents are loaded into memory when pulled); 3. atomicity of data (embed to allow atomic update of both documents at once) 26 | * **1:many relationships**, such as City/Person -- it is recommended to use **document linking** (and multiple collections) whenever the *many is large* (true one to many), to avoid duplication and enforce consistency: People `{ name: "Andrew", city: "NYC" }`, City `{ _id: "NYC", area: … }`; whenever the *many is actually few* (one to few), such as Blog Post/Comments, it's probably better to **embed** documents: `{ title: "Blog title", comments: [ … ]}` 27 | * **Many:many relationships**, such as Books/Authors, Students/Teachers: Books to Authors is probably Few to Few (each book has a small number of authors, each author has a few books); recommended to model them as two entities, with references to the `_id` in arrays; embed for performance reasons, if needed, but at the risk of duplicating data; the same applies to Students/Teachers: model as separate entities (Students `{ _id: 1, name: "Andrew", teachers: [1,2,3,4,5]}`, Teachers `{ _id: 2, name: "Mark Horowitz"}`), with the additional reason that you'll probably start adding teachers in the system, and then students 28 | * Multikey Indexes, for each value in an array of a field: how to find all students having given teachers?
`db.students.ensureIndex({"teachers":1})`, `db.students.find({"teachers": {"$all": [1,3]}})`; `find(…).explain()` proves that Mongo has indeed used an index; multikey indexes allow to query efficiently on embedded and linked documents 29 | * **Benefits of embedding**: improve read performance -- disks have high latency (it takes 1 ms to get to the first byte) and high bandwidth (the next bytes come very fast), so it is better to have data co-located; on the other hand, writes might need to move the document around, if the document size expands 30 | * **Representing trees** such as Products/Categories, where Categories is a hierarchy (Outdoors > Winter > Snow): leverage the fact that Mongo can store arrays: Products `{ name: "Leaf Blower", category: 7 }`, Categories `{ _id :7, name: "Snow", ancestors: [ 3, 5] }`; to find all descendants of category 7 Snow: `db.categories.find({ ancestors: 7})` 31 | 32 | ### When to denormalize? 33 | 34 | * In the relational world, we're normalizing to avoid _modification anomalies_ 35 | * As long as we don't duplicate data, we are not open to modification anomalies 36 | * In 1:1 relationships, it is OK to embed the data, as we do not duplicate data 37 | * In 1:many, when embedding from the many to the one, we do not duplicate data; when embedding from the one to the many, use document linking to avoid the duplication of data; however, it would make sense to embed for performance reasons, especially if the data is rarely changing or being updated 38 | * In Many:Many, link documents to avoid modification anomalies thru arrays of object ids. 39 | 40 | If you're duplicating data, it's up to you, the application programmer, to keep the data up to date.
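The ancestors-array tree pattern above translates directly into a plain-Python sketch (category ids 3, 5 and 7 come from the course example; the leaf categories are invented for illustration):

```python
# Plain-Python sketch of the ancestors-array tree pattern: finding every
# descendant of a category means finding documents whose "ancestors"
# array contains that category's _id, which is exactly what
# db.categories.find({ ancestors: 7 }) does (with a multikey index).
categories = [
    {"_id": 3,  "name": "Outdoors", "ancestors": []},
    {"_id": 5,  "name": "Winter",   "ancestors": [3]},
    {"_id": 7,  "name": "Snow",     "ancestors": [3, 5]},
    {"_id": 9,  "name": "Skis",     "ancestors": [3, 5, 7]},  # invented leaf
    {"_id": 11, "name": "Poles",    "ancestors": [3, 5, 7]},  # invented leaf
]

def descendants_of(cats, cat_id):
    return [c["name"] for c in cats if cat_id in c["ancestors"]]

print(descendants_of(categories, 7))  # ['Skis', 'Poles']
```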
41 | 42 | ### Handling large files (BLOBs) 43 | 44 | * GridFS -- stores large files (>16MB) in chunks and metadata in a separate collection 45 | * Storing a file: `grid = gridfs.GridFS( db, "videos"); f = open( "video.mp4", "rb"); _id = grid.put( f); f.close(); db.videos_meta.insert({ "grid_id": _id, "filename": "video.mp4"})` 46 | * Retrieving: `db.videos.files.find({"_id": _id})`; the data itself is in `db.videos.chunks` 47 | -------------------------------------------------------------------------------- /course-m101p/week4-performance.md: -------------------------------------------------------------------------------- 1 | # M101P · Week 4 · Performance 2 | 3 | Notes and homework related to [Week 4: Performance](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_4_Performance/). 3 hours 45 min. of videos and notes, 45 min. of homework. 4 | 5 | ## Homework 6 | 7 | * [Homework 4.1](hw4-1-answer.md) Which of the following queries can utilize an index? 8 | * [Homework 4.2](hw4-2-answer.md) What can you infer from the following `explain` output?
9 | * [Homework 4.3](hw4-3-answer.md) Making the blog fast 10 | * [Homework 4.4](hw4-4-answer.md) Analyze a profile log taken from a MongoDB instance 11 | 12 | ## Recap 13 | 14 | ### Indexes 15 | 16 | * Creating an index: `db.students.ensureIndex({ student_id: 1, class: -1})` puts a composite index on field `student_id`, in ascending order, and on field `class`, in descending order; order has no effect on searching, but makes a difference when sorting on these fields 17 | * Finding indexes: all with `db.system.indexes.find()`; for a given collection with `db.students.getIndexes()` 18 | * Dropping an index: `db.students.dropIndex({ student_id: 1})` 19 | * Multikey indexes: automatically created when inserting an array in a field participating in an index; for compound indexes, only one of the fields participating in the index can be an array (Mongo cannot index parallel arrays); multikey indexes can be defined for subdocuments also: `db.people.ensureIndex( {"addresses.tag": 1})` 20 | * Creating unique indexes: `db.students.ensureIndex({ student_id: 1}, { unique: true})` 21 | * … and removing duplicate keys: `db.students.ensureIndex({ student_id: 1}, { unique: true, dropDups: true})`; be very careful: there is no way to determine which of the duplicate documents will be removed, and the removed documents are gone forever 22 | * Sparse indexes: if the key of an index is missing within a document, it maps to null; if the index is unique, there can be only one null; it is however possible to create sparse indexes, that index only the documents where the key is set: `db.products.ensureIndex({ size: 1}, { [unique: true,] sparse: true})`; **beware**: when sorting on the key of a sparse index, documents without an entry won't appear in the results!
it is an artifact of the sparse index, which gets used by the sort, therefore changing its semantics 23 | * Creating indexes in the background: by default, indexes are created in the foreground, which is faster, but blocks all writers; with `db.people.ensureIndex( {…}, { background: true})`, it will be created in the background, not blocking other writers, however being slower; in a replica set, one server can be pulled out of the set to be indexed, and then put back again; a mongod instance can only build one background index at a time per database 24 | 25 | ### Explain 26 | 27 | * `db.foo.find({c:1}).explain()` 28 | * `"cursor": "BasicCursor"` --> full collection scan 29 | * `"millis": 3` --> time taken 30 | * `"cursor": "BtreeCursor a_1_b_1_c_1"` --> index `a_1_b_1_c_1` was used 31 | * `"isMultiKey": false` --> none of the values are arrays 32 | * `"n": 1` --> number of documents returned 33 | * `"nscannedObjects": 1` --> number of documents that were scanned 34 | * `"nscanned": 1` --> number of documents or index entries that were scanned 35 | * `"indexBounds": [ …]` --> strategy followed and boundary values 36 | * `"indexOnly": false` --> `true` if everything queried for is satisfied by the index, without retrieving the actual document 37 | 38 | ### Considerations about indexes 39 | 40 | * **When is an index used**? When there are many indexes available for a query, MongoDB runs different _query plans_ in parallel with the different indexes, and memorizes the one that returned first; this happens automatically behind the scenes; every hundred or so queries, it will forget what it learned and start over.
41 | * **Index Size**: if an index can be kept in memory, it is blazing fast; use `db.students.stats()` to see how much space the data takes on disk; `db.students.totalIndexSize()` to see how much space the indexes take 42 | * **Index cardinality**: in _regular indexes_, there is an index point for every key (#index points --1:1-- #documents); in _sparse indexes_, there could be fewer index points than documents, as null keys are dropped (#index points <= #documents); in _multikey indexes_, there could be more index points than documents, as we have one for each value in arrays (#index points > #documents) 43 | * **Index selectivity**: have them be very selective if possible, it improves performance; for instance, in a logging collection, with millions of documents having a timestamp and opcode, where the opcode represents a small set of operations, it is best to create an index on `(timestamp, opcode)` rather than `(opcode, timestamp)`, because queries on a time interval with the first index will eliminate 4/5 of the collection's data upfront, leaving only 1/5 of the documents to scan for the opcode 44 | * **Providing hints** to the database: `db.foo.find().hint({ a:1, b:1, c:1})`; or `.hint({ $natural: 1})` to use no index (the «natural index») and have a cursor that goes thru all documents 45 | * … in PyMongo: `doc = db.foo.find(query).hint( [ ("c", pymongo.ASCENDING)])` (providing an array of tuples, as dictionaries are not ordered in Python -- as they are in Javascript and JSON) 46 | * **Efficiency of index use**: `$gt`, `$lt`, `$ne` operators are not particularly efficient to use with an index, as they leave a lot of documents to go thru; the same for regexps, which generally require going thru all documents -- with the exception of `^abc` regexps (matches strings _starting with_ "abc"); so it is important to not only consider if and which index was used for a query, but also how it was used, if the database still had to go thru millions of documents 47 | 48 |
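The `^abc` exception above can be made concrete with a pure-Python sketch: an index behaves like a sorted list, so a caret-anchored pattern maps onto one contiguous slice of it, while a suffix match must examine every entry (the names are invented for illustration):

```python
import bisect

# An index modeled as a sorted list. A prefix query like
# find({name: {$regex: "^Ab"}}) can jump straight to the matching slice;
# a suffix query like find({name: {$regex: "e$"}}) must scan everything.
names = sorted(["Abe", "Abigail", "Alice", "Bob", "Eve", "Mike"])

def prefix_scan(index, prefix):
    lo = bisect.bisect_left(index, prefix)
    hi = bisect.bisect_right(index, prefix + "\uffff")
    return index[lo:hi]          # examines only the matching slice

def suffix_scan(index, suffix):
    return [n for n in index if n.endswith(suffix)]  # touches every entry

print(prefix_scan(names, "Ab"))  # ['Abe', 'Abigail']
print(suffix_scan(names, "e"))   # ['Abe', 'Alice', 'Eve', 'Mike']
```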
### Geospatial indexes 49 | 50 | * What is the location closest to an (x,y) point? `db.places.ensureIndex({ location: "2d"})` 51 | * Finding with `db.places.find({ location: { "$near": [ x, y]}})`; results are returned in order of increasing distance, so add a limit `.limit( 3)` 52 | * To have geospatial indexes considering the curvature of the earth, in a _spherical space model_: you still create the index with `.ensureIndex({ stores: "2d"})`, but run a query with `db.runCommand({ geoNear: "stores", near: [ long, lat], spherical: true, maxDistance: rad})` -- order of the tuple (long, lat) matters, _longitude_ comes first, then _latitude_ 53 | 54 | ### Logging and Profiling 55 | 56 | * **Log slow queries** is built in: above 100ms, Mongo automatically writes an entry within its logs (console or logfiles) 57 | * **Profiler** levels: 0 = off; 1 = log slow queries; 2 = log all queries (debugging) 58 | * Start _mongod_ with `mongod --dbpath … --profile 1 --slowms 2` to have Mongo automatically profile slow queries running more than 2ms 59 | * Use `db.system.profile.find()` to see what was logged; note `db.system.profile` is a capped collection, which rotates its contents 60 | * Use `db.system.profile.find({ ns:/school.students/}).sort({ ts: 1})` to find queries on collection `students` within DB `school`, sorting by the timestamp 61 | * Use `db.system.profile.find({ millis: { $gt: 1}}).sort({ ts: 1})` to find queries that were slower than 1ms 62 | * Controlling the profiler from the Mongo shell: `db.getProfilingLevel()` to read the level, `db.getProfilingStatus()` to read the status, `db.setProfilingLevel( 1, 4)` to set profiling level to 1 and look for slow queries longer than 4ms; `db.setProfilingLevel( 0)` to turn it off 63 | * **Mongostat**: the most looked-at piece of information is `idx miss %` --> index miss rate in memory, percentage of queries where Mongo has to go to disk; beware when it says 0: consider if either the query uses no index at all, or it resides entirely in memory 64 | *
**Mongotop**: high-level view of where Mongo is spending its time; `mongotop 3` will display how much time MongoDB spent accessing/updating each collection in the repeating interval of the last 3 seconds 65 | 66 | ### Sharding 67 | 68 | * _Sharding_ is a technique for splitting up a large collection among multiple servers; in front of multiple `mongod` sharded servers, there is a `mongos` server, which is a router; the application accesses this `mongos` server 69 | * About architecture: The `mongos` server is often _colocated_ on the same machine as the _app server_. There can be _many_ `mongos` servers. 70 | * Then, the `mongod` servers are generally on different physical servers, and each of them can -- it is recommended -- be part of a _replica set_, that is, another group of `mongod` servers, where data is kept in sync, so that if one server goes down, no data is lost; the _replica set_ can be looked at as _one shard_ 71 | * Mongo shards according to a _shard key_; when performing queries and updates, the shard key will be used to send them to the right system 72 | * An _insert_ must include the shard key (the entire shard key if it's a compound key) to complete; so the developer has to be aware of what the shard key is 73 | * For _find/update/remove_ operations, to get higher performance, the shard key should be specified, if one knows it; otherwise, `mongos` will have to broadcast the queries to the `mongod` servers, which will keep each of them busy 74 | * By the way, _updates_ on sharded collections should be made _multi_ 75 | * Choosing a shard key is a topic in itself. 76 | 77 | ### Review 78 | 79 | * Indexes are critical to performance 80 | * Explain command to see how the database uses the indexes for particular queries 81 | * Hint command to instruct the database which particular index to use 82 | * Profiling to figure out which queries are slow, then use explain and hinting to improve them, possibly create new indexes.
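To rehearse the profiler queries from the section above without a running server, here is a plain-Python sketch over invented `system.profile`-like entries (the 15820 ms figure echoes homework 4.4):

```python
# Plain-Python sketch of querying profiler output: find the operations on
# a given namespace, sorted by decreasing latency, like
# db.system.profile.find({op: "query", ns: ...}).sort({millis: -1}).
# The sample entries below are made up.
profile = [
    {"ts": 1, "ns": "school2.students", "op": "query",  "millis": 15820},
    {"ts": 2, "ns": "school2.students", "op": "query",  "millis": 20},
    {"ts": 3, "ns": "school.grades",    "op": "insert", "millis": 3},
]

def slowest(entries, ns, op="query"):
    matching = [e for e in entries if e["ns"] == ns and e["op"] == op]
    return sorted(matching, key=lambda e: e["millis"], reverse=True)

print(slowest(profile, "school2.students")[0]["millis"])  # 15820
```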
83 | -------------------------------------------------------------------------------- /course-m101p/week5-aggregation-fw.md: -------------------------------------------------------------------------------- 1 | # M101P · Week 5 · Aggregation Framework 2 | 3 | Notes and homework related to [Week 5: Aggregation Framework](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_5_Aggregation_Framework/). 4 | 5 | ## Homework 6 | 7 | * [Homework 5.1](hw5-1-answer.md) 8 | * [Homework 5.2](hw5-2-answer.md) 9 | * [Homework 5.3](hw5-3-answer.md) 10 | * [Homework 5.4](hw5-4-answer.md) 11 | 12 | ## Additional resources 13 | 14 | * [Aggregation reference](http://docs.mongodb.org/manual/reference/aggregation/) (MongoDB Manual) 15 | 16 | ## Recap 17 | 18 | * Simple example: `db.products.aggregate([ { $group: { _id: "$manufacturer", num_products: { $sum: 1}}}])` returns a document with fields `_id` and `num_products`, containing a product count per manufacturer 19 | * Compound grouping: `db.products.aggregate([ { $group: { _id: { "manufacturer": "$manufacturer", "category": "$category" }, num_products: { $sum: 1}}}])` is similar to `SELECT manufacturer, category, COUNT(*) FROM products GROUP BY manufacturer, category` 20 | * By the way, documents can be used for `_id` in general: `db.foo.insert({ _id: { name: "andrew", class: "m101"}, hometown: "NY"})` is perfectly valid 21 | 22 | ### Aggregation pipeline 23 | 24 | * Documents of a collection are piped thru an _aggregation pipeline_, which is an ordered sequence of _aggregation operations_ (or _aggregation stages_) that transforms the collection 25 | * … expressed by `db.products.aggregate( ‹pipeline›)` in the Mongo shell, where _‹pipeline›_ is an _ordered array_ of _aggregation operations_ 26 | * … each operation can appear more than once in the pipeline, at different _stages_ 27 | * Sample pipeline: _collection_ → `$project` → `$match` → `$group` → `$sort` → _result_ 28 | * `$project`
operation -- select, reshape (1:1) 29 | * `$match` operation -- filters (N:1 / reducing effect) 30 | * `$group` operation -- aggregation (N:1 / reducing effect) 31 | * `$sort` operation -- sorting (1:1) 32 | * `$skip` operation -- skip documents (N:1 / reducing effect) 33 | * `$limit` operation -- limit number of documents (N:1 / reducing effect) 34 | * `$unwind` operation -- unjoins data of arrays or subdocuments (1:N / augmenting effect), normalizing the data to ease aggregation; for instance, when _unwinding_ its field `tags: ['sports', 'outdoors', 'summer']`, a blog post would be exploded into three documents 35 | 36 | ### Grouping operation (N:1) · Aggregation expressions 37 | 38 | * _Aggregation expressions_ in the **`$group`** stage: `$sum` (sums up the key value with `{$sum: "$price"}`, or counts by summing a constant with `{$sum: 1}`), `$avg` (average, for instance `{$avg: "$price"}`), `$min` and `$max` (find the minimum and maximum values of the key), `$push` and `$addToSet` (build arrays, the latter adding each key only once), `$first` and `$last` (both require a preceding sort, otherwise the returned keys are arbitrary) 39 | * Sum operation `$sum`: `db.zips.aggregate([{ "$group": { _id: "$state", population: { $sum: "$pop" }}}])` 40 | * Average operation `$avg`: `db.products.aggregate([{ $group: { _id: { category: "$category"}, avg_price: { $avg: "$price" }}}])` 41 | * Building arrays with `$addToSet`: `db.products.aggregate([{ $group: { _id: { maker: "$manufacturer"}, categories: { $addToSet: "$category" }}}])` will build a new set of documents grouped by manufacturer, looking thru the array to see if the category is already there, adding it if it was not; quiz answer: `db.zips.aggregate([{ $group: { _id: "$city", postal_codes: { $addToSet: "$_id" }}}, { $limit: 5}])`; note that the `"$_id"` expression refers to the `_id` in the _source documents_ of the pipeline stage 42 | * Building arrays with `$push`: similar to `$addToSet`, but does not guarantee that it
adds each item only once; for instance `db.products.aggregate([{ $group: { _id: { maker: "$manufacturer"}, categories: { $push: "$category" }}}])` would again build a set of new documents grouped by manufacturer, where the categories might appear more than once; quiz: `$push` is identical to `$addToSet` if its expression contains only unique values, although the order of documents might be different 43 | * Picking the maximum and minimum value with the `$max` and `$min` operations: `db.products.aggregate([{ $group: { _id: { maker: "$manufacturer"}, max_price: { $max: "$price"}}}])` will find the maximum price of the products of a manufacturer 44 | * However, getting the product matching that maximum price can't be done within this grouping stage alone; additional `$sort` and `$limit` stages would be needed 45 | * Quiz answer: `db.zips.aggregate([{ $group: { "_id": "$state", pop: { $max: "$pop"}}}])` 46 | 47 | ### Double grouping stages 48 | 49 | * To figure out the average grade in each class, we first average each student's scores within the class, then, in a second stage, average those per-student averages per class: `db.grades.aggregate([{ "$group":{ _id: { class_id: "$class_id", student_id: "$student_id"}, average:{ "$avg": "$score"}}}, { "$group":{ _id: "$_id.class_id", average:{ "$avg": "$average"}}}])` 50 | * Quiz answer: `db.fun.aggregate([{ $group:{ _id:{ a: "$a", b: "$b"}, c:{ $max: "$c"}}}, { $group:{ _id: "$_id.a", c:{ $min: "$c"}}}])` returns documents `{ _id: 0, c: 52}` and `{ _id: 1, c: 22}` 51 | 52 | ### Projection operation (1:1) 53 | 54 | * Lets you reshape the documents coming thru: remove keys, add new keys, reshape keys, or apply simple functions to keys, such as `$toUpper`, `$toLower`, `$add` or `$multiply` 55 | * For instance `db.zips.aggregate([{ $project:{ _id: 0, city:{ $toLower: "$city"}, pop:1, state:1, zip: "$_id"}}])` will suppress the `_id` field (`0` means drop it) and replace it with `zip`, including the fields
`pop` and `state` as they are (`1` means just include it), plus `city` lowercased (more complex expressions are evaluated) 56 | * Quiz answer: see the example above 57 | 58 | ### Matching operation (N:1) 59 | 60 | * Matching operation `$match`: performs a filter (on the documents as they pass thru the pipe) and has a reducing effect (N:1, or 1:1 if all documents are matched) 61 | * For instance `db.zips.aggregate([{ $match: { state: "NY"}}])` 62 | * Matching and grouping: `db.zips.aggregate([{ $match:{ state:"NY"} }, { $group:{ _id: "$city", population:{ $sum:"$pop"}, zip_codes:{ $addToSet: "$_id" }}}])` 63 | * Matching, grouping and projecting: `db.zips.aggregate([{ $match:{ state:"NY"}}, { $group:{ _id: "$city", population:{ $sum:"$pop"}, zip_codes:{ $addToSet: "$_id"}}}, { $project:{ _id: 0, city: "$_id", population: 1, zip_codes: 1 }}])` 64 | * Quiz answer: `db.zips.aggregate([{ $match: { pop: { $gt: 100000}}}])` 65 | 66 | ### Sorting operation (1:1) 67 | 68 | * Sorting operation `$sort`: before or after grouping stage; beware memory usage! 
the aggregation framework handles everything in memory 69 | * If the sort is _after_ the grouping stage, the operation won't be able to use an index 70 | * Sort can be used multiple times 71 | * Sorting _before_ grouping is often very useful when aggregating 72 | * For instance `db.zips.aggregate([ …, { $sort: { population: -1 }}])` 73 | * … within a complete pipeline: `db.zips.aggregate([{ $match:{ state:"NY" }}, { $group:{ _id: "$city", population:{ $sum:"$pop" }}}, { $project:{ _id: 0, city: "$_id", population: 1 }}, { $sort:{ population: -1 }}])` 74 | * Quiz answer: `db.zips.aggregate([{ $sort: { state: 1, city: 1}}])` 75 | 76 | ### Skipping and limiting operations (N:1) 77 | 78 | * Skip forward in the document set with `$skip` and limit the result set with `$limit` 79 | * The _order_ they are specified in **does matter** (unlike with `find`); generally you first _skip_, then _limit_ 80 | * Skipping doesn't make a lot of sense unless you sort, so you first sort, then skip and/or limit 81 | * For instance `db.zips.aggregate([ … { $sort: { population: -1 }}, { $skip: 10}, { $limit: 5}])` 82 | * … within a complete example: `db.zips.aggregate([{ $match: { state: "NY"}}, { $group:{ _id: "$city", population:{ $sum: "$pop"}}}, { $project:{ _id: 0, city: "$_id", population: 1}}, { $sort:{ population: -1 }}, { $skip: 10}, { $limit: 5 }])` 83 | * Quiz answer: there will be zero documents in the result set; if you first limit to 5 documents, and then skip 10 documents, there will be none left 84 | 85 | ### Grouping operation (N:1) · Revisiting $first and $last expressions 86 | 87 | * `$first` and `$last` are group operators, which let you get the _first_ or _last value_ in _each group_ as the aggregation pipeline processes documents 88 | * After a _sort stage_, these operators can find the first or last document in _sorted order_ in each group 89 | * If the document set _{ {A:0, B:23}, {A:0, B:45}, {A:1, B:17}, {A:1, B:68}}_ gets sorted by (A, B), then grouped
on A, `$first` would return `B:23` and `B:17`, and `$last` would return `B:45` and `B:68` 90 | * For instance `db.zips.aggregate([{ $sort: { "_id.state": 1, "population": -1}}, { $group: { _id: "$_id.state", city:{ $first: "$_id.city"}, population:{ $first: "$population" }}}])` 91 | * Complete example: we want to find the largest city in every state 92 | 93 | ```javascript 94 | db.zips.aggregate([ 95 | /* Get the population of every city in every state */ 96 | { $group: { _id:{ state: "$state", city: "$city"}, population:{ $sum: "$pop"}}}, 97 | /* Sort by state, population (note we don't need the dollar operator with $sort) */ 98 | { $sort: { "_id.state": 1, "population": -1}}, 99 | /* Group by state, get the first item in each group */ 100 | { $group: { _id: "$_id.state", city:{ $first: "$_id.city"}, population:{ $first: "$population"}}}, 101 | /* Now sort by state again */ 102 | { $sort: { "_id": 1}} 103 | ]) 104 | ``` 105 | 106 | ### Unwind and double unwind operations (1:N) 107 | 108 | * Lets you _unjoin_ (or _explode_) the data of an array and rejoin it in a way that lets us do grouping operations; for instance `db.items.aggregate([{ $unwind: "$attributes"}])` 109 | * Complete example: we want to count the number of times that each tag appears in a post (or how many posts were tagged with each tag); therefore, we're going to unwind the tags, then group by tag, and count: 110 | 111 | ```javascript 112 | db.posts.aggregate([ 113 | /* Unwind by tags */ 114 | { "$unwind": "$tags"}, 115 | /* Now group by tags, counting each tag */ 116 | { "$group": { _id: "$tags", count:{ $sum: 1}}}, 117 | /* Sort by popularity */ 118 | { "$sort": { count: -1}}, 119 | /* Show me the top 10 */ 120 | { "$limit": 10}, 121 | /* Change the name of _id to be tag */ 122 | {"$project": { _id: 0, tag: "$_id", count: 1}} 123 | ]) 124 | ``` 125 | * Answer to quiz: `$push` can reverse the effect of an `$unwind` (grouping by the rest of the document); another option would be
`$addToSet`, depending on whether the array contained only unique values in the original document 126 | * If there is more than one array in a document, and you want to create a Cartesian product of the arrays, you have to _double unwind_ 127 | * Complete example: 128 | 129 | ```javascript 130 | db.inventory.aggregate([ 131 | { $unwind: "$sizes"}, 132 | { $unwind: "$colors"}, 133 | /* Create the color array */ 134 | { $group: { "_id": { name: "$name", size: "$sizes"}, 135 | "colors":{ $push: "$colors"}}}, 136 | /* Create the size array */ 137 | { $group: { "_id": { "name": "$_id.name", "colors": "$colors"}, 138 | "sizes":{ $push: "$_id.size"}}}, 139 | /* Reshape for beauty */ 140 | { $project: { _id:0, "name":"$_id.name", "sizes":1, "colors": "$_id.colors"}} 141 | ]) 142 | ``` 143 | * Answer to quiz: can you reverse the effect of a _double unwind_ (two unwinds in a row) with the `$push` operator? Yes, with [two pushes in a row](https://education.mongodb.com/courses/10gen/M101P/2014_February/courseware/Week_5_Aggregation_Framework/52aa3ec0e2d4232c54a18b29/); or even in a single stage using `$addToSet` twice, if the values were unique 144 | 145 | ### Mapping from SQL 146 | 147 | * Mapping from SQL: `WHERE` → `$match`, `GROUP BY` → `$group`, `HAVING` → `$match`, `SELECT` → `$project`, `ORDER BY` → `$sort`, `LIMIT` → `$limit`, `SUM()` → `$sum`, `COUNT()` → `$sum`, `join` → no direct corresponding operator; `$unwind` allows for similar functionality, but with fields embedded within the document 148 | * Some common examples: see [SQL to Aggregation Mapping Chart](http://docs.mongodb.org/manual/reference/sql-aggregation-comparison/) 149 | 150 | ### Limitations of the Aggregation Framework 151 | 152 | * Result set limited to **16 MB of memory** 153 | * Can use **max.
10% of the memory** on a machine; use `$match` to do early filtering, if all data is not needed in following stages; similarly, use `$project` to filter out some of the fields 154 | * **Sharding**, that is collections split among multiple Mongo instances (which can be _replica sets_ in turn), with a `mongos` router in front of the shards: aggregation does work in a sharded environment, but after the first `$group` or `$sort` stage, the aggregation has to be brought back to the `mongos`, so that it can gather the results before sending them to the next stage of the pipeline (which needs to see the finished result); so with a long pipeline, all calculations wind up on the `mongos` server -- often colocated with the application server, therefore straining that server's resources 155 | * **mapreduce** is an alternative to the Aggregation Framework 156 | * **Hadoop** is also an alternative to the Aggregation Framework; there is a connector for Hadoop in MongoDB, and Hadoop is an implementation of _mapreduce_ 157 | -------------------------------------------------------------------------------- /course-m101p/week5-handout01-simple-example-data-products.ec1bc22f28be.js: -------------------------------------------------------------------------------- 1 | 2 | use agg 3 | db.products.drop() 4 | 5 | db.products.insert({'name':'iPad 16GB Wifi', 'manufacturer':"Apple", 6 | 'category':'Tablets', 7 | 'price':499.00}) 8 | db.products.insert({'name':'iPad 32GB Wifi', 'category':'Tablets', 9 | 'manufacturer':"Apple", 10 | 'price':599.00}) 11 | db.products.insert({'name':'iPad 64GB Wifi', 'category':'Tablets', 12 | 'manufacturer':"Apple", 13 | 'price':699.00}) 14 | db.products.insert({'name':'Galaxy S3', 'category':'Cell Phones', 15 | 'manufacturer':'Samsung', 16 | 'price':563.99}) 17 | db.products.insert({'name':'Galaxy Tab 10', 'category':'Tablets', 18 | 'manufacturer':'Samsung', 19 | 'price':450.99}) 20 | db.products.insert({'name':'Vaio', 'category':'Laptops', 21 |
'manufacturer':"Sony", 22 | 'price':499.00}) 23 | db.products.insert({'name':'Macbook Air 13inch', 'category':'Laptops', 24 | 'manufacturer':"Apple", 25 | 'price':499.00}) 26 | db.products.insert({'name':'Nexus 7', 'category':'Tablets', 27 | 'manufacturer':"Google", 28 | 'price':199.00}) 29 | db.products.insert({'name':'Kindle Paper White', 'category':'Tablets', 30 | 'manufacturer':"Amazon", 31 | 'price':129.00}) 32 | db.products.insert({'name':'Kindle Fire', 'category':'Tablets', 33 | 'manufacturer':"Amazon", 34 | 'price':199.00}) 35 | -------------------------------------------------------------------------------- /course-m101p/week5-handout01-simple-example.4cb11c82cac4.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id:"$manufacturer", 6 | num_products:{$sum:1} 7 | } 8 | } 9 | ]) 10 | 11 | 12 | -------------------------------------------------------------------------------- /course-m101p/week5-handout02-compound-grouping.31358db44867.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id: { 6 | "manufacturer":"$manufacturer", 7 | "category" : "$category"}, 8 | num_products:{$sum:1} 9 | } 10 | } 11 | ]) 12 | 13 | 14 | 15 | -------------------------------------------------------------------------------- /course-m101p/week5-handout02-compoung-grouping-simple-example1.d2b6d05536bc.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id: {'manufacturer':"$manufacturer"}, 6 | num_products:{$sum:1} 7 | } 8 | } 9 | ]) 10 | 11 | 12 | -------------------------------------------------------------------------------- /course-m101p/week5-handout03-using-sum-quiz-data-zips.json.zip: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/course-m101p/week5-handout03-using-sum-quiz-data-zips.json.zip -------------------------------------------------------------------------------- /course-m101p/week5-handout03-using-sum-quiz-sum-by-state.357e89fd2088.js: -------------------------------------------------------------------------------- 1 | use agg; 2 | db.zips.aggregate([{"$group":{"_id":"$state", "population":{$sum:"$pop"}}}]) 3 | -------------------------------------------------------------------------------- /course-m101p/week5-handout03-using-sum.de06d4138610.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id: { 6 | "maker":"$manufacturer" 7 | }, 8 | sum_prices:{$sum:"$price"} 9 | } 10 | } 11 | ]) 12 | 13 | 14 | -------------------------------------------------------------------------------- /course-m101p/week5-handout04-using-avg.e128056b622a.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | { $group: { 4 | _id: { "category":"$category"}, 5 | avg_price:{ $avg: "$price"} 6 | }} 7 | ]) 8 | 9 | 10 | -------------------------------------------------------------------------------- /course-m101p/week5-handout05-using-addToSet.a535135c35d5.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id: { 6 | "maker":"$manufacturer" 7 | }, 8 | categories:{$addToSet:"$category"} 9 | } 10 | } 11 | ]) 12 | 13 | 14 | -------------------------------------------------------------------------------- /course-m101p/week5-handout06-using-push.61b93aea9929.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id: { 6 | "maker":"$manufacturer" 7 | }, 8 | 
categories:{$push:"$category"} 9 | } 10 | } 11 | ]) 12 | 13 | 14 | -------------------------------------------------------------------------------- /course-m101p/week5-handout07-using-max.0a8c88c4f6f7.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$group: 4 | { 5 | _id: { 6 | "maker":"$manufacturer" 7 | }, 8 | maxprice:{$max:"$price"} 9 | } 10 | } 11 | ]) 12 | 13 | 14 | -------------------------------------------------------------------------------- /course-m101p/week5-handout08-double-group.d75135079baa.js: -------------------------------------------------------------------------------- 1 | 2 | use agg 3 | db.grades.aggregate([ 4 | {'$group':{_id:{class_id:"$class_id", student_id:"$student_id"}, 'average':{"$avg":"$score"}}}, 5 | {'$group':{_id:"$_id.class_id", 'average':{"$avg":"$average"}}}]) 6 | 7 | 8 | 9 | -------------------------------------------------------------------------------- /course-m101p/week5-handout08-single-group.a2dcedc60ceb.js: -------------------------------------------------------------------------------- 1 | 2 | use agg 3 | db.grades.aggregate([ 4 | {'$group':{_id:{class_id:"$class_id", student_id:"$student_id"}, 'average':{"$avg":"$score"}}}]) 5 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /course-m101p/week5-handout09-using-project-quiz.e4476d90db89.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([{$project:{_id:0, city:{$toLower:"$city"}, pop:1, state:1, zip:"$_id"}}]) 3 | 4 | -------------------------------------------------------------------------------- /course-m101p/week5-handout09-using-project-reshape-products.51551839bd7a.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.products.aggregate([ 3 | {$project: 4 | { 5 | _id:0, 6 | 'maker': {$toLower:"$manufacturer"}, 7 | 'details': 
{'category': "$category", 8 | 'price' : {"$multiply":["$price",10]} 9 | }, 10 | 'item':'$name' 11 | } 12 | } 13 | ]) 14 | 15 | 16 | -------------------------------------------------------------------------------- /course-m101p/week5-handout10-match-and-group.1b9ed10759ea.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | {$match: 4 | { 5 | state:"NY" 6 | } 7 | }, 8 | {$group: 9 | { 10 | _id: "$city", 11 | population: {$sum:"$pop"}, 12 | zip_codes: {$addToSet: "$_id"} 13 | } 14 | } 15 | ]) 16 | 17 | 18 | -------------------------------------------------------------------------------- /course-m101p/week5-handout10-match-group-and-project.19245fa529df.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | {$match: 4 | { 5 | state:"NY" 6 | } 7 | }, 8 | {$group: 9 | { 10 | _id: "$city", 11 | population: {$sum:"$pop"}, 12 | zip_codes: {$addToSet: "$_id"} 13 | } 14 | }, 15 | {$project: 16 | { 17 | _id: 0, 18 | city: "$_id", 19 | population: 1, 20 | zip_codes:1 21 | } 22 | } 23 | 24 | ]) 25 | 26 | 27 | -------------------------------------------------------------------------------- /course-m101p/week5-handout10-match.deb14a3cf1ca.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | {$match: 4 | { 5 | state:"NY" 6 | } 7 | } 8 | ]) 9 | 10 | 11 | -------------------------------------------------------------------------------- /course-m101p/week5-handout11-sort.fd13909073dd.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | {$match: 4 | { 5 | state:"NY" 6 | } 7 | }, 8 | {$group: 9 | { 10 | _id: "$city", 11 | population: {$sum:"$pop"}, 12 | } 13 | }, 14 | {$project: 15 | { 16 | _id: 0, 17 | city: "$_id", 18 | population: 1, 19 | } 20 | }, 21 | {$sort: 22 | { 23 | population:-1 24 | } 25 | } 26 
| 27 | 28 | 29 | ]) 30 | 31 | 32 | -------------------------------------------------------------------------------- /course-m101p/week5-handout12-skip-and-limit.6ece6f722d9b.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | {$match: 4 | { 5 | state:"NY" 6 | } 7 | }, 8 | {$group: 9 | { 10 | _id: "$city", 11 | population: {$sum:"$pop"}, 12 | } 13 | }, 14 | {$project: 15 | { 16 | _id: 0, 17 | city: "$_id", 18 | population: 1, 19 | } 20 | }, 21 | {$sort: 22 | { 23 | population:-1 24 | } 25 | }, 26 | {$skip: 10}, 27 | {$limit: 5} 28 | ]) 29 | 30 | 31 | -------------------------------------------------------------------------------- /course-m101p/week5-handout13-first-phase1.a83bcb182633.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | /* get the population of every city in every state */ 4 | {$group: 5 | { 6 | _id: {state:"$state", city:"$city"}, 7 | population: {$sum:"$pop"}, 8 | } 9 | } 10 | ]) 11 | -------------------------------------------------------------------------------- /course-m101p/week5-handout13-first-phase2.281349fd65a4.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | /* get the population of every city in every state */ 4 | {$group: 5 | { 6 | _id: {state:"$state", city:"$city"}, 7 | population: {$sum:"$pop"}, 8 | } 9 | }, 10 | /* sort by state, population */ 11 | {$sort: 12 | {"_id.state":1, "population":-1} 13 | } 14 | ]) 15 | -------------------------------------------------------------------------------- /course-m101p/week5-handout13-first-phase3.37e2f560bc6b.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | /* get the population of every city in every state */ 4 | {$group: 5 | { 6 | _id: {state:"$state", city:"$city"}, 7 | population: {$sum:"$pop"}, 8 | 
} 9 | }, 10 | /* sort by state, population */ 11 | {$sort: 12 | {"_id.state":1, "population":-1} 13 | }, 14 | /* group by state, get the first item in each group */ 15 | {$group: 16 | { 17 | _id:"$_id.state", 18 | city: {$first: "$_id.city"}, 19 | population: {$first:"$population"} 20 | } 21 | } 22 | ]) 23 | -------------------------------------------------------------------------------- /course-m101p/week5-handout13-first.bdf958d63359.js: -------------------------------------------------------------------------------- 1 | use agg 2 | db.zips.aggregate([ 3 | /* get the population of every city in every state */ 4 | {$group: 5 | { 6 | _id: {state:"$state", city:"$city"}, 7 | population: {$sum:"$pop"}, 8 | } 9 | }, 10 | /* sort by state, population */ 11 | {$sort: 12 | {"_id.state":1, "population":-1} 13 | }, 14 | 15 | /* group by state, get the first item in each group */ 16 | {$group: 17 | { 18 | _id:"$_id.state", 19 | city: {$first: "$_id.city"}, 20 | population: {$first:"$population"} 21 | } 22 | }, 23 | 24 | /* now sort by state again */ 25 | {$sort: 26 | {"_id":1} 27 | } 28 | ]) 29 | -------------------------------------------------------------------------------- /course-m101p/week5-handout14-unwind-blog-tags.995fc80195d0.js: -------------------------------------------------------------------------------- 1 | use blog; 2 | db.posts.aggregate([ 3 | /* unwind by tags */ 4 | {"$unwind":"$tags"}, 5 | /* now group by tags, counting each tag */ 6 | {"$group": 7 | {"_id":"$tags", 8 | "count":{$sum:1} 9 | } 10 | }, 11 | /* sort by popularity */ 12 | {"$sort":{"count":-1}}, 13 | /* show me the top 10 */ 14 | {"$limit": 10}, 15 | /* change the name of _id to be tag */ 16 | {"$project": 17 | {_id:0, 18 | 'tag':'$_id', 19 | 'count' : 1 20 | } 21 | } 22 | ]) 23 | -------------------------------------------------------------------------------- /course-m101p/week5-handout14-unwind-quiz.2332e562e2ef.js: 
-------------------------------------------------------------------------------- 1 | use agg; 2 | db.items.drop(); 3 | db.items.insert({_id:'nail', 'attributes':['hard', 'shiny', 'pointy', 'thin']}); 4 | db.items.insert({_id:'hammer', 'attributes':['heavy', 'black', 'blunt']}); 5 | db.items.insert({_id:'screwdriver', 'attributes':['long', 'black', 'flat']}); 6 | db.items.insert({_id:'rock', 'attributes':['heavy', 'rough', 'roundish']}); 7 | db.items.aggregate([{$unwind:"$attributes"}]); 8 | -------------------------------------------------------------------------------- /course-m101p/week5-handout15-double-unwind.97e478dd0b7c.js: -------------------------------------------------------------------------------- 1 | use agg; 2 | db.inventory.drop(); 3 | db.inventory.insert({'name':"Polo Shirt", 'sizes':["Small", "Medium", "Large"], 'colors':['navy', 'white', 'orange', 'red']}) 4 | db.inventory.insert({'name':"T-Shirt", 'sizes':["Small", "Medium", "Large", "X-Large"], 'colors':['navy', "black", 'orange', 'red']}) 5 | db.inventory.insert({'name':"Chino Pants", 'sizes':["32x32", "31x30", "36x32"], 'colors':['navy', 'white', 'orange', 'violet']}) 6 | db.inventory.aggregate([ 7 | {$unwind: "$sizes"}, 8 | {$unwind: "$colors"}, 9 | {$group: 10 | { 11 | '_id': {'size':'$sizes', 'color':'$colors'}, 12 | 'count' : {'$sum':1} 13 | } 14 | } 15 | ]) 16 | 17 | -------------------------------------------------------------------------------- /course-m101p/week5-handout15-reversing-double-unwind.71fe17935216.js: -------------------------------------------------------------------------------- 1 | use agg; 2 | db.inventory.drop(); 3 | db.inventory.insert({'name':"Polo Shirt", 'sizes':["Small", "Medium", "Large"], 'colors':['navy', 'white', 'orange', 'red']}) 4 | db.inventory.insert({'name':"T-Shirt", 'sizes':["Small", "Medium", "Large", "X-Large"], 'colors':['navy', "black", 'orange', 'red']}) 5 | db.inventory.insert({'name':"Chino Pants", 'sizes':["32x32", "31x30", "36x32"], 
'colors':['navy', 'white', 'orange', 'violet']}) 6 | db.inventory.aggregate([ 7 | {$unwind: "$sizes"}, 8 | {$unwind: "$colors"}, 9 | {$group: 10 | { 11 | '_id': "$name", 12 | 'sizes': {$addToSet: "$sizes"}, 13 | 'colors': {$addToSet: "$colors"}, 14 | } 15 | } 16 | ]) 17 | 18 | -------------------------------------------------------------------------------- /course-m101p/week5-handout15-reversing-double-unwind2.ca75fafef175.js: -------------------------------------------------------------------------------- 1 | use agg; 2 | db.inventory.drop(); 3 | db.inventory.insert({'name':"Polo Shirt", 'sizes':["Small", "Medium", "Large"], 'colors':['navy', 'white', 'orange', 'red']}) 4 | db.inventory.insert({'name':"T-Shirt", 'sizes':["Small", "Medium", "Large", "X-Large"], 'colors':['navy', "black", 'orange', 'red']}) 5 | db.inventory.insert({'name':"Chino Pants", 'sizes':["32x32", "31x30", "36x32"], 'colors':['navy', 'white', 'orange', 'violet']}) 6 | db.inventory.aggregate([ 7 | {$unwind: "$sizes"}, 8 | {$unwind: "$colors"}, 9 | /* create the color array */ 10 | {$group: 11 | { 12 | '_id': {name:"$name",size:"$sizes"}, 13 | 'colors': {$push: "$colors"}, 14 | } 15 | }, 16 | /* create the size array */ 17 | {$group: 18 | { 19 | '_id': {'name':"$_id.name", 20 | 'colors' : "$colors"}, 21 | 'sizes': {$push: "$_id.size"} 22 | } 23 | }, 24 | /* reshape for beauty */ 25 | {$project: 26 | { 27 | _id:0, 28 | "name":"$_id.name", 29 | "sizes":1, 30 | "colors": "$_id.colors" 31 | } 32 | } 33 | ]) 34 | 35 | -------------------------------------------------------------------------------- /webinar-build-an-app/.gitignore: -------------------------------------------------------------------------------- 1 | data 2 | mycms_mongodb 3 | -------------------------------------------------------------------------------- /webinar-build-an-app/20140130-getting-started.md: -------------------------------------------------------------------------------- 1 | # Session 1 · 30.01.2014 · Getting started 2 | 
3 | Back to Basics · Introduction 4 | 5 | ## Summary 6 | 7 | * Document Model 8 | * Simplify development 9 | * Simplify scale out 10 | * Improve performance 11 | * MongoDB 12 | * Rich general purpose database 13 | * Built-in high availability and failover 14 | * Built-in scale out 15 | 16 | ## Further reading 17 | 18 | * [Analyze Performance of Database Operations](http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/) enable with `db.setProfilingLevel(2)` and query with `db.system.profile.find( { millis : { $gt : 5 } } ).pretty()` for instance; `show profile` displays the five most recent events that took more than 1ms 19 | * [Measuring time and analyzing a query: explain()](http://docs.mongodb.org/manual/reference/method/cursor.explain/) `db.collection.find().explain().millis` gives the time taken by a given query 20 | * [Ways to implement data versioning in MongoDB](http://stackoverflow.com/questions/4185105/ways-to-implement-data-versioning-in-mongodb) StackOverflow 21 | * [Atomic Counters using MongoDB’s findAndModify with PHP](http://chemicaloliver.net/programming/atomic-counters-using-mongodbs-findandmodify-with-php/) 22 | * [How to apply automatic versioning to a C# class](http://stackoverflow.com/questions/20351698/how-to-apply-automatic-versioning-to-a-c-sharp-class) 23 | 24 | ## Live Chat 25 | 26 | (not saved that time) -------------------------------------------------------------------------------- /webinar-build-an-app/20140206-build-app-part1-getting-started.md: -------------------------------------------------------------------------------- 1 | # Session 2 · 06.02.2014 · Build an Application Series 2 | 3 | Part One · Getting started · Schema design and application architecture 4 | 5 | ## Summary 6 | 7 | * Flexible schema documents with ability to embed rich and complex data structures for performance 8 | * Schema is designed around data access patterns – and not data storage 9 | * Referencing for more flexibility 10 | *
Develop schema with scale out in mind; shard key consideration is important. 11 | 12 | ## Further reading 13 | 14 | * [myCMS Skeleton source code](http://www.github.com/mattbates/mycms_mongodb) 15 | * [Data Models](http://docs.mongodb.org/manual/data-modeling) 16 | * [Use Case: Metadata and Asset Management](http://docs.mongodb.org/ecosystem/use-cases/metadata-and-asset-management/) describes the design and pattern of a content management system using MongoDB modeled on the popular Drupal CMS 17 | * [Use Case: Storing comments](http://docs.mongodb.org/ecosystem/use-cases/storing-comments/) outlines the basic patterns for storing user-submitted comments in a content management system (CMS). 18 | 19 | ## Getting ready with the MyCMS Skeleton Code 20 | 21 | $ git clone http://www.github.com/mattbates/mycms_mongodb 22 | $ cd mycms_mongodb 23 | $ virtualenv venv 24 | $ source venv/bin/activate 25 | $ pip install -r requirements.txt 26 | 27 | $ mkdir -p data/db 28 | $ mongod --dbpath=data/db --fork --logpath=mongod.log 29 | 30 | $ python web.py 31 | $ deactivate 32 | 33 | ## Live Chat 34 | 35 | […] 36 | 37 | Jon Rangel: @Raju, yes we have an officially supported Scala driver 38 | 39 | John Page: @Raju - MongoDB does support Scala and this webinar will not mention mongoviewer, no. 40 | 41 | alain helaili: @Chen asked if MongoDB could guarantee writes. Yes, MongoDB has several levels of write concerns so one can ensure that data has been written to one or several servers 42 | 43 | Nick Geoghegan: Nathan: No 44 | 45 | Arthur Viegers: @sumit: MongoDB is an operational database, hadoop is not. Hadoop is for offline crunching of data. 46 | 47 | Henrik Ingo: @sumit asks about difference between MongoDB and Hadoop. These are two different databases / data storage products. Hadoop is typically used only for quite heavy batch analytics. MongoDB is more like a traditional database, can be used both for app development and analytical data storage.
48 | 49 | Daniel Roberts: Is the sound better than last week? 50 | 51 | Nick Geoghegan: Paras: It depends on your use case. An m1.medium is good, as it guarantees CPU, RAM and networking 52 | 53 | Henrik Ingo: @Jean-Baptiste, yes, the recording, slides and code examples will be made available after the webinar. 54 | 55 | John Page: @James asks about file locking and writes - MongoDB uses locks to ensure multiple threads do not corrupt data but these locks are held for a very short time and writes can be scaled fairly easily to hundreds of thousands per second. 56 | 57 | Nick Geoghegan: Paras: There are also MongoDB AMIs available in the AWS marketplace 58 | 59 | Eoin Brazil: Hi Leon Liang - if you want to learn about schemas you can watch the first webinar in this series at - http://www.mongodb.com/webinar/intro_mongodb_jan14 60 | 61 | Daniel Roberts: Code is at git.io/BldUqA 62 | 63 | Nick Geoghegan: William: No, YOU have the best surname ;-) 64 | 65 | Tugdual Grall: @sumit : I invite you to look at this Webinar about Hadoop and MongoDB: http://www.mongodb.com/webinar/mongodb-hadoop-jan2014 66 | 67 | Sharon Elkayam: @Shlomo: The previous webinar slides and recordings can be found here: http://pages.mongodb.com/017HGS5930003q800E8aY00 and http://pages.mongodb.com/017HGS5930003q900E8aY00 68 | 69 | 70 | 71 | Arthur Viegers: @ofri: only actions on documents are atomic in MongoDB. You cannot do atomic actions affecting more than one document. 72 | 73 | Nick Geoghegan: Paras: For production, 3 nodes in a replica set is best. It gives you high availability and redundancy. 74 | 75 | Tugdual Grall: @Antonio, yes you have some tools to integrate with reporting tools such as Pentaho and Jasper Reports; for the forms, today people are using many Web frameworks depending on your programming language 76 | 77 | Owen Hughes: The webinars will be recorded for playback later. The code will be posted to Github 78 | 79 | Henrik Ingo: @ofri asks about transactions or atomicity.
The answer is that updates are atomic on a per-document granularity. So for example updating 2 fields in the same document is atomic. This turns out to be surprisingly useful. However, updating multiple documents is not possible in a transactional, atomic manner. See http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/ for more info 80 | 81 | alain helaili: Are there tools for Windows forms and reporting? 82 | 83 | Nick Geoghegan: Dejan: It is not recommended as openVZ does interesting things with memory management. 84 | 85 | alain helaili: @Antonio asked if there were tools for Windows forms and reporting: there are a bunch of frameworks on the market and some reporting tools such as Pentaho and Jaspersoft provide support for MongoDB 86 | 87 | Arthur Viegers: @ofri: Yes, the only way to do this is to handle it at the application level. 88 | 89 | Tugdual Grall: @Roland, both are good, it really depends on your application logic. 90 | 91 | John Page: Let's say I had 5,000 writes to perform for a single database, but different collections. Is this a bad idea? Would it take a long time? 92 | 93 | Henrik Ingo: @Bojan asks about Lucene indexing. To be precise, MongoDB 2.4 introduced full text indexes. This is a MongoDB feature implemented by ourselves, it is not based on Lucene. In other words, you can use MongoDB full text search, or use a separate Lucene server if you specifically want Lucene. 94 | 95 | Eoin Brazil: Hi @Chen Roth - you can look at our case studies for ecommerce at http://www.mongodb.com/industries/retail or contact us (https://www.mongodb.com/contact) and we'll have a sales rep talk to you directly 96 | 97 | Ger Hartnett: @roland Reusing ObjectId will encapsulate timestamps and save you having to make sure the id is unique. 98 | 99 | John Page: @Sergey - it's technically possible to implement distributed transactions but it's not a focus for us as for most use cases - if you understand how to use the document model you don't need them.
It's like asking when Node.JS is going to implement threads. 100 | 101 | alain helaili: @Shilpen asked about monitoring tools: There are a bunch of monitoring tools such as mongostat and MMS. 102 | 103 | Sharon Elkayam: @Borat - that depends on the requirements you have from text search in your application. MongoDB built-in text search covers stemming and tokenization for 14 languages at the moment, including support within aggregation framework (2.6). More details can be found on our official docs 104 | 105 | Nick Geoghegan: Levi: Yes, it will be available later. 106 | 107 | Henrik Ingo: @Samuel asks if the sample app is available for Java? We use Python for this webinar series, as it should be an easy language for everyone to follow. However, we plan to do a similar series with Java and JavaScript later. 108 | 109 | Eoin Brazil: Hi @Sohail Iqbal, do you want to put a post in Google Group with your product example and we can discuss it in more detail there - https://groups.google.com/forum/#!overview 110 | 111 | Olivier Lange: Clojure is missing from the survey; it is my language of choice 112 | 113 | Nick Geoghegan: Shilpen: On disk, or in transit? In transit, yes - using SSL. 114 | 115 | Tugdual Grall: @Martin in the end the native driver is used. Mongoose is just an ODM (Object Document Mapper) so it really depends on your application and how you want to develop. What I saw from npm stats is 50% of the developers seem to use the Node native driver alone, others with a top layer such as Mongoose 116 | 117 | Ger Hartnett: @Sergey if you really want to distribute transactions across documents you can use this design pattern http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/ 118 | 119 | Arthur Viegers: @Shilpen: MongoDB does not encrypt data at rest. SSL on all members in a replica set is supported.
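Ger Hartnett's pointer above to the two-phase commit pattern deserves a sketch. The following pure-Python simulation uses dicts standing in for MongoDB documents; the `pendingTransactions` field and state sequence follow the linked docs tutorial, while the function and account names are illustrative, not driver code:

```python
# Sketch of the two-phase commit pattern (per the docs tutorial),
# simulated with plain dicts standing in for MongoDB documents.

def transfer(accounts, transactions, txn_id, src, dst, amount):
    # Phase 0: record the intent in a transaction document.
    txn = {"_id": txn_id, "source": src, "destination": dst,
           "value": amount, "state": "initial"}
    transactions[txn_id] = txn

    # Phase 1: mark pending, then apply to both accounts, tagging
    # each account with the txn id so a crash can be recovered.
    txn["state"] = "pending"
    accounts[src]["balance"] -= amount
    accounts[src]["pendingTransactions"].append(txn_id)
    accounts[dst]["balance"] += amount
    accounts[dst]["pendingTransactions"].append(txn_id)

    # Phase 2: mark applied, clear the pending markers, then done.
    txn["state"] = "applied"
    accounts[src]["pendingTransactions"].remove(txn_id)
    accounts[dst]["pendingTransactions"].remove(txn_id)
    txn["state"] = "done"
    return txn

accounts = {
    "A": {"balance": 100, "pendingTransactions": []},
    "B": {"balance": 50, "pendingTransactions": []},
}
transactions = {}
transfer(accounts, transactions, 1, "A", "B", 30)
print(accounts["A"]["balance"], accounts["B"]["balance"])  # 70 80
```

A recovery job would scan for transactions stuck in `pending` or `applied` and roll them forward or back; that part is omitted here.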
120 | 121 | Eoin Brazil: Hi @joe connolly, can you provide a little more detail on your serialiser question with the console/log output in Google Group - https://groups.google.com/forum/#!overview 122 | 123 | Tugdual Grall: @Shilpen : you can get some stats from your database using mongostat, and explain; take a look at http://docs.mongodb.org/manual/reference/program/mongostat/ 124 | 125 | Henrik Ingo: @Samuel asks if there is sample code in Java available on Github? No, not related to this webinar specifically. To get started with MongoDB in Java, have a look at http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-java-driver/ 126 | 127 | Ger Hartnett: @olivier There are a number of Clojure MongoDB drivers 128 | 129 | Arthur Viegers: @Daniel: You can reuse your DB object (it is what I do by default). In your connection to the DB configure it to auto-reconnect in case the DB connection gets lost. 130 | 131 | alain helaili: @Maor asked if there are ORMs for Mongo: yes, there are multiple choices here, but you'll find you don't need it that much 132 | 133 | Massimo Brignoli: @Andreas is asking about concurrency: MongoDB has database level locking, so in the end you will have the last value 134 | 135 | Tugdual Grall: @Maor: yes you have Morphia ( https://code.google.com/p/morphia/ ) or Hibernate OGM (https://docs.jboss.org/hibernate/ogm/4.0/reference/en-US/html_single/#ogm-mongodb ) but sometimes it is also good to stay with the native driver just to keep the benefits of the "dynamic approach" 136 | 137 | alain helaili: @William asked: Adding an image to a document is only possible when the image is not bigger than 16MB? Yes, GridFS in MongoDB does that. 138 | 139 | Daniel Roberts: @Sumit - Yes… use Spring Data for MongoDB… We have customers in production using it.
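Several answers above mention ODMs (Morphia, Spring Data, Mongoose). The core idea — mapping a typed object to a MongoDB-style document and back — can be sketched in a few lines of plain Python; the helpers below are hypothetical and stand in for what a real ODM automates:

```python
from dataclasses import dataclass, field, asdict

# Minimal illustration of what an ODM does: serialize a typed
# object into a BSON-like dict and rebuild it on the way out.
# Hypothetical helpers, not Morphia's or Spring Data's actual API.

@dataclass
class Article:
    title: str
    author: str
    tags: list = field(default_factory=list)

def to_document(obj):
    # Flatten the object's fields into a plain dict document.
    return asdict(obj)

def from_document(cls, doc):
    # Rebuild a typed object from a stored document.
    return cls(**doc)

doc = to_document(Article("Intro", "matt", ["mongodb"]))
roundtrip = from_document(Article, doc)
print(doc)
```

Real ODMs add query builders, validation and lazy references on top of this mapping; staying with the native driver keeps the "dynamic approach" Tugdual mentions.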
140 | 141 | Tugdual Grall: @William if you need to store "files" you can use GridFS, which allows you to store files in the database http://docs.mongodb.org/manual/core/gridfs/ 142 | 143 | Daniel Roberts: np 144 | 145 | John Page: @Tibor asks when to use one or many collections - generally each type of record should be in one collection, although they don't have to be identical; take a new collection when there is no reason to put them in the same one. 146 | 147 | Arthur Viegers: @Santi: You can use GridFS to store pictures in MongoDB 148 | 149 | Marc Schwering: @Maor, yes there are ODMs (Object Document Mappers) for some languages, like Morphia (https://code.google.com/p/morphia/) or Spring http://projects.spring.io/spring-data-mongodb/ and there is Hibernate support. 150 | 151 | Nick Geoghegan: William: Lots! 24000 152 | 153 | Nick Geoghegan: William Geoghegan: That's the default, mind you. It can be increased fairly easily enough 154 | 155 | Tugdual Grall: @Andreas GridFS creates chunks and stores them in special collections http://docs.mongodb.org/manual/core/gridfs/ 156 | 157 | Nick Geoghegan: William Geoghegan: Yeah. Be a bit of a nightmare to admin, though. It'd depend on your use case 158 | 159 | Charlie Page: @Carlos you can't change the max 16MB size without modifying the source code 160 | 161 | Henrik Ingo: @Thomas asks why there is a 16 MB limit for document size? It is a somewhat arbitrary decision. 16 MB is quite a lot for most cases, for example it is more than the longest book ever written. You can also store larger objects in a MongoDB cluster using the GridFS API. This can take a large file, whose contents are transparently split and stored into multiple records, each smaller than 16 MB. 162 | 163 | Henrik Ingo: A lot of questions about GridFS. See http://docs.mongodb.org/manual/core/gridfs/ for getting started / quick overview. 164 | 165 | Charlie Page: @Andy a schema is what your program expects to find in the database essentially.
The objects are singular but a find can return one or many of them 166 | 167 | John Page: take a new collection when there is no reason to put them in the same one. 168 | 169 | Nick Geoghegan: Schmuel: On disk, there will be fragmentation. This space will try to be used again, by the storage algorithm 170 | 171 | John Page: @Miguel asked if _id is a naming convention; yes, the Primary Key field (identifier) must always be called _id and it's the only field you MUST have 172 | 173 | Ger Hartnett: @Jean-Baptiste The document limit does not apply to blobs stored in GridFS 174 | 175 | Henrik Ingo: @Jean-Baptiste it is possible to store objects or files larger than 16 MB using the GridFS API. See my link earlier in this chat. 176 | 177 | Tugdual Grall: @Andreas if you do not use GridFS, you have to "explode" the document yourself 178 | 179 | Jon Rangel: @Tibor: there is no limit on the number of fields in a document, only on the maximum size of the document (16MB) 180 | 181 | Henrik Ingo: @Rajesh asks, when you delete comments from the array, would your document still be contiguous in memory? The answer is yes. Essentially the document is "rewritten" in place. 182 | 183 | Sharon Elkayam: @Rui Pedro Lima - yes, it is supported 184 | 185 | Eoin Brazil: Hi @Amisha Patel - there will be a DBA course in Feb - https://education.mongodb.com/courses/10gen/M102/2014_February/about and an advanced ops course in March, https://education.mongodb.com/courses/10gen/M202/2014_April/about 186 | 187 | Nick Geoghegan: Tibor: 16MB is a lot of data - imagine all the works of Shakespeare and Arthur Conan Doyle together... you would still have PLENTY of room free 188 | 189 | Arthur Viegers: @Francois-Xavier: GridFS stores your data over two collections: one to store the data and the other to store the meta data.
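Arthur's point above — GridFS spreads a file over two collections, metadata in one (like `fs.files`) and fixed-size data chunks in the other (like `fs.chunks`) — can be simulated with plain Python. The tiny chunk size is for illustration only; real GridFS uses chunks of roughly 256 KB:

```python
# Rough simulation of GridFS storage: a metadata "collection"
# (dict keyed by file id) plus a list of chunk documents.
# CHUNK_SIZE is deliberately tiny for illustration.

CHUNK_SIZE = 4  # bytes; real GridFS uses ~256 KB chunks

def put_file(files, chunks, file_id, name, data):
    # One metadata document per file, mirroring fs.files fields.
    files[file_id] = {"_id": file_id, "filename": name,
                      "length": len(data), "chunkSize": CHUNK_SIZE}
    # The payload is split into numbered chunk documents.
    for n, i in enumerate(range(0, len(data), CHUNK_SIZE)):
        chunks.append({"files_id": file_id, "n": n,
                       "data": data[i:i + CHUNK_SIZE]})

def get_file(chunks, file_id):
    # Reassemble by gathering this file's chunks in order.
    parts = sorted((c for c in chunks if c["files_id"] == file_id),
                   key=lambda c: c["n"])
    return b"".join(c["data"] for c in parts)

files, chunks = {}, []
put_file(files, chunks, 1, "hello.txt", b"hello gridfs")
print(len(chunks), get_file(chunks, 1))
```

Each chunk document stays well under the 16 MB limit, which is how GridFS sidesteps it for large files.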
190 | 191 | John Page: @Miguel asked if _id is a naming convention; yes, the Primary Key field (identifier) must always be called _id and it's the only field you MUST have 192 | 193 | Eoin Brazil: Hi @Fulvio Di Domizio, we provide GridFS to act as a file storage layer for documents larger than 16MB 194 | 195 | Tugdual Grall: @Mohamed Yes you have ETL connectors, for example for Informatica, Talend; see our partners page: http://www.mongodb.com/partners/software 196 | 197 | Henrik Ingo: @Mohamed asks if it is possible to pull data from MongoDB using conventional ETL tools. A few ETL tools already support MongoDB (really well). See http://www.mongodb.com/partners/software 198 | 199 | alain helaili: @Maor asked if updating an existing document without reading the document completely into memory was possible. Yes, you do not need to read the document to update it 200 | 201 | Ger Hartnett: @Aleksandr there had to be some limit and 16MB ensures a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth 202 | 203 | Tugdual Grall: @Andreas, you will get an exception. But remember, 16MB of JSON is HUGE! 204 | 205 | Massimo Brignoli: @Ritesh: check GridFS in our manual page: http://docs.mongodb.org/manual/core/gridfs/ 206 | 207 | Nick Geoghegan: Dejan: It takes up RAM, which can be used for data instead of names 208 | 209 | Eoin Brazil: Hi @Dimtry Kiselev, we don't support embedded platforms at this point in time; our supported platforms can be seen at http://www.mongodb.org/downloads 210 | 211 | Charlie Page: @Colm collections should be used to divide your data up logically, however you can also use them to break up data for other reasons. There is complete data modeling information here: http://docs.mongodb.org/manual/data-modeling/ 212 | 213 | Charlie Page: @Maor you have to read it all into memory 214 | 215 | Nick Geoghegan: Taaniel: By reusing space.
If your document is expected to grow in size, over time, you can use "power of 2 sizes" to limit the fragmentation on disk 216 | 217 | Ger Hartnett: @Maor, yes you can update a document without loading it all into memory first 218 | 219 | Jon Rangel: @Bojan Saric asked for confirmation about the most recent data model shown: this is one article document containing the most recent 10 comments, and one document per comment bucket, each containing up to 100 comments 220 | 221 | Sharon Elkayam: @Sean Furth - you can also use capped arrays (introduced in version 2.4) 222 | 223 | Charlie Page: @Maor MongoDB loads a document into memory to modify it 224 | 225 | John Page: @Maor - that's actually a tricky question so you got two answers - on the server, the whole record will be read from disk into RAM during the update - but you do not need to pull the whole document to your client, update it and push it all back to the server. Hope that clears it up. 226 | 227 | Henrik Ingo: @Mykhailo asks if it costs a lot to update the arrays or buckets? The answer is that in MongoDB updates of documents are often "update in place", which is very efficient. But this is not always the case, there are several things that can lead to inefficient updates. 228 | 229 | Eoin Brazil: Hi @Vihang Shah, in this example there is no caching layer but you can easily add another cache layer for performance if required 230 | 231 | Jon Rangel: @mykhailo tiurin asked about updating the "top 10" array in the article document. MongoDB provides an easy and efficient way to maintain 'capped' arrays of this nature: http://blog.mongodb.org/post/58782996153/push-to-sorted-array 232 | 233 | John Page: @Jean Baptiste - there is no real reason we took Bottle and Angular for this but we had to choose something; your choice of UI frameworks does not affect how you design Schemas and use MongoDB hopefully. 234 | 235 | Olivier Lange: Is there an easy way to measure time taken by a query from the shell?
db.coll.find().explain() does not seem to return time. 236 | 237 | Eoin Brazil: Hi @sumit kumar & @chris burton - there are details regarding the setup and install of PIP available at - http://www.pip-installer.org/en/latest/ 238 | 239 | John Page: @Oliver asks how to measure time taken for a query from the shell, use explain() and look in the millis field. 242 | 243 | Henrik Ingo: @Shlomo asks if there is a library in C# that converts objects to BSON. In fact the MongoDB C# connector can do exactly that. http://docs.mongodb.org/ecosystem/tutorial/serialize-documents-with-the-csharp-driver/ 244 | 245 | Arthur Viegers: @Burim: We will do a node.js webinar later in the series. 246 | 247 | alain helaili: @Burim asked about a node.js demo : We're working on it. Coming soon. 248 | 249 | John Page: @Mykhalio - this app is similar to the one in M101P but not exactly the same 250 | 251 | Massimo Brignoli: @Rajender: We will have a Java webinar later in the series 252 | 253 | alain helaili: @Rui asked about indexing embedded fields: this is totally doable and won't hurt at all. Actually, probably a good idea if you're going to query on that. 254 | 255 | Tugdual Grall: @Rajender @Alession : yes we will have a Webinar around Java Development - You can also start looking at the Java Driver tutorial here : http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-java-driver/#getting-started-with-java-driver 256 | 257 | Massimo Brignoli: @Gianni: Yes, I'll replicate this webinar series in Italian starting the 11th of March 258 | 259 | Olivier Lange: oh! I missed the millis field. Thanks. So this works: db.users.find({"user":"mattbates"}).explain() -> { ..., millis: 6, ...
} 260 | 261 | Eoin Brazil: Hi Jean-Baptiste, this is a longer question; can you post the Ember/Python/MongoDB schema layout you envisage and questions at our Google Group https://groups.google.com/forum/#!forum/mongodb-user 262 | 263 | Massimo Brignoli: @Gianni: You're welcome; if someone is close to Milan, they can also come to our MongoDB user group, next meeting the 18th of March 264 | 265 | Arthur Viegers: @Denizhan: You can use your own created unique values for the _id field (even composites are allowed). There are no performance drawbacks. The only requirement is that the value is unique in the collection. 266 | 267 | John Page: @Denizhan asks what the implications of supplying _id yourself are. It can be faster than using auto generated _id's but you are responsible for generating unique ones; also there are performance implications at the high end, where constantly incrementing _id's are faster than random, unless using shards, where you need to understand how to get optimal ones as shard keys. 268 | 269 | Laura Czajkowski: To find your nearest MongoDB Meetup have a look at http://www.mongodb.com/user-groups-mongodb 270 | 271 | Massimo Brignoli: @Fulvio @Gianni: check the meetup page: http://www.meetup.com/MongoDB-Milan/ 272 | 273 | Laura Czajkowski: Sign up to the newsletter to know when the nearest and newest event is happening http://www.mongodb.com/newsletter 274 | 275 | Eoin Brazil: Hi @Mykhailo Tiurin, there are a few good presentations on example schemas - http://www.slideshare.net/friedo/data-modeling-examples and design pitfalls - https://blog.serverdensity.com/mongodb-schema-design-pitfalls/ 276 | 277 | John Page: @Shlomo asks - do you get the _id on inserts - yes you do 278 | 279 | alain helaili: Can I update an existing document without reading the document completely into memory?
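The hybrid comments model Jon Rangel confirmed earlier — one article document embedding the 10 most recent comments, plus bucket documents of up to 100 comments each — can be sketched in plain Python; all names below are illustrative:

```python
# Sketch of the article + comment-buckets schema: the article
# keeps only the newest comments embedded, while every comment
# also lands in a bucket document capped at BUCKET_SIZE entries.

BUCKET_SIZE = 100
RECENT = 10

def add_comment(article, buckets, comment):
    # Keep only the RECENT newest comments in the article itself.
    article["recent_comments"] = (
        article["recent_comments"] + [comment])[-RECENT:]
    # Start a fresh bucket once the current one is full.
    if not buckets or len(buckets[-1]["comments"]) >= BUCKET_SIZE:
        buckets.append({"article_id": article["_id"], "comments": []})
    buckets[-1]["comments"].append(comment)

article = {"_id": 1, "recent_comments": []}
buckets = []
for i in range(250):
    add_comment(article, buckets, {"n": i})
print(len(article["recent_comments"]), len(buckets))  # 10 3
```

The article stays small and cheap to render, while full comment history lives in a bounded number of predictable-size bucket documents.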
280 | 281 | Massimo Brignoli: @Manuel: this is the Madrid user group: 282 | 283 | Massimo Brignoli: http://www.meetup.com/Madrid-MongoDB-User-Group/ 284 | 285 | Laura Czajkowski: List of all user groups can be found - http://www.mongodb.com/user-groups-mongodb 286 | 287 | Nick Geoghegan: Nicolas: Yes, they will all be made available after the talk. http://www.mongodb.com/webinar/intro_mongodb_jan14 288 | 289 | Sam Weaver: @Daniel Maxerov: it would be done manually. 290 | 291 | Henrik Ingo: @Simone asks if we can do Java examples next time? Answer is "no", this series will continue in Python. But there will be another series later which is in Java. Note that we also have a Java based developer course M101J on education.mongodb.com. 292 | 293 | Nick Geoghegan: William G: So are we! 294 | 295 | John Page: @Denizhan asks what the implications of supplying _id yourself are. It can be faster than using auto generated _id's but you are responsible for generating unique ones; also there are performance implications at the high end, where constantly incrementing _id's are faster than random, unless using shards, where you need to understand how to get optimal ones as shard keys. 296 | 297 | John Page: @Jay Ma asks how you can generate a unique sequence value - _id's are generated by a combination of datetime / client IP / process id / a one-up counter in the client, not from the server. If you want to generate your own you can use findAndModify and $inc to create something like a sequence, although for performance you would want to batch requests to it to avoid a server call each time. 298 | 299 | Sam Weaver: @jay ma: the _id is generated automatically using a hash of MAC address, timestamp and some other things, which makes it unique. You can override it with your own value if you like. You can also ensure it has a unique index so no duplicates can exist.
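The _id recipe John Page and Sam Weaver describe just above can be sketched as follows — a simplified take on the classic 12-byte BSON ObjectId layout (4-byte timestamp, 3-byte machine id, 2-byte process id, 3-byte incrementing counter), not the actual driver code:

```python
import itertools
import os
import time

# Simplified sketch of the classic 12-byte ObjectId layout.
# The counter starts at a random value, as real drivers do.
_counter = itertools.count(int.from_bytes(os.urandom(3), "big"))

def object_id(machine_id: bytes) -> str:
    ts = int(time.time()).to_bytes(4, "big")          # 4-byte timestamp
    pid = (os.getpid() % 0xFFFF).to_bytes(2, "big")   # 2-byte pid
    count = (next(_counter) % 0xFFFFFF).to_bytes(3, "big")  # 3-byte counter
    return (ts + machine_id[:3] + pid + count).hex()  # 24 hex chars

a = object_id(b"abc")
b = object_id(b"abc")
print(len(a), a != b)  # 24 True
```

Because the timestamp leads, ids generated on one client sort roughly by creation time — which is also why monotonically increasing _id's make a poor shard key without hashing.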
300 | 301 | Sam Weaver: @Ritesh Aryal: http://www.php.net/manual/en/class.mongodb.php 302 | 303 | Eoin Brazil: Hi @Laurence Curtis, it depends on whether each user is represented by a document or whether a list of users is represented in a single document. A longer question at https://groups.google.com/forum/#!forum/mongodb-user might be better to explain these differences. 304 | 305 | John Page: @Maor - Already answered that; yes, you can get back a subset of fields using the projection property in the query to choose the fields you want back. 306 | 307 | Eoin Brazil: Just a reminder to all, the hashtag for this session is #MongoDBBasics if you want to follow up over Twitter 308 | 309 | Nick Geoghegan: William Geoghegan: I play ice cream truck music 310 | 311 | Marc Schwering: For PHP I suggest you take a look into: http://www.php.net/manual/en/mongo.tutorial.php There are also existing web frameworks using MongoDB and PHP, like Lithium: http://li3.me/ 312 | 313 | Henrik Ingo: @Vivek asked why it is not advisable to have an unbounded array that can grow in size; is it not good practice? From a style point of view it is actually not a problem to do that. The reason we discourage it is that an update that makes the document grow can be much more expensive. It may cause the document to be moved to another physical location, with new space allocated for it, and index pointers updated. Especially if the array field itself is indexed, that can cause a lot of index updates. Other than performance reasons, there is no problem in growing arrays. Also, you of course need to be mindful of the 16MB limit, but usually you don't come anywhere close to that.
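The capped arrays Sharon and Jon Rangel mention (`$push` with `$each`/`$slice`, new in 2.4) answer Henrik's unbounded-array concern by keeping an embedded array bounded. The effect can be simulated in plain Python:

```python
# Simulates the effect of MongoDB's $push with $each/$slice:
# append the new elements, then keep only the last slice_n, so
# an embedded array never grows without bound.

def push_capped(doc, field, values, slice_n):
    doc[field] = (doc.get(field, []) + list(values))[-slice_n:]
    return doc

doc = {"_id": 1, "top_comments": []}
for i in range(25):
    push_capped(doc, "top_comments", [i], slice_n=10)
print(doc["top_comments"])  # [15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
```

In the shell the equivalent update would use `{ $push: { top_comments: { $each: [...], $slice: -10 } } }`, done atomically server-side.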
314 | 315 | Massimo Brignoli: To the Italians listening: the next MongoDB Milan meetup will be on the 18th of March at the Red Hat Italia office, via Gustavo Fara 28, at 19.00 316 | -------------------------------------------------------------------------------- /webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.14.41.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.14.41.png -------------------------------------------------------------------------------- /webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.37.38.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.37.38.png -------------------------------------------------------------------------------- /webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.38.00.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.38.00.png -------------------------------------------------------------------------------- /webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.50.37.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.50.37.png -------------------------------------------------------------------------------- /webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.53.31.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/olange/learning-mongodb/9c6cd4a3a36adb148e655bb7eb4ec53f1eb4e157/webinar-build-an-app/20140206-build-app-part1-getting-started/Capture d’écran 2014-02-06 à 16.53.31.png -------------------------------------------------------------------------------- /webinar-build-an-app/20140220-build-app-part2-interacting-database.md: -------------------------------------------------------------------------------- 1 | # Session 3 · 20.02.2014 · Build an Application Series 2 | 3 | Part Two · Interacting with the database 4 | 5 | ## Further reading 6 | 7 | * [Use Case: Storing Log Data](http://docs.mongodb.org/ecosystem/use-cases/storing-log-data/) outlines the basic patterns and principles for using MongoDB as a persistent storage engine for log data from servers and other machine data 8 | * [Use Case: Pre-Aggregated Reports](http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/) outlines the basic patterns and principles for using MongoDB as an engine for collecting and processing events in real time for use in generating up to the minute or second reports. 9 | 10 | ## Live Chat 11 | 12 | Eoin Brazil: Hi All, don't forget that if you have specific questions after this talk or on other topics you can post them to our Google Group - https://groups.google.com/forum/#!topic/mongodb-user 13 | 14 | Eoin Brazil: For anyone who is interested these slides and the related video will be available on our website under the event page. 
Our last session, 2, can be found at http://www.mongodb.com/presentations/webinar-build-application-series-session-2-getting-started 15 | 16 | Adam Comerford: @Andrew Mori - the mongodb-user group is where to go for the C driver (and Jira of course): https://github.com/mongodb/mongo-c-driver#support--feedback 17 | 18 | Jon Rangel: @andrewkentish asked about storing data to track changes over time. There are some good examples on schema design for storing time series data in MongoDB here: http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/. Also, see this blog post: http://blog.mongodb.org/post/65517193370/schema-design-for-time-series-data-in-mongodb 19 | 20 | Norberto Leite: Andy asked: Is there a $notexists command? 21 | Yes there is http://docs.mongodb.org/manual/reference/operator/query/ne/ 22 | with this you can check the differences between documents that have certain fields from others that don't 23 | 24 | Sam Weaver: @Byron Tie - yes that's how projections work. Anything that you don't specify in the project is assumed as 0, except for _id. That is the only thing that requires an explicit set to 0 or it will come back 25 | 26 | Henrik Ingo: Continuing to Andy's question: You may also want to combine $not and $exists to get what you're asking: http://docs.mongodb.org/manual/reference/operator/query/not/#op._S_not 27 | 28 | Henrik Ingo: Paras asked if there is an IN operator. Yes, it's called $in http://docs.mongodb.org/manual/reference/operator/query/in/ 29 | 30 | Matthew Bates: Andrew asked about how to specify which document is updated. The first parameter to the update method is a query; this can be a query for an exact match (ie with _id) or any other query using the query selectors. More info at http://docs.mongodb.org/manual/reference/method/db.collection.update/. 31 | 32 | Henrik Ingo: Paras asks if using IN has any performance overhead in a sharded cluster (like it has in Cassandra, apparently)? 
Answer: it's no different from any other query. Depending on the elements you list in the $in array, the query will be sent to a single shard, or all of them, as needed. Each shard will then perform the query in parallel. (So in many cases, a sharded cluster will be faster than a single server.) 33 | 34 | Jon Rangel: @Antonis asked how the content in this webinar series compares to the MongoDB University online education courses: The topics covered here are broadly similar to those covered in the M101 course, but the online courses additionally have hands-on exercises and go deeper in a number of areas. The online M101 course also comes in 3 flavours: M101J for Java developers, M101P for Python and M101JS for Node.js devs 35 | 36 | Norberto Leite: Zafrul asked: so how are the comments 4,5,6 AND 7,8,9 associated with the original document? Will they go into different buckets? 37 | When you perform a $push on an array, the elements are appended to the end of the array structure. They are still part of the original document: http://docs.mongodb.org/manual/reference/operator/update/push/ 38 | 39 | Eoin Brazil: Hi @Antonis, in addition to the courses on MongoDB University mentioned by Jon we have a DBA specific course M102 and a follow-up M202 for Advanced deployment and operations. All of these courses are free. 40 | 41 | Norberto Leite: Sergio asked: what does "single atomic operation" mean? 42 | An atomic operation means that every instruction received and executed by the MongoDB (mongod) server will be completed. This means that a document cannot be partially changed as far as an operation is concerned. If you update 10 fields, all 10 fields will be changed. If you insert a document, the full document is inserted. http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-support-acid-transactions 43 | 44 | Jon Rangel: @Rotem Gabay asked: How to decrement counters in an update operation? Answer: Use the $inc update operator with a negative value.
For example: $inc : { "daily.comments" : -1 } 45 | 46 | Henrik Ingo: Byron asks if write concern can be used to insert 2 documents as an atomic transaction? The answer is no. In MongoDB an insert/update/delete is atomic separately for each document. Write concerns do not change this at all. Write concerns simply affect at which point the connector returns the call to the application: after the write is applied, after it is flushed to disk, after it is replicated, and so on. But it doesn't change the atomicity of each write operation. 47 | 48 | Adam Comerford: Rotem Gabay asked: in case I don't have a journal file, when will the data be flushed to disk? Answer: Without a journal file the flush to disk is every 60 seconds in the background by default (fsync). Running without a journal is *not recommended* in any scenario where you care about your data. 49 | 50 | Eoin Brazil: Hi All, just a reminder that the hashtag for this series of webinars is #MongoDBBasics 51 | 52 | Henrik Ingo: Paras asks if there are any issues wrt MongoDB writes using EBS on EC2? Answer: No, except that the standard EBS has really poor performance. On the other hand the Provisioned IOPS and High IO options on EC2 have great performance and Amazon is a very popular platform to deploy MongoDB on. 53 | 54 | Norberto Leite: Clayton Piscopo asked: Will the practical side of Replica Sets be covered in these webinars? 55 | Yes, stay tuned! 56 | 57 | Eoin Brazil: Hi All, as @Rotem asked, the first webinar is at http://www.mongodb.com/webinar/intro_mongodb_jan14 and the second webinar is at http://www.mongodb.com/presentations/webinar-build-application-series-session-2-getting-started 58 | 59 | Matthew Bates: Daniel Maxerov asks where he can read more about bucketing and overflowing comments: best place to start is at the MongoDB docs, with an article on patterns for storing comments.
http://docs.mongodb.org/ecosystem/use-cases/storing-comments/. It covers schema design, indexing and sharding considerations. 60 | 61 | Adam Comerford: Rotem Gabay asks: what is written to the journal? All of the data? Answer: it actually flushes write-ahead redo logs, and has a concept of group commits, so yes (sort of), but it's more complicated than that. You can find out more here: http://docs.mongodb.org/manual/core/journaling/ and here: http://www.mongodb.com/presentations/storage-engine-internals 62 | 63 | Daniel Roberts: Thanks for your time guys… See you next time…! 64 | 65 | Eoin Brazil: The next webinar, the 3rd in the series, will be on the 6th of March, covering indexing, including geospatial and full-text search. The 4th webinar in the series, on March 20th, will focus on aggregation and reporting. If you have registered, you will receive a follow-up email with the details as a reminder. Hope to see you all on the 6th of March. 66 | 67 | Daniel Roberts: Please ping us via #MongoDBBasics 68 | 69 | Henrik Ingo: Paras complains that PIOPS EBS costs more. This is true, but it's usually worth it; for example, you can often get by with less RAM when the disk is faster. You can also build a RAID of standard EBS disks to make them faster. Some people use the ephemeral disks, relying on replication for data durability. Or, to be safer, you could have a few servers using ephemeral disks and one member of the replica set using an EBS disk (and allow it to be slow). 70 | 71 | Adam Comerford: Tony Farrell asks: crazy question... we need really fast performance... IT is talking about an SSD... can the journal file be on the SSD drive, while the data file(s) are on a standard platter drive? Answer: Usually you want it the other way around. The access pattern of the data is key: SSDs do really well for random access (no seeking), so for the standard MongoDB data access pattern they perform really well.
The journal has a more sequential access pattern, so it sees less benefit from an SSD (though an SSD is still better than a spinning disk) and fits a spinning disk well. 72 | 73 | Jon Rangel: Anybody interested in deploying MongoDB on Amazon EC2 should also check out this page in the docs for additional deployment tips: http://docs.mongodb.org/ecosystem/platforms/amazon-ec2/ 74 | 75 | Norberto Leite: Lasse asked: What do you think about a MongoDB hosting service like MongoLab? Is it OK to use for production? 76 | MongoLab has particularly good integration with Windows Azure, so if you are thinking of deploying on Azure, that might be the best option if you don't want to manage your MongoDB cluster yourself. If you'd like a complete overview of our cloud partners, have a look at this: http://www.mongodb.com/partners/cloud 77 | 78 | Owen Hughes: Thanks everyone, we are closing down the genius bar. Please ask questions or comment on Twitter using #MongoDBBasics 79 | 80 | Eoin Brazil: Hi all, if you want to ask deeper questions than Twitter allows, you can post to our Google Group https://groups.google.com/forum/#!forum/mongodb-user and we can continue the conversation there until the next webinar. 81 | 82 | Adam Comerford: Lasse Soberg asks: OK, I'm currently on AWS and I see that they support almost all regions. How is their hosting on AWS compared to Azure? Answer: we've definitely seen more usage and there is more collateral (guides, whitepapers) for AWS, but Azure and other platforms are supported. The best thing to do is test for your use case and see how it performs. 83 | 84 | Chairperson: Thanks to everyone for attending. This session will be closing in 2 minutes.
Please ask questions or comment on Twitter using #MongoDBBasics 85 | -------------------------------------------------------------------------------- /webinar-build-an-app/README.md: -------------------------------------------------------------------------------- 1 | # Webinar Building an App with MongoDB 2 | 3 | Notes about the eight-part webinar [Building an application with MongoDB](https://www.mongodb.com/webinar/build_app-part_1), started on 30.01.2014. 4 | 5 | ## Summary, further reading and live chat transcripts 6 | 7 | * [20.02.2014 · Build an App Series · Part 2 · Interacting with DB](20140220-build-app-part2-interacting-database.md) 8 | * [06.02.2014 · Build an App Series · Part 1 · Schema design and app architecture](20140206-build-app-part1-getting-started.md) 9 | * [30.01.2014 · Getting started with MongoDB](20140130-getting-started.md) 10 | 11 | ## Slides 12 | 13 | * [Session 3 · 20.02.2014 · Build an App Series · Part 2](https://www.mongodb.com/webinar/build_app-part_1) (1h) Matthew Bates, MongoDB 14 | * [Session 2 · 06.02.2014 · Build an App Series · Part 1](https://www.mongodb.com/presentations/webinar-build-application-series-session-2-getting-started) (1h) Matthew Bates, MongoDB 15 | * [Session 1 · 30.01.2014 · Getting Started with MongoDB · Back to Basics](https://www.mongodb.com/webinar/intro_mongodb_jan14) (45 min.) Daniel Roberts, MongoDB 16 | 17 | ## Interacting 18 | 19 | * [MongoDB User Google Group](https://groups.google.com/forum/#!forum/mongodb-user) 20 | * Twitter: [#MongoDBBasics](https://twitter.com/search?q=%23MongoDBBasics) [@MongoDB](https://twitter.com/search?q=%40MongoDB) 21 | --------------------------------------------------------------------------------