`.
82 |
83 | ## 4. Code along
84 |
85 | In this week's code along you won't be building a complete app; instead, you'll get familiar with using MySQL together with Node.js.
86 |
87 | - [Using MySQL with Node.js](https://www.youtube.com/watch?v=EN6Dx22cPRI)
88 |
89 | ## **5. Career Training 2 (If not completed yet)**
90 |
91 | Remember that the Career Training 2 session is coming up (check your cohort channel on Slack for the exact date). Before the session, make sure you have:
92 |
93 | - Read the whole [‘Interview Preparation’ Repo](https://github.com/HackYourFuture/interviewpreparation).
94 | - Done the assignment: make a copy of [this file](https://docs.google.com/document/u/2/d/114rTGS4eG6tpkrMAyVIdvgTrnpmkRL6ax_smkw1B0HI/copy) and submit your answers to the team [here](https://hackyourfuture.typeform.com/to/s6zYAugm).
95 |
96 | ## SUBMIT YOUR ASSIGNMENT!
97 |
98 | After you've finished your todo list it's time to show us what you've got! The assignment that needs to be submitted is the
99 | following:
100 |
101 | 1. MySQL exercises
102 |
103 | Upload your code to your forked Databases repository in GitHub. Make a pull request to `HackYourAssignment/databases-cohortXX`
104 | repository.
105 |
106 | > Forgotten how to upload your assignment? Go through the [guide](../hand-in-assignments-guide.md) to learn how to do this
107 | > again.
108 |
109 | _Deadline Tuesday 23.59 CET_
110 |
--------------------------------------------------------------------------------
/Week1/QA_PREP_EXERCISE.md:
--------------------------------------------------------------------------------
1 | # Prep exercise week 1
2 |
3 | As a preparation step for the upcoming Q&A, you will work through the following exercise. The expected result is a list
4 | of SQL `CREATE TABLE` statements representing the tables you will design.
5 |
6 | We suggest you first think of the design of the tables (names, columns, types) and then proceed to write the SQL
7 | statements. That way, you're thinking about the problem you're trying to solve (creating a database that holds data)
8 | rather than the implementation first (creating a set of SQL statements).
9 |
10 | ## Exercise
11 |
12 | Design the tables for a database that contains food recipes.
13 |
14 | - Each recipe should have a name, one or more categories, a list of ingredients, and a list of steps to follow to
15 | complete the recipe.
16 | - Many recipes might share the same ingredients or the same list of steps (For example, "cook pasta according to the
17 | package instructions" could be a regular step seen in multiple recipes).
18 | - You can create your own data, focusing on Japanese, cake and vegetarian recipes. You can find some inspiration online,
19 | but keep it simple!
20 |
21 | You don't need to write any specific queries now, but the design should consider that we will execute queries on the
22 | tables to extract data such as:
23 |
24 | - You should be able to list all the recipes.
25 | - You should be able to list recipes under a single category (Salads, Mexican, etc).
26 |
27 | Some questions you can ask yourself:
28 |
29 | - Which entities can you spot in the above problem?
30 | - Which tables do you need to create to store the above data?
31 | - What are the relationships between those entities?
32 |
33 | [This short video](https://www.youtube.com/watch?v=C3icLzBtg8I) explains how relationships work, and you can use this
34 | for inspiration. We will expand on this topic in the following week.
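If you get stuck on a many-to-many part of the design (for example, one recipe has many categories and one category contains many recipes), the junction-table pattern used in `Week1/databases/musicians2.sql` is a good model. A minimal sketch of that pattern, with invented table and column names — your own design may well differ:

```sql
-- A recipe can belong to many categories; a category contains many recipes.
-- The junction table RecipeCategories links the two by their primary keys.
CREATE TABLE Recipes(
    Id INT NOT NULL,
    Name TEXT NOT NULL,
    PRIMARY KEY (Id)
);

CREATE TABLE Categories(
    Id INT NOT NULL,
    Name TEXT NOT NULL,
    PRIMARY KEY (Id)
);

CREATE TABLE RecipeCategories(
    Recipe INT NOT NULL,
    Category INT NOT NULL,
    PRIMARY KEY (Recipe, Category),
    FOREIGN KEY (Recipe) REFERENCES Recipes(Id),
    FOREIGN KEY (Category) REFERENCES Categories(Id)
);
```

You will still need to work out how ingredients and steps fit into the same picture.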
35 |
--------------------------------------------------------------------------------
/Week1/README.md:
--------------------------------------------------------------------------------
1 | # Set up your first database
2 |
3 | Follow these steps if you missed them from
4 | the [home page of the repository](https://github.com/HackYourFuture/databases).
5 |
6 | In this document you'll learn how to set up your first database. Most of the commands are run in the command line, so
7 | make sure you have yours open before you start.
8 |
9 | **Step 1: Logging in with the `root` user**
10 |
11 | To get started with your new MySQL client, we first have to login with the `root` user.
12 |
13 | > A root user, also known as a `superuser` is a special user account that has access to all commands and files of any
14 | > particular software.
15 |
16 | On Windows, if you click on the Start menu and type `MySQL Command Line Client`, the
17 | MySQL Command Line Client gives you a `mysql>` prompt after you type in your root password.
18 | Note that this password is the one you chose for the `root` user of MySQL during the installation.
19 | Linux and macOS users can execute `mysql -uroot -p` and then type the root password.
20 |
21 | **Step 2: Creating a `hyfuser` account**
22 |
23 | After logging in as the root user, it's time to create the account we'll be using for this module. Execute the
24 | following commands, one after the other:
25 |
26 | ```bash
27 | # Step 1: This command creates a user 'hyfuser' with password 'hyfpassword' for the database server at 'localhost'
28 |
29 | mysql> create user 'hyfuser'@'localhost' identified with mysql_native_password by 'hyfpassword';
30 |
31 | # If this does not work try the alternative command:
32 |
33 | mysql> create user 'hyfuser'@'localhost' identified by 'hyfpassword';
34 |
35 | # Step 2: This command gives all permissions to user 'hyfuser'. The (*.*) means every table of every database.
36 |
37 | mysql> grant all privileges on *.* to 'hyfuser'@'localhost';
38 |
39 | # Step 3: This command flushes all privileges so that mysql reloads the grant table and our changes are enabled
40 |
41 | mysql> flush privileges;
42 |
43 | # Step 4: This command creates a database named 'userdb'
44 |
45 | mysql> create database userdb;
46 | ```
47 |
48 | **Step 3: Installing MySQL driver to use with Node.js**
49 |
50 | We want to use MySQL with JavaScript and to this end, we use the following [package](https://github.com/mysqljs/mysql).
51 | You can create a project wherever you want and install it there. To test that everything is working, you can use
52 | the `connection-test.js` file. If you run it, it should output `The solution is: 2`.
53 |
54 | # Reading Material Databases Week 1
55 |
56 | ## Agenda
57 |
58 | These are the topics for week 1:
59 |
60 | 1. [Introduction to databases](https://hackyourfuture.github.io/study/#/databases/README)
61 |    - What is an information system?
62 | - What are entities?
63 | - What is a database?
64 | - What is the role of a database in an application?
65 | 2. [Introduction to Structured Query Language (SQL)](https://hackyourfuture.github.io/study/#/databases/sql/README)
66 | - What are data types (as applied to databases)?
67 | - How to use SQL to Create, Read, Update and Delete (CRUD)
68 | 3. [Using database dumps](https://hackyourfuture.github.io/study/#/databases/sql/dumps)
69 | 4. [Career Training II (Interview preparation)](https://github.com/HackYourFuture/interviewpreparation)
70 |
71 | ## Week goals
72 |
73 | This week we want to introduce you to the world of databases! We will start with a general introduction of what databases are, go over some terms and explain how to start thinking about your application's data. Have a look at the introduction [here](https://hackyourfuture.github.io/study/#/databases/README).
74 |
75 | In this module, we will cover both relational databases (SQL) and document-based databases (NoSQL). For now you do not need to know the difference; that will become apparent once we have covered both. We will start with relational databases by learning about SQL, the language used to interact with most relational databases. Have a look [here](https://hackyourfuture.github.io/study/#/databases/sql/README).
76 |
77 | Lastly, let's go over some practical information that will allow you to save your databases and share them with others! Learn all about database dumps [here](https://hackyourfuture.github.io/study/#/databases/sql/dumps).
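In practice, making and restoring a dump is a pair of one-line commands. A sketch using the `hyfuser` account and `userdb` database from the setup steps (the study link above covers the details):

```bash
# Write the structure and data of userdb to a file:
mysqldump -u hyfuser -p userdb > userdb-dump.sql

# Recreate it (here or on another machine) by feeding the file back to mysql:
mysql -u hyfuser -p userdb < userdb-dump.sql
```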
78 |
79 | ## Career training II (Interview preparation)
80 | Finally, it is time to continue developing your soft skills to ace those job interviews. For that, have a look at the [interview preparation repo](https://github.com/HackYourFuture/interviewpreparation). It is a lot to get through, so you may need some time next week as well. There is an assignment for the career training next week; you can already have a look at it if you find these things difficult.
81 |
82 | ## Finished?
83 |
84 | Are you done going through the materials? High five! If you feel ready to get practical,
85 | click [here](./MAKEME.md).
86 |
--------------------------------------------------------------------------------
/Week1/async-create.js:
--------------------------------------------------------------------------------
1 | const util = require('util');
2 | const mysql = require('mysql');
3 |
4 | const connection = mysql.createConnection({
5 | host: 'localhost',
6 | user: 'hyfuser',
7 | password: 'hyfpassword',
8 | database: 'class19',
9 | });
10 |
11 | const execQuery = util.promisify(connection.query.bind(connection));
12 |
13 | async function seedDatabase() {
14 | const CREATE_STUDENTS_TABLE = `
15 | CREATE TABLE IF NOT EXISTS students (
16 | student_number INT,
17 | student_name VARCHAR(50),
18 | date_of_birth DATE,
19 | grade FLOAT,
20 | gender ENUM('m', 'f')
21 | );`;
22 | const CREATE_TEACHERS_TABLE = `
23 | CREATE TABLE IF NOT EXISTS teachers (
24 | teacher_number INT,
25 | teacher_name VARCHAR(50),
26 | date_of_birth DATE,
27 | subject TEXT,
28 | gender ENUM('m', 'f')
29 | );`;
30 | const students = [
31 | {
32 | student_number: 1001,
33 | student_name: 'Ben',
34 | date_of_birth: '1995-04-26',
35 | grade: 8.3,
36 | gender: 'm',
37 | },
38 | {
39 | student_number: 1002,
40 | student_name: 'Henri',
41 | date_of_birth: '1998-05-12',
42 | grade: 8.5,
43 | gender: 'm',
44 | },
45 | ];
46 |
47 | connection.connect();
48 |
49 | try {
50 |     await Promise.all([execQuery(CREATE_STUDENTS_TABLE), execQuery(CREATE_TEACHERS_TABLE)]);
51 |
52 | await Promise.all(students.map(student =>
53 | execQuery('INSERT INTO students SET ?', student)
54 |     ));
55 | } catch (error) {
56 | console.error(error);
57 | }
58 |
59 | connection.end();
60 | }
61 |
62 | seedDatabase();
63 |
--------------------------------------------------------------------------------
/Week1/connection-test.js:
--------------------------------------------------------------------------------
1 | var mysql = require('mysql');
2 | var connection = mysql.createConnection({
3 | host : 'localhost',
4 | user : 'hyfuser',
5 | password : 'hyfpassword',
6 | database : 'userdb',
7 | // If you get connection with
8 | // errno: -61,
9 | // code: 'ECONNREFUSED',
10 | // Then, the following line will solve it
11 | // Note that your socket file path may be different
12 | socketPath: '/tmp/mysql.sock'
13 |
14 |   // port : xxxx // Uncomment this line and replace xxxx with your port number if you are not using the default 3306. We also suggest MySQL version 5.7, because more recent versions have authentication problems with this driver.
15 |
16 | });
17 |
18 | connection.connect();
19 |
20 | connection.query('SELECT 1 + 1 AS solution', function (error, results, fields) {
21 | if (error) throw error;
22 | console.log('The solution is: ', results[0].solution);
23 | });
24 |
25 | connection.end();
26 |
--------------------------------------------------------------------------------
/Week1/create-table.js:
--------------------------------------------------------------------------------
1 | var mysql = require('mysql');
2 | var connection = mysql.createConnection({
3 | host : 'localhost',
4 | user : 'hyfuser',
5 | password : 'hyfpassword',
6 | database : 'company',
7 | // port : 3307
8 | });
9 |
10 | connection.connect();
11 |
12 | var create_query = "create table projects (project_id int, project_name varchar(50), start_date date, manager varchar(50))"
13 |
14 | connection.query(create_query, function (error, results, fields) {
15 | if (error) {
16 | throw error;
17 | }
18 |   console.log("the reply is ", results);
19 | });
20 | connection.end();
21 |
--------------------------------------------------------------------------------
/Week1/databases/imdb.sql:
--------------------------------------------------------------------------------
1 | -- MySQL dump 10.13 Distrib 5.7.17, for osx10.12 (x86_64)
2 | --
3 | -- Host: localhost Database: imdb
4 | -- ------------------------------------------------------
5 | -- Server version 5.7.17
6 |
7 | /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
8 | /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
9 | /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
10 | /*!40101 SET NAMES utf8 */;
11 | /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
12 | /*!40103 SET TIME_ZONE='+00:00' */;
13 | /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
14 | /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
15 | /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
16 | /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
17 |
18 | --
19 | -- Table structure for table `actors`
20 | --
21 |
22 | DROP TABLE IF EXISTS `actors`;
23 | /*!40101 SET @saved_cs_client = @@character_set_client */;
24 | /*!40101 SET character_set_client = utf8 */;
25 | CREATE TABLE `actors` (
26 | `aid` int(11) NOT NULL AUTO_INCREMENT,
27 | `aname` varchar(50) DEFAULT NULL,
28 | `biography` text,
29 | `won_oscar` tinyint(1) DEFAULT NULL,
30 | PRIMARY KEY (`aid`)
31 | ) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;
32 | /*!40101 SET character_set_client = @saved_cs_client */;
33 |
34 | --
35 | -- Dumping data for table `actors`
36 | --
37 |
38 | LOCK TABLES `actors` WRITE;
39 | /*!40000 ALTER TABLE `actors` DISABLE KEYS */;
40 | INSERT INTO `actors` VALUES (1,'Brad Pitt','lot of adopted children',0),(2,'Orlando Bloom','',0),(3,'Arnold Schwartzenegger','body builder',0),(4,'Elpidia Carrillo','Nothing',0);
41 | /*!40000 ALTER TABLE `actors` ENABLE KEYS */;
42 | UNLOCK TABLES;
43 |
44 | --
45 | -- Table structure for table `movies`
46 | --
47 |
48 | DROP TABLE IF EXISTS `movies`;
49 | /*!40101 SET @saved_cs_client = @@character_set_client */;
50 | /*!40101 SET character_set_client = utf8 */;
51 | CREATE TABLE `movies` (
52 | `mid` int(11) NOT NULL AUTO_INCREMENT,
53 | `mname` varchar(50) DEFAULT NULL,
54 | `release_date` date DEFAULT NULL,
55 | `rating` int(11) DEFAULT NULL,
56 | PRIMARY KEY (`mid`)
57 | ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;
58 | /*!40101 SET character_set_client = @saved_cs_client */;
59 |
60 | --
61 | -- Dumping data for table `movies`
62 | --
63 |
64 | LOCK TABLES `movies` WRITE;
65 | /*!40000 ALTER TABLE `movies` DISABLE KEYS */;
66 | INSERT INTO `movies` VALUES (1,'Predator','1987-06-12',9),(2,'Troy','2004-05-14',8);
67 | /*!40000 ALTER TABLE `movies` ENABLE KEYS */;
68 | UNLOCK TABLES;
69 |
70 | --
71 | -- Table structure for table `roles`
72 | --
73 |
74 | DROP TABLE IF EXISTS `roles`;
75 | /*!40101 SET @saved_cs_client = @@character_set_client */;
76 | /*!40101 SET character_set_client = utf8 */;
77 | CREATE TABLE `roles` (
78 | `aid` int(11) NOT NULL,
79 | `mid` int(11) NOT NULL,
80 | `rname` varchar(50) DEFAULT NULL,
81 | `rid` int(11) NOT NULL AUTO_INCREMENT,
82 | PRIMARY KEY (`rid`),
83 | KEY `mid` (`mid`),
84 | KEY `aid` (`aid`),
85 | CONSTRAINT `roles_ibfk_1` FOREIGN KEY (`mid`) REFERENCES `movies` (`mid`),
86 | CONSTRAINT `roles_ibfk_2` FOREIGN KEY (`aid`) REFERENCES `actors` (`aid`)
87 | ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8;
88 | /*!40101 SET character_set_client = @saved_cs_client */;
89 |
90 | --
91 | -- Dumping data for table `roles`
92 | --
93 |
94 | LOCK TABLES `roles` WRITE;
95 | /*!40000 ALTER TABLE `roles` DISABLE KEYS */;
96 | INSERT INTO `roles` VALUES (1,2,'Achilees',1),(2,2,'Paris',2),(3,1,'someone',4),(4,1,'someone else',5);
97 | /*!40000 ALTER TABLE `roles` ENABLE KEYS */;
98 | UNLOCK TABLES;
99 | /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
100 |
101 | /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
102 | /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
103 | /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
104 | /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
105 | /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
106 | /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
107 | /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
108 |
109 | -- Dump completed on 2018-09-30 15:01:25
110 |
--------------------------------------------------------------------------------
/Week1/databases/musicians.sql:
--------------------------------------------------------------------------------
1 | DROP DATABASE IF EXISTS hyf_musicians;
2 | CREATE DATABASE hyf_musicians;
3 | USE hyf_musicians;
4 |
5 | CREATE TABLE Musicians(
6 | Id INT NOT NULL,
7 | FirstName TEXT NOT NULL,
8 | LastName TEXT NOT NULL,
9 | Born INT NOT NULL,
10 | PRIMARY KEY (Id)
11 | );
12 |
13 | CREATE TABLE InstrumentsPlayed(
14 | Id INT NOT NULL,
15 | Musician INT NOT NULL,
16 | Instrument TEXT NOT NULL,
17 | PRIMARY KEY (Id),
18 | FOREIGN KEY (Musician) REFERENCES Musicians(Id)
19 | );
20 |
21 | INSERT INTO Musicians (Id, FirstName, LastName, Born) VALUES (1, 'Thelonious', 'Monk', 1917);
22 | INSERT INTO Musicians (Id, FirstName, LastName, Born) VALUES (2, 'Sonny', 'Rollins', 1930);
23 | INSERT INTO Musicians (Id, FirstName, LastName, Born) VALUES (3, 'Steve', 'Lehman', 1978);
24 |
25 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (1, 1, 'Piano');
26 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (2, 2, 'Tenor saxophone');
27 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (3, 2, 'Soprano saxophone');
28 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (4, 3, 'Alto saxophone');
29 |
--------------------------------------------------------------------------------
/Week1/databases/musicians2.sql:
--------------------------------------------------------------------------------
1 | DROP database IF EXISTS hyf_musicians2;
2 | CREATE database hyf_musicians2;
3 | USE hyf_musicians2;
4 |
5 | CREATE TABLE Musicians(
6 | Id INT NOT NULL,
7 | FirstName TEXT NOT NULL,
8 | LastName TEXT NOT NULL,
9 | Born INT NOT NULL,
10 | PRIMARY KEY (Id)
11 | );
12 |
13 | CREATE TABLE Instruments(
14 | Id INT NOT NULL,
15 | Name TEXT NOT NULL,
16 | Type TEXT NOT NULL,
17 | PRIMARY KEY (Id)
18 | );
19 |
20 | CREATE TABLE InstrumentsPlayed(
21 | Id INT NOT NULL,
22 | Musician INT NOT NULL,
23 | Instrument INT NOT NULL,
24 | PRIMARY KEY (Id),
25 | FOREIGN KEY (Musician) REFERENCES Musicians(Id),
26 | FOREIGN KEY (Instrument) REFERENCES Instruments(Id)
27 | );
28 |
29 | INSERT INTO Musicians (Id, FirstName, LastName, Born) VALUES (1, 'Thelonious', 'Monk', 1917);
30 | INSERT INTO Musicians (Id, FirstName, LastName, Born) VALUES (2, 'Sonny', 'Rollins', 1930);
31 | INSERT INTO Musicians (Id, FirstName, LastName, Born) VALUES (3, 'Steve', 'Lehman', 1978);
32 |
33 | INSERT INTO Instruments (Id, Name, Type) VALUES (1, 'Piano', 'Keys');
34 | INSERT INTO Instruments (Id, Name, Type) VALUES (2, 'Tenor saxophone', 'Wind');
35 | INSERT INTO Instruments (Id, Name, Type) VALUES (3, 'Soprano saxophone', 'Wind');
36 | INSERT INTO Instruments (Id, Name, Type) VALUES (4, 'Alto saxophone', 'Wind');
37 | INSERT INTO Instruments (Id, Name, Type) VALUES (5, 'Guitar', 'String');
38 |
39 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (1, 1, 1);
40 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (2, 2, 2);
41 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (3, 2, 3);
42 | INSERT INTO InstrumentsPlayed (Id, Musician, Instrument) VALUES (4, 3, 4);
43 |
--------------------------------------------------------------------------------
/Week1/insert-values.js:
--------------------------------------------------------------------------------
1 | var mysql = require('mysql');
2 | var connection = mysql.createConnection({
3 | host : 'localhost',
4 | user : 'hyfuser',
5 | password : 'hyfpassword',
6 | database : 'company'
7 | });
8 |
9 | connection.connect();
10 | var insert_queries = [
11 | "insert into employees values (106, 'Ibrahim', 2000, '2019-03-10', 'm')",
12 | "insert into employees values (107, 'Ali', 3000, '2019-04-10', 'm')",
13 | ]
14 |
15 | for (var i in insert_queries) {
16 |   console.log("Going to run ", insert_queries[i]);
17 | connection.query(insert_queries[i], function (error, results, fields) {
18 | if (error) {
19 | throw error;
20 | }
21 |     console.log("the reply is ", results);
22 | });
23 | }
24 | connection.end();
25 |
--------------------------------------------------------------------------------
/Week1/select-star.js:
--------------------------------------------------------------------------------
1 | var mysql = require('mysql');
2 | var connection = mysql.createConnection({
3 | host : 'localhost',
4 | user : 'hyfuser',
5 | password : 'hyfpassword',
6 | database : 'userdb'
7 | });
8 |
9 | connection.connect();
10 |
11 | var select_query = "select * from student"
12 |
13 | console.log("Going to run ", select_query)
14 | connection.query(select_query, function (error, results, fields) {
15 | if (error) {
16 | throw error;
17 | }
18 |   for (var i in results) {
19 | console.log(results[i]);
20 | }
21 | });
22 | connection.end();
23 |
--------------------------------------------------------------------------------
/Week1/slides/index.html:
--------------------------------------------------------------------------------
(HTML slide deck titled "Databases week 1"; slide markup not included in this extract.)
--------------------------------------------------------------------------------
/Week1/slides/media/isbn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/Week1/slides/media/isbn.png
--------------------------------------------------------------------------------
/Week1/slides/media/qrcode.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/Week1/slides/media/qrcode.png
--------------------------------------------------------------------------------
/Week1/slides/media/thelonious-monk.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/Week1/slides/media/thelonious-monk.jpg
--------------------------------------------------------------------------------
/Week2/LESSONPLAN.md:
--------------------------------------------------------------------------------
1 | # Lesson Plan Databases Week 2
2 |
3 | The lesson plan is primarily written for teachers so that they can
4 | use examples and anecdotes from this document in conjunction with the README
5 | and explain the concepts better in the class.
6 |
7 | ## Topics
8 |
9 | 0. Async nature of MySQL-Nodejs interaction
10 | 1. Identifiers (Primary key, Foreign key, Composite key)
11 | 2. Relationships (One-to-One, One-to-Many, Many-to-Many)
12 | 3. Joins (inner, left and right) and aliases
13 | 4. More SQL clauses (group by, having, distinct and Aggregate functions)
14 | 5. Indexes
15 | 6. Domain modeling (ERD)
16 |
17 | ### 0. Async nature of MySQL-Nodejs interaction
18 |
19 | #### Explanation
20 |
21 | Database queries are asynchronous by nature:
22 | some queries can take a long time to execute.
23 | When our JavaScript MySQL client sends queries to the MySQL server,
24 | it should not block while waiting for the answer.
25 | However, if the JavaScript MySQL client sends multiple queries such that
26 | the second query (for example, an insert) depends on the first (for example, a create),
27 | then it must wait until the first query has executed successfully.
28 | To ensure smooth interaction with the MySQL server, promises can be used in conjunction
29 | with the `await` keyword.
30 |
31 | #### Example(s)
32 |
33 | Demonstrate with four programs at [this repository](https://github.com/unmeshvrije/database_examples)
34 | how
35 |
36 | 1. Program `1-db-naive.js` fails because the connection is closed prematurely.
37 | 2. Program `2-db-callback.js` solves the problem but looks ugly because of the callback-hell.
38 | 3. Program `3-db-promise.js` uses promise chaining to make it better.
39 | 4. Program `4-db-await.js` uses `promisify()` and `await` to make it the best.
41 |
42 | #### Exercise
43 |
44 | The program called `async-create-insert.js` can be found in `Week2` folder.
45 | Add a select query to that program using await and promisify.
46 |
47 | #### Essence
48 |
49 | > `async` keyword: creates an asynchronous function and guarantees it returns a promise
50 | 
51 | > `await`: waits for a promise to settle without having to call `.then()` on it
52 | 
53 | > `promisify()`: converts a callback-based function into a promise-based one
54 |
55 | ### 1. Identifiers (Primary key, Foreign key, Composite key)
56 |
57 | #### Explanation
58 |
59 | 1. A column can be declared UNIQUE. Such a column has unique values.
60 |    It can also hold NULL values; two rows can both have NULL in a column that
61 |    is declared UNIQUE (in other words, this is a UNIQUE CONSTRAINT on that column).
62 | 2. A column can be declared as the PRIMARY KEY. Such a column has unique values too,
63 |    but they cannot be NULL. Thus two rows can NEVER have the same value in a column
64 |    that is declared as the PRIMARY KEY (in other words, this is a PRIMARY KEY CONSTRAINT on that column).
65 |
66 | > There are more constraints in MySQL. Read more about
67 | > them [here](https://www.w3resource.com//creating-table-advance/constraint.php).
68 |
69 | #### Example
70 |
71 | Consider the following commands (lines starting with `--` are comments).
72 |
73 | ```sql
74 | -- create table with two columns: one with primary key and one with unique key constraint
75 | CREATE TABLE pri_uniq_demo
76 | (
77 | id_pr int PRIMARY KEY,
78 | id_un int UNIQUE
79 | );
80 |
81 | -- Note the error that says that the primary key column cannot be NULL
82 | INSERT INTO pri_uniq_demo VALUES (NULL, NULL);
83 | ERROR 1048 (23000): Column 'id_pr' cannot be null
84 |
85 | -- Note that the UNIQUE key column can be NULL
86 | INSERT INTO pri_uniq_demo VALUES (1, NULL);
87 | Query OK, 1 row affected (0.00 sec)
88 |
89 | -- Normal insertion
90 | INSERT INTO pri_uniq_demo VALUES (2, 2);
91 | Query OK, 1 row affected (0.05 sec)
92 |
93 | -- Note that you cannot insert 2 in the id_un column because it should be UNIQUE
94 | INSERT INTO pri_uniq_demo VALUES (3, 2);
95 | ERROR 1062 (23000): Duplicate entry '2' for key 'id_un'
96 |
97 | -- Note that you cannot insert 2 in the id_pr column because it is PRIMARY KEY
98 | INSERT INTO pri_uniq_demo VALUES (2, 3);
99 | ERROR 1062 (23000): Duplicate entry '2' for key 'PRIMARY'
100 |
101 | ```
102 |
103 | #### Exercise
104 |
105 | ```sql
106 | -- Find out type T and Constraint C for each column.
107 | CREATE TABLE Airline_passengers(ticket_number T C, passenger_name T C, date_of_birth T C, passport_number T C);
108 |
109 | -- Hint: A very young baby may not need a ticket!
110 | ```
111 |
112 | #### Essence
113 |
114 | A primary key is a special case of a UNIQUE key: a UNIQUE key can be NULL while a
115 | primary key cannot. A table can have multiple UNIQUE keys but ONLY ONE primary key.
116 |
117 | ### 2.1 Relationships (One-to-One, One-to-Many, Many-to-Many)
118 |
119 | #### Explanation
120 |
121 | When one entity is related to another, such a relationship has a so called **cardinality**.
122 | The cardinality determines how many instances of one entity can participate in the relationship
123 | with how many other instances of the other entity.
124 |
125 | For example, an employee may have only one account in the company.
126 | Also one account is tied to exactly one employee.
127 | This relationship employee and account has `1-1` (read as one-to-one) cardinality.
128 | This means that one instance of Employees (say John Smith) has exactly one account
129 | (instance with account ID 3409011) in some company X.
130 |
131 | An employee belongs to exactly one department; however, one department may have many employees.
132 | This relationship between employee and department has `M-1` (read as many-to-one) cardinality.
133 | Conversely, the relationship between department and employee has `1-M` (read as one-to-many) cardinality.
134 | Note that `1-M` and `M-1` cardinalities are simply each other's reverse.
135 | The following two sentences convey the same information in different words.
136 |
137 | 1. The Sales department (an instance of Department entity) of company X has three employees.
138 | 2. John Smith, Raj Joshi and Su Li are employees of company X that belong to the Sales Department.
139 |
140 | To represent a `1-1` or `1-M` relationship in MySQL tables, we need a column in one table
141 | that `refers to` a column in another table. Such a column should be the primary key column of the other table
142 | and is called a `foreign key`.
143 | In the Account table, `employee_id` is the column that acts as the foreign key which **refers to the
144 | employee_id column of the employees table in which it works as the primary key.**
145 | In the Employees table, `dept_id` is the column that acts as the foreign key which **refers to the
146 | dept_id column of the departments table in which it works as the primary key.**
147 |
148 | #### Example
149 |
150 | ```sql
151 | -- Add the column dept_id to the employees table
152 | ALTER TABLE employees
153 | ADD COLUMN dept_id int;
154 |
155 | -- Add the constraint foreign key
156 | ALTER TABLE employees
157 | ADD CONSTRAINT fk_dept FOREIGN KEY (dept_id) REFERENCES departments (dept_id);
158 |
159 | -- Add some sample rows in the departments table
160 | INSERT INTO departments VALUES (5001, "Sales");
161 | INSERT INTO departments
162 | VALUES (5002, "Development");
163 | INSERT INTO departments
164 | VALUES (5003, "Marketing");
165 |
166 | -- Try updating the dept_id of an employee with an existing department
167 | UPDATE employees
168 | SET dept_id = 5001
169 | where employee_id = 101;
170 |
171 | -- Try updating the dept_id of an employee with a department that does not exist
172 | UPDATE employees
173 | SET dept_id = 9999
174 | where employee_id = 101;
175 |
176 | -- Example of 1-1 relationship
177 | -- Creating table Account with the same primary key as the Employees table
178 | CREATE TABLE Account
179 | (
180 | employee_id int,
181 | email varchar(50),
182 | primary key (employee_id),
183 | CONSTRAINT fk_emp FOREIGN KEY (employee_id) REFERENCES employees (employee_id)
184 | );
185 |
186 | ```
187 |
188 | #### Exercise
189 |
190 | 1. Write an INSERT query for the Account table that returns an error.
191 | 2. Write an INSERT query for the Account table that is valid (returns no error).
192 |
193 | #### Essence
194 |
195 | For a relationship with `1-M` cardinality, the primary key of the `1` side of the relationship
196 | becomes the foreign key on the `M` side of the relationship.
197 | E.g. `Departments-Employees`. The primary key of the Departments table (dept_id)
198 | becomes the foreign key in the Employees table.
199 |
200 | ### 2.2 Relationships (M-M and composite keys)
201 |
202 | #### Explanation
203 |
204 | The cardinality of a relationship can also be `M-M` (read as Many-to-many) where
205 | one instance of an entity participates in many other instances of the other entity
206 | and vice versa.
207 | For example, an employee may work on many projects at a time.
208 | A project may have many employees working on it.
209 |
210 | To represent an `M-M` relationship in MySQL, we need a **new relationship table**
211 | that uses two foreign keys (the primary keys of both tables). For such a relationship
212 | table, the primary key is composed of the two foreign keys.
213 | For example, one entry in the employee-project relationship table represents that
214 | employee **X** works on project **Y**.
215 | There can be other rows for employee **X**,
216 | and there can be other rows for project **Y**.
217 | Hence, neither of these columns can act as the primary key alone.
218 | The primary key must be the combination of the two columns. Such a primary
219 | key is called a **composite key**.
220 |
221 | #### Example
222 |
223 | ```sql
224 | -- create projects table
225 | CREATE TABLE projects
226 | (
227 |     proj_id int,
228 |     proj_name varchar(50),
229 |     start_date datetime,
230 |     PRIMARY KEY (proj_id) -- needed so other tables can reference proj_id as a foreign key
231 | );
231 |
232 | -- Insert sample values
233 | INSERT INTO projects VALUES(9001, "Jazz", "2018-01-01");
234 | INSERT INTO projects
235 | VALUES (9002, "Mellow", "2019-03-01");
236 | INSERT INTO projects
237 | VALUES (9003, "Classical", "2020-01-01");
238 |
239 | -- create emp_proj relationship table with composite primary key
240 | CREATE TABLE emp_proj
241 | (
242 | emp_id int,
243 | proj_id int,
244 | PRIMARY KEY (emp_id, proj_id),
245 |     CONSTRAINT fk_ep_emp FOREIGN KEY (emp_id) REFERENCES employees (employee_id),
246 | CONSTRAINT fk_pro FOREIGN KEY (proj_id) REFERENCES projects (proj_id)
247 | );
248 | ```
249 |
250 | #### Exercise
251 |
252 | 1. Write an INSERT query for the emp_proj table that returns an error.
253 | 2. Write an INSERT query for the emp_proj table that is valid (returns no error).
254 |
255 | #### Essence
256 |
257 | For an `M-M` relationship between two tables, a new table must be created that uses a composite primary
258 | key consisting of foreign keys that reference the primary keys of both tables.
259 |
260 | ### 3.1 Joins (comma, inner)
261 |
262 | #### Explanation
263 |
264 | When the answer to a query cannot be found in only one table, we must join the two tables.
265 | If we don't join the tables, then such a query must be written using a nested subquery.
266 | For example, consider the following query: **Find all employees who work in the Sales department.**
267 | Also assume that the name of the department is obtained from an HTML form as input;
268 | thus, you must obtain the `dept_id` from the departments table where the name is **Sales**.
269 | The following query gives that answer:
270 | `SELECT dept_id FROM departments WHERE dept_name = 'Sales';`
271 | Now, we can use the above query as a nested (sub)query as follows:
272 | `SELECT employee_name FROM employees WHERE dept_id IN (SELECT dept_id FROM departments WHERE dept_name = 'Sales');`
273 |
274 | Another way of solving this query is to use **joins**.
275 | The preferred way of joining two tables is to use the **INNER JOIN** clause, followed by **ON** and
276 | a condition that generally matches the columns shared by the two tables we are joining.
277 | Another way is to use a **comma (,) between table names** after FROM and then matching columns in the **WHERE**
278 | clause.
279 |
280 | #### Example
281 |
282 | ```sql
283 | -- We must join the tables `employees` and `departments` and then choose the relevant rows.
284 |
285 | -- INNER JOIN
286 | SELECT
287 | employee_name
288 | FROM employees as E
289 | INNER JOIN
290 | departments as D
291 | ON E.dept_id = D.dept_id
292 | WHERE D.dept_name = "Sales";
293 |
294 | -- Comma (,) or CROSS join
295 | SELECT
296 | employee_name
297 | FROM employees as E,
298 | departments as D
299 | where E.dept_id = D.dept_id
300 | and D.dept_name = "Sales";
301 | ```
302 |
303 | #### Exercise
304 |
305 | 1. Guess the output of the following query.
306 | `SELECT count(*) FROM employees, departments, projects;`
307 |
308 | 2. Print the sum of the salaries of all employees that work in the "Sales" department and
309 |    work on the "Jazz" project.
310 |
311 | #### Essence
312 |
313 | > When we use a comma (,) between table names after FROM in MySQL, it gives you the Cartesian product (every combination of rows) of the two tables.
314 |
315 | > In MySQL, there is no difference between
316 | > (1) an INNER JOIN with a column-matching condition after ON, and
317 | > (2) a join using a comma (,) between table names with the column-matching condition in the WHERE clause.
318 |
319 | ### 3.2 Joins (self)
320 |
321 | #### Explanation
322 |
323 | When the answer to a query cannot easily be found with a plain WHERE clause, but the answer(s) can be found
324 | in the same table, consider the following cases:
325 |
326 | 1. One column of the table contains values from another column of the same table (but from different rows).
327 |    E.g. the `reports_to` column of the Employees table contains values from the `employee_id` column.
328 | 2. We want to print rows that share column values with other rows.
329 |    E.g. say we add a column `city` to the Employees table; then
330 |    we want to print all employees who come from the same city as `John Smith`.
331 |
332 | For both of these cases, a table must be joined to itself (a self join).
333 | For a self join, we must use **aliases** to disambiguate the column names.
334 |
335 | #### Example
336 |
337 | ```sql
338 | -- When we want to print employees and their reporting managers.
339 | SELECT E1.employee_name as Employee, E2.employee_name as Manager
340 | FROM employees as E1
341 | INNER JOIN
342 |     employees as E2
343 | ON E1.reports_to = E2.employee_id;
344 | ```
345 |
346 | #### Exercise
347 |
348 | ```sql
349 | -- Add the city column, update records in the employees table
350 | ALTER TABLE employees
351 | add column city varchar(50);
352 | UPDATE employees
353 | SET city = 'Berlin'
354 | where employee_name = 'John';
355 | UPDATE employees
356 | SET city = 'Berlin'
357 | where employee_name = 'Friend of John';
358 | UPDATE employees
359 | SET city = 'Berlin'
360 | where employee_name = 'Another friend of John';
361 |
362 | SELECT
363 | employee_name,
364 | city
365 | FROM employees
366 | WHERE city = (SELECT city FROM employees WHERE employee_name = 'John');
367 | ```
368 |
369 | Rewrite the above query to print the names of employees that come from the same city as John, using a **self join**.
370 |
371 | <details>
372 | <summary>Reveal Query</summary>
373 |
374 | ```sql
375 | SELECT
376 |     E1.employee_name,
377 |     E2.city
378 | FROM employees as E1
379 | INNER JOIN employees as E2
380 |     ON E1.city = E2.city
381 | WHERE E2.employee_name = 'John';
382 | ```
383 |
384 | </details>
385 |
387 | #### Essence
388 |
389 | For self joins, aliases for tables must be used. Otherwise, column names are ambiguous.
390 |
391 | ### 3.3 Joins (LEFT OUTER and RIGHT OUTER)
392 |
393 | #### Explanation
394 |
395 | When we join two tables based on a common column, some rows may not have a match in the other table.
396 | In the statement `FROM A LEFT JOIN B ON A.col = B.col`,
397 | the table A is the LEFT table and the table B is the RIGHT table.
398 | In a LEFT JOIN, we print **all rows** from the LEFT table, even if they don't have a match in the RIGHT table.
399 |
400 | In the statement `FROM A RIGHT JOIN B ON A.col = B.col`,
401 | the table A is again the LEFT table and the table B is the RIGHT table.
402 | In a RIGHT JOIN, we print **all rows** from the RIGHT table, even if they don't have a match in the LEFT table.
403 |
404 | #### Example
405 |
406 | Some employees may not have a department associated with them, but they
407 | are still employed by the company.
408 | Thus, if we want to print all employees and their department names,
409 | a LEFT JOIN (FROM employees to departments) allows us to print **everything** from the LEFT table
410 | and only the matching rows from the other table.
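411 |
412 | As a sketch, the LEFT JOIN described above could look like this (using the `employees` and `departments` tables from the earlier sections):
413 |
414 | ```sql
415 | -- All employees with their department name;
416 | -- employees without a department appear with NULL as dept_name.
417 | SELECT
418 |     E.employee_name,
419 |     D.dept_name
420 | FROM employees as E
421 | LEFT JOIN departments as D
422 |     ON E.dept_id = D.dept_id;
423 | ```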
411 |
412 | #### Exercise
413 |
414 | Use a self LEFT JOIN to print all employees and their managers.
415 | Note that it should also include the employees who don't have managers.
416 |
417 | <details>
418 | <summary>Reveal Query</summary>
419 |
420 | ```sql
421 | SELECT
422 |     E1.employee_name as Employee,
423 |     E2.employee_name as Manager
424 | FROM employees as E1
425 | LEFT JOIN
426 |     employees as E2
427 | ON E1.reports_to = E2.employee_id;
428 | ```
429 |
430 | </details>
431 |
433 | #### Essence
434 |
435 | - LEFT JOIN : All rows from the LEFT table
436 | - RIGHT JOIN: All rows from the RIGHT table
437 |
438 | ### 4.1. Aggregate Functions
439 |
440 | #### Explanation
441 |
442 | In database management, an aggregate function is a function that takes the values of multiple rows
443 | as input and combines them into a single summary value, based on certain criteria.
444 | Some important aggregate functions are:
445 |
446 | 1. SUM
447 | 2. COUNT
448 | 3. MAX
449 | 4. MIN
450 | 5. AVG
451 |
452 | #### Example
453 |
454 | We want to return the sum of the salaries of all female employees:
455 |
456 | ```sql
457 | SELECT
458 | SUM(E.salary) AS Expenses
459 | FROM employees as E
460 | WHERE gender = 'f';
461 | ```
462 |
463 | Or get the number of employees:
464 |
465 | ```sql
466 | SELECT
467 | COUNT(*)
468 | FROM employees;
469 | ```
470 |
471 | #### Exercise
472 |
473 | Write SQL queries to get the maximum and average of all employees' salaries.
474 |
475 | #### Essence
476 |
477 | Using these functions, you can do some data processing at the database level. For example, you can get the max or
478 | min of the data that exists in the database without having to fetch and process it yourself.
479 |
480 | ### 4.2. DISTINCT
481 |
482 | #### Explanation
483 |
484 | DISTINCT: this statement is used to return only distinct (different) values. This keyword removes duplicate values from the result.
485 |
486 | #### Example
487 |
488 | We want to get the number of departments that have at least one employee:
489 |
490 | ```sql
491 | SELECT
492 | COUNT(DISTINCT E.dept_id) AS Working_Departments
493 | FROM employees as E;
494 | ```
495 |
496 | #### Exercise
497 |
498 | N/A
499 |
500 | #### Essence
501 |
502 | Distinct gives unique values.
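503 |
504 | DISTINCT can also be used on its own, without COUNT. A sketch against the same `employees` table:
505 |
506 | ```sql
507 | -- Each dept_id that occurs in employees is listed only once,
508 | -- no matter how many employees belong to it.
509 | SELECT DISTINCT dept_id
510 | FROM employees;
511 | ```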
503 |
504 | ### 4.3 Group by
505 |
506 | #### Explanation
507 |
508 | Group by: this statement groups rows that have the same value in a certain column and generally
509 | applies an aggregate function to another column.
510 |
511 | #### Example
512 |
513 | We want to get the sum of salary and number of employees grouped by gender:
514 |
515 | ```sql
516 | SELECT
517 | gender,
518 | count(employee_id),
519 | sum(salary)
520 | FROM employees
521 | GROUP BY gender;
522 | ```
523 |
524 | #### Exercise
525 |
526 | Write a query that retrieves all managers with the number of employees that are reporting to them.
527 |
528 | <details>
529 | <summary>Reveal Query</summary>
530 |
531 | ```sql
532 | SELECT
533 |     E2.employee_name,
534 |     count(E1.employee_name) as Employee_cnt
535 | FROM employees as E1
536 | LEFT JOIN employees as E2
537 |     ON E1.reports_to = E2.employee_id
538 | GROUP BY E2.employee_name;
539 | ```
540 |
541 | </details>
542 |
544 | #### Essence
545 |
546 | A query with a GROUP BY clause can only select the columns that are grouped by; any other column can only appear inside an aggregate function.
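547 |
548 | A sketch of this rule against the `employees` table from earlier:
549 |
550 | ```sql
551 | -- Valid: gender is grouped by, salary is aggregated.
552 | SELECT gender, MAX(salary)
553 | FROM employees
554 | GROUP BY gender;
555 |
556 | -- Invalid (rejected when the ONLY_FULL_GROUP_BY SQL mode is enabled,
557 | -- the default since MySQL 5.7): employee_name is neither grouped by
558 | -- nor inside an aggregate function.
559 | -- SELECT gender, employee_name FROM employees GROUP BY gender;
560 | ```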
547 |
548 | ### 4.4 Having
549 |
550 | #### Explanation
551 |
552 | The HAVING clause was added to SQL because the WHERE keyword cannot be used with aggregate functions.
553 | Using the HAVING clause, you can write conditions on aggregate functions.
554 |
555 | #### Example
556 |
557 | Print all departments that are spending more than 5000 on salaries
558 | (in other words, all departments where the sum of the salaries of the employees working in them is more than 5000):
559 |
560 | ```sql
561 |
562 | SELECT
563 |     dept_name,
564 |     sum(salary)
565 | FROM employees as E
566 | INNER JOIN
567 |     departments as D
568 |     ON E.dept_id = D.dept_id
569 | GROUP BY dept_name
570 | HAVING sum(salary) > 5000;
571 | ```
572 |
573 | #### Exercise
574 |
575 | Write a query that retrieves all managers with more than 3 employees reporting to them.
576 | Hint: in this query, use the DISTINCT and GROUP BY keywords together with a HAVING clause.
577 |
578 | #### Essence
579 |
580 | The HAVING clause filters the groups produced by the GROUP BY clause, typically using conditions on aggregate functions.
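581 |
582 | The difference between WHERE and HAVING can be sketched in a single query against the `employees` table: WHERE filters individual rows before grouping, HAVING filters the resulting groups afterwards.
583 |
584 | ```sql
585 | SELECT
586 |     dept_id,
587 |     AVG(salary)
588 | FROM employees
589 | WHERE salary > 1000         -- row filter: applied before grouping
590 | GROUP BY dept_id
591 | HAVING AVG(salary) > 3000;  -- group filter: applied after grouping
592 | ```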
581 |
582 | ### 5. Indexes
583 |
584 | #### Explanation
585 |
586 | Indexes are a type of look-up table that the database server can use to quickly find rows in database tables.
587 | Indexes are created when rows are inserted, and they are updated when the indexed columns are updated in the database.
588 | Creating or updating indexes takes computation time, and storing indexes takes up data storage space.
589 | However, when retrieving a specific row from the database, the database can use these stored indexes to find the
590 | requested row(s) much faster.
591 | Therefore, indexes make update and insert operations more expensive/slower; however, they speed up data retrieval
592 | (SELECT/JOIN/WHERE/...) operations.
593 | Also, they increase the total size of the database, as they are stored together with their corresponding tables.
594 |
595 | ##### Analogy
596 |
597 | Imagine a (technical) textbook which has an index at the end. This index contains keywords in that book and tells
598 | you on which pages those keywords appear.
599 | It helps you find the pages that contain the word `promise` instead of searching each page one by one. Note that a keyword
600 | may appear on more than one page.
601 | In that case, you will see all pages on which the keyword appears. In a JavaScript book, the word `function` may appear
602 | on many pages while the term
603 | `prototype chaining` may appear only once. In the index, you can quickly find on which pages these words appear.
604 |
605 | Here is
606 | a [link to a Medium article](https://medium.com/javarevisited/indexes-when-to-use-and-when-to-avoid-them-39c56e5a7329)
607 | that describes indexes concisely.
608 |
609 | #### Example
610 |
611 | First, we will create a table with a large number of records.
612 | The full program can be found in `Week2/generate_big_table.js`, but here is the relevant snippet:
613 |
614 | ```js
615 | async function seedDatabase() {
616 | const CREATE_TABLE = `
617 | CREATE TABLE IF NOT EXISTS big
618 | (
619 | id_pk INT PRIMARY KEY AUTO_INCREMENT,
620 | number INT
621 | );`;
622 |
623 |   await execQuery(CREATE_TABLE);
624 |   let rows = [];
625 |   for (let i = 1; i <= 1000000; i++) {
626 | rows.push([i]);
627 | if (i % 10000 === 0) {
628 | console.log("i=" + i);
629 | await execQuery("INSERT INTO big(number) VALUES ?", [rows]);
630 | rows = [];
631 | }
632 | }
633 | }
634 | ```
635 |
636 | The following two queries will show the difference (in execution time) between using the index and not using the index
637 | when we retrieve the data.
638 |
639 | ```sql
640 | mysql> SELECT * FROM big WHERE id_pk = 1000;
641 | +-------+--------+
642 | | id_pk | number |
643 | +-------+--------+
644 | | 1000 | 1000 |
645 | +-------+--------+
646 | 1 row in set (0.00 sec)
647 |
648 | mysql> SELECT * FROM big WHERE number = 1000;
649 | +-------+--------+
650 | | id_pk | number |
651 | +-------+--------+
652 | | 1000 | 1000 |
653 | +-------+--------+
654 | 1 row in set (0.19 sec)
655 | ```
656 |
657 | The first query's result is instant because the `WHERE` clause uses the `id_pk` column, which is a primary key.
658 | Note that MySQL automatically creates an index for a primary key column. This can be confirmed with the following query:
659 |
660 | ```sql
661 | mysql> SHOW indexes from big;
662 | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
663 | | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
664 | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
665 | | big | 0 | PRIMARY | 1 | id_pk | A | 12769223 | NULL | NULL | | BTREE |
666 | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
667 | 1 row in set (0.01 sec)
668 | ```
669 |
670 | The query `SELECT * FROM big WHERE number = 1000` takes longer to run because the column `number` is not indexed.
671 | MySQL has to scan the `big` table row by row to check which rows contain the value 1000 in the `number` column.
673 |
674 | The `describe` command shows how many rows are accessed to fetch the result of the query.
675 | Check the `rows` column in the output of the following queries.
676 |
677 | ```sql
678 | mysql> DESCRIBE SELECT * FROM big WHERE number = 1000;
679 | +----+-------------+-------+------------+------+---------------+------+---------+------+----------+----------+-------------+
680 | | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
681 | +----+-------------+-------+------------+------+---------------+------+---------+------+----------+----------+-------------+
682 | | 1 | SIMPLE | big | NULL | ALL | NULL | NULL | NULL | NULL | 998568 | 10.00 | Using where |
683 | +----+-------------+-------+------------+------+---------------+------+---------+------+----------+----------+-------------+
684 | 1 row in set, 1 warning (0.00 sec)
685 |
686 | mysql> DESCRIBE SELECT * FROM big WHERE id_pk = 1000;
687 | +----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
688 | | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
689 | +----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
690 | | 1 | SIMPLE | big | NULL | const | PRIMARY | PRIMARY | 4 | const | 1 | 100.00 | NULL |
691 | +----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
692 | ```
693 |
694 | We can now create an index on the `number` column as follows:
695 |
696 | ```sql
697 | CREATE INDEX idx_number ON big(number);
698 | ```
699 |
700 | Now we can re-run the select query which will be faster:
701 |
702 | ```sql
703 | mysql> SELECT * FROM big WHERE number = 1000;
704 | +-------+--------+
705 | | id_pk | number |
706 | +-------+--------+
707 | | 1000 | 1000 |
708 | +-------+--------+
709 | 1 row in set (0.00 sec)
710 | ```
711 |
712 | We have seen that having an index helps fetch the data faster. However, an index makes updates and inserts
713 | more expensive: after an update to the indexed column, MySQL also has to internally update the index for that
714 | column.
715 |
716 | Look at the query below:
717 |
718 | ```sql
719 | mysql> UPDATE big SET number = number + 100000;
720 | Query OK, 1000000 rows affected (14.01 sec)
721 | Rows matched: 1000000 Changed: 1000000 Warnings: 0
722 | ```
723 |
724 | Now, let us remove the index
725 |
726 | ```sql
727 | mysql> DROP INDEX idx_number ON big;
728 | Query OK, 0 rows affected (1.59 sec)
729 | Records: 0 Duplicates: 0 Warnings: 0
730 | ```
731 |
732 | and re-run the update query.
733 |
734 | ```sql
735 | mysql> UPDATE big SET number = number + 100000;
736 | Query OK, 1000000 rows affected (6.14 sec)
737 | Rows matched: 1000000 Changed: 1000000 Warnings: 0
738 | ```
739 |
740 | We can see that without the index, update of the number column is much faster (6 seconds as compared to 14).
741 |
742 | #### Exercise
743 |
744 | Create a composite index using the columns (`employee_name`, `salary`) on the `employees` table and check the query
745 | performance of the following queries:
746 |
747 | ```sql
748 | DESCRIBE SELECT * FROM employees WHERE employee_name = 'John' and salary = 50000;
749 | DESCRIBE SELECT * FROM employees WHERE employee_name = 'John';
750 | DESCRIBE SELECT * FROM employees WHERE salary = 50000;
751 | ```
752 |
753 | Make sure to have at least 100 records in the `employees` table including someone named `John` with salary 50000.
754 |
755 | #### Essence
756 |
757 | Indexes in databases can be used to increase the performance for finding and retrieving specific rows.
758 | However, they do also add overhead to the database (especially for updates/inserts), so they should be used with care.
759 |
760 | ### 6. Domain Modeling
761 |
762 | #### Explanation
763 |
764 | - Domain modeling is creating models for the domain of the problem or system.
765 | - It makes use of concepts like entities and relationships.
766 | - Entity Relationship Diagrams (ERD) are used widely in domain modeling.
767 | - In an ERD, **entities** are shown as boxes and represent abstract things. E.g. John Smith is an instance; Student is the
768 |   entity. An entity in an ERD is converted to a table in MySQL.
769 | - Entities are connected to each other with a line (**relationships**) with **cardinalities** (1-1, 1-M etc.).
770 | - Entities have **attributes** shown in the shape of an ellipse. An attribute of the entity is translated to
771 | the column of the corresponding table.
772 |
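773 | As a hypothetical sketch of that translation (the names below are illustrative, not part of the exercises): an entity `Student` with the attributes `student_id`, `student_name` and `city` would become the following table.
774 |
775 | ```sql
776 | -- Each ERD attribute maps to a column; the identifying attribute
777 | -- becomes the primary key.
778 | CREATE TABLE students
779 | (
780 |     student_id int,
781 |     student_name varchar(50),
782 |     city varchar(50),
783 |     PRIMARY KEY (student_id)
784 | );
785 | ```
786 |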
773 | #### Example
774 |
775 | Draw the **ERD** for the employees, departments and projects.
776 |
777 | #### Exercise
778 |
779 | Draw the **ERD** for the school database. Identify tables, attributes and relationships.
780 |
781 | #### Essence
782 |
783 | Domain modeling using ERD diagrams helps system analysts and database designers get a concrete view of the
784 | system and of how to implement it in databases.
785 |
--------------------------------------------------------------------------------
/Week2/MAKEME.md:
--------------------------------------------------------------------------------
1 | > This Assignment assumes you have installed the software [MySQL](https://dev.mysql.com/downloads/installer/) on your
2 | > computer. If not, please do that first!
3 |
4 | # Assignment Databases Week 2
5 |
6 | ## Todo list
7 |
8 | 1. Practice the concepts
9 | 2. Prep exercises
10 | 3. MySQL exercises
11 | 4. Keep practicing (Optional)
12 |
13 | ## 1. Practice the concepts
14 |
15 | Let's do some interactive exercises first. In the following online course, **do sections 3 (Aggregate Functions) and
16 | sections 4 (Multiple Tables)**.
17 |
18 | - [Codecademy: Learn SQL](https://www.codecademy.com/learn/learn-sql)
19 |
20 | Also make time to do lesson 6-12 from the following:
21 |
22 | - [SQLBolt](https://sqlbolt.com/lesson/select_queries_with_joins)
23 |
24 | ## 2. Prep exercises
25 |
26 | > Prep exercises are exercises that you should work on _before_ the session on Sunday. These are a little more difficult
27 | > or show an important concept and as such are a great exercise to talk about with your mentor. Have a solution ready by
28 | > Sunday as you may be asked to show what you did.
29 |
30 | The [QA_PREP_EXERCISE.md](./QA_PREP_EXERCISE.md) file explains what needs to be done for the prep exercise this week.
31 | There will also be some questions at the bottom to think about. Go through them _before_ the session on Sunday as it
32 | will be covered then.
33 |
34 | ## 3. MySQL exercises
35 |
36 | > You will need to do some research to solve these exercises. All the concepts necessary to solve
37 | > these exercises are NOT covered in the reading material. This is on purpose.
38 |
39 | This week we'll practice some more with writing SQL queries using JavaScript. For each exercise make a separate `.js`
40 | file; make sure to give it an appropriate name!
41 |
42 | ### 3.1. Exercise 1: Keys
43 |
44 | 1. Create a table, called `authors`. Give it the following
45 | fields: `(author_id(Primary Key), author_name, university, date_of_birth, h_index, gender)`
46 | 2. Write a query that adds a column called `mentor` to `authors` table that references the column `author_id`.
47 | For integrity add a `foreign key` on this column.
48 |
49 | ### 3.2. Exercise 2: Relationships
50 |
51 | 1. Create another table, called `research_Papers` with the following
52 | fields: `(paper_id, paper_title, conference, publish_date, ...)`
53 | 2. What is the relationship between Authors and Research papers? Make the necessary changes to the `authors` and
54 |    `research_Papers` tables and add more tables if necessary.
55 | 3. Read exercises 3 and 4, and then add the information of (insert rows for) 15 authors and 30 research papers, such that
56 |    all queries in exercises 3 and 4 will return some answers.
57 |
58 | ### 3.3. Exercise 3: Joins
59 |
60 | 1. Write a query that prints names of all `authors` and their corresponding `mentors`.
61 | 2. Write a query that prints all columns of `authors` and their published `paper_title`.
62 | If there is an author without any `research_Papers`, print the information of that `author` too.
63 |
64 | ### 3.4 Exercise 4: Aggregate Functions
65 |
66 | Write some queries to retrieve the following rows:
67 |
68 | 1. All research papers and the number of authors that wrote that paper.
69 | 2. Sum of the research papers published by all female authors.
70 | 3. Average of the h-index of all authors per university.
71 | 4. Sum of the research papers of the authors per university.
72 | 5. Minimum and maximum of the h-index of all authors per university.
73 |
74 | ## 4. Keep practicing (Optional)
75 |
76 | It is always good to keep practicing, so if you have time then have a go at the following:
77 |
78 | - [SQL Murder Mystery: Solve a crime using SQL](https://mystery.knightlab.com/)
79 |
80 | ## SUBMIT YOUR ASSIGNMENT!
81 |
82 | After you've finished your todo list it's time to show us what you got! The assignment that needs to be submitted is the
83 | following:
84 |
85 | 1. MySQL exercises
86 |
87 | Upload your code to your forked Databases repository in GitHub. Make a pull request to HackYourFuture's forked repository.
88 |
89 | > Forgotten how to upload your Assignment? Go through the [guide](../hand-in-assignment-guide.md) to learn how to do this
90 | > again.
91 |
92 | _Deadline Tuesday 23.59 CET_
93 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/.gitignore:
--------------------------------------------------------------------------------
1 | # Logs
2 | logs
3 | *.log
4 | npm-debug.log*
5 | yarn-debug.log*
6 | yarn-error.log*
7 | lerna-debug.log*
8 |
9 | # Diagnostic reports (https://nodejs.org/api/report.html)
10 | report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
11 |
12 | # Runtime data
13 | pids
14 | *.pid
15 | *.seed
16 | *.pid.lock
17 |
18 | # Directory for instrumented libs generated by jscoverage/JSCover
19 | lib-cov
20 |
21 | # Coverage directory used by tools like istanbul
22 | coverage
23 | *.lcov
24 |
25 | # nyc test coverage
26 | .nyc_output
27 |
28 | # Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
29 | .grunt
30 |
31 | # Bower dependency directory (https://bower.io/)
32 | bower_components
33 |
34 | # node-waf configuration
35 | .lock-wscript
36 |
37 | # Compiled binary addons (https://nodejs.org/api/addons.html)
38 | build/Release
39 |
40 | # Dependency directories
41 | node_modules/
42 | jspm_packages/
43 |
44 | # TypeScript v1 declaration files
45 | typings/
46 |
47 | # TypeScript cache
48 | *.tsbuildinfo
49 |
50 | # Optional npm cache directory
51 | .npm
52 |
53 | # Optional eslint cache
54 | .eslintcache
55 |
56 | # Optional REPL history
57 | .node_repl_history
58 |
59 | # Output of 'npm pack'
60 | *.tgz
61 |
62 | # Yarn Integrity file
63 | .yarn-integrity
64 |
65 | # dotenv environment variables file
66 | .env
67 | .env.test
68 |
69 | # parcel-bundler cache (https://parceljs.org/)
70 | .cache
71 |
72 | # next.js build output
73 | .next
74 |
75 | # nuxt.js build output
76 | .nuxt
77 |
78 | # vuepress build output
79 | .vuepress/dist
80 |
81 | # Serverless directories
82 | .serverless/
83 |
84 | # FuseBox cache
85 | .fusebox/
86 |
87 | # DynamoDB Local files
88 | .dynamodb/
89 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/ERD.mwb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/Week2/QA-session-content/ERD.mwb
--------------------------------------------------------------------------------
/Week2/QA-session-content/README.md:
--------------------------------------------------------------------------------
1 | This folder contains a small codebase to set up a database with some content.
2 | The database will contain information about users, their projects, and tasks.
3 | Tasks can optionally be assigned to users, which makes for some nice example queries.
4 |
5 | Create a database called `db_qa_session` or make changes to the `knexfile.js`
6 |
7 | ```
8 | npm install
9 | npm install -g knex
10 | knex migrate:latest
11 | knex seed:run
12 | ```
13 |
14 | Your database should be filled with some demo content :)
15 |
16 | These seeds and migrations themselves aren't within the scope of this course.
17 | They are just a convenient way to create a demo database with some meaningful content to demonstrate queries.
18 |
19 | Some exercises that cover the topics of this week:
20 |
21 | 1. List all tasks
22 | 1. only show the ones that aren't done yet.
23 | `NOW()` can be used for the current dateTime.
24 | 2. add the first name and last name of the assigned user
25 | 3. add the project for the task
26 |
27 | 2. List all projects
28 | 1. reduce the list to the projects that aren't ended
29 | 2. add a column with the amount of tasks on the project
30 |
31 | 3. In your frontend you might have a url like `/projects/3` or even prettier `/projects/{CODE}`.
32 | Pick a code from a project.
33 | 1. Fetch all tasks for that project (don't return the project table columns)
34 |
35 | 4. For an admin page you want to list the users with their projects and their project aliases
36 |
37 | 5. For analytics, we want to have the amount of tasks done per user per month.
38 | 1. Create a separate query with average amount of tasks done per month per user.
39 | For example returning: Tomas finishes 4 tasks per month on average.
40 |
41 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/docker-compose.yml:
--------------------------------------------------------------------------------
1 | ---
2 | version: '3.7'
3 |
4 | services:
5 | mysql:
6 | image: mysql:5.7
7 | volumes:
8 | - mysqldata:/var/lib/mysql
9 | environment:
10 | MYSQL_ROOT_PASSWORD: "secret"
11 | MYSQL_DATABASE: "db_qa_session"
12 | restart: unless-stopped
13 | ports:
14 | - 3306:3306
15 |
16 | volumes:
17 | mysqldata:
18 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/knexfile.js:
--------------------------------------------------------------------------------
1 | // Update with your config settings.
2 |
3 | module.exports = {
4 |
5 | development: {
6 | client: 'mysql',
7 | connection: {
8 | host : '127.0.0.1',
9 | user : 'root',
10 | password : 'secret',
11 | database : 'db_qa_session'
12 | },
13 | seeds: {
14 | directory: __dirname + '/seeds'
15 | },
16 | migrations: {
17 | directory: __dirname + '/migrations'
18 | }
19 | },
20 |
21 | };
22 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/migrations/20200623130136_create_projects.js:
--------------------------------------------------------------------------------
1 |
2 | exports.up = async (knex) => {
3 | return knex.schema.createTable('projects', function (table) {
4 | table.increments();
5 | table.string('name').notNullable();
6 | table.date('start_date');
7 | table.date('end_date');
8 | table.string('code').unique();
9 | })
10 | };
11 |
12 | exports.down = async (knex) => {
13 | return knex.schema.dropTableIfExists('projects')
14 | };
15 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/migrations/20200623130141_create_users.js:
--------------------------------------------------------------------------------
1 |
2 | exports.up = async (knex) => {
3 | return knex.schema.createTable('users', function (table) {
4 | table.increments();
5 | table.string('name');
6 | table.string('username').notNullable();
7 | table.string('url');
8 | })
9 | };
10 |
11 | exports.down = async (knex) => {
12 | return knex.schema.dropTableIfExists('users')
13 | };
14 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/migrations/20200623130146_create_tasks.js:
--------------------------------------------------------------------------------
1 |
2 | exports.up = async (knex) => {
3 | return knex.schema.createTable('tasks', function (table) {
4 | table.increments();
5 | table.string('name');
6 | table.integer('assigned_to')
7 | .unsigned()
8 | .references('id')
9 | .inTable('users');
10 | table.integer('project_id')
11 | .unsigned()
12 | .references('id')
13 | .inTable('projects')
14 | .notNullable();
15 | table.dateTime('done_at').defaultTo(null);
16 | })
17 | };
18 |
19 | exports.down = async (knex) => {
20 | return knex.schema.dropTableIfExists('tasks')
21 | };
22 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/migrations/20200623130154_create_project_users.js:
--------------------------------------------------------------------------------
1 |
2 | exports.up = async (knex) => {
3 | return knex.schema.createTable('project_users', function (table) {
4 | table.integer('user_id')
5 | .unsigned()
6 | .references('id')
7 | .inTable('users');
8 | table.integer('project_id')
9 | .unsigned()
10 | .references('id')
11 | .inTable('projects');
12 | table.string('project_alias').nullable();
13 | })
14 | };
15 |
16 | exports.down = async (knex) => {
17 | return knex.schema.dropTableIfExists('project_users')
18 | };
19 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/package-lock.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "epic_todolist",
3 | "version": "1.0.0",
4 | "lockfileVersion": 1,
5 | "requires": true,
6 | "dependencies": {
7 | "bignumber.js": {
8 | "version": "9.0.0",
9 | "resolved": "https://registry.npmjs.org/bignumber.js/-/bignumber.js-9.0.0.tgz",
10 | "integrity": "sha512-t/OYhhJ2SD+YGBQcjY8GzzDHEk9f3nerxjtfa6tlMXfe7frs/WozhvCNoGvpM0P3bNf3Gq5ZRMlGr5f3r4/N8A=="
11 | },
12 | "colorette": {
13 | "version": "2.0.19",
14 | "resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz",
15 | "integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ=="
16 | },
17 | "commander": {
18 | "version": "9.5.0",
19 | "resolved": "https://registry.npmjs.org/commander/-/commander-9.5.0.tgz",
20 | "integrity": "sha512-KRs7WVDKg86PWiuAqhDrAQnTXZKraVcCc6vFdL14qrZ/DcWwuRo7VoiYXalXO7S5GKpqYiVEwCbgFDfxNHKJBQ=="
21 | },
22 | "core-util-is": {
23 | "version": "1.0.2",
24 | "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.2.tgz",
25 | "integrity": "sha1-tf1UIgqivFq1eqtxQMlAdUUDwac="
26 | },
27 | "data-uri-to-buffer": {
28 | "version": "4.0.0",
29 | "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.0.tgz",
30 | "integrity": "sha512-Vr3mLBA8qWmcuschSLAOogKgQ/Jwxulv3RNE4FXnYWRGujzrRWQI4m12fQqRkwX06C0KanhLr4hK+GydchZsaA=="
31 | },
32 | "debug": {
33 | "version": "4.3.4",
34 | "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
35 | "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==",
36 | "requires": {
37 | "ms": "2.1.2"
38 | }
39 | },
40 | "escalade": {
41 | "version": "3.1.1",
42 | "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz",
43 | "integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw=="
44 | },
45 | "esm": {
46 | "version": "3.2.25",
47 | "resolved": "https://registry.npmjs.org/esm/-/esm-3.2.25.tgz",
48 | "integrity": "sha512-U1suiZ2oDVWv4zPO56S0NcR5QriEahGtdN2OR6FiOG4WJvcjBVFB0qI4+eKoWFH483PKGuLuu6V8Z4T5g63UVA=="
49 | },
50 | "faker": {
51 | "version": "4.1.0",
52 | "resolved": "https://registry.npmjs.org/faker/-/faker-4.1.0.tgz",
53 | "integrity": "sha1-HkW7vsxndLPBlfrSg1EJxtdIzD8="
54 | },
55 | "fetch-blob": {
56 | "version": "3.1.4",
57 | "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.1.4.tgz",
58 | "integrity": "sha512-Eq5Xv5+VlSrYWEqKrusxY1C3Hm/hjeAsCGVG3ft7pZahlUAChpGZT/Ms1WmSLnEAisEXszjzu/s+ce6HZB2VHA==",
59 | "requires": {
60 | "node-domexception": "^1.0.0",
61 | "web-streams-polyfill": "^3.0.3"
62 | }
63 | },
64 | "formdata-polyfill": {
65 | "version": "4.0.10",
66 | "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
67 | "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
68 | "requires": {
69 | "fetch-blob": "^3.1.2"
70 | }
71 | },
72 | "function-bind": {
73 | "version": "1.1.1",
74 | "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz",
75 | "integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A=="
76 | },
77 | "get-package-type": {
78 | "version": "0.1.0",
79 | "resolved": "https://registry.npmjs.org/get-package-type/-/get-package-type-0.1.0.tgz",
80 | "integrity": "sha512-pjzuKtY64GYfWizNAJ0fr9VqttZkNiK2iS430LtIHzjBEr6bX8Am2zm4sW4Ro5wjWW5cAlRL1qAMTcXbjNAO2Q=="
81 | },
82 | "getopts": {
83 | "version": "2.3.0",
84 | "resolved": "https://registry.npmjs.org/getopts/-/getopts-2.3.0.tgz",
85 | "integrity": "sha512-5eDf9fuSXwxBL6q5HX+dhDj+dslFGWzU5thZ9kNKUkcPtaPdatmUFKwHFrLb/uf/WpA4BHET+AX3Scl56cAjpA=="
86 | },
87 | "has": {
88 | "version": "1.0.3",
89 | "resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz",
90 | "integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==",
91 | "requires": {
92 | "function-bind": "^1.1.1"
93 | }
94 | },
95 | "inherits": {
96 | "version": "2.0.4",
97 | "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
98 | "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="
99 | },
100 | "interpret": {
101 | "version": "2.2.0",
102 | "resolved": "https://registry.npmjs.org/interpret/-/interpret-2.2.0.tgz",
103 | "integrity": "sha512-Ju0Bz/cEia55xDwUWEa8+olFpCiQoypjnQySseKtmjNrnps3P+xfpUmGr90T7yjlVJmOtybRvPXhKMbHr+fWnw=="
104 | },
105 | "is-core-module": {
106 | "version": "2.11.0",
107 | "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.11.0.tgz",
108 | "integrity": "sha512-RRjxlvLDkD1YJwDbroBHMb+cukurkDWNyHx7D3oNB5x9rb5ogcksMC5wHCadcXoo67gVr/+3GFySh3134zi6rw==",
109 | "requires": {
110 | "has": "^1.0.3"
111 | }
112 | },
113 | "isarray": {
114 | "version": "1.0.0",
115 | "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
116 | "integrity": "sha1-u5NdSFgsuhaMBoNJV6VKPgcSTxE="
117 | },
118 | "knex": {
119 | "version": "2.4.0",
120 | "resolved": "https://registry.npmjs.org/knex/-/knex-2.4.0.tgz",
121 | "integrity": "sha512-i0GWwqYp1Hs2yvc2rlDO6nzzkLhwdyOZKRdsMTB8ZxOs2IXQyL5rBjSbS1krowCh6V65T4X9CJaKtuIfkaPGSA==",
122 | "requires": {
123 | "colorette": "2.0.19",
124 | "commander": "^9.1.0",
125 | "debug": "4.3.4",
126 | "escalade": "^3.1.1",
127 | "esm": "^3.2.25",
128 | "get-package-type": "^0.1.0",
129 | "getopts": "2.3.0",
130 | "interpret": "^2.2.0",
131 | "lodash": "^4.17.21",
132 | "pg-connection-string": "2.5.0",
133 | "rechoir": "^0.8.0",
134 | "resolve-from": "^5.0.0",
135 | "tarn": "^3.0.2",
136 | "tildify": "2.0.0"
137 | }
138 | },
139 | "lodash": {
140 | "version": "4.17.21",
141 | "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
142 | "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg=="
143 | },
144 | "ms": {
145 | "version": "2.1.2",
146 | "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
147 | "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="
148 | },
149 | "mysql": {
150 | "version": "2.18.1",
151 | "resolved": "https://registry.npmjs.org/mysql/-/mysql-2.18.1.tgz",
152 | "integrity": "sha512-Bca+gk2YWmqp2Uf6k5NFEurwY/0td0cpebAucFpY/3jhrwrVGuxU2uQFCHjU19SJfje0yQvi+rVWdq78hR5lig==",
153 | "requires": {
154 | "bignumber.js": "9.0.0",
155 | "readable-stream": "2.3.7",
156 | "safe-buffer": "5.1.2",
157 | "sqlstring": "2.3.1"
158 | }
159 | },
160 | "node-domexception": {
161 | "version": "1.0.0",
162 | "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
163 | "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="
164 | },
165 | "node-fetch": {
166 | "version": "3.1.1",
167 | "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.1.1.tgz",
168 | "integrity": "sha512-SMk+vKgU77PYotRdWzqZGTZeuFKlsJ0hu4KPviQKkfY+N3vn2MIzr0rvpnYpR8MtB3IEuhlEcuOLbGvLRlA+yg==",
169 | "requires": {
170 | "data-uri-to-buffer": "^4.0.0",
171 | "fetch-blob": "^3.1.3",
172 | "formdata-polyfill": "^4.0.10"
173 | }
174 | },
175 | "path-parse": {
176 | "version": "1.0.7",
177 | "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz",
178 | "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw=="
179 | },
180 | "pg-connection-string": {
181 | "version": "2.5.0",
182 | "resolved": "https://registry.npmjs.org/pg-connection-string/-/pg-connection-string-2.5.0.tgz",
183 | "integrity": "sha512-r5o/V/ORTA6TmUnyWZR9nCj1klXCO2CEKNRlVuJptZe85QuhFayC7WeMic7ndayT5IRIR0S0xFxFi2ousartlQ=="
184 | },
185 | "process-nextick-args": {
186 | "version": "2.0.1",
187 | "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
188 | "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag=="
189 | },
190 | "readable-stream": {
191 | "version": "2.3.7",
192 | "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz",
193 | "integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==",
194 | "requires": {
195 | "core-util-is": "~1.0.0",
196 | "inherits": "~2.0.3",
197 | "isarray": "~1.0.0",
198 | "process-nextick-args": "~2.0.0",
199 | "safe-buffer": "~5.1.1",
200 | "string_decoder": "~1.1.1",
201 | "util-deprecate": "~1.0.1"
202 | }
203 | },
204 | "rechoir": {
205 | "version": "0.8.0",
206 | "resolved": "https://registry.npmjs.org/rechoir/-/rechoir-0.8.0.tgz",
207 | "integrity": "sha512-/vxpCXddiX8NGfGO/mTafwjq4aFa/71pvamip0++IQk3zG8cbCj0fifNPrjjF1XMXUne91jL9OoxmdykoEtifQ==",
208 | "requires": {
209 | "resolve": "^1.20.0"
210 | }
211 | },
212 | "resolve": {
213 | "version": "1.22.1",
214 | "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.1.tgz",
215 | "integrity": "sha512-nBpuuYuY5jFsli/JIs1oldw6fOQCBioohqWZg/2hiaOybXOft4lonv85uDOKXdf8rhyK159cxU5cDcK/NKk8zw==",
216 | "requires": {
217 | "is-core-module": "^2.9.0",
218 | "path-parse": "^1.0.7",
219 | "supports-preserve-symlinks-flag": "^1.0.0"
220 | }
221 | },
222 | "resolve-from": {
223 | "version": "5.0.0",
224 | "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
225 | "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw=="
226 | },
227 | "safe-buffer": {
228 | "version": "5.1.2",
229 | "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
230 | "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g=="
231 | },
232 | "sqlstring": {
233 | "version": "2.3.1",
234 | "resolved": "https://registry.npmjs.org/sqlstring/-/sqlstring-2.3.1.tgz",
235 | "integrity": "sha1-R1OT/56RR5rqYtyvDKPRSYOn+0A="
236 | },
237 | "string_decoder": {
238 | "version": "1.1.1",
239 | "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
240 | "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
241 | "requires": {
242 | "safe-buffer": "~5.1.0"
243 | }
244 | },
245 | "supports-preserve-symlinks-flag": {
246 | "version": "1.0.0",
247 | "resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz",
248 | "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w=="
249 | },
250 | "tarn": {
251 | "version": "3.0.2",
252 | "resolved": "https://registry.npmjs.org/tarn/-/tarn-3.0.2.tgz",
253 | "integrity": "sha512-51LAVKUSZSVfI05vjPESNc5vwqqZpbXCsU+/+wxlOrUjk2SnFTt97v9ZgQrD4YmxYW1Px6w2KjaDitCfkvgxMQ=="
254 | },
255 | "tildify": {
256 | "version": "2.0.0",
257 | "resolved": "https://registry.npmjs.org/tildify/-/tildify-2.0.0.tgz",
258 | "integrity": "sha512-Cc+OraorugtXNfs50hU9KS369rFXCfgGLpfCfvlc+Ud5u6VWmUQsOAa9HbTvheQdYnrdJqqv1e5oIqXppMYnSw=="
259 | },
260 | "util-deprecate": {
261 | "version": "1.0.2",
262 | "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
263 | "integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8="
264 | },
265 | "web-streams-polyfill": {
266 | "version": "3.2.0",
267 | "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.2.0.tgz",
268 | "integrity": "sha512-EqPmREeOzttaLRm5HS7io98goBgZ7IVz79aDvqjD0kYXLtFZTc0T/U6wHTPKyIjb+MdN7DFIIX6hgdBEpWmfPA=="
269 | }
270 | }
271 | }
272 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "epic_todolist",
3 | "version": "1.0.0",
4 | "description": "epic_todolist",
5 | "main": "index.js",
6 | "scripts": {
7 | "test": "echo \"Error: no test specified\" && exit 1"
8 | },
9 | "author": "",
10 | "license": "ISC",
11 | "dependencies": {
12 | "faker": "^4.1.0",
13 | "knex": "^2.4.0",
14 | "mysql": "^2.18.1",
15 | "node-fetch": "^3.1.1"
16 | }
17 | }
18 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/seeds/0-reset.js:
--------------------------------------------------------------------------------
1 | exports.seed = async (knex) => {
2 |
3 | await knex.raw('SET foreign_key_checks = 0;');
4 | await knex('project_users').truncate();
5 | await knex('tasks').truncate();
6 | await knex('projects').truncate();
7 | await knex('users').truncate();
8 | await knex.raw('SET foreign_key_checks = 1;');
9 |
10 | };
11 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/seeds/1-addProjects.js:
--------------------------------------------------------------------------------
1 | const faker = require('faker');
2 |
3 | const createProject = function (i) {
4 | let companyName = faker.company.companyName();
5 | let abbreviation = companyName
6 | .replace(/,|and|-/gi, '')
7 | .split(' ')
8 | .map((name) => name[0])
9 | .join('');
10 |
11 | return {
12 | name: companyName + ' - ' + (i % 2 ? 'website' : 'app'),
13 | start_date: faker.date.past(),
14 | end_date: (i % 5 ? faker.date.future() : faker.date.past()),
15 | code: abbreviation
16 | }
17 | }
18 |
19 | exports.seed = async function (knex) {
20 |
21 | const fakes = [];
22 | const codes = [];
23 | const amount = 20;
24 | for (let i = 0; i < amount; i++) {
25 | let project = createProject(i);
26 | if(codes.includes(project.code)){
27 | project.code += i;
28 | }
29 | codes.push(project.code);
30 | fakes.push(project);
31 | }
32 |
33 | await knex('projects').insert(fakes);
34 | };
35 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/seeds/2-addUsers.js:
--------------------------------------------------------------------------------
1 | const faker = require('faker');
2 | const fetch = require("node-fetch");
3 |
4 | const createFakeuser = () => ({
5 | name: faker.name.firstName() + ' ' + faker.name.lastName(),
6 | username: faker.internet.userName(),
7 | url: null
8 | })
9 |
10 | exports.seed = async function (knex) {
11 | const fakeUsers = [];
12 |
13 | const response = await fetch('https://api.github.com/repos/HackYourAssignment/databases/pulls', {
14 | headers: {'Content-Type': 'application/json'}
15 | });
16 | const pullRequests = await response.json();
17 |
18 | let logins = [...new Set(pullRequests.map((pr) => { return pr.user.login }) )];
19 |
20 | await Promise.all(logins.map(async (login) => {
21 | const response = await fetch('https://api.github.com/users/' + login, {
22 | headers: {'Content-Type': 'application/json'}
23 | });
24 |
25 | const profile = await response.json();
26 | fakeUsers.push({
27 | username: profile.login,
28 | name: profile.name,
29 | url: profile.html_url,
30 | });
31 | }));
32 |
33 | const desiredFakeUsers = 10;
34 | while (fakeUsers.length < desiredFakeUsers) {
35 | fakeUsers.push(createFakeuser());
36 | }
37 |
38 | await knex('users').insert(fakeUsers);
39 | };
40 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/seeds/3-addTasks.js:
--------------------------------------------------------------------------------
1 | const faker = require('faker');
2 |
3 | const createTask = (i) => ({
4 | name: faker.hacker.verb() + ' '+ faker.hacker.adjective() + ' '+ faker.hacker.noun(),
5 | project_id: Math.ceil(Math.random() * 20),
6 | assigned_to: (i % 3 ? null : Math.ceil(Math.random() * 10)),
7 | done_at: (i % 4 ? faker.date.past() : null)
8 | })
9 |
10 | exports.seed = async function (knex) {
11 |
12 | const fakes = [];
13 | const amount = 100;
14 | for (let i = 0; i < amount; i++) {
15 | fakes.push(createTask(i));
16 | }
17 |
18 | await knex('tasks').insert(fakes);
19 | };
20 |
--------------------------------------------------------------------------------
/Week2/QA-session-content/seeds/4-addProjectUsers.js:
--------------------------------------------------------------------------------
1 | const faker = require('faker');
2 |
3 | const createRelation = (i) => ({
4 |   // users 2 to 9 will have projects
5 |   user_id: 1 + Math.ceil(Math.random() * 8),
6 |   // projects 4 to 18 will have users
7 |   project_id: 3 + Math.ceil(Math.random() * 15),
8 | project_alias: (i%5 ? null : faker.random.word())
9 | })
10 |
11 | exports.seed = async function (knex) {
12 |
13 | const fakes = [];
14 | const amount = 30;
15 | for (let i = 0; i < amount; i++) {
16 | fakes.push(createRelation(i));
17 | }
18 |
19 | await knex('project_users').insert(fakes);
20 | };
21 |
--------------------------------------------------------------------------------
/Week2/QA_PREP_EXERCISE.md:
--------------------------------------------------------------------------------
1 | # Prep exercise week 2
2 |
3 | As a preparation step for the upcoming Q&A, you need to work on the following exercise, which is based on the prep exercise of the previous week.
4 |
5 | ## Exercise
6 |
7 | Last week you designed a database for storing food recipes. How can you improve your tables, based on what you learned this week?
8 |
9 | - What changes do you need to make to include [these new recipes](recipes.md) in your database?
10 |
11 | - Where can you add a primary key?
12 | - Which tables should be linked with a foreign key?
13 | - What type of relationships do you see between the tables?
14 |
15 | - Can you make queries to get:
16 | - All the vegetarian recipes with potatoes
17 | - All the cakes that do not need baking
18 | - All the vegan and Japanese recipes
19 |
20 | Since your table data might differ from the questions above, feel free to improvise.
21 | Make sure that your database is running locally and that your queries are reproducible in .js files.
22 |
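23 | The exact query depends on your own design, but as a shape to aim for, a query like "all the vegetarian recipes with potatoes" might look something like this (the table and column names here are only an assumption; yours will differ):
24 |
25 | ```sql
26 | -- assumes recipes, categories, ingredients plus two junction tables
27 | SELECT r.name
28 | FROM recipes AS r
29 | JOIN recipe_categories AS rc ON rc.recipe_id = r.id
30 | JOIN categories AS c ON c.id = rc.category_id
31 | JOIN recipe_ingredients AS ri ON ri.recipe_id = r.id
32 | JOIN ingredients AS i ON i.id = ri.ingredient_id
33 | WHERE c.name = 'Vegetarian' AND i.name = 'Potatoes';
34 | ```
35 |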
23 | Having learned what an ERD is, how would you design one for the database you created? The diagram must visualize:
24 |
25 | - Your entities
26 | - Their attributes
27 | - The relationships between the tables
28 |
29 | You can use [Lucidchart](https://www.lucidchart.com/pages/?) or [draw.io](https://www.drawio.com/) to create your diagrams.
30 |
--------------------------------------------------------------------------------
/Week2/README.md:
--------------------------------------------------------------------------------
1 | # Reading Material Databases Week 2
2 |
3 | ## Agenda
4 |
5 | These are the topics for week 2:
6 |
7 | 1. [Identifiers (Primary key, Foreign key, Composite key)](https://hackyourfuture.github.io/study/#/databases/sql/identifiers)
8 | 2. [Relationships (One-to-One, One-to-Many, Many-to-Many)](https://hackyourfuture.github.io/study/#/databases/sql/relationships)
9 | 3. [SQL clauses](https://hackyourfuture.github.io/study/#/databases/sql/advanced-sql)
10 | - Joins
11 | - Aggregate functions
12 | - Distinct
13 | - Group By
14 | - Having
15 | 4. [Indexes](https://hackyourfuture.github.io/study/#/databases/sql/indexes)
16 | 5. [Domain modeling](https://hackyourfuture.github.io/study/#/databases/sql/domain-modeling)
17 |
18 | ## Week goals
19 |
20 | This week, we dive deeper into SQL and how to structure your tables. Let's start by looking into identifiers that allow you to add constraints to your database [here](https://hackyourfuture.github.io/study/#/databases/sql/identifiers). After that, let's look into relationships between tables. There are three types of relationships (one-to-one, one-to-many and many-to-many), read about when to use them over [here](https://hackyourfuture.github.io/study/#/databases/sql/relationships).
21 |
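22 | The migration files in `QA-session-content/migrations` show these ideas in practice; in plain SQL, a one-to-many link with a foreign key is a sketch like this (the column names are illustrative):
23 |
24 | ```sql
25 | -- each task belongs to exactly one project; a project can have many tasks
26 | CREATE TABLE tasks (
27 |   id INT PRIMARY KEY AUTO_INCREMENT,
28 |   name VARCHAR(255),
29 |   project_id INT,
30 |   FOREIGN KEY (project_id) REFERENCES projects (id)
31 | );
32 | ```
33 |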
22 | Next, we will look at more complex SQL commands that are needed now that the table structures are getting more complicated. Let's look into the advanced SQL commands and when to use them [here](https://hackyourfuture.github.io/study/#/databases/sql/advanced-sql).
23 |
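24 | As a taste of how these clauses work together, here is a sketch (not part of the linked material) that counts tasks per project, using the `projects` and `tasks` tables from this week's QA session content:
25 |
26 | ```sql
27 | -- LEFT JOIN keeps projects with zero tasks; COUNT + GROUP BY aggregates per project
28 | SELECT p.name, COUNT(t.id) AS task_count
29 | FROM projects AS p
30 | LEFT JOIN tasks AS t ON t.project_id = p.id
31 | GROUP BY p.id, p.name;
32 | ```
33 |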
24 | With more complex and bigger databases, performance can become a problem. To help you solve this, you can configure your database in such a way to speed up specific queries using [Indexes](https://hackyourfuture.github.io/study/#/databases/sql/indexes). Finally, with all of these relationships between multiple tables, it can become a little hard to keep track. To help visualise your databases, we generally use what is called an Entity Relationship Diagram, read all about them [here](https://hackyourfuture.github.io/study/#/databases/sql/domain-modeling).
25 |
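26 | The `generate_big_table.js` script in this folder measures that difference; a hedged sketch of adding such an index by hand (assuming the `big` table from that script) would be:
27 |
28 | ```sql
29 | -- after this, lookups on `number` can use the index instead of a full table scan
30 | CREATE INDEX idx_big_number ON big (number);
31 | SELECT * FROM big WHERE number = 230;
32 | ```
33 |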
26 | Bonus: We have a bonus video [here](https://www.youtube.com/watch?v=8yIuyUum3XU) of one of our mentors (@Unmesh on Slack! Thank him if you found it useful) going through a practical example of how to deal with the asynchronicity of working with databases.
27 |
28 | Bonus 2: A second bonus video [here](https://www.youtube.com/watch?v=H08wAwrWEec) that goes through some of this week's topics: keys, relationships and joins. It is again @Unmesh on Slack; a thank-you if it helped you is much appreciated!
29 |
30 | ## Career training II (Interview preparation)
31 | If you haven't finished all the material yet, then continue with it this week.
32 |
33 | ## Finished?
34 |
35 | Have you gone through all of the materials? High five! If you feel ready to get practical, click [here](./MAKEME.md).
36 |
--------------------------------------------------------------------------------
/Week2/async-create-insert.js:
--------------------------------------------------------------------------------
1 | const util = require('util');
2 | const mysql = require('mysql');
3 |
4 | const connection = mysql.createConnection({
5 | host: 'localhost',
6 | user: 'hyfuser',
7 | password: 'hyfpassword',
8 | database: 'userdb',
9 | });
10 |
11 | // Promisify connection.query, binding it to the connection object
12 | // (bind is needed because query expects "this" to be the connection).
13 | // Afterwards execQuery returns a promise, so we can await it
14 | // instead of passing a callback.
15 | const execQuery = util.promisify(connection.query.bind(connection));
16 |
17 | async function seedDatabase() {
18 | const CREATE_STUDENTS_TABLE = `
19 | CREATE TABLE IF NOT EXISTS students (
20 | student_number INT,
21 | student_name VARCHAR(50),
22 | date_of_birth DATE,
23 | grade FLOAT,
24 | gender ENUM('m', 'f')
25 | );`;
26 | const CREATE_TEACHERS_TABLE = `
27 | CREATE TABLE IF NOT EXISTS teachers (
28 | teacher_number INT,
29 | teacher_name VARCHAR(50),
30 | date_of_birth DATE,
31 | subject TEXT,
32 | gender ENUM('m', 'f')
33 | );`;
34 | const students = [
35 | {
36 | student_number: 4444,
37 | student_name: 'Benno',
38 | date_of_birth: '1995-04-26',
39 | grade: 8.3,
40 | gender: 'm',
41 | },
42 | {
43 | student_number: 3333,
44 | student_name: 'Henriata',
45 | date_of_birth: '1998-05-12',
46 | grade: 8.5,
47 | gender: 'm',
48 | },
49 | ];
50 |
51 | connection.connect();
52 |
53 | try {
54 | // call the function that returns promise
55 | await execQuery(CREATE_STUDENTS_TABLE);
56 | await execQuery(CREATE_TEACHERS_TABLE);
57 |     for (const student of students) {
58 |       await execQuery('INSERT INTO students SET ?', student);
59 |     }
60 |   } catch (error) {
61 |     console.error(error);
62 |   } finally {
63 |     // close the connection whether the inserts succeeded or failed
64 |     connection.end();
65 |   }
66 | }
67 |
68 | seedDatabase();
69 |
--------------------------------------------------------------------------------
/Week2/generate_big_table.js:
--------------------------------------------------------------------------------
1 | const util = require('util');
2 | const mysql = require('mysql');
3 |
4 | const connection = mysql.createConnection({
5 | host: 'localhost',
6 | user: 'hyfuser',
7 | password: 'hyfpassword',
8 | database: 'big',
9 | });
10 |
11 |
12 | const execQuery = util.promisify(connection.query.bind(connection));
13 | connection.connect();
14 |
15 | async function seedDatabase() {
16 |
17 | const CREATE_TABLE = `
18 | CREATE TABLE IF NOT EXISTS big
19 | (
20 | id_pk INT PRIMARY KEY AUTO_INCREMENT,
21 | number INT
22 | );`;
23 |
24 |   await execQuery(CREATE_TABLE);
25 |   let rows = [];
26 |   for (let i = 1; i <= 1000000; i++) {
27 | rows.push([i]);
28 | if(i%10000 === 0){
29 | console.log("i="+i);
30 | await execQuery('INSERT INTO big(number) VALUES ?',[rows]);
31 | rows = [];
32 | }
33 | }
34 | }
35 |
36 | async function queries() {
37 | let start = new Date();
38 | await execQuery('select * from big where id_pk = 230');
39 |
40 | let end = new Date() - start
41 | console.info('Execution time with a primary key: %dms', end)
42 |
43 |
44 | start = new Date();
45 | await execQuery('select * from big where number = 230');
46 | end = new Date() - start
47 | console.info('Execution time with no index: %dms', end)
48 |
49 | }
50 |
51 | seedDatabase().then(() => queries().then(()=>connection.end()));
52 |
53 |
--------------------------------------------------------------------------------
/Week2/optional advanced SQL/query4-stored_proc.sql:
--------------------------------------------------------------------------------
1 | delimiter $$
2 | CREATE PROCEDURE getSimilarCountries(input_country text)
3 | BEGIN
4 |   DECLARE message text;
5 |   DECLARE myregion text;
6 |   DECLARE mylang text;
7 |   DECLARE cntCountries int;
8 |   SELECT region INTO myregion FROM country WHERE Name = input_country;
9 |   SELECT language INTO mylang FROM country AS c, countrylanguage AS cl
10 |     WHERE c.code = cl.countrycode
11 |     AND c.Name = input_country
12 |     AND isofficial = 'T';
13 |   SELECT count(*) INTO cntCountries FROM country AS c, countrylanguage AS cl
14 |     WHERE c.code = cl.countrycode
15 |     AND region = myregion
16 |     AND language = mylang
17 |     AND isofficial = 'T';
18 |   IF (cntCountries > 1) THEN
19 |     SELECT c.Name FROM country AS c, countrylanguage AS cl
20 |     WHERE c.code = cl.countrycode
21 |     AND region = myregion
22 |     AND language = mylang
23 |     AND isofficial = 'T';
24 |   ELSE
25 |     SET message = 'FALSE: no such countries';
26 |     SET lc_messages = message;
27 |     SIGNAL SQLSTATE '45000';
28 |   END IF;
29 | END;
30 | $$
31 | delimiter ;
32 |
--------------------------------------------------------------------------------
/Week2/optional advanced SQL/trigger-insert-employees.sql:
--------------------------------------------------------------------------------
1 | delimiter $$
2 | CREATE TRIGGER start_date_trigger
3 | BEFORE INSERT
4 | ON employee
5 |
6 | FOR EACH ROW
7 | BEGIN
8 | DECLARE message varchar(100);
9 | IF new.starting_date > '2022-01-01'
10 | THEN
11 |      set message='Employees cannot start after 2022-01-01';
12 | SET lc_messages=message;
13 | SIGNAL SQLSTATE '45000';
14 | END IF;
15 | END $$
16 |
17 | delimiter ;
--------------------------------------------------------------------------------
/Week2/optional advanced SQL/trigger-on-project-date.sql:
--------------------------------------------------------------------------------
1 | delimiter $$
2 | CREATE TRIGGER date_trigger
3 | BEFORE INSERT
4 | ON emp_proj
5 | FOR EACH ROW
6 | BEGIN
7 | DECLARE message VARCHAR(100);
8 | DECLARE project_start_date datetime ;
9 | DECLARE project_end_date datetime ;
10 | DECLARE employee_join_date datetime ;
11 | SET project_start_date = (select start_date from projects where proj_no=new.proj_no);
12 | SET project_end_date = (select end_date from projects where proj_no=new.proj_no);
13 |   SET employee_join_date = (select joining_date from employees where emp_no=new.emp_no);
14 | IF employee_join_date not between project_start_date and project_end_date
15 | THEN
16 | set message= 'Employee start date is not within project dates';
17 | SET lc_messages=message; SIGNAL SQLSTATE '45000';
18 | END IF;
19 | END $$
20 |
21 | delimiter ;
22 |
--------------------------------------------------------------------------------
/Week2/optional advanced SQL/trigger-on-project-end-date.sql:
--------------------------------------------------------------------------------
1 | delimiter $$
2 | CREATE TRIGGER end_date_trigger
3 | BEFORE INSERT
4 | ON project
5 | FOR EACH ROW
6 | BEGIN
7 | DECLARE message VARCHAR(100);
8 | IF new.start_date > new.end_date
9 | THEN
10 | set message= 'Project end date cannot be before the start date';
11 | SET lc_messages=message; SIGNAL SQLSTATE '45000';
12 | END IF;
13 | END $$
14 |
15 | delimiter ;
16 |
--------------------------------------------------------------------------------
/Week2/optional advanced SQL/trigger.js:
--------------------------------------------------------------------------------
1 | const util = require('util');
2 | const mysql = require('mysql');
3 |
4 | const connection = mysql.createConnection({
5 | host: 'localhost',
6 | user: 'hyfuser',
7 | password: 'hyfpassword',
8 | database: 'test',
9 | });
10 |
11 | // Promisify connection.query, binding it to the connection object
12 | // (bind is needed because query expects "this" to be the connection).
13 | // Afterwards execQuery returns a promise, so we can await it
14 | // instead of passing a callback.
15 | const execQuery = util.promisify(connection.query.bind(connection));
16 |
17 | async function seedDatabase() {
18 |
19 |   /*
20 |   When using the JS client for the MySQL server,
21 |   we do not have to change the delimiter;
22 |   a plain CREATE TRIGGER query will do.
23 |
24 |   When using the MySQL command-line client,
25 |   we have to change the delimiter, because the semicolon (;)
26 |   is the statement delimiter for that client.
27 |   */
28 | const CREATE_TRIGGER = `CREATE TRIGGER end_date_trigger
29 | BEFORE INSERT
30 | ON project
31 | FOR EACH ROW
32 | BEGIN
33 | DECLARE message VARCHAR(100);
34 | IF new.start_date > new.end_date
35 | THEN
36 | set message= 'Project end date cannot be before the start date';
37 | SET lc_messages=message; SIGNAL SQLSTATE '45000';
38 | END IF;
39 | END;`
40 |
41 |
42 | connection.connect();
43 |
44 | try {
45 | // call the function that returns promise
46 | await execQuery(CREATE_TRIGGER);
47 | } catch (error) {
48 | console.error(error);
49 | }
50 |
51 | connection.end();
52 | }
53 |
54 | seedDatabase();
55 |
--------------------------------------------------------------------------------
/Week2/recipes.md:
--------------------------------------------------------------------------------
1 | # 1. No-Bake Cheesecake:
2 |
3 | ### Categories:
4 | Cake
5 | No-Bake
6 | Vegetarian
7 |
8 | ### Ingredients:
9 | Condensed milk
10 | Cream Cheese
11 | Lemon Juice
12 | Pie Crust
13 | Cherry Jam
14 |
15 | ### Steps:
16 | Beat Cream Cheese
17 | Add condensed Milk and blend
18 | Add Lemon Juice and blend
19 | Add the mix to the pie crust
20 | Spread the Cherry Jam
21 | Place in refrigerator for 3h.
22 |
23 |
24 | # Roasted Brussels Sprouts
25 |
26 | ### Categories:
27 | Vegan
28 | Gluten-Free
29 |
30 | ### Ingredients:
31 | Brussels Sprouts
32 | Lemon juice
33 | Sesame seeds
34 | Pepper
35 | Salt
36 | Olive oil
37 |
38 | ### Steps:
39 | Preheat the oven
40 | Mix the ingredients in a bowl
41 | Spread the mix on baking sheet
42 | Bake for 30'
43 |
44 | # Mac & Cheese:
45 |
46 | ### Categories:
47 | Vegetarian
48 |
49 | ### Ingredients:
50 | Macaroni
51 | Butter
52 | Flour
53 | Salt
54 | Pepper
55 | Milk
56 | Shredded Cheddar cheese
57 |
58 | ### Steps:
59 | Cook Macaroni for 8'
60 | Melt butter in a saucepan
61 | Add flour, salt, pepper and mix
62 | Add Milk and mix
63 | Cook until mix is smooth
64 | Add cheddar cheese
65 | Add the macaroni
66 |
67 | # Tamagoyaki Japanese Omelette
68 |
69 | ### Categories:
70 | Vegetarian
71 | Japanese
72 |
73 | ### Ingredients:
74 | Eggs
75 | Soy sauce
76 | Sugar
77 | Salt
78 | Olive Oil
79 |
80 | ### Steps:
81 | Beat the eggs
82 | Add soy sauce, sugar and salt
83 | Add oil to a sauce pan
84 | Bring to medium heat
85 | Add some mix to the sauce pan
86 | Let it cook for 1'
87 | Add oil to a sauce pan
88 | Add some mix to the sauce pan
89 | Let it cook for 1'
90 | Remove pan from fire
91 |
92 |
93 |
94 |
95 |
--------------------------------------------------------------------------------
/Week2/world.sql:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/Week2/world.sql
--------------------------------------------------------------------------------
/Week3/LESSONPLAN.md:
--------------------------------------------------------------------------------
1 | # Lesson Plan Databases Week 3
2 |
3 | The lesson plan is primarily written for mentors so that they can use examples and anecdotes from this document in conjunction with the README and explain the concepts better in the session.
4 |
5 | ## Topics (essentially same as the README file)
6 |
7 | 0. Super Key vs Candidate Key vs Primary key
8 | 1. Normalization
9 | 2. Transactions
10 | 3. SQL injection
11 | 4. NoSQL (with MongoDB)
12 |
13 | ## 0. Super Key vs Candidate Key vs Primary key
14 |
15 | #### Explanation
16 |
17 | 1. Super key is a set of columns that uniquely identifies a row.
18 | 2. Candidate key is a minimal super key that can uniquely identify a row.
19 | 3. Primary key is the candidate key chosen by the database designer.
20 |
21 | #### Example
22 |
23 | For the following table
24 | `Employee (employee_id, employee_name, gender, salary, department, age, city)`
25 |
26 | - Two super keys from this table are
27 |
28 | 1. SK1 = `{employee_name, department, age, city}`
29 | 2. SK2 = `{employee_id, employee_name, salary}`
30 |
31 | - The candidate keys derived from these super keys can be
32 |
33 | 1. CK1 from SK1 = `{employee_name, city}`,
34 |    if two employees with the same name always come from different cities;
35 |    then we don't need the `age` and `department` columns in this candidate key.
36 | 2. CK2 from SK2 = `{employee_id}`, if a different identifier is generated for every
37 |    employee; then we don't need the `employee_name` and `salary` columns in this candidate key.
38 |
39 | - The primary key chosen from these candidate keys could be `employee_id`.
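In SQL, the chosen candidate key is declared with a `PRIMARY KEY` constraint. A minimal sketch, with assumed column types (the example table does not specify any):

```sql
CREATE TABLE Employee (
  employee_id   INT PRIMARY KEY, -- the chosen candidate key
  employee_name VARCHAR(50),
  gender        VARCHAR(10),
  salary        INT,
  department    VARCHAR(50),
  age           INT,
  city          VARCHAR(50)
);
```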
40 |
41 | #### Exercise
42 |
43 | Consider the following table:
44 | `Book (ISBN int, book_name, author_name, publication_year, publisher, book_language)`.
45 | Find two super keys, derive candidate keys from them, and choose an appropriate primary key.
46 |
47 | #### Essence
48 |
49 | Primary key uniquely identifies rows.
50 | Super keys and Candidate keys are used in database design.
51 |
52 | ## 1. Normalization and normal forms
53 |
54 | ### Explanation
55 |
56 | The goal of normalization is to reduce duplication of data.
57 | Different levels of normalization are called _normal forms_.
58 | A table is said to be in `X normal form` if it satisfies all rules
59 | defined by that normal form and all the normal forms before X.
60 |
61 | #### 1NF (5 rules)
62 |
63 | 1. Single valued columns (each column should have atomic value, no multiple values)
64 | 2. Column domain (for any column) should not change.
65 | 3. Unique names for columns.
66 | 4. Order (of rows/columns) does not matter.
67 | 5. No duplicate records (every record has a primary key).
68 |
69 | #### 2NF (1NF + rule)
70 |
71 | - No non-prime column (a column that is not part of any candidate key)
72 |   should be functionally dependent on any proper subset of a candidate key.
73 |   In other words, there should be
74 |   no partial dependency (no column should depend on only part of the primary key).
75 |
76 | ```
77 | Functional dependency: Denoted with A => B.
78 | A and B are columns of a table. An oversimplified explanation of functional dependency is
79 | as follows.
80 | If I know a value in column A then I for sure know the value in column B
81 | but the reverse is not true.
82 | E.g. A is student number and B is student name.
83 | I can tell the name of the student from student number, but
84 | I cannot tell the student number from the name because there could be multiple students with the same name.
85 | ```
86 |
87 | If you feel adventurous, then read
88 | this [Functional Dependency Wikipage](https://en.wikipedia.org/wiki/Functional_dependency)
89 |
90 | #### 3NF (2NF + rule)
91 |
92 | - No transitive dependency (i.e. no column should depend on non-key column).
93 |
94 | #### 3.5NF AKA BCNF (3NF + rule)
95 |
96 | - For any dependency A => B, A should be a super key.
97 |   In other words, for a dependency A => B, A cannot be a non-prime column if B is a prime column.
98 |
99 | #### 4NF (BCNF + rule)
100 |
101 | - No multi-valued dependency.
102 |
103 | ### Example
104 |
105 | #### 1NF
106 |
107 | Consider the following table
108 |
109 | ```sql
110 | +-------------+------------+-----------------------+
111 | | Employee Id | Name | Contact |
112 | +-------------+------------+-----------------------+
113 | | 101 | Amit | 0684927317 |
114 | | 102 | Ben | 0634899234, ben@bu.nl |
115 | | 103 | Cathy | 0647882102, cat@dog.us|
116 | | 104 | Dua | 0622467559 |
117 | +-------------+------------+-----------------------+
118 | ```
119 |
120 | This table is not in 1NF: rule (1) is violated because
121 | rows 2 and 3 contain multiple values for the column `Contact`,
122 | and rule (2) is violated because the column `Contact` contains
123 | numeric values (for phone numbers) and string values (for emails).
124 |
125 | This table could be converted to 1NF as follows:
126 |
127 | ```sql
128 | +-------------+-------+------------+------------+
129 | | Employee Id | Name  | Phone      | Email      |
130 | +-------------+-------+------------+------------+
131 | | 101         | Amit  | 0684927317 | NULL       |
132 | | 102         | Ben   | 0634899234 | ben@bu.nl  |
133 | | 103         | Cathy | 0647882102 | cat@dog.us |
134 | | 104         | Dua   | 0622467559 | NULL       |
135 | +-------------+-------+------------+------------+
136 | ```
137 |
138 | In real life, you would actually need to:
139 |
140 | - DROP column Contact.
141 | - ADD column Phone with the type varchar(15) (an int column would drop leading zeros).
142 | - ADD column Email with the type varchar(50).
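As a sketch, that migration could look like this, assuming the table is called `employees` (phone numbers are stored as varchar because an int column would drop leading zeros):

```sql
ALTER TABLE employees DROP COLUMN Contact;
ALTER TABLE employees ADD COLUMN Phone VARCHAR(15);
ALTER TABLE employees ADD COLUMN Email VARCHAR(50);
```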
143 |
144 | #### 2NF
145 |
146 | Consider the following table (employee-project M-M relationship table).
147 |
148 | ```sql
149 | +-------------+------------+-----------------------+
150 | | Employee Id | Project ID | Project Budget |
151 | +-------------+------------+-----------------------+
152 | | 101 | 1001 | 317 |
153 | | 102 | 1001 | 234 |
154 | | 103 | 2001 | 102 |
155 | | 104 | 2001 | 559 |
156 | +-------------+------------+-----------------------+
157 | ```
158 |
159 | 2NF is violated here because
160 |
161 | ```
162 | proj_budget (non-prime column)
163 | proj_no => proj_budget (proj_budget is functionally dependent on proj_no)
164 | proj_no (is part of the candidate key)
165 | emp_no + proj_no (is the candidate key)
166 | ```
167 |
168 | This table could be converted to 2NF by removing the `Project Budget` column and
169 | adding it to the project table.
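A sketch of the decomposed tables, with hypothetical table and column names:

```sql
CREATE TABLE project (
  project_id     INT PRIMARY KEY,
  project_budget INT
);

CREATE TABLE employee_project (
  employee_id INT,
  project_id  INT,
  PRIMARY KEY (employee_id, project_id), -- the candidate key from the example
  FOREIGN KEY (project_id) REFERENCES project (project_id)
);
```

Now the budget is stored once per project instead of once per employee-project pair.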
170 |
171 | #### 3NF
172 |
173 | Consider the following table (employees)
174 |
175 | ```sql
176 | +-------------+------------+-----------------------+
177 | | Employee Id | Dept Id | Dept Location |
178 | +-------------+------------+-----------------------+
179 | | 101 | 2221 | Amsterdam |
180 | | 102 | 2221 | Amsterdam |
181 | | 103 | 3335 | Rome |
182 | | 104 | 3335 | Rome |
183 | +-------------+------------+-----------------------+
184 | ```
185 |
186 | This table violates the 3NF because there is a transitive dependency.
187 | `Employee Id => Dept Id` and `Dept Id => Dept Location.`
188 | `Dept Location` column depends on the `Dept Id` which is not a primary key column.
189 |
190 | #### 3.5 NF (AKA BCNF)
191 |
192 | Consider the following table (students opting for subjects)
193 |
194 | ```sql
195 | +-------------+------------+-----------------------+
196 | | Student Id | Subject | Professor |
197 | +-------------+------------+-----------------------+
198 | | 101 | Java | X |
199 | | 102 | Java | X |
200 | | 101 | C++ | Y |
201 | | 103 | C++ | Y |
202 | | 103 | Java | X |
203 | | 104 | C++ | Y |
204 | +-------------+------------+-----------------------+
205 | ```
206 |
207 | This table violates the 3.5NF because there is a functional dependency
208 | `Professor => Subject` and `Professor` is not a super key.
209 | `Student Id + Subject` is the primary key. Hence `Subject` is a prime column.
210 |
211 | This table could be converted to 3.5NF as follows:
212 |
213 | ```sql
214 | +-------------+------------+
215 | | Student Id | Prof Id |
216 | +-------------+------------+
217 | | 101 | P0001 |
218 | | 102 | P0001 |
219 | | 101 | P0002 |
220 | | 103 | P0002 |
221 | | 103 | P0001 |
222 | | 104 | P0002 |
223 | +-------------+------------+
224 | ```
225 |
226 | and
227 |
228 | ```sql
229 | +-------------+------------+----------+
230 | | Prof Id | Professor | Subject |
231 | +-------------+------------+----------+
232 | | P0001       | X          | Java     |
233 | | P0002       | Y          | C++      |
234 | +-------------+------------+----------+
235 | ```
236 |
237 | #### 4NF
238 |
239 | Consider the following table (students opting for subjects)
240 |
241 | ```sql
242 | +-------------+------------+-----------+
243 | | Student | Subject | Hobby |
244 | +-------------+------------+-----------+
245 | | Benno | Excel | Violin |
246 | | Benno | Python | Woodwork |
247 | | Benno | Dutch | Paint |
248 | | Lukas | Java | Running |
249 | | Lukas | C++ | Reading |
250 | +-------------+------------+-----------+
251 | ```
252 |
253 | This table violates 4NF because `Subject` and `Hobby` are independent of each other
254 | (a multi-valued dependency). To store both correctly in a single table, the hobby of
255 | the student must be repeated with each subject the student chooses:
256 |
257 | ```sql
258 | +-------------+------------+-----------------------+
259 | | Student | Subject | Hobby |
260 | +-------------+------------+-----------------------+
261 | | Benno | Excel | Violin |
262 | | Benno | Excel | Woodwork |
263 | | Benno | Excel | Paint |
264 | | Benno | Python | Violin |
265 | | Benno | Python | Woodwork |
266 | | Benno | Python | Paint |
267 | | Benno | Dutch | Violin |
268 | | Benno | Dutch | Woodwork |
269 | | Benno | Dutch | Paint |
270 | +-------------+------------+-----------------------+
271 | ```
272 |
273 | It leads to a lot of repetition.
274 | This table could be converted to 4NF by splitting it into two.
275 |
276 | ```sql
277 | +-------------+------------+
278 | | Student | Subject |
279 | +-------------+------------+
280 | | Benno | Excel |
281 | | Benno | Python |
282 | | Benno | Dutch |
283 | | Lukas | Java |
284 | | Lukas | C++ |
285 | +-------------+------------+
286 | ```
287 |
288 | and
289 |
290 | ```sql
291 | +-------------+-----------+
292 | | Student | Hobby |
293 | +-------------+-----------+
294 | | Benno | Violin |
295 | | Benno | Woodwork |
296 | | Benno | Paint |
297 | | Lukas | Running |
298 | | Lukas | Reading |
299 | +-------------+-----------+
300 | ```
301 |
302 | ### Exercise
303 |
304 | Normalize the following table.
305 |
306 | ```sql
307 | +-------------+------------+-----------------------------------------------+------------+
308 | | Full name   | Address    | Movies rented                                 | Salutation |
309 | +-------------+------------+-----------------------------------------------+------------+
310 | | Janet Jones | 5 John St  | Pirates of the Caribbean, Clash of the Titans | Ms.        |
311 | | Rob Smith | 12 Ann St | Shawshank Redemption, Beautiful mind | Mr. |
312 | | Rob Smith | 9 Joy St | Clash of the Titans | Mr. |
313 | +-------------+------------+-----------------------------------------------+------------+
314 | ```
315 |
316 | ### Essence
317 |
318 | Normal forms help in a better database design mostly by reducing redundancy.
319 |
320 | ## 2. Transactions
321 |
322 | ### Explanation
323 |
324 | We explain the need for transactions with the following anecdotal illustration:
325 |
326 | Assume that the balance in Ali's bank account is 500€ and
327 | the balance in Birgul's bank account is 700€.
328 | Imagine that Ali transfers 50€ to Birgul. Then, at the end
329 | of this transaction, Ali should have 450€ and Birgul should have 750€.
330 | Note that this involves two database queries.
331 |
332 | 1. Update the row of Ali's account and _subtract_ 50 from the balance.
333 | 2. Update the row of Birgul's account and _add_ 50 to the balance.
334 |
335 | These two database queries together form a transaction. If we execute only
336 | one of them, then there is inconsistency.
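The transfer can be sketched as a single transaction, assuming a hypothetical `accounts(name, balance)` table:

```sql
start transaction;

update accounts set balance = balance - 50 where name = 'Ali';
update accounts set balance = balance + 50 where name = 'Birgul';

commit; -- both updates become permanent together; rollback would undo both
```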
337 |
338 | Transactions have the following syntax:
339 |
340 | ```sql
341 | start transaction;
342 | SQL command 1
343 | SQL command 2 ...
344 | SQL command N
345 |
346 | rollback OR commit;
347 |
348 | -- "rollback" aborts the transaction (also ends the transaction)
349 | -- "commit" commits the transaction (also ends the transaction)
350 | ```
351 |
352 | > Note that there is no "end transaction" command. To end the transaction,
353 | > we have to either commit the transaction or rollback the transaction.
354 |
355 | #### ACID properties
356 |
357 | Transactions in relational databases (like MySQL) follow the
358 | following properties.
359 |
360 | 1. Atomicity : Execute all commands in the transaction or execute zero commands in the transaction (all or none).
361 | 2. Consistency : A transaction brings database from one valid state to the next valid state.
362 | 3. Isolation : Concurrent execution of transactions (possibly by different users) should leave the database in a
363 | consistent state.
364 | 4. Durability : When a transaction is committed, it will remain committed even in the case of system failure.
365 |    In other words, committed transactions are recorded on the disk.
367 |
368 | ### Example
369 |
370 | Atomicity can be demonstrated with the following `rollback` and `commit` examples:
371 |
372 | #### Rollback example
373 |
374 | ```sql
375 | set autocommit = 0; # default is 1, which automatically commits every command as a transaction.
376 |
377 | start transaction;
378 |
379 | select * from employees; # Show all the rows of the table
380 |
381 | update employees set salary = 10000 where employee_id = 101; # Update the salary of one employee
382 |
383 | select * from employees; # Show the new salary
384 |
385 | rollback; # Does not show any output but actually rolls back the transaction
386 |
387 | select * from employees; # Show the old salary
388 | ```
389 |
390 | > There can be hundreds of commands after `start transaction`. The `rollback` command will undo all of them.
391 |
392 | #### Commit example
393 |
394 | ```sql
395 | set autocommit = 0; # default is 1, which automatically commits every command as a transaction.
396 |
397 | start transaction;
398 |
399 | select * from employees; # Show all the rows of the table
400 |
401 | update employees set salary = 10000 where employee_id = 101; # Update the salary of one employee
402 |
403 | select * from employees; # Show the new salary
404 |
405 | commit; # Does not show any output but actually commits the transaction
406 |
407 | select * from employees; # Show the new salary
408 | ```
409 |
410 | > After commit, the changes are written permanently on the disk.
411 |
412 | #### Isolation and Consistency examples
413 |
414 | Start two `mysql` command line clients.
415 |
416 | ```sql
417 | --- First client
418 |
419 | update employees set city = 'Mumbai' where employee_id = 101;
420 |
421 | commit;
422 | ```
423 |
424 | In the second client, show that the value is updated.
425 |
426 | ```sql
427 | --- Second client
428 | select * from employees;
429 |
430 | ```
431 |
432 | > The change made by one database client in the database server will be seen by the other client(s). Thus,
433 | > both clients have a consistent view of the database.
434 |
435 | ```sql
436 | --- First client
437 |
438 | set autocommit = 1;
439 |
440 | LOCK TABLES employees WRITE;
441 |
442 | update employees set salary = 7000 where employee_id = 101;
443 |
444 | ```
445 |
446 | ```sql
447 | -- Second client
448 |
449 | select * from employees; # Will hang because First client has the WRITE lock on that table
450 | ```
451 |
452 | As soon as the First client executes `UNLOCK TABLES;` command,
453 | the Second client will get the output of the `select` command.
454 |
455 | > Transactions can also be created from the JavaScript client. A demo program is [async-transaction.js](async-transaction.js).
457 |
458 | ### Exercise
459 |
460 | Discuss transactions in the context of an Uber ride. How many operations/actions are involved in a successful
461 | transaction?
462 | When can the transaction be aborted? What would the database tables be?
463 |
464 | ### Essence
465 |
466 | A transaction is a set of SQL commands that is treated as ONE command.
467 |
468 | ## 3. SQL injection
469 |
470 | ### Explanation
471 |
472 | SQL injection is a type of attack where the attacker tricks the program into executing a query that reads or writes
473 | data that they should not have access to.
474 |
475 | ### Example
476 |
477 | Use the `prompt` package in `input-demo.js` to simulate the input from HTML forms.
478 |
479 | `sql-injection.js` contains three ways of passing the input to the select query
480 |
481 | ```js
482 | // 1. Naive way of passing the parameter to the query
483 | const select_query = `select * from employees WHERE employee_id = ${input_number};`;
484 | ```
485 |
486 | This way is vulnerable to the following attacks.
487 |
488 | ```bash
489 | $ node sql-injection.js # Execute the Javascript program from the (VS code) terminal.
490 |
491 | prompt: employee_number: 1234 OR 1=1
492 | # select * from employees where employee_id = 1234 OR 1=1;
493 |
494 |
495 | prompt: employee_number: 1234 OR 1=1; show tables;
496 | # select * from employees where employee_id = 1234 OR 1=1; show tables;
497 |
498 |
499 | prompt: employee_number: 1234 OR 1=1; drop table demo;
500 | # select * from employees where employee_id = 1234 OR 1=1; drop table demo;
501 | ```
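You can see the effect without a database by building the query string the naive interpolation produces (plain Node; `buildNaiveQuery` is a hypothetical helper that mirrors the snippet above):

```javascript
// Hypothetical helper that mirrors the naive string interpolation above
function buildNaiveQuery(input_number) {
  return `select * from employees WHERE employee_id = ${input_number};`;
}

// The attacker's "employee number" smuggles extra SQL into the query text
const injected = buildNaiveQuery('1234 OR 1=1');
console.log(injected);
// select * from employees WHERE employee_id = 1234 OR 1=1;
```

Because `OR 1=1` makes the WHERE clause true for every row, the query returns the whole table.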
502 |
503 | To solve this problem, there are two ways of sanitizing the input:
504 |
505 | ```js
506 | // 1. Escaping the parameter ( replacing the unwanted characters)
507 | const select_query =
508 | `select * from employees WHERE employee_id =` +
509 | connection.escape(input_number);
510 |
511 | // 2. Using a question mark syntax to do the escaping
512 | const select_query = `select * from employees WHERE employee_id = ?`;
513 | ```
514 |
515 | ### Exercise
516 |
517 | https://www.hacksplaining.com/exercises/sql-injection#/start
518 |
519 | ### Essence
520 |
521 | SQL injections are dangerous. Always sanitize the input from your HTML forms.
522 |
523 | ## 4. NoSQL
524 |
525 | ### Explanation
526 |
527 | ### Example
528 |
529 | > use the same tables here (as Week 1 lessonplan) to be consistent and show students how to make similar databases using
530 | > MySQL and MongoDB
531 |
532 | ### Exercise
533 |
534 | ### Essence
535 |
--------------------------------------------------------------------------------
/Week3/MAKEME.md:
--------------------------------------------------------------------------------
1 | # Assignment Databases Week 3
2 |
3 | ## Todo list
4 |
5 | 1. Practice the concepts
6 | 2. Prep exercises
7 | 3. Database exercises
8 | 4. Code along
9 |
10 | ## 1. Practice the concepts
11 |
12 | Let's do a warm-up with some interactive exercises! Start by doing lessons 13-18 from the following:
13 |
14 | - [SQLBolt](https://sqlbolt.com/lesson/select_queries_introduction)
15 |
16 | Then let's practice some MongoDB queries! Start by creating an Atlas account
17 | over [here](https://www.mongodb.com/cloud/atlas/lp/try2). Atlas is MongoDB's own cloud service that hosts MongoDB
18 | databases and luckily they offer a generous free tier to play around with. Once you have created an account, make
19 | sure to create a database cluster which can be your playground for the following:
20 |
21 | - [MongoDB practice exercises](https://gist.githubusercontent.com/theRemix/7305403e1ab6fc8674f0/raw/c068ab51e930eb133a9443caa314205a89ef4d61/exercise.md)
22 | Note that this is a raw markdown file so you may want to copy it into your editor and save it as `.md` to get syntax
23 |   highlighting. The live version has comments with answers, which you can check out afterwards
24 |   over [here](https://gist.github.com/theRemix/7305403e1ab6fc8674f0#file-exercise-md).
25 |
26 | ## 2. **Prep exercises**
27 |
28 | > Prep exercises are exercises that you should work on _before_ the session on Sunday. These are a little more difficult
29 | > or show an important concept and as such are a great exercise to talk about with your mentor. Have a solution ready by
30 | > Sunday as you may be asked to show what you did.
31 |
32 | The [QA_PREP_EXERCISES.md](./QA_PREP_EXERCISES.md) file explains what needs to be done for the prep exercise this week.
33 | There will also be some questions at the bottom to think about. Go through them _before_ the session on Sunday as it
34 | will be covered then.
35 |
36 | ## 3. Database exercises
37 |
38 | > You will need to do some research to solve these exercises. NOT all the concepts necessary to solve these exercises are
39 | > covered in the reading material. This is on purpose.
40 |
41 | > Save all of your files in the `assignment` folder inside `Week3`!
42 |
43 | ### 3.1. Exercise 1 : SQL Normalization
44 |
45 | The manager of a dinner club would like an information system that helps keep track of the dinners
46 | attended by members.
47 | Because the manager is not an expert in information systems, (s)he uses the following table to store the information.
48 | Please help the manager by using your knowledge of database normal forms.
49 | Save all answers in a text file / MD file.
50 |
51 | 1. What columns violate 1NF?
52 | 2. What entities do you recognize that could be extracted?
53 | 3. Name all the tables and columns that would make a 3NF compliant solution.
54 |
55 | ```
56 | +-----------+---------------+----------------+-----------+-------------+------------+-------------------+-----------+------------------+
57 | | member_id | member_name | member_address | dinner_id | dinner_date | venue_code | venue_description | food_code | food_description |
58 | +-----------+---------------+----------------+-----------+-------------+------------+-------------------+-----------+------------------+
59 | | 1 | Amit | 325 Max park | D00001001 | 2020-03-15 | B01 | Grand Ball Room | C1, C2 | Curry, Cake |
60 | | 2 | Ben | 24 Hudson lane | D00001002 | 2020/03/15 | B02 | Zoku Roof Top | S1, C2 | Soup, Cake |
61 | | 3 | Cristina | 516 6th Ave | D00001002 | 2020/03/15 | B02 | Zoku Roof Top | S1, C2 | Soup, Cake |
62 | | 4 | Dan | 89 John St | D00001003 | 20-03-2020 | B03 | Goat Farm | P1, T1, M1| Pie, Tea, Mousse |
63 | | 1 | Amit | 325 Max park | D00001003 | 20-03-2020 | B03 | Goat Farm | P1, T1, M1| Pie, Tea, Mousse |
64 | | 3 | Cristina | 516 6th Ave | D00001004 | Mar 25 '20 | B04 | Mama's Kitchen | F1, M1 | Falafal, Mousse |
65 | | 5 | Gabor | 54 Vivaldi St | D00001005 | Mar 26 '20 | B05 | Hungry Hungary | G1, P2 | Goulash, Pasca |
66 | | 6 | Hema | 9 Peter St | D00001003 | 01-04-2020 | B03 | Goat Farm | P1, T1, M1| Pie, Tea, Mousse |
67 | +-----------+---------------+----------------+-----------+-------------+------------+-------------------+-----------+------------------+
68 | ```
69 |
70 | ### 3.2. Exercise 2 : SQL Transactions
71 |
72 | 1. Create two tables `account` and `account_changes` (write transactions-create-tables.js file)
73 | 2. `account` table should have following fields : `account_number, balance`.
74 | 3. `account_changes` table should have the following
75 | fields : `change_number, account_number, amount, changed_date, remark`.
76 | 4. Choose the appropriate data types and keys for these tables.
77 | 5. Insert some sample data in these tables. (write transactions-insert-values.js file)
78 | 6. Transfer the amount of 1000 from account number 101 to account number 102 and log the changes in the
79 | table `account_changes`.
80 | Do this in a _single transaction_ (Write transaction.js file)
81 |
82 | Submit all three files (`transactions-create-tables.js`, `transactions-insert-values.js` and `transaction.js`).
83 |
84 | ### 3.3. Exercise 3 : SQL injection
85 |
86 | You are given the below function which returns the population of a specific country from the [world](../Week2/world.sql)
87 | database.
88 |
89 | ```js
90 | function getPopulation(Country, name, code, cb) {
91 | // assuming that connection to the database is established and stored as conn
92 | conn.query(
93 | `SELECT Population FROM ${Country} WHERE Name = '${name}' and code = '${code}'`,
94 | function (err, result) {
95 |       if (err) return cb(err);
96 |       if (result.length == 0) return cb(new Error("Not found"));
97 |       cb(null, result[0].Population);
98 | }
99 | );
100 | }
101 | ```
102 |
103 | 1. Give an example of a value that can be passed as `name` and `code` that would take advantage of SQL injection
104 |    (and fetch all the records in the database).
105 | 2. Rewrite the function so that it is no longer vulnerable to SQL injection
106 |
107 | ### 3.4. Exercise 4 : MongoDB CRUD
108 |
109 | You should already have an Atlas account that we will use again for this exercise. Let's first create a new database
110 | that this exercise can use: `databaseWeek3` and the collection `bob_ross_episodes`. You can do this manually in Atlas,
111 | look up how to do that yourself.
112 |
113 | Once you have created the database it is time to set up our environment so that our code can connect to this database.
114 | In the past you may have put this connection information (think of API keys) in your PRs, but from now on that should
115 | not happen anymore. The way we usually do this is by creating a `.env` file and adding that to the `.gitignore` file so
116 | that it does not get pushed to git. We have set up the `.gitignore` file and provided you with an `.env.example` file
117 | that gives an example of what your `.env` file should look like. Have a look in it to see how you should create
118 | the `.env` file.
119 |
120 | > You will need to figure out a way to get these `.env` variables into the process environment. This is almost always
121 | > done using a library, but it is up to you to figure out which one this is and set it up correctly.
122 |
123 | Now that everything is set up, have a look at `index.js` to see what we would like you to do. We have provided
124 | a `seedDatabase` file that cleans up the database and collection to ensure that you are working with the same data every
125 | time.
126 |
127 | > The `index.js` file also assumes some things are set up, when you run it you will encounter an error that you will
128 | > need to solve.
129 |
130 | In this exercise we are going to work with the Bob Ross episode data. If you haven't heard of Bob Ross, he was a painter
131 | who made a legendary TV show called [The Joy of Painting](https://en.wikipedia.org/wiki/The_Joy_of_Painting). In every
132 | episode he created a landscape painting that was easy to follow along to; have a look at
133 | the [official youtube channel](https://www.youtube.com/c/BobRossIncVideos) to watch some! The data in the `data.json`
134 | file is a list of all the episodes, with their title and the elements he painted in that episode. Note that we massage
135 | this data a bit in the `seedDatabase` file, so have a look there and in your database to see what the structure is in the
136 | end.
137 |
138 | ## 4. Code along
139 |
140 | In this week's code along you'll be building a complete CRUD app. However, instead of using MySQL, you'll use MongoDB
141 | as your database!
142 |
143 | - [CRUD app from scratch using Node.js](https://www.youtube.com/watch?v=CyTWPr_WwdI)
144 |
145 | ## SUBMIT YOUR ASSIGNMENT!
146 |
147 | After you've finished your todo list it's time to show us what you got! The assignment that needs to be submitted is the
148 | following:
149 |
150 | 1. Database exercises
151 |
152 | Upload your code to your forked Databases repository in GitHub. Make a pull request to HackYourAssignment's forked
153 | repository.
154 |
155 | > Forgotten how to upload your assignment? Go through the [guide](../hand-in-assignments-guide.md) to learn how to do this
156 | > again.
157 |
158 | _Deadline Tuesday 23.59 CET_
159 |
--------------------------------------------------------------------------------
/Week3/QA_PREP_EXERCISES.md:
--------------------------------------------------------------------------------
1 | # Prep exercise week 3
2 |
3 | As a preparation step for the upcoming Q&A, you need to work on the following exercise, which is based on the prep
4 | exercise of the previous week.
5 |
6 | ## Exercise
7 |
8 | Last week you created an ERD for the database for storing food recipes.
9 | How can you normalize your database based on what you learned this week?
10 | In particular, try answering the following questions and provide table definitions from last week
11 | and this week to explain the changes.
12 |
13 | - Was your database already in 2NF / 3NF?
14 | - What changes did you have to do to normalize your database?
15 |
16 | ## Discussion
17 |
18 | - If you want to add thousands of recipes to your database, what challenges do you foresee?
19 | - Try to write answers to these questions in text, provide queries and commands when necessary.
20 |
--------------------------------------------------------------------------------
/Week3/README.md:
--------------------------------------------------------------------------------
1 | # Reading Material Databases Week 3
2 |
3 | ## Agenda
4 |
5 | These are the topics for week 3:
6 |
7 | - [Normalization](https://hackyourfuture.github.io/study/#/databases/sql/normalization)
8 | - [Transactions](https://hackyourfuture.github.io/study/#/databases/sql/transactions)
9 | - [SQL injection](https://hackyourfuture.github.io/study/#/databases/sql/sql-injection)
10 | - [NoSQL](https://hackyourfuture.github.io/study/#/databases/nosql/README.md)
11 | - [MongoDB](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/README.md)
12 | - [Basic operations](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/basic-operations.md)
13 |
14 | ## Week goals
15 |
16 | This week, we'll finalise our SQL knowledge with even more advanced topics. Let's first look at [Normalization](https://hackyourfuture.github.io/study/#/databases/sql/normalization) which is a technique you should always use when designing your databases!
17 |
18 | Next we'll look at [Transactions](https://hackyourfuture.github.io/study/#/databases/sql/transactions), which are a way to ensure that your data stays consistent. Sometimes a set of updates needs to either all happen or not happen at all, and transactions allow you to guarantee exactly that.
19 |
20 | Lastly for SQL, let's dive into the topic of [SQL injection](https://hackyourfuture.github.io/study/#/databases/sql/sql-injection), which you, as backend engineers, should be wary of. Not handling this could enable others to delete your whole database, which as you can imagine would be a terrible thing...
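
To see why this is so dangerous, here is a tiny illustration in plain JavaScript (no database involved); the table and column names are just examples:

```js
// What our code intends: look up one employee by number.
function naiveQuery(userInput) {
  // DANGEROUS: the input is pasted directly into the SQL string.
  return `SELECT * FROM employees WHERE employee_id = ${userInput};`;
}

// A friendly user types a number:
console.log(naiveQuery("101"));
// SELECT * FROM employees WHERE employee_id = 101;

// An attacker sneaks a second statement in after the number:
console.log(naiveQuery("101; DROP TABLE employees"));
// SELECT * FROM employees WHERE employee_id = 101; DROP TABLE employees;
```

If multiple statements are allowed on the connection, that second statement runs too and the table is gone. Escaping the input or using placeholder (`?`) parameters prevents this.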
21 |
22 | It is now time to start with the basics of NoSQL. This is an extremely broad topic, as it covers pretty much every way of storing data other than the SQL way. Have a look at the basics of NoSQL [here](https://hackyourfuture.github.io/study/#/databases/nosql/README.md).
23 |
24 | Most of NoSQL is outside the scope of our curriculum, as storing data in specific ways (like graphs) is generally only used for very specific use cases. We will look at the document-oriented way of storing data, as that is the most common thing you will see in the wild. In that field, we will use `MongoDB` as our implementation. Keep in mind that `MongoDB` is an implementation of a document-oriented database and not the only one. Kind of like how `MySQL` is an implementation of an SQL database. Read up on document-oriented databases [here](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/README.md) and then have a look at how to do the basic CRUD operations with MongoDB [here](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/basic-operations.md), so that we can get our feet wet.
25 |
26 | ## Finished?
27 |
28 | Have you gone through all of the materials? High five! If you feel ready to get practical, click [here](./MAKEME.md).
29 |
--------------------------------------------------------------------------------
/Week3/async-transaction.js:
--------------------------------------------------------------------------------
1 | const util = require('util');
2 | const mysql = require('mysql');
3 |
4 | const connection = mysql.createConnection({
5 | host: 'localhost',
6 | user: 'hyfuser',
7 | password: 'hyfpassword',
8 | database: 'userdb',
9 | });
10 |
11 | const execQuery = util.promisify(connection.query.bind(connection));
12 |
13 | async function seedDatabase() {
14 | connection.connect();
15 |
16 | try {
17 | await execQuery("START TRANSACTION");
18 |
19 | await execQuery('UPDATE students SET grade = 10 WHERE student_number = 4444');
20 | await execQuery('UPDATE students SET grade = 2 WHERE student_number = 3333');
21 |
22 | await execQuery("COMMIT");
23 |   } catch (error) {
24 |     console.error(error);
25 |     await execQuery("ROLLBACK");
26 |   } finally {
27 |     // Close the connection whether the transaction succeeded or was rolled back
28 |     connection.end();
29 |   }
30 | }
31 |
32 | seedDatabase();
33 |
--------------------------------------------------------------------------------
/Week3/homework/mongodb/.env.example:
--------------------------------------------------------------------------------
1 | # This is an example .env file. We use .env files to store data that describes the environment the code needs to run on.
2 | # For this exercise we need a MONGODB_URL, so:
3 | # - make a copy of this file, call it `.env`
4 | # - look up how to connect to your database in atlas, there is a nice `connect` button that will help you out
5 | # - fill in the link to your new database. Make sure the database is `databaseWeek3`!
6 |
7 | MONGODB_URL=mongodb+srv://:@/databaseWeek3?retryWrites=true&w=majority
--------------------------------------------------------------------------------
/Week3/homework/mongodb/index.js:
--------------------------------------------------------------------------------
1 | const { MongoClient, ServerApiVersion } = require("mongodb");
2 |
3 | const { seedDatabase } = require("./seedDatabase.js");
4 |
5 | async function createEpisodeExercise(client) {
6 | /**
7 | * We forgot to add the last episode of season 9. It has this information:
8 | *
9 | * episode: S09E13
10 | * title: MOUNTAIN HIDE-AWAY
11 | * elements: ["CIRRUS", "CLOUDS", "CONIFER", "DECIDIOUS", "GRASS", "MOUNTAIN", "MOUNTAINS", "RIVER", "SNOWY_MOUNTAIN", "TREE", "TREES"]
12 | */
13 |
14 | // Write code that will add this to the collection!
15 |
16 | console.log(
17 | `Created season 9 episode 13 and the document got the id ${"TODO: fill in variable here"}`
18 | );
19 | }
20 |
21 | async function findEpisodesExercises(client) {
22 | /**
23 | * Complete the following exercises.
24 | * The comments indicate what to do and what the result should be!
25 | */
26 |
27 | // Find the title of episode 2 in season 2 [Should be: WINTER SUN]
28 |
29 | console.log(
30 | `The title of episode 2 in season 2 is ${"TODO: fill in variable here"}`
31 | );
32 |
33 | // Find the season and episode number of the episode called "BLACK RIVER" [Should be: S02E06]
34 |
35 | console.log(
36 | `The season and episode number of the "BLACK RIVER" episode is ${"TODO: fill in variable here"}`
37 | );
38 |
39 | // Find all of the episode titles where Bob Ross painted a CLIFF [Should be: NIGHT LIGHT, EVENING SEASCAPE, SURF'S UP, CLIFFSIDE, BY THE SEA, DEEP WILDERNESS HOME, CRIMSON TIDE, GRACEFUL WATERFALL]
40 |
41 | console.log(
42 | `The episodes that Bob Ross painted a CLIFF are ${"TODO: fill in variable here"}`
43 | );
44 |
45 | // Find all of the episode titles where Bob Ross painted a CLIFF and a LIGHTHOUSE [Should be: NIGHT LIGHT]
46 |
47 | console.log(
48 | `The episodes that Bob Ross painted a CLIFF and a LIGHTHOUSE are ${"TODO: fill in variable here"}`
49 | );
50 | }
51 |
52 | async function updateEpisodeExercises(client) {
53 | /**
54 | * There are some problems in the initial data that was filled in.
55 | * Let's use update functions to update this information.
56 | *
57 | * Note: do NOT change the data.json file
58 | */
59 |
60 | // Episode 13 in season 30 should be called BLUE RIDGE FALLS, yet it is called BLUE RIDGE FALLERS now. Fix that
61 |
62 | console.log(
63 | `Ran a command to update episode 13 in season 30 and it updated ${"TODO: fill in variable here"} episodes`
64 | );
65 |
66 | // Unfortunately we made a mistake in the arrays and the element type called 'BUSHES' should actually be 'BUSH' as sometimes only one bush was painted.
67 | // Update all of the documents in the collection that have `BUSHES` in the elements array to now have `BUSH`
68 | // It should update 120 episodes!
69 |
70 | console.log(
71 | `Ran a command to update all the BUSHES to BUSH and it updated ${"TODO: fill in variable here"} episodes`
72 | );
73 | }
74 |
75 | async function deleteEpisodeExercise(client) {
76 | /**
77 |  * It seems an errant episode has gotten into our data.
78 | * This is episode 14 in season 31. Please remove it and verify that it has been removed!
79 | */
80 |
81 | console.log(
82 | `Ran a command to delete episode and it deleted ${"TODO: fill in variable here"} episodes`
83 | );
84 | }
85 |
86 | async function main() {
87 | if (process.env.MONGODB_URL == null) {
88 | throw Error(
89 |       `You did not set up the environment variables correctly. Did you create a '.env' file and add a package to load it?`
90 | );
91 | }
92 | const client = new MongoClient(process.env.MONGODB_URL, {
93 | useNewUrlParser: true,
94 | useUnifiedTopology: true,
95 | serverApi: ServerApiVersion.v1,
96 | });
97 |
98 | try {
99 | await client.connect();
100 |
101 | // Seed our database
102 | await seedDatabase(client);
103 |
104 | // CREATE
105 | await createEpisodeExercise(client);
106 |
107 | // READ
108 | await findEpisodesExercises(client);
109 |
110 | // UPDATE
111 | await updateEpisodeExercises(client);
112 |
113 | // DELETE
114 | await deleteEpisodeExercise(client);
115 | } catch (err) {
116 | console.error(err);
117 | } finally {
118 | // Always close the connection at the end
119 |     await client.close();
120 | }
121 | }
122 |
123 | main();
124 |
125 | /**
126 | * In the end the console should read something like this:
127 |
128 | Created season 9 episode 13 and the document got the id 625e9addd11e82a59aa9ff93
129 | The title of episode 2 in season 2 is WINTER SUN
130 | The season and episode number of the "BLACK RIVER" episode is S02E06
131 | The episodes that Bob Ross painted a CLIFF are NIGHT LIGHT, EVENING SEASCAPE, SURF'S UP, CLIFFSIDE, BY THE SEA, DEEP WILDERNESS HOME, CRIMSON TIDE, GRACEFUL WATERFALL
132 | The episodes that Bob Ross painted a CLIFF and a LIGHTHOUSE are NIGHT LIGHT
133 | Ran a command to update episode 13 in season 30 and it updated 1 episodes
134 | Ran a command to update all the BUSHES to BUSH and it updated 120 episodes
135 | Ran a command to delete episode and it deleted 1 episodes
136 |
137 | */
138 |
--------------------------------------------------------------------------------
/Week3/homework/mongodb/seedDatabase.js:
--------------------------------------------------------------------------------
1 | const data = require("./data.json");
2 |
3 | /**
4 |  * This function will drop and recreate the collection of sample data in our data.json file.
5 | * By doing this we ensure that your functions are working on the same data, very similar to how you would set up a test environment.
6 | *
7 | * @param {MongoClient} client - The client that is connected to your database
8 | */
9 | const seedDatabase = async (client) => {
10 | const hasCollection = await client
11 | .db("databaseWeek3")
12 | .listCollections({ name: "bob_ross_episodes" })
13 | .hasNext();
14 |
15 | if (hasCollection) {
16 | const bobRossCollection = await client
17 | .db("databaseWeek3")
18 | .collection("bob_ross_episodes");
19 |
20 | // Remove all the documents
21 | await bobRossCollection.deleteMany({});
22 |
23 | // Convert data to array version of elements
24 | const documents = data.map((dataItem) => {
25 | const { EPISODE, TITLE } = dataItem;
26 |
27 | const depictionElementKeys = Object.keys(dataItem).filter(
28 | (key) => !["EPISODE", "TITLE"].includes(key)
29 | );
30 | const depictionElements = depictionElementKeys.filter(
31 | (key) => dataItem[key] === 1
32 | );
33 |
34 | return {
35 | episode: EPISODE,
36 | // Remove the extra quotation marks
37 | title: TITLE.replaceAll('"', ""),
38 | elements: depictionElements,
39 | };
40 | });
41 |
42 | // Add our documents
43 | await bobRossCollection.insertMany(documents);
44 | } else {
45 | throw Error("The collection `bob_ross_episodes` does not exist!");
46 | }
47 | };
48 |
49 | module.exports = {
50 | seedDatabase,
51 | };
52 |
--------------------------------------------------------------------------------
/Week3/input-demo.js:
--------------------------------------------------------------------------------
1 | const prompt = require('prompt');
2 |
3 | // Start the prompt
4 | prompt.start();
5 |
6 | // Get two properties from the user: username and email
7 | prompt.get(['username', 'email'], function (err, result) {
8 | // Log the results.
9 | console.log('Command-line input received:');
10 | console.log(' username: ' + result.username);
11 | console.log(' email: ' + result.email);
12 | });
14 |
--------------------------------------------------------------------------------
/Week3/mongodb/create-employees.js:
--------------------------------------------------------------------------------
1 | const MongoClient = require("mongodb").MongoClient;
2 | const url = "mongodb://127.0.0.1:27017/";
3 |
4 | async function seedDatabase() {
5 | const client = new MongoClient(url);
6 | try {
7 | await client.connect();
8 | const emp1 = {
9 | Id: 101,
10 | Name: "Mohammed Ali",
11 | Salary: 4500,
12 | department: "Testing",
13 | };
14 | const result1 = await client
15 | .db("company")
16 | .collection("employees")
17 | .insertOne(emp1);
18 | console.log(result1);
19 |
20 | const emp2 = { Id: 103, Work: "Sell cakes", Salary: 500 };
21 | const result2 = await client
22 | .db("company")
23 | .collection("employees")
24 | .insertOne(emp2);
25 | console.log(result2);
26 | } catch (error) {
27 | console.error(error);
28 | } finally {
29 | await client.close();
30 | }
31 | }
32 |
33 | seedDatabase();
34 |
--------------------------------------------------------------------------------
/Week3/mongodb/mongoose-start.js:
--------------------------------------------------------------------------------
1 | // This code is inspired from https://mongoosejs.com/docs/index.html
2 | const mongoose = require('mongoose');
3 | main().catch(err => console.log(err));
4 |
5 | async function main() {
6 | await mongoose.connect('mongodb://127.0.0.1:27017/test');
7 |
8 | console.log("hello, connection established!");
9 |
10 | // Create a schema
11 | const kittySchema = new mongoose.Schema({
12 | name: String
13 | });
14 |
15 | // Create a model
16 | const Kitten = mongoose.model('Kitten', kittySchema);
17 | const silence = new Kitten({ name: 'Silence' });
18 | console.log(silence.name); // 'Silence'
19 |
20 | // Save the document in the Mongo database
21 | await silence.save();
22 |
23 | // Find all documents
24 | const res = await Kitten.find();
25 | console.log(res);
26 | }
27 |
--------------------------------------------------------------------------------
/Week3/sql-injection.js:
--------------------------------------------------------------------------------
1 | const prompt = require('prompt');
2 | const mysql = require('mysql');
3 | const util = require('util');
4 |
5 | const connection = mysql.createConnection({
6 | host : 'localhost',
7 | user : 'hyfuser',
8 | password : 'hyfpassword',
9 | database : 'company',
10 | multipleStatements: true
11 | });
12 |
13 | const execQuery = util.promisify(connection.query.bind(connection));
14 | const input = util.promisify(prompt.get.bind(this));
15 |
16 | async function queryDatabase() {
17 |
18 |   let input_number = "";
19 | prompt.start();
20 | try {
21 | const result = await input(['employee_number']);
22 | input_number = result.employee_number
23 |
24 | // 1. Naive way of passing the parameter to the query
25 | const select_query = `select * from employees WHERE employee_id = ${input_number};`
26 |
27 | // 2. Escaping the parameter ( replacing the unwanted characters)
28 | //const select_query = `select * from employees WHERE employee_id =` + connection.escape(input_number);
29 |
30 | // 3. Using a question mark syntax to do the escaping
31 | //const select_query = `select * from employees WHERE employee_id = ?`
32 |
33 | connection.connect();
34 | console.log(select_query);
35 | const results = await execQuery(select_query, input_number);
36 |     for (const r of results) {
37 | console.log(r);
38 | }
39 | } catch(error) {
40 | console.error(error);
41 | }
42 |
43 | connection.end();
44 | }
45 |
46 | queryDatabase();
47 |
--------------------------------------------------------------------------------
/Week4/LESSON_PLAN.md:
--------------------------------------------------------------------------------
1 | # Lesson Plan Databases Week 4
2 |
3 | The lesson plan is primarily written for mentors so that they can use examples and anecdotes from this document in conjunction with the README and explain the concepts better during the session.
4 |
5 | ## Topics
6 |
7 | 1. Embedded vs Normalised data modeling
8 | 2. Advanced MongoDB operations
9 | 3. Pagination
10 | 4. Indexes in MongoDB
11 | 5. Transactions in MongoDB
12 | 6. SQL vs NoSQL databases
13 |
14 | ## 1. Embedded vs Normalised data modeling
15 |
16 | ### Explanation
17 |
18 | 1. Embedded means that information is stored in a sub-object in a collection
19 | 2. Normalised means that information is stored in different collection but are referencing each other
20 |
21 | ### Example
22 |
23 | Let's have a look at a bug logging database.
24 |
25 | #### The embedded way
26 |
27 | ```js
28 | const bugs = [
29 | {
30 | timestamp: new Date(),
31 | page: "about",
32 | stacktrace: "", // removed for clarity
33 | reporter: {
34 | email: "rob@thebugcreator.com",
35 | name: "Rob",
36 | },
37 | },
38 | ];
39 | ```
40 |
41 | #### The normalised way
42 |
43 | ```js
44 | const accounts = [
45 | {
46 | id: ObjectId("507f191e810c19729de86032"),
47 | email: "rob@thebugcreator.com",
48 | name: "Rob",
49 | },
50 | ];
51 |
52 | const bugs = [
53 | {
54 | timestamp: new Date(),
55 | page: "about",
56 | stacktrace: "", // removed for clarity
57 | reporterId: ObjectId("507f191e810c19729de86032"),
58 | },
59 | ];
60 | ```
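
To make the difference concrete: in the normalised model, a read that needs reporter details has to resolve the reference (in MongoDB via a second query or `$lookup`). A minimal sketch of that resolution step, using plain in-memory arrays rather than a real database:

```js
const accounts = [
  { id: "507f191e810c19729de86032", email: "rob@thebugcreator.com", name: "Rob" },
];

const bugs = [
  { page: "about", reporterId: "507f191e810c19729de86032" },
];

// Resolving the reference costs one extra lookup per bug.
const bugsWithReporter = bugs.map((bug) => ({
  ...bug,
  reporter: accounts.find((account) => account.id === bug.reporterId),
}));

console.log(bugsWithReporter[0].reporter.name); // Rob
```

In the embedded model the reporter data is already inside the bug document, so no lookup is needed; the price is that updating Rob's email means touching every bug he ever reported.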
61 |
62 | ### Exercise
63 |
64 | Discuss the differences and what the advantages/disadvantages are to each approach. For example:
65 |
66 | Embedded allows for faster querying.
67 | Normalized allows for less data duplication.
68 |
69 | ### Essence
70 |
71 | There are advantages to both approaches and in the wild you will have to decide which to use every time.
72 |
73 | ## 2. Advanced MongoDB operations
74 |
75 | ### Explanation
76 |
77 | 1. The `sort` command allows you to sort data that you get back from your query.
78 | 2. The `limit` command allows you to limit how many items you get back from your query.
79 | 3. The `aggregate` command allows the combination and calculation of data in one or more collections.
80 |
81 | ### Example
82 |
83 | Let's assume a log database with the following information:
84 |
85 | ```js
86 | const bugs = [
87 | {
88 | timestamp: new Date('2000-06-07T11:24:00'),
89 | page: "about",
90 | stacktrace: "", // removed for clarity
91 | reporterId: ObjectId("507f191e810c19729de86032"),
92 | }, {
93 | timestamp: new Date('2000-06-06T12:23:00'),
94 | page: "about",
95 | stacktrace: "", // removed for clarity
96 | reporterId: ObjectId("507f191e810c19729de86032"),
97 |   }, {
98 | timestamp: new Date('2000-06-08T12:33:00'),
99 | page: "contact",
100 | stacktrace: "", // removed for clarity
101 | reporterId: ObjectId("e810507f191de86032c19729"),
102 | }, {
103 | timestamp: new Date('2000-06-06T12:34:00'),
104 | page: "home",
105 | stacktrace: "", // removed for clarity
106 | reporterId: ObjectId("e810507f191de86032c19729"),
107 | }
108 | ];
109 | ```
110 |
111 | #### Sort
112 |
113 | If we want to sort the find query on the timestamp to find the latest bugs we can run the following query:
114 |
115 | ```js
116 | client.db("logging").collection("bugs").find().sort({ timestamp: -1 });
117 | ```
118 |
119 | #### Limit
120 |
121 | The above query will give you back all of the bugs which may not be what you want. You may want to have a dashboard with
122 | the last 10 bugs, to do this we can use the limit command as follows:
123 |
124 | ```js
125 | client
126 | .db("logging")
127 | .collection("bugs")
128 | .find()
129 | .sort({ timestamp: -1 })
130 | .limit(10);
131 | ```
132 |
133 | This will give only the last 10 bugs rather than all of them. Note that you can put the `sort` and `limit` commands in
134 | any order!
135 |
136 | #### Aggregate
137 |
138 | Let's say we want to have a count of how many bugs appear per page. To do that we can do the following:
139 |
140 | ```js
141 | client
142 | .db("logging")
143 | .collection("bugs")
144 | .aggregate([
145 | {
146 | $group: {
147 | _id: "$page",
148 | count: { $count: {} },
149 | },
150 | },
151 | ]);
152 | ```
153 |
154 | This will give back an object with the page name in the `_id` field and the number of bugs that were logged on that
155 | page in the `count` field!
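
To build intuition for what this `$group` stage computes, here is the same count done in plain JavaScript over an in-memory array (illustrative only; in practice MongoDB does this work for you on the server):

```js
const bugs = [
  { page: "about" },
  { page: "about" },
  { page: "contact" },
  { page: "home" },
];

// Equivalent of: { $group: { _id: "$page", count: { $count: {} } } }
const countsPerPage = bugs.reduce((groups, bug) => {
  groups[bug.page] = (groups[bug.page] || 0) + 1;
  return groups;
}, {});

console.log(countsPerPage); // { about: 2, contact: 1, home: 1 }
```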
156 |
157 | ### Exercise
158 |
159 | - Sort all bugs by pages and then by timestamp.
160 | - Group all the bugs by reporter and find the last reported bug by them
161 |
162 | ### Essence
163 |
164 | MongoDB does a lot for you, the syntax is a little different than you are probably used to, but the documentation is
165 | very detailed so make use of it!
166 |
167 | ## 3. Pagination
168 |
169 | Using the same bugs collection, let's look at offset and cursor-based pagination using that collection.
170 |
171 | ### Explanation
172 |
173 | Pagination is the splitting of content into different pages, which is done if there are too many results to put into one
174 | web page. Think for example of the search results in Google where the user can click through to find what they are
175 | looking for. We use pagination to restrict the amount of data that we send to the user as sometimes sending everything
176 | will make the request too big. The application will become sluggish as it waits for the request which would result in
177 | unhappy users.
178 |
179 | ### Example
180 |
181 | Given the bugs database in the previous section let's implement both types of pagination:
182 |
183 | #### Offset-based
184 |
185 | ```js
186 | client
187 | .db("logging")
188 | .collection("bugs")
189 | .find()
190 | .sort({ timestamp: -1 })
191 | .limit(10)
192 | .skip(20);
193 | ```
194 |
195 | This would skip 20 results and then show the next 10, so we would be on page 3 if we show 10 results per page!
196 |
197 | #### Cursor-based
198 |
199 | ```js
200 | const latestBugs = await client
201 | .db("logging")
202 | .collection("bugs")
203 | .find({
204 | timestamp: { $lt: next || new Date() },
205 | })
206 | .sort({ timestamp: -1 })
207 |   .limit(10)
208 |   .toArray();
209 |
210 | const cursorToGiveToUser = latestBugs[latestBugs.length - 1].timestamp;
211 | ```
211 |
212 | Two important things here:
213 |
214 | - You always need to have the data sorted if you do cursor-based pagination, as you are including the point you are at
215 |   in the query.
216 | - You have to provide the user of your endpoint the information they need to send for the next query.
217 |
218 | In the above code we do it with the timestamp, in other implementations an ID can be given.
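
The difference between the two approaches can be sketched without a database at all, using a plain array sorted newest-first (timestamps reduced to numbers for brevity):

```js
// Ten bugs, newest first.
const bugs = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0].map((t) => ({ timestamp: t }));

const pageSize = 3;

// Offset-based: "skip one page, take the next" gives page 2.
const offsetPage = bugs.slice(pageSize, pageSize * 2);

// Cursor-based: "take the next page of items older than the last one I saw".
const cursor = bugs[pageSize - 1].timestamp; // last item of page 1
const cursorPage = bugs
  .filter((bug) => bug.timestamp < cursor)
  .slice(0, pageSize);

console.log(offsetPage.map((b) => b.timestamp)); // [ 6, 5, 4 ]
console.log(cursorPage.map((b) => b.timestamp)); // [ 6, 5, 4 ]
```

Both give the same page here, but if a new bug is inserted between two requests, the offset version shifts and can show duplicates, while the cursor version stays anchored to the timestamp it was given.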
219 |
220 | ### Exercise
221 |
222 | Discuss the advantages and disadvantages of both approaches.
223 |
224 | ### Essence
225 |
226 | Pagination is very useful when dealing with big datasets, as it limits the amount of data that needs to be prepared.
227 | There are multiple ways of implementing it, and which one fits best depends on the situation. Something with a
228 | relatively low number of pages can easily be done using the `skip/limit` combination, but on huge datasets this
229 | approach becomes slow. A cursor-based approach is more complex to implement, but performs better. As usual, it is up
230 | to you to balance what is needed for the situation.
231 |
232 | ## 4. Indexes
233 |
234 | ## 5. Transactions
235 |
236 | ### Explanation
237 |
238 | The idea behind an index and a transaction should already be clear, as both were covered in SQL. So it is purely syntax
239 | here, but if trainees cannot explain why we do these things, then go through it with them again.
240 |
241 | ### Example
242 |
243 | ```js
244 | client.db("logging").collection("bugs").createIndex({ timestamp: -1 });
245 | ```
246 |
247 | This creates an index for sorting descending on timestamp which we have been querying a lot.
248 |
249 | ```js
250 | async function transferCredits(fromAccountId, toAccountId, amount) {
251 | const accountsCollection = client.db("billing").collection("accounts");
252 | const session = client.startSession();
253 |
254 | try {
255 |     await session.withTransaction(async () => {
256 | // Remove from fromUser
257 | await accountsCollection.updateOne(
258 | { _id: fromAccountId },
259 | { $inc: { credits: amount * -1 } },
260 | { session }
261 | );
262 |
263 | // Add to toUser
264 | await accountsCollection.updateOne(
265 | { _id: toAccountId },
266 | { $inc: { credits: amount } },
267 | { session }
268 | );
269 | });
270 | } catch (err) {
271 | await session.abortTransaction();
272 | } finally {
273 | await session.endSession();
274 | }
275 | }
276 | ```
277 |
278 | ### Exercise
279 |
280 | Discuss when and why to do indexes and transactions. What kind of scenarios are there.
281 |
282 | ### Essence
283 |
284 | Both indexes and transactions have a cost attached to them, but can improve your database's performance and security!
285 |
286 | ## 6. SQL vs NoSQL
287 |
288 | The prep exercise handles this, have a look at it [here](./QA_PREP_EXERCISE.md)
289 |
--------------------------------------------------------------------------------
/Week4/MAKEME.md:
--------------------------------------------------------------------------------
1 | # Assignment Databases Week 4
2 |
3 | ## **Todo list**
4 |
5 | 1. Practice the concepts
6 | 2. Prep exercises
7 | 3. MongoDB exercises
8 | 4. Code along
9 |
10 | ## 1. Practice the concepts
11 |
12 | Let's practice some advanced MongoDB queries. Have a look at the following exercises. If you feel unsure about the
13 | commands you used last week then feel free to also do 01 and 02 of this series. You will need to go back to 00 to import
14 | the data into your database, the datasets can be
15 | found [here](https://github.com/mattdavis0351/mongodb/tree/master/datasets):
16 |
17 | - [Advanced MongoDB exercises](https://github.com/mattdavis0351/mongodb-labs/blob/master/exercises/03_advanced-mongo-queries.md)
18 |
19 | ## 2. **Prep exercises**
20 |
21 | > Prep exercises are exercises that you should work on _before_ the session on Sunday. These are a little more difficult
22 | > or show an important concept and as such are a great exercise to talk about with your mentor. Have a solution ready by
23 | > Sunday as you may be asked to show what you did.
24 |
25 | The [QA_PREP_EXERCISE.md](./QA_PREP_EXERCISE.md) file explains what needs to be done for the prep exercise this week.
26 | There will also be some questions at the bottom to think about. Go through them _before_ the session on Sunday as it
27 | will be covered then.
28 |
29 | ## 3. MongoDB exercises
30 |
31 | Let's create a fresh database for this week's assignment. You can create a database called `databaseWeek4` that you can
32 | use for the following exercises.
33 |
34 | ### 3.1 Exercise 1 : Aggregation
35 |
36 | Let's practice some aggregation queries, for that we will have to use some data and
37 | luckily [kaggle](https://www.kaggle.com/) is a great website that provides datasets to use. In the `ex1-aggregation`
38 | folder you will find a csv file with data for you.
39 |
40 | 1. Find a way to get the data in the csv file into your MongoDB database. The documents should look like:
41 |
42 | ```js
43 | {
44 |   _id: ObjectId("625ff77ada84ee8b5dd06e82"),
45 |   Country: "Afghanistan",
46 |   Year: 1950,
47 |   Age: "20-24",
48 |   M: 374109,
49 |   F: 318392
50 | }
51 | ```
52 |
53 | 2. Write a function that will return the array of the total population (M + F over all age groups) for a given `Country`
54 | per year. The result should look something like this, these are the values for `Netherlands`:
55 |
56 | ```js
57 | [
58 | {
59 | _id: 1950,
60 | countPopulation: 10042051
61 | },
62 | {
63 | _id: 1960,
64 | countPopulation: 11448815
65 | },
66 | {
67 | _id: 1970,
68 | countPopulation: 13001941
69 | },
70 | {
71 | _id: 1980,
72 | countPopulation: 14148410
73 | },
74 | {
75 | _id: 1990,
76 | countPopulation: 14965442
77 | },
78 | {
79 | _id: 2000,
80 | countPopulation: 15926188
81 | },
82 | {
83 | _id: 2010,
84 | countPopulation: 16682925
85 | },
86 | {
87 | _id: 2020,
88 | countPopulation: 17134872
89 | },
90 | {
91 | _id: 2022,
92 | countPopulation: 17211448
93 | }
94 | ];
95 | ```
96 |
97 | 3. Write a function that will return all the information of each continent for a given `Year` and `Age` field but add
98 | a new field `TotalPopulation` that will be the addition of `M` and `F`. For example, if I would give `2020` for
99 | the `Year` and `100+` for the `Age` it should return something like this:
100 |
101 | ```js
102 | [
103 |   {
104 |     _id: new ObjectId("62600561b0a05834e3382cf8"),
105 |     Country: "AFRICA",
106 |     Year: 2020,
107 |     Age: "100+",
108 |     M: 1327,
109 |     F: 2723,
110 |     TotalPopulation: 4050
111 |   },
112 |   {
113 |     _id: new ObjectId("62600561b0a05834e3382da0"),
114 |     Country: "ASIA",
115 |     Year: 2020,
116 |     Age: "100+",
117 |     M: 57019,
118 |     F: 207883,
119 |     TotalPopulation: 264902
120 |   },
121 |   {
122 |     _id: new ObjectId("62600561b0a05834e33832a1"),
123 |     Country: "EUROPE",
124 |     Year: 2020,
125 |     Age: "100+",
126 |     M: 22579,
127 |     F: 102056,
128 |     TotalPopulation: 124635
129 |   },
130 |   {
131 |     _id: new ObjectId("62600561b0a05834e33835d4"),
132 |     Country: "LATIN AMERICA AND THE CARIBBEAN",
133 |     Year: 2020,
134 |     Age: "100+",
135 |     M: 19858,
136 |     F: 49218,
137 |     TotalPopulation: 69076
138 |   },
139 |   {
140 |     _id: new ObjectId("62600561b0a05834e3383946"),
141 |     Country: "NORTHERN AMERICA",
142 |     Year: 2020,
143 |     Age: "100+",
144 |     M: 22267,
145 |     F: 83419,
146 |     TotalPopulation: 105686
147 |   },
148 |   {
149 |     _id: new ObjectId("62600561b0a05834e3383985"),
150 |     Country: "OCEANIA",
151 |     Year: 2020,
152 |     Age: "100+",
153 |     M: 1094,
154 |     F: 3980,
155 |     TotalPopulation: 5074
156 |   }
157 | ];
158 | ```
177 |
178 | ### 3.2 Exercise 2 : Transactions
179 |
180 | Just like last week, let's solve the same transaction problem but then in MongoDB. You can use the same data as in
181 | Week3. Note that you will need to include some libraries, which means you will need to set that up as well (create
182 | an `ex2-transactions` folder for this). You will also probably want to create an `index.js` that calls the functions we
183 | will create to test it out, we leave the implementation of that up to you. Now let's get started, we will split our code
184 | into multiple files again, first being the setup file:
185 |
186 | 1. Create a `setup.js` file.
187 | 2. It should clean up the `accounts` collection and then fill it with some sample data. Just like last week we want an
188 | account document to have an `account_number` and `balance` field. Then it should have another field
189 | called `account_changes` that is an array that contains the fields: `change_number, amount, changed_date, remark`.
190 | 3. It's probably best to make this a function that you can export and call.
191 |
192 | Then it is time to write our transaction function:
193 |
194 | 1. Create a `transfer.js` file that will hold our `transfer` function.
195 | 2. It should transfer money from one account to another, so it will need to know the following things: from which
196 | account, to which account, the amount and the remark for this transaction.
197 | 3. This should update the balances of both accounts and for each account add a change to the list. The change number
198 | should be incremented, so if the latest `change_number` is 30, the `change_number` for the new change should be 31.
199 | 4. Test that it works by calling the function to transfer 1000 from account number 101 to account number 102.
200 |
201 | Submit the `setup.js` and `transfer.js` files.
202 |
203 | ## 4. Code Along
204 |
205 | This week we have a small code along to show how to implement pagination and make it reusable for all your routes!
206 |
207 | - [Paginated API with Node and Mongoose](https://www.youtube.com/watch?v=ZX3qt0UWifc). We will be working in Mongoose in
208 | the final project, but this gives a little teaser on what mongoose is!
209 |
210 | ## SUBMIT YOUR ASSIGNMENT!
211 |
212 | After you've finished your todo list it's time to show us what you got! The assignment that needs to be submitted is the
213 | following:
214 |
215 | 1. MongoDB exercises
216 |
217 | Upload your code to your forked Databases repository in GitHub. Make a pull request to the
218 | `HackYourAssignment/databases-cohortXX` repository.
219 |
220 | > Forgotten how to upload your assignment? Go through the [guide](../hand-in-assignments-guide.md) to learn how to do this
221 | > again.
222 |
223 | _Deadline Tuesday 23.59 CET_
224 |
--------------------------------------------------------------------------------
/Week4/QA_PREP_EXERCISE.md:
--------------------------------------------------------------------------------
1 | # Prep exercise week 4
2 |
3 | As a preparation step for the upcoming Q&A, you need to work on the following exercise, which is based on the prep
4 | exercise of the previous week.
5 |
6 | ## Exercise
7 |
8 | Last week you updated your database to be normalized. Now that you have some more NoSQL knowledge, convert your database
9 | to a document-based database. Think about the following:
10 |
11 | - What are the collections?
12 | - What information will you embed in a document and which will you store normalised?
13 |
14 | ## Discussion (Try to write answers to these questions in text, provide queries and commands when necessary)
15 |
16 | - What made you decide when to embed information? What assumptions did you make?
17 | - If you were given MySQL and MongoDB as choices to build the recipe's database at the beginning, which one would you
18 | choose and why?
--------------------------------------------------------------------------------
/Week4/README.md:
--------------------------------------------------------------------------------
1 | # Reading Material Databases Week 4
2 |
3 | ## Agenda
4 |
5 | These are the topics for week 4, all in MongoDB:
6 |
7 | - [Embedded vs Normalised data modeling](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/embedded-vs-normalised.md)
8 | - [Advanced MongoDB operations](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/advanced-operations.md)
9 | - [Pagination](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/pagination.md)
10 | - [Indexes in MongoDB](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/indexes.md)
11 | - [Transactions in MongoDB](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/transactions.md)
12 | - [SQL vs NoSQL databases](https://hackyourfuture.github.io/study/#/databases/sql-vs-nosql-databases.md)
13 |
14 | ## Week goals
15 |
16 | This week we are going to focus fully on MongoDB as the main example of a document-based database, which is the most common subset of NoSQL databases. Let's start with some more advanced data modeling ideas by talking about the difference between [Embedded and Normalised data modeling](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/embedded-vs-normalised.md).
17 |
18 | To work with more complex data models, we also need some more advanced MongoDB operations. Have a look at the [Advanced MongoDB operations](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/advanced-operations.md) section to get acquainted with some of the more commonly used ones. After that, have a look at the [Pagination](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/pagination.md) section for a common use case with these functions.
19 |
20 | Next, let's address some of the topics that are theoretically the same as in SQL, but this time looking at how they work in MongoDB. Have a look at [Indexes](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/indexes.md) and [Transactions](https://hackyourfuture.github.io/study/#/databases/nosql/mongodb/transactions.md).
21 |
22 | Lastly, now that you have an idea of what both SQL and NoSQL mean and how they work, you'll have to ask yourself when to use one over the other in the [SQL vs NoSQL](https://hackyourfuture.github.io/study/#/databases/sql-vs-nosql-databases.md) section.
23 |
24 | ## Finished?
25 |
26 | Have you gone through all of the materials? High five! If you feel ready to get practical, click [here](./MAKEME.md).
27 |
--------------------------------------------------------------------------------
/assets/MySQL-security-setup.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/assets/MySQL-security-setup.jpg
--------------------------------------------------------------------------------
/assets/databases.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/assets/databases.png
--------------------------------------------------------------------------------
/assets/erd-solution.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/assets/erd-solution.png
--------------------------------------------------------------------------------
/assets/playlist-thumbnail.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/assets/playlist-thumbnail.png
--------------------------------------------------------------------------------
/assets/submit-homework.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/assets/submit-homework.png
--------------------------------------------------------------------------------
/assets/weekflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HackYourFuture/databases/acbedd4c02787850566611282609d3f77412fb7d/assets/weekflow.png
--------------------------------------------------------------------------------
/hand-in-assignment-guide.md:
--------------------------------------------------------------------------------
1 | # How to hand in assignments
2 |
3 | In this module you'll submit your assignments only using Git and GitHub.
4 |
5 | 1. [GitHub account containing the assignment hand-in repositories](https://github.com/HackYourAssignment)
6 |
7 | ## 1. GitHub assignment guide
8 |
9 |
10 |
11 | Watch the video (by clicking the image) or go through the following walk-through to learn how to submit your assignments:
12 |
13 | ### ONE TIME ONLY (START OF EVERY MODULE)
14 |
15 | 1. Create a [fork](https://help.github.com/en/articles/fork-a-repo) of
16 | the [HackYourAssignment](https://github.com/HackYourAssignment) module repository.
For Databases, the assignment module repository is `https://www.github.com/HackYourAssignment/databases-cohort##` where
`##` is your cohort number.
You do this by using the `fork` option on the top right.
20 | 2. Navigate to the URL of the cloned repository (it should be in your personal GitHub account, under "repositories")
3. Clone the repository, using SSH, to your local machine. You can do this by typing `git clone` followed by the repository's URL in the
command line (terminal).
23 | 4. On your local machine, navigate to the folder using the command line
24 | 5. Make sure you've cloned it correctly by running `git status` from the command line.
25 |
26 | ### EVERY WEEK
27 |
28 | 1. Do a `git pull` on your main branch to get the latest version.
29 | 2. Create a new branch for each week you have an assignment. For example, for the week 1 assignment for JavaScript create a
30 | branch called `YOUR_NAME-w1-JavaScript`.
31 | Don't forget to checkout this branch after creating it.
32 | 3. Make your assignment (name your assignment files corresponding to the exercises to help out the reviewer. For
33 | example: `ex1 - transactions`)
34 | 4. Once you're finished, add your assignment to a commit. Make sure you *only* commit your assignment files and nothing
35 | else. You can use `git add -p` if you only want to add a couple of files.
36 | You can always check what is happening with the `git status` command (as one of our mentors always says, it is the
37 | console.log of git!).
38 | 5. Create the commit (`git commit`). Make the commit message meaningful, for
39 | example `finished project for assignment week1`.
40 | 6. Push the branch to your forked repository
41 | 7. On the GitHub page of your forked repository, click on the `create pull request` button. Make sure
42 | the `base repository` is your mentor's repository, on its `main` branch
43 | 8. Give the pull request a title in the following format:
44 |
45 | ```markdown
46 | Assignment week 1
47 | ```
48 |
49 | 9. Submit the pull request from your forked repository branch into the `main` branch
50 |
51 | If you have any questions or if something is not entirely clear ¯\\\_(ツ)\_/¯, please reach out on Slack!
52 |
--------------------------------------------------------------------------------