├── 5-python ├── a-python │ ├── 3-oop-concepts.md │ ├── 1-python-basics.md │ ├── 4-flask.md │ └── 2-oop-basics.md └── README.md ├── 0-basic-concepts ├── assets │ ├── VM-diagram.png │ ├── port-fwd1.png │ └── port-fwd2.png ├── README.md └── request.txt ├── 3-git ├── README.md ├── 0-git-basics.md ├── 2-git-workflow.md └── 1-complete-cheatsheet.md ├── .gitignore ├── 4-vagrant ├── README.md ├── vagrant-commands.md └── vagrantfile-syntax.md ├── README.md ├── 2-unix ├── 1-linux-sysadmin.md └── README.md ├── 6-docker ├── 4-docker-volumes.md ├── 2-cont-networks.md ├── 5-docker-compose.md ├── README.md ├── 1-run-container.md └── 3-docker-images.md └── 1-get-started └── README.md /5-python/a-python/3-oop-concepts.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /5-python/a-python/1-python-basics.md: -------------------------------------------------------------------------------- 1 | (to be done) -------------------------------------------------------------------------------- /0-basic-concepts/assets/VM-diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nyu-devops/nyu-devops-concepts/HEAD/0-basic-concepts/assets/VM-diagram.png -------------------------------------------------------------------------------- /0-basic-concepts/assets/port-fwd1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nyu-devops/nyu-devops-concepts/HEAD/0-basic-concepts/assets/port-fwd1.png -------------------------------------------------------------------------------- /0-basic-concepts/assets/port-fwd2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nyu-devops/nyu-devops-concepts/HEAD/0-basic-concepts/assets/port-fwd2.png -------------------------------------------------------------------------------- /3-git/README.md: -------------------------------------------------------------------------------- 1 | # GIT 2 | 3 | This document gives an overall picture of `Git`, assuming no previous background. It is composed by four documents 4 | 5 | ### Document contents: 6 | #### [0 - Basics of Git](0-git-basics.md) 7 | 8 | #### [1 - Git commented cheatsheet](1-complete-cheatsheet.md) 9 | 10 | #### [2 - Git workflow](2-git-workflow.md) -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled source # 2 | ################### 3 | *.com 4 | *.class 5 | *.dll 6 | *.exe 7 | *.o 8 | *.so 9 | 10 | # Packages # 11 | ############ 12 | # it's better to unpack these files and commit the raw source 13 | # git has its own built in compression methods 14 | *.7z 15 | *.dmg 16 | *.gz 17 | *.iso 18 | *.jar 19 | *.rar 20 | *.tar 21 | *.zip 22 | 23 | # Logs and databases # 24 | ###################### 25 | *.log 26 | *.sql 27 | *.sqlite 28 | 29 | # OS generated files # 30 | ###################### 31 | .DS_Store 32 | .DS_Store? 
33 | ._* 34 | .Spotlight-V100 35 | .Trashes 36 | ehthumbs.db 37 | Thumbs.db -------------------------------------------------------------------------------- /5-python/README.md: -------------------------------------------------------------------------------- 1 | # PYTHON 2 | 3 | This file gives an overview of the Python knowledge required for the course, with some general topics and an overview of libraries used in the course such as Flask, Unittest, or SQLAlchemy. 4 | 5 | The available files are shown below (click to go to each part): 6 | 7 | ## First Part: Python fundamentals 8 | **1 - Python Basics**: (To be done) 9 | 10 | **[2 - Object-Oriented Programming with Python](a-python/2-oop-with-python.md)**: how to work with classes and objects in Python 11 | 12 | **3 - Working with functions in Python**: (Coming soon) 13 | 14 | ## Second Part: Packages used in the course 15 | **[4 - Flask](a-python/4-flask.md)** 16 | 17 | **5 - Unittest**: (Coming soon) 18 | 19 | **6 - SQLAlchemy**: (Coming soon) 20 | -------------------------------------------------------------------------------- /4-vagrant/README.md: -------------------------------------------------------------------------------- 1 | # VAGRANT 2 | 3 | The documentation on Vagrant is divided into three parts: 4 | 5 | **[0 - Basics of Vagrant and intuition](#basics-of-vagrant-and-intuition)**: why is Vagrant useful at all? 6 | 7 | **[1 - Useful commands to run Vagrant](vagrant-commands.md)**: some useful commands (to be run in the terminal) to run Vagrant 8 | 9 | **[2 - Defining the `Vagrantfile`](vagrantfile-syntax.md)**: the syntax of the Vagrantfile (Ruby language) that will allow us to launch the Virtual Machines as coded. (In progress) 10 | 11 | # Basics of Vagrant and intuition 12 | Vagrant is one of the components we will use the most during the course, and a very handy DevOps tool. In a nutshell, Vagrant allows us to define a set of basic steps to get a Virtual Machine ready according to a set of specifications (which will be defined in the `Vagrantfile`). 13 | 14 | It is a DevOps tool since it **automates** the process of starting a VM and adding packages to it, ensuring that it will run the same way every time we run the same `Vagrantfile`, even across different computers! 15 | 16 | ## The `Vagrantfile` 17 | The basic file we need to take care of once Vagrant is installed is the `Vagrantfile` (which uses `Ruby` syntax) and lists the requirements for running the VM. We will encode statements such as `Allocate 1024 MB of RAM for the VM`, `Download Python 3`, `Forward any traffic received on port 5000 in the VM to port 5000 in the local machine`, etc. 18 | 19 | The `Vagrantfile` syntax is explained in more detail in [this section](vagrantfile-syntax.md). 20 | 21 | ## The commands 22 | Vagrant also installs a set of commands on our local machine, which allow us to manage VMs through command-line statements; they are [listed in this section](vagrant-commands.md). -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # NYU DevOps course documentation 2 | 3 | This repository provides the technical documentation for the tools required for the *DevOps and Agile Methodologies* course at NYU. 4 | 5 | The content of the repository is listed below; all resources are clickable.
6 | 7 | 8 | # Contents 9 | > Note: Click on each title to go to the resource 10 | 11 | ## [0 - Basic concepts](0-basic-concepts/README.md) 12 | This part explains some basic CS concepts used on the course, such as Network and Unix basics. 13 | >Subjects treated: [hardware components](0-basic-concepts/README.md#11-basic-hardware-components), [software components](0-basic-concepts/README.md#12-software-components), [virtualization](0-basic-concepts/README.md#122-hypervisors-and-vms), [IPs and Ports](0-basic-concepts/README.md#21-ips-and-ports), [server requests](0-basic-concepts/README.md#22-http-protocol). 14 | 15 | ## [1 - Get started](1-get-started/README.md) 16 | This section documents the elementary tools required to get started on the course (such as a text editor and command line interpreter), as well as some resources and proposed tools. 17 | >Subjects treated: [VS Code](1-get-started/README.md#1-text-editor), [Command Line Interfaces](1-get-started/README.md#2-terminal), [Postman](1-get-started/README.md#3-postman) 18 | 19 | ## [2 - Unix](2-unix/README.md) 20 | This part documents the Unix commands that will be used during the course. 21 | >Subjects treated: [Linux commands](2-unix/README.md#1---basic-commands), [environment variables](2-unix/README.md#3---environment-variables) 22 | 23 | ## [3 - Git](3-git/README.md) 24 | The Git section explains why `Git` is useful, how to use it, and some concepts on what goes on in the background. 25 | >Subjects treated: [what is Git (intuition)](3-git/0-git-basics.md), [Git cheatsheet](3-git/1-complete-cheatsheet.md), [Git workflow](3-git/2-git-workflow.md) 26 | 27 | ## [4 - Vagrant](4-vagrant/README.md) 28 | The Vagrant section has two parts: the list of commands required to run Vagrant (on the terminal), and the syntax used on the Vagrantfile (second part to be done). 29 | >Subjects treated: [what is Vagrant? (intuition)](4-vagrant/README.md), [CLI commands to run Vagrant](4-vagrant/vagrant-commands.md), [syntax of the `Vagrantfile`](4-vagrant/vagrantfile-syntax.md) 30 | 31 | ## [5 - Python](5-python/README.md) 32 | The Python part goes over all concepts required for the course: som Python fundamentals, Object-oriented programming concepts, and libraries used in the course such as `Flask`, `Unittest`, and `SQLAlchemy`. (on-going) 33 | >Subjects treated: [Object-oriented programming (Classes)](5-python/a-python/2-oop-with-python.md), [the Flask library](5-python/a-python/4-flask.md) 34 | 35 | ## [6 - Docker](6-docker/README.md) 36 | The Docker part intuitively explains docker, and shows how to use it (on-going) 37 | >Subjects treated: (On-going) 38 | -------------------------------------------------------------------------------- /2-unix/1-linux-sysadmin.md: -------------------------------------------------------------------------------- 1 | # Basics of Linux system administration 2 | 3 | # 1 - Introduction 4 | 5 | ## 1.1 Everything is a file 6 | 7 | The very basic concept one must grasp when working on a Linux system is this: everything 8 | is a file. Directories, processes, external components (such as keyboards or mouses) are 9 | files which can accept standard input (receive information) and standard output (send 10 | information). In a linux system, for example, a keyboard is a file on the `/??` folder 11 | that emits as standard output the letters that the user types on the keyboard. This stream 12 | of data is then captured by running processes (such as the code editor) that are 13 | expected to use this data. 
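A quick way to build intuition for this idea is to play with input/output redirection in a shell; the snippet below is a minimal sketch (the file name is made up):

```sh
# A command's standard output can be redirected into a file...
echo "hello" > /tmp/greeting.txt

# ...and a file can be fed to a command as its standard input.
wc -c < /tmp/greeting.txt

# Even devices behave like files: read 8 random bytes from the
# kernel's random-number device and print them in hex.
head -c 8 /dev/urandom | od -An -tx1
```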
14 | 15 | ## 1.2 The Kernel 16 | 17 | The kernel is an important concept to grasp while entering the world of Linux system 18 | administration. As explained [in the introduction of this documentation](), 19 | it is the central part of the Operating System that manages the computer's resources 20 | in order to perform the tasks the user asks it to do. 21 | 22 | 23 | 24 | # 2 - Filesystem structure 25 | 26 | All Linux systems come with the same basic file structure, described below. 27 | 28 | 29 | Here is a description of all basic directories: 30 | * `/boot`: contains all the files needed for the system to boot. When it starts, it looks for a hardcoded file (ex: file `grub.cfg` tells the system which OS to boot). 31 | * `/root`: the home directory of the root user (not to be confused with the `/` directory) 32 | * `/dev`: system devices; all devices (ex: keyboard, mouse, etc.) will be a file inside this directory 33 | * `/etc`: configuration files of applications that are built on top of Linux, or that come with Linux (mail, etc.). It is important to back up this folder before touching any config file 34 | * `/bin` (link to `/usr/bin`): contains the binaries for everyday user commands 35 | * `/sbin` (link to `/usr/sbin`): contains binaries for system commands 36 | * `/opt`: optional add-on applications (not part of the OS). 37 | * `/proc`: folder storing files (created by the kernel) for each running program (when the computer starts, it should be empty) 38 | * `/lib` (now points to `/usr/lib`): stores C programming library files needed by commands and apps (ex: `cd` and `pwd` use some C libraries) 39 | * `/tmp`: contains all temporary files 40 | * `/home`: directory for users (each user has a directory inside `home`) 41 | * `/var`: where the system stores its logs (error logs, etc.) 42 | * `/run`: stores temporary runtime files (ex: PID files) used by system daemons that start very early 43 | * `/mnt`: used to mount external filesystems 44 | * `/media`: used for CD-ROMs (ex: if you mount a virtual ISO image, it will show in this folder) 45 | 46 | # 3 - File attributes 47 | ## 3.1 - File types 48 | When the command `ls -l` is run, a list of attributes is displayed for each file: 49 | 1. Type (ex: `drwxrwxrwx`), there are seven file types: 50 | * `d`: directory 51 | * `l`: link 52 | * `-`: regular file 53 | * `c`: special file or device file (ex: keyboard) 54 | * `s`: socket 55 | * `p`: named pipe 56 | * `b`: block device 57 | 2. Number of links associated to that file 58 | 3. Owner of the file 59 | 4. Group of the file 60 | 5. Size 61 | 6. Month / Day / Time 62 | 7. Name 63 | -------------------------------------------------------------------------------- /4-vagrant/vagrant-commands.md: -------------------------------------------------------------------------------- 1 | # Useful commands to run Vagrant 2 | 3 | These commands must be run in the terminal in order to control a Vagrant-generated 4 | Virtual Machine. They allow us to launch, access (via ssh), halt, check 5 | the status of, or remove Virtual Machines from the hard disk. 6 | 7 | > Note: when any Vagrant command is run, Vagrant will look in the current directory 8 | > for a `Vagrantfile`. If it doesn't find one, it will keep looking in the parent directories 9 | > until the first `Vagrantfile` is found. The easiest way to run the correct 10 | > `Vagrantfile` is to open the folder where the `Vagrantfile` can be found (check the 11 | > current folder using the `pwd` command).
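Before running the commands below, it can help to confirm that the shell is sitting in the folder that contains the `Vagrantfile` (a minimal sketch):

```sh
# Print the current working directory...
pwd

# ...and check that a Vagrantfile is present in it.
ls Vagrantfile
```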
12 | 13 | ## 0 - Initializing Vagrant 14 | 15 | ```sh 16 | $ vagrant init <box-name> 17 | ``` 18 | - creates a 'default' `Vagrantfile` in the current folder, which has the minimum basic 19 | requirements to launch a VM; `<box-name>` is the name of a valid Vagrant box such as 20 | `ubuntu/bionic64`, or `ubuntu/xenial64`. 21 | 22 | > Note: Box names can be found on [vagrantup.com](https://app.vagrantup.com/boxes/search). 23 | 24 | ## 1 - Launching the VM 25 | 26 | ```sh 27 | $ vagrant up 28 | ``` 29 | - launches the virtual machine with the specifications the `Vagrantfile` had the last 30 | time we ran `provision`. 31 | 32 | > Note: `vagrant up` will check if there is a Virtual Machine already built for that 33 | > Vagrantfile (which means it is not the first time we run `vagrant up` in that 34 | > folder). If there is, it simply launches it without looking at the `provision` 35 | > statements of the `Vagrantfile` that download software packages such as `Python`. 36 | > If there is no VM, `vagrant up` will launch the VM and run these `provision` statements. 37 | 38 | ```sh 39 | $ vagrant provision 40 | ``` 41 | - checks the Vagrantfile, and downloads the required packages (if any are missing) to 42 | make the virtual machine work. 43 | 44 | 45 | ```sh 46 | vagrant reload --provision 47 | ``` 48 | - restarts the Vagrant VM with a new Vagrantfile configuration, provisioning all required 49 | packages (same as running `provision`). This command is useful to reload the VM after 50 | we have modified the Vagrantfile. 51 | 52 | ## 2 - Connecting to the VM 53 | 54 | ```sh 55 | vagrant ssh 56 | ``` 57 | - Vagrant accesses the initialized VM through SSH. After running this command, we 58 | 'enter' the VM and can pass commands to it. (To 'get out' of the VM, we can use 59 | the command `exit`; that will bring the terminal back to our local machine). 60 | 61 | > Note: if we `exit` a running VM, it will not stop running; for that we need to 62 | > **explicitly halt it** (see below). 63 | 64 | ## 3 - Halting the VM 65 | 66 | ```sh 67 | $ vagrant halt 68 | ``` 69 | - stops the currently running Vagrant VM. 70 | 71 | > Note: a specific VM can be halted by adding its unique id (a 7-character hash code such 72 | > as `e3ea523`) using `vagrant halt <id>` (the `<id>` can be retrieved 73 | > using `vagrant global-status`). This also works for other commands such as `destroy`.
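For example, halting one specific VM among several could look like this (a sketch; the id shown is the example one from the note above):

```sh
# List every Vagrant VM known on this machine, with its id and state.
vagrant global-status

# Halt only the VM whose id appeared in that listing.
vagrant halt e3ea523
```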
74 | 75 | 76 | ```sh 77 | $ vagrant suspend 78 | ``` 79 | - suspends the currently running VM (like 'hibernating': saves the current state of the 80 | machine) 81 | 82 | 83 | ```sh 84 | $ vagrant resume 85 | ``` 86 | - resumes a suspended VM 87 | 88 | ## 4 - Checking status 89 | 90 | ```sh 91 | vagrant status 92 | ``` 93 | - shows the status of the virtual machine in the current folder (tells us if it is 94 | running or not) 95 | 96 | 97 | ```sh 98 | $ vagrant global-status 99 | ``` 100 | - shows the status of all created VMs on the machine 101 | 102 | ## 5 - Removing the VM from hard disk 103 | 104 | ```sh 105 | $ vagrant destroy 106 | ``` 107 | - removes the VM from your hard disk 108 | 109 | > Note: a specific VM can be destroyed by referencing its unique id (a 7-character hash 110 | > code such as `e3ea523`) using `vagrant destroy <id>` (the `<id>` can 111 | > be retrieved using `vagrant global-status`) 112 | -------------------------------------------------------------------------------- /6-docker/4-docker-volumes.md: -------------------------------------------------------------------------------- 1 | # Data volumes 2 | 3 | Useful resources: 4 | - [Manage data in Docker](https://docs.docker.com/storage/) 5 | - [O'Reilly's post about immutable infrastructure](https://www.oreilly.com/radar/an-introduction-to-immutable-infrastructure/) 6 | - [The 12-factor app](https://12factor.net/) 7 | 8 | By design, Docker containers are ephemeral and immutable (just re-deploy, do not 9 | modify). Hence the data stored on containers will not persist. Volumes and Data Mounts 10 | are the way data (such as databases or unique data) is persisted in Dockerized 11 | applications: 12 | - Volumes are the way we configure images to persist data. A volume is simply a 13 | location inside of a container, in which all stored data will be persisted to 14 | a location on the host machine (we do not know which yet; usually a default 15 | location managed by Docker, something such as 16 | `/var/lib/docker/volumes/<volume-name>/_data` through a mount). 17 | - Bind mounts are the way this data is actually persisted when a container runs. When 18 | a container that has a volume runs, Docker creates a mount that binds the contents of 19 | the volume in a container to a physical location in the host machine (usually in a 20 | folder such as `/var/lib/docker/volumes/<volume-name>/_data`). Data modified in 21 | either of the two locations is modified in the other as well. But bind mounts can 22 | be defined dynamically for any folder on the host machine. 23 | 24 | > Note: on a Mac/Windows, Docker actually runs a Linux VM; hence, the mount path will 25 | > be on that VM, and not reachable through the host OS. 26 | 27 | # 1 - Volumes 28 | 29 | ## Adding volumes to the `Dockerfile` 30 | 31 | Volumes are specified in the image configuration (check the [`VOLUME`](docker-images.md#31-building-blocks-of-the-dockerfile) 32 | section of the `Dockerfile` definition). 33 | 34 | > Note: a volume will dynamically create a mount automatically on the host from a default 35 | > address managed by Docker. 36 | 37 | ## Naming volumes 38 | 39 | Volumes can be named when a container is initialised: 40 | 41 | ```sh 42 | $ docker container run -v <volume-name>:<path-in-container> 43 | ``` 44 | - Adds a name to the volume. 45 | 46 | ## Creating volumes (ahead of time) 47 | 48 | Creating volumes ahead of time is useful if we want to add tags or drivers to them 49 | (not sure why we would need any of those).
50 | 51 | ```sh 52 | $ docker volume create 53 | ``` 54 | 55 | ## Inspecting volumes 56 | 57 | ```sh 58 | $ docker volume inspect 59 | ``` 60 | - Returns the metadata of the volume (will not say which container it is bound to 61 | though -> we should add names for it through 62 | `docker container run -v :`) 63 | 64 | # 2 - Bind Mounts 65 | 66 | A bind mount is simply a map between a directory in the host system to a directory in a 67 | container (basically two directories pointing to the same files). It does not need a 68 | volume to work, it is created for a container level. 69 | 70 | In the mounts world whatever is in the host system always "wins": it overwrites whatever 71 | the contents of the container. When data is erased at runtime on the container, that will 72 | never affect the host data (preserved), instead, a layer of changes will be superposed 73 | (which only affects the container) - this part is to be checked, not sure. Data created 74 | in the host after the container started will also be updated in the container. 75 | 76 | ## Defining bind mounts 77 | 78 | Bind mounts are created by specifying an absolute path (starting with a `/`) instead of a name when running a container: 79 | 80 | ```sh 81 | $ docker container run -v /absolute/path/in/host:/path/in/container 82 | ``` 83 | - Creates a mount binding the contents of the host location inside of the container (the 84 | container will start with the same files as the host at startup, then modify them). 85 | 86 | > Note: to bind the current directory to the container, we can do `$(pwd):/path/in/container` 87 | -------------------------------------------------------------------------------- /3-git/0-git-basics.md: -------------------------------------------------------------------------------- 1 | ## 0 Git basics 2 | ### 0.1 Why is `Git` useful? 3 | `Git` makes it easy to handle code written by a team. First, by enabling multiple people to work in different parts of the code without interfering with one another thanks to branches. Secondly, by allowing recoverability; `Git` keeps track of the changes made to the code at any state, and makes it easy to recover any step the code has been in. 4 | 5 | The usefulness of `Git` has brought complimentary tools such as `GitHub`, which added many additional features allowing open-sourcing, Continuous Integration and Delivery, and much more. 6 | 7 | ### 0.2 Basic concepts 8 | #### The `.git/` folder: where data is stored 9 | Instead of keeping a snapshot of the latest state of the files, `Git` stores documents as an overlay of changes, in a tree structure inside the `.git/` folder (local repository). The `.git/` folder is a hidden folder saved in the path we have told `Git` to initialize the project (either with `git init` or `git clone`; see sections 1 and 2). 10 | 11 | - Example of how files are stored: From an original file of 2000 lines of code, if we remove 10 lines of code and add 50 and then save the file (or 'commit' it in `Git` jargon), instead of replacing the 2000 lines of code file with another file with 2040 lines, `Git` will keep the original file and place on top of it a new file mentioning the changes in the code (the 10 lines removed and the 50 added). That way, any step that was saved in the `.git/` folder is retrievable. 12 | 13 | Note: it is important to distinguish our working files with our local files inside the `.git/` folder. Our 'local' files are those saved in the working folder, the ones we open with the text editor and modify. 
`Git` will save (when we `commit`) snapshots of these local files in the `.git/` folder when we tell it so. `Git` will also retrieve the saved files in the `.git/` folder (for example, when we change to another branch), and update our local files accordingly. 14 | 15 | #### Committing 16 | Committing is the way we tell `Git` to save the current state of our code in the local `.git/` folder. Each time we feel we are at a step that we might want to retrieve later, or save (once a piece of our code starts working as we expected, for example), we can run the `git commit` command to save the changes (see paragraph 1 below). 17 | 18 | #### Workspace, Staging Area, Local Repository, Remote Repository 19 | It is important to understand the four 'stages', or 'areas' our code can be in: 20 | - The **workspace** is simply the area where our local files are placed (usually the current version of our code); these are the files we can open with the text editor and modify to update the code. 21 | - The **staging area** is the list of files 'to be saved to the local `.git/` repository' when we decide to. (The list gets updated every time we run the `git add` command; we can check the current status of the list by running `git status`). `Git` keeps that list to make sure we keep track of our changes before we `commit` our changes in the local `.git/` repository (see paragraph 1.2 of the [Git cheatsheet](https://github.com/lombardero/nyu-devops-concepts/blob/master/3-git/1-complete-cheatsheet.md)). 22 | - The **local repository** is a hidden folder named `.git/`, where `Git` will save all the versions of the files we committed in an optimized format. 23 | - The **remote repository** is simply an online copy of our project (usually saved on GitHub or GitLab). Having a remote copy of the project enables many features, such as granting access to the latest version of our code to our team members (see Paragraph 2 of the [Git cheatsheet](https://github.com/lombardero/nyu-devops-concepts/blob/master/3-git/1-complete-cheatsheet.md)). 24 | 25 | #### Branches 26 | A branch is an independent version of the code; multiple branches can be active at the same time. Each person usually works in a single 'branch' (usually adding a new, isolated feature for the code); the changes that person makes to the branch will not affect other parts of the code. Once the branch is finished, changes on the branch can be easily compared to the current version of the code, making the code easy to review. 27 | 28 | Note: Each branch is created for developers to work in a single feature without affecting the 'base' branch (usually `master`, the 'current working version' of the code); once a branch is tested , it is 'merged' back to the base branch. 29 | - Example of when to use a branch: On monday, a developer of a Pet shop website is asked to add a new service to the website of displaying a photo of the available pets. For that, he creates a new branch called `pets-photos` (which at the beginning is a copy of the `master` branch), and starts working on it; he estimates he will do the job in three days. Creating a new branch allows him to start working on new code without losing the 'working version': '`master`'. On tuesday afternoon, he receives a call from the Pet show owner saying the website is down due to a bug on the code. `Git` allows him to save the current work on the `pets-photos` branch, and then create a new `urgent-fix` branch (which is again a copy of `master`) to fix the bug quickly. 
Once he is done fixing the bug, he merges the code of `urgent-fix` back to `master`, which enables the webpage to work again (`master` is now updated with the changes of `urgent-fix`). Now, the developer can take back where he left the `pets-photos` branch, and continue the feature he was previously working on. -------------------------------------------------------------------------------- /6-docker/2-cont-networks.md: -------------------------------------------------------------------------------- 1 | # Docker networks 2 | 3 | > Click here to [go to the commands directly](#2----getting-data). 4 | # 1 - Concepts 5 | ## 1.1 What are networks used for in docker 6 | 7 | Networks are abstractions that allow traffic between containers. The best 8 | way of connecting two containers is to put them in the same network; once 9 | that is done, both containers can talk to each other and will live in a 10 | "containerized" network. 11 | 12 | That way, two containers will be able to talk to each other if they expose 13 | their ports, AND are in the same network. 14 | 15 | > Note: Docker has a "Batteries included, but removable" philosophy, where 16 | > defaults work, but are customizeable. (everything works without specifying 17 | > a network) 18 | 19 | ### How networks interact with the host IP 20 | 21 | By default, all containers are connected to a network called "bridge" (also called 22 | "docker0"), which is used by Docker to connect to the host machine's Ethernet interface 23 | (so the traffic can reach the host -> otherwise it would be rejected). 24 | 25 | When we add the option `docker container run -p 8080:80`, we are telling docker to 26 | forward any traffic incoming from the port 8080 of the Ethernet interface to port 27 | 80 of container. 28 | 29 | ### Networks vs exposing ports 30 | 31 | All containers run through a NAT firewall (a process blocking all traffic the 32 | private IP did not request) on the host IP. When we use `docker container run -p`, 33 | we are exposing the container traffic to a port in the host system. 34 | 35 | > Note: two containers cannot be exposed on the same port at the Host. 36 | 37 | Two applications do not need to have their ports exposed to talk to each other 38 | (ex: a process connected to a MySQL database), the best practice is to put them in 39 | the same network (which will be isolated). 40 | 41 | > Note: containers can talk to multiple networks. 42 | 43 | We should put containers that talk to each other in the same networks, so the traffic 44 | does not need to go out to the Ethernet interface and back in. 45 | 46 | ### Different networks in Docker 47 | 48 | There are three special networks in docker (apart from the ones the user creates): 49 | - `bridge` or `docker0`: the default network all containers are attached to 50 | - `host`: a network that skips the usual networking of Docker and connects the 51 | containers directly to the host interface (containers are less protected but 52 | enhances performance in certain situations) 53 | - `none`: a network that is not attached to anything 54 | 55 | ### Default security 56 | 57 | By default, containers are never exposed to the host, their traffic is only connected 58 | via the docker networks. If a Docker must be accessed from the outside, it must be 59 | exposed explicitely with `-p`. 60 | 61 | ## 1.2 Docker DNS naming 62 | 63 | ### DNS naming 64 | 65 | Docker uses IP addresses dynamically for containers (every time a container is spawned, 66 | it could use a different address). 
To solve this problem, Docker adds a "Docker DNS", which 67 | lets containers inside the same network reach each other by container name 68 | (instead of needing to specify the IP address). 69 | 70 | When a container is added to a network, a DNS entry is created for it in that 71 | network. This allows us to use the container's name, instead of the IP address it is 72 | dynamically associated with, to talk to the container. 73 | 74 | > Note: the default network "bridge" does not provide this automatic name resolution; if 75 | > we want to link containers on it, we need to do so explicitly through the 76 | > `--link <container>` option. 77 | 78 | ### DNS aliases 79 | 80 | Since Docker Engine 1.11, multiple containers of the same network can respond to the 81 | same DNS address. This enables useful features such as round-robin routing (similar 82 | to how large websites divert traffic to different instances of their services). 83 | 84 | # 2 - Getting data 85 | 86 | ## 2.1 Show open ports 87 | 88 | Use the below command to show the open ports of a specific container: 89 | 90 | ```sh 91 | $ docker container port <container> 92 | ``` 93 | 94 | - Shows which ports are forwarding their traffic from the host to the 95 | container. 96 | 97 | ## 2.2 Get container IP address 98 | 99 | ```sh 100 | $ docker container inspect --format '{{ .NetworkSettings.IPAddress }}' <container> 101 | ``` 102 | 103 | - Returns the IP address of the container. 104 | 105 | > Note: `--format` can be used to get much other data from the container. The field 106 | > between brackets is used to select one of the JSON fields. 107 | 108 | # 3 - Interacting with networks 109 | 110 | ## 3.1 Querying networks 111 | 112 | ```sh 113 | $ docker network ls 114 | ``` 115 | - Lists all available networks 116 | 117 | ```sh 118 | $ docker network inspect <network> 119 | ``` 120 | - Prints a JSON object with all the details of the network, including which containers 121 | are attached to it. 122 | 123 | ## 3.2 Creating networks 124 | 125 | ```sh 126 | $ docker network create <network-name> 127 | ``` 128 | - Creates a new network 129 | - Options 130 | - `-d`: specify the driver (?), by default will use `bridge` 131 | 132 | ## 3.3 Connecting containers to networks 133 | 134 | ```sh 135 | $ docker network connect <network> <container> 136 | ``` 137 | - Connects the container to the network specified (attaches the container to a new 138 | Ethernet interface; different networks have different IP addresses) 139 | 140 | ## 3.4 Adding a DNS alias to a container 141 | 142 | ```sh 143 | $ docker container run --network-alias=<alias> <image> 144 | ``` 145 | - Gives the container an alias to respond to. Many containers can respond to the same 146 | alias in a network, resulting in a round-robin routing strategy. 147 | -------------------------------------------------------------------------------- /6-docker/5-docker-compose.md: -------------------------------------------------------------------------------- 1 | # Docker compose 2 | 3 | Docker Compose enables running multi-container applications (the usual case) with 4 | one command. For that, Docker Compose introduces two things: 5 | - the `docker-compose.yml` configuration file, in which all the `container run` 6 | options are specified (base images, environment variables, volumes, networks, 7 | etc.) 8 | - a CLI tool that enables us to run all these containers with a single command.
10 | 11 | > Note: check out Docker compose's [official documentation](https://docs.docker.com/compose/compose-file/compose-file-v3/) 12 | 13 | # 1 - The Compose YAML file 14 | 15 | The compose YAML file contains all instructions that Docker needs to run in order to 16 | build and run a multi-container application. 17 | 18 | ## 1.1 Version 19 | 20 | The Compose file format has its own versions (ex: 3.1, 3.3), which needs to be 21 | specified at the start of the file : 22 | 23 | ```yml 24 | version: '3.3' 25 | ``` 26 | - Sets up the version used for Docker compose (default is 1) 27 | 28 | ## 1.2 Services 29 | 30 | The services section on the Docker compose enables us to define all the containers 31 | the application needs to run. 32 | 33 | This is the basic structure for defining services: 34 | ```yml 35 | services: 36 | : # Name Given to the container; this will be used as a DNS for 37 | # the container inside the network. 38 | image: # (Optional) Same as the "FROM" statement in the Dockerfile 39 | # if no base image provided, docker-compose will look for a 40 | # Dockerfile path. 41 | command: ["a", "startup", "command"] # (Optional) Only needed if no Dockerfile 42 | # is provided 43 | environment: 44 | ENV_VARIABLE: # (Optional) Sets up an environment variable 45 | volumes: # (Optional) Define volumes 46 | - :/path/inside/container 47 | ports: # (Optional Ports exposed) 48 | - : 49 | ``` 50 | 51 | ### Option 1: using Compose to build the image 52 | 53 | A container needs either a `Dockerfile` path: 54 | ```yml 55 | services: 56 | : 57 | build: 58 | context: path/to/dockerfile 59 | args: # (Optional) Arguments passed to the Dockerfile 60 | - ARG_NAME= 61 | image: # (Optional) Name given to the image created by the Dockerfile 62 | 63 | ``` 64 | 65 | ### Option 2: use a base image 66 | 67 | Or a base image and a startup command: 68 | ```yml 69 | services: 70 | : # Name Given to the container; this will be used as a DNS for 71 | # the container inside the network. 72 | image: # Same as the "FROM" statement in the Dockerfile 73 | # if no base image provided, docker-compose will look for a 74 | # Dockerfile path. 75 | command: ["a", "startup", "command"] # (Optional) 76 | ``` 77 | > Note: the `command` field is only required if it is different than the one used 78 | > on the base image used. 79 | 80 | ### Adding container dependencies 81 | 82 | It is common that a given containerized application depends on a second one to run 83 | successfully (ex: an application needs a database to start). We can make Docker compose 84 | start up a set of dependent containers when the "parent" one starts: 85 | 86 | ```yml 87 | services: 88 | : 89 | depends_on: 90 | - 91 | ``` 92 | - Will automatically start up `` when `` is started 93 | 94 | 95 | ## 1.3 Volumes 96 | 97 | This section allows us to tag all volumes we have attached to our services. 98 | 99 | ```yml 100 | services: 101 | : 102 | volumes: 103 | - volume-name-1:/a/path/in/the/container 104 | - colume-name-2:/another/path 105 | 106 | volumes: 107 | volume-name-1: 108 | volume-name-2: 109 | ``` 110 | - Syntax to create the volumes needed for our service to run. 111 | 112 | ## 1.4 Networks 113 | 114 | This section will let us set up the networks. 115 | 116 | # 2 - Running Compose through the CLI 117 | 118 | The Docker compose command is a separate binary file from the regular `docker` one. 119 | On Mac it comes with Docker itself (not on Linux). It is tdesigned for local development 120 | and testing. 
Docker compose is built thinking it will be used by Docker users, hence, 121 | most Docker commands have their "equivalent" in Compose. 122 | 123 | To ge the full list of commands supported: 124 | ```sh 125 | $ docker-compose --help 126 | ``` 127 | 128 | ## 2.1 Start up the containers 129 | 130 | The `up` command sets up the networks, volumes needed and starts running the specified 131 | containers 132 | 133 | ```sh 134 | $ docker-compose up 135 | ``` 136 | - Setup volumes, networks and runs containers 137 | - Options: 138 | - `-d`: run all containers in a daemon thread 139 | - `--build`: rebuild the images of the containers before startup 140 | 141 | ## 2.2 Stop containers and clean up 142 | 143 | The `down` command stops and deletes all containers, volumes and networks created (not 144 | the images though). 145 | 146 | ```sh 147 | $ docker-compose down 148 | ``` 149 | - Stop and remove all containers, volumes and networks 150 | - Options: 151 | - `-v`: removes all local files in the volumes when bringing containers down 152 | 153 | ## 2.3 Getting container data 154 | 155 | ### Getting container logs 156 | 157 | ```sh 158 | $ docker-compose logs 159 | ``` 160 | - Print logs of a given container (if no container provided, all logs will de dumped) 161 | - Options: 162 | - `--follow`: follows the logs instead of just showing the latest ones 163 | - `--tail=n`: show n latest logs 164 | - `--no-color`: get logs without color encoding (useful if we capture them for another 165 | process). 166 | 167 | ### Showing running stuff 168 | 169 | To show running containers 170 | ```sh 171 | $ docker-compose ps 172 | ``` 173 | - Prints the containers currently running from the docker-compose context 174 | 175 | To show all services running inside of the containers: 176 | ```sh 177 | $ docker-compose top 178 | ``` 179 | -------------------------------------------------------------------------------- /6-docker/README.md: -------------------------------------------------------------------------------- 1 | # Docker 2 | 3 | Like [Vagrant](../4-vagrant/README.md), Docker is a DevOps tool since it automates the process of creating an environment (packaging it in an isolated container with its application) and running applications. But it is much more than that, let's see why in this document. 4 | 5 | The full Docker documentation is linked below: 6 | - **[0 - Basic intuition](#basic-intuitions)**: some fundamental concepts to understand what Docker is 7 | 8 | - **[1 - Running docker containers](1-run-container.md)** 9 | 10 | - **[2 - Container networks](2-cont-networks.md)**: how containers talk to each other 11 | 12 | - **[3 - Building custom images with the `Dockerfile`](3-docker-images.md)** 13 | 14 | - **[4 - Persisting data with volumes](4-docker-volues.md)**: how to persist data from containers 15 | 16 | - **[5 - Running multiple-container applications with Docker compose](5-docker-compose.md)** 17 | 18 | # Basic intuition 19 | Here are some important intuitions on how Docker works. 20 | 21 | ## 1 - What is containerization? 22 | 23 | Containerization is the idea of packaging the application with its dependencies. Before virtualization and Docker existed, if we wanted to run an application in a server (or in our local machine) in order to test it, we needed to download the software packages that the application would be built on: for example, a python application requires a version of Python, flask, a Database, other libraries, etc. 
That became messy pretty fast, as developers needed to run and test different applications on the same computer; packages collided, and applications worked differently on different machines... 24 | 25 | With that, virtualization appeared. The idea is pretty simple: instead of downloading all the dependencies on our local machine, we create a 'clean' and isolated virtual machine where we can install all the packages that the application requires, without affecting the main environment. That way, we keep the 'mess' isolated into different VMs. 26 | 27 | Finally, containerization appeared as an enhancement of virtualization. Virtualization still had a problem: spinning up a VM is slow, and it uses a significant amount of space in the host machine. This happens because, for each machine, a full Operating System needs to be stored and launched. But then, an idea came up: if most of the services we need are Linux-based, why do we need to download a full operating system several times? Why not "share" a base Linux operating system and run isolated processes with just the things needed for that application? This is exactly what Docker does. 28 | 29 | ## 2 - What's the difference between Vagrant and Docker? 30 | Docker and Vagrant are two different ways to build a containerized application; they have two major differences in the way they work: 31 | 32 | **Vagrant**'s main distinctions from Docker are: 33 | 1. Vagrant launches a full Virtual Machine. That means that in order for it to run, Vagrant needs to specify how much RAM, CPU and Storage from our local device it needs to allocate. Once the VM is running, those resources will be blocked even if our VM is not using them. On top of that, VMs are heavy-weight systems: they take some time (on the order of dozens of seconds) to launch. It is exactly like "a computer inside a computer": it needs to be launched and halted. 34 | 2. Like in a regular computer, Vagrant requires us to build the application 'from the ground up'. With Vagrant, we start with an empty VM, on which we install an OS, the packages we require, the databases, and all we need inside of it to run our application. All of those steps are automated by the `Vagrantfile`, but each step still takes place the same way as in a regular computer (such as, for example, running the applications). 35 | 36 | **Docker**, on the other hand, works as follows: 37 | 1. Instead of instantiating a full Virtual Machine with specific RAM, CPU and Storage, Docker containers run on a much lighter-weight virtualization layer. It still uses a hypervisor, but Docker images run as a regular process in our laptop, without 'blocking' resources: they share RAM and storage with the local machine. This allows much more flexibility, since Docker will take only what it requires. 38 | 2. Docker works with 'images'. An image is a 'snapshot' of what a VM looked like at a point in time; what we need to know about it is that it is really fast to launch: once a Docker image is created, it takes around a second to launch it. Unlike a full VM, however, Docker images cannot be undone or modified. If we want to remove or change something, a brand new image needs to be created from scratch. 39 | 40 | ## 3 - What is a Docker image, how is it created, and how does Docker use it? 41 | A Docker image is simply "how a machine looked at a point in time". Of course, that point in time is defined by us in the `Dockerfile`. 42 | 43 | > A much more detailed explanation of the `Dockerfile` syntax can be found in [this section](3-docker-images.md).
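Before walking through an example `Dockerfile` line by line (below), here is how such an image is typically built and launched from the terminal (a minimal sketch; the image name `my-python-app` is made up):

```sh
# Build an image from the Dockerfile in the current folder and give it a name.
docker image build -t my-python-app .

# Start a container from that image (this is the near-instantaneous step).
docker container run my-python-app
```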
44 | 45 | In the `Dockerfile`, we tell Docker what steps it needs to perform before taking the snapshot and saving it as an image, so it can be copied and run later. Here is a very simple example on how Docker creates a new image through a `Dockerfile`: 46 | - `FROM alpine`: "go to Dockerhub (online repository), download the `alpine` image (which is a Docker image with the Alpine Linux distribution installed), and run it" 47 | - `RUN apk --no-cache add python py-pip`: "Once Alpine is running, install the Python environement"; Note that `apk` is the Alpine version of the `apt` command (which is a package manager, it installs things). 48 | - `ADD . /app`: "share the contents of the current folder with the container, and name it `/app`" For the sake of this example, let's assume there is a file called `app.py` inside of the `/app` folder with our code in it. 49 | - `WORKDIR /app`: "Go inside the `/app` directory" 50 | - `CMD [ "python", "app.py" ]`: "Run the command `python app.py`" (This will launch the python application) 51 | 52 | Once all these steps are run, Docker will take a 'snapshot' of this (the Docker image), and save it locally. This Docker image will now be used as the 'mother' of all Docker instances. 53 | Now, we can create an instance of that image (using the `docker run` command): once we tell it to run, Docker will take this image, make a copy, and run it from the point it was saved. The magic of this is that this process is nearly instantaneous: the python application starts running nearly instantaneously after we run the command! 54 | With this, if the image reaches an error or gets corrupted, no problem: we Launch another! It will start from the initial point we saved it and run normally! (Here is where Kubernetes comes in: it orchestrates Docker instances). 55 | -------------------------------------------------------------------------------- /4-vagrant/vagrantfile-syntax.md: -------------------------------------------------------------------------------- 1 | # Understanding the syntax of the `Vagrantfile` 2 | 3 | As mentioned in the [introduction](/4-vagrant/README.md#basics-of-vagrant-and-intuition) 4 | of this part, the `Vagrantfile` is simply a 'list of tasks' that Vagrant needs perform 5 | to set up our application's environment. These tasks follow in three categories (each 6 | one will be a chapter of this document): 7 | - **[1 - Launching the VM](#1---launching-the-vm)**: Vagrant needs to know which 8 | operating system to use, how much CPU and RAM it requires, which ports should we 9 | forward, and which files should we share with the VM. 10 | - **[2 - Setting up the evironment](#2---setting-up-the-environment)**: Vagrant will 11 | need to run Unix commands to get the evironment ready for us to run our application in. 12 | - **3 - Launching Docker containers**: (to be done) 13 | 14 | Before jumping into these parts, it is important to mention the basics of the 15 | `Vagrantfile` syntax, which is written in `Ruby`. Luckily, it is not necessary an 16 | extensive knowledge of `Ruby` to use Vagrant; all we need to understand is on the 17 | chapter 'zero' below. 18 | 19 | # 0 - Understanding the `Vagrantfile` 20 | 21 | Vagrant works by introducing a `Vagrant` object that sets up the configuration. The only 22 | thing we need to understand for this course is that all commands in the `Vagrantfile` 23 | simply update this `Vagrant` object, which will be the one used by Vagrant to launch and 24 | set the VM. 
25 | 26 | Normally, all `Vagrantfile`s have this basic syntax: 27 | ```Ruby 28 | Vagrant.configure(2) do |config| 29 | 30 | # Configuration definition (see section 1 'Launching the VM') 31 | end 32 | ``` 33 | - The code above allows us to modify the `Vagrant` object just created; The `|config|` 34 | statement defines the name we will use to access it (inside the `do` statement, 35 | everytime we call `config`, the `Vagrant` object will get updated). 36 | 37 | > Note: the number `2` inside the `.configure()` method represents the version of the 38 | > configuration object that will be used between the `do` and the `end`. 39 | 40 | All commands defined in the below parts should be added between the `do` and `end` 41 | statements above. 42 | 43 | # 1 - Launching the VM 44 | 45 | In this section, we will add the commands required to tell Vagrant the infrastructure 46 | (CPU, RAM), and the operating system required. We will define these commands through 47 | the `.vm` statement. 48 | 49 | ## 1.1 Setting up the VM requirements 50 | 51 | To set up the Infrastructure requirements (CPU, RAM) needed for the VM, we need to 52 | define the the VM provider (in our case, it will always be `"virtualbox"`), as well as 53 | access the arguments associated to it: 54 | 55 | ```Ruby 56 | #... 57 | config.vm.provider "virtualbox" do |vb| 58 | vb.cpus = 1 59 | vb.memory = "4096" 60 | #... 61 | ``` 62 | - Defines the configuration for the VM launched by Virtualbox: `.cpus` defines the 63 | number of CPU workers, while `.memory` defines the number of MB of RAM required. 64 | 65 | ## 1.1 Setting up the Operating System 66 | 67 | ```Ruby 68 | #... 69 | config.vm.box = "ubuntu/bionic64" 70 | #... 71 | ``` 72 | - Sets up the OS to the one specified (in the case above, we are using an Ubuntu Bionic 73 | 64-bit distribution). 74 | 75 | ```Ruby 76 | #... 77 | config.vm.hostname = "name-of-our-vm" 78 | #... 79 | ``` 80 | - Sets up the display name for when we ssh into the machine (by default, thi will be 81 | set up to `vagrant`). 82 | 83 | ## 1.2 Setting up the network requirements 84 | ### Forwarding ports 85 | During the course, we will run applications inside our VM. In order to access them in 86 | the browser of our local machine, we will need to forward the ports 87 | [as explained in this section](../0-basic-concepts/README.md#what-is-port-forwarding). 88 | 89 | ```Ruby 90 | #... 91 | config.vm.network "forwarded_port", guest: port_inside_VM, host: port_local_machine, host_ip: "127.0.0.1" 92 | #... 93 | ``` 94 | - The code above connects all traffic on the VM running in `port_inside_VM` to the 95 | `127.0.0.1:port_local_machine`. 96 | 97 | > Note: we can remove the last part of this statement `host: "127.0.0.1"` in order to 98 | > forward all traffic running inside the VM in the port specified to any IP in the host 99 | > IP, although this might cause problems in Windows machines. 100 | 101 | ### Setting up the VM's private IP 102 | 103 | In order to SSH inside the VM, its process requires to have a private IP address. We 104 | set it up with the following command: 105 | ```Ruby 106 | #... 107 | config.vm.network "private_network", ip: "192.168.33.10" 108 | #... 109 | ``` 110 | - Sets up a private network IP address which will allow vagrant to ssh into the VM 111 | (it is not necessary to specify the IP address). 
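Once the VM is up, the network settings from this section can be sanity-checked from the host machine (a sketch; it assumes port 5000 was forwarded and the private IP shown above was used):

```sh
# Hit the forwarded port on the local machine (should reach the app running in the VM).
curl http://127.0.0.1:5000

# Check that the VM answers on its private IP.
ping -c 1 192.168.33.10
```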
112 | 113 | ## 1.3 Sharing folders with the VM 114 | 115 | By default, Vagrant will create a `/vagrant` folder inside the VM with all contents of 116 | the folder where the `Vagrantfile` is stored (the contents of these files will be 117 | accessible both through the VM and the local machine). 118 | 119 | To share folders with the VM we can use: 120 | ```Ruby 121 | config.vm.synced_folder "", "" 122 | ``` 123 | - The code above shares the `` with `` (both entities 124 | can modify its contents). 125 | 126 | As an example, let's look at how sharing the working folder with the VM as a folder 127 | called `/vagrant` (what vagrant does by default) would look like: 128 | ```Ruby 129 | config.vm.synced_folder ".", "/vagrant" 130 | ``` 131 | 132 | ## 1.4 Copying files inside the VM 133 | We can also copy some files in our local machine to the VM using the below syntax: 134 | ```Ruby 135 | #... 136 | config.vm.provision "file", source: "/local-file", destination: "/filename" 137 | #... 138 | ``` 139 | - Shares file called `local-file` in the local machine, and puts it in the specified 140 | path inside the vm. 141 | 142 | > Note: this command will only work once `provision` is run (since we use the 143 | > `vm.provision` method) 144 | 145 | A useful implementation uses an if statement to check if the file exists. In the case 146 | it does, it forwards it inside the VM. The example below copies the `.gitconfig` (which 147 | keeps configuration about GitHub) file in our local home address (`~`) and shares it 148 | with the home address of the VM, if the file exists: 149 | ```Ruby 150 | #... 151 | if File.exists?(File.expand_path("~/.gitconfig")) 152 | config.vm.provision "file", source: "~/.gitconfig", destination: "~/.gitconfig" 153 | end 154 | #... 155 | ``` 156 | 157 | # 2 - Setting up the environment 158 | 159 | Once the VM is set up, we can start installing and running packages on it. We do so by 160 | asking Vagrant to run command-line Unix statements (review all 161 | [unix statements on this section](../2-unix/README.md)) with the below syntax: 162 | 163 | ```Ruby 164 | #... 165 | config.vm.provision "shell", inline: <<-SHELL 166 | apt-get update 167 | apt-get install -y git 168 | SHELL 169 | #... 170 | ``` 171 | - The code above runs the unix commands `apt-get update` and `apt-get install -y git` 172 | inside the VM after it is provisioned. Any Unix command can be automated thanks to 173 | this command 174 | 175 | > Note: this command will only work once `vagrant provision` is run (since we use the 176 | > `vm.provision` method) 177 | -------------------------------------------------------------------------------- /3-git/2-git-workflow.md: -------------------------------------------------------------------------------- 1 | # GIT WORKFLOW 2 | 3 | This document summarizes the 'typical' workflow of commands used while working with `Git`. This document assumes the use of GitHub. 4 | 5 | Note: all the commands listed on this document will work on a terminal after downloading `Git`. 6 | 7 | ## Step 1: Initialize the project (`git init` or `git clone`) 8 | #### Step 1.1: new Repository 9 | Open a new Repository in GitHub, give it a name. 10 | 11 | #### Step 1.2 (possibility A): 12 | Use `git init` to start the project from scratch. In that case, it is recommended to create `README.md` file and `.gitignore` files. GitHub enables the creation of a defaule `README.md` file for the project, although it is always better to create your own. 
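A minimal sketch of possibility A could look like this (the remote URL is a placeholder):

```sh
# Start a fresh local repository and create the recommended starter files.
git init
touch README.md .gitignore

# Point the local repository at your remote repository on GitHub.
git remote add origin https://github.com/<user>/<repository>.git
```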
13 | 14 | #### Step 1.2 (possibility B): 15 | Use `git clone [URL]` to copy an existent project in your local machine. You have then two possibilities: 16 | - You can either use the cloned repository as a 'base' to start one of your own. In this case you will have to add the URL of **your** remote repository using the command `git remote add origin [your-repository-URL]` (it is recommended to use `origin` as a name for the repository as it will be the 'base' one), 17 | - Or you might have cloned to resume working on the repository you cloned from (you might have initialized the `README.md` file on GitHub, or started using a different machine), in that case it is recommended to use `git clone -u [URL]` command instead (`-u` will add the remote repository as a shortcut for 'origin'). 18 | 19 | ## Step 2: Make an initial commit statement to the `master` branch (first time) 20 | Start working on your code, create a 'base' working project with some functionality. Say we created a basic application saved in a file `app.py`. After we verify it works, we add it to the staging area, commit it, and push it to GitHub. 21 | 22 | ```git status``` 23 | - It is always a good idea to check which files `Git` recognizes before adding any one onto the staging area: we might have created files we are not aware of and might want to add it onto the `.gitignore` file. 24 | 25 | ```git add app.py``` 26 | - We add the `app.py` file we just created to the Staging area. 27 | 28 | ```git commit -m 'first commit'``` 29 | - After adding the new file to the staging area, we commit it always adding a descriptor of the commit. (Before committing, we can run `git status` again to check we correctly added all files we wanted -they will appear in green- and did not forget any file). 30 | 31 | ```git push origin master``` 32 | - Updates the `master` branch of the remote repository. Careful: this command is allowed since it is the **first time we commit**, after this time, we should **never** push to the remote `master` branch directly, we should **always** push to another branch, **then** issue a pull request to `master` so others can review. 33 | 34 | ## Step 3: Create a new branch, update the local repository (usual case) 35 | Break down the improvements to be done to your code into discrete parts, and start working on them one by one. For the first new feature (always matching with a story on the Scrum board), create a new branch: 36 | 37 | ```git checkout -b new-feature``` 38 | - Creates a `new-feature` branch, and moves the `HEAD` pointer (an object that tells GitHub the 'place we are currently working on') to the `new-feature` branch. From now on, the changes we do will be saved on this branch (unless we run `git checkout` again.). Since we were on the `master` branch, now `new-feature` is an exact copy of the contents of `master`. 39 | 40 | Then, we start working on the new feature. For example, let's say we add some lines of code to `app.py` and create a new file named `models.py`, where we will store the models that `app.py` will use. After we are done modifying our files, we work as follows: 41 | 42 | ```git status``` 43 | - Again, we run `git status` to confirm everything is fine (we shoud see we modified `app.py` and created a new file `models.py`) 44 | 45 | ```git add app.py``` 46 | ```git add models.py``` 47 | - We must run the `add` command for all modified and newly created files. After this step we can re-run `git status` to check we have correctly added the files. 
48 | 49 | ```git commit -m 'created a new feature'``` 50 | - We then commit, and keep describing what we are doing at each commit statement (so we can understand it when we run commands such as `git log`). 51 | 52 | Now that the local branch is updated, it is time to push the code upstream. 53 | 54 | 55 | ## Step 4: Pushing the data upstream and issuing a Pull Request 56 | ### Step 4.1: Verifying if there are merge conflicts before pushing code to the remote repository 57 | It is good practice to sort out merge conflicts on our local repository before pushing them on GitHub. 58 | 59 | Note: GitHub also allows to easily solve a Merge Conflict when a Pull Request is issued, directly on the remote repository; if we wish to do so, this whole *Step 4.1* can be ignored. 60 | 61 | This step, however, is considered best practice. 62 | 63 | #### Step 4.1.1: update the master branch 64 | There are two ways of ensuring our code does not contain merge conflicts, it starts always with the same step: pulling the last changes done to master. We do so after switching to the master branch and fetching the data from the remote repository `origin/master`: 65 | 66 | ```git checkout master``` 67 | - switches to the master branch (locally) 68 | 69 | ```git fetch origin master``` 70 | - Fetching the last changes from the `master` branch updates the local `.git/` folder (but not the local workspace) with the changes made by other team members. Note that this step can be done with the `pull` command as well. 71 | 72 | After updating `master`, we can return to the branch we were working on: `new-feature`: 73 | ```git checkout new-feature``` 74 | - switches back to the 'new-feature' branch 75 | 76 | ##### Step 4.1.2: merge master onto the branch or rebase the branch 77 | At this point, we need to choose one of two possibilities: either merging `master` onto `new-branch` or rebasing `new-feature`. In both cases we will need to sort out the merge conflicts manually, as explained in the [cheatsheet](https://github.com/lombardero/nyu-devops-concepts/blob/master/3-git/1-complete-cheatsheet.md#sorting-out-merge-conflicts). 78 | 79 | **Possibility A: Merging `master` onto `new-feature`** will create a new commit statement in the `Git` tree, showing what really happened: we were working on a branch, in the meantime, `master` was updated by some team members, and at one point we decided to incorporate these changes in our branch. To implement this step, we use: 80 | 81 | ```git merge master``` 82 | - Merges `master` onto `new-feature` (there might be some merge conflicts) 83 | 84 | **Possibility B: Rebasing `new-feature`** will take the latest changes we just pulled from the `master` branch, and apply them to each of the commit statements of `new-feature`, as if we had just started working on `new-feature` right after pulling the changes. In a sense, it is 'faking' time. Check out the ['rebase' explanation in the Git cheatsheet](https://github.com/lombardero/nyu-devops-concepts/blob/master/3-git/1-complete-cheatsheet.md#33-rebasing). 85 | 86 | ```git rebase master``` 87 | - rebases `new-feature` into the latest fetched commit of the `master` branch. 
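If the merge or the rebase stops because of a conflict, the resolution loop looks roughly like the sketch below (the file name is just an example):

```sh
git status                  # lists the files that have conflicts
# edit each conflicted file and remove the <<<<<<< ======= >>>>>>> markers
git add app.py              # mark the file as resolved
git rebase --continue       # or `git commit` if we chose to merge instead
```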
88 | 89 | ### Step 4.2: pushing data upstream 90 | Now that we have ensured our code will not create a Merge Conflict in the remote repository, we can push the data upstream: 91 | 92 | ```git push origin new-feature``` 93 | - This command pushes the code onto a new branch of the remote repository called `new-feature` (it is important to be specific with names so other team members do not use the same branch name). 94 | 95 | Once this command is run, the pull request can be issued (it is easier to do so in GitHub directly). -------------------------------------------------------------------------------- /1-get-started/README.md: -------------------------------------------------------------------------------- 1 | # GET STARTED: BASIC DEVELOPMENT TOOLS 2 | 3 | These are some of the tools you will require for the Course's assignment. 4 | 5 | # 1 Text editor 6 | A key component to start programming is a good text editor. The text editor is the tool you will use to write all your lines of code, visualize them, check them in real time, and integrate them; where you might spend many hours trying to find the reasons why your code does not behave as you planned. And that is why you need the best editor you can find. Here is a recommendation: `VS Code` (although you can use any one you want). 7 | 8 | ## 1.1 - VS Code (Recommendation) 9 | Visual Studio Code is Microsoft's text editor. Its main killer features are: 10 | - Super-easy integration with any programming language or library (as the ones used on the course: `python`, `ruby`, `vagrant`, `docker` and many others), which can be searched and downloaded on the program itself, and enables syntax checking, amazing visual coding and debugging features 11 | - A super useful shell command `code` that allows you to open any file directly into `VS Code` running `code `. 12 | - An amazing debugging system, allowing to 'pause' the running code at any line and check the contents of variables by hovering over them 13 | - Easy navigation though package function definitions and variables 14 | - AI-enabled syntax propositions for `Python`, `Javascript`, and `Java` 15 | 16 | Download VS Code for free on its official page [here](https://code.visualstudio.com/). 17 | 18 | ### 1.1.0 - Setting up VS Code 19 | #### Adding the `code` command on a Mac 20 | After `VS Code` has been installed, you can install the `code` shortcut by following the below steps (or check the [Official VS Code documentation about it](https://code.visualstudio.com/docs/setup/mac#_launching-from-the-command-line)): 21 | - Launch `VS Code` 22 | - Press `F1` to open the command palette, and type 'shell command' to find the `Shell Command: Install 'code' command in PATH` command; run it 23 | - Restart your terminal 24 | 25 | You can now run `code ` command to open files with VS Code directly! 26 | 27 | #### General set up links 28 | Official `VS Code` [guide for Windows in this link](https://code.visualstudio.com/docs/setup/windows). 29 | 30 | Official `VS Code` [guide for mac in this link](https://code.visualstudio.com/docs/setup/windows). 31 | 32 | ### 1.1.1 - Some useful commands in VS Code 33 | - `Ctrl + Space`: gives you the list of possible statements (such as available arguments of functions). Requires having the extension of the language you are working on installed (check below paragraph). 
34 | - `Alt + Shift + F`: reformats the code with the proper spacings (not useful for `Python`) 35 | 36 | ### 1.1.2 - Useful `VS Code` extensions 37 | Extensions enhance the capabilities of `VS Code`: spelling corrections, code navigation (see what's inside objects), and AI-enabled suggestions. Check the [official extension tutorial here](https://code.visualstudio.com/docs/editor/extension-gallery). 38 | 39 | You can search for extensions clicking the 'extensions' button (the one that looks like a tiny window), on the left vertical bar. Here are some useful extensions that we will use on the course: 40 | - Programming languages: the `python`, and `vagrant` extensions should be added to VS Code for this course to edit our code (`python` for our project, `ruby` for our vagrantfile) 41 | - `GitLens`: this extension will let you know who wrote any line of code in the project when clicked 42 | 43 | You can find a useful VS Code tutorial for using `Flask` in this [link](https://code.visualstudio.com/docs/python/tutorial-flask). 44 | 45 | ### 1.1.3 - Debugging useful tools 46 | Find out how to use VS Code's debugger [here](https://code.visualstudio.com/docs/python/python-tutorial#_configure-and-run-the-debugger). 47 | 48 | Find out about `Python`-specific features for debugging using VS Code on [this official guide](https://code.visualstudio.com/docs/python/debugging). 49 | 50 | Go to `Debug > Start debugging` to start the debugging tool (we need to choose an environment - such as `Python`): 51 | - Create 'break points' (by selecting critical lines of our code - a red dot should appear in the left side), that will allow us to check the status of our code (the contents of each variable by hovering around with the mouse, for example) just before that line is executed. (once we run the code, the red dot should appear with a yellow mark -> that means the line 'break point' selected is about to be executed). 52 | - A 'play bar' will appear on top of the screen, allowing us to move from break point to break point (using the play button), from line to line (using the 'jump' arrow), and enter the 'layers' of our code (we can access the source code when we call a in-built function using the 'down' arrow, and go back to the top layer with the 'up' arrow of the play bar) 53 | - Check the output of operations in real time ('what would happen if I run this code differently') using the Debug Console (can be used, for example, to 'execute' variables at each point of the code to see what is inside of the variables). 54 | 55 | Go to `View > Debug` to access many features of VS Code Debug mode: 56 | - Clicking on the `Variables` tab on the left menu of the screen allows us to check all defined variables of our code, and its contents in real time. It can be used (by double-clicking) to modify the values stored on the variables. 57 | - We can define 'watchers' (also on the left menu of the screen), which will monitor the contents of a specific variable in real time through the code. 58 | 59 | # 2 Terminal 60 | The terminal (also called 'shell', 'console' or 'command line') is the second most important tool of a developer. Terminals enable complete control over a machine through command line statements, which allow us to run programs, install packages and access core functionalities of our machine. 61 | 62 | In this course, we will use the terminal to install packages, run virtual machines and access them. 
Once inside of the virtual machines, we the terminal will allow us to read the logs of our service, edit it and debug it. 63 | 64 | Access to terminals in different devices: 65 | - On **Mac**, you can run your terminal by searching for `Terminal` in the Apps tab, or go to `Dock > Launch Pad > Other > Terminal`. 66 | - On **Windows**, go to `Start > Command Prompt`, or search for `cmd` on the Windows search tab. 67 | - On **VS Code**, a built-in terminal (opened on the current folder where your Workspace is) can be launched by clicking `Terminal > New Terminal` on the top menu. 68 | - Alternatively, 3rd party command prompt applications can be installed. 69 | 70 | ## Windows users: installing `git bash` command prompt 71 | `Git bash` is an application for Windows that allows to run the `git` commands (such as the ones on the [`git` cheatsheet](../3-git/1-complete-cheatsheet.md)). It is not required for Mac since `git` commands are installed by default. 72 | 73 | Official link for downloading `Git Bash` [here](https://gitforwindows.org/). 74 | 75 | 76 | # 3 Postman 77 | Postman is a very useful debugging tool to check any kind of client request to our server. Check what a 'server request' is [on this document](../0-basic-concepts/README.md#22-http-protocol). 78 | 79 | Download postman on the follwoing [link](https://www.getpostman.com/). 80 | 81 | Postman is a testing tool for servers; it wil allow us to send any request to our server and see how it reacts to it. To make it work we simply need to start our server locally. Once the server is running (errors should be printed out in our terminal for easy debugging), we can start sending client requests. 82 | 83 | The things we should pay attention to are: 84 | - The request type (`GET`, `PUT`, `POST`, `DELETE`...), on the left side of the top URL navigation bar 85 | - The URL navigation bar, where we can set up the URL we wish to visit (such as `localhost:5000`) 86 | - The Header of our request (should be set to `application/json` if we are sending a `json` format) 87 | - The Body (the data we wish to send to the server, usually in `json` format) of our request 88 | 89 | We can, for example, start by doing an initial `GET` request to the base URL: `localhost:5000`, for example (if we are running our application on port `5000`) to check if the home page is working correctly. 90 | -------------------------------------------------------------------------------- /5-python/a-python/4-flask.md: -------------------------------------------------------------------------------- 1 | # Flask 2 | 3 | This document is organised as follows: 4 | 5 | **[1 - Intuitive introduction to Flask](#1---introduction-to-oop)** 6 | 7 | **[2 - Building web applications with Flask](#2---building-web-applications-with-flask)** 8 | - [2.1 Creating a Flask application](#21-installing-flask) 9 | - [2.2 Creating a Flask application](#22-creating-the-flask-application) 10 | - [2.3 Routing requests, sending responses](#23-routing-requests-sending-responses) 11 | 12 | **[3 - Running the application]()** (To be done) 13 | 14 | # 1 - Intuitive introduction to Flask 15 | 16 | ## What is `Flask`? 17 | `Flask` is a Python Library that allows it to listen to client's requests and prepare back responses (recall how a server works [here](../../0-basic-concepts/README.md#22-http-protocol)). It is the main library that allows to build the backend of a server with Python. 
18 | 19 | As any library, it imports Classes (review what a Python class is [here](2-oop-with-python.md)) and functions, which enable certain functionalities to make this work. 20 | 21 | ## What is a server backend? 22 | The backend is the "logic" of a web application. Some applications do not require backend: for example, most classic personal web pages simply require a static display of a series of screens; these are run directly on the browser with a simple logic such as: "If the user clicks this button, display screen A, otherwise display screen B"; screen A and B are always the same (for example, the CV or the portfolio of that person). That simple logic can be run directly in the browser, or the frontend. 23 | Other more complex web applications, such as for example Amazon, need to do fancier stuff: they tailor products displayed in the home page to the user, they get and store information such as items added in the shopcart (which need to be remembered the next time the user enters the page), store the items purchased by the user so it can recommend more... Lots of things! All this complex logic that requires user input, data storage or heavy weight machine learning algorithms needs to be run in a server somewhere: that is the backend. The backend is the "core" of web applications, where all fancy stuff happens. And it needs to happen away from the user (nowadays it runs mostly on the cloud), so the data is stored securely, and the computations are done efficiently, so the user has the desired experience. 24 | 25 | ## What are the main functionalities that Flask brings? 26 | Now that we know what a backend is, we can talk of what `flask` allows Python to do. Python on its own is a "scripting" language, that means that it can be used to run "series of tasks" in a linear way: the program starts, performs some computations (such as reading a file, computing some results, and writing them on another file), and then it stops. 27 | 28 | `Flask` brings a simple but key component that will allow Python to become a server: and that is an "event listener". The event listener is an object that will continuously listen to client requests. Recalling from the [introduction](../../0-basic-concepts/README.md#22-http-protocol), at any point in time, servers should be able to handle any request sent by the client (remember a request is a combination of a **URL** and an **HTTP method**, amongst other data), and it should formulate a response. The event listener brings those functionalities. Under the covers, it works as an infinite loop continuously checking if any request has arrived. Once a request is heared, it will execute the function that triggers the response. 29 | 30 | Once a request is heared by the event listener, it will execute the function in the code that triggers the response: that functionality (called "routing") is also brought by `flask`. Routing means that according to the type of request, our server should be able to trigger the right function (that we will code), so that our server prepares the correct response. 31 | >For example: if a user wants to see all the available products, he will do a `"GET"` request to the `"/products"` URL of our server. In that case, we want our server to query the right table on the database, retrieve the data, and send to the client a list of the currently available products (which will then be sent to the browser and rendered in a nice way for the user to see it). 
This latter part will be coded by us (the developers), while flask will enable us to do it with a very simple syntax. 32 | 33 | This is what Flask does for us, it allows to tell Python "keep on listening to requests" and "execute this function if this URL is hit with this HTTP method". As easy as that: let's see how the syntax works for Flask. 34 | 35 | # 2 - Building web applications with Flask 36 | ## 2.1 Installing `flask` 37 | In order to run flask, first we need to install the library. For that, once Python is installed in our machine, we must run (in the terminal if we want it locally, or in the `Vagrantfile` or `Dockerfile`): `pip install flask`. That will download a bunch of files with Classes and functions that we will use in our code. 38 | 39 | ## 2.2 Creating the flask application 40 | A flask application is simply an "event listener": an object that will tell Python to never stop (unless told so) listening to requests and processing responses. 41 | 42 | ### 2.2.1 Creating the `app.py` file 43 | We first create the base flask file, in which the `app` object will be created. To do so, we open our favourite text editor and create brand new file called: `my_app.py` (for example). 44 | > Note: naming the file `my_app.py` is arbitrary. The file can be named as we want: in some examples in the course, the `app` object is created in the `__init__.py` file (which is a special file in Python that allows to execute Python files inside sub-folder of the project, otherwise Python ignores those files). 45 | > Note 2: Naming the main app object `app` is a convention: it is an object, so it can be called any name we want. To make our code readable, however, it is good practice to name it simply `app` (so that others understand it). 46 | 47 | ### 2.2.2 Importing Flask 48 | Flask is a library, which is simply a set of Python files with a bunch of Classes (remember: classes are "blueprints" from which to create objects) and functions already programmed so that we can use them in our code. In order to use those files, we want to specify **which specific file**, and **which specific class or function** we want to use in our current file. For performance, we do not want to import things we are not using. 49 | 50 | To import the main functionality of flask (the creation of a `Flask` object), we use: 51 | ```python 52 | from flask import Flask 53 | ``` 54 | - This code imports the `Flask` class (which creates an event listener), from the `flask` library (which is a file we downloaded when we ran `pip install flask`). 55 | 56 | ### 2.2.3 Creating the `app` object 57 | The app object, which will listen to all server requests and then trigger responses cane created with this simple code: 58 | ```python 59 | app = Flask(__name__) 60 | ``` 61 | - creates a `Flask` object named `app` that constantly listens to server requests. 62 | > Note: the `__name__` variable is a special variable in Python that always holds the name of the file that executed it. In this example, since the `my_app.py` file is the one that contains `__name__`, then `__name__` will be equal to `"my_app"` (careful: `"my_app"` is a string, and has nothing to do with the `app` object we just created). It allows Flask to store the location where the `app` object is created. 63 | 64 | That's it! With these few lines of code, our Python script is now capable of listening to server requests! 
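Putting sections 2.2.2 and 2.2.3 together, a minimal `my_app.py` could look like the sketch below (the `app.run()` call is one common way to start Flask's built-in development server; running the application is discussed in section 3):

```python
from flask import Flask

# Create the event listener: a Flask object conventionally named `app`
app = Flask(__name__)

if __name__ == "__main__":
    # Start the development server (it listens on port 5000 by default)
    app.run()
```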
65 | 66 | ## 2.3 Routing requests, sending responses 67 | The `app` object of the flask library allows us to route requests and prepare responses very easily. We do that using a decorator. 68 | 69 | ### What is a decorator? 70 | A decorator is a special function in python that allows us to define a specific functionality, and adding it into another function. Flask adds the functionality of routing responses using the decorator `@app.route()`. 71 | It works the following way: 72 | - we define a function in our code, that (for example), returns some nice HTML that renders "Welcome to the home page!" (we can use any arbitrary name, but it is good practice to use a name that describes what the function does; let's choose `home_page()`). 73 | - on top of that function, as a decorator, we add `@app.route("/")`. This decorator is defined by flask and adds the following functionality to the function we just defined: "everytime a user hits the "/" URL (which is the "home" URL), execute that function". 74 | 75 | That's it! Now the function we defined gets executed only when the user triggers some action, with only one additional line of code! That is the magic of Flask. Let's see all the pieces together. 76 | 77 | ### 2.3.1 Creating the home route 78 | ```python 79 | @app.route("/", methods = ["GET"]) 80 | def home_page(): 81 | return "

<h1>Home Page</h1>

" 82 | ``` 83 | - we define the `home_page` function which returns some basic HTML code (which will be rendered nicely by the browser), and added the decorator provided by flask that tells this function to be executed every time the user hits the home page with a `GET` request (which is the default initially). -------------------------------------------------------------------------------- /2-unix/README.md: -------------------------------------------------------------------------------- 1 | # UNIX System administration 2 | 3 | This section aims to go over the very basics of Linux system administration. Linux 4 | system administration is a key skill to earn as a software developer, as over 95% 5 | of software applications are runing on a Linux system. 6 | 7 | > Note: although Linux and Unix are often used interchangeably, Unix is the system 8 | > that was built by XXX on XXXX, while Linux is a specific branch of flavors that 9 | > come from Unix systems. Macs, for example, are Unix-based but not Linux. 10 | 11 | This section is organised in the following sub-sections: 12 | - [**1 - Linux system administration basics**](): basics of Linux, filesystem structure, 13 | file attributes 14 | - [**2 - Bash scripting cheatsheet**](): useful commands 15 | - [2.1 Basic file navigation and management]() `cd`, `ls`, `pwd`, `cp`, `mv`, `rm`, 16 | `cat`, `mkdir`, `echo`, `touch`, `grep`, `find`, `>`, `|`, `head`, `tail`, `tar` and 17 | wildcards. 18 | - [2.2 Environment variables]() 19 | - [2.3 User and persmission management](): `chmod` 20 | - [2.4 Using `vi` and `vim` (Linux file editors)]() 21 | - [2.5 Working remotely](): `ssh`, `curl`, `wget`. 22 | 23 | 24 | This document lists useful commands to be used while working on a UNIX shell: from 25 | basic file and folder navigation, to running processes. 26 | 27 | Note: all the commands listed on this document will only work on a UNIX terminal. 28 | 29 | # 1 - Basic commands 30 | 31 | ## 1.0 Get help 32 | 33 | ```sh 34 | $ --help 35 | ``` 36 | - Will output syntax guidelines to run any command 37 | 38 | ## 1.1 File and folder navigation 39 | 40 | ### Move through folders 41 | 42 | ```sh 43 | $ cd 44 | ``` 45 | - cd stands for "change directory"; this command moves the folder the terminal is 46 | looking to into to the specified absolute or relative path. Example: `cd /folder` 47 | will move to `folder` (which should be inside the folder we are currently looking to). 48 | - `cd ..` moves to the upwards (or 'preceding') directory 49 | 50 | > Note: in some terminals, typing `cd ` (with a space), and pressing `Tab` allows 51 | > the terminal to jump between available sub-folders possible options. 52 | 53 | ```sh 54 | $ pwd 55 | ``` 56 | - stands for "Print Working Directory": returns the absolute path of the current directory 57 | 58 | ### Show folder contents 59 | 60 | ```sh 61 | $ ls 62 | ``` 63 | - lists current directory contents (prints the filenames) 64 | Options: 65 | - use `ls -l` to output the file details. `-l` stands for 'long' listing format. 66 | - use `ls -a` to print all files (normal files and hidden ones). `-a` stands for 'all' 67 | - use `ls -la` to combine above 68 | 69 | ### Create, Update, Delete files and folders 70 | - `rm /`: deletes (`rm` stands for 'remove') `` in `` 71 | - `rm -r ` deletes `` and its contents. Note: this command needs 72 | `-r` (Recursive), since the OS will need to recursively enter every folder and file in 73 | the directory to completely erase it. 
74 | - `mv / /` moves `` from a directory to 75 | another 76 | - `mv ` renames file 77 | - `mkdir ` creates a new directory in current path 78 | 79 | ## 1.2 - Check file contents 80 | 81 | ### The `cat` command 82 | 83 | The command `cat` is one of the most useful commands to quickly check on the terminal 84 | the contents of a file. `cat` stands for 'concatenate': the contents of the file will 85 | be 'concatenated' and shown in the terminal. 86 | 87 | ```sh 88 | $ cat 89 | ``` 90 | - Prints on the terminal the contents of `` 91 | 92 | Additional arguments we can use for `cat`: 93 | - `cat -n ` will print the contents of the file with a number showing the 94 | line number 95 | 96 | ## 1.3 - Display on the terminal 97 | 98 | The command `echo` displays a string directly on the terminal. It can be used to 99 | display some statement if something specific happens such as ('Launching application'), 100 | or accessing environment variables (env. variables explained 101 | [here](#3---environment-variables)). 102 | 103 | ```sh 104 | $ echo "This will be displayed on the terminal 105 | ``` 106 | - displays the text above directly on the terminal 107 | 108 | ## 2 - Advanced commands 109 | 110 | ### 2.1 - Downloading and updating packages 111 | 112 | #### the `apt-get` command 113 | 114 | `apt-get` (Advanced Packaging tool) is a command which handles packages in Linux. It 115 | retrieves information about packages from the authenticated sources; it allows to 116 | install, upgrade and remove packages along with their dependencies (We will use 117 | `apt-get` to download Python inside our VM, for example). 118 | 119 | ```sh 120 | $ apt-get install 121 | ``` 122 | - Downloads and installs the package added from the authenticated source 123 | > Note: we can add `-y` for 'Yes' to let the `apt-get` reply 'Yes' for all `[yes/no]` 124 | > queries (such as 'are you sure you want to install?'). 125 | 126 | ```sh 127 | $ apt-get update 128 | ``` 129 | - Checks if there are any updates for the packages installed 130 | 131 | 132 | ### 2.2 - Modifying permissions 133 | 134 | #### How do Unix permissions work? 135 | 136 | Unix systems have three kinds of permissions for each file: `r` for *Read permissions*, 137 | `w` for *Write permissions*, and `x` for *Execute permissions* (permissions can be 138 | checked using the `ls -l` command, which will show `-` when a permission is not 139 | available). These permissions need to be set for each one of the three types of users 140 | defined in a Unix system. 141 | 142 | > Note: Unix systems have three kinds of users: 'user' (the owner of the file), 'group' 143 | > (which is useful when many computers have access to one file), and 'others' (which 144 | > is any user not on the first two groups). 145 | 146 | #### The `chmod` command 147 | 148 | The `chmod` (or 'change mode') command allows us to modify the permissions of a file 149 | for the three types of users. The easiest way of doing so is by using the three-digit 150 | code, which uses a each digit to define the permissions for each type of user. 151 | 152 | Each digit is converted into a binary 'mapping' (`1` for 'permission granted' and `0` 153 | for 'permission disabled') of the permissions using the `rwx` (read-write-execute) 154 | format: for example, `0` is `000` in binary, so it stands for 'none of the permissions 155 | granted'; `4` is `100` (or `r--` so 'Only read permission enabled', `7` is `111` so 156 | 'All permissions granted'. 
157 | 158 | That way, `400`, for example, will enable reading permissions for the first type of 159 | user (which is the 'user' of the file itself), and disable any permissions for rest 160 | ('groups' and 'others'); `777`, as another example, would enable all permissions by 161 | all users. 162 | 163 | ```sh 164 | chmod 165 | ``` 166 | - Modifies the permissions of `` and sets them to the ones specified on the 167 | three digit code. 168 | > For example, `chmod 400 file.txt` will only enable reading of `file.txt` by the main 169 | > user, and will not allow groups or others to read it, write it or execute it. 170 | 171 | # 3 - Environment variables 172 | 173 | 'Environment variables' are variables stored in special folders, in order to only reveal 174 | its contents locally (on the terminal), and if requested. 175 | 176 | Environment variables very useful to store confidential data such as passwords and API 177 | keys, which we do not want to reveal in our source code or uploaded in GitHub, for 178 | example. They can also be used to run code in different machines, where the value of 179 | `HOME` is different, for example. 180 | 181 | Note: in Unix systems, global environment variables are stored in the 182 | `/etc/environment` folder and user level variables in `.bashrc` and `.profile` files 183 | of the user's Home folder. 184 | 185 | #### Environment variables to know 186 | 187 | - `PATH` is the list of folder paths (separated by `:`) that our terminal will look 188 | into to understand the commands we run in the terminal. 189 | - `HOME` is the absolute location of the user's home directory 190 | 191 | #### Get list of environment variables 192 | 193 | ```sh 194 | printenv 195 | ``` 196 | - prints in the terminal the list of currently set environment variables 197 | (alternatively, we can use the command `env`) 198 | 199 | ```sh 200 | $ set 201 | ``` 202 | - displays the entire list of set or unset values of shell options (environment 203 | variables). Gives a much more complete list than `env`, with all predefined 204 | evironment variables (even those that have no value assigned to it) 205 | 206 | #### View contents of environment variables 207 | 208 | ```sh 209 | $ echo $ 210 | ``` 211 | - Displays the value of the environment variable. Example: `echo $PATH` 212 | 213 | > Note: the `$` sign is used by Linux to access the value of variables 214 | 215 | #### Update environment variables 216 | ```export =``` 217 | - Sets up the contents of an environment variable. Example: `export PATH=$PATH:opt/bin` 218 | adds an address to `PATH`. 219 | -------------------------------------------------------------------------------- /6-docker/1-run-container.md: -------------------------------------------------------------------------------- 1 | # Running containers 2 | 3 | > Click here to [go to the commands directly](#1---basic-commands). 
4 | ## Table of contents 5 | 6 | - [Running containers](#running-containers) 7 | - [Table of contents](#table-of-contents) 8 | - [1 - Basic commands](#1---basic-commands) 9 | - [1.0 Basic structure](#10-basic-structure) 10 | - [1.1 Querying info](#11-querying-info) 11 | - [2 - Containers](#2---containers) 12 | - [2.1 Launching and stopping containers](#21-launching-and-stopping-containers) 13 | - [Run a container from an image](#run-a-container-from-an-image) 14 | - [Run a command inside of the container](#run-a-command-inside-of-the-container) 15 | - [Start an already created container](#start-an-already-created-container) 16 | - [Stop a container](#stop-a-container) 17 | - [Deleting containers](#deleting-containers) 18 | - [2.2 Messing with running containers](#22-messing-with-running-containers) 19 | - [2.2.1 Getting info](#221-getting-info) 20 | - [Get the container logs](#get-the-container-logs) 21 | - [Check processes in a container (top)](#check-processes-in-a-container-top) 22 | - [Get details of container configuration](#get-details-of-container-configuration) 23 | - [Get stats](#get-stats) 24 | - [2.2.2 Modifying running containers](#222-modifying-running-containers) 25 | - [Getting a shell inside the container](#getting-a-shell-inside-the-container) 26 | - [2.3 Listing containers](#23-listing-containers) 27 | - [Listing running containers](#listing-running-containers) 28 | - [Listing all containers](#listing-all-containers) 29 | 30 | # 1 - Basic commands 31 | ## 1.0 Basic structure 32 | There are two types of commands in Docker: 33 | * Management commands (new): `docker (options)` 34 | * Original syntax: `docker (options)` 35 | > Note: the original `docker run` (which still works) is replaced by `docker container run` 36 | 37 | ## 1.1 Querying info 38 | 39 | Check current version: 40 | ```sh 41 | $ docker version 42 | ``` 43 | * Returns the version 44 | 45 | Check generic information: 46 | ```sh 47 | $ docker info 48 | ``` 49 | * Returns current number of containers (running & stopped), and images 50 | 51 | # 2 - Containers 52 | 53 | ## 2.1 Launching and stopping containers 54 | 55 | ### Run a container from an image 56 | 57 | To run a container from an image for the first time: 58 | 59 | ```sh 60 | $ docker container run : 61 | ``` 62 | 63 | * Looks for a local image matching the name insterted, otherwise looks in Dockerhub for it, and then runs it 64 | * Options: 65 | * `-p :` (or alternatively `--publish`): binds local and container ports (container is accessible through local port). Many options can be specified, such as the IP where the container runs. If no IP address is specified, they will run on `127.0.0.1`. 66 | * `-d` (or alternatively, `--detach`): runs the container as a daemon. 67 | * `--name `: defines the container name (otherwise it is randomly generated) 68 | * `-e EXAMPLE_ENV_VARIABLE=dummy` (or alternatively `--env`): defined an environment variable inside of the container. 69 | * `-t`: allocates a "pseudo-tty", simulates a pairing between a pair of devices (one giving orders, the other receiving them), similar to SSH. It allows to run a command inside the container (usually used along `-i`, which keeps the session open, allowing for multiple commands to be ran) 70 | * `-v :`: allows us to define a name for the given volume 71 | * `-i`: interactive (used usually alongside `-t`), keeps the session open to receive terminal input. 
72 | * `-t`: Allocates a pseudo-TTY (which simulates a real terminal) 73 | * `--net `: connects the container to the specified network 74 | * `--netw-alias `: defines a network alias for the container. A robin-like 75 | routing strategy will be used by Docker when accessing containers using the same 76 | alias. 77 | * `--rm`: clean up (erase) the container when it is exited or stopped. 78 | 79 | > Note: if no `` is specified, Docker will pull the latest version from Dockerhub 80 | 81 | > Note 2: Docker gives each running container a virtual IP inside the docker engine 82 | 83 | ### Run a command inside of the container 84 | 85 | Running optional commands: apart from ``, the `run` command allows to run 86 | alternative commands inside the container with the below syntax: 87 | 88 | ```sh 89 | $ docker container run : 90 | ``` 91 | * Will run the container and execute a command inside of it. 92 | * (Optional) commands: 93 | * `bash`: runs a bash terminal on the container (useful if used along `-it`, to get 94 | a terminal inside the container). 95 | * `sh`: runs the default shell of the container (some distributions of linux such as 96 | Alpine do not have bash to save space). 97 | 98 | Alternatvely to `run`, the `start` command can be used to run a stopped container (it must have 99 | been started already with `run`): 100 | 101 | ### Start an already created container 102 | 103 | Instead of taking an image and starting a container from it, this command allows us to 104 | start up a container that was already created. 105 | 106 | ```sh 107 | $ docker container start 108 | ``` 109 | - Will start the stopped container requested 110 | - Options: 111 | - `-a`: attach STDOUT/STDERR and forward signals (used with `-i` to "enter" the 112 | container terminal) 113 | - `-i`: interactive mode 114 | 115 | ### Stop a container 116 | 117 | Stopping a running container: 118 | 119 | ```sh 120 | $ docker container stop 121 | ``` 122 | 123 | * Stops the running container 124 | 125 | 126 | > Note: as ``, it is enough to run the digits that make that id unique. 127 | 128 | 129 | ### Deleting containers 130 | 131 | ```sh 132 | $ docker container rm 133 | ``` 134 | 135 | * Deletes the container(s) specified 136 | * Options: 137 | * `-f`: forcefully delete a container (even if it is currently running) 138 | 139 | ## 2.2 Messing with running containers 140 | ### 2.2.1 Getting info 141 | #### Get the container logs 142 | 143 | To get the logs of a container running as daemon: 144 | 145 | ```sh 146 | $ docker container logs 147 | ``` 148 | 149 | * Returns all the logs the container has generated since its start 150 | * Options: 151 | * `--tail `: outputs the last `n` lines of logs the container registered 152 | * `-f`: "follows" the log generation (in real time) 153 | 154 | #### Check processes in a container (top) 155 | 156 | It can be useful to look at the different processes running inside a container. It 157 | can be done with the below command: 158 | 159 | ```sh 160 | $ docker container top 161 | ``` 162 | 163 | - Shows the processes running inside a container 164 | 165 | #### Get details of container configuration 166 | 167 | It can be useful to know all the metadata used to start a container: 168 | 169 | ```sh 170 | $ docker container inspect 171 | ``` 172 | - Returns a JSON array of all the data used to initialise the container. 
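Since the full JSON output is long, a handy trick is to extract a single field with the `--format` option (a sketch; the container name and the field are just examples):

```sh
# Print only the container's IP address inside the Docker network
$ docker container inspect --format '{{ .NetworkSettings.IPAddress }}' my-container
```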
173 | 174 | #### Get stats 175 | 176 | We can check how much resources (CPU, etc) each of the running containers is taking by running the below command: 177 | 178 | ```sh 179 | $ docker container stats 180 | ``` 181 | 182 | - Shows live data of local resources used by each container (CPU, Memory, Disk...), displayed by ID. 183 | 184 | > Note: we can specify a name to only view the stats of a single container instead of all the ones running. 185 | 186 | ### 2.2.2 Modifying running containers 187 | 188 | #### Getting a shell inside the container 189 | 190 | To launch a container and open a CLI interface on it we can use: 191 | ```sh 192 | $ docker container run -it 193 | ``` 194 | - The `-i` option adds an interactive session, and the `-t` enables a pseudo-TTY (chec 195 | [`docker run` section](#run-a-container)). 196 | - Options: 197 | - `` can be `bash`, `zsh`, etc. or even `ubuntu` (which installs the 198 | minimal ubuntu package). 199 | 200 | Optionally, to access a container that is already running we can use: 201 | ```sh 202 | $ docker container exec -it 203 | ``` 204 | - Again, the `-i` option adds an interactive session, and the `-t` enables a pseudo-TTY 205 | (check [`docker run` section](#run-a-container)). 206 | - Options: 207 | - `` can be `bash`, `sh`, `zsh`, etc. or even `ubuntu` (which installs the 208 | minimal ubuntu package). 209 | 210 | > Note: the requested shell should be installed in the container for this command to 211 | > succeed. 212 | 213 | > Note2: leaving the container after accessing it with `exec` will not stop the container 214 | > as `exec` runs a different process than the root one. 215 | 216 | ## 2.3 Listing containers 217 | 218 | ### Listing running containers 219 | 220 | ```sh 221 | $ docker container ps 222 | ``` 223 | 224 | ### Listing all containers 225 | 226 | ```sh 227 | $ docker container ls 228 | ``` 229 | * lists all running containers 230 | * Options: 231 | * `-a`: lists all containers (stopped, and running) 232 | -------------------------------------------------------------------------------- /5-python/a-python/2-oop-basics.md: -------------------------------------------------------------------------------- 1 | # Object Oriented Programming Basics 2 | 3 | This file contains the basics of Object Oriented programming: some intuition about it, 4 | and how to define a Class. For more advanced topics check the 5 | [Advanced OOP python]() 6 | section. 7 | 8 | This document is organised as follows: 9 | 10 | **[1 - Intuitive introduction to Object-Oriented Programming](#1---introduction-to-oop)** 11 | 12 | **[2 - Objects and Classes in Python](#2---oop-with-python)** 13 | - [2.1 Defining and instantiating classes](#21---classes-in-python) 14 | - [2.2 Defining class methods](#22---defining-class-methods) 15 | - [2.3 Inheriting from classes](#23---class-inheritance) 16 | 17 | 18 | # 1 - Introduction to OOP 19 | 20 | ## What is OOP? 21 | 22 | Object-oriented programming (OOP) is a programming paradigm that allows programmers 23 | to encapsulate the business logic of applications in 'objects'. 24 | 25 | As opposed to lower-level programming languages in which programmers have to focus on 26 | the pure logic of the code, OOP works with a higher level of abstraction, and focuses 27 | on the logical interaction of the code components (which are grouped in 'objects'). It 28 | allows us to build much more complex systems in a much simpler and structured way. 29 | 30 | ## But wait, what is an 'object'? 
31 | 32 | Objects are abstract entities defined by programmers. Conceptually, an object is a group 33 | of functions and variables packaged together in an entity; the way objects are defined 34 | in the code depends entirely on the programmer: the possibilities are endless. 35 | 36 | > Let's illustrate it with an example: Imagine we have to deal with the backend of a 37 | > pet shop. Amongst many things, the Pet shop needs to store the data of Pets. Each 38 | > Pet has a name, a type, a breed, and a date of birth; each pet can also be adopted, 39 | > or brought to the pet store for the first time. 40 | > In OOP, we can create a `Pet` object, which will have the variables `name`, `type`, 41 | > `breed`, and `date_of_birth` associated to it. More importantly, we can define within 42 | > the `Pet` object the functions `add_to_database` and `adopt`. The first adds the Pet 43 | > to the database while the second removes it from the main database and adds it to an 44 | > 'adopted Pets' database. The idea is to hide the complexity of the code in such a way 45 | > that the statement `Pet.add_to_database()` performs always the desired action no 46 | > matter the changes to the function we make (such as, for example, changing the type 47 | > of database). That is OOP: the complexity is now encapsulated in the object itself, 48 | > which makes the code much easier to maintain. 49 | 50 | ## Organising files in OOP 51 | 52 | Files are typically used with the MVC (or Model-View-Controller) design pattern. In a 53 | nutshell, the model divides the logic of the code in three parts: 54 | - The `Models` file contains all the object definitions of the code; it is the backbone 55 | of the application. Objects hold all the complexity of the system, such as the way it 56 | talks to the database. In this part we also define the logic the application will 57 | follow, since we will define the actions each object will be capable of doing. 58 | - The `Contoller` defines the events that trigger the use of certain functionalities. 59 | In a typical server, the controller will define the actions to be taken once it knows 60 | the **URL** and **HTTP request** of a client's request. The controller should use the 61 | methods defined in the `Models` file, in such a way that if any of the models changes 62 | its implementation, the controller should still work. 63 | - The `View` is the code (usually in a renderizable format such as HTML) that the client 64 | will see and interact with. Both controller and model will be able to update the view 65 | (based on what the user does such as clicking a button or sending information, for 66 | example), which will then be rendered to the user by the browser. 67 | 68 | # 2 - OOP with Python 69 | 70 | ## 2.0 - Introduction: everything is an object 71 | 72 | You heard well: in Python, everything is an 'object', or rather, using a more pythonic 73 | language: everything is an instance from a Class, even Classes themselves! 74 | 75 | In Python, a **Class** is a template that allows to create instances, all of which have 76 | the same types of attributes and functions attached to it. It can be seen as a template 77 | which defines the structure of the data required to create an instance. 78 | - An **Instance of a Class** is a set of data that follows the Class blueprint 79 | (its structure) to allocate some memory to store an object (also called 'Class 80 | instance'). These objects will have some functionalities, which will be defined in the 81 | class. 
82 | > Example: Lists, Integers and Dictionaries are Classes in Python. More specifically, 
83 | > a `List` has a method `pop()` attached to it, which removes and returns the object 
84 | > contained at the last index. 
85 | 
86 | Classes are key in Python: they are the building blocks of complexity; once we understand 
87 | them, we can do almost anything. 
88 | 
89 | ## 2.1 - Classes in Python 
90 | 
91 | ### 2.1.1 Defining a Class 
92 | 
93 | Remember: when we define a class, we are defining the inner structure that its 
94 | instantiations will have: what types of data will it hold? Which functions will be 
95 | available? Creating an instance is simply using that 'building block' to initiate an 
96 | object following that structure. 
97 | 
98 | #### Creating a Class 
99 | 
100 | Classes in Python are defined using the `class` keyword. Methods are attached to the 
101 | class by defining them with the right indentation. See the below 
102 | example: 
103 | 
104 | ```python 
105 | class MyClass: 
106 |     def my_method(self, attribute): 
107 |         self.my_attribute = attribute 
108 | ``` 
109 | - The class above contains a method called `my_method`, which accepts two arguments. 
110 | The first is the class instance (always named `self` by convention in instance 
111 | methods), the second is an additional one provided as an example. The method then 
112 | sets an instance attribute to the argument passed in (this is called a "setter method"). 
113 | 
114 | Now that this class has been created, we can instantiate it as follows: 
115 | ```python 
116 | # Creating an instance of the class. 
117 | my_instance = MyClass() 
118 | 
119 | # Using the method defined in the class. 
120 | my_instance.my_method("example_attribute") 
121 | ``` 
122 | - The code above creates a `MyClass` instance, then uses the method defined within 
123 | it to perform some action. In this case, `self` is the instance itself 
124 | (`my_instance`); the second argument is explicitly passed to the method. 
125 | 
126 | > Note: by convention, Class names are written in `CamelCase`, and methods and arguments in 
127 | > `snake_case`. 
128 | 
129 | #### The class constructor: the `__init__` method 
130 | 
131 | `__init__` is a "magic method" in Python: rather than being called explicitly, it is invoked 
132 | automatically under certain circumstances. In the case of `__init__`, it is 
133 | executed every time an instance of a Class is created. In other languages 
134 | it is known as the "constructor", as it defines the actions the Class 
135 | should perform at startup (usually, it contains the logic of setting up some useful 
136 | attributes that will be used within the class). 
137 | 
138 | Here is an example that illustrates how it works: 
139 | ```python 
140 | class Person: 
141 |     def __init__(self, complete_name): 
142 |         self.first_name = complete_name.split()[0] 
143 |         self.last_name = complete_name.split()[1] 
144 | ``` 
145 | - In the code above, we have created a class named `Person`, with a mandatory input 
146 | variable: `complete_name` (it needs to be provided when the class is instantiated). The 
147 | class, however, will save in memory the values of `first_name` and `last_name` as 
148 | two separate variables internally. 
149 | 
150 | 
151 | Let's take a closer look at what each line of code is doing: 
152 | - `class Person:` defines the class (creates a pointer to the name). That way, every 
153 | time we run `Person(complete_name)` we will create an instance of that class. 
154 | - `def __init__(self, complete_name):` defines the `__init__` function, which, as 155 | mentioned before, is a 'special' function in python that will be automatically 156 | executed when the class is instantated. It allows us to bind the variables 157 | defined within the function with that instance (in this example, it will be 158 | `first_name` and `last_name`). Note that all arguments entered on `__init__` 159 | (apart from `self`) are the required arguments for that class to initiate. 160 | - `self.first_name = complete_name.split()[0]` will create a variable associated 161 | to the instance. From now on, we can run `.first_name` to return 162 | the value that is saved in it. In this example, it will be the first name of the 163 | person. 164 | 165 | > Note: `self` the name given by convention to the first argument of instance methods, 166 | > which mean 'the instance of the class itself' (which is passed as an argument to the 167 | > method). It is a pretty ubiquitous argument in Python. 168 | 169 | ### 2.1.2 Defining a class instance 170 | 171 | Once a Class is defined, we can use the template of that Class to create instances 172 | with the same structure. Let's create two instances of the `Person` class defined 173 | before: 174 | 175 | ```python 176 | person1 = Person('Paul McCartney') 177 | person2 = Person('John Lennon') 178 | ``` 179 | - The code above creates two instances of the class, and saves their variables in 180 | memory. 181 | 182 | The arguments are now accessible: 183 | - `person1.first_name` will return `'Paul'` 184 | - `person2.last_name` will return `'Lennon'` 185 | 186 | ## 2.2 - Defining instance methods 187 | 188 | Apart from `__init__` (which is not mandatory), we can define custom methods for the 189 | class. These methods add functionalities to the Class, and can be called using the 190 | statement `.method()`. 191 | 192 | #### Example: `Rectangle` 193 | 194 | Let's define a `Rectangle` class adding a method to it that returns the area of the 195 | rectange: 196 | 197 | ```python 198 | class Rectangle: 199 | 200 | def __init__(self, length, width): 201 | self.length = length 202 | self.width = width 203 | 204 | def area(self): 205 | return self.length * self.width 206 | ``` 207 | - The code above defines an `area` method that simply returns the area of the rectangle 208 | 209 | #### Creating instances and calling the method 210 | 211 | We can now create an instance of that Class: 212 | 213 | ```python 214 | rectangle1 = Rectangle(3,4) 215 | ``` 216 | 217 | And then call the `area` method on it to compute its area: 218 | 219 | ```python 220 | rectangle1.area() 221 | ``` 222 | - The code above will return `12` (as defined in the method definition), which is the 223 | area of the rectangle 224 | 225 | -------------------------------------------------------------------------------- /6-docker/3-docker-images.md: -------------------------------------------------------------------------------- 1 | # Docker images 2 | 3 | This section explains all about Docker images: what are they and how to create them. 4 | 5 | > Click here to [go to the commands directly](#2---image-commands). 6 | 7 | Contents: 8 | 1. [Basic concepts](#1---basic-conceps) 9 | 2. [Image commands](#2---image-commands) 10 | 3. [Building images with the `Dockerfile`](#3---building-images) 11 | 4. 
[Uploading to Dockerhub](#4---uploading-to-dockerhub) 12 | 13 | 14 | # 1 - Basic conceps 15 | 16 | ## 1.2 Intuition on images 17 | 18 | Using technical jargon, a Docker image is "an ordered colection of root filesystem 19 | changes and the corresponding execution parameters for use within a container runtime". 20 | 21 | More intuitively, an image consists of the minimal data required to run a piece of 22 | code starting at a given point in time. Images are created by taking a snapshot of 23 | a piece of software, saving it, copying it, and running instances of the copies. 24 | The snapshot is the image, and the copies are the containers. 25 | 26 | Docker images consist of all the app binaries of the software we intend to run, 27 | all their dependencies, and metadata about the image's data and how to run it. 28 | 29 | Inside an image there is no OS, not even a kernel or kernel modules (e. g. drivers). 30 | The host provides the kernel. This allows for docker images to be really small. 31 | 32 | > Note: the "host" is not the local machine. Docker actually runs a Linux VM on which 33 | > all containers are run. 34 | 35 | ## 1.2 Image layers 36 | 37 | ### What are layers 38 | 39 | Images are not huge chunks of data coming from a big blob. They are designed using 40 | the union filesystem concept of making layers about the changes. These are visible 41 | when running `docker history `, where all the layers of the image are 42 | displayed. 43 | 44 | The image always starts from a base image we call "scratch"; all the changes added 45 | to that base image are an additional layer (such as running a command, or adding 46 | environment variables). Each layer of the image is identified with a unique hash, which 47 | enables other images to use that same layer (the stored version of it) if shared. This 48 | saves a lot of space and booting time. 49 | 50 | This feature works also for custom-made containers. If we create two images with 51 | `Dockerfile`s that only copy application A or B in the container, the common layers 52 | from both images can be shared (hence saving a lot of space). 53 | 54 | ### Containers are also layers 55 | 56 | Containers are "runtime" changes that are stacked on top of the base image. Unlike 57 | images (which are read-only layers), containers can be modified at runtime. When a 58 | container runs, it actually adds a new "running instance" of changes that is stacked 59 | on top of the base image. That means that two running containers on the same image 60 | actually share that layer stack. 61 | 62 | > Note: if one of the containers changes one of the files that were originally on the 63 | > base image, what happens is that these files are copied onto the container, adding 64 | > an additional layer of changes on top of them. This is known as "copy on write". 65 | 66 | ## 1.3 Docker hub 67 | 68 | Docker hub is the online registry of Docker images. Official ones and the ones made by 69 | users. 70 | 71 | Official images do not have an `/` format, they simply show the 72 | image name. Official images are maintained tested and documented (usually really well) 73 | by professionals (much stable). However, some popular private images might add useful 74 | things. 75 | 76 | Usual tags of images: 77 | - `:latest`: latest stable version of the image (default) 78 | - `:stable`: stable version, this version is older than the latest, but also 79 | will be used through a long time and will almost certainly not contain any bug. 80 | - `:`: fixed version. 
Specifying any of the digits will pin 81 | those digits and select the newest one of that version. 82 | - images containing the `alpine` keyword: `alpine` being a micro version of linux, it 83 | means the image comes from it (meaning it will probably be very small). 84 | 85 | > Note: more on Linux distributions in [this section](linux-distr.md). 86 | 87 | # 2 - Image commands 88 | 89 | ## 2.1 Inspecting images 90 | 91 | #### View image history 92 | 93 | ```sh 94 | $ docker history 95 | ``` 96 | - Shows the history of the image: all the changes done from the scratch image (base 97 | one) to the current status. Each command run is usually one 98 | [layer](#12-image-layers). 99 | 100 | #### Get the image metadata 101 | 102 | ```sh 103 | $ docker image inspect 104 | ``` 105 | - Returns a JSON object with all the metadata the image has (author, 106 | tags of the image, dates, details on how to run it, env. variables, 107 | exposed ports, etc.) 108 | 109 | ## 2.2 Running images 110 | 111 | ### Download an image 112 | 113 | To download an image from Dockerhub: 114 | ```sh 115 | $ docker pull 116 | ``` 117 | 118 | # 3 - Building images 119 | 120 | Check the [official documentation](https://docs.docker.com/engine/reference/builder/) 121 | on building Docker images. 122 | 123 | The `Dockerfile` is the list of commands used to create an image. It uses a language 124 | that is made similar to bash commands on purpose. 125 | 126 | Each statement block (starting with a capitalized keyword such as `FROM`, `ENV`, etc.) 127 | added to the `Dockerfile` is actually a new layer added to the Docker image. When the 128 | image is built, each layer is saved and a hash is given to it. If any other image uses 129 | the same layer, Docker will re-use that layer (speeding up the process immansely). 130 | 131 | > Note: the moment a line in the `Dockerfile` changes, all superior layers will 132 | > also change. Docker will however recognise the below layers are the same and 133 | > use the cached ones instead of building. For this reason, we keep the things 134 | > that change the least on the top of the `Dockerfile`. 135 | 136 | ## 3.1 Building blocks of the `Dockerfile` 137 | 138 | `Dockerfile` instructions covered: 139 | - [`FROM`](#from-starting-a-build-stage): base image 140 | - [`ENV`](#env-defining-environment-variables): environment variables 141 | - [`RUN`](#run-running-commands-inside-of-the-image): execute commands 142 | - [`EXPOSE`](#expose-command): ports 143 | - [`CMD`](#cmd-command-to-be-run-at-startup): startup command (required) 144 | - [`ARG`](#arg-use-arguments): arguments 145 | - [`COPY`](#copy-copy-files-into-the-image): copy files into the image 146 | - [`WORKDIR`](#workdir-change-directory): change directory 147 | - [`VOLUME`](#volume-specify-the-container-volume-path) 148 | 149 | ### `FROM`: starting a build stage 150 | 151 | The `FROM` is the main "build" statement in Docker. It indicates from 152 | which base image (usually a basic linux distribution, or the official starter package 153 | for a given application - `nginx`, `python`, etc. - easeri to maintain) a build stage 154 | should start to generate an image. From then on, we will add things 155 | to it. 156 | 157 | > Note: images can be built using more than one build stage. 158 | 159 | > Note 2: only an `ARG` block can be placed before the first `FROM` statement of the 160 | > `Dockerfile`. 
#### Basic usage: one build stage

The `Dockerfile` always starts with a `FROM` statement, which sets the base image
(usually a basic Linux distribution) from which we will start adding things.

```Dockerfile
FROM <image>:<tag>
```
- Uses the given image as a foundation for the new image being built. Check the
  [linux distributions](linux-distr.md).

#### Using more than one build stage

An image can be created by starting with a basic Linux distribution, downloading all
the elements needed to run the application, and then storing that as an image. But
sometimes, some credentials that we do not want stored in the final image are required
to install certain packages, and some packages are only needed to install the
requirements of the application; they are also not required in the final image.

To solve these two problems, we can use two build stages:
- In the first one, we create a "dummy" initial image, in which we download and
  install all the packages required to run the application.
- In the second one, we start from another blank image, into which we copy only the
  binaries or files needed to run the application.

```Dockerfile
FROM <image>:<tag> as builder

# some other statements

FROM alpine

COPY --from=builder <path-in-builder> <path-in-final-image>
```

Let's see an example using build stages to build a Python application using `pipenv`:

```Dockerfile
# First building stage
FROM python as builder

RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv sync

# Second building stage
FROM python:alpine

COPY --from=builder ~/.venv .venv
COPY --from=builder ~/Pipfile ~/Pipfile.lock ./
```

### `ENV`: defining environment variables

```Dockerfile
ENV VARIABLE_NAME <value>
```
- Adds the given environment variable to the image.

> Note: all environment variables defined in a build stage will be made
> available to the final image (so `ENV` is not a good place for secrets). To avoid
> that, we can use `ARG` instead.

### `RUN`: running commands inside of the image

We usually use this command to download packages and install things in the image. We
should use the commands that are supported by our base image.

```Dockerfile
RUN <command-1> \
    && <command-2> \
    && <command-3>
```
- Adds a layer to the image running all the specified commands

### `EXPOSE` command

Opens the given ports of the container on the virtual network so the container is
reachable from the host machine (we will still need to wire the host port to the
container with the `-p <host-port>:<container-port>` option).

```Dockerfile
EXPOSE <port>
```
- Ports open in the container (the rest will be closed).

### `CMD`: command to be run at startup

This statement allows us to define what will start up the container layer. It is only
required if the base image we pull from does not implement it, or if we want to
supersede it. This statement enables us to define the command that will be run every
time a container is run from an image, and every time it is restarted. It is always
the last statement to be specified in a `Dockerfile`.
```Dockerfile
CMD ["command", "to", "run", "at", "startup"]
```

### `COPY`: copy files into the image

```Dockerfile
COPY <source> <destination>
```
- Copies the contents of a file or folder from the host to the image (usually used
  to copy the source code of the application into the image).
- Options:
  - `--chown=<user>:<group>`: sets up permissions for the copied file in the image
    (this works only on Linux-based images)

### `ARG`: use arguments

Arguments are environment variables that are only available in a certain build stage,
not in all build stages like the variables defined with `ENV`. They can be seen as
variables that we can pass to the `Dockerfile` to build the image (through the
`docker image build` command, or through `docker-compose.yml`).

There are two ways of using `ARG`: inside or outside of a building stage.

Arguments are defined with the below syntax:

```Dockerfile
ARG ARGUMENT_NAME=<default-value>
```
- This value will only be available for a given build stage

#### Outside of a building stage (before the first `FROM` statement)

The argument will be made available to all building stages, which can access it
to run a command or set an environment variable with `$ARGUMENT_NAME`.

#### Inside of a building stage (after a `FROM` statement)

When `ARG` is used in a building stage, it will define an environment variable with
the argument name and its value, but will not persist that environment variable to the
final image (good to avoid storing secrets in the final image).

### `VOLUME`: specify the container volume path

All the files the container will put in the given folder will outlive the container
and be stored as a volume locally.

### `WORKDIR`: change directory

This command enables us to change the current directory while running commands to
create an image. This could be achieved with `RUN cd <path>`, but it is considered
better practice to do it with `WORKDIR`.

```Dockerfile
WORKDIR <path>
```

## 3.2 Build an image using a `Dockerfile`

Running the `docker build` command will run each of the statements specified in the
`Dockerfile` through the Docker engine, and save them in our local machine as layers
of the image.

```sh
$ docker image build <path>
```
- Builds an image using the `Dockerfile` at the given path (use `.` for the current
  folder)
- Options:
  - `-t <name>:<tag>`: gives a name and a tag to the image being built
  - `-f <file>`: use a custom-named `Dockerfile`


# 4 - Uploading to Dockerhub

## 4.1 Logging in to Dockerhub

```sh
$ docker login
```
- Logs in and saves an authentication key locally

> Note: when this command is run, it will download a session authentication key and
> save it locally. Run this command only if the machine is trusted.

```sh
$ docker logout
```
- Logs out and invalidates the authentication key downloaded


## 4.2 Tagging

Images do not really have names. The "name" of an image is actually composed of three
pieces of information: `<user>/<repository>:<tag>`. In the case of official images,
there is only a repository name: `<repository>:<tag>`.
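For instance, here is a minimal sketch of what those three pieces look like in practice
(the `johndoe` user and `flask-demo` repository names are made up), using the build
command from section 3.2 to name the image directly:

```sh
# <user>/<repository>:<tag>  ->  johndoe/flask-demo:1.0
$ docker image build -t johndoe/flask-demo:1.0 .
```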
### Changing the name of an image

```sh
$ docker image tag <image>:<tag> <user>/<new-name>:<tag>
```
- Locally adds a new tag (name) to a given image. The new name is prefixed with our
  repository (user) name, so it can be uploaded to Dockerhub.

> Note 1: this image can now be pushed to Dockerhub using `docker image push` (that
> would make a copy of the original image in our repository)

> Note 2: if no tag is specified, `latest` will be used


## 4.3 Interacting with Dockerhub

### Download an image

To download an image from Dockerhub:
```sh
$ docker pull <image>:<tag>
```

### Push an image

To push an image to Dockerhub:
```sh
$ docker image push <user>/<image>:<tag>
```
--------------------------------------------------------------------------------
/0-basic-concepts/README.md:
--------------------------------------------------------------------------------

# COMPUTER SCIENCE REFRESHER

This document contains an intuitive explanation of some elemental Computer Science
concepts that are used in the course. It is intended as a 'live' document, completed
each time a student has a doubt about a concept.

## Contents of the document:

#### 1 - [Computing Basics](#1---computing-basics)
#### 2 - [Networking basics](#2---networking-basics)

# 1 - Computing Basics

This part will give a (very) basic overview of the parts of a computer.

## 1.1 Basic hardware components

To understand why the software works the way it does, it can be useful to understand
the main 'physical' components of a computer: the CPU, the Hard Drive, and the RAM. A
very basic overview is given below.

> Also: this [5-minute video from TED-Ed](https://www.youtube.com/watch?v=p3q5zWCw8J4)
> explains it amazingly well.

### 1.1.1 The CPU

CPU stands for 'Central Processing Unit', and acts as the 'brain' of the computer. It
is responsible for running the software and performing the data manipulation in the
computer. The CPUs installed in modern computers are optimized to be versatile and run
many kinds of processes in parallel.

> Note: A 'process' or a 'thread' is the elementary component of processing. Single
> programs can run multiple processes at a time. To get the intuition: a very simple
> Python `for` loop, for example, where data is computed as a sequence, is a single
> process.

### 1.1.2 The Hard Drive

The hard drive is the 'long-term' storage of computers. It allows data to be saved in
a persistent way, so it is not lost if the machine is turned off; the problem with it
is that accessing the data is pretty slow (that is where the RAM becomes useful). Data
such as software packages, photos, or music is stored in the Hard Drive.

Hard Drives can usually be of two types:
- HDD (hard disk drives), which store data by changing the magnetic orientation of
  crystals of a rotating disk,
- and SSD (solid state drives), such as USB drives, which store data in a series of
  switches (transistors) that can be 'on' (1) or 'off' (0).

### 1.1.3 The RAM

RAM stands for 'Random Access Memory', in reference to the way the data is stored on
it, as any bit of information can be retrieved in any order (as opposed to a Hard Disk
Drive, where the needle needs to physically switch between positions).
The killer 55 | feature of the RAM is that the data can be accessed really fast (around 50-100 times 56 | faster than a typical SSD Hard Drive, and 100,000 times faster than an HHD), although 57 | the data is usually lost when the machine is turned off. 58 | 59 | RAM is used to speed up processes: typically, when a software is launched, it will load 60 | the data (for example, all of the variables created in a Python script) so that the CPU 61 | can do its computations quickly. Browsers also use the RAM to store the data they 62 | received from the internet, such as images and HTML files, so they do not overload the 63 | network by requesting it each time the user opens the browser tab. 64 | 65 | #### What does 'Caching' mean? 66 | 67 | 'Caching' is a term used in Computer Science that simply means 'Storing in RAM', so it 68 | can be accessed faster. 69 | 70 | #### What does 'saving the state' mean? 71 | 72 | 'Saving the state' of a machine simply means saving some of the contents of the RAM into 73 | the Hard Drive, so it can be recovered later. The data we usually save is an 74 | 'intermediate step' of computation stored in the RAM that we want to recover later. It 75 | is called the 'state' since it is not recoverable if lost; when saved in the Hard drive, 76 | it becomes recoverable. 77 | 78 | Docker containers, for example, always start from a 'saved state' of a previously 79 | generated container (which has the software we want already installed and running), so 80 | that desired packages are launched as fast as possible. 81 | 82 | ### 1.1.4 Other components 83 | 84 | Computers have many other components, such as the Mother Board, which connects the three 85 | components mentioned above (CPU, RAM, and Hard Disk), and other components such as GPU 86 | (or 'Graphics Processing Unit) to process media; but these are far beyond the scope of 87 | what is needed to understand for the course. 88 | 89 | ## 1.2 Software components 90 | This part overviews how the software controls the hardware. The goal of this part is to understand very basically how a computer is controlled, and explain what a Virtual Machine (VM) is. For that, we need to intuitively understand what is the kernel, and an hypervisor. 91 | 92 | ### 1.2.1 The kernel 93 | In a nutshell, the 'kernel' is the central software that manages the computer's limited resources (CPU, RAM, Hard Drive), and allocates them to processes. It can be considered as the 'core of the Operating System'. 94 | 95 | A bit more precisely, in a Unix system, the kernel is responsible of the following tasks: 96 | - Process scheduling: it decides how the multiple processes use of the CPU of the machine. 97 | - Memory management: the kernel also handles the use of the RAM by each process. It will keep track of which parts of the RAM are used by which process, and make sure that two different processes do not modify the data of one another. 98 | - Provision of a file system: a way of storing data on the Hard Disk (translating from 1s and 0s to treatable documents) 99 | - Creation and termination of processes: the kernel loads a new process into RAM and provides the resources it needs (CPU, memory, access to files) once the process is run; once the process is terminated, the kernel ensures the resources used by it are freed again. 100 | - Access to devices: the kernel manages the access to computer devices (keyboard, mouse, disks, etc.) by the running processes. 101 | - Network: it sends and receives networking packets on behalf of the processes. 
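As a small, optional illustration of this bookkeeping (assuming a Linux shell, like the
VMs used later in the course), a few standard commands let us watch the kernel at work:

```sh
$ ps aux      # list the processes the kernel is currently scheduling
$ free -h     # show how much RAM is in use and how much is free
$ df -h       # show how much of the file system (Hard Drive) is used
```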
102 | 103 | ### 1.2.2 Hypervisors and VMs 104 | #### Hypervisors 105 | Each operating system has its own 'language', and its way of talking to devices. A **hypervisor** is simply a 'translator layer' that allows a Host Operating System be run in any computer, believing it runs in a physical device (although it doesn't). To achieve this, hypervisors perform two main tasks: 106 | - They allocate computer resources (CPU, RAM and Hard Disk) so that only virtual machine can use them. It achieves this by 'partitioning' (or splitting) them: one 'slice' for the Host OS, and another for the VM (For example, a RAM of 8GB might be split in 4GB for the Host OS, and 4GB for the VM). 107 | - 'Translating' orders from one operating system to another. For example, if we emulate a UNIX operating system (as we will do on the course), it will try to talk to the hardware using a 'language' that Windows or MAC Operating Systems might not understand. Hypervisors 'translate' UNIX commands (such as 'give me 10MB of RAM for this process') so that a Windows or a MAC operating system can understand it and perform that task, in such a way so that the UNIX system believes that is running in an actual computer (although it is not). 108 | 109 | #### Virtual Machines (VMs) 110 | Virtual Machines are simply an emulation of an operating system that believes it runs in an isolated physical device, but it is really running inside another computer thanks to a hypervisor. Let's visualize it with the below diagram: 111 | 112 | ![Alt text](assets/VM-diagram.png) 113 | 114 | Boxes represent different layers of complexity, talking to each other. The 'Host OS' (kernel) layer manages the Infrastructure (CPU, RAM, etc.), the Hypervisor talks to the Host OS, creating three 'encapsulations' (divisions of the Infrastructure) and 'translating' the Host OS commands to Native Linux (Ubuntu), Windows, and RedHat (top layers of the example.) 115 | 116 | ##### Why are VMs useful at all? 117 | VMs are used in DevOps (as well as containers) to create a replicable evironment (such as a UNIX system), so that any member of the team can re-create it without problem and work on it, to avoid the "it seems to only work on your computer" kinds of problem. In the Class, we will use `Vagrant` to create the specifications of the VM we want to start. 118 | 119 | VMs are one of the keys in the revolution of Cloud Computing: all instances ran in the Cloud are actual VMs running somewhere in a Data Center. Thanks to them, computing has become much more efficient, saving billions. Now, instead of having to buy a physical computer to run a server, multiple servers are 'squished' into a single computer thanks to virtualization in a 'pay-for-what-you-need' business model. 120 | 121 | 122 | # 2 - Networking Basics 123 | In this course, we will be running a server, and use a nice trick called 'port-forwarding' to access the outputs of the server running inside a VM. For that, we need to understand some very basics of Networking, such as what are IPs, ports, as well as how HTTP (the way servers talk to browsers) works. 124 | 125 | ## 2.1 IPs and Ports 126 | ### 2.1.1 IP addresses 127 | An IP ('Internet Protocol') address is a unique identifyer of a 'node' on the internet graph (a point that can receive and send information), composed of four 8-bit numbers. IPs are used to specify who should receive a specific chunk of data (called 'packet'). 
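As a quick illustration of the 'four 8-bit numbers' (the address below is just an
example), each of the four fields fits in one byte, i.e. ranges from 0 to 255:

```text
192.168.1.42
 |   |   |  |
 +---+---+--+--> four numbers, each between 0 and 255 (8 bits each)
```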
128 | IPs can be public (a unique number, used to communicate with the web) or private (which means it is only accessible locally). An example of a private IP address is `127.0.0.1`, which is known as `localhost` by default. In the course, we will run a 'local' version of the server on our machine on IP `127.0.0.1`, which the browser can access when the IP address or its shortcut are visited. 129 | 130 | #### What does a 'local' version of the server mean? 131 | It means that the process (all computations) run by the server will be done in our local device, and the outputs of the server (called 'server responses') will only be accessible locally. This is usually used to debug the server and ensure it behaves as it should. Once some basic functionalities are coded and tested, we can run it in the Cloud (in a Virtual Machine in some Data Center, where a public IP will be given to it) so that we can access it from anywhere in the world. 132 | 133 | ### 2.1.2 Ports 134 | The 'port' is simply a 16-bit number (which complements the IP address), to organize the type of data sent and received by a machine. Port numbers from 0 to 1023 are pre-assigned (by convention) to frequently used communications; for example, port `80` is used for the `HTTP` protocol (by default, browsers 'listen' to port 80 for data and ignore the rest). Port `22` is used for the `SSH` protocol. In this course, we will run applications, and direct their outputs into ports (which we will set to large numbers such as `8080` to avoid using a predefined number). 135 | 136 | #### What is 'port forwarding'? 137 | 'Port forwarding' is a trick we will use to communicate with a server running in a Virtual Machine. 138 | 139 | In our project, we will run a VM running UNIX (which will be controlled using a terminal). On the VM, we will run a server that will listen to requests (inputs), and send back responses (outputs) to an IP and port of our choice (let's say `localhost:8080`). 140 | 141 | To test the server, we want to be able to communicate with it though our browser locally (by opening the browser and going to the `localhost:8080` address). However, this will not work, since the server will be running inside of the VM, and our browser will 'listen' inside of our local machine. Let's explain it with a visual example: 142 | 143 | ![Alt text](assets/port-fwd1.png) 144 | 145 | The above diagram shows the local machine (light-blue), inside of which runs a browser tab and a VM (black box); inside the VM, a Flask server is running and sending its outputs to the `localhost:8080` inside the VM (represented by the red arrow). The green dot represents the virtual address `localhost:8080` in the VM (which is where the data From the server is sent). The browser, then, when the address `localhost:8080` is requested, will try to communicate with the `localhost:8080` of the local machine (Not the VM!), represented by the other green dot. Both dots are not connected, therefore we cannot access the server! Here is when 'port forwarding' comes handy (represented by the green arrow): 146 | 147 | ![Alt text](assets/port-fwd2.png) 148 | 149 | Port forwarding means asking the VM to foward any traffic of data received on its `localhost:8080` to the `localhost:8080` of the local device (represented by the green arrow), hence connecting the browser and the server running inside the VM. We can now access our server! 
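In this course the forwarding rule lives in the `Vagrantfile`; a minimal sketch
(assuming Vagrant is used to define the VM, and re-using the port numbers from the
example above) looks like this:

```ruby
Vagrant.configure("2") do |config|
  # make port 8080 inside the VM ("guest") reachable
  # as port 8080 on the local machine ("host")
  config.vm.network "forwarded_port", guest: 8080, host: 8080
end
```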
150 | 151 | ## 2.2 HTTP protocol 152 | `HTTP` stands for 'Hypertext Transfer Protocol', which is used to send and receive data through the internet. It is the language used by servers and browsers to communicate as well. HTTP codifies the messages sent by clients (ex: a browser) in a structured manner, so that servers can understand what the client is asking for, and send a response to it. 153 | 154 | ### 2.2.1 HTTP requests 155 | [This file](request.txt) contains an example of the request sent by Google Chrome while trying to reach the address `localhost:3000`. (To get this file, a server which simply logs the requests received in `localhost:3000` was created). 156 | 157 | As we can see, there is a lot of information in a request (a lot of empty fields), some private data (such as the Computer and browser used by the client), but most importantly, the **URL** (in the example provided, the URL is `'/'`, or the 'home' URL), and the **HTTP Method** (a `GET` request in the example, since the client is simply requesting the server to 'read' the `'/'` URL). 158 | 159 | #### What are URLs and HTTP Methods used for? 160 | A server needs two fields to know how to process a request: the **HTTP method** (`GET`, `POST`, `PUT`, `DELETE`, etc.), and the **URL** (always `'/'`). Once the server knows both fields, it will know which functions on the backend to trigger to prepare a response back to the client. In other words, it is the way we structure the servers. Each combination of valid HTTP method and URL should trigger a function of our server. 161 | 162 | For example, a `GET` request to the home URL `'/'` should trigger the function that sends the client the contents of home page (which is usually a static HTML file with a nice display). 163 | 164 | In this course, **URLs** and **HTTP methods** will follow a convention called the `RESTful API`. 165 | 166 | > Note: The most frequently used HTTP methods are `GET` (read data), `PUT` (update some data on the server), `POST` (send some data on the server to create a resource), and `DELETE` (delete some resource). We will see other HTTP methods in the `RESTful API` lecture. 167 | 168 | #### Headers and body of the request 169 | All HTTP messages have two main components: the **header** and the **body**. 170 | 171 | The **header** should store two or three key pieces of data (depending if we are sending data to the server or not): 172 | - The HTTP method (explained above) 173 | - The URL requested (explained above) 174 | - The `Content-Type`: which is required only when the HTTP method is `POST` or `PUT`. This field allows the server to know which type of data the client is sending. (example: setting `Content-Type: application/json` will let the server know we are sending a JSON object). 175 | 176 | The **body** contains the data the client is sending (example: a JSON object). 177 | 178 | ### 2.2.2 HTTP responses 179 | Once the server has treated a request, it should send a response to the client; responses can have many forms, and are sent in a similar object as the request. The two main components a response must contain are the **header** and the **body**. 180 | 181 | #### The response header 182 | The **header** contains information about the message itself; the minimum information required for a valid response are the following two fields: 183 | - the `Content-Type`: which states the format used on the body of the message (so that the client can decode it). 
For example, setting `Content-Type` to `application/json` (on the header: `Content-Type: application/json`) will let the client know that the message sent back is in JSON format. 184 | - the `Status`: which tells the client information about how the server handled the request. For example, `200 OK` tells the client the server properly understood the request, and sent back an appropriate response. If something went wrong, the `Status` would be set to the appropriate error code. Find the entire list [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status). 185 | 186 | #### The body of the response 187 | The **body** is the content of the message the server wants to send back to the client. It can have many formats: plain text, HTML, JSON... The browser understands those languages and will render the response accordingly. 188 | 189 | ### 2.2.3 the `JSON` format 190 | JSON stands for 'JavasCript Object Notation', which is a type of very commonly used format to send data through HTTP (since browsers understand `JavaScript`). JSON objects are very similar to Python dictionnaries: the data is stored in a structured way by slicing it in a set of keys. 191 | 192 | #### How does a JSON object look like? 193 | Imagine we are a client that wants to purchase a music instrument. The client will send a request to the server to buy that product. (For simplicity, let's say the server only requires two fields to identify the product and process its purchase: the `product_name` and its `id_number`). The **body** of the message the browser will send is shown below: 194 | ```JSON 195 | { 196 | "product_name": "Höfner bass", 197 | "id_number": 123456 198 | } 199 | ``` 200 | - The object above is in JSON format, and the server will easily retrieve the fields it needs (as simple as that). 201 | 202 | 203 | ### 2.3 SSH 204 | SSH stands for 'Secure Shell' protocol. It is an encrypted protocol (for security reasons) that allows a machine to take control over another machine. In the course, we will use the SSH protocol to control and run commands on the Virtual Machines we create with Vagrant. 205 | -------------------------------------------------------------------------------- /3-git/1-complete-cheatsheet.md: -------------------------------------------------------------------------------- 1 | # GIT CHEATSHEET 2 | 3 | This document is a commented cheatsheet of `Git`, divided in three parts: 4 | 5 | ### [1: Working in the local repository](#1-working-in-the-local-repository-1) 6 | - Commands treated: [`git init`](#10-initializing), [`git status`](#checking-status), 7 | [`git diff`](#checking-status), [`git add`](#adding-files-to-staging-area), 8 | [`git commit`](#adding-files-to-the-local-repository), 9 | [`git log`](#retrieving-commit-history) 10 | 11 | ### [2: Working with remote repositories](#2-working-with-remote-repositories-1) 12 | - Commands treated: [`git clone`](#21-cloning-remote-repositories), 13 | [`git remote add`](#22-addingchecking-remote-repositories), 14 | [`git fetch`](#23-pulling-data-from-remote-repositories), 15 | [`git push`](#24-pushing-data-upstream) 16 | 17 | ### [3: Working with branches](#3-working-with-branches-1) 18 | - Commands treated: [`git checkout`](#31-creating-and-switching-branches), 19 | [`git merge`](#32-merging-branches), [`git rebase`](#33-rebasing) 20 | 21 | A simplified Git cheatsheet can be found [here](https://github.github.com/training-kit/downloads/github-git-cheat-sheet.pdf). 22 | 23 | ## 1. 
Working in the Local Repository

### 1.0 Initializing

```sh
$ git init
```
- creates an empty Git repository (`.git/`) that will gather all committed files from the working document (see [Git basics](https://github.com/lombardero/nyu-devops-concepts/blob/master/3-git/0-git-basics.md)).

#### Basic files

Apart from our codebase, it is a good idea to initialize each project with two
additional files:
- `README.md` file: uses the
  [Markdown language](https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf)
  to render a description of the Project in GitHub.
- `.gitignore` file: a list of the files we do not want Git to keep track of (and we
  do not want in GitHub).

#### Note on `gitignore` file

The `.gitignore` file contains all the files and extensions that should be ignored by
`Git` (those files you might have or want in your local repository but you do not need
in your code, such as `VS Code` files in `.vscode/`).
The file contains a list of all the extensions, folders, and files to be ignored. The
format is explained below:

- use `*.<extension>` to ignore all files ending in the `<extension>` specified
- use `<folder>/` to ignore all files inside a folder
- use `<folder>/<file>` to ignore a specific file

An example of `.gitignore` file can be found
[here](https://gist.github.com/octocat/9257657).

### 1.1 Adding, Removing and Modifying files in the Staging Area

Conceptually, the Staging Area is what `Git` calls the list of 'files to be added later
to my code'.
The Staging Area allows us to keep track of the current changes of our code in real
time. When a file is added to the staging area, `Git` will take a snapshot of that file;
`Git` will then be able to know if we have made changes on that file.

Once it is on the staging area, the file will be ready to be Committed (saved on the
`Git` tree, see point [1.2](#12-comitting-files-from-staging-area-to-local-repository)).

#### Checking status

```sh
$ git status
```
- compares the current state of the local files with the files on the staging area and
  outputs the differences. This command will also let us know the branch we are
  currently working on, and the files already in the Staging area.

```sh
$ git diff
```
- Tells us the files we have changed locally but not yet staged.

#### Adding files to Staging area

```sh
$ git add [file]
```
- adds a snapshot of `[file]` to the staging area (a snapshot of the file is taken, and
  will be added to the Git repository next time we run `commit`). Note that if you
  modify `[file]` again before committing, only the version added to the Staging area
  will be considered; to update the modified version, run the add command again with
  the same file.

```sh
$ git add -A
```
- adds all modified files of the current folder to the Staging area

#### Removing files from Staging area
```sh
$ git reset HEAD -- [file]
```
- Removes from the staging area a `[file]` we previously added.
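A short worked example of the staging round-trip (the `app.py` file name is made up):

```sh
$ git status                  # app.py shows up as modified, not staged
$ git add app.py              # snapshot app.py into the staging area
$ git status                  # app.py now listed under "Changes to be committed"
$ git reset HEAD -- app.py    # changed our mind: un-stage it again
```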
103 | 104 | > Note: this command should not be confused with the `rm` one, explained below: 105 | > `git reset HEAD` simply 'resets' the state of the Staging area for the specific 106 | > `[file]` we select (that `[file]` must have been previoulsy added to the staging area 107 | > (with the command `git add` for the command to work); `git rm` is used to 'delete' or 108 | > 'not keep track' anymore of a file that we have committed in the past (we are telling 109 | > `Git` not to keep track of it anyore in future commits). 110 | 111 | #### Removing files from Git tree 112 | 113 | ```sh 114 | $ git rm [file] 115 | ``` 116 | - Remove: Tells `Git` to not keep track of changes in `[file]` anymore (removes it from 117 | the `Git` tree) 118 | 119 | #### Modifying file names 120 | 121 | ```sh 122 | $ git mv [previous_file_name] [new_file_name] 123 | ``` 124 | - Updates the name of the file from `[previous_file_name]` to `[new_file_name]` (this 125 | command explicitely tells `Git` that the new file had a different name in the past, 126 | and if we decide to retrieve a past version of the new file, it should look for the 127 | previous name). 128 | 129 | > Note: if a file ia deleted (using the `rm` command - `git rm [previous_file_name]`) 130 | > then added again with a different name (`git add [new_file_name]`), `Git` will still 131 | > figure out the file is being renamed, but the `mv` command is the explicit way of 132 | > doing so (preferred way). 133 | 134 | ### 1.2 Comitting files from Staging area to Local Repository 135 | 136 | Once the local changes have been sent to Staging area (git has taken a 'snapshot' of 137 | the files), these snapshots are ready to be saved in the local `.git/` repository. Once 138 | in the local repository, any saved state will be retrievable at any time. 139 | 140 | #### Adding files to the Local Repository 141 | 142 | ```sh 143 | git commit -m '' 144 | ``` 145 | - Saves the snapshots the files in the staging area in to the local `.git/` repository. 146 | - Options: 147 | - `-m` lets use add a comment to the commit (Important, so we can keep track of what 148 | we did in that commit). Note that `Git` will force us to add the message if it is not included. 149 | 150 | > Note: In the background, what `Git` does on each `commit` statement is saving the 151 | > changes done to the files in an optimized format. The files do not get copied over and 152 | > over, what is saved are only the lines of code that were deleted, and the new lines 153 | > added from the previous version. That way, files can be 're-built' following any of 154 | > the steps of each `commit` statement, from the initial state. Each `commit` statement 155 | > receives a hash code, which will allow us to identify it, and retrieve it anytime we 156 | > wish so. 157 | 158 | 159 | #### Retrieving commit history 160 | 161 | ```sh 162 | $ git log 163 | ``` 164 | - Outputs the history of commit statement done in the local repository. This command is 165 | very useful to understand where we are in the `Git` tree history; it can be used to 166 | retrieve the `commit` code and understand the branch tree. 
167 | 168 | Useful options (can be combined): 169 | - `git log --oneline` displays all the information in one line 170 | - `git log --decorate` adds branch information 171 | - `git log --all` shows all branches 172 | - `git log --graph` creates a visual graph of the branches and merges 173 | - `git log --pretty=format:" %s" --graph` displays the info demanded ("%h" prints the 174 | commit hash, "%s" prints the subject) in a visual way showing the branch and merge history. 175 | 176 | #### Creating aliases 177 | It is useful to create aliases for long commands (such as the `git log` ones). 178 | 179 | ```sh 180 | $ git config --global alias.[alias] "[command]" 181 | ``` 182 | - Creates an `[alias]` for the `[command]` we requested. 183 | 184 | > Example: running `git config --global alias.lg "log --oneline --decorate --all --graph"` 185 | > creates an alias `lg` that will be equivalent of `og --oneline --decorate --all --graph`. 186 | 187 | ### 1.3 Undoing commit statements 188 | 189 | #### Undoing commits on entire project 190 | 191 | ```sh 192 | $ git reset --soft 193 | ``` 194 | - Deletes the last commit statement saved on the local repository (the `.git/` folder), 195 | but saves the state on the Staging area (that way, if we run `git commit` again the 196 | local repository will be back to the point where it was before running `git reset`). 197 | 198 | ```sh 199 | $ git reset --hard 200 | ``` 201 | - Deletes the last commit statement saved on the local repository definitely, in a way 202 | that is not retrievable (this command should be run once we are completely sure we do 203 | not want to keep the last commit). 204 | 205 | #### Undoing commits on specific files 206 | ```sh 207 | $ git checkout -- [file] 208 | ``` 209 | - Discards the last changes done to `[file]`, and reverts its state back to where it was 210 | in the last `commit` statement. 211 | 212 | ```sh 213 | $ git checkout [hashcode] -- [file] 214 | ``` 215 | - Discards the last changes done to `[file]`, and reverts its state back to where it was 216 | in the `commit` statement corresponding to the `[hashcode]` entered. 217 | 218 | > Example: `git checkout 8ae67 -- my_file.txt` 219 | 220 | ```sh 221 | $ git reset --soft 222 | ``` 223 | - Deletes the last commit statement on the `.git/` folder, but keeps it in the staging area. 224 | 225 | ## 2. Working with Remote Repositories 226 | 227 | ### 2.0 Configuring 228 | Before working with any remote repository (in GitHub, for example), it can be useful to 229 | set up correctly the username and email of our account. GitHub will use this data to 230 | autheticate us every time we try to push changes to remote repositories. 231 | 232 | ```sh 233 | $ git config --global user.name [my-username] 234 | ``` 235 | - Sets up the username to 'my-username' 236 | 237 | ```sh 238 | $ git config --global user.email [my.email@example.com] 239 | ``` 240 | - Sets up the email 241 | 242 | ### 2.1 Cloning remote repositories 243 | 244 | Cloning makes a copy of a remote repository in a local file, and allows us to start 245 | working immediately. (It is similar to `git init`, but the contents are copied from 246 | a remote repository instead of being empty). 
```sh
$ git clone [URL]
```
- Creates a new folder in the current directory, and makes a local copy of the contents
  of the repository at the specified URL
- Options:
  - `git clone [URL] [folder_name]`: adding `[folder_name]` creates a new folder (named
    `[folder_name]`), and clones the content of the URL into it.

### 2.2 Adding/checking remote repositories

```sh
$ git remote
```
- lists the names of the remote repositories you have configured
- adding `-v` (verbose) will print the URLs of each remote repository.

> Note: `origin` is the default name for the 'base' project (the repository you cloned
> the project from).

```sh
$ git remote show [remote repository name]
```
- gives you info about the remote repository

```sh
$ git remote add [shortname] [URL]
```
- adds a new remote repository at the URL specified, with the shortname typed (the
  shortname can be changed)

> Example: `git remote add origin https://github....`

```sh
$ git remote rename [oldname] [newname]
```
- changes the name of the remote from the old to the new name


### 2.3 Pulling data from remote repositories

```sh
$ git fetch [remote repository name] [branchname]
```
- updates the local `.git/` folder with the data of a branch in the remote repository
  (GitHub). Only the `.git/` folder is changed, not the local working files.

> Example: `git fetch origin master`

```sh
$ git merge [remote repository name]/[branchname]
```
- takes the data from a branch in the local `.git/` folder and merges it into our local
  files in the workspace (merging might bring discrepancies, i.e. a line modified both
  in the `.git/` folder and locally; these must be resolved before the merge command
  can finish, see Paragraph 3).

> Example: `git merge origin/master`

```sh
$ git pull [repository name] [branchname]
```
- the pull command is the equivalent of fetch & merge. The changes on the remote
  repository are saved in the local `.git/` repository and merged onto our workspace.

> Example: `git pull origin master`

### 2.4 Pushing data upstream

```sh
$ git push [remote repository name] [branchname]
```
- "pushes" or updates the local data to the remote repository. This command will only
  work if we have access to the repository, and if there are no merge conflicts in the
  code.
- Options:
  - `-u`: (`git push -u ...`) creates a bond between 'origin/master' (local Git
    repository) and the virtual 'master' on GitHub. `-u` needs to be used only once to
    do the bonding.
  - `-f` (force-pushing): pushes changes ignoring any warnings or conflicts with the
    code in the remote repository (usually used after rebasing, use carefully).

> Note on force-pushing: in the cases where we are absolutely sure that we want to
> ignore the changes in the remote repository and push the code from our local files,
> we can add the `-f` flag to the push statement (`git push -f ...`). This will
> automatically update the remote repository with the changes on the local one,
> removing any conflicting parts in favor of the local data.
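Putting sections 2.2–2.4 together, a typical first interaction with a new remote might
look like this (the URL is a placeholder):

```sh
$ git remote add origin https://github.com/<user>/<repo>.git   # register the remote
$ git push -u origin master    # first push: also bonds local master to origin
$ git fetch origin master      # later: download new commits into .git/
$ git merge origin/master      # ...and merge them into the working files
```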
### 2.5 Tagging

```sh
$ git tag -a [tag] -m [tag message]
```
- sets up an annotated tag that will be associated with a specific commit

> Example: `git tag -a v1.1 -m 'This is the 1.1 version'`

## 3. Working with Branches

### 3.1 Creating and switching branches

Branching allows us to work on multiple tasks at the same time (see Paragraph 0).
This is a useful [link](https://git-scm.com/book/en/v1/Git-Branching-Basic-Branching-and-Merging) explaining branching.

```sh
$ git branch [newbranch]
```
- creates a new branch named 'newbranch' from the last commit of the current branch we
  are in. If we are not in `master` and want to make a branch FROM it, we must switch
  back to `master` (command: `git checkout master`) before running the above command
  and THEN create the new branch.

```sh
$ git branch -a
```
- Tells us the currently existing branch names (`-a` stands for 'all', and will output
  both local and remote branches)

```sh
$ git checkout [namebranch]
```
- Switches to the branch specified. The way Git handles it is by changing the pointer
  of the HEAD object, which means all commits done from now on will go on the
  branch specified (note that if we have uncommitted changes that clash with the branch
  we are switching to, Git will not allow us to do the switch; to sort that out, see
  the `git stash` command).

> Note: once we run `checkout`, the local files in our workspace will also be updated
> to match the contents of the branch we are switching to.

```sh
$ git checkout -b [newbranch]
```
- adding `-b` to the checkout command creates a new branch from the current branch AND
  changes the pointer to work on it

#### Sorting out switching branches issues

Sometimes, the state of our workspace (the local files) does not match the latest
commit statement (this happens when we modify our code without committing). At that
time, if we try switching branches (with the `checkout` command), Git will not allow us
to do so, to prevent losing data. At that point, we have two possibilities: committing
the local changes, or using the `stash` and `stash pop` commands, described below.

```sh
$ git stash
```
- takes a snapshot of the current state of the local workspace, saves it (making it
  recoverable with the `git stash pop` command), and reverts the state of the local
  files to match the last `commit` statement of the branch. Once the `git stash`
  command has been run, it will be possible to switch branches (since the local
  workspace matches the contents of the branch in the `.git/` folder).

```sh
$ git stash pop
```
- restores the state of the local workspace to what it was before running the
  `git stash` command.

### 3.2 Merging branches

#### Successful merges

```sh
$ git merge [branchname]
```
- merges the specified branch with the branch you are currently in.

```sh
$ git branch -d [branchname]
```
- This command deletes the branch (`-d`). This should be run after branches are merged.
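A small end-to-end sketch of the branch lifecycle described above (the `new-feature`
branch name is made up):

```sh
$ git checkout master          # start from the base branch
$ git checkout -b new-feature  # create the branch and switch to it
# ... edit files, git add, git commit ...
$ git checkout master          # go back to the base branch
$ git merge new-feature        # bring the feature in
$ git branch -d new-feature    # delete the branch once merged
```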
420 | 421 | #### Sorting out merge conflicts 422 | 423 | Merge conflicts occur when we merge code to a branch that has been changed since the 424 | last time we pulled code from it (for example, we create a `new-branch` from `master`, 425 | and someone modified `master` before we merged back the `new-branch` changes-). Not all 426 | changes generate merge conflicts, only the ones affecting the same lines of code we 427 | modified in our branch. Note that 428 | **merging will be blocked until we resolve the merge conflict**. 429 | 430 | There are many ways to sort out Merge conflicts, the easiest is using opening our text 431 | editor and manually selecting the lines of code we want to keep. A merge conflict looks 432 | like this: 433 | 434 | ```python 435 | # This code is unaffected by the merge conflict 436 | 437 | <<<<<< HEAD 438 | # These lines were modified since 439 | # the last time we pulled from the branch 440 | 441 | ======= 442 | # These lines are the ones we modified 443 | # that are conflicting with the lines above 444 | 445 | >>>>>> new-branch 446 | ``` 447 | 448 | - To sort out the merge conflict, we must select the lines that we wish to keep (we 449 | can select parts of each side), and delete the `<<<<<< HEAD`, `=======` and 450 | `>>>>>> new-branch` operators. 451 | 452 | Additional [resource](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/resolving-a-merge-conflict-using-the-command-line) on sorting out Merge conflicts. 453 | 454 | ### 3.3 Rebasing 455 | 456 | Rebase is a command used to prevent merge conflicts when pushing code to the remote 457 | repository. Conceptually, what `rebase` does is to take the changes done to the base 458 | branch (usually `master`), and add them to the branch we are currently working on. 459 | 460 | ```sh 461 | $ git rebase [base-branch] 462 | ``` 463 | - takes the last changes of the `[base-branch]` (usually `master`) and adds it to all 464 | commits of the current branch, making it look as if the current branch was created 465 | after the last commit of `[base-branch]`. Running this command will tell us if there 466 | are merge conflicts. (After this, a PR can be issued.) 467 | - Options: 468 | - `-i` (interactive): runs `rebase` in interactive mode, will open a file that will 469 | enable us to define the action to do at each step of the rebase (each commit is a 470 | step). In interactive mode, commits can be reordered (by simply deleting a row and 471 | copy-pasting it - careful with merge conflicts), renamed (change the `pick` keyword 472 | by `reword`), or squashed. Example: `git rebase -i ` 473 | 474 | **Example of `rebase` usage**: we create a `new-branch` from `master` to start working 475 | on a new feature and we start working on it. We commit two times (commits `(b)` and 476 | `(c)`). After some time, other members of the team have updated the branch `master` 477 | since we created `new-branch` (commits `(d)` and `(e)`): `master` has some changes that 478 | `new-branch` does not have. The commit tree looks as shown below: 479 | 480 | ```text 481 | --- (a) --- (d) --- (e) master 482 | \ 483 | (b) --- (c) new-branch 484 | ``` 485 | 486 | Now, if we run `rebase`, we will add the changes made to master to our own branch to 487 | all commits done to `new-branch` as if we created `new-branch` from point `(e)` instead 488 | of point `(a)` (in a sense, it is like 'faking reality' to make it look as if the 489 | branch was created at the current point in time). 
After running `rebase`, both `(b)` 490 | and `(c)` commits are modified considering the changes made to `master`, and the commit 491 | tree will look as shown below: 492 | 493 | ```text 494 | --- (a) --- (d) --- (e) master 495 | \ 496 | (b') --- (c') new-branch 497 | ``` 498 | Note that all merge conflicts coming from rebasing would have to be sort out manually as 499 | well. 500 | -------------------------------------------------------------------------------- /0-basic-concepts/request.txt: -------------------------------------------------------------------------------- 1 | IncomingMessage { 2 | _readableState: 3 | ReadableState { 4 | objectMode: false, 5 | highWaterMark: 16384, 6 | buffer: BufferList { head: null, tail: null, length: 0 }, 7 | length: 0, 8 | pipes: null, 9 | pipesCount: 0, 10 | flowing: null, 11 | ended: false, 12 | endEmitted: false, 13 | reading: false, 14 | sync: true, 15 | needReadable: false, 16 | emittedReadable: false, 17 | readableListening: false, 18 | resumeScheduled: false, 19 | paused: true, 20 | emitClose: true, 21 | autoDestroy: false, 22 | destroyed: false, 23 | defaultEncoding: 'utf8', 24 | awaitDrain: 0, 25 | readingMore: true, 26 | decoder: null, 27 | encoding: null }, 28 | readable: true, 29 | _events: 30 | [Object: null prototype] { end: [Function: resetHeadersTimeoutOnReqEnd] }, 31 | _eventsCount: 1, 32 | _maxListeners: undefined, 33 | socket: 34 | Socket { 35 | connecting: false, 36 | _hadError: false, 37 | _handle: 38 | TCP { 39 | reading: true, 40 | onread: [Function: onStreamRead], 41 | onconnection: null, 42 | _consumed: true, 43 | [Symbol(owner)]: [Circular] }, 44 | _parent: null, 45 | _host: null, 46 | _readableState: 47 | ReadableState { 48 | objectMode: false, 49 | highWaterMark: 16384, 50 | buffer: BufferList { head: null, tail: null, length: 0 }, 51 | length: 0, 52 | pipes: null, 53 | pipesCount: 0, 54 | flowing: true, 55 | ended: false, 56 | endEmitted: false, 57 | reading: true, 58 | sync: false, 59 | needReadable: true, 60 | emittedReadable: false, 61 | readableListening: false, 62 | resumeScheduled: false, 63 | paused: false, 64 | emitClose: false, 65 | autoDestroy: false, 66 | destroyed: false, 67 | defaultEncoding: 'utf8', 68 | awaitDrain: 0, 69 | readingMore: false, 70 | decoder: null, 71 | encoding: null }, 72 | readable: true, 73 | _events: 74 | [Object: null prototype] { 75 | end: [Array], 76 | drain: [Array], 77 | timeout: [Function: socketOnTimeout], 78 | data: [Function: bound socketOnData], 79 | error: [Function: socketOnError], 80 | close: [Array], 81 | resume: [Function: onSocketResume], 82 | pause: [Function: onSocketPause] }, 83 | _eventsCount: 8, 84 | _maxListeners: undefined, 85 | _writableState: 86 | WritableState { 87 | objectMode: false, 88 | highWaterMark: 16384, 89 | finalCalled: false, 90 | needDrain: false, 91 | ending: false, 92 | ended: false, 93 | finished: false, 94 | destroyed: false, 95 | decodeStrings: false, 96 | defaultEncoding: 'utf8', 97 | length: 0, 98 | writing: false, 99 | corked: 0, 100 | sync: true, 101 | bufferProcessing: false, 102 | onwrite: [Function: bound onwrite], 103 | writecb: null, 104 | writelen: 0, 105 | bufferedRequest: null, 106 | lastBufferedRequest: null, 107 | pendingcb: 0, 108 | prefinished: false, 109 | errorEmitted: false, 110 | emitClose: false, 111 | autoDestroy: false, 112 | bufferedRequestCount: 0, 113 | corkedRequestsFree: [Object] }, 114 | writable: true, 115 | allowHalfOpen: true, 116 | _sockname: null, 117 | _pendingData: null, 118 | _pendingEncoding: '', 119 | server: 
120 | Server { 121 | _events: [Object], 122 | _eventsCount: 2, 123 | _maxListeners: undefined, 124 | _connections: 2, 125 | _handle: [TCP], 126 | _usingWorkers: false, 127 | _workers: [], 128 | _unref: false, 129 | allowHalfOpen: true, 130 | pauseOnConnect: false, 131 | httpAllowHalfOpen: false, 132 | timeout: 120000, 133 | keepAliveTimeout: 5000, 134 | maxHeadersCount: null, 135 | headersTimeout: 40000, 136 | _connectionKey: '6::::3000', 137 | [Symbol(IncomingMessage)]: [Function], 138 | [Symbol(ServerResponse)]: [Function], 139 | [Symbol(asyncId)]: 5 }, 140 | _server: 141 | Server { 142 | _events: [Object], 143 | _eventsCount: 2, 144 | _maxListeners: undefined, 145 | _connections: 2, 146 | _handle: [TCP], 147 | _usingWorkers: false, 148 | _workers: [], 149 | _unref: false, 150 | allowHalfOpen: true, 151 | pauseOnConnect: false, 152 | httpAllowHalfOpen: false, 153 | timeout: 120000, 154 | keepAliveTimeout: 5000, 155 | maxHeadersCount: null, 156 | headersTimeout: 40000, 157 | _connectionKey: '6::::3000', 158 | [Symbol(IncomingMessage)]: [Function], 159 | [Symbol(ServerResponse)]: [Function], 160 | [Symbol(asyncId)]: 5 }, 161 | timeout: 120000, 162 | parser: 163 | HTTPParser { 164 | '0': [Function: parserOnHeaders], 165 | '1': [Function: parserOnHeadersComplete], 166 | '2': [Function: parserOnBody], 167 | '3': [Function: parserOnMessageComplete], 168 | '4': [Function: bound onParserExecute], 169 | _headers: [], 170 | _url: '', 171 | socket: [Circular], 172 | incoming: [Circular], 173 | outgoing: null, 174 | maxHeaderPairs: 2000, 175 | _consumed: true, 176 | onIncoming: [Function: bound parserOnIncoming], 177 | parsingHeadersStart: 0, 178 | [Symbol(isReused)]: false }, 179 | on: [Function: socketOnWrap], 180 | _paused: false, 181 | _httpMessage: 182 | ServerResponse { 183 | _events: [Object], 184 | _eventsCount: 1, 185 | _maxListeners: undefined, 186 | output: [], 187 | outputEncodings: [], 188 | outputCallbacks: [], 189 | outputSize: 0, 190 | writable: true, 191 | _last: false, 192 | chunkedEncoding: false, 193 | shouldKeepAlive: true, 194 | useChunkedEncodingByDefault: true, 195 | sendDate: true, 196 | _removedConnection: false, 197 | _removedContLen: false, 198 | _removedTE: false, 199 | _contentLength: null, 200 | _hasBody: true, 201 | _trailer: '', 202 | finished: false, 203 | _headerSent: false, 204 | socket: [Circular], 205 | connection: [Circular], 206 | _header: null, 207 | _onPendingData: [Function: bound updateOutgoingData], 208 | _sent100: false, 209 | _expect_continue: false, 210 | [Symbol(isCorked)]: false, 211 | [Symbol(outHeadersKey)]: null }, 212 | [Symbol(asyncId)]: 7, 213 | [Symbol(lastWriteQueueSize)]: 0, 214 | [Symbol(timeout)]: 215 | Timeout { 216 | _called: false, 217 | _idleTimeout: 120000, 218 | _idlePrev: [Timeout], 219 | _idleNext: [TimersList], 220 | _idleStart: 8095, 221 | _onTimeout: [Function: bound ], 222 | _timerArgs: undefined, 223 | _repeat: null, 224 | _destroyed: false, 225 | [Symbol(unrefed)]: true, 226 | [Symbol(asyncId)]: 8, 227 | [Symbol(triggerId)]: 7 }, 228 | [Symbol(kBytesRead)]: 0, 229 | [Symbol(kBytesWritten)]: 0 }, 230 | connection: 231 | Socket { 232 | connecting: false, 233 | _hadError: false, 234 | _handle: 235 | TCP { 236 | reading: true, 237 | onread: [Function: onStreamRead], 238 | onconnection: null, 239 | _consumed: true, 240 | [Symbol(owner)]: [Circular] }, 241 | _parent: null, 242 | _host: null, 243 | _readableState: 244 | ReadableState { 245 | objectMode: false, 246 | highWaterMark: 16384, 247 | buffer: BufferList { head: null, tail: 
null, length: 0 }, 248 | length: 0, 249 | pipes: null, 250 | pipesCount: 0, 251 | flowing: true, 252 | ended: false, 253 | endEmitted: false, 254 | reading: true, 255 | sync: false, 256 | needReadable: true, 257 | emittedReadable: false, 258 | readableListening: false, 259 | resumeScheduled: false, 260 | paused: false, 261 | emitClose: false, 262 | autoDestroy: false, 263 | destroyed: false, 264 | defaultEncoding: 'utf8', 265 | awaitDrain: 0, 266 | readingMore: false, 267 | decoder: null, 268 | encoding: null }, 269 | readable: true, 270 | _events: 271 | [Object: null prototype] { 272 | end: [Array], 273 | drain: [Array], 274 | timeout: [Function: socketOnTimeout], 275 | data: [Function: bound socketOnData], 276 | error: [Function: socketOnError], 277 | close: [Array], 278 | resume: [Function: onSocketResume], 279 | pause: [Function: onSocketPause] }, 280 | _eventsCount: 8, 281 | _maxListeners: undefined, 282 | _writableState: 283 | WritableState { 284 | objectMode: false, 285 | highWaterMark: 16384, 286 | finalCalled: false, 287 | needDrain: false, 288 | ending: false, 289 | ended: false, 290 | finished: false, 291 | destroyed: false, 292 | decodeStrings: false, 293 | defaultEncoding: 'utf8', 294 | length: 0, 295 | writing: false, 296 | corked: 0, 297 | sync: true, 298 | bufferProcessing: false, 299 | onwrite: [Function: bound onwrite], 300 | writecb: null, 301 | writelen: 0, 302 | bufferedRequest: null, 303 | lastBufferedRequest: null, 304 | pendingcb: 0, 305 | prefinished: false, 306 | errorEmitted: false, 307 | emitClose: false, 308 | autoDestroy: false, 309 | bufferedRequestCount: 0, 310 | corkedRequestsFree: [Object] }, 311 | writable: true, 312 | allowHalfOpen: true, 313 | _sockname: null, 314 | _pendingData: null, 315 | _pendingEncoding: '', 316 | server: 317 | Server { 318 | _events: [Object], 319 | _eventsCount: 2, 320 | _maxListeners: undefined, 321 | _connections: 2, 322 | _handle: [TCP], 323 | _usingWorkers: false, 324 | _workers: [], 325 | _unref: false, 326 | allowHalfOpen: true, 327 | pauseOnConnect: false, 328 | httpAllowHalfOpen: false, 329 | timeout: 120000, 330 | keepAliveTimeout: 5000, 331 | maxHeadersCount: null, 332 | headersTimeout: 40000, 333 | _connectionKey: '6::::3000', 334 | [Symbol(IncomingMessage)]: [Function], 335 | [Symbol(ServerResponse)]: [Function], 336 | [Symbol(asyncId)]: 5 }, 337 | _server: 338 | Server { 339 | _events: [Object], 340 | _eventsCount: 2, 341 | _maxListeners: undefined, 342 | _connections: 2, 343 | _handle: [TCP], 344 | _usingWorkers: false, 345 | _workers: [], 346 | _unref: false, 347 | allowHalfOpen: true, 348 | pauseOnConnect: false, 349 | httpAllowHalfOpen: false, 350 | timeout: 120000, 351 | keepAliveTimeout: 5000, 352 | maxHeadersCount: null, 353 | headersTimeout: 40000, 354 | _connectionKey: '6::::3000', 355 | [Symbol(IncomingMessage)]: [Function], 356 | [Symbol(ServerResponse)]: [Function], 357 | [Symbol(asyncId)]: 5 }, 358 | timeout: 120000, 359 | parser: 360 | HTTPParser { 361 | '0': [Function: parserOnHeaders], 362 | '1': [Function: parserOnHeadersComplete], 363 | '2': [Function: parserOnBody], 364 | '3': [Function: parserOnMessageComplete], 365 | '4': [Function: bound onParserExecute], 366 | _headers: [], 367 | _url: '', 368 | socket: [Circular], 369 | incoming: [Circular], 370 | outgoing: null, 371 | maxHeaderPairs: 2000, 372 | _consumed: true, 373 | onIncoming: [Function: bound parserOnIncoming], 374 | parsingHeadersStart: 0, 375 | [Symbol(isReused)]: false }, 376 | on: [Function: socketOnWrap], 377 | _paused: false, 378 | 
_httpMessage: 379 | ServerResponse { 380 | _events: [Object], 381 | _eventsCount: 1, 382 | _maxListeners: undefined, 383 | output: [], 384 | outputEncodings: [], 385 | outputCallbacks: [], 386 | outputSize: 0, 387 | writable: true, 388 | _last: false, 389 | chunkedEncoding: false, 390 | shouldKeepAlive: true, 391 | useChunkedEncodingByDefault: true, 392 | sendDate: true, 393 | _removedConnection: false, 394 | _removedContLen: false, 395 | _removedTE: false, 396 | _contentLength: null, 397 | _hasBody: true, 398 | _trailer: '', 399 | finished: false, 400 | _headerSent: false, 401 | socket: [Circular], 402 | connection: [Circular], 403 | _header: null, 404 | _onPendingData: [Function: bound updateOutgoingData], 405 | _sent100: false, 406 | _expect_continue: false, 407 | [Symbol(isCorked)]: false, 408 | [Symbol(outHeadersKey)]: null }, 409 | [Symbol(asyncId)]: 7, 410 | [Symbol(lastWriteQueueSize)]: 0, 411 | [Symbol(timeout)]: 412 | Timeout { 413 | _called: false, 414 | _idleTimeout: 120000, 415 | _idlePrev: [Timeout], 416 | _idleNext: [TimersList], 417 | _idleStart: 8095, 418 | _onTimeout: [Function: bound ], 419 | _timerArgs: undefined, 420 | _repeat: null, 421 | _destroyed: false, 422 | [Symbol(unrefed)]: true, 423 | [Symbol(asyncId)]: 8, 424 | [Symbol(triggerId)]: 7 }, 425 | [Symbol(kBytesRead)]: 0, 426 | [Symbol(kBytesWritten)]: 0 }, 427 | httpVersionMajor: 1, 428 | httpVersionMinor: 1, 429 | httpVersion: '1.1', 430 | complete: false, 431 | headers: 432 | { host: 'localhost:3000', 433 | connection: 'keep-alive', 434 | 'upgrade-insecure-requests': '1', 435 | 'user-agent': 436 | 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36', 437 | 'sec-fetch-user': '?1', 438 | accept: 439 | 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 440 | purpose: 'prefetch', 441 | 'sec-fetch-site': 'none', 442 | 'sec-fetch-mode': 'navigate', 443 | 'accept-encoding': 'gzip, deflate, br', 444 | 'accept-language': 'en-US,en;q=0.9,ca;q=0.8,es;q=0.7,it;q=0.6' }, 445 | rawHeaders: 446 | [ 'Host', 447 | 'localhost:3000', 448 | 'Connection', 449 | 'keep-alive', 450 | 'Upgrade-Insecure-Requests', 451 | '1', 452 | 'User-Agent', 453 | 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36', 454 | 'Sec-Fetch-User', 455 | '?1', 456 | 'Accept', 457 | 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 458 | 'Purpose', 459 | 'prefetch', 460 | 'Sec-Fetch-Site', 461 | 'none', 462 | 'Sec-Fetch-Mode', 463 | 'navigate', 464 | 'Accept-Encoding', 465 | 'gzip, deflate, br', 466 | 'Accept-Language', 467 | 'en-US,en;q=0.9,ca;q=0.8,es;q=0.7,it;q=0.6' ], 468 | trailers: {}, 469 | rawTrailers: [], 470 | aborted: false, 471 | upgrade: false, 472 | url: '/', 473 | method: 'GET', 474 | statusCode: null, 475 | statusMessage: null, 476 | client: 477 | Socket { 478 | connecting: false, 479 | _hadError: false, 480 | _handle: 481 | TCP { 482 | reading: true, 483 | onread: [Function: onStreamRead], 484 | onconnection: null, 485 | _consumed: true, 486 | [Symbol(owner)]: [Circular] }, 487 | _parent: null, 488 | _host: null, 489 | _readableState: 490 | ReadableState { 491 | objectMode: false, 492 | highWaterMark: 16384, 493 | buffer: BufferList { head: null, tail: null, length: 0 }, 494 | length: 0, 495 | pipes: null, 496 | pipesCount: 0, 497 | flowing: true, 
498 | ended: false, 499 | endEmitted: false, 500 | reading: true, 501 | sync: false, 502 | needReadable: true, 503 | emittedReadable: false, 504 | readableListening: false, 505 | resumeScheduled: false, 506 | paused: false, 507 | emitClose: false, 508 | autoDestroy: false, 509 | destroyed: false, 510 | defaultEncoding: 'utf8', 511 | awaitDrain: 0, 512 | readingMore: false, 513 | decoder: null, 514 | encoding: null }, 515 | readable: true, 516 | _events: 517 | [Object: null prototype] { 518 | end: [Array], 519 | drain: [Array], 520 | timeout: [Function: socketOnTimeout], 521 | data: [Function: bound socketOnData], 522 | error: [Function: socketOnError], 523 | close: [Array], 524 | resume: [Function: onSocketResume], 525 | pause: [Function: onSocketPause] }, 526 | _eventsCount: 8, 527 | _maxListeners: undefined, 528 | _writableState: 529 | WritableState { 530 | objectMode: false, 531 | highWaterMark: 16384, 532 | finalCalled: false, 533 | needDrain: false, 534 | ending: false, 535 | ended: false, 536 | finished: false, 537 | destroyed: false, 538 | decodeStrings: false, 539 | defaultEncoding: 'utf8', 540 | length: 0, 541 | writing: false, 542 | corked: 0, 543 | sync: true, 544 | bufferProcessing: false, 545 | onwrite: [Function: bound onwrite], 546 | writecb: null, 547 | writelen: 0, 548 | bufferedRequest: null, 549 | lastBufferedRequest: null, 550 | pendingcb: 0, 551 | prefinished: false, 552 | errorEmitted: false, 553 | emitClose: false, 554 | autoDestroy: false, 555 | bufferedRequestCount: 0, 556 | corkedRequestsFree: [Object] }, 557 | writable: true, 558 | allowHalfOpen: true, 559 | _sockname: null, 560 | _pendingData: null, 561 | _pendingEncoding: '', 562 | server: 563 | Server { 564 | _events: [Object], 565 | _eventsCount: 2, 566 | _maxListeners: undefined, 567 | _connections: 2, 568 | _handle: [TCP], 569 | _usingWorkers: false, 570 | _workers: [], 571 | _unref: false, 572 | allowHalfOpen: true, 573 | pauseOnConnect: false, 574 | httpAllowHalfOpen: false, 575 | timeout: 120000, 576 | keepAliveTimeout: 5000, 577 | maxHeadersCount: null, 578 | headersTimeout: 40000, 579 | _connectionKey: '6::::3000', 580 | [Symbol(IncomingMessage)]: [Function], 581 | [Symbol(ServerResponse)]: [Function], 582 | [Symbol(asyncId)]: 5 }, 583 | _server: 584 | Server { 585 | _events: [Object], 586 | _eventsCount: 2, 587 | _maxListeners: undefined, 588 | _connections: 2, 589 | _handle: [TCP], 590 | _usingWorkers: false, 591 | _workers: [], 592 | _unref: false, 593 | allowHalfOpen: true, 594 | pauseOnConnect: false, 595 | httpAllowHalfOpen: false, 596 | timeout: 120000, 597 | keepAliveTimeout: 5000, 598 | maxHeadersCount: null, 599 | headersTimeout: 40000, 600 | _connectionKey: '6::::3000', 601 | [Symbol(IncomingMessage)]: [Function], 602 | [Symbol(ServerResponse)]: [Function], 603 | [Symbol(asyncId)]: 5 }, 604 | timeout: 120000, 605 | parser: 606 | HTTPParser { 607 | '0': [Function: parserOnHeaders], 608 | '1': [Function: parserOnHeadersComplete], 609 | '2': [Function: parserOnBody], 610 | '3': [Function: parserOnMessageComplete], 611 | '4': [Function: bound onParserExecute], 612 | _headers: [], 613 | _url: '', 614 | socket: [Circular], 615 | incoming: [Circular], 616 | outgoing: null, 617 | maxHeaderPairs: 2000, 618 | _consumed: true, 619 | onIncoming: [Function: bound parserOnIncoming], 620 | parsingHeadersStart: 0, 621 | [Symbol(isReused)]: false }, 622 | on: [Function: socketOnWrap], 623 | _paused: false, 624 | _httpMessage: 625 | ServerResponse { 626 | _events: [Object], 627 | _eventsCount: 1, 628 | 
_maxListeners: undefined, 629 | output: [], 630 | outputEncodings: [], 631 | outputCallbacks: [], 632 | outputSize: 0, 633 | writable: true, 634 | _last: false, 635 | chunkedEncoding: false, 636 | shouldKeepAlive: true, 637 | useChunkedEncodingByDefault: true, 638 | sendDate: true, 639 | _removedConnection: false, 640 | _removedContLen: false, 641 | _removedTE: false, 642 | _contentLength: null, 643 | _hasBody: true, 644 | _trailer: '', 645 | finished: false, 646 | _headerSent: false, 647 | socket: [Circular], 648 | connection: [Circular], 649 | _header: null, 650 | _onPendingData: [Function: bound updateOutgoingData], 651 | _sent100: false, 652 | _expect_continue: false, 653 | [Symbol(isCorked)]: false, 654 | [Symbol(outHeadersKey)]: null }, 655 | [Symbol(asyncId)]: 7, 656 | [Symbol(lastWriteQueueSize)]: 0, 657 | [Symbol(timeout)]: 658 | Timeout { 659 | _called: false, 660 | _idleTimeout: 120000, 661 | _idlePrev: [Timeout], 662 | _idleNext: [TimersList], 663 | _idleStart: 8095, 664 | _onTimeout: [Function: bound ], 665 | _timerArgs: undefined, 666 | _repeat: null, 667 | _destroyed: false, 668 | [Symbol(unrefed)]: true, 669 | [Symbol(asyncId)]: 8, 670 | [Symbol(triggerId)]: 7 }, 671 | [Symbol(kBytesRead)]: 0, 672 | [Symbol(kBytesWritten)]: 0 }, 673 | _consuming: false, 674 | _dumped: false } 675 | --------------------------------------------------------------------------------
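
The object dump above (the contents of `request.txt`) appears to be the output of logging a raw `IncomingMessage` from Node.js's built-in `http` module: it shows the request, its underlying `Socket`, the `Server`, and the `HTTPParser`, with the browser's headers (`host: 'localhost:3000'`, `accept-language`, etc.) near the end. As an illustration only — it is an assumption that the file was generated this way — a minimal server of roughly the following shape would print a similar structure when a browser requests `http://localhost:3000/`:

```typescript
// Minimal sketch (assumption): a server of roughly this shape could produce
// a dump like request.txt by logging the raw request object.
import * as http from "http";

const server = http.createServer((req, res) => {
  // console.log on an IncomingMessage prints the object graph seen above:
  // the request, its underlying Socket, the Server, the HTTPParser, etc.
  console.log(req);
  res.end("request logged\n");
});

// Port 3000 matches the `_connectionKey: '6::::3000'` visible in the dump.
server.listen(3000);
```

The point of keeping such a dump in the course material is to show how much state sits behind a single HTTP request object before a framework exposes it through a friendlier API.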