├── LICENSE ├── README.md ├── appspec.yml ├── backend ├── .ebextensions │ ├── django.config │ └── setup.config ├── .gitignore ├── Pipfile ├── Pipfile.lock ├── README.md ├── beanstalk_deploy.zip ├── conduit │ ├── __init__.py │ ├── apps │ │ ├── __init__.py │ │ ├── articles │ │ │ ├── __init__.py │ │ │ ├── migrations │ │ │ │ ├── 0001_initial.py │ │ │ │ ├── 0002_comment.py │ │ │ │ ├── 0003_auto_20160828_1656.py │ │ │ │ └── __init__.py │ │ │ ├── models.py │ │ │ ├── relations.py │ │ │ ├── renderers.py │ │ │ ├── serializers.py │ │ │ ├── signals.py │ │ │ ├── urls.py │ │ │ └── views.py │ │ ├── authentication │ │ │ ├── __init__.py │ │ │ ├── backends.py │ │ │ ├── migrations │ │ │ │ ├── 0001_initial.py │ │ │ │ └── __init__.py │ │ │ ├── models.py │ │ │ ├── renderers.py │ │ │ ├── serializers.py │ │ │ ├── signals.py │ │ │ ├── urls.py │ │ │ └── views.py │ │ ├── core │ │ │ ├── __init__.py │ │ │ ├── exceptions.py │ │ │ ├── models.py │ │ │ ├── renderers.py │ │ │ └── utils.py │ │ └── profiles │ │ │ ├── __init__.py │ │ │ ├── exceptions.py │ │ │ ├── migrations │ │ │ ├── 0001_initial.py │ │ │ ├── 0002_profile_follows.py │ │ │ ├── 0003_profile_favorites.py │ │ │ └── __init__.py │ │ │ ├── models.py │ │ │ ├── renderers.py │ │ │ ├── serializers.py │ │ │ ├── urls.py │ │ │ └── views.py │ ├── settings │ │ ├── __init__.py │ │ ├── defaults.py │ │ ├── docker.py │ │ └── ec2.py │ ├── urls.py │ └── wsgi.py ├── manage.py ├── project-logo.png └── requirements.txt ├── buildspec.frontend.yml ├── frontend ├── .gitignore ├── README.md ├── package-lock.json ├── package.json ├── project-logo.png ├── public │ ├── favicon.ico │ └── index.html └── src │ ├── agent.js │ ├── components │ ├── App.js │ ├── Article │ │ ├── ArticleActions.js │ │ ├── ArticleMeta.js │ │ ├── Comment.js │ │ ├── CommentContainer.js │ │ ├── CommentInput.js │ │ ├── CommentList.js │ │ ├── DeleteButton.js │ │ └── index.js │ ├── ArticleList.js │ ├── ArticlePreview.js │ ├── Editor.js │ ├── Header.js │ ├── Home │ │ ├── Banner.js │ │ ├── MainView.js │ │ ├── Tags.js │ │ └── index.js │ ├── ListErrors.js │ ├── ListPagination.js │ ├── Login.js │ ├── Profile.js │ ├── ProfileFavorites.js │ ├── Register.js │ └── Settings.js │ ├── constants │ └── actionTypes.js │ ├── index.js │ ├── middleware.js │ ├── reducer.js │ ├── reducers │ ├── article.js │ ├── articleList.js │ ├── auth.js │ ├── common.js │ ├── editor.js │ ├── home.js │ ├── profile.js │ └── settings.js │ └── store.js ├── infrastructure ├── .gitignore ├── aws │ ├── codedeploy │ │ ├── after_install │ │ ├── gunicorn.ec2.conf │ │ ├── install_dependencies │ │ ├── start_server │ │ ├── stop_server │ │ └── validate_service │ └── lambda │ │ ├── lambda_function.py │ │ ├── psycopg2 │ │ ├── .DS_Store │ │ ├── __init__.py │ │ ├── __pycache__ │ │ │ └── __init__.cpython-37.pyc │ │ ├── _ipaddress.py │ │ ├── _json.py │ │ ├── _psycopg.so │ │ ├── _range.py │ │ ├── errorcodes.py │ │ ├── extensions.py │ │ ├── extras.py │ │ ├── pool.py │ │ ├── psycopg1.py │ │ ├── sql.py │ │ └── tz.py │ │ └── serverless.yml └── docker │ ├── api │ ├── .env.template │ ├── Dockerfile │ └── gunicorn.docker.conf │ ├── db │ └── .env.template │ ├── django-admin │ ├── docker-compose.yml │ ├── nginx │ ├── .env.template │ ├── entrypoint │ ├── nginx.conf.template │ ├── sites-available.workshop.template │ └── sites-enabled.workshop.template │ └── web │ ├── .env.template │ └── Dockerfile └── workshop ├── beanstalk ├── 01-clean-up.md ├── 02-new-app-environment.md ├── 03-finish-integration.md ├── 04-conclusion.md ├── introduction.md └── troubleshooting.md ├── 
elb-auto-scaling-group ├── 01-load-balancer.md ├── 02-auto-scaling-group.md ├── 03-finishing-up.md └── introduction.md ├── s3-web-ec2-api-rds ├── 01-serve-website-from-s3.md ├── 02-EC2-instances.md ├── 03-RDS.md ├── 04-code-deploy.md ├── 05-finishing-up.md └── introduction.md ├── serverless ├── 01-serverless.md ├── 02-api-integration.md └── introduction.md ├── set-up-users.md └── vpc-subnets-bastion ├── 01-create-vpc.md ├── 02-internet-gateway.md ├── 03-nat-instance.md ├── 04-load-balancer.md ├── 05-RDS.md ├── 06-auto-scaling-group.md ├── 07-bastion.md ├── 08-finishing-up.md └── introduction.md /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Tryolabs 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # AWS Workshop 2 | 3 | This workshop aims to introduce the reader to managing infrastructure using [Amazon Web Services](https://aws.amazon.com/) (AWS). 4 | 5 | We will learn to deploy real applications. As our demo app, we will use an [open source](https://github.com/gothinkster/realworld) test application called Conduit, which is handy for learning new frameworks because the same app has backend and frontend implementations in multiple frameworks. In particular, we will use the version built with a [React](https://reactjs.org/) frontend and a [Django](https://www.djangoproject.com/) + [Django REST Framework](http://www.django-rest-framework.org/) backend. 6 | 7 | In this repo, you can find the backend and frontend components, both with modified settings to fit our future infrastructure. 8 | 9 | # Preconditions 10 | 11 | You must have an AWS account. Even though you will mostly stay within the free tier, some services like Elastic Load Balancers, Encryption Keys, and others **will be billed**. This means that you should be ready to spend a few dollars (less than 5 USD) to complete this workshop. 12 | 13 | If you want to, you can [set up a billing alarm](http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier-alarms.html) to catch unexpected charges early, just in case. 14 | 15 | > **TryoTip:** if you are doing this workshop as part of Tryolabs training, feel free to ask for access to the **Tryolabs Playground AWS account**. This way, you will not have to use your own credit card. 
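If you would rather script the billing alarm mentioned above than click through the console, the sketch below shows one way to do it with `boto3` (already a backend dependency). This is not part of the workshop code: it assumes you have enabled billing alerts on the account, it uses `us-east-1` because that is where AWS publishes the `EstimatedCharges` metric, and the SNS topic ARN is a placeholder you would replace with your own.

```python
import boto3

# Billing metrics are only published in us-east-1.
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='aws-workshop-billing-alarm',
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,                      # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5.0,                     # alert once estimated charges pass 5 USD
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts'],  # placeholder topic ARN
)
```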
16 | 17 | # Content 18 | 19 | This workshop contains the following sections: 20 | 21 | 1. [Set up users](/workshop/set-up-users.md). 22 | 2. [S3, RDS and EC2](/workshop/s3-web-ec2-api-rds/introduction.md). Here, you will serve the website from S3, store the backend data in RDS, and deploy the API on EC2. 23 | 3. [Load Balancer and Auto Scaling Group](/workshop/elb-auto-scaling-group/introduction.md). 24 | 4. [VPC configuration and Bastion instance](/workshop/vpc-subnets-bastion/introduction.md). Here, you will set up your own VPC with public and private subnets, modify your Auto Scaling Group and Load Balancer to work with those, and add a Bastion to access your API instances over SSH. 25 | 5. [Serverless](/workshop/serverless/introduction.md). Here, you will deploy a microservice that queries posts by title. To do that, you will create a [Lambda](https://aws.amazon.com/lambda/) function. 26 | 27 | --- 28 | 29 | **Next:** assuming you already have an AWS account, you can [get started](/workshop/set-up-users.md). 30 | -------------------------------------------------------------------------------- /appspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | os: linux 3 | files: 4 | - source: / 5 | destination: /deploy 6 | hooks: 7 | ApplicationStop: 8 | - location: infrastructure/aws/codedeploy/stop_server 9 | timeout: 300 10 | runas: root 11 | BeforeInstall: 12 | - location: infrastructure/aws/codedeploy/install_dependencies 13 | timeout: 300 14 | runas: root 15 | AfterInstall: 16 | - location: infrastructure/aws/codedeploy/after_install 17 | timeout: 300 18 | runas: root 19 | ApplicationStart: 20 | - location: infrastructure/aws/codedeploy/start_server 21 | timeout: 300 22 | runas: root 23 | ValidateService: 24 | - location: infrastructure/aws/codedeploy/validate_service 25 | timeout: 300 26 | runas: root 27 | -------------------------------------------------------------------------------- /backend/.ebextensions/django.config: -------------------------------------------------------------------------------- 1 | option_settings: 2 | aws:elasticbeanstalk:container:python: 3 | WSGIPath: conduit/wsgi.py 4 | aws:elasticbeanstalk:container:python:staticfiles: 5 | "/static/": "/var/www/conduit/static/" 6 | aws:elasticbeanstalk:application:environment: 7 | DJANGO_SETTINGS_MODULE: conduit.settings.ec2 8 | AWS_DEFAULT_REGION: us-east-1 9 | 10 | container_commands: 11 | 01_collectstatic: 12 | command: "django-admin.py collectstatic --noinput" 13 | 02_migrate: 14 | command: "django-admin.py migrate" 15 | leader_only: true 16 | 03_statics: 17 | command: mkdir -p /var/www/conduit/static/ 18 | 04_wsgipassauth: 19 | command: 'echo "WSGIPassAuthorization On" >> /etc/httpd/conf.d/wsgi.conf' 20 | -------------------------------------------------------------------------------- /backend/.ebextensions/setup.config: -------------------------------------------------------------------------------- 1 | commands: 2 | 00_create_dir: 3 | command: mkdir -p /var/log/django 4 | 01_change_permissions: 5 | command: chmod 0666 /var/log/django 6 | 02_change_owner: 7 | command: chown wsgi:wsgi /var/log/django 8 | -------------------------------------------------------------------------------- /backend/.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | 
.Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | local_settings.py 55 | 56 | # Flask stuff: 57 | instance/ 58 | .webassets-cache 59 | 60 | # Scrapy stuff: 61 | .scrapy 62 | 63 | # Sphinx documentation 64 | docs/_build/ 65 | 66 | # PyBuilder 67 | target/ 68 | 69 | # IPython Notebook 70 | .ipynb_checkpoints 71 | 72 | # pyenv 73 | .python-version 74 | 75 | # celery beat schedule file 76 | celerybeat-schedule 77 | 78 | # dotenv 79 | .env 80 | 81 | # virtualenv 82 | venv/ 83 | ENV/ 84 | 85 | # Spyder project settings 86 | .spyderproject 87 | 88 | # Rope project settings 89 | .ropeproject 90 | 91 | # SQLite3 92 | db.sqlite3 93 | -------------------------------------------------------------------------------- /backend/Pipfile: -------------------------------------------------------------------------------- 1 | [[source]] 2 | 3 | url = "https://pypi.python.org/simple" 4 | verify_ssl = true 5 | name = "pypi" 6 | 7 | 8 | [dev-packages] 9 | 10 | ipython = "*" 11 | 12 | 13 | [packages] 14 | 15 | django = "==1.10.5" 16 | django-cors-middleware = "==1.3.1" 17 | django-extensions = "==1.7.1" 18 | djangorestframework = "==3.9.1" 19 | pyjwt = "==1.4.2" 20 | gunicorn = "*" 21 | gevent = "*" 22 | "psycopg2" = "*" 23 | "boto3" = "*" 24 | 25 | 26 | [requires] 27 | 28 | python_version = "3" 29 | -------------------------------------------------------------------------------- /backend/README.md: -------------------------------------------------------------------------------- 1 | # ![Django DRF Example App](project-logo.png) 2 | 3 | > ### Example Django DRF codebase containing real world examples (CRUD, auth, advanced patterns, etc) that adheres to the [RealWorld](https://github.com/gothinkster/realworld-example-apps) API spec. 4 | 5 | 6 | 7 | This repo is functionality complete — PR's and issues welcome! 8 | 9 | ## Installation 10 | 11 | 1. Clone this repository: `git clone git@github.com:gothinkster/productionready-django-api.git`. 12 | 2. `cd` into `conduit-django`: `cd productionready-django-api`. 13 | 3. Install [pyenv](https://github.com/yyuu/pyenv#installation). 14 | 4. Install [pyenv-virtualenv](https://github.com/yyuu/pyenv-virtualenv#installation). 15 | 5. Install Python 3.5.2: `pyenv install 3.5.2`. 16 | 6. Create a new virtualenv called `productionready`: `pyenv virtualenv 3.5.2 productionready`. 17 | 7. Set the local virtualenv to `productionready`: `pyenv local productionready`. 18 | 8. Reload the `pyenv` environment: `pyenv rehash`. 19 | 20 | If all went well then your command line prompt should now start with `(productionready)`. 21 | 22 | If your command line prompt does not start with `(productionready)` at this point, try running `pyenv activate productionready` or `cd ../productionready-django-api`. 
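To double-check from inside Python itself, here is a quick sanity check (a sketch, assuming pyenv-virtualenv created the environment under `~/.pyenv/versions/productionready`):

```python
import sys

print(sys.version)  # should start with 3.5.2
print(sys.prefix)   # should end with .../versions/productionready
```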
23 | 24 | If pyenv is still not working, visit us in the Thinkster Slack channel so we can help you out. 25 | -------------------------------------------------------------------------------- /backend/beanstalk_deploy.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/beanstalk_deploy.zip -------------------------------------------------------------------------------- /backend/conduit/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/apps/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/articles/__init__.py: -------------------------------------------------------------------------------- 1 | from django.apps import AppConfig 2 | 3 | 4 | class ArticlesAppConfig(AppConfig): 5 | name = 'conduit.apps.articles' 6 | label = 'articles' 7 | verbose_name = 'Articles' 8 | 9 | def ready(self): 10 | import conduit.apps.articles.signals 11 | 12 | default_app_config = 'conduit.apps.articles.ArticlesAppConfig' 13 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/migrations/0001_initial.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-28 15:43 3 | from __future__ import unicode_literals 4 | 5 | from django.db import migrations, models 6 | import django.db.models.deletion 7 | 8 | 9 | class Migration(migrations.Migration): 10 | 11 | initial = True 12 | 13 | dependencies = [ 14 | ('profiles', '0001_initial'), 15 | ] 16 | 17 | operations = [ 18 | migrations.CreateModel( 19 | name='Article', 20 | fields=[ 21 | ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), 22 | ('created_at', models.DateTimeField(auto_now_add=True)), 23 | ('updated_at', models.DateTimeField(auto_now=True)), 24 | ('slug', models.SlugField(max_length=255, unique=True)), 25 | ('title', models.CharField(db_index=True, max_length=255)), 26 | ('description', models.TextField()), 27 | ('body', models.TextField()), 28 | ('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='articles', to='profiles.Profile')), 29 | ], 30 | options={ 31 | 'abstract': False, 32 | 'ordering': ['-created_at', '-updated_at'], 33 | }, 34 | ), 35 | ] 36 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/migrations/0002_comment.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-28 16:00 3 | from __future__ import unicode_literals 4 | 5 | from django.db import migrations, models 6 | import django.db.models.deletion 7 | 8 | 9 | class Migration(migrations.Migration): 10 | 11 | dependencies = [ 12 | ('profiles', '0001_initial'), 13 | ('articles', '0001_initial'), 14 | ] 15 | 16 | operations = [ 
17 | migrations.CreateModel( 18 | name='Comment', 19 | fields=[ 20 | ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), 21 | ('created_at', models.DateTimeField(auto_now_add=True)), 22 | ('updated_at', models.DateTimeField(auto_now=True)), 23 | ('body', models.TextField()), 24 | ('article', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='comments', to='articles.Article')), 25 | ('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='comments', to='profiles.Profile')), 26 | ], 27 | options={ 28 | 'ordering': ['-created_at', '-updated_at'], 29 | 'abstract': False, 30 | }, 31 | ), 32 | ] 33 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/migrations/0003_auto_20160828_1656.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-28 16:56 3 | from __future__ import unicode_literals 4 | 5 | from django.db import migrations, models 6 | 7 | 8 | class Migration(migrations.Migration): 9 | 10 | dependencies = [ 11 | ('articles', '0002_comment'), 12 | ] 13 | 14 | operations = [ 15 | migrations.CreateModel( 16 | name='Tag', 17 | fields=[ 18 | ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), 19 | ('created_at', models.DateTimeField(auto_now_add=True)), 20 | ('updated_at', models.DateTimeField(auto_now=True)), 21 | ('tag', models.CharField(max_length=255)), 22 | ('slug', models.SlugField(unique=True)), 23 | ], 24 | options={ 25 | 'ordering': ['-created_at', '-updated_at'], 26 | 'abstract': False, 27 | }, 28 | ), 29 | migrations.AddField( 30 | model_name='article', 31 | name='tags', 32 | field=models.ManyToManyField(related_name='articles', to='articles.Tag'), 33 | ), 34 | ] 35 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/migrations/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/apps/articles/migrations/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/articles/models.py: -------------------------------------------------------------------------------- 1 | from django.db import models 2 | 3 | from conduit.apps.core.models import TimestampedModel 4 | 5 | 6 | class Article(TimestampedModel): 7 | slug = models.SlugField(db_index=True, max_length=255, unique=True) 8 | title = models.CharField(db_index=True, max_length=255) 9 | 10 | description = models.TextField() 11 | body = models.TextField() 12 | 13 | # Every article must have an author. This will answer questions like "Who 14 | # gets credit for writing this article?" and "Who can edit this article?". 15 | # Unlike the `User` <-> `Profile` relationship, this is a simple foreign 16 | # key (or one-to-many) relationship. In this case, one `Profile` can have 17 | # many `Article`s. 
18 | author = models.ForeignKey( 19 | 'profiles.Profile', on_delete=models.CASCADE, related_name='articles' 20 | ) 21 | 22 | tags = models.ManyToManyField( 23 | 'articles.Tag', related_name='articles' 24 | ) 25 | 26 | def __str__(self): 27 | return self.title 28 | 29 | 30 | class Comment(TimestampedModel): 31 | body = models.TextField() 32 | 33 | article = models.ForeignKey( 34 | 'articles.Article', related_name='comments', on_delete=models.CASCADE 35 | ) 36 | 37 | author = models.ForeignKey( 38 | 'profiles.Profile', related_name='comments', on_delete=models.CASCADE 39 | ) 40 | 41 | 42 | class Tag(TimestampedModel): 43 | tag = models.CharField(max_length=255) 44 | slug = models.SlugField(db_index=True, unique=True) 45 | 46 | def __str__(self): 47 | return self.tag 48 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/relations.py: -------------------------------------------------------------------------------- 1 | from rest_framework import serializers 2 | 3 | from .models import Tag 4 | 5 | 6 | class TagRelatedField(serializers.RelatedField): 7 | def get_queryset(self): 8 | return Tag.objects.all() 9 | 10 | def to_internal_value(self, data): 11 | tag, created = Tag.objects.get_or_create(tag=data, slug=data.lower()) 12 | 13 | return tag 14 | 15 | def to_representation(self, value): 16 | return value.tag 17 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/renderers.py: -------------------------------------------------------------------------------- 1 | from conduit.apps.core.renderers import ConduitJSONRenderer 2 | 3 | 4 | class ArticleJSONRenderer(ConduitJSONRenderer): 5 | object_label = 'article' 6 | pagination_object_label = 'articles' 7 | pagination_count_label = 'articlesCount' 8 | 9 | 10 | class CommentJSONRenderer(ConduitJSONRenderer): 11 | object_label = 'comment' 12 | pagination_object_label = 'comments' 13 | pagination_count_label = 'commentsCount' 14 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/serializers.py: -------------------------------------------------------------------------------- 1 | from rest_framework import serializers 2 | 3 | from conduit.apps.profiles.serializers import ProfileSerializer 4 | 5 | from .models import Article, Comment, Tag 6 | from .relations import TagRelatedField 7 | 8 | 9 | class ArticleSerializer(serializers.ModelSerializer): 10 | author = ProfileSerializer(read_only=True) 11 | description = serializers.CharField(required=False) 12 | slug = serializers.SlugField(required=False) 13 | 14 | favorited = serializers.SerializerMethodField() 15 | favoritesCount = serializers.SerializerMethodField( 16 | method_name='get_favorites_count' 17 | ) 18 | 19 | tagList = TagRelatedField(many=True, required=False, source='tags') 20 | 21 | # Django REST Framework makes it possible to create a read-only field that 22 | # gets it's value by calling a function. In this case, the client expects 23 | # `created_at` to be called `createdAt` and `updated_at` to be `updatedAt`. 24 | # `serializers.SerializerMethodField` is a good way to avoid having the 25 | # requirements of the client leak into our API. 
26 | createdAt = serializers.SerializerMethodField(method_name='get_created_at') 27 | updatedAt = serializers.SerializerMethodField(method_name='get_updated_at') 28 | 29 | class Meta: 30 | model = Article 31 | fields = ( 32 | 'author', 33 | 'body', 34 | 'createdAt', 35 | 'description', 36 | 'favorited', 37 | 'favoritesCount', 38 | 'slug', 39 | 'tagList', 40 | 'title', 41 | 'updatedAt', 42 | ) 43 | 44 | def create(self, validated_data): 45 | author = self.context.get('author', None) 46 | 47 | tags = validated_data.pop('tags', []) 48 | 49 | article = Article.objects.create(author=author, **validated_data) 50 | 51 | for tag in tags: 52 | article.tags.add(tag) 53 | 54 | return article 55 | 56 | def get_created_at(self, instance): 57 | return instance.created_at.isoformat() 58 | 59 | def get_favorited(self, instance): 60 | request = self.context.get('request', None) 61 | 62 | if request is None: 63 | return False 64 | 65 | if not request.user.is_authenticated(): 66 | return False 67 | 68 | return request.user.profile.has_favorited(instance) 69 | 70 | def get_favorites_count(self, instance): 71 | return instance.favorited_by.count() 72 | 73 | def get_updated_at(self, instance): 74 | return instance.updated_at.isoformat() 75 | 76 | 77 | class CommentSerializer(serializers.ModelSerializer): 78 | author = ProfileSerializer(required=False) 79 | 80 | createdAt = serializers.SerializerMethodField(method_name='get_created_at') 81 | updatedAt = serializers.SerializerMethodField(method_name='get_updated_at') 82 | 83 | class Meta: 84 | model = Comment 85 | fields = ( 86 | 'id', 87 | 'author', 88 | 'body', 89 | 'createdAt', 90 | 'updatedAt', 91 | ) 92 | 93 | def create(self, validated_data): 94 | article = self.context['article'] 95 | author = self.context['author'] 96 | 97 | return Comment.objects.create( 98 | author=author, article=article, **validated_data 99 | ) 100 | 101 | def get_created_at(self, instance): 102 | return instance.created_at.isoformat() 103 | 104 | def get_updated_at(self, instance): 105 | return instance.updated_at.isoformat() 106 | 107 | 108 | class TagSerializer(serializers.ModelSerializer): 109 | class Meta: 110 | model = Tag 111 | fields = ('tag',) 112 | 113 | def to_representation(self, obj): 114 | return obj.tag 115 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/signals.py: -------------------------------------------------------------------------------- 1 | from django.db.models.signals import pre_save 2 | from django.dispatch import receiver 3 | from django.utils.text import slugify 4 | 5 | from conduit.apps.core.utils import generate_random_string 6 | 7 | from .models import Article 8 | 9 | @receiver(pre_save, sender=Article) 10 | def add_slug_to_article_if_not_exists(sender, instance, *args, **kwargs): 11 | MAXIMUM_SLUG_LENGTH = 255 12 | 13 | if instance and not instance.slug: 14 | slug = slugify(instance.title) 15 | unique = generate_random_string() 16 | 17 | if len(slug) > MAXIMUM_SLUG_LENGTH: 18 | slug = slug[:MAXIMUM_SLUG_LENGTH] 19 | 20 | while len(slug + '-' + unique) > MAXIMUM_SLUG_LENGTH: 21 | parts = slug.split('-') 22 | 23 | if len(parts) == 1: 24 | # The slug has no hyphens. To append the unique string we must 25 | # arbitrarily remove `len(unique)` characters from the end of 26 | # `slug`. Subtract one to account for the extra hyphen. 
27 | slug = slug[:MAXIMUM_SLUG_LENGTH - len(unique) - 1] 28 | else: 29 | slug = '-'.join(parts[:-1]) 30 | 31 | instance.slug = slug + '-' + unique 32 | -------------------------------------------------------------------------------- /backend/conduit/apps/articles/urls.py: -------------------------------------------------------------------------------- 1 | from django.conf.urls import include, url 2 | 3 | from rest_framework.routers import DefaultRouter 4 | 5 | from .views import ( 6 | ArticleViewSet, ArticlesFavoriteAPIView, ArticlesFeedAPIView, 7 | CommentsListCreateAPIView, CommentsDestroyAPIView, TagListAPIView 8 | ) 9 | 10 | router = DefaultRouter() 11 | router.register(r'articles', ArticleViewSet) 12 | 13 | urlpatterns = [ 14 | url(r'^articles/feed/?$', ArticlesFeedAPIView.as_view()), 15 | 16 | url(r'^articles/(?P<article_slug>[-\w]+)/favorite/?$', 17 | ArticlesFavoriteAPIView.as_view()), 18 | 19 | url(r'^articles/(?P<article_slug>[-\w]+)/comments/?$', 20 | CommentsListCreateAPIView.as_view()), 21 | 22 | url(r'^articles/(?P<article_slug>[-\w]+)/comments/(?P<comment_pk>[\d]+)/?$', 23 | CommentsDestroyAPIView.as_view()), 24 | 25 | url(r'^', include(router.urls)), 26 | 27 | url(r'^tags/?$', TagListAPIView.as_view()), 28 | ] 29 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/__init__.py: -------------------------------------------------------------------------------- 1 | from django.apps import AppConfig 2 | 3 | 4 | class AuthenticationAppConfig(AppConfig): 5 | name = 'conduit.apps.authentication' 6 | label = 'authentication' 7 | verbose_name = 'Authentication' 8 | 9 | def ready(self): 10 | import conduit.apps.authentication.signals 11 | 12 | # This is how we register our custom app config with Django. Django is smart 13 | # enough to look for the `default_app_config` property of each registered app 14 | # and use the correct app config based on that value. 15 | default_app_config = 'conduit.apps.authentication.AuthenticationAppConfig' 16 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/backends.py: -------------------------------------------------------------------------------- 1 | import jwt 2 | 3 | from django.conf import settings 4 | 5 | from rest_framework import authentication, exceptions 6 | 7 | from .models import User 8 | 9 | 10 | class JWTAuthentication(authentication.BaseAuthentication): 11 | authentication_header_prefix = 'Token' 12 | 13 | def authenticate(self, request): 14 | """ 15 | The `authenticate` method is called on every request, regardless of 16 | whether the endpoint requires authentication. 17 | 18 | `authenticate` has two possible return values: 19 | 20 | 1) `None` - We return `None` if we do not wish to authenticate. Usually 21 | this means we know authentication will fail. An example of 22 | this is when the request does not include a token in the 23 | headers. 24 | 25 | 2) `(user, token)` - We return a user/token combination when 26 | authentication was successful. 27 | 28 | If neither of these two cases were met, that means there was an error. 29 | In the event of an error, we do not return anything. We simply raise 30 | the `AuthenticationFailed` exception and let Django REST Framework 31 | handle the rest. 32 | """ 33 | request.user = None 34 | 35 | # `auth_header` should be an array with two elements: 1) the name of 36 | # the authentication header (in this case, "Token") and 2) the JWT 37 | # that we should authenticate against. 
38 | auth_header = authentication.get_authorization_header(request).split() 39 | auth_header_prefix = self.authentication_header_prefix.lower() 40 | 41 | if not auth_header: 42 | return None 43 | 44 | if len(auth_header) == 1: 45 | # Invalid token header. No credentials provided. Do not attempt to 46 | # authenticate. 47 | return None 48 | 49 | elif len(auth_header) > 2: 50 | # Invalid token header. Token string should not contain spaces. Do 51 | # not attempt to authenticate. 52 | return None 53 | 54 | # The JWT library we're using can't handle the `byte` type, which is 55 | # commonly used by standard libraries in Python 3. To get around this, 56 | # we simply have to decode `prefix` and `token`. This does not make for 57 | # clean code, but it is a good decision because we would get an error 58 | # if we didn't decode these values. 59 | prefix = auth_header[0].decode('utf-8') 60 | token = auth_header[1].decode('utf-8') 61 | 62 | if prefix.lower() != auth_header_prefix: 63 | # The auth header prefix is not what we expected. Do not attempt to 64 | # authenticate. 65 | return None 66 | 67 | # By now, we are sure there is a *chance* that authentication will 68 | # succeed. We delegate the actual credentials authentication to the 69 | # method below. 70 | return self._authenticate_credentials(request, token) 71 | 72 | def _authenticate_credentials(self, request, token): 73 | """ 74 | Try to authenticate the given credentials. If authentication is 75 | successful, return the user and token. If not, throw an error. 76 | """ 77 | try: 78 | payload = jwt.decode(token, settings.SECRET_KEY) 79 | except: 80 | msg = 'Invalid authentication. Could not decode token.' 81 | raise exceptions.AuthenticationFailed(msg) 82 | 83 | try: 84 | user = User.objects.get(pk=payload['id']) 85 | except User.DoesNotExist: 86 | msg = 'No user matching this token was found.' 87 | raise exceptions.AuthenticationFailed(msg) 88 | 89 | if not user.is_active: 90 | msg = 'This user has been deactivated.' 
91 | raise exceptions.AuthenticationFailed(msg) 92 | 93 | return (user, token) 94 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/migrations/0001_initial.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-15 00:13 3 | from __future__ import unicode_literals 4 | 5 | from django.db import migrations, models 6 | 7 | 8 | class Migration(migrations.Migration): 9 | 10 | initial = True 11 | 12 | dependencies = [ 13 | ('auth', '0008_alter_user_username_max_length'), 14 | ] 15 | 16 | operations = [ 17 | migrations.CreateModel( 18 | name='User', 19 | fields=[ 20 | ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), 21 | ('password', models.CharField(max_length=128, verbose_name='password')), 22 | ('last_login', models.DateTimeField(blank=True, null=True, verbose_name='last login')), 23 | ('is_superuser', models.BooleanField(default=False, help_text='Designates that this user has all permissions without explicitly assigning them.', verbose_name='superuser status')), 24 | ('username', models.CharField(db_index=True, max_length=255, unique=True)), 25 | ('email', models.EmailField(db_index=True, max_length=254, unique=True)), 26 | ('is_active', models.BooleanField(default=True)), 27 | ('is_staff', models.BooleanField(default=False)), 28 | ('created_at', models.DateTimeField(auto_now_add=True)), 29 | ('updated_at', models.DateTimeField(auto_now=True)), 30 | ('groups', models.ManyToManyField(blank=True, help_text='The groups this user belongs to. A user will get all permissions granted to each of their groups.', related_name='user_set', related_query_name='user', to='auth.Group', verbose_name='groups')), 31 | ('user_permissions', models.ManyToManyField(blank=True, help_text='Specific permissions for this user.', related_name='user_set', related_query_name='user', to='auth.Permission', verbose_name='user permissions')), 32 | ], 33 | options={ 34 | 'abstract': False, 35 | }, 36 | ), 37 | ] 38 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/migrations/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/apps/authentication/migrations/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/renderers.py: -------------------------------------------------------------------------------- 1 | from conduit.apps.core.renderers import ConduitJSONRenderer 2 | 3 | 4 | class UserJSONRenderer(ConduitJSONRenderer): 5 | charset = 'utf-8' 6 | object_label = 'user' 7 | pagination_object_label = 'users' 8 | pagination_count_label = 'usersCount' 9 | 10 | def render(self, data, media_type=None, renderer_context=None): 11 | # If we recieve a `token` key as part of the response, it will by a 12 | # byte object. Byte objects don't serializer well, so we need to 13 | # decode it before rendering the User object. 14 | token = data.get('token', None) 15 | 16 | if token is not None and isinstance(token, bytes): 17 | # Also as mentioned above, we will decode `token` if it is of type 18 | # bytes. 
19 | data['token'] = token.decode('utf-8') 20 | 21 | return super(UserJSONRenderer, self).render(data) 22 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/signals.py: -------------------------------------------------------------------------------- 1 | from django.db.models.signals import post_save 2 | from django.dispatch import receiver 3 | 4 | from conduit.apps.profiles.models import Profile 5 | 6 | from .models import User 7 | 8 | @receiver(post_save, sender=User) 9 | def create_related_profile(sender, instance, created, *args, **kwargs): 10 | # Notice that we're checking for `created` here. We only want to do this 11 | # the first time the `User` instance is created. If the save that caused 12 | # this signal to be run was an update action, we know the user already 13 | # has a profile. 14 | if instance and created: 15 | instance.profile = Profile.objects.create(user=instance) 16 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/urls.py: -------------------------------------------------------------------------------- 1 | from django.conf.urls import url 2 | 3 | from .views import ( 4 | LoginAPIView, RegistrationAPIView, UserRetrieveUpdateAPIView 5 | ) 6 | 7 | urlpatterns = [ 8 | url(r'^user/?$', UserRetrieveUpdateAPIView.as_view()), 9 | url(r'^users/?$', RegistrationAPIView.as_view()), 10 | url(r'^users/login/?$', LoginAPIView.as_view()), 11 | ] 12 | -------------------------------------------------------------------------------- /backend/conduit/apps/authentication/views.py: -------------------------------------------------------------------------------- 1 | from rest_framework import status 2 | from rest_framework.generics import RetrieveUpdateAPIView 3 | from rest_framework.permissions import AllowAny, IsAuthenticated 4 | from rest_framework.response import Response 5 | from rest_framework.views import APIView 6 | 7 | from .renderers import UserJSONRenderer 8 | from .serializers import ( 9 | LoginSerializer, RegistrationSerializer, UserSerializer 10 | ) 11 | 12 | 13 | class RegistrationAPIView(APIView): 14 | # Allow any user (authenticated or not) to hit this endpoint. 15 | permission_classes = (AllowAny,) 16 | renderer_classes = (UserJSONRenderer,) 17 | serializer_class = RegistrationSerializer 18 | 19 | def post(self, request): 20 | user = request.data.get('user', {}) 21 | 22 | # The create serializer, validate serializer, save serializer pattern 23 | # below is common and you will see it a lot throughout this course and 24 | # your own work later on. Get familiar with it. 25 | serializer = self.serializer_class(data=user) 26 | serializer.is_valid(raise_exception=True) 27 | serializer.save() 28 | 29 | return Response(serializer.data, status=status.HTTP_201_CREATED) 30 | 31 | 32 | class LoginAPIView(APIView): 33 | permission_classes = (AllowAny,) 34 | renderer_classes = (UserJSONRenderer,) 35 | serializer_class = LoginSerializer 36 | 37 | def post(self, request): 38 | user = request.data.get('user', {}) 39 | 40 | # Notice here that we do not call `serializer.save()` like we did for 41 | # the registration endpoint. This is because we don't actually have 42 | # anything to save. Instead, the `validate` method on our serializer 43 | # handles everything we need. 
44 | serializer = self.serializer_class(data=user) 45 | serializer.is_valid(raise_exception=True) 46 | 47 | return Response(serializer.data, status=status.HTTP_200_OK) 48 | 49 | 50 | class UserRetrieveUpdateAPIView(RetrieveUpdateAPIView): 51 | permission_classes = (IsAuthenticated,) 52 | renderer_classes = (UserJSONRenderer,) 53 | serializer_class = UserSerializer 54 | 55 | def retrieve(self, request, *args, **kwargs): 56 | # There is nothing to validate or save here. Instead, we just want the 57 | # serializer to handle turning our `User` object into something that 58 | # can be JSONified and sent to the client. 59 | serializer = self.serializer_class(request.user) 60 | 61 | return Response(serializer.data, status=status.HTTP_200_OK) 62 | 63 | def update(self, request, *args, **kwargs): 64 | user_data = request.data.get('user', {}) 65 | 66 | serializer_data = { 67 | 'username': user_data.get('username', request.user.username), 68 | 'email': user_data.get('email', request.user.email), 69 | 70 | 'profile': { 71 | 'bio': user_data.get('bio', request.user.profile.bio), 72 | 'image': user_data.get('image', request.user.profile.image) 73 | } 74 | } 75 | 76 | # Here is that serialize, validate, save pattern we talked about 77 | # before. 78 | serializer = self.serializer_class( 79 | request.user, data=serializer_data, partial=True 80 | ) 81 | serializer.is_valid(raise_exception=True) 82 | serializer.save() 83 | 84 | return Response(serializer.data, status=status.HTTP_200_OK) 85 | 86 | -------------------------------------------------------------------------------- /backend/conduit/apps/core/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/apps/core/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/core/exceptions.py: -------------------------------------------------------------------------------- 1 | from rest_framework.views import exception_handler 2 | 3 | def core_exception_handler(exc, context): 4 | # If an exception is thrown that we don't explicitly handle here, we want 5 | # to delegate to the default exception handler offered by DRF. If we do 6 | # handle this exception type, we will still want access to the response 7 | # generated by DRF, so we get that response up front. 8 | response = exception_handler(exc, context) 9 | handlers = { 10 | 'NotFound': _handle_not_found_error, 11 | 'ValidationError': _handle_generic_error 12 | } 13 | # This is how we identify the type of the current exception. We will use 14 | # this in a moment to see whether we should handle this exception or let 15 | # Django REST Framework do it's thing. 16 | exception_class = exc.__class__.__name__ 17 | 18 | if exception_class in handlers: 19 | # If this exception is one that we can handle, handle it. Otherwise, 20 | # return the response generated earlier by the default exception 21 | # handler. 22 | return handlers[exception_class](exc, context, response) 23 | 24 | return response 25 | 26 | def _handle_generic_error(exc, context, response): 27 | # This is about the most straightforward exception handler we can create. 28 | # We take the response generated by DRF and wrap it in the `errors` key. 
29 | response.data = { 30 | 'errors': response.data 31 | } 32 | 33 | return response 34 | 35 | def _handle_not_found_error(exc, context, response): 36 | view = context.get('view', None) 37 | 38 | if view and hasattr(view, 'queryset') and view.queryset is not None: 39 | error_key = view.queryset.model._meta.verbose_name 40 | 41 | response.data = { 42 | 'errors': { 43 | error_key: response.data['detail'] 44 | } 45 | } 46 | 47 | else: 48 | response = _handle_generic_error(exc, context, response) 49 | 50 | return response 51 | -------------------------------------------------------------------------------- /backend/conduit/apps/core/models.py: -------------------------------------------------------------------------------- 1 | from django.db import models 2 | 3 | 4 | class TimestampedModel(models.Model): 5 | # A timestamp representing when this object was created. 6 | created_at = models.DateTimeField(auto_now_add=True) 7 | 8 | # A timestamp representing when this object was last updated. 9 | updated_at = models.DateTimeField(auto_now=True) 10 | 11 | class Meta: 12 | abstract = True 13 | 14 | # By default, any model that inherits from `TimestampedModel` should 15 | # be ordered in reverse-chronological order. We can override this on a 16 | # per-model basis as needed, but reverse-chronological is a good 17 | # default ordering for most models. 18 | ordering = ['-created_at', '-updated_at'] 19 | -------------------------------------------------------------------------------- /backend/conduit/apps/core/renderers.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | from rest_framework.renderers import JSONRenderer 4 | 5 | 6 | class ConduitJSONRenderer(JSONRenderer): 7 | charset = 'utf-8' 8 | object_label = 'object' 9 | pagination_object_label = 'objects' 10 | pagination_count_label = 'count' 11 | 12 | def render(self, data, media_type=None, renderer_context=None): 13 | if data.get('results', None) is not None: 14 | return json.dumps({ 15 | self.pagination_object_label: data['results'], 16 | self.pagination_count_label: data['count'] 17 | }) 18 | 19 | # If the view throws an error (such as the user can't be authenticated 20 | # or something similar), `data` will contain an `errors` key. We want 21 | # the default JSONRenderer to handle rendering errors, so we need to 22 | # check for this case. 
23 | elif data.get('errors', None) is not None: 24 | return super(ConduitJSONRenderer, self).render(data) 25 | 26 | else: 27 | return json.dumps({ 28 | self.object_label: data 29 | }) 30 | -------------------------------------------------------------------------------- /backend/conduit/apps/core/utils.py: -------------------------------------------------------------------------------- 1 | import random 2 | import string 3 | 4 | DEFAULT_CHAR_STRING = string.ascii_lowercase + string.digits 5 | 6 | def generate_random_string(chars=DEFAULT_CHAR_STRING, size=6): 7 | return ''.join(random.choice(chars) for _ in range(size)) 8 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/apps/profiles/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/exceptions.py: -------------------------------------------------------------------------------- 1 | from rest_framework.exceptions import APIException 2 | 3 | 4 | class ProfileDoesNotExist(APIException): 5 | status_code = 400 6 | default_detail = 'The requested profile does not exist.' 7 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/migrations/0001_initial.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-28 15:06 3 | from __future__ import unicode_literals 4 | 5 | from django.conf import settings 6 | from django.db import migrations, models 7 | import django.db.models.deletion 8 | 9 | 10 | class Migration(migrations.Migration): 11 | 12 | initial = True 13 | 14 | dependencies = [ 15 | migrations.swappable_dependency(settings.AUTH_USER_MODEL), 16 | ] 17 | 18 | operations = [ 19 | migrations.CreateModel( 20 | name='Profile', 21 | fields=[ 22 | ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), 23 | ('created_at', models.DateTimeField(auto_now_add=True)), 24 | ('updated_at', models.DateTimeField(auto_now=True)), 25 | ('bio', models.TextField(blank=True)), 26 | ('image', models.URLField(blank=True)), 27 | ('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)), 28 | ], 29 | options={ 30 | 'ordering': ['-created_at', '-updated_at'], 31 | 'abstract': False, 32 | }, 33 | ), 34 | ] 35 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/migrations/0002_profile_follows.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-28 16:09 3 | from __future__ import unicode_literals 4 | 5 | from django.db import migrations, models 6 | 7 | 8 | class Migration(migrations.Migration): 9 | 10 | dependencies = [ 11 | ('profiles', '0001_initial'), 12 | ] 13 | 14 | operations = [ 15 | migrations.AddField( 16 | model_name='profile', 17 | name='follows', 18 | field=models.ManyToManyField(related_name='followed_by', to='profiles.Profile'), 19 | ), 20 | ] 21 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/migrations/0003_profile_favorites.py: 
-------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by Django 1.10 on 2016-08-28 16:24 3 | from __future__ import unicode_literals 4 | 5 | from django.db import migrations, models 6 | 7 | 8 | class Migration(migrations.Migration): 9 | 10 | dependencies = [ 11 | ('articles', '0002_comment'), 12 | ('profiles', '0002_profile_follows'), 13 | ] 14 | 15 | operations = [ 16 | migrations.AddField( 17 | model_name='profile', 18 | name='favorites', 19 | field=models.ManyToManyField(related_name='favorited_by', to='articles.Article'), 20 | ), 21 | ] 22 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/migrations/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/conduit/apps/profiles/migrations/__init__.py -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/models.py: -------------------------------------------------------------------------------- 1 | from django.db import models 2 | 3 | from conduit.apps.core.models import TimestampedModel 4 | 5 | 6 | class Profile(TimestampedModel): 7 | # As mentioned, there is an inherent relationship between the Profile and 8 | # User models. By creating a one-to-one relationship between the two, we 9 | # are formalizing this relationship. Every user will have one -- and only 10 | # one -- related Profile model. 11 | user = models.OneToOneField( 12 | 'authentication.User', on_delete=models.CASCADE 13 | ) 14 | 15 | # Each user profile will have a field where they can tell other users 16 | # something about themselves. This field will be empty when the user 17 | # creates their account, so we specify `blank=True`. 18 | bio = models.TextField(blank=True) 19 | 20 | # In addition to the `bio` field, each user may have a profile image or 21 | # avatar. Similar to `bio`, this field is not required. It may be blank. 22 | image = models.URLField(blank=True) 23 | 24 | # This is an example of a Many-To-Many relationship where both sides of the 25 | # relationship are of the same model. In this case, the model is `Profile`. 26 | # As mentioned in the text, this relationship will be one-way. Just because 27 | # you are following mean does not mean that I am following you. This is 28 | # what `symmetrical=False` does for us. 
29 | follows = models.ManyToManyField( 30 | 'self', 31 | related_name='followed_by', 32 | symmetrical=False 33 | ) 34 | 35 | favorites = models.ManyToManyField( 36 | 'articles.Article', 37 | related_name='favorited_by' 38 | ) 39 | 40 | 41 | def __str__(self): 42 | return self.user.username 43 | 44 | def follow(self, profile): 45 | """Follow `profile` if we're not already following `profile`.""" 46 | self.follows.add(profile) 47 | 48 | def unfollow(self, profile): 49 | """Unfollow `profile` if we're already following `profile`.""" 50 | self.follows.remove(profile) 51 | 52 | def is_following(self, profile): 53 | """Returns True if we're following `profile`; False otherwise.""" 54 | return self.follows.filter(pk=profile.pk).exists() 55 | 56 | def is_followed_by(self, profile): 57 | """Returns True if `profile` is following us; False otherwise.""" 58 | return self.followed_by.filter(pk=profile.pk).exists() 59 | 60 | def favorite(self, article): 61 | """Favorite `article` if we haven't already favorited it.""" 62 | self.favorites.add(article) 63 | 64 | def unfavorite(self, article): 65 | """Unfavorite `article` if we've already favorited it.""" 66 | self.favorites.remove(article) 67 | 68 | def has_favorited(self, article): 69 | """Returns True if we have favorited `article`; else False.""" 70 | return self.favorites.filter(pk=article.pk).exists() 71 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/renderers.py: -------------------------------------------------------------------------------- 1 | from conduit.apps.core.renderers import ConduitJSONRenderer 2 | 3 | 4 | class ProfileJSONRenderer(ConduitJSONRenderer): 5 | object_label = 'profile' 6 | pagination_object_label = 'profiles' 7 | pagination_count_label = 'profilesCount' 8 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/serializers.py: -------------------------------------------------------------------------------- 1 | from rest_framework import serializers 2 | 3 | from .models import Profile 4 | 5 | 6 | class ProfileSerializer(serializers.ModelSerializer): 7 | username = serializers.CharField(source='user.username') 8 | bio = serializers.CharField(allow_blank=True, required=False) 9 | image = serializers.SerializerMethodField() 10 | following = serializers.SerializerMethodField() 11 | 12 | class Meta: 13 | model = Profile 14 | fields = ('username', 'bio', 'image', 'following',) 15 | read_only_fields = ('username',) 16 | 17 | def get_image(self, obj): 18 | if obj.image: 19 | return obj.image 20 | 21 | return 'https://static.productionready.io/images/smiley-cyrus.jpg' 22 | 23 | def get_following(self, instance): 24 | request = self.context.get('request', None) 25 | 26 | if request is None: 27 | return False 28 | 29 | if not request.user.is_authenticated(): 30 | return False 31 | 32 | follower = request.user.profile 33 | followee = instance 34 | 35 | return follower.is_following(followee) 36 | -------------------------------------------------------------------------------- /backend/conduit/apps/profiles/urls.py: -------------------------------------------------------------------------------- 1 | from django.conf.urls import url 2 | 3 | from .views import ProfileRetrieveAPIView, ProfileFollowAPIView 4 | 5 | urlpatterns = [ 6 | url(r'^profiles/(?P<username>\w+)/?$', ProfileRetrieveAPIView.as_view()), 7 | url(r'^profiles/(?P<username>\w+)/follow/?$', 8 | ProfileFollowAPIView.as_view()), 9 | ] 10 | 
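A quick sketch of how the profile routes above can be exercised end to end. This snippet is not part of the repository: it assumes the project URLconf mounts these patterns under `/api/` (as in the RealWorld API spec) and uses a hypothetical username and token.

```python
from rest_framework.test import APIClient

client = APIClient()

# Hypothetical JWT, normally returned by POST /api/users/login/.
token = 'eyJ...'
client.credentials(HTTP_AUTHORIZATION='Token ' + token)

# ProfileRetrieveAPIView: anyone may fetch a profile.
response = client.get('/api/profiles/jane/')
assert response.status_code == 200

# ProfileFollowAPIView: authenticated users may follow a profile.
response = client.post('/api/profiles/jane/follow/')
assert response.status_code == 201
```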
-------------------------------------------------------------------------------- /backend/conduit/apps/profiles/views.py: -------------------------------------------------------------------------------- 1 | from rest_framework import serializers, status 2 | from rest_framework.exceptions import NotFound 3 | from rest_framework.generics import RetrieveAPIView 4 | from rest_framework.permissions import AllowAny, IsAuthenticated 5 | from rest_framework.response import Response 6 | from rest_framework.views import APIView 7 | 8 | from .models import Profile 9 | from .renderers import ProfileJSONRenderer 10 | from .serializers import ProfileSerializer 11 | 12 | 13 | class ProfileRetrieveAPIView(RetrieveAPIView): 14 | permission_classes = (AllowAny,) 15 | queryset = Profile.objects.select_related('user') 16 | renderer_classes = (ProfileJSONRenderer,) 17 | serializer_class = ProfileSerializer 18 | 19 | def retrieve(self, request, username, *args, **kwargs): 20 | # Try to retrieve the requested profile and throw an exception if the 21 | # profile could not be found. 22 | try: 23 | profile = self.queryset.get(user__username=username) 24 | except Profile.DoesNotExist: 25 | raise NotFound('A profile with this username does not exist.') 26 | 27 | serializer = self.serializer_class(profile, context={ 28 | 'request': request 29 | }) 30 | 31 | return Response(serializer.data, status=status.HTTP_200_OK) 32 | 33 | 34 | class ProfileFollowAPIView(APIView): 35 | permission_classes = (IsAuthenticated,) 36 | renderer_classes = (ProfileJSONRenderer,) 37 | serializer_class = ProfileSerializer 38 | 39 | def delete(self, request, username=None): 40 | follower = self.request.user.profile 41 | 42 | try: 43 | followee = Profile.objects.get(user__username=username) 44 | except Profile.DoesNotExist: 45 | raise NotFound('A profile with this username was not found.') 46 | 47 | follower.unfollow(followee) 48 | 49 | serializer = self.serializer_class(followee, context={ 50 | 'request': request 51 | }) 52 | 53 | return Response(serializer.data, status=status.HTTP_200_OK) 54 | 55 | def post(self, request, username=None): 56 | follower = self.request.user.profile 57 | 58 | try: 59 | followee = Profile.objects.get(user__username=username) 60 | except Profile.DoesNotExist: 61 | raise NotFound('A profile with this username was not found.') 62 | 63 | if follower.pk is followee.pk: 64 | raise serializers.ValidationError('You can not follow yourself.') 65 | 66 | follower.follow(followee) 67 | 68 | serializer = self.serializer_class(followee, context={ 69 | 'request': request 70 | }) 71 | 72 | return Response(serializer.data, status=status.HTTP_201_CREATED) 73 | -------------------------------------------------------------------------------- /backend/conduit/settings/__init__.py: -------------------------------------------------------------------------------- 1 | from conduit.settings.defaults import * 2 | -------------------------------------------------------------------------------- /backend/conduit/settings/defaults.py: -------------------------------------------------------------------------------- 1 | """ 2 | Django settings for conduit project. 3 | 4 | Generated by 'django-admin startproject' using Django 1.10. 
5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/1.10/topics/settings/ 8 | 9 | For the full list of settings and their values, see 10 | https://docs.djangoproject.com/en/1.10/ref/settings/ 11 | """ 12 | 13 | import os 14 | 15 | # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 16 | BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 17 | 18 | 19 | # Quick-start development settings - unsuitable for production 20 | # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/ 21 | 22 | # SECURITY WARNING: keep the secret key used in production secret! 23 | SECRET_KEY = '2^f+3@v7$v1f8yt0!s)3-1t$)tlp+xm17=*g))_xoi&&9m#2a&' 24 | 25 | # SECURITY WARNING: don't run with debug turned on in production! 26 | DEBUG = True 27 | 28 | # We put * on ALLOWED_HOSTS this implies that any request from any domain 29 | # will be handled. 30 | # Http Header Attacks should be prevented in another way. 31 | # Check this to read more: 32 | # https://docs.djangoproject.com/en/1.11/topics/security/#host-headers-virtual-hosting 33 | ALLOWED_HOSTS = ['*'] 34 | 35 | # Application definition 36 | 37 | INSTALLED_APPS = [ 38 | 'django.contrib.admin', 39 | 'django.contrib.auth', 40 | 'django.contrib.contenttypes', 41 | 'django.contrib.sessions', 42 | 'django.contrib.messages', 43 | 'django.contrib.staticfiles', 44 | 45 | 'corsheaders', 46 | 'django_extensions', 47 | 'rest_framework', 48 | 49 | 'conduit.apps.articles', 50 | 'conduit.apps.authentication', 51 | 'conduit.apps.core', 52 | 'conduit.apps.profiles', 53 | ] 54 | 55 | MIDDLEWARE = [ 56 | 'django.middleware.security.SecurityMiddleware', 57 | 'django.contrib.sessions.middleware.SessionMiddleware', 58 | 'corsheaders.middleware.CorsMiddleware', 59 | 'django.middleware.common.CommonMiddleware', 60 | 'django.middleware.csrf.CsrfViewMiddleware', 61 | 'django.contrib.auth.middleware.AuthenticationMiddleware', 62 | 'django.contrib.messages.middleware.MessageMiddleware', 63 | 'django.middleware.clickjacking.XFrameOptionsMiddleware', 64 | ] 65 | 66 | ROOT_URLCONF = 'conduit.urls' 67 | 68 | TEMPLATES = [ 69 | { 70 | 'BACKEND': 'django.template.backends.django.DjangoTemplates', 71 | 'DIRS': [], 72 | 'APP_DIRS': True, 73 | 'OPTIONS': { 74 | 'context_processors': [ 75 | 'django.template.context_processors.debug', 76 | 'django.template.context_processors.request', 77 | 'django.contrib.auth.context_processors.auth', 78 | 'django.contrib.messages.context_processors.messages', 79 | ], 80 | }, 81 | }, 82 | ] 83 | 84 | WSGI_APPLICATION = 'conduit.wsgi.application' 85 | 86 | 87 | # Database 88 | # https://docs.djangoproject.com/en/1.10/ref/settings/#databases 89 | 90 | DATABASES = { 91 | 'default': { 92 | 'ENGINE': 'django.db.backends.sqlite3', 93 | 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), 94 | } 95 | } 96 | 97 | 98 | # Password validation 99 | # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators 100 | 101 | AUTH_PASSWORD_VALIDATORS = [ 102 | { 103 | 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', 104 | }, 105 | { 106 | 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 107 | }, 108 | { 109 | 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', 110 | }, 111 | { 112 | 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', 113 | }, 114 | ] 115 | 116 | 117 | # Internationalization 118 | # https://docs.djangoproject.com/en/1.10/topics/i18n/ 119 | 120 | LANGUAGE_CODE = 
'en-us' 121 | 122 | TIME_ZONE = 'UTC' 123 | 124 | USE_I18N = True 125 | 126 | USE_L10N = True 127 | 128 | USE_TZ = True 129 | 130 | 131 | # Static files (CSS, JavaScript, Images) 132 | # https://docs.djangoproject.com/en/1.10/howto/static-files/ 133 | 134 | STATIC_URL = '/static/' 135 | 136 | CORS_ORIGIN_WHITELIST = [] 137 | 138 | # Tell Django about the custom `User` model we created. The string 139 | # `authentication.User` tells Django we are referring to the `User` model in 140 | # the `authentication` module. This module is registered above in a setting 141 | # called `INSTALLED_APPS`. 142 | AUTH_USER_MODEL = 'authentication.User' 143 | 144 | 145 | REST_FRAMEWORK = { 146 | 'EXCEPTION_HANDLER': 'conduit.apps.core.exceptions.core_exception_handler', 147 | 'NON_FIELD_ERRORS_KEY': 'error', 148 | 149 | 'DEFAULT_AUTHENTICATION_CLASSES': ( 150 | 'conduit.apps.authentication.backends.JWTAuthentication', 151 | ), 152 | 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination', 153 | 'PAGE_SIZE': 20, 154 | } 155 | -------------------------------------------------------------------------------- /backend/conduit/settings/docker.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | 4 | from conduit.settings.defaults import * 5 | 6 | 7 | DEBUG = os.environ.get('DJANGO_DEBUG', 'False') == 'True' 8 | 9 | 10 | STATIC_ROOT = '/data/static/' 11 | 12 | 13 | # Database 14 | # https://docs.djangoproject.com/en/1.10/ref/settings/#databases 15 | 16 | DATABASES = { 17 | 'default': { 18 | 'ENGINE': 'django.db.backends.postgresql', 19 | 'NAME': os.environ['DATABASE_NAME'], 20 | 'USER': os.environ['DATABASE_USER'], 21 | 'PASSWORD': os.environ['DATABASE_PASSWORD'], 22 | 'HOST': os.environ['DATABASE_HOST'], 23 | 'PORT': os.environ.get('DATABASE_PORT', '5432'), 24 | } 25 | } 26 | 27 | 28 | LOGGING = { 29 | 'version': 1, 30 | 'disable_existing_loggers': False, 31 | 'handlers': { 32 | 'django.file': { 33 | 'level': 'DEBUG', 34 | 'class': 'logging.FileHandler', 35 | 'filename': '/data/django.log', 36 | }, 37 | 'django.security.file': { 38 | 'level': 'DEBUG', 39 | 'class': 'logging.FileHandler', 40 | 'filename': '/data/django.security.log', 41 | }, 42 | }, 43 | 'loggers': { 44 | 'django.request': { 45 | 'handlers': ['django.file'], 46 | 'level': 'DEBUG', 47 | 'propagate': True, 48 | }, 49 | 'django.security': { 50 | 'handlers': ['django.security.file'], 51 | 'level': 'DEBUG', 52 | 'propagate': True, 53 | }, 54 | 'django.db.backends': { 55 | 'handlers': [], 56 | 'level': 'DEBUG', 57 | 'propagate': True, 58 | }, 59 | }, 60 | } 61 | 62 | 63 | CORS_ORIGIN_WHITELIST = tuple(json.loads(os.environ.get( 64 | 'DJANGO_CORS_ORIGIN_WHITELIST', 65 | '[]' 66 | ))) 67 | -------------------------------------------------------------------------------- /backend/conduit/settings/ec2.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | 4 | from botocore.exceptions import ClientError 5 | 6 | from conduit.settings.defaults import * 7 | 8 | 9 | client = boto3.client('ssm') 10 | PARAMETERS_PATH = '/prod/api/' 11 | 12 | def get_parameter(name, with_decryption=False, default=None): 13 | try: 14 | response = client.get_parameter( 15 | Name=PARAMETERS_PATH + name, 16 | WithDecryption=with_decryption 17 | ) 18 | 19 | parameter = response.get('Parameter') 20 | return parameter.get('Value') 21 | except ClientError as e: 22 | if e.response['Error']['Code'] == 'ParameterNotFound': 23 | return default 
24 | else: 25 | raise e 26 | 27 | 28 | DEBUG = get_parameter('DEBUG', default='False') == 'True' 29 | 30 | 31 | STATIC_ROOT = '/var/www/conduit/static/' 32 | 33 | 34 | # Database 35 | # https://docs.djangoproject.com/en/1.10/ref/settings/#databases 36 | DATABASES = { 37 | 'default': { 38 | 'ENGINE': 'django.db.backends.postgresql', 39 | 'NAME': get_parameter('DATABASE_NAME'), 40 | 'USER': get_parameter('DATABASE_USER'), 41 | 'PASSWORD': get_parameter('DATABASE_PASSWORD', True), 42 | 'HOST': get_parameter('DATABASE_HOST'), 43 | 'PORT': get_parameter('DATABASE_PORT', default='5432'), 44 | } 45 | } 46 | 47 | LOGGING = { 48 | 'version': 1, 49 | 'disable_existing_loggers': False, 50 | 'handlers': { 51 | 'django.file': { 52 | 'level': 'DEBUG', 53 | 'class': 'logging.FileHandler', 54 | 'filename': '/var/log/django/django.log', 55 | }, 56 | 'django.security.file': { 57 | 'level': 'DEBUG', 58 | 'class': 'logging.FileHandler', 59 | 'filename': '/var/log/django/django.security.log', 60 | }, 61 | }, 62 | 'loggers': { 63 | 'django.request': { 64 | 'handlers': ['django.file'], 65 | 'level': 'DEBUG', 66 | 'propagate': True, 67 | }, 68 | 'django.security': { 69 | 'handlers': ['django.security.file'], 70 | 'level': 'DEBUG', 71 | 'propagate': True, 72 | }, 73 | 'django.db.backends': { 74 | 'handlers': [], 75 | 'level': 'DEBUG', 76 | 'propagate': True, 77 | }, 78 | }, 79 | } 80 | 81 | CORS_ORIGIN_WHITELIST = tuple(json.loads(get_parameter( 82 | 'CORS_ORIGIN_WHITELIST', 83 | default='[]' 84 | ))) 85 | 86 | CORS_ORIGIN_ALLOW_ALL = True 87 | -------------------------------------------------------------------------------- /backend/conduit/urls.py: -------------------------------------------------------------------------------- 1 | """conduit URL Configuration 2 | 3 | The `urlpatterns` list routes URLs to views. For more information please see: 4 | https://docs.djangoproject.com/en/1.10/topics/http/urls/ 5 | Examples: 6 | Function views 7 | 1. Add an import: from my_app import views 8 | 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home') 9 | Class-based views 10 | 1. Add an import: from other_app.views import Home 11 | 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home') 12 | Including another URLconf 13 | 1. Import the include() function: from django.conf.urls import url, include 14 | 2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls')) 15 | """ 16 | from django.conf.urls import include, url 17 | from django.contrib import admin 18 | 19 | urlpatterns = [ 20 | url(r'^admin/', admin.site.urls), 21 | 22 | url(r'^api/', include('conduit.apps.articles.urls', namespace='articles')), 23 | url(r'^api/', include('conduit.apps.authentication.urls', namespace='authentication')), 24 | url(r'^api/', include('conduit.apps.profiles.urls', namespace='profiles')), 25 | ] 26 | -------------------------------------------------------------------------------- /backend/conduit/wsgi.py: -------------------------------------------------------------------------------- 1 | """ 2 | WSGI config for conduit project. 3 | 4 | It exposes the WSGI callable as a module-level variable named ``application``. 
5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/ 8 | """ 9 | 10 | import os 11 | 12 | from django.core.wsgi import get_wsgi_application 13 | 14 | os.environ.setdefault("DJANGO_SETTINGS_MODULE", "conduit.settings") 15 | 16 | application = get_wsgi_application() 17 | -------------------------------------------------------------------------------- /backend/manage.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import sys 4 | 5 | if __name__ == "__main__": 6 | os.environ.setdefault("DJANGO_SETTINGS_MODULE", "conduit.settings") 7 | try: 8 | from django.core.management import execute_from_command_line 9 | except ImportError: 10 | # The above import may fail for some other reason. Ensure that the 11 | # issue is really that Django is missing to avoid masking other 12 | # exceptions on Python 2. 13 | try: 14 | import django 15 | except ImportError: 16 | raise ImportError( 17 | "Couldn't import Django. Are you sure it's installed and " 18 | "available on your PYTHONPATH environment variable? Did you " 19 | "forget to activate a virtual environment?" 20 | ) 21 | raise 22 | execute_from_command_line(sys.argv) 23 | -------------------------------------------------------------------------------- /backend/project-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/backend/project-logo.png -------------------------------------------------------------------------------- /buildspec.frontend.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | env: 4 | variables: 5 | BUCKET_PARAMETER_NAME: "/prod/codebuild/WEBSITE_BUCKET_NAME" 6 | API_URL_PARAMETER_NAME: "/prod/frontend/API_URL" 7 | FRONTEND_DIR: "frontend" 8 | BUILD_DIR: "build" 9 | 10 | phases: 11 | pre_build: 12 | commands: 13 | - echo "Installing npm dependencies and jq..." 14 | - cd "$FRONTEND_DIR" 15 | - npm install 16 | - apt update 17 | - apt install jq 18 | - pip install --upgrade awscli 19 | build: 20 | commands: 21 | - echo "Build started on `date`" 22 | - echo "Building ..." 23 | - export REACT_APP_API_DOMAIN=`aws ssm get-parameter --name "$API_URL_PARAMETER_NAME" --region us-east-1 | jq -r .Parameter.Value` 24 | - npm run build 25 | post_build: 26 | commands: 27 | - export BUCKET=`aws ssm get-parameter --name "$BUCKET_PARAMETER_NAME" --region us-east-1 | jq -r .Parameter.Value` 28 | - echo "Uploading build to $BUCKET ..." 29 | - aws s3 sync "$BUILD_DIR" "$BUCKET" --delete 30 | - echo "Build completed on `date`" 31 | -------------------------------------------------------------------------------- /frontend/.gitignore: -------------------------------------------------------------------------------- 1 | # See http://help.github.com/ignore-files/ for more about ignoring files. 
2 | 3 | # dependencies 4 | node_modules 5 | 6 | # testing 7 | coverage 8 | 9 | # production 10 | build 11 | 12 | # misc 13 | .DS_Store 14 | .env 15 | npm-debug.log 16 | .idea -------------------------------------------------------------------------------- /frontend/README.md: -------------------------------------------------------------------------------- 1 | # ![React + Redux Example App](project-logo.png) 2 | 3 | > ### React + Redux codebase containing real world examples (CRUD, auth, advanced patterns, etc) that adheres to the [RealWorld](https://github.com/gothinkster/realworld-example-apps) spec and API. 4 | 5 |    6 | 7 | ### [Demo](https://react-redux.realworld.io)    [RealWorld](https://github.com/gothinkster/realworld) 8 | 9 | Originally created for this [GH issue](https://github.com/reactjs/redux/issues/1353). The codebase is now feature complete; please submit bug fixes via pull requests & feedback via issues. 10 | 11 | We also have notes in [**our wiki**](https://github.com/gothinkster/react-redux-realworld-example-app/wiki) about the various patterns used in this codebase and how they work (thanks [@thejmazz](https://github.com/thejmazz)!) 12 | 13 | 14 | ## Getting started 15 | 16 | You can view a live demo over at https://react-redux.realworld.io/ 17 | 18 | To get the frontend running locally: 19 | 20 | - Clone this repo 21 | - `npm install` to install all required dependencies 22 | - `npm start` to start the local server (this project uses create-react-app) 23 | 24 | The local web server will use port 4100 instead of React's standard port 3000 to prevent conflicts with some backends like Node or Rails. You can configure the port in the scripts section of `package.json`: we use [cross-env](https://github.com/kentcdodds/cross-env) to set the PORT environment variable for the React scripts, which is a Windows-compatible way of setting environment variables. 25 | 26 | Alternatively, you can add a `.env` file in the root folder of the project to set environment variables (use PORT to change the web server's port). This file is ignored by git, so it is suitable for API keys and other sensitive values; a minimal example is sketched at the end of this README. Refer to the [dotenv](https://github.com/motdotla/dotenv) and [React](https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-development-environment-variables-in-env) documentation for more details. If you take this route, also remove the variable from the scripts section of `package.json`, since `dotenv` never overrides variables that are already set. 27 | 28 | ### Making requests to the backend API 29 | 30 | For convenience, we have a live API server running at https://conduit.productionready.io/api for the application to make requests against. You can view [the API spec here](https://github.com/GoThinkster/productionready/blob/master/api) which contains all routes & responses for the server. 31 | 32 | The source code for the backend server (available for Node, Rails and Django) can be found in the [main RealWorld repo](https://github.com/gothinkster/realworld). 33 | 34 | If you want to point the app at a different API server (such as a local one), set the `REACT_APP_API_DOMAIN` environment variable, which `src/agent.js` uses to build `API_ROOT` (e.g. `http://localhost:3000`; `/api` is appended automatically), or edit `src/agent.js` directly. 35 | 36 | 37 | ## Functionality overview 38 | 39 | The example application is a social blogging site (i.e. a Medium.com clone) called "Conduit". It uses a custom API for all requests, including authentication. 
You can view a live demo over at https://redux.productionready.io/ 40 | 41 | **General functionality:** 42 | 43 | - Authenticate users via JWT (login/signup pages + logout button on settings page) 44 | - CRU* users (sign up & settings page - no deleting required) 45 | - CRUD Articles 46 | - CR*D Comments on articles (no updating required) 47 | - GET and display paginated lists of articles 48 | - Favorite articles 49 | - Follow other users 50 | 51 | **The general page breakdown looks like this:** 52 | 53 | - Home page (URL: /#/ ) 54 | - List of tags 55 | - List of articles pulled from either Feed, Global, or by Tag 56 | - Pagination for list of articles 57 | - Sign in/Sign up pages (URL: /#/login, /#/register ) 58 | - Use JWT (store the token in localStorage) 59 | - Settings page (URL: /#/settings ) 60 | - Editor page to create/edit articles (URL: /#/editor, /#/editor/article-slug-here ) 61 | - Article page (URL: /#/article/article-slug-here ) 62 | - Delete article button (only shown to article's author) 63 | - Render markdown from server client side 64 | - Comments section at bottom of page 65 | - Delete comment button (only shown to comment's author) 66 | - Profile page (URL: /#/@username, /#/@username/favorites ) 67 | - Show basic user info 68 | - List of articles populated from author's created articles or author's favorited articles 69 | 70 |
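As a rough local-development sketch, the `.env` mentioned in the setup section might look like this (PORT mirrors the default already set in `package.json`; `REACT_APP_API_DOMAIN` is the variable `src/agent.js` reads to build the API URL; the backend address shown is only an assumption for a locally running API):

```
# frontend/.env — ignored by git (see .gitignore)
PORT=4100
# Assumed address of a locally running backend; agent.js appends /api
REACT_APP_API_DOMAIN=http://localhost:8000
```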
71 | 72 | [![Brought to you by Thinkster](https://raw.githubusercontent.com/gothinkster/realworld/master/media/end.png)](https://thinkster.io) 73 | -------------------------------------------------------------------------------- /frontend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "react-redux-realworld-example-app", 3 | "version": "0.1.0", 4 | "private": true, 5 | "devDependencies": { 6 | "cross-env": "^4.0.0", 7 | "react-scripts": "0.9.5" 8 | }, 9 | "dependencies": { 10 | "history": "^4.6.3", 11 | "marked": "^0.3.6", 12 | "prop-types": "^15.5.10", 13 | "react": "^15.5.0", 14 | "react-dom": "^15.5.0", 15 | "react-redux": "^4.4.8", 16 | "react-router": "^4.1.2", 17 | "react-router-dom": "^4.1.2", 18 | "react-router-redux": "^5.0.0-alpha.6", 19 | "redux": "^3.6.0", 20 | "redux-devtools-extension": "^2.13.2", 21 | "redux-logger": "^3.0.1", 22 | "superagent": "^2.3.0", 23 | "superagent-promise": "^1.1.0" 24 | }, 25 | "scripts": { 26 | "start": "cross-env PORT=4100 react-scripts start", 27 | "build": "react-scripts build", 28 | "test": "cross-env PORT=4100 react-scripts test --env=jsdom", 29 | "eject": "react-scripts eject" 30 | } 31 | } 32 | -------------------------------------------------------------------------------- /frontend/project-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/frontend/project-logo.png -------------------------------------------------------------------------------- /frontend/public/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/frontend/public/favicon.ico -------------------------------------------------------------------------------- /frontend/public/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 19 | Conduit 20 | 21 | 22 |
23 | 33 | 34 | 35 | -------------------------------------------------------------------------------- /frontend/src/agent.js: -------------------------------------------------------------------------------- 1 | import superagentPromise from 'superagent-promise'; 2 | import _superagent from 'superagent'; 3 | 4 | const superagent = superagentPromise(_superagent, global.Promise); 5 | 6 | const API_DOMAIN = process.env.REACT_APP_API_DOMAIN ? process.env.REACT_APP_API_DOMAIN : global.location.origin; 7 | const API_ROOT = `${API_DOMAIN}/api`; 8 | 9 | const encode = encodeURIComponent; 10 | const responseBody = res => res.body; 11 | 12 | let token = null; 13 | const tokenPlugin = req => { 14 | if (token) { 15 | req.set('authorization', `Token ${token}`); 16 | } 17 | } 18 | 19 | const requests = { 20 | del: url => 21 | superagent.del(`${API_ROOT}${url}`).use(tokenPlugin).then(responseBody), 22 | get: url => 23 | superagent.get(`${API_ROOT}${url}`).use(tokenPlugin).then(responseBody), 24 | put: (url, body) => 25 | superagent.put(`${API_ROOT}${url}`, body).use(tokenPlugin).then(responseBody), 26 | post: (url, body) => 27 | superagent.post(`${API_ROOT}${url}`, body).use(tokenPlugin).then(responseBody) 28 | }; 29 | 30 | const Auth = { 31 | current: () => 32 | requests.get('/user'), 33 | login: (email, password) => 34 | requests.post('/users/login', { user: { email, password } }), 35 | register: (username, email, password) => 36 | requests.post('/users', { user: { username, email, password } }), 37 | save: user => 38 | requests.put('/user', { user }) 39 | }; 40 | 41 | const Tags = { 42 | getAll: () => requests.get('/tags') 43 | }; 44 | 45 | const limit = (count, p) => `limit=${count}&offset=${p ? p * count : 0}`; 46 | const omitSlug = article => Object.assign({}, article, { slug: undefined }) 47 | const Articles = { 48 | all: page => 49 | requests.get(`/articles/?${limit(10, page)}`), 50 | byAuthor: (author, page) => 51 | requests.get(`/articles/?author=${encode(author)}&${limit(5, page)}`), 52 | byTag: (tag, page) => 53 | requests.get(`/articles/?tag=${encode(tag)}&${limit(10, page)}`), 54 | del: slug => 55 | requests.del(`/articles/${slug}/`), 56 | favorite: slug => 57 | requests.post(`/articles/${slug}/favorite/`), 58 | favoritedBy: (author, page) => 59 | requests.get(`/articles/?favorited=${encode(author)}&${limit(5, page)}`), 60 | feed: () => 61 | requests.get('/articles/feed/?limit=10&offset=0'), 62 | get: slug => 63 | requests.get(`/articles/${slug}/`), 64 | unfavorite: slug => 65 | requests.del(`/articles/${slug}/favorite/`), 66 | update: article => 67 | requests.put(`/articles/${article.slug}/`, { article: omitSlug(article) }), 68 | create: article => 69 | requests.post('/articles/', { article }) 70 | }; 71 | 72 | const Comments = { 73 | create: (slug, comment) => 74 | requests.post(`/articles/${slug}/comments/`, { comment }), 75 | delete: (slug, commentId) => 76 | requests.del(`/articles/${slug}/comments/${commentId}/`), 77 | forArticle: slug => 78 | requests.get(`/articles/${slug}/comments/`) 79 | }; 80 | 81 | const Profile = { 82 | follow: username => 83 | requests.post(`/profiles/${username}/follow/`), 84 | get: username => 85 | requests.get(`/profiles/${username}/`), 86 | unfollow: username => 87 | requests.del(`/profiles/${username}/follow/`) 88 | }; 89 | 90 | export default { 91 | Articles, 92 | Auth, 93 | Comments, 94 | Profile, 95 | Tags, 96 | setToken: _token => { token = _token; } 97 | }; 98 | -------------------------------------------------------------------------------- 
/frontend/src/components/App.js: -------------------------------------------------------------------------------- 1 | import agent from '../agent'; 2 | import Header from './Header'; 3 | import React from 'react'; 4 | import { connect } from 'react-redux'; 5 | import { APP_LOAD, REDIRECT } from '../constants/actionTypes'; 6 | import { Route, Switch } from 'react-router-dom'; 7 | import Article from '../components/Article'; 8 | import Editor from '../components/Editor'; 9 | import Home from '../components/Home'; 10 | import Login from '../components/Login'; 11 | import Profile from '../components/Profile'; 12 | import ProfileFavorites from '../components/ProfileFavorites'; 13 | import Register from '../components/Register'; 14 | import Settings from '../components/Settings'; 15 | import { store } from '../store'; 16 | import { push } from 'react-router-redux'; 17 | 18 | const mapStateToProps = state => { 19 | return { 20 | appLoaded: state.common.appLoaded, 21 | appName: state.common.appName, 22 | currentUser: state.common.currentUser, 23 | redirectTo: state.common.redirectTo 24 | }}; 25 | 26 | const mapDispatchToProps = dispatch => ({ 27 | onLoad: (payload, token) => 28 | dispatch({ type: APP_LOAD, payload, token, skipTracking: true }), 29 | onRedirect: () => 30 | dispatch({ type: REDIRECT }) 31 | }); 32 | 33 | class App extends React.Component { 34 | componentWillReceiveProps(nextProps) { 35 | if (nextProps.redirectTo) { 36 | // this.context.router.replace(nextProps.redirectTo); 37 | store.dispatch(push(nextProps.redirectTo)); 38 | this.props.onRedirect(); 39 | } 40 | } 41 | 42 | componentWillMount() { 43 | const token = window.localStorage.getItem('jwt'); 44 | if (token) { 45 | agent.setToken(token); 46 | } 47 | 48 | this.props.onLoad(token ? agent.Auth.current() : null, token); 49 | } 50 | 51 | render() { 52 | if (this.props.appLoaded) { 53 | return ( 54 |
55 |
58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 |
70 | ); 71 | } 72 | return ( 73 |
74 |
77 |
78 | ); 79 | } 80 | } 81 | 82 | // App.contextTypes = { 83 | // router: PropTypes.object.isRequired 84 | // }; 85 | 86 | export default connect(mapStateToProps, mapDispatchToProps)(App); 87 | -------------------------------------------------------------------------------- /frontend/src/components/Article/ArticleActions.js: -------------------------------------------------------------------------------- 1 | import { Link } from 'react-router-dom'; 2 | import React from 'react'; 3 | import agent from '../../agent'; 4 | import { connect } from 'react-redux'; 5 | import { DELETE_ARTICLE } from '../../constants/actionTypes'; 6 | 7 | const mapDispatchToProps = dispatch => ({ 8 | onClickDelete: payload => 9 | dispatch({ type: DELETE_ARTICLE, payload }) 10 | }); 11 | 12 | const ArticleActions = props => { 13 | const article = props.article; 14 | const del = () => { 15 | props.onClickDelete(agent.Articles.del(article.slug)) 16 | }; 17 | if (props.canModify) { 18 | return ( 19 | 20 | 21 | 24 | Edit Article 25 | 26 | 27 | 30 | 31 | 32 | ); 33 | } 34 | 35 | return ( 36 | 37 | 38 | ); 39 | }; 40 | 41 | export default connect(() => ({}), mapDispatchToProps)(ArticleActions); 42 | -------------------------------------------------------------------------------- /frontend/src/components/Article/ArticleMeta.js: -------------------------------------------------------------------------------- 1 | import ArticleActions from './ArticleActions'; 2 | import { Link } from 'react-router-dom'; 3 | import React from 'react'; 4 | 5 | const ArticleMeta = props => { 6 | const article = props.article; 7 | return ( 8 |
9 | 10 | {article.author.username} 11 | 12 | 13 |
14 | 15 | {article.author.username} 16 | 17 | 18 | {new Date(article.createdAt).toDateString()} 19 | 20 |
21 | 22 | 23 |
24 | ); 25 | }; 26 | 27 | export default ArticleMeta; 28 | -------------------------------------------------------------------------------- /frontend/src/components/Article/Comment.js: -------------------------------------------------------------------------------- 1 | import DeleteButton from './DeleteButton'; 2 | import { Link } from 'react-router-dom'; 3 | import React from 'react'; 4 | 5 | const Comment = props => { 6 | const comment = props.comment; 7 | const show = props.currentUser && 8 | props.currentUser.username === comment.author.username; 9 | return ( 10 |
11 |
12 |

{comment.body}

13 |
14 |
15 | 18 | {comment.author.username} 19 | 20 |   21 | 24 | {comment.author.username} 25 | 26 | 27 | {new Date(comment.createdAt).toDateString()} 28 | 29 | 30 |
31 |
32 | ); 33 | }; 34 | 35 | export default Comment; 36 | -------------------------------------------------------------------------------- /frontend/src/components/Article/CommentContainer.js: -------------------------------------------------------------------------------- 1 | import CommentInput from './CommentInput'; 2 | import CommentList from './CommentList'; 3 | import { Link } from 'react-router-dom'; 4 | import React from 'react'; 5 | 6 | const CommentContainer = props => { 7 | if (props.currentUser) { 8 | return ( 9 |
10 |
11 | 12 | 13 |
14 | 15 | 19 |
20 | ); 21 | } else { 22 | return ( 23 |
24 |

25 | Sign in 26 |  or  27 | sign up 28 |  to add comments on this article. 29 |

30 | 31 | 35 |
36 | ); 37 | } 38 | }; 39 | 40 | export default CommentContainer; 41 | -------------------------------------------------------------------------------- /frontend/src/components/Article/CommentInput.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | import agent from '../../agent'; 3 | import { connect } from 'react-redux'; 4 | import { ADD_COMMENT } from '../../constants/actionTypes'; 5 | 6 | const mapDispatchToProps = dispatch => ({ 7 | onSubmit: payload => 8 | dispatch({ type: ADD_COMMENT, payload }) 9 | }); 10 | 11 | class CommentInput extends React.Component { 12 | constructor() { 13 | super(); 14 | this.state = { 15 | body: '' 16 | }; 17 | 18 | this.setBody = ev => { 19 | this.setState({ body: ev.target.value }); 20 | }; 21 | 22 | this.createComment = ev => { 23 | ev.preventDefault(); 24 | const payload = agent.Comments.create(this.props.slug, 25 | { body: this.state.body }); 26 | this.setState({ body: '' }); 27 | this.props.onSubmit(payload); 28 | }; 29 | } 30 | 31 | render() { 32 | return ( 33 |
34 |
35 | 41 |
42 |
43 | {this.props.currentUser.username} 47 | 52 |
53 |
54 | ); 55 | } 56 | } 57 | 58 | export default connect(() => ({}), mapDispatchToProps)(CommentInput); 59 | -------------------------------------------------------------------------------- /frontend/src/components/Article/CommentList.js: -------------------------------------------------------------------------------- 1 | import Comment from './Comment'; 2 | import React from 'react'; 3 | 4 | const CommentList = props => { 5 | return ( 6 |
7 | { 8 | props.comments.map(comment => { 9 | return ( 10 | 15 | ); 16 | }) 17 | } 18 |
19 | ); 20 | }; 21 | 22 | export default CommentList; 23 | -------------------------------------------------------------------------------- /frontend/src/components/Article/DeleteButton.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | import agent from '../../agent'; 3 | import { connect } from 'react-redux'; 4 | import { DELETE_COMMENT } from '../../constants/actionTypes'; 5 | 6 | const mapDispatchToProps = dispatch => ({ 7 | onClick: (payload, commentId) => 8 | dispatch({ type: DELETE_COMMENT, payload, commentId }) 9 | }); 10 | 11 | const DeleteButton = props => { 12 | const del = () => { 13 | const payload = agent.Comments.delete(props.slug, props.commentId); 14 | props.onClick(payload, props.commentId); 15 | }; 16 | 17 | if (props.show) { 18 | return ( 19 | 20 | 21 | 22 | ); 23 | } 24 | return null; 25 | }; 26 | 27 | export default connect(() => ({}), mapDispatchToProps)(DeleteButton); 28 | -------------------------------------------------------------------------------- /frontend/src/components/Article/index.js: -------------------------------------------------------------------------------- 1 | import ArticleMeta from './ArticleMeta'; 2 | import CommentContainer from './CommentContainer'; 3 | import React from 'react'; 4 | import agent from '../../agent'; 5 | import { connect } from 'react-redux'; 6 | import marked from 'marked'; 7 | import { ARTICLE_PAGE_LOADED, ARTICLE_PAGE_UNLOADED } from '../../constants/actionTypes'; 8 | 9 | const mapStateToProps = state => ({ 10 | ...state.article, 11 | currentUser: state.common.currentUser 12 | }); 13 | 14 | const mapDispatchToProps = dispatch => ({ 15 | onLoad: payload => 16 | dispatch({ type: ARTICLE_PAGE_LOADED, payload }), 17 | onUnload: () => 18 | dispatch({ type: ARTICLE_PAGE_UNLOADED }) 19 | }); 20 | 21 | class Article extends React.Component { 22 | componentWillMount() { 23 | this.props.onLoad(Promise.all([ 24 | agent.Articles.get(this.props.match.params.id), 25 | agent.Comments.forArticle(this.props.match.params.id) 26 | ])); 27 | } 28 | 29 | componentWillUnmount() { 30 | this.props.onUnload(); 31 | } 32 | 33 | render() { 34 | if (!this.props.article) { 35 | return null; 36 | } 37 | 38 | const markup = { __html: marked(this.props.article.body, { sanitize: true }) }; 39 | const canModify = this.props.currentUser && 40 | this.props.currentUser.username === this.props.article.author.username; 41 | return ( 42 |
43 | 44 |
45 |
46 | 47 |

{this.props.article.title}

48 | 51 | 52 |
53 |
54 | 55 |
56 | 57 |
58 |
59 | 60 |
61 | 62 |
    63 | { 64 | this.props.article.tagList.map(tag => { 65 | return ( 66 |
  • 69 | {tag} 70 |
  • 71 | ); 72 | }) 73 | } 74 |
75 | 76 |
77 |
78 | 79 |
80 | 81 |
82 |
83 | 84 |
85 | 90 |
91 |
92 |
93 | ); 94 | } 95 | } 96 | 97 | export default connect(mapStateToProps, mapDispatchToProps)(Article); 98 | -------------------------------------------------------------------------------- /frontend/src/components/ArticleList.js: -------------------------------------------------------------------------------- 1 | import ArticlePreview from './ArticlePreview'; 2 | import ListPagination from './ListPagination'; 3 | import React from 'react'; 4 | 5 | const ArticleList = props => { 6 | if (!props.articles) { 7 | return ( 8 |
Loading...
9 | ); 10 | } 11 | 12 | if (props.articles.length === 0) { 13 | return ( 14 |
15 | No articles are here... yet. 16 |
17 | ); 18 | } 19 | 20 | return ( 21 |
22 | { 23 | props.articles.map(article => { 24 | return ( 25 | 26 | ); 27 | }) 28 | } 29 | 30 | 34 |
35 | ); 36 | }; 37 | 38 | export default ArticleList; 39 | -------------------------------------------------------------------------------- /frontend/src/components/ArticlePreview.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | import { Link } from 'react-router-dom'; 3 | import agent from '../agent'; 4 | import { connect } from 'react-redux'; 5 | import { ARTICLE_FAVORITED, ARTICLE_UNFAVORITED } from '../constants/actionTypes'; 6 | 7 | const FAVORITED_CLASS = 'btn btn-sm btn-primary'; 8 | const NOT_FAVORITED_CLASS = 'btn btn-sm btn-outline-primary'; 9 | 10 | const mapDispatchToProps = dispatch => ({ 11 | favorite: slug => dispatch({ 12 | type: ARTICLE_FAVORITED, 13 | payload: agent.Articles.favorite(slug) 14 | }), 15 | unfavorite: slug => dispatch({ 16 | type: ARTICLE_UNFAVORITED, 17 | payload: agent.Articles.unfavorite(slug) 18 | }) 19 | }); 20 | 21 | const ArticlePreview = props => { 22 | const article = props.article; 23 | const favoriteButtonClass = article.favorited ? 24 | FAVORITED_CLASS : 25 | NOT_FAVORITED_CLASS; 26 | 27 | const handleClick = ev => { 28 | ev.preventDefault(); 29 | if (article.favorited) { 30 | props.unfavorite(article.slug); 31 | } else { 32 | props.favorite(article.slug); 33 | } 34 | }; 35 | 36 | return ( 37 |
38 |
39 | 40 | {article.author.username} 41 | 42 | 43 |
44 | 45 | {article.author.username} 46 | 47 | 48 | {new Date(article.createdAt).toDateString()} 49 | 50 |
51 | 52 |
53 | 56 |
57 |
58 | 59 | 60 |

{article.title}

61 |

{article.description}

62 | Read more... 63 |
    64 | { 65 | article.tagList.map(tag => { 66 | return ( 67 |
  • 68 | {tag} 69 |
  • 70 | ) 71 | }) 72 | } 73 |
74 | 75 |
76 | ); 77 | } 78 | 79 | export default connect(() => ({}), mapDispatchToProps)(ArticlePreview); 80 | -------------------------------------------------------------------------------- /frontend/src/components/Header.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | import { Link } from 'react-router-dom'; 3 | 4 | const LoggedOutView = props => { 5 | if (!props.currentUser) { 6 | return ( 7 |
    8 | 9 |
  • 10 | 11 | Home 12 | 13 |
  • 14 | 15 |
  • 16 | 17 | Sign in 18 | 19 |
  • 20 | 21 |
  • 22 | 23 | Sign up 24 | 25 |
  • 26 | 27 |
28 | ); 29 | } 30 | return null; 31 | }; 32 | 33 | const LoggedInView = props => { 34 | if (props.currentUser) { 35 | return ( 36 |
    37 | 38 |
  • 39 | 40 | Home 41 | 42 |
  • 43 | 44 |
  • 45 | 46 |  New Post 47 | 48 |
  • 49 | 50 |
  • 51 | 52 |  Settings 53 | 54 |
  • 55 | 56 |
  • 57 | 60 | {props.currentUser.username} 61 | {props.currentUser.username} 62 | 63 |
  • 64 | 65 |
66 | ); 67 | } 68 | 69 | return null; 70 | }; 71 | 72 | class Header extends React.Component { 73 | render() { 74 | return ( 75 | 87 | ); 88 | } 89 | } 90 | 91 | export default Header; 92 | -------------------------------------------------------------------------------- /frontend/src/components/Home/Banner.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | 3 | const Banner = ({ appName, token }) => { 4 | if (token) { 5 | return null; 6 | } 7 | return ( 8 |
9 |
10 |

11 | {appName.toLowerCase()} 12 |

13 |

A place to share your knowledge.

14 |
15 |
16 | ); 17 | }; 18 | 19 | export default Banner; 20 | -------------------------------------------------------------------------------- /frontend/src/components/Home/MainView.js: -------------------------------------------------------------------------------- 1 | import ArticleList from '../ArticleList'; 2 | import React from 'react'; 3 | import agent from '../../agent'; 4 | import { connect } from 'react-redux'; 5 | import { CHANGE_TAB } from '../../constants/actionTypes'; 6 | 7 | const YourFeedTab = props => { 8 | if (props.token) { 9 | const clickHandler = ev => { 10 | ev.preventDefault(); 11 | props.onTabClick('feed', agent.Articles.feed, agent.Articles.feed()); 12 | } 13 | 14 | return ( 15 |
  • 16 | 19 | Your Feed 20 | 21 |
  • 22 | ); 23 | } 24 | return null; 25 | }; 26 | 27 | const GlobalFeedTab = props => { 28 | const clickHandler = ev => { 29 | ev.preventDefault(); 30 | props.onTabClick('all', agent.Articles.all, agent.Articles.all()); 31 | }; 32 | return ( 33 |
  • 34 | 38 | Global Feed 39 | 40 |
  • 41 | ); 42 | }; 43 | 44 | const TagFilterTab = props => { 45 | if (!props.tag) { 46 | return null; 47 | } 48 | 49 | return ( 50 |
  • 51 | 52 | {props.tag} 53 | 54 |
  • 55 | ); 56 | }; 57 | 58 | const mapStateToProps = state => ({ 59 | ...state.articleList, 60 | tags: state.home.tags, 61 | token: state.common.token 62 | }); 63 | 64 | const mapDispatchToProps = dispatch => ({ 65 | onTabClick: (tab, pager, payload) => dispatch({ type: CHANGE_TAB, tab, pager, payload }) 66 | }); 67 | 68 | const MainView = props => { 69 | return ( 70 |
    71 |
    72 |
      73 | 74 | 78 | 79 | 80 | 81 | 82 | 83 |
    84 |
    85 | 86 | 92 |
    93 | ); 94 | }; 95 | 96 | export default connect(mapStateToProps, mapDispatchToProps)(MainView); 97 | -------------------------------------------------------------------------------- /frontend/src/components/Home/Tags.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | import agent from '../../agent'; 3 | 4 | const Tags = props => { 5 | const tags = props.tags; 6 | if (tags) { 7 | return ( 8 |
    9 | { 10 | tags.map(tag => { 11 | const handleClick = ev => { 12 | ev.preventDefault(); 13 | props.onClickTag(tag, page => agent.Articles.byTag(tag, page), agent.Articles.byTag(tag)); 14 | }; 15 | 16 | return ( 17 | 22 | {tag} 23 | 24 | ); 25 | }) 26 | } 27 |
    28 | ); 29 | } else { 30 | return ( 31 |
    Loading Tags...
    32 | ); 33 | } 34 | }; 35 | 36 | export default Tags; 37 | -------------------------------------------------------------------------------- /frontend/src/components/Home/index.js: -------------------------------------------------------------------------------- 1 | import Banner from './Banner'; 2 | import MainView from './MainView'; 3 | import React from 'react'; 4 | import Tags from './Tags'; 5 | import agent from '../../agent'; 6 | import { connect } from 'react-redux'; 7 | import { 8 | HOME_PAGE_LOADED, 9 | HOME_PAGE_UNLOADED, 10 | APPLY_TAG_FILTER 11 | } from '../../constants/actionTypes'; 12 | 13 | const Promise = global.Promise; 14 | 15 | const mapStateToProps = state => ({ 16 | ...state.home, 17 | appName: state.common.appName, 18 | token: state.common.token 19 | }); 20 | 21 | const mapDispatchToProps = dispatch => ({ 22 | onClickTag: (tag, pager, payload) => 23 | dispatch({ type: APPLY_TAG_FILTER, tag, pager, payload }), 24 | onLoad: (tab, pager, payload) => 25 | dispatch({ type: HOME_PAGE_LOADED, tab, pager, payload }), 26 | onUnload: () => 27 | dispatch({ type: HOME_PAGE_UNLOADED }) 28 | }); 29 | 30 | class Home extends React.Component { 31 | componentWillMount() { 32 | const tab = this.props.token ? 'feed' : 'all'; 33 | const articlesPromise = this.props.token ? 34 | agent.Articles.feed : 35 | agent.Articles.all; 36 | 37 | this.props.onLoad(tab, articlesPromise, Promise.all([agent.Tags.getAll(), articlesPromise()])); 38 | } 39 | 40 | componentWillUnmount() { 41 | this.props.onUnload(); 42 | } 43 | 44 | render() { 45 | return ( 46 |
    47 | 48 | 49 | 50 |
    51 |
    52 | 53 | 54 |
    55 |
    56 | 57 |

    Popular Tags

    58 | 59 | 62 | 63 |
    64 |
    65 |
    66 |
    67 | 68 |
    69 | ); 70 | } 71 | } 72 | 73 | export default connect(mapStateToProps, mapDispatchToProps)(Home); 74 | -------------------------------------------------------------------------------- /frontend/src/components/ListErrors.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | 3 | class ListErrors extends React.Component { 4 | render() { 5 | const errors = this.props.errors; 6 | if (errors) { 7 | return ( 8 |
      9 | { 10 | Object.keys(errors).map(key => { 11 | return ( 12 |
    • 13 | {key} {errors[key]} 14 |
    • 15 | ); 16 | }) 17 | } 18 |
    19 | ); 20 | } else { 21 | return null; 22 | } 23 | } 24 | } 25 | 26 | export default ListErrors; 27 | -------------------------------------------------------------------------------- /frontend/src/components/ListPagination.js: -------------------------------------------------------------------------------- 1 | import React from 'react'; 2 | import agent from '../agent'; 3 | import { connect } from 'react-redux'; 4 | import { SET_PAGE } from '../constants/actionTypes'; 5 | 6 | const mapDispatchToProps = dispatch => ({ 7 | onSetPage: (page, payload) => 8 | dispatch({ type: SET_PAGE, page, payload }) 9 | }); 10 | 11 | const ListPagination = props => { 12 | if (props.articlesCount <= 10) { 13 | return null; 14 | } 15 | 16 | const range = []; 17 | for (let i = 0; i < Math.ceil(props.articlesCount / 10); ++i) { 18 | range.push(i); 19 | } 20 | 21 | const setPage = page => { 22 | if(props.pager) { 23 | props.onSetPage(page, props.pager(page)); 24 | }else { 25 | props.onSetPage(page, agent.Articles.all(page)) 26 | } 27 | }; 28 | 29 | return ( 30 | 55 | ); 56 | }; 57 | 58 | export default connect(() => ({}), mapDispatchToProps)(ListPagination); 59 | -------------------------------------------------------------------------------- /frontend/src/components/Login.js: -------------------------------------------------------------------------------- 1 | import { Link } from 'react-router-dom'; 2 | import ListErrors from './ListErrors'; 3 | import React from 'react'; 4 | import agent from '../agent'; 5 | import { connect } from 'react-redux'; 6 | import { 7 | UPDATE_FIELD_AUTH, 8 | LOGIN, 9 | LOGIN_PAGE_UNLOADED 10 | } from '../constants/actionTypes'; 11 | 12 | const mapStateToProps = state => ({ ...state.auth }); 13 | 14 | const mapDispatchToProps = dispatch => ({ 15 | onChangeEmail: value => 16 | dispatch({ type: UPDATE_FIELD_AUTH, key: 'email', value }), 17 | onChangePassword: value => 18 | dispatch({ type: UPDATE_FIELD_AUTH, key: 'password', value }), 19 | onSubmit: (email, password) => 20 | dispatch({ type: LOGIN, payload: agent.Auth.login(email, password) }), 21 | onUnload: () => 22 | dispatch({ type: LOGIN_PAGE_UNLOADED }) 23 | }); 24 | 25 | class Login extends React.Component { 26 | constructor() { 27 | super(); 28 | this.changeEmail = ev => this.props.onChangeEmail(ev.target.value); 29 | this.changePassword = ev => this.props.onChangePassword(ev.target.value); 30 | this.submitForm = (email, password) => ev => { 31 | ev.preventDefault(); 32 | this.props.onSubmit(email, password); 33 | }; 34 | } 35 | 36 | componentWillUnmount() { 37 | this.props.onUnload(); 38 | } 39 | 40 | render() { 41 | const email = this.props.email; 42 | const password = this.props.password; 43 | return ( 44 |
    45 |
    46 |
    47 | 48 |
    49 |

    Sign In

    50 |

    51 | 52 | Need an account? 53 | 54 |

    55 | 56 | 57 | 58 |
    59 |
    60 | 61 |
    62 | 68 |
    69 | 70 |
    71 | 77 |
    78 | 79 | 85 | 86 |
    87 |
    88 |
    89 | 90 |
    91 |
    92 |
    93 | ); 94 | } 95 | } 96 | 97 | export default connect(mapStateToProps, mapDispatchToProps)(Login); 98 | -------------------------------------------------------------------------------- /frontend/src/components/Profile.js: -------------------------------------------------------------------------------- 1 | import ArticleList from './ArticleList'; 2 | import React from 'react'; 3 | import { Link } from 'react-router-dom'; 4 | import agent from '../agent'; 5 | import { connect } from 'react-redux'; 6 | import { 7 | FOLLOW_USER, 8 | UNFOLLOW_USER, 9 | PROFILE_PAGE_LOADED, 10 | PROFILE_PAGE_UNLOADED 11 | } from '../constants/actionTypes'; 12 | 13 | const EditProfileSettings = props => { 14 | if (props.isUser) { 15 | return ( 16 | 19 | Edit Profile Settings 20 | 21 | ); 22 | } 23 | return null; 24 | }; 25 | 26 | const FollowUserButton = props => { 27 | if (props.isUser) { 28 | return null; 29 | } 30 | 31 | let classes = 'btn btn-sm action-btn'; 32 | if (props.user.following) { 33 | classes += ' btn-secondary'; 34 | } else { 35 | classes += ' btn-outline-secondary'; 36 | } 37 | 38 | const handleClick = ev => { 39 | ev.preventDefault(); 40 | if (props.user.following) { 41 | props.unfollow(props.user.username) 42 | } else { 43 | props.follow(props.user.username) 44 | } 45 | }; 46 | 47 | return ( 48 | 55 | ); 56 | }; 57 | 58 | const mapStateToProps = state => ({ 59 | ...state.articleList, 60 | currentUser: state.common.currentUser, 61 | profile: state.profile 62 | }); 63 | 64 | const mapDispatchToProps = dispatch => ({ 65 | onFollow: username => dispatch({ 66 | type: FOLLOW_USER, 67 | payload: agent.Profile.follow(username) 68 | }), 69 | onLoad: payload => dispatch({ type: PROFILE_PAGE_LOADED, payload }), 70 | onUnfollow: username => dispatch({ 71 | type: UNFOLLOW_USER, 72 | payload: agent.Profile.unfollow(username) 73 | }), 74 | onUnload: () => dispatch({ type: PROFILE_PAGE_UNLOADED }) 75 | }); 76 | 77 | class Profile extends React.Component { 78 | componentWillMount() { 79 | this.props.onLoad(Promise.all([ 80 | agent.Profile.get(this.props.match.params.username), 81 | agent.Articles.byAuthor(this.props.match.params.username) 82 | ])); 83 | } 84 | 85 | componentWillUnmount() { 86 | this.props.onUnload(); 87 | } 88 | 89 | renderTabs() { 90 | return ( 91 |
      92 |
    • 93 | 96 | My Articles 97 | 98 |
    • 99 | 100 |
    • 101 | 104 | Favorited Articles 105 | 106 |
    • 107 |
    108 | ); 109 | } 110 | 111 | render() { 112 | const profile = this.props.profile; 113 | if (!profile) { 114 | return null; 115 | } 116 | 117 | const isUser = this.props.currentUser && 118 | this.props.profile.username === this.props.currentUser.username; 119 | 120 | return ( 121 |
    122 | 123 |
    124 |
    125 |
    126 |
    127 | 128 | {profile.username} 129 |

    {profile.username}

    130 |

    {profile.bio}

    131 | 132 | 133 | 139 | 140 |
    141 |
    142 |
    143 |
    144 | 145 |
    146 |
    147 | 148 |
    149 | 150 |
    151 | {this.renderTabs()} 152 |
    153 | 154 | 159 |
    160 | 161 |
    162 |
    163 | 164 |
    165 | ); 166 | } 167 | } 168 | 169 | export default connect(mapStateToProps, mapDispatchToProps)(Profile); 170 | export { Profile, mapStateToProps }; 171 | -------------------------------------------------------------------------------- /frontend/src/components/ProfileFavorites.js: -------------------------------------------------------------------------------- 1 | import { Profile, mapStateToProps } from './Profile'; 2 | import React from 'react'; 3 | import { Link } from 'react-router-dom'; 4 | import agent from '../agent'; 5 | import { connect } from 'react-redux'; 6 | import { 7 | PROFILE_PAGE_LOADED, 8 | PROFILE_PAGE_UNLOADED 9 | } from '../constants/actionTypes'; 10 | 11 | const mapDispatchToProps = dispatch => ({ 12 | onLoad: (pager, payload) => 13 | dispatch({ type: PROFILE_PAGE_LOADED, pager, payload }), 14 | onUnload: () => 15 | dispatch({ type: PROFILE_PAGE_UNLOADED }) 16 | }); 17 | 18 | class ProfileFavorites extends Profile { 19 | componentWillMount() { 20 | this.props.onLoad(page => agent.Articles.favoritedBy(this.props.match.params.username, page), Promise.all([ 21 | agent.Profile.get(this.props.match.params.username), 22 | agent.Articles.favoritedBy(this.props.match.params.username) 23 | ])); 24 | } 25 | 26 | componentWillUnmount() { 27 | this.props.onUnload(); 28 | } 29 | 30 | renderTabs() { 31 | return ( 32 |
      33 |
    • 34 | 37 | My Articles 38 | 39 |
    • 40 | 41 |
    • 42 | 45 | Favorited Articles 46 | 47 |
    • 48 |
    49 | ); 50 | } 51 | } 52 | 53 | export default connect(mapStateToProps, mapDispatchToProps)(ProfileFavorites); 54 | -------------------------------------------------------------------------------- /frontend/src/components/Register.js: -------------------------------------------------------------------------------- 1 | import { Link } from 'react-router-dom'; 2 | import ListErrors from './ListErrors'; 3 | import React from 'react'; 4 | import agent from '../agent'; 5 | import { connect } from 'react-redux'; 6 | import { 7 | UPDATE_FIELD_AUTH, 8 | REGISTER, 9 | REGISTER_PAGE_UNLOADED 10 | } from '../constants/actionTypes'; 11 | 12 | const mapStateToProps = state => ({ ...state.auth }); 13 | 14 | const mapDispatchToProps = dispatch => ({ 15 | onChangeEmail: value => 16 | dispatch({ type: UPDATE_FIELD_AUTH, key: 'email', value }), 17 | onChangePassword: value => 18 | dispatch({ type: UPDATE_FIELD_AUTH, key: 'password', value }), 19 | onChangeUsername: value => 20 | dispatch({ type: UPDATE_FIELD_AUTH, key: 'username', value }), 21 | onSubmit: (username, email, password) => { 22 | const payload = agent.Auth.register(username, email, password); 23 | dispatch({ type: REGISTER, payload }) 24 | }, 25 | onUnload: () => 26 | dispatch({ type: REGISTER_PAGE_UNLOADED }) 27 | }); 28 | 29 | class Register extends React.Component { 30 | constructor() { 31 | super(); 32 | this.changeEmail = ev => this.props.onChangeEmail(ev.target.value); 33 | this.changePassword = ev => this.props.onChangePassword(ev.target.value); 34 | this.changeUsername = ev => this.props.onChangeUsername(ev.target.value); 35 | this.submitForm = (username, email, password) => ev => { 36 | ev.preventDefault(); 37 | this.props.onSubmit(username, email, password); 38 | } 39 | } 40 | 41 | componentWillUnmount() { 42 | this.props.onUnload(); 43 | } 44 | 45 | render() { 46 | const email = this.props.email; 47 | const password = this.props.password; 48 | const username = this.props.username; 49 | 50 | return ( 51 |
    52 |
    53 |
    54 | 55 |
    56 |

    Sign Up

    57 |

    58 | 59 | Have an account? 60 | 61 |

    62 | 63 | 64 | 65 |
    66 |
    67 | 68 |
    69 | 75 |
    76 | 77 |
    78 | 84 |
    85 | 86 |
    87 | 93 |
    94 | 95 | 101 | 102 |
    103 |
    104 |
    105 | 106 |
    107 |
    108 |
    109 | ); 110 | } 111 | } 112 | 113 | export default connect(mapStateToProps, mapDispatchToProps)(Register); 114 | -------------------------------------------------------------------------------- /frontend/src/constants/actionTypes.js: -------------------------------------------------------------------------------- 1 | export const APP_LOAD = 'APP_LOAD'; 2 | export const REDIRECT = 'REDIRECT'; 3 | export const ARTICLE_SUBMITTED = 'ARTICLE_SUBMITTED'; 4 | export const SETTINGS_SAVED = 'SETTINGS_SAVED'; 5 | export const DELETE_ARTICLE = 'DELETE_ARTICLE'; 6 | export const SETTINGS_PAGE_UNLOADED = 'SETTINGS_PAGE_UNLOADED'; 7 | export const HOME_PAGE_LOADED = 'HOME_PAGE_LOADED'; 8 | export const HOME_PAGE_UNLOADED = 'HOME_PAGE_UNLOADED'; 9 | export const ARTICLE_PAGE_LOADED = 'ARTICLE_PAGE_LOADED'; 10 | export const ARTICLE_PAGE_UNLOADED = 'ARTICLE_PAGE_UNLOADED'; 11 | export const ADD_COMMENT = 'ADD_COMMENT'; 12 | export const DELETE_COMMENT = 'DELETE_COMMENT'; 13 | export const ARTICLE_FAVORITED = 'ARTICLE_FAVORITED'; 14 | export const ARTICLE_UNFAVORITED = 'ARTICLE_UNFAVORITED'; 15 | export const SET_PAGE = 'SET_PAGE'; 16 | export const APPLY_TAG_FILTER = 'APPLY_TAG_FILTER'; 17 | export const CHANGE_TAB = 'CHANGE_TAB'; 18 | export const PROFILE_PAGE_LOADED = 'PROFILE_PAGE_LOADED'; 19 | export const PROFILE_PAGE_UNLOADED = 'PROFILE_PAGE_UNLOADED'; 20 | export const LOGIN = 'LOGIN'; 21 | export const LOGOUT = 'LOGOUT'; 22 | export const REGISTER = 'REGISTER'; 23 | export const LOGIN_PAGE_UNLOADED = 'LOGIN_PAGE_UNLOADED'; 24 | export const REGISTER_PAGE_UNLOADED = 'REGISTER_PAGE_UNLOADED'; 25 | export const ASYNC_START = 'ASYNC_START'; 26 | export const ASYNC_END = 'ASYNC_END'; 27 | export const EDITOR_PAGE_LOADED = 'EDITOR_PAGE_LOADED'; 28 | export const EDITOR_PAGE_UNLOADED = 'EDITOR_PAGE_UNLOADED'; 29 | export const ADD_TAG = 'ADD_TAG'; 30 | export const REMOVE_TAG = 'REMOVE_TAG'; 31 | export const UPDATE_FIELD_AUTH = 'UPDATE_FIELD_AUTH'; 32 | export const UPDATE_FIELD_EDITOR = 'UPDATE_FIELD_EDITOR'; 33 | export const FOLLOW_USER = 'FOLLOW_USER'; 34 | export const UNFOLLOW_USER = 'UNFOLLOW_USER'; 35 | -------------------------------------------------------------------------------- /frontend/src/index.js: -------------------------------------------------------------------------------- 1 | import ReactDOM from 'react-dom'; 2 | import { Provider } from 'react-redux'; 3 | import React from 'react'; 4 | import { store, history} from './store'; 5 | 6 | import { Route, Switch } from 'react-router-dom'; 7 | import { ConnectedRouter } from 'react-router-redux'; 8 | 9 | import App from './components/App'; 10 | 11 | ReactDOM.render(( 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | ), document.getElementById('root')); 21 | -------------------------------------------------------------------------------- /frontend/src/middleware.js: -------------------------------------------------------------------------------- 1 | import agent from './agent'; 2 | import { 3 | ASYNC_START, 4 | ASYNC_END, 5 | LOGIN, 6 | LOGOUT, 7 | REGISTER 8 | } from './constants/actionTypes'; 9 | 10 | const promiseMiddleware = store => next => action => { 11 | if (isPromise(action.payload)) { 12 | store.dispatch({ type: ASYNC_START, subtype: action.type }); 13 | 14 | const currentView = store.getState().viewChangeCounter; 15 | const skipTracking = action.skipTracking; 16 | 17 | action.payload.then( 18 | res => { 19 | const currentState = store.getState() 20 | if (!skipTracking && currentState.viewChangeCounter !== currentView) { 21 | 
return 22 | } 23 | console.log('RESULT', res); 24 | action.payload = res; 25 | store.dispatch({ type: ASYNC_END, promise: action.payload }); 26 | store.dispatch(action); 27 | }, 28 | error => { 29 | const currentState = store.getState() 30 | if (!skipTracking && currentState.viewChangeCounter !== currentView) { 31 | return 32 | } 33 | console.log('ERROR', error); 34 | action.error = true; 35 | action.payload = error.response.body; 36 | if (!action.skipTracking) { 37 | store.dispatch({ type: ASYNC_END, promise: action.payload }); 38 | } 39 | store.dispatch(action); 40 | } 41 | ); 42 | 43 | return; 44 | } 45 | 46 | next(action); 47 | }; 48 | 49 | const localStorageMiddleware = store => next => action => { 50 | if (action.type === REGISTER || action.type === LOGIN) { 51 | if (!action.error) { 52 | window.localStorage.setItem('jwt', action.payload.user.token); 53 | agent.setToken(action.payload.user.token); 54 | } 55 | } else if (action.type === LOGOUT) { 56 | window.localStorage.setItem('jwt', ''); 57 | agent.setToken(null); 58 | } 59 | 60 | next(action); 61 | }; 62 | 63 | function isPromise(v) { 64 | return v && typeof v.then === 'function'; 65 | } 66 | 67 | 68 | export { promiseMiddleware, localStorageMiddleware } 69 | -------------------------------------------------------------------------------- /frontend/src/reducer.js: -------------------------------------------------------------------------------- 1 | import article from './reducers/article'; 2 | import articleList from './reducers/articleList'; 3 | import auth from './reducers/auth'; 4 | import { combineReducers } from 'redux'; 5 | import common from './reducers/common'; 6 | import editor from './reducers/editor'; 7 | import home from './reducers/home'; 8 | import profile from './reducers/profile'; 9 | import settings from './reducers/settings'; 10 | import { routerReducer } from 'react-router-redux'; 11 | 12 | export default combineReducers({ 13 | article, 14 | articleList, 15 | auth, 16 | common, 17 | editor, 18 | home, 19 | profile, 20 | settings, 21 | router: routerReducer 22 | }); 23 | -------------------------------------------------------------------------------- /frontend/src/reducers/article.js: -------------------------------------------------------------------------------- 1 | import { 2 | ARTICLE_PAGE_LOADED, 3 | ARTICLE_PAGE_UNLOADED, 4 | ADD_COMMENT, 5 | DELETE_COMMENT 6 | } from '../constants/actionTypes'; 7 | 8 | export default (state = {}, action) => { 9 | switch (action.type) { 10 | case ARTICLE_PAGE_LOADED: 11 | return { 12 | ...state, 13 | article: action.payload[0].article, 14 | comments: action.payload[1].comments 15 | }; 16 | case ARTICLE_PAGE_UNLOADED: 17 | return {}; 18 | case ADD_COMMENT: 19 | return { 20 | ...state, 21 | commentErrors: action.error ? action.payload.errors : null, 22 | comments: action.error ? 
23 | null : 24 | (state.comments || []).concat([action.payload.comment]) 25 | }; 26 | case DELETE_COMMENT: 27 | const commentId = action.commentId 28 | return { 29 | ...state, 30 | comments: state.comments.filter(comment => comment.id !== commentId) 31 | }; 32 | default: 33 | return state; 34 | } 35 | }; 36 | -------------------------------------------------------------------------------- /frontend/src/reducers/articleList.js: -------------------------------------------------------------------------------- 1 | import { 2 | ARTICLE_FAVORITED, 3 | ARTICLE_UNFAVORITED, 4 | SET_PAGE, 5 | APPLY_TAG_FILTER, 6 | HOME_PAGE_LOADED, 7 | HOME_PAGE_UNLOADED, 8 | CHANGE_TAB, 9 | PROFILE_PAGE_LOADED, 10 | PROFILE_PAGE_UNLOADED, 11 | PROFILE_FAVORITES_PAGE_LOADED, 12 | PROFILE_FAVORITES_PAGE_UNLOADED 13 | } from '../constants/actionTypes'; 14 | 15 | export default (state = {}, action) => { 16 | switch (action.type) { 17 | case ARTICLE_FAVORITED: 18 | case ARTICLE_UNFAVORITED: 19 | return { 20 | ...state, 21 | articles: state.articles.map(article => { 22 | if (article.slug === action.payload.article.slug) { 23 | return { 24 | ...article, 25 | favorited: action.payload.article.favorited, 26 | favoritesCount: action.payload.article.favoritesCount 27 | }; 28 | } 29 | return article; 30 | }) 31 | }; 32 | case SET_PAGE: 33 | return { 34 | ...state, 35 | articles: action.payload.articles, 36 | articlesCount: action.payload.articlesCount, 37 | currentPage: action.page 38 | }; 39 | case APPLY_TAG_FILTER: 40 | return { 41 | ...state, 42 | pager: action.pager, 43 | articles: action.payload.articles, 44 | articlesCount: action.payload.articlesCount, 45 | tab: null, 46 | tag: action.tag, 47 | currentPage: 0 48 | }; 49 | case HOME_PAGE_LOADED: 50 | return { 51 | ...state, 52 | pager: action.pager, 53 | tags: action.payload[0].tags, 54 | articles: action.payload[1].articles, 55 | articlesCount: action.payload[1].articlesCount, 56 | currentPage: 0, 57 | tab: action.tab 58 | }; 59 | case HOME_PAGE_UNLOADED: 60 | return {}; 61 | case CHANGE_TAB: 62 | return { 63 | ...state, 64 | pager: action.pager, 65 | articles: action.payload.articles, 66 | articlesCount: action.payload.articlesCount, 67 | tab: action.tab, 68 | currentPage: 0, 69 | tag: null 70 | }; 71 | case PROFILE_PAGE_LOADED: 72 | case PROFILE_FAVORITES_PAGE_LOADED: 73 | return { 74 | ...state, 75 | pager: action.pager, 76 | articles: action.payload[1].articles, 77 | articlesCount: action.payload[1].articlesCount, 78 | currentPage: 0 79 | }; 80 | case PROFILE_PAGE_UNLOADED: 81 | case PROFILE_FAVORITES_PAGE_UNLOADED: 82 | return {}; 83 | default: 84 | return state; 85 | } 86 | }; 87 | -------------------------------------------------------------------------------- /frontend/src/reducers/auth.js: -------------------------------------------------------------------------------- 1 | import { 2 | LOGIN, 3 | REGISTER, 4 | LOGIN_PAGE_UNLOADED, 5 | REGISTER_PAGE_UNLOADED, 6 | ASYNC_START, 7 | UPDATE_FIELD_AUTH 8 | } from '../constants/actionTypes'; 9 | 10 | export default (state = {}, action) => { 11 | switch (action.type) { 12 | case LOGIN: 13 | case REGISTER: 14 | return { 15 | ...state, 16 | inProgress: false, 17 | errors: action.error ? 
action.payload.errors : null 18 | }; 19 | case LOGIN_PAGE_UNLOADED: 20 | case REGISTER_PAGE_UNLOADED: 21 | return {}; 22 | case ASYNC_START: 23 | if (action.subtype === LOGIN || action.subtype === REGISTER) { 24 | return { ...state, inProgress: true }; 25 | } 26 | break; 27 | case UPDATE_FIELD_AUTH: 28 | return { ...state, [action.key]: action.value }; 29 | default: 30 | return state; 31 | } 32 | 33 | return state; 34 | }; 35 | -------------------------------------------------------------------------------- /frontend/src/reducers/common.js: -------------------------------------------------------------------------------- 1 | import { 2 | APP_LOAD, 3 | REDIRECT, 4 | LOGOUT, 5 | ARTICLE_SUBMITTED, 6 | SETTINGS_SAVED, 7 | LOGIN, 8 | REGISTER, 9 | DELETE_ARTICLE, 10 | ARTICLE_PAGE_UNLOADED, 11 | EDITOR_PAGE_UNLOADED, 12 | HOME_PAGE_UNLOADED, 13 | PROFILE_PAGE_UNLOADED, 14 | PROFILE_FAVORITES_PAGE_UNLOADED, 15 | SETTINGS_PAGE_UNLOADED, 16 | LOGIN_PAGE_UNLOADED, 17 | REGISTER_PAGE_UNLOADED 18 | } from '../constants/actionTypes'; 19 | 20 | const defaultState = { 21 | appName: 'Conduit', 22 | token: null, 23 | viewChangeCounter: 0 24 | }; 25 | 26 | export default (state = defaultState, action) => { 27 | switch (action.type) { 28 | case APP_LOAD: 29 | return { 30 | ...state, 31 | token: action.token || null, 32 | appLoaded: true, 33 | currentUser: action.payload ? action.payload.user : null 34 | }; 35 | case REDIRECT: 36 | return { ...state, redirectTo: null }; 37 | case LOGOUT: 38 | return { ...state, redirectTo: '/', token: null, currentUser: null }; 39 | case ARTICLE_SUBMITTED: 40 | const redirectUrl = `/article/${action.payload.article.slug}`; 41 | return { ...state, redirectTo: redirectUrl }; 42 | case SETTINGS_SAVED: 43 | return { 44 | ...state, 45 | redirectTo: action.error ? null : '/', 46 | currentUser: action.error ? null : action.payload.user 47 | }; 48 | case LOGIN: 49 | case REGISTER: 50 | return { 51 | ...state, 52 | redirectTo: action.error ? null : '/', 53 | token: action.error ? null : action.payload.user.token, 54 | currentUser: action.error ? null : action.payload.user 55 | }; 56 | case DELETE_ARTICLE: 57 | return { ...state, redirectTo: '/' }; 58 | case ARTICLE_PAGE_UNLOADED: 59 | case EDITOR_PAGE_UNLOADED: 60 | case HOME_PAGE_UNLOADED: 61 | case PROFILE_PAGE_UNLOADED: 62 | case PROFILE_FAVORITES_PAGE_UNLOADED: 63 | case SETTINGS_PAGE_UNLOADED: 64 | case LOGIN_PAGE_UNLOADED: 65 | case REGISTER_PAGE_UNLOADED: 66 | return { ...state, viewChangeCounter: state.viewChangeCounter + 1 }; 67 | default: 68 | return state; 69 | } 70 | }; 71 | -------------------------------------------------------------------------------- /frontend/src/reducers/editor.js: -------------------------------------------------------------------------------- 1 | import { 2 | EDITOR_PAGE_LOADED, 3 | EDITOR_PAGE_UNLOADED, 4 | ARTICLE_SUBMITTED, 5 | ASYNC_START, 6 | ADD_TAG, 7 | REMOVE_TAG, 8 | UPDATE_FIELD_EDITOR 9 | } from '../constants/actionTypes'; 10 | 11 | export default (state = {}, action) => { 12 | switch (action.type) { 13 | case EDITOR_PAGE_LOADED: 14 | return { 15 | ...state, 16 | articleSlug: action.payload ? action.payload.article.slug : '', 17 | title: action.payload ? action.payload.article.title : '', 18 | description: action.payload ? action.payload.article.description : '', 19 | body: action.payload ? action.payload.article.body : '', 20 | tagInput: '', 21 | tagList: action.payload ? 
action.payload.article.tagList : [] 22 | }; 23 | case EDITOR_PAGE_UNLOADED: 24 | return {}; 25 | case ARTICLE_SUBMITTED: 26 | return { 27 | ...state, 28 | inProgress: null, 29 | errors: action.error ? action.payload.errors : null 30 | }; 31 | case ASYNC_START: 32 | if (action.subtype === ARTICLE_SUBMITTED) { 33 | return { ...state, inProgress: true }; 34 | } 35 | break; 36 | case ADD_TAG: 37 | return { 38 | ...state, 39 | tagList: state.tagList.concat([state.tagInput]), 40 | tagInput: '' 41 | }; 42 | case REMOVE_TAG: 43 | return { 44 | ...state, 45 | tagList: state.tagList.filter(tag => tag !== action.tag) 46 | }; 47 | case UPDATE_FIELD_EDITOR: 48 | return { ...state, [action.key]: action.value }; 49 | default: 50 | return state; 51 | } 52 | 53 | return state; 54 | }; 55 | -------------------------------------------------------------------------------- /frontend/src/reducers/home.js: -------------------------------------------------------------------------------- 1 | import { HOME_PAGE_LOADED, HOME_PAGE_UNLOADED } from '../constants/actionTypes'; 2 | 3 | export default (state = {}, action) => { 4 | switch (action.type) { 5 | case HOME_PAGE_LOADED: 6 | return { 7 | ...state, 8 | tags: action.payload[0].tags 9 | }; 10 | case HOME_PAGE_UNLOADED: 11 | return {}; 12 | default: 13 | return state; 14 | } 15 | }; 16 | -------------------------------------------------------------------------------- /frontend/src/reducers/profile.js: -------------------------------------------------------------------------------- 1 | import { 2 | PROFILE_PAGE_LOADED, 3 | PROFILE_PAGE_UNLOADED, 4 | FOLLOW_USER, 5 | UNFOLLOW_USER 6 | } from '../constants/actionTypes'; 7 | 8 | export default (state = {}, action) => { 9 | switch (action.type) { 10 | case PROFILE_PAGE_LOADED: 11 | return { 12 | ...action.payload[0].profile 13 | }; 14 | case PROFILE_PAGE_UNLOADED: 15 | return {}; 16 | case FOLLOW_USER: 17 | case UNFOLLOW_USER: 18 | return { 19 | ...action.payload.profile 20 | }; 21 | default: 22 | return state; 23 | } 24 | }; 25 | -------------------------------------------------------------------------------- /frontend/src/reducers/settings.js: -------------------------------------------------------------------------------- 1 | import { 2 | SETTINGS_SAVED, 3 | SETTINGS_PAGE_UNLOADED, 4 | ASYNC_START 5 | } from '../constants/actionTypes'; 6 | 7 | export default (state = {}, action) => { 8 | switch (action.type) { 9 | case SETTINGS_SAVED: 10 | return { 11 | ...state, 12 | inProgress: false, 13 | errors: action.error ? 
action.payload.errors : null 14 | }; 15 | case SETTINGS_PAGE_UNLOADED: 16 | return {}; 17 | case ASYNC_START: 18 | return { 19 | ...state, 20 | inProgress: true 21 | }; 22 | default: 23 | return state; 24 | } 25 | }; 26 | -------------------------------------------------------------------------------- /frontend/src/store.js: -------------------------------------------------------------------------------- 1 | import { applyMiddleware, createStore } from 'redux'; 2 | import { createLogger } from 'redux-logger' 3 | import { composeWithDevTools } from 'redux-devtools-extension/developmentOnly'; 4 | import { promiseMiddleware, localStorageMiddleware } from './middleware'; 5 | import reducer from './reducer'; 6 | 7 | import { routerMiddleware } from 'react-router-redux' 8 | import createHistory from 'history/createBrowserHistory'; 9 | 10 | export const history = createHistory(); 11 | 12 | // Build the middleware for intercepting and dispatching navigation actions 13 | const myRouterMiddleware = routerMiddleware(history); 14 | 15 | const getMiddleware = () => { 16 | if (process.env.NODE_ENV === 'production') { 17 | return applyMiddleware(myRouterMiddleware, promiseMiddleware, localStorageMiddleware); 18 | } else { 19 | // Enable additional logging in non-production environments. 20 | return applyMiddleware(myRouterMiddleware, promiseMiddleware, localStorageMiddleware, createLogger()) 21 | } 22 | }; 23 | 24 | export const store = createStore( 25 | reducer, composeWithDevTools(getMiddleware())); 26 | -------------------------------------------------------------------------------- /infrastructure/.gitignore: -------------------------------------------------------------------------------- 1 | .env 2 | 3 | /docker/api/data 4 | /docker/db/data 5 | /docker/nginx/data 6 | /docker/web/data 7 | -------------------------------------------------------------------------------- /infrastructure/aws/codedeploy/after_install: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | mkdir -p /var/log/gunicorn 3 | mkdir -p /var/log/django 4 | mkdir -p /var/www/conduit/static/ 5 | 6 | cd /deploy/backend || exit 2 7 | pipenv install 8 | 9 | AWS_DEFAULT_REGION="$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)" 10 | export AWS_DEFAULT_REGION 11 | 12 | DJANGO_SETTINGS_MODULE='conduit.settings.ec2' 13 | export DJANGO_SETTINGS_MODULE 14 | 15 | pipenv run python manage.py migrate 16 | pipenv run python manage.py collectstatic --no-input 17 | -------------------------------------------------------------------------------- /infrastructure/aws/codedeploy/gunicorn.ec2.conf: -------------------------------------------------------------------------------- 1 | proc_name = "gunicorn" 2 | 3 | bind = "0.0.0.0:9000" 4 | 5 | daemon = True 6 | pidfile = "/var/run/gunicorn.pid" 7 | 8 | loglevel = "DEBUG" 9 | accesslog = "/var/log/gunicorn/access.log" 10 | errorlog = "/var/log/gunicorn/error.log" 11 | 12 | keepalive = 1 13 | timeout = 300 14 | workers = 5 15 | worker_class = "gevent" 16 | -------------------------------------------------------------------------------- /infrastructure/aws/codedeploy/install_dependencies: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | sudo apt update 3 | sudo apt install -y python3-pip jq 4 | 5 | pip3 install pipenv 6 | -------------------------------------------------------------------------------- /infrastructure/aws/codedeploy/start_server: 
-------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | cd /deploy/backend || exit 2 3 | 4 | AWS_DEFAULT_REGION="$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)" 5 | export AWS_DEFAULT_REGION 6 | 7 | DJANGO_SETTINGS_MODULE='conduit.settings.ec2' 8 | export DJANGO_SETTINGS_MODULE 9 | 10 | pipenv run gunicorn --config ../infrastructure/aws/codedeploy/gunicorn.ec2.conf conduit.wsgi 11 | -------------------------------------------------------------------------------- /infrastructure/aws/codedeploy/stop_server: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | if [[ -f /var/run/gunicorn.pid ]]; then 3 | kill -s TERM "$(cat /var/run/gunicorn.pid)" 4 | fi 5 | -------------------------------------------------------------------------------- /infrastructure/aws/codedeploy/validate_service: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | PID_FILE="/var/run/gunicorn.pid" 3 | if [[ -f "$PID_FILE" ]]; then 4 | PID="$(cat "$PID_FILE")" 5 | PROC_FILE="/proc/$PID/status" 6 | if [[ ! -f "$PROC_FILE" ]]; then 7 | echo "gunicorn is not running" 8 | rm -f "$PID_FILE" 9 | exit 2 10 | fi 11 | else 12 | echo "gunicorn is not running" 13 | exit 2 14 | fi 15 | -------------------------------------------------------------------------------- /infrastructure/aws/lambda/lambda_function.py: -------------------------------------------------------------------------------- 1 | import psycopg2 2 | import json 3 | import boto3 4 | 5 | client = boto3.client('ssm') 6 | 7 | db_host = client.get_parameter( 8 | Name='/prod/api/DATABASE_HOST')['Parameter']['Value'] 9 | db_name = client.get_parameter( 10 | Name='/prod/api/DATABASE_NAME')['Parameter']['Value'] 11 | db_user = client.get_parameter( 12 | Name='/prod/api/DATABASE_USER')['Parameter']['Value'] 13 | db_pass = client.get_parameter( 14 | Name='/prod/api/DATABASE_PASSWORD', WithDecryption=True)['Parameter']['Value'] 15 | 16 | db_port = 5432 17 | 18 | 19 | def create_conn(): 20 | conn = None 21 | try: 22 | conn = psycopg2.connect("dbname={} user={} host={} password={}".format( 23 | db_name, db_user, db_host, db_pass)) 24 | except: 25 | print("Cannot connect.") 26 | return conn 27 | 28 | 29 | def fetch(conn, query): 30 | result = [] 31 | print("Now executing: {}".format(query)) 32 | cursor = conn.cursor() 33 | cursor.execute(query) 34 | raw = cursor.fetchall() 35 | for line in raw: 36 | result.append(line) 37 | return result 38 | 39 | 40 | def lambda_handler(event, context): 41 | 42 | print(event) 43 | 44 | if 'query' in event.keys(): 45 | query = event['query'] 46 | else: 47 | query = '' 48 | 49 | query_cmd = "select * from articles_article where title like '%"+query+"%'" 50 | 51 | print(query_cmd) 52 | 53 | conn = create_conn() 54 | 55 | result = fetch(conn, query_cmd) 56 | conn.close() 57 | 58 | return { 59 | 'statusCode': 200, 60 | 'body': str(result) 61 | } 62 | -------------------------------------------------------------------------------- /infrastructure/aws/lambda/psycopg2/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/infrastructure/aws/lambda/psycopg2/.DS_Store -------------------------------------------------------------------------------- /infrastructure/aws/lambda/psycopg2/__pycache__/__init__.cpython-37.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/infrastructure/aws/lambda/psycopg2/__pycache__/__init__.cpython-37.pyc -------------------------------------------------------------------------------- /infrastructure/aws/lambda/psycopg2/_ipaddress.py: -------------------------------------------------------------------------------- 1 | """Implementation of the ipaddres-based network types adaptation 2 | """ 3 | 4 | # psycopg/_ipaddress.py - Ipaddres-based network types adaptation 5 | # 6 | # Copyright (C) 2016 Daniele Varrazzo 7 | # 8 | # psycopg2 is free software: you can redistribute it and/or modify it 9 | # under the terms of the GNU Lesser General Public License as published 10 | # by the Free Software Foundation, either version 3 of the License, or 11 | # (at your option) any later version. 12 | # 13 | # In addition, as a special exception, the copyright holders give 14 | # permission to link this program with the OpenSSL library (or with 15 | # modified versions of OpenSSL that use the same license as OpenSSL), 16 | # and distribute linked combinations including the two. 17 | # 18 | # You must obey the GNU Lesser General Public License in all respects for 19 | # all of the code used other than OpenSSL. 20 | # 21 | # psycopg2 is distributed in the hope that it will be useful, but WITHOUT 22 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 23 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public 24 | # License for more details. 25 | 26 | from psycopg2.extensions import ( 27 | new_type, new_array_type, register_type, register_adapter, QuotedString) 28 | 29 | # The module is imported on register_ipaddress 30 | ipaddress = None 31 | 32 | # The typecasters are created only once 33 | _casters = None 34 | 35 | 36 | def register_ipaddress(conn_or_curs=None): 37 | """ 38 | Register conversion support between `ipaddress` objects and `network types`__. 39 | 40 | :param conn_or_curs: the scope where to register the type casters. 41 | If `!None` register them globally. 42 | 43 | After the function is called, PostgreSQL :sql:`inet` values will be 44 | converted into `~ipaddress.IPv4Interface` or `~ipaddress.IPv6Interface` 45 | objects, :sql:`cidr` values into into `~ipaddress.IPv4Network` or 46 | `~ipaddress.IPv6Network`. 47 | 48 | .. __: https://www.postgresql.org/docs/current/static/datatype-net-types.html 49 | """ 50 | global ipaddress 51 | import ipaddress 52 | 53 | global _casters 54 | if _casters is None: 55 | _casters = _make_casters() 56 | 57 | for c in _casters: 58 | register_type(c, conn_or_curs) 59 | 60 | for t in [ipaddress.IPv4Interface, ipaddress.IPv6Interface, 61 | ipaddress.IPv4Network, ipaddress.IPv6Network]: 62 | register_adapter(t, adapt_ipaddress) 63 | 64 | 65 | def _make_casters(): 66 | inet = new_type((869,), 'INET', cast_interface) 67 | ainet = new_array_type((1041,), 'INET[]', inet) 68 | 69 | cidr = new_type((650,), 'CIDR', cast_network) 70 | acidr = new_array_type((651,), 'CIDR[]', cidr) 71 | 72 | return [inet, ainet, cidr, acidr] 73 | 74 | 75 | def cast_interface(s, cur=None): 76 | if s is None: 77 | return None 78 | # Py2 version force the use of unicode. meh. 
79 | return ipaddress.ip_interface(str(s)) 80 | 81 | 82 | def cast_network(s, cur=None): 83 | if s is None: 84 | return None 85 | return ipaddress.ip_network(str(s)) 86 | 87 | 88 | def adapt_ipaddress(obj): 89 | return QuotedString(str(obj)) 90 | -------------------------------------------------------------------------------- /infrastructure/aws/lambda/psycopg2/_psycopg.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tryolabs/aws-workshop/a8ccd86d4ab0891b547025304514b5ee9f19af19/infrastructure/aws/lambda/psycopg2/_psycopg.so -------------------------------------------------------------------------------- /infrastructure/aws/lambda/psycopg2/psycopg1.py: -------------------------------------------------------------------------------- 1 | """psycopg 1.1.x compatibility module 2 | 3 | This module uses the new style connection and cursor types to build a psycopg 4 | 1.1.1.x compatibility layer. It should be considered a temporary hack to run 5 | old code while porting to psycopg 2. Import it as follows:: 6 | 7 | from psycopg2 import psycopg1 as psycopg 8 | """ 9 | # psycopg/psycopg1.py - psycopg 1.1.x compatibility module 10 | # 11 | # Copyright (C) 2003-2010 Federico Di Gregorio 12 | # 13 | # psycopg2 is free software: you can redistribute it and/or modify it 14 | # under the terms of the GNU Lesser General Public License as published 15 | # by the Free Software Foundation, either version 3 of the License, or 16 | # (at your option) any later version. 17 | # 18 | # In addition, as a special exception, the copyright holders give 19 | # permission to link this program with the OpenSSL library (or with 20 | # modified versions of OpenSSL that use the same license as OpenSSL), 21 | # and distribute linked combinations including the two. 22 | # 23 | # You must obey the GNU Lesser General Public License in all respects for 24 | # all of the code used other than OpenSSL. 25 | # 26 | # psycopg2 is distributed in the hope that it will be useful, but WITHOUT 27 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 28 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public 29 | # License for more details. 30 | 31 | import psycopg2._psycopg as _2psycopg # noqa 32 | from psycopg2.extensions import cursor as _2cursor 33 | from psycopg2.extensions import connection as _2connection 34 | 35 | from psycopg2 import * # noqa 36 | import psycopg2.extensions as _ext 37 | _2connect = connect 38 | 39 | 40 | def connect(*args, **kwargs): 41 | """connect(dsn, ...) -> new psycopg 1.1.x compatible connection object""" 42 | kwargs['connection_factory'] = connection 43 | conn = _2connect(*args, **kwargs) 44 | conn.set_isolation_level(_ext.ISOLATION_LEVEL_READ_COMMITTED) 45 | return conn 46 | 47 | 48 | class connection(_2connection): 49 | """psycopg 1.1.x connection.""" 50 | 51 | def cursor(self): 52 | """cursor() -> new psycopg 1.1.x compatible cursor object""" 53 | return _2connection.cursor(self, cursor_factory=cursor) 54 | 55 | def autocommit(self, on_off=1): 56 | """autocommit(on_off=1) -> switch autocommit on (1) or off (0)""" 57 | if on_off > 0: 58 | self.set_isolation_level(_ext.ISOLATION_LEVEL_AUTOCOMMIT) 59 | else: 60 | self.set_isolation_level(_ext.ISOLATION_LEVEL_READ_COMMITTED) 61 | 62 | 63 | class cursor(_2cursor): 64 | """psycopg 1.1.x cursor. 65 | 66 | Note that this cursor implements the exact procedure used by psycopg 1 to 67 | build dictionaries out of result rows. 
The DictCursor in the 68 | psycopg.extras modules implements a much better and faster algorithm. 69 | """ 70 | 71 | def __build_dict(self, row): 72 | res = {} 73 | for i in range(len(self.description)): 74 | res[self.description[i][0]] = row[i] 75 | return res 76 | 77 | def dictfetchone(self): 78 | row = _2cursor.fetchone(self) 79 | if row: 80 | return self.__build_dict(row) 81 | else: 82 | return row 83 | 84 | def dictfetchmany(self, size): 85 | res = [] 86 | rows = _2cursor.fetchmany(self, size) 87 | for row in rows: 88 | res.append(self.__build_dict(row)) 89 | return res 90 | 91 | def dictfetchall(self): 92 | res = [] 93 | rows = _2cursor.fetchall(self) 94 | for row in rows: 95 | res.append(self.__build_dict(row)) 96 | return res 97 | -------------------------------------------------------------------------------- /infrastructure/aws/lambda/psycopg2/tz.py: -------------------------------------------------------------------------------- 1 | """tzinfo implementations for psycopg2 2 | 3 | This module holds two different tzinfo implementations that can be used as 4 | the 'tzinfo' argument to datetime constructors, directly passed to psycopg 5 | functions or used to set the .tzinfo_factory attribute in cursors. 6 | """ 7 | # psycopg/tz.py - tzinfo implementation 8 | # 9 | # Copyright (C) 2003-2010 Federico Di Gregorio 10 | # 11 | # psycopg2 is free software: you can redistribute it and/or modify it 12 | # under the terms of the GNU Lesser General Public License as published 13 | # by the Free Software Foundation, either version 3 of the License, or 14 | # (at your option) any later version. 15 | # 16 | # In addition, as a special exception, the copyright holders give 17 | # permission to link this program with the OpenSSL library (or with 18 | # modified versions of OpenSSL that use the same license as OpenSSL), 19 | # and distribute linked combinations including the two. 20 | # 21 | # You must obey the GNU Lesser General Public License in all respects for 22 | # all of the code used other than OpenSSL. 23 | # 24 | # psycopg2 is distributed in the hope that it will be useful, but WITHOUT 25 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 26 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public 27 | # License for more details. 28 | 29 | import datetime 30 | import time 31 | 32 | ZERO = datetime.timedelta(0) 33 | 34 | 35 | class FixedOffsetTimezone(datetime.tzinfo): 36 | """Fixed offset in minutes east from UTC. 37 | 38 | This is exactly the implementation__ found in Python 2.3.x documentation, 39 | with a small change to the `!__init__()` method to allow for pickling 40 | and a default name in the form ``sHH:MM`` (``s`` is the sign.). 41 | 42 | The implementation also caches instances. During creation, if a 43 | FixedOffsetTimezone instance has previously been created with the same 44 | offset and name that instance will be returned. This saves memory and 45 | improves comparability. 46 | 47 | .. 
__: http://docs.python.org/library/datetime.html#datetime-tzinfo 48 | """ 49 | _name = None 50 | _offset = ZERO 51 | 52 | _cache = {} 53 | 54 | def __init__(self, offset=None, name=None): 55 | if offset is not None: 56 | self._offset = datetime.timedelta(minutes=offset) 57 | if name is not None: 58 | self._name = name 59 | 60 | def __new__(cls, offset=None, name=None): 61 | """Return a suitable instance created earlier if it exists 62 | """ 63 | key = (offset, name) 64 | try: 65 | return cls._cache[key] 66 | except KeyError: 67 | tz = super(FixedOffsetTimezone, cls).__new__(cls, offset, name) 68 | cls._cache[key] = tz 69 | return tz 70 | 71 | def __repr__(self): 72 | offset_mins = self._offset.seconds // 60 + self._offset.days * 24 * 60 73 | return "psycopg2.tz.FixedOffsetTimezone(offset=%r, name=%r)" \ 74 | % (offset_mins, self._name) 75 | 76 | def __getinitargs__(self): 77 | offset_mins = self._offset.seconds // 60 + self._offset.days * 24 * 60 78 | return (offset_mins, self._name) 79 | 80 | def utcoffset(self, dt): 81 | return self._offset 82 | 83 | def tzname(self, dt): 84 | if self._name is not None: 85 | return self._name 86 | else: 87 | seconds = self._offset.seconds + self._offset.days * 86400 88 | hours, seconds = divmod(seconds, 3600) 89 | minutes = seconds / 60 90 | if minutes: 91 | return "%+03d:%d" % (hours, minutes) 92 | else: 93 | return "%+03d" % hours 94 | 95 | def dst(self, dt): 96 | return ZERO 97 | 98 | 99 | STDOFFSET = datetime.timedelta(seconds=-time.timezone) 100 | if time.daylight: 101 | DSTOFFSET = datetime.timedelta(seconds=-time.altzone) 102 | else: 103 | DSTOFFSET = STDOFFSET 104 | DSTDIFF = DSTOFFSET - STDOFFSET 105 | 106 | 107 | class LocalTimezone(datetime.tzinfo): 108 | """Platform idea of local timezone. 109 | 110 | This is the exact implementation from the Python 2.3 documentation. 111 | """ 112 | def utcoffset(self, dt): 113 | if self._isdst(dt): 114 | return DSTOFFSET 115 | else: 116 | return STDOFFSET 117 | 118 | def dst(self, dt): 119 | if self._isdst(dt): 120 | return DSTDIFF 121 | else: 122 | return ZERO 123 | 124 | def tzname(self, dt): 125 | return time.tzname[self._isdst(dt)] 126 | 127 | def _isdst(self, dt): 128 | tt = (dt.year, dt.month, dt.day, 129 | dt.hour, dt.minute, dt.second, 130 | dt.weekday(), 0, -1) 131 | stamp = time.mktime(tt) 132 | tt = time.localtime(stamp) 133 | return tt.tm_isdst > 0 134 | 135 | LOCAL = LocalTimezone() 136 | 137 | # TODO: pre-generate some interesting time zones? 
138 | -------------------------------------------------------------------------------- /infrastructure/aws/lambda/serverless.yml: -------------------------------------------------------------------------------- 1 | service: serverless-aws-workshop 2 | provider: 3 | name: aws 4 | runtime: python3.7 5 | iamRoleStatements: 6 | - Effect: 'Allow' 7 | Action: 8 | - 'ssm:GetParameter' 9 | Resource: '*' 10 | - Effect: 'Allow' 11 | Action: 12 | - 'kms:Decrypt' 13 | Resource: '*' 14 | package: 15 | include: 16 | - lambda_function.py 17 | - psycopg2/** 18 | exclude: 19 | - '**' 20 | functions: 21 | lambda_handler: 22 | handler: lambda_function.lambda_handler 23 | events: 24 | - http: 25 | path: search 26 | method: get 27 | -------------------------------------------------------------------------------- /infrastructure/docker/api/.env.template: -------------------------------------------------------------------------------- 1 | DATABASE_NAME=database_name 2 | DATABASE_USER=database_user 3 | DATABASE_PASSWORD=database_password 4 | DATABASE_HOST=database_host 5 | -------------------------------------------------------------------------------- /infrastructure/docker/api/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3 2 | 3 | ENV PYTHONUNBUFFERED 1 4 | 5 | RUN pip install pipenv 6 | 7 | RUN mkdir /conf 8 | RUN mkdir /app 9 | RUN mkdir /data 10 | 11 | WORKDIR /app 12 | 13 | COPY backend/Pipfile /app 14 | COPY backend/Pipfile.lock /app 15 | 16 | RUN pipenv install 17 | 18 | COPY infrastructure/docker/api/gunicorn.docker.conf /conf/gunicorn.conf 19 | 20 | COPY backend /app 21 | 22 | ENV DJANGO_SETTINGS_MODULE 'conduit.settings.docker' 23 | 24 | EXPOSE 9000 25 | CMD [ \ 26 | "pipenv", \ 27 | "run", \ 28 | "gunicorn", \ 29 | "--config", "/conf/gunicorn.conf", \ 30 | "conduit.wsgi" \ 31 | ] 32 | -------------------------------------------------------------------------------- /infrastructure/docker/api/gunicorn.docker.conf: -------------------------------------------------------------------------------- 1 | proc_name = "gunicorn" 2 | 3 | bind = "0.0.0.0:9000" 4 | 5 | loglevel = "DEBUG" 6 | accesslog = "/data/access.log" 7 | errorlog = "/data/error.log" 8 | 9 | keepalive = 1 10 | timeout = 300 11 | workers = 5 12 | worker_class = "gevent" 13 | -------------------------------------------------------------------------------- /infrastructure/docker/db/.env.template: -------------------------------------------------------------------------------- 1 | POSTGRES_USER=postgres_user 2 | POSTGRES_PASSWORD=postgres_password 3 | POSTGRES_DB=postgres_db 4 | PGDATA=/pgdata 5 | -------------------------------------------------------------------------------- /infrastructure/docker/django-admin: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # get_absolute path function 3 | get_abs_path() { 4 | local PARENT_DIR 5 | local ABS_PATH 6 | PARENT_DIR=$(dirname "$1") 7 | cd "$PARENT_DIR" || exit 8 | ABS_PATH="$(pwd)"/"$(basename "$1")" 9 | cd - > /dev/null || exit 10 | echo "$ABS_PATH" 11 | } 12 | 13 | INFRA_DIR="$(dirname "$(get_abs_path "$0")")" 14 | 15 | pushd "$INFRA_DIR" > /dev/null 2>&1 16 | 17 | if [[ $(docker-compose ps | wc -l) -lt 3 ]]; then 18 | ALREADY_RUNNING=0 19 | docker-compose up --build -d > /dev/null 2>&1 20 | else 21 | ALREADY_RUNNING=1 22 | fi 23 | 24 | # wait some seconds for the db to be up an ready 25 | until docker-compose exec db pg_isready > /dev/null 2>&1 26 | do 27 | sleep 1 28 | done 29 | 30 
| docker-compose run api pipenv run python manage.py "$@" 31 | 32 | if [[ $ALREADY_RUNNING == 0 ]]; then 33 | docker-compose down > /dev/null 2>&1 34 | fi 35 | 36 | popd > /dev/null 37 | -------------------------------------------------------------------------------- /infrastructure/docker/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | services: 3 | db: 4 | image: postgres 5 | volumes: 6 | - ./db/data:/pgdata 7 | env_file: ./db/.env 8 | api: 9 | build: 10 | # Set parent dir as context so as to allow "COPY pegaces /app" in Dockerfile 11 | context: ../../ 12 | dockerfile: infrastructure/docker/api/Dockerfile 13 | volumes: 14 | - ./api/data:/data 15 | links: 16 | - db:db 17 | env_file: ./api/.env 18 | depends_on: 19 | - db 20 | web: 21 | build: 22 | context: ../../ 23 | dockerfile: infrastructure/docker/web/Dockerfile 24 | volumes: 25 | - ./web/data/:/data 26 | env_file: ./web/.env 27 | nginx: 28 | image: nginx 29 | volumes: 30 | - ./nginx/nginx.conf.template:/conf/nginx.conf.template:ro 31 | - ./nginx/sites-available.workshop.template:/conf/sites-available/workshop.template:ro 32 | - ./nginx/sites-enabled.workshop.template:/conf/sites-enabled/workshop.template:ro 33 | - ./nginx/entrypoint:/conf/entrypoint 34 | - ./nginx/data/:/data 35 | ports: 36 | - 80:80 37 | entrypoint: /conf/entrypoint 38 | links: 39 | - api:api 40 | - web:web 41 | env_file: ./nginx/.env 42 | depends_on: 43 | - api 44 | - web 45 | -------------------------------------------------------------------------------- /infrastructure/docker/nginx/.env.template: -------------------------------------------------------------------------------- 1 | API_HOST=api_host 2 | API_PORT=9000 3 | WEB_HOST=web_host 4 | WEB_PORT=5000 5 | -------------------------------------------------------------------------------- /infrastructure/docker/nginx/entrypoint: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export DOLLAR='$' 4 | mkdir -p /etc/nginx/sites-enabled 5 | mkdir -p /etc/nginx/sites-available 6 | envsubst < /conf/sites-available/workshop.template > /etc/nginx/sites-available/workshop 7 | envsubst < /conf/sites-enabled/workshop.template > /etc/nginx/sites-enabled/workshop 8 | envsubst < /conf/nginx.conf.template > /etc/nginx/nginx.conf 9 | nginx 10 | -------------------------------------------------------------------------------- /infrastructure/docker/nginx/sites-available.workshop.template: -------------------------------------------------------------------------------- 1 | # www to non-www redirect -- duplicate content is BAD: 2 | # https://github.com/h5bp/html5-boilerplate/blob/5370479476dceae7cc3ea105946536d6bc0ee468/.htaccess#L362 3 | # Choose between www and non-www, listen on the *wrong* one and redirect to 4 | # the right one -- http://wiki.nginx.org/Pitfalls#Server_Name 5 | 6 | server { 7 | # listen [::]:80 accept_filter=httpready; # for FreeBSD 8 | # listen 80 accept_filter=httpready; # for FreeBSD 9 | # listen [::]:80 deferred; # for Linux 10 | # listen 80 deferred; # for Linux 11 | 12 | listen [::]:80; 13 | listen 80; 14 | 15 | # logging 16 | access_log /data/access.log; 17 | error_log /data/error.log debug; 18 | 19 | # The host name to respond to 20 | server_name localhost; 21 | 22 | # The root directory for static files 23 | root /www; 24 | 25 | location /api { 26 | proxy_pass http://${API_HOST}:${API_PORT}; 27 | proxy_set_header Host ${DOLLAR}http_host; 28 | proxy_set_header X-Real-IP 
${DOLLAR}remote_addr; 29 | proxy_set_header X-Forwarded-For ${DOLLAR}proxy_add_x_forwarded_for; 30 | proxy_connect_timeout 300s; 31 | proxy_read_timeout 300s; 32 | sendfile on; 33 | } 34 | 35 | location / { 36 | proxy_pass http://${WEB_HOST}:${WEB_PORT}; 37 | proxy_set_header Host ${DOLLAR}http_host; 38 | proxy_set_header X-Real-IP ${DOLLAR}remote_addr; 39 | proxy_set_header X-Forwarded-For ${DOLLAR}proxy_add_x_forwarded_for; 40 | proxy_connect_timeout 300s; 41 | proxy_read_timeout 300s; 42 | sendfile on; 43 | } 44 | 45 | #Specify a charset 46 | charset utf-8; 47 | } 48 | -------------------------------------------------------------------------------- /infrastructure/docker/nginx/sites-enabled.workshop.template: -------------------------------------------------------------------------------- 1 | include sites-available/workshop; 2 | -------------------------------------------------------------------------------- /infrastructure/docker/web/.env.template: -------------------------------------------------------------------------------- 1 | REACT_APP_USE_OWN_API=true 2 | -------------------------------------------------------------------------------- /infrastructure/docker/web/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:8 2 | 3 | RUN mkdir /app 4 | 5 | WORKDIR /app 6 | 7 | RUN npm install -g serve 8 | 9 | COPY frontend /app 10 | 11 | RUN npm install 12 | 13 | ENV REACT_APP_USE_OWN_API=true 14 | RUN npm run build 15 | 16 | EXPOSE 5000 17 | CMD [ \ 18 | "serve", \ 19 | "-s", \ 20 | "build" \ 21 | ] 22 | -------------------------------------------------------------------------------- /workshop/beanstalk/01-clean-up.md: -------------------------------------------------------------------------------- 1 | # Clean up 2 | 3 | We have a pretty interesting infrastructure running now, so in order to integrate Beanstalk we need to remove some services to make room. Our current setup has four major components: Elastic Load Balancer (ELB), Auto Scaling Group (ASG), Virtual Private Cloud (VPC) and Relational Database Service (RDS). As powerful as it is, Beanstalk [can't set up VPCs](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html) for you and [is not recommended](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html) for RDS outside of testing environments, so we are going to keep these two and remove the ELB and ASG. 4 | 5 | To remove the ELB: 6 | 7 | 1. On your AWS Console, go to **EC2** under **Compute** 8 | 2. Click on **Load Balancers** under **LOAD BALANCING** 9 | 3. Select the load balancer we created earlier. We used the name `aws-workshop-load-balancer` 10 | 4. Click **Actions** 11 | 5. Select **Delete** 12 | 6. Click **Yes, Delete** 13 | 14 | Now do the same for the **Target Groups**, **Launch Configurations** and **Auto Scaling Groups**. 15 | 16 | Once you have completely removed the ELB and ASG, we need to terminate the EC2 instances we have running. 17 | 18 | 1. Go to **EC2** under **Compute** 19 | 2. Click on **Instances** 20 | 3. Using the checkbox at the left, select all the instances that aren't the Bastion (if you have it running) and the NAT instance. 21 | 4. Click **Actions** 22 | 5. Click **Instance State** 23 | 6. Click Terminate 24 | 7. Click **Yes, Terminate** 25 | 26 | Next, we are going to set up our application with a production environment. 27 | 28 | --- 29 | **Extra mile:** when all the instances are terminated, remove all the security groups that aren't needed anymore.
Could you leave your setup broken by doing this? 30 | 31 | --- 32 | **Next:** [create a new app](/workshop/beanstalk/02-new-app-environment.md) -------------------------------------------------------------------------------- /workshop/beanstalk/03-finish-integration.md: -------------------------------------------------------------------------------- 1 | # Finish integration 2 | 3 | Now that we have our instance running, we need to adjust some details to make it work with the other components of our infrastructure: our frontend in S3 and the database in RDS. We have to tell the frontend that the API is now reachable at another URL. 4 | 5 | First we need the new URL for the API. 6 | 7 | 1. Go to **Elastic Beanstalk** under **Compute**. 8 | 2. Click the **Conduit-prod** card under the **Conduit** application. 9 | 3. At the top, at the end of the **All Applications**... breadcrumb, you have the **Environment ID** and **URL**. Copy that URL. 10 | 11 | Now we need to paste the API URL into the Parameter Store entry read by the frontend. 12 | 13 | 1. Go to **EC2** under **Compute**. 14 | 2. Click on **Parameter Store** under **SYSTEMS MANAGER SHARED RESOURCES**. 15 | 3. Select the parameter **/prod/frontend/API_URL**. 16 | 4. Click **Actions**, **Edit Parameter**. 17 | 5. In the value field, paste the URL for the API. You may need to remove the trailing `/` so the URL ends in `elasticbeanstalk.com`. If you leave the trailing path separator, all the API calls will fail. 18 | 19 | For this change to take effect we need to run CodeBuild, because this value is read when the [frontend is deployed](buildspec.frontend.yml). 20 | 21 | 1. Go to **Code Build** under **Developer Tools**. 22 | 2. Click your project name. 23 | 3. In the **Build History** section, select the checkbox at the left for the most recent build. 24 | 4. Click **Retry**. 25 | 26 | You can click the build name to follow the progress; it shouldn't take too long. When the build completes, the frontend will be hitting our new production environment. However, it won't work yet, because we still need to give our API web instances permission to access the Parameter Store. This is the same thing we did earlier with our **ApiRole**, but since our instances are now provisioned with the role created by Beanstalk, we need to grant read access to the Parameter Store to that role. 27 | 28 | 1. Go to **IAM** under **Security, Identity & Compliance**. 29 | 2. Click **Roles** in the left pane. 30 | 3. Click **aws-elasticbeanstalk-ec2-role**; that's the role created for Beanstalk to use in the EC2 instances. 31 | 4. Click **Attach policy** in the **Permissions** tab. 32 | 5. Click the search bar, look for `AmazonSSMReadOnlyAccess` and select the checkbox on the left. 33 | 6. Click **Attach policy**. 34 | 35 | Great, now our instances can access the Parameter Store, but they still can't read the password-protected values. To fix this, we need to grant anyone with the **aws-elasticbeanstalk-ec2-role** role access to our encryption keys. 36 | 37 | 1. In **IAM** under **Security, Identity & Compliance**, go to **Encryption keys**. 38 | 2. Scroll down to the **Key Users** section and click **Add** under the **This Account** sub section. 39 | 3. Select the **aws-elasticbeanstalk-ec2-role** role. 40 | 4. Click **Attach**. 41 | 42 | OK, so our instances now have full access to all the parameters in the Parameter Store. We need to restart the server in our prod environment because the values from the Parameter Store are read when the app starts. 43 | 44 | 1.
Go to **Elastic Beanstalk** under **Compute**. 45 | 2. Click the **Conduit-prod** card under the **Conduit** application. 46 | 3. Click **Action** on the right. 47 | 4. Click **Restart App Server(s)**. 48 | 49 | When the restart finishes, the API should be working. You can navigate to the prod environment URL and append `/api` to get the default Django REST Framework page describing the API. Also try navigating to the frontend and inspecting some of the requests to confirm that you are using the right environment. 50 | 51 | --- 52 | **Extra mile:** 53 | 54 | - What about the RDS? Why is it working without touching anything? 55 | - Check the **Scaling** card in the **Configuration** options for the environment. Click the gear and add a **Time-based Scaling** action. Check how that change impacts the configuration. 56 | - Not sure about the database? Log in to one of your instances, install `postgresql` and try it yourself. Tip: Amazon Linux uses the [yum](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-software.html) package manager. 57 | 58 | --- 59 | **Next:** [conclusion](/workshop/beanstalk/04-conclusion.md) -------------------------------------------------------------------------------- /workshop/beanstalk/04-conclusion.md: -------------------------------------------------------------------------------- 1 | # Conclusion 2 | 3 | If everything went as expected, you now have a running production-ready environment accessible from the same frontend as before. Take some time to explore what Beanstalk offers from the environment dashboard. There are plenty of interesting metrics and health status reports. 4 | 5 | The most valuable feature of Beanstalk is the ability to set up and terminate environments for the same application independently. This makes a big difference when you are working on a real project, where changes have to be tested before moving to production, new people join the project and others leave, and you need to demo things without the risk of another team member breaking the app because it is under active development. Those are the kinds of situations where Elastic Beanstalk shines. 6 | 7 | --- 8 | **Extra mile:** 9 | 10 | - Thinking about the latter, our current app architecture (frontend in S3 and API in EC2) is not the ideal combination for getting the most out of multi-environment scenarios. There are many options for how to do this, and they are all good or bad depending on the case. Think about it, come up with your own idea of how you can make environment management simpler, and implement it. 11 | 12 | What is the big pain point? 13 | 14 | - We mentioned [earlier](/workshop/beanstalk/introduction.md) that letting Beanstalk manage your RDS instances is **not** recommended for production environments. This is because, in order to handle your environment configuration, Beanstalk needs to manage the lifetime of your instances as if they were stateless entities. A database is the opposite of that. 15 | 16 | Try creating a new environment with an internal RDS called **Conduit-staging**. -------------------------------------------------------------------------------- /workshop/beanstalk/introduction.md: -------------------------------------------------------------------------------- 1 | # Beanstalk 2 | 3 | [Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html) is an AWS service that allows us to deploy a web app without having to worry about what combination of AWS services might be needed.
All we have to do is describe what we need and let Beanstalk do the rest (create security groups, set up a load balancer, etc.). 4 | 5 | Beanstalk also has some nice tools that aren't available in other AWS services and that add real value beyond the automatic setup: 6 | 7 | - The ability to manage different environments for the same app (dev, prod, testing, etc). 8 | - Centralized panel to handle the setup for each environment. 9 | - Great monitoring metrics. 10 | - Detailed health status. 11 | - Really easy to set up new environments quickly. 12 | 13 | At the time of writing this, Beanstalk supports apps developed in Java, PHP, .NET, Node.js, Python, and Ruby out of the box, and you can build custom containers for other platforms, all running on Amazon Linux. 14 | 15 | In this section we are going to use Beanstalk to set up a _production environment_ (this means with an [external RDS](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html)), replacing our current Elastic Load Balancer (ELB) and Auto Scaling Group (ASG) setup with a Beanstalk-handled setup. 16 | 17 | It's important to remember not to make manual changes to the components generated by Beanstalk, because that could prevent a correct clean up if we decide to remove an environment in the future. So, to start fresh, we need to remove some things before creating our new environment. 18 | 19 | --- 20 | **Next:** [clean up current setup](/workshop/beanstalk/01-clean-up.md) -------------------------------------------------------------------------------- /workshop/beanstalk/troubleshooting.md: -------------------------------------------------------------------------------- 1 | # Troubleshooting 2 | 3 | These are some places where you can look for info if something doesn't work as expected. 4 | 5 | - The deploy log is stored in each instance's `/var/log/eb-activity.log` file. 6 | - Beanstalk uses Apache to run the Django app, so the Apache error logs are under the `/var/log/httpd` folder. 7 | - Your code is deployed to `/opt/python/current/`, which is a symlink to the last bundle deployed. All the past versions are in `/opt/python/bundle`. 8 | - The virtual env for your app is in `/opt/python/run/venv`. 9 | - Amazon Linux uses `yum` to install packages. If you need some tool not installed by default, which is very likely to happen, [install it yourself](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-software.html). 10 | 11 | If you can't log in to your instances, the logs can be retrieved from Beanstalk. 12 | 13 | 1. Go to **Elastic Beanstalk** under **Compute**. 14 | 2. Select your environment. 15 | 3. Click **Logs** in the left pane. 16 | 4. Click **Request Logs** and then **Full Logs**. This will zip all the `/var/log` and `/var/opt/log` content and give you a link in the **Logs** table to download it. 17 | 18 | If none of this helps you, [open an issue](https://github.com/tryolabs/aws-workshop/issues/new) and we will try to help you. -------------------------------------------------------------------------------- /workshop/elb-auto-scaling-group/01-load-balancer.md: -------------------------------------------------------------------------------- 1 | # Create a Load Balancer 2 | 3 | Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. When you are running applications in production, you will typically use multiple instances, so that if one fails, your application can still work.
The Load Balancer will get the traffic and forward it to the instances that serve your app. You can read more about this [here](https://aws.amazon.com/elasticloadbalancing/). 4 | 5 | 1. Go to **EC2** under **Compute** section. 6 | 2. On the left menu select **Load Balancers** under **LOAD BALANCING**. 7 | 3. Click **Create Load Balancer**. 8 | 4. Select **Application Load Balancer**. 9 | 5. As name put: `aws-workshop-load-balancer`. 10 | 6. Select at least 2 Availability zones. 11 | 7. Click **Next: Configure Security Settings**. 12 | 8. Click **Next: Configure Security Groups**. 13 | 9. Select **Create a new security group**, as name put `load-balancer-security-group` and add a description. 14 | 10. Click **Next: Configure Routing**. 15 | 11. As name put: `aws-workshop-target-group`. 16 | 12. As Port: `9000`. 17 | 13. As path: `/api/tags`. 18 | 14. Click **Next: Register Targets**. 19 | 15. Click **Next: Review**. 20 | 16. Click **Create**. 21 | 17. Click **Close**. 22 | 23 | --- 24 | **Next:** [create an Auto Scaling Group](/workshop/elb-auto-scaling-group/02-auto-scaling-group.md). 25 | -------------------------------------------------------------------------------- /workshop/elb-auto-scaling-group/02-auto-scaling-group.md: -------------------------------------------------------------------------------- 1 | # Create Auto Scaling Group 2 | 3 | Production applications need to be ready to tolerate a growing number of users at the same time. For example, if you get published in a popular blog, you may receive many more users than you had expected in a short period of time, and your application may crash because it's not able to sustain all the incoming traffic. 4 | 5 | Amazon provides [Auto Scaling Groups](https://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html) as a way to build a more robust application which can handle increasing loads. Using these, you can set up rules (scaling policies) so that more instances are launched to serve your application when the load increases. 6 | 7 | To create an Auto Scaling Group, first we need to create a [Launch Configuration](http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html), which is basically a template that specifies properties of the instances that will be launched. 8 | 9 | ## Create Launch Configuration 10 | 1. Go to **EC2** under **Compute** section. 11 | 2. On the left menu select **Launch Configuration** under **AUTO SCALING**. 12 | 3. Click **Create launch configuration**. 13 | 4. Look for Ubuntu Server (make sure it says Free tier eligible) and click Select. 14 | 5. Select `t2.micro` and then click on **Next: Configure Details**. 15 | 6. As name put: `aws-workshop-auto-scaling-group`. 16 | 7. As **IAM Role** select: `ApiRole`. 17 | 8. Under **Advanced Settings**, select **As text** in **User data** and paste this bash script: 18 | ``` 19 | #!/bin/bash 20 | export LC_ALL=C.UTF-8 21 | apt update 22 | apt -y install ruby 23 | cd /home/ubuntu 24 | wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install 25 | chmod +x ./install 26 | ./install auto 27 | ``` 28 | Make sure there are NO SPACES before any line in the script. 29 | 9. Click **Next: Add Storage**. 30 | 10. Click **Next: Configure Security Group**. 31 | 11. Click **Create new security group**. 32 | 12. Security Group Name: `api-security-group`. 33 | 13. Click: **Add Rule**. 34 | 14. Type: **All TCP**. 35 | 15. Source: `load-balancer-security-group` and select the one suggested. 36 | 16. Click **Review**. 37 | 17.
Click **Create launch configuration** and select the key pair to be used to `ssh` into future instances. 38 | 39 | ## Add Security Group inbound rule 40 | 1. Go to **Security Groups** under **Network & Security** (still on the EC2 service). 41 | 2. Open the `api-security-group` (created in the previous step). 42 | 3. Click **Edit inbound rules**. 43 | 4. Add a new rule with type `PostgreSQL` (port `5432` should be set automatically). As source select the `api-security-group` itself (start typing the name and select the one suggested). Note that this rule could not be added in the previous step because the security group didn't exist at that point. 44 | 5. Click **Save rules**. 45 | 46 | Now that we have our **Launch configuration**, we can create our **Auto Scaling Group**. 47 | 48 | ## Create Auto Scaling Group 49 | 1. Go to **EC2** under **Compute** section. 50 | 2. On the left menu select **Auto Scaling Groups** under **AUTO SCALING**. 51 | 3. Click: **Create Auto Scaling group**. 52 | 4. Select: `aws-workshop-auto-scaling-group` and then click **Next Step**. 53 | 5. On **Group name** put the same as in the Launch configuration. 54 | 6. **Group size:** 2. At least we will have some redundancy from the start! 55 | 7. On **Subnet** add all the available options. 56 | 8. On **Advanced Details** click on: **Receive traffic from one or more load balancers**. 57 | 9. On **Target Groups** click and select: `aws-workshop-target-group`. 58 | 10. Click **Next: Configure scaling policies**. 59 | 11. Select: **Use scaling policies to adjust the capacity of this group**. We will configure a toy scaling policy only for learning. In a real system, you would have to do some benchmarking and determine your application's bottlenecks to set up an optimal scaling policy. 60 | 12. Configure it to scale between 2 and 4 instances. 61 | 13. Pick `Average CPU Utilization` as the metric (imagine your app was compute intensive). In Target value, set something like 80. 62 | 14. **Instances need:** 180 seconds for warm up. See more [here](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-scaling-simple-step.html#as-step-scaling-warmup). 63 | 15. Click **Next: Configure Notifications**. 64 | 16. Click **Next: Configure Tags**. 65 | 17. Click **Review**. 66 | 18. Click **Create Auto Scaling group**. 67 | 19. Click **Close**. 68 | 69 | --- 70 | **Next:** finishing up, we need to [modify parameters and re-run CodeBuild](/workshop/elb-auto-scaling-group/03-finishing-up.md). 71 | 72 | -------------------------------------------------------------------------------- /workshop/elb-auto-scaling-group/03-finishing-up.md: -------------------------------------------------------------------------------- 1 | # Finishing up 2 | 3 | Now, we have two instances on EC2, an ELB to distribute the traffic across them, and an Auto Scaling Group to have redundancy and scale automatically if throughput needs to increase. 4 | 5 | In [the first section](/workshop/s3-web-ec2-api-rds/05-finishing-up.md), the `API_URL` parameter was set to the DNS name of our only instance. Now, we need to tell the web app that requests must go through the load balancer, so we need to modify `API_URL`. 6 | We also need to modify the CodeDeploy project so the tool knows that we now have an Auto Scaling Group and that it needs to run the deploy each time a new instance is launched. 7 | Finally, we need to re-run CodeBuild so the new bundle on S3 points to the DNS of the load balancer instead of the instance's DNS. 8 | 9 | ## Modify `API_URL` 10 | 1.
Go to **EC2** under the **Compute** section. 11 | 2. On the left menu select **Load Balancers** under **LOAD BALANCING**. 12 | 3. Copy the DNS name of your load balancer that appears under **Description**. 13 | 4. On the left menu, select **Parameter Store**. 14 | 5. Click on `/prod/frontend/API_URL` and on **Actions** select **Edit Parameter**. 15 | 6. As Value put: `http://` + the DNS name that you copied 3 steps ago. 16 | 7. Click **Save Parameter**. 17 | 18 | ## Modify the CodeDeploy project 19 | 1. Go to **CodeDeploy** under **Developer Tools**. 20 | 2. Click your application's name. 21 | 3. Select your deployment group and on **Actions** select **Edit**. 22 | 4. On **Environment configuration** select your Auto Scaling Group on the **Auto Scaling groups** tab. 23 | 5. Go to the **Amazon EC2 instances** tab, and delete all the existing Tag groups that we set up earlier. 24 | 6. Check **Enable load balancing**. 25 | 7. On **Load balancer** check **Application Load Balancer**. 26 | 8. Select your target group in the dropdown. 27 | 9. Click **Save**. 28 | 10. Select your deployment group and on **Actions** click **Deploy new version**. 29 | 11. On **Repository type** select: `My application is stored in GitHub`. 30 | 12. Repository Name: `tryolabs/aws-workshop`. 31 | 13. Get the last commit ID and paste it in the **Commit ID** field. 32 | 14. Then click **Deploy**. 33 | 34 | ## Re-run CodeBuild 35 | 1. Go to **CodeBuild** under **Developer Tools**. 36 | 2. Click **Start build**. 37 | 3. Click **Start build**. 38 | 39 | ## Update RDS security group 40 | To give the instances created by the Auto Scaling Group access to the database, we need to update our Postgres instance's security group. 41 | 42 | 1. Go to **RDS** under **Database** 43 | 2. Click **Instances** on the left 44 | 3. Select your instance with the radio button on the left, click **Instance actions** and select **Modify** 45 | 4. Scroll to **Security group** under the **Network & Security** section 46 | 5. Click on the security groups drop down and select `api-security-group`. This is the group we created with the Launch Configuration for our Auto Scaling Group in the [previous section](/workshop/elb-auto-scaling-group/02-auto-scaling-group.md#create-launch-configuration-group). 47 | 48 | Now, terminate all your running instances and wait for the Auto Scaling group to start the new ones; this might take a few minutes. You can follow the current state of the ASG by going to **EC2**, **Auto Scaling Groups**, selecting your group and checking the **Activity History** and **Instances** tabs. Once the new instances are in place and `running`, you should be able to get the full site working on the URL of the load balancer. 49 | 50 | --- 51 | **Extra mile:** once you have the site running: 52 | 53 | - Can you tell which instance is getting the requests? 54 | - Try changing the _Desired_ and _Min_ parameters of the ASG and see what happens. 55 | - Force the launch of new instances by triggering a condition that would make the scale up policy activate (that is, without changing the _Desired_ value). 56 | > Tip: running `yes > /dev/null &` will max out one of the CPU cores. 57 | 58 | - Try running [ab](http://httpd.apache.org/docs/2.2/programs/ab.html) (installed by default on macOS) to stress test the API (a minimal sketch is included at the end of this page). Do you see any reaction in the AWS console? 59 | 60 | --- 61 | **Next:** [VPC configuration and Bastion instance](/workshop/vpc-subnets-bastion/introduction.md).
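For the `ab` stress test suggested in the extra mile above, a minimal sketch could look like the following. The load balancer DNS name below is a placeholder and the Auto Scaling Group name is assumed to match the one created earlier; substitute the values from your own setup.

```
# Placeholder DNS name -- use the one shown under your load balancer's Description tab.
LB_DNS="aws-workshop-load-balancer-123456789.us-east-1.elb.amazonaws.com"

# Fire 2000 requests, 50 at a time, at the same endpoint the target group health check uses.
ab -n 2000 -c 50 "http://${LB_DNS}/api/tags"

# Watch how the Auto Scaling Group reacts (group name assumed from the previous section).
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names aws-workshop-auto-scaling-group \
  --query 'AutoScalingGroups[0].Instances[].[InstanceId,LifecycleState]' \
  --output table
```

If the load pushes average CPU past the scaling policy's target, new instances should show up in this output (and in the **Activity History** tab) within a few minutes.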
-------------------------------------------------------------------------------- /workshop/elb-auto-scaling-group/introduction.md: -------------------------------------------------------------------------------- 1 | # Add an extra EC2 instance with ELB and auto-scaling 2 | 3 | In this section we want to add an extra EC2 instance to be able to handle a larger amount of traffic and improve our performance. 4 | 5 | To do that, we are also going to add an [ELB](https://aws.amazon.com/elasticloadbalancing/) that will be in charge of distributing the traffic across our instances. 6 | 7 | We will also add an [auto-scaling group](https://aws.amazon.com/documentation/autoscaling/) spanning 2 availability zones. 8 | This way, if we have 2 instances, one in each availability zone, and an [Availability Zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions-availability-zones) goes down taking our instance with it, AWS will automatically start a new instance in the other availability zone, so performance does not degrade. 9 | We will also create some rules to add more instances if our 2 instances are overloaded (for example: over 80% CPU for the last 5 minutes); you can add whatever rules you want. 10 | 11 | --- 12 | **Next:** [Create a Load Balancer](/workshop/elb-auto-scaling-group/01-load-balancer.md) -------------------------------------------------------------------------------- /workshop/s3-web-ec2-api-rds/03-RDS.md: -------------------------------------------------------------------------------- 1 | # RDS 2 | 3 | ## Create a PostgreSQL instance in RDS 4 | 1. Go to **RDS** under **Database** section. 5 | 2. Click on **Create Database**. 6 | 3. Click on the PostgreSQL logo, and under the **Templates** section tick the _"Free Tier"_ checkbox. 7 | 4. Enter a name as _DB Instance identifier_ (we will need it later, so don’t forget it). 8 | 5. Enter a username and password and click Next (again, we will need these later). 9 | 6. Under the **Connectivity** section verify that **Publicly Accessible** is set to No. 10 | 7. On **VPC security groups** choose _Select existing VPC security groups_ and select the security group you created when [launching the EC2 instance](/workshop/s3-web-ec2-api-rds/02-EC2-instances.md#launch-your-first-ec2-instance). 11 | 8. Pick a DB name under **Additional Configuration** and click Create Database (again, we will need the database name later). 12 | 13 | Now our instance is created. We configured its access, allowing every instance under the security group that was created in the previous section to connect. 14 | 15 | ## Add DB parameters on Parameter Store 16 | 17 | As before, we will need some variables stored in the parameter store, including the database name, username, password and endpoint. These variables are referenced in [this file](/backend/conduit/settings/ec2.py), so Django can access the database. 18 | 19 | 1. Go to **RDS** under **Database** section. 20 | 2. Click on Instances. 21 | 3. Wait for the instance to be created. Then open the details of your DB and copy the **Endpoint**. This will be the value for `DATABASE_HOST`. 22 | 4. Go to **Systems Manager** under **Management & Governance** in the AWS console. 23 | 5. On the left menu select **Parameter Store**. 24 | 6. Click Create Parameter. 25 | 7. Enter `/prod/api/DATABASE_NAME` as the name and a meaningful description like "Name of the PostgreSQL database". 26 | 8. Enter the DB name you selected before as the value. 27 | 9.
Click Create Parameter and close. 28 | 10. Now we will need to do the same thing for the username and host: 29 | 1. For the username, enter `/prod/api/DATABASE_USER` as the name and your database username as the value. 30 | 2. For the host, enter `/prod/api/DATABASE_HOST` as the name and the hostname you copied earlier as the value. 31 | 11. For `/prod/api/DATABASE_PASSWORD` do the same steps, but select **Type: Secure String** and the key `workshopkey` as KMS Key ID. 32 | 33 | Now we have our database parameters set, and the password encrypted. Only our EC2 instances will be able to decrypt it. 34 | 35 | --- 36 | **Extra mile:** 37 | 38 | - Can you `ping` the Postgres instance? 39 | - Try to connect to the DB through your running EC2 instance. 40 | 41 | --- 42 | 43 | **Next:** create a [CodeDeploy project to deploy your API](/workshop/s3-web-ec2-api-rds/04-code-deploy.md). 44 | -------------------------------------------------------------------------------- /workshop/s3-web-ec2-api-rds/04-code-deploy.md: -------------------------------------------------------------------------------- 1 | # CodeDeploy 2 | 3 | [CodeDeploy](http://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) is a service to automate the deployment of any kind of application to EC2 instances. The configuration is really simple and easy to adapt. The deployment process is described in an `appspec.yml` file like [this one](/appspec.yml). If you want to know what happens during the deploy, you can also check the implementation of the hooks [here](/infrastructure/aws/codedeploy). 4 | 5 | First, we need to create a default role for CodeDeploy so it can have access to other AWS services (like S3). 6 | 7 | ## Create CodeDeploy Role 8 | 1. Go to **IAM** under **Security, Identity & Compliance**. 9 | 2. Go to the **Roles** section and click **Create Role**. 10 | 3. Select **CodeDeploy** for both service and use case and click **Next: Permissions**. 11 | 4. Select **Next: Tags**. 12 | 5. Select **Next: Review**. 13 | 6. Type a name and description and click **Create Role**. 14 | 15 | Now we are ready to start using it. 16 | 17 | ## Configure CodeDeploy 18 | 1. Go to **CodeDeploy** under **Developer Tools**. 19 | 2. Go to **Applications** and click **Create application**. 20 | 3. Enter an **Application name**, select **EC2/On-premises** as **Compute platform**, then click **Create Application**. 21 | 4. Click on **Create Deployment group** and enter a Deployment Group name. 22 | 5. On **Service role** select the role created to grant CodeDeploy access to the instances. 23 | 6. Select **In-place** in the **Deployment Type** section. 24 | 7. Check **Amazon EC2 instances** in **Environment Configuration**, then on the first tag group select `environment` as Key and `prod` as Value; on the second line select `service` as Key and `api` as Value. This means that CodeDeploy will deploy our application to all the EC2 instances with those tags. 25 | 8. On **Deployment settings** select **CodeDeployDefault.OneAtATime** in Deployment Configurations. 26 | 9. Under **Load Balancer** uncheck **Enable load balancing**. 27 | 10. Click **Create deployment group**. 28 | 29 | Now our CodeDeploy application is ready. Let’s try our first deployment. 30 | 31 | 1. On the deployment group details of the group we just made, click **Create Deployment**. 32 | 2. On **Repository type** select **"My application is stored in GitHub"**. 33 | 3. In the **Connect to GitHub** section, type your GitHub account and select **Connect to GitHub**. 34 | 4.
Allow AWS to access your GitHub account, if needed. 35 | 5. Enter your repository name in the form _account/repository_. 36 | 6. In **Commit ID** type the commit hash that you want to deploy. 37 | 7. Select **Overwrite the content** below. 38 | 8. Click **Create Deployment**. 39 | 40 | During the deploy, try **View instances** and then **View events** to follow the progress and see what's happening. 41 | 42 | --- 43 | **Extra mile:** once the deploy finishes: 44 | 45 | - Try hitting the API with something like [Postman](https://www.getpostman.com/) or [httpie](https://httpie.org/). 46 | - What effect did the deploy have? Where did all the Python code end up? Is the API connected to the RDS already? `ssh` in to get all those answers, and more. 47 | 48 | --- 49 | **Next:** we are going to [finish our first deploy](/workshop/s3-web-ec2-api-rds/05-finishing-up.md); only some extra parameters are missing! 50 | -------------------------------------------------------------------------------- /workshop/s3-web-ec2-api-rds/05-finishing-up.md: -------------------------------------------------------------------------------- 1 | # Finishing up 2 | 3 | We are almost done. We have to add a few more parameters, and then we are ready to deploy the whole project. 4 | 5 | ## Create API_URL on Parameter Store 6 | 1. Go to **EC2** under **Compute** section. 7 | 2. Select your instance. 8 | 3. Copy the **Public DNS** under **Description**. 9 | 4. On the left menu select **Parameter Store**. 10 | 5. Click **Create Parameter**. 11 | 6. Enter `/prod/frontend/API_URL` as the name and `http://<the Public DNS you copied>:9000` as the value. 12 | 7. Click **Create Parameter** and close. 13 | 14 | This will be used by CodeBuild, so the frontend knows where the API is. You can check how [here](/buildspec.frontend.yml). 15 | 16 | ## Run CodeBuild project 17 | 1. Go to **CodeBuild** under the **Developer Tools** section. 18 | 2. Select the project created before and click **Start Build**. 19 | 3. Click **Start Build**. 20 | 4. Wait. 21 | 5. Check that all the phases ran successfully. 22 | 6. Done. 23 | 24 | Now, if you go to the public URL provided by S3 (under **S3**, your bucket, **Properties**, **Static website hosting**) you will find the endpoint. If everything went as planned, you should see the complete website. 25 | 26 | --- 27 | **Next:** add an extra [EC2 instance with ELB and auto-scaling](/workshop/elb-auto-scaling-group/introduction.md). 28 | -------------------------------------------------------------------------------- /workshop/s3-web-ec2-api-rds/introduction.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | We are ready to start the deployment of our website. 4 | 5 | The first step will be the frontend. Because it’s a static website, we can create an [S3 bucket](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html), put all the code in it and serve it as a static website. Think of an S3 bucket as a folder in the cloud, which can be set up for access from the outside world via a URL (and even help a bit with your application's routes). 6 | 7 | To automate the build, we will use [CodeBuild](https://aws.amazon.com/codebuild/), an AWS service to build projects on demand. 8 | CodeBuild will pull our repository, build the webpage and copy the build directory to S3. The configuration is specified in `buildspec.frontend.yml` in [the root folder of our repo](/buildspec.frontend.yml).
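To make the build step more concrete, the commands CodeBuild ends up running for the frontend look roughly like the following. This is a simplified sketch of the idea, not a copy of the repo's `buildspec.frontend.yml`; the bucket name is a placeholder, and the `API_URL` parameter is injected as an environment variable at build time.

```sh
# Install dependencies and produce an optimized production build of the React app.
npm install
npm run build

# Copy the build output to the S3 bucket that serves the static site.
# --delete removes files left over from previous builds.
aws s3 sync build/ s3://<your-frontend-bucket> --delete
```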
9 | 10 | In order to automate the deployment of our API to the EC2 instances, we will use [CodeDeploy](http://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html). It will pull our repo to the EC2 instances and start our server (gunicorn). The full deploy process is described in the `appspec.yml` file, [here](/appspec.yml). 11 | 12 | Last but not least, our database will be hosted using [AWS RDS](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html), as a PostgreSQL instance. 13 | 14 | To sum up, in this section we will create: 15 | 16 | - an S3 bucket to host our static frontend. 17 | - a CodeBuild setup to build the frontend and copy the output to the S3 bucket. 18 | - a CodeDeploy setup to deploy our API to the EC2 instances. 19 | - an RDS PostgreSQL instance. 20 | 21 | > **Important:** after you are done with this workshop, you will ideally clean up your account, so you are not billed anymore. This means that you need to delete everything you have created. 22 | > 23 | > Many resources in AWS [can be tagged](https://aws.amazon.com/answers/account-management/aws-tagging-strategies/). If something can be tagged, then you should tag it with a **unique name**. Later, you can use the [Tag Editor](https://aws.amazon.com/blogs/aws/resource-groups-and-tagging/) to find your tagged resources to delete, and make sure you don't leave anything behind. 24 | 25 | --- 26 | 27 | **Next:** learn how to [serve a static website from S3](/workshop/s3-web-ec2-api-rds/01-serve-website-from-s3.md). 28 | -------------------------------------------------------------------------------- /workshop/serverless/02-api-integration.md: -------------------------------------------------------------------------------- 1 | ## API Integration 2 | 3 | We want to keep using the single API endpoint we have been using so far for all of our API calls. To do that, we will configure our load balancer to forward all requests matching the path `/*/search/*` to the lambda. 4 | 5 | First, we need to create a new `Target Group` with our lambda registered as the target. 6 | 7 | 1. Go to **EC2** under **Compute** section. 8 | 2. Click on **Target Groups**. 9 | 3. Click on **Create Target Group**. 10 | 4. Put a name and choose `Lambda function` as Target Type. 11 | 5. Select your lambda and click **Create**. 12 | 13 | Now we will modify our load balancer to add a forwarding rule for the new target group. 14 | 15 | 1. Go to **EC2** under **Compute** section. 16 | 2. Click on **Load Balancers** on the left. 17 | 3. Select your load balancer, go to the `Listeners` tab and click on `View/edit rules`. 18 | 4. Add a new rule with condition `IF` path is `/*/search/*` `THEN` forward to the Target Group you created before. 19 | 5. Click on **Update**. 20 | 21 | --- 22 | 23 | This is the end of this part of the workshop. You can continue reading more about serverless architecture in the AWS ecosystem [here](https://aws.amazon.com/serverless/). 24 | 25 | -------------------------------------------------------------------------------- /workshop/serverless/introduction.md: -------------------------------------------------------------------------------- 1 | # Add a lambda function to search posts by title 2 | 3 | In this section we want to add a lambda function to query our database for posts that contain a given word in their title. 4 | In the next steps we will install [Serverless](https://serverless.com), a framework to easily deploy serverless architectures on AWS and other cloud platforms.
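As an aside, the target-group wiring that the API Integration page above does through the console can also be scripted with the AWS CLI. A hedged sketch, where the function name and ARNs are placeholders and the lambda is assumed to be already deployed:

```sh
# Create a target group whose target type is a Lambda function instead of EC2 instances.
aws elbv2 create-target-group \
  --name search-lambda-tg \
  --target-type lambda

# Allow the load balancer to invoke the function.
aws lambda add-permission \
  --function-name <your-search-function> \
  --statement-id allow-alb-invoke \
  --action lambda:InvokeFunction \
  --principal elasticloadbalancing.amazonaws.com \
  --source-arn <target-group-arn>

# Register the function as the target of the new target group.
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=<lambda-function-arn>
```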
5 | 6 | 7 | --- 8 | **Next:** [Install and configure Serverless](/workshop/serverless/01-serverless.md) -------------------------------------------------------------------------------- /workshop/set-up-users.md: -------------------------------------------------------------------------------- 1 | # Set up users on AWS 2 | 3 | > **TryoTip:** if you are using the **Tryolabs Playground AWS account**, this section does not apply. Please, read it anyway, so you have some context on what you would do with a bare new AWS account. 4 | 5 | As you might already know, there is a special account in AWS called _root_. This is the account used to do the initial setup for users, roles and billing information. It is recommended to create a user with administrator privileges for everyday use and [not use the root account](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#create-iam-users) to log in to AWS. Additionally, you should make sure you enable [Multi Factor Authentication (MFA)](http://docs.aws.amazon.com/console/iam/security-status-activate-mfa) on your root account, and use an app like [Authy](https://authy.com/) as a second factor on your phone (Android/iOS). 6 | 7 | Next, we are going to use our root account to set up 2 AWS users. 8 | 9 | One will be used to access AWS via the console (web interface, so this will be your own user). The other will be used for accessing our account *programmatically*: we will create an **access key ID** and **secret access key** for the AWS API, CLI, SDK, and other development tools. 10 | 11 | Every account has some associated permissions. It is a good practice to have those strictly limited to the bare minimum necessary, especially for programmatic access. Permissions are handled by attaching [policies](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) to the user accounts. There, you can customize the access levels to various AWS services. 12 | 13 | First, we are going to create the user for the AWS console: 14 | 15 | 1. Log in to your AWS account with the root user. 16 | 2. Go to **IAM** under the Security, Identity & Compliance section. 17 | 3. Click on Users. 18 | 4. Click the Add user button. 19 | 5. Enter a username and check the option **AWS Management Console access** under the **Select AWS access type** section, then click next. You should also mark the option that forces the user to change their password on next login (pick a secure password!). 20 | 6. Select **Attach existing policies directly**. 21 | 7. Search for: `AdministratorAccess`, check it and click next. 22 | 8. Click on Create user. Copy the URL and password that appear in the Success message. 23 | 24 | Now, let's log in with our new user: 25 | 26 | 1. Log out from AWS and go to the link you copied earlier. 27 | 2. Enter the username and password that were auto-generated. 28 | 3. Enter your new password. 29 | 30 | After this, we can create the user to access AWS programmatically: 31 | 32 | 1. Repeat steps 2 to 4 to set up a user. 33 | 2. Enter a username and check the option **Programmatic access** under the **Select AWS access type** section. Click next. 34 | 3. Select **Attach existing policies directly**. 35 | 4. Search for: `AdministratorAccess`, check it and click next. Of course, in a real use case, you would design or use a policy with more restricted access. 36 | 5. Click on Download CSV. 37 | 38 | In the downloaded file, you can find the access key ID and the secret access key.
You’ll need them to [configure your AWS CLI](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) on your computer. If you don’t have the AWS CLI installed yet, you can install it by following [these steps](http://docs.aws.amazon.com/cli/latest/userguide/installing.html). 39 | 40 | --- 41 | **Extra mile**: attach the `ViewOnlyAccess` policy to the user with programmatic access. Double points if you do it with the CLI. 42 | 43 | --- 44 | 45 | **Next:** [S3, RDS and EC2](/workshop/s3-web-ec2-api-rds/introduction.md). 46 | -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/01-create-vpc.md: -------------------------------------------------------------------------------- 1 | # VPC 2 | 3 | We are going to create our VPC with 4 subnets (2 private and 2 public). 4 | 5 | ## Create a VPC 6 | 1. Go to VPC under Networking & Content Delivery. 7 | 2. Go to Your VPCs on the left section. 8 | 3. Click on Create VPC. 9 | 4. As **Name tag** put: `awsworkshopvpc`. 10 | 5. As **IPv4 CIDR block** put: `10.0.0.0/16`. 11 | 6. Then click: Yes, Create. 12 | 13 | ## Create 4 subnets 14 | 1. Go to Subnets on the left section. 15 | 2. Click Create Subnet. 16 | 3. As **Name tag** put: `10.0.1.0-us-east-1a`. 17 | 4. **Availability Zone**: `us-east-1a`. 18 | 5. As **IPv4 CIDR block** put: `10.0.1.0/24`. The CIDR block of any subnet must be a subset of the VPC CIDR block. 19 | 6. Then click Yes, Create. 20 | 7. Repeat steps 2-6 using as **Name tag**: `10.0.2.0-us-east-1a`, **Availability Zone**: `us-east-1a` and **IPv4 CIDR block**: `10.0.2.0/24`. 21 | 8. Repeat steps 2-6 using as **Name tag**: `10.0.3.0-us-east-1b`, **Availability Zone**: `us-east-1b` and **IPv4 CIDR block**: `10.0.3.0/24`. 22 | 9. Repeat steps 2-6 using as **Name tag**: `10.0.4.0-us-east-1b`, **Availability Zone**: `us-east-1b` and **IPv4 CIDR block**: `10.0.4.0/24`. 23 | 24 | --- 25 | **Next:** [create an Internet Gateway and a public Route table](/workshop/vpc-subnets-bastion/02-internet-gateway.md). 26 | 27 | -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/02-internet-gateway.md: -------------------------------------------------------------------------------- 1 | # Internet Gateway 2 | 3 | We already have our VPC with 4 subnets, but none of those can access the Internet (they are effectively private). To turn 2 of them into public subnets, we need to set up an [Internet Gateway](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html) for our VPC and create a Route Table to route all external traffic through the gateway. 4 | Finally, we need to associate 2 of our subnets to this route table and assign them a public IP, so they turn into public subnets. 5 | 6 | ## Create an Internet Gateway 7 | 1. Go to Internet Gateways on the left section. 8 | 2. Click Create Internet Gateway. 9 | 3. As Name tag put: `awsworkshopIGW`. 10 | 4. Click: Yes, Create. 11 | 5. Click Attach to VPC. 12 | 6. Click: Yes, Attach. 13 | 14 | ## Create Route tables 15 | 1. Go to Route Tables on the left section. 16 | 2. Click Create Route Table. 17 | 3. As Name tag: `awsWorkshopPublicRT`. 18 | 4. Click Yes, Create. 19 | 5. On the bottom section select the Routes tab. 20 | 6. Click on the Edit button. 21 | 7. Click on Add another Route. 22 | 8. As **Destination** put `0.0.0.0/0`. 23 | 9. As **Target** select your Internet Gateway. 24 | 10. Click Save. 25 | 11. Now select the Subnet Associations tab. 26 | 12. Click on Edit.
27 | 13. Select `10.0.1.0-us-east-1a` and `10.0.3.0-us-east-1b`. 28 | 14. Click Save. 29 | 30 | ## Assign a public IP to our public subnets 31 | 1. Go to Subnets on the left section. 32 | 2. Select `10.0.1.0-us-east-1a`. 33 | 3. Click on Subnet Actions. 34 | 4. Select Modify auto-assign IP settings. 35 | 5. Check: Enable auto-assign public IPv4 address. 36 | 6. Click Save. 37 | 7. Click Close. 38 | 8. Repeat steps 2-7 with `10.0.3.0-us-east-1b`. 39 | 40 | --- 41 | **Next:** [create a NAT Instance](/workshop/vpc-subnets-bastion/03-nat-instance.md). 42 | -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/03-nat-instance.md: -------------------------------------------------------------------------------- 1 | # NAT Instance 2 | 3 | Until now we have 2 public subnets and 2 private subnets. In the private ones, we will deploy the webserver instances that will be accessible via a Load Balancer. 4 | 5 | Even if these instances don't need to be reachable from outside of the VPC, they need to have Internet access to download and update packages. For this reason, we need a NAT through which we can route all external outbound traffic. 6 | 7 | AWS offers two options for NAT: [NAT Instance](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) and [NAT Gateway](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html). 8 | The Gateway offering is newer and easier to set up than the NAT Instance, and it also scales automatically. However, for this tutorial, we will go for the NAT Instance purely because it's cheaper (we don't want to be billed too much!). 9 | 10 | ## Create NAT Instance 11 | 1. Go to EC2 under Compute section. 12 | 2. Click on Instances on the left menu. 13 | 3. Click Launch Instance. 14 | 4. Select Community AMIs. 15 | 5. Type NAT and then hit Enter. 16 | 6. Select the first option that appears: 17 | 1. Root device type: `ebs` 18 | 2. Virtualization type: `hvm` 19 | 3. ENA Enabled: `No`. 20 | 7. Select `t2.micro` and click Next: Configure Instance Details. 21 | 8. On Network, select your VPC. 22 | 9. As subnet, select `10.0.1.0-us-east-1a`. 23 | 10. Click Next: Add Storage. 24 | 11. Click Next: Add Tag. 25 | 12. As Key put `Name` and as Value `MyNat`. 26 | 13. Click Next: Configure Security Group. 27 | 14. Select: Create a new security group. 28 | 15. As **Security group name** put `natsecuritygroup`. 29 | 16. Add 3 rules: SSH, HTTP, HTTPS. 30 | 17. Click: Review and Launch. 31 | 18. Click: Next. 32 | 19. Click: Launch. 33 | 20. Select your key pair and click Launch Instances. 34 | 21. Click View Instances. 35 | 22. Select your NAT instance. 36 | 23. Go to Actions - Networking and click on **Change Source/Dest. Check**. 37 | 24. Click Yes, Disable. 38 | 25. Go to Actions - Networking again, click **Change Security Groups** and add the default one for the VPC. 39 | 40 | ## Create a route for private subnets through the NAT instance 41 | 1. Go to VPC under Networking & Content Delivery. 42 | 2. Go to Route Tables on the left section. 43 | 3. Select the Main route table of `awsworkshopvpc`. 44 | 4. On the bottom section go to the Routes tab. 45 | 5. Click Edit. 46 | 6. Click Add another Route. 47 | 7. As Destination put: `0.0.0.0/0`. 48 | 8. As Target select your NAT Instance. 49 | 9. Click Save. 50 | 51 | --- 52 | **Next:** [create a new Load Balancer](/workshop/vpc-subnets-bastion/04-load-balancer.md). 
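**CLI aside:** the source/destination check and the private route can also be handled from the AWS CLI. A minimal sketch, with all IDs as placeholders:

```sh
# A NAT instance forwards traffic that is not addressed to itself,
# so the EC2 source/destination check must be disabled.
aws ec2 modify-instance-attribute \
  --instance-id <nat-instance-id> \
  --no-source-dest-check

# Send all outbound Internet traffic from the private subnets through the NAT instance
# (here, the VPC's main route table is the one still associated with the private subnets).
aws ec2 create-route \
  --route-table-id <main-route-table-id> \
  --destination-cidr-block 0.0.0.0/0 \
  --instance-id <nat-instance-id>
```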
53 | -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/04-load-balancer.md: -------------------------------------------------------------------------------- 1 | # Load Balancer 2 | 3 | At this point, we need to create a Load Balancer to be able to route requests from the web to our instances. 4 | 5 | ## Create a new Load Balancer 6 | 1. Go to EC2 under Compute section. 7 | 2. Click on Load Balancers. 8 | 3. Click Create Load Balancer. 9 | 4. Click Create on Application Load Balancer. 10 | 5. As Name put: `aws-workshop-load-balancer-vpc`. 11 | 6. On Availability Zones, on VPC select `awsworkshopvpc`. 12 | 7. Click on `us-east-1a`. 13 | 8. Click on `10.0.1.0-us-east-1a`. 14 | 9. Repeat steps 7 and 8 for `us-east-1b` and `10.0.3.0-us-east-1b`. 15 | 10. Click Next: Configure Security Settings. 16 | 11. Click Next: Configure Security Groups. 17 | 12. Select Create a **new** security group and then click Next: Configure Routing. 18 | 13. As name put: `aws-workshop-target-group-vpc`. 19 | 14. As Port: `9000`. 20 | 15. As path: `/api/tags`. 21 | 16. Click Next: Register Targets. 22 | 17. Click Next: Review. 23 | 18. Click: Create. 24 | 19. Click: Close. 25 | 20. Select the new load balancer. 26 | 21. Go to Description at the bottom and find Security. 27 | 22. Click Edit Security Groups. 28 | 23. Select default (so that both security groups are selected). 29 | 24. Click Save. 30 | 25. Delete the old Load Balancer. 31 | 32 | ## Modify API_URL 33 | Repeat the steps outlined in [this section](/workshop/elb-auto-scaling-group/03-finishing-up.md). 34 | 35 | --- 36 | **Next:** [move RDS into your VPC](/workshop/vpc-subnets-bastion/05-RDS.md). -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/05-RDS.md: -------------------------------------------------------------------------------- 1 | # RDS 2 | 3 | Now, we should move our RDS to the private subnets of our VPC. This way, we ensure that RDS is only accessible from these private subnets, and never from the outside world. 4 | 5 | ## Move RDS to your VPC 6 | 1. Open the [Amazon RDS console](https://console.aws.amazon.com/rds) and choose Subnet Groups on the left navigation pane. 7 | 2. Choose **Create DB Subnet Group**. 8 | 3. Enter the subnet name: `vpcsubnetgroup`. 9 | 4. As VPC ID: your VPC. 10 | 5. Then choose Availability Zone `us-east-1a` and Subnet Id `10.0.2.0-us-east-1a` and click Add. 11 | 6. Then choose Availability Zone `us-east-1b` and Subnet Id `10.0.4.0-us-east-1b` and click Add. 12 | 7. Click **Create**. 13 | 8. Go to Instances, select your RDS instance and on Instance Actions select Modify. 14 | 9. As Subnet Group select your `vpcsubnetgroup`. 15 | 10. Security Group: `default`. 16 | 11. Click Modify DB Instance. 17 | 12. Check Apply Immediately. 18 | 13. Click Continue. 19 | 20 | --- 21 | **Next:** [Auto Scaling Group](/workshop/vpc-subnets-bastion/06-auto-scaling-group.md). 22 | -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/06-auto-scaling-group.md: -------------------------------------------------------------------------------- 1 | # Auto Scaling Group 2 | 3 | We are going to create a new Launch Configuration and Auto Scaling Group so that all our instances start only in our private subnets. 4 | 5 | ## Create a new Launch Configuration 6 | 1. Go to EC2 under Compute. 7 | 2. Go to Auto Scaling Groups on the left menu. 8 | 3. Delete the existing Auto Scaling group. 
9 | 4. Go to Launch Configuration on the left menu. 10 | 5. Delete the existing Launch Configuration. 11 | 12 | Now, you need to create a new Launch Configuration that is almost identical to the one that you just deleted, except for one thing: instead of creating a Security Group, you need to choose the default one for your VPC. 13 | 14 | There is no simple way to find it, because your AWS account already has a default VPC with its own default security group, and at this stage of the Launch Configuration wizard there is no way to distinguish between your VPC's default group and the default group for the default VPC (🤔). To find the security group: 15 | 16 | 1. Go to **VPC** under **Networking & Content Delivery**. 17 | 2. Select **Security Groups** in the **Security** section on the left. 18 | 3. Search for a group with name `default` and VPC `vpc-ugly-id | awsworkshopvpc`. 19 | 4. Copy the **Group ID** value. 20 | 21 | Once you have this Security Group ID, start the Launch Configuration creation wizard. When you reach the _Configure Security Group_ step, instead of creating a new security group choose **Select an existing security group** and look for the group named _default_ with the ID you copied. You can check the [previous instructions](/workshop/elb-auto-scaling-group/02-auto-scaling-group.md) if needed. 22 | 23 | ## Create Auto Scaling Group 24 | 1. Go to EC2 under Compute section. 25 | 2. On the left menu select Auto Scaling Groups under AUTO SCALING. 26 | 3. Click: Create Auto Scaling group. 27 | 4. Select: `aws-workshop-auto-scaling-group` and then click Next Step. 28 | 5. On Group name put the same as in the Launch configuration. 29 | 6. Group size: 2. 30 | 7. Network: `awsworkshopvpc`. 31 | 8. Subnet: `10.0.2.0-us-east-1a` and `10.0.4.0-us-east-1b`. 32 | 9. On Advanced Details click on: Receive traffic from one or more load balancers. 33 | 10. On Target Groups click and select: `aws-workshop-target-group-vpc`. 34 | 11. Click Next: Configure scaling policies. 35 | 12. Select: **Use scaling policies to adjust the capacity of this group**. 36 | 13. Scale between 2 and 4 instances. 37 | 14. Target value: 80. 38 | 15. Instances need: 180 seconds to warm up. 39 | 16. Click: Next: Configure Notifications. 40 | 17. Click: Next: Configure Tags. 41 | 18. Click: Review. 42 | 19. Click: Create Auto Scaling group. 43 | 20. Click: Close. 44 | 45 | --- 46 | **Extra mile:** 47 | 48 | - Why is the ASG only available in two subnets and not all of them? 49 | - Why do we need this configuration of subnets anyway? (2 public and 2 private). 50 | 51 | --- 52 | **Next:** [create a Bastion](/workshop/vpc-subnets-bastion/07-bastion.md) to be able to SSH into the private instances. 53 | -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/07-bastion.md: -------------------------------------------------------------------------------- 1 | # Bastion instance 2 | 3 | A bastion is a regular EC2 instance located in one of the public subnets, which allows incoming traffic through SSH. Through this instance, we will be able to SSH into any instance located in the private subnets (assuming it accepts incoming traffic from the bastion). 4 | 5 | ## Create a Bastion Instance 6 | 1. Go to **EC2** under the **Compute** section. 7 | 2. Click on Launch Instance. 8 | 3. Look for Ubuntu Server (make sure it says Free tier eligible) and click Select. 9 | 4. Select `t2.micro` and then click on Next: Configure Instance Details. 10 | 5. On Network, select your VPC. 11 | 6.
As subnet, you can pick either of the two public ones. For example, `10.0.1.0-us-east-1a`. 12 | 7. Click Next: Add Storage. 13 | 8. Leave the default settings and click Next: Add Tags. 14 | 9. Click Add Tag. 15 | 10. Fill Key with `Name` and Value with `bastion`. 16 | 11. Click on Next: Configure Security Group. 17 | 12. Write a meaningful name in **Security group name**. 18 | 13. Click Review and Launch. 19 | 14. Click Launch. 20 | 15. Select your key pair and click Launch Instances. 21 | 16. Select the Bastion in the instances list and on Actions/Networking select Change Security Groups. 22 | 17. Check the default security group of your VPC. Make sure that 2 security groups are checked: the default one and the one you created when creating the bastion. 23 | 24 | ## Accessing private instances through the bastion 25 | 26 | Now you have a public instance that can be accessed via SSH, but what you want is to be able to access your private instances. 27 | 28 | To access the instances, you need to SSH with the PEM (key pair) that you generated when launching the first one. 29 | 30 | ### Option 1: set up SSH agent forwarding 31 | You can read a guide [here](https://developer.github.com/v3/guides/using-ssh-agent-forwarding/). Even though the examples check access to GitHub, it's analogous to accessing our private instances. 32 | 33 | You can also set up SSH so it's easier to access protected instances by going transparently through the bastion. [Here](https://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/) you have a nice guide. 34 | 35 | ### Option 2: copy the PEM file from your machine to the bastion instance 36 | Ideally, you would be using a different PEM file for the bastion and the instances (increased security). 37 | 38 | 1. Copy the file with `scp -i ~/.ssh/<bastion-key>.pem ~/.ssh/<instances-key>.pem ubuntu@<bastion-public-dns>:/home/ubuntu/.ssh`. 39 | 2. SSH into the bastion. 40 | 3. Make sure the file permissions are correct: `chmod 400 ~/.ssh/<instances-key>.pem`. 41 | 4. SSH into the instances (from the bastion) with `ssh -i ~/.ssh/<instances-key>.pem ubuntu@<private-instance-ip>`. 42 | 43 | --- 44 | **Extra mile:** `ssh` to one of the instances in the private subnets and `tracepath` to an external host. Do the same for an instance in the public subnets. What's the difference? 45 | 46 | --- 47 | 48 | **Next:** [finish the deploy](/workshop/vpc-subnets-bastion/08-finishing-up.md). -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/08-finishing-up.md: -------------------------------------------------------------------------------- 1 | # Finishing up 2 | 3 | As usual, the last steps are to modify our CodeDeploy project so it uses our new Auto Scaling Group, re-run the deploy, and rebuild the web so it uses the new parameters. 4 | 5 | For this, you can repeat the steps outlined [here](/workshop/elb-auto-scaling-group/03-finishing-up.md#modify-the-codedeploy-project). 6 | 7 | --- -------------------------------------------------------------------------------- /workshop/vpc-subnets-bastion/introduction.md: -------------------------------------------------------------------------------- 1 | # VPC and *bastion* instance 2 | 3 | The aim of this section is to improve our security and redundancy a bit. For this, we are going to create a [custom VPC](https://aws.amazon.com/documentation/vpc/). 4 | 5 | Once we have our VPC, we will create 2 private subnets (where our Auto Scaling Group will launch the web server instances) in different Availability Zones (for redundancy reasons). 
We will also set up 2 public subnets in the same availability zones, which are needed by the load balancer. You can read more about VPC and subnets [here](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html). 6 | 7 | For our public subnets, we will need to set up an [Internet Gateway](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html) and create a new [Route Table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html), so any instance in the subnet can access the Internet. 8 | 9 | Since our application's instances will live in the **private** subnets, we will also need a [NAT Instance](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) that will route their Internet traffic through the public subnets. We need our instances to access the Internet so that we can download packages, update our system, etc. 10 | 11 | We need to create a new Launch Configuration and modify our Auto Scaling Group so that from now on it deploys to our VPC in the right subnets. Also, our RDS (PostgreSQL database) needs to be moved to our VPC so our instances can reach it. 12 | 13 | To access our instances through SSH from the outside world, we will add a [bastion instance](https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/). Since the bastion has direct access to the instances, we can access them by accessing the bastion first. 14 | 15 | Finally, some changes need to be made to our CodeDeploy project so it deploys to our VPC, as expected. 16 | 17 | So, let's get started. 18 | 19 | --- 20 | **Next:** [create a VPC](/workshop/vpc-subnets-bastion/01-create-vpc.md). 21 | 22 | --------------------------------------------------------------------------------