├── .gitignore
├── {{cookiecutter.project_slug}}
│   ├── .gitignore
│   ├── backend
│   │   ├── app
│   │   │   ├── app
│   │   │   │   ├── __init__.py
│   │   │   │   ├── api
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   └── api_v1
│   │   │   │   │       ├── __init__.py
│   │   │   │   │       ├── endpoints
│   │   │   │   │       │   ├── __init__.py
│   │   │   │   │       │   ├── group.py
│   │   │   │   │       │   ├── token.py
│   │   │   │   │       │   └── user.py
│   │   │   │   │       ├── utils
│   │   │   │   │       │   └── __init__.py
│   │   │   │   │       ├── api.py
│   │   │   │   │       └── api_docs.py
│   │   │   │   ├── core
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── security.py
│   │   │   │   │   ├── celery_app.py
│   │   │   │   │   ├── cors.py
│   │   │   │   │   ├── jwt.py
│   │   │   │   │   ├── config.py
│   │   │   │   │   ├── app_setup.py
│   │   │   │   │   ├── errors.py
│   │   │   │   │   └── database.py
│   │   │   │   ├── models
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── base_relations.py
│   │   │   │   │   ├── group.py
│   │   │   │   │   └── user.py
│   │   │   │   ├── schemas
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── base.py
│   │   │   │   │   ├── msg.py
│   │   │   │   │   ├── scalar.py
│   │   │   │   │   ├── token.py
│   │   │   │   │   ├── user.py
│   │   │   │   │   └── group.py
│   │   │   │   ├── rest_tests
│   │   │   │   │   ├── api
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   └── api_v1
│   │   │   │   │   │       ├── __init__.py
│   │   │   │   │   │       ├── token
│   │   │   │   │   │       │   ├── __init__.py
│   │   │   │   │   │       │   └── test_token.py
│   │   │   │   │   │       ├── user
│   │   │   │   │   │       │   ├── __init__.py
│   │   │   │   │   │       │   └── test_user.py
│   │   │   │   │   │       ├── group
│   │   │   │   │   │       │   ├── __init__.py
│   │   │   │   │   │       │   ├── test_create_groups.py
│   │   │   │   │   │       │   ├── test_fetch_groups.py
│   │   │   │   │   │       │   └── test_assign_group_admin.py
│   │   │   │   │   │       └── conftest.py
│   │   │   │   │   ├── .gitignore
│   │   │   │   │   ├── utils
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── faker.py
│   │   │   │   │   │   ├── user.py
│   │   │   │   │   │   └── group.py
│   │   │   │   │   └── __init__.py
│   │   │   │   ├── main.py
│   │   │   │   └── worker.py
│   │   │   ├── alembic
│   │   │   │   ├── versions
│   │   │   │   │   └── .keep
│   │   │   │   ├── README
│   │   │   │   ├── script.py.mako
│   │   │   │   └── env.py
│   │   │   ├── uwsgi.ini
│   │   │   ├── prestart.sh
│   │   │   └── alembic.ini
│   │   ├── .gitignore
│   │   ├── Dockerfile-rest-tests
│   │   ├── Dockerfile-celery-worker
│   │   └── Dockerfile
│   ├── frontend
│   │   ├── src
│   │   │   ├── assets
│   │   │   │   └── .gitkeep
│   │   │   ├── app
│   │   │   │   ├── app.component.scss
│   │   │   │   ├── app.component.ts
│   │   │   │   ├── app-routing.module.ts
│   │   │   │   ├── app.module.ts
│   │   │   │   ├── app.component.spec.ts
│   │   │   │   └── app.component.html
│   │   │   ├── environments
│   │   │   │   ├── environment.prod.ts
│   │   │   │   ├── environment.stag.ts
│   │   │   │   └── environment.ts
│   │   │   ├── styles.scss
│   │   │   ├── favicon.ico
│   │   │   ├── typings.d.ts
│   │   │   ├── tsconfig.app.json
│   │   │   ├── index.html
│   │   │   ├── tsconfig.spec.json
│   │   │   ├── main.ts
│   │   │   ├── test.ts
│   │   │   └── polyfills.ts
│   │   ├── .dockerignore
│   │   ├── nginx-custom.conf
│   │   ├── e2e
│   │   │   ├── app.po.ts
│   │   │   ├── tsconfig.e2e.json
│   │   │   └── app.e2e-spec.ts
│   │   ├── .editorconfig
│   │   ├── tsconfig.json
│   │   ├── .gitignore
│   │   ├── protractor.conf.js
│   │   ├── README.md
│   │   ├── Dockerfile
│   │   ├── karma.conf.js
│   │   ├── .angular-cli.json
│   │   ├── package.json
│   │   └── tslint.json
│   ├── docker-compose.stag.build.yml
│   ├── docker-compose.prod.build.yml
│   ├── docker-compose.branch.build.yml
│   ├── docker-compose.test.yml
│   ├── .gitlab-ci.yml
│   ├── docker-compose.override.yml
│   ├── docker-compose.yml
│   ├── docker-compose.prod.yml
│   ├── docker-compose.branch.yml
│   ├── docker-compose.stag.yml
│   └── README.md
├── testing-config.yml
├── screenshot.png
├── .travis.yml
├── test.sh
├── LICENSE
├── cookiecutter.json
├── README.md
└── docker-swarm-cluster-deploy.md
/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode
2 | testing-project
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/assets/.gitkeep:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/alembic/versions/.keep:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/models/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/schemas/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/app/app.component.scss:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/.dockerignore:
--------------------------------------------------------------------------------
1 | node_modules
2 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/endpoints/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/utils/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/.gitignore:
--------------------------------------------------------------------------------
1 | .cache
2 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/utils/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/testing-config.yml:
--------------------------------------------------------------------------------
1 | default_context:
2 | "project_name": "Testing Project"
3 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/token/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/user/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/senseta-os/senseta-base-project/HEAD/screenshot.png
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/alembic/README:
--------------------------------------------------------------------------------
1 | Generic single-database configuration.
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/__init__.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/uwsgi.ini:
--------------------------------------------------------------------------------
1 | [uwsgi]
2 | module = app.main
3 | callable = app
4 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/group/__init__.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/environments/environment.prod.ts:
--------------------------------------------------------------------------------
1 | export const environment = {
2 | production: true
3 | };
4 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/styles.scss:
--------------------------------------------------------------------------------
1 | /* You can add global styles to this file, and also import other style files */
2 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/environments/environment.stag.ts:
--------------------------------------------------------------------------------
1 | export const environment = {
2 | production: false
3 | };
4 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/prestart.sh:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env bash
2 |
3 | # Let the DB start
4 | sleep 10;
5 | # Run migrations
6 | alembic upgrade head
7 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/senseta-os/senseta-base-project/HEAD/{{cookiecutter.project_slug}}/frontend/src/favicon.ico
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/security.py:
--------------------------------------------------------------------------------
1 | from passlib.context import CryptContext
2 |
3 | pwd_context = CryptContext(schemes=['bcrypt'], deprecated='auto')
4 |
--------------------------------------------------------------------------------
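A minimal usage sketch of the shared CryptContext above (hypothetical values; only passlib's hash() and verify() calls are assumed):

    from app.core.security import pwd_context

    hashed = pwd_context.hash('my-password')          # bcrypt hash, safe to store on the User model
    assert pwd_context.verify('my-password', hashed)  # True only for the matching plaintext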
/{{cookiecutter.project_slug}}/frontend/src/typings.d.ts:
--------------------------------------------------------------------------------
1 | /* SystemJS module definition */
2 | declare var module: NodeModule;
3 | interface NodeModule {
4 | id: string;
5 | }
6 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/utils/faker.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from faker import Faker
4 |
5 | fake = Faker()
6 |
7 | use_seed = os.getenv('SEED')
8 | if use_seed:
9 | fake.seed(use_seed)
10 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/nginx-custom.conf:
--------------------------------------------------------------------------------
1 | server {
2 | listen 80;
3 | location / {
4 | root /usr/share/nginx/html;
5 | index index.html index.htm;
6 | try_files $uri $uri/ /index.html =404;
7 | }
8 | }
9 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/celery_app.py:
--------------------------------------------------------------------------------
1 | from celery import Celery
2 |
3 | celery_app = Celery('worker', broker='amqp://guest@queue//')
4 |
5 | celery_app.conf.task_routes = {
6 | 'app.worker.test_task': 'main-queue',
7 | }
8 |
--------------------------------------------------------------------------------
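A minimal sketch of enqueueing the routed task from the Flask side by name; it assumes the task keeps the default name Celery derives from the decorated function in app/worker.py:

    from app.core.celery_app import celery_app

    # Sent to 'main-queue' via the task_routes mapping above
    celery_app.send_task('app.worker.test_task')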
/{{cookiecutter.project_slug}}/backend/app/app/schemas/base.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 |
5 | # Import installed packages
6 | from marshmallow import Schema, fields
7 |
8 | # Import app code
9 |
10 |
11 | class BaseSchema(Schema):
12 | pass
13 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/e2e/app.po.ts:
--------------------------------------------------------------------------------
1 | import { browser, by, element } from 'protractor';
2 |
3 | export class AppPage {
4 | navigateTo() {
5 | return browser.get('/');
6 | }
7 |
8 | getParagraphText() {
9 | return element(by.css('app-root h1')).getText();
10 | }
11 | }
12 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/app/app.component.ts:
--------------------------------------------------------------------------------
1 | import { Component } from '@angular/core';
2 |
3 | @Component({
4 | selector: 'app-root',
5 | templateUrl: './app.component.html',
6 | styleUrls: ['./app.component.scss']
7 | })
8 | export class AppComponent {
9 | title = 'app';
10 | }
11 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/tsconfig.app.json:
--------------------------------------------------------------------------------
1 | {
2 | "extends": "../tsconfig.json",
3 | "compilerOptions": {
4 | "outDir": "../out-tsc/app",
5 | "baseUrl": "./",
6 | "module": "es2015",
7 | "types": []
8 | },
9 | "exclude": [
10 | "test.ts",
11 | "**/*.spec.ts"
12 | ]
13 | }
14 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/main.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import installed packages
4 | from flask import Flask
5 |
6 | # Import app code
7 | app = Flask(__name__)
8 |
9 | # Setup app
10 | from .core import app_setup # noqa
11 |
12 | if __name__ == "__main__":
13 | app.run(debug=True, host='0.0.0.0', port=80)
14 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/.editorconfig:
--------------------------------------------------------------------------------
1 | # Editor configuration, see http://editorconfig.org
2 | root = true
3 |
4 | [*]
5 | charset = utf-8
6 | indent_style = space
7 | indent_size = 2
8 | insert_final_newline = true
9 | trim_trailing_whitespace = true
10 |
11 | [*.md]
12 | max_line_length = off
13 | trim_trailing_whitespace = false
14 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/schemas/msg.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 |
5 | # Import installed packages
6 | from marshmallow import fields
7 | # Import app code
8 | from .base import BaseSchema
9 |
10 |
11 | class MsgSchema(BaseSchema):
12 | # Own properties
13 | msg = fields.Str()
14 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/e2e/tsconfig.e2e.json:
--------------------------------------------------------------------------------
1 | {
2 | "extends": "../tsconfig.json",
3 | "compilerOptions": {
4 | "outDir": "../out-tsc/e2e",
5 | "baseUrl": "./",
6 | "module": "commonjs",
7 | "target": "es5",
8 | "types": [
9 | "jasmine",
10 | "jasminewd2",
11 | "node"
12 | ]
13 | }
14 | }
15 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/app/app-routing.module.ts:
--------------------------------------------------------------------------------
1 | import { NgModule } from '@angular/core';
2 | import { Routes, RouterModule } from '@angular/router';
3 |
4 | const routes: Routes = [];
5 |
6 | @NgModule({
7 | imports: [RouterModule.forRoot(routes)],
8 | exports: [RouterModule]
9 | })
10 | export class AppRoutingModule { }
11 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/schemas/scalar.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 |
5 | # Import installed packages
6 | from marshmallow import fields
7 | # Import app code
8 | from .base import BaseSchema
9 |
10 |
11 | class ScalarSchema(BaseSchema):
12 | # Own properties
13 | value = fields.Float()
14 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/api.py:
--------------------------------------------------------------------------------
1 | # Import installed packages
2 |
3 | # Import app code
4 | from app.main import app
5 | from app.core import config
6 | from app.core.database import db_session
7 |
8 | from .api_docs import docs
9 |
10 | from .endpoints import group
11 | from .endpoints import token
12 | from .endpoints import user
13 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/e2e/app.e2e-spec.ts:
--------------------------------------------------------------------------------
1 | import { AppPage } from './app.po';
2 |
3 | describe('frontend App', () => {
4 | let page: AppPage;
5 |
6 | beforeEach(() => {
7 | page = new AppPage();
8 | });
9 |
10 | it('should display welcome message', () => {
11 | page.navigateTo();
12 | expect(page.getParagraphText()).toEqual('Welcome to app!');
13 | });
14 | });
15 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/index.html:
--------------------------------------------------------------------------------
1 | <!doctype html>
2 | <html lang="en">
3 | <head>
4 | <meta charset="utf-8">
5 | <title>Frontend</title>
6 | <base href="/">
7 |
8 | <meta name="viewport" content="width=device-width, initial-scale=1">
9 | <link rel="icon" type="image/x-icon" href="favicon.ico">
10 | </head>
11 | <body>
12 | <app-root></app-root>
13 | </body>
14 | </html>
15 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/Dockerfile-rest-tests:
--------------------------------------------------------------------------------
1 | FROM python:3.6
2 |
3 | RUN pip install requests
4 |
5 | # For development, Jupyter remote kernel, Hydrogen
6 | # Using inside the container:
7 | # jupyter notebook --ip=0.0.0.0 --allow-root
8 | RUN pip install jupyter
9 | EXPOSE 8888
10 |
11 | RUN pip install faker==0.8.4 pytest
12 |
13 | COPY ./app /app
14 |
15 | ENV PYTHONPATH=/app
16 |
17 | WORKDIR /app/app/rest_tests
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/schemas/token.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 |
5 | # Import installed packages
6 | from marshmallow import fields
7 | # Import app code
8 | from .base import BaseSchema
9 |
10 |
11 | class TokenSchema(BaseSchema):
12 | # Own properties
13 | access_token = fields.Str()
14 | refresh_token = fields.Str()
15 | token_type = fields.Str()
16 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/tsconfig.spec.json:
--------------------------------------------------------------------------------
1 | {
2 | "extends": "../tsconfig.json",
3 | "compilerOptions": {
4 | "outDir": "../out-tsc/spec",
5 | "baseUrl": "./",
6 | "module": "commonjs",
7 | "target": "es5",
8 | "types": [
9 | "jasmine",
10 | "node"
11 | ]
12 | },
13 | "files": [
14 | "test.ts"
15 | ],
16 | "include": [
17 | "**/*.spec.ts",
18 | "**/*.d.ts"
19 | ]
20 | }
21 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.stag.build.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | backend:
4 | build: ./backend
5 | image: '{{cookiecutter.docker_image_backend}}:stag'
6 | celeryworker:
7 | build:
8 | context: ./backend
9 | dockerfile: Dockerfile-celery-worker
10 | image: '{{cookiecutter.docker_image_celeryworker}}:stag'
11 | frontend:
12 | build: ./frontend
13 | image: '{{cookiecutter.docker_image_frontend}}:stag'
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/models/base_relations.py:
--------------------------------------------------------------------------------
1 | # Import installed packages
2 | from sqlalchemy import Table, Column, Integer, ForeignKey
3 |
4 | # Import app code
5 | from app.core.database import Base
6 |
7 | groups_admin_users = Table('groups_admin_users', Base.metadata,
8 | Column('user_id', Integer, ForeignKey('user.id')),
9 | Column('group_id', Integer, ForeignKey('group.id')))
10 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.prod.build.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | backend:
4 | build: ./backend
5 | image: '{{cookiecutter.docker_image_backend}}:prod'
6 | celeryworker:
7 | build:
8 | context: ./backend
9 | dockerfile: Dockerfile-celery-worker
10 | image: '{{cookiecutter.docker_image_celeryworker}}:prod'
11 | frontend:
12 | build: ./frontend
13 | image: '{{cookiecutter.docker_image_frontend}}:prod'
14 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/main.ts:
--------------------------------------------------------------------------------
1 | import { enableProdMode } from '@angular/core';
2 | import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
3 |
4 | import { AppModule } from './app/app.module';
5 | import { environment } from './environments/environment';
6 |
7 | if (environment.production) {
8 | enableProdMode();
9 | }
10 |
11 | platformBrowserDynamic().bootstrapModule(AppModule)
12 | .catch(err => console.log(err));
13 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.branch.build.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | backend:
4 | build: ./backend
5 | image: '{{cookiecutter.docker_image_backend}}:branch'
6 | celeryworker:
7 | build:
8 | context: ./backend
9 | dockerfile: Dockerfile-celery-worker
10 | image: '{{cookiecutter.docker_image_celeryworker}}:branch'
11 | frontend:
12 | build: ./frontend
13 | image: '{{cookiecutter.docker_image_frontend}}:branch'
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/environments/environment.ts:
--------------------------------------------------------------------------------
1 | // The file contents for the current environment will overwrite these during build.
2 | // The build system defaults to the dev environment which uses `environment.ts`, but if you do
3 | // `ng build --env=prod` then `environment.prod.ts` will be used instead.
4 | // The list of which env maps to which file can be found in `.angular-cli.json`.
5 |
6 | export const environment = {
7 | production: false
8 | };
9 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/cors.py:
--------------------------------------------------------------------------------
1 | # Import standard library modules
2 | import re
3 |
4 | # Import installed packages
5 | from flask_cors import CORS
6 |
7 | # Import app code
8 | from app.main import app
9 |
10 | # Anything from *{{cookiecutter.domain_main}}
11 | cors_origins_regex = re.compile(
12 | r'^(https?:\/\/(?:.+\.)?({{cookiecutter.domain_main|replace('.', '\.')}})(?::\d{1,5})?)$'
13 | )
14 | CORS(app, origins=cors_origins_regex, supports_credentials=True)
15 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/jwt.py:
--------------------------------------------------------------------------------
1 | # Import standard library modules
2 |
3 | # Import installed modules
4 | from flask_jwt_extended import JWTManager
5 |
6 | # Import app code
7 | from ..main import app
8 | from .database import db_session
9 | from ..models.user import User
10 |
11 | # Setup the Flask-JWT-Extended extension
12 | jwt = JWTManager(app)
13 |
14 |
15 | @jwt.user_loader_callback_loader
16 | def get_current_user(identity):
17 | return db_session.query(User).filter(User.id == identity).first()
18 |
--------------------------------------------------------------------------------
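A minimal sketch, assuming flask-jwt-extended 3.x (the same API generation as the user_loader_callback_loader used above), of an endpoint that requires a token and reads the loaded User; the route path is hypothetical:

    from flask import jsonify
    from flask_jwt_extended import jwt_required, get_current_user

    from app.main import app

    @app.route('/api/v1/users/me')  # illustrative route only
    @jwt_required
    def read_own_user():
        user = get_current_user()  # the User returned by the callback above
        return jsonify({'id': user.id, 'email': user.email})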
/{{cookiecutter.project_slug}}/frontend/src/app/app.module.ts:
--------------------------------------------------------------------------------
1 | import { BrowserModule } from '@angular/platform-browser';
2 | import { NgModule } from '@angular/core';
3 |
4 | import { AppRoutingModule } from './app-routing.module';
5 |
6 | import { AppComponent } from './app.component';
7 |
8 |
9 | @NgModule({
10 | declarations: [
11 | AppComponent
12 | ],
13 | imports: [
14 | BrowserModule,
15 | AppRoutingModule
16 | ],
17 | providers: [],
18 | bootstrap: [AppComponent]
19 | })
20 | export class AppModule { }
21 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/config.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | API_V1_STR = '/api/v1'
4 |
5 | SECRET_KEY = os.getenvb(b'SECRET_KEY')
6 | if not SECRET_KEY:
7 | SECRET_KEY = os.urandom(32)
8 |
9 | ACCESS_TOKEN_EXPIRE_MINUTES = 15
10 | REFRESH_TOKEN_EXPIRE_DAYS = 30
11 |
12 | SERVER_NAME = os.getenv('SERVER_NAME')
13 | SENTRY_DSN = os.getenv('SENTRY_DSN')
14 | POSTGRES_PASSWORD = os.getenv('POSTGRES_PASSWORD')
15 |
16 | FIRST_SUPERUSER = os.getenv('FIRST_SUPERUSER')
17 | FIRST_SUPERUSER_PASSWORD = os.getenv('FIRST_SUPERUSER_PASSWORD')
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/worker.py:
--------------------------------------------------------------------------------
1 | # Import standard library modules
2 |
3 |
4 | # Import installed packages
5 | from raven import Client
6 |
7 | # Import app code
8 | # Absolute imports for Hydrogen (Jupyter Kernel) compatibility
9 | from app.core.config import SENTRY_DSN
10 | from app.core.database import db_session
11 | from app.core.celery_app import celery_app
12 |
13 | from app.models.user import User
14 |
15 | client_sentry = Client(SENTRY_DSN)
16 |
17 |
18 | @celery_app.task(acks_late=True)
19 | def test_task():
20 | return 'test task'
21 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/tsconfig.json:
--------------------------------------------------------------------------------
1 | {
2 | "compileOnSave": false,
3 | "compilerOptions": {
4 | "outDir": "./dist/out-tsc",
5 | "sourceMap": true,
6 | "declaration": false,
7 | "moduleResolution": "node",
8 | "emitDecoratorMetadata": true,
9 | "experimentalDecorators": true,
10 | "target": "es5",
11 | "typeRoots": [
12 | "node_modules/@types"
13 | ],
14 | "lib": [
15 | "es2017",
16 | "dom"
17 | ]
18 | },
19 | "angularCompilerOptions": {
20 | "preserveWhitespaces": false
21 | },
22 | }
23 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/Dockerfile-celery-worker:
--------------------------------------------------------------------------------
1 | FROM python:3.6
2 |
3 | RUN pip install psycopg2 raven pyyaml celery==4.1.0 SQLAlchemy==1.1.13 passlib[bcrypt]
4 |
5 | # For development, Jupyter remote kernel, Hydrogen
6 | # Using inside the container:
7 | # jupyter notebook --ip=0.0.0.0 --allow-root
8 | ARG env=prod
9 | RUN bash -c "if [ $env == 'dev' ] ; then pip install jupyter ; fi"
10 | EXPOSE 8888
11 |
12 | ENV C_FORCE_ROOT=1
13 |
14 | COPY ./app /app
15 | WORKDIR /app
16 |
17 | ENV PYTHONPATH=/app
18 |
19 | CMD celery worker -A app.worker -l info -Q main-queue -c 1
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | sudo: required
2 |
3 | language: python
4 |
5 | install:
6 | - pip install cookiecutter
7 |
8 | services:
9 | - docker
10 |
11 | before_script:
12 | - cookiecutter --config-file ./testing-config.yml --no-input -f ./
13 | - cd ./testing-project
14 | - docker-compose -f docker-compose.test.yml build
15 | - docker-compose -f docker-compose.test.yml up -d
16 | - sleep 20
17 | - docker-compose -f docker-compose.test.yml exec -T backend bash -c 'alembic revision --autogenerate -m "Testing" && alembic upgrade head'
18 |
19 | script:
20 | - docker-compose -f docker-compose.test.yml exec -T backend-rest-tests pytest
21 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM tiangolo/uwsgi-nginx-flask:python3.6
2 |
3 | RUN pip install --upgrade pip
4 | RUN pip install flask flask-cors psycopg2 raven[flask] celery==4.1.0 passlib[bcrypt] SQLAlchemy==1.1.13 flask-apispec flask-jwt-extended alembic
5 |
6 | # For development, Jupyter remote kernel, Hydrogen
7 | # Using inside the container:
8 | # jupyter notebook --ip=0.0.0.0 --allow-root
9 | ARG env=prod
10 | RUN bash -c "if [ $env == 'dev' ] ; then pip install jupyter ; fi"
11 | EXPOSE 8888
12 |
13 | COPY ./app /app
14 | WORKDIR /app/
15 |
16 | ENV STATIC_PATH /app/app/static
17 | ENV STATIC_INDEX 1
18 |
19 | EXPOSE 80
20 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/alembic/script.py.mako:
--------------------------------------------------------------------------------
1 | """${message}
2 |
3 | Revision ID: ${up_revision}
4 | Revises: ${down_revision | comma,n}
5 | Create Date: ${create_date}
6 |
7 | """
8 | from alembic import op
9 | import sqlalchemy as sa
10 | ${imports if imports else ""}
11 |
12 | # revision identifiers, used by Alembic.
13 | revision = ${repr(up_revision)}
14 | down_revision = ${repr(down_revision)}
15 | branch_labels = ${repr(branch_labels)}
16 | depends_on = ${repr(depends_on)}
17 |
18 |
19 | def upgrade():
20 | ${upgrades if upgrades else "pass"}
21 |
22 |
23 | def downgrade():
24 | ${downgrades if downgrades else "pass"}
25 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/.gitignore:
--------------------------------------------------------------------------------
1 | # See http://help.github.com/ignore-files/ for more about ignoring files.
2 |
3 | # compiled output
4 | /dist
5 | /tmp
6 | /out-tsc
7 |
8 | # dependencies
9 | /node_modules
10 |
11 | # IDEs and editors
12 | /.idea
13 | .project
14 | .classpath
15 | .c9/
16 | *.launch
17 | .settings/
18 | *.sublime-workspace
19 |
20 | # IDE - VSCode
21 | .vscode/*
22 | !.vscode/settings.json
23 | !.vscode/tasks.json
24 | !.vscode/launch.json
25 | !.vscode/extensions.json
26 |
27 | # misc
28 | /.sass-cache
29 | /connect.lock
30 | /coverage
31 | /libpeerconnection.log
32 | npm-debug.log
33 | testem.log
34 | /typings
35 |
36 | # e2e
37 | /e2e/*.js
38 | /e2e/*.map
39 |
40 | # System Files
41 | .DS_Store
42 | Thumbs.db
43 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/schemas/user.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 |
5 | # Import installed packages
6 | from marshmallow import fields
7 | # Import app code
8 | from .base import BaseSchema
9 | from .group import GroupSchema
10 |
11 |
12 | class UserSchema(BaseSchema):
13 | # Own properties
14 | id = fields.Int()
15 | created_at = fields.DateTime()
16 | first_name = fields.Str()
17 | last_name = fields.Str()
18 | email = fields.Email()
19 | is_active = fields.Bool()
20 | is_superuser = fields.Bool()
21 | group = fields.Nested(GroupSchema, only=('id', 'name'))
22 | groups_admin = fields.Nested(
23 | GroupSchema, only=('id', 'name'), many=True)
24 |
--------------------------------------------------------------------------------
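A minimal serialization sketch, assuming marshmallow 2.x (where dump() returns a MarshalResult) and an already-loaded User instance named user_obj:

    from app.schemas.user import UserSchema

    result = UserSchema().dump(user_obj)
    payload = result.data  # dict with id, email, is_active, nested group, ...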
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/conftest.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import pytest
3 |
4 | from app.core import config
5 |
6 |
7 | @pytest.fixture(scope='module')
8 | def server_api():
9 | server_name = f'http://{config.SERVER_NAME}'
10 | return server_name
11 |
12 |
13 | @pytest.fixture(scope='module')
14 | def superuser_token_headers(server_api):
15 | login_data = {
16 | 'username': config.FIRST_SUPERUSER,
17 | 'password': config.FIRST_SUPERUSER_PASSWORD
18 | }
19 | r = requests.post(
20 | f'{server_api}{config.API_V1_STR}/login/access-token', data=login_data)
21 | tokens = r.json()
22 | a_token = tokens['access_token']
23 | headers = {'Authorization': f'Bearer {a_token}'}
24 | return headers
25 |
--------------------------------------------------------------------------------
/test.sh:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env bash
2 |
3 | rm -rf ./testing-project
4 |
5 | cookiecutter --config-file ./testing-config.yml --no-input -f ./
6 |
7 | cd ./testing-project
8 |
9 | docker-compose -f docker-compose.test.yml build
10 | docker-compose -f docker-compose.test.yml down -v --remove-orphans # Remove possibly previous broken stacks left hanging after an error
11 | docker-compose -f docker-compose.test.yml up -d
12 | sleep 20; # Give some time for the DB and prestart script to finish
13 | docker-compose -f docker-compose.test.yml exec -T backend bash -c 'alembic revision --autogenerate -m "Testing" && alembic upgrade head'
14 | docker-compose -f docker-compose.test.yml exec -T backend-rest-tests pytest
15 | docker-compose -f docker-compose.test.yml down -v --remove-orphans
16 |
17 | cd ../
18 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/schemas/group.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 |
5 | # Import installed packages
6 | from marshmallow import fields
7 | # Import app code
8 | from .base import BaseSchema
9 |
10 |
11 | class GroupSchema(BaseSchema):
12 | # Own properties
13 | id = fields.Int()
14 | created_at = fields.DateTime()
15 | name = fields.Str()
16 | users = fields.Nested(
17 | 'UserSchema',
18 | only=[
19 | 'id', 'first_name', 'last_name', 'email', 'is_active', 'is_superuser'
20 | ], many=True)
21 | users_admin = fields.Nested(
22 | 'UserSchema',
23 | only=[
24 | 'id', 'first_name', 'last_name', 'email', 'is_active', 'is_superuser'
25 | ], many=True)
26 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/app_setup.py:
--------------------------------------------------------------------------------
1 | # Import standard library packages
2 |
3 | # Import installed packages
4 | from raven.contrib.flask import Sentry
5 | # Import app code
6 | from app.main import app
7 | from .database import db_session, init_db
8 | from . import config
9 | # Set up CORS
10 | from . import cors # noqa
11 |
12 | from .jwt import jwt # noqa
13 | from . import errors
14 |
15 | from ..api.api_v1 import api as api_v1 # noqa
16 |
17 | app.config['SECRET_KEY'] = config.SECRET_KEY
18 | app.config['SERVER_NAME'] = config.SERVER_NAME
19 |
20 | sentry = Sentry(app, dsn=config.SENTRY_DSN)
21 |
22 |
23 | @app.teardown_appcontext
24 | def shutdown_db_session(exception=None):
25 | db_session.remove()
26 |
27 |
28 | @app.before_first_request
29 | def setup():
30 | init_db()
31 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/models/group.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 | from datetime import datetime
5 | # Import installed packages
6 | from sqlalchemy import (Column, Integer, DateTime, String)
7 | from sqlalchemy.orm import relationship
8 | # Import app code
9 | from ..core.database import Base
10 | from .base_relations import groups_admin_users
11 |
12 |
13 | class Group(Base):
14 | # Own properties
15 | id = Column(Integer, primary_key=True, index=True)
16 | created_at = Column(DateTime, default=datetime.utcnow(), index=True)
17 | name = Column(String, index=True)
18 | # Relationships
19 | users = relationship('User', back_populates='group')
20 | users_admin = relationship(
21 | 'User', secondary=groups_admin_users, back_populates='groups_admin')
22 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/protractor.conf.js:
--------------------------------------------------------------------------------
1 | // Protractor configuration file, see link for more information
2 | // https://github.com/angular/protractor/blob/master/lib/config.ts
3 |
4 | const { SpecReporter } = require('jasmine-spec-reporter');
5 |
6 | exports.config = {
7 | allScriptsTimeout: 11000,
8 | specs: [
9 | './e2e/**/*.e2e-spec.ts'
10 | ],
11 | capabilities: {
12 | 'browserName': 'chrome'
13 | },
14 | directConnect: true,
15 | baseUrl: 'http://localhost:4200/',
16 | framework: 'jasmine',
17 | jasmineNodeOpts: {
18 | showColors: true,
19 | defaultTimeoutInterval: 30000,
20 | print: function() {}
21 | },
22 | onPrepare() {
23 | require('ts-node').register({
24 | project: 'e2e/tsconfig.e2e.json'
25 | });
26 | jasmine.getEnv().addReporter(new SpecReporter({ spec: { displayStacktrace: true } }));
27 | }
28 | };
29 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/api_docs.py:
--------------------------------------------------------------------------------
1 | # Import installed packages
2 | from apispec import APISpec
3 | from flask_apispec import FlaskApiSpec
4 |
5 | # Import app code
6 | from ...main import app
7 | from ...core import config
8 |
9 | security_definitions = {
10 | 'bearer': {
11 | 'type': 'oauth2',
12 | 'flow': 'password',
13 | 'tokenUrl': f'{config.API_V1_STR}/login/access-token',
14 | 'refreshUrl': f'{config.API_V1_STR}/login/refresh-token',
15 | }
16 | }
17 |
18 | app.config.update({
19 | 'APISPEC_SPEC':
20 | APISpec(
21 | title='{{cookiecutter.project_name}}',
22 | version='v1',
23 | plugins=('apispec.ext.marshmallow', ),
24 | securityDefinitions=security_definitions),
25 | 'APISPEC_SWAGGER_URL':
26 | f'{config.API_V1_STR}/swagger/'
27 | })
28 | docs = FlaskApiSpec(app)
29 |
30 | security_params = [{'bearer': []}]
31 |
32 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Senseta
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/app/app.component.spec.ts:
--------------------------------------------------------------------------------
1 | import { TestBed, async } from '@angular/core/testing';
2 | import { AppComponent } from './app.component';
3 | describe('AppComponent', () => {
4 | beforeEach(async(() => {
5 | TestBed.configureTestingModule({
6 | declarations: [
7 | AppComponent
8 | ],
9 | }).compileComponents();
10 | }));
11 | it('should create the app', async(() => {
12 | const fixture = TestBed.createComponent(AppComponent);
13 | const app = fixture.debugElement.componentInstance;
14 | expect(app).toBeTruthy();
15 | }));
16 | it(`should have as title 'app'`, async(() => {
17 | const fixture = TestBed.createComponent(AppComponent);
18 | const app = fixture.debugElement.componentInstance;
19 | expect(app.title).toEqual('app');
20 | }));
21 | it('should render title in a h1 tag', async(() => {
22 | const fixture = TestBed.createComponent(AppComponent);
23 | fixture.detectChanges();
24 | const compiled = fixture.debugElement.nativeElement;
25 | expect(compiled.querySelector('h1').textContent).toContain('Welcome to app!');
26 | }));
27 | });
28 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/README.md:
--------------------------------------------------------------------------------
1 | # Frontend
2 |
3 | This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 1.5.3.
4 |
5 | ## Development server
6 |
7 | Run `ng serve` for a dev server. Navigate to `http://localhost:4200/`. The app will automatically reload if you change any of the source files.
8 |
9 | ## Code scaffolding
10 |
11 | Run `ng generate component component-name` to generate a new component. You can also use `ng generate directive|pipe|service|class|guard|interface|enum|module`.
12 |
13 | ## Build
14 |
15 | Run `ng build` to build the project. The build artifacts will be stored in the `dist/` directory. Use the `-prod` flag for a production build.
16 |
17 | ## Running unit tests
18 |
19 | Run `ng test` to execute the unit tests via [Karma](https://karma-runner.github.io).
20 |
21 | ## Running end-to-end tests
22 |
23 | Run `ng e2e` to execute the end-to-end tests via [Protractor](http://www.protractortest.org/).
24 |
25 | ## Further help
26 |
27 | To get more help on the Angular CLI use `ng help` or go check out the [Angular CLI README](https://github.com/angular/angular-cli/blob/master/README.md).
28 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/test.ts:
--------------------------------------------------------------------------------
1 | // This file is required by karma.conf.js and loads recursively all the .spec and framework files
2 |
3 | import 'zone.js/dist/long-stack-trace-zone';
4 | import 'zone.js/dist/proxy.js';
5 | import 'zone.js/dist/sync-test';
6 | import 'zone.js/dist/jasmine-patch';
7 | import 'zone.js/dist/async-test';
8 | import 'zone.js/dist/fake-async-test';
9 | import { getTestBed } from '@angular/core/testing';
10 | import {
11 | BrowserDynamicTestingModule,
12 | platformBrowserDynamicTesting
13 | } from '@angular/platform-browser-dynamic/testing';
14 |
15 | // Unfortunately there's no typing for the `__karma__` variable. Just declare it as any.
16 | declare const __karma__: any;
17 | declare const require: any;
18 |
19 | // Prevent Karma from running prematurely.
20 | __karma__.loaded = function () {};
21 |
22 | // First, initialize the Angular testing environment.
23 | getTestBed().initTestEnvironment(
24 | BrowserDynamicTestingModule,
25 | platformBrowserDynamicTesting()
26 | );
27 | // Then we find all the tests.
28 | const context = require.context('./', true, /\.spec\.ts$/);
29 | // And load the modules.
30 | context.keys().map(context);
31 | // Finally, start Karma to run the tests.
32 | __karma__.start();
33 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/app/app.component.html:
--------------------------------------------------------------------------------
1 | <!--The content below is only a placeholder and can be replaced.-->
2 | <div style="text-align:center">
3 | <h1>
4 | Welcome to {{ title }}!
5 | </h1>
6 | <img width="300" alt="Angular Logo" src="data:image/svg+xml;base64,...">
7 | </div>
8 | <h2>Here are some links to help you start: </h2>
9 | <ul>
10 | <li>
11 | <h2><a target="_blank" rel="noopener" href="https://angular.io/tutorial">Tour of Heroes</a></h2>
12 | </li>
13 | <li>
14 | <h2><a target="_blank" rel="noopener" href="https://github.com/angular/angular-cli/wiki">CLI Documentation</a></h2>
15 | </li>
16 | <li>
17 | <h2><a target="_blank" rel="noopener" href="https://blog.angular.io/">Angular blog</a></h2>
18 | </li>
19 | </ul>
20 |
21 |
22 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/errors.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from flask import jsonify
4 | from ..main import app
5 |
6 |
7 | # 400 Bad Request
8 | @app.errorhandler(400)
9 | def custom400(error):
10 | return jsonify({'msg': error.description}), 400
11 |
12 |
13 | # 401 Unauthorized
14 | @app.errorhandler(401)
15 | def custom401(error):
16 | return jsonify({'msg': error.description}), 401
17 |
18 |
19 | # 403 Forbidden
20 | @app.errorhandler(403)
21 | def custom403(error):
22 | return jsonify({'msg': error.description}), 403
23 |
24 |
25 | # 404 Not Found
26 | @app.errorhandler(404)
27 | def custom404(error):
28 | return jsonify({'msg': error.description}), 404
29 |
30 |
31 | # 405 Method Not Allowed
32 | @app.errorhandler(405)
33 | def custom405(error):
34 | return jsonify({
35 | 'msg': error.description
36 | }), 405
37 |
38 |
39 | # 406 Not Acceptable
40 | @app.errorhandler(406)
41 | def custom406(error):
42 | return jsonify({'msg': error.description}), 406
43 |
44 |
45 | # 422 Unprocessable Entity, for flask-apispec, webargs
46 | @app.errorhandler(422)
47 | def custom422(error):
48 | return jsonify({
49 | 'msg': error.description,
50 | 'errors': error.exc.messages
51 | }), 422
52 |
53 |
54 | # 500 Internal Server Error
55 | @app.errorhandler(500)
56 | def custom500(error):
57 | return jsonify({'msg': error.description}), 500
58 |
--------------------------------------------------------------------------------
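These handlers key off error.description, so an endpoint can trigger them with flask.abort; a minimal sketch with a hypothetical message:

    from flask import abort

    # Returned as {'msg': 'A group with this name already exists'} with status 400 via custom400
    abort(400, 'A group with this name already exists')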
/{{cookiecutter.project_slug}}/backend/app/app/models/user.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library packages
4 | from datetime import datetime
5 | # Import installed packages
6 | from sqlalchemy import Column, Integer, DateTime, String, Boolean, ForeignKey
7 | from sqlalchemy.orm import relationship
8 | # Import app code
9 | from ..core.database import Base
10 | from .base_relations import groups_admin_users
11 |
12 | # Typings, for autocompletion (VS Code with Python plug-in)
13 | from . import group as group_model # noqa
14 | from typing import List # noqa
15 |
16 |
17 | class User(Base):
18 | # Own properties
19 | id = Column(Integer, primary_key=True, index=True)
20 | created_at = Column(DateTime, default=datetime.utcnow(), index=True)
21 | first_name = Column(String, index=True)
22 | last_name = Column(String, index=True)
23 | email = Column(String, unique=True, index=True)
24 | password = Column(String)
25 | is_active = Column(Boolean(), default=True)
26 | is_superuser = Column(Boolean(), default=False)
27 | # Relationships
28 | group_id = Column(Integer, ForeignKey('group.id'), index=True)
29 | group = relationship(
30 | 'Group', back_populates='users') # type: group_model.Group
31 | # If this user is admin of one or more groups, they will be here
32 | groups_admin = relationship(
33 | 'Group', secondary=groups_admin_users,
34 | back_populates='users_admin') # type: List[group_model.Group]
35 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/utils/user.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from .faker import fake
4 |
5 | from app.core import config
6 |
7 |
8 | def random_user(group_id=1):
9 | first_name = fake.first_name()
10 | last_name = fake.last_name()
11 | email = fake.email()
12 | app_id = f'{first_name}.{last_name}'
13 | fake_user = {
14 | 'first_name': first_name,
15 | 'last_name': last_name,
16 | 'email': email,
17 | 'app_id': app_id,
18 | 'group_id': group_id,
19 | 'password': 'passwordtest'
20 | }
21 | return fake_user
22 |
23 |
24 | def user_authentication_headers(server_api, email, password):
25 | data = {"username": email, "password": password}
26 |
27 | r = requests.post(
28 | f'{server_api}{config.API_V1_STR}/login/access-token', json=data)
29 |
30 | response = r.json()
31 | print(response)
32 | auth_token = response['access_token']
33 | headers = {'Authorization': f'Bearer {auth_token}'}
34 | return headers
35 |
36 |
37 | def create_user(server_api, superuser_token_headers, user_data):
38 | r = requests.post(
39 | f'{server_api}{config.API_V1_STR}/users/',
40 | headers=superuser_token_headers,
41 | json=user_data)
42 | created_user = r.json()
43 | return created_user
44 |
45 |
46 | def create_random_user(server_api, superuser_token_headers):
47 | user_data = random_user()
48 | return create_user(server_api, superuser_token_headers, user_data)
49 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/Dockerfile:
--------------------------------------------------------------------------------
1 | # Stage 0, based on Node.js, to build and compile Angular
2 | FROM node:9.2 as building
3 |
4 | # Add Chrome dependencies, to run Puppeteer (headless Chrome) for Angular / Karma tests
5 | # Taken from: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
6 | # Install latest chrome dev package.
7 | # Note: this installs the necessary libs to make the bundled version of Chromium that Puppeteer
8 | # installs, work.
9 | RUN apt-get update && apt-get install -y wget --no-install-recommends \
10 | && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
11 | && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
12 | && apt-get update \
13 | && apt-get install -y google-chrome-unstable \
14 | --no-install-recommends \
15 | && rm -rf /var/lib/apt/lists/* \
16 | && apt-get purge --auto-remove -y curl \
17 | && rm -rf /src/*.deb
18 |
19 |
20 | WORKDIR /app
21 |
22 | COPY package*.json /app/
23 |
24 | RUN npm install
25 |
26 | COPY ./ /app/
27 |
28 | ARG env=prod
29 |
30 | RUN npm run test -- --single-run --code-coverage --browsers ChromeHeadlessNoSandbox
31 |
32 | RUN npm run build -- --prod --environment $env
33 |
34 |
35 | # Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
36 | FROM nginx:1.13
37 |
38 | COPY --from=building /app/dist/ /usr/share/nginx/html
39 |
40 | COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf
41 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/karma.conf.js:
--------------------------------------------------------------------------------
1 | // Karma configuration file, see link for more information
2 | // https://karma-runner.github.io/0.13/config/configuration-file.html
3 |
4 | process.env.CHROME_BIN = require('puppeteer').executablePath()
5 |
6 | module.exports = function (config) {
7 | config.set({
8 | basePath: '',
9 | frameworks: ['jasmine', '@angular/cli'],
10 | plugins: [
11 | require('karma-jasmine'),
12 | require('karma-chrome-launcher'),
13 | require('karma-jasmine-html-reporter'),
14 | require('karma-mocha-reporter'),
15 | require('karma-coverage-istanbul-reporter'),
16 | require('@angular/cli/plugins/karma')
17 | ],
18 | client:{
19 | clearContext: false // leave Jasmine Spec Runner output visible in browser
20 | },
21 | coverageIstanbulReporter: {
22 | reports: [ 'html', 'lcovonly', 'text-summary' ],
23 | fixWebpackSourcePaths: true
24 | },
25 | angularCli: {
26 | environment: 'dev'
27 | },
28 | reporters: ['mocha', 'kjhtml'],
29 | port: 9876,
30 | colors: true,
31 | logLevel: config.LOG_INFO,
32 | autoWatch: true,
33 | browsers: ['Chrome', 'ChromeHeadlessNoSandbox'],
34 | customLaunchers: {
35 | ChromeHeadlessNoSandbox: {
36 | base: 'ChromeHeadless',
37 | flags: [
38 | '--no-sandbox',
39 | // Without a remote debugging port, Google Chrome exits immediately.
40 | '--remote-debugging-port=9222',
41 | ]
42 | }
43 | },
44 | singleRun: false
45 | });
46 | };
47 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/.angular-cli.json:
--------------------------------------------------------------------------------
1 | {
2 | "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
3 | "project": {
4 | "name": "frontend"
5 | },
6 | "apps": [
7 | {
8 | "root": "src",
9 | "outDir": "dist",
10 | "assets": [
11 | "assets",
12 | "favicon.ico"
13 | ],
14 | "index": "index.html",
15 | "main": "main.ts",
16 | "polyfills": "polyfills.ts",
17 | "test": "test.ts",
18 | "tsconfig": "tsconfig.app.json",
19 | "testTsconfig": "tsconfig.spec.json",
20 | "prefix": "app",
21 | "styles": [
22 | "styles.scss"
23 | ],
24 | "scripts": [],
25 | "environmentSource": "environments/environment.ts",
26 | "environments": {
27 | "dev": "environments/environment.ts",
28 | "stag": "environments/environment.stag.ts",
29 | "prod": "environments/environment.prod.ts"
30 | }
31 | }
32 | ],
33 | "e2e": {
34 | "protractor": {
35 | "config": "./protractor.conf.js"
36 | }
37 | },
38 | "lint": [
39 | {
40 | "project": "src/tsconfig.app.json",
41 | "exclude": "**/node_modules/**"
42 | },
43 | {
44 | "project": "src/tsconfig.spec.json",
45 | "exclude": "**/node_modules/**"
46 | },
47 | {
48 | "project": "e2e/tsconfig.e2e.json",
49 | "exclude": "**/node_modules/**"
50 | }
51 | ],
52 | "test": {
53 | "karma": {
54 | "config": "./karma.conf.js"
55 | }
56 | },
57 | "defaults": {
58 | "styleExt": "scss",
59 | "component": {}
60 | }
61 | }
62 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/token/test_token.py:
--------------------------------------------------------------------------------
1 | # Import installed packages
2 | import requests
3 | # Import app code
4 | from app.core import config
5 |
6 |
7 | def test_get_access_token(server_api):
8 | login_data = {
9 | 'username': config.FIRST_SUPERUSER,
10 | 'password': config.FIRST_SUPERUSER_PASSWORD
11 | }
12 | r = requests.post(
13 | f'{server_api}{config.API_V1_STR}/login/access-token', data=login_data)
14 | tokens = r.json()
15 | assert r.status_code == 200
16 | assert 'access_token' in tokens
17 | assert 'refresh_token' in tokens
18 | assert tokens['access_token']
19 | assert tokens['refresh_token']
20 |
21 |
22 | def test_use_access_token(server_api, superuser_token_headers):
23 | r = requests.post(
24 | f'{server_api}{config.API_V1_STR}/login/test-token',
25 | headers=superuser_token_headers,
26 | json={'test': 'test'})
27 | result = r.json()
28 | assert r.status_code == 200
29 | assert 'id' in result
30 |
31 |
32 | def test_refresh_token(server_api):
33 | login_data = {
34 | 'username': config.FIRST_SUPERUSER,
35 | 'password': config.FIRST_SUPERUSER_PASSWORD
36 | }
37 | r = requests.post(
38 | f'{server_api}{config.API_V1_STR}/login/access-token', data=login_data)
39 | tokens = r.json()
40 | refresh_token = tokens['refresh_token']
41 | headers = {
42 | 'Authorization': f'Bearer {refresh_token}'
43 | }
44 | r = requests.post(
45 | f'{server_api}{config.API_V1_STR}/login/refresh-token',
46 | headers=headers)
47 | result = r.json()
48 | assert r.status_code == 200
49 | assert 'access_token' in result
50 |
--------------------------------------------------------------------------------
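Note: the token tests above (and the group tests below) rely on the `server_api` and `superuser_token_headers` pytest fixtures defined in the neighbouring rest_tests/api/api_v1/conftest.py. As a rough, hypothetical sketch only (the fixture names come from the tests, but the bodies and the SERVER_NAME handling here are assumptions, not the template's actual conftest), they could look like:

    import os

    import pytest
    import requests

    from app.core import config


    @pytest.fixture(scope='session')
    def server_api():
        # Base URL of the backend container under test (assumed to come from SERVER_NAME)
        server_name = os.getenv('SERVER_NAME', 'backend')
        return f'http://{server_name}'


    @pytest.fixture(scope='session')
    def superuser_token_headers(server_api):
        # Log in as the first superuser once and reuse the access token across tests
        login_data = {
            'username': config.FIRST_SUPERUSER,
            'password': config.FIRST_SUPERUSER_PASSWORD,
        }
        r = requests.post(
            f'{server_api}{config.API_V1_STR}/login/access-token', data=login_data)
        access_token = r.json()['access_token']
        return {'Authorization': f'Bearer {access_token}'}
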
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/group/test_create_groups.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from app.rest_tests.utils.user import random_user, user_authentication_headers
4 | from app.rest_tests.utils.group import random_group
5 |
6 | from app.core import config
7 |
8 |
9 | def test_create_group_by_superuser(server_api, superuser_token_headers):
10 |
11 | new_group = random_group()
12 |
13 | r = requests.post(
14 | f'{server_api}{config.API_V1_STR}/groups/',
15 | headers=superuser_token_headers,
16 | data=new_group)
17 |
18 | expected_fields = ['created_at', 'id', 'name', 'users', 'users_admin']
19 | created_group = r.json()
20 |
21 | for expected_field in expected_fields:
22 | assert expected_field in created_group
23 |
24 | assert r.status_code == 200
25 |
26 | assert created_group['users'] == []
27 | assert created_group['users_admin'] == []
28 | assert created_group['name'] == new_group['name']
29 |
30 |
31 | def test_create_group_by_normal_user(server_api, superuser_token_headers):
32 | new_user = random_user()
33 | r = requests.post(
34 | f'{server_api}{config.API_V1_STR}/users/',
35 | headers=superuser_token_headers,
36 | data=new_user)
37 |
38 | created_user = r.json()
39 |
40 | if r.status_code == 200:
41 |
42 | email, password = new_user['email'], new_user['password']
43 | auth = user_authentication_headers(server_api, email, password)
44 |
45 | new_group = random_group()
46 | r = requests.post(
47 | f'{server_api}{config.API_V1_STR}/groups/',
48 | headers=auth,
49 | data=new_group)
50 |
51 | assert r.status_code == 400
52 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/core/database.py:
--------------------------------------------------------------------------------
1 | from sqlalchemy import create_engine
2 | from sqlalchemy.orm import scoped_session, sessionmaker
3 | from sqlalchemy.ext.declarative import declarative_base, declared_attr
4 |
5 | from . import config
6 | from .security import pwd_context
7 |
8 | engine = create_engine(
9 | f'postgresql://postgres:{config.POSTGRES_PASSWORD}@db/app',
10 | convert_unicode=True)
11 | db_session = scoped_session(
12 | sessionmaker(autocommit=False, autoflush=False, bind=engine))
13 |
14 |
15 | class Base(object):
16 | @declared_attr
17 | def __tablename__(cls):
18 | return cls.__name__.lower()
19 |
20 |
21 | Base = declarative_base(cls=Base)
22 | Base.query = db_session.query_property()
23 |
24 | # Import all the models, so that Base has them before being
25 | # imported by Alembic or used by init_db()
26 | from app.models.user import User
27 | from app.models.group import Group
28 |
29 | def init_db():
30 | # Tables should be created with Alembic migrations
31 |     # But if you don't want to use migrations, create the tables by uncommenting the next line
32 | # Base.metadata.create_all(bind=engine)
33 |
34 | group = db_session.query(Group).filter(Group.name == 'default').first()
35 | if not group:
36 | group = Group(name='default')
37 | db_session.add(group)
38 |
39 | user = db_session.query(User).filter(
40 | User.email == config.FIRST_SUPERUSER).first()
41 | if not user:
42 | user = User(
43 | email=config.FIRST_SUPERUSER,
44 | password=pwd_context.hash(config.FIRST_SUPERUSER_PASSWORD),
45 | group=group,
46 | is_superuser=True)
47 | user.groups_admin.append(group)
48 |
49 | db_session.add(user)
50 | db_session.commit()
51 |
--------------------------------------------------------------------------------
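Note: init_db() above only seeds the default group and the first superuser when they are missing, so it is safe to run on every container start (for example from the prestart.sh script in the tree). A minimal sketch of such a call, assuming migrations have already run and the database is reachable:

    # Seed initial data; equivalent to:
    #   python -c "from app.core.database import init_db; init_db()"
    from app.core.database import init_db

    if __name__ == '__main__':
        init_db()
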
/cookiecutter.json:
--------------------------------------------------------------------------------
1 | {
2 | "project_name": "Base Project",
3 | "project_slug": "{{ cookiecutter.project_name|lower|replace(' ', '-') }}",
4 | "domain_main": "{{cookiecutter.project_slug}}.com",
5 | "domain_staging": "stag.{{cookiecutter.domain_main}}",
6 | "domain_branch": "branch.{{cookiecutter.domain_main}}",
7 | "domain_dev": "dev.{{cookiecutter.domain_main}}",
8 |
9 | "docker_swarm_stack_name_main": "{{cookiecutter.domain_main|replace('.', '-')}}",
10 | "docker_swarm_stack_name_staging": "{{cookiecutter.domain_staging|replace('.', '-')}}",
11 | "docker_swarm_stack_name_branch": "{{cookiecutter.domain_branch|replace('.', '-')}}",
12 |
13 | "secret_key": "changethis",
14 | "first_superuser": "admin@{{cookiecutter.domain_main}}",
15 | "first_superuser_password": "changethis",
16 |
17 |
18 | "postgres_password": "changethis",
19 | "pgadmin_default_user": "{{cookiecutter.first_superuser}}",
20 | "pgadmin_default_user_password": "changethis",
21 |
22 | "traefik_constraint_tag": "{{cookiecutter.domain_main}}",
23 | "traefik_constraint_tag_staging": "{{cookiecutter.domain_staging}}",
24 | "traefik_constraint_tag_branch": "{{cookiecutter.domain_branch}}",
25 | "traefik_public_network": "traefik-public",
26 | "traefik_public_constraint_tag": "traefik-public",
27 |
28 | "flower_auth": "root:changethis",
29 |
30 | "sentry_dsn": "",
31 |
32 | "docker_image_prefix": "",
33 |
34 | "docker_image_backend": "{{cookiecutter.docker_image_prefix}}backend",
35 | "docker_image_celeryworker": "{{cookiecutter.docker_image_prefix}}celeryworker",
36 | "docker_image_frontend": "{{cookiecutter.docker_image_prefix}}frontend",
37 |
38 | "_copy_without_render": [
39 | "frontend/src/**/*.html",
40 | "frontend/node_modules/*"
41 | ]
42 | }
--------------------------------------------------------------------------------
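Note: cookiecutter.json above defines the prompts and derived defaults used when this template is rendered. A minimal sketch of rendering it non-interactively through cookiecutter's Python API (the template path and the overridden values are illustrative assumptions):

    from cookiecutter.main import cookiecutter

    # Render the template without prompting, overriding a couple of the
    # defaults declared in cookiecutter.json.
    cookiecutter(
        '.',                      # path or git URL of this template
        no_input=True,
        extra_context={
            'project_name': 'My Project',
            'domain_main': 'myproject.example.com',
        },
    )
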
/{{cookiecutter.project_slug}}/docker-compose.test.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | db:
4 | image: 'postgres:10'
5 | environment:
6 | POSTGRES_DB: app
7 | POSTGRES_PASSWORD: {{cookiecutter.postgres_password}}
8 | PGDATA: /var/lib/postgresql/data/pgdata
9 | volumes:
10 | - 'app-db-data:/var/lib/postgresql/data/pgdata'
11 | queue:
12 | image: 'rabbitmq:3'
13 | backend:
14 | build:
15 | context: ./backend
16 | depends_on:
17 | - db
18 | environment:
19 | - SERVER_NAME=backend
20 | - SECRET_KEY={{cookiecutter.secret_key}}
21 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
22 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
23 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
24 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
25 | labels:
26 | - "traefik.frontend.rule=PathPrefix:/api"
27 | - "traefik.enable=true"
28 | - "traefik.port=80"
29 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
30 | celeryworker:
31 | build:
32 | context: ./backend
33 | dockerfile: Dockerfile-celery-worker
34 | depends_on:
35 | - db
36 | - queue
37 | environment:
38 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
39 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
40 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
41 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
42 | backend-rest-tests:
43 | build:
44 | context: ./backend
45 | dockerfile: Dockerfile-rest-tests
46 | command: bash -c "while true; do sleep 1; done"
47 | environment:
48 | - SERVER_NAME=backend
49 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
50 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
51 | # - SEED=0
52 | volumes:
53 | app-db-data: {}
54 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/alembic.ini:
--------------------------------------------------------------------------------
1 | # A generic, single database configuration.
2 |
3 | [alembic]
4 | # path to migration scripts
5 | script_location = alembic
6 |
7 | # template used to generate migration files
8 | # file_template = %%(rev)s_%%(slug)s
9 |
10 | # timezone to use when rendering the date
11 | # within the migration file as well as the filename.
12 | # string value is passed to dateutil.tz.gettz()
13 | # leave blank for localtime
14 | # timezone =
15 |
16 | # max length of characters to apply to the
17 | # "slug" field
18 | #truncate_slug_length = 40
19 |
20 | # set to 'true' to run the environment during
21 | # the 'revision' command, regardless of autogenerate
22 | # revision_environment = false
23 |
24 | # set to 'true' to allow .pyc and .pyo files without
25 | # a source .py file to be detected as revisions in the
26 | # versions/ directory
27 | # sourceless = false
28 |
29 | # version location specification; this defaults
30 | # to alembic/versions. When using multiple version
31 | # directories, initial revisions must be specified with --version-path
32 | # version_locations = %(here)s/bar %(here)s/bat alembic/versions
33 |
34 | # the output encoding used when revision files
35 | # are written from script.py.mako
36 | # output_encoding = utf-8
37 |
38 | sqlalchemy.url = postgresql://postgres:{{cookiecutter.postgres_password}}@db/app
39 |
40 |
41 | # Logging configuration
42 | [loggers]
43 | keys = root,sqlalchemy,alembic
44 |
45 | [handlers]
46 | keys = console
47 |
48 | [formatters]
49 | keys = generic
50 |
51 | [logger_root]
52 | level = WARN
53 | handlers = console
54 | qualname =
55 |
56 | [logger_sqlalchemy]
57 | level = WARN
58 | handlers =
59 | qualname = sqlalchemy.engine
60 |
61 | [logger_alembic]
62 | level = INFO
63 | handlers =
64 | qualname = alembic
65 |
66 | [handler_console]
67 | class = StreamHandler
68 | args = (sys.stderr,)
69 | level = NOTSET
70 | formatter = generic
71 |
72 | [formatter_generic]
73 | format = %(levelname)-5.5s [%(name)s] %(message)s
74 | datefmt = %H:%M:%S
75 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/utils/group.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | import requests
4 |
5 | from app.rest_tests.utils.user import random_user
6 | from app.rest_tests.utils.user import user_authentication_headers
7 | from app.rest_tests.utils.faker import fake
8 |
9 | from app.core import config
10 |
11 |
12 | def random_group():
13 | return {
14 | "name": fake.job(),
15 | }
16 |
17 |
18 | def random_group_admin(server_api, superuser_token_headers):
19 | new_user = random_user()
20 | r = requests.post(
21 | f'{server_api}{config.API_V1_STR}/users/',
22 | headers=superuser_token_headers,
23 | data=new_user)
24 |
25 | created_user = r.json()
26 |
27 | group_users, _ = get_group_users_and_admins(server_api,
28 | superuser_token_headers)
29 | group_id = random.choice(list(group_users.keys()))
30 |
31 | if r.status_code == 200:
32 | r = requests.post(
33 | f'{server_api}{config.API_V1_STR}/groups/{group_id}/admin_users/',
34 | headers=superuser_token_headers,
35 | data={
36 | 'user_id': created_user['id']
37 | })
38 |
39 | email, password = new_user['email'], new_user['password']
40 | auth = user_authentication_headers(server_api, email, password)
41 |
42 | return group_id, auth
43 |
44 | else:
45 |         raise Exception('Unable to execute due to possible server error')
46 |
47 |
48 | def get_group_users_and_admins(server_api, superuser_token_headers):
49 | r = requests.get(
50 | f'{server_api}{config.API_V1_STR}/users/', headers=superuser_token_headers)
51 |
52 | code, response = r.status_code, r.json()
53 | group_users = {}
54 | group_admins = []
55 |
56 | for user in response:
57 | if user['group']:
58 | group_id = user['group']['id']
59 |
60 | if group_id in group_users:
61 | group_users[group_id].append(user)
62 | else:
63 | group_users[group_id] = [user]
64 |
65 | if user['groups_admin']:
66 | group_admins.append(user)
67 |
68 | return group_users, group_admins
69 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "frontend",
3 | "version": "0.0.0",
4 | "license": "MIT",
5 | "scripts": {
6 | "ng": "ng",
7 | "start": "ng serve",
8 | "start:dev": "ng serve --sourcemaps=true --environment=dev --host={{cookiecutter.domain_dev}}",
9 | "start:stag": "ng serve --sourcemaps=true --environment=stag --host={{cookiecutter.domain_dev}}",
10 | "start:prod": "ng serve --sourcemaps=true --environment=prod --host={{cookiecutter.domain_dev}}",
11 | "build": "ng build",
12 | "build:dev": "ng build --prod --environment=dev",
13 | "build:stag": "ng build --prod --environment=stag",
14 | "build:prod": "ng build --prod --environment=prod",
15 | "test": "ng test",
16 | "test:dev": "ng test --sourcemaps=false --code-coverage=true --environment=dev --browsers Chrome",
17 | "test:stag": "ng test --sourcemaps=false --code-coverage=true --environment=stag --browsers Chrome",
18 | "test:prod": "ng test --sourcemaps=false --code-coverage=true --environment=prod --browsers ChromeHeadlessNoSandbox --single-run",
19 | "lint": "ng lint",
20 | "e2e": "ng e2e"
21 | },
22 | "private": true,
23 | "dependencies": {
24 | "@angular/animations": "^5.1.2",
25 | "@angular/common": "^5.1.2",
26 | "@angular/compiler": "^5.1.2",
27 | "@angular/core": "^5.1.2",
28 | "@angular/forms": "^5.1.2",
29 | "@angular/http": "^5.0.0",
30 | "@angular/platform-browser": "^5.1.2",
31 | "@angular/platform-browser-dynamic": "^5.1.2",
32 | "@angular/router": "^5.1.2",
33 | "core-js": "^2.4.1",
34 | "rxjs": "^5.5.2",
35 | "zone.js": "^0.8.14"
36 | },
37 | "devDependencies": {
38 | "@angular/cli": "1.6.3",
39 | "@angular/compiler-cli": "^5.1.2",
40 | "@angular/language-service": "^5.1.2",
41 | "@types/jasmine": "~2.5.53",
42 | "@types/jasminewd2": "~2.0.2",
43 | "@types/node": "~6.0.60",
44 | "codelyzer": "^4.0.1",
45 | "jasmine-core": "~2.6.2",
46 | "jasmine-spec-reporter": "~4.1.0",
47 | "karma": "~1.7.0",
48 | "karma-chrome-launcher": "~2.1.1",
49 | "karma-cli": "~1.0.1",
50 | "karma-coverage-istanbul-reporter": "^1.2.1",
51 | "karma-jasmine": "~1.1.0",
52 | "karma-jasmine-html-reporter": "^0.2.2",
53 | "karma-mocha-reporter": "^2.2.5",
54 | "protractor": "~5.1.2",
55 | "puppeteer": "^0.11.0",
56 | "ts-node": "~3.2.0",
57 | "tslint": "~5.7.0",
58 | "typescript": "~2.5.3"
59 | }
60 | }
61 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/alembic/env.py:
--------------------------------------------------------------------------------
1 | from __future__ import with_statement
2 | from alembic import context
3 | from sqlalchemy import engine_from_config, pool
4 | from logging.config import fileConfig
5 |
6 | # this is the Alembic Config object, which provides
7 | # access to the values within the .ini file in use.
8 | config = context.config
9 |
10 | # Interpret the config file for Python logging.
11 | # This line sets up loggers basically.
12 | fileConfig(config.config_file_name)
13 |
14 | # add your model's MetaData object here
15 | # for 'autogenerate' support
16 | # from myapp import mymodel
17 | # target_metadata = mymodel.Base.metadata
18 | # target_metadata = None
19 |
20 | from app.core.database import Base
21 |
22 | target_metadata = Base.metadata
23 |
24 | # other values from the config, defined by the needs of env.py,
25 | # can be acquired:
26 | # my_important_option = config.get_main_option("my_important_option")
27 | # ... etc.
28 |
29 |
30 | def run_migrations_offline():
31 | """Run migrations in 'offline' mode.
32 |
33 | This configures the context with just a URL
34 | and not an Engine, though an Engine is acceptable
35 | here as well. By skipping the Engine creation
36 | we don't even need a DBAPI to be available.
37 |
38 | Calls to context.execute() here emit the given string to the
39 | script output.
40 |
41 | """
42 | url = config.get_main_option("sqlalchemy.url")
43 | context.configure(
44 | url=url,
45 | target_metadata=target_metadata,
46 | literal_binds=True,
47 | compare_type=True)
48 |
49 | with context.begin_transaction():
50 | context.run_migrations()
51 |
52 |
53 | def run_migrations_online():
54 | """Run migrations in 'online' mode.
55 |
56 | In this scenario we need to create an Engine
57 | and associate a connection with the context.
58 |
59 | """
60 | connectable = engine_from_config(
61 | config.get_section(config.config_ini_section),
62 | prefix='sqlalchemy.',
63 | poolclass=pool.NullPool)
64 |
65 | with connectable.connect() as connection:
66 | context.configure(
67 | connection=connection,
68 | target_metadata=target_metadata,
69 | compare_type=True)
70 |
71 | with context.begin_transaction():
72 | context.run_migrations()
73 |
74 |
75 | if context.is_offline_mode():
76 | run_migrations_offline()
77 | else:
78 | run_migrations_online()
79 |
--------------------------------------------------------------------------------
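Note: env.py above plugs Base.metadata into Alembic so that autogenerate can diff the SQLAlchemy models against the database. Migrations are normally driven by the alembic CLI (for example from a prestart script), but as a rough sketch they can also be run programmatically (the paths and message are assumptions, and the database must be reachable):

    from alembic import command
    from alembic.config import Config

    # Equivalent to:
    #   alembic revision --autogenerate -m "init" && alembic upgrade head
    alembic_cfg = Config('alembic.ini')
    command.revision(alembic_cfg, message='init', autogenerate=True)
    command.upgrade(alembic_cfg, 'head')
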
/{{cookiecutter.project_slug}}/.gitlab-ci.yml:
--------------------------------------------------------------------------------
1 | image: tiangolo/docker-with-compose
2 |
3 | before_script:
4 | - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
5 |
6 | stages:
7 | - test
8 | - build
9 | - deploy
10 |
11 | rest-tests:
12 | stage: test
13 | script:
14 | - docker-compose -f docker-compose.test.yml build
15 |     - docker-compose -f docker-compose.test.yml down -v --remove-orphans # Remove any broken stacks left hanging from a previous failed run
16 | - docker-compose -f docker-compose.test.yml up -d
17 | - sleep 20; # Give some time for the DB and prestart script to finish
18 | - docker-compose -f docker-compose.test.yml exec -T backend-rest-tests pytest
19 | - docker-compose -f docker-compose.test.yml down -v --remove-orphans
20 | tags:
21 | - build
22 | - test
23 |
24 | build-branch:
25 | stage: build
26 | script:
27 | - docker-compose -f docker-compose.branch.build.yml build
28 | - docker-compose -f docker-compose.branch.build.yml push
29 | except:
30 | - master
31 | - production
32 | - tags
33 | tags:
34 | - build
35 | - test
36 |
37 | build-stag:
38 | stage: build
39 | script:
40 | - docker-compose -f docker-compose.stag.build.yml build
41 | - docker-compose -f docker-compose.stag.build.yml push
42 | only:
43 | - master
44 | tags:
45 | - build
46 | - test
47 |
48 | build-prod:
49 | stage: build
50 | script:
51 | - docker-compose -f docker-compose.prod.build.yml build
52 | - docker-compose -f docker-compose.prod.build.yml push
53 | only:
54 | - production
55 | tags:
56 | - build
57 | - test
58 |
59 | deploy-branch:
60 | stage: deploy
61 | script: docker stack deploy -c docker-compose.branch.yml --with-registry-auth {{cookiecutter.docker_swarm_stack_name_branch}}
62 | environment:
63 | name: staging
64 | url: https://{{cookiecutter.domain_branch}}
65 | except:
66 | - master
67 | - production
68 | - tags
69 | tags:
70 | - swarm
71 | - branch
72 |
73 | deploy-stag:
74 | stage: deploy
75 | script: docker stack deploy -c docker-compose.stag.yml --with-registry-auth {{cookiecutter.docker_swarm_stack_name_staging}}
76 | environment:
77 | name: staging
78 | url: https://{{cookiecutter.domain_staging}}
79 | only:
80 | - master
81 | tags:
82 | - swarm
83 | - stag
84 |
85 | deploy-prod:
86 | stage: deploy
87 | script: docker stack deploy -c docker-compose.prod.yml --with-registry-auth {{cookiecutter.docker_swarm_stack_name_main}}
88 | environment:
89 | name: production
90 | url: https://{{cookiecutter.domain_main}}
91 | only:
92 | - production
93 | tags:
94 | - swarm
95 | - prod
96 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.override.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | pgadmin:
4 | ports:
5 | - '5050:5050'
6 | swagger-ui:
7 | environment:
8 | - API_URL=http://{{cookiecutter.domain_dev}}/api/v1/swagger/
9 | proxy:
10 | ports:
11 | - '80:80'
12 | - '8080:8080'
13 | flower:
14 | ports:
15 | - '5555:5555'
16 | backend:
17 | build:
18 | context: ./backend
19 | args:
20 | env: dev
21 | networks:
22 | default:
23 | aliases:
24 | - {{cookiecutter.domain_dev}}
25 | # ports:
26 | #- '80:80'
27 | #- '8888:8888'
28 | volumes:
29 | - './backend/app:/app'
30 | # For Hydrogen Jupyter easy integration with
31 | # docker-compose exec server
32 | # and
33 | # jupyter notebook --ip=0.0.0.0 --allow-root
34 | - '~/.jupyter:/root/.jupyter'
35 | environment:
36 | - SERVER_NAME={{cookiecutter.domain_dev}}
37 | - FLASK_APP=app/main.py
38 | - FLASK_DEBUG=1
39 | - 'RUN=flask run --host=0.0.0.0 --port=80'
40 | - 'JUPYTER=jupyter notebook --ip=0.0.0.0 --allow-root'
41 | command: bash -c "while true; do sleep 1; done"
42 | # command: bash -c "flask run --host=0.0.0.0 --port=80"
43 | celeryworker:
44 | build:
45 | context: ./backend
46 | dockerfile: Dockerfile-celery-worker
47 | args:
48 | env: dev
49 | environment:
50 | - 'RUN=celery worker -A app.worker -l info -Q main-queue -c 1'
51 | - 'JUPYTER=jupyter notebook --ip=0.0.0.0 --allow-root'
52 | volumes:
53 | - './backend/app:/app'
54 | # For Hydrogen Jupyter easy integration with
55 | # docker-compose exec server
56 | # and
57 | # jupyter notebook --ip=0.0.0.0 --allow-root
58 | - '~/.jupyter:/root/.jupyter'
59 | backend-rest-tests:
60 | build:
61 | context: ./backend
62 | dockerfile: Dockerfile-rest-tests
63 | args:
64 | env: dev
65 | environment:
66 | - 'JUPYTER=jupyter notebook --ip=0.0.0.0 --allow-root'
67 | - SERVER_NAME={{cookiecutter.domain_dev}}
68 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
69 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
70 | # - SEED=0
71 | command: bash -c "while true; do sleep 1; done"
72 | volumes:
73 | - './backend/app:/app'
74 | # For Hydrogen Jupyter easy integration with
75 | # docker-compose exec server
76 | # and
77 | # jupyter notebook --ip=0.0.0.0 --allow-root
78 | - '~/.jupyter:/root/.jupyter'
79 | ports:
80 | - '8888:8888'
81 | frontend:
82 | build:
83 | context: ./frontend
84 | args:
85 | env: dev
86 |
87 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/group/test_fetch_groups.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import random
3 |
4 | from app.rest_tests.utils.user import random_user, user_authentication_headers
5 | from app.rest_tests.utils.group import get_group_users_and_admins
6 |
7 | from app.core import config
8 |
9 |
10 | def test_fetch_superuser_groups(server_api, superuser_token_headers):
11 |
12 | r = requests.get(
13 | f'{server_api}{config.API_V1_STR}/groups/',
14 | headers=superuser_token_headers)
15 |
16 | assert r.status_code == 200
17 |
18 |
19 | def test_fetch_general_user_groups(server_api, superuser_token_headers):
20 | group_users, _ = get_group_users_and_admins(server_api,
21 | superuser_token_headers)
22 | group_id = random.choice(list(group_users.keys()))
23 |
24 | new_user = random_user(group_id)
25 |
26 | r = requests.post(
27 | f'{server_api}{config.API_V1_STR}/users/',
28 | headers=superuser_token_headers,
29 | data=new_user)
30 |
31 | if r.status_code == 200:
32 |
33 | email, password = new_user['email'], new_user['password']
34 | auth = user_authentication_headers(server_api, email, password)
35 |
36 | r = requests.get(
37 | f'{server_api}{config.API_V1_STR}/groups/', headers=auth)
38 | groups = r.json()
39 |
40 | assert len(groups) == 1
41 |
42 |
43 | def test_fetch_group_admin_user_groups(server_api, superuser_token_headers):
44 | new_user = random_user()
45 | r = requests.post(
46 | f'{server_api}{config.API_V1_STR}/users/',
47 | headers=superuser_token_headers,
48 | data=new_user)
49 |
50 | created_user = r.json()
51 |
52 | group_users, _ = get_group_users_and_admins(server_api,
53 | superuser_token_headers)
54 | group_id = random.choice(list(group_users.keys()))
55 |
56 | if r.status_code == 200:
57 | r = requests.post(
58 | f'{server_api}{config.API_V1_STR}/groups/{group_id}/admin_users/',
59 | headers=superuser_token_headers,
60 | data={
61 | 'user_id': created_user['id']
62 | })
63 |
64 | email, password = new_user['email'], new_user['password']
65 | auth = user_authentication_headers(server_api, email, password)
66 |
67 | r = requests.get(
68 | f'{server_api}{config.API_V1_STR}/groups/', headers=auth)
69 | groups = r.json()
70 |
71 | assert len(groups) == 1
72 |
73 |
74 | def test_fetch_create_groups(server_api, superuser_token_headers):
75 | assert True
76 |
77 |
78 | def test_assign_admin_group_success(server_api, superuser_token_headers):
79 | assert True
80 |
81 |
82 | def test_assign_admin_group_fail(server_api, superuser_token_headers):
83 | assert True
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/src/polyfills.ts:
--------------------------------------------------------------------------------
1 | /**
2 | * This file includes polyfills needed by Angular and is loaded before the app.
3 | * You can add your own extra polyfills to this file.
4 | *
5 | * This file is divided into 2 sections:
6 | * 1. Browser polyfills. These are applied before loading ZoneJS and are sorted by browsers.
7 | * 2. Application imports. Files imported after ZoneJS that should be loaded before your main
8 | * file.
9 | *
10 | * The current setup is for so-called "evergreen" browsers; the last versions of browsers that
11 | * automatically update themselves. This includes Safari >= 10, Chrome >= 55 (including Opera),
12 | * Edge >= 13 on the desktop, and iOS 10 and Chrome on mobile.
13 | *
14 | * Learn more in https://angular.io/docs/ts/latest/guide/browser-support.html
15 | */
16 |
17 | /***************************************************************************************************
18 | * BROWSER POLYFILLS
19 | */
20 |
21 | /** IE9, IE10 and IE11 require all of the following polyfills. **/
22 | // import 'core-js/es6/symbol';
23 | // import 'core-js/es6/object';
24 | // import 'core-js/es6/function';
25 | // import 'core-js/es6/parse-int';
26 | // import 'core-js/es6/parse-float';
27 | // import 'core-js/es6/number';
28 | // import 'core-js/es6/math';
29 | // import 'core-js/es6/string';
30 | // import 'core-js/es6/date';
31 | // import 'core-js/es6/array';
32 | // import 'core-js/es6/regexp';
33 | // import 'core-js/es6/map';
34 | // import 'core-js/es6/weak-map';
35 | // import 'core-js/es6/set';
36 |
37 | /** IE10 and IE11 require the following for NgClass support on SVG elements */
38 | // import 'classlist.js'; // Run `npm install --save classlist.js`.
39 |
40 | /** IE10 and IE11 require the following for the Reflect API. */
41 | // import 'core-js/es6/reflect';
42 |
43 |
44 | /** Evergreen browsers require these. **/
45 | // Used for reflect-metadata in JIT. If you use AOT (and only Angular decorators), you can remove.
46 | import 'core-js/es7/reflect';
47 |
48 |
49 | /**
50 | * Required to support Web Animations `@angular/platform-browser/animations`.
51 | * Needed for: All but Chrome, Firefox and Opera. http://caniuse.com/#feat=web-animation
52 | **/
53 | // import 'web-animations-js'; // Run `npm install --save web-animations-js`.
54 |
55 |
56 |
57 | /***************************************************************************************************
58 | * Zone JS is required by Angular itself.
59 | */
60 | import 'zone.js/dist/zone'; // Included with Angular CLI.
61 |
62 |
63 |
64 | /***************************************************************************************************
65 | * APPLICATION IMPORTS
66 | */
67 |
68 | /**
69 | * Date, currency, decimal and percent pipes.
70 | * Needed for: All but Chrome, Firefox, Edge, IE11 and Safari 10
71 | */
72 | // import 'intl'; // Run `npm install --save intl`.
73 | /**
74 | * Need to import at least one locale-data with intl.
75 | */
76 | // import 'intl/locale-data/jsonp/en';
77 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | db:
4 | image: 'postgres:10'
5 | environment:
6 | POSTGRES_DB: app
7 | POSTGRES_PASSWORD: {{cookiecutter.postgres_password}}
8 | PGDATA: /var/lib/postgresql/data/pgdata
9 | volumes:
10 | - 'app-db-data:/var/lib/postgresql/data/pgdata'
11 | queue:
12 | image: 'rabbitmq:3'
13 | pgadmin:
14 | image: fenglc/pgadmin4
15 | depends_on:
16 | - db
17 | environment:
18 | - DEFAULT_USER={{cookiecutter.pgadmin_default_user}}
19 | - DEFAULT_PASSWORD={{cookiecutter.pgadmin_default_user_password}}
20 | labels:
21 | - "traefik.frontend.rule=Host:pgadmin.{{cookiecutter.domain_main}}"
22 | - "traefik.enable=true"
23 | - "traefik.port=5050"
24 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
25 | swagger-ui:
26 | image: swaggerapi/swagger-ui
27 | environment:
28 | - API_URL=https://{{cookiecutter.domain_main}}/api/v1/swagger/
29 | labels:
30 | - "traefik.frontend.rule=PathPrefixStrip:/swagger/"
31 | - "traefik.enable=true"
32 | - "traefik.port=8080"
33 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
34 | proxy:
35 | image: traefik:v1.5
36 | command: --docker \
37 | --docker.watch \
38 | --docker.exposedbydefault=false \
39 | --constraints=tag=={{cookiecutter.traefik_constraint_tag}} \
40 | --logLevel=DEBUG \
41 | --accessLog \
42 | --web
43 | volumes:
44 | - /var/run/docker.sock:/var/run/docker.sock
45 | flower:
46 | image: 'totem/celery-flower-docker'
47 | environment:
48 | - FLOWER_BASIC_AUTH={{cookiecutter.flower_auth}}
49 | labels:
50 | - "traefik.frontend.rule=Host:flower.{{cookiecutter.domain_main}}"
51 | - "traefik.enable=true"
52 | - "traefik.port=5555"
53 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
54 | backend:
55 | image: '{{cookiecutter.docker_image_backend}}'
56 | depends_on:
57 | - db
58 | environment:
59 | - SERVER_NAME={{cookiecutter.domain_main}}
60 | - SECRET_KEY={{cookiecutter.secret_key}}
61 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
62 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
63 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
64 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
65 | labels:
66 | - "traefik.frontend.rule=PathPrefix:/api"
67 | - "traefik.enable=true"
68 | - "traefik.port=80"
69 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
70 | celeryworker:
71 | image: '{{cookiecutter.docker_image_celeryworker}}'
72 | depends_on:
73 | - db
74 | - queue
75 | environment:
76 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
77 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
78 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
79 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
80 | frontend:
81 | image: '{{cookiecutter.docker_image_frontend}}'
82 | labels:
83 | - "traefik.frontend.rule=PathPrefix:/"
84 | - "traefik.enable=true"
85 | - "traefik.port=80"
86 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
87 | volumes:
88 | app-db-data: {}
89 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/frontend/tslint.json:
--------------------------------------------------------------------------------
1 | {
2 | "rulesDirectory": [
3 | "node_modules/codelyzer"
4 | ],
5 | "rules": {
6 | "arrow-return-shorthand": true,
7 | "callable-types": true,
8 | "class-name": true,
9 | "comment-format": [
10 | true,
11 | "check-space"
12 | ],
13 | "curly": true,
14 | "deprecation": {
15 | "severity": "warn"
16 | },
17 | "eofline": true,
18 | "forin": true,
19 | "import-blacklist": [
20 | true,
21 | "rxjs",
22 | "rxjs/Rx"
23 | ],
24 | "import-spacing": true,
25 | "indent": [
26 | true,
27 | "spaces"
28 | ],
29 | "interface-over-type-literal": true,
30 | "label-position": true,
31 | "max-line-length": [
32 | true,
33 | 140
34 | ],
35 | "member-access": false,
36 | "member-ordering": [
37 | true,
38 | {
39 | "order": [
40 | "static-field",
41 | "instance-field",
42 | "static-method",
43 | "instance-method"
44 | ]
45 | }
46 | ],
47 | "no-arg": true,
48 | "no-bitwise": true,
49 | "no-console": [
50 | true,
51 | "debug",
52 | "info",
53 | "time",
54 | "timeEnd",
55 | "trace"
56 | ],
57 | "no-construct": true,
58 | "no-debugger": true,
59 | "no-duplicate-super": true,
60 | "no-empty": false,
61 | "no-empty-interface": true,
62 | "no-eval": true,
63 | "no-inferrable-types": [
64 | true,
65 | "ignore-params"
66 | ],
67 | "no-misused-new": true,
68 | "no-non-null-assertion": true,
69 | "no-shadowed-variable": true,
70 | "no-string-literal": false,
71 | "no-string-throw": true,
72 | "no-switch-case-fall-through": true,
73 | "no-trailing-whitespace": true,
74 | "no-unnecessary-initializer": true,
75 | "no-unused-expression": true,
76 | "no-use-before-declare": true,
77 | "no-var-keyword": true,
78 | "object-literal-sort-keys": false,
79 | "one-line": [
80 | true,
81 | "check-open-brace",
82 | "check-catch",
83 | "check-else",
84 | "check-whitespace"
85 | ],
86 | "prefer-const": true,
87 | "quotemark": [
88 | true,
89 | "single"
90 | ],
91 | "radix": true,
92 | "semicolon": [
93 | true,
94 | "always"
95 | ],
96 | "triple-equals": [
97 | true,
98 | "allow-null-check"
99 | ],
100 | "typedef-whitespace": [
101 | true,
102 | {
103 | "call-signature": "nospace",
104 | "index-signature": "nospace",
105 | "parameter": "nospace",
106 | "property-declaration": "nospace",
107 | "variable-declaration": "nospace"
108 | }
109 | ],
110 | "typeof-compare": true,
111 | "unified-signatures": true,
112 | "variable-name": false,
113 | "whitespace": [
114 | true,
115 | "check-branch",
116 | "check-decl",
117 | "check-operator",
118 | "check-separator",
119 | "check-type"
120 | ],
121 | "directive-selector": [
122 | true,
123 | "attribute",
124 | "app",
125 | "camelCase"
126 | ],
127 | "component-selector": [
128 | true,
129 | "element",
130 | "app",
131 | "kebab-case"
132 | ],
133 | "no-output-on-prefix": true,
134 | "use-input-property-decorator": true,
135 | "use-output-property-decorator": true,
136 | "use-host-property-decorator": true,
137 | "no-input-rename": true,
138 | "no-output-rename": true,
139 | "use-life-cycle-interface": true,
140 | "use-pipe-transform-interface": true,
141 | "component-class-suffix": true,
142 | "directive-class-suffix": true
143 | }
144 | }
145 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/group/test_assign_group_admin.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from app.rest_tests.utils.user import random_user
4 | from app.rest_tests.utils.user import user_authentication_headers
5 |
6 | from app.rest_tests.utils.group import random_group
7 | from app.rest_tests.utils.group import random_group_admin
8 |
9 | from app.core import config
10 |
11 |
12 | def test_assign_group_admin_by_superuser(server_api, superuser_token_headers):
13 |
14 | new_group = random_group()
15 | r = requests.post(
16 | f'{server_api}{config.API_V1_STR}/groups/',
17 | headers=superuser_token_headers,
18 | data=new_group)
19 |
20 | created_group = r.json()
21 | group_id = created_group['id']
22 |
23 | new_user = random_user()
24 | r = requests.post(
25 | f'{server_api}{config.API_V1_STR}/users/',
26 | headers=superuser_token_headers,
27 | data=new_user)
28 |
29 | created_user = r.json()
30 | user_id = created_user['id']
31 |
32 | request_data = {"user_id": user_id}
33 |
34 | r = requests.post(
35 | f'{server_api}{config.API_V1_STR}/groups/{group_id}/admin_users/',
36 | headers=superuser_token_headers,
37 | data=request_data)
38 |
39 | assert r.status_code == 200
40 |
41 |
42 | def test_assign_group_admin_by_group_admin(server_api,
43 | superuser_token_headers):
44 | _, group_admin_auth = random_group_admin(server_api,
45 | superuser_token_headers)
46 |
47 | new_group = random_group()
48 | r = requests.post(
49 | f'{server_api}{config.API_V1_STR}/groups/',
50 | headers=superuser_token_headers,
51 | data=new_group)
52 |
53 | created_group = r.json()
54 | group_id = created_group['id']
55 |
56 | new_user = random_user()
57 | r = requests.post(
58 | f'{server_api}{config.API_V1_STR}/users/',
59 | headers=superuser_token_headers,
60 | data=new_user)
61 |
62 | created_user = r.json()
63 | user_id = created_user['id']
64 |
65 | request_data = {"user_id": user_id}
66 |
67 | r = requests.post(
68 | f'{server_api}{config.API_V1_STR}/groups/{group_id}/admin_users/',
69 | headers=group_admin_auth,
70 | data=request_data)
71 |
72 | assert r.status_code == 400
73 |
74 |
75 | def test_assign_group_admin_by_normal_user(server_api,
76 | superuser_token_headers):
77 | new_user = random_user()
78 | r = requests.post(
79 | f'{server_api}{config.API_V1_STR}/users/',
80 | headers=superuser_token_headers,
81 | data=new_user)
82 |
83 | created_user = r.json()
84 |
85 | if r.status_code == 200:
86 |
87 | email, password = new_user['email'], new_user['password']
88 | auth = user_authentication_headers(server_api, email, password)
89 |
90 | new_group = random_group()
91 | r = requests.post(
92 | f'{server_api}{config.API_V1_STR}/groups/',
93 | headers=superuser_token_headers,
94 | data=new_group)
95 |
96 | created_group = r.json()
97 | group_id = created_group['id']
98 |
99 | new_user = random_user()
100 | r = requests.post(
101 | f'{server_api}{config.API_V1_STR}/users/',
102 | headers=superuser_token_headers,
103 | data=new_user)
104 |
105 | created_user = r.json()
106 | user_id = created_user['id']
107 |
108 | request_data = {"user_id": user_id}
109 |
110 | r = requests.post(
111 | f'{server_api}{config.API_V1_STR}/groups/{group_id}/admin_users/',
112 | headers=auth,
113 | data=request_data)
114 |
115 | assert r.status_code == 400
116 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/endpoints/group.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library modules
4 |
5 | # Import installed modules
6 | # Import installed packages
7 | from flask import abort
8 | from webargs import fields
9 | from flask_apispec import doc, use_kwargs, marshal_with
10 | from flask_jwt_extended import (get_current_user, jwt_required)
11 |
12 | # Import app code
13 | from app.main import app
14 | from app.api.api_v1.api_docs import docs, security_params
15 | from app.core import config
16 | from app.core.database import db_session
17 | from app.core.celery_app import celery_app
18 | # Import Schemas
19 | from app.schemas.group import GroupSchema
20 | from app.schemas.msg import MsgSchema
21 | # Import models
22 | from app.models.group import Group
23 | from app.models.user import User
24 |
25 |
26 | @docs.register
27 | @doc(
28 | description='Create a new group',
29 | security=security_params,
30 | tags=['groups'])
31 | @app.route(f'{config.API_V1_STR}/groups/', methods=['POST'])
32 | @use_kwargs({
33 | 'name': fields.Str(required=True),
34 | })
35 | @marshal_with(GroupSchema())
36 | @jwt_required
37 | def route_groups_post(name=None):
38 | current_user = get_current_user()
39 | if not current_user:
40 | abort(400, 'Could not authenticate user with provided token')
41 | elif not current_user.is_active:
42 | abort(400, 'Inactive user')
43 | elif not current_user.is_superuser:
44 | abort(400, 'Not a superuser')
45 |
46 | group = db_session.query(Group).filter(Group.name == name).first()
47 | if group:
48 | return abort(400, f'The group: {name} already exists in the system')
49 | group = Group(name=name)
50 | db_session.add(group)
51 | db_session.commit()
52 | return group
53 |
54 |
55 | @docs.register
56 | @doc(
57 | description='Retrieve the groups of the user',
58 | security=security_params,
59 | tags=['groups'])
60 | @app.route(f'{config.API_V1_STR}/groups/', methods=['GET'])
61 | @marshal_with(
62 |     GroupSchema(only=('id', 'name', 'created_at'), many=True))
63 | @jwt_required
64 | def route_groups_get():
65 | current_user = get_current_user() # type: User
66 |
67 | if not current_user:
68 | abort(400, 'Could not authenticate user with provided token')
69 | elif not current_user.is_active:
70 | abort(400, 'Inactive user')
71 |
72 | if current_user.is_superuser:
73 | return db_session.query(Group).all()
74 | elif current_user.groups_admin:
75 | return [group for group in current_user.groups_admin]
76 | else:
77 | return [current_user.group]
78 |
79 |
80 | @docs.register
81 | @doc(
82 | description='Assign user as group Admin',
83 | security=security_params,
84 | tags=[
85 | 'groups',
86 | ])
87 | @app.route(
88 |     f'{config.API_V1_STR}/groups/<int:group_id>/admin_users/',
89 | methods=['POST'])
90 | @use_kwargs({'user_id': fields.Int(required=True)})
91 | @marshal_with(MsgSchema())
92 | @jwt_required
93 | def route_admin_users_groups_post(group_id=None, user_id=None):
94 | current_user = get_current_user() # type: User
95 |
96 | if not current_user:
97 | abort(400, 'Could not authenticate user with provided token')
98 | elif not current_user.is_active:
99 | abort(400, 'Inactive user')
100 |
101 | group = db_session.query(Group).filter_by(
102 | id=group_id).first() # type: Group
103 | user = db_session.query(User).filter(
104 | User.id == user_id).first() # type: User
105 |
106 | if not group:
107 |         return abort(400, f'The group with id: {group_id} does not exist')
108 |
109 | if not user:
110 |         return abort(400, f'The user with id: {user_id} does not exist')
111 |
112 | if current_user.is_superuser:
113 | group.users_admin.append(user)
114 | db_session.commit()
115 |
116 | else:
117 | abort(400, 'Not authorized')
118 |
119 | return {
120 | 'msg':
121 |         f'The user with id {user_id} was successfully added as an admin of the group with id {group_id}'
122 | }
123 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/endpoints/token.py:
--------------------------------------------------------------------------------
1 | # Import standard library
2 | from datetime import timedelta
3 |
4 | # Import installed modules
5 | from flask import abort
6 | from flask_apispec import doc, use_kwargs, marshal_with
7 | from flask_jwt_extended import (create_access_token, get_current_user,
8 | jwt_required, create_refresh_token,
9 | jwt_refresh_token_required)
10 | from webargs import fields
11 |
12 | # Import app code
13 | from app.main import app
14 | from ..api_docs import docs, security_params
15 | from app.core import config
16 | from app.core.security import pwd_context
17 | from app.core.database import db_session
18 | # Import Schemas
19 | from app.schemas.token import TokenSchema
20 | from app.schemas.user import UserSchema
21 | # Import models
22 | from app.models.user import User
23 |
24 |
25 | @docs.register
26 | @doc(
27 | description=
28 | 'OAuth2 compatible token login, get an access token for future requests',
29 | tags=['login'])
30 | @app.route(f'{config.API_V1_STR}/login/access-token', methods=['POST'])
31 | @use_kwargs({
32 | 'username': fields.Str(required=True),
33 | 'password': fields.Str(required=True),
34 | })
35 | @marshal_with(TokenSchema())
36 | def route_login_access_token(username, password):
37 | user = db_session.query(User).filter(User.email == username).first()
38 | if not user or not pwd_context.verify(password, user.password):
39 | abort(400, 'Incorrect email or password')
40 | elif not user.is_active:
41 | abort(400, 'Inactive user')
42 | access_token_expires = timedelta(
43 | minutes=config.ACCESS_TOKEN_EXPIRE_MINUTES)
44 | refresh_token_expires = timedelta(days=config.REFRESH_TOKEN_EXPIRE_DAYS)
45 | return {
46 | 'access_token':
47 | create_access_token(
48 | identity=user.id, expires_delta=access_token_expires),
49 | 'refresh_token':
50 | create_refresh_token(
51 | identity=user.id, expires_delta=refresh_token_expires),
52 | 'token_type':
53 | 'bearer',
54 | }
55 |
56 |
57 | @docs.register
58 | @doc(description='Refresh access token', tags=['login'])
59 | @app.route(f'{config.API_V1_STR}/login/refresh-token', methods=['POST'])
60 | @use_kwargs(
61 | {
62 | 'Authorization':
63 | fields.Str(
64 | required=True,
65 | description=
66 | 'Authorization HTTP header with JWT refresh token, like: Authorization: Bearer asdf.qwer.zxcv'
67 | )
68 | },
69 | locations=['headers'])
70 | @marshal_with(TokenSchema(only=['access_token']))
71 | @jwt_refresh_token_required
72 | def route_refresh_token(**kwargs):
73 | user = get_current_user()
74 | if not user:
75 | abort(400, 'Could not authenticate user with provided token')
76 | elif not user.is_active:
77 | abort(400, 'Inactive user')
78 | access_token_expires = timedelta(
79 | minutes=config.ACCESS_TOKEN_EXPIRE_MINUTES)
80 | access_token = create_access_token(
81 | identity=user.id, expires_delta=access_token_expires)
82 | return {'access_token': access_token}
83 |
84 |
85 | @docs.register
86 | @doc(description='Test access token', tags=['login'], security=security_params)
87 | @app.route(f'{config.API_V1_STR}/login/test-token', methods=['POST'])
88 | @use_kwargs({'test': fields.Str(required=True)})
89 | @marshal_with(UserSchema())
90 | @jwt_required
91 | def route_test_token(test):
92 | current_user = get_current_user()
93 | if current_user:
94 | return current_user
95 | else:
96 | abort(400, 'No user')
97 | return current_user
98 |
99 |
100 | @docs.register
101 | @doc(
102 | description=
103 |     'Test access token manually, same as the endpoint to "Test access token" but copying and adding the Authorization: Bearer <JWT token> header manually',
104 | params={
105 | 'Authorization': {
106 | 'description':
107 | 'Authorization HTTP header with JWT token, like: Authorization: Bearer asdf.qwer.zxcv',
108 | 'in':
109 | 'header',
110 | 'type':
111 | 'string',
112 | 'required':
113 | True
114 | }
115 | },
116 | tags=['login'])
117 | @app.route(f'{config.API_V1_STR}/login/manual-test-token', methods=['POST'])
118 | @use_kwargs({'test': fields.Str(required=True)})
119 | @marshal_with(UserSchema())
120 | @jwt_required
121 | def route_manual_test_token(test):
122 | current_user = get_current_user()
123 | if current_user:
124 | return current_user
125 | else:
126 | abort(400, 'No user')
127 | return current_user
128 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/api/api_v1/endpoints/user.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Import standard library modules
4 |
5 | # Import installed modules
6 | # Import installed packages
7 | from flask import abort
8 | from webargs import fields
9 | from flask_apispec import doc, use_kwargs, marshal_with
10 | from flask_jwt_extended import (get_current_user, jwt_required)
11 |
12 | # Import app code
13 | from app.main import app
14 | from app.api.api_v1.api_docs import docs, security_params
15 | from app.core import config
16 | from app.core.security import pwd_context
17 | from app.core.database import db_session
18 | from app.core.celery_app import celery_app
19 |
20 | # Import Schemas
21 | from app.schemas.user import UserSchema
22 | # Import models
23 | from app.models.user import User
24 | from app.models.group import Group
25 |
26 |
27 | @docs.register
28 | @doc(
29 |     description='Retrieve the users that the current user manages through their groups',
30 | security=security_params,
31 | tags=['users'])
32 | @app.route(f'{config.API_V1_STR}/users/', methods=['GET'])
33 | @marshal_with(UserSchema(many=True))
34 | @jwt_required
35 | def route_users_get():
36 | current_user = get_current_user()
37 |
38 | if not current_user:
39 | abort(400, 'Could not authenticate user with provided token')
40 | elif not current_user.is_active:
41 | abort(400, 'Inactive user')
42 |
43 | users = [current_user]
44 |
45 | if current_user.is_superuser:
46 | return db_session.query(User).all()
47 |
48 | elif current_user.groups_admin:
49 | # return all the users in the groups the user is admin in
50 | users = []
51 | for group in current_user.groups_admin:
52 | users.extend(group.users)
53 |
54 | return users
55 |
56 | # return the current user's data, but in a list
57 | return users
58 |
59 |
60 | @docs.register
61 | @doc(
62 | description='Create new user',
63 | security=security_params,
64 | tags=['users'])
65 | @app.route(f'{config.API_V1_STR}/users/', methods=['POST'])
66 | @use_kwargs({
67 | 'email': fields.Str(required=True),
68 | 'password': fields.Str(required=True),
69 | 'first_name': fields.Str(),
70 | 'last_name': fields.Str(),
71 | 'group_id': fields.Int(required=True),
72 | })
73 | @marshal_with(UserSchema())
74 | @jwt_required
75 | def route_users_post(email=None,
76 | password=None,
77 | first_name=None,
78 | last_name=None,
79 | group_id=None):
80 | current_user = get_current_user()
81 |
82 | if not current_user:
83 | abort(400, 'Could not authenticate user with provided token')
84 | elif not current_user.is_active:
85 | abort(400, 'Inactive user')
86 | elif not current_user.is_superuser:
87 | abort(400, 'Only a superuser can execute this action')
88 |
89 | user = db_session.query(User).filter(User.email == email).first()
90 |
91 | if user:
92 | return abort(
93 | 400,
94 | f'The user with this email already exists in the system: {email}')
95 |
96 | group = db_session.query(Group).filter(Group.id == group_id).first()
97 |
98 | if group is None:
99 | abort(400, f'There is no group with id: "{group_id}"')
100 | user = User(
101 | email=email,
102 | password=pwd_context.hash(password),
103 | first_name=first_name,
104 | last_name=last_name,
105 | group=group)
106 |
107 | db_session.add(user)
108 | db_session.commit()
109 | db_session.refresh(user)
110 | return user
111 |
112 |
113 | @docs.register
114 | @doc(
115 | description='Get current user',
116 | security=security_params,
117 | tags=['users'])
118 | @app.route(f'{config.API_V1_STR}/users/me', methods=['GET'])
119 | @marshal_with(UserSchema())
120 | @jwt_required
121 | def route_users_me_get():
122 | current_user = get_current_user()
123 | if not current_user:
124 | abort(400, 'Could not authenticate user with provided token')
125 | elif not current_user.is_active:
126 | abort(400, 'Inactive user')
127 | return current_user
128 |
129 |
130 | @docs.register
131 | @doc(
132 | description='Get a specific user by ID',
133 | security=security_params,
134 | tags=['users'])
135 | @app.route(f'{config.API_V1_STR}/users/<int:user_id>', methods=['GET'])
136 | @marshal_with(UserSchema())
137 | @jwt_required
138 | def route_users_id_get(user_id):
139 | current_user = get_current_user() # type: User
140 |
141 | if not current_user:
142 | abort(400, 'Could not authenticate user with provided token')
143 | elif not current_user.is_active:
144 | abort(400, 'Inactive user')
145 |
146 | user = db_session.query(User).filter(
147 | User.id == user_id).first() # type: User
148 |
149 | if not user:
150 |         return abort(400, f'The user with id: {user_id} does not exist')
151 |
152 | if current_user.is_superuser:
153 | # Return everything, don't abort
154 | pass
155 | elif user.group in current_user.groups_admin:
156 | # Return everything, don't abort
157 | pass
158 |
159 | else:
160 | abort(400, 'Not authorized')
161 |
162 | return user
163 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.prod.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | db:
4 | image: 'postgres:10'
5 | environment:
6 | POSTGRES_DB: app
7 | POSTGRES_PASSWORD: {{cookiecutter.postgres_password}}
8 | PGDATA: /var/lib/postgresql/data/pgdata
9 | volumes:
10 | - 'app-db-data:/var/lib/postgresql/data/pgdata'
11 | queue:
12 | image: 'rabbitmq:3'
13 | pgadmin:
14 | image: fenglc/pgadmin4
15 | depends_on:
16 | - db
17 | environment:
18 | - DEFAULT_USER={{cookiecutter.pgadmin_default_user}}
19 | - DEFAULT_PASSWORD={{cookiecutter.pgadmin_default_user_password}}
20 | deploy:
21 | labels:
22 | - "traefik.frontend.rule=Host:pgadmin.{{cookiecutter.domain_main}}"
23 | - "traefik.enable=true"
24 | - "traefik.port=5050"
25 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
26 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
27 | # Traefik service that listens to HTTP
28 | - "traefik.redirectorservice.frontend.entryPoints=http"
29 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
30 | # Traefik service that listens to HTTPS
31 | - "traefik.webservice.frontend.entryPoints=https"
32 | networks:
33 | - {{cookiecutter.traefik_public_network}}
34 | - default
35 | swagger-ui:
36 | image: swaggerapi/swagger-ui
37 | environment:
38 | - API_URL=https://{{cookiecutter.domain_main}}/api/v1/swagger/
39 | deploy:
40 | labels:
41 | - "traefik.frontend.rule=PathPrefixStrip:/swagger/"
42 | - "traefik.enable=true"
43 | - "traefik.port=8080"
44 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
45 | proxy:
46 | image: traefik:v1.5
47 | command: --docker \
48 | --docker.swarmmode \
49 | --docker.watch \
50 | --docker.exposedbydefault=false \
51 | --constraints=tag=={{cookiecutter.traefik_constraint_tag}} \
52 | --logLevel=DEBUG \
53 | --accessLog \
54 | --web
55 | volumes:
56 | - /var/run/docker.sock:/var/run/docker.sock
57 | deploy:
58 | placement:
59 | constraints: [node.role == manager]
60 | labels:
61 | - "traefik.frontend.rule=Host:{{cookiecutter.domain_main}}"
62 | - "traefik.enable=true"
63 | - "traefik.port=80"
64 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
65 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
66 | # Traefik service that listens to HTTP
67 | - "traefik.redirectorservice.frontend.entryPoints=http"
68 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
69 | # Traefik service that listens to HTTPS
70 | - "traefik.webservice.frontend.entryPoints=https"
71 | networks:
72 | - {{cookiecutter.traefik_public_network}}
73 | - default
74 | flower:
75 | image: 'totem/celery-flower-docker'
76 | environment:
77 | - FLOWER_BASIC_AUTH={{cookiecutter.flower_auth}}
78 | deploy:
79 | labels:
80 | - "traefik.frontend.rule=Host:flower.{{cookiecutter.domain_main}}"
81 | - "traefik.enable=true"
82 | - "traefik.port=5555"
83 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
84 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
85 | # Traefik service that listens to HTTP
86 | - "traefik.redirectorservice.frontend.entryPoints=http"
87 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
88 | # Traefik service that listens to HTTPS
89 | - "traefik.webservice.frontend.entryPoints=https"
90 | networks:
91 | - {{cookiecutter.traefik_public_network}}
92 | - default
93 | backend:
94 | image: '{{cookiecutter.docker_image_backend}}:prod'
95 | depends_on:
96 | - db
97 | environment:
98 | - SERVER_NAME={{cookiecutter.domain_main}}
99 | - SECRET_KEY={{cookiecutter.secret_key}}
100 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
101 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
102 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
103 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
104 | deploy:
105 | labels:
106 | - "traefik.frontend.rule=PathPrefix:/api"
107 | - "traefik.enable=true"
108 | - "traefik.port=80"
109 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
110 | celeryworker:
111 | image: '{{cookiecutter.docker_image_celeryworker}}:prod'
112 | depends_on:
113 | - db
114 | - queue
115 | environment:
116 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
117 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
118 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
119 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
120 | frontend:
121 | image: '{{cookiecutter.docker_image_frontend}}:prod'
122 | deploy:
123 | labels:
124 | - "traefik.frontend.rule=PathPrefix:/"
125 | - "traefik.enable=true"
126 | - "traefik.port=80"
127 | - "traefik.tags={{cookiecutter.traefik_constraint_tag}}"
128 |
129 | volumes:
130 | app-db-data: {}
131 |
132 | networks:
133 | {{cookiecutter.traefik_public_network}}:
134 | external: true
135 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.branch.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | db:
4 | image: 'postgres:10'
5 | environment:
6 | POSTGRES_DB: app
7 | POSTGRES_PASSWORD: {{cookiecutter.postgres_password}}
8 | PGDATA: /var/lib/postgresql/data/pgdata
9 | volumes:
10 | - 'app-db-data:/var/lib/postgresql/data/pgdata'
11 | queue:
12 | image: 'rabbitmq:3'
13 | pgadmin:
14 | image: fenglc/pgadmin4
15 | depends_on:
16 | - db
17 | environment:
18 | - DEFAULT_USER={{cookiecutter.pgadmin_default_user}}
19 | - DEFAULT_PASSWORD={{cookiecutter.pgadmin_default_user_password}}
20 | deploy:
21 | labels:
22 | - "traefik.frontend.rule=Host:pgadmin.{{cookiecutter.domain_branch}}"
23 | - "traefik.enable=true"
24 | - "traefik.port=5050"
25 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
26 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
27 | # Traefik service that listens to HTTP
28 | - "traefik.redirectorservice.frontend.entryPoints=http"
29 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
30 | # Traefik service that listens to HTTPS
31 | - "traefik.webservice.frontend.entryPoints=https"
32 | networks:
33 | - {{cookiecutter.traefik_public_network}}
34 | - default
35 | swagger-ui:
36 | image: swaggerapi/swagger-ui
37 | environment:
38 | - API_URL=https://{{cookiecutter.domain_branch}}/api/v1/swagger/
39 | deploy:
40 | labels:
41 | - "traefik.frontend.rule=PathPrefixStrip:/swagger/"
42 | - "traefik.enable=true"
43 | - "traefik.port=8080"
44 | - "traefik.tags={{cookiecutter.traefik_constraint_tag_branch}}"
45 | proxy:
46 | image: traefik:v1.5
47 | command: --docker \
48 | --docker.swarmmode \
49 | --docker.watch \
50 | --docker.exposedbydefault=false \
51 | --constraints=tag=={{cookiecutter.traefik_constraint_tag_branch}} \
52 | --logLevel=DEBUG \
53 | --accessLog \
54 | --web
55 | volumes:
56 | - /var/run/docker.sock:/var/run/docker.sock
57 | deploy:
58 | placement:
59 | constraints: [node.role == manager]
60 | labels:
61 | - "traefik.frontend.rule=Host:{{cookiecutter.domain_branch}}"
62 | - "traefik.enable=true"
63 | - "traefik.port=80"
64 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
65 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
66 | # Traefik service that listens to HTTP
67 | - "traefik.redirectorservice.frontend.entryPoints=http"
68 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
69 | # Traefik service that listens to HTTPS
70 | - "traefik.webservice.frontend.entryPoints=https"
71 | networks:
72 | - {{cookiecutter.traefik_public_network}}
73 | - default
74 | flower:
75 | image: 'totem/celery-flower-docker'
76 | environment:
77 | - FLOWER_BASIC_AUTH={{cookiecutter.flower_auth}}
78 | deploy:
79 | labels:
80 | - "traefik.frontend.rule=Host:flower.{{cookiecutter.domain_branch}}"
81 | - "traefik.enable=true"
82 | - "traefik.port=5555"
83 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
84 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
85 | # Traefik service that listens to HTTP
86 | - "traefik.redirectorservice.frontend.entryPoints=http"
87 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
88 | # Traefik service that listens to HTTPS
89 | - "traefik.webservice.frontend.entryPoints=https"
90 | networks:
91 | - {{cookiecutter.traefik_public_network}}
92 | - default
93 |
94 | backend:
95 | image: '{{cookiecutter.docker_image_backend}}:branch'
96 | depends_on:
97 | - db
98 | environment:
99 | - SERVER_NAME={{cookiecutter.domain_branch}}
100 | - 'SECRET_KEY={{cookiecutter.secret_key}}'
101 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
102 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
103 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
104 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
105 | deploy:
106 | labels:
107 | - "traefik.frontend.rule=PathPrefix:/api"
108 | - "traefik.enable=true"
109 | - "traefik.port=80"
110 | - "traefik.tags={{cookiecutter.traefik_constraint_tag_branch}}"
111 | celeryworker:
112 | image: '{{cookiecutter.docker_image_celeryworker}}:branch'
113 | depends_on:
114 | - db
115 | - queue
116 | environment:
117 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
118 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
119 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
120 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
121 | frontend:
122 | image: '{{cookiecutter.docker_image_frontend}}:branch'
123 | deploy:
124 | labels:
125 | - "traefik.frontend.rule=PathPrefix:/"
126 | - "traefik.enable=true"
127 | - "traefik.port=80"
128 | - "traefik.tags={{cookiecutter.traefik_constraint_tag_branch}}"
129 |
130 | volumes:
131 | app-db-data: {}
132 |
133 | networks:
134 | {{cookiecutter.traefik_public_network}}:
135 | external: true
136 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/docker-compose.stag.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | services:
3 | db:
4 | image: 'postgres:10'
5 | environment:
6 | POSTGRES_DB: app
7 | POSTGRES_PASSWORD: {{cookiecutter.postgres_password}}
8 | PGDATA: /var/lib/postgresql/data/pgdata
9 | volumes:
10 | - 'app-db-data:/var/lib/postgresql/data/pgdata'
11 | queue:
12 | image: 'rabbitmq:3'
13 | pgadmin:
14 | image: fenglc/pgadmin4
15 | depends_on:
16 | - db
17 | environment:
18 | - DEFAULT_USER={{cookiecutter.pgadmin_default_user}}
19 | - DEFAULT_PASSWORD={{cookiecutter.pgadmin_default_user_password}}
20 | deploy:
21 | labels:
22 | - "traefik.frontend.rule=Host:pgadmin.{{cookiecutter.domain_staging}}"
23 | - "traefik.enable=true"
24 | - "traefik.port=5050"
25 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
26 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
27 | # Traefik service that listens to HTTP
28 | - "traefik.redirectorservice.frontend.entryPoints=http"
29 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
30 | # Traefik service that listens to HTTPS
31 | - "traefik.webservice.frontend.entryPoints=https"
32 | networks:
33 | - {{cookiecutter.traefik_public_network}}
34 | - default
35 | swagger-ui:
36 | image: swaggerapi/swagger-ui
37 | environment:
38 | - API_URL=https://{{cookiecutter.domain_staging}}/api/v1/swagger/
39 | deploy:
40 | labels:
41 | - "traefik.frontend.rule=PathPrefixStrip:/swagger/"
42 | - "traefik.enable=true"
43 | - "traefik.port=8080"
44 | - "traefik.tags={{cookiecutter.traefik_constraint_tag_staging}}"
45 | proxy:
46 | image: traefik:v1.5
47 | command: --docker \
48 | --docker.swarmmode \
49 | --docker.watch \
50 | --docker.exposedbydefault=false \
51 | --constraints=tag=={{cookiecutter.traefik_constraint_tag_staging}} \
52 | --logLevel=DEBUG \
53 | --accessLog \
54 | --web
55 | volumes:
56 | - /var/run/docker.sock:/var/run/docker.sock
57 | deploy:
58 | placement:
59 | constraints: [node.role == manager]
60 | labels:
61 | - "traefik.frontend.rule=Host:{{cookiecutter.domain_staging}}"
62 | - "traefik.enable=true"
63 | - "traefik.port=80"
64 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
65 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
66 | # Traefik service that listens to HTTP
67 | - "traefik.redirectorservice.frontend.entryPoints=http"
68 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
69 | # Traefik service that listens to HTTPS
70 | - "traefik.webservice.frontend.entryPoints=https"
71 | networks:
72 | - {{cookiecutter.traefik_public_network}}
73 | - default
74 | flower:
75 | image: 'totem/celery-flower-docker'
76 | environment:
77 | - FLOWER_BASIC_AUTH={{cookiecutter.flower_auth}}
78 | deploy:
79 | labels:
80 | - "traefik.frontend.rule=Host:flower.{{cookiecutter.domain_staging}}"
81 | - "traefik.enable=true"
82 | - "traefik.port=5555"
83 | - "traefik.tags={{cookiecutter.traefik_public_constraint_tag}}"
84 | - "traefik.docker.network={{cookiecutter.traefik_public_network}}"
85 | # Traefik service that listens to HTTP
86 | - "traefik.redirectorservice.frontend.entryPoints=http"
87 | - "traefik.redirectorservice.frontend.redirect.entryPoint=https"
88 | # Traefik service that listens to HTTPS
89 | - "traefik.webservice.frontend.entryPoints=https"
90 | networks:
91 | - {{cookiecutter.traefik_public_network}}
92 | - default
93 |
94 | backend:
95 | image: '{{cookiecutter.docker_image_backend}}:stag'
96 | depends_on:
97 | - db
98 | environment:
99 | - SERVER_NAME={{cookiecutter.domain_staging}}
100 | - 'SECRET_KEY={{cookiecutter.secret_key}}'
101 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
102 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
103 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
104 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
105 | deploy:
106 | labels:
107 | - "traefik.frontend.rule=PathPrefix:/api"
108 | - "traefik.enable=true"
109 | - "traefik.port=80"
110 | - "traefik.tags={{cookiecutter.traefik_constraint_tag_staging}}"
111 | celeryworker:
112 | image: '{{cookiecutter.docker_image_celeryworker}}:stag'
113 | depends_on:
114 | - db
115 | - queue
116 | environment:
117 | - SENTRY_DSN={{cookiecutter.sentry_dsn}}
118 | - POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
119 | - FIRST_SUPERUSER={{cookiecutter.first_superuser}}
120 | - FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
121 | frontend:
122 | image: '{{cookiecutter.docker_image_frontend}}:stag'
123 | deploy:
124 | labels:
125 | - "traefik.frontend.rule=PathPrefix:/"
126 | - "traefik.enable=true"
127 | - "traefik.port=80"
128 | - "traefik.tags={{cookiecutter.traefik_constraint_tag_staging}}"
129 |
130 | volumes:
131 | app-db-data: {}
132 |
133 | networks:
134 | {{cookiecutter.traefik_public_network}}:
135 | external: true
136 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/backend/app/app/rest_tests/api/api_v1/user/test_user.py:
--------------------------------------------------------------------------------
1 | import random
2 | import requests
3 |
4 | from app.rest_tests.utils.user import random_user, user_authentication_headers
5 | from app.rest_tests.utils.group import get_group_users_and_admins, random_group_admin, random_group
6 | from app.core import config
7 |
8 |
9 | def test_get_users_superuser_me(server_api, superuser_token_headers):
10 | r = requests.get(
11 | f'{server_api}{config.API_V1_STR}/users/me',
12 | headers=superuser_token_headers)
13 | current_user = r.json()
14 |
15 | assert current_user['is_active'] == True
16 | assert current_user['email'] == config.FIRST_SUPERUSER
17 | assert current_user['is_superuser'] == True
18 |
19 |
20 | def test_create_user_existing_email(server_api, superuser_token_headers):
21 | post_data = random_user()
22 | r = requests.post(
23 | f'{server_api}{config.API_V1_STR}/users/',
24 | headers=superuser_token_headers,
25 | data=post_data)
26 |
27 | created_user = r.json()
28 |
29 | if r.status_code == 200:
30 | assert 'id' in created_user
31 | assert True
32 |
33 | r = requests.post(
34 | f'{server_api}{config.API_V1_STR}/users/',
35 | headers=superuser_token_headers,
36 | data=post_data)
37 |
38 | created_user = r.json()
39 |
40 | assert r.status_code == 400
41 | assert 'id' not in created_user
42 |
43 |
44 | def test_create_user_new_email(server_api, superuser_token_headers):
45 |
46 | post_data = random_user()
47 | r = requests.post(
48 | f'{server_api}{config.API_V1_STR}/users/',
49 | headers=superuser_token_headers,
50 | data=post_data)
51 |
52 | created_user = r.json()
53 |
54 | if r.status_code == 200:
55 | assert 'id' in created_user
56 | user_id = created_user['id']
57 | r = requests.get(
58 | f'{server_api}{config.API_V1_STR}/users/{user_id}',
59 | headers=superuser_token_headers,
60 | data=post_data)
61 |
62 | code, response = r.status_code, r.json()
63 |
64 | for key in response:
65 | assert created_user[key] == response[key]
66 |
67 | if r.status_code == 400:
68 | assert 'id' not in created_user
69 |
70 |
71 | def test_user_group_permissions(server_api, superuser_token_headers):
72 |
73 | new_user = random_user()
74 | r = requests.post(
75 | f'{server_api}{config.API_V1_STR}/users/',
76 | headers=superuser_token_headers,
77 | data=new_user)
78 |
79 | created_user = r.json()
80 |
81 | group_users, _ = get_group_users_and_admins(server_api,
82 | superuser_token_headers)
83 | group_id = random.choice(list(group_users.keys()))
84 |
85 | if r.status_code == 200:
86 | r = requests.post(
87 | f'{server_api}{config.API_V1_STR}/groups/{group_id}/admin_users/',
88 | headers=superuser_token_headers,
89 | data={
90 | 'user_id': created_user['id']
91 | })
92 |
93 | if r.status_code == 200:
94 | email, password = new_user['email'], new_user['password']
95 | auth = user_authentication_headers(server_api, email, password)
96 |
97 | allowed_users = group_users[group_id]
98 | other_users = [
99 | u for g in group_users for u in group_users[g] if g != group_id
100 | ]
101 |
102 | for user in allowed_users:
103 | user_id = user['id']
104 | r = requests.get(
105 | f'{server_api}{config.API_V1_STR}/users/{user_id}',
106 | headers=auth)
107 | assert r.status_code == 200
108 |
109 | for user in other_users:
110 | user_id = user['id']
111 | r = requests.get(
112 | f'{server_api}{config.API_V1_STR}/users/{user_id}',
113 | headers=auth)
114 | assert r.status_code == 400
115 |
116 |
117 | def test_create_user_by_superuser(server_api, superuser_token_headers):
118 |
119 | new_user = random_user()
120 | r = requests.post(
121 | f'{server_api}{config.API_V1_STR}/users/',
122 | headers=superuser_token_headers,
123 | data=new_user)
124 |
125 | expected_fields = [
126 | "id",
127 | "is_active",
128 | "created_at",
129 | "email",
130 | "first_name",
131 | "last_name",
132 | "group",
133 | "groups_admin",
134 | "is_superuser",
135 | ]
136 |
137 | created_user = r.json()
138 |
139 | for expected_field in expected_fields:
140 | assert expected_field in created_user
141 |
142 | assert r.status_code == 200
143 |
144 |
145 | def test_create_user_by_superuser_any_group(server_api,
146 | superuser_token_headers):
147 | new_group = random_group()
148 | r = requests.post(
149 | f'{server_api}{config.API_V1_STR}/groups/',
150 | headers=superuser_token_headers,
151 | data=new_group)
152 |
153 | created_group = r.json()
154 |
155 | group_id = created_group['id']
156 |
157 | new_user = random_user(group_id)
158 | r = requests.post(
159 | f'{server_api}{config.API_V1_STR}/users/',
160 | headers=superuser_token_headers,
161 | data=new_user)
162 |
163 | expected_fields = [
164 | "id",
165 | "is_active",
166 | "created_at",
167 | "email",
168 | "first_name",
169 | "last_name",
170 | "group",
171 | "groups_admin",
172 | "is_superuser",
173 | ]
174 |
175 | created_user = r.json()
176 |
177 | for expected_field in expected_fields:
178 | assert expected_field in created_user
179 |
180 | assert r.status_code == 200
181 |
182 |
183 | def test_create_user_by_group_admin(server_api, superuser_token_headers):
184 |
185 | _, group_admin_auth = random_group_admin(server_api,
186 | superuser_token_headers)
187 |
188 | new_user = random_user()
189 | r = requests.post(
190 | f'{server_api}{config.API_V1_STR}/users/',
191 | headers=group_admin_auth,
192 | data=new_user)
193 |
194 | assert r.status_code == 400
195 |
196 |
197 | def test_create_user_by_normal_user(server_api, superuser_token_headers):
198 |
199 | new_user = random_user()
200 | r = requests.post(
201 | f'{server_api}{config.API_V1_STR}/users/',
202 | headers=superuser_token_headers,
203 | data=new_user)
204 |
205 | created_user = r.json()
206 |
207 | if r.status_code == 200:
208 |
209 | email, password = new_user['email'], new_user['password']
210 | auth = user_authentication_headers(server_api, email, password)
211 |
212 | new_user = random_user()
213 | r = requests.post(
214 | f'{server_api}{config.API_V1_STR}/users/',
215 | headers=auth,
216 | data=new_user)
217 |
218 | assert r.status_code == 400
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Base Project
2 |
3 | ## Note
4 |
5 | As [I have been the only maintainer](https://github.com/tiangolo) of this project, I'll continue the development in this fork: https://github.com/tiangolo/full-stack.
6 |
7 | ---
8 |
9 | [](https://travis-ci.org/senseta-os/senseta-base-project)
10 |
11 | Generate a back end and front end stack using Python, including interactive API documentation.
12 |
13 | [](https://github.com/senseta-os/senseta-base-project)
14 |
15 | ## Features
16 |
17 | * Full Docker integration (Docker based)
18 | * Docker Swarm Mode deployment
19 | * Docker Compose integration and optimization for local development
20 | * Production ready Python web server using Nginx and uWSGI
21 | * Python Flask back end with:
22 | * Flask-apispec: Swagger live documentation generation
23 | * Marshmallow: model and data serialization (convert model objects to JSON)
24 | * Webargs: parse, validate and document inputs to the endpoint / route
25 | * Secure password hashing by default
26 | * JWT token authentication
27 | * SQLAlchemy models (independent of Flask extensions, so they can be used with Celery workers directly)
28 | * Basic starting models for users and groups (modify and remove as you need)
29 | * Alembic migrations
30 | * CORS (Cross Origin Resource Sharing)
31 | * Celery worker that can import and use models and code from the rest of the back end selectively (you don't have to install the complete app in each worker)
32 | * REST back end tests based on Pytest, integrated with Docker, so you can test the full API interaction, independent of the database. As it runs in Docker, it can build a new data store from scratch each time (so you can use ElasticSearch, MongoDB, CouchDB, or whatever you want, and just test that the API works)
33 | * Easy Python integration with Jupyter Kernels for remote or in-Docker development with extensions like Atom Hydrogen or Visual Studio Code Jupyter
34 | * Angular front end with:
35 | * Docker server based on Nginx
36 | * Docker multi-stage building, so you don't need to save or commit compiled code
37 | * Docker building integrated tests with Chrome Headless
38 | * PGAdmin for the PostgreSQL database; you can easily modify it to use PHPMyAdmin and MySQL instead
39 | * Swagger-UI for live interactive documentation
40 | * Flower for Celery jobs monitoring
41 | * Load balancing between front end and back end with Traefik, so you can have both under the same domain, separated by path, but served by different containers
42 | * Traefik integration, including Let's Encrypt HTTPS certificates automatic generation
43 | * GitLab CI (continuous integration), including front end and back end testing
44 |
45 | ## How to use it
46 |
47 | Go to the directory where you want to create your project and run:
48 |
49 | ```bash
50 | pip install cookiecutter
51 | cookiecutter https://github.com/senseta-os/base-project
52 | ```
53 |
54 | ### Generate passwords
55 |
56 | You will be asked to provide passwords and secret keys for several components. Open another terminal and run:
57 |
58 | ```bash
59 | openssl rand -hex 32
60 | # Outputs something like: 99d3b1f01aa639e4a76f4fc281fc834747a543720ba4c8a8648ba755aef9be7f
61 | ```
62 |
63 | Copy the output and use it as the password / secret key. Run the command again to generate another secure key.
64 |
65 |
66 | ### Input variables
67 |
68 | The generator (cookiecutter) will ask you for some data that you might want to have at hand before generating the project.
69 |
70 | The input variables, with their default values (some auto generated) are:
71 |
72 | * `project_name`: The name of the project
73 | * `project_slug`: The development friendly name of the project. By default, based on the project name
74 | * `domain_main`: The domain where the project will be deployed for production (from the branch `production`), used by the load balancer, back end, etc. By default, based on the project slug.
75 | * `domain_staging`: The domain where the project will be deployed for staging (before production), from the branch `master`. By default, based on the main domain.
76 | * `domain_branch`: The domain where the project will be deployed while on any other branch, probably a feature branch. By default, based on the main domain.
77 | * `domain_dev`: The domain to use while developing. It won't be deployed, but you should use it by modifying your local `hosts` file.
78 |
79 | * `docker_swarm_stack_name_main`: The name of the stack while deploying to Docker in Swarm mode for production. By default, based on the domain.
80 | * `docker_swarm_stack_name_staging`: The name of the stack while deploying to Docker in Swarm mode for staging. By default, based on the domain.
81 | * `docker_swarm_stack_name_branch`: The name of the stack while deploying to Docker in Swarm mode for feature branches. By default, based on the domain.
82 | * `secret_key`: Back end server secret key. Use the method above to generate it.
83 | * `first_superuser`: The first superuser generated; with it you will be able to create more users, etc. By default, based on the domain.
84 | * `first_superuser_password`: First superuser password. Use the method above to generate it.
85 |
86 | * `postgres_password`: Postgres database password. Use the method above to generate it. (You could easily modify it to use MySQL, MariaDB, etc).
87 | * `pgadmin_default_user`: PGAdmin default user, to log-in to the PGAdmin interface.
88 | * `pgadmin_default_user_password`: PGAdmin default user password. Generate it with the method above.
89 |
90 | * `traefik_constraint_tag`: The tag to be used by the internal Traefik load balancer (for example, to divide requests between back end and front end) for production. Used to separate this stack from any other stack you might have. This should identify each stack in each environment (production, staging, etc).
91 | * `traefik_constraint_tag_staging`: The Traefik tag to be used while on staging.
92 | * `traefik_constraint_tag_branch`: The Traefik tag to be used while on a feature branch.
93 |
94 | * `traefik_public_network`: This assumes you have another separate, publicly facing Traefik at the server / cluster level. This is the network that the main Traefik lives in.
95 | * `traefik_public_constraint_tag`: The tag that should be used by stack services that should communicate with the public.
96 |
97 | * `flower_auth`: Basic HTTP authentication for flower, in the form `user:password`. By default: "`root:changethis`".
98 |
99 | * `sentry_dsn`: Key URL (DSN) of Sentry, for live error reporting. If you are not using it yet, you should; it is open source. E.g.: `https://1234abcd:5678ef@sentry.example.com/30`.
100 |
101 | * `docker_image_prefix`: Prefix to use for Docker image names. If you are using GitLab Docker registry it would be based on your code repository. E.g.: `git.example.com:5005/development-team/my-awesome-project/`.
102 | * `docker_image_backend`: Docker image name for the back end. By default, it will be based on your Docker image prefix, e.g.: `git.example.com:5005/development-team/my-awesome-project/backend`. Depending on your environment, a different tag will be appended (`prod`, `stag`, `branch`), so the final image names used will be like: `git.example.com:5005/development-team/my-awesome-project/backend:prod`.
103 | * `docker_image_celeryworker`: Docker image for the celery worker. By default, based on your Docker image prefix.
104 | * `docker_image_frontend`: Docker image for the front end. By default, based on your Docker image prefix.
105 |
106 | ## How to deploy
107 |
108 | This stack can be adjusted and used with several deployment options that are compatible with Docker Compose, but it is designed to be used in a cluster controlled with pure Docker in Swarm Mode with a Traefik main load balancer proxy.
109 |
110 | Read the [**Guide to deploy a Docker Swarm Mode Cluster**](docker-swarm-cluster-deploy.md) in this repository.
111 |
112 | ## License
113 |
114 | This project is licensed under the terms of the MIT license.
115 |
--------------------------------------------------------------------------------
/{{cookiecutter.project_slug}}/README.md:
--------------------------------------------------------------------------------
1 | # {{cookiecutter.project_name}}
2 |
3 | ## Back end local development
4 |
5 | * Update your local `hosts` file, mapping `{{cookiecutter.domain_dev}}` to the IP `127.0.0.1` (your `localhost`). The `docker-compose.override.yml` file will set the environment variable `SERVER_NAME` to that host; otherwise you would receive 404 errors.
6 |
7 | * Modify your hosts file, probably in `/etc/hosts` to include:
8 |
9 | ```
10 | 0.0.0.0 {{cookiecutter.domain_dev}}
11 | ```
12 |
13 | ...that will make your browser talk to your locally running server.
14 |
15 | * Start the stack with Docker Compose:
16 |
17 | ```bash
18 | docker-compose up -d
19 | ```
20 |
21 | * Start an interactive session in the server container that is running an infinite loop doing nothing:
22 |
23 | ```bash
24 | docker-compose exec backend bash
25 | ```
26 |
27 | **Note**: Before the first run, make sure you have created at least one "revision" of your models and that you have created those models / tables in the database with `alembic`. See the section about migrations below for specific instructions.
28 |
29 | * Run the local debugging Flask server; the whole command is stored in the `RUN` environment variable:
30 |
31 | ```bash
32 | $RUN
33 | ```
34 |
35 | * Your OS will handle redirecting `{{cookiecutter.domain_dev}}` to your local stack. So, in your browser, go to: http://{{cookiecutter.domain_dev}}.
36 |
37 | Add and modify SQLAlchemy models to `./backend/app/app/models/`, Marshmallow schemas to `./backend/app/app/schemas` and API endpoints to `./backend/app/app/api/`.
38 |
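For illustration, here is a minimal sketch of what a new model and its Marshmallow schema could look like. The `Item` / `ItemSchema` names and fields are hypothetical; the sketch assumes the declarative `Base` defined in `./backend/app/app/core/database.py`, and the project's own schemas in `./backend/app/app/schemas/` may add further conventions:

```python
# Hypothetical example; not part of the generated project.
from sqlalchemy import Column, Integer, String
from marshmallow import Schema, fields

from app.core.database import Base  # declarative Base from ./backend/app/app/core/database.py


class Item(Base):
    __tablename__ = 'item'

    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, index=True)
    description = Column(String)


class ItemSchema(Schema):
    # Serialization schema used to convert Item objects to / from JSON.
    id = fields.Int(dump_only=True)
    title = fields.Str(required=True)
    description = fields.Str()
```
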
39 | Add and modify tasks to the Celery worker in `./backend/app/app/worker.py`.
40 |
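A new task could look roughly like the sketch below. It assumes the Celery instance is exposed as `celery_app` in `./backend/app/app/core/celery_app.py` (the module exists in this project, but check the actual name it exports); the `add` task itself is just a placeholder:

```python
# Hypothetical example task; adjust the import to match the actual
# contents of ./backend/app/app/core/celery_app.py.
from app.core.celery_app import celery_app


@celery_app.task
def add(x, y):
    # Trivial task, only to illustrate decorator-based task registration.
    return x + y
```
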
41 | If you need to install any additional package to the worker, add it to the file `./backend/app/Dockerfile-celery-worker`.
42 |
43 | The `docker-compose.override.yml` file for local development mounts your app directory as a host volume inside the container for rapid iteration, so the code you edit locally is the same code running inside the container. You only have to restart the server; you don't need to rebuild the image to test a change. Make sure you use this only for local development: your final production images should be built with the latest version of your code and should not depend on mounted host volumes.
44 |
45 |
46 | ### Back end tests
47 |
48 | To test the back end run:
49 |
50 | ```bash
51 | # Build the testing stack
52 | docker-compose -f docker-compose.test.yml build
53 | # Start the testing stack
54 | docker-compose -f docker-compose.test.yml up -d
55 | # Run the REST tests
56 | docker-compose -f docker-compose.test.yml exec -T backend-rest-tests pytest
57 | # Stop and eliminate the testing stack
58 | docker-compose -f docker-compose.test.yml down -v
59 | ```
60 |
61 | The tests run with Pytest, modify and add tests to `./backend/app/app/rest_tests/`.
62 |
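As a sketch, a new test in the style of the existing ones (for example `test_user.py`) reuses the `server_api` and `superuser_token_headers` fixtures from the tests' `conftest.py` and calls the API over HTTP with `requests`:

```python
# Minimal sketch of a REST test, following the style of the existing tests.
import requests

from app.core import config


def test_get_current_superuser(server_api, superuser_token_headers):
    # Call the running backend through the API, as the first superuser.
    r = requests.get(
        f'{server_api}{config.API_V1_STR}/users/me',
        headers=superuser_token_headers)
    assert r.status_code == 200
    assert r.json()['email'] == config.FIRST_SUPERUSER
```
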
63 | If you need to install any additional package for the REST tests, add it to the file `./backend/app/Dockerfile-rest-tests`.
64 |
65 | If you use GitLab CI the tests will run automatically.
66 |
67 |
68 | ### Migrations
69 |
70 | As the `docker-compose.override.yml` file for local development mounts your app directory as a volume inside the container, you can also run the migrations with `alembic` commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository.
71 |
72 | Make sure you create at least one "revision" of your models and that you "upgrade" your database with that revision at least once, as this is what will create the tables in your database. Otherwise, your application won't run.
73 |
74 | * Start an interactive session in the server container that is running an infinite loop doing nothing:
75 |
76 | ```bash
77 | docker-compose exec backend bash
78 | ```
79 |
80 | * After changing a model (for example, adding a column) or when you are just starting, inside the container, create a revision, e.g.:
81 |
82 | ```bash
83 | alembic revision --autogenerate -m "Add column last_name to User model"
84 | ```
85 |
86 | * Commit to the git repository the files generated in the alembic directory.
87 |
88 | * After creating the revision, run the migration in the database (this is what will actually change the database):
89 |
90 | ```bash
91 | alembic upgrade head
92 | ```
93 |
94 | If you don't want to use migrations at all, uncomment the line in the file at `./backend/app/app/core/database.py` with:
95 |
96 | ```python
97 | Base.metadata.create_all(bind=engine)
98 | ```
99 |
100 | ## Front end development
101 |
102 | * Enter the `frontend` directory, install the NPM packages and start it with the `npm` scripts:
103 |
104 | ```bash
105 | cd frontend
106 | npm install
107 | npm run start
108 | ```
109 |
110 | Check the file `package.json` to see other available options.
111 |
112 | ## Deployment
113 |
114 | To deploy the stack to a Docker Swarm mode cluster, run, e.g.:
115 |
116 | ```bash
117 | docker stack deploy -c docker-compose.prod.yml --with-registry-auth {{cookiecutter.docker_swarm_stack_name_main}}
118 | ```
119 |
120 | Use the corresponding Docker Compose file for the environment you are deploying to.
121 |
122 | If you use GitLab CI, it will automatically deploy it.
123 |
124 | GitLab CI is configured assuming 3 environments following GitLab flow:
125 |
126 | * `prod` (production) from the `production` branch.
127 | * `stag` (staging) from the `master` branch.
128 | * `branch`, from any other branch (a feature in development).
129 |
130 |
131 | ## URLs
132 |
133 | These are the URLs that will be used and generated by the project.
134 |
135 | ### Production
136 |
137 | Production URLs, from the branch `production`.
138 |
139 | Front end: https://{{cookiecutter.domain_main}}
140 |
141 | Back end: https://{{cookiecutter.domain_main}}/api/
142 |
143 | Swagger UI: https://{{cookiecutter.domain_main}}/swagger/
144 |
145 | PGAdmin: https://pgadmin.{{cookiecutter.domain_main}}
146 |
147 | Flower: https://flower.{{cookiecutter.domain_main}}
148 |
149 | ### Staging
150 |
151 | Staging URLs, from the branch `master`.
152 |
153 | Front end: https://{{cookiecutter.domain_staging}}
154 |
155 | Back end: https://{{cookiecutter.domain_staging}}/api/
156 |
157 | Swagger UI: https://{{cookiecutter.domain_staging}}/swagger/
158 |
159 | PGAdmin: https://pgadmin.{{cookiecutter.domain_staging}}
160 |
161 | Flower: https://flower.{{cookiecutter.domain_staging}}
162 |
163 | ### Branch (feature branches)
164 |
165 | Feature branch URLs, from any other branch.
166 |
167 | Front end: https://{{cookiecutter.domain_branch}}
168 |
169 | Back end: https://{{cookiecutter.domain_branch}}/api/
170 |
171 | Swagger UI: https://{{cookiecutter.domain_branch}}/swagger/
172 |
173 | PGAdmin: https://pgadmin.{{cookiecutter.domain_branch}}
174 |
175 | Flower: https://flower.{{cookiecutter.domain_branch}}
176 |
177 | ### Development
178 |
179 | Development URLs, for local development, assuming you have modified your `hosts` file as described above.
180 |
181 | Front end: http://{{cookiecutter.domain_dev}}
182 |
183 | Back end: http://{{cookiecutter.domain_dev}}/api/
184 |
185 | Swagger UI: http://{{cookiecutter.domain_dev}}/swagger/
186 |
187 | PGAdmin: http://{{cookiecutter.domain_dev}}:5050
188 |
189 | Flower: http://{{cookiecutter.domain_dev}}:5555
190 |
191 | Traefik UI: http://{{cookiecutter.domain_dev}}:8080
192 |
193 | ## Project Cookiecutter variables used during generation
194 |
195 | * `project_name`: {{cookiecutter.project_name}}
196 | * `project_slug`: {{cookiecutter.project_slug}}
197 | * `domain_main`: {{cookiecutter.domain_main}}
198 | * `domain_staging`: {{cookiecutter.domain_staging}}
199 | * `domain_branch`: {{cookiecutter.domain_branch}}
200 | * `domain_dev`: {{cookiecutter.domain_dev}}
201 | * `docker_swarm_stack_name_main`: {{cookiecutter.docker_swarm_stack_name_main}}
202 | * `docker_swarm_stack_name_staging`: {{cookiecutter.docker_swarm_stack_name_staging}}
203 | * `docker_swarm_stack_name_branch`: {{cookiecutter.docker_swarm_stack_name_branch}}
204 | * `secret_key`: {{cookiecutter.secret_key}}
205 | * `first_superuser`: {{cookiecutter.first_superuser}}
206 | * `first_superuser_password`: {{cookiecutter.first_superuser_password}}
207 | * `postgres_password`: {{cookiecutter.postgres_password}}
208 | * `pgadmin_default_user`: {{cookiecutter.pgadmin_default_user}}
209 | * `pgadmin_default_user_password`: {{cookiecutter.pgadmin_default_user_password}}
210 | * `traefik_constraint_tag`: {{cookiecutter.traefik_constraint_tag}}
211 | * `traefik_constraint_tag_staging`: {{cookiecutter.traefik_constraint_tag_staging}}
212 | * `traefik_constraint_tag_branch`: {{cookiecutter.traefik_constraint_tag_branch}}
213 | * `traefik_public_network`: {{cookiecutter.traefik_public_network}}
214 | * `traefik_public_constraint_tag`: {{cookiecutter.traefik_public_constraint_tag}}
215 | * `flower_auth`: {{cookiecutter.flower_auth}}
216 | * `sentry_dsn`: {{cookiecutter.sentry_dsn}}
217 | * `docker_image_prefix`: {{cookiecutter.docker_image_prefix}}
218 | * `docker_image_backend`: {{cookiecutter.docker_image_backend}}
219 | * `docker_image_celeryworker`: {{cookiecutter.docker_image_celeryworker}}
220 | * `docker_image_frontend`: {{cookiecutter.docker_image_frontend}}
221 |
--------------------------------------------------------------------------------
/docker-swarm-cluster-deploy.md:
--------------------------------------------------------------------------------
1 | # Cluster setup with Docker Swarm Mode and Traefik
2 |
3 | This guide shows you how to create a cluster of Linux servers managed with Docker Swarm mode to deploy your projects.
4 |
5 | It also shows how to set up an integrated main Traefik load balancer / proxy to receive incoming connections, re-transmit communication to Docker containers based on the domains, generate TLS / SSL certificates (for HTTPS) with Let's Encrypt and handle HTTPS.
6 |
7 | ## Install a new Linux server with Docker
8 |
9 | * Create a new remote server (VPS).
10 | * If you can create a `swap` disk partition, do it based on the [Ubuntu FAQ for swap partitions](https://help.ubuntu.com/community/SwapFaq#How_much_swap_do_I_need.3F).
11 | * Deploy the latest Ubuntu LTS version image.
12 | * Connect to it via SSH, e.g.:
13 |
14 | ```bash
15 | ssh root@172.173.174.175
16 | ```
17 |
18 | * Define a server name using a subdomain of a domain you own, for example `dog.example.com`.
19 | * Create a temporary environment variable with the name of the host, to be used later, e.g.:
20 |
21 | ```bash
22 | export USE_HOSTNAME=dog.example.com
23 | ```
24 |
25 | * Set up the server `hostname`:
26 |
27 | ```bash
28 | # Set up the server hostname
29 | echo $USE_HOSTNAME > /etc/hostname
30 | hostname -F /etc/hostname
31 | ```
32 |
33 | * Update packages:
34 |
35 | ```bash
36 | # Install the latest updates
37 | apt-get update
38 | apt-get upgrade -y
39 | ```
40 |
41 | * Install Docker following the official guide: https://docs.docker.com/install/
42 | * Alternatively, run the official convenience script, but keep in mind that it will install the `edge` version:
43 |
44 | ```bash
45 | # Download Docker
46 | curl -fsSL get.docker.com -o get-docker.sh
47 | # Install Docker
48 | sh get-docker.sh
49 | # Remove Docker install script
50 | rm get-docker.sh
51 | ```
52 |
53 | * Optionally, install [Netdata](http://my-netdata.io/) for server monitoring:
54 |
55 | ```bash
56 | # Install Netdata
57 | bash <(curl -Ss https://my-netdata.io/kickstart.sh)
58 | ```
59 |
60 | * Generate and print SSH keys:
61 |
62 | ```bash
63 | # Generate SSH keys
64 | ssh-keygen -f $HOME/.ssh/id_rsa -t rsa -N ''
65 | # Print SSH public key
66 | cat ~/.ssh/id_rsa.pub
67 | ```
68 |
69 | * Copy the key printed on screen. Something like:
70 |
71 | ```
72 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDdhjLuVOqpK+4+Fn7Nb0zrLJlbnAWqTjInB1vldFJX2J0Vmyss90qth7k/nhrAWX98cDey+dxcX35DYHRe9tsniaADKcnyYGUVY9yitswhZjeGVM/p8qdu6Qin2Oc+ZK9D8HGs3jDVxDG58UzoQGgiRvNsFZ2hhykK9oknO2gAiDcZiPW/UgbJyrlKdIps6ZO2qrCpajSyJDGVf7hDg7HepGv6YA8e4Tpf5iEXdHsm/9wRIL+dAHK4Kau53+D5yGo9Tmp3/H86DBaFrzA4x/Q556aOe/EvBbxEZdtaCXT5JVjhxLYr8eeg9xrg5ic9W2xj2xfdTT8jucLoPnh434+9 user@examplelaptop
73 | ```
74 |
75 | * You can import that public key into your Git server (GitLab, GitHub, Bitbucket) as a deploy key. That way, you will be able to pull your code to that server easily.
76 |
77 |
78 | ## Set up swarm mode
79 |
80 | In Docker Swarm Mode you have one or more "manager" nodes and one or more "worker" nodes (that can be the same manager nodes).
81 |
82 | The first step is to configure one (or more) manager nodes.
83 |
84 | * On the main manager node, run:
85 |
86 | ```bash
87 | docker swarm init
88 | ```
89 |
90 | * On the main manager node, for each additional manager node you want to set up, run:
91 |
92 | ```bash
93 | docker swarm join-token manager
94 | ```
95 |
96 | * Copy the result and paste it in the additional manager node's terminal; it will be something like:
97 |
98 | ```bash
99 | docker swarm join --token SWMTKN-1-5tl7yaasdfd9qt9j0easdfnml4lqbosbasf14p13-f3hem9ckmkhasdf3idrzk5gz 172.173.174.175:2377
100 | ```
101 |
102 | * On the main manager node, for each additional worker node you want to set up, run:
103 |
104 | ```bash
105 | docker swarm join-token worker
106 | ```
107 |
108 | * Copy the result and paste it in the additional worker node's terminal; it will be something like:
109 |
110 | ```bash
111 | docker swarm join --token SWMTKN-1-5tl7ya98erd9qtasdfml4lqbosbhfqv3asdf4p13-dzw6ugasdfk0arn0 172.173.174.175:2377
112 | ```
113 |
114 | ## Traefik
115 |
116 | Set up a main load balancer with Traefik that handles the public connections and the Let's Encrypt HTTPS certificates.
117 |
118 | * Create a network that will be shared with Traefik and the containers that should be accessible from the outside, with:
119 |
120 | ```bash
121 | docker network create --driver=overlay traefik-public
122 | ```
123 |
124 | * Create an environment variable with your email, to be used for the generation of Let's Encrypt certificates:
125 |
126 | ```bash
127 | export EMAIL=admin@example.com
128 | ```
129 |
130 | * Create a Traefik service:
131 |
132 | ```bash
133 | docker service create \
134 | --name traefik \
135 | --constraint=node.role==manager \
136 | --publish 80:80 \
137 | --publish 8080:8080 \
138 | --publish 443:443 \
139 | --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
140 | --mount type=volume,source=traefik-public-certificates,target=/certificates \
141 | --network traefik-public \
142 | traefik:v1.5 \
143 | --docker \
144 | --docker.swarmmode \
145 | --docker.watch \
146 | --docker.exposedbydefault=false \
147 | --constraints=tag==traefik-public \
148 | --entrypoints='Name:http Address::80' \
149 | --entrypoints='Name:https Address::443 TLS' \
150 | --acme \
151 | --acme.email=$EMAIL \
152 | --acme.storage=/certificates/acme.json \
153 | --acme.entryPoint=https \
154 | --acme.httpChallenge.entryPoint=http\
155 | --acme.onhostrule=true \
156 | --acme.acmelogging=true \
157 | --logLevel=DEBUG \
158 | --accessLog \
159 | --web
160 | ```
161 |
162 | The previous command explained:
163 |
164 | * `docker service create`: create a Docker Swarm mode service
165 | * `--name traefik`: name the service "traefik"
166 | * `--constraint=node.role==manager`: make it run on a Swarm Manager node
167 | * `--publish 80:80`: listen on port 80 - HTTP
168 | * `--publish 8080:8080`: listen on port 8080 - HTTP for Traefik web UI
169 | * `--publish 443:443`: listen on port 443 - HTTPS
170 | * `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock`: communicate with Docker, to read labels, etc.
171 | * `--mount type=volume,source=traefik-public-certificates,target=/certificates`: create a volume to store TLS certificates
172 | * `--network traefik-public`: listen to the specific network traefik-public
173 | * `traefik:v1.5`: use the image traefik:v1.5
174 | * `--docker`: enable Docker
175 | * `--docker.swarmmode`: enable Docker Swarm Mode
176 | * `--docker.watch`: enable "watch", so it reloads its config based on new stacks and labels
177 | * `--docker.exposedbydefault=false`: don't expose all the services, only services with traefik.enable=true
178 | * `--constraints=tag==traefik-public`: only show services with traefik.tag=traefik-public, to isolate from possible intra-stack traefik instances
179 | * `--entrypoints='Name:http Address::80'`: create an entrypoint http, on port 80
180 | * `--entrypoints='Name:https Address::443 TLS'`: create an entrypoint https, on port 443 with TLS enabled
181 | * `--acme`: enable Let's encrypt
182 | * `--acme.email=$EMAIL`: let's encrypt email, using the environment variable
183 | * `--acme.storage=/certificates/acme.json`: where to store the Let's encrypt TLS certificates - in the mapped volume
184 | * `--acme.entryPoint=https`: the entrypoint for Let's encrypt - created above
185 | * `--acme.onhostrule=true`: get new certificates automatically with host rules: "traefik.frontend.rule=Host:web.example.com"
186 | * `--acme.acmelogging=true`: log Let's encrypt activity - to debug when and if it gets certificates
187 | * `--logLevel=DEBUG`: log everything, to debug configurations and config reloads
188 | * `--web`: enable the web UI, at port 8080
189 |
190 | To check if it worked, check the logs:
191 |
192 | ```bash
193 | docker service logs traefik
194 | # To make it scrollable with `less`, run:
195 | # docker service logs traefik | less
196 | ```
197 |
198 |
199 | ## Portainer
200 |
201 | Create a Portainer service integrated with Traefik that gives you a web UI to see the state of your Docker services:
202 |
203 | ```bash
204 | docker service create \
205 | --name portainer \
206 | --label "traefik.frontend.rule=Host:portainer.$USE_HOSTNAME" \
207 | --label "traefik.enable=true" \
208 | --label "traefik.port=9000" \
209 | --label "traefik.tags=traefik-public" \
210 | --label "traefik.docker.network=traefik-public" \
211 | --label "traefik.frontend.entryPoints=http,https" \
212 | --constraint 'node.role==manager' \
213 | --network traefik-public \
214 | --mount type=bind,src=//var/run/docker.sock,dst=/var/run/docker.sock \
215 | portainer/portainer \
216 | -H unix:///var/run/docker.sock
217 | ```
218 |
219 |
220 | ## GitLab Runner in Docker
221 |
222 | If you use GitLab and want to integrate Continuous Integration / Continuous Deployment, you can follow this section to install the GitLab runner.
223 |
224 | There is a sub-section with how to install it in Docker Swarm mode and one in Docker standalone mode.
225 |
226 |
227 | ### Create the GitLab Runner in Docker Swarm mode
228 |
229 | To install a GitLab runner in Docker Swarm mode run:
230 |
231 | ```bash
232 | docker service create \
233 | --name gitlab-runner \
234 | --constraint 'node.role==manager' \
235 | --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
236 | --mount src=gitlab-runner-config,dst=/etc/gitlab-runner \
237 | gitlab/gitlab-runner:latest
238 | ```
239 |
240 | After that, check in which node the service is running with:
241 |
242 | ```bash
243 | docker service ps gitlab-runner
244 | ```
245 |
246 | You will get an output like:
247 |
248 | ```
249 | root@dog:~/code# docker service ps gitlab-runner
250 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
251 | eybbh93ll0iw gitlab-runner.1 gitlab/gitlab-runner:latest cat.example.com Running Running 33 seconds ago
252 | ```
253 |
254 | Then SSH to the node running it (in the example above it would be `cat.example.com`), e.g.:
255 |
256 | ```bash
257 | ssh root@cat.example.com
258 | ```
259 |
260 | On that node, run a `docker exec` command to register it; use auto-completion (with `tab`) to fill in the name of the container, e.g.:
261 |
262 | ```bash
263 | docker exec -it gitlab-ru
264 | ```
265 |
266 | ...and then hit `tab`.
267 |
268 | Complete the command with the [GitLab Runner registration setup](https://docs.gitlab.com/runner/register/index.html#docker), e.g.:
269 |
270 | ```bash
271 | docker exec -it gitlab-runner.1.eybbh93lasdfvvnasdfh7 bash
272 | ```
273 |
274 | Continue below in the section **Install the GitLab Runner**.
275 |
276 | ### Create the GitLab Runner in Docker standalone mode
277 |
278 | You might want to run a GitLab runner in standalone Docker, for example, to run the tests on a standalone server and deploy to a Docker Swarm mode cluster.
279 |
280 | To install a GitLab runner in a standalone Docker run:
281 |
282 | ```bash
283 | docker run -d \
284 | --name gitlab-runner \
285 | --restart always \
286 | -v gitlab-runner:/etc/gitlab-runner \
287 | -v /var/run/docker.sock:/var/run/docker.sock \
288 | gitlab/gitlab-runner:latest
289 | ```
290 |
291 | Then, enter into that container:
292 |
293 | ```bash
294 | docker exec -it gitlab-runner bash
295 | ```
296 |
297 | Continue below in the section **Install the GitLab Runner**.
298 |
299 | ### Install the GitLab Runner
300 |
301 | * Go to the GitLab "Admin Area -> Runners" section.
302 | * Get the URL and create a variable in your Docker Manager's Terminal, e.g.:
303 |
304 | ```bash
305 | export GITLAB_URL=https://gitlab.example.com/
306 | ```
307 |
308 | * Get the registration token and create a variable in your Docker Manager's Terminal, e.g.:
309 |
310 | ```bash
311 | export GITLAB_TOKEN=WYasdfJp4sdfasdf1234
312 | ```
313 |
314 | * Run the next command, editing the name and tags as you need:
315 |
316 | ```bash
317 | gitlab-runner \
318 | register -n \
319 | --name "Docker Runner" \
320 | --executor docker \
321 | --docker-image docker:latest \
322 | --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
323 | --url $GITLAB_URL \
324 | --registration-token $GITLAB_TOKEN \
325 | --tag-list dog-cat-cluster,stag,prod
326 | ```
327 |
328 | * You can edit the runner more from the GitLab admin section.
329 |
330 |
331 | ## Deploy a stack
332 |
333 | * Check a Docker Compose file like `docker-compose.prod.yml` with your stack.
334 | * The services that should be exposed to the public network should have the `traefik-public` network besides the `default` network.
335 | * Deploy the stack with, e.g.:
336 |
337 | ```bash
338 | docker stack deploy -c docker-compose.yml name-of-my-stack
339 | ```
--------------------------------------------------------------------------------