├── tweetpic ├── requirements.txt ├── function │ ├── requirements.txt │ └── handler.py ├── index.py └── Dockerfile ├── bucketset ├── requirements.txt └── handler.py ├── tweetlistener ├── .gitignore ├── requirements.txt ├── build.sh ├── README.md ├── deploy.sh ├── Dockerfile ├── LICENSE └── index.py ├── requirements.txt ├── deploy.sh ├── normalise-color-icon.png ├── colorisation-architecture.png ├── .gitignore ├── split_frames.py ├── bucketset.yml ├── .travis.yml ├── stack.yml ├── getframes ├── test.py └── Dockerfile ├── colorization ├── index.py ├── Dockerfile ├── README.md └── handler.py ├── normalisecolor └── Dockerfile ├── twitter_stack.yml ├── LICENSE ├── colourise_frames.py ├── CONTRIBUTING.md └── README.md /tweetpic/requirements.txt: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /bucketset/requirements.txt: -------------------------------------------------------------------------------- 1 | minio 2 | -------------------------------------------------------------------------------- /tweetlistener/.gitignore: -------------------------------------------------------------------------------- 1 | env.list 2 | .DS_Store 3 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | opencv-python 2 | pycurl 3 | pillow 4 | -------------------------------------------------------------------------------- /deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | faas-cli deploy -f stack.yml --replace 4 | -------------------------------------------------------------------------------- /tweetlistener/requirements.txt: -------------------------------------------------------------------------------- 1 | python-twitter 2 | minio 3 | tweepy 4 | -------------------------------------------------------------------------------- /tweetpic/function/requirements.txt: -------------------------------------------------------------------------------- 1 | python-twitter 2 | minio 3 | Pillow 4 | requests 5 | -------------------------------------------------------------------------------- /tweetlistener/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | docker build -t alexellis2/tweetlistener:0.2.1 . 
4 | -------------------------------------------------------------------------------- /tweetlistener/README.md: -------------------------------------------------------------------------------- 1 | # tweetlistener 2 | Listen for tweets with certain hashtags & trigger services 3 | -------------------------------------------------------------------------------- /normalise-color-icon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexellis/repaint-the-past/HEAD/normalise-color-icon.png -------------------------------------------------------------------------------- /colorisation-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexellis/repaint-the-past/HEAD/colorisation-architecture.png -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | tweetlistener.envs 2 | credentials.yml 3 | .DS_Store 4 | template 5 | *.jpg 6 | *.mov 7 | *.mp4 8 | build 9 | .idea 10 | .vscode 11 | -------------------------------------------------------------------------------- /tweetlistener/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | docker service create --env-file env.list --network func_functions --name tweetlistener alexellis2/tweetlistener:0.2.1 4 | -------------------------------------------------------------------------------- /tweetlistener/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:2.7-alpine 2 | 3 | WORKDIR /root/ 4 | 5 | COPY requirements.txt . 6 | RUN pip install -r requirements.txt 7 | COPY index.py . 
8 | 9 | CMD ["python", "-u", "index.py"] 10 | -------------------------------------------------------------------------------- /split_frames.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | video = cv2.VideoCapture('out.mov') 3 | f_n = 1 4 | 5 | if video.isOpened(): 6 | fin, frame = video.read() 7 | else: 8 | fin = False 9 | 10 | while fin: 11 | cv2.imwrite('frames/' + str(f_n).zfill(5) + '.jpg', frame) 12 | f_n += 1 13 | cv2.waitKey(1) 14 | fin, frame = video.read() 15 | -------------------------------------------------------------------------------- /bucketset.yml: -------------------------------------------------------------------------------- 1 | provider: 2 | name: faas 3 | gateway: http://localhost:8080 4 | 5 | functions: 6 | bucketset: 7 | lang: python 8 | handler: ./bucketset 9 | image: bucketset 10 | environment: 11 | minio_secret_key: vMIoCaBu9sSg4ODrSkbD9CGXtq0TTpq6kq7psLuE 12 | minio_access_key: ZBPIIAOCJRY9QLUVEHQO 13 | minio_authority: 127.0.0.1:9000 14 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | sudo: required 2 | 3 | services: 4 | - docker 5 | 6 | language: python 7 | python: 8 | - "2.7" 9 | 10 | before_install: 11 | # installing newer docker 12 | - sudo apt-get update 13 | - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce 14 | 15 | # installing the faas-cli 16 | - curl -sSL https://cli.openfaas.com | sudo sh 17 | - docker swarm init 18 | - faas-cli build -f stack.yml 19 | 20 | after_script: 21 | - docker swarm leave -f 22 | 23 | -------------------------------------------------------------------------------- /stack.yml: -------------------------------------------------------------------------------- 1 | provider: 2 | name: faas 3 | gateway: http://localhost:8080 4 | 5 | functions: 6 | colorization: 7 | lang: Dockerfile 8 | handler: ./colorization 9 | image: alexellis2/openfaas-colorization:0.4.1 10 | environment: 11 | read_timeout: 60 12 | write_timeout: 60 13 | write_debug: true 14 | # environment_file: 15 | # - credentials.yml 16 | 17 | normalisecolor: 18 | lang: Dockerfile 19 | handler: ./normalisecolor 20 | image: alexellis2/normalisecolor:0.2.1 21 | 22 | -------------------------------------------------------------------------------- /getframes/test.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import skvideo.io 3 | import sys 4 | 5 | video = skvideo.io.VideoCapture(sys.argv[1]) 6 | 7 | fn = 1 8 | 9 | if video.isOpened(): 10 | fin, frame = video.read() 11 | else: 12 | print('Failed to open file') 13 | fin = False 14 | 15 | while fin: 16 | print('Extracting frame %i' % fn) 17 | 18 | fin, frame = video.read() 19 | 20 | filename = 'frames/' + str(fn).zfill(5) + '.jpg' 21 | 22 | cv2.imwrite(filename, frame) 23 | 24 | fn += 1 25 | 26 | cv2.waitKey(1) 27 | video.release() 28 | 29 | print('%i frames successfully extracted' % fn) 30 | 31 | -------------------------------------------------------------------------------- /tweetpic/index.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Alex Ellis 2017. All rights reserved. 2 | # Licensed under the MIT license. See LICENSE file in the project root for full license information.
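# Note (added): this script is the classic OpenFaaS watchdog entrypoint for the
# tweetpic function (tweetpic/Dockerfile sets ENV fprocess="python index.py" and
# runs fwatchdog). fwatchdog forks the fprocess per request, pipes the HTTP body
# to it on stdin and returns whatever is printed to stdout, which is why
# get_stdin() below reads the whole request before handing it to handler.handle().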
3 | 4 | import sys 5 | from function import handler 6 | 7 | def get_stdin(): 8 | buf = "" 9 | for line in sys.stdin: 10 | buf = buf + line 11 | return buf 12 | 13 | if(__name__ == "__main__"): 14 | st = get_stdin() 15 | res = handler.handle(st) 16 | if (res['status_id']): 17 | print("Replied to "+ str(res['reply_to'])) 18 | else: 19 | print("Tweetback failed to" + str(res['reply_to'])) 20 | 21 | -------------------------------------------------------------------------------- /colorization/index.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import handler 3 | import json 4 | import os 5 | 6 | def get_stdin(): 7 | buf = "" 8 | for line in sys.stdin: 9 | buf = buf + line 10 | return buf 11 | 12 | def read_head(): 13 | buf = "" 14 | while(True): 15 | line = sys.stdin.readline() 16 | buf += line 17 | 18 | if line == "\r\n": 19 | break 20 | return buf 21 | 22 | if(__name__ == "__main__"): 23 | binary_mode = os.getenv('minio_authority') == None 24 | if binary_mode == True: 25 | st = sys.stdin.read() 26 | handler.handle(st) 27 | else: 28 | st = get_stdin() 29 | print(json.dumps(handler.handle(st))) 30 | -------------------------------------------------------------------------------- /normalisecolor/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:3.9 2 | WORKDIR /root/ 3 | 4 | # Use any image as your base image, or "scratch" 5 | # Add fwatchdog binary via https://github.com/openfaas/faas/releases/ 6 | # Then set fprocess to the process you want to invoke per request - i.e. "cat" or "my_binary" 7 | 8 | RUN apk add --no-cache curl bash imagemagick 9 | 10 | RUN curl -sL https://github.com/openfaas/faas/releases/download/0.13.4/fwatchdog > /usr/bin/fwatchdog \ 11 | && chmod +x /usr/bin/fwatchdog 12 | 13 | # Populate example here - i.e. 
"cat", "sha512sum" or "node index.js" 14 | ENV fprocess="convert - -flatten +matte -separate -normalize -combine fd:1" 15 | 16 | HEALTHCHECK --interval=5s CMD [ -e /tmp/.lock ] || exit 1 17 | CMD ["fwatchdog"] 18 | -------------------------------------------------------------------------------- /twitter_stack.yml: -------------------------------------------------------------------------------- 1 | provider: 2 | name: faas 3 | gateway: http://localhost:8080 4 | 5 | functions: 6 | colorization: 7 | lang: Dockerfile 8 | handler: ./colorization 9 | image: alexellis2/openfaas-colorization:0.4.0 10 | environment: 11 | read_timeout: 300 12 | write_timeout: 300 13 | write_debug: true 14 | environment_file: 15 | - credentials.yml 16 | 17 | tweetpic: 18 | lang: Dockerfile 19 | handler: ./tweetpic 20 | image: alexellis2/openfaas-tweetpic:0.3.0 21 | environment: 22 | read_timeout: 60 23 | write_timeout: 60 24 | environment_file: 25 | - credentials.yml 26 | 27 | normalisecolor: 28 | lang: Dockerfile 29 | handler: ./normalisecolor 30 | image: alexellis2/normalisecolor:0.2 31 | 32 | -------------------------------------------------------------------------------- /bucketset/handler.py: -------------------------------------------------------------------------------- 1 | from minio import Minio 2 | from minio.error import ResponseError 3 | import json 4 | import os 5 | 6 | # Request 7 | # { 8 | # "inbox": "aellis_catjump_inbox", 9 | # "outbox": "aellis_catjump_outbox" 10 | # } 11 | def handle(st): 12 | req = json.loads(st) 13 | mc = Minio(os.environ['minio_authority'], 14 | access_key=os.environ['minio_access_key'], 15 | secret_key=os.environ['minio_secret_key'], 16 | secure=False) 17 | try: 18 | res = mc.make_bucket(req["inbox"] , location='us-east-1') 19 | except ResponseError as err: 20 | print(err) 21 | try: 22 | res = mc.make_bucket(req["outbox"] , location='us-east-1') 23 | except ResponseError as err: 24 | print(err) 25 | 26 | # Response 27 | # Empty - success 28 | # Non-empty - failure 29 | -------------------------------------------------------------------------------- /getframes/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:2.7 2 | RUN apt-get update -qy \ 3 | && apt-get install -qy \ 4 | nano wget build-essential libmp3lame-dev \ 5 | libvorbis-dev libtheora-dev libspeex-dev \ 6 | yasm pkg-config libopenjpeg-dev libx264-dev libav-tools 7 | RUN pip install numpy \ 8 | && pip install opencv-python scikit-video 9 | 10 | WORKDIR /root 11 | RUN wget http://ffmpeg.org/releases/ffmpeg-3.4.tar.bz2 \ 12 | && tar xvjf ffmpeg-3.4.tar.bz2 13 | 14 | WORKDIR /root/ffmpeg-3.4 15 | 16 | RUN ./configure --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-libmp3lame \ 17 | --enable-libvorbis --enable-libtheora --enable-libx264 --enable-libspeex --enable-shared --enable-pthreads \ 18 | --enable-libopenjpeg --enable-nonfree \ 19 | && make -j 4 \ 20 | && make install \ 21 | && /sbin/ldconfig 22 | 23 | WORKDIR /tmp/ 24 | 25 | CMD ["/bin/bash"] 26 | -------------------------------------------------------------------------------- /tweetpic/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:2.7-alpine 2 | 3 | # Alternatively use ADD https:// (which will not be cached by Docker builder) 4 | RUN apk --no-cache add curl \ 5 | && echo "Pulling watchdog binary from Github." 
\ 6 | && curl -sSL https://github.com/openfaas/faas/releases/download/0.6.1/fwatchdog > /usr/bin/fwatchdog \ 7 | && chmod +x /usr/bin/fwatchdog \ 8 | && apk del curl --no-cache 9 | 10 | RUN apk add --no-cache build-base python-dev py-pip jpeg-dev zlib-dev 11 | ENV LIBRARY_PATH=/lib:/usr/lib 12 | WORKDIR /root/ 13 | 14 | COPY requirements.txt . 15 | RUN pip install -r requirements.txt 16 | COPY index.py . 17 | 18 | COPY function function 19 | 20 | RUN touch ./function/__init__.py 21 | 22 | WORKDIR /root/function/ 23 | COPY function/requirements.txt . 24 | RUN pip install -r requirements.txt 25 | 26 | WORKDIR /root/ 27 | 28 | ENV fprocess="python index.py" 29 | 30 | HEALTHCHECK --interval=1s CMD [ -e /tmp/.lock ] || exit 1 31 | 32 | CMD ["fwatchdog"] 33 | -------------------------------------------------------------------------------- /colorization/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM bvlc/caffe:cpu 2 | 3 | RUN apt-get update -qqy\ 4 | && apt-get install -qy \ 5 | unzip \ 6 | python-tk \ 7 | curl -qy \ 8 | && pip install minio 9 | 10 | RUN mkdir -p models resources \ 11 | && curl -sL https://github.com/richzhang/colorization/raw/master/resources/pts_in_hull.npy > ./resources/pts_in_hull.npy \ 12 | && curl -sL http://eecs.berkeley.edu/~rich.zhang/projects/2016_colorization/files/demo_v2/colorization_release_v2.caffemodel > ./models/colorization_release_v2.caffemodel \ 13 | && curl -sL https://raw.githubusercontent.com/richzhang/colorization/master/models/colorization_deploy_v2.prototxt > ./models/colorization_deploy_v2.prototxt 14 | 15 | RUN curl -sSL https://github.com/openfaas/faas/releases/download/0.13.4/fwatchdog > /usr/bin/fwatchdog \ 16 | && chmod +x /usr/bin/fwatchdog 17 | RUN chmod +x /usr/bin/fwatchdog 18 | 19 | ENV fprocess="python -u index.py" 20 | ENV read_timeout="60" 21 | ENV write_timeout="60" 22 | 23 | RUN pip install requests 24 | 25 | COPY index.py index.py 26 | COPY handler.py handler.py 27 | 28 | CMD ["fwatchdog"] 29 | -------------------------------------------------------------------------------- /tweetlistener/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Finnian Anderson 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Finnian Anderson & Oli Callaghan 4 | Copyright (c) 2017 Alex Ellis 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy 7 | of this software and associated documentation files (the "Software"), to deal 8 | in the Software without restriction, including without limitation the rights 9 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 | copies of the Software, and to permit persons to whom the Software is 11 | furnished to do so, subject to the following conditions: 12 | 13 | The above copyright notice and this permission notice shall be included in all 14 | copies or substantial portions of the Software. 15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 19 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 22 | SOFTWARE. 23 | -------------------------------------------------------------------------------- /colorization/README.md: -------------------------------------------------------------------------------- 1 | colorization 2 | ================ 3 | 4 | This function applies color to a black and white image. It uses the [Caffe project](https://github.com/BVLC/caffe) and a pre-trained model. 5 | 6 | ## Caffe: 7 | 8 | > Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is 9 | > developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and 10 | > community contributors. 11 | 12 | ## Operating modes: 13 | 14 | There are two operating modes - the first uses minio for storing images and is used by this 15 | repository by default. The binary mode is best for testing / bench-marking. 16 | 17 | ### Mode 1: minio 18 | 19 | Mode one takes a JSON input and provides an output, see the source-code for the format. 20 | 21 | Before calling the function make the image available within the minio bucket "colorization" using 22 | `mc cp` or the minio SDK. 23 | 24 | 25 | ### Mode 2: binary 26 | 27 | The binary mode does not require the use of a minio bucket to store input or output images. Just 28 | deploy the function without specifying minio configuration. Call the function by passing in the 29 | binary source image and capture the result to a local file. 
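As a rough sketch of how each mode is called (assuming a gateway on `127.0.0.1:8080`, an `mc` host alias of `minios3` for the Minio server, and placeholder file names):

```
# Mode 1: stage the input in the "colorization" bucket, then send JSON.
# The reply echoes the JSON with "image" set to the output object and a
# "duration" field added.
$ mc cp old_photo.jpg minios3/colorization
$ curl http://127.0.0.1:8080/function/colorization \
    -d '{"image": "old_photo.jpg", "output_filename": "old_photo_color.jpg"}'

# Mode 2: POST the raw image bytes and capture the colourised result
$ curl http://127.0.0.1:8080/function/colorization \
    --data-binary @old_photo.jpg > old_photo_color.jpg
```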
30 | 31 | 32 | 33 | -------------------------------------------------------------------------------- /colourise_frames.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pycurl 3 | import StringIO 4 | from PIL import Image 5 | 6 | def run_colourise(in_dir, out_dir, filename): 7 | in_path = in_dir + '/' + filename 8 | out_path = out_dir + '/' + filename 9 | 10 | # change to your own gateway if you need to 11 | url = 'http://localhost:8080' 12 | 13 | c = pycurl.Curl() 14 | c.setopt(pycurl.URL, url) 15 | fout = StringIO.StringIO() 16 | c.setopt(pycurl.WRITEFUNCTION, fout.write) 17 | 18 | c.setopt(pycurl.POST, 1) 19 | c.setopt(pycurl.HTTPHEADER, ["Content-Type: image/jpeg"]) 20 | 21 | filesize = os.path.getsize(in_path) 22 | c.setopt(pycurl.POSTFIELDSIZE, filesize) 23 | fin = open(in_path, 'rb') 24 | c.setopt(pycurl.READFUNCTION, fin.read) 25 | 26 | c.perform() 27 | 28 | response_code = c.getinfo(pycurl.RESPONSE_CODE) 29 | response_data = fout.getvalue() 30 | print(response_code) 31 | 32 | with open(out_path, 'wb') as f: 33 | f.write(response_data) 34 | 35 | im = Image.open(out_path) 36 | im = im.convert('RGB') 37 | im.save(out_path, "JPEG") 38 | 39 | c.close() 40 | 41 | for filename in os.listdir('frames'): 42 | print('Colourising ' + filename) 43 | run_colourise('frames', 'output', filename) 44 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## Contributing 2 | 3 | ### Guidelines 4 | 5 | Guidelines for contributing. 6 | 7 | * Read the DCO below 8 | * Sign off your commits with `git commit -s` 9 | * Show how you have tested the changes 10 | * Provide context - why are these changes needed? 11 | 12 | ### License 13 | 14 | This project is licensed under the MIT License. 15 | 16 | #### Copyright notice 17 | 18 | Please add a Copyright notice to new files you add where this is not already present: 19 | 20 | ``` 21 | // Copyright (c) Alex Ellis 2018. All rights reserved. 22 | // Licensed under the MIT license. See LICENSE file in the project root for full license information. 23 | ``` 24 | 25 | #### Sign your work 26 | 27 | > Note: all of the commits in your PR/Patch must be signed-off. 28 | 29 | The sign-off is a simple line at the end of the explanation for a patch. Your 30 | signature certifies that you wrote the patch or otherwise have the right to pass 31 | it on as an open-source patch. The rules are pretty simple: if you can certify 32 | the below (from [developercertificate.org](http://developercertificate.org/)): 33 | 34 | ``` 35 | Developer Certificate of Origin 36 | Version 1.1 37 | 38 | Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 39 | 1 Letterman Drive 40 | Suite D4700 41 | San Francisco, CA, 94129 42 | 43 | Everyone is permitted to copy and distribute verbatim copies of this 44 | license document, but changing it is not allowed. 
45 | 46 | Developer's Certificate of Origin 1.1 47 | 48 | By making a contribution to this project, I certify that: 49 | 50 | (a) The contribution was created in whole or in part by me and I 51 | have the right to submit it under the open source license 52 | indicated in the file; or 53 | 54 | (b) The contribution is based upon previous work that, to the best 55 | of my knowledge, is covered under an appropriate open source 56 | license and I have the right under that license to submit that 57 | work with modifications, whether created in whole or in part 58 | by me, under the same open source license (unless I am 59 | permitted to submit under a different license), as indicated 60 | in the file; or 61 | 62 | (c) The contribution was provided directly to me by some other 63 | person who certified (a), (b) or (c) and I have not modified 64 | it. 65 | 66 | (d) I understand and agree that this project and the contribution 67 | are public and that a record of the contribution (including all 68 | personal information I submit with it, including my sign-off) is 69 | maintained indefinitely and may be redistributed consistent with 70 | this project or the open source license(s) involved. 71 | ``` 72 | 73 | Then you just add a line to every git commit message: 74 | 75 | Signed-off-by: Joe Smith 76 | 77 | Use your real name (sorry, no pseudonyms or anonymous contributions.) 78 | 79 | If you set your `user.name` and `user.email` git configs, you can sign your 80 | commit automatically with `git commit -s`. 81 | 82 | * Please sign your commits with `git commit -s` so that commits are traceable. 83 | -------------------------------------------------------------------------------- /tweetpic/function/handler.py: -------------------------------------------------------------------------------- 1 | import twitter, os, json, time, tempfile, contextlib, sys, io, requests 2 | 3 | from PIL import Image 4 | from minio import Minio 5 | from minio.error import ResponseError 6 | 7 | @contextlib.contextmanager 8 | def nostdout(): 9 | save_stdout = sys.stdout 10 | sys.stdout = io.BytesIO() 11 | yield 12 | sys.stdout = save_stdout 13 | 14 | minioClient = Minio(os.environ['minio_authority'], 15 | access_key=os.environ['minio_access_key'], 16 | secret_key=os.environ['minio_secret_key'], 17 | secure=False) 18 | 19 | api = twitter.Api( 20 | consumer_key=os.environ['consumer_key'], 21 | consumer_secret=os.environ['consumer_secret'], 22 | access_token_key=os.environ['access_token'], 23 | access_token_secret=os.environ['access_token_secret'] 24 | ) 25 | 26 | def requeue(st): 27 | # grab from headers or set defaults. 28 | retries = int(os.getenv("Http_X_Retries", "0")) 29 | max_retries = int(os.getenv("Http_X_Max_Retries", "9999")) # retry up to 9999 30 | delay_duration = int(os.getenv("Http_X_Delay_Duration", "60")) # delay 60s by default 31 | 32 | # Bump retries up one, since we're on a zero-based index. 
33 | retries = retries + 1 34 | 35 | headers = { 36 | "X-Retries": str(retries), 37 | "X-Max-Retries": str(max_retries), 38 | "X-Delay-Duration": str(delay_duration) 39 | } 40 | 41 | r = requests.post("http://mailbox:8080/deadletter/tweetpic", data=st, json=False, headers=headers) 42 | 43 | print "Posting to Mailbox: ", r.status_code 44 | if r.status_code!= 202: 45 | print "Mailbox says: ", r.text 46 | 47 | 48 | """ 49 | Input: 50 | { 51 | "status_id": "twitter status ID", 52 | "image": "minio_path_to_image.jpg" 53 | "duration": 5.5 54 | } 55 | """ 56 | 57 | def handle(st): 58 | print("Incoming request " + str(st)) 59 | 60 | req = json.loads(st) 61 | 62 | print("Parsed request") 63 | 64 | filename = tempfile.gettempdir() + '/' + str(int(round(time.time() * 1000))) + '.jpg' 65 | in_reply_to_status_id = req['status_id'] 66 | duration = req['duration'] 67 | 68 | with nostdout(): 69 | minioClient.fget_object('colorization', req['image'], filename) 70 | 71 | with open(filename, 'rb') as image: 72 | size = os.fstat(image.fileno()).st_size 73 | im = Image.open(filename) 74 | if size > 5 * 1048576: 75 | maxsize = (1028, 1028) 76 | im.thumbnail(maxsize, Image.ANTIALIAS) 77 | im = im.convert("RGB") 78 | im.save(filename, "JPEG") 79 | image = open(filename, 'rb') 80 | 81 | status_id = False 82 | 83 | try: 84 | status = api.PostUpdate("We just colourised your image in %.1f seconds. Find out how: https://goo.gl/cSK4Xu" % duration, 85 | media=image, 86 | auto_populate_reply_metadata=True, 87 | in_reply_to_status_id=in_reply_to_status_id) 88 | 89 | status_id = status.id 90 | 91 | except twitter.error.TwitterError, e: 92 | for m in e.message: 93 | if m['code'] == 34 or m['code'] == 385: 94 | print('Tweet %i went missing' % in_reply_to_status_id) 95 | if m['code'] == 88: 96 | print('We hit the API limits, queuing %i' % in_reply_to_status_id) 97 | requeue(st) 98 | 99 | finally: 100 | # this is always run, regardless of wether we got an error or not 101 | image.close() 102 | 103 | return { 104 | "reply_to": in_reply_to_status_id, 105 | "status_id": status_id 106 | } 107 | -------------------------------------------------------------------------------- /tweetlistener/index.py: -------------------------------------------------------------------------------- 1 | import os, json, time, tempfile, sys, io, urllib, requests, contextlib 2 | from tweepy.streaming import StreamListener 3 | from tweepy import OAuthHandler 4 | from tweepy import Stream 5 | 6 | from minio import Minio 7 | from minio.error import ResponseError 8 | 9 | @contextlib.contextmanager 10 | def nostdout(): 11 | save_stdout = sys.stdout 12 | sys.stdout = io.BytesIO() 13 | yield 14 | sys.stdout = save_stdout 15 | 16 | gateway_hostname = os.getenv("gateway_hostname", "gateway") 17 | twitter_account = os.getenv("twitter_account", "@colorisebot") 18 | auth = OAuthHandler(os.environ['consumer_key'], os.environ['consumer_secret']) 19 | auth.set_access_token(os.environ['access_token'], os.environ['access_token_secret']) 20 | 21 | minioClient = Minio(os.environ['minio_hostname'], 22 | access_key=os.environ['minio_access_key'], 23 | secret_key=os.environ['minio_secret_key'], 24 | secure=False) 25 | 26 | class TweetListener(StreamListener): 27 | def on_data(self, data): 28 | try: 29 | tweet = json.loads(data) 30 | print('Got tweet from %s "%s" (%i followers)' % (tweet['user']['screen_name'], tweet['text'], tweet['user']['followers_count'])) 31 | if not tweet['retweeted']: 32 | if (tweet['extended_entities'] and tweet['extended_entities']['media']): 33 | 
print('Got %i media items' % len(tweet['extended_entities']['media'])) 34 | for media in tweet['extended_entities']['media']: 35 | if (media['type'] == 'photo'): 36 | print("Ooooo a photo") 37 | image_data = urllib.urlopen(media['media_url_https']).read() 38 | 39 | now = str(int(round(time.time() * 1000))) 40 | filename_in = now + '.jpg' 41 | filename_out = now + '_output.jpg' 42 | file_path_in = tempfile.gettempdir() + '/' + filename_in 43 | 44 | with open(file_path_in, 'wb') as f: 45 | f.write(image_data) 46 | 47 | with nostdout(): 48 | minioClient.fput_object('colorization', filename_in, file_path_in) 49 | 50 | headers = {'X-Callback-Url': 'http://' + gateway_hostname + ':8080/async-function/tweetpic'} 51 | json_data = { 52 | "image": filename_in, 53 | "output_filename": filename_out, 54 | "status_id": tweet['id_str'] 55 | } 56 | r = requests.post('http://' + gateway_hostname + ':8080/async-function/colorization', json=json_data, headers=headers) 57 | if (r.status_code == requests.codes.accepted): 58 | print("Colorization succeeded for -> " + media['media_url_https']) 59 | else: 60 | print("Colorization failed for -> " + media['media_url_https']) 61 | else: 62 | print("Not a photo :(") 63 | else: 64 | print('No media :(') 65 | else: 66 | print("Oh... a retweet") 67 | except Exception as e: 68 | print("error oops", e) 69 | 70 | def on_error(self, status): 71 | print('Error from tweet streamer', status) 72 | 73 | if __name__ == '__main__': 74 | print('Setting up') 75 | l = TweetListener() 76 | stream = Stream(auth, l) 77 | 78 | print('Listening for tweets') 79 | #This line filter Twitter Streams to capture data by the keywords: 'python', 'javascript', 'ruby' 80 | stream.filter(track=[twitter_account]) 81 | -------------------------------------------------------------------------------- /colorization/handler.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['GLOG_minloglevel'] = '3' 3 | 4 | import numpy as np 5 | import requests 6 | import sys, time, warnings 7 | import skimage.color as color 8 | import matplotlib.pyplot as plt 9 | import scipy.ndimage.interpolation as sni 10 | import caffe, contextlib, io, tempfile 11 | import json 12 | import uuid 13 | import requests, shutil 14 | 15 | from minio import Minio 16 | from minio.error import ResponseError 17 | 18 | @contextlib.contextmanager 19 | def nostdout(): 20 | save_stdout = sys.stdout 21 | sys.stdout = io.BytesIO() 22 | yield 23 | sys.stdout = save_stdout 24 | 25 | def download_file(url, save_path): 26 | r = requests.get(url, stream=True) 27 | with open(save_path, 'wb') as f: 28 | shutil.copyfileobj(r.raw, f) 29 | 30 | return save_path 31 | 32 | """ 33 | Input: 34 | { 35 | "image": "minio_path_to_image.jpg", 36 | "output_filename": "minio_path_to_output_image.jpg" 37 | } 38 | 39 | Output: 40 | { 41 | "image": "minio_path_to_image.jpg", 42 | "duration": 5.5 43 | } 44 | """ 45 | def handle(request_in): 46 | minio_mode = os.getenv('minio_authority') != None 47 | binary_mode = not minio_mode 48 | url_mode = os.getenv('url_mode') != None 49 | normalise_enabled = os.getenv('normalise_enabled') != None 50 | 51 | json_in = None 52 | minio_client = None 53 | 54 | if minio_mode: 55 | minioClient = Minio(os.environ['minio_authority'], 56 | access_key=os.environ['minio_access_key'], 57 | secret_key=os.environ['minio_secret_key'], 58 | secure=False) 59 | json_in = json.loads(request_in) 60 | 61 | caffe.set_mode_cpu() 62 | # Select desired model 63 | net = 
caffe.Net('./models/colorization_deploy_v2.prototxt', './models/colorization_release_v2.caffemodel', caffe.TEST) 64 | 65 | (H_in,W_in) = net.blobs['data_l'].data.shape[2:] # get input shape 66 | (H_out,W_out) = net.blobs['class8_ab'].data.shape[2:] # get output shape 67 | 68 | pts_in_hull = np.load('./resources/pts_in_hull.npy') # load cluster centers 69 | net.params['class8_ab'][0].data[:,:,0,0] = pts_in_hull.transpose((1,0)) # populate cluster centers as 1x1 convolution kernel 70 | 71 | now = str(int(round(time.time() * 1000))) 72 | uuid_value = str(uuid.uuid4()) 73 | 74 | filename_in = now + '_' + uuid_value + '.jpg' 75 | filename_out = None 76 | file_path_in = tempfile.gettempdir() + '/' + filename_in 77 | file_path_out = tempfile.gettempdir() + '/out.' + filename_in 78 | 79 | if url_mode: 80 | download_file(request_in, file_path_in) 81 | else: 82 | if minio_mode: 83 | filename_out = json_in['output_filename'] 84 | with nostdout(): 85 | minioClient.fget_object('colorization', json_in['image'], file_path_in) 86 | 87 | else: 88 | # binary mode 89 | file_path_out = tempfile.gettempdir() + '/' + 'out.' + filename_in 90 | with open(file_path_in, 'ab') as f: 91 | f.write(request_in) 92 | 93 | with warnings.catch_warnings(): 94 | warnings.simplefilter("ignore") 95 | 96 | # load the original image 97 | img_rgb = caffe.io.load_image(file_path_in) 98 | 99 | start = time.time() 100 | 101 | img_lab = color.rgb2lab(img_rgb) # convert image to lab color space 102 | img_l = img_lab[:,:,0] # pull out L channel 103 | (H_orig,W_orig) = img_rgb.shape[:2] # original image size 104 | 105 | # create grayscale version of image (just for displaying) 106 | img_lab_bw = img_lab.copy() 107 | img_lab_bw[:,:,1:] = 0 108 | img_rgb_bw = color.lab2rgb(img_lab_bw) 109 | 110 | # resize image to network input size 111 | img_rs = caffe.io.resize_image(img_rgb,(H_in,W_in)) # resize image to network input size 112 | img_lab_rs = color.rgb2lab(img_rs) 113 | img_l_rs = img_lab_rs[:,:,0] 114 | 115 | net.blobs['data_l'].data[0,0,:,:] = img_l_rs-50 # subtract 50 for mean-centering 116 | net.forward() # run network 117 | 118 | ab_dec = net.blobs['class8_ab'].data[0,:,:,:].transpose((1,2,0)) # this is our result 119 | ab_dec_us = sni.zoom(ab_dec,(1.*H_orig/H_out,1.*W_orig/W_out,1)) # upsample to match size of original image L 120 | img_lab_out = np.concatenate((img_l[:,:,np.newaxis],ab_dec_us),axis=2) # concatenate with original image L 121 | img_rgb_out = (255*np.clip(color.lab2rgb(img_lab_out),0,1)).astype('uint8') # convert back to rgb 122 | 123 | duration = time.time() - start 124 | 125 | plt.imsave(file_path_out, img_rgb_out) 126 | 127 | gateway_url = os.getenv("gateway_url", "http://gateway:8080") 128 | 129 | if normalise_enabled: 130 | url = gateway_url + "/function/normalisecolor" 131 | 132 | with open(file_path_out, "rb") as f: 133 | r = requests.post(url, data=f.read()) 134 | with open(file_path_out, "wb") as f: 135 | f.write(r.content) 136 | 137 | if binary_mode == False: 138 | json_out = json_in 139 | json_out['image'] = filename_out 140 | json_out['duration'] = duration 141 | 142 | with nostdout(): 143 | minioClient.fput_object('colorization', filename_out, file_path_out) 144 | 145 | return json_out 146 | else: 147 | with open(file_path_out, "rb") as f: 148 | sys.stdout.write(f.read()) 149 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # repaint-the-past 2 | 3 | 
![](https://github.com/alexellis/repaint-the-past/raw/master/colorisation-architecture.png) 4 | 5 | # Deployment 6 | 7 | ## Minio 8 | 9 | You'll need a [Minio](https://minio.io) server running to store the images in. 10 | 11 | 12 | * Run Minio once to capture the secret/access keys, then inject them into the commands below. 13 | 14 | ``` 15 | $ docker run -ti --rm minio/minio server /data 16 | ... 17 | AccessKey: ZBPIIAOCJRY9QLUVEHQO 18 | SecretKey: vMIoCaBu9sSg4ODrSkbD9CGXtq0TTpq6kq7psLuE 19 | ... 20 | ``` 21 | 22 | Hit Control+C and set up two environment variables: 23 | 24 | ``` 25 | export MINIO_ACCESS_KEY="ZBPIIAOCJRY9QLUVEHQO" 26 | export MINIO_SECRET_KEY="vMIoCaBu9sSg4ODrSkbD9CGXtq0TTpq6kq7psLuE" 27 | ``` 28 | 29 | We found that running a single-container Minio server was the easiest approach, as we had issues when running the distributed version. Once Minio is deployed, go and create a new bucket called `colorization`. This is where the images will be stored. 30 | 31 | ``` 32 | $ docker run -dp 9000:9000 \ 33 | --restart always --name minio \ 34 | -e "MINIO_ACCESS_KEY=$MINIO_ACCESS_KEY" \ 35 | -e "MINIO_SECRET_KEY=$MINIO_SECRET_KEY" \ 36 | minio/minio server /data 37 | ``` 38 | 39 | > You can optionally add a bind-mount too by adding the option: `-v /mnt/data:/data -v /mnt/config:/root/.minio` 40 | 41 | ## OpenFaaS functions 42 | 43 | Firstly, you'll need to make sure the OpenFaaS gateway is configured with larger read & write timeouts, as the colorization function can take more than a few seconds to return. 44 | 45 | ``` 46 | $ docker service update func_gateway \ 47 | --env-add "write_timeout=60" \ 48 | --env-add "read_timeout=60" 49 | ``` 50 | 51 | Create `credentials.yml` with these contents: 52 | 53 | ```yaml 54 | environment: 55 | minio_secret_key: 56 | minio_access_key: 57 | minio_authority: 58 | ``` 59 | 60 | Do not include the `http://` prefix in the `minio_authority` value. 61 | 62 | For example: 63 | 64 | ``` 65 | environment: 66 | minio_secret_key: vMIoCaBu9sSg4ODrSkbD9CGXtq0TTpq6kq7psLuE 67 | minio_access_key: ZBPIIAOCJRY9QLUVEHQO 68 | minio_authority: 192.168.0.10:9000 69 | ``` 70 | 71 | Now deploy the OpenFaaS functions: 72 | 73 | ``` 74 | $ faas-cli deploy -f stack.yml 75 | ``` 76 | 77 | ## Configuration 78 | 79 | There are three modes supported by the `colorization` function, as documented below. 80 | 81 | ### `minio` 82 | Uses minio object storage for saving images and is enabled by setting `minio_authority` as above. When in this mode, input to the function must be JSON in the following format: 83 | 84 | ```json 85 | { 86 | "output_filename": "output.jpg", 87 | "image": "input.jpg" 88 | } 89 | ``` 90 | 91 | This will retrieve `input.jpg` from the minio bucket `colorization`, colourise that image and then upload it to the same bucket with a name of `output.jpg`. 92 | 93 | ### `binary_mode` 94 | Pure binary is used for input & output. Enabled by _not_ setting the `minio_authority` env var. 95 | Simply POST the raw image and pipe the output back into a file. 96 | 97 | ### `url_mode` 98 | POST a URL as input and receive binary output. Enabled by setting the env var `url_mode` to `"true"`. 99 | 100 | ### Other options 101 | There is also the option to pass the image through ImageMagick to remove some of the saturation. 102 | This can be enabled by setting the env var `normalise_enabled` to `"true"`.
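For the two non-minio modes, a minimal invocation sketch (assuming the gateway is reachable on `127.0.0.1:8080` and the function is deployed as `colorization`; the file name and URL are placeholders):

```
# binary_mode: POST the raw image bytes and pipe the colourised JPEG to a file
$ curl http://127.0.0.1:8080/function/colorization \
    --data-binary @test_image_bw.jpg > test_image_color.jpg

# url_mode (deployed with url_mode="true"): POST a URL, receive the binary result
$ curl http://127.0.0.1:8080/function/colorization \
    -d "https://static.pexels.com/photos/276374/pexels-photo-276374.jpeg" > test_image_color.jpg
```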
103 | 104 | # Invocation 105 | 106 | ## Configure Minio (object storage) 107 | 108 | * Download the CLI 109 | 110 | Download the Minio client from: https://github.com/minio/mc 111 | 112 | * Add the Docker container running Minio as a host 113 | 114 | ``` 115 | $ export IP=192.168.0.1 # replace with the Docker host IP 116 | $ mc config host add minios3 http://$IP:9000 $MINIO_ACCESS_KEY $MINIO_SECRET_KEY 117 | ``` 118 | 119 | * Create a bucket 120 | 121 | ``` 122 | $ mc mb minios3/colorization 123 | Bucket created successfully minios3/colorization. 124 | ``` 125 | 126 | * Upload an image to your minio bucket: 127 | 128 | ``` 129 | $ curl -sL https://static.pexels.com/photos/276374/pexels-photo-276374.jpeg > test_image_bw.jpg && \ 130 | http_proxy="" mc cp test_image_bw.jpg minios3/colorization 131 | ``` 132 | 133 | * Then call the function: 134 | 135 | ``` 136 | $ http_proxy="" curl -d '{"image": "test_image_bw.jpg"}' \ 137 | http://127.0.0.1:8080/function/colorization 138 | 139 | {"duration": 8.719741106033325, "image": "1508788935770_output.jpg"} 140 | ``` 141 | 142 | The returned `image` field is the path to the converted image in Minio. 143 | 144 | So copy it back to your host: 145 | 146 | ``` 147 | $ mc cp minios3/colorization/1508788935770_output.jpg . 148 | $ open 1508788935770_output.jpg 149 | ``` 150 | 151 | Congratulations - you just recoloured the past! Read on for how we hooked this into a Twitter bot. 152 | 153 | # Twitter extension 154 | 155 | If you want to replicate our DockerCon demo, which listens for tweets matching certain criteria & then replies with the colourised image, follow the instructions below. 156 | 157 | You need to deploy the `tweetlistener` service. This will listen for tweets matching certain criteria and then forward the requests to the `colorization` function. Make sure you only deploy one replica of `tweetlistener` because otherwise you'll get duplicate events. 158 | 159 | Create `tweetlistener.envs` with the following contents: 160 | 161 | ``` 162 | # tweetlistener.envs (add your values below) 163 | minio_access_key= 164 | minio_secret_key= 165 | minio_authority= 166 | consumer_key= 167 | consumer_secret= 168 | access_token= 169 | access_token_secret= 170 | ``` 171 | Then start the service with a single replica: 172 | ``` 173 | $ docker service create --name tweetlistener \ 174 | --env-file tweetlistener.envs \ 175 | --network func_functions \ 176 | developius/tweetlistener:latest 177 | ``` 178 | 179 | Update `credentials.yml` to add your Twitter details: 180 | 181 | ```yaml 182 | environment: 183 | minio_secret_key: 184 | minio_access_key: 185 | minio_authority: 186 | consumer_key: 187 | consumer_secret: 188 | access_token: 189 | access_token_secret: 190 | ``` 191 | 192 | Then re-deploy the functions: 193 | 194 | ``` 195 | $ faas-cli deploy -f twitter_stack.yml 196 | ``` 197 | 198 | # Tweet it 199 | You can now tweet [@colorisebot](https://twitter.com/colorisebot) (or your own Twitter account) with your image and see the data flow through the functions. Depending on the underlying infrastructure, it should take about 10s for the whole flow from tweeting the image, to receiving the tweeted reply. 200 | 201 | # Colourising video 202 | 203 | We've shown how we can colourise photos, but our original goal was to colourise video. 204 | There are a couple of scripts we've written to do this. 205 | 206 | ## Prerequisite 207 | 208 | You need to install some dependencies before you can run the code.
209 | 210 | ``` 211 | $ pip install -r requirements.txt 212 | ``` 213 | 214 | ## Splitting frames 215 | 216 | First, modify line 2 of `split_frames.py` to use the path to your video file. 217 | 218 | Now you can run the script, which splits out all the frames & outputs them into `frames/` (make sure this folder exists first). 219 | 220 | ``` 221 | $ python split_frames.py 222 | ``` 223 | 224 | ## Colourising the frames 225 | 226 | Next you need to run the actual colourisation code. To do this, you must first start the Docker container which will run the colourisation. 227 | 228 | ``` 229 | $ docker run -a STDERR -e write_timeout=60 -e read_timeout=60 -p 8080:8080 --rm developius/openfaas-colorization:0.1 230 | ``` 231 | 232 | Now you can run `colourise_frames.py` to generate the colourised frames. 233 | 234 | ``` 235 | $ python colourise_frames.py 236 | ``` 237 | 238 | This will create all the colourised frames in the folder `output/` (make sure this one exists too). 239 | 240 | ## Stitch them back together 241 | 242 | The final step is to stitch them back together with `ffmpeg`. 243 | On macOS, ffmpeg can be installed with `brew install ffmpeg`; it is available on most other platforms too. 244 | 245 | ``` 246 | $ ffmpeg -framerate 24 -i output/%05d.jpg output.mp4 247 | ``` 248 | 249 | Now check your current directory for a file named `output.mp4`. Whoop, you just colourised the past, again! 250 | --------------------------------------------------------------------------------
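Putting the video steps together, an end-to-end run might look like the sketch below (assuming the colorization container above is already listening on port 8080 and that the video path on line 2 of `split_frames.py` has been set):

```
# Folders expected by the two scripts
$ mkdir -p frames output

# 1. Split the source video into numbered JPEG frames under frames/
$ python split_frames.py

# 2. Colourise each frame via the local function, writing results to output/
$ python colourise_frames.py

# 3. Re-assemble the colourised frames into a video
$ ffmpeg -framerate 24 -i output/%05d.jpg output.mp4
```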