├── .github └── VideoThumbnail.png ├── .gitignore ├── Data-Model ├── EmojiSketches.mlmodel ├── README.md ├── Training │ ├── Dockerfile │ ├── export.py │ ├── input │ │ ├── augment.sh │ │ ├── augmentor │ │ │ ├── README.md │ │ │ ├── counter.py │ │ │ ├── main.py │ │ │ └── ops │ │ │ │ ├── __init__.py │ │ │ │ ├── blur.py │ │ │ │ ├── fliph.py │ │ │ │ ├── flipv.py │ │ │ │ ├── noise.py │ │ │ │ ├── rotate.py │ │ │ │ ├── translate.py │ │ │ │ └── zoom.py │ │ ├── normalize.sh │ │ └── training-data │ │ │ ├── checkmark │ │ │ └── .gitkeep │ │ │ ├── cloud │ │ │ └── .gitkeep │ │ │ ├── croissant │ │ │ └── .gitkeep │ │ │ ├── heart │ │ │ └── .gitkeep │ │ │ ├── laugh │ │ │ └── .gitkeep │ │ │ ├── smile │ │ │ └── .gitkeep │ │ │ └── sun │ │ │ └── .gitkeep │ ├── output │ │ └── .gitkeep │ ├── prepare.sh │ └── retrain.sh └── Utilities │ ├── ModelTesting │ ├── ModelTesting.xcodeproj │ │ ├── project.pbxproj │ │ └── project.xcworkspace │ │ │ ├── contents.xcworkspacedata │ │ │ └── xcshareddata │ │ │ └── IDEWorkspaceChecks.plist │ └── ModelTesting │ │ ├── AppDelegate.swift │ │ ├── Assets.xcassets │ │ └── AppIcon.appiconset │ │ │ └── Contents.json │ │ ├── Base.lproj │ │ ├── LaunchScreen.storyboard │ │ └── Main.storyboard │ │ ├── Info.plist │ │ ├── PaperContainerView.swift │ │ └── ViewController.swift │ └── SampleCollecting │ ├── SampleCollecting.xcodeproj │ ├── project.pbxproj │ └── project.xcworkspace │ │ ├── contents.xcworkspacedata │ │ └── xcshareddata │ │ └── IDEWorkspaceChecks.plist │ └── SampleCollecting │ ├── AppDelegate.swift │ ├── Assets.xcassets │ └── AppIcon.appiconset │ │ └── Contents.json │ ├── Base.lproj │ ├── LaunchScreen.storyboard │ └── Main.storyboard │ ├── Canvas.swift │ ├── Info.plist │ ├── PaperContainerView.swift │ ├── Session.swift │ ├── Storage.swift │ └── ViewController.swift ├── LICENSE ├── Playground ├── MLMOJI.playgroundbook │ └── Contents │ │ ├── Chapters │ │ └── Chapter1.playgroundchapter │ │ │ ├── Manifest.plist │ │ │ └── Pages │ │ │ ├── Page1.playgroundpage │ │ │ ├── Contents.swift │ │ │ ├── LiveView.swift │ │ │ ├── Manifest.plist │ │ │ ├── PrivateResources │ │ │ │ ├── EmojiStrokes@2x.png │ │ │ │ └── SupportedEmoji@2x.png │ │ │ └── Sources │ │ │ │ ├── PredictionResultNode.swift │ │ │ │ ├── PredictionResultsContainer.swift │ │ │ │ ├── PredictionViewController+Playground.swift │ │ │ │ └── PredictionViewController.swift │ │ │ ├── Page2.playgroundpage │ │ │ ├── Contents.swift │ │ │ ├── Manifest.plist │ │ │ ├── PrivateResources │ │ │ │ └── AugmentedSet@2x.png │ │ │ └── Sources │ │ │ │ ├── AugmentViewController+Playground.swift │ │ │ │ ├── AugmentViewController.swift │ │ │ │ └── ImageCollectionViewCell.swift │ │ │ └── Page3.playgroundpage │ │ │ ├── Contents.swift │ │ │ ├── LiveView.swift │ │ │ └── Manifest.plist │ │ ├── Manifest.plist │ │ ├── PrivateResources │ │ ├── Cover.png │ │ ├── EmojiSketches.mlmodelc │ │ │ ├── coremldata.bin │ │ │ ├── model.espresso.net │ │ │ ├── model.espresso.shape │ │ │ ├── model.espresso.weights │ │ │ └── model │ │ │ │ └── coremldata.bin │ │ ├── Glossary.plist │ │ └── KeyboardClear@2x.png │ │ └── Sources │ │ ├── Augmentation │ │ ├── AugmentationFilter.swift │ │ └── AugmentationSession.swift │ │ ├── Paper │ │ ├── Exporters │ │ │ ├── BitmapPaperExporter.swift │ │ │ ├── CVPixelBufferExporter.swift │ │ │ └── PaperExporter.swift │ │ └── Paper.swift │ │ ├── Prediction │ │ ├── Classes.swift │ │ ├── EmojiPrediction.swift │ │ ├── EmojiSketches.swift │ │ ├── PredictionSession.swift │ │ └── PredictionSessionDelegate.swift │ │ └── Utilities │ │ ├── Animation.swift │ │ ├── 
AutoLayout+Pin.swift │ │ ├── Messages.swift │ │ ├── PlaygroundPage+Tools.swift │ │ └── PlaygroundViewController.swift └── PlaygroundStub │ ├── PlaygroundStub.xcodeproj │ ├── project.pbxproj │ └── project.xcworkspace │ │ └── contents.xcworkspacedata │ └── PlaygroundStub │ ├── AppDelegate.swift │ ├── Assets.xcassets │ ├── AppIcon.appiconset │ │ └── Contents.json │ ├── ArtificalBackground.imageset │ │ ├── ArtificialBackground.jpeg │ │ └── Contents.json │ ├── Contents.json │ └── KeyboardClear.imageset │ │ ├── Contents.json │ │ └── KeyboardClear@2x.png │ ├── AugmentationStubViewController.swift │ ├── Base.lproj │ ├── LaunchScreen.storyboard │ └── Main.storyboard │ ├── EmojiSketches.mlmodelc │ ├── coremldata.bin │ ├── model.espresso.net │ ├── model.espresso.shape │ ├── model.espresso.weights │ └── model │ │ └── coremldata.bin │ ├── Info.plist │ └── PredictionStubViewController.swift ├── README.md └── Scholarship.xcworkspace ├── contents.xcworkspacedata └── xcshareddata └── IDEWorkspaceChecks.plist /.github/VideoThumbnail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/.github/VideoThumbnail.png -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Xcode 2 | # 3 | # gitignore contributors: remember to update Global/Xcode.gitignore, Objective-C.gitignore & Swift.gitignore 4 | 5 | ## Build generated 6 | build/ 7 | DerivedData/ 8 | 9 | ## Various settings 10 | *.pbxuser 11 | !default.pbxuser 12 | *.mode1v3 13 | !default.mode1v3 14 | *.mode2v3 15 | !default.mode2v3 16 | *.perspectivev3 17 | !default.perspectivev3 18 | xcuserdata/ 19 | 20 | ## Other 21 | *.moved-aside 22 | *.xccheckout 23 | *.xcscmblueprint 24 | 25 | ## Obj-C/Swift specific 26 | *.hmap 27 | *.ipa 28 | *.dSYM.zip 29 | *.dSYM 30 | 31 | ## Playgrounds 32 | timeline.xctimeline 33 | playground.xcworkspace 34 | 35 | # Swift Package Manager 36 | # 37 | # Add this line if you want to avoid checking in source code from Swift Package Manager dependencies. 38 | # Packages/ 39 | # Package.pins 40 | .build/ 41 | 42 | # CocoaPods 43 | # 44 | # We recommend against adding the Pods directory to your .gitignore. However 45 | # you should judge for yourself, the pros and cons are mentioned at: 46 | # https://guides.cocoapods.org/using/using-cocoapods.html#should-i-check-the-pods-directory-into-source-control 47 | # 48 | # Pods/ 49 | 50 | # Carthage 51 | # 52 | # Add this line if you want to avoid checking in source code from Carthage dependencies. 53 | # Carthage/Checkouts 54 | 55 | Carthage/Build 56 | 57 | # fastlane 58 | # 59 | # It is recommended to not store the screenshots in the git repo. Instead, use fastlane to re-generate the 60 | # screenshots whenever they are needed. 
61 | # For more information about the recommended setup visit:
62 | # https://docs.fastlane.tools/best-practices/source-control/#source-control
63 | 
64 | fastlane/report.xml
65 | fastlane/Preview.html
66 | fastlane/screenshots
67 | fastlane/test_output
68 | 
69 | **/.DS_Store
70 | **/*.pyc
71 | Data-Model/Training/input/augmented
72 | Data-Model/Training/input/normalized
73 | Data-Model/Training/input/training-data/**/*.jpg
74 | Data-Model/Training/input/training-data/**/*.JPG
75 | Data-Model/Training/input/training-data/**/*.jpeg
76 | Data-Model/Training/input/training-data/**/*.JPEG
77 | Data-Model/Training/output/tf_files
--------------------------------------------------------------------------------
/Data-Model/EmojiSketches.mlmodel:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/EmojiSketches.mlmodel
--------------------------------------------------------------------------------
/Data-Model/README.md:
--------------------------------------------------------------------------------
1 | # Data Model
2 | 
3 | The Core ML model was built using transfer learning. To perform this task, I used:
4 | 
5 | - A data set of hand-drawn emojis I created
6 | - TensorFlow and Docker
7 | - A pre-trained [MobileNet](https://arxiv.org/abs/1704.04861) snapshot
8 | - Data augmentation
9 | 
10 | ## Classes
11 | 
12 | The data model can recognize seven emojis:
13 | 
14 | - 😊 `smile`
15 | - 😂 `laugh`
16 | - ☀️ `sun`
17 | - ☁️ `cloud`
18 | - ❤️ `heart`
19 | - ✔️ `checkmark`
20 | - 🥐 `croissant`
21 | 
22 | ## Training with your own data set
23 | 
24 | To build your own data set, you can use the `SampleCollecting` iOS app. When you are ready to train your model, export the saved samples using iTunes or Xcode and put them in the `Training/input/training-data` folder.
25 | 
26 | Open a shell in the `Training` folder.
27 | 
28 | Run the `prepare.sh` script to install the dependencies and prepare the images. If you do not want to perform data augmentation (which is CPU-intensive), you can edit the `input/normalize.sh` script to use the `input/training-data` folder instead of `input/augmented`.
29 | 
30 | Once the `input/normalized` folder is filled with the normalized images, you can run the `retrain.sh` script to create the Core ML model. This can take around half an hour, depending on your computer's capabilities.
31 | 
32 | The `EmojiSketches.mlmodel` model file in this directory will be updated when training has completed.
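For reference, the whole pipeline boils down to a few commands. This is a minimal sketch, assuming Docker is running and `pip` is available on your `PATH`:

    cd Data-Model/Training
    (cd input && sh augment.sh)   # build the augmented data set (skip if you edited normalize.sh)
    sh prepare.sh                 # install dependencies, normalize images, build the Docker image
    sh retrain.sh                 # retrain MobileNet in Docker, then export the Core ML model

`retrain.sh` mounts `output/` and `input/normalized/` into the TensorFlow container and, once retraining finishes, runs `export.py`, which uses `tfcoreml` to convert the retrained graph into `EmojiSketches.mlmodel`.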
33 | -------------------------------------------------------------------------------- /Data-Model/Training/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM gcr.io/tensorflow/tensorflow:1.6.0 2 | 3 | ENV IMAGE_SIZE 224 4 | ENV OUTPUT_GRAPH tf_files/retrained_graph.pb 5 | ENV OUTPUT_LABELS tf_files/retrained_labels.txt 6 | ENV ARCHITECTURE mobilenet_1.0_${IMAGE_SIZE} 7 | ENV TRAINING_STEPS 1000 8 | 9 | VOLUME /output 10 | VOLUME /input 11 | 12 | RUN curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/image_retraining/retrain.py 13 | 14 | ENTRYPOINT python -m retrain \ 15 | --how_many_training_steps="${TRAINING_STEPS}" \ 16 | --model_dir=/output/tf_files/models/ \ 17 | --output_graph=/output/"${OUTPUT_GRAPH}" \ 18 | --output_labels=/output/"${OUTPUT_LABELS}" \ 19 | --architecture="${ARCHITECTURE}" \ 20 | --image_dir=/input/ -------------------------------------------------------------------------------- /Data-Model/Training/export.py: -------------------------------------------------------------------------------- 1 | # 2 | # MLMOJI 3 | # 4 | # This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | # 6 | # Copyright (c) 2018 Alexis Aubry. Available under the terms of the MIT License. 7 | # 8 | 9 | import tfcoreml as tf_converter 10 | 11 | tf_converter.convert(tf_model_path = 'output/tf_files/retrained_graph.pb', 12 | mlmodel_path = '../EmojiSketches.mlmodel', 13 | output_feature_names = ['final_result:0'], 14 | class_labels = 'output/tf_files/retrained_labels.txt', 15 | input_name_shape_dict = {'input:0' : [1, 224, 224, 3]}, 16 | image_input_names = ['input:0']) 17 | 18 | -------------------------------------------------------------------------------- /Data-Model/Training/input/augment.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 4 | # MLMOJI 5 | # 6 | # This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 7 | # 8 | # Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 
9 | #
10 | 
11 | # Prepare directory
12 | rm -rf augmented
13 | cp -R training-data augmented
14 | 
15 | # Generate Data
16 | python augmentor/main.py augmented/ fliph
17 | python augmentor/main.py augmented/ noise_0.01
18 | python augmentor/main.py augmented/ fliph,noise_0.01
19 | python augmentor/main.py augmented/ fliph,rot_-30
20 | python augmentor/main.py augmented/ fliph,rot_30
21 | python augmentor/main.py augmented/ rot_15,trans_20_20
22 | python augmentor/main.py augmented/ rot_33,trans_-20_50
23 | python augmentor/main.py augmented/ trans_0_20,zoom_100_50_300_300
24 | python augmentor/main.py augmented/ fliph,trans_50_20,zoom_60_50_200_200
25 | python augmentor/main.py augmented/ rot_-15,zoom_75_50_300_300
26 | python augmentor/main.py augmented/ rot_30
27 | python augmentor/main.py augmented/ blur_4.0
28 | python augmentor/main.py augmented/ fliph,blur_4.0
29 | python augmentor/main.py augmented/ fliph,rot_30,blur_4.0
30 | python augmentor/main.py augmented/ zoom_50_50_250_250
31 | 
--------------------------------------------------------------------------------
/Data-Model/Training/input/augmentor/README.md:
--------------------------------------------------------------------------------
1 | ## Image Augmentor
2 | 
3 | > Original repository: [codebox/image_augmentor](https://github.com/codebox/image_augmentor)
4 | 
5 | This is a simple data augmentation tool for image files, intended for use with machine learning data sets.
6 | The tool scans a directory containing image files, and generates new images by performing a specified set of
7 | augmentation operations on each file that it finds. This process multiplies the number of training examples that can
8 | be used when developing a neural network, and should significantly improve the resulting network's performance,
9 | particularly when the number of training examples is relatively small.
10 | 
11 | Run the utility from the command-line as follows:
12 | 
13 |     python main.py <image dir> <transform1> <transform2> ...
14 | 
15 | The `<image dir>` argument should be the path to a directory containing the image files to be augmented.
16 | The utility will search the directory recursively for files with any of the following extensions:
17 | `jpg, jpeg, bmp, png`.
18 | 
19 | The `transform` arguments determine what types of augmentation operations will be performed,
20 | using the codes listed in the table below:
21 | 
22 | |Code|Description|Example Values|
23 | |---|---|------|
24 | |`fliph`|Horizontal Flip|`fliph`|
25 | |`flipv`|Vertical Flip|`flipv`|
26 | |`noise`|Adds random noise to the image|`noise_0.01`,`noise_0.5`|
27 | |`rot`|Rotates the image by the specified amount|`rot_90`,`rot_-45`|
28 | |`trans`|Shifts the pixels of the image by the specified amounts in the x and y directions|`trans_20_10`,`trans_-10_0`|
29 | |`zoom`|Zooms into the specified region of the image, performing stretching/shrinking as necessary|`zoom_0_0_20_20`,`zoom_-10_-20_10_10`|
30 | |`blur`|Blurs the image by the specified amount|`blur_1.5`|
31 | 
32 | 
33 | Each transform argument results in one additional output image being generated for each input image.
34 | An argument may consist of one or more augmentation operations. Multiple operations within a single argument
35 | must be separated by commas, and the order in which the operations are performed will match the order in which they
36 | are specified within the argument.
37 | 
38 | ### Examples
39 | Produce 2 output images for each input image, one of which is flipped horizontally, and one of which is flipped vertically:
40 | 
41 |     python main.py ./my_images fliph flipv
42 | 
43 | Produce 1 output image for each input image by first rotating the image by 90° and then flipping it horizontally:
44 | 
45 |     python main.py ./my_images rot_90,fliph
46 | 
47 | ### Operations
48 | 
49 | #### Horizontal Flip
50 | Mirrors the image around a vertical line running through its center.
51 | 
52 |     python main.py ./my_images fliph
53 | 
54 | *(original and flipped example images omitted)*
57 | 
58 | #### Vertical Flip
59 | Mirrors the image around a horizontal line running through its center.
60 | 
61 |     python main.py ./my_images flipv
62 | 
63 | *(original and flipped example images omitted)*
66 | 
67 | #### Noise
68 | Adds random noise to the image. The amount of noise to be added is specified by a floating-point numeric value that is included
69 | in the transform argument; the numeric value must be greater than 0.
70 | 
71 |     python main.py ./my_images noise_0.01 noise_0.02 noise_0.05
72 | 
73 | *(original and noisy example images omitted)*
78 | 
79 | #### Rotate
80 | Rotates the image. The angle of rotation is specified by an integer value that is included in the transform argument.
81 | 
82 |     python main.py ./my_images rot_90 rot_180 rot_-90
83 | 
84 | *(original and rotated example images omitted)*
89 | 
90 | #### Translate
91 | Performs a translation on the image. The size of the translation in the x and y directions is specified by integer values that
92 | are included in the transform argument.
93 | 
94 |     python main.py ./my_images trans_20_20 trans_0_100
95 | 
96 | *(original and translated example images omitted)*
100 | 
101 | #### Zoom/Stretch
102 | Zooms in (or out) to a particular area of the image. The top-left and bottom-right coordinates of the target region are
103 | specified by integer values included in the transform argument. By specifying a target region with an aspect ratio that
104 | differs from that of the source image, stretching transformations can be performed.
105 | 
106 |     python main.py ./my_images zoom_150_0_300_150 zoom_0_50_300_150 zoom_200_0_300_300
107 | 
108 | *(original, zoomed and stretched example images omitted)*
113 | 
114 | #### Blur
115 | Blurs the image. The amount of blurring is specified by a floating-point value included in the transform argument.
116 | 
117 |     python main.py ./my_images blur_1.0 blur_2.0 blur_4.0
118 | 
119 | *(original and blurred example images omitted)*
124 | 
--------------------------------------------------------------------------------
/Data-Model/Training/input/augmentor/counter.py:
--------------------------------------------------------------------------------
1 | from multiprocessing.dummy import Lock
2 | 
3 | class Counter:
4 |     def __init__(self):
5 |         self.lock = Lock()
6 |         self._processed = 0
7 |         self._error = 0
8 |         self._skipped_no_match = 0
9 |         self._skipped_augmented = 0
10 | 
11 |     def processed(self):
12 |         with self.lock:
13 |             self._processed += 1
14 | 
15 |     def error(self):
16 |         with self.lock:
17 |             self._error += 1
18 | 
19 |     def skipped_no_match(self):
20 |         with self.lock:
21 |             self._skipped_no_match += 1
22 | 
23 |     def skipped_augmented(self):
24 |         with self.lock:
25 |             self._skipped_augmented += 1
26 | 
27 |     def get(self):
28 |         with self.lock:
29 |             return {'processed' : self._processed, 'error' : self._error, 'skipped_no_match' : self._skipped_no_match, 'skipped_augmented' : self._skipped_augmented}
30 | 
--------------------------------------------------------------------------------
/Data-Model/Training/input/augmentor/main.py:
--------------------------------------------------------------------------------
1 | import sys, os, re, traceback
2 | from os.path import isfile
3 | from multiprocessing.dummy import Pool, cpu_count
4 | from counter import Counter
5 | from ops.rotate import Rotate
6 | from ops.fliph import FlipH
7 | from ops.flipv import FlipV
8 | from ops.zoom import Zoom
9 | from ops.blur import Blur
10 | from ops.noise import Noise
11 | from ops.translate import Translate
12 | from skimage.io import imread, imsave
13 | 
14 | EXTENSIONS = ['png', 'jpg', 'jpeg', 'bmp']
15 | WORKER_COUNT = max(cpu_count() - 1, 1)
16 | OPERATIONS = [Rotate, FlipH, FlipV, Translate, Noise, Zoom, Blur]
17 | 
18 | '''
19 | Augmented files will have names matching the regex below, eg
20 | 
21 | original__rot90__crop1__flipv.jpg
22 | 
23 | '''
24 | AUGMENTED_FILE_REGEX = re.compile('^.*(__.+)+\\.[^\\.]+$')
25 | EXTENSION_REGEX = re.compile('|'.join(map(lambda n : '.*\\.' + n + '$', EXTENSIONS)))
26 | 
27 | thread_pool = None
28 | counter = None
29 | 
30 | def build_augmented_file_name(original_name, ops):
31 |     root, ext = os.path.splitext(original_name)
32 |     result = root
33 |     for op in ops:
34 |         result += '__' + op.code
35 |     return result + ext
36 | 
37 | def work(d, f, op_lists):
38 |     try:
39 |         in_path = os.path.join(d,f)
40 |         for op_list in op_lists:
41 |             out_file_name = build_augmented_file_name(f, op_list)
42 |             if isfile(os.path.join(d,out_file_name)):
43 |                 continue
44 |             img = imread(in_path)
45 |             for op in op_list:
46 |                 img = op.process(img)
47 |             imsave(os.path.join(d, out_file_name), img)
48 | 
49 |         counter.processed()
50 |     except:
51 |         traceback.print_exc(file=sys.stdout)
52 | 
53 | def process(dir, file, op_lists):
54 |     thread_pool.apply_async(work, (dir, file, op_lists))
55 | 
56 | if __name__ == '__main__':
57 |     if len(sys.argv) < 3:
58 |         print 'Usage: {} <image dir> <transform> (<transform> ...)'.format(sys.argv[0])
59 |         sys.exit(1)
60 | 
61 |     image_dir = sys.argv[1]
62 |     if not os.path.isdir(image_dir):
63 |         print 'Invalid image directory: {}'.format(image_dir)
64 |         sys.exit(2)
65 | 
66 |     op_codes = sys.argv[2:]
67 |     op_lists = []
68 |     for op_code_list in op_codes:
69 |         op_list = []
70 |         for op_code in op_code_list.split(','):
71 |             op = None
72 |             for op in OPERATIONS:
73 |                 op = op.match_code(op_code)
74 |                 if op:
75 |                     op_list.append(op)
76 |                     break
77 | 
78 |             if not op:
79 |                 print 'Unknown operation {}'.format(op_code)
80 |                 sys.exit(3)
81 |         op_lists.append(op_list)
82 | 
83 |     counter = Counter()
84 |     thread_pool = Pool(WORKER_COUNT)
85 |     print 'Thread pool initialised with {} worker{}'.format(WORKER_COUNT, '' if WORKER_COUNT == 1 else 's')
86 | 
87 |     matches = []
88 |     for dir_info in os.walk(image_dir):
89 |         dir_name, _, file_names = dir_info
90 |         print 'Processing {}...'.format(dir_name)
91 | 
92 |         for file_name in file_names:
93 |             if EXTENSION_REGEX.match(file_name):
94 |                 if AUGMENTED_FILE_REGEX.match(file_name):
95 |                     counter.skipped_augmented()
96 |                 else:
97 |                     process(dir_name, file_name, op_lists)
98 |             else:
99 |                 counter.skipped_no_match()
100 | 
101 |     print "Waiting for workers to complete..."
102 | thread_pool.close() 103 | thread_pool.join() 104 | 105 | print counter.get() 106 | -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/augmentor/ops/__init__.py -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/blur.py: -------------------------------------------------------------------------------- 1 | from skimage.filters import gaussian 2 | from skimage.exposure import rescale_intensity 3 | import re 4 | 5 | CODE = 'blur' 6 | REGEX = re.compile(r"^" + CODE + "_(?P[.0-9]+)") 7 | 8 | class Blur: 9 | def __init__(self, sigma): 10 | self.code = CODE + str(sigma) 11 | self.sigma = sigma 12 | 13 | def process(self, img): 14 | is_colour = len(img.shape)==3 15 | return rescale_intensity(gaussian(img, sigma=self.sigma, multichannel=is_colour)) 16 | 17 | @staticmethod 18 | def match_code(code): 19 | match = REGEX.match(code) 20 | if match: 21 | d = match.groupdict() 22 | return Blur(float(d['sigma'])) 23 | -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/fliph.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | CODE = 'fliph' 4 | 5 | class FlipH: 6 | def __init__(self): 7 | self.code = CODE 8 | 9 | def process(self, img): 10 | return np.fliplr(img) 11 | 12 | @staticmethod 13 | def match_code(code): 14 | if code == CODE: 15 | return FlipH() -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/flipv.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | CODE = 'flipv' 4 | 5 | class FlipV: 6 | def __init__(self): 7 | self.code = CODE 8 | 9 | def process(self, img): 10 | return np.flipud(img) 11 | 12 | @staticmethod 13 | def match_code(code): 14 | if code == CODE: 15 | return FlipV() -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/noise.py: -------------------------------------------------------------------------------- 1 | from skimage.util import random_noise 2 | import re 3 | 4 | CODE = 'noise' 5 | REGEX = re.compile(r"^" + CODE + "_(?P[.0-9]+)") 6 | 7 | class Noise: 8 | def __init__(self, var): 9 | self.code = CODE + str(var) 10 | self.var = var 11 | 12 | def process(self, img): 13 | return random_noise(img, mode='gaussian', var=self.var) 14 | 15 | @staticmethod 16 | def match_code(code): 17 | match = REGEX.match(code) 18 | if match: 19 | d = match.groupdict() 20 | return Noise(float(d['var'])) 21 | -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/rotate.py: -------------------------------------------------------------------------------- 1 | from skimage import transform 2 | import re 3 | 4 | PREFIX = 'rot' 5 | REGEX = re.compile(r"^" + PREFIX + "_(?P-?[0-9]+)") 6 | 7 | class Rotate: 8 | def __init__(self, angle): 9 | self.angle = angle 10 | self.code = PREFIX + str(angle) 11 | 12 | def process(self, img): 13 | return transform.rotate(img, -self.angle, cval=1) 14 | 15 | @staticmethod 16 | def match_code(code): 17 | match = 
REGEX.match(code) 18 | if match: 19 | d = match.groupdict() 20 | return Rotate(int(d['angle'])) 21 | -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/translate.py: -------------------------------------------------------------------------------- 1 | from skimage.transform import AffineTransform 2 | from skimage import transform as tf 3 | import re 4 | 5 | CODE = 'trans' 6 | REGEX = re.compile(r"^" + CODE + "_(?P[-0-9]+)_(?P[-0-9]+)") 7 | 8 | class Translate: 9 | def __init__(self, x_trans, y_trans): 10 | self.code = CODE + str(x_trans) + '_' + str(y_trans) 11 | self.x_trans = x_trans 12 | self.y_trans = y_trans 13 | 14 | def process(self, img): 15 | return tf.warp(img, AffineTransform(translation=(-self.x_trans, -self.y_trans)), cval=1) 16 | 17 | @staticmethod 18 | def match_code(code): 19 | match = REGEX.match(code) 20 | if match: 21 | d = match.groupdict() 22 | return Translate(int(d['x_trans']), int(d['y_trans'])) 23 | -------------------------------------------------------------------------------- /Data-Model/Training/input/augmentor/ops/zoom.py: -------------------------------------------------------------------------------- 1 | from skimage import transform 2 | import numpy as np 3 | import re 4 | 5 | PREFIX = 'zoom' 6 | REGEX = re.compile(r"^" + PREFIX + "_(?P[-0-9]+)_(?P[-0-9]+)_(?P[-0-9]+)_(?P[-0-9]+)") 7 | PAD_VALUE = 0 8 | 9 | class Zoom: 10 | def __init__(self, p1x, p1y, p2x, p2y): 11 | self.p1x = p1x 12 | self.p1y = p1y 13 | self.p2x = p2x 14 | self.p2y = p2y 15 | self.code = PREFIX + str(p1x) + '_' + str(p1y) + '_' + str(p2x) + '_' + str(p2y) 16 | 17 | def process(self, img): 18 | h = len(img) 19 | w = len(img[0]) 20 | 21 | crop_p1x = max(self.p1x, 0) 22 | crop_p1y = max(self.p1y, 0) 23 | crop_p2x = min(self.p2x, w) 24 | crop_p2y = min(self.p2y, h) 25 | 26 | cropped_img = img[crop_p1y:crop_p2y, crop_p1x:crop_p2x] 27 | 28 | x_pad_before = -min(0, self.p1x) 29 | x_pad_after = max(0, self.p2x-w) 30 | y_pad_before = -min(0, self.p1y) 31 | y_pad_after = max(0, self.p2y-h) 32 | 33 | padding = [(y_pad_before, y_pad_after), (x_pad_before, x_pad_after)] 34 | is_colour = len(img.shape) == 3 35 | if is_colour: 36 | padding.append((0,0)) # colour images have an extra dimension 37 | 38 | padded_img = np.pad(cropped_img, padding, 'constant') 39 | return transform.resize(padded_img, (h,w)) 40 | 41 | @staticmethod 42 | def match_code(code): 43 | match = REGEX.match(code) 44 | if match: 45 | d = match.groupdict() 46 | return Zoom(int(d['p1x']), int(d['p1y']), int(d['p2x']), int(d['p2y'])) 47 | -------------------------------------------------------------------------------- /Data-Model/Training/input/normalize.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # 5 | # MLMOJI 6 | # 7 | # This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 8 | # 9 | # Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 
10 | #
11 | 
12 | RAW_DIR=$(pwd)/augmented
13 | NORMALIZED_DIR=$(pwd)/normalized
14 | MOBILENET_IMAGE_SIZE=224
15 | 
16 | # Preparation
17 | rm -rf $NORMALIZED_DIR
18 | cp -R $RAW_DIR $NORMALIZED_DIR
19 | 
20 | # Resizing
21 | cd $NORMALIZED_DIR
22 | sips -Z $MOBILENET_IMAGE_SIZE heart/*.jpg
23 | sips -Z $MOBILENET_IMAGE_SIZE cloud/*.jpg
24 | sips -Z $MOBILENET_IMAGE_SIZE sun/*.jpg
25 | sips -Z $MOBILENET_IMAGE_SIZE smile/*.jpg
26 | sips -Z $MOBILENET_IMAGE_SIZE laugh/*.jpg
27 | sips -Z $MOBILENET_IMAGE_SIZE croissant/*.jpg
28 | sips -Z $MOBILENET_IMAGE_SIZE checkmark/*.jpg
29 | 
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/checkmark/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/checkmark/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/cloud/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/cloud/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/croissant/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/croissant/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/heart/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/heart/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/laugh/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/laugh/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/smile/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/smile/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/input/training-data/sun/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/input/training-data/sun/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/output/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Data-Model/Training/output/.gitkeep
--------------------------------------------------------------------------------
/Data-Model/Training/prepare.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -e
3 | 
4 | #
5 | # MLMOJI
6 | #
7 | # This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
8 | #
9 | # Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License.
10 | #
11 | 
12 | # Install dependencies
13 | echo "👉 Installing dependencies..."
14 | pip install -U tfcoreml six numpy scipy scikit-image opencv-python imgaug
15 | 
16 | # Normalize images
17 | echo "👉 Normalizing images..."
18 | cd input
19 | sh ./normalize.sh
20 | 
21 | # Create output directory
22 | echo "👉 Creating output directory..."
23 | cd ..
24 | rm -rf output && mkdir output
25 | 
26 | # Build TensorFlow image
27 | echo "👉 Building Docker image..."
28 | docker build -t train-classifier .
29 | 
--------------------------------------------------------------------------------
/Data-Model/Training/retrain.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -e
3 | 
4 | #
5 | # MLMOJI
6 | #
7 | # This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
8 | #
9 | # Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License.
10 | #
11 | 
12 | # Train classifier
13 | echo "👉 Retraining model..."
14 | docker run -it -v $(pwd)/output:/output -v $(pwd)/input/normalized:/input train-classifier
15 | 
16 | # Export MLModel
17 | echo "👉 Exporting model..."
18 | python "export.py"
19 | 
--------------------------------------------------------------------------------
/Data-Model/Utilities/ModelTesting/ModelTesting.xcodeproj/project.xcworkspace/contents.xcworkspacedata:
--------------------------------------------------------------------------------
1 | 
2 | 
4 | 
6 | 
7 | 
8 | 
--------------------------------------------------------------------------------
/Data-Model/Utilities/ModelTesting/ModelTesting.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | 
4 | 
5 | 	IDEDidComputeMac32BitWarning
6 | 
7 | 
8 | 
9 | 
--------------------------------------------------------------------------------
/Data-Model/Utilities/ModelTesting/ModelTesting/AppDelegate.swift:
--------------------------------------------------------------------------------
1 | //
2 | // MLMOJI
3 | //
4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License.
6 | //
7 | 
8 | import UIKit
9 | 
10 | @UIApplicationMain
11 | class AppDelegate: UIResponder, UIApplicationDelegate {
12 |     var window: UIWindow?
13 | } 14 | -------------------------------------------------------------------------------- /Data-Model/Utilities/ModelTesting/ModelTesting/Assets.xcassets/AppIcon.appiconset/Contents.json: -------------------------------------------------------------------------------- 1 | { 2 | "images" : [ 3 | { 4 | "idiom" : "iphone", 5 | "size" : "20x20", 6 | "scale" : "2x" 7 | }, 8 | { 9 | "idiom" : "iphone", 10 | "size" : "20x20", 11 | "scale" : "3x" 12 | }, 13 | { 14 | "idiom" : "iphone", 15 | "size" : "29x29", 16 | "scale" : "2x" 17 | }, 18 | { 19 | "idiom" : "iphone", 20 | "size" : "29x29", 21 | "scale" : "3x" 22 | }, 23 | { 24 | "idiom" : "iphone", 25 | "size" : "40x40", 26 | "scale" : "2x" 27 | }, 28 | { 29 | "idiom" : "iphone", 30 | "size" : "40x40", 31 | "scale" : "3x" 32 | }, 33 | { 34 | "idiom" : "iphone", 35 | "size" : "60x60", 36 | "scale" : "2x" 37 | }, 38 | { 39 | "idiom" : "iphone", 40 | "size" : "60x60", 41 | "scale" : "3x" 42 | }, 43 | { 44 | "idiom" : "ipad", 45 | "size" : "20x20", 46 | "scale" : "1x" 47 | }, 48 | { 49 | "idiom" : "ipad", 50 | "size" : "20x20", 51 | "scale" : "2x" 52 | }, 53 | { 54 | "idiom" : "ipad", 55 | "size" : "29x29", 56 | "scale" : "1x" 57 | }, 58 | { 59 | "idiom" : "ipad", 60 | "size" : "29x29", 61 | "scale" : "2x" 62 | }, 63 | { 64 | "idiom" : "ipad", 65 | "size" : "40x40", 66 | "scale" : "1x" 67 | }, 68 | { 69 | "idiom" : "ipad", 70 | "size" : "40x40", 71 | "scale" : "2x" 72 | }, 73 | { 74 | "idiom" : "ipad", 75 | "size" : "76x76", 76 | "scale" : "1x" 77 | }, 78 | { 79 | "idiom" : "ipad", 80 | "size" : "76x76", 81 | "scale" : "2x" 82 | }, 83 | { 84 | "idiom" : "ipad", 85 | "size" : "83.5x83.5", 86 | "scale" : "2x" 87 | } 88 | ], 89 | "info" : { 90 | "version" : 1, 91 | "author" : "xcode" 92 | } 93 | } -------------------------------------------------------------------------------- /Data-Model/Utilities/ModelTesting/ModelTesting/Base.lproj/LaunchScreen.storyboard: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | -------------------------------------------------------------------------------- /Data-Model/Utilities/ModelTesting/ModelTesting/Base.lproj/Main.storyboard: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | -------------------------------------------------------------------------------- /Data-Model/Utilities/ModelTesting/ModelTesting/Info.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | CFBundleDevelopmentRegion 6 | $(DEVELOPMENT_LANGUAGE) 7 | CFBundleDisplayName 8 | ModelTesting 9 | CFBundleExecutable 10 | $(EXECUTABLE_NAME) 11 | CFBundleIdentifier 12 | $(PRODUCT_BUNDLE_IDENTIFIER) 13 | CFBundleInfoDictionaryVersion 14 | 6.0 15 | CFBundleName 16 | $(PRODUCT_NAME) 17 | CFBundlePackageType 18 | APPL 19 | CFBundleShortVersionString 20 | 1.0 21 | CFBundleVersion 22 | 1 23 | LSRequiresIPhoneOS 24 | 25 | UIFileSharingEnabled 26 | 27 | UILaunchStoryboardName 28 | LaunchScreen 29 | UIMainStoryboardFile 30 | Main 31 | UIRequiredDeviceCapabilities 32 | 33 | armv7 34 | 35 | UISupportedInterfaceOrientations 36 | 37 | UIInterfaceOrientationPortrait 38 | UIInterfaceOrientationLandscapeLeft 39 | UIInterfaceOrientationLandscapeRight 40 | 41 | UISupportedInterfaceOrientations~ipad 42 | 43 | 
UIInterfaceOrientationPortrait 44 | UIInterfaceOrientationPortraitUpsideDown 45 | UIInterfaceOrientationLandscapeLeft 46 | UIInterfaceOrientationLandscapeRight 47 | 48 | 49 | 50 | -------------------------------------------------------------------------------- /Data-Model/Utilities/ModelTesting/ModelTesting/PaperContainerView.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | class PaperContainerView: UIView { 11 | 12 | let paper: Paper 13 | 14 | init(paper: Paper) { 15 | self.paper = paper 16 | super.init(frame: .zero) 17 | createConstraints() 18 | } 19 | 20 | required init?(coder aDecoder: NSCoder) { 21 | self.paper = Paper() 22 | super.init(coder: aDecoder) 23 | createConstraints() 24 | } 25 | 26 | private func createConstraints() { 27 | addSubview(paper) 28 | paper.translatesAutoresizingMaskIntoConstraints = false 29 | paper.centerXAnchor.constraint(equalTo: centerXAnchor).isActive = true 30 | paper.centerYAnchor.constraint(equalTo: centerYAnchor).isActive = true 31 | paper.widthAnchor.constraint(equalToConstant: 300).isActive = true 32 | paper.heightAnchor.constraint(equalToConstant: 300).isActive = true 33 | 34 | setContentCompressionResistancePriority(.defaultHigh, for: .horizontal) 35 | setContentCompressionResistancePriority(.defaultHigh, for: .vertical) 36 | setContentHuggingPriority(.defaultLow, for: .horizontal) 37 | setContentHuggingPriority(.defaultLow, for: .vertical) 38 | } 39 | 40 | override var intrinsicContentSize: CGSize { 41 | return CGSize(width: 300, height: 300) 42 | } 43 | 44 | } 45 | -------------------------------------------------------------------------------- /Data-Model/Utilities/ModelTesting/ModelTesting/ViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * The view controller that records the samples for data labels. 12 | */ 13 | 14 | class ViewController: UIViewController { 15 | 16 | /** 17 | * Possible states for sample collection. 18 | */ 19 | 20 | enum State { 21 | case idle 22 | case drawing 23 | case recognized(String) 24 | } 25 | 26 | /// The state of sample collection. 27 | private var state: State = .idle { 28 | didSet { 29 | refreshInterfaceForUpdatedState() 30 | } 31 | } 32 | 33 | private var session = PredictionSession() 34 | 35 | 36 | // MARK: - Views 37 | 38 | private let currentTitleLabel = UILabel() 39 | private let paper = Paper() 40 | private let submitButton = UIButton() 41 | private let stackView = UIStackView() 42 | 43 | // MARK: - UI Setup 44 | 45 | override func viewDidLoad() { 46 | super.viewDidLoad() 47 | session.delegate = self 48 | 49 | createViews() 50 | createContraints() 51 | view.backgroundColor = UIColor.groupTableViewBackground 52 | } 53 | 54 | /** 55 | * Creates the views to display on the data collection screen. 
56 | */ 57 | 58 | private func createViews() { 59 | 60 | currentTitleLabel.adjustsFontSizeToFitWidth = true 61 | currentTitleLabel.textAlignment = .center 62 | currentTitleLabel.font = UIFont.systemFont(ofSize: 30) 63 | currentTitleLabel.numberOfLines = 0 64 | 65 | paper.backgroundColor = .white 66 | paper.isUserInteractionEnabled = false 67 | paper.delegate = self 68 | 69 | let paperContainer = PaperContainerView(paper: paper) 70 | 71 | submitButton.setTitleColor(.white, for: .normal) 72 | submitButton.setTitle("Save", for: .normal) 73 | submitButton.titleLabel?.font = UIFont.systemFont(ofSize: 17, weight: .semibold) 74 | 75 | let buttonBackgroundColor = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1)).image { context in 76 | UIColor(red: 0, green: 122/255, blue: 1, alpha: 1).setFill() 77 | context.fill(CGRect(x: 0, y: 0, width: 1, height: 1)) 78 | } 79 | 80 | submitButton.setBackgroundImage(buttonBackgroundColor, for: .normal) 81 | submitButton.setContentHuggingPriority(.defaultLow, for: .horizontal) 82 | submitButton.layer.cornerRadius = 12 83 | submitButton.clipsToBounds = true 84 | 85 | submitButton.addTarget(self, action: #selector(submitSketch), for: .touchUpInside) 86 | 87 | stackView.alignment = .fill 88 | stackView.axis = .vertical 89 | stackView.addArrangedSubview(currentTitleLabel) 90 | stackView.addArrangedSubview(paperContainer) 91 | stackView.addArrangedSubview(submitButton) 92 | 93 | view.addSubview(stackView) 94 | 95 | configureViews(isRegular: traitCollection.horizontalSizeClass == .regular) 96 | refreshInterfaceForUpdatedState() 97 | 98 | } 99 | 100 | /** 101 | * Creates the Auto Layout constraints for the view. 102 | */ 103 | 104 | private func createContraints() { 105 | stackView.translatesAutoresizingMaskIntoConstraints = false 106 | stackView.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true 107 | stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true 108 | 109 | submitButton.heightAnchor.constraint(equalToConstant: 55).isActive = true 110 | } 111 | 112 | /** 113 | * Responds to size class changes. 114 | */ 115 | 116 | override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) { 117 | super.traitCollectionDidChange(previousTraitCollection) 118 | configureViews(isRegular: traitCollection.horizontalSizeClass == .regular) 119 | } 120 | 121 | /** 122 | * Configures the view for adaptive presentation. 123 | */ 124 | 125 | private func configureViews(isRegular: Bool) { 126 | stackView.spacing = isRegular ? 64 : 32 127 | } 128 | 129 | // MARK: - Session 130 | 131 | 132 | /** 133 | * Refreshes the UI when the state changes. 134 | */ 135 | 136 | private func refreshInterfaceForUpdatedState() { 137 | 138 | switch state { 139 | case .idle: 140 | submitButton.setTitle("Recognize", for: .normal) 141 | currentTitleLabel.isHidden = true 142 | submitButton.isEnabled = false 143 | paper.isUserInteractionEnabled = true 144 | case .drawing: 145 | submitButton.isEnabled = true 146 | case .recognized(let labels): 147 | paper.isUserInteractionEnabled = false 148 | submitButton.setTitle("Restart", for: .normal) 149 | currentTitleLabel.text = labels 150 | currentTitleLabel.isHidden = false 151 | } 152 | 153 | } 154 | 155 | /** 156 | * Attempts to save the skecth from the paper into the session. 
157 | */ 158 | 159 | @objc private func submitSketch() { 160 | 161 | switch state { 162 | case .idle: 163 | return 164 | case .recognized: 165 | paper.clear() 166 | state = .idle 167 | return 168 | case .drawing: 169 | break 170 | } 171 | 172 | let exporter = CVPixelBufferExporter(size: session.inputSize, scale: 1) 173 | 174 | do { 175 | let buffer = try paper.export(with: exporter) 176 | session.requestPrediction(for: buffer) 177 | } catch { 178 | handleError((error as NSError).localizedDescription) 179 | } 180 | 181 | } 182 | 183 | } 184 | 185 | // MARK: - PredictionSessionDelegate + PaperDelegate 186 | 187 | extension ViewController: PredictionSessionDelegate { 188 | 189 | func predictionSession(_ session: PredictionSession, didUpdatePrediction prediction: EmojiSketchesOutput) { 190 | 191 | let top3: [String] = prediction.final_result__0.sorted { a, b in 192 | return a.value > b.value 193 | }.prefix(3).flatMap { 194 | guard let predictionClass = Class(rawValue: $0.key) else { 195 | handleError("Invalid class name '\($0.key)'") 196 | return nil 197 | } 198 | 199 | return "\(predictionClass.emojiValue) = \($0.value)" 200 | } 201 | 202 | state = .recognized(top3.joined(separator: "\n")) 203 | 204 | } 205 | 206 | func predictionSession(_ session: PredictionSession, didFailToProvidePredictionWith error: NSError) { 207 | handleError(error.localizedDescription) 208 | } 209 | 210 | } 211 | 212 | extension ViewController: PaperDelegate { 213 | 214 | func paperDidStartDrawingStroke(_ paper: Paper) { 215 | // no-op 216 | } 217 | 218 | func paperDidFinishDrawingStroke(_ paper: Paper) { 219 | guard case .idle = state else { 220 | return 221 | } 222 | 223 | state = .drawing 224 | } 225 | 226 | } 227 | 228 | // MARK: - Error Handling 229 | 230 | extension ViewController { 231 | 232 | func handleError(_ description: String) { 233 | 234 | let alert = UIAlertController(title: "Could not recognize emoji", 235 | message: description, 236 | preferredStyle: .alert) 237 | 238 | let cancelAction = UIAlertAction(title: "Cancel", style: .cancel, handler: { _ in 239 | self.paper.clear() 240 | }) 241 | 242 | alert.addAction(cancelAction) 243 | present(alert, animated: true) 244 | 245 | } 246 | 247 | } 248 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting.xcodeproj/project.xcworkspace/contents.xcworkspacedata: -------------------------------------------------------------------------------- 1 | 2 | 4 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | IDEDidComputeMac32BitWarning 6 | 7 | 8 | 9 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/AppDelegate.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | @UIApplicationMain 11 | class AppDelegate: UIResponder, UIApplicationDelegate { 12 | var window: UIWindow? 
13 | } 14 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/Assets.xcassets/AppIcon.appiconset/Contents.json: -------------------------------------------------------------------------------- 1 | { 2 | "images" : [ 3 | { 4 | "idiom" : "iphone", 5 | "size" : "20x20", 6 | "scale" : "2x" 7 | }, 8 | { 9 | "idiom" : "iphone", 10 | "size" : "20x20", 11 | "scale" : "3x" 12 | }, 13 | { 14 | "idiom" : "iphone", 15 | "size" : "29x29", 16 | "scale" : "2x" 17 | }, 18 | { 19 | "idiom" : "iphone", 20 | "size" : "29x29", 21 | "scale" : "3x" 22 | }, 23 | { 24 | "idiom" : "iphone", 25 | "size" : "40x40", 26 | "scale" : "2x" 27 | }, 28 | { 29 | "idiom" : "iphone", 30 | "size" : "40x40", 31 | "scale" : "3x" 32 | }, 33 | { 34 | "idiom" : "iphone", 35 | "size" : "60x60", 36 | "scale" : "2x" 37 | }, 38 | { 39 | "idiom" : "iphone", 40 | "size" : "60x60", 41 | "scale" : "3x" 42 | }, 43 | { 44 | "idiom" : "ipad", 45 | "size" : "20x20", 46 | "scale" : "1x" 47 | }, 48 | { 49 | "idiom" : "ipad", 50 | "size" : "20x20", 51 | "scale" : "2x" 52 | }, 53 | { 54 | "idiom" : "ipad", 55 | "size" : "29x29", 56 | "scale" : "1x" 57 | }, 58 | { 59 | "idiom" : "ipad", 60 | "size" : "29x29", 61 | "scale" : "2x" 62 | }, 63 | { 64 | "idiom" : "ipad", 65 | "size" : "40x40", 66 | "scale" : "1x" 67 | }, 68 | { 69 | "idiom" : "ipad", 70 | "size" : "40x40", 71 | "scale" : "2x" 72 | }, 73 | { 74 | "idiom" : "ipad", 75 | "size" : "76x76", 76 | "scale" : "1x" 77 | }, 78 | { 79 | "idiom" : "ipad", 80 | "size" : "76x76", 81 | "scale" : "2x" 82 | }, 83 | { 84 | "idiom" : "ipad", 85 | "size" : "83.5x83.5", 86 | "scale" : "2x" 87 | }, 88 | { 89 | "idiom" : "ios-marketing", 90 | "size" : "1024x1024", 91 | "scale" : "1x" 92 | } 93 | ], 94 | "info" : { 95 | "version" : 1, 96 | "author" : "xcode" 97 | } 98 | } -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/Base.lproj/LaunchScreen.storyboard: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/Base.lproj/Main.storyboard: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/Canvas.swift: -------------------------------------------------------------------------------- 1 | // 2 | // Canvas.swift 3 | // TestDrawing 4 | // 5 | // Created by Alexis AUBRY on 23/03/2018. 6 | // Copyright © 2018 Alexis Aubry. All rights reserved. 
7 | // 8 | 9 | import Foundation 10 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/Info.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | CFBundleDevelopmentRegion 6 | $(DEVELOPMENT_LANGUAGE) 7 | CFBundleDisplayName 8 | SampleCollecting 9 | CFBundleExecutable 10 | $(EXECUTABLE_NAME) 11 | CFBundleIdentifier 12 | $(PRODUCT_BUNDLE_IDENTIFIER) 13 | CFBundleInfoDictionaryVersion 14 | 6.0 15 | CFBundleName 16 | $(PRODUCT_NAME) 17 | CFBundlePackageType 18 | APPL 19 | CFBundleShortVersionString 20 | 1.0 21 | CFBundleVersion 22 | 1 23 | LSRequiresIPhoneOS 24 | 25 | UIFileSharingEnabled 26 | 27 | UILaunchStoryboardName 28 | LaunchScreen 29 | UIMainStoryboardFile 30 | Main 31 | UIRequiredDeviceCapabilities 32 | 33 | armv7 34 | 35 | UISupportedInterfaceOrientations 36 | 37 | UIInterfaceOrientationPortrait 38 | UIInterfaceOrientationLandscapeLeft 39 | UIInterfaceOrientationLandscapeRight 40 | 41 | UISupportedInterfaceOrientations~ipad 42 | 43 | UIInterfaceOrientationPortrait 44 | UIInterfaceOrientationPortraitUpsideDown 45 | UIInterfaceOrientationLandscapeLeft 46 | UIInterfaceOrientationLandscapeRight 47 | 48 | 49 | 50 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/PaperContainerView.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | class PaperContainerView: UIView { 11 | 12 | let paper: Paper 13 | 14 | init(paper: Paper) { 15 | self.paper = paper 16 | super.init(frame: .zero) 17 | createConstraints() 18 | } 19 | 20 | required init?(coder aDecoder: NSCoder) { 21 | self.paper = Paper() 22 | super.init(coder: aDecoder) 23 | createConstraints() 24 | } 25 | 26 | private func createConstraints() { 27 | addSubview(paper) 28 | paper.translatesAutoresizingMaskIntoConstraints = false 29 | paper.centerXAnchor.constraint(equalTo: centerXAnchor).isActive = true 30 | paper.centerYAnchor.constraint(equalTo: centerYAnchor).isActive = true 31 | paper.widthAnchor.constraint(equalToConstant: 300).isActive = true 32 | paper.heightAnchor.constraint(equalToConstant: 300).isActive = true 33 | 34 | setContentCompressionResistancePriority(.defaultHigh, for: .horizontal) 35 | setContentCompressionResistancePriority(.defaultHigh, for: .vertical) 36 | setContentHuggingPriority(.defaultLow, for: .horizontal) 37 | setContentHuggingPriority(.defaultLow, for: .vertical) 38 | } 39 | 40 | override var intrinsicContentSize: CGSize { 41 | return CGSize(width: 300, height: 300) 42 | } 43 | 44 | } 45 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/Session.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | import UIKit 10 | 11 | /** 12 | * The delegate of a session. 
13 |  */
14 | 
15 | protocol SessionDelegate: class {
16 | 
17 |     /**
18 |      * The session did request a sample for the given label. Once you have the data, call
19 |      * `session.completeRequest(withID:sample:)` to provide the sample data.
20 |      */
21 | 
22 |     func sessionDidRequestSample(for label: Class, requestID: UUID)
23 | 
24 |     /**
25 |      * The session did add a contribution for a label (i.e. the file was successfully saved).
26 |      */
27 | 
28 |     func sessionDidAddContribution()
29 | 
30 | }
31 | 
32 | /**
33 |  * A sample data gathering session.
34 |  *
35 |  * Set the `delegate`, and call `start` to start a new session.
36 |  */
37 | 
38 | class Session {
39 | 
40 |     /// The delegate that receives information about the session.
41 |     weak var delegate: SessionDelegate?
42 | 
43 |     /// The number of contributions for each label.
44 |     fileprivate(set) var contributions: [Class: Int]
45 | 
46 |     /// The desired size of output samples.
47 |     let sampleSize = CGSize(width: 300, height: 300)
48 | 
49 |     private let storage: Storage
50 |     private let labels: [Class] = Class.allLabels
51 |     private var activeRequest: (UUID, Class)?
52 | 
53 |     // MARK: - Initialization
54 | 
55 |     init() throws {
56 |         self.storage = try Storage()
57 |         self.contributions = storage.fetchContributions()
58 |     }
59 | 
60 |     // MARK: - Interacting with the session
61 | 
62 |     /**
63 |      * Starts the session. Make sure `delegate` is set before calling this method.
64 |      */
65 | 
66 |     func start() {
67 | 
68 |         guard labels.count >= 1 else {
69 |             fatalError("No labels to classify.")
70 |         }
71 | 
72 |         guard activeRequest == nil else {
73 |             fatalError("Cannot start the session while a request is active.")
74 |         }
75 | 
76 |         requestSampleForNextLabel()
77 | 
78 |     }
79 | 
80 |     /**
81 |      * Saves the provided sample for the given request. The sample data must be in the JPG format.
82 |      */
83 | 
84 |     func completeRequest(withID requestID: UUID, sample: Data) throws {
85 | 
86 |         guard let activeRequest = self.activeRequest else {
87 |             fatalError("Cannot complete a request that never started.")
88 |         }
89 | 
90 |         guard activeRequest.0 == requestID else {
91 |             fatalError("Wrong request. Only one request can be active at a time.")
92 |         }
93 | 
94 |         guard storage.save(jpgImage: sample, for: activeRequest.1) == true else {
95 |             throw NSError(domain: "SessionErrorDomain", code: 1000, userInfo: [
96 |                 NSLocalizedDescriptionKey: "The file could not be saved on disk."
97 |             ])
98 |         }
99 | 
100 |         let initialContributions = contributions[activeRequest.1] ?? 0
101 |         contributions[activeRequest.1] = initialContributions + 1
102 | 
103 |         delegate?.sessionDidAddContribution()
104 |         requestSampleForNextLabel()
105 | 
106 |     }
107 | 
108 |     private func requestSampleForNextLabel() {
109 | 
110 |         // Prioritize the labels with the fewest contributions
111 |         let lowestContributors = labels.sorted {
112 |             (self.contributions[$0]! < self.contributions[$1]!)
113 |         }
114 | 
115 |         let requestID = UUID()
116 |         let nextLabel = lowestContributors.first!
117 | 
118 |         self.activeRequest = (requestID, nextLabel)
119 |         delegate?.sessionDidRequestSample(for: nextLabel, requestID: requestID)
120 | 
121 |     }
122 | 
123 | }
124 | 
--------------------------------------------------------------------------------
/Data-Model/Utilities/SampleCollecting/SampleCollecting/Storage.swift:
--------------------------------------------------------------------------------
1 | //
2 | // MLMOJI
3 | //
4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
5 | // Copyright © 2018 Alexis Aubry.
Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | 10 | /** 11 | * Manages the storage of the generated training data. 12 | */ 13 | 14 | class Storage { 15 | 16 | /// The URL of the output data set. 17 | let dataSetURL: URL 18 | 19 | init() throws { 20 | 21 | let documentsDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] 22 | let documentsDirectoryURL = URL(fileURLWithPath: documentsDirectory) 23 | 24 | self.dataSetURL = documentsDirectoryURL.appendingPathComponent("training-data") 25 | 26 | for label in Class.allLabels { 27 | 28 | let labelURL = dataSetURL.appendingPathComponent(label.rawValue) 29 | var isDirectory: ObjCBool = false 30 | 31 | if FileManager.default.fileExists(atPath: labelURL.path, isDirectory: &isDirectory) { 32 | 33 | if isDirectory.boolValue == false { 34 | try FileManager.default.removeItem(at: labelURL) 35 | try FileManager.default.createDirectory(at: labelURL, withIntermediateDirectories: true, attributes: nil) 36 | } 37 | 38 | } else { 39 | try FileManager.default.createDirectory(at: labelURL, withIntermediateDirectories: true, attributes: nil) 40 | } 41 | 42 | } 43 | 44 | // Clean up items that do not match a known label 45 | 46 | if let contents = try? FileManager.default.contentsOfDirectory(atPath: dataSetURL.path) { 47 | 48 | for item in contents { 49 | 50 | if Class(rawValue: item) == nil { 51 | let url = dataSetURL.appendingPathComponent(item) 52 | try FileManager.default.removeItem(at: url) 53 | } 54 | 55 | } 56 | 57 | } 58 | 59 | } 60 | 61 | /** 62 | * Saves the JPG image data for the specified label. Returns `true` if the operation was successful. 63 | */ 64 | 65 | func save(jpgImage: Data, for label: Class) -> Bool { 66 | 67 | let itemURL = dataSetURL 68 | .appendingPathComponent(label.rawValue) 69 | .appendingPathComponent(UUID().uuidString) 70 | .appendingPathExtension("jpg") 71 | 72 | return FileManager.default.createFile(atPath: itemURL.path, contents: jpgImage, attributes: nil) 73 | 74 | } 75 | 76 | /** 77 | * Returns the number of samples saved for every label. 78 | */ 79 | 80 | func fetchContributions() -> [Class: Int] { 81 | 82 | var result: [Class: Int] = [:] 83 | 84 | for label in Class.allLabels { 85 | let labelURL = dataSetURL.appendingPathComponent(label.rawValue) 86 | let numberOfItems = (try? FileManager.default.contentsOfDirectory(atPath: labelURL.path))?.count ?? 0 87 | result[label] = numberOfItems 88 | } 89 | 90 | return result 91 | 92 | } 93 | 94 | } 95 | -------------------------------------------------------------------------------- /Data-Model/Utilities/SampleCollecting/SampleCollecting/ViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import MobileCoreServices 10 | 11 | /** 12 | * The view controller that records the samples for data labels. 13 | */ 14 | 15 | class ViewController: UIViewController { 16 | 17 | /** 18 | * Possible states for sample collection. 19 | */ 20 | 21 | enum State { 22 | case idle 23 | case pendingUserInput(UUID) 24 | case pendingUserConfirmation(UUID) 25 | case pendingNewRequest 26 | } 27 | 28 | /// The active session. 29 | let session = try! Session() 30 | 31 | /// The state of sample collection.
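/// Updating this value refreshes the interface through the `didSet` observer below.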
32 | private var state: State = .idle { 33 | didSet { 34 | refreshInterfaceForUpdatedState() 35 | } 36 | } 37 | 38 | 39 | // MARK: - Views 40 | 41 | private let currentTitleLabel = UILabel() 42 | private let paper = Paper() 43 | private let submitButton = UIButton() 44 | private let stackView = UIStackView() 45 | private let contributionsLabel = UILabel() 46 | 47 | 48 | // MARK: - UI Setup 49 | 50 | override func viewDidLoad() { 51 | super.viewDidLoad() 52 | createViews() 53 | createConstraints() 54 | view.backgroundColor = UIColor.groupTableViewBackground 55 | 56 | session.delegate = self 57 | session.start() 58 | } 59 | 60 | /** 61 | * Creates the views to display on the data collection screen. 62 | */ 63 | 64 | private func createViews() { 65 | 66 | currentTitleLabel.adjustsFontSizeToFitWidth = true 67 | currentTitleLabel.textAlignment = .center 68 | currentTitleLabel.font = UIFont.systemFont(ofSize: 100) 69 | 70 | contributionsLabel.numberOfLines = 0 71 | contributionsLabel.textAlignment = .center 72 | contributionsLabel.font = UIFont.systemFont(ofSize: 24) 73 | 74 | paper.backgroundColor = .white 75 | paper.isUserInteractionEnabled = false 76 | paper.delegate = self 77 | 78 | let paperContainer = PaperContainerView(paper: paper) 79 | 80 | submitButton.setTitleColor(.white, for: .normal) 81 | submitButton.setTitle("Save", for: .normal) 82 | submitButton.titleLabel?.font = UIFont.systemFont(ofSize: 17, weight: .semibold) 83 | 84 | let buttonBackgroundColor = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1)).image { context in 85 | UIColor(red: 0, green: 122/255, blue: 1, alpha: 1).setFill() 86 | context.fill(CGRect(x: 0, y: 0, width: 1, height: 1)) 87 | } 88 | 89 | submitButton.setBackgroundImage(buttonBackgroundColor, for: .normal) 90 | submitButton.setContentHuggingPriority(.defaultLow, for: .horizontal) 91 | submitButton.layer.cornerRadius = 12 92 | submitButton.clipsToBounds = true 93 | 94 | submitButton.addTarget(self, action: #selector(submitSketch), for: .touchUpInside) 95 | 96 | stackView.alignment = .fill 97 | stackView.axis = .vertical 98 | stackView.addArrangedSubview(currentTitleLabel) 99 | stackView.addArrangedSubview(paperContainer) 100 | stackView.addArrangedSubview(submitButton) 101 | 102 | view.addSubview(stackView) 103 | view.addSubview(contributionsLabel) 104 | 105 | refreshContributionsLabel() 106 | configureViews(isRegular: traitCollection.horizontalSizeClass == .regular) 107 | refreshInterfaceForUpdatedState() 108 | 109 | } 110 | 111 | /** 112 | * Creates the Auto Layout constraints for the view. 113 | */ 114 | 115 | private func createConstraints() { 116 | stackView.translatesAutoresizingMaskIntoConstraints = false 117 | stackView.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true 118 | stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true 119 | 120 | contributionsLabel.translatesAutoresizingMaskIntoConstraints = false 121 | contributionsLabel.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -16).isActive = true 122 | contributionsLabel.leadingAnchor.constraint(equalTo: view.readableContentGuide.leadingAnchor, constant: 16).isActive = true 123 | contributionsLabel.trailingAnchor.constraint(equalTo: view.readableContentGuide.trailingAnchor, constant: -16).isActive = true 124 | 125 | submitButton.heightAnchor.constraint(equalToConstant: 55).isActive = true 126 | } 127 | 128 | /** 129 | * Responds to size class changes.
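* The stack view spacing is reduced for compact widths (see `configureViews(isRegular:)`).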
130 | */ 131 | 132 | override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) { 133 | super.traitCollectionDidChange(previousTraitCollection) 134 | configureViews(isRegular: traitCollection.horizontalSizeClass == .regular) 135 | } 136 | 137 | /** 138 | * Configures the view for adaptive presentation. 139 | */ 140 | 141 | private func configureViews(isRegular: Bool) { 142 | stackView.spacing = isRegular ? 64 : 32 143 | } 144 | 145 | // MARK: - Session 146 | 147 | /** 148 | * Refreshes the number of contributions for each trained label. 149 | */ 150 | 151 | private func refreshContributionsLabel() { 152 | 153 | let labels = session.contributions.map { label, count in 154 | return "\(label.emojiValue)\u{a0}\(count)" 155 | } 156 | 157 | contributionsLabel.text = labels.joined(separator: "\u{a0}\u{a0}• ") 158 | 159 | } 160 | 161 | /** 162 | * Refreshes the UI when the state changes. 163 | */ 164 | 165 | private func refreshInterfaceForUpdatedState() { 166 | 167 | switch state { 168 | case .idle: 169 | stackView.isHidden = true 170 | case .pendingUserInput: 171 | submitButton.isEnabled = false 172 | paper.isUserInteractionEnabled = true 173 | stackView.isHidden = false 174 | stackView.alpha = 1 175 | case .pendingUserConfirmation: 176 | submitButton.isEnabled = true 177 | case .pendingNewRequest: 178 | stackView.alpha = 0.5 179 | } 180 | 181 | } 182 | 183 | /** 184 | * Attempts to save the sketch from the paper into the session. 185 | */ 186 | 187 | @objc private func submitSketch() { 188 | 189 | guard case let .pendingUserConfirmation(requestID) = state else { 190 | return 191 | } 192 | 193 | state = .pendingNewRequest 194 | let exporter = BitmapPaperExporter(size: session.sampleSize, scale: 1, fileType: kUTTypeJPEG) 195 | 196 | do { 197 | 198 | let jpgSample = try paper.export(with: exporter) 199 | try session.completeRequest(withID: requestID, sample: jpgSample) 200 | paper.clear() 201 | 202 | } catch { 203 | 204 | let alert = UIAlertController(title: "Could not save sample", 205 | message: (error as NSError).localizedDescription, 206 | preferredStyle: .alert) 207 | 208 | let cancelAction = UIAlertAction(title: "Cancel", style: .cancel, handler: nil) 209 | alert.addAction(cancelAction) 210 | 211 | present(alert, animated: true) { 212 | self.paper.clear() 213 | } 214 | 215 | } 216 | 217 | } 218 | 219 | } 220 | 221 | // MARK: - SessionDelegate + PaperDelegate 222 | 223 | extension ViewController: SessionDelegate, PaperDelegate { 224 | 225 | func sessionDidRequestSample(for label: Class, requestID: UUID) { 226 | currentTitleLabel.text = label.emojiValue 227 | state = .pendingUserInput(requestID) 228 | } 229 | 230 | func sessionDidAddContribution() { 231 | refreshContributionsLabel() 232 | } 233 | 234 | func paperDidStartDrawingStroke(_ paper: Paper) { 235 | // no-op 236 | } 237 | 238 | func paperDidFinishDrawingStroke(_ paper: Paper) { 239 | guard case let .pendingUserInput(requestID) = state else { 240 | return 241 | } 242 | 243 | state = .pendingUserConfirmation(requestID) 244 | } 245 | 246 | } 247 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Alexis Aubry 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the
rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Manifest.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Name 6 | MLMOJI 7 | Pages 8 | 9 | Page1.playgroundpage 10 | Page2.playgroundpage 11 | Page3.playgroundpage 12 | 13 | 14 | 15 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/Contents.swift: -------------------------------------------------------------------------------- 1 | //#-hidden-code 2 | import PlaygroundSupport 3 | 4 | var startedPredictions = false 5 | PlaygroundPage.current.needsIndefiniteExecution = true 6 | 7 | /// Starts the predictive keyboard. 8 | func startPredictions() { 9 | PlaygroundPage.current.proxy?.sendEvent(.startPredictions) 10 | startedPredictions = true 11 | } 12 | //#-end-hidden-code 13 | /*: 14 | # MLMOJI : Hand-Drawn Emoji Recognition 15 | 16 | --- 17 | 18 | With more than 2000 emoji supported on iOS, it can be difficult to find the character you're after in the system keyboard. What if there were a keyboard that converts drawings to the corresponding emoji? ✨ 19 | 20 | This is what this playground is all about! Using **Core Graphics** and **Core ML**, you'll play with a 21 | [deep neural network](glossary://dnn) and explore a more fun and intuitive way to type. 🤖 22 | 23 | The machine learning model was trained to recognize 7 hand-drawn objects: 24 | 25 | ![SupportedEmoji](SupportedEmoji.png) 26 | 27 | ## Getting started 28 | 29 | To launch the predictive keyboard, enter `startPredictions()` in the code editor below and run your code. 30 | 31 | Sketch your emoji in the white area on the right side of the screen. The keyboard 32 | will predict the most likely matches as you keep drawing. Here are some examples, for inspiration: 33 | 34 | ![EmojiStrokes](EmojiStrokes.png) 35 | 36 | After this, if you want to learn how the Core ML model was prepared, you can read the [next page](@next).
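If you want to check your solution, the whole exercise is this single call (the function itself is defined for you in this page's hidden code at the top of the file):

    startPredictions()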
37 | 38 | */ 39 | //#-code-completion(everything, hide) 40 | //#-code-completion(identifier, show, startPredictions()) 41 | //#-editable-code Tap to write your code 42 | //#-end-editable-code 43 | 44 | //#-hidden-code 45 | if !startedPredictions { 46 | PlaygroundPage.current.assessmentStatus = .fail(hints: ["You did not start the predictive keyboard."], 47 | solution: "Enter `startPredictions()` in the code editor on the left and run your code again.") 48 | } 49 | //#-end-hidden-code 50 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/LiveView.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import PlaygroundSupport 10 | 11 | let vc = PlaygroundPredictionViewController() 12 | PlaygroundPage.current.liveView = vc 13 | PlaygroundPage.current.needsIndefiniteExecution = true 14 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/Manifest.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Name 6 | Introducing MLMOJI 7 | LiveViewEdgeToEdge 8 | 9 | LiveViewMode 10 | VisibleByDefault 11 | CodeCopySetup 12 | 13 | ReadyToCopyInstructions 14 | Let's start this challenge! Import the code from the previous page to start. 15 | NotReadyToCopyInstructions 16 | Code 420: You need to complete the previous challenge before starting this one. 17 | 18 | PlaygroundLoggingMode 19 | Off 20 | 21 | 22 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/PrivateResources/EmojiStrokes@2x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/PrivateResources/EmojiStrokes@2x.png -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/PrivateResources/SupportedEmoji@2x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/PrivateResources/SupportedEmoji@2x.png -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/Sources/PredictionResultNode.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License.
6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * A view that displays the prediction result for a single emoji. 12 | * 13 | * It has two states: waiting for prediction and expanded. 14 | */ 15 | 16 | class PredictionResultNode: UIView { 17 | 18 | /// The label that displays the emoji. 19 | let emojiLabel = UILabel() 20 | 21 | /// The label that displays the likeliness percentage. 22 | let percentageLabel = UILabel() 23 | 24 | /// The view that contains the percentage prediction. 25 | let percentageContainerView = UIView() 26 | 27 | // MARK: - Initialization 28 | 29 | override init(frame: CGRect) { 30 | super.init(frame: frame) 31 | configureSubviews() 32 | configureConstraints() 33 | } 34 | 35 | required init?(coder aDecoder: NSCoder) { 36 | super.init(coder: aDecoder) 37 | configureSubviews() 38 | configureConstraints() 39 | } 40 | 41 | /** 42 | * Configures the subviews' appearance. 43 | */ 44 | 45 | private func configureSubviews() { 46 | 47 | emojiLabel.font = UIFont.systemFont(ofSize: 40) 48 | emojiLabel.adjustsFontSizeToFitWidth = true 49 | emojiLabel.textAlignment = .center 50 | emojiLabel.baselineAdjustment = .alignCenters 51 | 52 | percentageLabel.font = UIFont.systemFont(ofSize: 55, weight: .bold) 53 | percentageLabel.textColor = .white 54 | percentageLabel.textAlignment = .center 55 | percentageLabel.adjustsFontSizeToFitWidth = true 56 | percentageLabel.baselineAdjustment = .alignCenters 57 | 58 | percentageContainerView.layer.cornerRadius = 12 59 | percentageContainerView.addSubview(emojiLabel) 60 | percentageContainerView.addSubview(percentageLabel) 61 | 62 | addSubview(percentageContainerView) 63 | 64 | } 65 | 66 | /** 67 | * Creates the constraints for the given view. 68 | */ 69 | 70 | private func configureConstraints() { 71 | 72 | emojiLabel.translatesAutoresizingMaskIntoConstraints = false 73 | percentageLabel.translatesAutoresizingMaskIntoConstraints = false 74 | percentageContainerView.translatesAutoresizingMaskIntoConstraints = false 75 | 76 | emojiLabel.leadingAnchor.constraint(equalTo: percentageContainerView.leadingAnchor, constant: 8).isActive = true 77 | emojiLabel.topAnchor.constraint(equalTo: percentageContainerView.topAnchor, constant: 8).isActive = true 78 | emojiLabel.bottomAnchor.constraint(equalTo: percentageContainerView.bottomAnchor, constant: -10).isActive = true 79 | emojiLabel.setContentCompressionResistancePriority(.defaultHigh, for: .horizontal) 80 | 81 | percentageLabel.leadingAnchor.constraint(equalTo: emojiLabel.trailingAnchor, constant: 8).isActive = true 82 | percentageLabel.trailingAnchor.constraint(equalTo: percentageContainerView.trailingAnchor, constant: -8).isActive = true 83 | percentageLabel.centerYAnchor.constraint(equalTo: percentageContainerView.centerYAnchor).isActive = true 84 | percentageLabel.heightAnchor.constraint(equalToConstant: 44).isActive = true 85 | percentageLabel.setContentCompressionResistancePriority(.defaultLow, for: .horizontal) 86 | 87 | percentageContainerView.leadingAnchor.constraint(equalTo: leadingAnchor).isActive = true 88 | percentageContainerView.widthAnchor.constraint(equalToConstant: 150).isActive = true 89 | percentageContainerView.heightAnchor.constraint(equalToConstant: 55).isActive = true 90 | 91 | trailingAnchor.constraint(equalTo: percentageContainerView.trailingAnchor).isActive = true 92 | bottomAnchor.constraint(equalTo: percentageContainerView.bottomAnchor).isActive = true 93 | 94 | } 95 | 96 | override var intrinsicContentSize: CGSize { 97 | return CGSize(width: 150, height: 55) 98 | } 99 | 100 | //
MARK: - Changing the Data 101 | 102 | /** 103 | * Displays the given prediction output. 104 | */ 105 | 106 | func display(output: Double, for predictionClass: Class) { 107 | 108 | emojiLabel.text = predictionClass.emojiValue 109 | percentageLabel.text = String(format: "%.2f", output * 100) + "%" 110 | 111 | if output >= 0.2 { 112 | percentageLabel.font = UIFont.systemFont(ofSize: 55, weight: .bold) 113 | percentageContainerView.backgroundColor = predictionClass.highlightColor 114 | } else { 115 | percentageLabel.font = UIFont.systemFont(ofSize: 22, weight: .light) 116 | percentageContainerView.backgroundColor = predictionClass.highlightColor.withAlphaComponent(0.5) 117 | } 118 | 119 | } 120 | 121 | /** 122 | * Resets the result node. 123 | */ 124 | 125 | func reset() { 126 | emojiLabel.text = nil 127 | percentageLabel.text = nil 128 | percentageContainerView.backgroundColor = nil 129 | } 130 | 131 | } 132 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/Sources/PredictionResultsContainer.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | private let kPlaceholderFloatAnimation = "PredictionResultsContainer.PlaceholderFloat" 11 | private let kStatusDraw = "🖌 Draw on the canvas below" 12 | private let kStatusFailure = "😿 Could not guess the emoji" 13 | 14 | /** 15 | * A view that displays the results of the prediction. 16 | */ 17 | 18 | class PredictionResultsContainer: UIView { 19 | 20 | enum State { 21 | case idle 22 | case displayed 23 | case startedPredictions 24 | case waitingInitialPrediction 25 | case predicted 26 | case noPrediction 27 | } 28 | 29 | /// The state of the results. 30 | var state: State = .idle 31 | 32 | // MARK: - Properties 33 | 34 | /// The main content stack view. 35 | let stackView = UIStackView() 36 | 37 | /// The lines displaying the placeholder result. 38 | private var placeholderLineStacks: [UIStackView] = [] 39 | 40 | /// The view that contains the status placeholder. 41 | private let statusPlaceholderContainer = UIView() 42 | 43 | /// The placeholder to display status messages. 44 | private let statusPlaceholder = UILabel() 45 | 46 | /// The nodes that display the result of the prediction. 47 | private let resultNodes: [PredictionResultNode] = { 48 | return [PredictionResultNode(), PredictionResultNode(), PredictionResultNode()] 49 | }() 50 | 51 | private var small_left_trailing: NSLayoutConstraint? 52 | private var small_left_bottom: NSLayoutConstraint? 53 | private var small_middle_leading: NSLayoutConstraint? 54 | private var small_middle_bottom: NSLayoutConstraint? 55 | private var small_right_top: NSLayoutConstraint? 56 | private var small_right_centerX: NSLayoutConstraint? 57 | 58 | private var large_middle_centerX: NSLayoutConstraint? 59 | private var large_middle_centerY: NSLayoutConstraint? 60 | private var large_left_trailing: NSLayoutConstraint? 61 | private var large_left_centerY: NSLayoutConstraint? 62 | private var large_right_leading: NSLayoutConstraint? 63 | private var large_right_centerY: NSLayoutConstraint?
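// The small_* and large_* constraint sets are mutually exclusive: `updateLayout(isCompact:)`
// activates one set and deactivates the other, switching the three result nodes between a
// two-above/one-below cluster (compact widths) and a single horizontal row (regular widths).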
64 | 65 | // MARK: - Initialization 66 | 67 | override init(frame: CGRect) { 68 | super.init(frame: frame) 69 | configureSubviews() 70 | configureConstraints() 71 | } 72 | 73 | required init?(coder aDecoder: NSCoder) { 74 | super.init(coder: aDecoder) 75 | configureSubviews() 76 | configureConstraints() 77 | } 78 | 79 | /** 80 | * Configures the view's subviews. 81 | */ 82 | 83 | private func configureSubviews() { 84 | 85 | // 1) Create the general stack 86 | 87 | stackView.axis = .vertical 88 | stackView.spacing = 55 89 | stackView.alignment = .center 90 | stackView.distribution = .fillEqually 91 | 92 | addSubview(stackView) 93 | 94 | // 2) Create the placeholder lines 95 | 96 | let allClasses = Class.allLabels 97 | let (numberOfLines, remainingEmojis) = allClasses.count.quotientAndRemainder(dividingBy: 4) 98 | 99 | // 2.1 - Create the lines 100 | 101 | for lineIndex in 0 ..< numberOfLines { 102 | 103 | let startIndex = allClasses.startIndex + (lineIndex * 4) 104 | let endIndex = startIndex + 4 105 | 106 | let stack = makePlaceholderLineStack(content: allClasses, startIndex: startIndex, endIndex: endIndex) 107 | placeholderLineStacks.append(stack) 108 | stackView.addArrangedSubview(stack) 109 | 110 | } 111 | 112 | // 2.2 - Create a line for the remaining classes 113 | 114 | let startIndex = allClasses.startIndex + (numberOfLines * 4) 115 | let endIndex = startIndex + remainingEmojis 116 | 117 | let stack = makePlaceholderLineStack(content: allClasses, startIndex: startIndex, endIndex: endIndex) 118 | placeholderLineStacks.append(stack) 119 | stackView.addArrangedSubview(stack) 120 | 121 | // 3) Create the result nodes 122 | 123 | for node in resultNodes { 124 | addSubview(node) 125 | } 126 | 127 | // 4) Create the status placeholder 128 | 129 | statusPlaceholder.font = UIFont.systemFont(ofSize: 30, weight: .medium) 130 | statusPlaceholder.numberOfLines = 1 131 | statusPlaceholder.textColor = .black 132 | 133 | statusPlaceholderContainer.backgroundColor = .white 134 | statusPlaceholderContainer.layer.cornerRadius = 12 135 | statusPlaceholderContainer.alpha = 0 136 | 137 | statusPlaceholderContainer.addSubview(statusPlaceholder) 138 | addSubview(statusPlaceholderContainer) 139 | 140 | } 141 | 142 | /** 143 | * Creates a stack view that displays a line of elements. 144 | */ 145 | 146 | private func makeLineStack() -> UIStackView { 147 | 148 | let stack = UIStackView() 149 | stack.spacing = 55 150 | stack.distribution = .fillEqually 151 | stack.alignment = .center 152 | stack.axis = .horizontal 153 | 154 | return stack 155 | 156 | } 157 | 158 | /** 159 | * Creates a placeholder line stack for the classes between the given indices. 
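* Each class in the range becomes a large emoji label; these labels are the floating
* placeholders animated by `floatPlaceholders()`.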
160 | */ 161 | 162 | private func makePlaceholderLineStack(content: [Class], startIndex: Int, endIndex: Int) -> UIStackView { 163 | 164 | let stack = makeLineStack() 165 | 166 | for classIndex in startIndex ..< endIndex { 167 | 168 | let predictionClass = content[classIndex] 169 | 170 | let floatingLabel = UILabel() 171 | floatingLabel.font = UIFont.monospacedDigitSystemFont(ofSize: 69, weight: .regular) 172 | floatingLabel.adjustsFontSizeToFitWidth = true 173 | floatingLabel.text = predictionClass.emojiValue 174 | 175 | stack.addArrangedSubview(floatingLabel) 176 | 177 | } 178 | 179 | return stack 180 | 181 | } 182 | 183 | /** 184 | * Configures the Auto Layout constraints. 185 | */ 186 | 187 | private func configureConstraints() { 188 | 189 | stackView.translatesAutoresizingMaskIntoConstraints = false 190 | stackView.centerXAnchor.constraint(equalTo: centerXAnchor).isActive = true 191 | stackView.centerYAnchor.constraint(equalTo: centerYAnchor).isActive = true 192 | 193 | statusPlaceholderContainer.translatesAutoresizingMaskIntoConstraints = false 194 | statusPlaceholderContainer.centerXAnchor.constraint(equalTo: centerXAnchor).isActive = true 195 | statusPlaceholderContainer.centerYAnchor.constraint(equalTo: centerYAnchor).isActive = true 196 | 197 | statusPlaceholder.translatesAutoresizingMaskIntoConstraints = false 198 | statusPlaceholder.leadingAnchor.constraint(equalTo: statusPlaceholderContainer.leadingAnchor, constant: 10).isActive = true 199 | statusPlaceholder.topAnchor.constraint(equalTo: statusPlaceholderContainer.topAnchor, constant: 10).isActive = true 200 | statusPlaceholderContainer.trailingAnchor.constraint(equalTo: statusPlaceholder.trailingAnchor, constant: 15).isActive = true 201 | statusPlaceholderContainer.bottomAnchor.constraint(equalTo: statusPlaceholder.bottomAnchor, constant: 10).isActive = true 202 | 203 | // Result nodes 204 | 205 | let leftNode = resultNodes[0] 206 | let middleNode = resultNodes[1] 207 | let rightNode = resultNodes[2] 208 | 209 | leftNode.translatesAutoresizingMaskIntoConstraints = false 210 | middleNode.translatesAutoresizingMaskIntoConstraints = false 211 | rightNode.translatesAutoresizingMaskIntoConstraints = false 212 | 213 | // Compact width 214 | 215 | small_left_trailing = leftNode.trailingAnchor.constraint(equalTo: centerXAnchor, constant: -25) 216 | small_left_bottom = leftNode.bottomAnchor.constraint(equalTo: centerYAnchor, constant: -25) 217 | small_middle_leading = middleNode.leadingAnchor.constraint(equalTo: centerXAnchor, constant: 25) 218 | small_middle_bottom = middleNode.bottomAnchor.constraint(equalTo: centerYAnchor, constant: -25) 219 | small_right_top = rightNode.topAnchor.constraint(equalTo: centerYAnchor, constant: 25) 220 | small_right_centerX = rightNode.centerXAnchor.constraint(equalTo: centerXAnchor) 221 | // Large width 222 | large_middle_centerX = middleNode.centerXAnchor.constraint(equalTo: centerXAnchor) 223 | large_middle_centerY = middleNode.centerYAnchor.constraint(equalTo: centerYAnchor) 224 | large_left_trailing = leftNode.trailingAnchor.constraint(equalTo: middleNode.leadingAnchor, constant: -55) 225 | large_left_centerY = leftNode.centerYAnchor.constraint(equalTo: centerYAnchor) 226 | large_right_leading = rightNode.leadingAnchor.constraint(equalTo: middleNode.trailingAnchor, constant: 55) 227 | large_right_centerY = rightNode.centerYAnchor.constraint(equalTo: centerYAnchor) 228 | 229 | 230 | 231 | } 232 | 233 | // MARK: - Animations 234 | 235 | /** 236 | * Adds the gravity floating animation to the
placeholders. 237 | */ 238 | 239 | private func floatPlaceholders() { 240 | 241 | for line in self.placeholderLineStacks.reversed() { 242 | 243 | for placeholder in line.arrangedSubviews { 244 | let rotateAnimation = makeGravityAnimation(for: placeholder) 245 | placeholder.layer.removeAnimation(forKey: kPlaceholderFloatAnimation) 246 | placeholder.layer.add(rotateAnimation, forKey: kPlaceholderFloatAnimation) 247 | } 248 | 249 | } 250 | 251 | } 252 | 253 | /** 254 | * Creates an animation that makes the layer rotate around a random orbit. 255 | */ 256 | 257 | private func makeGravityAnimation(for placeholder: UIView) -> CAKeyframeAnimation { 258 | 259 | // Define the distance between the center of the layer and its rotation orbit 260 | let radius: CGFloat = 7.5 261 | 262 | // Calculate a random angle to define the direction of the orbit and convert it to radians 263 | let angle = (CGFloat(arc4random_uniform(360 - 25 + 1) + 25) * CGFloat.pi) / 180 264 | 265 | // Calculate a random rotation duration (between 4 and 6s) 266 | let duration = Double(arc4random_uniform(6 - 4 + 1) + 4) 267 | 268 | // Calculate the offset between the center of the layer and the edge of the orbit (thanks, trig!) 269 | let yOffset = sin(angle) * radius 270 | let xOffset = cos(angle) * radius 271 | 272 | // Calculate the path of the orbit 273 | 274 | let center = placeholder.center 275 | 276 | let rotationPoint = CGPoint(x: center.x - xOffset, 277 | y: center.y - yOffset) 278 | 279 | let minX = min(center.x, rotationPoint.x) 280 | let minY = min(center.y, rotationPoint.y) 281 | let maxX = max(center.x, rotationPoint.x) 282 | let maxY = max(center.y, rotationPoint.y) 283 | 284 | let size = max(maxX - minX, maxY - minY) 285 | 286 | let ovalRect = CGRect(x: minX, y: minY, 287 | width: size, height: size) 288 | 289 | let ovalPath = UIBezierPath(ovalIn: ovalRect) 290 | 291 | // Create and return the animation 292 | 293 | let animation = CAKeyframeAnimation(keyPath: "position") 294 | animation.path = ovalPath.cgPath 295 | animation.isCumulative = false 296 | animation.fillMode = kCAFillModeForwards 297 | animation.isRemovedOnCompletion = false 298 | animation.repeatCount = .infinity 299 | animation.duration = duration 300 | animation.calculationMode = kCAAnimationPaced 301 | 302 | return animation 303 | 304 | } 305 | 306 | /** 307 | * Creates the animations to display the placeholder status. The returned animations should 308 | * be enqueued as soon as possible. 309 | */ 310 | 311 | private func displayStatusPlaceholder() -> [Animation] { 312 | 313 | var animations: [Animation] = [] 314 | statusPlaceholderContainer.transform = CGAffineTransform(scaleX: 0, y: 0) 315 | 316 | let upscaleStatusPrompt = Animation(type: .property(0.4, .easeIn)) { 317 | self.statusPlaceholderContainer.transform = CGAffineTransform(scaleX: 1.1, y: 1.1) 318 | self.statusPlaceholderContainer.alpha = 0.69 319 | } 320 | 321 | let finalizeStatusPrompt = Animation(type: .property(0.2, .easeOut)) { 322 | self.statusPlaceholderContainer.transform = .identity 323 | self.statusPlaceholderContainer.alpha = 1 324 | } 325 | 326 | animations.append(upscaleStatusPrompt) 327 | animations.append(finalizeStatusPrompt) 328 | 329 | return animations 330 | 331 | } 332 | 333 | // MARK: - Interacting with the Results 334 | 335 | /** 336 | * Starts the result view. 337 | * 338 | * This method adds the gravity floating animation.
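* It is called from the view controller's `viewDidAppear`, once the placeholders have their
* final positions, because the orbit paths are computed from each placeholder's center.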
339 | */ 340 | 341 | func prepareForDisplay() { 342 | state = .displayed 343 | floatPlaceholders() 344 | setNeedsLayout() 345 | } 346 | 347 | /** 348 | * Returns the animations to remove the placeholder and display the draw prompt. 349 | */ 350 | 351 | func startPredictions() -> [Animation] { 352 | 353 | state = .startedPredictions 354 | 355 | var animations: [Animation] = [] 356 | 357 | // Remove the placeholders 358 | 359 | for line in self.placeholderLineStacks.reversed() { 360 | 361 | let removePlaceholders = line.arrangedSubviews.reversed().map { placeholder in 362 | Animation(type: .property(0.25, .linear)) { placeholder.alpha = 0 } 363 | } 364 | 365 | animations.append(contentsOf: removePlaceholders) 366 | 367 | } 368 | 369 | // Hide the lines 370 | 371 | let hideLines = self.placeholderLineStacks.map { line in 372 | Animation(type: .notAnimated) { line.isHidden = true } 373 | } 374 | 375 | animations.append(contentsOf: hideLines) 376 | 377 | // Show the drawing prompt 378 | 379 | statusPlaceholder.text = kStatusDraw 380 | 381 | let statusOnAnimations = displayStatusPlaceholder() 382 | animations.append(contentsOf: statusOnAnimations) 383 | 384 | return animations 385 | 386 | } 387 | 388 | /** 389 | * Removes the state placeholder. 390 | */ 391 | 392 | func startDrawing() { 393 | 394 | guard state == .startedPredictions else { 395 | return 396 | } 397 | 398 | state = .waitingInitialPrediction 399 | 400 | enqueueChanges(animation: .property(0.25, .linear), changes: { 401 | self.statusPlaceholderContainer.alpha = 0 402 | }) 403 | 404 | } 405 | 406 | /** 407 | * Resets all the result nodes. 408 | */ 409 | 410 | func clear() { 411 | 412 | enqueueChanges(animation: .transition(0.35, .transitionCrossDissolve), changes: { 413 | self.resultNodes.forEach({ $0.reset() }) 414 | }) 415 | 416 | } 417 | 418 | /** 419 | * Handles the output of a Core ML prediction. 420 | */ 421 | 422 | func handle(result: EmojiPrediction) { 423 | 424 | state = .predicted 425 | 426 | // Get the results sorted by highest likeliness 427 | 428 | let sortedResults = result.predictions.sorted { 429 | $0.value > $1.value 430 | } 431 | 432 | // Only display the first 3 results 433 | let top3 = sortedResults.prefix(3) 434 | 435 | // If there are fewer than three predictions, treat the prediction as failed 436 | 437 | if top3.count < 3 { 438 | handleNoPrediction() 439 | return 440 | } 441 | 442 | // Remove the status 443 | 444 | enqueueChanges(animation: .property(0.25, .linear), changes: { 445 | self.statusPlaceholderContainer.alpha = 0 446 | }) 447 | 448 | // Update the elements 449 | 450 | for (prediction, node) in zip(top3, resultNodes) { 451 | 452 | guard let classLabel = Class(rawValue: prediction.key) else { 453 | continue 454 | } 455 | 456 | node.alpha = 1 457 | node.display(output: prediction.value, for: classLabel) 458 | 459 | } 460 | 461 | } 462 | 463 | /** 464 | * Handles the case where no predictions are found.
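* It shows the failure status message and, if results were on screen, cross-dissolves the
* result nodes away.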
465 | */ 466 | 467 | private func handleNoPrediction() { 468 | 469 | var animations: [Animation] = [] 470 | 471 | statusPlaceholder.text = kStatusFailure 472 | let statusOnAnimations = displayStatusPlaceholder() 473 | animations.append(contentsOf: statusOnAnimations) 474 | 475 | if state == .predicted { 476 | 477 | let removeResults = Animation(type: .transition(0.35, .transitionCrossDissolve)) { 478 | self.resultNodes.forEach({ $0.reset() }) 479 | } 480 | 481 | animations.append(removeResults) 482 | 483 | } 484 | 485 | state = .noPrediction 486 | enqueueChanges(animations) 487 | 488 | } 489 | 490 | // MARK: - Layout 491 | 492 | override func layoutSubviews() { 493 | super.layoutSubviews() 494 | if case .displayed = state { 495 | floatPlaceholders() 496 | } 497 | 498 | let isCompactWidth = frame.size.width < 624 499 | updateLayout(isCompact: isCompactWidth) 500 | 501 | } 502 | 503 | func updateLayout(isCompact: Bool) { 504 | 505 | small_left_trailing?.isActive = isCompact 506 | small_left_bottom?.isActive = isCompact 507 | small_middle_leading?.isActive = isCompact 508 | small_middle_bottom?.isActive = isCompact 509 | small_right_top?.isActive = isCompact 510 | small_right_centerX?.isActive = isCompact 511 | 512 | large_middle_centerX?.isActive = !isCompact 513 | large_middle_centerY?.isActive = !isCompact 514 | large_left_trailing?.isActive = !isCompact 515 | large_left_centerY?.isActive = !isCompact 516 | large_right_leading?.isActive = !isCompact 517 | large_right_centerY?.isActive = !isCompact 518 | 519 | } 520 | 521 | } 522 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/Sources/PredictionViewController+Playground.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import PlaygroundSupport 10 | 11 | extension PredictionViewController: PlaygroundContainable {} 12 | 13 | /** 14 | * Displays a prediction view controller in the safe area, and handles incoming messages. 15 | */ 16 | 17 | public class PlaygroundPredictionViewController: PlaygroundViewController, PlaygroundLiveViewMessageHandler { 18 | 19 | public convenience init() { 20 | let vc = PredictionViewController() 21 | self.init(child: vc) 22 | } 23 | 24 | public func receive(_ message: PlaygroundValue) { 25 | 26 | guard case let .dictionary(dict) = message else { return } 27 | guard case let .string(eventName)? = dict[MessageType.event] else { return } 28 | 29 | if eventName == Event.startPredictions.rawValue { 30 | child.startPredictions() 31 | } 32 | 33 | } 34 | 35 | } 36 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page1.playgroundpage/Sources/PredictionViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * A view controller that displays a predictive keyboard and the results interface.
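* Strokes drawn on the paper are exported as pixel buffers and sent to a `PredictionSession`;
* results arrive through `PredictionSessionDelegate` and are rendered by the results container.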
12 | */ 13 | 14 | public class PredictionViewController: UIViewController { 15 | 16 | /// The session that will handle predictions. 17 | let session = PredictionSession() 18 | 19 | /// Whether the keyboard is started. 20 | private(set) var startedPredictions: Bool = false 21 | 22 | /// The layout guide of the container. 23 | public var containerLayoutGuide: UILayoutGuide! 24 | 25 | // MARK: - UI Properties 26 | 27 | /// The paper where the user will draw. 28 | let paper = Paper() 29 | 30 | /// The view that displays the results. 31 | let resultsContainer = PredictionResultsContainer() 32 | 33 | /// The view containing the paper. 34 | let keyboardContainer = UIView() 35 | 36 | /// The button to clear the contents of the canvas. 37 | let clearButton = UIButton(type: .system) 38 | 39 | // MARK: - Initialization 40 | 41 | override public func loadView() { 42 | super.loadView() 43 | 44 | // Configure the keyboard container 45 | keyboardContainer.backgroundColor = #colorLiteral(red: 0.8196078431, green: 0.8352941176, blue: 0.8588235294, alpha: 1) 46 | 47 | // Configure the paper 48 | paper.backgroundColor = .white 49 | paper.isUserInteractionEnabled = false 50 | 51 | // Configure the clear button 52 | let clearIcon = UIImage(named: "KeyboardClear")!.withRenderingMode(.alwaysTemplate) 53 | clearButton.setImage(clearIcon, for: .normal) 54 | clearButton.adjustsImageWhenHighlighted = true 55 | clearButton.isEnabled = false 56 | clearButton.addTarget(self, action: #selector(clearCanvas), for: .touchUpInside) 57 | 58 | // Configure the session 59 | session.delegate = self 60 | 61 | } 62 | 63 | override public func viewDidAppear(_ animated: Bool) { 64 | super.viewDidAppear(animated) 65 | resultsContainer.prepareForDisplay() 66 | } 67 | 68 | /// Configures the Auto Layout constraints of the view hierarchy. 
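/// Note: the constraints are pinned to `containerLayoutGuide`, so this is presumably invoked
/// by the hosting playground container once that guide has been set (see `PlaygroundContainable`).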
69 | public func configureConstraints() { 70 | 71 | let keyboardSize: CGFloat = 250 72 | 73 | // Add the keyboard container and move it outside of the screen 74 | 75 | view.addSubview(keyboardContainer) 76 | keyboardContainer.translatesAutoresizingMaskIntoConstraints = false 77 | keyboardContainer.leadingAnchor.constraint(equalTo: view.leadingAnchor).isActive = true 78 | keyboardContainer.trailingAnchor.constraint(equalTo: view.trailingAnchor).isActive = true 79 | keyboardContainer.bottomAnchor.constraint(equalTo: view.bottomAnchor).isActive = true 80 | 81 | keyboardContainer.transform = CGAffineTransform(translationX: 0, y: keyboardSize + 150) 82 | resultsContainer.stackView.transform = CGAffineTransform(translationX: 0, y: 150) 83 | 84 | // Add the paper 85 | keyboardContainer.addSubview(paper) 86 | paper.translatesAutoresizingMaskIntoConstraints = false 87 | paper.bottomAnchor.constraint(equalTo: containerLayoutGuide.bottomAnchor).isActive = true 88 | paper.centerXAnchor.constraint(equalTo: containerLayoutGuide.centerXAnchor).isActive = true 89 | 90 | // Add the clear button 91 | keyboardContainer.addSubview(clearButton) 92 | clearButton.translatesAutoresizingMaskIntoConstraints = false 93 | clearButton.topAnchor.constraint(equalTo: keyboardContainer.topAnchor, constant: 16).isActive = true 94 | clearButton.trailingAnchor.constraint(equalTo: keyboardContainer.trailingAnchor, constant: -16).isActive = true 95 | 96 | // Constrain the size of the keyboard 97 | paper.heightAnchor.constraint(equalToConstant: keyboardSize).isActive = true 98 | paper.widthAnchor.constraint(equalToConstant: keyboardSize).isActive = true 99 | keyboardContainer.topAnchor.constraint(equalTo: paper.topAnchor).isActive = true 100 | 101 | // Add the results container 102 | view.addSubview(resultsContainer) 103 | resultsContainer.translatesAutoresizingMaskIntoConstraints = false 104 | resultsContainer.leadingAnchor.constraint(equalTo: containerLayoutGuide.leadingAnchor).isActive = true 105 | resultsContainer.trailingAnchor.constraint(equalTo: containerLayoutGuide.trailingAnchor).isActive = true 106 | resultsContainer.topAnchor.constraint(equalTo: containerLayoutGuide.topAnchor, constant: 16).isActive = true 107 | resultsContainer.bottomAnchor.constraint(equalTo: keyboardContainer.topAnchor, constant: -16).isActive = true 108 | 109 | } 110 | 111 | // MARK: - Interacting with the Paper 112 | 113 | /** 114 | * Starts the predictive keyboard. 115 | */ 116 | 117 | func startPredictions() { 118 | 119 | paper.delegate = self 120 | paper.isUserInteractionEnabled = true 121 | paper.becomeFirstResponder() 122 | 123 | clearCanvas() 124 | // Raw value 7 is the undocumented curve UIKit uses for its keyboard animations 125 | let curve = UIViewAnimationCurve(rawValue: 7)! 126 | 127 | // Move the keyboard on the screen 128 | 129 | guard !startedPredictions else { 130 | return 131 | } 132 | 133 | var animations = resultsContainer.startPredictions() 134 | 135 | let showKeyboard = Animation(type: .property(0.35, curve)) { 136 | self.resultsContainer.stackView.transform = .identity 137 | self.keyboardContainer.transform = .identity 138 | } 139 | 140 | animations.append(showKeyboard) 141 | view.enqueueChanges(animations) 142 | 143 | startedPredictions = true 144 | 145 | } 146 | 147 | /** 148 | * Clears the canvas and resets the state of the prediction engine. 149 | */ 150 | 151 | @objc private func clearCanvas() { 152 | paper.clear() 153 | resultsContainer.clear() 154 | clearButton.isEnabled = false 155 | resultsContainer.state = startedPredictions ?
.startedPredictions : .displayed 156 | } 157 | 158 | } 159 | 160 | // MARK: - PaperDelegate 161 | 162 | extension PredictionViewController: PaperDelegate { 163 | 164 | /** 165 | * When the user starts drawing, notify the results view. 166 | */ 167 | 168 | public func paperDidStartDrawingStroke(_ paper: Paper) { 169 | resultsContainer.startDrawing() 170 | } 171 | 172 | /** 173 | * When a new stroke is added, request a new prediction. 174 | */ 175 | 176 | public func paperDidFinishDrawingStroke(_ paper: Paper) { 177 | 178 | clearButton.isEnabled = true 179 | 180 | let exporter = CVPixelBufferExporter(size: session.inputSize, scale: 1) 181 | 182 | do { 183 | let imageBuffer = try paper.export(with: exporter) 184 | session.requestPrediction(for: imageBuffer) 185 | } catch { 186 | handleError(error) 187 | } 188 | 189 | } 190 | 191 | } 192 | 193 | extension PredictionViewController: PredictionSessionDelegate { 194 | 195 | public func predictionSession(_ session: PredictionSession, didUpdatePrediction prediction: EmojiPrediction) { 196 | resultsContainer.handle(result: prediction) 197 | } 198 | 199 | public func predictionSession(_ session: PredictionSession, didFailToProvidePredictionWith error: NSError) { 200 | handleError(error) 201 | } 202 | 203 | /** 204 | * Handles a prediction error. 205 | */ 206 | 207 | private func handleError(_ error: Error) { 208 | 209 | let alert = UIAlertController(title: "Could not recognize emoji", 210 | message: (error as NSError).localizedDescription, 211 | preferredStyle: .alert) 212 | 213 | let ok = UIAlertAction(title: "OK", style: .cancel) { _ in 214 | self.clearCanvas() 215 | } 216 | 217 | alert.addAction(ok) 218 | present(alert, animated: true, completion: nil) 219 | 220 | } 221 | 222 | } 223 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/Contents.swift: -------------------------------------------------------------------------------- 1 | //#-hidden-code 2 | import UIKit 3 | import PlaygroundSupport 4 | 5 | var pendingFilters: [AugmentationFilter] = [] 6 | PlaygroundPage.current.needsIndefiniteExecution = true 7 | 8 | /** 9 | * Rotates the image by the specified angle, in degrees. 10 | * - parameter degrees: The angle to rotate the image by. 11 | */ 12 | 13 | func rotate(by degrees: CGFloat) { 14 | pendingFilters.append(.rotate(degrees)) 15 | } 16 | 17 | /** 18 | * Moves the image on the screen. 19 | * - parameter x: The number of pixels to move the image by, horizontally. Can be negative. 20 | * - parameter y: The number of pixels to move the image by, vertically. Can be negative. 21 | */ 22 | 23 | func move(x: CGFloat, y: CGFloat) { 24 | pendingFilters.append(.translate(x, y)) 25 | } 26 | 27 | /** 28 | * Zooms in or out of the image from the center, to the specified percentage. 29 | * 30 | * - parameter percentage: The percentage to scale the image to. A percentage lower than 100 31 | * zooms out; a percentage greater than 100 zooms in. 32 | */ 33 | 34 | func zoom(percentage: CGFloat) { 35 | pendingFilters.append(.scale(percentage)) 36 | } 37 | 38 | /** 39 | * Applies a light Gaussian blur to the image. 40 | */ 41 | 42 | func blur() { 43 | pendingFilters.append(.blur) 44 | } 45 | 46 | //#-end-hidden-code 47 | /*: 48 | # Building a Data Set 49 | 50 | --- 51 | 52 | The emoji recognizer you just tried is powered by a neural network, which was trained with a **data set** prior 53 | to being embedded into the playground.
54 | 55 | Neural networks need to be trained with real-world images to learn the [features](glossary://feature) of each outcome 56 | before they can be used. So, for this task, we need to draw emoji for each class we will be predicting. 57 | 58 | However, the [classifier](glossary://classifier) needs hundreds, if not thousands, of very different images 59 | for each class to reach satisfactory accuracy. As drawing is a very, *very* repetitive task, we need a 60 | process that generates the sketches for us. 61 | 62 | ## Meet Data Augmentation 63 | 64 | Some research is being conducted into generating synthetic images programmatically. For simplicity, 65 | we will use a more approachable technique: **data augmentation**. 66 | 67 | Data augmentation allows us to generate new data from existing real images. With operations such as scaling, 68 | rotation, translation or blurring, we can create images that derive from an existing shape but include 69 | new features for the classifier to learn, because the augmented images look different. 70 | 71 | This is an example set of augmented images from a heart sketch: 72 | 73 | ![AugmentedSet.png](AugmentedSet.png) 74 | 75 | As you can see, 9 different drawings were generated from one sketch. The main benefit of this technique is 76 | its speed: it can increase the size of a medium-sized data set by ten times in less than 5 minutes. 77 | 78 | ## Augment your own drawing 79 | 80 | Now that we've learned about data augmentation, why not try it with our own drawing? To complete this challenge, you can follow these steps: 81 | 82 | 1. Add a list of filters in the code editor below 83 | 84 | We will be using **cumulative augmentation**. Each filter is a function that will be applied on top of the previous one to produce the final image, in the order you write them. 85 | 86 | You can use these augmentation filters: 87 | 88 | - `rotate(by:)` using an angle in degrees. 89 | 90 | - `move(x:y:)` with numbers describing the movement of the image on the screen, in horizontal 91 | and vertical points. 92 | 93 | - `zoom(percentage:)` with a zoom percentage. A percentage lower than `100` makes the 94 | image smaller. A percentage greater than `100` makes the image bigger. 95 | 96 | - `blur()` to blur the image with a light Gaussian blur. For example, the sequence `rotate(by: 15)`, `zoom(percentage: 120)`, `blur()` tilts your sketch, enlarges it, then softens it. 97 | 98 | Your drawing will be converted to a 250x250 pixel image, to which the filters will be applied. Keep 99 | these dimensions in mind when using the `move(x:y:)` filter, or your image may 100 | move off the screen. 101 | 102 | 2. Run your code 103 | 104 | Feel free to experiment with different augmentation sequences by combining filters in multiple orders, 105 | using different arguments and repeating filters. 106 | 107 | */ 108 | //#-code-completion(everything, hide) 109 | //#-code-completion(identifier, show, rotate(by:)) 110 | //#-code-completion(identifier, show, move(x:y:)) 111 | //#-code-completion(identifier, show, zoom(percentage:)) 112 | //#-code-completion(identifier, show, blur()) 113 | //#-editable-code Tap to write your code 114 | //#-end-editable-code 115 | 116 | //#-hidden-code 117 | 118 | if !pendingFilters.isEmpty { 119 | 120 | let vc = AugmentViewController(filters: pendingFilters) 121 | 122 | vc.completionHandler = { 123 | 124 | let pluralMark = pendingFilters.count > 1 ? "s" : "" 125 | 126 | PlaygroundPage.current.assessmentStatus = .pass(message: "You have created an augmented data set with \(pendingFilters.count) new image\(pluralMark)!
You can move to the [next page](@next), or keep experimenting with augmentation by editing your code.") 127 | 128 | } 129 | 130 | PlaygroundPage.current.liveView = PlaygroundViewController(child: vc) 131 | 132 | } else { 133 | 134 | PlaygroundPage.current.assessmentStatus = .fail(hints: ["You did not add filters to augment your image."], 135 | solution: "Add at least one filter, such as `rotate(by: 45)`, in the code editor.") 136 | 137 | } 138 | 139 | //#-end-hidden-code 140 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/Manifest.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Name 6 | Building a Data Set 7 | LiveViewEdgeToEdge 8 | 9 | LiveViewMode 10 | VisibleByDefault 11 | CodeCopySetup 12 | 13 | ReadyToCopyInstructions 14 | Let's start this challenge! Import the code from the previous page to start. 15 | NotReadyToCopyInstructions 16 | Code 420: You need to complete the previous challenge before starting this one. 17 | 18 | PlaygroundLoggingMode 19 | Off 20 | 21 | 22 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/PrivateResources/AugmentedSet@2x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/PrivateResources/AugmentedSet@2x.png -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/Sources/AugmentViewController+Playground.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | extension AugmentViewController: PlaygroundContainable {} 9 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/Sources/AugmentViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import MobileCoreServices 10 | 11 | private let kAugmentCell = "ImageCollectionViewCell" 12 | 13 | /** 14 | * A view controller to perform data augmentation. 15 | */ 16 | 17 | public class AugmentViewController: UIViewController { 18 | 19 | /// The filters to apply. 20 | let filters: [AugmentationFilter] 21 | 22 | /// The list of augmented images received so far. 23 | var augmentedImages: [UIImage] = [] 24 | 25 | /// The block called when augmentation has finished. 26 | public var completionHandler: (() -> Void)? = nil 27 | 28 | /// The active augmentation session. 29 | private var currentSession: AugmentationSession?
= nil 30 | 31 | // MARK: - Views 32 | 33 | let paper = Paper() 34 | let clearButton = UIButton() 35 | let confirmButton = UIButton(type: .system) 36 | let drawContainer = UIView() 37 | 38 | let layout: UICollectionViewFlowLayout = { 39 | let layout = UICollectionViewFlowLayout() 40 | layout.minimumInteritemSpacing = 1 41 | return layout 42 | }() 43 | 44 | lazy var collectionView: UICollectionView = { 45 | return UICollectionView(frame: .zero, collectionViewLayout: layout) 46 | }() 47 | 48 | public var containerLayoutGuide: UILayoutGuide! = nil 49 | 50 | // MARK: - Initialization 51 | 52 | public init(filters: [AugmentationFilter]) { 53 | self.filters = filters 54 | super.init(nibName: nil, bundle: nil) 55 | } 56 | 57 | public required init?(coder aDecoder: NSCoder) { 58 | fatalError("init(coder:) was not implemented.") 59 | } 60 | 61 | public override func loadView() { 62 | super.loadView() 63 | configureViews() 64 | } 65 | 66 | /** 67 | * Configures the views. 68 | */ 69 | 70 | private func configureViews() { 71 | 72 | // Configure the clear button 73 | 74 | let clearIcon = UIImage(named: "KeyboardClear")!.withRenderingMode(.alwaysTemplate) 75 | clearButton.setImage(clearIcon, for: .normal) 76 | clearButton.adjustsImageWhenHighlighted = true 77 | clearButton.isEnabled = false 78 | clearButton.addTarget(self, action: #selector(clearCanvas), for: .touchUpInside) 79 | 80 | // Configure the confirm button 81 | 82 | confirmButton.setTitle("Augment Drawing", for: .normal) 83 | confirmButton.setTitleColor(.white, for: .normal) 84 | confirmButton.titleLabel?.font = UIFont.systemFont(ofSize: 17, weight: .semibold) 85 | 86 | let bgImage = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1)).image { context in 87 | #colorLiteral(red: 0, green: 0.4352941176, blue: 1, alpha: 1).setFill() 88 | context.fill(CGRect(x: 0, y: 0, width: 1, height: 1)) 89 | } 90 | 91 | confirmButton.setBackgroundImage(bgImage, for: .normal) 92 | confirmButton.layer.cornerRadius = 12 93 | confirmButton.clipsToBounds = true 94 | 95 | confirmButton.addTarget(self, action: #selector(confirm), for: .touchUpInside) 96 | confirmButton.adjustsImageWhenHighlighted = true 97 | confirmButton.adjustsImageWhenDisabled = true 98 | confirmButton.isEnabled = false 99 | 100 | // Configure the paper 101 | 102 | paper.backgroundColor = .white 103 | paper.layer.cornerRadius = 34 104 | paper.clipsToBounds = true 105 | paper.delegate = self 106 | 107 | // Configure the collection view 108 | 109 | collectionView.backgroundColor = .clear 110 | collectionView.register(ImageCollectionViewCell.self, forCellWithReuseIdentifier: kAugmentCell) 111 | collectionView.delegate = self 112 | collectionView.dataSource = self 113 | 114 | } 115 | 116 | /** 117 | * Configures the Auto Layout constraints.
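* The collection view of augmented images fills the container, while the drawing area and
* its confirm button float in the center above it.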
118 | */ 119 | 120 | public func configureConstraints() { 121 | 122 | view.addSubview(collectionView) 123 | collectionView.translatesAutoresizingMaskIntoConstraints = false 124 | collectionView.leadingAnchor.constraint(equalTo: containerLayoutGuide.leadingAnchor, constant: 44).isActive = true 125 | collectionView.trailingAnchor.constraint(equalTo: containerLayoutGuide.trailingAnchor, constant: -44).isActive = true 126 | collectionView.topAnchor.constraint(equalTo: containerLayoutGuide.topAnchor, constant: 44).isActive = true 127 | collectionView.bottomAnchor.constraint(equalTo: containerLayoutGuide.bottomAnchor, constant: -44).isActive = true 128 | 129 | view.addSubview(drawContainer) 130 | drawContainer.translatesAutoresizingMaskIntoConstraints = false 131 | drawContainer.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true 132 | drawContainer.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true 133 | 134 | drawContainer.addSubview(paper) 135 | paper.translatesAutoresizingMaskIntoConstraints = false 136 | paper.widthAnchor.constraint(equalToConstant: 250).isActive = true 137 | paper.heightAnchor.constraint(equalToConstant: 250).isActive = true 138 | paper.topAnchor.constraint(equalTo: drawContainer.topAnchor).isActive = true 139 | paper.leadingAnchor.constraint(equalTo: drawContainer.leadingAnchor).isActive = true 140 | 141 | drawContainer.trailingAnchor.constraint(equalTo: paper.trailingAnchor).isActive = true 142 | 143 | view.addSubview(clearButton) 144 | clearButton.translatesAutoresizingMaskIntoConstraints = false 145 | clearButton.leadingAnchor.constraint(equalTo: drawContainer.trailingAnchor, constant: 16).isActive = true 146 | clearButton.topAnchor.constraint(equalTo: drawContainer.topAnchor).isActive = true 147 | clearButton.widthAnchor.constraint(equalToConstant: 44).isActive = true 148 | clearButton.heightAnchor.constraint(equalToConstant: 44).isActive = true 149 | 150 | drawContainer.addSubview(confirmButton) 151 | confirmButton.translatesAutoresizingMaskIntoConstraints = false 152 | confirmButton.topAnchor.constraint(equalTo: paper.bottomAnchor, constant: 16).isActive = true 153 | confirmButton.leadingAnchor.constraint(equalTo: paper.leadingAnchor).isActive = true 154 | confirmButton.trailingAnchor.constraint(equalTo: paper.trailingAnchor).isActive = true 155 | confirmButton.heightAnchor.constraint(equalToConstant: 55).isActive = true 156 | drawContainer.bottomAnchor.constraint(equalTo: confirmButton.bottomAnchor).isActive = true 157 | 158 | } 159 | 160 | public override func viewDidLayoutSubviews() { 161 | super.viewDidLayoutSubviews() 162 | layout.invalidateLayout() 163 | } 164 | 165 | // MARK: - Drawing 166 | 167 | /** 168 | * Clears the canvas. 169 | */ 170 | 171 | @objc private func clearCanvas() { 172 | paper.clear() 173 | clearButton.isEnabled = false 174 | confirmButton.isEnabled = false 175 | } 176 | 177 | /** 178 | * Confirms the drawing and starts augmentation. 
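* * The drawing controls are faded out first; as the animation code below shows, the augmentation session only starts once the fade-out completes, so the grid of augmented images becomes the focus.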
179 | */ 180 | 181 | @objc private func confirm() { 182 | 183 | guard self.currentSession == nil else { 184 | return 185 | } 186 | 187 | let startAugmentation: (Bool) -> Void = { _ in 188 | 189 | let session = AugmentationSession(actions: self.filters) 190 | session.delegate = self 191 | 192 | let imageSize = CGSize(width: 250, height: 250) 193 | let initialImage = self.paper.exportImage(size: imageSize) 194 | let initialUIImage = UIImage(cgImage: initialImage) 195 | 196 | self.currentSession = session 197 | self.insertImage(initialUIImage) 198 | session.start(initialImage: initialImage) 199 | 200 | } 201 | 202 | view.enqueueChanges(animation: .property(0.35, .linear), changes: { 203 | self.drawContainer.alpha = 0 204 | self.drawContainer.isUserInteractionEnabled = false 205 | self.clearButton.alpha = 0 206 | self.clearButton.isUserInteractionEnabled = false 207 | }, completion: startAugmentation) 208 | 209 | } 210 | 211 | } 212 | 213 | extension AugmentViewController: AugmentationSessionDelegate { 214 | 215 | public func augmentationSession(_ session: AugmentationSession, didCreateImage image: UIImage) { 216 | guard currentSession == session else { 217 | return 218 | } 219 | 220 | insertImage(image) 221 | } 222 | 223 | public func augmentationSessionDidFinish(_ session: AugmentationSession) { 224 | guard currentSession == session else { 225 | return 226 | } 227 | 228 | currentSession = nil 229 | completionHandler?() 230 | } 231 | 232 | func insertImage(_ image: UIImage) { 233 | let collectionEndIndex = augmentedImages.endIndex 234 | augmentedImages.append(image) 235 | collectionView.insertItems(at: [IndexPath(row: collectionEndIndex, section: 0)]) 236 | } 237 | 238 | } 239 | 240 | extension AugmentViewController: PaperDelegate { 241 | 242 | public func paperDidStartDrawingStroke(_ paper: Paper) {} 243 | 244 | public func paperDidFinishDrawingStroke(_ paper: Paper) { 245 | clearButton.isEnabled = true 246 | confirmButton.isEnabled = true 247 | } 248 | 249 | } 250 | 251 | extension AugmentViewController: UICollectionViewDelegateFlowLayout, UICollectionViewDataSource { 252 | 253 | public func numberOfSections(in collectionView: UICollectionView) -> Int { 254 | return 1 255 | } 256 | 257 | public func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { 258 | return augmentedImages.count 259 | } 260 | 261 | public func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { 262 | 263 | let cell = collectionView.dequeueReusableCell(withReuseIdentifier: kAugmentCell, for: indexPath) as! ImageCollectionViewCell 264 | 265 | cell.configure(image: augmentedImages[indexPath.row]) 266 | return cell 267 | 268 | } 269 | 270 | public func collectionView(_ collectionView: UICollectionView, 271 | layout collectionViewLayout: UICollectionViewLayout, 272 | sizeForItemAt indexPath: IndexPath) -> CGSize { 273 | let containerSize = collectionView.frame.size.width 274 | let numberOfCellsPerLine: CGFloat = containerSize < 700 ? 
3 : 5 275 | let cellWidth = (containerSize / numberOfCellsPerLine) - (numberOfCellsPerLine * 3) 276 | return CGSize(width: cellWidth, height: cellWidth) 277 | } 278 | 279 | } 280 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page2.playgroundpage/Sources/ImageCollectionViewCell.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * A collection view cell that displays an image. 12 | */ 13 | 14 | class ImageCollectionViewCell: UICollectionViewCell { 15 | 16 | /// The image view that displays the contents of the cell. 17 | private let imageView = UIImageView() 18 | 19 | // MARK: - Initialization 20 | 21 | override init(frame: CGRect) { 22 | super.init(frame: frame) 23 | configureViews() 24 | } 25 | 26 | required init?(coder aDecoder: NSCoder) { 27 | super.init(coder: aDecoder) 28 | configureViews() 29 | } 30 | 31 | /** 32 | * Configures the subviews of the cell. 33 | */ 34 | 35 | private func configureViews() { 36 | addSubview(imageView) 37 | imageView.contentMode = .scaleAspectFill 38 | imageView.pinEdges(to: self) 39 | } 40 | 41 | // MARK: - Changing the Contents 42 | 43 | /** 44 | * Configures the cell to display the specified image. 45 | */ 46 | 47 | func configure(image: UIImage) { 48 | self.imageView.image = image 49 | } 50 | 51 | override func prepareForReuse() { 52 | super.prepareForReuse() 53 | imageView.image = nil 54 | } 55 | 56 | } 57 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page3.playgroundpage/Contents.swift: -------------------------------------------------------------------------------- 1 | /*: 2 | 3 | # Congratulations! 4 | 5 | You have now completed this book. 💯 6 | 7 | In this Playground, we have explored an interactive application of machine learning with the [emoji classifier](MLMOJI/Introducing%20MLMOJI), 8 | and learned more about building a data set from scratch by taking advantage of [data augmentation](MLMOJI/Building%20a%20Data%20Set). 9 | 10 | ## Where to Go Next? 11 | 12 | --- 13 | 14 | If you wish to learn more and build on the knowledge from this book, here are some related topics and questions worth exploring: 15 | 16 | - What is the structure of a neural network? What is an **activation function**? 17 | 18 | - What are **convolutions**? Why are they commonly used for image processing? 19 | 20 | - How can you train a neural network faster by reusing another trained model with **transfer learning**? 21 | 22 | - What is **overfitting**, and how can you prevent it? 23 | 24 | - What are the different machine learning tools? 25 | 26 | - Callout(Thank you for reading! 📖): I hope this quick introduction to machine learning and image 27 | classification interested you. 28 | 29 | */ 30 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page3.playgroundpage/LiveView.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import PlaygroundSupport 10 | 11 | let label = UILabel() 12 | label.text = "🎉" 13 | label.font = UIFont.systemFont(ofSize: 123) 14 | label.textAlignment = .center 15 | label.baselineAdjustment = .alignCenters 16 | label.numberOfLines = 1 17 | 18 | let pulseAnimation = CABasicAnimation(keyPath: "transform.scale") 19 | pulseAnimation.duration = 3.0 20 | pulseAnimation.toValue = NSNumber(value: 0.8) 21 | pulseAnimation.repeatCount = Float.infinity 22 | pulseAnimation.autoreverses = true 23 | 24 | label.layer.add(pulseAnimation, forKey: nil) 25 | PlaygroundPage.current.liveView = label 26 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Page3.playgroundpage/Manifest.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Name 6 | Where to Go Next? 7 | LiveViewEdgeToEdge 8 | 9 | LiveViewMode 10 | VisibleByDefault 11 | CodeCopySetup 12 | 13 | ReadyToCopyInstructions 14 | Let's start this challenge! Import the code from the previous page to start. 15 | NotReadyToCopyInstructions 16 | Code 420: You need to complete the previous challenge before starting this one. 17 | 18 | PlaygroundLoggingMode 19 | Off 20 | 21 | 22 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Manifest.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Name 6 | MLMOJI 7 | DevelopmentRegion 8 | en-US 9 | ContentIdentifier 10 | fr.alexaubry.wwdc18.mlmoji 11 | ContentVersion 12 | 1.0.0 13 | ImageReference 14 | Cover.png 15 | DeploymentTarget 16 | ios11.0 17 | SwiftVersion 18 | 4.0 19 | MinimumSwiftPlaygroundsVersion 20 | 1.6 21 | Version 22 | 3.0 23 | Chapters 24 | 25 | Chapter1.playgroundchapter 26 | 27 | 28 | 29 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/PrivateResources/Cover.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/PrivateResources/Cover.png -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/PrivateResources/EmojiSketches.mlmodelc/coremldata.bin: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/PrivateResources/EmojiSketches.mlmodelc/coremldata.bin -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/PrivateResources/EmojiSketches.mlmodelc/model.espresso.weights: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/PrivateResources/EmojiSketches.mlmodelc/model.espresso.weights -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/PrivateResources/EmojiSketches.mlmodelc/model/coremldata.bin: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/PrivateResources/EmojiSketches.mlmodelc/model/coremldata.bin -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/PrivateResources/Glossary.plist: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Terms 6 | 7 | classifier 8 | 9 | Title 10 | classifier 11 | Definition 12 | A classifier is a type of neural network that identifies the categories to which an item belongs. It is trained using items whose category is already known. Classifiers are not limited to images, and can also be used for text or audio processing. 13 | 14 | For example, it can be used to tag an e-mail as spam, or to tell if a picture of food is a hot dog or not. FirstUse 15 | 16 | PageReference 17 | MLMOJI/Building a Data Set 18 | Title 19 | Building a Data Set 20 | 21 | 22 | feature 23 | 24 | Title 25 | feature 26 | Definition 27 | A feature (or pattern) is an individual property of an item being observed. When training a deep neural network, we are in fact training the model to learn the distinctive features of each prediction class. 28 | 29 | In image recognition, features can include edges, colors and shapes. 30 | FirstUse 31 | 32 | PageReference 33 | MLMOJI/Building a Data Set 34 | Title 35 | Building a Data Set 36 | 37 | 38 | dnn 39 | 40 | Title 41 | deep neural network 42 | Definition 43 | A neural network is a set of mathematical operations organized as layers. The operations are applied to the input inside “hidden” layers. The last layer produces the output. 44 | 45 | A deep neural network is a neural network that contains multiple hidden layers. Therefore, it applies multiple transforms to the input before producing the output. It is used for complex problems, like image classification. 46 | FirstUse 47 | 48 | PageReference 49 | MLMOJI/Introducing MLMOJI 50 | Title 51 | Introducing MLMOJI 52 | 53 | 54 | 55 | 56 | 57 | 58 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/PrivateResources/KeyboardClear@2x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/MLMOJI.playgroundbook/Contents/PrivateResources/KeyboardClear@2x.png -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Augmentation/AugmentationFilter.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import CoreGraphics 9 | import CoreImage 10 | 11 | /** 12 | * The filters that can be performed in a data set augmentation session. 13 | */ 14 | 15 | public enum AugmentationFilter { 16 | 17 | /// Rotates the image by the specified angle, in degrees. 18 | case rotate(CGFloat) 19 | 20 | /// Translates the image by the given horizontal and vertical offsets. 21 | case translate(CGFloat, CGFloat) 22 | 23 | /// Scales the image by the specified percentage. 24 | case scale(CGFloat) 25 | 26 | /// Blurs the image.
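/// Note (inferred from the session implementation): this applies a fixed-radius `CIGaussianBlur` with an input radius of 5.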
27 | case blur 28 | 29 | } 30 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Augmentation/AugmentationSession.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * The delegate of an image augmentation session. 12 | */ 13 | 14 | public protocol AugmentationSessionDelegate: class { 15 | 16 | /** 17 | * Called when the augmentation session adds an image. 18 | */ 19 | 20 | func augmentationSession(_ session: AugmentationSession, didCreateImage image: UIImage) 21 | 22 | /** 23 | * Called when the augmentation session finishes. 24 | */ 25 | 26 | func augmentationSessionDidFinish(_ session: AugmentationSession) 27 | 28 | } 29 | 30 | /** 31 | * An object that augments an image. 32 | */ 33 | 34 | public class AugmentationSession: NSObject { 35 | 36 | /// The filters to apply. 37 | public let actions: [AugmentationFilter] 38 | 39 | /// The size of images. 40 | public let imageSize = CGSize(width: 250, height: 250) 41 | 42 | /// The session delegate. 43 | public weak var delegate: AugmentationSessionDelegate? 44 | 45 | /// The background queue where the augmentation work will be executed. 46 | private let workQueue = DispatchQueue(label: "AugmentationSession.Work") 47 | 48 | // MARK: - Initialization 49 | 50 | public init(actions: [AugmentationFilter]) { 51 | self.actions = actions 52 | } 53 | 54 | // MARK: - Filters 55 | 56 | /** 57 | * Starts a session with a delegate and an initial image. 58 | * 59 | * The filters will be applied cumulatively to the initial image. 60 | */ 61 | 62 | public func start(initialImage: CGImage) { 63 | workQueue.async { 64 | self.applyFilters(start: initialImage) 65 | } 66 | } 67 | 68 | /** 69 | * Applies the filters to the image, one after the other. 70 | */ 71 | 72 | private func applyFilters(start: CGImage) { 73 | 74 | var current = start 75 | 76 | for action in actions { 77 | 78 | let imageBounds = CGRect(origin: .zero, size: self.imageSize) 79 | var outputImage: CGImage 80 | var needsFlip: Bool = false 81 | 82 | switch action { 83 | case .blur: 84 | let ciImage = CIImage(cgImage: current) 85 | 86 | let filter = CIFilter(name: "CIGaussianBlur")! 87 | filter.setValue(5, forKey: "inputRadius") 88 | filter.setValue(ciImage, forKey: kCIInputImageKey) 89 | let filteredImage = filter.outputImage! 90 | 91 | let context = CIContext(options: [kCIContextUseSoftwareRenderer: true]) 92 | outputImage = context.createCGImage(filteredImage, from: ciImage.extent)!
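// Note: Core Image renders in a coordinate space that is flipped relative to UIKit, so the result is marked to be flipped when it is composited below.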
93 | needsFlip = true 94 | 95 | default: 96 | var transform = CGAffineTransform.identity 97 | switch action { 98 | case .rotate(let degreeAngle): 99 | transform = transform 100 | .translatedBy(x: imageBounds.size.width / 2, y: imageBounds.size.height / 2) 101 | .rotated(by: -degreeAngle * .pi / 180) 102 | .translatedBy(x: -imageBounds.size.width / 2, y: -imageBounds.size.height / 2) 103 | 104 | case .scale(let scale): 105 | transform = transform 106 | .translatedBy(x: imageBounds.size.width / 2, y: imageBounds.size.height / 2) 107 | .scaledBy(x: scale / 100, y: scale / 100) 108 | .translatedBy(x: -imageBounds.size.width / 2, y: -imageBounds.size.height / 2) 109 | 110 | case .translate(let x, let y): 111 | transform = transform.translatedBy(x: x, y: -y) 112 | 113 | case .blur: 114 | preconditionFailure("Not reachable") 115 | 116 | } 117 | 118 | let transformedImage = UIGraphicsImageRenderer(size: imageBounds.size).image { context in 119 | context.cgContext.concatenate(transform) 120 | context.cgContext.draw(current, in: imageBounds) 121 | } 122 | 123 | outputImage = transformedImage.cgImage! 124 | } 125 | 126 | let outputImageSize = CGSize(width: outputImage.width, height: outputImage.height) 127 | let scaledRect = scaleRectangle(ofSize: outputImageSize, in: imageBounds) 128 | 129 | let renderedImage = UIGraphicsImageRenderer(size: imageSize).image { context in 130 | UIColor.white.setFill() 131 | context.fill(imageBounds) 132 | context.cgContext.translateBy(x: 0, y: needsFlip ? imageBounds.size.height : 0) 133 | context.cgContext.scaleBy(x: 1, y: needsFlip ? -1 : 1) 134 | let outputImageRect = scaledRect 135 | context.cgContext.draw(outputImage, in: outputImageRect) 136 | } 137 | 138 | current = renderedImage.cgImage! 139 | 140 | DispatchQueue.main.async { 141 | self.delegate?.augmentationSession(self, didCreateImage: renderedImage) 142 | } 143 | 144 | } 145 | 146 | DispatchQueue.main.async { 147 | self.delegate?.augmentationSessionDidFinish(self) 148 | } 149 | 150 | } 151 | 152 | func scaleRectangle(ofSize size: CGSize, in rect: CGRect) -> CGRect { 153 | 154 | let imageWidth = size.width 155 | let imageHeight = size.height 156 | let containerWidth = rect.size.width 157 | let containerHeight = rect.size.height 158 | 159 | // If the image is the same size as the container, return the container unscaled 160 | if (imageWidth == containerWidth) && (imageHeight == containerHeight) { 161 | return rect 162 | } 163 | 164 | // Downscale the image to fit inside the container if needed 165 | 166 | let scale: CGFloat 167 | let scaleX = containerWidth / imageWidth 168 | let scaleY = containerHeight / imageHeight 169 | 170 | if (imageWidth > containerWidth) || (imageHeight > containerHeight) { 171 | scale = min(scaleX, scaleY) 172 | } else { 173 | scale = 1 174 | } 175 | 176 | let adaptedWidth = imageWidth * scale 177 | let adaptedHeight = imageHeight * scale 178 | 179 | // Center the image in the parent container 180 | 181 | var adaptedRect = CGRect(origin: .zero, size: CGSize(width: adaptedWidth, height: adaptedHeight)) 182 | adaptedRect.origin.x = rect.origin.x + ((containerWidth - adaptedWidth) / 2) 183 | adaptedRect.origin.y = rect.origin.y + ((containerHeight - adaptedHeight) / 2) 184 | 185 | return adaptedRect 186 | 187 | } 188 | 189 | } 190 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Paper/Exporters/BitmapPaperExporter.swift:
-------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import MobileCoreServices 10 | 11 | /** 12 | * Exports papers to image bitmaps. 13 | */ 14 | 15 | public class BitmapPaperExporter: PaperExporter { 16 | 17 | /// The UTI of the image file format to use. 18 | public let fileType: CFString 19 | 20 | /** 21 | * Creates a bitmap exporter. 22 | * 23 | * - parameter size: The size of the image to export. 24 | * - parameter scale: The factor by which to multiply the image size. 25 | * - parameter fileType: The UTI of the image file format to use (ex: `kUTTypeJPEG`). Must 26 | * conform to `public.image`. 27 | */ 28 | 29 | public init(size: CGSize, scale: CGFloat, fileType: CFString) { 30 | self.size = size 31 | self.fileType = fileType 32 | self.scale = scale 33 | } 34 | 35 | // MARK: - PaperExporter 36 | 37 | public let size: CGSize 38 | public let scale: CGFloat 39 | 40 | public func exportPaper(contents: CGImage) throws -> Data { 41 | 42 | let outputData = NSMutableData() 43 | 44 | guard UTTypeConformsTo(fileType, kUTTypeImage) else { 45 | throw NSError(domain: "FilePaperExporterDomain", code: 2001, userInfo: [ 46 | NSLocalizedDescriptionKey: "The selected type '\(fileType)' is not an image type." 47 | ]) 48 | } 49 | 50 | guard let destination = CGImageDestinationCreateWithData(outputData as CFMutableData, fileType, 1, nil) else { 51 | throw NSError(domain: "FilePaperExporterDomain", code: 2002, userInfo: [ 52 | NSLocalizedDescriptionKey: "Could not create the destination for the image." 53 | ]) 54 | } 55 | 56 | CGImageDestinationAddImage(destination, contents, nil) 57 | 58 | if !CGImageDestinationFinalize(destination) { 59 | throw NSError(domain: "FilePaperExporterDomain", code: 2003, userInfo: [ 60 | NSLocalizedDescriptionKey: "Could not save image as a file." 61 | ]) 62 | } 63 | 64 | return outputData as Data 65 | 66 | } 67 | 68 | } 69 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Paper/Exporters/CVPixelBufferExporter.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import CoreVideo 10 | 11 | /** 12 | * Exports images to a CVPixelBuffer. 13 | */ 14 | 15 | public class CVPixelBufferExporter: PaperExporter { 16 | 17 | public init(size: CGSize, scale: CGFloat) { 18 | self.size = size 19 | self.scale = scale 20 | } 21 | 22 | public let size: CGSize 23 | public let scale: CGFloat 24 | 25 | public func exportPaper(contents: CGImage) throws -> CVPixelBuffer { 26 | 27 | // Create an empty image buffer that fits the size of the image 28 | 29 | let imageWidth = Int(size.width * scale) 30 | let imageHeight = Int(size.height * scale) 31 | 32 | var buffer: CVPixelBuffer?
= nil 33 | let status = CVPixelBufferCreate(kCFAllocatorDefault, imageWidth, imageHeight, kCVPixelFormatType_32BGRA, nil, &buffer) 34 | 35 | guard status == kCVReturnSuccess else { 36 | throw NSError(domain: "CVPixelBufferExporter", code: 2001, userInfo: [ 37 | NSLocalizedDescriptionKey: "Could not create the pixel buffer for the image (code \(status))." 38 | ]) 39 | } 40 | 41 | guard let pixelBuffer = buffer else { 42 | throw NSError(domain: "CVPixelBufferExporter", code: 2002, userInfo: [ 43 | NSLocalizedDescriptionKey: "Could not create the pixel buffer for the image." 44 | ]) 45 | } 46 | 47 | // Draw the image into the buffer's context 48 | 49 | CVPixelBufferLockBaseAddress(pixelBuffer, []) 50 | 51 | let data = CVPixelBufferGetBaseAddress(pixelBuffer) 52 | let colorSpace = CGColorSpaceCreateDeviceRGB() 53 | let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) 54 | let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer) 55 | 56 | let ctx = CGContext(data: data, width: imageWidth, height: imageHeight, bitsPerComponent: 8, bytesPerRow: bytesPerRow, 57 | space: colorSpace, bitmapInfo: bitmapInfo.rawValue) 58 | 59 | guard let pixelBufferContext = ctx else { 60 | throw NSError(domain: "CVPixelBufferExporter", code: 2003, userInfo: [ 61 | NSLocalizedDescriptionKey: "Could not create the context to draw the image." 62 | ]) 63 | } 64 | 65 | pixelBufferContext.draw(contents, in: CGRect(origin: .zero, size: size)) 66 | 67 | // Return the filled buffer 68 | CVPixelBufferUnlockBaseAddress(pixelBuffer, []) 69 | return pixelBuffer 70 | 71 | } 72 | 73 | } 74 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Paper/Exporters/PaperExporter.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * You implement this protocol to provide a way to export the sketches in a `Paper` instance. 12 | */ 13 | 14 | public protocol PaperExporter { 15 | 16 | /** 17 | * The type of output values produced by the exporter. Can be any type. 18 | */ 19 | 20 | associatedtype Output 21 | 22 | /// The size of the exported image, in points. 23 | var size: CGSize { get } 24 | 25 | /// The display scale to use to export the image. 26 | var scale: CGFloat { get } 27 | 28 | /** 29 | * Processes the contents of the paper, provided as a `CGImage`, and converts it to the 30 | * output type your exporter provides. 31 | */ 32 | 33 | func exportPaper(contents: CGImage) throws -> Output 34 | 35 | } 36 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Paper/Paper.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * An object that receives information about the paper. 12 | */ 13 | 14 | public protocol PaperDelegate: class { 15 | 16 | /** 17 | * Called when the paper starts drawing a stroke.
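* * This may be called repeatedly while a stroke is being drawn, once per batch of touches.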
18 | */ 19 | 20 | func paperDidStartDrawingStroke(_ paper: Paper) 21 | 22 | /** 23 | * Called when the paper did finish drawing and flattening a stroke. 24 | */ 25 | 26 | func paperDidFinishDrawingStroke(_ paper: Paper) 27 | 28 | } 29 | 30 | /** 31 | * A view that renders strokes based on touches. 32 | */ 33 | 34 | public class Paper: UIView { 35 | 36 | /// The object that receives updates about the paper. 37 | public weak var delegate: PaperDelegate? = nil 38 | 39 | // MARK: - Strokes 40 | 41 | /// The currently active stroke. 42 | private var currentStroke: [CGPoint]? 43 | 44 | /// The image containing the previously drawn strokes. 45 | private var strokesBuffer: UIImage? 46 | 47 | /// The color of the strokes. 48 | private let strokeColor: UIColor = .black 49 | 50 | /// The size of the brush. 51 | private let brushSize: CGFloat = 5 52 | 53 | 54 | // MARK: - Initialization 55 | 56 | public override init(frame: CGRect) { 57 | super.init(frame: frame) 58 | // Redraw the contents of the paper when the screen changes. 59 | self.contentMode = .redraw 60 | } 61 | 62 | public required init?(coder aDecoder: NSCoder) { 63 | super.init(coder: aDecoder) 64 | // Redraw the contents of the paper when the screen changes. 65 | self.contentMode = .redraw 66 | } 67 | 68 | } 69 | 70 | // MARK: - Drawing 71 | 72 | extension Paper { 73 | 74 | /** 75 | * Draws the contents of the paper on the screen. 76 | */ 77 | 78 | public override func draw(_ rect: CGRect) { 79 | guard let context = UIGraphicsGetCurrentContext() else { return } 80 | 81 | if let buffer = strokesBuffer { 82 | 83 | // Scale the image if needed when we draw it for the first time 84 | if rect == bounds { 85 | let container = scaleRectangle(ofSize: buffer.size, in: rect) 86 | buffer.draw(in: container) 87 | } else { 88 | buffer.draw(at: .zero) 89 | } 90 | 91 | } 92 | 93 | // Draw the current unfinished stroke 94 | if let stroke = currentStroke { 95 | draw(stroke, in: context) 96 | } 97 | 98 | } 99 | 100 | /** 101 | * Draws a set of points. 102 | */ 103 | 104 | private func draw(_ stroke: [CGPoint], in context: CGContext) { 105 | 106 | guard let firstPoint = stroke.first else { 107 | return 108 | } 109 | 110 | context.setFillColor(strokeColor.cgColor) 111 | context.setStrokeColor(strokeColor.cgColor) 112 | 113 | // If there is only one point, draw a dot. 114 | if stroke.count == 1 { 115 | context.fillEllipse(in: boundingRect(around: firstPoint)) 116 | return 117 | } 118 | 119 | // If there are multiple points, join them with a set of strokes 120 | let path = UIBezierPath() 121 | path.lineCapStyle = .round 122 | path.lineWidth = brushSize 123 | 124 | path.move(to: stroke.first!) 125 | 126 | for point in stroke.dropFirst() { 127 | path.addLine(to: point) 128 | } 129 | 130 | path.stroke() 131 | 132 | } 133 | 134 | /** 135 | * Draws the current stroke on top of the buffer image, and updates the buffer image to 136 | * include the new stroke. 
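* * Flattening keeps redrawing cheap: `draw(_:)` only has to composite a single buffer image plus the in-progress stroke, instead of replaying every past stroke.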
137 | */ 138 | 139 | private func flatten(in renderer: UIGraphicsImageRenderer, bounds: CGRect) -> UIImage { 140 | 141 | // Draw the image and the current stroke in the context of the screen 142 | 143 | return renderer.image { context in 144 | 145 | UIColor.white.setFill() 146 | context.fill(bounds) 147 | 148 | if let bufferImage = strokesBuffer { 149 | let container = scaleRectangle(ofSize: bufferImage.size, in: bounds) 150 | bufferImage.draw(in: container) 151 | } 152 | 153 | if let stroke = currentStroke { 154 | draw(stroke, in: context.cgContext) 155 | } 156 | 157 | } 158 | 159 | } 160 | 161 | } 162 | 163 | // MARK: - Calculations 164 | 165 | private extension Paper { 166 | 167 | /** 168 | * Calculates a point between the previous point and the new one to achieve a smoother line. 169 | */ 170 | 171 | func continuousPoint(at location: CGPoint) -> CGPoint { 172 | let factor: CGFloat = 0.35 173 | let previous = currentStroke!.last! 174 | return CGPoint(x: previous.x * (1 - factor) + location.x * factor, 175 | y: previous.y * (1 - factor) + location.y * factor) 176 | } 177 | 178 | /** 179 | * Returns the bounding rectangle around the specified point. 180 | * 181 | * It is a rectangle where the point is the center, and whose size is the size of the brush. 182 | */ 183 | 184 | func boundingRect(around point: CGPoint) -> CGRect { 185 | let inset = brushSize / 2 186 | return CGRect(x: point.x - inset, y: point.y - inset, width: brushSize, height: brushSize) 187 | } 188 | 189 | /** 190 | * Returns the bounding rectangle around the last `n` points. 191 | */ 192 | 193 | func boundingRect(aroundLast n: Int) -> CGRect { 194 | guard let lastPoints = currentStroke?.suffix(n) else { 195 | return .zero 196 | } 197 | 198 | var minX = Double.infinity, minY = Double.infinity, maxX = -Double.infinity, maxY = -Double.infinity // start from infinite extremes so the first point defines the initial bounds 199 | 200 | for point in lastPoints { 201 | minX = min(Double(point.x), minX) 202 | minY = min(Double(point.y), minY) 203 | maxX = max(Double(point.x), maxX) 204 | maxY = max(Double(point.y), maxY) 205 | } 206 | 207 | let margins = Double(brushSize) * 2 208 | 209 | return CGRect(x: minX, y: minY, width: maxX - minX + margins, height: maxY - minY + margins) 210 | .insetBy(dx: -brushSize, dy: -brushSize) 211 | } 212 | 213 | /** 214 | * Downscales the image for the container if needed.
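* * The aspect ratio is preserved and the result is centered in the container; images smaller than the container are left at their original size.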
215 | */ 216 | 217 | func scaleRectangle(ofSize size: CGSize, in rect: CGRect) -> CGRect { 218 | 219 | let imageWidth = size.width 220 | let imageHeight = size.height 221 | let containerWidth = rect.size.width 222 | let containerHeight = rect.size.height 223 | 224 | // If the image is the same size as the container, return the container unscaled 225 | if (imageWidth == containerWidth) && (imageHeight == containerHeight) { 226 | return rect 227 | } 228 | 229 | // Downscale the image to fit inside the container if needed 230 | 231 | let scale: CGFloat 232 | let scaleX = containerWidth / imageWidth 233 | let scaleY = containerHeight / imageHeight 234 | 235 | if (imageWidth > containerWidth) || (imageHeight > containerHeight) { 236 | scale = min(scaleX, scaleY) 237 | } else { 238 | scale = 1 239 | } 240 | 241 | let adaptedWidth = imageWidth * scale 242 | let adaptedHeight = imageHeight * scale 243 | 244 | // Center the image in the parent container 245 | 246 | var adaptedRect = CGRect(origin: .zero, size: CGSize(width: adaptedWidth, height: adaptedHeight)) 247 | adaptedRect.origin.x = rect.origin.x + ((containerWidth - adaptedWidth) / 2) 248 | adaptedRect.origin.y = rect.origin.y + ((containerHeight - adaptedHeight) / 2) 249 | 250 | return adaptedRect 251 | 252 | } 253 | 254 | } 255 | 256 | // MARK: - Touch Handling 257 | 258 | extension Paper { 259 | 260 | public override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { 261 | super.touchesBegan(touches, with: event) 262 | 263 | guard let location = touches.first?.location(in: self) else { 264 | return 265 | } 266 | 267 | self.currentStroke = [location] 268 | 269 | // Draw the initial point. 270 | let affectedRect = boundingRect(around: location) 271 | setNeedsDisplay(affectedRect) 272 | } 273 | 274 | public override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) { 275 | super.touchesMoved(touches, with: event) 276 | 277 | guard let firstTouch = touches.first else { 278 | return 279 | } 280 | 281 | // Handle touch aggregation 282 | 283 | guard let coalescedTouches = event?.coalescedTouches(for: firstTouch) else { 284 | return 285 | } 286 | 287 | for touch in coalescedTouches { 288 | let location = touch.location(in: self) 289 | let point = continuousPoint(at: location) 290 | currentStroke?.append(point) 291 | } 292 | 293 | delegate?.paperDidStartDrawingStroke(self) 294 | 295 | // Redraw the points that affect the curve 296 | let affectedRect = boundingRect(aroundLast: coalescedTouches.count + 2) 297 | setNeedsDisplay(affectedRect) 298 | 299 | } 300 | 301 | public override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) { 302 | super.touchesEnded(touches, with: event) 303 | 304 | // Flatten the stroke so that it can be scaled as appropriate if the screen size changes 305 | let renderer = UIGraphicsImageRenderer(bounds: bounds) 306 | strokesBuffer = flatten(in: renderer, bounds: bounds) 307 | 308 | // Redraw with the buffer 309 | currentStroke = nil 310 | delegate?.paperDidFinishDrawingStroke(self) 311 | setNeedsDisplay() 312 | } 313 | 314 | public override var canBecomeFirstResponder: Bool { 315 | return true 316 | } 317 | 318 | } 319 | 320 | // MARK: - Interacting with the contents 321 | 322 | extension Paper { 323 | 324 | /** 325 | * Clears the contents of the paper. 326 | */ 327 | 328 | public func clear() { 329 | strokesBuffer = nil 330 | setNeedsDisplay() 331 | } 332 | 333 | /** 334 | * Exports the contents of the paper as an image. 335 | * 336 | * - parameter size: The size of the exported image.
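* * A minimal usage sketch (hypothetical call site; the 224x224 size matches the classifier input used by `PredictionSession`): * ``` * let cgImage = paper.exportImage(size: CGSize(width: 224, height: 224)) * ```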
337 | */ 338 | 339 | public func exportImage(size: CGSize) -> CGImage { 340 | 341 | let bounds = CGRect(origin: .zero, size: size) 342 | 343 | let format = UIGraphicsImageRendererFormat(for: traitCollection) 344 | format.opaque = true 345 | format.prefersExtendedRange = false 346 | format.scale = 1 347 | 348 | let renderer = UIGraphicsImageRenderer(bounds: bounds, format: format) 349 | let finalImage = flatten(in: renderer, bounds: bounds) 350 | 351 | return finalImage.cgImage! 352 | 353 | } 354 | 355 | /** 356 | * Exports the contents of the canvas using the specified exporter. 357 | * 358 | * The strokes will be drawn on a white opaque background. The image will be scaled to 359 | * fit inside the exporter's container bounds, using its `size` property. 360 | */ 361 | 362 | public func export<T: PaperExporter>(with exporter: T) throws -> T.Output { 363 | let exportedImage = exportImage(size: exporter.size) 364 | return try exporter.exportPaper(contents: exportedImage) 365 | } 366 | 367 | } 368 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Prediction/Classes.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * The classes that the image classifier recognizes. 12 | */ 13 | 14 | public enum Class: String { 15 | 16 | case laugh 17 | case smile 18 | case heart 19 | case checkmark 20 | case croissant 21 | case sun 22 | case cloud 23 | 24 | /// An array containing all the class labels. 25 | public static var allLabels: [Class] { 26 | return [.smile, .laugh, .sun, .checkmark, .croissant, .heart, .cloud] 27 | } 28 | 29 | /// The emoji representation of this label. 30 | public var emojiValue: String { 31 | switch self { 32 | case .laugh: return "😂" 33 | case .smile: return "😊" 34 | case .heart: return "❤️" 35 | case .checkmark: return "✔️" 36 | case .croissant: return "🥐" 37 | case .sun: return "☀️" 38 | case .cloud: return "☁️" 39 | } 40 | } 41 | 42 | /// The color to show for prediction bars. 43 | public var highlightColor: UIColor { 44 | switch self { 45 | case .laugh: return #colorLiteral(red: 0.8274509804, green: 0.3333333333, blue: 0.3607843137, alpha: 1) 46 | case .smile: return #colorLiteral(red: 0.9568627451, green: 0.5529411765, blue: 0.2274509804, alpha: 1) 47 | case .heart: return #colorLiteral(red: 0.9921568627, green: 0.7803921569, blue: 0.3254901961, alpha: 1) 48 | case .checkmark: return #colorLiteral(red: 0.4392156863, green: 0.737254902, blue: 0.3254901961, alpha: 1) 49 | case .croissant: return #colorLiteral(red: 0.1411764706, green: 0.6117647059, blue: 0.8352941176, alpha: 1) 50 | case .sun: return #colorLiteral(red: 0.2196078431, green: 0.08235294118, blue: 0.5725490196, alpha: 1) 51 | case .cloud: return #colorLiteral(red: 0.5058823529, green: 0.1215686275, blue: 0.537254902, alpha: 1) 52 | } 53 | } 54 | 55 | } 56 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Prediction/EmojiPrediction.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry.
Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | 10 | /** 11 | * The result of emoji classification. 12 | */ 13 | 14 | public class EmojiPrediction { 15 | 16 | let output: EmojiSketchesOutput 17 | 18 | init(output: EmojiSketchesOutput) { 19 | self.output = output 20 | } 21 | 22 | // MARK: - Wrappers 23 | 24 | /// The predicted class. 25 | public var classLabel: String { 26 | return output.classLabel 27 | } 28 | 29 | /// The predicted probability for each class in the `Class` enum. 30 | public var predictions: [String: Double] { 31 | return output.final_result__0 32 | } 33 | 34 | } 35 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Prediction/EmojiSketches.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | // This file was automatically generated and should not be edited. 8 | // 9 | 10 | import CoreML 11 | 12 | /// Model Prediction Input Type 13 | @available(macOS 10.13, iOS 11.0, tvOS 11.0, watchOS 4.0, *) 14 | class EmojiSketchesInput : MLFeatureProvider { 15 | 16 | /// input__0 as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high 17 | var input__0: CVPixelBuffer 18 | 19 | var featureNames: Set<String> { 20 | get { 21 | return ["input__0"] 22 | } 23 | } 24 | 25 | func featureValue(for featureName: String) -> MLFeatureValue? { 26 | if (featureName == "input__0") { 27 | return MLFeatureValue(pixelBuffer: input__0) 28 | } 29 | return nil 30 | } 31 | 32 | init(input__0: CVPixelBuffer) { 33 | self.input__0 = input__0 34 | } 35 | } 36 | 37 | /// Model Prediction Output Type 38 | @available(macOS 10.13, iOS 11.0, tvOS 11.0, watchOS 4.0, *) 39 | public class EmojiSketchesOutput : MLFeatureProvider { 40 | 41 | /// final_result__0 as dictionary of strings to doubles 42 | public let final_result__0: [String : Double] 43 | 44 | /// classLabel as string value 45 | public let classLabel: String 46 | 47 | public var featureNames: Set<String> { 48 | get { 49 | return ["final_result__0", "classLabel"] 50 | } 51 | } 52 | 53 | public func featureValue(for featureName: String) -> MLFeatureValue? { 54 | if (featureName == "final_result__0") { 55 | return try! MLFeatureValue(dictionary: final_result__0 as [NSObject : NSNumber]) 56 | } 57 | if (featureName == "classLabel") { 58 | return MLFeatureValue(string: classLabel) 59 | } 60 | return nil 61 | } 62 | 63 | public init(final_result__0: [String : Double], classLabel: String) { 64 | self.final_result__0 = final_result__0 65 | self.classLabel = classLabel 66 | } 67 | } 68 | 69 | /// Class for model loading and prediction 70 | @available(macOS 10.13, iOS 11.0, tvOS 11.0, watchOS 4.0, *) 71 | class EmojiSketches { 72 | var model: MLModel 73 | 74 | /** 75 | Construct a model with explicit path to mlmodel file 76 | - parameters: 77 | - url: the file url of the model 78 | - throws: an NSError object that describes the problem 79 | */ 80 | init(contentsOf url: URL) throws { 81 | self.model = try MLModel(contentsOf: url) 82 | } 83 | 84 | /// Construct a model that automatically loads the model from the app's bundle 85 | convenience init() { 86 | let bundle = Bundle(for: EmojiSketches.self) 87 | let assetPath = bundle.url(forResource: "EmojiSketches", withExtension:"mlmodelc") 88 | try!
self.init(contentsOf: assetPath!) 89 | } 90 | 91 | /** 92 | Make a prediction using the structured interface 93 | - parameters: 94 | - input: the input to the prediction as EmojiSketchesInput 95 | - throws: an NSError object that describes the problem 96 | - returns: the result of the prediction as EmojiSketchesOutput 97 | */ 98 | func prediction(input: EmojiSketchesInput) throws -> EmojiSketchesOutput { 99 | let outFeatures = try model.prediction(from: input) 100 | let result = EmojiSketchesOutput(final_result__0: outFeatures.featureValue(for: "final_result__0")!.dictionaryValue as! [String : Double], classLabel: outFeatures.featureValue(for: "classLabel")!.stringValue) 101 | return result 102 | } 103 | 104 | /** 105 | Make a prediction using the convenience interface 106 | - parameters: 107 | - input__0 as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high 108 | - throws: an NSError object that describes the problem 109 | - returns: the result of the prediction as EmojiSketchesOutput 110 | */ 111 | func prediction(input__0: CVPixelBuffer) throws -> EmojiSketchesOutput { 112 | let input_ = EmojiSketchesInput(input__0: input__0) 113 | return try self.prediction(input: input_) 114 | } 115 | } 116 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Prediction/PredictionSession.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | import CoreVideo 10 | 11 | /** 12 | * Provides Core ML predictions for the EmojiSketches model. 13 | */ 14 | 15 | public class PredictionSession { 16 | 17 | /// The delegate to notify with updates. 18 | public weak var delegate: PredictionSessionDelegate? = nil 19 | 20 | /// The size of input images. 21 | public let inputSize = CGSize(width: 224, height: 224) 22 | 23 | /// The emoji prediction model. 24 | private let model: EmojiSketches 25 | 26 | /// The dispatch queue where prediction work will be executed. 27 | private let workQueue = DispatchQueue(label: "PredictionManager", qos: .userInitiated) 28 | 29 | /** 30 | * Creates a new session. 31 | */ 32 | 33 | public init() { 34 | let url = Bundle.main.url(forResource: "EmojiSketches", withExtension: "mlmodelc")! 35 | model = try! EmojiSketches(contentsOf: url) 36 | } 37 | 38 | // MARK: - Getting Predictions 39 | 40 | /** 41 | * Requests a prediction for the specified image buffer. The results will be provided 42 | * to the session's delegate when available.
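* * A minimal usage sketch (assuming `self` conforms to `PredictionSessionDelegate` and `pixelBuffer` is a 224x224 BGRA `CVPixelBuffer`, e.g. produced by `CVPixelBufferExporter`): * ``` * let session = PredictionSession() * session.delegate = self * session.requestPrediction(for: pixelBuffer) * ```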
43 | */ 44 | 45 | public func requestPrediction(for buffer: CVPixelBuffer) { 46 | 47 | let predictionWorkItem = DispatchWorkItem { 48 | 49 | do { 50 | let output = try self.model.prediction(input__0: buffer) 51 | let prediction = EmojiPrediction(output: output) 52 | DispatchQueue.main.async { 53 | self.delegate?.predictionSession(self, didUpdatePrediction: prediction) 54 | } 55 | } catch { 56 | DispatchQueue.main.async { 57 | self.delegate?.predictionSession(self, didFailToProvidePredictionWith: error as NSError) 58 | } 59 | } 60 | 61 | } 62 | 63 | workQueue.async(execute: predictionWorkItem) 64 | 65 | } 66 | 67 | } 68 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Prediction/PredictionSessionDelegate.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | 10 | /** 11 | * An object that receives notifications from a prediction session. 12 | */ 13 | 14 | public protocol PredictionSessionDelegate: class { 15 | 16 | /** 17 | * The session has finished predicting the emoji for the current buffer. 18 | * 19 | * Always called on the main thread. 20 | */ 21 | 22 | func predictionSession(_ session: PredictionSession, didUpdatePrediction prediction: EmojiPrediction) 23 | 24 | /** 25 | * The session failed to provide a prediction for the current buffer, and returned an error. 26 | * 27 | * Always called on the main thread. 28 | */ 29 | 30 | func predictionSession(_ session: PredictionSession, didFailToProvidePredictionWith error: NSError) 31 | 32 | } 33 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Utilities/Animation.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | /** 11 | * The types of animation that can be queued. 12 | */ 13 | 14 | public enum AnimationType { 15 | 16 | /// A UIView transition of the specified duration and options. 17 | case transition(Double, UIViewAnimationOptions) 18 | 19 | /// A UIView property animation of the specified duration and curve. 20 | case property(Double, UIViewAnimationCurve) 21 | 22 | /// A change that is not animated. 23 | case notAnimated 24 | 25 | } 26 | 27 | /** 28 | * Describes a view change. 29 | */ 30 | 31 | public class Animation { 32 | 33 | /// The changes to send to the view. 34 | let changes: () -> Void 35 | 36 | /// The type of animation to perform. 37 | let type: AnimationType 38 | 39 | public init(type: AnimationType, changes: @escaping () -> Void) { 40 | self.changes = changes 41 | self.type = type 42 | } 43 | 44 | } 45 | 46 | // MARK: - Enqueuing Changes 47 | 48 | extension UIView { 49 | 50 | /** 51 | * Enqueues an animation. 52 | */ 53 | 54 | public func enqueueChanges(animation: AnimationType, changes: @escaping () -> Void, completion: ((Bool) -> Void)?
= nil) { 55 | 56 | setNeedsLayout() 57 | 58 | let layoutCompletion: (Bool) -> Void = { 59 | self.setNeedsLayout() 60 | completion?($0) 61 | } 62 | 63 | switch animation { 64 | case .notAnimated: 65 | changes() 66 | layoutCompletion(true) 67 | 68 | case let .property(duration, curve): 69 | let animator = UIViewPropertyAnimator(duration: duration, curve: curve, animations: changes) 70 | animator.addCompletion { layoutCompletion($0 == .end) } 71 | animator.startAnimation() 72 | 73 | case let .transition(duration, options): 74 | UIView.transition(with: self, duration: duration, options: options, animations: changes, completion: layoutCompletion) 75 | } 76 | 77 | } 78 | 79 | /** 80 | * Performs multiple animations one after the other. 81 | */ 82 | 83 | public func enqueueChanges(_ animations: [Animation]) { 84 | 85 | guard let currentAnimation = animations.first else { 86 | return 87 | } 88 | 89 | let nextIndex = animations.index(after: animations.startIndex) 90 | 91 | enqueueChanges(animation: currentAnimation.type, changes: currentAnimation.changes) { finished in 92 | 93 | let next = Array(animations.suffix(from: nextIndex)) 94 | 95 | guard finished else { 96 | next.forEach { $0.changes() } 97 | return 98 | } 99 | 100 | self.enqueueChanges(next) 101 | 102 | } 103 | 104 | } 105 | 106 | } 107 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Utilities/AutoLayout+Pin.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | extension UIView { 11 | 12 | /** 13 | * Pins the edges of the receiver to those of the specified view. 14 | */ 15 | 16 | public func pinEdges(to container: UIView) { 17 | self.translatesAutoresizingMaskIntoConstraints = false 18 | self.leadingAnchor.constraint(equalTo: container.leadingAnchor).isActive = true 19 | self.trailingAnchor.constraint(equalTo: container.trailingAnchor).isActive = true 20 | self.topAnchor.constraint(equalTo: container.topAnchor).isActive = true 21 | self.bottomAnchor.constraint(equalTo: container.bottomAnchor).isActive = true 22 | } 23 | 24 | } 25 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Utilities/Messages.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | import PlaygroundSupport 10 | 11 | /** 12 | * The types of messages sent to the playground. 13 | */ 14 | 15 | public enum MessageType { 16 | 17 | /// The message describes an event. 18 | public static let event: String = "Event" 19 | } 20 | 21 | /** 22 | * The events that can happen in the app. 23 | */ 24 | 25 | public enum Event: String { 26 | 27 | /// The playground page requested that the predictions be started. 28 | case startPredictions = "StartPredictions" 29 | 30 | } 31 | 32 | extension PlaygroundRemoteLiveViewProxy { 33 | 34 | /** 35 | * Sends an event to the proxy.
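* * The event is wrapped in a dictionary under the `MessageType.event` key, which lets the live view distinguish event messages from other payloads.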
36 | */ 37 | 38 | public func sendEvent(_ event: Event) { 39 | self.send(.dictionary([MessageType.event: PlaygroundValue.string(event.rawValue)])) 40 | } 41 | 42 | } 43 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Utilities/PlaygroundPage+Tools.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import Foundation 9 | import PlaygroundSupport 10 | 11 | extension PlaygroundPage { 12 | 13 | /** 14 | * The object that receives messages from the playground page. 15 | */ 16 | 17 | public var proxy: PlaygroundRemoteLiveViewProxy? { 18 | return liveView as? PlaygroundRemoteLiveViewProxy 19 | } 20 | 21 | } 22 | -------------------------------------------------------------------------------- /Playground/MLMOJI.playgroundbook/Contents/Sources/Utilities/PlaygroundViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | import PlaygroundSupport 10 | 11 | // MARK: PlaygroundContainable 12 | 13 | /** 14 | * A view controller that can be contained inside a playground's safe area. 15 | */ 16 | 17 | public protocol PlaygroundContainable: class { 18 | var containerLayoutGuide: UILayoutGuide! { get set } 19 | func configureConstraints() 20 | } 21 | 22 | // MARK: - PlaygroundViewController 23 | 24 | /** 25 | * A view controller that displays a child view controller within the playground safe area. 26 | */ 27 | 28 | open class PlaygroundViewController<T: UIViewController & PlaygroundContainable>: UIViewController, PlaygroundLiveViewSafeAreaContainer { 29 | 30 | /// The child view controller displayed in the playground. 31 | public let child: T 32 | 33 | // MARK: Initialization 34 | 35 | public init(child: T) { 36 | self.child = child 37 | super.init(nibName: nil, bundle: nil) 38 | } 39 | 40 | public required init?(coder aDecoder: NSCoder) { 41 | fatalError("init(coder:) was not implemented.") 42 | } 43 | 44 | override open func viewDidLoad() { 45 | super.viewDidLoad() 46 | child.containerLayoutGuide = liveViewSafeAreaGuide 47 | addChildViewController(child) 48 | view.addSubview(child.view) 49 | child.configureConstraints() 50 | child.view.pinEdges(to: view) 51 | child.didMove(toParentViewController: self) 52 | } 53 | 54 | } 55 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub.xcodeproj/project.xcworkspace/contents.xcworkspacedata: -------------------------------------------------------------------------------- 1 | 2 | 4 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/AppDelegate.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License.
6 | // 7 | 8 | import UIKit 9 | 10 | @UIApplicationMain 11 | class AppDelegate: UIResponder, UIApplicationDelegate { 12 | 13 | var window: UIWindow? 14 | 15 | func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { 16 | return true 17 | } 18 | 19 | } 20 | 21 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/AppIcon.appiconset/Contents.json: -------------------------------------------------------------------------------- 1 | { 2 | "images" : [ 3 | { 4 | "idiom" : "iphone", 5 | "size" : "20x20", 6 | "scale" : "2x" 7 | }, 8 | { 9 | "idiom" : "iphone", 10 | "size" : "20x20", 11 | "scale" : "3x" 12 | }, 13 | { 14 | "idiom" : "iphone", 15 | "size" : "29x29", 16 | "scale" : "2x" 17 | }, 18 | { 19 | "idiom" : "iphone", 20 | "size" : "29x29", 21 | "scale" : "3x" 22 | }, 23 | { 24 | "idiom" : "iphone", 25 | "size" : "40x40", 26 | "scale" : "2x" 27 | }, 28 | { 29 | "idiom" : "iphone", 30 | "size" : "40x40", 31 | "scale" : "3x" 32 | }, 33 | { 34 | "idiom" : "iphone", 35 | "size" : "60x60", 36 | "scale" : "2x" 37 | }, 38 | { 39 | "idiom" : "iphone", 40 | "size" : "60x60", 41 | "scale" : "3x" 42 | }, 43 | { 44 | "idiom" : "ipad", 45 | "size" : "20x20", 46 | "scale" : "1x" 47 | }, 48 | { 49 | "idiom" : "ipad", 50 | "size" : "20x20", 51 | "scale" : "2x" 52 | }, 53 | { 54 | "idiom" : "ipad", 55 | "size" : "29x29", 56 | "scale" : "1x" 57 | }, 58 | { 59 | "idiom" : "ipad", 60 | "size" : "29x29", 61 | "scale" : "2x" 62 | }, 63 | { 64 | "idiom" : "ipad", 65 | "size" : "40x40", 66 | "scale" : "1x" 67 | }, 68 | { 69 | "idiom" : "ipad", 70 | "size" : "40x40", 71 | "scale" : "2x" 72 | }, 73 | { 74 | "idiom" : "ipad", 75 | "size" : "76x76", 76 | "scale" : "1x" 77 | }, 78 | { 79 | "idiom" : "ipad", 80 | "size" : "76x76", 81 | "scale" : "2x" 82 | }, 83 | { 84 | "idiom" : "ipad", 85 | "size" : "83.5x83.5", 86 | "scale" : "2x" 87 | }, 88 | { 89 | "idiom" : "ios-marketing", 90 | "size" : "1024x1024", 91 | "scale" : "1x" 92 | } 93 | ], 94 | "info" : { 95 | "version" : 1, 96 | "author" : "xcode" 97 | } 98 | } -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/ArtificalBackground.imageset/ArtificialBackground.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/ArtificalBackground.imageset/ArtificialBackground.jpeg -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/ArtificalBackground.imageset/Contents.json: -------------------------------------------------------------------------------- 1 | { 2 | "images" : [ 3 | { 4 | "idiom" : "universal", 5 | "scale" : "1x" 6 | }, 7 | { 8 | "idiom" : "universal", 9 | "filename" : "ArtificialBackground.jpeg", 10 | "scale" : "2x" 11 | }, 12 | { 13 | "idiom" : "universal", 14 | "scale" : "3x" 15 | } 16 | ], 17 | "info" : { 18 | "version" : 1, 19 | "author" : "xcode" 20 | } 21 | } -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/Contents.json: -------------------------------------------------------------------------------- 1 | { 2 | "info" : { 3 | "version" : 1, 4 | "author" : 
"xcode" 5 | } 6 | } -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/KeyboardClear.imageset/Contents.json: -------------------------------------------------------------------------------- 1 | { 2 | "images" : [ 3 | { 4 | "idiom" : "universal", 5 | "scale" : "1x" 6 | }, 7 | { 8 | "idiom" : "universal", 9 | "filename" : "KeyboardClear@2x.png", 10 | "scale" : "2x" 11 | }, 12 | { 13 | "idiom" : "universal", 14 | "scale" : "3x" 15 | } 16 | ], 17 | "info" : { 18 | "version" : 1, 19 | "author" : "xcode" 20 | } 21 | } -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/KeyboardClear.imageset/KeyboardClear@2x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/PlaygroundStub/PlaygroundStub/Assets.xcassets/KeyboardClear.imageset/KeyboardClear@2x.png -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/AugmentationStubViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scolarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | class AugmentationStubViewController: UIViewController { 11 | 12 | let filters: [AugmentationFilter] = [ 13 | .translate(-15, 69), .rotate(45), .rotate(18), .scale(150), .blur 14 | ] 15 | 16 | override func viewDidLoad() { 17 | super.viewDidLoad() 18 | view.backgroundColor = .black 19 | 20 | let augmentationVC = AugmentViewController(filters: filters) 21 | augmentationVC.containerLayoutGuide = view.safeAreaLayoutGuide 22 | 23 | addChildViewController(augmentationVC) 24 | view.addSubview(augmentationVC.view) 25 | augmentationVC.configureConstraints() 26 | 27 | view.backgroundColor = UIColor(patternImage: #imageLiteral(resourceName: "ArtificalBackground")) 28 | augmentationVC.view.pinEdges(to: self.view) 29 | } 30 | 31 | } 32 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Base.lproj/LaunchScreen.storyboard: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Base.lproj/Main.storyboard: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 31 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/EmojiSketches.mlmodelc/coremldata.bin: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/PlaygroundStub/PlaygroundStub/EmojiSketches.mlmodelc/coremldata.bin -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/EmojiSketches.mlmodelc/model.espresso.weights: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/PlaygroundStub/PlaygroundStub/EmojiSketches.mlmodelc/model.espresso.weights -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/EmojiSketches.mlmodelc/model/coremldata.bin: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alexaubry/MLMOJI/2bac5c7bc1b341068a9e42fa2556ace30c215ab0/Playground/PlaygroundStub/PlaygroundStub/EmojiSketches.mlmodelc/model/coremldata.bin -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/Info.plist: -------------------------------------------------------------------------------- 1 | <?xml version="1.0" encoding="UTF-8"?> 2 | <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> 3 | <plist version="1.0"> 4 | <dict> 5 | 	<key>CFBundleDevelopmentRegion</key> 6 | 	<string>$(DEVELOPMENT_LANGUAGE)</string> 7 | 	<key>CFBundleDisplayName</key> 8 | 	<string>MLMOJI</string> 9 | 	<key>CFBundleExecutable</key> 10 | 	<string>$(EXECUTABLE_NAME)</string> 11 | 	<key>CFBundleIdentifier</key> 12 | 	<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string> 13 | 	<key>CFBundleInfoDictionaryVersion</key> 14 | 	<string>6.0</string> 15 | 	<key>CFBundleName</key> 16 | 	<string>$(PRODUCT_NAME)</string> 17 | 	<key>CFBundlePackageType</key> 18 | 	<string>APPL</string> 19 | 	<key>CFBundleShortVersionString</key> 20 | 	<string>1.0</string> 21 | 	<key>CFBundleVersion</key> 22 | 	<string>1</string> 23 | 	<key>LSRequiresIPhoneOS</key> 24 | 	<true/> 25 | 	<key>UILaunchStoryboardName</key> 26 | 	<string>LaunchScreen</string> 27 | 	<key>UIMainStoryboardFile</key> 28 | 	<string>Main</string> 29 | 	<key>UIRequiredDeviceCapabilities</key> 30 | 	<array> 31 | 		<string>armv7</string> 32 | 	</array> 33 | 	<key>UISupportedInterfaceOrientations</key> 34 | 	<array> 35 | 		<string>UIInterfaceOrientationPortrait</string> 36 | 		<string>UIInterfaceOrientationLandscapeLeft</string> 37 | 		<string>UIInterfaceOrientationLandscapeRight</string> 38 | 	</array> 39 | 	<key>UISupportedInterfaceOrientations~ipad</key> 40 | 	<array> 41 | 		<string>UIInterfaceOrientationPortrait</string> 42 | 		<string>UIInterfaceOrientationPortraitUpsideDown</string> 43 | 		<string>UIInterfaceOrientationLandscapeLeft</string> 44 | 		<string>UIInterfaceOrientationLandscapeRight</string> 45 | 	</array> 46 | </dict> 47 | </plist> 48 | -------------------------------------------------------------------------------- /Playground/PlaygroundStub/PlaygroundStub/PredictionStubViewController.swift: -------------------------------------------------------------------------------- 1 | // 2 | // MLMOJI 3 | // 4 | // This file is part of Alexis Aubry's WWDC18 scholarship submission open source project. 5 | // Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License. 6 | // 7 | 8 | import UIKit 9 | 10 | class PredictionStubViewController: UIViewController { 11 | 12 | var predictionVC: PredictionViewController!
13 | 14 | override func viewDidLoad() { 15 | super.viewDidLoad() 16 | view.backgroundColor = .black 17 | 18 | predictionVC = PredictionViewController() 19 | predictionVC.containerLayoutGuide = view.safeAreaLayoutGuide 20 | 21 | addChildViewController(predictionVC) 22 | view.addSubview(predictionVC.view) 23 | predictionVC.configureConstraints() 24 | 25 | view.backgroundColor = UIColor(patternImage: #imageLiteral(resourceName: "ArtificalBackground")) 26 | predictionVC.view.pinEdges(to: self.view) 27 | 28 | predictionVC.didMove(toParentViewController: self) 29 | 30 | navigationItem.rightBarButtonItem = UIBarButtonItem(title: "Start Predictions", style: .plain, target: self, action: #selector(startButtonTapped)) 31 | } 32 | 33 | @objc func startButtonTapped() { 34 | self.predictionVC.startPredictions() 35 | } 36 | 37 | } 38 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # MLMOJI 2 | 3 | Emoji are becoming more central to the way we express ourselves digitally, whether we want to convey an emotion or put some fun into a conversation. This growth is especially visible on mobile devices: the set of available emoji keeps expanding, and finding the right character often requires a lot of swiping. 4 | 5 | My playground explores a way to make this process more fun and intuitive. Using touch handling, Core Graphics, and deep learning, I implemented a keyboard that recognizes hand-drawn emoji and converts them to text. 6 | 7 | You can download the playground book, the data set and the Core ML model from the [releases tab](https://github.com/alexaubry/mlmoji/releases). 8 | 9 | For a live demo, you can watch this video: 10 | 11 | [![MLMOJI Demo](.github/VideoThumbnail.png)](https://www.youtube.com/watch?v=Z7jdLrorctQ) 12 | 13 | ## 🔧 Building MLMOJI 14 | 15 | The first step was to create the drawings themselves. I made a view that accumulates points as the user’s finger moves on the screen and renders the stroke incrementally. When the user lifts their finger, a `UIGraphicsImageRenderer` flattens the strokes into a single static image, which keeps rendering fast. To achieve smoother lines, I used touch coalescing, which exposes the intermediate touch samples recorded between screen refreshes. 16 | 17 | The second core component of the playground is a classifier that recognizes a drawn emoji. Building it involved three tasks: gathering training data as images, training the model, and improving its accuracy. 18 | 19 | The model uses the training data to learn the features of each emoji class. This data came from an app I built around the drawing component described above, which sped up the process of generating drawings. 20 | 21 | With the training images available, I looked into training a Core ML classifier. A convolutional neural network (CNN) was the right fit, because this architecture excels at image-recognition tasks. 22 | 23 | Training a CNN from scratch can take several weeks because of the complexity of the operations applied to the input. Therefore, I used the “transfer learning” technique to train my classifier: retraining a general, pre-trained model to detect new features. 24 | 25 | Using the TensorFlow ML framework, a Docker container, and a Python script, I retrained MobileNet, a small, fast, mobile-friendly neural network, on each emoji’s features. I imported the resulting `mlmodel` into my playground.
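Under the hood, each prediction then boils down to a single Core ML call on the imported model. The sketch below is a minimal illustration, not the playground’s actual code: the `prediction(image:)` input name, the pixel-buffer input, and the `classLabelProbs` output name are assumptions about the interface Xcode generates for the model (the playground wraps this logic in its `PredictionSession` type).

```swift
import CoreML
import CoreVideo

/// Runs the classifier on a rasterized sketch and returns the emoji classes
/// sorted from most to least likely.
/// - Note: `image` and `classLabelProbs` are assumed names; check the
///   interface Xcode generates for `EmojiSketches.mlmodel`.
func predictEmoji(from drawing: CVPixelBuffer) throws -> [(label: String, probability: Double)] {
    let classifier = EmojiSketches()

    // The model outputs a softmax score for each emoji class.
    let output = try classifier.prediction(image: drawing)

    // Sort the scores so the most likely emoji comes first.
    return output.classLabelProbs
        .sorted { $0.value > $1.value }
        .map { (label: $0.key, probability: $0.value) }
}
```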
26 | 27 | The first version of the classifier was overfit and unreliable because my original data set was too small (only 50 drawings per class). I used data augmentation techniques (such as scaling, distortion, and flipping) to generate more training images from the manual drawings, then repeated the training process until the accuracy was more acceptable. 28 | 29 | Finally, using the Playground Books format, I created an interactive playground that explains the techniques used and demonstrates a proof of concept. Using features like the glossary and live view proxies, the book provides an accessible and enjoyable learning experience. 30 | 31 | The final result comes with limitations. Because of the assignment’s time and size constraints, I was only able to gather training data for 7 emoji, and the accuracy still fluctuates somewhat. However, building this playground taught me a lot about deep learning techniques for mobile devices and encouraged me to pursue further research in this field. 32 | -------------------------------------------------------------------------------- /Scholarship.xcworkspace/contents.xcworkspacedata: -------------------------------------------------------------------------------- [workspace XML lost during text extraction] -------------------------------------------------------------------------------- /Scholarship.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist: -------------------------------------------------------------------------------- 1 | <?xml version="1.0" encoding="UTF-8"?> 2 | <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> 3 | <plist version="1.0"> 4 | <dict> 5 | 	<key>IDEDidComputeMac32BitWarning</key> 6 | 	<true/> 7 | </dict> 8 | </plist> 9 | --------------------------------------------------------------------------------