# Machine Learning Design Patterns

## Pipeline

A pipeline processes some data sequentially using an arbitrary number of functions. It's useful for data preprocessing or within the context of an inference framework.

For example you may want to do `preprocess -> inference -> postprocess`

```python
from typing import Union

# Image, Video, Audio and Tensor stand in for whatever types your framework uses
def preprocess(input: Union[str, Image, Video, Audio]) -> Tensor:
    ...  # implementation

def inference(input: Tensor) -> Tensor:
    ...  # implementation

def postprocess(input: Tensor) -> Union[str, Image, Video, Audio]:
    ...  # implementation
```

And then you'd run your pipeline by saying

```python
pipeline = [preprocess, inference, postprocess]

def run(input):
    for step in pipeline:
        input = step(input)
    return input
```

An important detail is that the output type of each function in a pipeline needs to match the input type of the next function.

This pattern isn't limited to inference frameworks. A framework like Keras explicitly has a concept of a layer, and if you were to implement it from scratch a grossly simplified version would look something like this:

```python
class KerasModel():
    def __init__(self):
        self.layers = []

    def add_layer(self, layer):
        self.layers.append(layer)

    def forward(self, input):
        # A model is just a pipeline of layers
        for layer in self.layers:
            input = layer(input)
        return input
```

An exercise for the reader is to make the above work with a batch of examples.

## Workflow
A workflow is a more general version of a pipeline: instead of a single sequential chain it allows branching and merging, and the general pattern for that is a Directed Acyclic Graph (DAG). This is what orchestrators like Airflow, Metaflow and the ensemble support in TorchServe provide.

```python
# DAG example
graph = {
    'input': ['a'],
    'a': ['b', 'e'],
    'b': ['c', 'd'],
    'd': ['e']}
```

In the example above we use a Python dictionary where each key is a node with arrows pointing out of it, and every node in its value list has an arrow pointing into it. If you don't like Python dictionaries you can also describe a DAG using YAML or Python decorators.

Now imagine every node was some Python function or even a PyTorch model: how would you go about executing this DAG?

```python
class WorkflowEngine():
    def __init__(self, dag):
        self.dag = dag

    def execute(self):
        for node, children in self.dag.items():
            step = Step(node, children, dependencies_met=False)
            step.execute()

class Step():
    def __init__(self, node, children, dependencies_met):
        self.node = node
        self.children = children
        self.dependencies_met = dependencies_met
        self.resources = {"cpu": 2, "gpu": 1}

    def execute(self):
        if self.dependencies_met:
            ...  # run the function or model attached to this node
```

A real world orchestrator would also need to take care of dependency management, scheduling and resource allocation.
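The toy engine above never actually resolves dependencies before running a step. As a minimal sketch of how you could do that (the `topological_order` helper below is made up for illustration, it's not how Airflow or TorchServe schedule work), you can compute a valid execution order for the same dictionary representation with a topological sort:

```python
from collections import deque

def topological_order(graph):
    """Return one valid execution order for a DAG given as {node: [children]}."""
    # Count incoming edges for every node
    indegree = {node: 0 for node in graph}
    for children in graph.values():
        for child in children:
            indegree.setdefault(child, 0)
            indegree[child] += 1

    # Start with nodes that have no unmet dependencies
    ready = deque(node for node, degree in indegree.items() if degree == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for child in graph.get(node, []):
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order

print(topological_order(graph))  # e.g. ['input', 'a', 'b', 'c', 'd', 'e']
```

Once you have the order, each node runs only after everything it depends on has finished.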
## Function as data
Function as data is something LISP programmers talk a lot about. The main idea is that you could have a function call like

```lisp
;; Add 1 and 2
(+ 1 2)
```

But if you add a quote at the beginning, it becomes data (an unevaluated list) instead of a computation

```lisp
;; The unevaluated list (+ 1 2)
'(+ 1 2)
```

This is powerful because now you could have a separate program analyze the expression `(+ 1 2)`, realize that the inputs never change and the function is pure, so the outputs never change and the whole expression can be replaced by `3`.

PyTorch has a similar idea, but first let's define a very simple toy model.

```python
import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(100, 100)

    def forward(self, input):
        output = self.linear(input)
        return output
```

Instantiate it with `model = MyModel()` and run an inference with `model(torch.randn(100))`, so it's a function! But if you call `model.state_dict()` you get the weights of the model, so it's also data. So `function = data`.

This is also made clearer if you've ever pickled a model, which is essentially a way to serialize a Python object to bytes on disk, so again `function = data`

```python
import pickle

model = MyModel()
serialized = pickle.dumps(model)
```

## Iterator design pattern

The humble for loop already uses the iterator pattern

```python
for i in range(10):
    print(i)
```

But a more useful operation would be something like

```python
for batch in dataset:
    model(batch)
```

So how do you make `for _ in _` available for your own classes? You do this by implementing the `__iter__()` and `__next__()` methods

```python
from typing import List

class Dataset:
    def __init__(self, data: List[str], batch_size: int = 1):
        self.data = data
        self.batch_size = batch_size
        self.position = 0

    def __iter__(self):
        # Reset and hand back the iterator itself
        self.position = 0
        return self

    def __next__(self):
        # Signal the end of iteration once the data is exhausted
        if self.position >= len(self.data):
            raise StopIteration

        # Return the next batch of batch_size examples (a single example when batch_size=1)
        batch = self.data[self.position : self.position + self.batch_size]
        self.position += self.batch_size
        return batch
```

Now `for batch in Dataset(data, batch_size=32)` works just like the loop above.

## Job queues

Let's say we have a service that needs to pick one of `n` PyTorch models to run on some input

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class Job:
    model: Callable            # a loaded PyTorch model
    input: Any                 # str, Image, Audio, Video, ...
    endpoint: Tuple[str, int]  # (url, port)

class JobProcessor():
    def __init__(self):
        self.jobs: List[Job] = []

    def process_job(self):
        job = self.jobs.pop()
        self.execute(job)

    def execute(self, job):
        output = job.model(job.input)
        self.expose(output, job.endpoint)

    def expose(self, output, endpoint):
        ...  # Use FastAPI or something else
```

With only a couple of lines of code we've designed a multi-model inference framework. If you're not using Python to build the job manager, you can still spawn a Python process, run the inference, write the result to disk or stdout, and pick it back up from the other language.
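The `JobProcessor` above pops jobs off a plain list, which stops being safe once multiple producers and consumers are involved. Here is a minimal sketch of the same idea on top of the standard library's thread-safe `queue.Queue`; the `worker` function and the sentinel-based shutdown are assumptions for illustration, not part of any particular serving framework.

```python
import queue
import threading

job_queue = queue.Queue()

def worker():
    while True:
        job = job_queue.get()          # blocks until a job is available
        if job is None:                # sentinel value used to shut the worker down
            break
        output = job.model(job.input)  # run the model on the job's input
        print(output)                  # a real system would call expose() here
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Enqueue a toy job reusing the Job dataclass from above; str.upper stands in for a model
job_queue.put(Job(model=str.upper, input="hello", endpoint=("localhost", 8080)))
job_queue.join()   # wait until the worker has processed everything
job_queue.put(None)
```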
## Callbacks
Many trainer loops implement callbacks so you can trigger some behavior whenever a condition is fulfilled, for example

```python
# on_epoch_end     -> do_something
# on_training_end  -> do_something

def do_something():
    save_logs_to_tensorboard()
    change_learning_rate()
```

A callback is a particular case of the Observer pattern, so let's implement that. Code paraphrased from https://refactoring.guru/design-patterns/observer/python/example#lang-features

An observer needs to subscribe to some subject whose state changes over time

```python
from typing import List, Optional

class ModelSubject():
    def __init__(self):
        # A trainer includes the model, which epoch it's on, the loss, model weights...
        self.state: Optional["TrainerState"] = None
        self.observers: List["Observer"] = []

    def attach(self, observer: "Observer"):
        self.observers.append(observer)

    def detach(self, observer: "Observer"):
        self.observers.remove(observer)

    def notify(self):
        for observer in self.observers:
            observer.update(self.state)
```

The observer is notified of all state changes of the subject and then needs to do something when that happens.

At a high level an Observer is an abstract class that implements a function called `update()`

```python
from abc import abstractmethod, ABC

class Observer(ABC):

    @abstractmethod
    def update(self, new_state):
        """
        Implement your own observer here
        """
        pass
```

We can then build specific kinds of observers by implementing the `update()` function. In the example below we build an observer that reduces the learning rate of a model when the loss increases

```python
class ChangeLearningRateObserver(Observer):
    def __init__(self):
        self.state = None  # the last TrainerState we saw

    def update(self, new_state):
        if self.state is not None:
            # Do not use this in production code, this is educational only
            if new_state.loss > self.state.loss:
                new_state.lr = new_state.lr * 0.1
        self.state = new_state
```

But this is a powerful framework and we can also implement something like logging without changing the library code.

```python
import os
from typing import Dict

class LogObserver(Observer):
    def __init__(self, log_dir="logs"):
        self.state = None
        self.log_dir: str = log_dir

    def update(self, new_state: Dict):  # Assume the new state is a dictionary
        filename = os.path.join(self.log_dir, "training.log")
        with open(filename, "a") as f:
            for key, value in new_state.items():
                f.write(f"{key}: {value}\n")
        self.state = new_state
```

The benefit of this approach is that you can extend the functionality of a library without changing its core code. Otherwise you'd need to get a PR merged by the core team, and piling every use case people care about into the core would make it unmaintainable. The observer pattern is primarily a way to extend code, which is why it's so popular in training frameworks like fast.ai or PyTorch Lightning.
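To see the pieces working together, here is a hypothetical snippet that drives the subject from a toy training loop; the `TrainerState` dataclass is a stand-in invented for this example, not part of any library.

```python
from dataclasses import dataclass

@dataclass
class TrainerState:
    epoch: int
    loss: float
    lr: float

subject = ModelSubject()
subject.attach(ChangeLearningRateObserver())

# Fake losses: the loss goes up on the last epoch, so the observer reacts
for epoch, loss in enumerate([1.0, 0.5, 0.8]):
    subject.state = TrainerState(epoch=epoch, loss=loss, lr=0.01)
    subject.notify()
    print(subject.state.lr)  # drops by 10x after the loss increase on the last epoch
```

A real trainer would call `notify()` at fixed hook points like `on_epoch_end`, which is exactly what the callback APIs above expose.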
## Learner pattern

The learner pattern was popularized by frameworks like scikit-learn, which reduced modeling to something as simple as

`model.fit(data)`

But implementing this yourself, at least within the context of neural networks, is something you already do if you've used vanilla PyTorch without a training framework.

```python
# data[0] holds the inputs and data[1] holds the labels,
# so data[0][5] is the 6th input example and data[1][5] is its label
data = [inputs, labels]

class Model:
    def __init__(self):
        self.model = nn_model()           # some neural network
        self.loss_function = square_loss  # or L1, cross entropy, etc.

    def fit(self, data):
        inputs, labels = data

        # 1. Compute the forward pass
        output = self.model(inputs)

        # 2. Get the loss
        loss = self.loss_function(output, labels)

        # 3. Update the model
        self.update(loss)

    def update(self, loss):
        # 1. Compute gradients with autograd
        # 2. Nudge the weights in the direction that reduces the loss
        self.model.weights = ...
```

## Batch processing

Suppose you'd like to run `model.forward()` on two different inputs. The naive way of doing this is

```python
model.forward(input_1)
model.forward(input_2)
```

But this becomes painfully slow once you're dealing with a large number of examples

```python
# model.forward is called len(inputs) times
for input in inputs:
    model.forward(input)
```

Generally in numerical code you should fear for loops like the plague and, as much as possible, try to replace them with batch operations.

So instead rewrite your code as

```python
# Stack the inputs into a single batched tensor
batch = torch.stack(inputs)

# model.forward is called once
model.forward(batch)
```

Remember that GPUs aren't great at doing many small operations because there's an overhead to sending data to them, so as much as possible it's better to batch work into large jobs to take advantage of the speedup. (Technically the launch overhead can be worked around with CUDA graphs, but that's still a relatively new feature.)

Vectorization on CPU is another technique for eliminating for loops by operating over chunks of data at once. For example, newer Intel CPUs will turn matrices into long vectors and do math on them with wide AVX-512 instructions.
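As a rough sketch of why this matters, you can time the two approaches yourself. `torch.nn.Linear` stands in for a real model here and the exact numbers depend on your hardware, but the batched call is typically much faster.

```python
import time
import torch

model = torch.nn.Linear(512, 512)                 # stand-in for a real model
inputs = [torch.randn(512) for _ in range(1000)]

# One forward call per example
start = time.perf_counter()
with torch.no_grad():
    outputs = [model(x) for x in inputs]
loop_time = time.perf_counter() - start

# A single forward call on a stacked batch of shape (1000, 512)
start = time.perf_counter()
with torch.no_grad():
    batched_output = model(torch.stack(inputs))
batch_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s, batched: {batch_time:.4f}s")
```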
## Decorator
Decorators are a technique for adding functionality to a function or class without modifying its code. You may have already heard of or used decorators like `@memoize`, `@lru_cache`, `@profile` or `@step`.

As an example let's take a look at how to implement a `@profile` decorator, borrowing code from https://medium.com/uncountable-engineering/pythons-line-profiler-32df2b07b290

```python
from line_profiler import LineProfiler

profiler = LineProfiler()

# A decorator is just a Python function that takes in a function
def profile(func):
    # The inner function takes in unnamed and named arguments
    def inner(*args, **kwargs):
        # New code the decorator adds
        profiler.add_function(func)
        profiler.enable_by_count()

        # Running the decorated function
        return func(*args, **kwargs)
    return inner
```

So now you can just run

```python
@profile
def my_slow_func():
    ...  # some terrible code here
```

In the above decorator we ran some commands before calling `func`, but we could also change `func`, change its arguments, or do whatever we please. This is another one of those patterns, like callbacks, that lets you extend some code without modifying it.

One of the most interesting decorators is the FastAPI one https://github.com/tiangolo/fastapi

```python
@app.get("/")
def read_root():
    return {"Hello": "World"}
```

The above application redirects calls to `/` to the `read_root()` function. Digging into the code a bit, you'll find a function called `get()` in `fastapi/applications.py` https://github.com/tiangolo/fastapi/blob/master/fastapi/applications.py#L425

It's a complicated function but what we care about is

```python
def get(...) -> Callable[[DecoratedCallable], DecoratedCallable]:
    return self.router.get(...)
```

Digging through the code a bit more, we find that `add_api_route()` is called whenever a new `@app.get()` is registered, and `func` is returned in much the same way as in the plain profiling decorator above https://github.com/tiangolo/fastapi/blob/87e29ec2c54ce3651939cc4d10e05d07a2f8b9ce/fastapi/applications.py#L378

The flip side of decorators is that they can lead you to a monolithic architecture where your infrastructure and deployment are tightly coupled to your implementation. That's generally fine if you're a startup but not so fine if many people are contributing code to the same place.
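To make the registration idea concrete, here is a minimal sketch of a FastAPI-style routing decorator. The `routes` dictionary and `get` function are made up for illustration; this is not how FastAPI is implemented internally, but it shows the same register-and-return-`func` trick.

```python
from typing import Callable, Dict

# Hypothetical registry mapping a path to the function that handles it
routes: Dict[str, Callable] = {}

def get(path: str):
    def decorator(func: Callable) -> Callable:
        routes[path] = func  # side effect: remember the handler for this path
        return func          # hand the function back unchanged
    return decorator

@get("/")
def read_root():
    return {"Hello": "World"}

print(routes["/"]())  # a web framework would call this when "/" is requested
```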
## Strategy Pattern

The strategy pattern is classic object-oriented programming and is generally useful when you want to let users set some particular strategy for an object without constraining them too much as the library designer.

For example, suppose you're creating a new Trainer class and don't have time to implement all the optimizers people care about, so you start by adding support for an SGDOptimizer.

```python
from abc import ABC, abstractmethod

from torch import Tensor

class Trainer:
    def __init__(self, optimizer: "Optimizer" = None):
        # The optimizer is injected as a strategy, SGD is just the default
        self.optimizer = optimizer if optimizer is not None else SGDOptimizer()
        ...

# Create an abstract optimizer class
class Optimizer(ABC):
    # We don't want to constrain the input types for such a method.
    # The return type is a tensor because the weights need to be changed by a bit.
    @abstractmethod
    def step(self, *args, **kwargs) -> Tensor:
        pass

class SGDOptimizer(Optimizer):
    def step(self, learn_rate: float, n_iter: int, tolerance: float) -> Tensor:
        ...  # Your SGD implementation here
```

Now someone who doesn't understand how your whole trainer codebase works can create a new optimizer just by inheriting from `Optimizer`

```python
class AdamOptimizer(Optimizer):
    def step(self, beta_1: float, beta_2: float, epsilon: float) -> Tensor:
        ...  # Your Adam implementation here
```

## TODO
* Autograd - https://marksaroufim.medium.com/automatic-differentiation-step-by-step-24240f97a6e6 (Maybe I need to update this tutorial with some python code)
* Matrix Multiplication
  * http://supertech.csail.mit.edu/papers/Prokop99.pdf
  * https://github.com/mitmath/18335/blob/spring21/notes/oblivious-matmul.pdf
* Distributed patterns: good tutorial here https://huggingface.co/docs/transformers/parallelism