├── results
│   ├── tensorflow_nlp_imdb.png
│   ├── results_pytorch_cv
│   │   ├── Apple_M3_FOOD101_resnet50_224_pytorch_results.csv
│   │   ├── Apple_M3_Pro_FOOD101_resnet50_224_pytorch_results.csv
│   │   ├── Apple_M1_Pro_FOOD101_resnet50_224_pytorch_results.csv
│   │   ├── Apple_M3_Max_FOOD101_resnet50_224_pytorch_results.csv
│   │   ├── NVIDIA_TITAN_RTX_FOOD101_resnet50_224_pytorch_results.csv
│   │   ├── Tesla_V100-SXM2-16GB_FOOD101_resnet50_224_pytorch_results.csv
│   │   ├── Apple_M3_CIFAR100_resnet50_32_pytorch_results.csv
│   │   ├── Apple_M1_Pro_CIFAR100_resnet50_32_pytorch_results.csv
│   │   ├── Apple_M3_Max_CIFAR100_resnet50_32_pytorch_results.csv
│   │   ├── Apple_M3_Pro_CIFAR100_resnet50_32_pytorch_results.csv
│   │   ├── NVIDIA_TITAN_RTX_CIFAR100_resnet50_32_pytorch_results.csv
│   │   └── Tesla V100-SXM2-16GB_CIFAR100_resnet50_32_pytorch_results.csv
│   ├── pytorch_cv_resnet50_cifar100.png
│   ├── pytorch_cv_resnet50_food101.png
│   ├── pytorch_nlp_distilbert_imdb.png
│   ├── tensorflow_cv_resnet50_food101.png
│   ├── tensorflow_cv_resnet50_cifar100.png
│   ├── llamacpp_2_7b_chat_q4_0_gguf_tokens_per_second.png
│   ├── results_tensorflow_cv
│   │   ├── Apple_M1_Pro_FOOD101_ResNet50_224_tensorflow_results.csv
│   │   ├── Apple_M3_FOOD101_ResNet50_224_tensorflow_results.csv
│   │   ├── Apple_M3_Max_FOOD101_ResNet50_224_tensorflow_results.csv
│   │   ├── Apple_M3_Pro_FOOD101_ResNet50_224_tensorflow_results.csv
│   │   ├── NVIDIA_TITAN_RTX_FOOD101_ResNet50_224_tensorflow_results.csv
│   │   ├── Tesla_V100-SXM2-16GB_FOOD101_ResNet50_224_tensorflow_results.csv
│   │   ├── Apple_M3_CIFAR100_ResNet50_32_tensorflow_results.csv
│   │   ├── Apple_M3_Pro_CIFAR100_ResNet50_32_tensorflow_results.csv
│   │   ├── Apple_M1_Pro_CIFAR100_ResNet50_32_tensorflow_results.csv
│   │   ├── Apple_M3_Max_CIFAR100_ResNet50_32_tensorflow_results.csv
│   │   ├── NVIDIA_TITAN_RTX_CIFAR100_ResNet50_32_tensorflow_results.csv
│   │   └── Tesla_V100-SXM2-16GB_CIFAR100_ResNet50_32_tensorflow_results.csv
│   ├── results_tensorflow_nlp
│   │   ├── Apple_M3_IMDB_SmallTransformer_200_results.csv
│   │   ├── Apple_M3_Max_IMDB_SmallTransformer_200_results.csv
│   │   ├── Apple_M1_Pro_IMDB_SmallTransformer_200_results.csv
│   │   ├── Apple_M3_Pro_IMDB_SmallTransformer_200_results.csv
│   │   ├── NVIDIA_TITAN_RTX_IMDB_SmallTransformer_200_results.csv
│   │   └── Tesla V100-SXM2-16GB_IMDB_SmallTransformer_200_results.csv
│   └── results_pytorch_nlp
│       ├── Apple_M3_IMDB_distilbert-base-uncased_512_pytorch_results.csv
│       ├── Apple_M3_Pro_IMDB_distilbert-base-uncased_512_pytorch_results.csv
│       ├── Tesla_V100-SXM2-16GB_IMDB_distilbert-base-uncased_512_pytorch_results.csv
│       ├── Apple_M1_Pro_IMDB_distilbert-base-uncased_512_pytorch_results.csv
│       ├── Apple_M3_Max_IMDB_distilbert-base-uncased_512_pytorch_results.csv
│       └── NVIDIA_TITAN_RTX_IMDB_distilbert-base-uncased_512_pytorch_results.csv
├── .gitignore
├── helper_functions.py
├── mlx
│   ├── README.md
│   ├── mnist.py
│   ├── mlx_main.py
│   └── torch_main.py
├── tensorflow_test_computer_vision_cifar100.py
├── tensorflow_test_computer_vision_food101.py
├── tensorflow_test_nlp.py
├── llama2_test.py
├── pytorch_test_nlp.py
├── pytorch_test_computer_vision_cifar100.py
├── pytorch_test_computer_vision_food101.py
└── README.md

/results/tensorflow_nlp_imdb.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/tensorflow_nlp_imdb.png
--------------------------------------------------------------------------------
/results/results_pytorch_cv/Apple_M3_FOOD101_resnet50_224_pytorch_results.csv:
--------------------------------------------------------------------------------
1 | batch_size,avg_time_per_epoch
2 | 
32,1758.2296382080094 3 | -------------------------------------------------------------------------------- /results/pytorch_cv_resnet50_cifar100.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/pytorch_cv_resnet50_cifar100.png -------------------------------------------------------------------------------- /results/pytorch_cv_resnet50_food101.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/pytorch_cv_resnet50_food101.png -------------------------------------------------------------------------------- /results/pytorch_nlp_distilbert_imdb.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/pytorch_nlp_distilbert_imdb.png -------------------------------------------------------------------------------- /results/tensorflow_cv_resnet50_food101.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/tensorflow_cv_resnet50_food101.png -------------------------------------------------------------------------------- /results/tensorflow_cv_resnet50_cifar100.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/tensorflow_cv_resnet50_cifar100.png -------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M3_Pro_FOOD101_resnet50_224_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,1154.9088756946633 3 | 64,1153.2645214443328 4 | -------------------------------------------------------------------------------- /results/llamacpp_2_7b_chat_q4_0_gguf_tokens_per_second.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrdbourke/mac-ml-speed-test/HEAD/results/llamacpp_2_7b_chat_q4_0_gguf_tokens_per_second.png -------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M1_Pro_FOOD101_resnet50_224_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,1266.549370500007 3 | 64,1200.8012103056633 4 | 128,1186.3808301806664 5 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M3_Max_FOOD101_resnet50_224_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,775.4842313886669 3 | 64,767.2547952499978 4 | 128,774.9688542220005 5 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/NVIDIA_TITAN_RTX_FOOD101_resnet50_224_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,286.1672895780454 3 | 64,281.51912301670137 4 | 128,285.2718501399892 5 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/Tesla_V100-SXM2-16GB_FOOD101_resnet50_224_pytorch_results.csv: 
-------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,604.2824396803331 3 | 64,580.8566365773331 4 | 128,591.3937331590001 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M1_Pro_FOOD101_ResNet50_224_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,987.3942983053372 3 | 64,938.9334349999941 4 | 128,926.6391786109986 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M3_FOOD101_ResNet50_224_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,1575.0388523889997 3 | 64,3207.9679787636655 4 | 128,2587.5894250556644 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M3_Max_FOOD101_ResNet50_224_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,670.9488240416686 3 | 64,643.0623972083316 4 | 128,646.8598695833352 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M3_Pro_FOOD101_ResNet50_224_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,1017.2945216249985 3 | 64,1017.4000618333326 4 | 128,1067.2875440000014 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/NVIDIA_TITAN_RTX_FOOD101_ResNet50_224_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,332.85949335728463 3 | 64,318.5872707469389 4 | 128,313.0237473415521 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Tesla_V100-SXM2-16GB_FOOD101_ResNet50_224_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 32,328.7046056459999 3 | 64,292.85519316833324 4 | 128,257.27568483533315 5 | -------------------------------------------------------------------------------- /results/results_tensorflow_nlp/Apple_M3_IMDB_SmallTransformer_200_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,186.32364694277445 3 | 32,162.63966234525046 4 | 64,155.945854028066 5 | 128,161.16359663009644 6 | -------------------------------------------------------------------------------- /results/results_tensorflow_nlp/Apple_M3_Max_IMDB_SmallTransformer_200_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,158.07639606793722 3 | 32,138.934600909551 4 | 64,120.7503666083018 5 | 128,114.80187161763509 6 | -------------------------------------------------------------------------------- /results/results_tensorflow_nlp/Apple_M1_Pro_IMDB_SmallTransformer_200_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,211.14532534281412 3 | 32,182.09144202868143 4 | 64,178.21422592798868 5 | 
128,172.24587869644165 6 | -------------------------------------------------------------------------------- /results/results_tensorflow_nlp/Apple_M3_Pro_IMDB_SmallTransformer_200_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,177.51724934577942 3 | 32,150.7669596672058 4 | 64,138.56261237462363 5 | 128,132.47729436556497 6 | -------------------------------------------------------------------------------- /results/results_tensorflow_nlp/NVIDIA_TITAN_RTX_IMDB_SmallTransformer_200_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,18.382678667704266 3 | 32,15.082591772079468 4 | 64,13.508471250534058 5 | 128,11.10114018122355 6 | -------------------------------------------------------------------------------- /results/results_tensorflow_nlp/Tesla V100-SXM2-16GB_IMDB_SmallTransformer_200_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,33.56022310256958 3 | 32,24.59867199261983 4 | 64,21.30341323216756 5 | 128,23.594334443410236 6 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M3_CIFAR100_resnet50_32_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,303.0134050415989 3 | 32,184.38392894999998 4 | 64,134.28747648339922 5 | 128,120.64122668339988 6 | 256,115.48426262500143 7 | 512,110.64569474180026 8 | 1024,109.70000765000005 9 | -------------------------------------------------------------------------------- /results/results_pytorch_nlp/Apple_M3_IMDB_distilbert-base-uncased_512_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | train_runtime,train_samples_per_second,train_steps_per_second,train_loss,epoch,batch_size,total_flos 2 | 6388.8935,11.739,0.734,0.25999131531165237,3.0,16, 3 | FAILED,FAILED,FAILED,FAILED,FAILED,32,FAILED 4 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M3_CIFAR100_ResNet50_32_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,132.78998702500002 3 | 32,89.12323289179994 4 | 64,67.47725710819995 5 | 128,58.35523201680007 6 | 256,54.51674566679994 7 | 512,52.03046636659983 8 | 1024,51.9580059165999 9 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M3_Pro_CIFAR100_ResNet50_32_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,149.33609456659997 3 | 32,81.23528855000004 4 | 64,49.6020009418 5 | 128,40.31359892500004 6 | 256,37.71170701680003 7 | 512,36.15389770820002 8 | 1024,34.848360266599954 9 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M1_Pro_CIFAR100_resnet50_32_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,437.6034881500003 3 | 32,248.60223550820083 4 | 64,164.78200595840173 5 | 128,127.25892141660151 6 | 256,119.97889474999974 7 | 512,114.71728076659784 8 | 1024,112.30203669999901 9 | 
-------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M3_Max_CIFAR100_resnet50_32_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,330.25597302499955 3 | 32,211.00030125839984 4 | 64,152.73698298340022 5 | 128,124.97272446660062 6 | 256,113.28274693319982 7 | 512,112.51172307499947 8 | 1024,110.84517355820135 9 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/Apple_M3_Pro_CIFAR100_resnet50_32_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,311.33577428340067 3 | 32,194.67217928320025 4 | 64,138.90752104999993 5 | 128,117.29523603339912 6 | 256,114.84013164159987 7 | 512,111.49198255000083 8 | 1024,110.49978799999954 9 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/NVIDIA_TITAN_RTX_CIFAR100_resnet50_32_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,52.24963461775333 3 | 32,27.800227800477295 4 | 64,17.22838211180642 5 | 128,11.842309796065091 6 | 256,10.023274022806437 7 | 512,9.457086602412165 8 | 1024,8.530665774457157 9 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M1_Pro_CIFAR100_ResNet50_32_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,184.64420020820006 3 | 32,100.494637675 4 | 64,57.73143534160008 5 | 128,47.85386208340005 6 | 256,45.071362191600016 7 | 512,44.342994816600005 8 | 1024,43.35281533319994 9 | -------------------------------------------------------------------------------- /results/results_pytorch_cv/Tesla V100-SXM2-16GB_CIFAR100_resnet50_32_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,96.84562273060001 3 | 32,57.28930765880004 4 | 64,34.11284488260007 5 | 128,23.48668880260002 6 | 256,17.850459637600032 7 | 512,15.596215239399953 8 | 1024,15.064376836800056 9 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Apple_M3_Max_CIFAR100_ResNet50_32_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,148.9555805668002 3 | 32,84.19183269999921 4 | 64,47.784369324997535 5 | 128,27.257267208397387 6 | 256,25.221530308196087 7 | 512,25.288347141596024 8 | 1024,22.15307903320063 9 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/NVIDIA_TITAN_RTX_CIFAR100_ResNet50_32_tensorflow_results.csv: -------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,58.2923752611503 3 | 32,33.07459633499384 4 | 64,19.173787624388932 5 | 128,12.029756792169064 6 | 256,8.469756929390133 7 | 512,7.808959643729031 8 | 1024,7.437388936150819 9 | -------------------------------------------------------------------------------- /results/results_tensorflow_cv/Tesla_V100-SXM2-16GB_CIFAR100_ResNet50_32_tensorflow_results.csv: 
-------------------------------------------------------------------------------- 1 | batch_size,avg_time_per_epoch 2 | 16,115.3452957426 3 | 32,62.31742107519999 4 | 64,43.363982928 5 | 128,31.450304859800053 6 | 256,19.419273650800005 7 | 512,19.393532804200003 8 | 1024,11.809973514800003 9 | -------------------------------------------------------------------------------- /results/results_pytorch_nlp/Apple_M3_Pro_IMDB_distilbert-base-uncased_512_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | train_runtime,train_samples_per_second,train_steps_per_second,train_loss,epoch,batch_size,total_flos 2 | 2615.7227,28.673,1.793,0.2574620359766125,3.0,16, 3 | 2606.1318,28.778,0.9,0.2654318260719709,3.0,32, 4 | 2603.117,28.812,0.451,0.2823666493707707,3.0,64, 5 | 2862.9028,26.197,0.205,0.3112901441094016,3.0,128, 6 | FAILED,FAILED,FAILED,FAILED,FAILED,256,FAILED 7 | -------------------------------------------------------------------------------- /results/results_pytorch_nlp/Tesla_V100-SXM2-16GB_IMDB_distilbert-base-uncased_512_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | train_runtime,train_samples_per_second,train_steps_per_second,train_loss,epoch,batch_size,total_flos 2 | 524.695,142.94,8.937,0.2561188785474065,3.0,16, 3 | 502.5434,149.241,4.668,0.264988687032324,3.0,32, 4 | 491.1324,152.708,2.388,0.28148304125435414,3.0,64, 5 | 483.0604,155.26,1.217,0.31020068797935435,3.0,128, 6 | FAILED,FAILED,FAILED,FAILED,FAILED,256,FAILED 7 | -------------------------------------------------------------------------------- /results/results_pytorch_nlp/Apple_M1_Pro_IMDB_distilbert-base-uncased_512_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | train_runtime,train_samples_per_second,train_steps_per_second,train_loss,epoch,batch_size,total_flos 2 | 2425.8181,30.917,1.933,0.25534291591794794,3.0,16, 3 | 2398.7046,31.267,0.978,0.2650448570674236,3.0,32, 4 | 2448.5217,30.631,0.479,0.28236866566429564,3.0,64, 5 | 4556.6679,16.459,0.129,0.3112893072115321,3.0,128, 6 | 9330.4597,8.038,0.032,0.35917681739443824,3.0,256, 7 | FAILED,FAILED,FAILED,FAILED,FAILED,512,FAILED 8 | -------------------------------------------------------------------------------- /results/results_pytorch_nlp/Apple_M3_Max_IMDB_distilbert-base-uncased_512_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | train_runtime,train_samples_per_second,train_steps_per_second,train_loss,epoch,batch_size,total_flos 2 | 1531.3382,48.977,3.062,0.25682562504477735,3.0,16, 3 | 1554.8644,48.236,1.509,0.26505871095519135,3.0,32, 4 | 1544.9908,48.544,0.759,0.2823666493707707,3.0,64, 5 | 1552.6414,48.305,0.379,0.3112917984423994,3.0,128, 6 | 1707.9282,43.913,0.172,0.35917624648736446,3.0,256, 7 | FAILED,FAILED,FAILED,FAILED,FAILED,512,FAILED 8 | -------------------------------------------------------------------------------- /results/results_pytorch_nlp/NVIDIA_TITAN_RTX_IMDB_distilbert-base-uncased_512_pytorch_results.csv: -------------------------------------------------------------------------------- 1 | train_runtime,train_samples_per_second,train_steps_per_second,train_loss,epoch,batch_size,total_flos 2 | 569.4776,131.7,8.234,0.25587708173618817,3.0,16, 3 | 548.4288,136.754,4.278,0.2650960871833873,3.0,32, 4 | 546.4443,137.251,2.147,0.28227153885395023,3.0,64, 5 | 542.9173,138.143,1.083,0.31058487275830743,3.0,128, 6 | 
544.9075,137.638,0.54,0.35983146615579825,3.0,256,
7 | FAILED,FAILED,FAILED,FAILED,FAILED,512,FAILED
8 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Python/Jupyter project gitignore
2 | 
3 | # Python bytecode
4 | __pycache__/
5 | *.py[cod]
6 | 
7 | # Jupyter Notebook
8 | .ipynb_checkpoints/
9 | 
10 | # Environment
11 | venv/
12 | env/
13 | *.env
14 | 
15 | # Logs
16 | *.log
17 | 
18 | # Temporary files
19 | *.tmp
20 | 
21 | # Build artifacts
22 | build/
23 | dist/
24 | *.egg-info/
25 | 
26 | # Compiled files
27 | *.pyc
28 | *.pyo
29 | 
30 | # IDE-specific files
31 | .idea/
32 | .vscode/
33 | *.sublime-project
34 | *.sublime-workspace
35 | 
36 | # Miscellaneous
37 | .DS_Store
38 | Thumbs.db
39 | *.gguf
40 | data/
41 | pytorch_hf_nlp_model/*
42 | wandb/*
--------------------------------------------------------------------------------
/helper_functions.py:
--------------------------------------------------------------------------------
1 | import subprocess
2 | 
3 | def get_nvidia_gpu_name():
4 |     try:
5 |         # Execute the 'nvidia-smi' command and capture its output
6 |         gpu_info = subprocess.check_output(['nvidia-smi'], encoding='utf-8')
7 |     except Exception as e:
8 |         # Handle the case where 'nvidia-smi' is not found
9 |         print(f"[INFO] Error: {e}, not connected to an NVIDIA GPU, setting GPU_NAME to None")
10 |         return None
11 | 
12 |     # If the command was successful, parse the GPU name
13 |     try:
14 |         # Execute 'nvidia-smi -L' command to get detailed GPU info
15 |         gpu_full_name = subprocess.check_output(['nvidia-smi', '-L'], encoding='utf-8')
16 |         gpu_name = gpu_full_name.split(":")[1].split("(")[0].strip()  # e.g. "GPU 0: NVIDIA TITAN RTX (UUID: ...)" -> "NVIDIA TITAN RTX"
17 |         print(f"[INFO] Connected to NVIDIA GPU: {gpu_name}")
18 |         return gpu_name
19 |     except Exception as e:
20 |         print(f"Error occurred while getting GPU name: {e}")
21 |         return None
--------------------------------------------------------------------------------
/mlx/README.md:
--------------------------------------------------------------------------------
1 | Brief testing of Apple's new MLX framework, dedicated to Apple Silicon.
2 | 
3 | See: https://github.com/ml-explore/mlx-examples/tree/main/mnist for more details.
4 | 
5 | Usage:
6 | 
7 | ```
8 | python -m pip install mlx
9 | ```
10 | 
11 | Run the MLX MNIST example with the GPU:
12 | 
13 | ```
14 | python mlx_main.py --gpu
15 | ```
16 | 
17 | Run the equivalent PyTorch MNIST example (requires `torch` to be installed) with the GPU:
18 | 
19 | ```
20 | python torch_main.py --gpu
21 | ```
22 | 
23 | Running the above examples, I've noticed MLX is ~2x faster than PyTorch on Apple Silicon.
24 | 
25 | **Note:** MLX is very clean but still in early development. I also noticed that for MNIST, running on the CPU alone was much faster than using the GPU, on both MLX and PyTorch. I suspect that since modern Apple Silicon chips are so fast on CPU, the extra copying of data to the GPU slows them down on small datasets. More testing will be needed for larger datasets. But this is exciting: MLX could mean decent ML work is possible on an Apple Silicon Mac (e.g. a future M3 Ultra w/ 192GB memory :O).
--------------------------------------------------------------------------------
/mlx/mnist.py:
--------------------------------------------------------------------------------
1 | # Copyright © 2023 Apple Inc.
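# The mlx/README.md above notes that on a dataset as small as MNIST, the CPU can
# beat the GPU on both MLX and PyTorch. A minimal sketch for comparing the two
# MLX devices yourself (a hedged example, assuming the `mlx` package is
# installed; kept as comments so it doesn't change this file's behaviour):
#
#   import time
#   import mlx.core as mx
#
#   for device in (mx.cpu, mx.gpu):
#       mx.set_default_device(device)
#       a = mx.random.normal((4096, 4096))
#       start = time.perf_counter()
#       mx.eval(a @ a)  # MLX is lazy, mx.eval() forces the computation to run
#       print(device, time.perf_counter() - start)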
2 | 
3 | import gzip
4 | import numpy as np
5 | import os
6 | import pickle
7 | from urllib import request
8 | 
9 | 
10 | def mnist(save_dir="/tmp"):
11 |     """
12 |     Load the MNIST dataset in 4 tensors: train images, train labels,
13 |     test images, and test labels.
14 | 
15 |     Checks `save_dir` for already downloaded data otherwise downloads.
16 | 
17 |     Download code modified from:
18 |     https://github.com/hsjeong5/MNIST-for-Numpy
19 |     """
20 | 
21 |     def download_and_save(save_file):
22 |         base_url = "http://yann.lecun.com/exdb/mnist/"
23 |         filename = [
24 |             ["training_images", "train-images-idx3-ubyte.gz"],
25 |             ["test_images", "t10k-images-idx3-ubyte.gz"],
26 |             ["training_labels", "train-labels-idx1-ubyte.gz"],
27 |             ["test_labels", "t10k-labels-idx1-ubyte.gz"],
28 |         ]
29 | 
30 |         mnist = {}
31 |         for name in filename:
32 |             out_file = os.path.join(save_dir, name[1])  # download alongside the pickle, so a non-default save_dir works too
33 |             request.urlretrieve(base_url + name[1], out_file)
34 |         for name in filename[:2]:
35 |             out_file = os.path.join(save_dir, name[1])
36 |             with gzip.open(out_file, "rb") as f:
37 |                 mnist[name[0]] = np.frombuffer(f.read(), np.uint8, offset=16).reshape(
38 |                     -1, 28 * 28
39 |                 )
40 |         for name in filename[-2:]:
41 |             out_file = os.path.join(save_dir, name[1])
42 |             with gzip.open(out_file, "rb") as f:
43 |                 mnist[name[0]] = np.frombuffer(f.read(), np.uint8, offset=8)
44 |         with open(save_file, "wb") as f:
45 |             pickle.dump(mnist, f)
46 | 
47 |     save_file = os.path.join(save_dir, "mnist.pkl")
48 |     if not os.path.exists(save_file):
49 |         download_and_save(save_file)
50 |     with open(save_file, "rb") as f:
51 |         mnist = pickle.load(f)
52 | 
53 |     preproc = lambda x: x.astype(np.float32) / 255.0
54 |     mnist["training_images"] = preproc(mnist["training_images"])
55 |     mnist["test_images"] = preproc(mnist["test_images"])
56 |     return (
57 |         mnist["training_images"],
58 |         mnist["training_labels"].astype(np.uint32),
59 |         mnist["test_images"],
60 |         mnist["test_labels"].astype(np.uint32),
61 |     )
62 | 
63 | 
64 | if __name__ == "__main__":
65 |     train_x, train_y, test_x, test_y = mnist()
66 |     assert train_x.shape == (60000, 28 * 28), "Wrong training set size"
67 |     assert train_y.shape == (60000,), "Wrong training set size"
68 |     assert test_x.shape == (10000, 28 * 28), "Wrong test set size"
69 |     assert test_y.shape == (10000,), "Wrong test set size"
--------------------------------------------------------------------------------
/mlx/mlx_main.py:
--------------------------------------------------------------------------------
1 | # Copyright © 2023 Apple Inc.
2 | 3 | import argparse 4 | import time 5 | 6 | import numpy as np 7 | 8 | import mlx.core as mx 9 | import mlx.nn as nn 10 | import mlx.optimizers as optim 11 | 12 | import mnist 13 | 14 | 15 | class MLP(nn.Module): 16 | """A simple MLP.""" 17 | 18 | def __init__( 19 | self, num_layers: int, input_dim: int, hidden_dim: int, output_dim: int 20 | ): 21 | super().__init__() 22 | layer_sizes = [input_dim] + [hidden_dim] * num_layers + [output_dim] 23 | self.layers = [ 24 | nn.Linear(idim, odim) 25 | for idim, odim in zip(layer_sizes[:-1], layer_sizes[1:]) 26 | ] 27 | 28 | def __call__(self, x): 29 | for l in self.layers[:-1]: 30 | x = mx.maximum(l(x), 0.0) 31 | return self.layers[-1](x) 32 | 33 | 34 | def loss_fn(model, X, y): 35 | return mx.mean(nn.losses.cross_entropy(model(X), y)) 36 | 37 | 38 | def eval_fn(model, X, y): 39 | return mx.mean(mx.argmax(model(X), axis=1) == y) 40 | 41 | 42 | def batch_iterate(batch_size, X, y): 43 | perm = mx.array(np.random.permutation(y.size)) 44 | for s in range(0, y.size, batch_size): 45 | ids = perm[s : s + batch_size] 46 | yield X[ids], y[ids] 47 | 48 | 49 | def main(): 50 | seed = 0 51 | num_layers = 2 52 | hidden_dim = 32 53 | num_classes = 10 54 | batch_size = 256 55 | num_epochs = 10 56 | learning_rate = 1e-1 57 | 58 | np.random.seed(seed) 59 | 60 | # Load the data 61 | train_images, train_labels, test_images, test_labels = map(mx.array, mnist.mnist()) 62 | 63 | # Load the model 64 | model = MLP(num_layers, train_images.shape[-1], hidden_dim, num_classes) 65 | mx.eval(model.parameters()) 66 | 67 | loss_and_grad_fn = nn.value_and_grad(model, loss_fn) 68 | optimizer = optim.SGD(learning_rate=learning_rate) 69 | 70 | for e in range(num_epochs): 71 | tic = time.perf_counter() 72 | for X, y in batch_iterate(batch_size, train_images, train_labels): 73 | loss, grads = loss_and_grad_fn(model, X, y) 74 | optimizer.update(model, grads) 75 | mx.eval(model.parameters(), optimizer.state) 76 | accuracy = eval_fn(model, test_images, test_labels) 77 | toc = time.perf_counter() 78 | print( 79 | f"Epoch {e}: Test accuracy {accuracy.item():.3f}," 80 | f" Time {toc - tic:.3f} (s)" 81 | ) 82 | 83 | 84 | if __name__ == "__main__": 85 | parser = argparse.ArgumentParser("Train a simple MLP on MNIST with MLX.") 86 | parser.add_argument("--gpu", action="store_true", help="Use the Metal back-end.") 87 | args = parser.parse_args() 88 | if not args.gpu: 89 | mx.set_default_device(mx.cpu) 90 | main() -------------------------------------------------------------------------------- /mlx/torch_main.py: -------------------------------------------------------------------------------- 1 | # Copyright © 2023 Apple Inc. 
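# Note: with `--gpu` this script selects the "mps" (Metal) device unconditionally.
# A hedged, optional guard (not part of the original example) for machines where
# the Metal back-end isn't available could look like:
#
#   import torch
#   if args.gpu and not torch.backends.mps.is_available():
#       raise SystemExit("MPS (Metal) back-end not available on this machine")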
2 | 3 | import argparse 4 | import torch 5 | import time 6 | 7 | import mnist 8 | 9 | 10 | class MLP(torch.nn.Module): 11 | def __init__(self, num_layers, input_dim, hidden_dim, output_dim): 12 | super().__init__() 13 | layer_sizes = [hidden_dim] * num_layers 14 | self.layers = torch.nn.ModuleList( 15 | [ 16 | torch.nn.Linear(idim, odim) 17 | for idim, odim in zip( 18 | [input_dim] + layer_sizes, layer_sizes + [output_dim] 19 | ) 20 | ] 21 | ) 22 | 23 | def forward(self, x): 24 | x = self.layers[0](x) 25 | for l in self.layers[1:]: 26 | x = l(x.relu()) 27 | return x 28 | 29 | 30 | def loss_fn(model, X, y): 31 | logits = model(X) 32 | return torch.nn.functional.cross_entropy(logits, y) 33 | 34 | 35 | @torch.no_grad() 36 | def eval_fn(model, X, y): 37 | logits = model(X) 38 | return torch.mean((logits.argmax(-1) == y).float()) 39 | 40 | 41 | def batch_iterate(batch_size, X, y, device): 42 | perm = torch.randperm(len(y), device=device) 43 | for s in range(0, len(y), batch_size): 44 | ids = perm[s : s + batch_size] 45 | yield X[ids], y[ids] 46 | 47 | 48 | if __name__ == "__main__": 49 | parser = argparse.ArgumentParser("Train a simple MLP on MNIST with PyTorch.") 50 | parser.add_argument("--gpu", action="store_true", help="Use the Metal back-end.") 51 | args = parser.parse_args() 52 | 53 | if not args.gpu: 54 | torch.set_num_threads(1) 55 | device = "cpu" 56 | else: 57 | device = "mps" 58 | seed = 0 59 | num_layers = 2 60 | hidden_dim = 32 61 | num_classes = 10 62 | batch_size = 256 63 | num_epochs = 10 64 | learning_rate = 1e-1 65 | 66 | # Load the data 67 | def to_tensor(x): 68 | if x.dtype != "uint32": 69 | return torch.from_numpy(x).to(device) 70 | else: 71 | return torch.from_numpy(x.astype(int)).to(device) 72 | 73 | train_images, train_labels, test_images, test_labels = map(to_tensor, mnist.mnist()) 74 | 75 | # Load the model 76 | model = MLP(num_layers, train_images.shape[-1], hidden_dim, num_classes).to(device) 77 | opt = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.0) 78 | 79 | for e in range(num_epochs): 80 | tic = time.perf_counter() 81 | for X, y in batch_iterate(batch_size, train_images, train_labels, device): 82 | opt.zero_grad() 83 | loss_fn(model, X, y).backward() 84 | opt.step() 85 | accuracy = eval_fn(model, test_images, test_labels) 86 | toc = time.perf_counter() 87 | print( 88 | f"Epoch {e}: Test accuracy {accuracy.item():.3f}," 89 | f" Time {toc - tic:.3f} (s)" 90 | ) -------------------------------------------------------------------------------- /tensorflow_test_computer_vision_cifar100.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | from timeit import default_timer as timer 4 | 5 | 6 | from pathlib import Path 7 | 8 | import pandas as pd 9 | import tensorflow as tf 10 | 11 | from helper_functions import get_nvidia_gpu_name 12 | 13 | try: 14 | import cpuinfo 15 | CPU_PROCESSOR = cpuinfo.get_cpu_info().get('brand_raw').replace(" ", "_") 16 | print(f"[INFO] CPU Processor: {CPU_PROCESSOR}") 17 | except Exception as e: 18 | print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually") 19 | 20 | # Create argument parser 21 | parser = argparse.ArgumentParser() 22 | parser.add_argument("--batch_sizes", default="16, 32, 64, 128, 256, 512, 1024", help="Delimited list input of batch sizes to test, defaults to '16, 32, 64, 128, 256, 512, 1024'", type=str) 23 | parser.add_argument("--epochs", type=int, default=5, 
help="Number of epochs to train for, default is 5") 24 | args = parser.parse_args() 25 | 26 | # Convert batch_sizes to list 27 | batch_size_args = [int(item.strip()) for item in args.batch_sizes.split(",")] 28 | 29 | # Set constants 30 | GPU_NAME = get_nvidia_gpu_name() 31 | DATASET_NAME = "CIFAR100" 32 | INPUT_SHAPE = (32, 32, 3) 33 | BATCH_SIZES = batch_size_args 34 | MODEL_NAME = "ResNet50" 35 | EPOCHS = args.epochs 36 | BACKEND = "tensorflow" 37 | 38 | print(f"[INFO] Testing model: {MODEL_NAME} on {DATASET_NAME} dataset with input shape {INPUT_SHAPE} for {EPOCHS} epochs across batch sizes: {BATCH_SIZES}") 39 | 40 | # Load dataset 41 | cifar = tf.keras.datasets.cifar100 42 | (x_train, y_train), (x_test, y_test) = cifar.load_data() 43 | 44 | # Setup training 45 | def train_and_time(batch_sizes=BATCH_SIZES, 46 | epochs=EPOCHS, 47 | x_train=x_train, 48 | y_train=y_train): 49 | 50 | batch_size_training_results = [] 51 | for batch_size in batch_sizes: 52 | print(f"[INFO] Training with batch size {batch_size} for {epochs} epochs...") 53 | 54 | # Create model 55 | model = tf.keras.applications.ResNet50( 56 | include_top=True, 57 | weights=None, 58 | input_shape=INPUT_SHAPE, 59 | classes=100,) 60 | 61 | # Create loss function and compile model 62 | loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) 63 | model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"]) 64 | 65 | try: 66 | start_time = timer() 67 | 68 | model.fit(x_train, 69 | y_train, 70 | epochs=epochs, 71 | batch_size=batch_size, 72 | validation_data=None) 73 | # No validation, just testing training speed 74 | # validation_data=(x_test, y_test)) 75 | 76 | end_time = timer() 77 | 78 | total_training_time = end_time - start_time 79 | avg_time_per_epoch = total_training_time / epochs 80 | 81 | batch_size_training_results.append({"batch_size": batch_size, 82 | "avg_time_per_epoch": avg_time_per_epoch}) 83 | print(f"[INFO] Finished training with batch size {batch_size} for {epochs} epochs, total time: {round(total_training_time, 3)} seconds, avg time per epoch: {round(avg_time_per_epoch, 3)} seconds\n\n") 84 | save_results(batch_size_training_results) 85 | except Exception as e: 86 | print(f"[INFO] Error: {e}") 87 | print(f"[INFO] Failed training with batch size {batch_size} for {epochs} epochs...\n\n") 88 | batch_size_training_results.append({"batch_size": batch_size, 89 | "avg_time_per_epoch": "FAILED"}) 90 | save_results(batch_size_training_results) 91 | break 92 | 93 | return batch_size_training_results 94 | 95 | def save_results(batch_size_training_results, results_dir="results", target_dir="results_tensorflow_cv"): 96 | # Create CSV filename 97 | if GPU_NAME: 98 | csv_filename = f"{GPU_NAME.replace(' ', '_')}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[0]}_{BACKEND}_results.csv" 99 | else: 100 | csv_filename = f"{CPU_PROCESSOR}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[0]}_{BACKEND}_results.csv" 101 | 102 | # Make the target results directory if it doesn't exist (include the parents) 103 | target_results_dir = target_dir 104 | results_path = Path(results_dir) / target_results_dir 105 | results_path.mkdir(parents=True, exist_ok=True) 106 | csv_filepath = results_path / csv_filename 107 | 108 | # Turn dict into DataFrame 109 | df = pd.DataFrame(batch_size_training_results) 110 | 111 | # Save to CSV 112 | print(f"[INFO] Saving results to: {csv_filepath}") 113 | df.to_csv(csv_filepath, index=False) 114 | 115 | if __name__ == "__main__": 116 | batch_size_training_results = train_and_time() 117 | 
print(f"[INFO] Results:\n{batch_size_training_results}") 118 | 119 | -------------------------------------------------------------------------------- /tensorflow_test_computer_vision_food101.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | from timeit import default_timer as timer 4 | 5 | 6 | from pathlib import Path 7 | 8 | import pandas as pd 9 | import tensorflow as tf 10 | import tensorflow_datasets as tfds 11 | 12 | from helper_functions import get_nvidia_gpu_name 13 | 14 | try: 15 | import cpuinfo 16 | CPU_PROCESSOR = cpuinfo.get_cpu_info().get('brand_raw').replace(" ", "_") 17 | print(f"[INFO] CPU Processor: {CPU_PROCESSOR}") 18 | except Exception as e: 19 | print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually") 20 | 21 | # Create argument parser 22 | parser = argparse.ArgumentParser() 23 | parser.add_argument("--batch_sizes", default="32, 64, 128", help="Delimited list input of batch sizes to test, defaults to '32, 64, 128'", type=str) 24 | parser.add_argument("--epochs", type=int, default=3, help="Number of epochs to train for, default is 3") 25 | args = parser.parse_args() 26 | 27 | # Convert batch_sizes to list 28 | batch_size_args = [int(item.strip()) for item in args.batch_sizes.split(",")] 29 | 30 | # Set constants 31 | GPU_NAME = get_nvidia_gpu_name() 32 | DATASET_NAME = "FOOD101" 33 | IMAGE_SIZE = 224 34 | INPUT_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3) 35 | BATCH_SIZES = batch_size_args 36 | MODEL_NAME = "ResNet50" 37 | EPOCHS = args.epochs 38 | BACKEND = "tensorflow" 39 | 40 | print(f"[INFO] Testing model: {MODEL_NAME} on {DATASET_NAME} dataset with input shape {INPUT_SHAPE} for {EPOCHS} epochs across batch sizes: {BATCH_SIZES}") 41 | 42 | # Load the dataset 43 | # Note: This is store a ~5GB file in ./data, so make sure to delete it after if you want to free up space 44 | print(f"[INFO] Loading {DATASET_NAME} dataset, note: this will store a ~5GB file in ./data, so make sure to delete it after if you want to free up space") 45 | (train_data, test_data), dataset_info = tfds.load( 46 | 'food101', 47 | split=['train', 'validation'], 48 | as_supervised=True, 49 | with_info=True, 50 | data_dir="./data" 51 | ) 52 | print(f"[INFO] Finished loading {DATASET_NAME} dataset.") 53 | 54 | # Create preprocess layer 55 | preprocess_layer = tf.keras.Sequential([ 56 | tf.keras.layers.Rescaling(1/255.), 57 | tf.keras.layers.Resizing(height=IMAGE_SIZE, width=IMAGE_SIZE) 58 | ]) 59 | 60 | # Setup training 61 | def train_and_time(batch_sizes=BATCH_SIZES, 62 | epochs=EPOCHS, 63 | train_data=train_data, 64 | test_data=test_data): 65 | 66 | batch_size_training_results = [] 67 | for batch_size in batch_sizes: 68 | print(f"[INFO] Training with batch size {batch_size} for {epochs} epochs...") 69 | 70 | # Map preprocessing function to data and turn into batches 71 | train_data_batched = train_data.map(lambda image, label: (preprocess_layer(image), label)).shuffle(1000).batch(batch_size).prefetch(tf.data.AUTOTUNE) 72 | # train_data = train_data.map(lambda image, label: (preprocess_layer(image), label)).shuffle(1000) 73 | test_data = test_data.map(lambda image, label: (preprocess_layer(image), label)) # don't shuffle test data (we're not using it anyway) 74 | 75 | # Print shape of first training batch 76 | for image_batch, label_batch in train_data_batched.take(1): 77 | print(f"[INFO] Training batch shape: {image_batch.shape}, label batch shape: 
{label_batch.shape}") 78 | 79 | # Create model 80 | model = tf.keras.applications.ResNet50( 81 | include_top=True, 82 | weights=None, 83 | input_shape=INPUT_SHAPE, 84 | classes=101,) 85 | 86 | # Create loss function and compile model 87 | loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) 88 | model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"]) 89 | 90 | try: 91 | start_time = timer() 92 | 93 | model.fit(train_data_batched, # batch the data dynamically 94 | epochs=epochs, 95 | batch_size=batch_size, 96 | validation_data=None) 97 | # No validation, just testing training speed 98 | # validation_data=(x_test, y_test)) 99 | 100 | end_time = timer() 101 | 102 | total_training_time = end_time - start_time 103 | avg_time_per_epoch = total_training_time / epochs 104 | 105 | batch_size_training_results.append({"batch_size": batch_size, 106 | "avg_time_per_epoch": avg_time_per_epoch}) 107 | print(f"[INFO] Finished training with batch size {batch_size} for {epochs} epochs, total time: {round(total_training_time, 3)} seconds, avg time per epoch: {round(avg_time_per_epoch, 3)} seconds\n\n") 108 | 109 | save_results(batch_size_training_results) 110 | except Exception as e: 111 | print(f"[INFO] Error: {e}") 112 | print(f"[INFO] Failed training with batch size {batch_size} for {epochs} epochs...\n\n") 113 | batch_size_training_results.append({"batch_size": batch_size, 114 | "avg_time_per_epoch": "FAILED"}) 115 | 116 | save_results(batch_size_training_results) 117 | break 118 | 119 | return batch_size_training_results 120 | 121 | def save_results(batch_size_training_results, results_dir="results", target_dir="results_tensorflow_cv"): 122 | # Create CSV filename 123 | if GPU_NAME: 124 | csv_filename = f"{GPU_NAME.replace(' ', '_')}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[0]}_{BACKEND}_results.csv" 125 | else: 126 | csv_filename = f"{CPU_PROCESSOR}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[0]}_{BACKEND}_results.csv" 127 | 128 | # Make the target results directory if it doesn't exist (include the parents) 129 | target_results_dir = target_dir 130 | results_path = Path(results_dir) / target_results_dir 131 | results_path.mkdir(parents=True, exist_ok=True) 132 | csv_filepath = results_path / csv_filename 133 | 134 | # Turn dict into DataFrame 135 | df = pd.DataFrame(batch_size_training_results) 136 | 137 | # Save to CSV 138 | print(f"[INFO] Saving results to: {csv_filepath}") 139 | df.to_csv(csv_filepath, index=False) 140 | 141 | if __name__ == "__main__": 142 | batch_size_training_results = train_and_time() 143 | print(f"[INFO] Results:\n{batch_size_training_results}") -------------------------------------------------------------------------------- /tensorflow_test_nlp.py: -------------------------------------------------------------------------------- 1 | """ 2 | Small script to measure training speed of a transformer model on IMDB dataset 3 | Adapted from: https://keras.io/examples/nlp/text_classification_with_transformer/ 4 | """ 5 | import argparse 6 | import time 7 | import os 8 | 9 | from pathlib import Path 10 | 11 | import pandas as pd 12 | import tensorflow as tf 13 | from tensorflow import keras 14 | from tensorflow.keras import layers 15 | 16 | from helper_functions import get_nvidia_gpu_name 17 | 18 | try: 19 | import cpuinfo 20 | CPU_PROCESSOR = cpuinfo.get_cpu_info().get("brand_raw").replace(" ", "_") 21 | print(f"[INFO] CPU Processor: {CPU_PROCESSOR}") 22 | except Exception as e: 23 | print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from 
cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually") 24 | 25 | # Create argument parser 26 | parser = argparse.ArgumentParser() 27 | 28 | # Misc args 29 | parser.add_argument("--batch_sizes", type=str, default="16, 32, 64, 128", help="String delimited series of batch sizes to test training speed on, default is '16, 32, 64, 128'") 30 | parser.add_argument("--epochs", type=int, default=3, help="Number of epochs to train for, default is 3") 31 | 32 | # Model args 33 | parser.add_argument("--embed_dim", type=int, default=128, help="Embedding size for each token, default is 128") 34 | parser.add_argument("--num_heads", type=int, default=8, help="Number of attention heads, default is 8") 35 | parser.add_argument("--ff_dim", type=int, default=128, help="Hidden layer size in feed forward network inside transformer, default is 128") 36 | parser.add_argument("--num_transformer_blocks", type=int, default=1, help="Number of transformer blocks in the model, default is 1") 37 | parser.add_argument("--dropout_rate", type=float, default=0.1, help="Dropout for layers outside of Transformer blocks, default is 0.1") 38 | parser.add_argument("--num_classes", type=int, default=2, help="Number of output classes, default is 2") 39 | 40 | # Data args 41 | parser.add_argument("--maxlen", type=int, default=200, help="Maximum length of input sequence, default is 200") 42 | parser.add_argument("--vocab_size", type=int, default=20000, help="Vocabulary size, default is 20000") 43 | 44 | args = parser.parse_args() 45 | 46 | # Set constants 47 | GPU_NAME = get_nvidia_gpu_name() 48 | DATASET_NAME = "IMDB" 49 | BATCH_SIZES = [int(item.strip()) for item in args.batch_sizes.split(",")] # turn batch sizes into list, e.g. "16, 32, 64, 128" -> [16, 32, 64, 128] 50 | MODEL_NAME = "SmallTransformer" 51 | EPOCHS = args.epochs 52 | 53 | # Model hyperparameters 54 | EMBED_DIM = args.embed_dim 55 | NUM_HEADS = args.num_heads 56 | FEEDFORWARD_DIM = args.ff_dim 57 | NUM_TRANSFORMER_BLOCKS = args.num_transformer_blocks 58 | DROPOUT_RATE = args.dropout_rate 59 | NUM_OUTPUT_CLASSES = args.num_classes 60 | 61 | # Data hyperparameters 62 | MAXLEN = args.maxlen 63 | VOCAB_SIZE = args.vocab_size 64 | INPUT_SHAPE = (MAXLEN,) 65 | 66 | # Print info 67 | print(f"[INFO] Testing model: {MODEL_NAME} on {DATASET_NAME} dataset with sequence length {INPUT_SHAPE} for {EPOCHS} epochs across batch sizes: {BATCH_SIZES}") 68 | 69 | ### Create Transformer Model ### 70 | class TransformerBlock(layers.Layer): 71 | def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): 72 | super().__init__() 73 | self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) 74 | self.ffn = keras.Sequential( 75 | [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),] 76 | ) 77 | self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) 78 | self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) 79 | self.dropout1 = layers.Dropout(rate) 80 | self.dropout2 = layers.Dropout(rate) 81 | 82 | def call(self, inputs, training): 83 | attn_output = self.att(inputs, inputs) 84 | attn_output = self.dropout1(attn_output, training=training) 85 | out1 = self.layernorm1(inputs + attn_output) 86 | ffn_output = self.ffn(out1) 87 | ffn_output = self.dropout2(ffn_output, training=training) 88 | return self.layernorm2(out1 + ffn_output) 89 | 90 | class TokenAndPositionEmbedding(layers.Layer): 91 | def __init__(self, maxlen, vocab_size, embed_dim): 92 | super().__init__() 93 | self.token_emb = layers.Embedding(input_dim=vocab_size, 
output_dim=embed_dim)
94 |         self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
95 | 
96 |     def call(self, x):
97 |         maxlen = tf.shape(x)[-1]
98 |         positions = tf.range(start=0, limit=maxlen, delta=1)
99 |         positions = self.pos_emb(positions)
100 |         x = self.token_emb(x)
101 |         return x + positions
102 | 
103 | def create_transformer_model(embed_dim=EMBED_DIM,
104 |                              num_heads=NUM_HEADS,
105 |                              ff_dim=FEEDFORWARD_DIM,
106 |                              num_transformer_blocks=NUM_TRANSFORMER_BLOCKS,
107 |                              dropout_rate=DROPOUT_RATE,
108 |                              num_classes=NUM_OUTPUT_CLASSES,
109 |                              maxlen=MAXLEN,
110 |                              vocab_size=VOCAB_SIZE):
111 |     inputs = layers.Input(shape=(maxlen,))
112 |     embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
113 |     x = embedding_layer(inputs)
114 |     transformer_blocks = tf.keras.Sequential([TransformerBlock(embed_dim, num_heads, ff_dim) for _ in range(num_transformer_blocks)])
115 |     x = transformer_blocks(x)
116 |     x = layers.GlobalAveragePooling1D()(x)
117 |     x = layers.Dropout(dropout_rate)(x)
118 |     x = layers.Dense(ff_dim, activation="relu")(x)
119 |     x = layers.Dropout(dropout_rate)(x)
120 |     outputs = layers.Dense(units=num_classes, activation="softmax")(x)
121 |     model = keras.Model(inputs=inputs, outputs=outputs)
122 | 
123 |     return model
124 | 
125 | model = create_transformer_model(embed_dim=EMBED_DIM,
126 |                                  num_heads=NUM_HEADS,
127 |                                  ff_dim=FEEDFORWARD_DIM,
128 |                                  num_transformer_blocks=NUM_TRANSFORMER_BLOCKS,
129 |                                  dropout_rate=DROPOUT_RATE,
130 |                                  num_classes=NUM_OUTPUT_CLASSES)
131 | 
132 | print("[INFO] Model summary:"); model.summary()  # model.summary() prints itself and returns None, so don't interpolate it into an f-string
133 | 
134 | ### Data preparation
135 | print(f"[INFO] Loading {DATASET_NAME} dataset...")
136 | (x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
137 | x_train = keras.utils.pad_sequences(x_train, maxlen=MAXLEN)
138 | x_val = keras.utils.pad_sequences(x_val, maxlen=MAXLEN)
139 | 
140 | print(f"[INFO] Prepared {len(x_train)} training sequences of max length: {MAXLEN} with vocab size: {VOCAB_SIZE}")
141 | print(f"[INFO] Prepared {len(x_val)} validation sequences of max length: {MAXLEN} with vocab size: {VOCAB_SIZE}")
142 | 
143 | def train(x_train=x_train,
144 |           y_train=y_train,
145 |           x_val=x_val,
146 |           y_val=y_val,
147 |           batch_sizes=BATCH_SIZES,
148 |           epochs=EPOCHS):
149 | 
150 |     batch_size_training_results = []
151 |     for batch_size in batch_sizes:
152 |         print(f"[INFO] Training with batch size {batch_size} for {epochs} epochs...")
153 | 
154 |         # Prepare the data according to tf.data best practices - https://www.tensorflow.org/guide/data_performance
155 |         train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
156 |         train_dataset = train_dataset.cache().shuffle(buffer_size=1024).batch(batch_size).prefetch(tf.data.AUTOTUNE)
157 | 
158 |         val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
159 |         val_dataset = val_dataset.cache().batch(batch_size).prefetch(tf.data.AUTOTUNE)
160 | 
161 |         # Create new model
162 |         model = create_transformer_model()
163 | 
164 |         # Compile model
165 |         model.compile(optimizer="adam",
166 |                       loss="sparse_categorical_crossentropy",
167 |                       metrics=["accuracy"])
168 | 
169 |         try:
170 | 
171 |             # Start timer
172 |             start_time = time.time()
173 | 
174 |             history = model.fit(train_dataset,
175 |                                 epochs=epochs,
176 |                                 # Not using validation data, just testing training speed
177 |                                 # validation_data=(x_val, y_val)
178 |                                 )
179 | 
180 |             # End timer
181 |             end_time = time.time()
182 | 
183 |             total_training_time = end_time - start_time
184 |             avg_time_per_epoch = total_training_time / epochs
185 | 
186 |             batch_size_training_results.append({"batch_size": batch_size,
187 |                                                 "avg_time_per_epoch": avg_time_per_epoch})
188 |             print(f"[INFO] Finished training with batch size {batch_size} for {epochs} epochs, total time: {round(total_training_time, 3)} seconds, avg time per epoch: {round(avg_time_per_epoch, 3)} seconds\n\n")
189 |         except Exception as e:
190 |             print(f"[INFO] Error: {e}, failed training with batch size {batch_size} for {epochs} epochs...\n\n")
191 |             batch_size_training_results.append({"batch_size": batch_size,
192 |                                                 "avg_time_per_epoch": "FAILED"})
193 |             break
194 | 
195 |     return batch_size_training_results
196 | 
197 | if __name__ == "__main__":
198 |     batch_size_training_results = train()
199 |     print(f"[INFO] Results:\n{batch_size_training_results}")
200 | 
201 |     # Create CSV filename
202 |     if GPU_NAME:
203 |         csv_filename = f"{GPU_NAME.replace(' ', '_')}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[0]}_results.csv"
204 |     else:
205 |         csv_filename = f"{CPU_PROCESSOR}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[0]}_results.csv"
206 | 
207 |     # Make the target results directory if it doesn't exist (include the parents)
208 |     target_results_dir = "results_tensorflow_nlp"
209 |     results_path = Path("results") / target_results_dir
210 |     results_path.mkdir(parents=True, exist_ok=True)
211 |     csv_filepath = results_path / csv_filename
212 | 
213 |     # Turn dict into DataFrame
214 |     df = pd.DataFrame(batch_size_training_results)
215 |     # df.head()
216 | 
217 |     # Save to CSV
218 |     print(f"[INFO] Saving results to: {csv_filepath}")
219 |     df.to_csv(csv_filepath, index=False)
220 | 
221 | 
222 | 
--------------------------------------------------------------------------------
/llama2_test.py:
--------------------------------------------------------------------------------
1 | """
2 | Install instructions:
3 | 
4 | # !pip install pandas
5 | # !pip install py-cpuinfo
6 | # !pip install langchain
7 | # !pip install prettytable
8 | # !CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
9 | 
10 | Guides followed for this script:
11 | - https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/HelloLlamaLocal.ipynb
12 | - https://llama-cpp-python.readthedocs.io/en/latest/install/macos/
13 | - https://python.langchain.com/docs/integrations/llms/llamacpp
14 | """
15 | 
16 | # Standard library imports
17 | import argparse
18 | from pathlib import Path
19 | from timeit import default_timer as timer
20 | 
21 | # Third-party imports
22 | import pandas as pd
23 | from tqdm.auto import tqdm
24 | from prettytable import PrettyTable # for nice looking results
25 | 
26 | # Local application/library specific imports
27 | from langchain.callbacks.manager import CallbackManager
28 | from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
29 | from langchain.llms import LlamaCpp
30 | 
31 | try:
32 |     import cpuinfo
33 |     CPU_PROCESSOR = cpuinfo.get_cpu_info().get('brand_raw').replace(" ", "_")
34 |     print(f"[INFO] Processor: {CPU_PROCESSOR}")
35 | except Exception as e:
36 |     print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually")
37 | 
38 | if __name__ == "__main__":
39 |     parser = argparse.ArgumentParser(description='Run Llama 2 on a set of questions')
40 |     parser.add_argument('--path_to_gguf_model', default="./llama-2-7b-chat.Q4_0.gguf", type=str, help='Path to the Llama 2 model, see: https://huggingface.co/TheBloke for downloads, should be ".gguf" format')
41 |     parser.add_argument('--num_times_per_question', default=5, 
type=int, help='Number of times to ask each question') 42 | parser.add_argument('--num_questions', default='all', type=str, help='Number of questions to ask, default "all", can be a positive integer between 1 and 20') 43 | parser.add_argument('--max_tokens', default=500, type=int, help='Max tokens to generate per question, default 500') 44 | parser.add_argument('--stream_output', action='store_true', help='Stream output token by token, may reduce speed') 45 | args = parser.parse_args() 46 | 47 | if args.stream_output == True: 48 | print(f"[INFO] Streaming output set to: {args.stream_output}, this will print the output token by token, may reduce speed") 49 | 50 | # Prompt questions for the model (using "Let's think step by step..." for verbosity of output) 51 | # See "Let's think step by step..." paper: https://arxiv.org/abs/2205.11916 52 | questions = [ 53 | "What are the nutrition facts of an apple? Let's think step by step...", 54 | "What steps are involved in the water cycle? Let's think step by step...", 55 | "How does a computer process a command? Let's think step by step...", 56 | "What are the stages of a butterfly's life cycle? Let's think step by step...", 57 | "How does a refrigerator keep food cold? Let's think step by step...", 58 | "What happens when we digest food? Let's think step by step...", 59 | "How does an airplane stay airborne? Let's think step by step...", 60 | "What are the processes involved in making a cup of coffee? Let's think step by step...", 61 | "How do bees produce honey? Let's think step by step...", 62 | "What are the key steps in recycling plastic? Let's think step by step...", 63 | "How does a clock measure time? Let's think step by step...", 64 | "What is the process of photosynthesis in plants? Let's think step by step...", 65 | "How does a car engine work? Let's think step by step...", 66 | "What are the basic steps in baking bread? Let's think step by step...", 67 | "How do solar panels generate electricity? Let's think step by step...", 68 | "What are the stages of human sleep? Let's think step by step...", 69 | "How does a smartphone connect to the internet? Let's think step by step...", 70 | "What is the life cycle of a star? Let's think step by step...", 71 | "How does the human immune system fight viruses? Let's think step by step...", 72 | "What are the steps involved in creating a movie? Let's think step by step..." 
73 | ] 74 | 75 | def process_questions(arg, questions=questions): 76 | if arg == "all": 77 | print(f"[INFO] num_questions arg is 'all', will ask {len(questions)} questions...") 78 | return questions 79 | try: 80 | arg = int(arg) 81 | except ValueError: 82 | raise ValueError("Argument must be 'all' or a positive integer between 1 and 20") 83 | if arg <= 0: 84 | raise ValueError("Argument must be 'all' or a positive integer between 1 and 20") 85 | # Make sure arg is not greater than the number of questions 86 | if arg > len(questions): 87 | print(f"[INFO] num_questions arg is '{arg}' & is greater than the number of questions '{len(questions)}', returning all questions...") 88 | return questions 89 | else: 90 | print(f"[INFO] num_questions arg is '{arg}', will ask {arg} questions...") 91 | return questions[:arg] 92 | 93 | questions = process_questions(arg=args.num_questions) 94 | 95 | 96 | ### Model setup ### 97 | 98 | # Set up your target model here, download from: https://huggingface.co/TheBloke, for macOS, you'll generally want "Q4_0.gguf" formatted models 99 | path_to_gguf_model = args.path_to_gguf_model 100 | 101 | # Make sure model path exists 102 | assert Path(path_to_gguf_model).exists(), f"Model path '{path_to_gguf_model}' does not exist, please download it from Hugging Face and save it to the local directory, see: https://huggingface.co/TheBloke for '.gguf' models to run on macOS" 103 | 104 | # Print model path 105 | print(f"[INFO] Using model at path: {path_to_gguf_model}") 106 | 107 | # For token-wise streaming so you'll see the answer gets generated token by token 108 | # when Llama is answering your question 109 | callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) 110 | 111 | # 1 GPU layer is enough for M-series chips 112 | n_gpu_layers=1 113 | 114 | llm = LlamaCpp( 115 | model_path=path_to_gguf_model, 116 | n_gpu_layers=n_gpu_layers, 117 | temperature=0.5, 118 | max_tokens=args.max_tokens, 119 | n_batch=512, 120 | top_p=1, 121 | f16_kv=True, 122 | n_ctx=2048, # context window 123 | callback_manager=callback_manager if args.stream_output == True else None, 124 | verbose=False, # verbose=True prints model output to the console, turned off here to save printing (may reduce speed) 125 | ) 126 | 127 | # Helper function for converting a string to token count 128 | def character_count_to_tokens(sequence: str) -> float: 129 | character_to_token_ratio = 4 # 4 chars = 1 token, see: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them 130 | 131 | # Get length of char_sequence without whitespace 132 | character_len = len(sequence.strip()) 133 | 134 | # Return the token length based on char length 135 | return character_len / character_to_token_ratio 136 | 137 | 138 | ### Ask questions ### 139 | NUM_TIMES = args.num_times_per_question 140 | TOTAL_QUESTIONS_TO_ASK = len(questions) * NUM_TIMES 141 | 142 | if TOTAL_QUESTIONS_TO_ASK > 200: 143 | print(f"[INFO] Asking {len(questions)} questions {NUM_TIMES} times each, total questions: {TOTAL_QUESTIONS_TO_ASK}") 144 | # print a warning 145 | print(f"[WARNING] Asking {TOTAL_QUESTIONS_TO_ASK} questions, this may take a while...
(consider reducing the number of questions or number of times to ask each question)") 146 | else: 147 | print(f"[INFO] Asking {len(questions)} questions {NUM_TIMES} times each, total questions: {TOTAL_QUESTIONS_TO_ASK}") 148 | 149 | # Prompt model X times per question 150 | qa_results = [] 151 | for question in tqdm(questions): 152 | print(f"[INFO] Asking question '{question}' {NUM_TIMES} times.") 153 | for i in range(NUM_TIMES): 154 | start_time = timer() 155 | answer = llm(question) 156 | end_time = timer() 157 | total_time = end_time - start_time 158 | 159 | answer_char_len = len(answer.strip()) 160 | chars_per_second = round(answer_char_len / total_time, 2) 161 | 162 | answer_token_len = character_count_to_tokens(sequence=answer) 163 | tokens_per_second = round(answer_token_len / total_time, 2) 164 | 165 | print(f"Answer char len: {answer_char_len} | Chars per second: {chars_per_second} | Answer token len: {answer_token_len} | Tokens per second: {tokens_per_second}") 166 | 167 | qa_results.append({"question": question, 168 | "question_iter": i, 169 | "total_time": total_time, 170 | "answer": answer, 171 | "answer_char_len": answer_char_len, 172 | "chars_per_second": chars_per_second, 173 | "answer_token_len": answer_token_len, 174 | "tokens_per_second": tokens_per_second}) 175 | 176 | ### Save results to CSV ### 177 | GPU_NAME = False 178 | MODEL_NAME = path_to_gguf_model.replace('./', '') 179 | if GPU_NAME: 180 | csv_filename = f"{GPU_NAME}_{MODEL_NAME}_results.csv" 181 | else: 182 | csv_filename = f"{CPU_PROCESSOR}_{MODEL_NAME}_results.csv" 183 | 184 | # Make the target results directory if it doesn't exist (include the parents) 185 | target_results_dir = "results_llama2" 186 | results_path = Path("results") / target_results_dir 187 | results_path.mkdir(parents=True, exist_ok=True) 188 | csv_filepath = results_path / csv_filename 189 | 190 | # Turn dict into DataFrame 191 | # (pandas is already imported as pd at the top of this file) 192 | df = pd.DataFrame(qa_results) 193 | 194 | # Print results 195 | print(f"[INFO] Results on {CPU_PROCESSOR}:") 196 | total_questions = len(df) 197 | total_time_for_all_questions = round(df["total_time"].sum(), 2) 198 | total_tokens_generated = df["answer_token_len"].sum() 199 | total_chars_generated = df["answer_char_len"].sum() 200 | total_tokens_per_second = round(total_tokens_generated / total_time_for_all_questions, 2) 201 | total_chars_per_second = round(total_chars_generated / total_time_for_all_questions, 2) 202 | 203 | # Create a PrettyTable object 204 | table = PrettyTable() 205 | 206 | # Define the columns 207 | table.field_names = ["Metric", "Value"] 208 | 209 | # Add rows 210 | table.add_row(["Total questions", total_questions]) 211 | table.add_row(["Total time for all questions (s)", total_time_for_all_questions]) 212 | table.add_row(["Total tokens generated", total_tokens_generated]) 213 | table.add_row(["Total chars generated", total_chars_generated]) 214 | table.add_row(["Average tokens per second", total_tokens_per_second]) 215 | table.add_row(["Average chars per second", total_chars_per_second]) 216 | 217 | # Print the table 218 | print(table) 219 | 220 | # print(f"Total questions: {total_questions}") 221 | # print(f"Total time for all questions: {total_time_for_all_questions}") 222 | # print(f"Total tokens generated: {total_tokens_generated}") 223 | # print(f"Total chars generated: {total_chars_generated}") 224 | # print(f"Average tokens per second: {total_tokens_per_second}") 225 | # print(f"Average chars per second: {total_chars_per_second}") 226 | 227 | #
Save to CSV 228 | print(f"[INFO] Saving results to: {csv_filepath}") 229 | df.to_csv(csv_filepath, index=False) 230 | 231 | 232 | -------------------------------------------------------------------------------- /pytorch_test_nlp.py: -------------------------------------------------------------------------------- 1 | """ 2 | Script to test training a PyTorch NLP model on a dataset using HuggingFace's Trainer class. 3 | 4 | Source: https://huggingface.co/docs/transformers/tasks/sequence_classification (with modifications for a focus on MPS devices + tracking) 5 | """ 6 | 7 | # Standard library imports 8 | import argparse 9 | import random 10 | from pathlib import Path 11 | 12 | # Third-party imports 13 | import accelerate 14 | import datasets 15 | import evaluate 16 | import numpy as np 17 | import pandas as pd 18 | import torch 19 | from datasets import load_dataset 20 | from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer 21 | 22 | # Local application/library specific imports 23 | from helper_functions import get_nvidia_gpu_name 24 | 25 | 26 | if __name__ == "__main__": 27 | CPU_PROCESSOR = None 28 | 29 | ### Get CPU Processor name ### 30 | if not CPU_PROCESSOR: 31 | try: 32 | import cpuinfo 33 | CPU_PROCESSOR = cpuinfo.get_cpu_info().get("brand_raw").replace(" ", "_") 34 | print(f"[INFO] CPU Processor: {CPU_PROCESSOR}") 35 | except Exception as e: 36 | print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually") 37 | 38 | ### Setup device ### 39 | if torch.backends.mps.is_available(): 40 | device = torch.device("mps") 41 | print(f"[INFO] MPS device found, using device: {device}") 42 | elif torch.cuda.is_available(): 43 | device = torch.device("cuda") 44 | print(f"[INFO] CUDA device found, using device: {device}") 45 | else: 46 | device = torch.device("cpu") 47 | print(f"[INFO] MPS or CUDA device not found, using device: {device} (results will be much slower than using MPS or CUDA)") 48 | 49 | 50 | # Set random seed 51 | torch.manual_seed(42) 52 | 53 | # Create argument parser 54 | parser = argparse.ArgumentParser() 55 | parser.add_argument("--batch_sizes", default="16, 32, 64, 128, 256, 512", help="Delimited list input of batch sizes to test, defaults to '16, 32, 64, 128, 256, 512'", type=str) 56 | parser.add_argument("--epochs", type=int, default=3, help="Number of epochs to train for, default is 3") 57 | parser.add_argument("--quick_experiment", action="store_true", help="Whether to run a quick experiment, default is False") 58 | parser.add_argument("--use_fp16", action="store_true", help="Whether to use fp16 precision, default is False") 59 | parser.add_argument("--use_torch_compile", action="store_true", help="Whether to use torch compile, default is False") 60 | args = parser.parse_args() 61 | 62 | print(f"[INFO] args.quick_experiment: {args.quick_experiment}") 63 | 64 | # Turn args.quick_experiment into boolean 65 | # print(f"[INFO] args.quick_experiment set to '{args.quick_experiment}', converting to boolean...
") 66 | # args.quick_experiment = str(args.quick_experiment) == "True" 67 | # print(f"[INFO] args.quick_experiment set to {args.quick_experiment}") 68 | 69 | # Convert batch_sizes to list 70 | batch_size_args = [int(item.strip()) for item in args.batch_sizes.split(",")] 71 | 72 | ### Set constants ### 73 | GPU_NAME = get_nvidia_gpu_name() 74 | BACKEND = "pytorch" 75 | MODEL_NAME = "distilbert-base-uncased" 76 | 77 | # If quick experiment, set epochs and batch size to simple values 78 | if args.quick_experiment: 79 | print(f"[INFO] args.quick_experiment set to True, setting epochs to 1 and batch_sizes to 32...") 80 | EPOCHS = 1 81 | BATCH_SIZES = [32] 82 | else: 83 | EPOCHS = args.epochs 84 | BATCH_SIZES = batch_size_args 85 | 86 | INPUT_SHAPE = (1, 512) # this is the number of tokens per sample, 512 is the max for distilbert-base-uncased 87 | DATASET_NAME = "IMDB" 88 | 89 | ### Print constants ### 90 | print(f"[INFO] Training {MODEL_NAME} model on {DATASET_NAME} dataset for {EPOCHS} epochs with batch sizes: {BATCH_SIZES}...") 91 | 92 | # Print out whether using FP16 or torch.compile 93 | if args.use_fp16: 94 | print(f"[INFO] Using fp16 precision (only available on NVIDIA GPUs, not MPS).") 95 | 96 | if args.use_torch_compile: 97 | print(f"[INFO] Using torch compile (only availabe on NVIDIA GPUs, not MPS).") 98 | 99 | ### Load dataset ### 100 | print(f"[INFO] Loading IMDB dataset...") 101 | imdb = load_dataset("imdb", 102 | cache_dir="./data") 103 | rand_idx = random.randint(0, len(imdb['train'])) 104 | print(f"[INFO] IMDB dataset loaded. Example sample:\n{imdb['train'][rand_idx]}") 105 | 106 | 107 | ### Load tokenizer ### 108 | print(f"[INFO] Loading tokenizer...") 109 | tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") 110 | print(f"[INFO] Tokenizer loaded.") 111 | 112 | ### Preprocess dataset ### 113 | def preprocess_function(examples): 114 | return tokenizer(examples["text"], truncation=True) 115 | 116 | print(f"[INFO] Preprocessing dataset...") 117 | tokenized_imdb = imdb.map(preprocess_function, batched=True) 118 | 119 | print(f"[INFO] Preprocessing complete. Example sample:\n{tokenized_imdb['train'][rand_idx]}") 120 | 121 | ### Create evaluation metric ### 122 | accuracy = evaluate.load("accuracy") 123 | 124 | def compute_metrics(eval_pred): 125 | """ 126 | Computes the accuracy of the model. 
127 | """ 128 | predictions, labels = eval_pred 129 | predictions = np.argmax(predictions, axis=1) 130 | return accuracy.compute(predictions=predictions, references=labels) 131 | 132 | ### Create mapping from label to id and vice versa ### 133 | id2label = {0: "NEGATIVE", 1: "POSITIVE"} 134 | label2id = {"NEGATIVE": 0, "POSITIVE": 1} 135 | 136 | def count_parameters(model): 137 | """Helper function to count number of parameters, trainable, non-trainable and total.""" 138 | trainable_parameters = sum(p.numel() for p in model.parameters() if p.requires_grad) 139 | non_trainable_parameters = sum(p.numel() for p in model.parameters() if not p.requires_grad) 140 | total_parameters = trainable_parameters + non_trainable_parameters 141 | print(f"Trainable parameters: {trainable_parameters}") 142 | print(f"Non-trainable parameters: {non_trainable_parameters}") 143 | print(f"Total parameters: {total_parameters}") 144 | return trainable_parameters, non_trainable_parameters, total_parameters 145 | 146 | def save_results(batch_size_training_results): 147 | # Create CSV filename 148 | if GPU_NAME: 149 | csv_filename = f"{GPU_NAME.replace(' ', '_')}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[-1]}_{BACKEND}_results.csv" 150 | else: 151 | csv_filename = f"{CPU_PROCESSOR}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[-1]}_{BACKEND}_results.csv" 152 | 153 | # Make the target results directory if it doesn't exist (include the parents) 154 | target_results_dir = "results_pytorch_nlp" 155 | results_path = Path("results") / target_results_dir 156 | results_path.mkdir(parents=True, exist_ok=True) 157 | csv_filepath = results_path / csv_filename 158 | 159 | # Turn dict into DataFrame 160 | df = pd.DataFrame(batch_size_training_results) 161 | 162 | # Save to CSV 163 | print(f"[INFO] Saving results to: {csv_filepath}") 164 | df.to_csv(csv_filepath, index=False) 165 | 166 | 167 | """ 168 | Optional? Create Data Collator (not sure if this seems to produce a warning each time or if it's AutoTokenizer? 169 | Seems to work though... 
170 | """ 171 | from transformers import DataCollatorWithPadding 172 | 173 | data_collator = DataCollatorWithPadding(tokenizer=tokenizer) 174 | 175 | ### Create model training and timing code ### 176 | batch_size_training_results = [] 177 | for batch_size in BATCH_SIZES: 178 | 179 | print(f"[INFO] Training model with batch size: {batch_size}") 180 | 181 | try: 182 | print(f"[INFO] Instantiating DistilBert Model...") 183 | model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", 184 | num_labels=2, 185 | id2label=id2label, 186 | label2id=label2id) 187 | 188 | if args.quick_experiment: # only train last layer 189 | print(f"[INFO] Quick experiment: only training last layer/using 1000 rows of data...") 190 | 191 | # Freeze all base layers 192 | for param in model.parameters(): 193 | param.requires_grad = False 194 | 195 | for param in model.classifier.parameters(): 196 | param.requires_grad = True 197 | 198 | else: # if not quick experiment, train last transformer layer and classifier layers 199 | 200 | # Freeze all base layers 201 | for param in model.parameters(): 202 | param.requires_grad = False 203 | 204 | # Only train last transformer layer and classifier layers 205 | for param in model.distilbert.transformer.layer[-1].parameters(): 206 | param.requires_grad = True 207 | 208 | for param in model.pre_classifier.parameters(): 209 | param.requires_grad = True 210 | 211 | for param in model.classifier.parameters(): 212 | param.requires_grad = True 213 | 214 | count_parameters(model) 215 | 216 | training_args = TrainingArguments( 217 | output_dir="pytorch_hf_nlp_model", 218 | learning_rate=2e-5, 219 | per_device_train_batch_size=batch_size, 220 | # per_device_eval_batch_size=16, # Don't eval, just train for speed testing 221 | num_train_epochs=EPOCHS, 222 | weight_decay=0.01, 223 | evaluation_strategy="no", 224 | save_strategy="no", # don't save during training 225 | load_best_model_at_end=False, 226 | push_to_hub=False, 227 | use_cpu=False, # defaults to False (will always try to use CUDA GPU or Mac MPS device if available) 228 | fp16=args.use_fp16, # defaults to False (will use float32 precision by default), note: not available on MPS devices 229 | auto_find_batch_size=False, # Note: may be something to explore in the future to automatically find batch size with `accelerate` installed 230 | torch_compile=args.use_torch_compile, # defaults to False, compiling may speedup thanks to PyTorch 2.0, see: https://www.learnpytorch.io/pytorch_2_intro/ (best results on NVIDIA Ampere GPUs and above, not MPS) 231 | ) 232 | 233 | trainer = Trainer( 234 | model=model, 235 | args=training_args, 236 | train_dataset=tokenized_imdb["train"] if not args.quick_experiment else tokenized_imdb["train"].select(range(1000)), 237 | # eval_dataset=tokenized_imdb["test"], 238 | tokenizer=tokenizer, 239 | data_collator=data_collator, 240 | compute_metrics=compute_metrics, 241 | ) 242 | 243 | trainer_output = trainer.train() 244 | trainer_metrics_dict = trainer_output.metrics 245 | trainer_metrics_dict["batch_size"] = batch_size 246 | 247 | print(f"[INFO] Trainer metrics for batch size {batch_size}:\n{trainer_metrics_dict}") 248 | 249 | batch_size_training_results.append(trainer_metrics_dict) 250 | save_results(batch_size_training_results) 251 | 252 | # Delete model and trainer instance, clear cache 253 | del model 254 | del trainer 255 | if torch.cuda.is_available(): 256 | torch.cuda.empty_cache() 257 | 258 | if torch.backends.mps.is_available(): 259 | torch.mps.empty_cache() 260 | 261 | 
except Exception as e: 262 | print(f"[INFO] Error: {e}") 263 | print(f"[INFO] Failed training with batch size {batch_size} for {EPOCHS} epochs...\n\n") 264 | 265 | batch_size_training_results.append({'train_runtime': "FAILED", 266 | 'train_samples_per_second': "FAILED", 267 | 'train_steps_per_second': "FAILED", 268 | 'total_flos': "FAILED", 269 | 'train_loss': "FAILED", 270 | 'epoch': "FAILED", 271 | 'batch_size': batch_size}) 272 | save_results(batch_size_training_results) 273 | 274 | # Delete model and trainer instance (if they were created before the failure), clear cache 275 | if "model" in locals(): del model 276 | if "trainer" in locals(): del trainer 277 | 278 | if torch.cuda.is_available(): 279 | torch.cuda.empty_cache() 280 | 281 | if torch.backends.mps.is_available(): 282 | torch.mps.empty_cache() 283 | break 284 | 285 | print("[INFO] Finished training with all batch sizes.") 286 | 287 | print(f"[INFO] Results:\n{batch_size_training_results}") -------------------------------------------------------------------------------- /pytorch_test_computer_vision_cifar100.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | from pathlib import Path 4 | from timeit import default_timer as timer 5 | 6 | import torch 7 | import torchvision 8 | import torchvision.transforms.v2 as transforms # use v2 transforms for faster augmentations 9 | import pandas as pd 10 | 11 | from torch import nn 12 | from torch.utils.data import DataLoader 13 | from torchvision import datasets 14 | from tqdm.auto import tqdm 15 | 16 | from helper_functions import get_nvidia_gpu_name 17 | 18 | if __name__ == "__main__": 19 | CPU_PROCESSOR = None 20 | 21 | ### Get CPU Processor name ### 22 | if not CPU_PROCESSOR: 23 | try: 24 | import cpuinfo 25 | CPU_PROCESSOR = cpuinfo.get_cpu_info().get("brand_raw").replace(" ", "_") 26 | print(f"[INFO] CPU Processor: {CPU_PROCESSOR}") 27 | except Exception as e: 28 | print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually") 29 | 30 | ### Setup device ### 31 | if torch.backends.mps.is_available(): 32 | device = torch.device("mps") 33 | print(f"[INFO] MPS device found, using device: {device}") 34 | elif torch.cuda.is_available(): 35 | device = torch.device("cuda") 36 | print(f"[INFO] CUDA device found, using device: {device}") 37 | else: 38 | device = torch.device("cpu") 39 | print(f"[INFO] MPS or CUDA device not found, using device: {device} (results will be much slower than using MPS or CUDA)") 40 | 41 | # Prevent torch from erroring with too many files open (happens on M3) 42 | # See: https://github.com/pytorch/pytorch/issues/11201, https://github.com/CVMI-Lab/PLA/issues/20 43 | torch.multiprocessing.set_sharing_strategy('file_system') 44 | 45 | # Set random seed 46 | torch.manual_seed(42) 47 | 48 | # Create argument parser 49 | parser = argparse.ArgumentParser() 50 | parser.add_argument("--batch_sizes", default="16, 32, 64, 128, 256, 512, 1024", help="Delimited list input of batch sizes to test, defaults to '16, 32, 64, 128, 256, 512, 1024'", type=str) 51 | parser.add_argument("--epochs", type=int, default=5, help="Number of epochs to train for, default is 5") 52 | args = parser.parse_args() 53 | 54 | # Convert batch_sizes to list 55 | batch_size_args = [int(item.strip()) for item in args.batch_sizes.split(",")] 56 | 57 | ### Set constants ### 58 | GPU_NAME = get_nvidia_gpu_name() 59 | BACKEND = "pytorch" 60 | MODEL_NAME = "resnet50" 61 | IMAGE_SIZE = 32 62 | INPUT_SHAPE = (3, IMAGE_SIZE, IMAGE_SIZE) 63 | NUM_WORKERS =
os.cpu_count() 64 | EPOCHS = args.epochs 65 | BATCH_SIZES = batch_size_args 66 | DATASET_NAME = "CIFAR100" 67 | 68 | print(f"[INFO] Testing model: {MODEL_NAME} on {DATASET_NAME} dataset with input shape {INPUT_SHAPE} for {EPOCHS} epochs across batch sizes: {BATCH_SIZES}") 69 | 70 | 71 | ### Prepare Data ### 72 | simple_transform = transforms.Compose([ 73 | transforms.Resize(size=IMAGE_SIZE), 74 | transforms.ToImage(), 75 | transforms.ToDtype(torch.float32, scale=True) 76 | ]) 77 | 78 | # Get Datasets (CIFAR100 to match DATASET_NAME and the model's 100 classes) 79 | train_data = datasets.CIFAR100(root="data", 80 | train=True, 81 | transform=simple_transform, 82 | download=True) 83 | 84 | test_data = datasets.CIFAR100(root="data", 85 | train=False, 86 | transform=simple_transform, 87 | download=True) 88 | 89 | print(f"[INFO] Number of training samples: {len(train_data)}, number of testing samples: {len(test_data)}") 90 | 91 | # Create DataLoaders 92 | def create_dataloaders(batch_size, num_workers=NUM_WORKERS): 93 | train_dataloader = DataLoader(train_data, 94 | batch_size=batch_size, 95 | shuffle=True, 96 | num_workers=num_workers, 97 | pin_memory=False) # note: if you pin memory, you may get "too many workers" errors when recreating DataLoaders, see: https://github.com/Lightning-AI/pytorch-lightning/issues/18487#issuecomment-1740244601 98 | 99 | test_dataloader = DataLoader(test_data, 100 | batch_size=batch_size, 101 | shuffle=False, 102 | num_workers=num_workers, 103 | pin_memory=False) 104 | 105 | return train_dataloader, test_dataloader 106 | 107 | ### Train Step ### 108 | def train_step(model: torch.nn.Module, 109 | dataloader: torch.utils.data.DataLoader, 110 | loss_fn: torch.nn.Module, 111 | optimizer: torch.optim.Optimizer, 112 | device: torch.device): 113 | # Put model in train mode 114 | model.train() 115 | 116 | # Setup train loss and train accuracy values 117 | train_loss, train_acc = 0, 0 118 | 119 | # Loop through data loader data batches 120 | for batch, (X, y) in tqdm(enumerate(dataloader), total=len(dataloader)): 121 | # Send data to target device 122 | X, y = X.to(device, non_blocking=True), y.to(device, non_blocking=True) 123 | # X, y = X.to(device, non_blocking=True, memory_format=torch.channels_last), y.to(device, non_blocking=True) 124 | # X, y = X.to(device), y.to(device) 125 | 126 | # 1. Forward pass 127 | y_pred = model(X) 128 | 129 | # 2. Calculate and accumulate loss 130 | loss = loss_fn(y_pred, y) 131 | train_loss += loss.item() 132 | 133 | # 3. Optimizer zero grad 134 | optimizer.zero_grad() 135 | 136 | # 4. Loss backward 137 | loss.backward() 138 | 139 | # 5.
Optimizer step 140 | optimizer.step() 141 | 142 | # Calculate and accumulate accuracy metric across all batches 143 | y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1) 144 | train_acc += (y_pred_class == y).sum().item()/len(y_pred) 145 | 146 | # Adjust metrics to get average loss and accuracy per batch 147 | train_loss = train_loss / len(dataloader) 148 | train_acc = train_acc / len(dataloader) 149 | return train_loss, train_acc 150 | 151 | ### Test Step ### 152 | def test_step(model: torch.nn.Module, 153 | dataloader: torch.utils.data.DataLoader, 154 | loss_fn: torch.nn.Module, 155 | device: torch.device): 156 | # Put model in eval mode 157 | model.eval() 158 | 159 | # Setup test loss and test accuracy values 160 | test_loss, test_acc = 0, 0 161 | 162 | # Turn on inference context manager 163 | with torch.inference_mode(): 164 | # Loop through DataLoader batches 165 | for batch, (X, y) in tqdm(enumerate(dataloader), total=len(dataloader)): 166 | # Send data to target device 167 | X, y = X.to(device, non_blocking=True), y.to(device, non_blocking=True) 168 | # X, y = X.to(device, non_blocking=True, memory_format=torch.channels_last), y.to(device, non_blocking=True) 169 | # X, y = X.to(device), y.to(device) 170 | 171 | # 1. Forward pass 172 | test_pred_logits = model(X) 173 | 174 | # 2. Calculate and accumulate loss 175 | loss = loss_fn(test_pred_logits, y) 176 | test_loss += loss.item() 177 | 178 | # Calculate and accumulate accuracy 179 | test_pred_labels = test_pred_logits.argmax(dim=1) 180 | test_acc += ((test_pred_labels == y).sum().item()/len(test_pred_labels)) 181 | 182 | # Adjust metrics to get average loss and accuracy per batch 183 | test_loss = test_loss / len(dataloader) 184 | test_acc = test_acc / len(dataloader) 185 | return test_loss, test_acc 186 | 187 | # 1. 
Take in various parameters required for training and test steps 188 | def train_and_test_model(model: torch.nn.Module, 189 | train_dataloader: torch.utils.data.DataLoader, 190 | test_dataloader: torch.utils.data.DataLoader, 191 | optimizer: torch.optim.Optimizer, 192 | loss_fn: torch.nn.Module, 193 | epochs: int, 194 | device: torch.device, 195 | eval: bool=False): 196 | 197 | print(f"[INFO] Training model {model.__class__.__name__} on device '{device}' for {epochs} epochs...") 198 | 199 | results = {"train_loss": [], "train_acc": [], "test_loss": [], "test_acc": []} 200 | 201 | # Loop through training and testing steps for a number of epochs 202 | for epoch in tqdm(range(epochs)): 203 | # Do eval before training (to see if there's any errors) 204 | if eval: 205 | test_loss, test_acc = test_step(model=model, 206 | dataloader=test_dataloader, 207 | loss_fn=loss_fn, 208 | device=device) 209 | 210 | train_loss, train_acc = train_step(model=model, 211 | dataloader=train_dataloader, 212 | loss_fn=loss_fn, 213 | optimizer=optimizer, 214 | device=device) 215 | 216 | 217 | # Print out what's happening 218 | print( 219 | f"Epoch: {epoch+1} | " 220 | f"train_loss: {train_loss:.4f} | " 221 | f"train_acc: {train_acc:.4f} | " 222 | ) 223 | 224 | if eval: 225 | print( 226 | f"Epoch: {epoch+1} | " 227 | f"test_loss: {test_loss:.4f} | " 228 | f"test_acc: {test_acc:.4f} | " 229 | ) 230 | 231 | # Save results to dictionary 232 | results["train_loss"].append(train_loss) 233 | results["train_acc"].append(train_acc) 234 | if eval: 235 | results["test_loss"].append(test_loss) 236 | results["test_acc"].append(test_acc) 237 | 238 | return results 239 | 240 | def save_results(batch_size_training_results, target_dir="results_pytorch_cv"): 241 | # Create CSV filename 242 | if GPU_NAME: 243 | csv_filename = f"{GPU_NAME.replace(' ', '_')}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[-1]}_{BACKEND}_results.csv" 244 | else: 245 | csv_filename = f"{CPU_PROCESSOR}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[-1]}_{BACKEND}_results.csv" 246 | 247 | # Make the target results directory if it doesn't exist (include the parents) 248 | target_results_dir = target_dir 249 | results_path = Path("results") / target_results_dir 250 | results_path.mkdir(parents=True, exist_ok=True) 251 | csv_filepath = results_path / csv_filename 252 | 253 | # Turn dict into DataFrame 254 | df = pd.DataFrame(batch_size_training_results) 255 | 256 | # Save to CSV 257 | print(f"[INFO] Saving results to: {csv_filepath}") 258 | df.to_csv(csv_filepath, index=False) 259 | 260 | def train_and_time(batch_sizes=BATCH_SIZES, 261 | epochs=EPOCHS, 262 | device=device): 263 | 264 | batch_size_training_results = [] 265 | 266 | for batch_size in batch_sizes: 267 | print(f"[INFO] Training with batch size {batch_size} for {epochs} epochs...") 268 | # Create an instance of resnet50 269 | model = torchvision.models.resnet50(num_classes=100).to(device) 270 | # model = torch.compile(model) # potential way to speed up model 271 | 272 | # Setup loss function and optimizer 273 | loss_fn = nn.CrossEntropyLoss() 274 | optimizer = torch.optim.Adam(params=model.parameters(), lr=0.001) 275 | 276 | # Create DataLoaders 277 | train_dataloader, test_dataloader = create_dataloaders(batch_size=batch_size) 278 | 279 | try: 280 | # Start the timer 281 | start_time = timer() 282 | 283 | # Train model 284 | model_results = train_and_test_model(model=model, 285 | train_dataloader=train_dataloader, 286 | test_dataloader=test_dataloader, 287 | optimizer=optimizer, 288 | loss_fn=loss_fn, 289 | 
epochs=epochs, 290 | device=device, 291 | eval=False) # don't eval, just test training time 292 | 293 | # End the timer 294 | end_time = timer() 295 | 296 | total_training_time = end_time - start_time 297 | avg_time_per_epoch = total_training_time / epochs 298 | 299 | batch_size_training_results.append({"batch_size": batch_size, 300 | "avg_time_per_epoch": avg_time_per_epoch}) 301 | save_results(batch_size_training_results) 302 | print(f"[INFO] Finished training with batch size {batch_size} for {epochs} epochs, total time: {round(total_training_time, 3)} seconds, avg time per epoch: {round(avg_time_per_epoch, 3)} seconds\n\n") 303 | 304 | except Exception as e: 305 | print(f"[INFO] Error: {e}") 306 | print(f"[INFO] Failed training with batch size {batch_size} for {epochs} epochs...\n\n") 307 | batch_size_training_results.append({"batch_size": batch_size, 308 | "avg_time_per_epoch": "FAILED"}) 309 | save_results(batch_size_training_results) 310 | break 311 | 312 | return batch_size_training_results 313 | 314 | ### Train and time model ### 315 | batch_size_training_results = train_and_time(batch_sizes=BATCH_SIZES, 316 | epochs=EPOCHS, 317 | device=device) 318 | 319 | print("[INFO] Finished training with all batch sizes.") 320 | 321 | print(f"[INFO] Results:\n{batch_size_training_results}") -------------------------------------------------------------------------------- /pytorch_test_computer_vision_food101.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | from pathlib import Path 4 | from timeit import default_timer as timer 5 | 6 | import torch 7 | import torchvision 8 | import torchvision.transforms.v2 as transforms # use v2 transforms for faster augmentations 9 | import pandas as pd 10 | 11 | from torch import nn 12 | from torch.utils.data import DataLoader 13 | from torchvision import datasets 14 | from tqdm.auto import tqdm 15 | 16 | ### Note: Food101 data from Torchvision takes far too long to load, let's use Hugging Face Datasets instead ### 17 | from datasets import load_dataset # requires datasets package, !pip install datasets 18 | 19 | from helper_functions import get_nvidia_gpu_name 20 | 21 | # Create argument parser 22 | parser = argparse.ArgumentParser() 23 | parser.add_argument("--batch_sizes", default="32, 64, 128", help="Delimited list input of batch sizes to test, defaults to '32, 64, 128'", type=str) 24 | parser.add_argument("--epochs", type=int, default=3, help="Number of epochs to train for, default is 3") 25 | parser.add_argument("--num_workers", type=int, default=4, help="Number of workers to use for DataLoaders, default is 4, may be better to increase with available CPU cores") 26 | args = parser.parse_args() 27 | 28 | 29 | # Prevent torch from erroring with too many files open (happens on M3) 30 | # See: https://github.com/pytorch/pytorch/issues/11201, https://github.com/CVMI-Lab/PLA/issues/20 31 | torch.multiprocessing.set_sharing_strategy('file_system') 32 | 33 | # Set random seed 34 | torch.manual_seed(42) 35 | 36 | # Convert batch_sizes to list 37 | batch_size_args = [int(item.strip()) for item in args.batch_sizes.split(",")] 38 | 39 | ### Set constants ### 40 | GPU_NAME = get_nvidia_gpu_name() 41 | BACKEND = "pytorch" 42 | MODEL_NAME = "resnet50" 43 | IMAGE_SIZE = 224 44 | INPUT_SHAPE = (3, IMAGE_SIZE, IMAGE_SIZE) 45 | NUM_WORKERS = args.num_workers if args.num_workers < os.cpu_count() else 2 # number of workers to use for DataLoaders 46 | print(f"[INFO] Using number of workers: 
{NUM_WORKERS}") 47 | EPOCHS = args.epochs 48 | BATCH_SIZES = batch_size_args 49 | DATASET_NAME = "FOOD101" 50 | NUM_CLASSES = 101 51 | 52 | # Create DataLoaders 53 | def create_dataloaders(batch_size, num_workers=NUM_WORKERS): 54 | train_dataloader = DataLoader(train_dataset, 55 | batch_size=batch_size, 56 | shuffle=True, 57 | num_workers=num_workers, 58 | pin_memory=False) # note: if you pin memory, you may get "too many workers" errors when recreating DataLoaders, see: https://github.com/Lightning-AI/pytorch-lightning/issues/18487#issuecomment-1740244601 59 | 60 | test_dataloader = DataLoader(test_dataset, 61 | batch_size=batch_size, 62 | shuffle=False, 63 | num_workers=num_workers, 64 | pin_memory=False) 65 | 66 | return train_dataloader, test_dataloader 67 | 68 | ### Train Step ### 69 | def train_step(model: torch.nn.Module, 70 | dataloader: torch.utils.data.DataLoader, 71 | loss_fn: torch.nn.Module, 72 | optimizer: torch.optim.Optimizer, 73 | device: torch.device): 74 | # Put model in train mode 75 | model.train() 76 | 77 | # Setup train loss and train accuracy values 78 | train_loss, train_acc = 0, 0 79 | 80 | # Loop through data loader data batches 81 | for batch in tqdm(dataloader, total=len(dataloader)): 82 | 83 | # Get batch data 84 | X = batch["image"] 85 | y = batch["label"] 86 | 87 | # Send data to target device 88 | X, y = X.to(device, non_blocking=True), y.to(device, non_blocking=True) 89 | 90 | # 1. Forward pass 91 | y_pred = model(X) 92 | 93 | # 2. Calculate and accumulate loss 94 | loss = loss_fn(y_pred, y) 95 | train_loss += loss.item() 96 | 97 | # 3. Optimizer zero grad 98 | optimizer.zero_grad() 99 | 100 | # 4. Loss backward 101 | loss.backward() 102 | 103 | # 5. Optimizer step 104 | optimizer.step() 105 | 106 | # Calculate and accumulate accuracy metric across all batches 107 | y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1) 108 | train_acc += (y_pred_class == y).sum().item()/len(y_pred) 109 | 110 | # Adjust metrics to get average loss and accuracy per batch 111 | train_loss = train_loss / len(dataloader) 112 | train_acc = train_acc / len(dataloader) 113 | return train_loss, train_acc 114 | 115 | ### Test Step ### 116 | def test_step(model: torch.nn.Module, 117 | dataloader: torch.utils.data.DataLoader, 118 | loss_fn: torch.nn.Module, 119 | device: torch.device): 120 | # Put model in eval mode 121 | model.eval() 122 | 123 | # Setup test loss and test accuracy values 124 | test_loss, test_acc = 0, 0 125 | 126 | # Turn on inference context manager 127 | with torch.inference_mode(): 128 | # Loop through DataLoader batches 129 | for batch in tqdm(dataloader, total=len(dataloader)): 130 | 131 | # Get batch data 132 | X = batch["image"] 133 | y = batch["label"] 134 | 135 | # Send data to target device 136 | X, y = X.to(device, non_blocking=True), y.to(device, non_blocking=True) 137 | # X, y = X.to(device, non_blocking=True, memory_format=torch.channels_last), y.to(device, non_blocking=True) 138 | # X, y = X.to(device), y.to(device) 139 | 140 | # 1. Forward pass 141 | test_pred_logits = model(X) 142 | 143 | # 2. 
Calculate and accumulate loss 144 | loss = loss_fn(test_pred_logits, y) 145 | test_loss += loss.item() 146 | 147 | # Calculate and accumulate accuracy 148 | test_pred_labels = test_pred_logits.argmax(dim=1) 149 | test_acc += ((test_pred_labels == y).sum().item()/len(test_pred_labels)) 150 | 151 | # Adjust metrics to get average loss and accuracy per batch 152 | test_loss = test_loss / len(dataloader) 153 | test_acc = test_acc / len(dataloader) 154 | return test_loss, test_acc 155 | 156 | # 1. Take in various parameters required for training and test steps 157 | def train_and_test_model(model: torch.nn.Module, 158 | train_dataloader: torch.utils.data.DataLoader, 159 | test_dataloader: torch.utils.data.DataLoader, 160 | optimizer: torch.optim.Optimizer, 161 | loss_fn: torch.nn.Module, 162 | epochs: int, 163 | device: torch.device, 164 | eval: bool=False): 165 | 166 | print(f"[INFO] Training model {model.__class__.__name__} on device '{device}' for {epochs} epochs...") 167 | 168 | results = {"train_loss": [], "train_acc": [], "test_loss": [], "test_acc": []} 169 | 170 | # Loop through training and testing steps for a number of epochs 171 | for epoch in range(epochs): 172 | # Do eval before training (to see if there's any errors) 173 | if eval: 174 | test_loss, test_acc = test_step(model=model, 175 | dataloader=test_dataloader, 176 | loss_fn=loss_fn, 177 | device=device) 178 | 179 | train_loss, train_acc = train_step(model=model, 180 | dataloader=train_dataloader, 181 | loss_fn=loss_fn, 182 | optimizer=optimizer, 183 | device=device) 184 | 185 | 186 | # Print out what's happening 187 | print( 188 | f"Epoch: {epoch+1} | " 189 | f"train_loss: {train_loss:.4f} | " 190 | f"train_acc: {train_acc:.4f} | " 191 | ) 192 | 193 | if eval: 194 | print( 195 | f"Epoch: {epoch+1} | " 196 | f"test_loss: {test_loss:.4f} | " 197 | f"test_acc: {test_acc:.4f} | " 198 | ) 199 | 200 | # Save results to dictionary 201 | results["train_loss"].append(train_loss) 202 | results["train_acc"].append(train_acc) 203 | if eval: 204 | results["test_loss"].append(test_loss) 205 | results["test_acc"].append(test_acc) 206 | 207 | return results 208 | 209 | def train_and_time(device, 210 | batch_sizes=BATCH_SIZES, 211 | epochs=EPOCHS): 212 | 213 | batch_size_training_results = [] 214 | 215 | for batch_size in batch_sizes: 216 | print(f"[INFO] Training with batch size {batch_size} for {epochs} epochs...") 217 | # Create an instance of resnet50 218 | model = torchvision.models.resnet50(num_classes=NUM_CLASSES).to(device) 219 | # model = torch.compile(model) # potential way to speed up model 220 | 221 | # Setup loss function and optimizer 222 | loss_fn = nn.CrossEntropyLoss() 223 | optimizer = torch.optim.Adam(params=model.parameters(), lr=0.001) 224 | 225 | # Create DataLoaders 226 | train_dataloader, test_dataloader = create_dataloaders(batch_size=batch_size) 227 | 228 | try: 229 | # Start the timer 230 | start_time = timer() 231 | 232 | # Train model 233 | model_results = train_and_test_model(model=model, 234 | train_dataloader=train_dataloader, 235 | test_dataloader=test_dataloader, 236 | optimizer=optimizer, 237 | loss_fn=loss_fn, 238 | epochs=epochs, 239 | device=device, 240 | eval=False) # don't eval, just test training time 241 | 242 | # End the timer 243 | end_time = timer() 244 | 245 | total_training_time = end_time - start_time 246 | avg_time_per_epoch = total_training_time / epochs 247 | 248 | batch_size_training_results.append({"batch_size": batch_size, 249 | "avg_time_per_epoch": avg_time_per_epoch}) 250 | 
save_results(batch_size_training_results) 251 | print(f"[INFO] Finished training with batch size {batch_size} for {epochs} epochs, total time: {round(total_training_time, 3)} seconds, avg time per epoch: {round(avg_time_per_epoch, 3)} seconds\n\n") 252 | 253 | except Exception as e: 254 | print(f"[INFO] Error: {e}") 255 | print(f"[INFO] Failed training with batch size {batch_size} for {epochs} epochs...\n\n") 256 | batch_size_training_results.append({"batch_size": batch_size, 257 | "avg_time_per_epoch": "FAILED"}) 258 | save_results(batch_size_training_results) 259 | break 260 | 261 | return batch_size_training_results 262 | 263 | def image_transforms(examples): 264 | simple_transform = transforms.Compose([ 265 | transforms.ToImage(), 266 | transforms.Resize(size=(IMAGE_SIZE, IMAGE_SIZE), antialias=True), 267 | transforms.ToDtype(torch.float32, scale=True), 268 | transforms.Normalize(mean=[0.485, 0.456, 0.406], 269 | std=[0.229, 0.224, 0.225]) 270 | ]) 271 | examples["image"] = [simple_transform(image) for image in examples["image"]] 272 | return examples 273 | 274 | def save_results(batch_size_training_results, target_dir="results_pytorch_cv"): 275 | # Create CSV filename 276 | if GPU_NAME: 277 | csv_filename = f"{GPU_NAME.replace(' ', '_')}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[-1]}_{BACKEND}_results.csv" 278 | else: 279 | csv_filename = f"{CPU_PROCESSOR}_{DATASET_NAME}_{MODEL_NAME}_{INPUT_SHAPE[-1]}_{BACKEND}_results.csv" 280 | 281 | # Make the target results directory if it doesn't exist (include the parents) 282 | target_results_dir = target_dir 283 | results_path = Path("results") / target_results_dir 284 | results_path.mkdir(parents=True, exist_ok=True) 285 | csv_filepath = results_path / csv_filename 286 | 287 | # Turn dict into DataFrame 288 | df = pd.DataFrame(batch_size_training_results) 289 | 290 | # Save to CSV 291 | print(f"[INFO] Saving results to: {csv_filepath}") 292 | df.to_csv(csv_filepath, index=False) 293 | if __name__ == "__main__": 294 | print(f"[INFO] Testing model: {MODEL_NAME} on {DATASET_NAME} dataset with input shape {INPUT_SHAPE} for {EPOCHS} epochs across batch sizes: {BATCH_SIZES}") 295 | 296 | ### Get CPU Processor name ### 297 | CPU_PROCESSOR = None 298 | if not CPU_PROCESSOR: 299 | try: 300 | import cpuinfo 301 | CPU_PROCESSOR = cpuinfo.get_cpu_info().get("brand_raw").replace(" ", "_") 302 | print(f"[INFO] CPU Processor: {CPU_PROCESSOR}") 303 | except Exception as e: 304 | print(f"Error: {e}, may have failed to get CPU_PROCESSOR name from cpuinfo, please install cpuinfo or set CPU_PROCESSOR manually") 305 | 306 | ### Prepare Data Functions ### 307 | print(f"[INFO] Preparing {DATASET_NAME} dataset, note: this dataset requires 5GB+ of storage, so may take a while to download... (remember to delete ./data afterwards if you want to free up space)") 308 | dataset = load_dataset("food101", 309 | cache_dir="./data") # note: cache defaults to "~/.cache/huggingface/datasets/..." 
if not specified 310 | train_dataset = dataset["train"].shuffle(seed=42) 311 | test_dataset = dataset["validation"] 312 | 313 | train_dataset.set_format(type="torch", columns=["image", "label"]) 314 | test_dataset.set_format(type="torch", columns=["image", "label"]) 315 | 316 | train_dataset.set_transform(image_transforms) 317 | test_dataset.set_transform(image_transforms) 318 | 319 | print(f"[INFO] Number of training samples: {len(train_dataset)}, number of testing samples: {len(test_dataset)}") 320 | 321 | ### Setup device ### 322 | if torch.backends.mps.is_available(): 323 | device = torch.device("mps") 324 | print(f"[INFO] MPS device found, using device: {device}") 325 | elif torch.cuda.is_available(): 326 | device = torch.device("cuda") 327 | print(f"[INFO] CUDA device found, using device: {device}") 328 | else: 329 | device = torch.device("cpu") 330 | print(f"[INFO] MPS or CUDA device not found, using device: {device} (results will be much slower than using MPS or CUDA)") 331 | 332 | ### Train and time model ### 333 | batch_size_training_results = train_and_time(batch_sizes=BATCH_SIZES, 334 | epochs=EPOCHS, 335 | device=device) 336 | 337 | print("[INFO] Finished training with all batch sizes.") 338 | 339 | print(f"[INFO] Results:\n{batch_size_training_results}") -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Mac Machine Learning Speed Test 2 | 3 | [Blog post](https://www.mrdbourke.com/apple-m3-machine-learning-test/) | [Video walkthrough](https://youtu.be/cpYqED1q6ro) 4 | 5 | A collection of simple scripts focused on benchmarking the speed of various machine learning models on Apple Silicon Macs (M1, M2, M3). 6 | 7 | Scripts should also ideally work with CUDA (for benchmarking on other machines/Google Colab). 8 | 9 | > **Note:** Scripts are not designed to achieve state-of-the-art results (e.g. accuracy); they are designed to be as simple as possible to run out of the box. Most are examples straight from PyTorch/TensorFlow docs I've tweaked for specific focus on MPS (Metal Performance Shaders - Apple's GPU acceleration framework) devices + simple logging of timing. They are scrappy and likely not the best way to do things, but they are simple and easy to run. 10 | 11 | ## Experiment Overview 12 | 13 | The focus of these experiments is to get a quick benchmark across various ML problems and see how the Apple Silicon Macs perform. 14 | 15 | The focus is on hardware-to-hardware comparison rather than framework-to-framework comparison, and on measuring speed rather than accuracy. 16 | 17 | This repo contains code/results for the following experiments: 18 | 19 | 1. PyTorch Computer Vision (CIFAR100 image classification) 20 | 2. PyTorch Computer Vision (Food101 image classification) 21 | 3. PyTorch Natural Language Processing (NLP text classification) 22 | 4. TensorFlow Computer Vision (CIFAR100 image classification) 23 | 5. TensorFlow Computer Vision (Food101 image classification) 24 | 6. TensorFlow Natural Language Processing (NLP text classification) 25 | 7. LlamaCPP LLM test (text generation) 26 | 8. Geekbench ML (inference-only benchmarks) 27 | 28 | While the focus is on Apple Silicon Macs, I've included my own deep learning PC (NVIDIA TITAN RTX) as well as a Google Colab free tier instance for comparison. 29 | 30 | ## Getting Setup 31 | 32 | If you have a brand new machine, you'll need to set up a few things before running the experiments.
33 | 34 | The following steps will get you ready to go for all experiments (and many future machine learning experiments). 35 | 36 | However, if you've already got `conda`, feel free to skip to the next section. 37 | 38 | ### Base environment setup 39 | 40 | 1. Install homebrew (or run `xcode-select --install` in terminal and skip to next step) 41 | 42 | Go to https://brew.sh/ and follow the main instructions on the front page. 43 | 44 | Run the commands on the homebrew webpage in the terminal and follow the instructions when they appear. 45 | 46 | 2. Install miniforge to get conda: https://github.com/conda-forge/miniforge 47 | 48 | ``` 49 | brew install miniforge 50 | ``` 51 | 52 | or 53 | 54 | Download Miniforge3 for macOS ARM64 from: https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh 55 | 56 | 3. Run the following commands in terminal with Miniforge3 downloaded into the `~/Downloads` folder: 57 | 58 | ``` 59 | chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh 60 | sh ~/Downloads/Miniforge3-MacOSX-arm64.sh 61 | ``` 62 | 63 | 4. Follow the steps, for example, answer "yes", "yes", "ok" etc and then initialize conda to see if it works. 64 | 65 | ``` 66 | source ~/miniforge3/bin/activate 67 | ``` 68 | 69 | 5. **Important:** Restart terminal and check conda is working. 70 | 71 | If conda is working, you should have a `(base)` at the start of your terminal prompt. 72 | 73 | For example: `(base) daniel@Daniels-MacBook-Pro-3 ~ %` 74 | 75 | ### Setting up for machine learning tests 76 | 77 | 1. Clone this repo. 78 | 79 | ``` 80 | git clone https://github.com/mrdbourke/mac-ml-speed-test.git 81 | ``` 82 | 83 | 2. Change into the repo directory. 84 | 85 | ``` 86 | cd mac-ml-speed-test 87 | ``` 88 | 89 | 3. Create conda environment. 90 | 91 | ```python 92 | conda create --prefix ./env python=3.10 93 | ``` 94 | 95 | **Note:** You could also use `conda create --name some-env-name python=3.10` but I prefer `--prefix` as it's more explicit. 96 | 97 | 4. Check conda environments. 98 | 99 | ``` 100 | conda env list 101 | ``` 102 | 103 | 5. Activate newly created conda environment. 104 | 105 | ``` 106 | conda activate ./env 107 | ``` 108 | 109 | 6. Install necessities/helper packages. 110 | 111 | **Note:** This may have a few extra packages that aren't 100% needed for speed tests but help to have (e.g. JupyterLab, PrettyTable). 112 | 113 | ```python 114 | conda install -c conda-forge pip pandas numpy matplotlib scikit-learn jupyterlab langchain prettytable py-cpuinfo tqdm 115 | ``` 116 | 117 | ## Install and Test PyTorch/Hugging Face Transformers 118 | 119 | * [Apple guide to installing PyTorch](https://developer.apple.com/metal/pytorch/). 120 | * [PyTorch guide to installing PyTorch](https://pytorch.org/get-started/locally/). 121 | * Hugging Face Guides to Install [Transformers](https://huggingface.co/docs/transformers/installation), [Datasets](https://huggingface.co/docs/datasets/installation), [Evaluate](https://huggingface.co/docs/evaluate/installation), [Accelerate](https://huggingface.co/docs/accelerate/basic_tutorials/install). 122 | 123 | ```python 124 | conda install pytorch::pytorch torchvision -c pytorch 125 | ``` 126 | 127 | > **Note:** MPS (Metal Performance Shaders, aka using the GPU on Apple Silicon) comes standard with PyTorch on macOS, you don't need to install anything extra. MPS can be accessed via [`torch.mps`](https://pytorch.org/docs/stable/mps.html), see more [notes in the PyTorch documentation](https://pytorch.org/docs/stable/notes/mps.html). 
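Once installed, you can check that PyTorch can see the Apple Silicon GPU with a couple of one-liners (a quick sanity check, not part of the benchmarks):

```python
import torch
print(torch.__version__)
print(torch.backends.mps.is_available())  # True if the MPS device can be used
print(torch.backends.mps.is_built())      # True if this PyTorch build includes MPS support
```

If both print `True`, the scripts below will automatically select the `mps` device.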
128 | 129 | ### Test PyTorch Computer Vision (CIFAR100) 130 | 131 | Experiment details: 132 | 133 | | **Model** | **Dataset** | **Image Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** | 134 | | --- | --- | --- | --- | --- | --- | --- | 135 | | [ResNet50](https://pytorch.org/vision/main/models/generated/torchvision.models.resnet50.html) | [CIFAR100](https://pytorch.org/vision/stable/generated/torchvision.datasets.CIFAR100.html) | 32x32x3 | 5 | 50,000 train, 10,000 test | 100 | Image Classification | 136 | 137 | Example usage of `pytorch_test_computer_vision_cifar100.py` for 1 epoch and batch size of 32: 138 | 139 | ``` 140 | python pytorch_test_computer_vision_cifar100.py --epochs=1 --batch_sizes="32" 141 | ``` 142 | 143 | Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128, 256"`. 144 | 145 | Default behaviour is to test for `5` epochs and batch sizes of `"16, 32, 64, 128, 256, 512, 1024"`. 146 | 147 | The following: 148 | 149 | ``` 150 | python pytorch_test_computer_vision_cifar100.py 151 | ``` 152 | 153 | Is equivalent to: 154 | 155 | ``` 156 | python pytorch_test_computer_vision_cifar100.py --epochs=5 --batch_sizes="16, 32, 64, 128, 256, 512, 1024" 157 | ``` 158 | 159 | Results will be saved to `results/results_pytorch_cv/[file_name].csv` where `file_name` is a combination of information from the experiment (see `pytorch_test_computer_vision_cifar100.py` for details). 160 | 161 | ### Test PyTorch Computer Vision (Food101) 162 | 163 | Experiment details: 164 | 165 | | **Model** | **Dataset** | **Image Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** | 166 | | --- | --- | --- | --- | --- | --- | --- | 167 | | [ResNet50](https://pytorch.org/vision/main/models/generated/torchvision.models.resnet50.html) | [Food101](https://huggingface.co/datasets/food101) | 224x224x3 | 5 | 75,750 train, 25,250 test | 101 | Image Classification | 168 | 169 | **Note:** Install Hugging Face Datasets to download the Food101 dataset. 170 | 171 | ``` 172 | python -m pip install datasets 173 | ``` 174 | 175 | Example usage of `pytorch_test_computer_vision_food101.py` for 1 epoch and batch size of 32: 176 | 177 | ``` 178 | python pytorch_test_computer_vision_food101.py --epochs=1 --batch_sizes="32" 179 | ``` 180 | 181 | Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128, 256"`. 182 | 183 | Default behaviour is to test for `3` epochs and batch sizes of `"32, 64, 128"`. 184 | 185 | The following: 186 | 187 | ``` 188 | python pytorch_test_computer_vision_food101.py 189 | ``` 190 | 191 | Is equivalent to: 192 | 193 | ``` 194 | python pytorch_test_computer_vision_food101.py --epochs=3 --batch_sizes="32, 64, 128" 195 | ``` 196 | 197 | Results will be saved to `results/results_pytorch_cv/[file_name].csv` where `file_name` is a combination of information from the experiment (see `pytorch_test_computer_vision_food101.py` for details).
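For reference, every test script parses the `--batch_sizes` string into a list of integers with a simple comma split (this is the exact line the scripts use):

```python
batch_sizes = "32, 64, 128"  # example input for --batch_sizes
batch_size_args = [int(item.strip()) for item in batch_sizes.split(",")]
print(batch_size_args)  # [32, 64, 128]
```

So any comma-separated list of positive integers works, with or without spaces.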
198 | 199 | 200 | ### Test PyTorch Natural Language Processing (NLP) 201 | 202 | Experiment details: 203 | 204 | | **Model** | **Dataset** | **Sequence Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** | 205 | | --- | --- | --- | --- | --- | --- | --- | 206 | | [DistilBERT](https://huggingface.co/distilbert-base-uncased) (fine-tune top 2 layers + top Transformer block) | [IMDB](https://huggingface.co/datasets/imdb) | 512 | 5 | 25,000 train, 25,000 test | 2 | Text Classification | 207 | 208 | > **Note:** The `pytorch_test_nlp.py` script uses Hugging Face Transformers/Datasets/Evaluate/Accelerate to help with testing. If you get into ML, you'll likely come across these libraries; they are very useful for NLP and ML in general. The model loaded from Transformers uses PyTorch as a backend. 209 | 210 | ```python 211 | python -m pip install transformers datasets evaluate accelerate 212 | ``` 213 | 214 | Example usage of `pytorch_test_nlp.py` for 1 epoch and batch size of 32: 215 | 216 | ``` 217 | python pytorch_test_nlp.py --epochs=1 --batch_sizes="32" 218 | ``` 219 | 220 | Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128, 256"`. 221 | 222 | Default behaviour is to test for `3` epochs and batch sizes of `"16, 32, 64, 128, 256, 512"` (**note:** without 24GB+ of RAM, running batch sizes of 256+ will likely error, for example my M3 Pro with 18GB of unified memory can only run `"16, 32, 64, 128"` and fails on `256` with the model/data setup in `pytorch_test_nlp.py`). 223 | 224 | The following: 225 | 226 | ``` 227 | python pytorch_test_nlp.py 228 | ``` 229 | 230 | Is equivalent to: 231 | 232 | ``` 233 | python pytorch_test_nlp.py --epochs=3 --batch_sizes="16, 32, 64, 128, 256, 512" 234 | ``` 235 | 236 | Results will be saved to `results/results_pytorch_nlp/[file_name].csv` where `file_name` is a combination of information from the experiment (see `pytorch_test_nlp.py` for details). 237 | 238 | ## Install and Test TensorFlow 239 | 240 | For more on running TensorFlow on macOS, see [Apple's developer guide](https://developer.apple.com/metal/tensorflow-plugin/). 241 | 242 | **Note:** Install TensorFlow Datasets to access the Food101 dataset with TensorFlow. 243 | 244 | ```python 245 | python -m pip install tensorflow 246 | python -m pip install tensorflow-metal 247 | python -m pip install tensorflow_datasets 248 | ``` 249 | 250 | > **Note:** TensorFlow can be run on macOS *without* using the GPU via `pip install tensorflow`, however, if you're using an Apple Silicon Mac, you'll want to use the Metal plugin for GPU acceleration (`pip install tensorflow-metal`).
251 | > 252 | > After installing `tensorflow-metal` and running the scripts, you should see something like: 253 | > 254 | > `2023-12-06 12:22:02.016745: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled.` 255 | 256 | ### Test TensorFlow Computer Vision (CIFAR100) 257 | 258 | Experiment details: 259 | 260 | | **Model** | **Dataset** | **Image Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** | 261 | | --- | --- | --- | --- | --- | --- | --- | 262 | | [ResNet50](https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/ResNet50) | [CIFAR100](https://www.tensorflow.org/datasets/catalog/cifar100) | 32x32x3 | 5 | 50,000 train, 10,000 test | 100 | Image Classification | 263 | 264 | Example usage of `tensorflow_test_computer_vision_cifar100.py` for 1 epoch and batch size of 32: 265 | 266 | ``` 267 | python tensorflow_test_computer_vision_cifar100.py --epochs=1 --batch_sizes="32" 268 | ``` 269 | 270 | Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128, 256"`. 271 | 272 | Default behaviour is to test for `5` epochs and batch sizes of `"16, 32, 64, 128, 256, 512, 1024"`. 273 | 274 | The following: 275 | 276 | ``` 277 | python tensorflow_test_computer_vision_cifar100.py 278 | ``` 279 | 280 | Is equivalent to: 281 | 282 | ``` 283 | python tensorflow_test_computer_vision_cifar100.py --epochs=5 --batch_sizes="16, 32, 64, 128, 256, 512, 1024" 284 | ``` 285 | 286 | Results will be saved to `results/results_tensorflow_cv/[file_name].csv` where `file_name` is a combination of information from the experiment (see `tensorflow_test_computer_vision_cifar100.py` for details). 287 | 288 | ### Test TensorFlow Computer Vision (Food101) 289 | 290 | Experiment details: 291 | 292 | | **Model** | **Dataset** | **Image Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** | 293 | | --- | --- | --- | --- | --- | --- | --- | 294 | | [ResNet50](https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/ResNet50) | [Food101](https://www.tensorflow.org/datasets/catalog/food101) | 224x224x3 | 5 | 75,750 train, 25,250 test | 101 | Image Classification | 295 | 296 | Example usage of `tensorflow_test_computer_vision_food101.py` for 1 epoch and batch size of 32: 297 | 298 | ``` 299 | python tensorflow_test_computer_vision_food101.py --epochs=1 --batch_sizes="32" 300 | ``` 301 | 302 | Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128"`. 303 | 304 | Default behaviour is to test for `3` epochs and batch sizes of `"32, 64, 128"`. 305 | 306 | The following: 307 | 308 | ``` 309 | python tensorflow_test_computer_vision_food101.py 310 | ``` 311 | 312 | Is equivalent to: 313 | 314 | ``` 315 | python tensorflow_test_computer_vision_food101.py --epochs=3 --batch_sizes="32, 64, 128" 316 | ``` 317 | 318 | Results will be saved to `results/results_tensorflow_cv/[file_name].csv` where `file_name` is a combination of information from the experiment (see `tensorflow_test_computer_vision_food101.py` for details). 
### Test TensorFlow Computer Vision (CIFAR100)

Experiment details:

| **Model** | **Dataset** | **Image Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** |
| --- | --- | --- | --- | --- | --- | --- |
| [ResNet50](https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/ResNet50) | [CIFAR100](https://www.tensorflow.org/datasets/catalog/cifar100) | 32x32x3 | 5 | 50,000 train, 10,000 test | 100 | Image Classification |

Example usage of `tensorflow_test_computer_vision_cifar100.py` for 1 epoch and batch size of 32:

```
python tensorflow_test_computer_vision_cifar100.py --epochs=1 --batch_sizes="32"
```

Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128, 256"`.

Default behaviour is to test for `5` epochs and batch sizes of `"16, 32, 64, 128, 256, 512, 1024"`.

The following:

```
python tensorflow_test_computer_vision_cifar100.py
```

Is equivalent to:

```
python tensorflow_test_computer_vision_cifar100.py --epochs=5 --batch_sizes="16, 32, 64, 128, 256, 512, 1024"
```

Results will be saved to `results/results_tensorflow_cv/[file_name].csv` where `file_name` is a combination of information from the experiment (see `tensorflow_test_computer_vision_cifar100.py` for details).

### Test TensorFlow Computer Vision (Food101)

Experiment details:

| **Model** | **Dataset** | **Image Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** |
| --- | --- | --- | --- | --- | --- | --- |
| [ResNet50](https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/ResNet50) | [Food101](https://www.tensorflow.org/datasets/catalog/food101) | 224x224x3 | 5 | 75,750 train, 25,250 test | 101 | Image Classification |

Example usage of `tensorflow_test_computer_vision_food101.py` for 1 epoch and batch size of 32:

```
python tensorflow_test_computer_vision_food101.py --epochs=1 --batch_sizes="32"
```

Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128"`.

Default behaviour is to test for `3` epochs and batch sizes of `"32, 64, 128"`.

The following:

```
python tensorflow_test_computer_vision_food101.py
```

Is equivalent to:

```
python tensorflow_test_computer_vision_food101.py --epochs=3 --batch_sizes="32, 64, 128"
```

Results will be saved to `results/results_tensorflow_cv/[file_name].csv` where `file_name` is a combination of information from the experiment (see `tensorflow_test_computer_vision_food101.py` for details).

### Test TensorFlow Natural Language Processing (NLP)

Experiment details:

| **Model** | **Dataset** | **Sequence Size** | **Epochs** | **Num Samples** | **Num Classes** | **Problem Type** |
| --- | --- | --- | --- | --- | --- | --- |
| SmallTransformer (custom) | [IMDB](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) | 200 | 5 | 25,000 train, 25,000 test | 2 | Text Classification |

Example usage of `tensorflow_test_nlp.py` for 1 epoch and batch size of 32:

```
python tensorflow_test_nlp.py --epochs=1 --batch_sizes="32"
```

Batch sizes can be a comma-separated list of batch sizes, e.g. `"32, 64, 128, 256"`.

Default behaviour is to test for `3` epochs and batch sizes of `"16, 32, 64, 128"`.

The following:

```
python tensorflow_test_nlp.py
```

Is equivalent to:

```
python tensorflow_test_nlp.py --epochs=3 --batch_sizes="16, 32, 64, 128"
```

Results will be saved to `results/results_tensorflow_nlp/[file_name].csv` where `file_name` is a combination of information from the experiment (see `tensorflow_test_nlp.py` for details).

## Install and Test LlamaCPP (Llama 2 LLM test)

Experiment details:

| **Model** | **Task** | **Num Questions** | **Answers per Question** | **Total Generations** |
| --- | --- | --- | --- | --- |
| [Llama 2 7B .gguf format](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf) | Text Generation | 20 | 5 | 20*5 = 100 |

See the [llama-cpp-python macOS install guide](https://llama-cpp-python.readthedocs.io/en/latest/install/macos/) (note: this guide focuses on the macOS install; I haven't tested with CUDA).

```
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 python -m pip install llama-cpp-python
```

After installing `llama-cpp-python`, you will need a model in `.gguf` format from Hugging Face:

- Download a model from Hugging Face with the `.gguf` extension, e.g. [`llama-2-7b-chat.Q4_0.gguf`](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf).
- Download link: https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf?download=true
- Download code:
  - Install `wget` if necessary (requires [Homebrew](https://brew.sh/)):

    ```
    brew install wget
    ```

  - Download a `.gguf` LLM file from Hugging Face, e.g. from [TheBloke's profile](https://huggingface.co/TheBloke) (usage/results will vary depending on which model you use; `llama-2-7b-chat.Q4_0.gguf` is the example here):

    ```
    wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf
    ```

Once you've downloaded your model file, put it in the same directory as `llama2_test.py` (or update the `--path_to_gguf_model` argument to point to the file).
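For reference, generating text with `llama-cpp-python` looks roughly like the following (a minimal sketch, not the exact code in `llama2_test.py`; `n_gpu_layers=-1` offloads all layers to the GPU, i.e. Metal on Apple Silicon):

```python
from llama_cpp import Llama

# Load the quantized model and offload every layer to the GPU.
llm = Llama(model_path="llama-2-7b-chat.Q4_0.gguf", n_gpu_layers=-1)

# Generate an answer to a single question.
output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=256,
    stop=["Q:"],  # stop before the model starts a new question
)
print(output["choices"][0]["text"])
```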
Example usage of `llama2_test.py` to generate an answer to 1 example question 1 time using the `llama-2-7b-chat.Q4_0.gguf` model:

```
python llama2_test.py --path_to_gguf_model="llama-2-7b-chat.Q4_0.gguf" --num_questions=1 --num_times_per_question=1
```

Default behaviour is to generate an answer to `20` example questions `5` times each using the `llama-2-7b-chat.Q4_0.gguf` model (100 total generations).

The following:

```
python llama2_test.py
```

Is equivalent to:

```
python llama2_test.py --path_to_gguf_model="llama-2-7b-chat.Q4_0.gguf" --num_questions="all" --num_times_per_question=5
```

Results will be saved to `results/results_llama2/[file_name].csv` where `file_name` is a combination of information from the experiment (see `llama2_test.py` for details).

* Note on LLM files: you can use other `.gguf` models (e.g. Llama 2 13B, 70B and other variants); I just went with 7B to demonstrate. To run the 70B model you will need a lot of RAM: roughly 140GB in half precision (70B parameters × 2 bytes per parameter), or [~40GB in 4-bit quantized form](https://huggingface.co/TheBloke/Llama-2-70B-Chat-GGUF/tree/main).

## Results

The following are the machines I tested. All of the M3 MacBook Pro variants were the base model in their class (e.g. an M3 Pro MacBook Pro with no upgrades from the Apple Store).

| **Machine** | **CPU** | **GPU** | **RAM** | **Storage** | **Price (USD)** |
| --- | --- | --- | --- | --- | --- |
| M1 Pro 14" 2021 | 10-core CPU | 16-core GPU | 32GB | 4TB SSD | ~$3,500 |
| M3 14" 2023 | 8-core CPU | 10-core GPU | 8GB | 512GB SSD | $1,599 |
| M3 Pro 14" 2023 | 11-core CPU | 14-core GPU | 18GB | 512GB SSD | $1,999 |
| M3 Max 14" 2023 | 14-core CPU | 30-core GPU | 36GB | 1TB SSD | $3,199 |
| Deep Learning PC | Intel i9 | NVIDIA TITAN RTX (24GB) | 32GB | 1TB SSD | ~$3,000 |
| Google Colab Free Tier | 2-core CPU | NVIDIA Tesla V100 (16GB) | 12GB | 100GB SSD | Free or $10/month for more compute |

Notes:

* Only training time was measured, as training generally takes far more time than inference (the exception is the Llama 2 test, which was inference/text generation only).
* If a result isn't present for a particular machine, the test either failed or the machine didn't have enough memory to complete it (e.g. the M3 14" 2023 with 8GB RAM couldn't run batch size 64 for PyTorch CV Food101).
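The plots below were generated from the per-machine CSVs in `results/` (each has `batch_size` and `avg_time_per_epoch` columns). A minimal sketch of how you could aggregate one experiment's CSVs yourself (the repo's actual plotting code may differ):

```python
from pathlib import Path

import pandas as pd

# Collect the per-machine results for one experiment into a single DataFrame,
# using the filename stem (machine + experiment info) as an identifier.
frames = []
for csv_path in Path("results/results_pytorch_cv").glob("*FOOD101*.csv"):
    df = pd.read_csv(csv_path)
    df["machine"] = csv_path.stem
    frames.append(df)

results = pd.concat(frames, ignore_index=True)
print(results.sort_values(["batch_size", "avg_time_per_epoch"]))
```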
### TensorFlow Computer Vision (CIFAR100)

![TensorFlow CV CIFAR100](results/tensorflow_cv_resnet50_cifar100.png)

### TensorFlow Computer Vision (Food101)

![TensorFlow CV Food101](results/tensorflow_cv_resnet50_food101.png)

### TensorFlow Natural Language Processing (NLP)

![TensorFlow NLP](results/tensorflow_nlp_imdb.png)

### PyTorch Computer Vision (CIFAR100)

![PyTorch CV CIFAR100](results/pytorch_cv_resnet50_cifar100.png)

### PyTorch Computer Vision (Food101)

![PyTorch CV Food101](results/pytorch_cv_resnet50_food101.png)

### PyTorch Natural Language Processing (NLP)

![PyTorch NLP](results/pytorch_nlp_distilbert_imdb.png)

### Llama 2 (LLM)

![Llama 2 text generation](results/llamacpp_2_7b_chat_q4_0_gguf_tokens_per_second.png)

### Geekbench ML

All tests were done using [Geekbench ML 0.6.0](https://www.geekbench.com/ml/) for Mac.

Tests include a series of [inference-only benchmarks](https://www.geekbench.com/doc/ml-0.6-inference-workloads.pdf) across different domains.

| Machine | CPU Cores | CPU Score | GPU Cores | GPU Score | Neural Engine Score |
| --- | --- | --- | --- | --- | --- |
| MacBook Pro M1 Pro 14 inch, 2021 | 10 | 1809 ([link](https://browser.geekbench.com/ml/v0/inference/330843)) | 16 | 5192 ([link](https://browser.geekbench.com/ml/v0/inference/330844)) | 6462 ([link](https://browser.geekbench.com/ml/v0/inference/330846)) |
| MacBook Pro M3 14 inch, 2023 | 8 | 2356 ([link](https://browser.geekbench.com/ml/v0/inference/330849)) | 10 | 5747 ([link](https://browser.geekbench.com/ml/v0/inference/330850)) | 8399 ([link](https://browser.geekbench.com/ml/v0/inference/330853)) |
| MacBook Pro M3 Pro 14 inch, 2023 | 11 | 2355 ([link](https://browser.geekbench.com/ml/v0/inference/330860)) | 14 | 7030 ([link](https://browser.geekbench.com/ml/v0/inference/330861)) | 10237 ([link](https://browser.geekbench.com/ml/v0/inference/330859)) |
| MacBook Pro M3 Max 14 inch, 2023 | 14 | 2393 ([link](https://browser.geekbench.com/ml/v0/inference/330866)) | 30 | 9008 ([link](https://browser.geekbench.com/ml/v0/inference/330869)) | 9450 ([link](https://browser.geekbench.com/ml/v0/inference/334715)) |

## Discussion

The newest M3 Macs are clearly capable of machine learning tasks.

However, dedicated NVIDIA GPUs still have a clear lead.

The results also show that more GPU cores and more RAM equate to better performance (e.g. the M3 Max outperforming most other Macs at *most* batch sizes).

An interesting result was that the base M3 chip outperformed (or performed level with) the M3 Pro and M3 Max on smaller-scale experiments (CIFAR100, smaller batch sizes).

I'm not 100% sure why this is the case, but my intuition is that the overhead of copying data to and from the GPU outweighs the actual training computation (the GPU spends time waiting for data to be copied to it rather than being fully utilized).

So in practice, the M3 can keep up with the M3 Pro and M3 Max on small workloads because the computation itself is quick; it's the copying that dominates.

Either way, the Food101 experiments are a more realistic example with larger image sizes. There, the machines with more GPU cores train faster and the machines with more RAM can handle larger batch sizes.

For the best results, you'll always want to pack as much data into the GPU as possible (to utilize all of your GPU cores) and avoid copying data between memory.
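On the PyTorch side, keeping the GPU fed is mostly a matter of `DataLoader` settings. A minimal sketch using the standard PyTorch API (the dataset here is a random stand-in; the right values are workload-dependent):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A stand-in dataset; in the benchmark scripts this would be CIFAR100/Food101.
train_dataset = TensorDataset(
    torch.randn(1024, 3, 32, 32), torch.randint(0, 100, (1024,))
)

train_dataloader = DataLoader(
    train_dataset,
    batch_size=128,    # as large as memory allows, to keep GPU cores busy
    shuffle=True,
    num_workers=4,     # load batches in parallel so the GPU isn't starved
    pin_memory=True,   # faster host-to-GPU copies (mainly benefits CUDA)
)
```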
I thought that the unified memory system on the M-series chips would reduce copying overheads. Perhaps that's not yet the case from a software perspective (e.g. PyTorch and TensorFlow were not designed for Apple Silicon).

Maybe newer frameworks designed for Apple Silicon, such as [MLX](https://github.com/ml-explore/mlx), will make better use of the unified memory system. This will require further investigation.

The Geekbench ML results were as expected (newer and bigger chips doing better), with the exception of the M3 Max scoring slightly lower on the Neural Engine than the M3 Pro. However, I'd take this number with a grain of salt, as the difference will likely go unnoticed in real-world applications.

## Recommendations

For smaller experiments, fine-tuning models and learning the fundamentals of machine learning, the M3 Macs will be more than fine to use.

But for larger-scale workloads, you'll likely still want a dedicated NVIDIA GPU.

Personally, I use my M1 MacBook Pro as a daily driver but perform all larger-scale deep learning experiments on my NVIDIA GPU PC (connected via SSH). For example, I do plenty of data exploration for [Nutrify](https://nutrify.app/) (an app my brother and I have built to help people learn about food) but all model training happens on an NVIDIA TITAN RTX.

And Google Colab helps to fill in the gaps whenever necessary.

Based on the results across the new M3 Macs, I'm not personally going to upgrade my M1 MacBook Pro.

But I am curious to see how a spec'd-up M3 Max (or a future M3 Ultra) would go with a dedicated MLX model against my NVIDIA GPU PC.

In summary, my recommendations are:

* Go for as much RAM and as many GPU cores as you can afford, typically in that order.
    * More GPU cores = faster training/inference.
    * More RAM = larger batch sizes/models.
* Avoid the 8GB RAM M3; 16GB is a good minimum.
* For value for money, the M3 Pro with a RAM upgrade (18GB -> 36GB) and a GPU upgrade (14 cores -> 18 cores) still comes in cheaper than an M3 Max.
* If you've got the option, perhaps spend less on a MacBook, buy a dedicated NVIDIA GPU and set up a deep learning PC you can SSH into (this is what I do).
    * For example, get the baseline M3 with a RAM upgrade and spend the rest of the money on an NVIDIA GPU.

## Notes

* Important: I found you need to increase the open-file limit (`ulimit -n`) on the M3 Pro and M3 Max to run larger experiments. The default on both is `ulimit -n 256`; I increased it to `ulimit -n 2560` (a 10x increase, and the default on the base M3 and my M1 Pro) and was then able to run larger experiments, e.g. batch size 64+ for computer vision. See the sketch after this list.
    * If you get the error `OSError: [Errno 24] Too many open files...` (or something similar), try increasing `ulimit -n`.
* As far as I know, float16 (mixed-precision) training doesn't work on MPS devices, which is why I've used float32 for all tests. float16 will typically halve training times on compatible devices (e.g. NVIDIA GPUs).
* Also, MPS doesn't support `torch.compile()`, which speeds up training on NVIDIA Ampere GPUs and above.
* Tests should not be compared between frameworks (e.g. TensorFlow vs PyTorch for a given task). They are designed to compare the same code across hardware.
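As mentioned in the first note above, the open-file limit can be raised with `ulimit -n` in the shell. If you'd rather check and raise it from Python, the standard library's `resource` module can do the same for the current process (macOS/Linux only):

```python
import resource

# Read the current open-file limits (the soft limit is what `ulimit -n` reports).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Raise the soft limit for this process; it must not exceed the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (2560, hard))
```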
## Potential upgrades

* Add total memory count + number of GPU cores to the results, e.g. "Apple_M1_Pro_18GB_Memory_14_GPU_Cores...".
* Add scikit-learn/XGBoost tests, e.g. 100,000 rows, 1,000,000 rows?
* Could I use Keras 3.0 to run the same code on multiple backends? :thinking: (see the sketch after this list)
* Apple has recently released a deep learning framework called [MLX](https://github.com/ml-explore/mlx) which is designed for Apple Silicon. This may significantly improve speed on Apple Silicon Macs; see the `mlx/` directory for more, and [this example of Llama 2 running on MLX](https://huggingface.co/mlx-llama/Llama-2-7b-chat-mlx).
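On the Keras 3.0 question: Keras 3 picks its backend from the `KERAS_BACKEND` environment variable, so in principle the same model code could be timed on several backends. A minimal sketch (an assumption of how such a test might start, not code from this repo; backend availability depends on what's installed):

```python
import os

# Must be set before Keras is imported; one of "tensorflow", "torch" or "jax".
os.environ["KERAS_BACKEND"] = "torch"

import keras

# The same Keras code now runs on the chosen backend.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])
print(keras.backend.backend())  # -> "torch"
```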