├── main.py
├── LICENSE
└── README.md

/main.py:
--------------------------------------------------------------------------------
def optimize(theta_1, learning_rate, T, hyperparameters, estimator_choice):
    # compute_minibatch_loss, compute_gradients and clip are assumed to be
    # provided elsewhere by the training code (they are not written out yet).
    # k is the number of steps between refreshes of the Hessian estimate.
    lambda_val, beta1, beta2, epsilon, rho, k = hyperparameters
    m_t_minus_1 = 0
    h_t_minus_1 = 0
    theta_t = theta_1

    for t in range(1, T + 1):
        L_t = compute_minibatch_loss(theta_t)
        g_t = compute_gradients(L_t, theta_t)
        # Exponential moving average of the gradients (first moment).
        m_t = beta1 * m_t_minus_1 + (1 - beta1) * g_t

        if t % k == 1:
            # Refresh the Hessian estimate every k steps.
            h_t_hat = estimator_choice(theta_t)
            h_t = beta2 * h_t_minus_1 + (1 - beta2) * h_t_hat
        else:
            # Between refreshes, reuse the previous estimate.
            h_t = h_t_minus_1

        # Decoupled weight decay.
        theta_t = theta_t - learning_rate * lambda_val * theta_t
        # Clipped, preconditioned update.
        theta_t_plus_1 = theta_t - learning_rate * clip(m_t / max(h_t, epsilon), rho)
        # Carry the updated parameters and moments into the next iteration.
        theta_t = theta_t_plus_1
        m_t_minus_1 = m_t
        h_t_minus_1 = h_t

    return theta_t
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 TAWSIF AHMED

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
This repository contains experimental code for the Sophia optimizer and for using it in your LLM (LLM refers to a Large Language Model).

Sophia came out only a day or so ago. The paper does not provide any code for the Optimizer, but it does contain the process/algorithm for writing Sophia. The authors may have felt that implementing an optimizer from algorithmic instructions is an easy task; I personally think that having the code in front of you simply makes things clearer, that's all!

This is a quite fast-paced project/implementation, which is why I have only provided the function for the optimizer and have not included any estimators yet. You can go over the code and get a fresh, first-hand feel for what the Optimizer looks like.

##### What's the repository for?

The repository is for machine learning learners and students alike, and also for myself. It is quite hard to keep up with papers and code, especially when most of those papers do not provide code. Someone has to do the hard work of reading the papers, writing the code, and explaining them.

Which is why I wrote this repository.
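To make the hypothetical `optimize` function in `main.py` above a little more concrete, here is a toy, self-contained sketch of how it could be driven on a one-dimensional quadratic problem. Everything in it (the placeholder helpers `compute_minibatch_loss`, `compute_gradients`, `clip`, the trivial estimator, and the hyperparameter values) is my own assumption for illustration, not something taken from the paper, and it assumes the sketch lives in the same module as `optimize` so the helpers are visible to it.

```python
# Toy sketch only: a scalar quadratic loss with hand-written placeholder helpers.
# These are the names optimize() assumes exist; the bodies are made up here.

def compute_minibatch_loss(theta):
    return (theta - 3.0) ** 2              # minimum at theta = 3


def compute_gradients(loss, theta):
    return 2.0 * (theta - 3.0)             # analytic gradient of the toy loss


def clip(x, rho):
    return max(-rho, min(rho, x))          # clamp the update to [-rho, rho]


def toy_estimator(theta):
    return 2.0                             # exact (constant) Hessian of the toy loss


# hyperparameters = (lambda_val, beta1, beta2, epsilon, rho, k)
hyperparameters = (0.0, 0.9, 0.5, 1e-12, 1.0, 2)

theta_final = optimize(theta_1=0.0, learning_rate=0.1, T=200,
                       hyperparameters=hyperparameters,
                       estimator_choice=toy_estimator)
print(theta_final)                         # should print a value very close to 3.0
```

With a real model, theta would of course be a set of tensors and the helpers would come from PyTorch or JAX; the point here is only to show how the pieces of the loop fit together.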
At the moment, I have only written the hypothetical implementation of the Optimizer, no fancy PyTorch or JAX code yet. Kindly take the time to learn and understand it.

###### Status: Epsilon

I am thinking of writing a training script to train the GPT-2 (small) model with this optimizer using PyTorch. It will take a bit of time as I have a few submissions and deadlines coming up, but I will try to write it by the 2nd week of June (no promises though).

#### Updates

* Provide a training script with the Optimizer
* Program out the estimators and other functions
* Explain the Optimizer in a blog post (maybe soon)
* Write a few variations of Sophia

##### Technical requirements:

If you are trying to write the Optimizer in full and use it to perform training, make sure to use either PyTorch or JAX (as mentioned in the paper).

#### Theoretical Explanation of the Algorithm in (SPELL OUT):

1. Initialize variables for the first moment estimate (m) and the Hessian (second-order) estimate (h).
2. Iterate from t = 1 to T.
3. Compute the minibatch loss (L) for the current parameters (θt).
4. Compute the gradients (g) of the loss with respect to the parameters.
5. Update the first moment estimate (m) using a decaying average of the gradients.
6. Every k steps (when t mod k equals 1), compute a fresh estimate (h_hat) using the chosen estimator; a hedged sketch of one possible estimator is given at the very end of this document.
7. Update the running estimate (h) using a decaying average of these estimates.
8. On all other steps, reuse the previous value of the estimate (h).
9. Apply weight decay to the parameters (θt) by subtracting the learning rate times λ times the current parameters.
10. Compute the updated parameters (θt+1) by subtracting the learning rate times the ratio of the first moment estimate to the maximum of h and a small value (ϵ), clipped by ρ.
11. Repeat the process for the next iteration.
12. Return the final updated parameters (θt+1).

##### [Paper Link](https://arxiv.org/abs/2305.14342)
--------------------------------------------------------------------------------
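##### Illustration: one possible estimator (sketch only)

The repository does not ship an estimator yet, so purely as an illustration, below is a minimal sketch of what a Hutchinson-style diagonal Hessian estimator (one of the estimators the paper discusses) could look like in PyTorch. The function name, the one-sample setup, and the hypothetical usage lines are my own assumptions for learning purposes, not code from the paper or from this repository.

```python
import torch

def hutchinson_diag_estimate(loss, params):
    """One-sample Hutchinson-style estimate of the Hessian diagonal.

    Draws u ~ N(0, I) and returns u * (H u), where the Hessian-vector
    product H u comes from differentiating the gradients a second time.
    """
    # Keep the graph so the gradients can be differentiated again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Random probe vectors with the same shapes as the parameters.
    us = [torch.randn_like(p) for p in params]
    # Scalar g·u, then one more backward pass gives the Hessian-vector product.
    g_dot_u = sum((g * u).sum() for g, u in zip(grads, us))
    hvps = torch.autograd.grad(g_dot_u, params)
    # In expectation, u * (H u) equals the diagonal of the Hessian.
    return [u * hvp for u, hvp in zip(us, hvps)]

# Hypothetical usage inside a training step (model, loss_fn, inputs and
# targets are placeholders, not defined anywhere in this repository):
# loss = loss_fn(model(inputs), targets)
# h_hat = hutchinson_diag_estimate(loss, list(model.parameters()))
```

The output of such an estimator would play the role of h_hat in step 6 above and would then be smoothed by the β2 decaying average from step 7.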