# Awesome Fine-Tuning

Welcome to the Awesome Fine-Tuning repository! This is a curated list of resources, tools, and information about fine-tuning large language models, with a focus on the latest techniques and tools that make fine-tuning LLaMA models more accessible and efficient.

## Table of Contents

- [Awesome Fine-Tuning](#awesome-fine-tuning)
  - [Table of Contents](#table-of-contents)
  - [Tools and Frameworks](#tools-and-frameworks)
  - [Tutorials and Guides](#tutorials-and-guides)
  - [Data Preparation](#data-preparation)
  - [Optimization Techniques](#optimization-techniques)
  - [Evaluation and Quality Measurement](#evaluation-and-quality-measurement)
  - [Best Practices](#best-practices)
  - [Contributing](#contributing)

## Tools and Frameworks

A list of cutting-edge tools and frameworks used for fine-tuning LLaMA models:

- [Hugging Face Transformers](https://github.com/huggingface/transformers)
  - Provides easy-to-use APIs for loading, training, and sharing pretrained transformer models
- [Unsloth](https://github.com/unslothai/unsloth)
  - Accelerates fine-tuning of LLaMA models with optimized kernels
- [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
  - Simplifies the process of fine-tuning LLaMA and other large language models
- [PEFT (Parameter-Efficient Fine-Tuning)](https://github.com/huggingface/peft)
  - Implements efficient fine-tuning methods such as LoRA, prefix tuning, and P-tuning
- [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
  - Enables 4-bit and 8-bit quantization for memory-efficient fine-tuning

## Tutorials and Guides

Step-by-step tutorials and comprehensive guides on fine-tuning LLaMA:

- Fine-tuning LLaMA 3 with Hugging Face Transformers
- Efficient Fine-Tuning of LLaMA using Unsloth
- Custom Dataset Fine-Tuning with Axolotl
- Implementing LoRA for LLaMA Fine-Tuning

## Data Preparation

Resources and techniques for preparing data to fine-tune LLaMA models:

- Creating High-Quality Datasets for LLaMA Fine-Tuning
- Data Cleaning and Preprocessing for LLM Fine-Tuning
- Techniques for Handling Limited Datasets

## Optimization Techniques

Methods to optimize the fine-tuning process for LLaMA models:

- Quantization Techniques for Memory-Efficient Fine-Tuning
- LoRA: Low-Rank Adaptation for Fast Fine-Tuning
- Gradient Checkpointing to Reduce Memory Usage

## Evaluation and Quality Measurement

Methods and metrics for evaluating the quality of fine-tuned LLaMA models:

- Perplexity and Other Language Model Metrics
- Task-Specific Evaluation for Fine-Tuned Models
- Human Evaluation Strategies for LLM Outputs

## Best Practices

Tips and best practices for effective LLaMA fine-tuning:

- Choosing the Right LLaMA Model Size for Your Task
- Hyperparameter Tuning for LLaMA Fine-Tuning
- Ethical Considerations in LLM Fine-Tuning

## Contributing

We welcome contributions to this repository! If you have resources, tools, or information to add about fine-tuning, please follow these steps:

1. Fork the repository
2. Create a new branch (`git checkout -b add-new-resource`)
3. Add your changes
4. Commit your changes (`git commit -am 'Add new resource'`)
5. Push to the branch (`git push origin add-new-resource`)
6. Create a new Pull Request

Please ensure your contribution is relevant to fine-tuning and provides value to the community.

---

We hope you find this repository helpful in your LLaMA fine-tuning journey. If you have any questions or suggestions, please open an issue or contribute to the discussions.

Happy fine-tuning!
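To make the LoRA entry under Optimization Techniques concrete, here is a minimal NumPy sketch. It is illustrative only: it does not use the PEFT library, and the dimensions are hypothetical (real LLaMA attention projections are larger, e.g. 4096×4096). The idea is that a frozen weight matrix `W` is augmented with a trainable low-rank update `B @ A`, so only a small fraction of parameters needs gradients:

```python
import numpy as np

# Hypothetical dimensions for illustration: a 1024x1024 projection matrix
# adapted with LoRA rank r = 8 (r << d).
d, r = 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor, small init
B = np.zeros((d, r))                     # trainable, zero init so W' == W at start

W_adapted = W + B @ A                    # effective weight used during fine-tuning

full_params = W.size                     # parameters full fine-tuning would train
lora_params = A.size + B.size            # parameters LoRA actually trains
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.2%}")
```

In the PEFT library the same idea is configured declaratively (e.g. via `LoraConfig` with a rank parameter `r`); the tiny trainable-parameter ratio printed above is why LoRA fine-tuning fits on modest GPUs.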
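To illustrate the perplexity metric listed under Evaluation and Quality Measurement, here is a minimal sketch in plain Python. The per-token log-probabilities are hypothetical stand-ins for what a fine-tuned model would assign to a held-out sequence; perplexity is the exponential of the average negative log-likelihood, so lower is better:

```python
import math

# Hypothetical natural-log probabilities a model assigned to each token
# of a held-out sequence (stand-in values, not from a real model).
token_logprobs = [-0.5, -1.2, -0.3, -2.0, -0.7]

# Average negative log-likelihood per token, then exponentiate.
nll = -sum(token_logprobs) / len(token_logprobs)
perplexity = math.exp(nll)
print(f"perplexity = {perplexity:.2f}")
```

A perplexity of 1.0 would mean the model predicted every token with certainty; comparing this number on the same held-out set before and after fine-tuning is a quick sanity check that training helped.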