adalkiran / llama-nuts-and-bolts
A holistic way of understanding how Llama and its components run in practice, with code and detailed documentation.
☆313 · Updated last year
Alternatives and similar repositories for llama-nuts-and-bolts
Users interested in llama-nuts-and-bolts are comparing it to the libraries listed below.
- A compact LLM pretrained in 9 days using high-quality data ☆332 · Updated 7 months ago
- ☆573 · Updated last year
- Comparison of Language Model Inference Engines ☆233 · Updated 10 months ago
- ☆453 · Updated last week
- A collection of all available inference solutions for LLMs ☆91 · Updated 8 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆250 · Updated last year
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model ☆250 · Updated 5 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) ☆146 · Updated 2 years ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch with a $500 budget ☆161 · Updated 2 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆494 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆664 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆207 · Updated 8 months ago
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆685 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆258 · Updated this week
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU ☆690 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆344 · Updated 6 months ago
- ☆225 · Updated 2 weeks ago
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model ☆344 · Updated 6 months ago
- Fast parallel LLM inference for MLX ☆225 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆275 · Updated last year
- A small code base for training large models ☆311 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Best practices for distilling large language models ☆583 · Updated last year
- ☆446 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆886 · Updated 2 weeks ago
- 1.58-bit LLaMa model ☆83 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆225 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- Scalable and robust tree-based speculative decoding algorithm ☆361 · Updated 9 months ago
- Formatron empowers everyone to control the format of language models' output with minimal overhead ☆227 · Updated 5 months ago