pranavjad / tinyllama-bitnet
Train your own small bitnet model
☆67 · Updated 6 months ago
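For context on what this repo and the BitNet-style projects listed below train, here is a minimal sketch (assuming PyTorch; the function names are illustrative and not taken from tinyllama-bitnet) of the absmean ternary weight quantization from "The Era of 1-bit LLMs" that 1.58-bit training builds on.

```python
# Minimal sketch of BitNet b1.58-style ternary weight quantization:
# per-tensor absmean scaling, then round-to-nearest into {-1, 0, +1}.
# Names are illustrative, not from the tinyllama-bitnet codebase.
import torch

def absmean_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Quantize weights to {-1, 0, +1} scaled by the per-tensor absmean."""
    scale = w.abs().mean().clamp(min=eps)      # gamma = mean(|W|)
    w_q = (w / scale).round().clamp(-1, 1)     # ternary values
    return w_q * scale                         # dequantized for the matmul

def quantize_with_ste(w: torch.Tensor) -> torch.Tensor:
    """Straight-through estimator: forward uses the quantized weights,
    backward treats the quantization as the identity so gradients reach w."""
    return w + (absmean_quantize(w) - w).detach()
```

In training-time quantization schemes of this kind, the full-precision weights are kept as the optimizer state and the ternary projection is re-applied on every forward pass.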
Alternatives and similar repositories for tinyllama-bitnet:
Users interested in tinyllama-bitnet are comparing it to the libraries listed below.
- 1.58-bit LLaMa model ☆81 · Updated last year
- ☆129 · Updated 8 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆171 · Updated 11 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆146 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 2 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆35 · Updated last year
- Experiments with BitNet inference on CPU ☆53 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆197 · Updated 9 months ago
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆120 · Updated last year
- ☆53 · Updated 10 months ago
- Merge Transformers language models by use of gradient parameters. ☆206 · Updated 8 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆235 · Updated 10 months ago
- Inference of Mamba models in pure C ☆187 · Updated last year
- ☆66 · Updated 10 months ago
- A pipeline for LLM knowledge distillation ☆100 · Updated 3 weeks ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 2 months ago
- QuIP quantization ☆51 · Updated last year
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ☆82 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 6 months ago
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 10 months ago
- Video+code lecture on building nanoGPT from scratch ☆66 · Updated 10 months ago
- Inference of RWKV v7 in pure C. ☆31 · Updated 3 weeks ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆60 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆274 · Updated last year
- Let's create synthetic textbooks together :) ☆74 · Updated last year
- RWKV in nanoGPT style ☆189 · Updated 10 months ago