pranavjad / tinyllama-bitnet
Train your own small bitnet model
☆70 · Updated 6 months ago
Alternatives and similar repositories for tinyllama-bitnet
Users interested in tinyllama-bitnet are comparing it to the libraries listed below.
- 1.58-bit LLaMa model ☆81 · Updated last year
- ☆129 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆150 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Inference of Mamba models in pure C ☆188 · Updated last year
- An unsupervised model merging algorithm for Transformer-based language models. ☆105 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆199 · Updated 10 months ago
- Training notebooks similar to the original script used to train TinyMistral. ☆21 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 7 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 2 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- tinygrad port of the RWKV large language model. ☆44 · Updated 2 months ago
- This is our own implementation of "Layer-Selective Rank Reduction" ☆238 · Updated 11 months ago
- Inference of RWKV v7 in pure C. ☆33 · Updated last month
- Video + code lecture on building nanoGPT from scratch ☆67 · Updated 11 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆121 · Updated last year
- RWKV in nanoGPT style ☆189 · Updated 11 months ago
- Merge Transformer language models using gradient parameters. ☆208 · Updated 9 months ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answe… ☆150 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- ☆66 · Updated 11 months ago
- entropix-style sampling + GUI ☆26 · Updated 6 months ago
- Scripts to create your own MoE models using mlx ☆89 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Automated identification of redundant layer blocks for pruning in large language models ☆238 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆162 · Updated last year
- ☆53 · Updated 11 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year