Beomi / BitNet-Transformers
0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch with the Llama(2) architecture
☆305 · Updated last year
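For context, the repository implements the paper's BitLinear layer (1-bit weight quantization trained with a straight-through estimator) on top of a Llama-style model. Below is a minimal sketch of that idea in PyTorch; the class name, the mean-centering and scaling scheme, and the omission of the paper's activation quantization and pre-normalization are illustrative assumptions, not the repository's exact code.

```python
# Minimal sketch of the BitLinear idea from the BitNet paper.
# Assumptions: weights are binarized to +/-1 around their mean and rescaled by
# the mean absolute value; gradients flow to the full-precision weights via a
# straight-through estimator. Activation quantization is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BitLinear(nn.Linear):
    """Linear layer whose weights are binarized to +1/-1 in the forward pass."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Center the weights, binarize with sign(), and rescale so the output
        # magnitude stays roughly comparable to the full-precision layer.
        w_centered = w - w.mean()
        scale = w_centered.abs().mean()
        w_bin = torch.sign(w_centered) * scale
        # Straight-through estimator: forward uses the binarized weights,
        # backward passes gradients through to the full-precision weights.
        w_ste = w + (w_bin - w).detach()
        return F.linear(x, w_ste, self.bias)


# Usage: drop-in replacement for nn.Linear inside a transformer block.
layer = BitLinear(256, 256, bias=False)
y = layer(torch.randn(2, 16, 256))
```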
Alternatives and similar repositories for BitNet-Transformers
Users interested in BitNet-Transformers are comparing it to the repositories listed below.
- 0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆96 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆71 · Updated last year
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,864 · Updated 2 weeks ago
- Ongoing research on training Mixture-of-Experts models ☆20 · Updated 10 months ago
- ☆16 · Updated 11 months ago
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆113 · Updated 6 months ago
- A framework for few-shot evaluation of autoregressive language models ☆155 · Updated 10 months ago
- Japanese LLaMa experiment ☆53 · Updated 8 months ago
- ☆69 · Updated last year
- ☆176 · Updated last year
- TPI-LLM: Serving 70b-scale LLMs Efficiently on Low-resource Edge Devices ☆186 · Updated 2 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 9 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆42 · Updated 5 months ago
- ☆61 · Updated last year
- ☆49 · Updated 7 months ago
- Project for evaluating LLMs on Japanese tasks ☆86 · Updated this week
- Experimental BitNet implementation ☆69 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆277 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆437 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆198 · Updated last year
- ☆19 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆376 · Updated last year
- llama3.cuda, a pure C/CUDA implementation of the Llama 3 model ☆339 · Updated 3 months ago
- ☆137 · Updated this week
- ☆53 · Updated last year
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ, and EXL2 models ☆74 · Updated 7 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆856 · Updated this week
- A Japanese chat dataset for building LLMs ☆85 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆458 · Updated last year
- A script that uses GPT-4 to automatically evaluate language model responses ☆16 · Updated last year