Entropy-xcy / bitnet158
☆68 · Updated last year
Alternatives and similar repositories for bitnet158:
Users interested in bitnet158 are comparing it to the repositories listed below.
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated 5 months ago
- Code for studying the super weight in LLMs ☆94 · Updated 3 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆99 · Updated last year
- Prune transformer layers ☆68 · Updated 10 months ago
- QuIP quantization ☆52 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 6 months ago
- ☆125 · Updated last year
- ☆194 · Updated 3 months ago
- ☆67 · Updated 8 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆196 · Updated 8 months ago
- A repository for log-time feedforward networks ☆220 · Updated 11 months ago
- RWKV, in easy-to-read code ☆71 · Updated this week
- Spherical merging of Pytorch/HF-format language models with minimal feature loss ☆117 · Updated last year
- Pytorch/XLA SPMD test code on Google TPU ☆23 · Updated 11 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆42 · Updated 10 months ago
- ☆126 · Updated 7 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆150 · Updated 3 months ago
- Efficient Infinite Context Transformers with Infini-attention Pytorch implementation + QwenMoE implementation + training script + 1M cont… ☆80 · Updated 10 months ago
- Work in progress. ☆50 · Updated last week
- Repository for sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Training-free post-training efficient sub-quadratic-complexity attention, implemented with OpenAI Triton ☆127 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆226 · Updated 2 months ago
- Repo hosting code and materials on speeding up LLM inference via token merging ☆35 · Updated 11 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆104 · Updated 5 months ago
- ☆49 · Updated last year
- ☆220 · Updated 9 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆163 · Updated last month
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆362 · Updated last year
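Most of the repositories above revolve around low-bit quantization; the page's subject, bitnet158, implements the 1.58-bit ternary scheme from "The Era of 1-bit LLMs", where each weight is mapped to {-1, 0, 1}. A minimal sketch of that paper's absmean quantization rule (the function name and example values here are illustrative, not taken from any of the listed repos):

```python
def absmean_ternary_quantize(weights, eps=1e-8):
    """Quantize a weight matrix to {-1, 0, 1} via the absmean rule from
    "The Era of 1-bit LLMs": scale by the mean absolute weight, round,
    then clip to the ternary range. Returns the ternary matrix plus the
    scale needed to approximately dequantize (w_q[i][j] * scale)."""
    flat = [x for row in weights for x in row]
    scale = sum(abs(x) for x in flat) / len(flat) + eps  # mean |w|
    clip = lambda x: max(-1, min(1, round(x / scale)))   # RoundClip
    w_q = [[clip(x) for x in row] for row in weights]
    return w_q, scale

# Illustrative 2x3 weight matrix (made-up values):
w = [[0.9, -0.05, 0.4], [-1.2, 0.02, 0.7]]
w_q, scale = absmean_ternary_quantize(w)
```

With only three weight values, matrix multiplies against `w_q` reduce to additions and sign flips followed by one multiply by `scale`, which is where the claimed speed and energy savings come from.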