ingur / bitlinear-pytorch
Implementation of the BitLinear layer from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits"
☆13 · Updated last year
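For orientation, here is a minimal sketch of what such a layer computes, following the BitNet b1.58 recipe from the paper: ternary weights via per-tensor absmean scaling, 8-bit activations via per-token absmax scaling, and a straight-through estimator so the layer stays trainable. This is an illustrative sketch, not necessarily this repository's exact API, and the paper's pre-quantization RMSNorm is omitted for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Illustrative 1.58-bit linear layer (BitNet b1.58-style sketch)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Quantize weights to {-1, 0, +1} with a per-tensor absmean scale.
        w_scale = 1.0 / w.abs().mean().clamp(min=1e-5)
        w_q = (w * w_scale).round().clamp(-1, 1) / w_scale
        # Quantize activations to 8 bits with a per-token absmax scale.
        x_scale = 127.0 / x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-5)
        x_q = (x * x_scale).round().clamp(-128, 127) / x_scale
        # Straight-through estimator: quantized values on the forward pass,
        # identity gradients on the backward pass.
        w_q = w + (w_q - w).detach()
        x_q = x + (x_q - x).detach()
        return F.linear(x_q, w_q, self.bias)
```

Used as a drop-in replacement for nn.Linear, e.g. `BitLinear(512, 512)(torch.randn(2, 16, 512))`.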
Alternatives and similar repositories for bitlinear-pytorch
Users interested in bitlinear-pytorch are comparing it to the libraries listed below.
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model", in Pytorch and Zeta ☆126 · Updated 2 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- BitLinear implementation ☆35 · Updated last week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 9 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 6 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 9 months ago
- PyTorch implementation of Titans. ☆31 · Updated 11 months ago
- Token Omission Via Attention ☆128 · Updated last year
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated (see the Q-Sparse sketch after this list) ☆33 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆91 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆53 · Updated 5 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- A single repo with all scripts and utils to train/fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- Here we test various linear attention designs (see the linear-attention sketch after this list). ☆62 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆46 · Updated 4 months ago
- Collection of autoregressive model implementations ☆85 · Updated 8 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
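The Q-Sparse entry above refers to top-K activation sparsification. A minimal sketch of that core idea, assuming magnitude-based top-K selection with a straight-through estimator (the function name `q_sparse_topk` and the `k_ratio` parameter are illustrative, and the paper's additional details such as squared-ReLU activations are omitted):

```python
import torch

def q_sparse_topk(x: torch.Tensor, k_ratio: float = 0.5) -> torch.Tensor:
    """Illustrative top-K activation sparsification (Q-Sparse-style sketch)."""
    # Keep the k largest-magnitude features per token, zero out the rest.
    k = max(1, int(x.shape[-1] * k_ratio))
    idx = x.abs().topk(k, dim=-1).indices
    mask = torch.zeros_like(x).scatter_(-1, idx, 1.0)
    x_sparse = x * mask
    # Straight-through estimator: sparse values forward, dense gradients backward.
    return x + (x_sparse - x).detach()
```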
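Likewise, for the linear-attention entry, a minimal non-causal sketch of the generic O(n) formulation using the elu(x) + 1 feature map of Katharopoulos et al. (2020); this illustrates the family of designs being tested, not any specific variant from that repository:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention: softmax(QK^T)V replaced by phi(Q)(phi(K)^T V).

    Shapes: q, k, v are (batch, heads, seq_len, head_dim).
    """
    q = F.elu(q) + 1.0  # positive feature map phi
    k = F.elu(k) + 1.0
    # Accumulate sum_n phi(k_n) v_n^T once: (batch, heads, d_k, d_v).
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    # Per-query normalizer phi(q_n)^T sum_m phi(k_m): (batch, heads, seq_len).
    z = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps
    # Output phi(q_n)^T kv / z_n: (batch, heads, seq_len, d_v).
    return torch.einsum("bhnd,bhde->bhne", q, kv) / z.unsqueeze(-1)
```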