LegallyCoder / mamba-hf
Implementation of the Mamba SSM with hf_integration.
☆56 · Updated 11 months ago
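Since the listing only mentions "hf_integration", a minimal sketch of what that integration typically looks like may help. It assumes the repo publishes a checkpoint on the Hugging Face Hub that can be loaded through the standard transformers Auto* classes with `trust_remote_code=True`; the checkpoint id below and the availability of `generate()` are assumptions, not confirmed by this page.

```python
# Minimal sketch, assuming the repo exposes its Mamba model through the
# standard Hugging Face transformers API via custom remote code.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Q-bert/Mamba-130M"  # hypothetical Hub id, for illustration only

# trust_remote_code=True lets transformers load the repo's custom model class.
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("Mamba is a state space model that", return_tensors="pt")

# Assumes the custom model class implements GenerationMixin (i.e. generate()).
output_ids = model.generate(inputs["input_ids"], max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```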
Alternatives and similar repositories for mamba-hf
Users interested in mamba-hf are comparing it to the libraries listed below.
- ☆37 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 3 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆52 · Updated 11 months ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo search with PPO and/or DPO ☆31 · Updated last week
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 3 months ago
- This is the official repository for Inheritune. ☆112 · Updated 5 months ago
- ☆63 · Updated 10 months ago
- ☆31 · Updated last year
- RWKV-7: Surpassing GPT ☆94 · Updated 8 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated last month
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆65 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆82 · Updated 2 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆56 · Updated last week
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- ☆49 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆37 · Updated 5 months ago
- ☆81 · Updated last year
- ☆53 · Updated 8 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆56 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 10 months ago