geronimi73 / mamba
☆32 · Updated last year
Alternatives and similar repositories for mamba:
Users interested in mamba are comparing it to the libraries listed below.
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- The code from our practical deep dive into using mamba for information extraction ☆54 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 5 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- ☆63 · Updated 7 months ago
- ☆48 · Updated 6 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 8 months ago
- Latent Large Language Models ☆18 · Updated 8 months ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 8 months ago
- ☆46 · Updated 9 months ago
- ☆15 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆49 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … ☆22 · Updated 7 months ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- ☆50 · Updated 6 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 8 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated last year
- Training hybrid models for dummies. ☆21 · Updated 3 months ago