jadechip / nanoXLSTM
The simplest, fastest repository for training/finetuning medium-sized xLSTMs.
☆41 · Updated 9 months ago
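For context on what a repo like this trains, below is a minimal sketch of the sLSTM recurrence from the xLSTM paper (Beck et al., 2024): exponential input/forget gating plus a normalizer state and a log-space stabilizer state. This is an illustrative reconstruction from the paper, not nanoXLSTM's actual code; the class name and layer sizes are hypothetical.

```python
# Minimal sLSTM cell sketch (per the xLSTM paper) -- NOT nanoXLSTM's own code.
import torch
import torch.nn as nn


class SLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One linear map producing pre-activations for the z, i, f, o branches.
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        self.hidden_size = hidden_size

    def forward(self, x, state):
        h, c, n, m = state
        z, i, f, o = (self.ih(x) + self.hh(h)).chunk(4, dim=-1)
        z = torch.tanh(z)     # cell input
        o = torch.sigmoid(o)  # output gate
        # Exponential i/f gates, kept in log space: m tracks the running
        # maximum of the gate pre-activations so exp() never overflows.
        m_new = torch.maximum(f + m, i)
        i_stab = torch.exp(i - m_new)
        f_stab = torch.exp(f + m - m_new)
        c_new = f_stab * c + i_stab * z  # cell state
        n_new = f_stab * n + i_stab      # normalizer state
        h_new = o * (c_new / n_new)      # normalized hidden state
        return h_new, (h_new, c_new, n_new, m_new)


if __name__ == "__main__":
    cell = SLSTMCell(input_size=16, hidden_size=32)
    x = torch.randn(8, 16)  # batch of 8 tokens
    zeros = torch.zeros(8, 32)
    h, state = cell(x, (zeros, zeros, zeros, zeros))
    print(h.shape)  # torch.Size([8, 32])
```

The normalizer state keeps the hidden state bounded even though the gates are exponentials, which is the paper's trick for reviving LSTM-style recurrence at scale.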
Alternatives and similar repositories for nanoXLSTM:
Users interested in nanoXLSTM are comparing it to the libraries listed below.
- ☆126 · Updated 6 months ago
- entropix-style sampling + GUI ☆25 · Updated 4 months ago
- ☆48 · Updated 4 months ago
- Testing LLM reasoning abilities with family-relationship quizzes. ☆61 · Updated last month
- A single repo with all scripts and utils to train / fine-tune the Mamba model, with or without FIM ☆53 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆177 · Updated last month
- ☆113 · Updated 5 months ago
- GPT-2 small trained on phi-like data ☆65 · Updated last year
- Collection of autoregressive model implementations ☆83 · Updated 3 weeks ago
- RWKV-7: Surpassing GPT ☆80 · Updated 3 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆33 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆135 · Updated 3 weeks ago
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 9 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 11 months ago
- 1.58-bit LLaMA model ☆82 · Updated 11 months ago
- ☆89 · Updated last month
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 6 months ago
- ☆49 · Updated 11 months ago
- This is the official repository for Inheritune. ☆109 · Updated last month
- RWKV, in easy-to-read code ☆69 · Updated 3 months ago
- ☆65 · Updated 9 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆117 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- ☆53 · Updated 9 months ago
- ☆15 · Updated last year