jadechip / nanoXLSTM
The simplest, fastest repository for training/finetuning medium-sized xLSTMs.
☆42 · Updated 9 months ago
Alternatives and similar repositories for nanoXLSTM:
Users interested in nanoXLSTM are comparing it to the libraries listed below.
- ☆126 · Updated 7 months ago
- Set of scripts to finetune LLMs ☆37 · Updated 11 months ago
- ☆48 · Updated 4 months ago
- entropix-style sampling + GUI ☆25 · Updated 4 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model, with or without FIM ☆54 · Updated 11 months ago
- ☆49 · Updated last year
- ☆113 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 10 months ago
- Collection of autoregressive model implementations ☆83 · Updated last month
- Implementation of Mamba in Rust ☆77 · Updated last year
- 1.58-bit LLaMa model ☆82 · Updated 11 months ago
- RWKV-7: Surpassing GPT ☆82 · Updated 4 months ago
- ☆65 · Updated 9 months ago
- Testing LLM reasoning abilities with family-relationship quizzes. ☆62 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆33 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆137 · Updated last month
- ☆89 · Updated 2 months ago
- ☆60 · Updated last year
- ☆79 · Updated 11 months ago
- Video + code lecture on building nanoGPT from scratch ☆66 · Updated 9 months ago
- A repository for research on medium-sized language models. ☆76 · Updated 9 months ago
- Implementation of the Mamba SSM with HF integration. ☆56 · Updated 6 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆67 · Updated 4 months ago
- ☆27 · Updated 8 months ago
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 9 months ago
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆117 · Updated last year