AI-Guru / helibrunna
A HuggingFace-compatible Small Language Model trainer.
☆76 · Updated 7 months ago
Alternatives and similar repositories for helibrunna
Users interested in helibrunna are comparing it to the libraries listed below.
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆69 · Updated last month
- Implementation of a Light Recurrent Unit in PyTorch ☆48 · Updated 11 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 9 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆80 · Updated 3 weeks ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆82 · Updated last year
- ☆84 · Updated 3 months ago
- A State-Space Model with Rational Transfer Function Representation. ☆81 · Updated last year
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆61 · Updated 10 months ago
- Attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆132 · Updated last year
- ☆49 · Updated 7 months ago
- Official implementation of "GPT or BERT: why not both?" ☆59 · Updated last month
- ☆71 · Updated 2 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆37 · Updated 7 months ago
- ☆51 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- Library to facilitate pruning of LLMs based on context ☆32 · Updated last year
- ☆14 · Updated 3 months ago
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling ☆207 · Updated 2 weeks ago
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆90 · Updated 3 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆128 · Updated last year
- Implementation of Agent Attention in PyTorch ☆91 · Updated last year
- DPO, but faster 🚀 ☆44 · Updated 9 months ago
- Use QLoRA to tune LLMs in PyTorch Lightning with Hugging Face + MLflow ☆64 · Updated last year
- Open-source reproducible benchmarks from Argmax ☆58 · Updated this week
- Truly flash implementation of the DeBERTa disentangled attention mechanism ☆63 · Updated 3 weeks ago
- ☆124 · Updated 10 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆59 · Updated last year
- SlamKit is an open-source toolkit for efficient training of SpeechLMs. It was used for "Slamming: Training a Speech Language Model on On…" ☆218 · Updated 4 months ago
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch ☆124 · Updated 10 months ago