AI-Guru / helibrunna
A HuggingFace-compatible Small Language Model trainer.
☆76 · Updated 11 months ago
Alternatives and similar repositories for helibrunna
Users interested in helibrunna are comparing it to the libraries listed below.
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆81 · Updated last month
- Implementation of a Light Recurrent Unit in Pytorch ☆49 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- ☆56 · Updated 2 weeks ago
- ☆90 · Updated 6 months ago
- ☆82 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆158 · Updated 2 months ago
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆62 · Updated last year
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆134 · Updated last year
- Library to facilitate pruning of LLMs based on context ☆32 · Updated last year
- ☆101 · Updated 7 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆92 · Updated 3 weeks ago
- A State-Space Model with Rational Transfer Function Representation. ☆83 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆124 · Updated this week
- Collection of autoregressive model implementations ☆85 · Updated this week
- Integrating Mamba/SSMs with Transformer for enhanced long context and high-quality sequence modeling ☆212 · Updated this week
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆132 · Updated 2 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- ☆53 · Updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 11 months ago
- SaLSa optimizer implementation (no learning rates needed) ☆31 · Updated 7 months ago
- Griffin MQA + Hawk Linear RNN hybrid ☆89 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Implementation of Agent Attention in Pytorch ☆93 · Updated last year
- Pytorch implementation of the xLSTM model by Beck et al. (2024) ☆181 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated 3 weeks ago
- Implementation of a modular, high-performance, and simplistic Mamba for high-speed applications ☆40 · Updated last year
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. Then be able to scroll… ☆27 · Updated last year