Jaykef / ai-algorithms
First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting research papers.
☆160 · Updated 3 weeks ago
Alternatives and similar repositories for ai-algorithms:
Users interested in ai-algorithms are comparing it to the libraries listed below.
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆166 · Updated 2 weeks ago
- An extension of the nanoGPT repository for training small MoE models. ☆129 · Updated last month
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO); a minimal sketch of the GRPO objective follows this list ☆98 · Updated last week
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- minimal GRPO implementation from scratch ☆72 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a rough sketch of the key-value lookup idea follows this list) ☆317 · Updated 4 months ago
- Code repository for Black Mamba ☆245 · Updated last year
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆199 · Updated last month
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆164 · Updated 3 months ago
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) ☆361 · Updated this week
- Reproduction of DeepSeek-R1 ☆222 · Updated last week
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆152 · Updated last week
- ☆176 · Updated 4 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆191 · Updated 2 weeks ago
- From scratch implementation of a vision language model in pure PyTorch ☆213 · Updated 11 months ago
- my attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆130 · Updated 11 months ago
- ☆264 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆156 · Updated last month
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆116 · Updated 10 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆286 · Updated last week
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆214 · Updated last week
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆355 · Updated 8 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆405 · Updated last week
- Official repo of the LM2 paper ☆37 · Updated 2 months ago
- Naively combining transformers and Kolmogorov-Arnold Networks to learn and experiment ☆34 · Updated 8 months ago
- Notes and commented code for RLHF (PPO) ☆86 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆179 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆168 · Updated 5 months ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆28 · Updated 2 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆96 · Updated last month
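For readers new to GRPO (referenced by the nanoGRPO and from-scratch entries above), here is a minimal, hedged sketch of the core idea: rewards are normalized within a group of completions for the same prompt to form advantages, which drive a PPO-style clipped objective. Tensor names, shapes, and the per-sequence simplification are illustrative assumptions and do not come from either repository.

```python
import torch

def grpo_loss(logp_new, logp_old, rewards, clip_eps=0.2):
    """Sketch of a GRPO-style objective (per-sequence log-probs for brevity;
    real implementations work per token and usually add a KL penalty to a
    reference model).

    logp_new / logp_old: (G,) summed log-probs of G sampled completions for
    one prompt under the current policy and the sampling policy.
    rewards: (G,) scalar rewards for the same completions.
    """
    # Group-relative advantage: normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # PPO-style clipped surrogate on the importance ratio.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()

# Toy usage with random numbers.
g = 8
logp_old = torch.randn(g)
logp_new = logp_old + 0.05 * torch.randn(g)
rewards = torch.randint(0, 2, (g,)).float()
print(grpo_loss(logp_new, logp_old, rewards))
```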
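The memory-layers entry above describes a trainable key-value lookup; the following is a rough PyTorch sketch of that idea only (a single key table with top-k retrieval). The module name, dimensions, and the naive scoring of every key are assumptions for illustration; the actual work uses product keys so lookup cost stays low even for very large tables.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMemoryLayer(nn.Module):
    """Illustrative trainable key-value lookup (not the product-key variant).

    Each token's hidden state becomes a query, is matched against a table of
    learned keys, and the top-k corresponding values are mixed back in. Most
    parameters sit in the key/value tables, and only k value rows contribute
    to any given token's output.
    """
    def __init__(self, d_model=256, n_slots=4096, k=4):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(n_slots, d_model) / d_model**0.5)
        self.values = nn.Parameter(torch.randn(n_slots, d_model) / d_model**0.5)
        self.k = k

    def forward(self, x):                          # x: (batch, seq, d_model)
        q = self.query(x)                          # (B, T, D)
        scores = q @ self.keys.t()                 # (B, T, n_slots)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)    # (B, T, k)
        picked = self.values[top_idx]              # (B, T, k, D)
        return x + (weights.unsqueeze(-1) * picked).sum(-2)

# Toy usage.
layer = TinyMemoryLayer()
out = layer(torch.randn(2, 5, 256))
print(out.shape)  # torch.Size([2, 5, 256])
```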