Jaykef / ai-algorithms
First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting research papers and demos.
☆177 · Updated 2 weeks ago
Alternatives and similar repositories for ai-algorithms
Users interested in ai-algorithms are comparing it to the libraries listed below.
- minimal GRPO implementation from scratch ☆94 · Updated 4 months ago
- Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation ☆367 · Updated this week
- An extension of the nanoGPT repository for training small MoE models. ☆164 · Updated 4 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆113 · Updated 2 months ago
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆179 · Updated 4 months ago
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆128 · Updated 11 months ago
- ☆164 · Updated this week
- From-scratch implementation of a vision language model in pure PyTorch ☆231 · Updated last year
- Tina: Tiny Reasoning Models via LoRA ☆274 · Updated 2 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆200 · Updated 2 weeks ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆56 · Updated last year
- ☆190 · Updated 7 months ago
- Exploring Applications of GRPO ☆245 · Updated 3 weeks ago
- Code repository for Black Mamba ☆252 · Updated last year
- The official implementation of TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆380 · Updated last week
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆29 · Updated 5 months ago
- Reproduction of DeepSeek-R1 ☆235 · Updated 3 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆219 · Updated last month
- An open-source implementation of LFMs from Liquid AI: Liquid Foundation Models ☆179 · Updated 2 weeks ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆121 · Updated last year
- Training teachers with reinforcement learning to make LLMs learn how to reason for test-time scaling. ☆324 · Updated last month
- LoRA and DoRA from Scratch Implementations ☆207 · Updated last year
- ☆293 · Updated 3 months ago
- An easy, reliable, fluid template for Python packages complete with docs, testing suites, READMEs, GitHub workflows, linting and much muc… ☆184 · Updated 2 weeks ago
- A compact LLM pretrained in 9 days by using high-quality data ☆320 · Updated 3 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆344 · Updated 7 months ago
- Build your own visual reasoning model ☆401 · Updated this week
- A collection of tricks and tools to speed up transformer models ☆169 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆184 · Updated 6 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆431 · Updated 2 months ago
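Several repositories in the list implement Group Relative Policy Optimization (GRPO). The core idea GRPO shares across these projects is replacing a learned value baseline with statistics of a group of sampled completions: each completion's reward is normalized against the group mean and standard deviation. A minimal sketch of that advantage computation (function name and epsilon are illustrative, not taken from any listed repo):

```python
def group_relative_advantages(rewards, eps=1e-8):
    # GRPO's group-relative baseline: normalize each reward against
    # the mean and std of the sampled group (no value network needed).
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled completions for one prompt, binary rewards:
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Completions rewarded above the group mean get positive advantages and those below get negative ones, so the group itself serves as the baseline.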
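The LoRA-related entries (Tina, "LoRA and DoRA from Scratch") build on the same low-rank adapter idea: a frozen weight `W` is augmented with a trainable low-rank product `B @ A`, scaled by `alpha / r`. A rough NumPy sketch under assumed layer sizes (all names and dimensions here are illustrative, not from any listed repo):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2     # hypothetical layer sizes; rank r << d
alpha = 4.0                  # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

def lora_forward(x):
    # Frozen full-rank path plus a scaled low-rank trainable update:
    # y = W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
```

Zero-initializing `B` means the adapted layer starts out identical to the frozen model, which is the standard LoRA initialization; only `A` and `B` (2·r·d parameters instead of d²) are updated during fine-tuning.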
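The memory-layers entry describes a trainable key-value lookup that adds parameters without adding FLOPs: a query activates only a few of many stored key-value slots. A rough NumPy sketch of such a sparse top-k lookup (sizes, names, and the softmax-over-top-k choice are assumptions for illustration, not details of the listed repo):

```python
import numpy as np

rng = np.random.default_rng(0)
n_keys, d, k = 1024, 16, 4              # hypothetical memory size, dim, top-k

K = rng.standard_normal((n_keys, d))    # trainable keys
V = rng.standard_normal((n_keys, d))    # trainable values

def memory_lookup(q):
    scores = K @ q                              # score every key against the query
    top = np.argpartition(scores, -k)[-k:]      # sparse: only k of n_keys slots fire
    w = np.exp(scores[top] - scores[top].max()) # softmax over the selected scores
    w /= w.sum()
    return w @ V[top]                           # weighted sum of selected values

out = memory_lookup(rng.standard_normal(d))
```

The parameter count grows with `n_keys`, but each forward pass only touches `k` value rows, which is why the lookup adds capacity without a matching increase in compute.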