zaydzuhri / flame
Fork of the Flame repo for training some new work in development
☆15 · Updated last month
Alternatives and similar repositories for flame
Users interested in flame are comparing it to the libraries listed below.
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago
- ☆85 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 3 months ago
- JAX Scalify: end-to-end scaled arithmetic ☆16 · Updated 9 months ago
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 2 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- A repository for research on medium-sized language models ☆78 · Updated last year
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- ☆23 · Updated 8 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆19 · Updated last month
- Here we will test various linear attention designs ☆62 · Updated last year
- ☆31 · Updated last year
- ☆11 · Updated 5 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated last year
- ☆82 · Updated last year
- ☆68 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆37 · Updated 6 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆31 · Updated last week
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆30 · Updated 3 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆34 · Updated 5 months ago
- ☆45 · Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆61 · Updated 2 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Official Repository for Task-Circuit Quantization ☆22 · Updated 2 months ago
- ☆34 · Updated 11 months ago
- ☆21 · Updated 9 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 8 months ago
- Implementation of a modular, high-performance, and simplistic Mamba for high-speed applications ☆36 · Updated 9 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year