johnma2006 / candle
Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments.
☆48 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for candle
- Understand and test language model architectures on synthetic tasks. ☆161 · Updated 6 months ago
- ☆35 · Updated 7 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆156 · Updated 10 months ago
- ☆72 · Updated 4 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 6 months ago
- Implementation of Infini-Transformer in PyTorch ☆104 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆49 · Updated 7 months ago
- Evaluating the Mamba architecture on the Othello game ☆42 · Updated 6 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated last month
- PyTorch implementation of models from the Zamba2 series. ☆158 · Updated 2 months ago
- Code accompanying our practical deep dive into using Mamba for information extraction ☆50 · Updated 10 months ago
- Token Omission Via Attention ☆119 · Updated 3 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆83 · Updated last week
- Accelerated First Order Parallel Associative Scan ☆162 · Updated 2 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆111 · Updated 2 months ago
- Collection of autoregressive model implementations ☆66 · Updated this week
- ☆61 · Updated 2 months ago
- Code for training and evaluating Contextual Document Embedding models ☆92 · Updated this week
- RWKV, in easy-to-read code ☆55 · Updated last week
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆66 · Updated 5 months ago
- Some preliminary explorations of Mamba's context scaling. ☆190 · Updated 9 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆77 · Updated last month
- ☆35 · Updated last week
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆134 · Updated this week
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆59 · Updated 6 months ago
- ☆50 · Updated 5 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆213 · Updated 2 months ago
- ☆53 · Updated 9 months ago
- ☆76 · Updated 6 months ago