johnma2006 / candle
Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments.
☆51 · Updated last year
Alternatives and similar repositories for candle
Users interested in candle are comparing it to the repositories listed below.
- Evaluating the Mamba architecture on the Othello game ☆47 · Updated last year
- Custom Triton kernels for training Karpathy's nanoGPT ☆19 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆126 · Updated 3 weeks ago
- Understand and test language model architectures on synthetic tasks ☆197 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆234 · Updated 3 months ago
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆93 · Updated 10 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆116 · Updated 5 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆165 · Updated 4 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆86 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆124 · Updated 9 months ago
- ☆95 · Updated 4 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆70 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated last month
- ☆78 · Updated 10 months ago
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 5 months ago
- ☆37 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆181 · Updated 9 months ago
- ☆27 · Updated 10 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers ☆63 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- An extension of the nanoGPT repository for training small MoE models ☆147 · Updated 2 months ago
- ☆53 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated 7 months ago
- Implementations of attention with the softpick function, naive and FlashAttention-2 ☆76 · Updated last month
- ☆49 · Updated last year
- Mixture of A Million Experts ☆45 · Updated 10 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆88 · Updated 11 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆81 · Updated last year