ethansmith2000 / TransformerExperiments
☆19 · Updated 6 months ago
Alternatives and similar repositories for TransformerExperiments
Users interested in TransformerExperiments are comparing it to the libraries listed below.
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ☆32 · Updated last year
- ☆34 · Updated last year
- ☆34 · Updated last year
- An implementation of the Llama architecture, to instruct and delight · ☆21 · Updated 6 months ago
- Code for the paper "Function-Space Learning Rates" · ☆23 · Updated 6 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated last year
- ☆53 · Updated last year
- Combining SOAP and MUON · ☆17 · Updated 9 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ☆63 · Updated 9 months ago
- ☆82 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training · ☆132 · Updated last year
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated last year
- Latent Diffusion Language Models · ☆70 · Updated 2 years ago
- H-Net Dynamic Hierarchical Architecture · ☆80 · Updated 2 months ago
- ☆21 · Updated last year
- Supporting PyTorch FSDP for optimizers · ☆84 · Updated 11 months ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence · ☆61 · Updated 3 years ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun · ☆57 · Updated 8 months ago
- Griffin MQA + Hawk Linear RNN Hybrid · ☆89 · Updated last year
- ☆41 · Updated last month
- ☆50 · Updated last year
- ☆91 · Updated last year
- Train a SmolLM-style LLM on FineWeb-Edu in JAX/Flax with an assortment of optimizers · ☆18 · Updated 4 months ago
- BFloat16 Fused Adam Operator for PyTorch · ☆16 · Updated last year
- Efficient PScan implementation in PyTorch · ☆17 · Updated last year
- Focused on fast experimentation and simplicity · ☆75 · Updated 11 months ago
- Experiments on the impact of depth in transformers and SSMs · ☆38 · Updated last month
- ☆20 · Updated 2 years ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆28 · Updated 7 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆38 · Updated 5 months ago