joey00072 / ohara
Collection of autoregressive model implementations
☆85 · Updated last week
Alternatives and similar repositories for ohara:
Users interested in ohara are comparing it to the libraries listed below.
- ☆49 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers ☆63 · Updated last week
- ☆47 · Updated 8 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆53 · Updated 2 months ago
- ☆78 · Updated 10 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated last month
- Prune transformer layers ☆69 · Updated 11 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆105 · Updated this week
- ☆27 · Updated 9 months ago
- Triton implementation of the HyperAttention algorithm ☆47 · Updated last year
- Minimal (400 LOC), maximal (multi-node, FSDP) GPT training ☆123 · Updated last year
- DPO, but faster 🚀 ☆41 · Updated 5 months ago
- Simple GRPO scripts and configurations ☆58 · Updated 3 months ago
- My fork of Allen AI's OLMo for educational purposes ☆30 · Updated 5 months ago
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 4 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated this week
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆49 · Updated last month
- Working implementation of DeepSeek MLA ☆40 · Updated 3 months ago
- Simple repository for training small reasoning models ☆27 · Updated 2 months ago
- ☆78 · Updated 8 months ago
- ☆80 · Updated last year
- A repository for research on medium-sized language models ☆76 · Updated 11 months ago
- prime-rl is a codebase for decentralized RL training at scale ☆85 · Updated this week
- ☆43 · Updated last year
- ☆94 · Updated 3 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers ☆63 · Updated last year
- ☆53 · Updated last year
- Focused on fast experimentation and simplicity ☆71 · Updated 4 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated last month