xjdr-alt / mla_blog_translation
☆14 · Updated last year
Alternatives and similar repositories for mla_blog_translation
Users interested in mla_blog_translation are comparing it to the libraries listed below.
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆106 · Updated last week
- train with kittens! ☆62 · Updated 10 months ago
- NSA Triton kernels written with GPT-5 and Opus 4.1 ☆64 · Updated 2 weeks ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- Entropy-Based Sampling and Parallel CoT Decoding ☆17 · Updated 10 months ago
- ☆24 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- ☆27 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆69 · Updated 4 months ago
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 5 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- RWKV-7: Surpassing GPT ☆94 · Updated 9 months ago
- Modded vLLM to run pipeline parallelism over public networks ☆38 · Updated 3 months ago
- ☆49 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆19 · Updated last month
- QuIP quantization ☆57 · Updated last year
- An introduction to LLM sampling ☆79 · Updated 8 months ago
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 11 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- Simple Transformer in JAX ☆139 · Updated last year
- ☆61 · Updated last year
- A synthetic story narration dataset for studying small audio LMs ☆32 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆129 · Updated 8 months ago
- smolLM with the Entropix sampler in PyTorch ☆150 · Updated 9 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆83 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆79 · Updated last month
- Experimental GPU language with meta-programming ☆22 · Updated 11 months ago
- Just a bunch of benchmark logs for different LLMs ☆120 · Updated last year
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆143 · Updated last year