cchan / nanoGPT-fp8
☆13 · Updated 2 years ago
Alternatives and similar repositories for nanoGPT-fp8
Users who are interested in nanoGPT-fp8 are comparing it to the libraries listed below.
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training context adapts the model's context limit (☆63, updated 2 years ago; a minimal position-interpolation sketch follows after this list)
- Collection of autoregressive model implementations (☆85, updated 8 months ago)
- BFloat16 Fused Adam Operator for PyTorch (☆16, updated last year)
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts (☆24, updated last year)
- Google TPU optimizations for transformers models (☆131, updated last week)
- QLoRA with Enhanced Multi GPU Support (☆37, updated 2 years ago)
- DPO, but faster 🚀 (☆46, updated last year)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … (☆60, updated last year)
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… (☆15, updated 2 years ago)
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models (☆70, updated 2 years ago)
- Data preparation code for Amber 7B LLM (☆94, updated last year)
- Simple GRPO scripts and configurations (☆59, updated 10 months ago)
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers (☆18, updated 5 months ago)
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (☆59, updated 2 months ago)
- train with kittens! (☆63, updated last year)
- A place to store reusable transformer components of my own creation or found on the interwebs (☆63, updated 2 weeks ago)
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training (☆132, updated last year)
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs (☆110, updated last year)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (☆131, updated last year)
- Data preparation code for CrystalCoder 7B LLM (☆45, updated last year)
- A repository for research on medium-sized language models (☆77, updated last year)
- Cerule - A Tiny Mighty Vision Model (☆68, updated last month)
- [WIP] Transformer to embed Danbooru labelsets (☆13, updated last year)
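For the RoPE context-extension entry above: the common trick behind fine-tuning a RoPE model on longer sequences is position interpolation, i.e. rescaling the rotary position indices so the longer sequence maps back into the position range seen during pre-training. The snippet below is a minimal sketch of that idea in PyTorch; the function names (`rope_frequencies`, `apply_rope`) and the 2048-to-8192 scale factor are illustrative assumptions, not code from the linked repository.

```python
# Minimal sketch of rotary position embeddings (RoPE) with position interpolation.
# NOTE: illustrative only; names and the scale factor are assumptions, not the
# linked repository's actual code.
import torch

def rope_frequencies(head_dim: int, max_pos: int, base: float = 10000.0,
                     scale: float = 1.0) -> torch.Tensor:
    """Precompute rotary angles. A scale < 1 compresses position indices so a
    model pre-trained on a shorter context can be fine-tuned and evaluated on a
    longer one (position interpolation)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float() * scale
    return torch.outer(positions, inv_freq)  # (max_pos, head_dim // 2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x with shape (batch, seq, heads, head_dim)."""
    seq_len = x.shape[1]
    cos = angles[:seq_len].cos()[None, :, None, :]
    sin = angles[:seq_len].sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)

# Example: pre-trained context of 2048 tokens, extended to 8192 by scaling
# positions down by 2048 / 8192 = 0.25 before fine-tuning on long sequences.
angles = rope_frequencies(head_dim=64, max_pos=8192, scale=2048 / 8192)
q = torch.randn(2, 8192, 8, 64)
q_rotated = apply_rope(q, angles)
print(q_rotated.shape)  # torch.Size([2, 8192, 8, 64])
```

With scale = 1.0 this reduces to standard RoPE; the scaled variant keeps every rotary angle within the range the model saw during pre-training, which is why a short fine-tuning run on longer sequences can adapt the context limit.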