cchan / nanoGPT-fp8
☆13 · Updated 2 years ago
Alternatives and similar repositories for nanoGPT-fp8
Users interested in nanoGPT-fp8 are comparing it to the libraries listed below.
- ☆63 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆62 · Updated 2 years ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- ☆50 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- ☆69 · Updated last year
- ☆46 · Updated last year
- ☆18 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- DPO, but faster 🚀 ☆45 · Updated 10 months ago
- BFloat16 Fused Adam Operator for PyTorch ☆16 · Updated 11 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆21 · Updated last year
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- ☆64 · Updated 7 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ☆61 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- ☆91 · Updated last year
- ☆136 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated 2 years ago