cchan / nanoGPT-fp8
☆13 · Updated 2 years ago
Alternatives and similar repositories for nanoGPT-fp8
Users interested in nanoGPT-fp8 are comparing it to the repositories listed below.
- ☆63 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- ☆49 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- ☆46 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- ☆62 · Updated last year
- RWKV-7: Surpassing GPT ☆96 · Updated 10 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 2 weeks ago
- Make triton easier ☆47 · Updated last year
- DPO, but faster 🚀 ☆44 · Updated 10 months ago
- ☆40 · Updated last year
- ☆39 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- Docker image for NVIDIA GH200 machines, optimized for vLLM serving and HF Trainer finetuning ☆48 · Updated 7 months ago
- ☆69 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- Latent Large Language Models ☆19 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆34 · Updated last year
- A repository for research on medium-sized language models ☆78 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆19 · Updated 2 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated 11 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year