cchan / nanoGPT-fp8
☆12 · Updated last year
Alternatives and similar repositories for nanoGPT-fp8:
Users interested in nanoGPT-fp8 are comparing it to the libraries listed below
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit (see the sketch after this list) ☆63 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 10 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 10 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- ☆60 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- ☆22 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆27 · Updated this week
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- ☆24 · Updated last year
- Data preparation code for Amber 7B LLM ☆84 · Updated 8 months ago
- ☆41 · Updated last year
- Latent Large Language Models ☆17 · Updated 5 months ago
- Repository for CPU kernel generation for LLM inference ☆25 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆72 · Updated last year
- QLoRA with enhanced multi-GPU support ☆36 · Updated last year
- ☆48 · Updated 2 months ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆29 · Updated 4 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year
- Make Triton easier ☆44 · Updated 7 months ago
- Utilities for Training Very Large Models ☆57 · Updated 4 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆67 · Updated 8 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆58 · Updated 3 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- ☆31 · Updated 7 months ago
- ☆64 · Updated 2 years ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 8 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆33 · Updated 10 months ago
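The first entry above concerns extending a RoPE model's context window by finetuning on longer sequences. As a rough illustration of why that is possible, here is a minimal sketch (hypothetical helper names `rope_angles` and `apply_rope`, assuming PyTorch; not code from any repository listed here). It shows that rotary position embeddings are defined for any position index, so the architecture itself imposes no hard length cap; finetuning on longer sequences is what adapts the weights to the previously unseen positions.

```python
# Minimal RoPE sketch (illustrative, hypothetical helper names).
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0) -> torch.Tensor:
    # One rotation angle per (position, frequency) pair; shape (seq_len, head_dim // 2).
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float()
    return torch.outer(positions, inv_freq)

def apply_rope(x: torch.Tensor) -> torch.Tensor:
    # x: (seq_len, head_dim); rotate consecutive channel pairs by the position-dependent angles.
    seq_len, head_dim = x.shape
    theta = rope_angles(seq_len, head_dim)
    cos, sin = theta.cos(), theta.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# The same formula works past a hypothetical 2048-token pre-training length:
# only the rotation angles grow with position, nothing in the math breaks.
q_short = apply_rope(torch.randn(2048, 64))
q_long = apply_rope(torch.randn(4096, 64))
```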