Lightning-Universe / lightning-ColossalAI
Large-scale distributed model training strategy with Colossal-AI and Lightning AI
☆57 · Updated last year
Alternatives and similar repositories for lightning-ColossalAI:
Users interested in lightning-ColossalAI are comparing it to the libraries listed below.
- ☆24 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆53 · Updated last month
- Contextual Position Encoding, with custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated 9 months ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches ☆72 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 5 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 4 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- A new metric that can be used to evaluate the faithfulness of text generated by LLMs; the work behind this repository can be found here… ☆31 · Updated last year
- Tools for content data mining and NLP at scale ☆42 · Updated 8 months ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated 2 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆37 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model; the dataset was generated by prompt tuning PaLM ☆34 · Updated last year
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated 2 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (see the first sketch after this list) ☆48 · Updated 3 years ago
- Exploring fine-tuning public checkpoints on filtered 8K-token sequences from the Pile ☆115 · Updated last year
- An experiment on Dynamic NTK-scaled RoPE (see the second sketch after this list) ☆62 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆13 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware", given… ☆14 · Updated last year
- A truly flash T5 implementation! ☆63 · Updated 9 months ago
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆76 · Updated last year
- ☆51 · Updated 8 months ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆15 · Updated 5 months ago
- A 32-times-longer context window than vanilla Transformers, and up to 4 times longer than memory-efficient Transformers ☆46 · Updated last year
- Inference script for Meta's LLaMA models using the Hugging Face wrapper ☆110 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆33 · Updated last year
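
Two of the techniques above are compact enough to illustrate. First, the token-shift trick behind the Token Shift GPT entry: half of each position's feature channels are taken from the previous position, so even a plain feed-forward stack gets some causal mixing. A minimal PyTorch sketch (not the repository's actual code; the function name and the even channel split are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def token_shift(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, seq_len, dim). Split the channels in half, shift one
    # half back by a single timestep along the sequence axis, and
    # recombine, so position t also sees features from position t-1.
    x_shift, x_keep = x.chunk(2, dim=-1)
    x_shift = F.pad(x_shift, (0, 0, 1, -1))  # pad one step at the front, trim one at the end
    return torch.cat((x_shift, x_keep), dim=-1)

x = torch.randn(2, 16, 64)
assert token_shift(x).shape == x.shape  # shape is preserved
```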
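
Second, the dynamic NTK scaling of RoPE: when the input outgrows the training context, the rotary base is rescaled on the fly so the position frequencies stretch over the longer window. A minimal sketch, assuming a 4096-token training context and the commonly used dim/(dim-2) exponent (parameter names are illustrative, not the linked repo's API):

```python
import torch

def dynamic_ntk_rope_angles(seq_len: int, dim: int = 128, base: float = 10000.0,
                            train_ctx: int = 4096, alpha: float = 1.0) -> torch.Tensor:
    # Returns the (seq_len, dim // 2) table of rotation angles for RoPE.
    if seq_len > train_ctx:
        # Rescale the base only when the sequence exceeds the training
        # context, using the NTK-aware interpolation exponent dim/(dim-2).
        base = base * (alpha * seq_len / train_ctx - (alpha - 1)) ** (dim / (dim - 2))
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float()
    return torch.outer(positions, inv_freq)

angles = dynamic_ntk_rope_angles(seq_len=8192)
cos, sin = angles.cos(), angles.sin()  # applied to query/key channel pairs as usual
```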