Lightning-Universe / lightning-ColossalAI
Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI
☆56 · Updated 2 years ago
Alternatives and similar repositories for lightning-ColossalAI
Users interested in lightning-ColossalAI are comparing it to the libraries listed below.
- ☆24 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆109 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆39 · Updated 11 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- Scalable PaLM implementation in PyTorch ☆188 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation (see the decoding sketch after this list) ☆121 · Updated 2 years ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches (see the batching sketch after this list) ☆76 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- ☆67 · Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆39 · Updated 3 years ago
- Calculating the expected time for training an LLM. ☆38 · Updated 2 years ago
- An Experiment on Dynamic NTK Scaling RoPE (see the scaling sketch after this list) ☆64 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated this week
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆17 · Updated last year
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ☆76 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism. ☆32 · Updated 3 years ago
- Tools for content data mining and NLP at scale ☆44 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences" ☆69 · Updated 2 years ago
- Code for the paper "Query-Key Normalization for Transformers" (see the QK-norm sketch after this list) ☆49 · Updated 4 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers (see the position-sampling sketch after this list) ☆82 · Updated last year
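
The contrastive-search repo above ([TMLR'23]) re-ranks top-k candidates by a degeneration penalty at decoding time. Hugging Face `transformers` exposes this decoding mode through the `penalty_alpha` and `top_k` arguments of `generate`; a minimal sketch, with the `gpt2` checkpoint and prompt purely illustrative:

```python
# Contrastive search decoding via Hugging Face `generate`.
# Setting penalty_alpha > 0 together with top_k > 1 activates it.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DeepSpeed is a library that", return_tensors="pt")
outputs = model.generate(
    **inputs,
    penalty_alpha=0.6,   # weight of the degeneration penalty
    top_k=4,             # candidate pool re-ranked by the contrastive objective
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```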
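
The batch-prompting entry ([EMNLP 2023 Industry Track]) amortizes one LLM call over several questions. A minimal sketch of the idea, assuming a generic `llm(prompt) -> str` callable; the numbered template is illustrative, not the paper's exact format:

```python
# Batch prompting sketch: pack K questions into one prompt, then parse
# the numbered answers back out. `llm` is any text-completion callable.
import re

def batch_prompt(llm, questions):
    prompt = "Answer every question below. Use the format 'A[i]: <answer>'.\n"
    prompt += "\n".join(f"Q[{i}]: {q}" for i, q in enumerate(questions))
    completion = llm(prompt)
    answers = dict(re.findall(r"A\[(\d+)\]:\s*(.+)", completion))
    return [answers.get(str(i), "") for i in range(len(questions))]
```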
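
The Dynamic NTK Scaling RoPE experiment builds on the widely used dynamic-NTK rule: when the running sequence length exceeds the trained context, the rotary base is enlarged so the lowest frequencies stretch to cover it. A sketch of that rule (the repo's exact variant may differ):

```python
# Dynamic NTK scaling for RoPE inverse frequencies.
import torch

def dynamic_ntk_inv_freq(dim, seq_len, max_trained_len=2048,
                         base=10000.0, scaling_factor=1.0):
    # Grow the base only once the context exceeds the trained length.
    if seq_len > max_trained_len:
        base = base * (
            (scaling_factor * seq_len / max_trained_len) - (scaling_factor - 1)
        ) ** (dim / (dim - 2))
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
```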
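
Query-Key Normalization replaces the usual 1/sqrt(d) attention scaling: queries and keys are L2-normalized along the head dimension so the logits become cosine similarities, scaled by a learned temperature. A minimal single-head sketch, assuming `g` is supplied as a learned parameter:

```python
# QK-Norm attention sketch: cosine-similarity logits with a learned scale g.
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, g):
    # q, k, v: (batch, seq, head_dim); g: learned scalar (e.g. nn.Parameter)
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = g * (q @ k.transpose(-2, -1))
    return F.softmax(logits, dim=-1) @ v
```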
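
Randomized positional encodings train on position indices drawn from a range much larger than the training length, so the large position values met at longer test lengths are no longer out of distribution. A sketch of the sampling step; the embedding layer consuming these indices is assumed:

```python
# Sample an ordered random subset of positions from a large index range.
import torch

def sample_random_positions(seq_len, max_random_len=8192):
    # Sorted sample without replacement keeps positions monotone.
    positions = torch.randperm(max_random_len)[:seq_len]
    return positions.sort().values  # indices for the position embedding
```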