hpcaitech / Titans
A collection of models built with ColossalAI
☆32 · Updated 2 years ago
Alternatives and similar repositories for Titans
Users interested in Titans often compare it to the libraries listed below.
- Scalable PaLM implementation in PyTorch ☆190 · Updated 2 years ago
- ☆104 · Updated 2 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆106 · Updated 2 years ago
- The aim of this repository is to utilize LLaMA to reproduce and enhance the Stanford Alpaca ☆98 · Updated 2 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆340 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project ☆124 · Updated last year
- A fine-tuned LLaMA that is good at arithmetic tasks ☆177 · Updated last year
- Large-scale distributed model training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆314 · Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- An experimental implementation of the retrieval-enhanced language model ☆76 · Updated 2 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback ☆86 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆51 · Updated 2 years ago
- ⏳ ChatLog: Recording and Analysing ChatGPT Across Time ☆102 · Updated last year
- A unified tokenization tool for images, Chinese, and English ☆151 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy choosing [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- This is a text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆97 · Updated last year
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches ☆76 · Updated last year
- ☆121 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- Longitudinal Evaluation of LLMs via Data Compression ☆32 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆109 · Updated 5 months ago
- ☆98 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- ☆59 · Updated 2 years ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year