hpcaitech / Titans
A collection of models built with ColossalAI
☆32 · Updated 3 years ago
Alternatives and similar repositories for Titans
Users interested in Titans are comparing it to the libraries listed below.
- Scalable PaLM implementation in PyTorch ☆190 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- A fine-tuned LLaMA that is good at arithmetic tasks ☆178 · Updated 2 years ago
- ⏳ ChatLog: Recording and Analysing ChatGPT Across Time ☆103 · Updated last year
- ☆105 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- A unified tokenization tool for Images, Chinese and English. ☆153 · Updated 2 years ago
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆96 · Updated last year
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆89 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- The aim of this repository is to use LLaMA to reproduce and enhance Stanford Alpaca ☆98 · Updated 2 years ago
- Inspired by Google's C4, a series of colossal clean data cleaning scripts focused on CommonCrawl data processing. Including Chinese… ☆135 · Updated 2 years ago
- Gaokao Benchmark for AI ☆109 · Updated 3 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆339 · Updated 2 years ago
- A personal reimplementation of Google's Infini-Transformer, utilizing a small 2B model. The project includes both model and train… ☆58 · Updated last year
- ☆47 · Updated last year
- A LLaMA1/LLaMA2 Megatron implementation. ☆28 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model ☆75 · Updated 3 years ago
- Repository for analysis and experiments in the BigCode project. ☆128 · Updated last year
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches. ☆77 · Updated last year
- A lightweight local website for displaying the performance of different chat models. ☆87 · Updated 2 years ago
- reStructured Pre-training ☆99 · Updated 3 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆40 · Updated 2 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆105 · Updated 3 years ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆113 · Updated 10 months ago
- Implementations of the online merging optimizers proposed in Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment ☆81 · Updated last year
- MultilingualShareGPT, a free multilingual corpus for LLM training ☆73 · Updated 2 years ago