hpcaitech / Titans
A collection of models built with ColossalAI
☆32 · Updated 3 years ago
Alternatives and similar repositories for Titans
Users interested in Titans are comparing it to the libraries listed below.
- Scalable PaLM implementation in PyTorch ☆189 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆69 · Updated 2 years ago
- ☆105 · Updated 2 years ago
- A fine-tuned LLaMA that is good at arithmetic tasks ☆178 · Updated 2 years ago
- A unified tokenization tool for images, Chinese, and English ☆153 · Updated 2 years ago
- Uses LLaMA to reproduce and enhance Stanford Alpaca ☆98 · Updated 2 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆339 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project ☆128 · Updated last year
- ☆98 · Updated 2 years ago
- The multilingual variant of GLM, a general language model trained with an autoregressive blank-infilling objective ☆62 · Updated 3 years ago
- An experimental implementation of the retrieval-enhanced language model ☆75 · Updated 2 years ago
- A (somewhat) minimal library for fine-tuning language models with PPO on human feedback ☆88 · Updated 3 years ago
- ⏳ ChatLog: Recording and Analysing ChatGPT Across Time ☆103 · Updated last year
- A text-generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆97 · Updated last year
- Large-scale distributed model training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆106 · Updated 3 years ago
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions ☆177 · Updated 2 years ago
- Gaokao Benchmark for AI ☆109 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆51 · Updated 2 years ago
- Open Instruction Generalist, an assistant trained on massive synthetic instructions to perform many millions of tasks ☆210 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- ☆47 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- MultilingualShareGPT, a free multilingual corpus for LLM training ☆73 · Updated 2 years ago
- ☆123 · Updated 2 years ago
- ☆83 · Updated last year
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆41 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆317 · Updated 2 years ago
- A more efficient GLM implementation! ☆54 · Updated 2 years ago