juncongmoo / chatllama
ChatLLaMA 📢 Open-source implementation of a LLaMA-based ChatGPT, runnable on a single GPU. 15x faster training process than ChatGPT
⭐1,205 · Updated 2 months ago
Alternatives and similar repositories for chatllama:
Users interested in chatllama are comparing it to the libraries listed below:
- LLaMA: Open and Efficient Foundation Language Models — ⭐2,801 · Updated last year
- Let ChatGPT teach your own chatbot in hours with a single GPU! — ⭐3,168 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated — ⭐1,546 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ — ⭐3,050 · Updated 9 months ago
- ⭐1,468 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… — ⭐2,468 · Updated 8 months ago
- Chat with Meta's LLaMA models at home, made easy — ⭐833 · Updated 2 years ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch, adding RLHF similar to ChatGPT — ⭐472 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM — ⭐1,618 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF — ⭐44 · Updated last year
- Aligning pretrained language models with instruction data generated by themselves — ⭐4,337 · Updated 2 years ago
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture — ⭐760 · Updated 5 months ago
- Open Multilingual Chatbot for Everyone — ⭐1,257 · Updated 11 months ago
- LOMO: LOw-Memory Optimization — ⭐984 · Updated 9 months ago
- LLM as a Chatbot Service — ⭐3,314 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions — ⭐821 · Updated last year
- Instruction Tuning with GPT-4 — ⭐4,297 · Updated last year
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models⚡ — ⭐2,943 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer — ⭐1,631 · Updated last year
- Quantized inference code for LLaMA models — ⭐1,051 · Updated 2 years ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… — ⭐1,451 · Updated last year
- The Official Python Client for Lamini's API — ⭐2,529 · Updated this week
- ⭐459 · Updated last year
- The RedPajama-Data repository contains code for preparing large datasets for training large language models — ⭐4,696 · Updated 4 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits — ⭐720 · Updated 10 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics — ⭐2,449 · Updated last month
- Multi-language Enhanced LLaMA — ⭐301 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) — ⭐4,621 · Updated last year
- Quick-start LLaMA models with multiple methods, and fine-tune 7B/65B with one click — ⭐354 · Updated last year
- This repo contains the data preparation, tokenization, training, and inference code for BLOOMChat. BLOOMChat is a 176-billion-parameter mu… — ⭐583 · Updated last year