juncongmoo / chatllama
ChatLLaMA: Open source implementation for LLaMA-based ChatGPT runnable on a single GPU. 15x faster training process than ChatGPT
★1,202 · Updated last year
Related projects:
- 4 bits quantization of LLaMA using GPTQ ★2,979 · Updated 2 months ago
- LLaMA: Open and Efficient Foundation Language Models ★2,801 · Updated 10 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ★2,368 · Updated last month
- Alpaca dataset from Stanford, cleaned and curated ★1,493 · Updated last year
- Let ChatGPT teach your own chatbot in hours with a single GPU! ★3,155 · Updated 6 months ago
- ★1,411 · Updated last year
- LLM as a Chatbot Service ★3,280 · Updated 9 months ago
- Chat with Meta's LLaMA models at home made easy ★835 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ★1,601 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ★1,585 · Updated last year
- Aligning pretrained language models with instruction data generated by themselves. ★4,062 · Updated last year
- Instruction Tuning with GPT-4 ★4,165 · Updated last year
- Quantized inference code for LLaMA models ★1,052 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ★19 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ★4,442 · Updated 8 months ago
- Simple UI for LLM Model Finetuning ★2,046 · Updated 8 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ★2,210 · Updated 3 weeks ago
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI ★1,941 · Updated last month
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ★5,691 · Updated 6 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ★4,522 · Updated last month
- LLM training code for Databricks foundation models ★3,964 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ★2,730 · Updated 11 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ★2,513 · Updated last month
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ★4,326 · Updated last month
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ★1,446 · Updated 10 months ago
- Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-sour… ★2,587 · Updated 5 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ★1,875 · Updated 5 months ago
- ★2,504 · Updated last month
- Officially supported Python bindings for llama.cpp + gpt4all ★1,024 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ★1,063 · Updated 8 months ago