tatsu-lab / stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
☆29,738 · Updated 6 months ago
Alternatives and similar repositories for stanford_alpaca:
Users interested in stanford_alpaca are comparing it to the libraries listed below.
- Instruct-tune LLaMA on consumer hardware ☆18,758 · Updated 5 months ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆37,496 · Updated this week
- JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf ☆23,871 · Updated 3 months ago
- Inference code for Llama models ☆57,227 · Updated 4 months ago
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,320 · Updated last week
- LlamaIndex is the leading framework for building LLM-powered agents over your data. ☆38,057 · Updated this week
- StableLM: Stability AI Language Models ☆15,831 · Updated 9 months ago
- Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/) ☆25,536 · Updated 4 months ago
- Locally run an Instruction-Tuned Chat-Style LLM ☆10,240 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,168 · Updated 7 months ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,005 · Updated last week
- Universal LLM Deployment Engine with ML Compilation ☆19,630 · Updated this week
- The simplest way to run LLaMA on your local machine ☆13,099 · Updated 6 months ago
- The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language. ☆21,094 · Updated 6 months ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,254 · Updated 2 months ago
- 🦜🔗 Build context-aware reasoning applications ☆98,422 · Updated this week
- Making large AI models cheaper, faster and more accessible ☆39,013 · Updated last week
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,679 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆36,255 · Updated this week
- Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment ☆18,651 · Updated 8 months ago
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath ☆9,312 · Updated 5 months ago
- ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source. ☆9,451 · Updated last month
- LLM inference in C/C++ ☆70,826 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆16,978 · Updated this week
- GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. ☆71,730 · Updated this week
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆35,268 · Updated this week