Mooler0410 / LLMsPracticalGuide
A curated list of practical guide resources of LLMs (LLMs Tree, Examples, Papers)
★ 9,941 · Updated last year
Alternatives and similar repositories for LLMsPracticalGuide
Users who are interested in LLMsPracticalGuide are comparing it to the libraries listed below.
- Train transformer language models with reinforcement learning. ★ 14,193 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list). ★ 18,774 · Updated last week
- Awesome-LLM: a curated list of Large Language Models. ★ 23,819 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs (see the 4-bit loading sketch after this list). ★ 10,490 · Updated last year
- The official GitHub page for the survey paper "A Survey of Large Language Models". ★ 11,581 · Updated 3 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ★ 6,068 · Updated 9 months ago
- Aligning pretrained language models with instruction data generated by themselves. ★ 4,391 · Updated 2 years ago
- Large Language Model Text Generation Inference. ★ 10,216 · Updated this week
- The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al. ★ 7,717 · Updated 10 months ago
- General technology for enabling AI capabilities with LLMs and MLLMs. ★ 4,031 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models". ★ 12,090 · Updated 6 months ago
- Fast and memory-efficient exact attention. ★ 17,846 · Updated this week
- A framework for few-shot evaluation of language models. ★ 9,264 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models (see the encoding sketch after this list). ★ 14,829 · Updated 3 months ago
- Instruct-tune LLaMA on consumer hardware. ★ 18,913 · Updated 10 months ago
- Accessible large language models via k-bit quantization for PyTorch. ★ 7,142 · Updated this week
- Code and documentation to train Stanford's Alpaca models, and generate the data. ★ 30,040 · Updated 11 months ago
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks. ★ 16,351 · Updated 5 months ago
- Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, RAG. We als… ★ 17,490 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ★ 12,264 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks. ★ 6,906 · Updated 11 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters. ★ 5,882 · Updated last year
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (see the BPE sketch after this list). ★ 9,694 · Updated 11 months ago
- Instruction Tuning with GPT-4. ★ 4,308 · Updated 2 years ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset. ★ 7,497 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath. ★ 9,420 · Updated last week
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls). ★ 12,151 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ★ 41,856 · Updated 6 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). ★ 4,667 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ★ 13,706 · Updated last week
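
A few hedged sketches of the techniques behind the entries above. First, parameter-efficient fine-tuning with 🤗 PEFT: a minimal LoRA setup wrapping a small causal LM. The model name and hyperparameters (r, lora_alpha, target_modules) are illustrative assumptions, not recommendations from the listed repo.

```python
# Minimal LoRA setup with the PEFT library (pip install peft transformers).
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works

config = LoraConfig(
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the update
    target_modules=["c_attn"],   # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train
```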
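Next, QLoRA-style 4-bit loading via the transformers/bitsandbytes integration: the base weights are quantized to NF4, the data type introduced by the QLoRA paper. A sketch only; it assumes a CUDA GPU, `pip install bitsandbytes accelerate`, and the model name is an arbitrary placeholder.

```python
# QLoRA-style 4-bit NF4 loading via transformers' bitsandbytes integration.
# Assumes a CUDA GPU; the model name is an illustrative placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4, from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)
```

LoRA adapters in higher precision are then typically trained on top of the frozen 4-bit base, which is what lets QLoRA finetune large models on a single GPU.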
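Round-tripping text through tiktoken is a one-liner in each direction; the sample string is arbitrary.

```python
# Encode and decode with tiktoken's cl100k_base encoding (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Practical guides for LLMs")
print(tokens)              # list of integer token ids
print(enc.decode(tokens))  # -> "Practical guides for LLMs"
```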
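Finally, a toy BPE trainer in the spirit of minbpe: repeatedly merge the most frequent adjacent pair of token ids into a new id. This is an illustrative sketch of the algorithm, not code from the listed repo.

```python
# Toy BPE: merge the most frequent adjacent byte pair into a new token id.
from collections import Counter

def pair_counts(ids):
    """Count occurrences of each adjacent pair of ids."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = list("aaabdaaabac".encode("utf-8"))  # start from raw UTF-8 bytes
for new_id in range(256, 259):             # perform three merges
    pair = pair_counts(ids).most_common(1)[0][0]
    ids = merge(ids, pair, new_id)
print(ids)  # shorter sequence; ids >= 256 stand for merged pairs
```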