jzhang38 / TinyLlama
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
☆8,727 · Updated last year
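For context before the list of alternatives, here is a minimal sketch of loading a TinyLlama checkpoint with Hugging Face transformers. This is not taken from the TinyLlama README; the Hub model id and generation settings are assumptions, so substitute whichever checkpoint you actually use.

```python
# Minimal sketch: load a TinyLlama checkpoint with Hugging Face transformers
# and generate a short continuation. The model id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint id

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype).to(device)

prompt = "The TinyLlama project is"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```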
Alternatives and similar repositories for TinyLlama
Users interested in TinyLlama are comparing it to the libraries listed below.
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,648 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,030 · Updated last year
- Tools for merging pretrained large language models. ☆6,231 · Updated 2 weeks ago
- Train transformer language models with reinforcement learning. ☆15,330 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,533 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,938 · Updated 4 months ago
- Go ahead and axolotl questions ☆10,324 · Updated this week
- Large Language Model Text Generation Inference ☆10,477 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,079 · Updated 2 months ago
- Modeling, training, eval, and inference code for OLMo ☆5,943 · Updated last week
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,806 · Updated 8 months ago
- Python bindings for llama.cpp ☆9,531 · Updated 3 weeks ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,447 · Updated last week
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,071 · Updated 2 weeks ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆12,691 · Updated last week
- PyTorch native post-training library ☆5,458 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆8,319 · Updated last month
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,943 · Updated last week
- A framework for few-shot evaluation of language models. ☆9,955 · Updated last week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆23,428 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,457 · Updated 2 months ago
- Tensor library for machine learning ☆13,111 · Updated last week
- Robust recipes to align language models with human and AI preferences ☆5,338 · Updated last month
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,303 · Updated 2 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,236 · Updated last month
- Universal LLM Deployment Engine with ML Compilation ☆21,259 · Updated this week
- Inference Llama 2 in one file of pure C ☆18,715 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,627 · Updated 8 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,893 · Updated last year
- Ongoing research training transformer models at scale ☆13,458 · Updated this week