mit-han-lab / streaming-llm
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
☆7,166 · Updated last year
Alternatives and similar repositories for streaming-llm
Users interested in streaming-llm are comparing it to the libraries listed below.
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,019 · Updated 9 months ago
- General technology for enabling AI capabilities with LLMs and MLLMs. ☆4,255 · Updated 3 weeks ago
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,175 · Updated 4 months ago
- Large Language Model Text Generation Inference. ☆10,728 · Updated this week
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset. ☆7,525 · Updated 2 years ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs. ☆4,410 · Updated last month
- Tools for merging pretrained large language models. ☆6,673 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆10,805 · Updated last year
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,913 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax. ☆2,504 · Updated last year
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath. ☆9,475 · Updated 7 months ago
- An open-source toolkit for LLM development. ☆2,799 · Updated last year
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA, and LLaMA-Adapter fine-tuning. ☆6,093 · Updated 6 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,881 · Updated this week
- Aligning pretrained language models with instruction data generated by themselves. ☆4,561 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour with 1.2M parameters. ☆5,934 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads. ☆2,685 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,867 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. ☆3,414 · Updated 5 months ago
- High-speed Large Language Model Serving for Local Deployment. ☆8,548 · Updated 5 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compresses the prompt and KV cache, achieving up to 20x compression with minimal performance loss. ☆5,755 · Updated 2 months ago
- Robust recipes to align language models with human and AI preferences. ☆5,473 · Updated 4 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters. ☆1,888 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral). ☆2,697 · Updated last year
- PyTorch-native post-training library. ☆5,642 · Updated this week
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆2,905 · Updated 2 years ago
- Instruction Tuning with GPT-4. ☆4,342 · Updated 2 years ago
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning. ☆5,459 · Updated 7 months ago
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,137 · Updated 2 months ago