dzhulgakov / llama-mistral
Inference code for Mistral and Mixtral hacked up into original Llama implementation
☆373 · Updated last year
Alternatives and similar repositories for llama-mistral:
Users interested in llama-mistral are comparing it to the libraries listed below.
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆583 · Updated last year
- ☆493 · Updated 4 months ago
- Official repository for LongChat and LongEval ☆518 · Updated 7 months ago
- A bagel, with everything. ☆315 · Updated 9 months ago
- batched loras ☆336 · Updated last year
- ☆413 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆419 · Updated last year
- ☆267 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆684 · Updated 9 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆634 · Updated 7 months ago
- ☆861 · Updated last year
- ☆151 · Updated 6 months ago
- Merge Transformers language models by use of gradient parameters. ☆202 · Updated 5 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,179 · Updated 3 months ago
- ☆484 · Updated last month
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆704 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆538 · Updated 10 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆667 · Updated 5 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆714 · Updated 7 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆263 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆687 · Updated 3 months ago
- Inference code for Persimmon-8B ☆416 · Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆625 · Updated 11 months ago
- Generate textbook-quality synthetic LLM pretraining data ☆492 · Updated last year
- A library for easily merging multiple LLM experts, and efficiently training the merged LLM. ☆426 · Updated 4 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆541 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆447 · Updated 9 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆491 · Updated 7 months ago