dzhulgakov / llama-mistral
Inference code for Mistral and Mixtral hacked up into original Llama implementation
☆371 · Updated last year
Alternatives and similar repositories for llama-mistral:
Users interested in llama-mistral are comparing it to the repositories listed below.
- ☆524 · Updated 7 months ago
- A bagel, with everything. ☆318 · Updated last year
- ☆865 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆585 · Updated last year
- Official repository for LongChat and LongEval ☆517 · Updated 10 months ago
- ☆412 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆206 · Updated 8 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆693 · Updated last year
- batched loras ☆341 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆300 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆234 · Updated 10 months ago
- inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆712 · Updated 6 months ago
- Run evaluation on LLMs using human-eval benchmark ☆404 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆648 · Updated 10 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆167 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,042 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- ☆509 · Updated 4 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆457 · Updated last year
- ☆268 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆273 · Updated last year
- Inference code for Persimmon-8B ☆415 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆187 · Updated 8 months ago
- ☆153 · Updated 8 months ago
- scalable and robust tree-based speculative decoding algorithm ☆341 · Updated 2 months ago
- NexusRaven-13B, a new SOTA Open-Source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… ☆313 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year