mistralai / megablocks-public
☆864 · Updated last year
Alternatives and similar repositories for megablocks-public:
Users interested in megablocks-public are comparing it to the libraries listed below.
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆371 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆711 · Updated last year
- a small code base for training large models ☆294 · Updated last week
- A bagel, with everything. ☆320 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,479 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆980 · Updated 9 months ago
- batched loras ☆341 · Updated last year
- A repository for research on medium sized language models. ☆495 · Updated last week
- Inference code for Persimmon-8B ☆415 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,378 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆867 · Updated last week
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆586 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,044 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆652 · Updated 11 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆923 · Updated last year
- Serving multiple LoRA finetuned LLM as one ☆1,056 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,243 · Updated 2 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,526 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆517 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆301 · Updated last year