mistralai / megablocks-public
☆864 · Updated last year
Alternatives and similar repositories for megablocks-public:
Users interested in megablocks-public are comparing it to the libraries listed below.
- Inference code for Mistral and Mixtral, hacked into the original Llama implementation ☆371 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,374 · Updated 11 months ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆584 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,451 · Updated 11 months ago
- Fine-tune Mistral-7B on 3090s, A100s, and H100s ☆709 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,218 · Updated 2 weeks ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆980 · Updated 8 months ago
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,481 · Updated last year
- A bagel, with everything ☆317 · Updated 11 months ago
- A small code base for training large models ☆288 · Updated 3 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Serving multiple LoRA fine-tuned LLMs as one ☆1,040 · Updated 10 months ago
- Batched LoRAs ☆340 · Updated last year
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆691 · Updated 11 months ago
- A repository for research on medium-sized language models ☆493 · Updated 2 months ago
- Minimalistic 3D-parallelism training for large language models ☆1,701 · Updated this week
- Code for fine-tuning Platypus-family LLMs using LoRA ☆628 · Updated last year
- Large Context Attention ☆693 · Updated 2 months ago
- Code for Quiet-STaR ☆721 · Updated 7 months ago
- Salesforce open-source LLMs with 8k sequence length ☆716 · Updated last month
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆855 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆705 · Updated 5 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆770 · Updated this week
- Data and tools for generating and inspecting OLMo pre-training data ☆1,162 · Updated last week