imoneoi / multipack_sampler
Multipack distributed sampler for fast padding-free training of LLMs
☆188 · Updated 8 months ago
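For context, the idea behind a padding-free "multipack" sampler is to group variable-length sequences into fixed token budgets so that each batch is a concatenation of whole sequences with no padding tokens. The snippet below is a minimal illustrative sketch of that packing step (first-fit-decreasing bin packing over sequence lengths), not the repository's actual implementation; the function name, signature, and example lengths are assumptions for demonstration only.

```python
# Illustrative sketch of length-based packing for padding-free batching.
# NOT the multipack_sampler implementation; just the general idea of grouping
# variable-length sequences into packs whose total token count fits a budget.
from typing import List


def pack_sequences(lengths: List[int], max_tokens: int) -> List[List[int]]:
    """First-fit-decreasing bin packing over sequence lengths.

    Returns a list of packs, each a list of sequence indices whose summed
    length does not exceed ``max_tokens``.
    """
    # Sort indices by length, longest first, for better bin utilization.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    packs: List[List[int]] = []
    remaining: List[int] = []  # leftover token capacity of each pack

    for idx in order:
        seq_len = lengths[idx]
        for p, cap in enumerate(remaining):
            if seq_len <= cap:           # first existing pack with enough room
                packs[p].append(idx)
                remaining[p] -= seq_len
                break
        else:                            # no existing pack fits; open a new one
            packs.append([idx])
            remaining.append(max_tokens - seq_len)
    return packs


if __name__ == "__main__":
    lengths = [512, 1900, 300, 2048, 128, 1024]  # hypothetical sequence lengths
    for pack in pack_sequences(lengths, max_tokens=2048):
        print(pack, sum(lengths[i] for i in pack))
```

Packing longest sequences first tends to leave fewer stranded gaps, which is why first-fit-decreasing is a common heuristic for this kind of token-budget binning.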
Alternatives and similar repositories for multipack_sampler:
Users interested in multipack_sampler are comparing it to the libraries listed below.
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- ☆94 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆205 · Updated 11 months ago
- A bagel, with everything. ☆320 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆254 · Updated 9 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- DSIR large-scale data selection framework for language model training ☆246 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆220 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆301 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆232 · Updated 2 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆121 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- ☆186 · Updated this week
- batched loras ☆341 · Updated last year
- ☆92 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆194 · Updated last month
- Scaling Data-Constrained Language Models ☆334 · Updated 7 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆459 · Updated last year
- ☆125 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆132 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 6 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind ☆176 · Updated 7 months ago
- Self-Alignment with Principle-Following Reward Models ☆160 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆95 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆198 · Updated this week
- ☆159 · Updated 2 years ago
- Comprehensive analysis of difference in performance of QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- ☆198 · Updated 5 months ago