llm-efficiency-challenge / neurips_llm_efficiency_challenge
NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day
☆259 · Updated 2 years ago
Alternatives and similar repositories for neurips_llm_efficiency_challenge
Users interested in neurips_llm_efficiency_challenge are comparing it to the repositories listed below.
- Scaling Data-Constrained Language Models ☆343 · Updated 5 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 7 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- A repository for research on medium-sized language models. ☆524 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆202 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆244 · Updated 10 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆226 · Updated 3 months ago
- Batched LoRAs ☆347 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. ☆247 · Updated 3 months ago
- git extension for {collaborative, communal, continual} model development ☆217 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- DSIR large-scale data selection framework for language model training ☆266 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆112 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- ☆205 · Updated last week
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆173 · Updated 6 months ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- Pre-training code for Amber 7B LLM ☆170 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆180 · Updated 6 months ago
- ☆167 · Updated 2 years ago
- ☆94 · Updated 2 years ago
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆560 · Updated 11 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆295 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆207 · Updated last year
- Experiments with inference on Llama ☆103 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated 2 years ago