OpenGPTX / lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
☆11 · Updated last month
Alternatives and similar repositories for lm-evaluation-harness
Users interested in lm-evaluation-harness are comparing it to the libraries listed below.
- ☆24 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Advanced Reasoning Benchmark Dataset for LLMs ☆46 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- ☆25 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 8 months ago
- Entailment self-training ☆25 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- ☆48 · Updated 6 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆58 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated last year
- A repository for research on medium-sized language models ☆76 · Updated 11 months ago
- Aioli: A unified optimization framework for language model data mixing ☆25 · Updated 4 months ago
- ☆20 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last week
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- ☆17 · Updated 3 weeks ago
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆22 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 7 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 11 months ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆64 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆104 · Updated 5 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- BigOBench assesses the capacity of Large Language Models (LLMs) to comprehend the time-space computational complexity of input or generated c… ☆32 · Updated last month