InflectionAI / Inflection-Benchmarks
Public Inflection Benchmarks
☆69 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for Inflection-Benchmarks
- ☆55 · Updated last month
- Code repository for the c-BTM paper ☆105 · Updated last year
- ☆103 · Updated last month
- Functional Benchmarks and the Reasoning Gap ☆78 · Updated last month
- Experiments for efforts to train a new and improved t5 ☆76 · Updated 7 months ago
- Evaluating LLMs with CommonGen-Lite ☆85 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆48 · Updated 7 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆41 · Updated last month
- ☆38 · Updated 7 months ago
- ☆49 · Updated 6 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆101 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- ☆57 · Updated 11 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆97 · Updated last year
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆39 · Updated 10 months ago
- Multimodal language model benchmark, featuring challenging examples ☆149 · Updated 3 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆46 · Updated 2 months ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆68 · Updated last year
- ☆22 · Updated last year
- ☆46 · Updated last week
- Small, simple agent task environments for training and evaluation ☆16 · Updated 3 weeks ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 10 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆73 · Updated 3 months ago
- ☆28 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆41 · Updated 10 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Length (ICLR 2024) ☆199 · Updated 6 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆77 · Updated 8 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆216 · Updated 7 months ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year