InflectionAI / Inflection-Benchmarks
Public Inflection Benchmarks
☆68 · Updated last year
Alternatives and similar repositories for Inflection-Benchmarks:
Users interested in Inflection-Benchmarks are comparing it to the libraries listed below.
- Functional Benchmarks and the Reasoning Gap ☆84 · Updated 6 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆86 · Updated 3 weeks ago
- Experiments for efforts to train a new and improved t5 ☆77 · Updated last year
- ☆38 · Updated 11 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆48 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆69 · Updated 2 years ago
- ☆24 · Updated 3 months ago
- Replicating O1 inference-time scaling laws ☆83 · Updated 4 months ago
- Evaluating LLMs with CommonGen-Lite ☆89 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆53 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- ☆22 · Updated last year
- ☆44 · Updated 4 months ago
- ☆60 · Updated last year
- ☆89 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆187 · Updated 8 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- ☆48 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆72 · Updated 7 months ago
- Comprehensive analysis of the differences in performance between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Probabilistic LLM evaluations. [CogSci2023; ACL2023] ☆73 · Updated 8 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆114 · Updated 3 months ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year
- Multimodal language model benchmark, featuring challenging examples ☆163 · Updated 3 months ago
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆217 · Updated last year