neulab / gemini-benchmark
☆150 · Updated last year
Alternatives and similar repositories for gemini-benchmark:
Users interested in gemini-benchmark are comparing it to the repositories listed below.
- Self-Alignment with Principle-Following Reward Models · ☆154 · Updated 11 months ago
- PASTA: Post-hoc Attention Steering for LLMs · ☆112 · Updated 2 months ago
- Code and data accompanying the paper "Faithful Chain-of-Thought Reasoning" · ☆157 · Updated 9 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location · ☆77 · Updated 6 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark · ☆172 · Updated 3 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆215 · Updated 3 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆204 · Updated 9 months ago
- Evaluating LLMs with fewer examples · ☆145 · Updated 10 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] · ☆135 · Updated 3 months ago
- Repository for the paper "Shepherd: A Critic for Language Model Generation" · ☆218 · Updated last year
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" · ☆108 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" · ☆100 · Updated 7 months ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI · ☆93 · Updated last year
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" · ☆178 · Updated 7 months ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" · ☆52 · Updated 4 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy · ☆108 · Updated last year
- The GitHub repo for "Goal Driven Discovery of Distributional Differences via Language Descriptions" · ☆69 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers · ☆126 · Updated this week
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts · ☆215 · Updated 10 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" · ☆158 · Updated this week
- Code accompanying "How I learned to start worrying about prompt formatting" · ☆102 · Updated 4 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] · ☆130 · Updated 5 months ago