ExtensityAI / benchmark
Evaluation of neuro-symbolic engines
☆35 · Updated 9 months ago
Alternatives and similar repositories for benchmark:
Users interested in benchmark are comparing it to the libraries listed below.
- ☆23 · Updated last month
- ☆80 · Updated 3 months ago
- ☆49 · Updated last month
- ☆45 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" · ☆27 · Updated last month
- Understanding how features learned by neural networks evolve throughout training · ☆34 · Updated 6 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆54 · Updated last year
- ☆34 · Updated last year
- ☆67 · Updated 8 months ago
- Google Research · ☆46 · Updated 2 years ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models · ☆41 · Updated 10 months ago
- Minimum Description Length probing for neural network representations · ☆19 · Updated 3 months ago
- PyTorch library for Active Fine-Tuning · ☆68 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆55 · Updated 8 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆104 · Updated last year
- ☆21 · Updated 3 months ago
- Sparse and discrete interpretability tool for neural networks · ☆61 · Updated last year
- ☆73 · Updated last week
- ☆31 · Updated 3 months ago
- ☆16 · Updated this week
- Universal Neurons in GPT2 Language Models · ☆28 · Updated 11 months ago
- ☆91 · Updated 2 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" · ☆73 · Updated 5 months ago
- Materials for the ConceptARC paper · ☆92 · Updated 5 months ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" · ☆33 · Updated 3 months ago
- ☆16 · Updated 3 weeks ago
- Repository for the paper "Stream of Search: Learning to Search in Language" · ☆145 · Updated 3 months ago
- Accompanying code for "Boosted Prompt Ensembles for Large Language Models" · ☆30 · Updated 2 years ago
- ☆34 · Updated 5 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆76 · Updated last month