allenai / aries
Aligned, Review-Informed Edits of Scientific Papers
☆54 · Updated 2 years ago
Alternatives and similar repositories for aries
Users interested in aries are comparing it to the repositories listed below.
- [ACL 2024] <Large Language Models for Automated Open-domain Scientific Hypotheses Discovery>. It has also received the best poster award … ☆42 · Updated 11 months ago
- ☆57 · Updated 10 months ago
- List of papers on Self-Correction of LLMs. ☆78 · Updated 9 months ago
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding. ☆42 · Updated 6 months ago
- Annotation meets Large Language Models (ChatGPT, GPT-3, and the like). ☆58 · Updated 2 years ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆98 · Updated 10 months ago
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- 🌾 Universal, customizable, and deployable fine-grained evaluation for text generation. ☆24 · Updated last year
- Pretraining Efficiently on S2ORC! ☆170 · Updated 11 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆133 · Updated last year
- Official implementation of the ACL 2024 paper: Scientific Inspiration Machines Optimized for Novelty ☆85 · Updated last year
- Resources related to the EACL 2023 paper "SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domain… ☆52 · Updated 2 years ago
- Discovering Data-driven Hypotheses in the Wild ☆113 · Updated 4 months ago
- ToMATO: Verbalizing the Mental States of Role-Playing LLMs for Benchmarking Theory of Mind (AAAI 2025) ☆16 · Updated 5 months ago
- Code for the ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- ☆74 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆71 · Updated 2 years ago
- This project develops compact transformer models tailored for clinical text analysis, balancing efficiency and performance for healthcare… ☆18 · Updated last year
- ☆50 · Updated last year
- ☆102 · Updated last year
- The codebase for our ACL 2023 paper: Did You Read the Instructions? Rethinking the Effectiveness of Task Definitions in Instruction Learni… ☆30 · Updated 2 years ago
- Supports continual pre-training and instruction tuning; forked from llama-recipes ☆33 · Updated last year
- Metacognitive Prompting Improves Understanding in Large Language Models (NAACL 2024) ☆38 · Updated last year
- Source code and data for The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code (Findings of ACL 2023… ☆29 · Updated 2 years ago
- ☆49 · Updated 2 years ago
- GHOSTS dataset ☆39 · Updated 2 years ago
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea… ☆75 · Updated last year
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆94 · Updated last year
- A dataset based on all arXiv publications, pre-processed for NLP, including structured full text and a citation network ☆295 · Updated last year
- Code/data for MARG (multi-agent review generation) ☆51 · Updated last week