Spico197 / paper-hero
A toolkit to help search for papers from the ACL Anthology, arXiv and dblp.
★43 · Updated last year
Related projects
Alternatives and complementary repositories for paper-hero
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… · ★31 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ★41 · Updated 10 months ago
- ★47 · Updated 9 months ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face · ★32 · Updated last year
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models · ★108 · Updated 2 years ago
- [EACL 2023] CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification · ★38 · Updated last year
- HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023] · ★14 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ★44 · Updated 10 months ago
- On Transferability of Prompt Tuning for Natural Language Processing · ★97 · Updated 6 months ago
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch · ★29 · Updated last week
- Code for the paper "Data-Efficient FineTuning" · ★29 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting · ★30 · Updated 2 years ago
- Resolving Knowledge Conflicts in Large Language Models (COLM 2024) · ★15 · Updated last month
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling · ★36 · Updated 8 months ago
- Code and dataset for the EMNLP paper "Instruct and Extract: Instruction Tuning for On-Demand Information Extraction" · ★50 · Updated 10 months ago
- Retrieval as Attention · ★83 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" · ★30 · Updated 6 months ago
- ★33 · Updated last year
- ★15 · Updated 4 months ago
- ★25 · Updated 11 months ago
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022 Findings; stay tuned, more will be updated) · ★22 · Updated 2 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling · ★35 · Updated 11 months ago
- ★18 · Updated 3 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages · ★44 · Updated last year
- Can LLMs generate code-mixed sentences through zero-shot prompting? · ★11 · Updated last year
- Repository for Skill Set Optimization · ★12 · Updated 3 months ago
- Conic10K: a large-scale dataset for closed-vocabulary math problem understanding (EMNLP 2023 Findings) · ★23 · Updated 11 months ago
- Transformers at any scale · ★41 · Updated 10 months ago
- ★31 · Updated 7 months ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering · ★16 · Updated last year