adiSimhi / Interpreting-Embedding-Spaces-by-Conceptualization
☆13 · Updated last year
Alternatives and similar repositories for Interpreting-Embedding-Spaces-by-Conceptualization
Users interested in Interpreting-Embedding-Spaces-by-Conceptualization are comparing it to the libraries listed below
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆32 · Updated last year
- Repo for "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks" ACL 2023 Findings☆16Updated 2 years ago
- ☆21Updated 3 months ago
- Few-shot Learning with Auxiliary Data☆31Updated last year
- ☆24Updated 11 months ago
- EasyRLHF aims to provide an easy and minimal interface to train aligned language models, using off-the-shelf solutions and datasets☆9Updated last year
- ☆26Updated 5 months ago
- ☆29Updated 3 years ago
- Adding new tasks to T0 without catastrophic forgetting☆33Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model☆44Updated last year
- PyTorch code for System-1.x: Learning to Balance Fast and Slow Planning with Language Models☆24Updated last year
- ☆14Updated 10 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning☆99Updated 2 years ago
- [ACL 2023] Gradient Ascent Post-training Enhances Language Model Generalization☆29Updated 10 months ago
- ☆44Updated 8 months ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs"☆28Updated 3 years ago
- ☆45Updated 4 months ago
- Code release for "TempLM: Distilling Language Models into Template-Based Generators"☆14Updated 3 years ago
- Embedding Recycling for Language models☆39Updated 2 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training☆17Updated 9 months ago
- ☆46Updated 3 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la…☆49Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?"☆58Updated 2 years ago
- ☆13Updated 8 months ago
- ☆14Updated 3 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated 7 months ago
- Pretraining summarization models using a corpus of nonsense ☆13 · Updated 3 years ago
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- SILO Language Models code repository ☆81 · Updated last year
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆16 · Updated 2 years ago