UpstageAI / evalverse-IFEval
Submodule of evalverse forked from [google-research/instruction_following_eval](https://github.com/google-research/google-research/tree/master/instruction_following_eval)
☆14 · Updated last year
Alternatives and similar repositories for evalverse-IFEval
Users interested in evalverse-IFEval are comparing it to the libraries listed below.
- ☆55 · Updated 11 months ago
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval ☆58 · Updated last year
- ☆57 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆80 · Updated last year
- Supercharge huggingface transformers with model parallelism ☆77 · Updated 2 months ago
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 7 months ago
- The first dense retrieval model that can be prompted like an LM ☆89 · Updated 5 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated 2 weeks ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Track the progress of LLM context utilisation ☆54 · Updated 6 months ago
- ☆32 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last week
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Spherical Merge for PyTorch/HF-format Language Models with minimal feature loss ☆138 · Updated 2 years ago
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding ☆172 · Updated 9 months ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year
- ☆48 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated last year
- Experiments in training a new and improved T5 ☆75 · Updated last year
- Code for training and evaluating Contextual Document Embedding models ☆197 · Updated 5 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆77 · Updated 7 months ago
- Official repository for Inheritune ☆115 · Updated 8 months ago
- Public Inflection Benchmarks ☆68 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆92 · Updated 11 months ago