huggingface / wikirace-llms
☆25 · Updated 9 months ago
Alternatives and similar repositories for wikirace-llms
Users interested in wikirace-llms are comparing it to the libraries listed below.
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆35 · Updated 9 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- ☆29 · Updated 3 months ago
- ☆67 · Updated 8 months ago
- ☆56 · Updated last year
- ☆14 · Updated 9 months ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 8 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated last year
- ☆45 · Updated 2 years ago
- ☆40 · Updated last year
- Verbosity control for AI agents ☆66 · Updated last year
- ☆53 · Updated last year
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆26 · Updated last month
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆61 · Updated 9 months ago
- ☆15 · Updated 9 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆92 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated last year
- An alternative way to calculate self-attention ☆18 · Updated last year
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 4 months ago
- Training Proactive and Personalized LLM Agents ☆100 · Updated 3 weeks ago
- CLaMR: Contextualized Late-Interaction for Multimodal Content Retrieval ☆23 · Updated 7 months ago
- ☆39 · Updated 6 months ago
- PyTorch implementation for MRL ☆21 · Updated last year
- Chat Markup Language conversation library ☆55 · Updated 2 years ago
- Tiny evaluation of leading LLMs on competitive programming problems ☆14 · Updated last year
- PyLate efficient inference engine ☆71 · Updated last month
- Training code for Sparse Autoencoders on Embedding models ☆39 · Updated 11 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆102 · Updated 6 months ago
- ☆39 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year