kevinwu23 / StanfordFineTuneBench
☆31 · Updated last year
Alternatives and similar repositories for StanfordFineTuneBench
Users interested in StanfordFineTuneBench are comparing it to the libraries listed below.
- ☆53 · Updated 10 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 11 months ago
- Python library to use Pleias-RAG models ☆67 · Updated 7 months ago
- Datamodels for Hugging Face tokenizers ☆86 · Updated last week
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆23 · Updated 2 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 11 months ago
- ☆55 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 10 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆277 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- ☆87 · Updated this week
- ☆138 · Updated 3 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Updated 2 months ago
- Code for ExploreToM ☆88 · Updated 5 months ago
- LLM training in simple, raw C/CUDA ☆15 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆60 · Updated 7 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆70 · Updated last year
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. ☆173 · Updated 7 months ago
- ☆159 · Updated last year
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆106 · Updated 2 months ago
- Seamless interface for using PyTorch distributed with Jupyter notebooks ☆57 · Updated 2 months ago
- Storing long contexts in tiny caches with self-study ☆218 · Updated this week
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆79 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated last month
- Source code for the collaborative reasoner research project at Meta FAIR. ☆110 · Updated 7 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 9 months ago
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆88 · Updated 3 weeks ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated 2 months ago