VikParuchuri / classified
Score LLM pretraining data with classifiers
☆54 · Updated last year
Alternatives and similar repositories for classified
Users interested in classified are comparing it to the libraries listed below.
- ☆88 · Updated last year
- ☆55 · Updated 11 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆79 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆109 · Updated 10 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- ☆50 · Updated 8 months ago
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- An implementation of Self-Extend to expand the context window via grouped attention ☆118 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆80 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆50 · Updated last year
- The first dense retrieval model that can be prompted like an LM ☆89 · Updated 5 months ago
- Track the progress of LLM context utilisation ☆54 · Updated 6 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆105 · Updated last month
- Train your own SOTA deductive reasoning model ☆109 · Updated 7 months ago
- Simple examples using Argilla tools to build AI ☆56 · Updated 11 months ago
- Testing speed and accuracy of RAG with and without a Cross Encoder Reranker. ☆50 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 9 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents by using an Elo ranker ☆120 · Updated this week
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year