strangeloopcanon / LLMRankLinks
PageRank for LLMs
☆51 · Updated 4 months ago
Alternatives and similar repositories for LLMRankLinks
Users interested in LLMRankLinks are comparing it to the repositories listed below.
- An introduction to LLM Sampling · ☆79 · Updated last year
- ☆40 · Updated last year
- look how they massacred my boy · ☆63 · Updated last year
- smolLM with Entropix sampler on PyTorch · ☆149 · Updated last year
- Synthetic data derived via templating, few-shot prompting, transformations of public-domain corpora, and Monte Carlo tree search · ☆32 · Updated 3 months ago
- Simple Transformer in Jax · ☆140 · Updated last year
- Storing long contexts in tiny caches with self-study · ☆229 · Updated last month
- Lossily compress representation vectors using product quantization · ☆59 · Updated 2 months ago
- Code for training and evaluating Contextual Document Embedding models · ☆202 · Updated 7 months ago
- Tools to make language models a bit easier to use · ☆63 · Updated last week
- ☆160 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆59 · Updated 2 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens · ☆150 · Updated last week
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… · ☆183 · Updated 2 months ago
- ☆29 · Updated 2 months ago
- Train your own SOTA deductive reasoning model · ☆107 · Updated 10 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon · ☆85 · Updated 4 months ago
- ☆45 · Updated 2 years ago
- Approximating the joint distribution of language models via MCTS · ☆22 · Updated last year
- A collection of benchmark logs for different LLMs · ☆119 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆109 · Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs · ☆99 · Updated 7 months ago
- NanoGPT speedrunning for the poor T4 enjoyers · ☆73 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models · ☆100 · Updated 5 months ago
- ☆68 · Updated 7 months ago
- ☆53 · Updated 11 months ago
- Training code for Sparse Autoencoders on embedding models · ☆39 · Updated 10 months ago
- Chat Markup Language conversation library · ☆55 · Updated 2 years ago
- MoE training for Me and You and maybe other people · ☆315 · Updated last week
- Simple GRPO scripts and configurations · ☆59 · Updated 11 months ago