vgel / biblically-accurate-sampler
llm sampler that only allows words that are in the bible
☆26 · Updated 5 months ago
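The listing itself contains no code, but the core idea of such a sampler can be sketched briefly. The snippet below is a hypothetical illustration, not vgel's actual implementation: it assumes a Hugging Face `transformers`-style `LogitsProcessor` that masks out every token whose decoded text is not in an allowed word list (e.g. the vocabulary of a Bible text). The class name, the `allowed_words` argument, and the whole-word matching are illustrative assumptions; handling words that span multiple tokens is deliberately left out.

```python
# Hypothetical sketch (not the repo's implementation): restrict sampling to an
# allowed word list by masking logits of all other tokens.
import torch
from transformers import LogitsProcessor


class AllowedWordsLogitsProcessor(LogitsProcessor):
    """Sets the logit of every token that does not decode to an allowed word to -inf."""

    def __init__(self, tokenizer, allowed_words: set[str]):
        allowed_ids = []
        for token_id in range(len(tokenizer)):
            # Strip leading whitespace so the token " and" matches the word "and".
            piece = tokenizer.decode([token_id]).strip().lower()
            if piece in allowed_words:
                allowed_ids.append(token_id)
        # Additive mask: 0 for allowed tokens, -inf for everything else.
        self.mask = torch.full((len(tokenizer),), float("-inf"))
        self.mask[allowed_ids] = 0.0

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Broadcasting leaves allowed logits unchanged and zeroes out the rest.
        return scores + self.mask.to(scores.device)
```

In use, `allowed_words` would be built from the source text (for example, `set(re.findall(r"[a-z]+", bible_text.lower()))`) and the processor passed to `model.generate` inside a `LogitsProcessorList`.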
Alternatives and similar repositories for biblically-accurate-sampler:
Users interested in biblically-accurate-sampler are comparing it to the libraries listed below.
- Official repo for Learning to Reason for Long-Form Story Generation ☆44 · Updated 2 weeks ago
- Synthetic data derived by templating, few-shot prompting, transformations on public domain corpora, and Monte Carlo tree search. ☆32 · Updated 2 months ago
- ☆49 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 2 months ago
- Lego for GRPO ☆27 · Updated last month
- look how they massacred my boy ☆63 · Updated 6 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends its context limit ☆63 · Updated last year
- ☆22 · Updated last year
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆44 · Updated last year
- ☆48 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- entropix-style sampling + GUI ☆26 · Updated 6 months ago
- Lightweight package that tracks and summarizes code changes using LLMs (Large Language Models) ☆34 · Updated 2 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- ☆20 · Updated 6 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- ☆38 · Updated 9 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated 3 months ago
- Tokun to can tokens ☆17 · Updated this week
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆171 · Updated this week
- an open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated 2 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 10 months ago
- An introduction to LLM Sampling ☆77 · Updated 4 months ago
- smolLM with Entropix sampler on PyTorch ☆151 · Updated 6 months ago
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 7 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy** ☆31 · Updated this week
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 8 months ago
- Using modal.com to process FineWeb-edu data ☆20 · Updated last month
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago