vgel / biblically-accurate-sampler
llm sampler that only allows words that are in the bible
☆26 · Updated 3 months ago
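For context, a vocabulary-whitelist sampler like this one typically works by masking the logits of every token that is not in the allowed word list before sampling. The sketch below is a rough illustration using Hugging Face transformers' `LogitsProcessor` hook; it is not vgel's implementation, the model name and word list are placeholders, and masking individual tokens by their decoded surface form is only a crude approximation of restricting output to Bible vocabulary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList

class WhitelistLogitsProcessor(LogitsProcessor):
    """Masks out every token whose decoded text is not in an allowed word set."""

    def __init__(self, tokenizer, allowed_words):
        # Crude word-level approximation: a token survives only if its decoded,
        # stripped, lowercased surface form appears in the whitelist. Subword
        # pieces that are not standalone words get masked, which a real sampler
        # would need to handle more carefully.
        allowed = {w.lower() for w in allowed_words}
        self.allowed_ids = [
            tid for tid in range(tokenizer.vocab_size)
            if tokenizer.decode([tid]).strip().lower() in allowed
        ]

    def __call__(self, input_ids, scores):
        # Add -inf to the logits of every disallowed token so it can never be sampled.
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed_ids] = 0.0
        return scores + mask

# Hypothetical usage; "gpt2" and the tiny word list are placeholders, not the repo's defaults.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
bible_words = {"and", "god", "said", "let", "there", "be", "light"}  # load a real list from a KJV text
processors = LogitsProcessorList([WhitelistLogitsProcessor(tokenizer, bible_words)])
prompt = tokenizer("And God said", return_tensors="pt")
out = model.generate(**prompt, logits_processor=processors, do_sample=True, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```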
Alternatives and similar repositories for biblically-accurate-sampler:
Users interested in biblically-accurate-sampler are comparing it to the libraries listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆138 · Updated last month
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Synthetic data derived via templating, few-shot prompting, transformations of public-domain corpora, and Monte Carlo tree search ☆31 · Updated 3 weeks ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 6 months ago
- look how they massacred my boy ☆63 · Updated 5 months ago
- SmolLM with the Entropix sampler in PyTorch ☆150 · Updated 4 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆91 · Updated 2 weeks ago
- A Collection of Pydantic Models to Abstract IRL ☆18 · Updated this week
- Lego for GRPO ☆25 · Updated last week
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 11 months ago
- Using modal.com to process FineWeb-edu data ☆20 · Updated 2 weeks ago
- MLX port of xjdr's Entropix sampler (mimics the JAX implementation) ☆63 · Updated 4 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆168 · Updated this week
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆41 · Updated last year
- Lightweight package that tracks and summarizes code changes using LLMs (Large Language Models) ☆32 · Updated 3 weeks ago
- PageRank for LLMs ☆39 · Updated 3 weeks ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Training code for Sparse Autoencoders on Embedding models ☆36 · Updated 3 weeks ago
- smol models are fun too ☆90 · Updated 4 months ago
- Comprehensive analysis of the performance differences among QLoRA, LoRA, and full finetunes ☆82 · Updated last year
- An introduction to LLM Sampling ☆77 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated last month