EduardTalianu / EntropixLab
entropix style sampling + GUI
☆27 · Updated 10 months ago
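The entropix-style sampling the repository describes can be sketched roughly as follows: compute the entropy of the model's next-token distribution and switch between greedy decoding (when the model is confident) and stochastic sampling (when it is uncertain). This is a minimal stdlib-only sketch, not the repository's actual implementation; the function name, threshold value, and plain-list logits are illustrative assumptions.

```python
import math
import random

def entropy_sample(logits, entropy_threshold=1.0, temperature=1.0):
    """Pick a token id from raw logits, branching on distribution entropy.

    Hypothetical sketch: real entropix samplers also use attention-based
    metrics ("varentropy") and more branches than shown here.
    """
    # Temperature-scaled softmax over the logits.
    m = max(logits)
    exps = [math.exp((l - m) / temperature) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Shannon entropy (in nats) of the predicted distribution.
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    if entropy < entropy_threshold:
        # Low entropy: the model is confident, take the argmax token.
        return probs.index(max(probs)), entropy
    # High entropy: the model is uncertain, sample from the distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, entropy
    return len(probs) - 1, entropy
```

A high-entropy step could also trigger parallel chain-of-thought decoding (as in the "Entropy Based Sampling and Parallel CoT Decoding" repository below), branching several continuations instead of sampling once.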
Alternatives and similar repositories for EntropixLab
Users interested in EntropixLab are comparing it to the repositories listed below.
- ☆54 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting: From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 9 months ago
- ☆67 · Updated last year
- GPT-4-level conversational QA, trained in a few hours ☆64 · Updated last year
- ☆116 · Updated 8 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- Simple GRPO scripts and configurations ☆59 · Updated 7 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆92 · Updated 4 months ago
- ☆51 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆92 · Updated 7 months ago
- Modified beam search with periodic restarts ☆12 · Updated last year
- ☆61 · Updated 2 months ago
- ☆13 · Updated 4 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding without retraining ☆42 · Updated this week
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 11 months ago
- ☆31 · Updated last year
- Run Ollama & GGUF models easily with a single command ☆52 · Updated last year
- 5X faster QLoRA fine-tuning with 60% less memory ☆21 · Updated last year
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆10 · Updated 9 months ago
- Nexusflow function-call, tool-use, and agent benchmarks ☆29 · Updated 9 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆52 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆34 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆176 · Updated last year
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆65 · Updated last year
- ☆27 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- ☆49 · Updated 7 months ago