EdwardDali / EntropixLab
Entropix-style sampling + GUI
☆25 · Updated 3 weeks ago
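For context, a minimal sketch of the entropy-gated idea behind entropix-style samplers: measure the entropy of the next-token distribution, pick greedily when the model is confident, and sample at a higher temperature when it is uncertain. The thresholds (`low`, `high`) and `hot_temp` here are illustrative values, not EntropixLab's actual parameters, and real implementations also consider varentropy and attention statistics.

```python
import numpy as np

def softmax_entropy(logits):
    """Return (Shannon entropy in nats, softmax probabilities)."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(-np.sum(p * np.log(p + 1e-12))), p

def entropix_style_sample(logits, low=0.5, high=2.5, hot_temp=1.5, rng=None):
    """Greedy when confident (low entropy); hotter sampling when uncertain."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H, p = softmax_entropy(logits)
    if H < low:
        return int(np.argmax(p))            # confident: take the top token
    temp = hot_temp if H > high else 1.0    # very uncertain: explore more
    _, p_t = softmax_entropy(logits / temp)
    return int(rng.choice(len(p_t), p=p_t))  # sample the tempered distribution
```

With a sharply peaked distribution this reduces to greedy decoding; with a flat one it falls back to temperature sampling.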
Related projects
Alternatives and complementary repositories for EntropixLab
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 5 months ago
- Simple examples using Argilla tools to build AI ☆40 · Updated this week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆113 · Updated 3 weeks ago
- 5X faster, 60% less memory QLoRA fine-tuning ☆21 · Updated 5 months ago
- Model REVOLVER, a human-in-the-loop model-mixing system ☆33 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆38 · Updated 5 months ago
- All the world is a play, we are but actors in it. ☆47 · Updated 4 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated 10 months ago
- Modified beam search with periodic restart ☆12 · Updated 2 months ago
- Never forget anything again! Combines AI and intelligent tooling into a local knowledge base that tracks, catalogues, annotates, and plans for you… ☆32 · Updated 6 months ago
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated 8 months ago
- Official homepage for "Self-Harmonized Chain of Thought" ☆83 · Updated 2 months ago
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆11 · Updated last week
- GPT-4-level conversational QA trained in a few hours ☆55 · Updated 3 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆40 · Updated 8 months ago
- Zeus LLM Trainer, a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆69 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated 10 months ago
- GPT-2 small trained on phi-like data ☆65 · Updated 9 months ago
- A guidance compatibility layer for llama-cpp-python ☆34 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated 10 months ago