Dan-wanna-M / formatron
Formatron empowers everyone to control the format of language models' output with minimal overhead.
☆228 · Updated 5 months ago
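Since the listing itself gives no usage context, here is a minimal sketch of what formatron does, based on the FormatterBuilder pattern and transformers integration shown in the project's README; exact function names and signatures may differ between formatron releases, and the model id is only a placeholder.

```python
# Minimal sketch of constrained decoding with formatron (names taken from
# the project's README; they may differ across versions). The model id is
# an arbitrary placeholder -- any Hugging Face causal LM should work.
from transformers import AutoModelForCausalLM, AutoTokenizer

from formatron.formatter import FormatterBuilder
from formatron.integrations.transformers import create_formatter_logits_processor_list

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Describe the output format: literal text plus a regex-constrained field.
f = FormatterBuilder()
f.append_line(f"Name: {f.regex('[A-Z][a-z]+', capture_name='name')}")

# formatron enforces the format through an ordinary logits processor,
# so it plugs into model.generate() with no other changes.
processors = create_formatter_logits_processor_list(tokenizer, f)
prompt = tokenizer("Extract the name: alice went home.\n", return_tensors="pt")
out = model.generate(**prompt, logits_processor=processors, max_new_tokens=16)
print(tokenizer.decode(out[0][prompt["input_ids"].shape[1]:]))
```

Because enforcement happens token by token inside a logits processor rather than by validating (and retrying) finished output, the format is guaranteed by construction and composes with any ordinary generation call.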
Alternatives and similar repositories for formatron
Users interested in formatron are comparing it to the libraries listed below.
- Low-Rank adapter extraction for fine-tuned transformer models ☆178 · Updated last year
- Easy-to-use, high-performance knowledge distillation for LLMs ☆95 · Updated 6 months ago
- A compact LLM pretrained in 9 days on high-quality data ☆332 · Updated 7 months ago
- ☆138 · Updated 2 months ago
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ☆242 · Updated last year
- ☆163 · Updated 3 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- A pipeline for LLM knowledge distillation ☆109 · Updated 7 months ago
- ☆119 · Updated last year
- A simple tool that lets you explore different possible paths that an LLM might sample ☆190 · Updated 6 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last year
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines (see the sketch after this list) ☆145 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Self-hosted LLM chatbot arena, with yourself as the only judge ☆41 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆157 · Updated last year
- Comparison of Language Model Inference Engines ☆235 · Updated 10 months ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 2 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆253 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated 3 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆139 · Updated last year
- ☆89 · Updated 9 months ago
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆98 · Updated 5 months ago
- 1.58-bit LLaMA model ☆83 · Updated last year
- Synthetic Data for LLM Fine-Tuning ☆119 · Updated last year
- ☆136 · Updated last year
- ☆51 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆259 · Updated this week
- Merge Transformers language models using gradient parameters ☆207 · Updated last year
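As a companion to the hierarchical-state-machine entry above, here is a deliberately simplified, self-contained sketch of the mechanism such constrained-decoding libraries (formatron included) share. It is toy code, not taken from any repository on this list, and it assumes a batch size of one and a hand-built automaton.

```python
# Toy sketch of state-machine-constrained decoding: a logits processor
# masks, at every step, the token ids the current automaton state cannot
# accept, so sampling can never produce out-of-format text.
import torch
from transformers import LogitsProcessor, LogitsProcessorList


class DFAConstraint(LogitsProcessor):
    """Masks every token id the current DFA state cannot accept (toy version)."""

    def __init__(self, transitions: dict[tuple[int, int], int], start: int, prompt_len: int):
        # transitions maps (state, token_id) -> next_state; absent pairs are illegal.
        self.transitions = transitions
        self.start = start
        self.prompt_len = prompt_len  # number of prompt tokens before generation

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Replay the generated suffix to recover the current state (batch of 1).
        state = self.start
        for tok in input_ids[0, self.prompt_len:].tolist():
            state = self.transitions[(state, tok)]
        allowed = [tok for (s, tok) in self.transitions if s == state]
        if not allowed:
            return scores  # terminal state: leave logits alone so EOS can end generation
        mask = torch.full_like(scores, float("-inf"))
        mask[:, allowed] = 0.0
        return scores + mask


# Usage: force generation to follow token ids 5 -> 7 -> 11, then stop.
# constraint = DFAConstraint({(0, 5): 1, (1, 7): 2, (2, 11): 3}, start=0,
#                            prompt_len=prompt["input_ids"].shape[1])
# model.generate(**prompt, logits_processor=LogitsProcessorList([constraint]))
```

The real libraries do the hard part ahead of time: compiling a regex, JSON schema, or grammar into this kind of automaton over the full tokenizer vocabulary and precomputing the per-state token masks, which is what keeps the per-step overhead low.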