shirley-wu / cot_decoding
☆45 · Updated last year
Alternatives and similar repositories for cot_decoding
Users interested in cot_decoding are comparing it to the repositories listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆95 · Updated 5 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆178 · Updated last year
- ☆51 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆158 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated last year
- ☆119 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper SELF-DISCOVER: Large Language Models Self… ☆60 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 9 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆95 · Updated 5 months ago
- ☆62 · Updated 3 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆139 · Updated 2 years ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding ☆172 · Updated 9 months ago
- ☆136 · Updated 2 months ago
- ☆55 · Updated 11 months ago
- Train your own SOTA deductive reasoning model ☆109 · Updated 7 months ago
- Pivotal Token Search ☆130 · Updated 3 months ago
- ☆53 · Updated last year
- Official PyTorch implementation of Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆129 · Updated 2 months ago
- A compact LLM pretrained in 9 days on high-quality data ☆330 · Updated 6 months ago
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining ☆43 · Updated this week
- Work by the Oxen.ai Community reproducing the Self-Rewarding Language Model paper from Meta AI ☆130 · Updated 11 months ago
- ☆80 · Updated 11 months ago
- Function-calling benchmark & testing ☆92 · Updated last year
- GPT-4-level conversational QA trained in a few hours ☆65 · Updated last year
- Official repository for Inheritune ☆115 · Updated 8 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last week
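For orientation, the technique the parent repository is named after, CoT-decoding (Wang & Zhou, 2024), elicits chain-of-thought reasoning without prompting: branch on the top-k candidates for the first token, continue each branch greedily, then rank the completed paths by the model's confidence margin (the gap between the top-1 and top-2 token probabilities along the path). The sketch below is a toy illustration only: `next_token_probs` is a hypothetical stand-in for a real LLM's next-token distribution, and the confidence is averaged over all tokens rather than just the answer span as in the paper.

```python
VOCAB = ["A", "B", "C", "<eos>"]

def next_token_probs(seq):
    # Hypothetical toy "LM": a fixed distribution keyed on the last token,
    # standing in for a real model's softmax over the vocabulary.
    table = {
        None:  [0.5, 0.3, 0.15, 0.05],
        "A":   [0.1, 0.2, 0.1, 0.6],
        "B":   [0.05, 0.1, 0.05, 0.8],
        "C":   [0.3, 0.3, 0.3, 0.1],
    }
    last = seq[-1] if seq else None
    return dict(zip(VOCAB, table[last]))

def greedy_continue(seq, max_len=10):
    # Extend a branch greedily until <eos> or the length cap.
    path = list(seq)
    while len(path) < max_len:
        probs = next_token_probs(path)
        tok = max(probs, key=probs.get)
        path.append(tok)
        if tok == "<eos>":
            break
    return path

def path_confidence(path):
    # Average top-1 vs. top-2 probability margin along the path
    # (the paper restricts this to answer tokens; averaging over
    # the whole path is a simplification for this toy).
    margins, prefix = [], []
    for tok in path:
        p = sorted(next_token_probs(prefix).values(), reverse=True)
        margins.append(p[0] - p[1])
        prefix.append(tok)
    return sum(margins) / len(margins)

def cot_decode(k=3):
    # Branch on the top-k first tokens, decode each branch greedily,
    # and return the path with the highest confidence margin.
    first = sorted(next_token_probs([]).items(), key=lambda kv: -kv[1])[:k]
    paths = [greedy_continue([tok]) for tok, _ in first]
    return max(paths, key=path_confidence)
```

With a real model, the same three steps map onto taking the top-k logits at the first decoding position, running k greedy continuations, and reading the margins off the per-step logits; several of the sampler frameworks listed above expose exactly those hooks.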