shirley-wu / cot_decoding
☆45 · Updated last year
Alternatives and similar repositories for cot_decoding
Users interested in cot_decoding are comparing it to the repositories listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 7 months ago
- Easy to use, High Performant Knowledge Distillation for LLMs ☆93 · Updated 5 months ago
- ☆119 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 7 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆158 · Updated last year
- ☆51 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper SELF-DISCOVER: Large Language Models Self… ☆60 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 8 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 11 months ago
- Lightweight toolkit package to train and fine-tune 1.58bit Language models ☆90 · Updated 4 months ago
- entropix style sampling + GUI ☆27 · Updated 11 months ago
- Function Calling Benchmark & Testing ☆90 · Updated last year
- ☆118 · Updated 5 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- Simple examples using Argilla tools to build AI ☆56 · Updated 10 months ago
- Pivotal Token Search ☆126 · Updated 2 months ago
- ☆54 · Updated 11 months ago
- This is work done by the Oxen.ai Community, trying to reproduce the Self-Rewarding Language Model paper from MetaAI. ☆129 · Updated 10 months ago
- ☆73 · Updated 2 years ago
- Who needs o1 anyways. Add CoT to any OpenAI compatible endpoint. ☆44 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆115 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- ☆155 · Updated 5 months ago
- 1.58-bit LLaMa model ☆82 · Updated last year
- ☆62 · Updated 3 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆137 · Updated 2 years ago
- Merge Transformers language models by use of gradient parameters. ☆208 · Updated last year
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆90 · Updated last month
- ☆23 · Updated 8 months ago