shirley-wu / cot_decoding
☆45 · Updated last year
Alternatives and similar repositories for cot_decoding
Users interested in cot_decoding are comparing it to the repositories listed below.
- Easy to use, High Performant Knowledge Distillation for LLMs · ☆92 · Updated 3 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens · ☆146 · Updated 6 months ago
- ☆51 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… · ☆158 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models · ☆175 · Updated last year
- ☆118 · Updated last year
- Train your own SOTA deductive reasoning model · ☆104 · Updated 5 months ago
- ☆59 · Updated last month
- Lightweight toolkit package to train and fine-tune 1.58-bit Language models · ☆83 · Updated 3 months ago
- 1.58-bit LLaMa model · ☆82 · Updated last year
- autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper SELF-DISCOVER: Large Language Models Self… · ☆60 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) · ☆92 · Updated 7 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… · ☆238 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. · ☆173 · Updated 7 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" · ☆253 · Updated last year
- ☆134 · Updated last week
- An implementation of Self-Extend, to expand the context window via grouped attention · ☆118 · Updated last year
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. · ☆135 · Updated last year
- ☆54 · Updated 9 months ago
- ☆134 · Updated last year
- entropix-style sampling + GUI · ☆27 · Updated 10 months ago
- Pivotal Token Search · ☆121 · Updated last month
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… · ☆169 · Updated last year
- GRadient-INformed MoE · ☆265 · Updated 11 months ago
- ☆80 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. · ☆41 · Updated last year
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache · ☆120 · Updated 2 weeks ago
- This is the official repository for Inheritune. · ☆112 · Updated 6 months ago
- A pipeline parallel training script for LLMs. · ☆154 · Updated 4 months ago
- Function Calling Benchmark & Testing · ☆89 · Updated last year