Stanford NLP Python library for Representation Finetuning (ReFT)
☆1,563 · Updated Mar 5, 2026
Alternatives and similar repositories for pyreft
Users interested in pyreft are comparing it to the libraries listed below.
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆868 · Updated Mar 6, 2026
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆175 · Updated Mar 12, 2026
- Tools for merging pretrained large language models ☆6,895 · Updated Mar 15, 2026
- Robust recipes to align language models with human and AI preferences ☆5,535 · Updated Sep 8, 2025
- Training LLMs with QLoRA + FSDP ☆1,540 · Updated Nov 9, 2024
- Official repository for ORPO ☆473 · Updated May 31, 2024
- ☆553 · Updated Jan 2, 2025
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,131 · Updated Mar 16, 2026
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks ☆2,965 · Updated Mar 16, 2026
- Go ahead and axolotl questions ☆11,508 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,681 · Updated Oct 28, 2024
- DSPy: The framework for programming—not prompting—language models ☆33,038 · Updated this week
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,234 · Updated May 8, 2024
- TextGrad: Automatic "Differentiation" via Text, using large language models to backpropagate textual gradients. Published in Nature. ☆3,439 · Updated Jul 25, 2025
- Representation Engineering: A Top-Down Approach to AI Transparency ☆965 · Updated Aug 14, 2024
- Train transformer language models with reinforcement learning ☆17,781 · Updated this week
- PyTorch native post-training library ☆5,707 · Updated this week
- Generative Representational Instruction Tuning ☆688 · Updated Jun 25, 2025
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆361 · Updated Aug 7, 2024
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning ☆20,841 · Updated Mar 18, 2026
- Efficient few-shot learning with Sentence Transformers ☆2,699 · Updated Dec 11, 2025
- A framework for few-shot evaluation of language models ☆11,802 · Updated Mar 18, 2026
- AllenAI's post-training codebase ☆3,643 · Updated this week
- A library for making RepE control vectors ☆699 · Updated Sep 24, 2025
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated Feb 19, 2026
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Updated Jun 10, 2024
- SGLang is a high-performance serving framework for large language models and multimodal models ☆24,829 · Updated this week
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs ☆2,745 · Updated Mar 4, 2026
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,739 · Updated May 21, 2025
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" ☆1,662 · Updated Dec 4, 2025
- Efficient Triton Kernels for LLM Training ☆6,242 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Updated Feb 26, 2026
- Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,889 · Updated May 17, 2025
- A library for mechanistic interpretability of GPT-style language models ☆3,223 · Updated this week
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Updated Jun 25, 2024
- Structured Outputs ☆13,588 · Updated this week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆907 · Updated Sep 30, 2025
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆449 · Updated Oct 16, 2024