mozilla-ai / federated-finetuning
Blueprint for federated finetuning, enabling multiple data owners to collaboratively fine-tune models without sharing raw data. Developed in collaboration with Flower.
☆34 · Updated last week
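Since the Blueprint is built with Flower, the sketch below illustrates the client/server pattern it relies on: each data owner trains locally and only sends model updates to an aggregation server. This is an illustrative assumption, not the Blueprint's actual code; the `FineTuneClient` class, the stand-in numpy "model", and the server address are hypothetical, and Flower's entry points vary by version (`start_numpy_client` is deprecated in newer releases).

```python
# Minimal sketch of the Flower client pattern (hedged: placeholder model and data).
# The tiny numpy "model" and training step stand in for an actual LLM fine-tuning
# loop; only model updates ever leave the data owner's machine.
import numpy as np
import flwr as fl


class FineTuneClient(fl.client.NumPyClient):
    """One client per data owner; the raw data stays in this process."""

    def __init__(self, local_data: np.ndarray):
        self.local_data = local_data      # private local dataset, never transmitted
        self.weights = np.zeros(4)        # stand-in for (adapter) weights

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Receive global weights, run a placeholder "local training" step,
        # and return the updated weights plus the number of local examples.
        self.weights = parameters[0] + 0.1 * self.local_data.mean()
        return [self.weights], len(self.local_data), {}

    def evaluate(self, parameters, config):
        loss = float(np.abs(parameters[0] - self.local_data.mean()).mean())
        return loss, len(self.local_data), {}


if __name__ == "__main__":
    # Hypothetical server address; a matching fl.server.start_server(...) with a
    # FedAvg strategy would aggregate the clients' updates each round.
    fl.client.start_numpy_client(
        server_address="127.0.0.1:8080",
        client=FineTuneClient(local_data=np.random.rand(100)),
    )
```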
Alternatives and similar repositories for federated-finetuning:
Users interested in federated-finetuning are comparing it to the repositories listed below.
- ☆53 · Updated last week
- Combining Base and Instruction-Tuned Language Models for Better Synthetic Data Generation ☆29 · Updated 2 months ago
- ☆90 · Updated last month
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆86 · Updated 2 weeks ago
- ☆24 · Updated last month
- Code for the paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆25 · Updated 3 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 3 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆55 · Updated 7 months ago
- Code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆117 · Updated 10 months ago
- Evaluating LLMs with fewer examples ☆151 · Updated last year
- ☆48 · Updated 5 months ago
- ☆122 · Updated last month
- Codebase accompanying the Summary of a Haystack paper ☆77 · Updated 7 months ago
- Lean implementation of various multi-agent LLM methods, including Iteration of Thought (IoT) ☆109 · Updated 2 months ago
- Open Implementations of LLM Analyses ☆102 · Updated 6 months ago
- ☆47 · Updated 7 months ago
- All code examples in the blog posts ☆24 · Updated 3 months ago
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] ☆67 · Updated 3 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 9 months ago
- Accompanying material for the sleep-time compute paper ☆56 · Updated this week
- ☆128 · Updated 3 weeks ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆104 · Updated 6 months ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings ☆124 · Updated this week
- Federated Transformer (NeurIPS 24): a framework to enhance the performance of multi-party Vertical Federated Learning involving fuzzy ide… ☆37 · Updated 4 months ago
- Repo hosting code and materials for speeding up LLM inference using token merging ☆36 · Updated 11 months ago
- Simple examples using Argilla tools to build AI ☆52 · Updated 5 months ago
- Knowledge Unlearning for Large Language Models ☆25 · Updated 3 weeks ago
- Code and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆48 · Updated last month
- ☆38 · Updated last month
- Source code of "How to Correctly do Semantic Backpropagation on Language-based Agentic Systems" 🤖 ☆67 · Updated 4 months ago