Xtra-Computing / FeT
Federated Transformer (NeurIPS 24): a framework to enhance the performance of multi-party Vertical Federated Learning involving fuzzy identifiers
☆38 · Updated 7 months ago
Alternatives and similar repositories for FeT
Users interested in FeT are comparing it to the libraries listed below.
- ☆27 · Updated last week
- Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning ☆20 · Updated 2 months ago
- Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning ☆50 · Updated 2 months ago
- [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆79 · Updated 6 months ago
- Official Code for paper "Towards Efficient and Effective Unlearning of Large Language Models for Recommendation" (Frontiers of Computer S… ☆37 · Updated last year
- ☆49 · Updated 8 months ago
- The code of RouterDC ☆65 · Updated 3 months ago
- Implementation for PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs ☆23 · Updated last year
- ☆38 · Updated 10 months ago
- Repo for the paper: PerAda: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees (CVPR 2024) ☆19 · Updated 11 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year
- The official implementation for Collaborative Word-based Pre-trained Item Representation for Transferable Recommendation. ☆24 · Updated last year
- This is the official implementation for the paper: Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models ☆15 · Updated 11 months ago
- ☆128 · Updated 3 months ago
- ☆18 · Updated last year
- ☆19 · Updated 6 months ago
- [CCS 2024] "BadMerging: Backdoor Attacks Against Model Merging": official code implementation. ☆28 · Updated 11 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆97 · Updated last year
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆71 · Updated last month
- This is the implementation for the paper "LARGE LANGUAGE MODEL CASCADES WITH MIXTURE OF THOUGHT REPRESENTATIONS FOR COST-EFFICIENT REA… ☆24 · Updated last year
- This repository contains the code for the paper: SirLLM: Streaming Infinite Retentive LLM ☆59 · Updated last year
- JudgeLRM: Large Reasoning Models as a Judge ☆32 · Updated 3 months ago
- Code for this paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork" ☆33 · Updated last year
- Code associated with the EMNLP 2024 Main paper: "Image, tell me your story!" Predicting the original meta-context of visual misinformatio… ☆42 · Updated 3 months ago
- Implementation of FedBary ☆14 · Updated 4 months ago
- RGL - RAG-on-Graphs Library ☆110 · Updated 3 months ago
- The official implementation of Cross-Task Experience Sharing (COPS) ☆25 · Updated 9 months ago
- ☆87 · Updated 7 months ago
- A curated list of Model Merging methods. ☆92 · Updated 10 months ago
- Open-LLM-Leaderboard: Open-Style Question Evaluation. Paper at https://arxiv.org/abs/2406.07545 ☆46 · Updated last year