Codebase for Hyperdecoders https://arxiv.org/abs/2203.08304
☆14 · Oct 11, 2022 · Updated 3 years ago
Alternatives and similar repositories for hyperdecoders
Users that are interested in hyperdecoders are comparing it to the libraries listed below.
- This repo contains the code for Late Prompt Tuning. ☆12 · Dec 22, 2025 · Updated 3 months ago
- ☆54 · May 8, 2023 · Updated 2 years ago
- Learning adapter weights from task descriptions ☆19 · Nov 12, 2023 · Updated 2 years ago
- Neural-Network-parameters-with-Diffusion ☆39 · May 27, 2024 · Updated last year
- ☆13 · Apr 22, 2024 · Updated last year
- Building modular LMs with parameter-efficient fine-tuning. ☆114 · Apr 3, 2026 · Updated last week
- ☆13 · Apr 30, 2020 · Updated 5 years ago
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Sep 2, 2024 · Updated last year
- Code for the paper "Optimal Off-Policy Evaluation from Multiple Logging Policies" ☆15 · Jul 17, 2021 · Updated 4 years ago
- [ICML 2025] Logits are All We Need to Adapt Closed Models ☆22 · May 2, 2025 · Updated 11 months ago
- ☆158 · Aug 24, 2021 · Updated 4 years ago
- ☆11 · Jun 5, 2024 · Updated last year
- Code for the EMNLP 2022 paper "On the Calibration of Massively Multilingual Language Models" ☆15 · Jun 12, 2023 · Updated 2 years ago
- ☆17 · Jul 11, 2023 · Updated 2 years ago
- ACL 2023: Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning ☆40 · Oct 24, 2022 · Updated 3 years ago
- Showcasing training on various NLP downstream tasks with pre-trained language models using PyTorch Lightning ☆13 · Aug 7, 2022 · Updated 3 years ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆52 · Nov 17, 2024 · Updated last year
- ☆10 · Oct 17, 2022 · Updated 3 years ago
- The repo for In-context Autoencoder ☆168 · May 11, 2024 · Updated last year
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆69 · Sep 18, 2022 · Updated 3 years ago
- ☆42 · Nov 7, 2023 · Updated 2 years ago
- ☆23 · Mar 18, 2024 · Updated 2 years ago
- ☆68 · May 18, 2023 · Updated 2 years ago
- [EMNLP 2024] Multi-modal reasoning problems via code generation ☆28 · Feb 5, 2025 · Updated last year
- ☆15 · Apr 29, 2025 · Updated 11 months ago
- ☆26 · Nov 12, 2025 · Updated 5 months ago
- Fast Polar Decomposition for Muon ☆133 · Updated this week
- Code repository for the NeurIPS 2021 paper "Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic P…" ☆22 · Jul 10, 2024 · Updated last year
- ☆19 · Jan 3, 2025 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Feb 27, 2024 · Updated 2 years ago
- Causality with machine learning, covering topics including causal representation learning and causal reinforcement learning ☆11 · Apr 19, 2021 · Updated 4 years ago
- ☆12 · Feb 17, 2025 · Updated last year
- Efficient Finetuning for OpenAI GPT-OSS ☆23 · Oct 2, 2025 · Updated 6 months ago
- ☆32 · Jul 24, 2023 · Updated 2 years ago
- ☆131 · Mar 31, 2024 · Updated 2 years ago
- LONGAGENT: Scaling Language Models to 128k Context through Multi-Agent Collaboration ☆11 · Mar 11, 2024 · Updated 2 years ago
- Code for "Estimating Multi-cause Treatment Effects via Single-cause Perturbation" (NeurIPS 2021) ☆14 · Jan 5, 2022 · Updated 4 years ago
- Retrieval as Attention ☆81 · Dec 16, 2022 · Updated 3 years ago
- ☆28 · Jul 11, 2024 · Updated last year