krafton-ai / mini-batch-cl
☆11 · Updated 2 years ago
Alternatives and similar repositories for mini-batch-cl
Users interested in mini-batch-cl are comparing it to the repositories listed below
- ☆17 · Updated 2 years ago
- Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network (AAAI 2023) · ☆21 · Updated 2 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) · ☆32 · Updated 2 years ago
- ☆34 · Updated 10 months ago
- ☆79 · Updated 3 years ago
- Model Stock: All we need is just a few fine-tuned models · ☆127 · Updated 4 months ago
- Learning Features with Parameter-Free Layers, ICLR 2022 · ☆84 · Updated 2 years ago
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … · ☆16 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling · ☆65 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆69 · Updated last year
- A curated list of awesome adversarial reprogramming and input prompting methods for neural networks since 2022 · ☆37 · Updated 2 years ago
- A PyTorch Implementation of the Luna: Linear Unified Nested Attention · ☆41 · Updated 4 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers · ☆83 · Updated last year
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" · ☆12 · Updated last year
- Implementation of RQ Transformer, proposed in the paper "Autoregressive Image Generation using Residual Quantization" · ☆122 · Updated 3 years ago
- [ICLR 2023] Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classifi… · ☆21 · Updated 4 months ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) · ☆63 · Updated 3 years ago
- ☆20 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- ☆130 · Updated 3 years ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] · ☆60 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) · ☆81 · Updated 2 years ago
- ☆34 · Updated 6 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆31 · Updated last year
- [NeurIPS 2025] MergeBench: A Benchmark for Merging Domain-Specialized LLMs · ☆37 · Updated this week
- PyTorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" · ☆33 · Updated 3 years ago
- TF/Keras code for DiffStride, a pooling layer with learnable strides · ☆124 · Updated 3 years ago
- Code for paper: "What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selecti… · ☆24 · Updated last year
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts" published at ICLR 2023 · ☆29 · Updated 2 years ago
- ☆30 · Updated 2 years ago