simran-arora / focus
This repo contains code for the paper: "Can Foundation Models Help Us Achieve Perfect Secrecy?"
☆24 · Updated 2 years ago
Alternatives and similar repositories for focus
Users interested in focus are comparing it to the libraries listed below.
- Google Research ☆46 · Updated 2 years ago
- ☆26 · Updated last year
- ☆29 · Updated 2 years ago
- Minimum Description Length probing for neural network representations ☆18 · Updated 5 months ago
- Official code for the paper: "Metadata Archaeology" ☆19 · Updated 2 years ago
- The repository contains code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated 2 years ago
- Code for Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints ☆15 · Updated 8 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 5 months ago
- Code repository for the public reproduction of the language modelling experiments on "MatFormer: Nested Transformer for Elastic Inference…" ☆24 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- 📰 Computing the information content of trained neural networks ☆21 · Updated 3 years ago
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated last year
- Code for "Merging Text Transformers from Different Initializations" ☆20 · Updated 5 months ago
- ☆26 · Updated last year
- ☆44 · Updated 7 months ago
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- ☆10 · Updated last year
- Official PyTorch implementation of "Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data" (NeurIPS'23) ☆15 · Updated last year
- Recycling diverse models ☆45 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- Lottery Ticket Adaptation ☆39 · Updated 7 months ago
- This is the official project for our paper: "Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers" ☆30 · Updated last year
- Ranking of fine-tuned HF models as base models. ☆35 · Updated 2 months ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆47 · Updated 2 years ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆19 · Updated last month
- ☆18 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆39 · Updated 3 years ago