ml-jku / SDLG
SDLG is an efficient method to accurately estimate aleatoric semantic uncertainty in LLMs
☆27 · Updated last year
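
For orientation, here is a minimal, generic sketch of sampling-based semantic uncertainty estimation (sample several answers, cluster them by meaning, take the entropy over the clusters). This is only an illustration of the general idea, not SDLG's actual algorithm; `generate_answers` and `are_equivalent` are hypothetical placeholders for an LLM sampler and a semantic-equivalence check (e.g. bidirectional NLI).

```python
import math
from typing import Callable, List

def semantic_uncertainty(
    prompt: str,
    generate_answers: Callable[[str, int], List[str]],  # hypothetical LLM sampler
    are_equivalent: Callable[[str, str], bool],          # hypothetical meaning check
    n_samples: int = 10,
) -> float:
    """Entropy over meaning clusters of sampled answers (higher = more uncertain)."""
    answers = generate_answers(prompt, n_samples)

    # Greedy clustering: add each answer to the first cluster whose
    # representative it is judged semantically equivalent to.
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if are_equivalent(cluster[0], ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Discrete entropy over the empirical cluster distribution.
    total = len(answers)
    return -sum((len(c) / total) * math.log(len(c) / total) for c in clusters)
```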
Alternatives and similar repositories for SDLG
Users interested in SDLG are comparing it to the libraries listed below.
- ☆133 · Updated last month
- Simple and scalable tools for data-driven pretraining data selection. ☆27 · Updated 3 months ago
- PyTorch library for Active Fine-Tuning ☆91 · Updated last week
- Official implementation of "GPT or BERT: why not both?" ☆58 · Updated last month
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule ☆63 · Updated 2 years ago
- Interpreting the latent space representations of attention head outputs for LLMs ☆34 · Updated last year
- Efficient Transformers with Dynamic Token Pooling ☆63 · Updated 2 years ago
- Code for Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks. ☆130 · Updated 10 months ago
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs) ☆47 · Updated last month
- ☆36 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- A Toolkit for Distributional Control of Generative Models ☆73 · Updated last month
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 6 months ago
- ☆69 · Updated last year
- Modalities, a PyTorch-native framework for distributed and reproducible foundation model training. ☆84 · Updated last week
- ☆43 · Updated 3 years ago
- ☆82 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆42 · Updated 11 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆59 · Updated last year
- PyTorch implementation for "Long Horizon Temperature Scaling", ICML 2023 ☆20 · Updated 2 years ago
- ☆45 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆96 · Updated 4 years ago
- LTG-Bert ☆33 · Updated last year
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- nanoGPT-like codebase for LLM training ☆107 · Updated 4 months ago
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" ☆64 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- A library for calibrating classifiers and computing calibration metrics ☆14 · Updated 2 years ago