neelsjain / BYOD
The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models"
☆108 · Updated last year
Alternatives and similar repositories for BYOD:
Users interested in BYOD are comparing it to the repositories listed below.
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆82 · Updated 3 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆112 · Updated 2 months ago
- SILO Language Models code repository ☆81 · Updated 11 months ago
- ☆117 · Updated 4 months ago
- ☆44 · Updated 3 months ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆69 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆104 · Updated 8 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆102 · Updated 4 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆83 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆125 · Updated 11 months ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆93 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆102 · Updated last year
- PyTorch library for Active Fine-Tuning ☆56 · Updated this week
- A repository to perform self-instruct with a model on HF Hub ☆32 · Updated last year
- ☆72 · Updated 9 months ago
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆54 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆154 · Updated 11 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆112 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆113 · Updated 5 months ago
- ☆67 · Updated 6 months ago
- ☆26 · Updated 7 months ago
- ☆166 · Updated last year
- ☆66 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆108 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆64 · Updated 8 months ago
- ☆44 · Updated 5 months ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆162 · Updated last week
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆84 · Updated last week
- ☆33 · Updated 3 months ago
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆157 · Updated 9 months ago