JShollaj / awesome-llm-interpretability
A curated list of Large Language Model (LLM) Interpretability resources.
☆1,269 · Updated 3 months ago
Alternatives and similar repositories for awesome-llm-interpretability:
Users interested in awesome-llm-interpretability are comparing it to the libraries listed below.
- Representation Engineering: A Top-Down Approach to AI Transparency ☆809 · Updated 7 months ago
- List of papers on hallucination detection in LLMs. ☆807 · Updated 2 weeks ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,135 · Updated 10 months ago
- A reading list on LLM-based Synthetic Data Generation 🔥 ☆1,211 · Updated last month
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,445 · Updated last month
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆721 · Updated last month
- LLM Transparency Tool (LLM-TT), an open-source interactive toolkit for analyzing internal workings of Transformer-based language models. … ☆809 · Updated 3 months ago
- The papers are organized according to our survey "Evaluating Large Language Models: A Comprehensive Survey". ☆740 · Updated 10 months ago
- Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models ☆733 · Updated 3 weeks ago
- A unified evaluation framework for large language models ☆2,569 · Updated last month
- Implementation of the training framework proposed in Self-Rewarding Language Models, from MetaAI ☆1,374 · Updated 11 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆327 · Updated 4 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,312 · Updated this week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); a minimal DPO sketch appears after this list. ☆817 · Updated 2 weeks ago
- Training LLMs with QLoRA + FSDP ☆1,464 · Updated 4 months ago
- Automatically evaluate your LLMs in Google Colab ☆603 · Updated 10 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,640 · Updated 8 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,695 · Updated 2 months ago
- All the projects related to Llama ☆374 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,313 · Updated this week
- LLM Finetuning with peft ☆2,385 · Updated last month
- Best practices for distilling large language models. ☆506 · Updated last year
- Aligning Large Language Models with Human: A Survey ☆726 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (the core idea is sketched after this list) ☆1,523 · Updated 4 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,496 · Updated 9 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆885 · Updated last week
- Curated list of datasets and tools for post-training. ☆2,866 · Updated last month
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆1,084 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training ☆1,715 · Updated this week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy; a bare-bones version of the protocol is sketched below. ☆1,767 · Updated 7 months ago
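
A few of the entries above name concrete techniques; short sketches follow for orientation. First, the HALOs entry covers preference losses such as DPO. Below is a minimal PyTorch sketch of the DPO objective; the function name and signature are illustrative, not the library's API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Each input is a per-example sum of token log-probs, shape (batch,).
    # DPO pushes the policy to prefer the chosen response over the rejected
    # one by a wider margin than a frozen reference model does.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()

# Toy call with fake log-probs:
print(dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4)))
```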
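
The GaLore entry trains by projecting gradients onto a low-rank subspace. The sketch below is a simplification under stated assumptions: plain SGD instead of Adam, and a projector refreshed by the caller (GaLore recomputes it every few hundred steps and keeps optimizer state in the low-rank space).

```python
import torch

def update_projector(grad, rank):
    # Top-r left singular vectors of the gradient matrix span the
    # subspace GaLore projects onto; refreshed periodically in practice.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :rank]                          # shape (m, r)

@torch.no_grad()
def galore_sgd_step(weight, grad, projector, lr=1e-2):
    low_rank_grad = projector.T @ grad          # (r, n); optimizer state lives here
    weight -= lr * (projector @ low_rank_grad)  # project back to full (m, n)
```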
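
Finally, the needle-in-a-haystack entry measures retrieval at various context lengths. Here is a bare-bones version of the protocol, with `call_llm` standing in for whichever client you use (an assumption, not that repo's API):

```python
import random

NEEDLE = "The special magic number is 48613."
QUESTION = "What is the special magic number?"

def build_haystack(filler_sentences, target_chars, depth_pct):
    # Pad with filler to roughly target_chars, then bury the needle
    # at depth_pct percent of the way through the context.
    haystack = ""
    while len(haystack) < target_chars:
        haystack += random.choice(filler_sentences) + " "
    cut = int(len(haystack) * depth_pct / 100)
    return haystack[:cut] + NEEDLE + " " + haystack[cut:]

def run_trial(call_llm, filler_sentences, target_chars=50_000, depth_pct=50):
    prompt = (build_haystack(filler_sentences, target_chars, depth_pct)
              + "\n\nAnswer from the context above only: " + QUESTION)
    return "48613" in call_llm(prompt)   # crude pass/fail scoring
```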