facebookresearch / llm-transparency-tool
LLM Transparency Tool (LLM-TT) is an open-source interactive toolkit for analyzing the internal workings of Transformer-based language models. *Check out the demo at* https://huggingface.co/spaces/facebook/llm-transparency-tool-demo
☆846 · Updated 11 months ago
Alternatives and similar repositories for llm-transparency-tool
Users interested in llm-transparency-tool are comparing it to the libraries listed below.
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- Evaluation suite for LLMs ☆366 · Updated 4 months ago
- Train Models Contrastively in PyTorch ☆754 · Updated 7 months ago
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,444 · Updated 5 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,403 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated last year
- ☆446 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,104 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,217 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,345 · Updated 2 weeks ago
- Code for Quiet-STaR ☆741 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆685 · Updated last year
- Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models". ☆652 · Updated 3 months ago
- ☆1,009 · Updated 9 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆828 · Updated last month
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,013 · Updated 6 months ago
- DataComp for Language Models ☆1,389 · Updated 2 months ago
- [NeurIPS D&B 2024] Generative AI for Math: MathPile ☆418 · Updated 7 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆730 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆670 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,906 · Updated 3 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 6 months ago
- ☆552 · Updated last year
- Build, evaluate, understand, and fix LLM-based apps ☆491 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,033 · Updated 6 months ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆597 · Updated 2 years ago
- Generative Representational Instruction Tuning ☆678 · Updated 4 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆477 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated last year