facebookresearch / llm-transparency-tool
LLM Transparency Tool (LLM-TT) is an open-source interactive toolkit for analyzing the internal workings of Transformer-based language models. *Check out the demo at* https://huggingface.co/spaces/facebook/llm-transparency-tool-demo
☆776 · Updated 3 months ago
Alternatives and similar repositories for llm-transparency-tool:
Users interested in llm-transparency-tool are comparing it to the libraries listed below
- Evaluation suite for LLMs ☆312 · Updated 3 weeks ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆961 · Updated 4 months ago
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,156 · Updated 3 months ago
- Training LLMs with QLoRA + FSDP ☆1,422 · Updated 2 weeks ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,339 · Updated 7 months ago
- Train Models Contrastively in PyTorch ☆547 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆829 · Updated this week
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,048 · Updated 6 months ago
- ☆448 · Updated 7 months ago
- ReFT: Representation Finetuning for Language Models ☆1,164 · Updated 2 weeks ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,057 · Updated this week
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆687 · Updated 3 months ago
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆986 · Updated 9 months ago
- ☆893 · Updated last month
- ☆458 · Updated this week
- A native PyTorch Library for large model training ☆2,657 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆650 · Updated last month
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,009 · Updated this week
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆806 · Updated 3 months ago
- ☆882 · Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆1,284 · Updated this week
- ☆337 · Updated 4 months ago
- Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions ☆649 · Updated 3 weeks ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,395 · Updated 8 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆442 · Updated 8 months ago
- Code for Quiet-STaR ☆654 · Updated 3 months ago
- ☆1,494 · Updated this week
- Website for hosting the Open Foundation Models Cheat Sheet. ☆257 · Updated 5 months ago
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,174 · Updated this week