EGjoni / DRUGS
Stop messing around with finicky sampling parameters and just use DRµGS!
☆339, updated 7 months ago
Alternatives and similar repositories for DRUGS:
Users interested in DRUGS are comparing it to the repositories listed below:
- Visualize the intermediate output of Mistral 7B (☆334, updated last week)
- This is our own implementation of "Layer Selective Rank Reduction" (☆232, updated 8 months ago)
- A library for making RepE control vectors (☆537, updated 3 weeks ago)
- An implementation of bucketMul LLM inference (☆215, updated 6 months ago)
- The repository for the code of the UltraFastBERT paper (☆514, updated 10 months ago)
- ☆253, updated this week
- Low-rank adapter extraction for fine-tuned transformer models (☆167, updated 8 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers, QLoRA (☆123, updated last year)
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (☆154, updated 3 months ago)
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… (☆163, updated this week)
- Full fine-tuning of large language models without large memory requirements (☆93, updated last year)
- Mistral 7B playing DOOM (☆127, updated 6 months ago)
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI (☆222, updated 9 months ago)
- A curated list of data for reasoning AI (☆121, updated 5 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers (☆420, updated last year)
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… (☆605, updated 2 months ago)
- ☆412, updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (☆218, updated 8 months ago)
- Fine-tune Mistral-7B on 3090s, A100s, H100s (☆706, updated last year)
- A multimodal, function-calling-powered LLM web UI (☆213, updated 4 months ago)
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts (☆111, updated last year)
- Run PaliGemma in real time (☆129, updated 8 months ago)
- ☆517, updated 3 months ago
- Batched LoRAs (☆338, updated last year)
- SoTA Transformers with a C backend for fast inference on your CPU (☆312, updated last year)
- ☆199, updated 11 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and… (☆199, updated 2 months ago)
- Bayesian Optimization as a coverage tool for evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… (☆276, updated last month)
- A small code base for training large models (☆283, updated last month)
- A fast batching API to serve LLM models (☆179, updated 9 months ago)