EGjoni / DRUGS
Stop messing around with finicky sampling parameters and just use DRµGS!
☆313, updated 3 months ago
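The pitch above — skip output-side sampling knobs entirely — can be sketched as injecting noise into the model's hidden states during inference and then decoding greedily. A minimal NumPy illustration of the idea (the `dose` function and its `dose_theta` noise-scale parameter are hypothetical names for this sketch, not the repository's actual API):

```python
import numpy as np

def dose(hidden, dose_theta=0.1, rng=None):
    """Perturb a layer's hidden state with Gaussian noise.

    `dose_theta` is a hypothetical noise-scale knob for illustration;
    variation in the output now comes from these perturbed activations,
    so decoding itself can remain plain greedy argmax.
    """
    rng = rng if rng is not None else np.random.default_rng()
    return hidden + rng.normal(0.0, dose_theta, size=hidden.shape)

hidden = np.ones((4, 8))            # stand-in for one layer's activations
noised = dose(hidden, dose_theta=0.2)
```

Because the randomness lives inside the forward pass rather than in the final token distribution, there is no temperature or top-p to tune; repeated greedy decodes still differ from run to run.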
Related projects:
- Visualize the intermediate output of Mistral 7B (☆300, updated 7 months ago)
- A library for making RepE control vectors (☆451, updated last month)
- An implementation of bucketMul LLM inference (☆212, updated 2 months ago)
- Our own implementation of 'Layer Selective Rank Reduction' (☆229, updated 3 months ago)
- A simple Python library for ablating features in LLMs that are supported by TransformerLens (☆288, updated 3 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA (☆123, updated last year)
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI (☆223, updated 4 months ago)
- Mistral 7B playing DOOM (☆117, updated 2 months ago)
- Code for the UltraFastBERT paper (☆508, updated 5 months ago)
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (☆192, updated 4 months ago)
- A comprehensive repository of reasoning tasks for LLMs (and beyond) (☆260, updated last month)
- Web UI for ExLlamaV2 (☆420, updated 3 weeks ago)
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … (☆190, updated 3 months ago)
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… (☆267, updated last month)
- A curated list of data for reasoning AI (☆105, updated last month)
- Low-rank adapter extraction for fine-tuned transformer models (☆154, updated 4 months ago)
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. (☆201, updated this week)
- Fine-tune Mistral-7B on 3090s, A100s, H100s (☆701, updated 11 months ago)
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript (☆545, updated 2 months ago)
- A fast batching API for serving LLMs (☆172, updated 4 months ago)
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… (☆143, updated this week)
- Stateful load balancer custom-tailored for llama.cpp (☆523, updated this week)
- Full fine-tuning of large language models without large memory requirements (☆94, updated 8 months ago)
- An implementation of Self-Extend, expanding the context window via grouped attention (☆117, updated 8 months ago)
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free (☆217, updated 6 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers (☆405, updated 9 months ago)
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (☆155, updated 2 months ago)
- Inference code for Persimmon-8B (☆415, updated last year)