valine / NeuralFlow
Visualize the intermediate output of Mistral 7B
☆367 · Updated 6 months ago
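NeuralFlow visualizes the output of each transformer layer in Mistral 7B. A common way to capture those per-layer tensors in PyTorch is sketched below; this is a minimal illustration, not NeuralFlow's actual code, and the checkpoint name, prompt, and stacking step are assumptions.

```python
# Minimal sketch (assumed, not NeuralFlow's implementation): collect the
# hidden state emitted by every layer of Mistral 7B for visualization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tok("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    # output_hidden_states=True returns one tensor per layer, plus the embeddings
    out = model(**inputs, output_hidden_states=True)

# Stack into (num_layers + 1, seq_len, hidden_dim), ready for a heatmap
states = torch.stack([h[0] for h in out.hidden_states])
print(states.shape)  # (33, num_tokens, 4096) for the 32-layer model
```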
Alternatives and similar repositories for NeuralFlow
Users interested in NeuralFlow are comparing it to the libraries listed below.
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆351 · Updated last year
- LLM Analytics ☆675 · Updated 9 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆622 · Updated 4 months ago
- A small code base for training large models ☆307 · Updated 3 months ago
- A curated list of data for reasoning AI ☆137 · Updated last year
- A library for making RepE control vectors ☆622 · Updated 6 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆208 · Updated 8 months ago
- The repository for the code of the UltraFastBERT paper ☆516 · Updated last year
- Mistral7B playing DOOM ☆133 · Updated last year
- An independent implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆286 · Updated last month
- A bagel, with everything. ☆323 · Updated last year
- Open-weights language model from Google DeepMind, based on Griffin. ☆645 · Updated 2 months ago
- An implementation of bucketMul LLM inference ☆221 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- ☆116 · Updated 6 months ago
- A compact LLM pretrained in 9 days using high-quality data ☆320 · Updated 3 months ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (see the sketch after this list) ☆240 · Updated 5 months ago
- ☆416 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- Finetune llama2-70b and codellama on a MacBook Air without quantization ☆448 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆175 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, and H100s ☆717 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆318 · Updated 9 months ago
- ☆447 · Updated last year
- Neural Search ☆362 · Updated 4 months ago
- Batched LoRAs ☆344 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 9 months ago
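The activation-engineering entry above adds steering vectors at inference time. As a rough sketch of the mechanics (assumed here, not that repository's API), a steering vector is a fixed direction in the residual stream added to one layer's hidden state through a forward hook:

```python
# Sketch of activation steering (assumed mechanics, not the listed repo's API):
# add a fixed "steering vector" to one layer's hidden state during the forward pass.
import torch

def add_steering_hook(layer, vector, scale=4.0):
    """Register a hook that shifts `layer`'s output along `vector`."""
    def hook(module, inputs, output):
        # Decoder layers often return tuples; the hidden state comes first
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * vector.to(hidden.dtype)
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    return layer.register_forward_hook(hook)

# Hypothetical usage: `steering_vec` might be the difference between mean
# activations on contrasting prompt sets (e.g. on-topic minus off-topic).
# handle = add_steering_hook(model.model.layers[15], steering_vec)
# ... model.generate(...) ...
# handle.remove()  # restore unsteered behavior
```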