valine / NeuralFlow
Visualize the intermediate output of Mistral 7B
☆ 381 · Updated 10 months ago
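For context, here is a minimal sketch of the kind of per-layer inspection NeuralFlow centers on: capturing Mistral 7B's intermediate hidden states with Hugging Face transformers. This is not NeuralFlow's own code; the model id, the prompt, and the use of `device_map="auto"` (which needs `accelerate` installed) are assumptions for illustration.

```python
# Minimal sketch (not NeuralFlow's code): capture Mistral 7B's per-layer
# hidden states with Hugging Face transformers for later visualization.
# Assumes the checkpoint "mistralai/Mistral-7B-v0.1" and enough memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states holds the embedding output plus one tensor per
# transformer layer, each of shape (batch, seq_len, hidden_size).
# Stacking them yields a (n_layers + 1, batch, seq_len, hidden_size)
# tensor that can be reduced and plotted layer by layer.
hidden = torch.stack(outputs.hidden_states)
print(hidden.shape)
```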
Alternatives and similar repositories for NeuralFlow
Users interested in NeuralFlow are comparing it to the libraries listed below.
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆ 359 · Updated last year
- LLM Analytics ☆ 698 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆ 627 · Updated 8 months ago
- A library for making RepE control vectors ☆ 668 · Updated 2 months ago
- A small code base for training large models ☆ 315 · Updated 7 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆ 219 · Updated last year
- Inference code for Persimmon-8B ☆ 412 · Updated 2 years ago
- Mistral7B playing DOOM ☆ 138 · Updated last year
- A curated list of data for reasoning AI ☆ 140 · Updated last year
- Our own implementation of 'Layer Selective Rank Reduction' ☆ 240 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆ 520 · Updated last year
- An implementation of bucketMul LLM inference ☆ 223 · Updated last year
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆ 286 · Updated 2 months ago
- A bagel, with everything. ☆ 325 · Updated last year
- Batched LoRAs ☆ 347 · Updated 2 years ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆ 330 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆ 179 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆ 450 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆ 119 · Updated last year
- ☆ 115 · Updated 10 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆ 654 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆ 232 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆ 717 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆ 227 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆ 250 · Updated 9 months ago
- A compact LLM pretrained in 9 days using high-quality data ☆ 336 · Updated 7 months ago
- Full finetuning of large language models without large memory requirements ☆ 94 · Updated 2 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆ 222 · Updated last year
- ☆ 415 · Updated 2 years ago
- ☆ 581 · Updated last year