rentruewang / bocoel
Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few lines of modular code.
☆286 · Updated 2 weeks ago
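For orientation, here is a rough, hypothetical sketch of the idea named in the description above: use Bayesian optimization over a corpus's embedding space to choose which benchmark examples to evaluate, rather than scoring every example. This is not bocoel's actual API; `embeddings`, `evaluate`, and the scikit-learn Gaussian-process surrogate are stand-ins.

```python
# Conceptual sketch (not bocoel's API): Bayesian optimization over corpus
# embeddings to decide which benchmark examples to evaluate next, instead
# of scoring the whole benchmark. `embeddings` and `evaluate` are
# placeholders for a real corpus and a real LLM scoring call.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 32))          # stand-in corpus embeddings

def evaluate(idx: int) -> float:                  # hypothetical: score the LLM on example idx
    return float(np.tanh(embeddings[idx, 0]))     # placeholder score

# Start from a few random evaluations, then let a GP surrogate pick the
# most informative next example via an upper-confidence bound.
observed = list(rng.choice(len(embeddings), size=5, replace=False))
scores = [evaluate(i) for i in observed]

gp = GaussianProcessRegressor()
for _ in range(20):                               # small evaluation budget
    gp.fit(embeddings[observed], scores)
    mean, std = gp.predict(embeddings, return_std=True)
    ucb = mean + 1.0 * std                        # acquisition: exploit + explore
    ucb[observed] = -np.inf                       # never re-evaluate an example
    nxt = int(np.argmax(ucb))
    observed.append(nxt)
    scores.append(evaluate(nxt))

print(f"Score estimated from {len(observed)} of {len(embeddings)} examples:",
      np.mean(scores))
```

The surrogate's uncertainty steers the small evaluation budget toward the least-covered regions of the corpus, which is roughly how a coverage-driven evaluator can get away with scoring far fewer examples than the full benchmark.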
Alternatives and similar repositories for bocoel
Users interested in bocoel are comparing it to the libraries listed below
- LLM Analytics ☆677 · Updated 10 months ago
- Implement recursion using English as the programming language and an LLM as the runtime. ☆239 · Updated 2 years ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆622 · Updated 4 months ago
- A BERT that you can train on a (gaming) laptop. ☆209 · Updated last year
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆253 · Updated last year
- Radient turns many data types (not just text) into vectors for similarity search, RAG, regression analysis, and more. ☆279 · Updated this week
- Dead Simple LLM Abliteration ☆231 · Updated 6 months ago
- The Fast Vector Similarity Library is designed to provide efficient computation of various similarity measures between vectors. ☆401 · Updated 5 months ago
- ☆253 · Updated 2 years ago
- ☆745 · Updated last year
- Revealing example of self-attention, the building block of transformer AI models (see the minimal NumPy sketch after this list) ☆131 · Updated 2 years ago
- This project collects GPU benchmarks from various cloud providers and compares them to fixed per-token costs. Use our tool for efficient … ☆220 · Updated 8 months ago
- Agent accuracy measurements for LLMs ☆205 · Updated last year
- Run and explore Llama models locally with minimal dependencies on CPU ☆191 · Updated 10 months ago
- A curated list of data for reasoning AI ☆137 · Updated last year
- See Through Your Models ☆399 · Updated last month
- Visualize the intermediate output of Mistral 7B ☆368 · Updated 7 months ago
- AI for jq ☆244 · Updated 11 months ago
- ☆163 · Updated last year
- A pure NumPy implementation of Mamba. ☆224 · Updated last year
- Lightweight Nearest Neighbors with Flexible Backends ☆298 · Updated last month
- Enforce structured output from LLMs 100% of the time ☆250 · Updated last year
- PyTorch script hot swap: change code without unloading your LLM from VRAM ☆126 · Updated 4 months ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated last year
- Docker-based inference engine for AMD GPUs ☆231 · Updated 10 months ago
- OpenAI's Structured Outputs with Logprobs ☆182 · Updated 2 months ago
- ☆220 · Updated 5 months ago
- High-Performance Implementation of OpenAI's TikToken. ☆444 · Updated last month
- An implementation of bucketMul LLM inference ☆222 · Updated last year
- Enable decision-making based on simulations ☆227 · Updated last year
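As referenced in the self-attention entry above, single-head self-attention in plain NumPy looks roughly like the following. Names and shapes are illustrative only and are not taken from any repository in this list.

```python
# Minimal single-head self-attention sketch in plain NumPy; illustrative
# only, not code from any repository listed above.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: (d_model, d_head). Returns (seq_len, d_head)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])         # scaled dot-product
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # (4, 8)
```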