anarchy-ai / llm-speed-benchmark
Benchmarking tool for assessing LLM performance across different hardware
☆17 · Updated last year
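The core measurement a speed benchmark like this makes is token throughput. As a minimal sketch of that idea (the function names, the `generate` callable, and its signature are illustrative assumptions, not the repo's actual API):

```python
import time

def benchmark_tokens_per_second(generate, prompt, runs=3):
    """Time `generate(prompt)` over several runs and return the mean
    tokens/sec. `generate` is assumed to return the list of generated
    tokens; this is a hypothetical interface, not llm-speed-benchmark's.
    """
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

# Stand-in for a real model call: emits one "token" per character.
def dummy_generate(prompt):
    return list(prompt)

avg_tps = benchmark_tokens_per_second(dummy_generate, "hello world")
print(f"avg tokens/sec: {avg_tps:.0f}")
```

In practice the timed call would wrap a real inference endpoint, and `time.perf_counter()` is used (rather than `time.time()`) because it is monotonic and high-resolution.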
Alternatives and similar repositories for llm-speed-benchmark
Users interested in llm-speed-benchmark are comparing it to the libraries listed below.
- Never forget anything again! Combine AI and intelligent tooling for a local knowledge base to track, catalogue, annotate, and plan for you… ☆37 · Updated last year
- Public reports detailing responses to sets of prompts by Large Language Models. ☆30 · Updated 5 months ago
- Runner in charge of collecting metrics from LLM inference endpoints for the Unify Hub. ☆17 · Updated last year
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆26 · Updated 7 months ago
- Generates grammar files from TypeScript for LLM generation. ☆38 · Updated last year
- Transformer GPU VRAM estimator. ☆65 · Updated last year
- Geniusrise: Framework for building geniuses. ☆60 · Updated last year
- Contains the model patches and the eval logs from the passing swe-bench-lite run. ☆10 · Updated 11 months ago
- Neural search engine for discovering semantically similar Python repositories on GitHub. ☆28 · Updated last year
- ☆20 · Updated last year
- ☆18 · Updated 9 months ago
- LLM finetuning. ☆42 · Updated last year
- LLM code editor for backend services. ☆14 · Updated 8 months ago
- 360M model running in the browser on WebGPU. ☆22 · Updated 10 months ago
- ☆16 · Updated last year
- StarListify is a Python package that classifies your GitHub star history into organized category lists based on user-defined criteria. ☆25 · Updated 7 months ago
- Access different AI models in one place. ☆22 · Updated last year
- Convert Python code into JSON consumable by OpenAI's function API. ☆28 · Updated 2 years ago
- The AI-powered CLI assistant. ☆27 · Updated last year
- Large-Language-Model-to-Machine Interface project. ☆19 · Updated last year
- An open-source MCP proxy. ☆13 · Updated 5 months ago
- Web interface for vision-language models, including InternVLM2. ☆22 · Updated 10 months ago
- Cache requests to the OpenAI API and see what requests were made with responses; analytics inspired by Helicone but simpler, currently ai… ☆17 · Updated 2 years ago
- Mistral-7B finetuned for function calling. ☆16 · Updated last year
- A better way of testing, inspecting, and analyzing AI agent traces. ☆38 · Updated 3 weeks ago
- The Prime Intellect CLI provides a powerful command-line interface for managing GPU resources across various providers. ☆29 · Updated last month
- Command-line tool for the Deep Infra cloud ML inference service. ☆31 · Updated last year
- Repository for opt-out requests. ☆9 · Updated last year
- The backend behind the LLM-Perf Leaderboard. ☆10 · Updated last year
- Watch your screen while doing sales and fill your CRM automatically. ☆17 · Updated last year
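One of the listed repos is a transformer GPU VRAM estimator. A common back-of-the-envelope version of that calculation (a sketch only — the function name and the 1.2× activation/KV-cache overhead factor are assumptions, not that repo's method) is weights × bytes-per-parameter × overhead:

```python
def estimate_inference_vram_gb(n_params_billion, bytes_per_param=2, overhead=1.2):
    """Rough inference VRAM estimate in GB.

    Weights dominate: 1e9 params * bytes_per_param bytes ~= n_params_billion
    GB per byte of precision. The 1.2x overhead factor for activations and
    KV cache is a rule-of-thumb assumption, not a measured constant.
    """
    weights_gb = n_params_billion * bytes_per_param
    return weights_gb * overhead

# e.g. a 7B model in fp16 (2 bytes/param)
print(round(estimate_inference_vram_gb(7), 1))  # → 16.8
```

Quantization changes `bytes_per_param` (4 for fp32, 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit), which is why the same model can fit on very different GPUs.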