Picovoice / llm-compression-benchmark
LLM Compression Benchmark
☆22 · Updated last week
Alternatives and similar repositories for llm-compression-benchmark
Users interested in llm-compression-benchmark are comparing it to the libraries listed below.
- Implementation of Mamba in Rust ☆87 · Updated last year
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated last year
- ☆134 · Updated 11 months ago
- Public reports detailing responses to sets of prompts by Large Language Models ☆30 · Updated 6 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 10 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆30 · Updated this week
- Access the Cohere Command R family of models ☆37 · Updated 3 months ago
- Run ollama & GGUF easily with a single command ☆52 · Updated last year
- Testing LLM reasoning abilities with family relationship quizzes ☆62 · Updated 5 months ago
- ☆41 · Updated 2 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- Google TPU optimizations for transformers models ☆116 · Updated 5 months ago
- ☆95 · Updated 6 months ago
- Tools for LLM agents ☆63 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆41 · Updated last year
- Train an adapter for any embedding model in under a minute ☆106 · Updated 3 months ago
- AnyModal is a flexible multimodal language model framework for PyTorch ☆100 · Updated 6 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆80 · Updated 2 months ago
- A guidance compatibility layer for llama-cpp-python ☆35 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆81 · Updated 2 months ago
- Practical and advanced guide to LLMOps. It provides a solid understanding of large language models’ general concepts, deployment techniqu… ☆70 · Updated 11 months ago
- ☆66 · Updated last year
- This project implements a demonstrator agent that compares the Cache-Augmented Generation (CAG) Framework with traditional Retrieval-Augm… ☆33 · Updated 6 months ago
- Fast parallel LLM inference for MLX ☆198 · Updated last year
- Generate glue code in seconds to simplify your Nvidia Triton Inference Server deployments ☆20 · Updated last year
- Simple high-throughput inference library ☆120 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series ☆184 · Updated 5 months ago
- RWKV-7: Surpassing GPT ☆92 · Updated 8 months ago
- ☆14 · Updated 10 months ago
- MLX implementation of the xLSTM model by Beck et al. (2024) ☆28 · Updated last year