jxmorris12 / embzip
lossily compress representation vectors using product quantization
☆59 · Updated 2 months ago
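The repository's description mentions product quantization (PQ): splitting each embedding into sub-vectors, clustering each subspace, and storing only the per-subspace centroid indices. Below is a minimal NumPy sketch of that idea under my own assumptions; the function names, parameters, and defaults here are illustrative and are not embzip's actual API.

```python
# Minimal product-quantization sketch (illustrative, not embzip's API).
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain k-means; returns the learned centroids."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster is empty.
        for j in range(k):
            if (assign == j).any():
                centroids[j] = x[assign == j].mean(axis=0)
    return centroids

def pq_train(vectors, num_subspaces=4, num_codes=16):
    """Learn one codebook per subspace. vectors: (N, D), D divisible by num_subspaces."""
    subvecs = np.split(vectors, num_subspaces, axis=1)
    return [kmeans(sv, num_codes) for sv in subvecs]

def pq_encode(vectors, codebooks):
    """Compress: each D-dim float vector -> one uint8 index per subspace."""
    codes = []
    for sv, cb in zip(np.split(vectors, len(codebooks), axis=1), codebooks):
        d = np.linalg.norm(sv[:, None, :] - cb[None, :, :], axis=-1)
        codes.append(d.argmin(axis=1))
    return np.stack(codes, axis=1).astype(np.uint8)

def pq_decode(codes, codebooks):
    """Lossily reconstruct vectors by looking up each subspace centroid."""
    return np.concatenate(
        [cb[codes[:, m]] for m, cb in enumerate(codebooks)], axis=1
    )

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64)).astype(np.float32)
codebooks = pq_train(emb, num_subspaces=4, num_codes=16)
codes = pq_encode(emb, codebooks)   # 64 float32s -> 4 bytes per vector
recon = pq_decode(codes, codebooks)
```

With 4 subspaces of 16 codes each, every 256-byte float32 vector compresses to 4 bytes of indices plus the shared codebooks; reconstruction is approximate, which is the "lossy" part of the trade-off.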
Alternatives and similar repositories for embzip
Users interested in embzip are comparing it to the libraries listed below.
- ☆40 · Updated last year
- look how they massacred my boy · ☆63 · Updated last year
- ☆53 · Updated 11 months ago
- Python library to use Pleias-RAG models · ☆67 · Updated 8 months ago
- An introduction to LLM Sampling · ☆79 · Updated last year
- NLP with Rust for Python 🦀🐍 · ☆70 · Updated 7 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆59 · Updated 2 months ago
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles · ☆61 · Updated 8 months ago
- smolLM with Entropix sampler on pytorch · ☆149 · Updated last year
- Simple GRPO scripts and configurations. · ☆59 · Updated 11 months ago
- Project code for training LLMs to write better unit tests + code · ☆21 · Updated 7 months ago
- ☆90 · Updated 6 months ago
- ☆68 · Updated 7 months ago
- ☆29 · Updated 2 months ago
- utilities for loading and running text embeddings with onnx · ☆44 · Updated 4 months ago
- ☆92 · Updated 3 weeks ago
- Train your own SOTA deductive reasoning model · ☆107 · Updated 10 months ago
- Aana SDK is a powerful framework for building AI-enabled multimodal applications. · ☆55 · Updated 4 months ago
- BPE modification that implements removal of intermediate tokens during tokenizer training. · ☆25 · Updated last year
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. · ☆77 · Updated 11 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). · ☆80 · Updated last year
- code for training & evaluating Contextual Document Embedding models · ☆202 · Updated 7 months ago
- ☆45 · Updated 2 years ago
- Pivotal Token Search · ☆142 · Updated 3 weeks ago
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. · ☆176 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. · ☆100 · Updated 5 months ago
- ☆25 · Updated 8 months ago
- PageRank for LLMs · ☆51 · Updated 4 months ago
- Storing long contexts in tiny caches with self-study · ☆229 · Updated last month
- ☆36 · Updated 5 months ago