AI-Hypercomputer / RecML
☆211 · Updated this week
Alternatives and similar repositories for RecML
Users interested in RecML are comparing it to the libraries listed below.
- Multi-backend recommender systems with Keras 3 ☆146 · Updated 2 weeks ago
- Simple UI for debugging correlations of text embeddings ☆299 · Updated 5 months ago
- An introduction to LLM Sampling ☆79 · Updated 11 months ago
- ☆159 · Updated 11 months ago
- Code for training & evaluating Contextual Document Embedding models ☆200 · Updated 6 months ago
- SIMD quantization kernels ☆92 · Updated 2 months ago
- High-Performance Engine for Multi-Vector Search ☆183 · Updated last week
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆97 · Updated 4 months ago
- Lossily compress representation vectors using product quantization ☆59 · Updated 3 weeks ago
- ☆210 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆299 · Updated 2 weeks ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. ☆169 · Updated 6 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆248 · Updated last month
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆97 · Updated 3 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 8 months ago
- look how they massacred my boy ☆63 · Updated last year
- A simple llama3 implementation in pure JAX ☆70 · Updated 9 months ago
- ☆86 · Updated 4 months ago
- Library for text-to-text regression, applicable to any input string representation and allows pretraining and fine-tuning over multiple r… ☆287 · Updated this week
- ☆68 · Updated 5 months ago
- Efficient vector database for hundreds of millions of embeddings ☆208 · Updated last year
- Modular, scalable library to train ML models ☆170 · Updated this week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆196 · Updated 5 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆327 · Updated last year
- ☆40 · Updated last year
- Low-memory full-parameter finetuning of LLMs ☆53 · Updated 4 months ago
- Lightweight Nearest Neighbors with Flexible Backends ☆316 · Updated last month
- ☆243 · Updated 8 months ago
- PageRank for LLMs ☆51 · Updated 2 months ago
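Several of the projects above deal with compressing and searching embedding vectors; one of them is described as lossily compressing representation vectors with product quantization. As a rough illustration of that technique (a minimal NumPy sketch, not the API of any listed repository: the function names `train_pq`, `encode`, and `decode` are hypothetical), each vector is split into `M` sub-vectors, a small codebook is learned per subspace via k-means, and the vector is stored as one centroid index per sub-vector:

```python
import numpy as np

def train_pq(data, M=4, K=16, iters=10, seed=0):
    """Learn one K-centroid codebook per subspace; returns M arrays of shape (K, D//M)."""
    rng = np.random.default_rng(seed)
    N, D = data.shape
    d = D // M  # dimensionality of each subspace
    codebooks = []
    for m in range(M):
        sub = data[:, m * d:(m + 1) * d]
        # initialize centroids from random data points, then run plain k-means
        centroids = sub[rng.choice(N, K, replace=False)]
        for _ in range(iters):
            dists = ((sub[:, None, :] - centroids[None]) ** 2).sum(-1)
            assign = dists.argmin(1)
            for k in range(K):
                pts = sub[assign == k]
                if len(pts):
                    centroids[k] = pts.mean(0)
        codebooks.append(centroids)
    return codebooks

def encode(vec, codebooks):
    """Compress one vector to M centroid indices (one byte each when K <= 256)."""
    d = codebooks[0].shape[1]
    return np.array([
        ((cb - vec[m * d:(m + 1) * d]) ** 2).sum(1).argmin()
        for m, cb in enumerate(codebooks)
    ])

def decode(codes, codebooks):
    """Reconstruct a lossy approximation of the original vector from its codes."""
    return np.concatenate([cb[c] for cb, c in zip(codebooks, codes)])
```

With `M=4` and `K=256`, a 16-dimensional float32 vector (64 bytes) compresses to 4 bytes, at the cost of reconstruction error that shrinks as `K` and `M` grow.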