AI-Hypercomputer / RecML
☆214 · Updated last week
Alternatives and similar repositories for RecML
Users interested in RecML are comparing it to the libraries listed below.
- Multi-backend recommender systems with Keras 3 ☆160 · Updated 2 weeks ago
- ☆162 · Updated last year
- Simple UI for debugging correlations of text embeddings ☆305 · Updated 8 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated 2 months ago
- ☆210 · Updated 7 months ago
- An introduction to LLM Sampling ☆79 · Updated last year
- ☆90 · Updated 7 months ago
- High-Performance Engine for Multi-Vector Search ☆207 · Updated 3 weeks ago
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. ☆181 · Updated 9 months ago
- Code for training & evaluating Contextual Document Embedding models ☆202 · Updated 8 months ago
- Lossily compress representation vectors using product quantization ☆59 · Updated 3 months ago
- Library for text-to-text regression, applicable to any input string representation; allows pretraining and fine-tuning over multiple r… ☆313 · Updated this week
- Lightweight Nearest Neighbors with Flexible Backends ☆330 · Updated last month
- ☆67 · Updated 8 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆98 · Updated 6 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 11 months ago
- Storing long contexts in tiny caches with self-study ☆233 · Updated 2 months ago
- ☆465 · Updated 2 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 11 months ago
- Datamodels for Hugging Face tokenizers ☆87 · Updated this week
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆102 · Updated 6 months ago
- Modular, scalable library to train ML models ☆204 · Updated this week
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆115 · Updated last month
- Low-memory full-parameter finetuning of LLMs ☆53 · Updated 6 months ago
- Seamless interface for using PyTorch distributed with Jupyter notebooks ☆57 · Updated 4 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆347 · Updated last year
- Where GPUs get cooked 👩‍🍳🔥 ☆362 · Updated 2 weeks ago
- Getting crystal-like representations with harmonic loss ☆195 · Updated 10 months ago