AI-Hypercomputer / RecML
☆213 · Updated last week
Alternatives and similar repositories for RecML
Users interested in RecML are comparing it to the libraries listed below.
- Multi-backend recommender systems with Keras 3 · ☆157 · Updated last week
- An introduction to LLM Sampling · ☆79 · Updated last year
- ☆160 · Updated last year
- Super basic implementation (gist-like) of RLMs with REPL environments. · ☆290 · Updated 2 months ago
- ☆210 · Updated 6 months ago
- Simple UI for debugging correlations of text embeddings · ☆306 · Updated 7 months ago
- SIMD quantization kernels · ☆93 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research · ☆305 · Updated 3 weeks ago
- PageRank for LLMs · ☆51 · Updated 3 months ago
- lossily compress representation vectors using product quantization · ☆59 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. · ☆99 · Updated 5 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers · ☆73 · Updated 8 months ago
- code for training & evaluating Contextual Document Embedding models · ☆201 · Updated 7 months ago
- ☆536 · Updated 4 months ago
- ☆68 · Updated 7 months ago
- look how they massacred my boy · ☆63 · Updated last year
- High-Performance Engine for Multi-Vector Search · ☆195 · Updated this week
- Train your own SOTA deductive reasoning model · ☆107 · Updated 9 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch · ☆97 · Updated 5 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand · ☆195 · Updated 7 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. · ☆105 · Updated 3 months ago
- smolLM with Entropix sampler on PyTorch · ☆149 · Updated last year
- MoE training for Me and You and maybe other people · ☆309 · Updated this week
- Storing long contexts in tiny caches with self-study · ☆228 · Updated 3 weeks ago
- ☆40 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆109 · Updated 9 months ago
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. · ☆175 · Updated 7 months ago
- Modular, scalable library to train ML models · ☆182 · Updated last week
- Low-memory full-parameter finetuning of LLMs · ☆53 · Updated 5 months ago
- ☆90 · Updated 5 months ago