charlesfrye / cuda-substrings
Because it's there.
☆16 · Updated last year
Alternatives and similar repositories for cuda-substrings
Users interested in cuda-substrings are comparing it to the libraries listed below:
- ☆40 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 4 months ago
- Rust Implementation of micrograd ☆53 · Updated last year
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆58 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- Using modal.com to process FineWeb-edu data ☆20 · Updated 9 months ago
- ☆27 · Updated last year
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 3 months ago
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated 2 months ago
- An alternative way of calculating self-attention ☆18 · Updated last year
- Approximating the joint distribution of language models via MCTS ☆22 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆75 · Updated 2 years ago
- Simple high-throughput inference library ☆155 · Updated 8 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- A miniature version of Modal ☆23 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated 2 years ago
- Training hybrid models for dummies. ☆29 · Updated 2 months ago
- ☆27 · Updated last year
- new optimizer ☆20 · Updated last year
- utilities for loading and running text embeddings with onnx ☆45 · Updated 5 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆77 · Updated 11 months ago
- An introduction to LLM Sampling ☆79 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- lossily compress representation vectors using product quantization ☆59 · Updated 2 months ago
- A Learning Journey: Micrograd in Mojo 🔥 ☆65 · Updated last year
- ☆22 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago