kaitz / fx2-cmix
Hutter Prize Submission
☆19 · Updated 3 months ago
Alternatives and similar repositories for fx2-cmix:
Users interested in fx2-cmix are comparing it to the repositories listed below.
- Hutter Prize Submission ☆12 · Updated 3 years ago
- ☆10 · Updated last year
- A synthetic story narration dataset to study small audio LMs. ☆31 · Updated 11 months ago
- GPU benchmark ☆50 · Updated 3 months ago
- ☆46 · Updated 10 months ago
- ☆46 · Updated this week
- Sparse autoencoders for Contra text embedding models ☆25 · Updated 8 months ago
- SGEMM that beats cuBLAS ☆45 · Updated this week
- Data compression using LSTM in TensorFlow ☆98 · Updated last year
- Experiments with BitNet inference on CPU ☆52 · Updated 9 months ago
- Grokking on modular arithmetic in less than 150 epochs in MLX ☆12 · Updated 2 months ago
- A Python implementation of the Garbled Circuits MPC protocol ☆56 · Updated 11 months ago
- Semi-Classical Quantum Random Number Generator library written in C for cryptographic, simulation and generative AI applications with exa… ☆46 · Updated 3 weeks ago
- An implementation of LLMzip using GPT-2 ☆12 · Updated last year
- Training Models Daily ☆17 · Updated last year
- ☆53 · Updated 7 months ago
- ☆27 · Updated 6 months ago
- This repository contains the source code and dataset link mentioned in the WWW 2022 accepted paper "TRACE: A Fast Transformer-based General-Pu… ☆28 · Updated 2 years ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆96 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆126 · Updated this week
- Dzip: improved general-purpose lossless compression based on novel neural network modeling ☆63 · Updated 2 years ago
- WebGPU autograd library ☆19 · Updated 3 weeks ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆88 · Updated 10 months ago
- QuIP quantization ☆48 · Updated 10 months ago
- An implementation of bucketMul LLM inference ☆214 · Updated 6 months ago
- Tensor library with autograd using only Rust's standard library ☆64 · Updated 6 months ago
- ☆40 · Updated last year
- CompChomper is a framework for measuring how LLMs perform at code completion. ☆15 · Updated last month
- WebGPU LLM inference tuned by hand ☆148 · Updated last year
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆60 · Updated 8 months ago