jxmorris12 / gptzip
Losslessly encode text natively with arithmetic coding and HuggingFace Transformers
☆76 · Updated last year
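gptzip's one-line description captures the core trick: a causal LM's next-token distribution is exactly what an arithmetic coder needs, so the achievable code length for a text is its negative log-likelihood under the model, and decoding just runs the same model in lockstep. The sketch below is not gptzip's actual API; the `gpt2` checkpoint and the `ideal_compressed_bits` helper are illustrative assumptions. It computes that entropy bound with HuggingFace Transformers.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def ideal_compressed_bits(text: str, model_name: str = "gpt2") -> float:
    """Code length (in bits) an ideal arithmetic coder driven by
    `model_name` needs to encode `text` losslessly: -sum(log2 p(token))."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    ids = tokenizer(text, return_tensors="pt").input_ids  # (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab_size)

    # Logits at position i predict token i+1, so score tokens 1..seq_len-1.
    # (A real coder must also encode the first token, e.g. under a uniform
    # prior; that term is omitted here for brevity.)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:, None]).squeeze(1)

    # Arithmetic coding approaches the entropy bound: total bits ≈ -Σ log2 p.
    return -token_log_probs.sum().item() / math.log(2)

if __name__ == "__main__":
    bits = ideal_compressed_bits("Hello, world! Arithmetic coding is neat.")
    print(f"≈{bits:.1f} bits ({bits / 8:.1f} bytes) at the entropy bound")
```

Because compressor and decompressor must see identical probabilities, deterministic model execution is essential for this scheme to be lossless in practice.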
Alternatives and similar repositories for gptzip
Users interested in gptzip are comparing it to the libraries listed below.
- ☆40 · Updated last year
- ☆101 · Updated 9 months ago
- Storing long contexts in tiny caches with self-study ☆194 · Updated 3 weeks ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆193 · Updated 10 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆99 · Updated 2 months ago
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- ☆91 · Updated last year
- ☆49 · Updated last year
- ☆61 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 7 months ago
- Latent Large Language Models ☆19 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- ☆81 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 7 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆68 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- A reading list of relevant papers and projects on foundation model annotation ☆28 · Updated 7 months ago
- An introduction to LLM Sampling ☆79 · Updated 9 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 5 months ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 5 months ago
- ☆54 · Updated last year
- look how they massacred my boy ☆63 · Updated 11 months ago
- gzip Predicts Data-dependent Scaling Laws ☆34 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 2 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 3 weeks ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 4 months ago
- https://hf.co/hexgrad/Kokoro-82M ☆14 · Updated 7 months ago
- RWKV-7: Surpassing GPT ☆96 · Updated 10 months ago