Figura-Labs-Inc / telegraf_nv_export
Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf, with memory-temperature readings.
☆62 · Updated 6 months ago
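Telegraf input plugins are configured via TOML tables in `telegraf.conf`. The listing above doesn't show this plugin's actual configuration, so the following is only a sketch assuming it follows Telegraf's standard `[[inputs.*]]` convention; the table name `nv_export` and the `gpu_indices` option are assumptions, not taken from the repository (`interval` is a standard per-plugin Telegraf option).

```toml
# Hypothetical configuration sketch for a Telegraf GPU telemetry input.
# The plugin table name and the gpu_indices option are assumptions.
[[inputs.nv_export]]
  ## Standard Telegraf per-plugin polling interval
  interval = "10s"
  ## Assumed option: which GPUs to sample (unset = all visible devices)
  # gpu_indices = [0, 1]
```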
Alternatives and similar repositories for telegraf_nv_export:
Users interested in telegraf_nv_export are comparing it to the repositories listed below.
- Simple Transformer in JAX · ☆128 · Updated 6 months ago
- Modify Entropy Based Sampling to work with Apple silicon via MLX · ☆49 · Updated 2 months ago
- smolLM with Entropix sampler in PyTorch · ☆147 · Updated 2 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) · ☆60 · Updated 2 months ago
- look how they massacred my boy · ☆63 · Updated 3 months ago
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes; feel free to rip. · ☆44 · Updated last year
- Aidan Bench attempts to measure <big_model_smell> in LLMs. · ☆191 · Updated 3 weeks ago
- Solve puzzles to improve your tinygrad skills! · ☆100 · Updated 3 months ago
- smol models are fun too · ☆86 · Updated 2 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search · ☆26 · Updated last month
- An implementation of Self-Extend, expanding the context window via grouped attention · ☆118 · Updated last year
- papers.day · ☆83 · Updated last year
- A highly efficient compression algorithm for the N1 implant (Neuralink's compression challenge) · ☆46 · Updated 7 months ago
- ☆37 · Updated 5 months ago
- Run PaliGemma in real time · ☆129 · Updated 8 months ago
- Comprehensive analysis of the performance differences among QLoRA, LoRA, and full fine-tunes · ☆82 · Updated last year
- A really tiny autograd engine · ☆89 · Updated 9 months ago
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus · ☆60 · Updated 8 months ago
- GPU benchmark · ☆50 · Updated 3 months ago
- ☆62 · Updated this week
- Run GGML models with Kubernetes · ☆173 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen · ☆99 · Updated last year
- Full fine-tuning of large language models without large memory requirements · ☆93 · Updated last year
- DeMo: Decoupled Momentum Optimization · ☆170 · Updated last month
- Stream of my favorite papers and links · ☆39 · Updated 4 months ago
- ☆250 · Updated this week
- An introduction to LLM sampling · ☆75 · Updated last month
- ☆107 · Updated 3 weeks ago
- Fast parallel LLM inference for MLX · ☆152 · Updated 6 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens · ☆118 · Updated 2 months ago