Figura-Labs-Inc / telegraf_nv_export
Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings.
☆62 · Updated 10 months ago
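The tagline above describes a Telegraf input for NVIDIA GPU telemetry, including memory temperature. As a minimal, hypothetical sketch (not this plugin's actual code or interface), the snippet below shows how similar readings could be gathered with `nvidia-smi` and printed in influx line protocol, the format Telegraf's generic `exec` input consumes; the script, the `gpu_nv` measurement name, and the field names are illustrative assumptions.

```python
# Illustrative only: not telegraf_nv_export's implementation.
# Polls nvidia-smi for core and memory temperatures and prints
# influx line protocol, suitable for a Telegraf [[inputs.exec]] block.
import subprocess

# temperature.memory requires a recent driver/GPU; older setups report N/A.
QUERY = "index,temperature.gpu,temperature.memory"

def read_gpu_temps():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        idx, gpu_t, mem_t = [f.strip() for f in line.split(",")]
        # "gpu_nv" is an assumed measurement name, not the plugin's.
        fields = [f"temperature_gpu={gpu_t}"]
        if mem_t not in ("N/A", "[N/A]", ""):
            fields.append(f"temperature_memory={mem_t}")
        print(f"gpu_nv,gpu={idx} " + ",".join(fields))

if __name__ == "__main__":
    read_gpu_temps()
```

A script like this could be wired into Telegraf with a standard `[[inputs.exec]]` section and `data_format = "influx"`, though a dedicated plugin such as telegraf_nv_export presumably avoids the per-poll process overhead that approach incurs.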
Alternatives and similar repositories for telegraf_nv_export
Users interested in telegraf_nv_export are comparing it to the repositories listed below
- Simple Transformer in Jax ☆136 · Updated 10 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆65 · Updated 3 weeks ago
- look how they massacred my boy ☆63 · Updated 7 months ago
- ☆23 · Updated 9 months ago
- smolLM with Entropix sampler on pytorch ☆151 · Updated 6 months ago
- could we make an ml stack in 100,000 lines of code? ☆42 · Updated 10 months ago
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated last year
- smol models are fun too ☆92 · Updated 6 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆67 · Updated last month
- Modify Entropy Based Sampling to work with Mac Silicon via MLX ☆50 · Updated 6 months ago
- A miniature version of Modal ☆20 · Updated 11 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated 2 months ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆64 · Updated 6 months ago
- A really tiny autograd engine ☆92 · Updated last year
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Lego for GRPO ☆28 · Updated last month
- ☆49 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 8 months ago
- An introduction to LLM Sampling ☆78 · Updated 5 months ago
- compute, storage, and networking infra at home ☆65 · Updated last year
- ☆55 · Updated 2 months ago
- ☆38 · Updated 9 months ago
- moondream in zig. ☆67 · Updated last month
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆28 · Updated last week
- llm sampler that only allows words that are in the bible ☆27 · Updated 5 months ago
- ☆48 · Updated last year
- Comprehensive analysis of difference in performance of QLora, Lora, and Full Finetunes. ☆82 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆170 · Updated this week