☆13 · Jan 28, 2026 · Updated 3 months ago
Alternatives and similar repositories for libdfx
Users interested in libdfx are comparing it to the libraries listed below.
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA 2025] ☆14 · Dec 9, 2024 · Updated last year
- [PACT'24] GraNNDis: A fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… ☆10 · Aug 13, 2024 · Updated last year
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆12 · Jul 13, 2023 · Updated 2 years ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Sep 15, 2022 · Updated 3 years ago
- ☆21 · Jun 6, 2024 · Updated last year
- ☆11 · May 28, 2015 · Updated 10 years ago
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆34 · Dec 12, 2021 · Updated 4 years ago
- [ECCV 2024] VISAGE: Video Instance Segmentation with Appearance-Guided Enhancement ☆37 · Jul 29, 2024 · Updated last year
- ☆28 · Nov 29, 2024 · Updated last year
- Find hardware supported by the Nerves Framework and where to buy it ☆12 · Oct 27, 2023 · Updated 2 years ago
- Luthier, a GPU binary instrumentation tool for AMD GPUs ☆28 · Updated this week
- Erlang SSH Subsystem for Nerves firmware updates ☆14 · Apr 21, 2026 · Updated last week
- The artifact for NDSS '25 paper "ASGARD: Protecting On-Device Deep Neural Networks with Virtualization-Based Trusted Execution Environmen… ☆15 · Oct 16, 2025 · Updated 6 months ago
- HyFiSS: A Hybrid Fidelity Stall-Aware Simulator for GPGPUs ☆42 · Dec 9, 2024 · Updated last year
- Forkable repo for entries in Phoenix Phrenzy (https://phoenixphrenzy.com/) ☆11 · Oct 11, 2020 · Updated 5 years ago
- Multi-path UDP protocol - an example implementation ☆10 · Jul 6, 2015 · Updated 10 years ago
- 💡 Example for using websockets to control Nerves devices ☆12 · Jan 9, 2024 · Updated 2 years ago
- ☆23 · Sep 11, 2025 · Updated 7 months ago
- BadgerTrap is a tool to instrument x86-64 TLB misses. ☆13 · Nov 13, 2016 · Updated 9 years ago
- Send a message from a shell script to the BEAM ☆15 · Feb 2, 2026 · Updated 3 months ago
- Prompt format and padding guide for Llama 2 ☆12 · Sep 18, 2023 · Updated 2 years ago
- C++17 implementation of einops for libtorch - clear and reliable tensor manipulations with Einstein-like notation ☆11 · Oct 16, 2023 · Updated 2 years ago
- ☆11 · Sep 3, 2025 · Updated 8 months ago
- Excel (xlsx) file reader for Elixir ☆14 · Aug 31, 2017 · Updated 8 years ago
- ☆16 · Mar 6, 2026 · Updated last month
- Create and distribute PLC libraries ☆11 · Jun 11, 2023 · Updated 2 years ago
- (elastic) cuckoo hashing ☆17 · Jun 20, 2020 · Updated 5 years ago
- Rapid editable EX command launcher ☆16 · Aug 8, 2020 · Updated 5 years ago
- ☆13 · Apr 15, 2025 · Updated last year
- ☆14 · Oct 30, 2024 · Updated last year
- An MQTT 3.1.1 client written in Elixir ☆18 · Apr 3, 2026 · Updated last month
- Alveo Versal Example Design ☆65 · Jan 28, 2026 · Updated 3 months ago
- [ISCA'25] LIA: A Single-GPU LLM Inference Acceleration with Cooperative AMX-Enabled CPU-GPU Computation and CXL Offloading ☆12 · Jun 28, 2025 · Updated 10 months ago
- Dense optical flow toolbox (from C. Liu) ☆18 · Jun 14, 2012 · Updated 13 years ago
- ☆10 · Aug 7, 2021 · Updated 4 years ago
- Pie: Programmable LLM Serving ☆152 · Updated this week
- InstAttention: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference ☆17 · Mar 30, 2025 · Updated last year
- ☆14 · Oct 1, 2021 · Updated 4 years ago
- ☆10 · Nov 15, 2023 · Updated 2 years ago