friendliai / LLMServingPerfEvaluator
☆45 Updated 6 months ago
Alternatives and similar repositories for LLMServingPerfEvaluator:
Users interested in LLMServingPerfEvaluator compare it to the libraries listed below.
- FMO (Friendli Model Optimizer) ☆12 Updated 2 months ago
- Welcome to PeriFlow CLI ☁︎ ☆12 Updated last year
- FriendliAI Model Hub ☆91 Updated 2 years ago
- ☆102 Updated last year
- PyTorch CoreSIG ☆55 Updated 2 months ago
- ☆25 Updated 2 years ago
- MIST: High-performance IoT Stream Processing ☆17 Updated 6 years ago
- Friendli: the fastest serving engine for generative AI ☆42 Updated 2 months ago
- A performance library for machine learning applications. ☆183 Updated last year
- ☆50 Updated 4 months ago
- Nemo: A flexible data processing system ☆21 Updated 7 years ago
- Dotfile management with bare git ☆19 Updated this week
- ☆15 Updated 3 years ago
- Implementation of code that blocks LLMs from generating foreign-language tokens ☆49 Updated this week
- ☆24 Updated 6 years ago
- Study Group of Deep Learning Compiler ☆157 Updated 2 years ago
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆76 Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆59 Updated this week
- ☆83 Updated 11 months ago
- Network Contention-Aware Cluster Scheduling with Reinforcement Learning (IEEE ICPADS 2023) ☆15 Updated 5 months ago
- Official GitHub repository for the SIGCOMM '24 paper "Accelerating Model Training in Multi-cluster Environments with Consumer-grade GPUs" ☆67 Updated 8 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆116 Updated last year
- A very simple matrix multiplication performance example for CPU / CUDA / Metal using GGML / llama.cpp ☆12 Updated 8 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 Updated last year
- ☆28 Updated 3 years ago
- [KO-Platy🥮] KO-platypus: a llama-2-ko model fine-tuned on Korean-Open-platypus ☆77 Updated last year
- ☆61 Updated 2 months ago
- ☆105 Updated 10 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆95 Updated last month
- ☆14 Updated last month