basetenlabs / Workshop-TRT-LLM
☆20 · Updated last year
Alternatives and similar repositories for Workshop-TRT-LLM
Users interested in Workshop-TRT-LLM are comparing it to the libraries listed below.
- Fine-tune an LLM to perform batch inference and online serving. ☆117 · Updated 8 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆115 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated last year
- A miniature version of Modal ☆23 · Updated last year
- LLM training in simple, raw C/CUDA ☆15 · Updated last year
- ☆23 · Updated 2 years ago
- Seamless interface for using PyTorch distributed with Jupyter notebooks ☆57 · Updated 4 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- A tool that facilitates easy, efficient and high-quality fine-tuning of Cohere's models ☆76 · Updated 10 months ago
- ☆31 · Updated last year
- Modded vLLM to run pipeline parallelism over public networks ☆40 · Updated 8 months ago
- ☆19 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- ☆161 · Updated last year
- ☆125 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- ☆68 · Updated 8 months ago
- Comprehensive analysis of differences in performance of QLoRA, LoRA, and full fine-tunes. ☆83 · Updated 2 years ago
- ☆80 · Updated last year
- ☆45 · Updated 2 years ago
- ☆89 · Updated 2 years ago
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and library created by… ☆34 · Updated last year
- Set of scripts to finetune LLMs ☆37 · Updated last year
- ML/DL Math and Method notes ☆66 · Updated 2 years ago
- I learn about and explain quantization ☆26 · Updated last year
- Google TPU optimizations for transformers models ☆133 · Updated last week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆279 · Updated last year
- Cray-LM unified training and inference stack. ☆22 · Updated last year
- ☆140 · Updated 5 months ago