hamelsmu / llama-inference
experiments with inference on llama
☆103 · Updated Jun 6, 2024
Alternatives and similar repositories for llama-inference
Users interested in llama-inference are comparing it to the repositories listed below.
- ☆13 · Updated May 25, 2023
- ☆20 · Updated Nov 23, 2022
- Leverage your LangChain trace data for fine tuning ☆46 · Updated Aug 2, 2024
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Updated Aug 29, 2023
- Non-local Modeling for Image Quality Assessment ☆13 · Updated Dec 20, 2023
- ☆16 · Updated Aug 10, 2022
- extensible collectives library in triton ☆95 · Updated Mar 31, 2025
- Example of applying CUDA graphs to LLaMA-v2 ☆12 · Updated Aug 25, 2023
- ☆120 · Updated Apr 22, 2024
- Track the progress of LLM context utilisation ☆55 · Updated Apr 14, 2025
- Janus is an open-source AI for Star Citizen ☆11 · Updated Dec 23, 2023
- This repo lets you run mistral-7b in Google Colab. ☆16 · Updated Oct 1, 2023
- An automation webcrawler based on the Selenium library for retrieving parliamentary questions on the website of the Taiwan Legislative Yuan (http…) ☆11 · Updated Jun 8, 2023
- ☆135 · Updated Nov 24, 2023
- My custom Helm Chart repository ☆17 · Updated Dec 20, 2025
- Interface to multicore QR factorization qr_mumps ☆18 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated Oct 23, 2024
- Smol but mighty language model ☆65 · Updated Apr 4, 2023
- ☆16 · Updated Oct 24, 2023
- A fork of sqlite-utils with CLI etc. removed ☆17 · Updated Jan 29, 2026
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆20 · Updated Nov 15, 2025
- batched loras ☆349 · Updated Sep 6, 2023
- Various transformers for FSDP research ☆38 · Updated Nov 11, 2022
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated Jan 21, 2024
- Serving multiple LoRA-finetuned LLMs as one ☆1,139 · Updated May 8, 2024
- ☆17 · Updated Feb 19, 2024
- Use Actions to acquire those precious Lambda GPUs ☆19 · Updated Sep 7, 2023
- Experiments to assess SPADE on different LLM pipelines ☆17 · Updated Apr 7, 2024
- ☆593 · Updated Aug 23, 2024
- Generate BERT vocabularies and pretraining examples from Wikipedias ☆17 · Updated May 11, 2020
- Make triton easier ☆50 · Updated Jun 12, 2024
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,093 · Updated Jun 30, 2025
- Experiments on speculative sampling with Llama models ☆128 · Updated Jun 8, 2023
- ☆20 · Updated Jan 27, 2024
- ☆19 · Updated May 6, 2023
- Temporarily remove unused tokens during training to save RAM and speed up training ☆23 · Updated Jun 15, 2025
- ☆126 · Updated Mar 17, 2024
- Python bindings for the Transformer models implemented in C/C++ using the GGML library ☆1,879 · Updated Jan 28, 2024
- ☆75 · Updated Jul 2, 2021