RahulSChand / llama2.c-for-dummies
Step by step explanation/tutorial of llama2.c
☆224 · Updated last year
Alternatives and similar repositories for llama2.c-for-dummies
Users interested in llama2.c-for-dummies are comparing it to the repositories listed below.
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model. ☆344 · Updated 5 months ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 3 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Efficient fine-tuning for ko-llm models ☆182 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated 2 years ago
- Newsletter bot for 🤗 Daily Papers ☆127 · Updated this week
- Inference of Mamba models in pure C ☆191 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 11 months ago
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆49 · Updated 2 months ago
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆990 · Updated 5 months ago
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. ☆88 · Updated 2 months ago
- 1-Click is all you need. ☆62 · Updated last year
- Extension of Langchain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and so on. ☆284 · Updated last year
- Manage histories of LLM-based applications ☆90 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated last month
- Evolve LLM training instructions from English to any language ☆119 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- The Universe of Evaluation. All about the evaluation for LLMs. ☆226 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆96 · Updated 2 years ago
- ☆298 · Updated last week
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆382 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆84 · Updated 2 years ago
- A small code base for training large models ☆308 · Updated 5 months ago
- ONNX Runtime Server: a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference. ☆167 · Updated last week
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated 2 months ago
- Comparison of Language Model Inference Engines ☆229 · Updated 9 months ago
- ☆12 · Updated last year
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆502 · Updated 3 weeks ago