RahulSChand / llama2.c-for-dummies
Step by step explanation/tutorial of llama2.c
☆222 · Updated last year
Alternatives and similar repositories for llama2.c-for-dummies
Users interested in llama2.c-for-dummies are comparing it to the repositories listed below.
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model. ☆335 · Updated 2 months ago
- Easy and Efficient Quantization for Transformers ☆198 · Updated 2 weeks ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Efficient fine-tuning for ko-llm models ☆182 · Updated last year
- Newsletter bot for 🤗 Daily Papers ☆125 · Updated this week
- Inference of Mamba models in pure C ☆188 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated last year
- Extension of LangChain for RAG: easy benchmarking, multiple retrievals, reranker, time-aware RAG, and more ☆281 · Updated last year
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆44 · Updated this week
- 1-Click is all you need. ☆62 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 9 months ago
- Manage conversation histories for LLM-based applications ☆91 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 10 months ago
- Evolve LLM training instructions, from English instructions to any language ☆118 · Updated last year
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation ☆75 · Updated this week
- Yet Another Language Model: LLM inference in C++/CUDA with no libraries except for I/O ☆388 · Updated last month
- A collection of all available inference solutions for LLMs ☆91 · Updated 4 months ago
- The Universe of Evaluation: all about evaluation for LLMs ☆224 · Updated last year
- ☆271 · Updated last month
- Comparison of Language Model Inference Engines ☆219 · Updated 6 months ago
- A performance library for machine learning applications ☆184 · Updated last year
- RWKV in nanoGPT style ☆191 · Updated last year
- ☆12 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆153 · Updated last year
- Python bindings for ggml ☆142 · Updated 10 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆68 · Updated last year
- ONNX Runtime Server: a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference ☆162 · Updated last month
- Inference Llama 2 in one file of pure C++ ☆83 · Updated last year
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆277 · Updated last year