RahulSChand / llama2.c-for-dummies
Step-by-step explanation/tutorial of llama2.c
☆223 · Updated last year
Alternatives and similar repositories for llama2.c-for-dummies
Users interested in llama2.c-for-dummies are comparing it to the libraries listed below
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model. ☆342 · Updated 4 months ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 2 months ago
- Newsletter bot for 🤗 Daily Papers ☆127 · Updated this week
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Inference of Mamba models in pure C ☆191 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated last year
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 10 months ago
- Efficient fine-tuning for ko-llm models ☆182 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated 3 weeks ago
- Extension of Langchain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and so on... ☆282 · Updated last year
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆49 · Updated last month
- 1-Click is all you need. ☆62 · Updated last year
- Manage histories of LLM-based applications ☆91 · Updated last year
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆988 · Updated 4 months ago
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. ☆80 · Updated last month
- ☆293 · Updated last month
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆481 · Updated last week
- Code to train a GPT-2 model on the TinyStories dataset, following the TinyStories paper ☆39 · Updated last year
- Evolve LLM training instructions from English into any language. ☆119 · Updated last year
- Inference Llama 2 in one file of pure C++ ☆83 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- ☆35 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated last year
- SGLang is a fast serving framework for large language models and vision language models. ☆25 · Updated 2 weeks ago
- A collection of all available inference solutions for LLMs ☆91 · Updated 6 months ago
- The Universe of Evaluation. All about evaluation for LLMs. ☆227 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- ☆51 · Updated last year