NVIDIA / workbench-llamafactory
This is an NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using LLaMA Factory.
☆56 · Updated 7 months ago
Alternatives and similar repositories for workbench-llamafactory
Users interested in workbench-llamafactory are comparing it to the libraries listed below.
- Official repository for RAGViz: Diagnose and Visualize Retrieval-Augmented Generation [EMNLP 2024] ☆83 · Updated 4 months ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated last year
- FuseAI Project ☆87 · Updated 4 months ago
- Imitate OpenAI with Local Models ☆87 · Updated 9 months ago
- II-Thought-RL is our initial attempt at developing a large-scale, multi-domain Reinforcement Learning (RL) dataset ☆17 · Updated last month
- ☆83 · Updated 3 weeks ago
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆37 · Updated last year
- ☆40 · Updated last year
- ☆74 · Updated last year
- The newest version of Llama 3, with the source code explained line by line in Chinese ☆22 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆61 · Updated 9 months ago
- Code for KaLM-Embedding models ☆78 · Updated 2 months ago
- [ICLR'25] ApolloMoE: Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts ☆44 · Updated 6 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 6 months ago
- PGRAG ☆48 · Updated 10 months ago
- Evaluation of the BM42 sparse indexing algorithm ☆67 · Updated 10 months ago
- ☆47 · Updated 5 months ago
- ☆223 · Updated last week
- ☆94 · Updated 6 months ago
- ☆53 · Updated last year
- ☆51 · Updated 10 months ago
- TextEmbed is a REST API crafted for high-throughput and low-latency embedding inference. It accommodates a wide variety of embedding mode… ☆24 · Updated 9 months ago
- Recipes for creating AI applications with APIs from DashScope (and friends)! ☆55 · Updated 8 months ago
- Collection of model-centric MCP servers ☆17 · Updated 2 weeks ago
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆97 · Updated last year
- ☆59 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- LLaMA Factory Document ☆132 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆136 · Updated 6 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 8 months ago