M1n9X / GraphRAG_Lite
☆16 · Updated last year
Alternatives and similar repositories for GraphRAG_Lite
Users interested in GraphRAG_Lite are comparing it to the libraries listed below.
- ☆94 · Updated 8 months ago
- Fast LLM training codebase with dynamic strategy choosing [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- ☆92 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper ☆33 · Updated last year
- ☆40 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- FuseAI Project ☆87 · Updated 7 months ago
- The code for the paper: Decoupled Planning and Execution: A Hierarchical Reasoning Framework for Deep Search ☆53 · Updated last month
- Reformatted Alignment ☆113 · Updated 11 months ago
- ☆83 · Updated last year
- The newest version of llama3, with source code explained line by line in Chinese ☆22 · Updated last year
- Copies the MLP of llama3 8 times as 8 experts, creates a router with random initialization, and adds a load-balancing loss to construct an 8x8b Mo… ☆27 · Updated last year
- Official repository for RAGViz: Diagnose and Visualize Retrieval-Augmented Generation [EMNLP 2024] ☆85 · Updated 7 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- Automatic prompt optimization framework for multi-step agent tasks ☆33 · Updated 9 months ago
- The official implementation of "LevelRAG: Enhancing Retrieval-Augmented Generation with Multi-hop Logic Planning over Rewriting Augmented… ☆41 · Updated 4 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Deep Reasoning Translation (DRT) Project ☆227 · Updated 2 months ago
- ☆90 · Updated 3 months ago
- ☆46 · Updated 2 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens paper ☆149 · Updated last year
- ☆36 · Updated 11 months ago
- The simplest reproduction of R1-style results on a small model, explaining the most important essence shared by o1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI. ☆43 · Updated 6 months ago
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- ☆292 · Updated 2 months ago
- ☆94 · Updated 2 weeks ago
- Imitate OpenAI with Local Models ☆89 · Updated 11 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- PGRAG ☆53 · Updated last year
- ☆50 · Updated last year