M1n9X / GraphRAG_Lite
☆15 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for GraphRAG_Lite
- ☆83 · Updated 7 months ago
- Repo for the paper "AgentRE: An Agent-Based Framework for Navigating Complex Information Landscapes in Relation Extraction". ☆50 · Updated 4 months ago
- The newest version of llama3, with its source code explained line by line in Chinese. ☆22 · Updated 7 months ago
- ☆54 · Updated last month
- Imitate OpenAI with local models. ☆85 · Updated 2 months ago
- ☆78 · Updated 2 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper. ☆28 · Updated 5 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated 7 months ago
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking. ☆33 · Updated this week
- ☆42 · Updated 2 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆52 · Updated 7 months ago
- Connecting humans and agents. ☆52 · Updated this week
- ☆34 · Updated 6 months ago
- LLM RAG application, supporting API calls and voice interaction. ☆10 · Updated 4 months ago
- Uses langchain to implement story scenario generation, emotional scene guidance, plot summarization, and personality analysis. ☆14 · Updated 5 months ago
- Recursive Abstractive Processing for Tree-Organized Retrieval. ☆11 · Updated 5 months ago
- ☆51 · Updated 4 months ago
- FuseAI Project. ☆76 · Updated 3 months ago
- A survey of large language model training and serving. ☆34 · Updated last year
- ☆129 · Updated 4 months ago
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler]. ☆34 · Updated 10 months ago
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems. ☆77 · Updated 10 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale". ☆193 · Updated last month
- ☆28 · Updated 2 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation". ☆70 · Updated last year
- TianGong-AI-Unstructure. ☆51 · Updated this week
- Copies the MLP of llama3 8 times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8b Mo… (a minimal sketch of this construction follows the list). ☆25 · Updated 4 months ago
- PGRAG. ☆42 · Updated 4 months ago
- ☆85 · Updated 2 weeks ago
- ☆79 · Updated 7 months ago
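
The MoE entry above describes a simple construction: duplicate a pretrained dense MLP into 8 experts, attach a randomly initialized router, and add a load-balancing auxiliary loss. Below is a minimal sketch of that idea, assuming PyTorch; the class, method, and variable names (`NaiveMoE`, `aux_loss`, etc.) are illustrative assumptions, not code from the repository.

```python
# Minimal sketch (assumes PyTorch): clone a dense MLP into N experts, route tokens
# with a randomly initialized linear router, and add a load-balancing auxiliary loss.
# All names here are illustrative, not taken from the referenced repo.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveMoE(nn.Module):
    def __init__(self, dense_mlp: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the pretrained dense MLP.
        self.experts = nn.ModuleList(copy.deepcopy(dense_mlp) for _ in range(num_experts))
        # Router is randomly initialized, so early routing is essentially uniform noise.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k
        self.num_experts = num_experts

    def forward(self, x):  # x: (tokens, hidden)
        logits = self.router(x)                     # (tokens, num_experts)
        probs = logits.softmax(dim=-1)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e)                  # (tokens, top_k) bool
            token_mask = mask.any(dim=-1)           # tokens routed to expert e
            if token_mask.any():
                weight = (topk_probs * mask).sum(dim=-1, keepdim=True)[token_mask]
                out[token_mask] += weight * expert(x[token_mask])

        # Switch-Transformer-style load-balancing loss: push the fraction of tokens
        # whose top-1 choice is each expert toward the mean router probability.
        dispatch_frac = F.one_hot(topk_idx[:, 0], self.num_experts).float().mean(dim=0)
        prob_frac = probs.mean(dim=0)
        aux_loss = self.num_experts * (dispatch_frac * prob_frac).sum()
        return out, aux_loss

# Hypothetical usage: wrap an existing MLP and add aux_loss to the training objective.
# dense = nn.Sequential(nn.Linear(4096, 11008), nn.SiLU(), nn.Linear(11008, 4096))
# moe = NaiveMoE(dense, hidden_size=4096)
# y, aux = moe(torch.randn(16, 4096))  # total_loss = task_loss + 0.01 * aux
```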