RUC-NLPIR / FlashRAG-Paddle
⚡FlashRAG: A Python Toolkit for Efficient RAG Research
☆22 · Updated 7 months ago
Alternatives and similar repositories for FlashRAG-Paddle
Users who are interested in FlashRAG-Paddle are comparing it to the libraries listed below.
- [ACL 2025] An official PyTorch implementation of the paper "Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement" ☆32 · Updated 2 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆33 · Updated last year
- ☆53 · Updated 10 months ago
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu Inc. ☆160 · Updated 2 weeks ago
- ☆49 · Updated last year
- ☆27 · Updated 9 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆41 · Updated last year
- The code repository for the paper "TransferTOD: A Generalizable Chinese Multi-Domain Task-Oriented Dialogue System with Transfer Capabilities" ☆20 · Updated 7 months ago
- Code for the Piccolo embedding model from SenseTime ☆134 · Updated last year
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆59 · Updated last year
- ☆94 · Updated 8 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT samples, 200,000 English multi-turn SFT samples, and … ☆18 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- Qwen DianJin: LLMs for the Financial Industry by Alibaba Cloud ☆120 · Updated this week
- How to train an LLM tokenizer ☆151 · Updated 2 years ago
- A Toolkit for Table-based Question Answering ☆112 · Updated last year
- Qwen1.5-SFT (Alibaba): fine-tuning (transformers)/LoRA (peft) and inference for Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat ☆65 · Updated last year
- Dataset synthesis, model training, and evaluation for LLM mathematical problem solving, with accompanying write-ups ☆91 · Updated 10 months ago
- The code for the LaRA benchmark ☆38 · Updated 2 months ago
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- A toolkit for knowledge distillation of large language models ☆134 · Updated last week
- ☆144 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆56 · Updated 8 months ago
- The official implementation of "LevelRAG: Enhancing Retrieval-Augmented Generation with Multi-hop Logic Planning over Rewriting Augmented… ☆38 · Updated 3 months ago
- A native-Chinese benchmark for evaluating retrieval-augmented generation ☆120 · Updated last year
- ☆57 · Updated 9 months ago
- Recursive Abstractive Processing for Tree-Organized Retrieval ☆10 · Updated last year
- Our 2nd-gen LMM ☆34 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆40 · Updated last year
- ☆28 · Updated 9 months ago