DUTIR-LegalIntelligence / Tailing
☆17Updated last year
Alternatives and similar repositories for Tailing
Users that are interested in Tailing are comparing it to the libraries listed below
- Repo for the paper "AgentRE: An Agent-Based Framework for Navigating Complex Information Landscapes in Relation Extraction".☆73Updated last year
- TechGPT 2.0: Technology-Oriented Generative Pretrained Transformer 2.0☆114Updated last year
- TianGong-AI-Unstructure☆69Updated 3 weeks ago
- ☆94Updated last year
- A benchmark for evaluating Chinese-native retrieval-augmented generation☆123Updated last year
- To try textin document parsing, visit https://cc.co/16YSIy☆21Updated last year
- HanFei-1.0 (韩非), the first legal large language model in China trained with full-parameter training☆124Updated 2 years ago
- The first Chinese version of the Llama 2 13B model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural language interaction)☆91Updated 2 years ago
- ☆15Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆141Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese☆22Updated last year
- ☆95Updated 10 months ago
- ☆106Updated 2 years ago
- Uses an LLM plus a sensitive-word lexicon to automatically determine whether text contains sensitive words.☆132Updated 2 years ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆39Updated last year
- ☆49Updated last month
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE☆90Updated 2 years ago
- SearchGPT: Building a quick conversation-based search engine with LLMs.☆46Updated 9 months ago
- [ACL 2024] IEPile: A Large-Scale Information Extraction Corpus☆206Updated 9 months ago
- The first fully commercially usable role-play large language model.☆40Updated last year
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆264Updated last year
- A survey of large language model training and serving☆36Updated 2 years ago
- Search, organize, discover anything!☆48Updated last year
- SUS-Chat: Instruction tuning done right☆49Updated last year
- ☆83Updated last year
- Fine-Tuning Dataset Auto-Generation for Graph Query Languages.☆78Updated this week
- deep learning☆148Updated 5 months ago
- ☆194Updated 8 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆68Updated 2 years ago
- ☆29Updated last month