DeepLangAI / LingoWhale-8B
LingoWhale-8B: Open Bilingual LLMs | open-source bilingual pretrained large language model
☆138 · Updated last year
Alternatives and similar repositories for LingoWhale-8B:
Users interested in LingoWhale-8B also compare it to the repositories listed below.
- ☆128 · Updated last year
- ☆172 · Updated 2 years ago
- ☆104 · Updated 4 months ago
- SuperCLUE琅琊榜: an anonymous head-to-head evaluation benchmark for general-purpose Chinese LLMs ☆145 · Updated 10 months ago
- MD5 links for a Chinese book corpus ☆217 · Updated last year
- ☆221 · Updated last year
- Text deduplication ☆70 · Updated 10 months ago
- A lightweight local website for displaying the performance of different chat models. ☆86 · Updated last year
- Gaokao Benchmark for AI ☆108 · Updated 2 years ago
- The complete training code for the open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆68 · Updated last year
- BayLing (百聆) is an English/Chinese large language model based on LLaMA and enhanced with language alignment; it has strong English/Chinese capabilities and reaches 90% of ChatGPT's performance on multilingual and general-task evaluations. ☆315 · Updated 4 months ago
- First round of Chinese large language model evaluation ☆108 · Updated last year
- 《自然语言处理概论》 (Introduction to Natural Language Processing) by Zhang Qi, Gui Tao, and Huang Xuanjing ☆111 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆99 · Updated last year
- CodeGPT: A Code-Related Dialogue Dataset Generated by GPT and for GPT ☆113 · Updated last year
- Analysis of the Chinese cognitive capabilities of language models ☆236 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆124 · Updated 10 months ago
- ☆83 · Updated last year
- 活字 (Huozi): a general-purpose large language model ☆387 · Updated 7 months ago
- A Chinese large language model base built through incremental pre-training on Chinese datasets ☆236 · Updated last year
- ☆61 · Updated 2 years ago
- ☆97 · Updated last year
- An open-domain multi-turn evaluation benchmark for general-purpose Chinese foundation models ☆78 · Updated last year
- A large-scale language model for the scientific domain, trained on the arXiv split of RedPajama ☆133 · Updated last year
- ☆97 · Updated last year
- The Roadmap for LLMs ☆84 · Updated last year
- Fast encoding detection and conversion for large numbers of text files, supporting data cleaning for the MNBVC corpus project ☆61 · Updated 6 months ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆260 · Updated 11 months ago
- 桃李 (Taoli): a large language model for international Chinese-language education ☆177 · Updated last year