padeoe / hf-mirror-site
A Hugging Face mirror site; a minimal usage sketch follows below.
☆314 · Updated last year
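Such a mirror can typically act as a drop-in replacement for huggingface.co by pointing the client at it. Below is a minimal sketch, assuming `huggingface_hub` is installed and that the mirror exposes the standard Hub API; the endpoint URL and `repo_id` are illustrative and not taken from this repository.

```python
# Minimal sketch: downloading a model through a Hugging Face mirror.
# The endpoint URL below is an assumption for illustration — substitute
# the mirror you actually use.
import os

# huggingface_hub reads HF_ENDPOINT when it is imported, so set it first.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

# Fetch a full model snapshot via the mirror instead of huggingface.co.
local_dir = snapshot_download(repo_id="bert-base-chinese")
print("Model files cached at:", local_dir)
```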
Alternatives and similar repositories for hf-mirror-site
Users that are interested in hf-mirror-site are comparing it to the libraries listed below
- Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs) ☆610 · Updated last year
- huggingface mirror download ☆587 · Updated 7 months ago
- hf-mirror-cli uses a China-based mirror, works out of the box with no configuration, and quickly downloads models from Hugging Face ☆142 · Updated 9 months ago
- [Processed entry by entry] A curated QA dataset of selected Ruozhiba (弱智吧) questions, with every entry manually reviewed and revised ☆237 · Updated 7 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆446 · Updated last year
- Yuan 2.0 Large Language Model ☆690 · Updated last year
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆655 · Updated last year
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆265 · Updated last year
- Repository of Chinese post-trained Phi-3 models ☆324 · Updated 11 months ago
- ☆453 · Updated 2 years ago
- ☆347 · Updated last year
- 360zhinao ☆290 · Updated 6 months ago
- C++ implementation of Qwen-LM ☆606 · Updated 11 months ago
- Llama3-Chinese is a large language model built on Meta-Llama-3-8B as the base, trained with DoRA + LoRA+ on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples. ☆295 · Updated last year
- ☆177 · Updated this week
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆692 · Updated 10 months ago
- ☆110 · Updated last month
- This is the first Chinese chat model specifically fine-tuned for Chinese through ORPO based on the Meta-Llama-3-8B-Instruct model. ☆322 · Updated last year
- ☆337 · Updated last month
- [EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models ☆482 · Updated last month
- A lightweight multilingual LLM ☆1,005 · Updated 3 months ago
- run DeepSeek-R1 GGUFs on KTransformers ☆255 · Updated 8 months ago
- FlagEval is an evaluation toolkit for AI large foundation models. ☆339 · Updated 6 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆792 · Updated 11 months ago
- 😜 A visual dataset of memes/stickers, annotated using the image-understanding capabilities of glm-4v and step-1v. ☆140 · Updated last year
- ☆229 · Updated 2 years ago
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- Chinese Community for Google Gemma LLM ☆63 · Updated last year
- ☆729 · Updated 2 years ago
- Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sour… ☆1,452 · Updated 8 months ago