shiyemin / light-hf-proxy
A light proxy solution for HuggingFace hub.
☆46 · Updated last year
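For context, here is a minimal sketch of how a hub proxy like this is typically consumed from the client side, assuming it exposes a hub-compatible endpoint; the local address and port below are hypothetical placeholders, not light-hf-proxy's documented defaults:

```python
import os

# Route huggingface_hub traffic through the proxy instead of huggingface.co.
# HF_ENDPOINT must be set before huggingface_hub is imported, because the
# library reads it at import time. The URL is a hypothetical local deployment.
os.environ["HF_ENDPOINT"] = "http://localhost:8080"

from huggingface_hub import snapshot_download

# Downloads are now fetched via the proxy endpoint.
snapshot_download(repo_id="bert-base-uncased")
```

Since `huggingface-cli` is built on huggingface_hub, exporting the same `HF_ENDPOINT` variable in the shell redirects CLI downloads through the proxy as well.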
Alternatives and similar repositories for light-hf-proxy
Users interested in light-hf-proxy are comparing it to the libraries listed below.
- A lightweight local website for displaying the performance of different chat models. ☆86 · Updated last year
- An open-source multimodal large language model based on baichuan-7b. ☆73 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆69 · Updated 2 years ago
- Zero-training LLM parameter tuning. ☆31 · Updated last year
- ☆36 · Updated 8 months ago
- The newest version of Llama 3, with its source code explained line by line in Chinese. ☆22 · Updated last year
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated last year
- ☆79 · Updated last year
- SUS-Chat: Instruction tuning done right. ☆48 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation. ☆73 · Updated last year
- Our 2nd-gen LMM. ☆33 · Updated last year
- A simple MLLM that surpasses QwenVL-Max with open-source data only, built on a 14B LLM. ☆37 · Updated 8 months ago
- The first fully commercially usable role-play large language model. ☆40 · Updated 9 months ago
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆120 · Updated 6 months ago
- Gaokao Benchmark for AI. ☆108 · Updated 2 years ago
- The simplest reproduction of R1-style results on a small model, illustrating the key essence of O1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI. ☆45 · Updated 3 months ago
- MultilingualShareGPT, the free multi-language corpus for LLM training. ☆72 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler). ☆37 · Updated last year
- This is a personal reimplementation of Google's Infini-Transformer, utilizing a small 2B model. The project includes both model and train… ☆56 · Updated last year
- ☆31 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆138 · Updated last year
- ☆172 · Updated 2 years ago
- Text deduplication. ☆72 · Updated last year
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- An open-source LLM based on an MoE structure. ☆58 · Updated 11 months ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- GLM Series Edge Models. ☆141 · Updated 3 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆65 · Updated 2 years ago
- ☆105 · Updated last year