Lacusking / linux_clash
Configure the Clash service on a CentOS server
☆13 · Updated 10 months ago
Alternatives and similar repositories for linux_clash
Users interested in linux_clash are comparing it to the repositories listed below.
- Building a small LLM to learn the modeling and training process: a small model based on the DeepSeek-MoE architecture, built from scratch for personal study, with every statement explained ☆11 · Updated 6 months ago
- Spring Deep Java Library: integrates the DJL framework with other Spring frameworks for deep learning model training and inference ☆24 · Updated 3 years ago
- 尚硅谷 data warehouse documentation ☆11 · Updated 6 years ago
- vLLM Documentation in Chinese Simplified / vLLM 中文文档 ☆114 · Updated this week
- A simple decoder-only GPT model in PyTorch ☆42 · Updated last year
- A deep dive into the Transformer, implemented in Excel ☆16 · Updated 6 years ago
- A multi-turn question-answering system based on LLMs, combining intent recognition and slot filling ☆23 · Updated 2 months ago
- A user-profiling system based on Flink ☆10 · Updated 2 years ago
- Accelerate vector generation using an ONNX model ☆18 · Updated last year
- unify-easy-llm (ULM) aims to be a simple one-click training tool for large models, supporting hardware such as Nvidia GPUs and Ascend NPUs as well as common large models ☆57 · Updated last year
- Pretrain a wiki LLM using transformers ☆54 · Updated last year
- ☆29 · Updated 3 weeks ago
- A practical guide to large language models: application practice and real-world deployment ☆80 · Updated last year
- 极客时间 (Geek Time) course on hands-on LLM application development: showcasing the power of building LLM applications ☆64 · Updated last year
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- Train your own BitBrain (a mini LLM) 🧠 with just an RTX 3090 minimum (work in progress) ☆37 · Updated 3 months ago
- A collection of hand-written LLM coding exercises ☆16 · Updated 6 months ago
- ☆44 · Updated 3 weeks ago
- Byzer-retrieval is a distributed retrieval system designed as a backend for LLM RAG (Retrieval-Augmented Generation). The system su… ☆49 · Updated 7 months ago
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy… ☆86 · Updated last year
- An interactive thinking and deep reasoning model. It provides a cognitive reasoning paradigm for complex multi-hop problems. ☆66 · Updated 3 months ago
- ☆105 · Updated last year
- An integrated user interface for use with the HAI Platform ☆53 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆39 · Updated last month
- SwanLab local visualization Python package plugin ☆20 · Updated last week
- Manages the vllm-nccl dependency ☆17 · Updated last year
- 筱可's engineering experiments repository! ☆87 · Updated last week
- Hands-on with the Hygon DCU, a Chinese domestic accelerator card (LLM training, fine-tuning, inference, etc.) ☆50 · Updated 2 months ago
- Datawhale paper sharing: reading cutting-edge papers and sharing technical innovations ☆50 · Updated last year
- Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory ☆29 · Updated last year