Oneflow-Inc / one-glm
A more efficient GLM implementation!
☆55 · Updated 2 years ago
Alternatives and similar repositories for one-glm
Users interested in one-glm are comparing it to the libraries listed below.
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without any fine-tuning ☆47 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆66 · Updated 2 years ago
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- NTK-scaled version of the ALiBi position encoding in Transformers. ☆68 · Updated last year
- Kanchil (鼷鹿) is the smallest even-toed ungulate in the world; this open-source project explores whether small models (below 6B) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- A unified tokenization tool for images, Chinese, and English. ☆152 · Updated 2 years ago
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- SuperCLUE-Math6: an exploration of a new generation of natively Chinese multi-turn, multi-step mathematical reasoning datasets ☆58 · Updated last year
- This project performs fast encoding detection and conversion on large batches of text files to assist data cleaning for the mnbvc corpus project ☆61 · Updated 8 months ago
- ☆83 · Updated last year
- Lightweight local website for displaying the performance of different chat models ☆87 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Silk Road will be the dataset zoo for Luotuo (骆驼). Luotuo is an open-source Chinese LLM project founded by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子… ☆39 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- An open-source multimodal large language model based on baichuan-7b ☆73 · Updated last year
- Summarizes all open-source large language models and low-cost methods for replicating ChatGPT. ☆137 · Updated 2 years ago
- Model Compression for Big Models ☆163 · Updated 2 years ago
- LLaMA inference for TencentPretrain ☆98 · Updated 2 years ago
- Deep learning ☆148 · Updated 2 months ago
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆97 · Updated last year
- Round one of a Chinese large language model evaluation ☆109 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆108 · Updated last year
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆74 · Updated 2 years ago
- A pure C++ LLM acceleration library for all platforms, callable from Python; supports baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile, reaching 10,000+ tokens/s on a single GPU ☆45 · Updated last year
- Text deduplication ☆73 · Updated last year
- MOSS 003 WebSearchTool: a simple but reliable implementation ☆45 · Updated 2 years ago
- An instruction-tuning tool for large language models (supports FlashAttention) ☆174 · Updated last year
- A simple implementation of using LoRA from the peft library to fine-tune chatglm-6b ☆83 · Updated 2 years ago
- ☆172 · Updated 2 years ago