SomeoneKong / llm_long_context_bench202405
☆29 · Updated last year
Alternatives and similar repositories for llm_long_context_bench202405
Users interested in llm_long_context_bench202405 are comparing it to the libraries listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆248 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆42 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆190 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆248 · Updated 10 months ago
- Imitate OpenAI with Local Models ☆88 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Efficient AI Inference & Serving ☆477 · Updated last year
- ☆231 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆137 · Updated 9 months ago
- Light local website for displaying performances from different chat models. ☆87 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆149 · Updated last year
- ☆147 · Updated last year
- Repository of LV-Eval Benchmark ☆70 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 2 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- SOTA Math Opensource LLM ☆333 · Updated last year
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- zero: LLM training from scratch and hyperparameter tuning ☆32 · Updated 2 years ago
- A flexible and efficient training framework for large-scale alignment tasks ☆425 · Updated this week
- ☆114 · Updated 10 months ago
- ☆49 · Updated last year
- A MoE impl for PyTorch, [ATC'23] SmartMoE ☆70 · Updated 2 years ago
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆91 · Updated last year
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆227 · Updated last week
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- Large language model training (3 stages) + deployment ☆49 · Updated 2 years ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆56 · Updated 10 months ago