SomeoneKong / llm_long_context_bench202405
☆28 · Updated 4 months ago
Alternatives and similar repositories for llm_long_context_bench202405:
Users interested in llm_long_context_bench202405 are comparing it to the repositories listed below.
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆38 · Updated 10 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆71 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆127 · Updated 7 months ago
- ☆44 · Updated 7 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆64 · Updated last year
- Imitate OpenAI with Local Models ☆85 · Updated 4 months ago
- Light local website for displaying the performance of different chat models. ☆85 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆133 · Updated 9 months ago
- SuperCLUE-Agent: A benchmark for evaluating the core capabilities of LLM agents on native Chinese tasks ☆79 · Updated last year
- ☆136 · Updated 6 months ago
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆208 · Updated 3 months ago
- ☆95 · Updated 2 months ago
- Repo for the paper "Unfolding the Headline: Iterative Self-Questioning for News Retrieval and Timeline Summarization" ☆62 · Updated last week
- A native Chinese benchmark for evaluating retrieval-augmented generation ☆105 · Updated 9 months ago
- ☆87 · Updated last month
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆55 · Updated 8 months ago
- How to train an LLM tokenizer ☆137 · Updated last year
- ☆161 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆272 · Updated this week
- ☆94 · Updated 9 months ago
- Mixture-of-Experts (MoE) Language Model ☆184 · Updated 4 months ago
- ☆221 · Updated 8 months ago
- 1st Solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu.Inc ☆160 · Updated 10 months ago
- ☆62 · Updated 3 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆163 · Updated last year
- ☆81 · Updated 9 months ago
- Implementation of the LongRoPE paper: "Extending LLM Context Window Beyond 2 Million Tokens" ☆125 · Updated 5 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆127 · Updated last month
- Train an LLM from scratch on a single 24GB GPU ☆50 · Updated 2 months ago
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year