OpenCoder-llm / opc_data_filtering
Heuristic filtering framework for RefineCode
★82 · Updated 9 months ago
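The repository implements rule-based quality filters applied to raw code before pretraining. As a rough illustration of the idea only (the rule names and thresholds below are assumptions, not the actual rules shipped in opc_data_filtering), a heuristic filter of this kind might look like:

```python
# Minimal sketch of a rule-based code-quality filter.
# Illustrative assumptions: rule names, thresholds, and helper names are NOT
# taken from opc_data_filtering itself.

def alpha_ratio(text: str) -> float:
    """Fraction of characters in `text` that are alphabetic."""
    return sum(c.isalpha() for c in text) / max(len(text), 1)

def passes_heuristics(code: str,
                      max_line_len: int = 1000,
                      min_alpha_ratio: float = 0.25) -> bool:
    """Return True if a source file passes simple quality heuristics."""
    lines = code.splitlines()
    if not lines:
        return False
    if max(len(line) for line in lines) > max_line_len:
        return False  # extremely long lines: likely minified or generated code
    if alpha_ratio(code) < min_alpha_ratio:
        return False  # mostly symbols or embedded data, little real code
    return True

if __name__ == "__main__":
    sample = "def add(a, b):\n    return a + b\n"
    print(passes_heuristics(sample))  # True
```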
Alternatives and similar repositories for opc_data_filtering
Users interested in opc_data_filtering are comparing it to the libraries listed below.
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ★181 · Updated 10 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ★135 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ★198 · Updated last month
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ★184 · Updated 6 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ★168 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ★284 · Updated 2 years ago
- ★109 · Updated 5 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ★257 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ★99 · Updated 10 months ago
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ★46 · Updated last year
- ★318 · Updated last year
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training to enhance … ★36 · Updated 7 months ago
- Fantastic Data Engineering for Large Language Models ★93 · Updated last year
- Collection of papers for scalable automated alignment. ★94 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ★118 · Updated 6 months ago
- a-m-team's exploration in large language modeling ★195 · Updated 7 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ★193 · Updated last year
- The official repo of INF-34B models trained by INF Technology. ★34 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ★368 · Updated last year
- ★46 · Updated last year
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ★249 · Updated 8 months ago
- ★65 · Updated last year
- Related works and background techniques about OpenAI o1 ★221 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ★253 · Updated last year
- LeetCode Training and Evaluation Dataset ★45 · Updated 8 months ago
- ★52 · Updated 10 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ★243 · Updated 3 months ago
- Towards Systematic Measurement for Long Text Quality ★37 · Updated last year
- BrowseComp-Plus: A Fairer and More Transparent Evaluation Benchmark for Deep-Research Agents ★147 · Updated 3 weeks ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ★68 · Updated last year