huybery / Awesome-Code-LLM
👨💻 An awesome, curated list of the best code LLMs for research.
☆1,231 · Updated 8 months ago
Alternatives and similar repositories for Awesome-Code-LLM
Users interested in Awesome-Code-LLM are comparing it to the libraries listed below
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024☆1,563 · Updated last week
- [TMLR] A curated list of language modeling research for code (and other software engineering activities), plus related datasets.☆2,844 · Updated last week
- A framework for the evaluation of autoregressive code generation language models.☆975 · Updated last month
- A collection of awesome prompt datasets and instruction datasets for training chat LLMs such as ChatGPT (collects a wide variety of instruction datasets for training ChatLLM models).☆690 · Updated last year
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24)☆556 · Updated 11 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI☆416 · Updated 4 months ago
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey.☆781 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models☆471 · Updated 6 months ago
- ☆664 · Updated 9 months ago
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"☆633 · Updated last month
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca)☆1,128 · Updated last year
- A curated list of awesome LLM agent frameworks.☆1,066 · Updated last week
- Run evaluation on LLMs using the HumanEval benchmark☆419 · Updated last year
- Repository for the paper "Large Language Model-Based Agents for Software Engineering: A Survey". Kept up to date.☆493 · Updated 5 months ago
- A collection of benchmarks and datasets for evaluating LLMs.☆499 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021☆1,483 · Updated 2 years ago
- Agentless 🐱: an agentless approach to automatically solving software development problems☆1,883 · Updated 8 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.☆1,840 · Updated 3 weeks ago
- [ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct☆2,025 · Updated 9 months ago
- Code for the paper "Evaluating Large Language Models Trained on Code"☆2,895 · Updated 7 months ago
- [ACL 2024] An easy-to-use knowledge-editing framework for LLMs.☆2,518 · Updated this week
- ☆472 · Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs…☆562 · Updated 2 weeks ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".☆1,556 · Updated 2 months ago
- O1 Replication Journey☆1,998 · Updated 7 months ago
- Reasoning in LLMs: papers and resources, including Chain-of-Thought, OpenAI o1, and DeepSeek-R1 🍓☆3,308 · Updated 3 months ago
- An Open Large Reasoning Model for Real-World Solutions☆1,516 · Updated 3 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive…☆958 · Updated 10 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling☆1,740 · Updated last year
- Calculate tokens/s & GPU memory requirements for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization☆1,345 · Updated 8 months ago
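Several of the code-evaluation entries above (EvalPlus, the HumanEval runner, and "Evaluating Large Language Models Trained on Code") report the pass@k metric. As a quick reference, the unbiased estimator from that last paper can be sketched in a few lines of Python; the function name `pass_at_k` and the example numbers are ours, not taken from any of the listed repos:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).

    n: total samples generated per problem
    c: number of samples that passed the tests
    k: evaluation budget (how many samples you get to submit)
    """
    if n - c < k:
        # Too few failing samples to fill a size-k subset, so every
        # size-k subset contains at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples per problem of which 3 pass, pass@1 reduces to c/n:
print(pass_at_k(10, 3, 1))  # ≈ 0.3 (up to floating-point error)
```

The benchmarks report this averaged over all problems; repos such as EvalPlus compute it the same way but on their own hardened test suites.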
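The last entry estimates GPU memory requirements for running an LLM. A back-of-the-envelope version of the weights-only part of that calculation is sketched below; it deliberately ignores KV cache, activations, and quantization metadata (so real usage is higher), and the 7B parameter count is just an illustrative value:

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate GiB needed to hold the model weights alone.

    bits -> bytes (divide by 8), bytes -> GiB (divide by 2**30).
    Excludes KV cache, activations, and framework overhead.
    """
    return n_params * bits_per_param / 8 / 2**30

# A 7B-parameter model at different precisions:
print(round(weight_memory_gib(7e9, 16), 1))  # fp16
print(round(weight_memory_gib(7e9, 4), 1))   # 4-bit quantized (QLoRA-style)
```

This is why 4-bit quantization makes 7B-class models fit on consumer GPUs: the weight footprint drops by 4x relative to fp16, leaving headroom for the KV cache.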