microsoft / CodeT
☆663 · Updated 9 months ago
Alternatives and similar repositories for CodeT
Users interested in CodeT are comparing it to the libraries listed below.
- OctoPack: Instruction Tuning Code Large Language Models ☆470 · Updated 6 months ago
- Run evaluation on LLMs using human-eval benchmark ☆419 · Updated last year
- A framework for the evaluation of autoregressive code generation language models. ☆975 · Updated last month
- PaL: Program-Aided Language Models (ICML 2023) ☆503 · Updated 2 years ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆724 · Updated 10 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆555 · Updated last year
- ☆472 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆252 · Updated 9 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆170 · Updated last year
- ☆271 · Updated 2 years ago
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆957 · Updated 10 months ago
- This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (Neur… ☆542 · Updated 7 months ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆223 · Updated last year
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" @ICLR 2023 ☆248 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆542 · Updated 6 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆775 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆547 · Updated last year
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆805 · Updated last year
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24) ☆557 · Updated 11 months ago
- ☆367 · Updated 2 years ago
- A multi-programming language benchmark for LLMs ☆268 · Updated last week
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆468 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,084 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆507 · Updated last year
- APPS: Automated Programming Progress Standard (NeurIPS 2021) ☆482 · Updated last year
- Fine-tune SantaCoder for Code/Text Generation. ☆192 · Updated 2 years ago
- Repo for paper "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration" ☆344 · Updated last year
- [ICLR 2023] Code for the paper "Binding Language Models in Symbolic Languages" ☆321 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆153 · Updated last week
- Accepted by Transactions on Machine Learning Research (TMLR) ☆130 · Updated 10 months ago