AMAAI-Lab / Text2midi
Text2midi is the first end-to-end model for generating MIDI files from textual descriptions. By leveraging pretrained large language models and a powerful autoregressive transformer decoder, text2midi allows users to create symbolic music that aligns with detailed textual prompts, including musical attributes like chords, tempo, and style.
☆55 · Updated 3 weeks ago
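The description above outlines the architecture at a high level: a pretrained language model encodes the text prompt, and an autoregressive transformer decoder emits MIDI event tokens conditioned on that encoding. The toy sketch below illustrates only the shape of that decoding loop; none of these names (`encode_text`, `next_token_logits`, the token values) come from the text2midi codebase, and the stand-in functions are deliberately trivial.

```python
# Hypothetical sketch of text-conditioned autoregressive MIDI-token generation.
# encode_text and next_token_logits are stand-ins (NOT the text2midi API) for
# a frozen pretrained LLM encoder and one transformer decoder step.

BOS, EOS = 0, 1  # hypothetical special tokens in a MIDI event vocabulary

def encode_text(prompt: str) -> list[float]:
    """Stand-in for a pretrained LLM producing a prompt embedding."""
    return [float(ord(c) % 7) for c in prompt][:8]

def next_token_logits(text_emb: list[float], tokens: list[int]) -> list[float]:
    """Stand-in for one decoder step scoring a 16-token toy vocabulary."""
    seed = int(sum(text_emb)) + len(tokens)
    return [float((seed * (i + 3)) % 11) for i in range(16)]

def generate(prompt: str, max_len: int = 32) -> list[int]:
    """Greedy autoregressive generation of MIDI event tokens."""
    text_emb = encode_text(prompt)
    tokens = [BOS]
    while len(tokens) < max_len:
        logits = next_token_logits(text_emb, tokens)
        tok = max(range(len(logits)), key=logits.__getitem__)
        tokens.append(tok)
        if tok == EOS:  # stop once the end-of-sequence token is emitted
            break
    return tokens

events = generate("slow jazz ballad in C minor, 70 bpm")
```

In a real system the token sequence would then be detokenized into MIDI messages (note-on, note-off, tempo); the greedy argmax here is the simplest decoding strategy and would typically be replaced by sampling.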
Alternatives and similar repositories for Text2midi:
Users who are interested in Text2midi are comparing it to the libraries listed below.
- Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls ☆79 · Updated 8 months ago
- A large-scale dataset of caption-annotated MIDI files. ☆59 · Updated 7 months ago
- Code for the Chamber Ensemble Generator pipeline and the CocoChorales dataset ☆60 · Updated last year
- CLaMP 2: Multimodal Music Information Retrieval Across 101 Languages Using Large Language Models [NAACL 2025] ☆51 · Updated 3 weeks ago
- A library for computing Fréchet Music Distance. ☆22 · Updated last month
- A graph-based deep MIDI music generator. Official repository of the paper "Graph-based Polyphonic Multitrack Music Generation". ☆21 · Updated last year
- ☆31 · Updated last year
- A list of all repositories from Music X Lab ☆30 · Updated this week
- A notebook containing scripts, documentation, and examples for fine-tuning MusicGen ☆90 · Updated 11 months ago
- Official implementation of Jointist ☆33 · Updated last year
- ScorePerformer: Expressive Piano Performance Rendering with Fine-Grained Control (ISMIR 2023) ☆36 · Updated last week
- ☆79 · Updated 2 years ago
- Multitrack music mixing style transfer given a reference song, using a differentiable mixing console ☆48 · Updated 4 months ago
- Repository for the paper "Combining audio control and style transfer using latent diffusion", accepted at ISMIR 2024 ☆45 · Updated last month
- 80s FM video game music dataset ☆23 · Updated 2 years ago
- PyTorch implementation of an automatic music transcription method that uses a two-level hierarchical frequency-time Transformer architecture… ☆99 · Updated last year
- PiCoGen (Piano Cover Generation) is an academic project aimed at developing an automatic piano cover generation system. ☆26 · Updated 4 months ago
- Unofficial implementation of JEN-1 Composer: A Unified Framework for High-Fidelity Multi-Track Music Generation (https://arxiv.org/abs/2310.1… ☆29 · Updated last year
- Joint Embedding Predictive Architecture for Musical Stem Compatibility Estimation ☆32 · Updated 7 months ago
- MR-MT3: Memory Retaining Multi-Track Music Transcription to Mitigate Instrument Leakage ☆40 · Updated 8 months ago
- [PyTorch] Minimal codebase for MusicGen models ☆58 · Updated 2 months ago
- Official PyTorch implementation of the ICASSP 2023 paper "Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage A… ☆31 · Updated 6 months ago
- Models and datasets for training deep learning automatic mixing models ☆98 · Updated 6 months ago
- Code and MIDI demo for the paper: Zhao et al., AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer, ISMIR 2021 ☆46 · Updated 10 months ago
- ATEPP is a dataset of expressive piano performances by virtuoso pianists. (ISMIR 2022) ☆46 · Updated 7 months ago
- A 101 guide to AI in music ☆35 · Updated last year
- The official implementation of the paper "Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language Models via Instruction Tu… ☆81 · Updated 6 months ago