Neural Notes - Paper replays by the AMAAI Lab

Written by: Dorien Herremans

About this listen

Dive into the latest research papers from the Audio, Music & AI Lab (AMAAI) at the Singapore University of Technology and Design. Every episode turns a fresh AMAAI publication into an engaging, understandable conversation: multimodal generative AI, symbolic music, automatic mastering, and beyond. Hosted by AI, powered by humans.
Episodes
  • Inside the text2midi Architecture
    Dec 11 2025

    This episode of Neural Notes explores text2midi, a breakthrough end-to-end model that converts textual descriptions directly into symbolic MIDI music files. We reveal how the system uses Large Language Models (LLMs) to give users unprecedented control, allowing them to generate compositions simply by typing prompts that specify elements such as chords, key, and tempo. Discover how text2midi streamlines the music creation process, generating compositions with superior long-term structure and making AI-guided composition accessible to expert composers and everyday users alike.
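    The core idea discussed in the episode, conditioning an autoregressive decoder on a text caption to emit symbolic-music tokens, can be sketched as follows. This is a toy illustration only: the function names, token vocabulary, and the `next_token` callable are hypothetical stand-ins, not the actual text2midi API.

    ```python
    def generate_midi_tokens(caption, next_token, max_len=512, eos="<eos>"):
        """Toy autoregressive decoding loop: emit symbolic-music tokens
        one at a time, conditioned on a text caption, until the model
        produces an end-of-sequence token or the length limit is hit.

        `next_token` stands in for a trained caption-conditioned decoder;
        it receives the caption and the tokens generated so far.
        """
        tokens = []
        while len(tokens) < max_len:
            tok = next_token(caption, tokens)
            if tok == eos:
                break
            tokens.append(tok)
        return tokens

    # Usage with a hard-coded stub in place of a trained model:
    def stub_decoder(caption, tokens):
        arpeggio = ["C4", "E4", "G4", "<eos>"]
        return arpeggio[len(tokens)]

    print(generate_midi_tokens("a C major arpeggio", stub_decoder))
    ```

    In the real system the decoded tokens would then be rendered into a standard MIDI file; the point of the sketch is only the caption-conditioned token loop.
    
    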


    Original paper:

    Bhandari, K., Roy, A., Wang, K., Puri, G., Colton, S., & Herremans, D. (2025, April). Text2midi: Generating symbolic music from captions. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 39, No. 22, pp. 23478-23486).

    Read the paper here.

    31 mins
  • Why Your AI Music Lacks Soul: Aligning Computational Goals with Human Taste
    Dec 11 2025

    This episode of Neural Notes discusses a new AAAI paper by Dorien Herremans and Abhinaba Roy which tackles the persistent challenge in generative music AI: why systems, despite achieving high technical fidelity, often fail to produce music that is aesthetically pleasing and emotionally resonant to human listeners. Traditional training methods optimize for likelihood, successfully capturing surface-level patterns but failing to grasp the deeper qualities that drive human musical appreciation. We explore how researchers are bridging this fundamental gap between computational optimization and human preference through systematic alignment techniques. This includes detailed discussions of large-scale preference learning (e.g., MusicRL), Direct Preference Optimization (DPO) integrated into modern diffusion architectures (e.g., DiffRhythm+), and inference-time optimization strategies (e.g., Text2midi-InferAlign), all focused on shifting the generative modeling objective from statistical fidelity to human-centered quality optimization.
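    For readers curious about the Direct Preference Optimization objective mentioned above: DPO trains the generator directly on pairs of preferred and rejected outputs, without a separate reward model. Below is a minimal sketch of the standard per-pair DPO loss; the function name and plain-float inputs are illustrative (real systems compute sequence log-probabilities with tensors and a frozen reference model).

    ```python
    import math

    def dpo_loss(logp_w_policy, logp_w_ref, logp_l_policy, logp_l_ref, beta=0.1):
        """DPO loss for one (preferred w, rejected l) output pair.

        Inputs are total log-probabilities of each sequence under the
        policy being trained and under a frozen reference model.
        The loss is the negative log-sigmoid of the scaled margin:
        it shrinks as the policy favors the preferred sequence more
        strongly than the reference model does.
        """
        margin = beta * ((logp_w_policy - logp_w_ref)
                         - (logp_l_policy - logp_l_ref))
        return -math.log(1.0 / (1.0 + math.exp(-margin)))
    ```

    With a zero margin (policy and reference agree) the loss is log 2; shifting probability mass toward the preferred output lowers it, which is how preference data steers the model away from pure likelihood training.
    
    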


    Paper discussed:

    Aligning Generative Music AI with Human Preferences: Methods and Challenges by Dorien Herremans, Abhinaba Roy. Accepted for presentation in the senior member track of AAAI 2026, Singapore.

    Read the paper here.

    31 mins