9 December 2025

AI got you babe, when it comes to making medical music

The ethical and artistic debates aside, there are good reasons for research into artificial-intelligence systems that can generate music. A new system described in the International Journal of Arts and Technology has improved on the quality and coherence of low-cost computer-generated music for use in music therapy and mental-health support.

The new approach combines two influential machine-learning techniques: long short-term memory networks (LSTMs) and a multi-scale attention mechanism. This allows the system to overcome the shortcomings of previous algorithmic composition methods, side-stepping erratic structure, avoiding repetitive melodies, and extending emotional range.

LSTMs are a class of recurrent neural networks designed to preserve information over long sequences, making them well suited to modelling time-based data such as music. In the current work, the team used multi-layer LSTM structures with residual connections, a method that stabilises learning by allowing information to bypass certain network layers when needed. In addition, multi-scale attention allows the model to focus dynamically on musical features as they play out over different timespans.
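The idea of residual connections in a stacked LSTM can be sketched in a few lines. The code below is a minimal, hypothetical illustration using NumPy, not the paper's actual model: each layer's output is added to its input, so information can bypass a layer when its transformation contributes little. All names and sizes here are invented for the example.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gates ordered input, forget, cell, output."""
    z = W @ x + U @ h + b               # stacked pre-activations, shape (4*H,)
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))      # output gate
    c_new = f * c + i * g               # long-term memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def residual_stack(x_seq, layers):
    """Run a sequence through stacked LSTM layers with residual skips:
    each layer's output is added to its input, letting information
    bypass the layer when needed (input and hidden sizes must match)."""
    seq = x_seq
    for (W, U, b) in layers:
        H = b.size // 4
        h, c = np.zeros(H), np.zeros(H)
        out = []
        for x in seq:
            h, c = lstm_step(x, h, c, W, U, b)
            out.append(h + x)           # the residual connection
        seq = np.stack(out)
    return seq

rng = np.random.default_rng(0)
H = 8                                   # toy hidden size == input size
layers = [(rng.normal(scale=0.1, size=(4*H, H)),
           rng.normal(scale=0.1, size=(4*H, H)),
           np.zeros(4*H)) for _ in range(3)]
notes = rng.normal(size=(16, H))        # a toy "melody" of 16 feature vectors
out = residual_stack(notes, layers)
print(out.shape)                        # (16, 8)
```

In practice such a stack would be built with a deep-learning framework rather than raw NumPy, but the residual addition is the same one-line idea.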

Attention mechanisms are widely used in natural-language processing, where they help AI systems weigh the importance of different inputs. Applied to making music, they allow the simultaneous consideration of local motifs, longer-term harmonic movement, and rhythmic development.
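One simple way to realise "attention at several timescales" is to run ordinary scaled dot-product attention over context windows of different lengths and combine the results. The NumPy sketch below is a hypothetical simplification of that idea, not the mechanism from the paper; the window sizes and the averaging step are assumptions made for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Standard scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

def multi_scale_attention(x, scales=(4, 16)):
    """Attend over causal windows of several lengths, then average.
    Short windows pick up local motifs; long windows track slower
    harmonic movement across the piece."""
    T, D = x.shape
    outputs = []
    for w in scales:
        out = np.zeros_like(x)
        for t in range(T):
            lo = max(0, t - w + 1)       # causal window of length <= w
            ctx = x[lo:t + 1]
            out[t] = attention(x[t:t + 1], ctx, ctx)[0]
        outputs.append(out)
    return np.mean(outputs, axis=0)      # combine the scales

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 8))             # toy sequence of note embeddings
y = multi_scale_attention(x)
print(y.shape)                           # (32, 8)
```

A real system would learn separate query, key, and value projections per scale and a weighted combination rather than a plain average, but the structure is the same.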

Tests of the new approach produced music that was more coherent, offered improved stylistic control, and showed greater musical variety, qualities that generative models have until now struggled to balance. This means that an appropriate prompt could allow the generative AI to create music tailored to specific therapeutic needs.

While there are those who lament the emergence of AI music on streaming platforms and elsewhere to the detriment of songwriters and musicians, and perhaps rightly so, the researchers suggest that their system might have use in music-assisted therapy sessions for adolescents. Though preliminary, the results suggest that it is possible to tailor AI compositions to make music that helps improve sleep quality and reduce stress.

The team says that their model’s ability to encode emotional nuance could make it useful for clinical and wellbeing contexts. This could be important where known recorded music may not fit the medical requirements precisely or may simply add music-licensing costs to cash-strapped healthcare facilities.

Li, L. (2025) 'Music intelligent creation method based on LSTM and multi-scale attention', Int. J. Arts and Technology, Vol. 15, No. 6, pp.1–25.
