Could a computer distinguish between the moods of a mournful classical movement and an angst-ridden emo rock song? Research to be published in the International Journal of Computational Intelligence Studies suggests that it should be possible to categorise music accurately without humans having to listen in. An algorithm developed by researchers in Poland could help the record industry automate playlist generation based on listener choices, as well as allow users to better organise their own music collections.
Multimedia experts Bozena Kostek and Magdalena Plewa of Gdansk University of Technology point out that the so-called "metadata" associated with a music file becomes redundant in a large collection, where many pieces of music will share basic information such as composer, performer, copyright details and perhaps genre tags. As such, conventional management of music content, of the kind used by websites that stream and suggest music as well as by the software on computers and portable music players, is often ineffective. Handling vast music collections, which might contain hundreds or even tens of thousands of song excerpts with overlapping metadata, is increasingly difficult, especially when it comes to letting streaming sites and users select songs across genres that share a particular mood.
Of course, music appreciation is highly subjective, as is appreciation of any art form. “Musical expressivity can be described by properties such as meter, rhythm, tonality, harmony, melody and form,” the team explains. These properties allow a technical definition of a given piece. “On the other hand, music can also be depicted by evaluative characteristics such as aesthetic experience, perception of preference, mood or emotions,” they add. “Mood, as one of the pre-eminent functions of music, should be an important means for music classification,” the team says.
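One way to make the team's distinction concrete is to separate the two kinds of descriptors in a simple data structure: technical properties are objective attributes of the music itself, while evaluative characteristics come from listeners. This is only an illustrative sketch; the field names and types here are assumptions, not taken from the paper.

```python
# Illustrative data structures separating the two kinds of descriptors
# mentioned above. Field names and types are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TechnicalDescription:
    """Objective properties of a piece (score- or signal-derived)."""
    meter: str        # e.g. "4/4"
    tonality: str     # e.g. "D minor"
    tempo_bpm: float  # e.g. 72.0

@dataclass
class EvaluativeDescription:
    """Listener-supplied characteristics such as mood and preference."""
    mood_tags: list[str] = field(default_factory=list)  # e.g. ["wistful"]
    preference: float = 0.0                             # e.g. a 0..1 rating
```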
Previous mood classification systems have used clusters of words, such as rousing, passionate, fun, brooding and wistful, to help categorise a given piece. There are dozens of words that can describe a piece of music, and each might be associated with various emotions. The team has turned to a database of mp3 files containing more than 52,000 pieces of music to help them develop a statistical analysis that can automatically correlate different adjectives, and their associated emotions, with specific pieces of music in the database.
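The general idea of correlating mood adjectives across a tagged collection can be sketched in a few lines. The annotation matrix below is invented for illustration (the paper's 52,000-track database and its actual statistical method are not reproduced here): each row is a track, each column an adjective, and strongly correlated adjectives are candidates for merging into one mood cluster.

```python
# A minimal sketch of adjective correlation over listener tags.
# The tag data and adjective list are illustrative assumptions.
import pandas as pd

# Hypothetical annotations: 1 if listeners applied the adjective
# to the track, 0 otherwise.
tags = pd.DataFrame(
    [[1, 1, 0, 0],
     [1, 0, 0, 1],
     [0, 0, 1, 1],
     [0, 1, 1, 0]],
    index=["track_a", "track_b", "track_c", "track_d"],
    columns=["rousing", "fun", "brooding", "wistful"],
)

# Pairwise correlation between adjectives across the collection:
# adjectives that co-occur strongly can be grouped into one mood cluster.
corr = tags.corr()
print(corr.round(2))
```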
Fundamentally, the algorithm analyses the audio spectrum of samples from each track and is “taught” by human users which spectral patterns are associated with given moods. It can then automatically classify new sound files presented to it, across a range of musical genres: alternative rock, classical, jazz, opera and rock. Artists analysed included Coldplay, Maroon 5, Linda Eder, Imogen Heap, Paco de Lucía, Nina Sky, Dave Brubeck and many others, the team says.
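In outline, this is a standard supervised-learning pipeline: extract spectral features from an excerpt, train a classifier on human mood labels, then predict moods for unseen tracks. The sketch below shows that shape using MFCC means and a random forest; the feature set, classifier choice and file names are assumptions made for illustration, not the authors' own parametrisation.

```python
# A minimal sketch of the spectral-features-plus-human-labels approach
# described above. Feature set (MFCC means), classifier and file names
# are illustrative assumptions, not the paper's method.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(path: str) -> np.ndarray:
    """Summarise a track by the mean of its MFCC spectral coefficients."""
    y, sr = librosa.load(path, duration=30.0)  # analyse a 30 s excerpt
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Human-labelled training excerpts (hypothetical files and mood labels).
train_files = ["calm_piece.mp3", "angry_piece.mp3"]
train_moods = ["wistful", "rousing"]

X = np.array([spectral_features(f) for f in train_files])
clf = RandomForestClassifier(random_state=0).fit(X, train_moods)

# The trained model can then label a previously unseen track.
print(clf.predict([spectral_features("new_piece.mp3")]))
```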
Kostek, B. and Plewa, M. (2013) “Parametrisation and correlation analysis applied to music mood classification”, Int. J. Computational Intelligence Studies, 2, 4-25.
Mood music is a post from David Bradley's Science Spot: http://sciencespot.co.uk/mood-music.html