Here is a synopsis of our research.
The Beginning
Originally, our methodology focused on developing Matlab
code to extract features from a song, as described in our Thesis Proposal. Features are mathematically defined
components of the music, such as tempo or key signature. In feature
extraction of audio signals, one starts with a song file, reads the corresponding
audio signal, and then extracts certain data points, or features, from this
waveform. Our plan was to use music theory to determine which
higher-order features to extract from this type of data, and then compare
the features of songs to one another to estimate their degree of musical
similarity for song recommendation.
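As a toy illustration of this kind of feature extraction (not our actual Matlab code), the sketch below estimates one low-level feature, tempo, from a waveform by autocorrelating its amplitude envelope; the synthetic click-track signal and the BPM search range are assumptions chosen for the example.

```python
import numpy as np

def estimate_tempo(signal, sr, min_bpm=60, max_bpm=180):
    """Estimate tempo (BPM) from a waveform via autocorrelation of
    its amplitude envelope. A toy sketch of feature extraction, not
    the project's actual Matlab implementation."""
    envelope = np.abs(signal)
    envelope = envelope - envelope.mean()
    # Periodic beats produce autocorrelation peaks at lags equal to
    # the beat period; keep only non-negative lags.
    ac = np.correlate(envelope, envelope, mode="full")[len(envelope) - 1:]
    # Restrict the search to lags within a plausible tempo range.
    min_lag = int(sr * 60 / max_bpm)
    max_lag = int(sr * 60 / min_bpm)
    lag = min_lag + np.argmax(ac[min_lag:max_lag])
    return 60.0 * sr / lag

# Synthetic "song": a click every 0.5 s (i.e. 120 BPM) at sr = 1000 Hz.
sr = 1000
clicks = np.zeros(sr * 4)
clicks[::sr // 2] = 1.0
print(round(estimate_tempo(clicks, sr)))  # 120
```

A real song would of course need an onset-strength envelope rather than a raw amplitude envelope, but the principle, turning a waveform into a single number describing the music, is the same.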
However, we recently
realized that this method, being so mathematically oriented, is
rather impersonal from a user’s perspective: it fails to take into account
the individuality of the user. We have therefore been refocusing
our approach so that our program can be more attentive to each
user’s individual musical preferences.
This change in approach led us to search for APIs that
generate musical features, which we can then apply to music recommendation in a
way that models a user’s musical cognition. Using an API lets us
concentrate on the user-specific side of the program, especially the
perception of music. We found The Echo Nest, an API that returns
numerical values corresponding to an input song’s features. We will develop a cognitive
music model to represent the musical perception of listeners and then
choose appropriate features from The Echo Nest’s API for our program.
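To sketch how API-supplied features and a user model could fit together: below, two hypothetical feature dictionaries stand in for the kind of numerical output a feature API such as The Echo Nest might return (the field names and values are illustrative assumptions, not actual API responses), and a set of per-feature weights stands in for one listener's cognitive model, scaling each feature by how much it shapes that user's sense of similarity.

```python
import math

# Hypothetical feature vectors of the kind a feature-extraction API
# might return; the field names and values here are illustrative
# assumptions, not actual Echo Nest output.
song_a = {"energy": 0.80, "danceability": 0.70, "acousticness": 0.10}
song_b = {"energy": 0.75, "danceability": 0.65, "acousticness": 0.20}

# Stand-in for a cognitive model of one listener: weights encode how
# strongly each feature shapes this user's perception of similarity.
user_weights = {"energy": 0.5, "danceability": 0.3, "acousticness": 0.2}

def weighted_similarity(a, b, weights):
    """Cosine similarity between two feature dicts, with each
    feature scaled by its user-specific weight."""
    keys = sorted(weights)
    va = [a[k] * weights[k] for k in keys]
    vb = [b[k] * weights[k] for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    norm_a = math.sqrt(sum(x * x for x in va))
    norm_b = math.sqrt(sum(x * x for x in vb))
    return dot / (norm_a * norm_b)

print(f"{weighted_similarity(song_a, song_b, user_weights):.3f}")
```

Different users would carry different weight dictionaries, so the same pair of songs can score as similar for one listener and dissimilar for another, which is the user-specificity the new approach is after.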