First of all, impressive work.
However, while testing the Music Descriptor, I noticed that it predicts valence and arousal for different music samples. What I don't understand is why the predictions are always positive, even for sad and depressing songs.
According to the dataset paper, valence and arousal should be in the range of [−0.5,0.5]. Could you explain how to convert the predictions to this range? This would allow me to map them to the nearest emotion using Russell's model.
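For context, here is the kind of conversion I have in mind — a minimal sketch, assuming the raw predictions lie in some known source range (the actual training normalization is what I'm asking about) and that a simple linear rescale plus a sign-based quadrant lookup on Russell's circumplex is appropriate. The quadrant labels below are illustrative, not from any paper:

```python
def rescale(value, src_min, src_max, dst_min=-0.5, dst_max=0.5):
    # Linearly map value from [src_min, src_max] to [dst_min, dst_max].
    return (value - src_min) / (src_max - src_min) * (dst_max - dst_min) + dst_min

# Hypothetical coarse labels for the four quadrants of Russell's
# circumplex, keyed by (valence >= 0, arousal >= 0).
QUADRANTS = {
    (True, True): "happy/excited",
    (False, True): "angry/tense",
    (False, False): "sad/depressed",
    (True, False): "calm/relaxed",
}

def nearest_quadrant(valence, arousal):
    # valence and arousal are assumed to already be in [-0.5, 0.5].
    return QUADRANTS[(valence >= 0.0, arousal >= 0.0)]

# Example: if the model output were in [0, 1] (an assumption),
# 0.2 would map to -0.3 on the [-0.5, 0.5] scale.
v = rescale(0.2, 0.0, 1.0)
a = rescale(0.1, 0.0, 1.0)
print(v, a, nearest_quadrant(v, a))
```

If the predictions come out of an unbounded regression head instead, a fixed linear rescale like this wouldn't be valid, which is why I'd like to know the intended output range.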
Thank you!