Models of Emotion

There are two main types of emotional models used in the affective analysis of music: categorical and dimensional. Categorical models use discrete labels to describe affective responses, while dimensional approaches model affective phenomena as coordinates in a low-dimensional space (Eerola and Vuoskoski 2010). Discrete labels from categorical approaches (for example, mood tags in music databases) can often be mapped onto dimensional models, giving a degree of convergence between the two. Neither is a music-specific emotional model, but both have been applied to music in many studies. More recently, music-specific approaches have been developed (Zentner et al. 2000, 2008).
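
To make this categorical-to-dimensional mapping concrete, the short Python sketch below places a handful of discrete mood tags at points in a two-dimensional valence–arousal space (the circumplex model described in the next paragraph) and names the quadrant each falls into. The tags and coordinate values are invented for illustration and are not drawn from any study cited here.

    # Hypothetical tag coordinates; not taken from Russell (1980) or any
    # cited mood-tag dataset.
    MOOD_COORDINATES = {
        # tag: (valence, arousal), each in [-1.0, 1.0]
        "happy":  (0.8, 0.5),
        "angry":  (-0.7, 0.8),
        "sad":    (-0.6, -0.4),
        "serene": (0.6, -0.5),
    }

    def quadrant(valence: float, arousal: float) -> str:
        """Name the circumplex quadrant of a (valence, arousal) point."""
        v = "positive valence" if valence >= 0 else "negative valence"
        a = "high arousal" if arousal >= 0 else "low arousal"
        return f"{v}, {a}"

    for tag, (v, a) in MOOD_COORDINATES.items():
        print(f"{tag:>7}: {quadrant(v, a)}")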

The circumplex dimensional model, for instance, describes the semantic space of emotion along two orthogonal dimensions, valence and arousal, yielding four quadrants (e.g. positive valence, high arousal). This space has been proposed to represent the blend of interacting neurophysiological systems dedicated to the processing of valence (pleasure–displeasure) and arousal (quiet–activated) (Russell 1980, 2003). The Geneva Emotional Music Scale (GEMS) describes nine dimensions that represent the semantic space of musically evoked emotions (Zentner et al. 2008), but unlike the circumplex model, it makes no assumption about the neural circuitry underlying these semantic dimensions. GEMS is a measurement tool to guide researchers who wish to probe the emotion felt by the listener as it is being experienced. The same researchers devised experiments that examined the differences between felt and perceived emotions:
Generally speaking, emotions were less frequently felt in response to music than they were perceived as expressive properties of the music. (Zentner et al. 2008, p. 502)
This distinction has been well documented (see, for example, Scherer 2004; Marin and Bhattacharya 2010; Daly et al. 2015), though the precise terminology used to differentiate the two varies widely. Perhaps unsurprisingly, results tying musical parameters to induced or experienced emotions do not often provide a clear description of the mechanisms at play, and the terminology used is inconsistent. An induced emotion is an affective state experienced by the listener, rather than an affect which the listener understands from the composition; by way of example, this is the difference between listeners reporting that they have ‘heard sad music’ and reporting that they actually ‘felt sad’ as a result of listening to it.
For a more complete investigation of the differences in methodological and epistemological approaches to perceived and induced emotional responses to music, the reader is referred to Scherer (2004).

Sad Music

A growing body of research suggests that empathy is one of the key factors that dictate the success of emotional communication through music (Bråten 2007; Arizmendi 2011). The voice has been the subject of significant acoustic analysis aimed at determining prosodic cues (Barkat et al. 1999; Bach et al. 2008), and findings suggest some level of universal understanding of these cues. Intuitively, for example, an angry voice will be louder, perhaps brighter (a higher spectral centroid), and delivered with a faster rate of temporal cues (a sketch of how one such cue can be measured follows the lyric excerpt below). However, beyond such acoustic cues and their affective connotations in the prosody of the voice, singing also contains another emotional signpost: lyrics. Often it is enough simply to read lyrics to determine their emotional quality, but when lyrics are matched with congruent acoustic or musical cues in the vocal delivery (for example, happy lyrics accompanied by a major key, as in Bernstein and Sondheim’s “I Feel Pretty” from the musical West Side Story, 1957), an unambiguous affective reading is readily identifiable:
I feel pretty,
Oh, so pretty,
I feel pretty and witty and bright!
And I pity
Any girl who isn’t me tonight.
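
As a minimal illustration of the ‘brightness’ cue mentioned above, the following Python sketch computes a spectral centroid, the magnitude-weighted mean frequency of a signal’s spectrum, for two synthetic tones. The signals and sampling rate are invented for the example; a brighter signal, with more high-frequency energy, yields a higher centroid.

    import numpy as np

    def spectral_centroid(signal: np.ndarray, sample_rate: int) -> float:
        """Magnitude-weighted mean frequency of a mono signal's spectrum."""
        magnitudes = np.abs(np.fft.rfft(signal))
        frequencies = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        return float(np.sum(frequencies * magnitudes) / np.sum(magnitudes))

    # Two synthetic 'voices': one dominated by a low partial, one with an
    # added strong high partial.
    sr = 16000
    t = np.linspace(0.0, 1.0, sr, endpoint=False)
    dark = np.sin(2 * np.pi * 220 * t)
    bright = dark + 0.9 * np.sin(2 * np.pi * 3000 * t)

    print(f"dark:   {spectral_centroid(dark, sr):6.1f} Hz")
    print(f"bright: {spectral_centroid(bright, sr):6.1f} Hz")
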
However, the affective content of a lyric can also be contradicted by the musical features used in its delivery. An example of this can be heard in Lesley Gore’s “It’s My Party”, from I’ll Cry If I Want To (1963), which sets lyrics relaying the distress of a heartbroken teenager to an incongruous major key (A major). The resulting affect is unclear to the listener, though the song has nonetheless maintained an enduring popularity. David Bowie’s “Heroes” (1977) illustrates how far this kind of affective ambiguity can extend. Despite dark subject matter relating to alcoholism and the breakdown of a relationship, it is routinely used as a soundtrack for celebrations, major sporting events, and the like. The vocal recording process, led by producer Tony Visconti, made use of a novel system of processing whereby a number of discrete microphones were used to capture Bowie’s vocal, each positioned progressively further from the singer. These ambient microphones were muted during the quieter, opening passages of the vocal and gradually introduced as the vocal became more intense; multitracking was not used to achieve this effect, as the recording sessions were subject to the restrictions of analogue technology at the time. The result is a striking sonic parallel: as the vocal grows more intense, Visconti’s microphone technique leaves it continually more distant and set back in the wall-of-sound production, mirroring the characters in Bowie’s lyrics.
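
The principle behind this level-dependent blending of microphones can be sketched in a few lines of Python. The delays, gains, and gate thresholds below are invented for illustration, not documented session parameters; the sketch simply opens progressively more ‘ambient’ copies of a dry signal as its level rises.

    import numpy as np

    def envelope(signal: np.ndarray, window: int = 1024) -> np.ndarray:
        """Crude amplitude envelope: moving RMS over a sliding window."""
        padded = np.pad(signal ** 2, (window // 2, window // 2), mode="edge")
        kernel = np.ones(window) / window
        return np.sqrt(np.convolve(padded, kernel, mode="valid"))[: len(signal)]

    def gated_ambience(dry: np.ndarray, sample_rate: int) -> np.ndarray:
        """Add delayed, attenuated copies ('distant mics'), each gated so it
        sounds only while the dry level exceeds its threshold. All values
        are hypothetical, not Visconti's actual settings."""
        level = envelope(dry)
        mix = dry.copy()
        # (delay in seconds, gain, gate threshold) per hypothetical ambient mic
        for delay_s, gain, threshold in [(0.02, 0.5, 0.2), (0.05, 0.3, 0.5)]:
            samples = int(delay_s * sample_rate)
            delayed = np.concatenate([np.zeros(samples), dry[:-samples]])
            mix += np.where(level > threshold, gain * delayed, 0.0)
        return mix

A real gate would also smooth its opening and closing (attack and release) rather than switching per sample as this toy version does.
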
The question as to why a listener might deliberately choose to listen to singing which reflects a negative state—in other words, explicitly sad music—has recently been the subject of much investigation by music psychologists (Vuoskoski and Eerola 2012). The popularity of sad music again suggests that empathy, particularly with the voice, can be a powerful trigger for the listener. Leonard Cohen’s “Famous Blue Raincoat”, from Songs of Love and Hate (1971), makes use of a number of affective correlates of negative valence: it is performed in A minor, at a slow tempo, in a ¾ time signature, with the lyrics mainly accented as amphibrachs (˘ ¯ ˘) over the meter, limited melodic leaps, and a lilting, low volume. A full analysis of the song, and of Cohen’s use of the link between lyrics and structure to create affective meaning, is given by Herold (n.d.). Again, the performance has enjoyed a lasting popularity.
