An Overview of Emotion as a Parameter in Music, Definitions, and Historical Approaches

Introduction

This chapter presents a theoretical overview of emotion in the context of music, covering emotional analysis, different types of models, and the distinction between perceived and induced emotions. An understanding of all three is necessary in order to examine emotion in the video game soundtracking context. You may be a video game designer, sound designer, composer, or player; professional or enthusiastic amateur. Regardless, you will be familiar with the powerful role that soundtracking can play in shaping your experience. Music is a well-documented way to communicate feelings and emotional states, regardless of whether one has written, performed, or simply listened to it. When combined with other modalities, for example listening and seeing, or in the case of many games, listening, seeing, and responding with gameplay actions, the experience can become even more intense. High-quality soundtracking has the potential to enhance player experience in video games (Grimshaw et al. 2008). Combining emotionally congruent soundtracking with game narrative has the potential to create significantly stronger affective responses than either stimulus alone; the power of multimodal stimuli on affective response has been shown both anecdotally and scientifically (Camurri et al. 2005).

Video game soundtracking is inextricably linked with the technology available at the time of development. This has meant limitations in what was achievable in the soundtracking of earlier generations of games, whether restrictions imposed by the type of synthesizer available to the composer, or by the storage medium for digital sound effects and speech. Game audio presents at least two challenges beyond other sound-for-picture work: firstly, the need to be dynamic (responding to gameplay states), and secondly, the need to remain emotionally congruent whilst adapting to non-linear narrative changes (Collins 2008). Early solutions such as looping can become repetitive and ultimately break player immersion, but branching strategies (where different cues are multiplexed at narrative breakpoints) can drastically increase the compositional complexity required when implementing a soundtrack (Lipscomb and Zehnder 2004).
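To make the looping-versus-branching distinction concrete, the following is a minimal sketch in Python of breakpoint-driven cue switching. The cue names, game states, and the play_cue callback are hypothetical placeholders rather than any particular engine's API; only the branching logic itself is illustrated.

    # A minimal sketch of cue branching at narrative breakpoints. The cue
    # and state names are hypothetical; play_cue stands in for whatever
    # playback call a real engine would provide.
    CUE_TABLE = {
        "exploration": "explore_theme.ogg",
        "combat": "combat_theme.ogg",
        "victory": "victory_sting.ogg",
    }

    def on_narrative_breakpoint(game_state, play_cue):
        """At a narrative breakpoint, switch to the cue mapped to the
        new game state instead of letting the current cue loop."""
        cue = CUE_TABLE.get(game_state)
        if cue is not None:
            play_cue(cue)

    if __name__ == "__main__":
        # Stub back-end: just print the cue that would be triggered.
        on_narrative_breakpoint("combat", play_cue=print)

The compositional burden noted above arises because every edge in such a cue table must sound musically acceptable as a transition, a constraint that grows with the number of states.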

This chapter explores definitions for these terms and a few traditional approaches to achieving such goals. For practical reasons, it would be impossible to include every example of novel game soundtracking with regard to emotional content; this chapter therefore considers one in more detail, as a starting point from which the interested reader may continue to explore. Our example is LucasArts' implementation of a dynamic soundtracking system, iMuse (see Strank (2013) for a full treatment), which accompanied their adventure game series of the early 1990s, including the Indiana Jones titles and, perhaps most famously, the Monkey Island series (Warren 2003). This system implemented two now commonplace solutions, horizontal re-sequencing and vertical re-orchestration, both of which were readily implementable due to the use of MIDI orchestration (Huron 1997). However, the move towards recorded audio made many of these transformations more complex, beyond the compositional aspect alone. This chapter considers how a combination of factors, not least historical improvements in storage space and audio quality, has both addressed and created such difficulties.
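To illustrate how the two transformations differ, consider the following sketch. This is not iMuse's actual implementation; the segment graph, layer names, and intensity scheme are hypothetical, and playback is reduced to returning the material that would sound. Horizontal re-sequencing changes which segment plays next, whereas vertical re-orchestration changes which layers of the current material are audible.

    # Horizontal re-sequencing: at a decision point, the next segment is
    # chosen from the current game state, re-ordering the music in time.
    SEGMENT_GRAPH = {
        "verse": {"calm": "verse", "danger": "tense_bridge"},
        "tense_bridge": {"calm": "verse", "danger": "battle_loop"},
        "battle_loop": {"calm": "verse", "danger": "battle_loop"},
    }

    def next_segment(current, game_state):
        return SEGMENT_GRAPH[current][game_state]

    # Vertical re-orchestration: all layers run in sync, and intensity is
    # varied by muting or unmuting layers rather than changing segments.
    LAYERS = ["pad", "melody", "percussion", "brass"]

    def active_layers(intensity):
        """Return the layers audible at integer intensity 0..len(LAYERS)."""
        return LAYERS[:intensity]

    if __name__ == "__main__":
        print(next_segment("verse", "danger"))  # -> tense_bridge
        print(active_layers(2))                 # -> ['pad', 'melody']

Both operations are near-trivial when the music is MIDI data, since channels can be muted and sequences re-ordered at essentially no storage cost; with recorded audio, each layer or segment becomes a separate recording that must be stored and synchronised, which is part of the complexity noted above.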

Music and Emotion

Music has been shown to induce physical responses at both a conscious and an unconscious level (Grewe et al. 2005, 2007). Such responses can be measured and used as indicators of affective states. Emotional assessment in empirical work often makes use of recorded music (Wedin 1969, 1972; Gabrielsson and Lindström 2001; Gabrielsson and Juslin 2003) or synthesised test tones (Scherer 1972; Juslin 1997) to populate stimulus sets for subsequent emotional evaluation. There are reported difficulties with such evaluations, specifically regarding the measurement of emotional responses to music. For example, some research has found that the same piece of music can elicit different responses at different times in the same listener (Juslin and Sloboda 2010). If we consider a listener who is already in a sad or depressed state, it is quite possible that listening to ‘sad’ music may in fact increase that listener's valence. Another challenge for such evaluations is that music may be written, conducted, or performed in such a manner as to be intentionally ambiguous. Indeed, perceptual ambiguity might be considered beneficial, as listeners can be left to craft their own discrete responses (Cross 2005). The breadth of analysis given to song lyrics provides many such examples of the pleasure listeners can take in deriving their own meaning from seemingly ambiguous music.

Before examining existing systems and considering the future of video game soundtracking, we must first define the terminology that will be used, including the various psychological approaches to documenting musical affect and the musical and acoustic features that such systems utilize.
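As a brief preview of one such psychological approach, the dimensional view (the source of the term ‘valence’ used above) represents affect as a point on continuous valence and arousal axes. The following sketch shows the idea; the quadrant labels are illustrative examples, not a fixed standard.

    from dataclasses import dataclass

    @dataclass
    class AffectPoint:
        valence: float  # -1.0 (negative) .. +1.0 (positive)
        arousal: float  # -1.0 (calm) .. +1.0 (activated)

        def quadrant(self):
            """Map the point to an illustrative emotion label."""
            if self.valence >= 0:
                return "happy/excited" if self.arousal >= 0 else "content/relaxed"
            return "angry/afraid" if self.arousal >= 0 else "sad/depressed"

    if __name__ == "__main__":
        print(AffectPoint(valence=-0.6, arousal=-0.4).quadrant())  # sad/depressed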
