Discussion

Experiment 1 showed that participants were sensitive to a within-key change in tonal function (tonic versus subdominant), even though the acoustic change between the two melodies of a pair was small (i.e., a single-note change). Melodies were judged as more complete when the last tone functioned as the tonic than when it functioned as the subdominant. This finding agrees with music theory and corroborates melodic context effects previously reported with completion or tension judgments (Bigand, 1997; Boltz, 1989a; Hébert et al., 1995; Schmuckler, 1989). The priming data obtained with the timbre identification task showed that tonic target tones were processed more accurately and faster than subdominant target tones. This extends previously reported harmonic priming effects (Bharucha & Stoeckig, 1986; Bigand & Pineau, 1997; Bigand et al., 2003) to melody perception. Since sensory components such as tone repetition, melodic contour, and intervals were controlled, our data suggest the importance of tonal knowledge in melody perception, even for musically non-expert participants. Based on the key of the prime context, listeners develop tonal expectations for future events that influence target processing: processing is facilitated for targets that are more stable in the tonal hierarchy, namely the tonic.

The main effect of tonal relatedness was significant in this experiment. Its interaction with target timbre was not significant, but an interactive pattern was observed (notably, the tonal relatedness effect was carried by Timbre A). Similar interactive patterns have been observed previously in harmonic priming studies using both intonation and timbre tasks. The original chord priming studies by Bharucha and Stoeckig (1986, 1987) used an intonation task (i.e., participants judged whether the target chord was consonant or dissonant). In these studies, related consonant targets were judged faster than less-related consonant targets, but related dissonant targets were judged slower than less-related dissonant targets. This interaction between harmonic relatedness and chord type has been interpreted as resulting from a response bias: participants are biased to judge less-related targets as dissonant and related targets as consonant. Recently, Tillmann et al. (2006) investigated harmonic priming with a timbre identification task, reasoning that the response bias should be reduced when the task-relevant manipulation (i.e., the timbre change) does not concern the pitch dimension. However, an interactive pattern was still observed: a stronger priming effect was reported for the more continuously sounding target timbre than for the less continuously sounding one (Tillmann et al., 2006, Experiment 1). A discontinuity in timbre (between prime context and target) may induce a response bias and/or may lead the target to be perceived as belonging to a different auditory stream than the prime context. Notably, changes in harmonic spectrum contribute to segregating an event from the current stream (Bregman, 1990). A more dissimilar target timbre may pop out from the context, leading the target to be perceived as a “deviant” that is less (or not) integrated into the tonal context.
Similarly, a deviance detection situation might have emerged with the timbres in our experiment. The brighter target timbre (Timbre B) might have been perceived as more different from the timbre of the prime context than the dull timbre, and might thus have been detected faster and influenced less by the tonal context. The observed interactive pattern is thus congruent with previous priming data showing weaker, absent, or inverted priming effects for the deviant target type, notably for dissonant chords and mistuned tones in intonation tasks and for the more dissimilar timbre in timbre identification tasks.

The goal of our experiment was to show a tonal priming effect for melodies – and more specifically, to focus on the cognitive components of priming by controlling the sensory components of tone repetition, as well as the local influence of melodic contour and intervals. The intervals and contour preceding the target were identical in the two melodies of a pair, and the target tone occurred equally often in related and less-related melodies. The two melodies of a pair differed only by a semitone alteration of one possibly repeated tone, which made the related and less-related conditions as similar as possible on the musical score. However, since the melodies were played with piano tones, which are complex tones, it may be argued that a single-element difference between related and less-related conditions on the musical score does not translate into a single difference for the auditory system. Tones played by a musical instrument have a complex spectrum of frequencies, resulting in more or less audible overtones and eliciting virtual pitches (Terhardt, 1974). Thus, in our material, the single note differing between the two melodies of a pair might result in more than a single acoustic difference for the auditory system. Furthermore, in Western music, the tonal relations between notes are partly correlated with the psychoacoustic structure of sound. For example, the strongest overtones of the tone C are those whose frequencies coincide with the fundamental frequencies of the tones E and G, and these three tones (C, E, G) define strongly related tones in the tonality of C major. This congruency between psychoacoustic structure and tonal relatedness has received empirical support for perception: Bigand, Parncutt, and Lerdahl (1996) showed that judgments of musical tension in short chord sequences can be predicted equally well by cognitive and psychoacoustic influences.
In that respect, we cannot reject the possibility that even the single-note difference in our material introduces sensory differences between the melodies that are sufficient to elicit different expectations in the related and less-related conditions, without any need for a cognitive explanation.
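The overtone relations invoked above can be made concrete with simple arithmetic. The following sketch is purely illustrative and not part of the reported analyses; the assumed fundamental frequency (261.63 Hz for C4) and the pitch-naming helper are our own, chosen only to show which equal-tempered pitch classes the first partials of C approximate.

```python
import math

C4 = 261.63  # Hz, assumed equal-tempered fundamental of C4

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_pitch_class(freq, ref=C4):
    """Return the equal-tempered pitch class closest to freq (ref maps to C)."""
    semitones = round(12 * math.log2(freq / ref))
    return NOTE_NAMES[semitones % 12]

# The first six partials of C4 and the pitch class each approximates.
for n in range(1, 7):
    print(f"partial {n}: {n * C4:7.1f} Hz ~ {nearest_pitch_class(n * C4)}")
```

Running this shows that the third and fifth partials of C fall (very nearly) on the pitch classes G and E, the other tones of the C-major triad, which is the psychoacoustic correlation the argument rests on.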

To test the hypothesis that additional differences due to sound structure might explain the tonal relatedness effect, we presented our melodic material to a sensory model. This model extracts information about periodicity pitch over time and accumulates it in a memory buffer with temporal decay. It thus attempts to simulate music perception solely from information in the acoustic signal, without postulating processes linked to listeners’ tonal knowledge. In previous research, sensory models based on short-term memory and pitch perception models have challenged the need for a cognitive explanation, notably for the seminal probe-tone data (Huron & Parncutt, 1993, with a model based on the pitch model of Terhardt, Stoll, & Seewann, 1982; Leman, 2000, with a model based on the pitch model of Van Immerseel & Martens, 1992).
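The core mechanism of such a buffer – new pitch images added to an exponentially decaying trace of past ones – can be sketched in a few lines. This is only a schematic stand-in, not Leman’s actual implementation (which operates on auditory-nerve images); the frame vectors, time constant, and correlation-based comparison below are our own illustrative assumptions.

```python
import numpy as np

def leaky_integrate(frames, tau=1.5, dt=0.05):
    """Accumulate frame-wise pitch images in a buffer whose contents decay
    exponentially with time constant tau (s); dt is the frame step (s).
    Returns the buffer state after the last frame."""
    decay = np.exp(-dt / tau)          # per-frame decay factor
    buf = np.zeros(frames.shape[1])
    for frame in frames:
        buf = decay * buf + frame      # old trace fades, new image adds in
    return buf

def contextuality(context_image, target_image):
    """Correlation between the decayed context image and the target image --
    a simple stand-in for a tonal-contextuality index."""
    return float(np.corrcoef(context_image, target_image)[0, 1])
```

On this view, a higher correlation for related than for less-related targets would count as a sensory prediction of the priming effect, with no appeal to tonal knowledge.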

We presented our experimental material to the sensory model proposed by Leman (2000) to test whether this model predicts a difference between related and less-related conditions and can thus simulate the behavioral data of Experiment 1. Since the use of complex tones in Experiment 1 might favor sensory components, simulations were performed with melodies played by piano tones and by pure tones. Pure tones remove the richness of the spectral information and reduce the presented sensory input to the fundamental frequency of the tones, thus coming closer to the single-element difference represented on the musical score.
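The spectral difference between the two simulation inputs can be illustrated as follows. This is a minimal sketch under stated assumptions: the sample rate, duration, and 1/n amplitude rolloff are our own choices, and a real piano spectrum is far richer and time-varying than the crude harmonic tone below.

```python
import numpy as np

SR = 22050  # sample rate in Hz (assumed)

def pure_tone(freq, dur=0.5, sr=SR):
    """Sinusoid: all spectral energy at the fundamental frequency."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * freq * t)

def complex_tone(freq, n_partials=6, dur=0.5, sr=SR):
    """Crude harmonic tone: partials at integer multiples of freq with a
    1/n amplitude rolloff, standing in for an instrument-like spectrum."""
    t = np.arange(int(dur * sr)) / sr
    return sum(np.sin(2 * np.pi * n * freq * t) / n
               for n in range(1, n_partials + 1))
```

A pure tone thus feeds the model only the notated fundamental, whereas the complex tone also carries energy at the overtones, which is exactly the extra sensory information the piano-tone simulations retain.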