Introduction

Cochlear implantation is a treatment option for adults and children with bilateral severe-to-profound sensorineural hearing loss who do not benefit from traditional amplification. A cochlear implant bypasses the hair cells of the inner ear and activates the auditory nerve fibers directly, resulting in the transmission of impulses to the central auditory pathway (Abbas, 1993; Kiang & Moxon, 1972; Loeb, White & Jenkins, 1983). Most users demonstrate improved performance compared with their pre-implant abilities, although outcomes vary widely. This variation may depend on parameters such as duration of implant use, age at implantation, speech production, lip-reading performance, intelligence quotient, motivation, and family background (Waltzman et al., 1995; Blamey et al., 1996). However, other factors linked to deafness may have an even greater influence on cochlear implant benefit. Such factors include etiology (Brimacombe & Eisenberg, 1984), duration of deafness (Blamey et al., 1996), and poor auditory nerve survival or atrophy of the central auditory nervous system (Hall, 1990; Pfingst et al., 1980; Jyung et al., 1989; Shepherd et al., 1983; Walsh & Leake-Jones, 1982; Kraus et al., 1993b, 1998; Micco et al., 1995; Oviatt & Kileny, 1991; Stypulkowski et al., 1986; Ponton et al., 1996; Blamey, 1997; Fayad et al., 1991; Pauler et al., 1986). Studies investigating the relationship between performance and evoked potentials have yielded conflicting results (Makhdoum et al., 1998; Firszt et al., 2002b; Kelly et al., 2005), which may be linked to the use of different protocols. Evoked potentials are indeed known to depend on stimulus characteristics (for a review: Näätänen & Picton, 1987), so that results obtained with different protocols are difficult to compare. We propose to study the influence of some characteristics of deafness on the auditory pathway using electrically evoked potentials (EEPs) in response to electrical pulses delivered from various sites in the cochlea.

Many characteristics of the neural responses elicited by electrical stimulation of the peripheral auditory system can be demonstrated using far-field evoked potential measurements. Electrically evoked late responses (ELARs), such as the N1, P2, N2, P300 and mismatch negativity (MMN), occur with latencies beyond 50 msec and have complex generators that reflect various levels of cortical processing, including subcortical and thalamo-cortical projections, primary auditory cortex, and association areas (Cunningham et al., 2000; Kraus et al., 1993a; Kraus & McGee, 1992; Näätänen & Picton, 1987; Scherg & von Cramon, 1986; Tremblay et al., 2001; Vaughan & Ritter, 1970). The generators of N1 and P2 are centered in the primary and secondary auditory cortex [belt and parabelt regions in the anatomical model of Hackett et al. (2001), and planum temporale according to Näätänen & Picton (1987)]. ELARs have been widely used to investigate auditory function in cochlear implant users (e.g., Wable et al., 2000; Groenen et al., 1996; Pelizzone et al., 1989). Their latency is less variable than their amplitude (Eggermont, 1988) and may therefore provide a better tool for assessing auditory pathway integrity. Similar ELAR latencies have been reported for cochlear implant users and normal-hearing subjects in studies using speech stimuli delivered through a loudspeaker in the sound field (Kileny et al., 1997; Micco et al., 1995; Oviatt & Kileny, 1991). However, click stimuli delivered directly to the implanted electrodes, bypassing the speech processor, resulted in shorter ELAR latencies in cochlear implant users (Ponton & Don, 1995; Ponton et al., 1996; Firszt et al., 2002a; Maurer et al., 2002). This shortening may occur because electrical stimulation bypasses both the travelling-wave delay and the transduction delay. It may also result from much better synchronization of a larger number of neural units by the electrical current pulses, reducing mean synaptic delays. Hence, the total time to reach the primary auditory cortex and the synaptic delays of cortico-cortical reactivation will be shorter. When stimuli are delivered through the auxiliary input of the sound processor using the recipient's program, this latency decrease may be offset by the time the cochlear implant takes to process the sound, resulting in latencies similar to those of normal-hearing subjects. Nevertheless, the latency of the electrically evoked potential may be linked with the degeneration of auditory fibers (Guiraud et al., in press) and reflect the maturity of the auditory pathway (Ponton et al., 1996a, 1996b, 1999, 2000; Ponton & Don, 1995; Ponton & Eggermont, 2001; Eggermont et al., 1997; Sharma et al., 2002, 2005). Hence, ELAR latency seems an appropriate measure for investigating the impact of deafness on the auditory pathway, provided the responses are recorded in the same way for all subjects.

The effects of deafness on the auditory pathway have been investigated in many studies. There is abundant evidence that, in mammals and birds, neural activity within a sensory pathway plays an important role in the development and maintenance of that pathway (Cowan, 1970; Globus, 1975; Purves & Lichtman, 1985). More specifically, auditory deprivation causes widespread degeneration in the central auditory system (e.g., Hardie & Shepherd, 1999; Leake et al., 1992; Moore, 1994; Ryugo et al., 1997, 1998). Degeneration of cochlear hair cells results in secondary degeneration of the spiral ganglion cells of the auditory nerve (Kohenen, 1965; Johnsson, 1974; Spoendlin, 1975; Otte et al., 1978). The extent of survival depends on factors such as etiology, severity of the pathology, and duration of deafness (Nadol et al., 1989). Other changes due to auditory deprivation include reduced cell density in the anteroventral and ventral cochlear nuclei, changes in the neural projections between brainstem nuclei (Nordeen et al., 1983), reduced cortical synaptic activity in cortico-cortical and cortico-thalamic connections (Kral et al., 2000), a reduced number of primary dendrites in cortical pyramidal cells, and encroachment of auditory cortical areas by the visual and somatosensory systems in congenitally deaf humans (Lee et al., 2001; Finney et al., 2001; McFeely et al., 1998; Nishimura et al., 1999). Some of these alterations may be objectively reflected in ELAR latency. A reduced number of neural cells may decrease neural transmission speed and increase synaptic delays (Rattay, 1987). More generally, a relationship between the number of neural fibers and conduction velocity has indeed been shown for both sensory and motor pathways (respectively, Cavalcanti do Egito Vasconcelos et al., 2003; Morgan & Proske, 2001). Damage to the myelin sheaths also reduces neural transmission speed (Zhou et al., 1995). Such alterations of conduction velocity could lead to poorer synaptic synchronization (Paolini, Roberts, Clark & Shepherd, unpublished observations), which would be reflected objectively in longer electrically evoked potential latencies.

Previous studies on the effect of tonotopy on the speed of human auditory processing in normal-hearing subjects have yielded conflicting results. The latency of the N1m (the magnetic analogue of N1) seems to vary as a function of tone frequency according to Roberts and colleagues (Roberts & Poeppel, 1996; Stufflebeam et al., 1998; for a review: Roberts et al., 2000). Results for N1 latency are more conflicting: some studies report no frequency-related differences in animals (Redies et al., 1989; Woods et al., 1989; Sutter & Schreiner, 1991) or humans (Picton et al., 1976), whereas others show significant effects of frequency on cortical response latencies in humans (Rapin et al., 1966; Jacobson et al., 1992; Woods et al., 1993a,d). Verkindt et al. (1995) showed that the latency of N1 generated by 250-Hz tones is delayed compared with the latencies of N1 generated by tones of 500, 1000, 2000, and 4000 Hz, which do not differ from one another. N1 latency in normal-hearing subjects therefore seems to behave differently for tones below 500 Hz. This is consistent with the study by Roberts and Poeppel (1996), which showed no variation in N1m latency above 500 Hz. As each electrode of the cochlear implant is supposed to evoke a unique pitch percept, known as electrode place-pitch (Clark, 1987), frequency-related differences in the speed of implantees' auditory processing may also exist. The relationship between ELAR latency and deafness characteristics could then be confounded by a tonotopic gradient of ELAR latency according to the stimulation site. On the one hand, the peripheral tonotopic gradient found with electrical auditory brainstem responses (EABRs) (Allum et al., 1990; Shallop et al., 1990; Abbas & Brown, 1991; Miller et al., 1993; Firszt et al., 2002a; Guiraud et al., in press) could indeed be projected to upper levels of the auditory pathway. ELAR latency would then be longer for activation of basal electrodes and would reflect anatomic aspects of the periphery of the auditory pathway, since auditory fibers are longer (Moore, 1987; Spoendlin & Schrott, 1989) and more numerous (Hinojosa et al., 1985; Spoendlin & Schrott, 1990) at the base of the cochlea. On the other hand, the ELAR latencies that Firszt et al. (2002a) recorded at high stimulation intensity suggest a possible influence of frequency similar to that found in normal-hearing subjects (e.g., Woods et al., 1993d). The latency values they found for stimulation at 100% of the dynamic range were 87.31 msec for activation of electrode 1 (apical), 86.85 msec for electrode 4 (mid), and 85.04 msec for electrode 7 (basal), i.e., about 2.3 msec shorter for the basal than for the apical electrode. A gradient could therefore exist. It would reflect anatomic aspects of the central auditory pathway, with longer afferent pathways toward the more lateral cortical areas that are activated by lower-pitched tones (Pantev et al., 1988). However, Firszt et al. (2002a) did not perform a statistical analysis on the data recorded at this intensity alone, so it is not possible to know whether these differences across electrode sites are significant. They found no significant effect of stimulation site on ELAR latency when latencies of ELARs recorded at several stimulation intensities were analysed together. A possible effect of stimulation site on ELAR latency could thus have been obscured, because N1 and P2 are less well defined at lower stimulation intensities and their latencies are harder to identify, resulting in more variable values.
In our study, we therefore recorded ELARs initiated from various stimulation sites at a comfortably loud intensity to investigate whether the anatomy of the auditory pathway had an effect on latency and whether this effect interfered with the possible relationship between deafness and latency.

In the present study, the effects of the cochlear site of stimulation and of deafness characteristics on ELAR latency were investigated using the same protocol as in a previous study involving EABR latency (Guiraud et al., in press). On the one hand, the effects of auditory pathway anatomy on ELAR latency (i.e., whether neural conduction speed varies according to the part of the auditory pathway responsible for a specific pitch) were studied by comparing latencies recorded for stimulation at various sites along the electrode array. On the other hand, three parameters were examined to investigate the influence of deafness on ELAR latency. The first parameter was the duration of deafness, since it has been shown to correlate with the population of neurons in the auditory system (Webster & Webster, 1977; Chouard et al., 1983) and with performance (Blamey et al., 1996). The second parameter was the degree of hearing loss measured by an audiogram before implantation, since correlations have been found in hearing-impaired subjects between the degree of hearing loss and both the number of spiral ganglion cells (Schmidt, 1985) and auditory performance (Brimacombe & Eisenberg, 1984; Wable et al., 2001). As psychophysical levels and dynamic range depend closely on subjects' etiology and spiral ganglion neuron population (Pfingst et al., 1980; Pfingst, 1984; Shannon, 1983; Lusted et al., 1984; Kawano et al., 1998), the third parameter examined was the M level (most comfortable level) at first fitting of the cochlear implant for activation of a single electrode. Using the M level allowed us to take into account the effects of deafness on a population of auditory fibers restricted to the vicinity of the activated electrode, complementing the global assessment of the effect of deafness on the auditory pathway provided by pre-implant hearing thresholds. M levels were assumed to reflect the state of fiber degeneration, on the premise that higher stimulation intensities would be necessary to produce a comfortable percept when nerve fibers were more damaged. Because high stimulation intensities were used to obtain better-defined evoked potentials, the excitation area may have been enlarged; hence, using M levels rather than thresholds would address approximately the same anatomical areas and allow a better comparison between subjective and objective measures. Being able to distinguish the individual effects of these three parameters (duration of deafness, pre-implant audiogram, and M levels at first fitting) on latency, combined with the fact that different parts of the auditory pathway are activated for different pitches, could have useful clinical applications. This could facilitate interpretation of response latency when assessing neural integrity, particularly with respect to stimulation parameters (e.g., the electrode site used to initiate the potentials). Using the same protocol as in our previous article investigating EABR latency (Guiraud et al., in press) will also give an insight into whether some electrically evoked potentials are less influenced by stimulation parameters, better reflect the integrity of the auditory pathway, and could therefore be more clinically useful in the assessment of cochlear implant benefit.