More notes/summaries from lab:
Intermodal perception refers to the perception of information from objects and events that is available simultaneously to multiple senses. Philosophers proposed that information from the different senses must be integrated before we can perceive an object or an event.
Piaget proposed that this integration takes place through interacting with objects and coordinating information across the different senses. Gibson proposed instead that the senses work as a unified perceptual system and that different forms of sensory stimulation are needed to perceive unified events. His view differed from the integration view in holding that the senses are unified at birth – not that the senses are separate at birth.
As infants grow older, they learn to perceive subtler differences and more complex objects and events in their environment, moving beyond simply detecting the general features of unified multimodal events.
Evidence supports the view that temporal synchrony between sights and sounds is “the glue” that binds information across the senses. For example, when someone claps, we detect synchrony, rhythm, and tempo. We perceive the event as a whole – we don’t attend to the sights, sounds, or tactile stimulation separately.
Amodal information that is presented redundantly and in synchrony across more than one sense modality provides intersensory redundancy. An infant will notice the intersensory redundancy between the face and the voice of a person speaking before noticing modality-specific information, such as the pitch of the voice. When redundancy is not available, an infant will attend to nonredundant information, such as the appearance of a person’s face. Since selective attention is the basis for what we perceive, learn, and remember, intersensory redundancy is highly influential in organizing early perceptual, cognitive, social, and emotional development.
Intersensory redundancy allows increasingly specific processing to develop. As infants grow older, detection of global sight–sound relations allows detection of more specific amodal relations, such as tempo, which in turn allows detection of arbitrary, modality-specific relations, such as pitch and color.
Protoconversation is the intercoordinated mutual exchange of sounds, movements, and touch; it’s the foundation for communication and social development.
Speech is inherently multimodal, coordinating facial, vocal, and gestural information; audiovisual redundancy also promotes speech learning.
Proprioception is information about self-movement based on feedback from the muscles, joints, and vestibular (balance) system. By three to five months, infants can discriminate between a live video of their own legs kicking and a video of another infant’s legs by detecting the temporal synchrony and spatial correspondence between their proprioceptive feedback and the display of their own kicking.
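The synchrony detection described above can be pictured with a toy computational analogy: a movement felt through proprioception matches a video when the felt and seen onsets line up in time. This is only an illustrative sketch, not a model of the infants' actual mechanism; the function name, onset values, and tolerance are all hypothetical.

```python
# Toy illustration of temporal synchrony as a matching cue:
# compare when movements are felt vs. when they are seen.
# All names, times, and the tolerance are hypothetical.

def synchrony_score(felt_onsets, seen_onsets, tolerance=0.1):
    """Fraction of felt movement onsets (in seconds) that have a
    seen onset within `tolerance` seconds of them."""
    if not felt_onsets:
        return 0.0
    matched = sum(
        any(abs(f - s) <= tolerance for s in seen_onsets)
        for f in felt_onsets
    )
    return matched / len(felt_onsets)

# Kicks the infant feels, and kick onsets in two candidate videos.
felt = [0.5, 1.2, 2.0, 2.9]
own_video = [0.52, 1.18, 2.03, 2.88]   # live feed: near-synchronous
other_video = [0.1, 0.9, 1.6, 2.4]     # another infant: unrelated timing

print(synchrony_score(felt, own_video))    # high (1.0)
print(synchrony_score(felt, other_video))  # low (0.0)
```

A high score for the live feed and a low score for the other infant's video mirrors the discrimination the three- to five-month-olds showed, though real infants also use spatial correspondence, which this sketch ignores.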