
Opinions presented are those of the guest authors and not necessarily those of SPELL-Links | Learning By Design. They are presented to generate new insights, critical thinking, and solutions for educators and learners.

Comments? Questions? Divergent opinions? We’d love to hear from you! Post them on SPELLTalk, the FREE online professional discussion group dedicated to improving literacy through discussion of current research and evidence-based best practices.

Articulatory Gestures in Literacy Instruction

Laura MacGrath, MSc(A)

January 25, 2024

If you’re someone who’s interested in early literacy instruction, there is an excellent chance that you’ve heard of sound walls with articulatory gestures, a current hot topic in literacy. On this type of sound wall, speech sounds are laid out according to how they are produced. Consonants are arranged according to their place and manner of articulation and their voicing. For example, the sound [b] would be classified as a sound made with the lips, with completely stopped airflow, and with the voice turned on. Right next to it would be [p], which is the same as [b], except with the voice turned off. As for the approximately 18 vowels, they are laid out in a valley shape meant to represent tongue/jaw height and the front-back position of the tongue in the mouth. The speech sounds (or phonemes) are made visible by pictures of mouths producing each sound. The phonemes are linked with the letters (or graphemes) that commonly represent them, and a keyword picture is often also featured. Students are encouraged to attend to their own speech production and are taught the articulatory features of each phoneme, how to locate phonemes on the sound wall, and how to link them to the corresponding graphemes. Having children attend to their production of speech sounds is proposed to enhance their learning of letter-sound correspondences, improve their phonemic awareness (the understanding that spoken words can be abstractly broken into phonemes, or speech sounds), and support their learning of decoding and spelling.

In my work settings and on social media, lots of people are asking about the efficacy of such an approach. As an SLP (speech-language pathologist) specializing in literacy and a university lecturer on reading disabilities and child speech disorders, this topic is pretty much the perfect intersection of my interests. I have dug in over the past few years and now aim to share some of the theory and research evidence, along with my own reflections on this trend. In this three-part series, we will first focus on the theoretical rationale for this approach. Part two will dive into the research evidence, and part three will offer a take on how we might proceed, based on the theory and evidence.
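If a concrete illustration helps, the organizing logic of such a sound wall is essentially a small feature table. Below is a minimal sketch in Python (purely illustrative, and not the data model of any actual sound wall product) showing how a handful of consonants can be indexed by place, manner, and voicing, and how the voiced/voiceless pairs fall out of those features:

```python
# Illustrative sketch: indexing a few consonant phonemes by their
# articulatory features, the way a sound wall organizes them.
# Feature labels follow standard phonetics; the phoneme set here
# is a small sample, not any published program's materials.

PHONEMES = {
    "p": {"place": "bilabial",    "manner": "stop",      "voiced": False},
    "b": {"place": "bilabial",    "manner": "stop",      "voiced": True},
    "t": {"place": "alveolar",    "manner": "stop",      "voiced": False},
    "d": {"place": "alveolar",    "manner": "stop",      "voiced": True},
    "f": {"place": "labiodental", "manner": "fricative", "voiced": False},
    "v": {"place": "labiodental", "manner": "fricative", "voiced": True},
    "m": {"place": "bilabial",    "manner": "nasal",     "voiced": True},
    "n": {"place": "alveolar",    "manner": "nasal",     "voiced": True},
}

def voiced_voiceless_pairs(phonemes):
    """Find pairs that differ only in voicing (e.g., /p/-/b/),
    the pairs that sound walls display side by side."""
    pairs = []
    for a, fa in phonemes.items():
        for b, fb in phonemes.items():
            if (a < b and fa["place"] == fb["place"]
                    and fa["manner"] == fb["manner"]
                    and fa["voiced"] != fb["voiced"]):
                pairs.append((a, b))
    return pairs

print(voiced_voiceless_pairs(PHONEMES))  # [('b', 'p'), ('d', 't'), ('f', 'v')]
```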

 

Part 1—The theoretical rationale

You might be wondering about the origins of this idea. Why should literacy teachers pay attention to speech production? For some answers, we need to dive into history. In the 1950s, psychologist Alvin Liberman and colleagues at Haskins Laboratories endeavoured to build a reading machine, an early attempt at the text-to-speech technology that we have today. In doing so, they discovered that it wasn’t as straightforward as lining up discrete phonemes one after another to create artificial speech. There are no tidy boundaries between phonemes; rather, speech sounds overlap in time, a phenomenon known as coarticulation. In fact, when the researchers tried to string together consonant and vowel sounds, the result was mostly a string of unintelligible sounds. Because of this overlap, the acoustic cues for each phoneme vary widely depending on the context; for example, the /d/ in “deep” is acoustically different from the /d/ in “do” (Liberman et al., 1967). In other words, our words are actually a sort of blur of sound, rather than discrete acoustic building blocks. As a person who is literate in an alphabetic orthography, you might find this hard to believe, because you can easily imagine words being broken into sounds (of course you know that “cat” is /k + æ + t/), but to young children and to people who have not learned to read in an alphabetic language (e.g., readers of the Japanese syllabary system), it is not obvious that words can be broken down into abstract phonemic units. “Cat” is just “cat” and not easily abstracted into /k + æ + t/.

A simplified spectrogram, a visual representation of the acoustic speech signal. The two /d/ sounds are perceived as the same “d” by listeners, but note how the acoustic signal differs. The different vowels influence the preceding /d/. (Source: Liberman et al., 1967)

Because the acoustic cues for any given phoneme vary depending on context, Liberman theorized that there must be some other, more constant cue that serves to distinguish phonemes for listeners. This led to the Motor Theory of Speech Perception, which holds that the objects of our perception are the articulatory gestures of the speaker, rather than the acoustic signal itself. That is, the listener identifies a spoken [p] as the phoneme /p/ not because of its acoustic properties, but because they can recover the articulatory gesture of the speaker’s lips closing in a voiceless stop. Researchers have proposed various revisions since Liberman first advanced the Motor Theory, but the gist of gesturalist theories remains that we perceive speech largely via the motor system. Unsurprisingly, proponents of teaching articulatory gestures in early literacy often point to the Motor Theory of Speech Perception as theoretical support for the approach.

However, there is considerable debate surrounding speech perception, with many researchers arguing for an account that opposes the Motor Theory, holding instead that speech perception is driven primarily by auditory processes, despite the wide variance in auditory cues that Liberman first identified (Diehl et al., 2004; Redford & Baese-Berk, 2023). The Motor Theory, in its original form, predicts that in order to perceive human speech, one must have a human vocal apparatus. But it turns out that categorical perception of human speech (e.g., distinguishing /p/ from /b/) has been demonstrated in several types of experimental subjects who are incapable of speech production, including people with severe production impairments such as Broca’s aphasia (Hickok et al., 2011), very young babies (e.g., Eimas, 1975), and chinchillas (Kuhl & Miller, 1975). Though these findings are inconsistent with the original Motor Theory, they of course do not rule out involvement of the motor system in speech perception. Indeed, there is evidence of a role for the motor system in speech perception, some of which we will now discuss.

In a fascinating study, researchers from UBC found that babies’ perception of a dentalized [d] versus a retroflex [d] was inhibited when they used a teether that blocked tongue tip movement, versus a teether that did not (Bruderer et al., 2015). Further, the well-known McGurk effect (McGurk & MacDonald, 1976) is taken as support for motor involvement, or at least as evidence against a purely auditory account. We can also look to evidence from brain imaging studies: for example, Fridriksson and colleagues found that a common neural network was involved both in producing speech and in passively listening to speech (Fridriksson et al., 2009). More recently, it has been suggested that the motor system facilitates perception during difficult listening conditions (e.g., a noisy party), but that its contribution to perception is generally more modest (Stokes et al., 2019). In a 2017 systematic review, Skipper and colleagues pointed out that asking whether the motor system plays any role in speech perception is too simplistic an inquiry, and that the existence of some role for the motor system should be uncontroversial. Rather, given the complexity of both perception and production, we should seek to better understand which regions and networks in the brain play a role in perception, and under what circumstances (Skipper et al., 2017). Long story short, there is a lot left to learn about speech perception, but it is safe to say that the original Motor Theory is no longer considered viable.
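For readers unfamiliar with the term, categorical perception means that listeners do not hear a gradual /b/-to-/p/ continuum; identification flips sharply at a category boundary. The toy Python sketch below illustrates the idea using voice onset time (VOT), the cue that separates /b/ from /p/. The boundary and slope values are made up for illustration and are not data from any study cited here:

```python
import math

# Toy model of categorical perception along a voice onset time (VOT)
# continuum. English /b/ has a short VOT and /p/ a long VOT; listeners'
# identification flips sharply near a boundary. The 25 ms boundary and
# the slope are illustrative assumptions, not measured values.

BOUNDARY_MS = 25.0   # assumed category boundary
SLOPE = 0.5          # assumed steepness of the identification function

def prob_heard_as_p(vot_ms: float) -> float:
    """Probability a listener labels the token /p/ (logistic curve)."""
    return 1.0 / (1.0 + math.exp(-SLOPE * (vot_ms - BOUNDARY_MS)))

for vot in (0, 10, 20, 25, 30, 40, 60):
    label = "p" if prob_heard_as_p(vot) > 0.5 else "b"
    print(f"VOT {vot:>2} ms -> P(/p/) = {prob_heard_as_p(vot):.2f} -> heard as /{label}/")
```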

Are we getting lost in the theoretical weeds here? Maybe. But it seems to me that the nature and extent of motor involvement in speech perception matters to the discussion of teaching articulatory gestures in the context of literacy instruction. I would submit that appeals to the Motor Theory are best taken with a grain of salt. In any case, practices must be based on more than theory. Next, we’ll look at the experimental evidence that examines the effectiveness of teaching articulatory gestures in the context of literacy instruction.

 

Part 2—The research evidence

First, we will look at studies of this approach as incorporated into interventions for students needing extra support in literacy; later, we’ll get to studies of typically developing young children.

Auditory Discrimination in Depth (ADD), later revised and renamed the Lindamood Phoneme Sequencing Program (LiPS), is an intervention program for struggling readers that makes extensive reference to articulatory gestures (Lindamood & Lindamood, 1975, 1998). The program calls students’ attention to their articulatory gestures, demonstrates the sequence of sounds in syllables using coloured blocks, and then links to actual word reading. Consonant phonemes are labelled with friendly terms like “tip tappers” for /t/ and /d/ and are presented in pairs of voiced and voiceless sounds referred to as “brothers.” Vowel phonemes are presented in a “vowel circle,” arranged by tongue height and the front-back position of the tongue in the mouth. Though the focus on speech articulation is what sets ADD/LiPS apart from other programs, it is only one aspect; the program also follows a systematic, cumulative sequence of instruction in decoding. So, does it work?

Torgesen and colleagues studied the effectiveness of two approaches for children with weak phonological skills: ADD/LiPS and another, less explicit approach, with both groups receiving 88 hours of one-on-one intervention from mid-Kindergarten through to second grade. The less explicit approach, called embedded phonics (EP), involved children learning whole words by drill, then learning letter-sound correspondences for letters from the drilled words, and writing and reading sentences with these words, with a focus on phonemic segmenting during the writing task. At the end of grade two, though both groups made good gains, the ADD/LiPS group outperformed the EP group (Torgesen et al., 1999). But the groups differed not only in the type of phonemic awareness instruction they received; they also differed drastically in the amount of time devoted to phonemic awareness and decoding: 74% of instructional time for children in the ADD/LiPS condition, versus 26% in the less explicit condition. The authors suggest that it is not possible to determine whether the nature of the activities or the intensity of explicit instruction (three times more focus on phonemic awareness and decoding) was responsible for the advantage seen for the ADD/LiPS group (Torgesen et al., 1999).

Sample of materials from LiPS (Lindamood & Lindamood, 1998)

Torgesen and colleagues later compared ADD/LiPS with a more developed version of their embedded phonics (EP) condition, which added phonics and phonemic awareness so that time on foundational skills was the same across conditions. Children aged 8-10 with severe reading disabilities received two 50-minute one-on-one intervention sessions per day over 8 weeks (a remarkably high “dose” of intervention). Students in both conditions made very good gains in reading, and the gains were still evident two years after the 8 weeks of intervention. However, there was no difference between the two groups on measures of word reading, nonword decoding, or passage reading accuracy, fluency, or comprehension, either immediately after the intervention or two years later (Torgesen et al., 2001), suggesting that the focus on articulatory gestures did not yield an advantage. Somewhat confusingly, this study is often cited as support for the use of articulatory gestures.

From Torgesen et al., 2001. The broad reading cluster included measures of word-level reading and passage comprehension.  Outcomes were the same, whether or not participants focused on articulation.

A variation on the ADD/LiPS program that incorporated computer activities was compared to Read, Write and Type (RWT), a computer-assisted program from the late 1990s that provides explicit instruction in foundational literacy (without incorporating articulatory gestures), targeted primarily through written expression. The study looked at 104 first-grade students at risk of reading difficulties who received four 50-minute small-group sessions per week, from October to May of first grade. While both groups made good gains compared to controls, there was no significant advantage for children in the ADD group compared to the RWT group. Additionally, the authors pointed out that the ADD/LiPS program was much more difficult for teachers to learn and administer (Torgesen et al., 2018).

Borrowing some procedures from ADD/LiPS, Wise and colleagues studied the effect of explicit instruction in articulatory gestures. Three groups of second- to fifth-grade students with reading difficulties participated in interventions that varied in the amount of focus on articulation, while the time spent on phonemic awareness and decoding instruction was the same for all three groups. One group targeted phonological awareness by manipulating blocks and reading/spelling words. Another group focused heavily on articulation, without traditional phonemic awareness activities. A third group did a mix of both, and all three groups received identical phonics instruction. After 50 hours, all groups made good gains on all tests of reading compared to controls, and the gains were sustained 10 months later. Remarkably, the three groups did not differ from each other on any measure of reading; that is, there was no advantage (or disadvantage) to focusing on articulatory gestures. The authors suggest that there appears to be flexibility in exactly how these concepts are taught (Wise et al., 1999).

Researchers in France studied the outcomes of 19 children aged 7-10 with moderate to severe dyslexia. Joly-Pottuz and colleagues randomly assigned participants to two groups. One group received three weeks of auditory-phonological training followed by three weeks of auditory-phonological training with articulation training; the second group received the same trainings in reverse order. Outcomes were measured before, between, and after the two training periods. The auditory-phonological training consisted of listening to pre-recorded phonological awareness tasks (e.g., find the word with the same common middle syllable: puté – carburant – débutant) that increased in difficulty over the course of the training period. There was no feedback on performance and no link to written words. The articulation training involved two activities: learning the place of articulation and voicing for stop sounds (/p, b, t, d, k, g/), and a set of computerized games that further taught voicing contrasts for consonant pairs (e.g., /p, b/, /f, v/). Both groups showed gains on all measures of reading over the six weeks. Of interest: following the articulation training, both groups made better gains in phonological awareness and nonword reading (compared to the gains seen following the auditory-phonological training), and one group made better gains in word reading. However, there was no effect of the type of training on spelling (Joly-Pottuz et al., 2008). This study suggests that learning some consonant places of articulation and voicing contrasts was helpful for these students with dyslexia, or that visual cues (absent in the auditory-phonological-only condition) may have facilitated learning.

In 2014, Trainin and colleagues examined 53 third-grade students reading below grade level who participated in two different interventions: WordWork (Calfee & Patrick, 1995), which includes a focus on how speech sounds are produced, and Phonological Awareness Training for Reading (Torgesen & Bryant, 1993), which places much less emphasis on speech articulation. Interventions were delivered in small groups, over three 45-minute sessions per week, for 11 weeks. After the interventions, the WordWork students performed better on spelling, decoding, and reading fluency than the other intervention group (though they did not show stronger phonemic awareness). However, WordWork differed from the control intervention in other important ways, for example by emphasizing instruction in morphology (e.g., suffixes, and the doubling rule as in tap/tapping, tape/taping), whereas the other condition focused heavily on oral-only phonemic awareness activities. The authors suggest that teaching a combination of metaphonics, articulation, and orthography in WordWork resulted in superior gains (Trainin et al., 2014); thus, this study does not speak to the role of articulation specifically.

Swedish researchers studied the effect of FonoMix, a commercially available phonological training program that emphasizes articulation, similar to ADD/LiPS with its mouth pictures, but in Swedish. Thirty-nine 6-year-olds participated in the experimental condition, while 30 children served as a comparison group. The children had not yet begun formal literacy instruction, which is customarily introduced at age 7 in Sweden. Children in the experimental group showed stronger gains in letter names and sounds and in word and nonword reading, and students who were at risk showed particularly good benefit. However, it is not possible to conclude what aspect of FonoMix led to the gains, given that the comparison group was taught using “whole language” inspired materials and procedures, suggesting they lacked explicit teaching of foundational skills (Fälth et al., 2017). Another study of FonoMix looked at the outcomes of 38 Grade 1 students, randomly assigned to receive either FonoMix interventions condensed into 4-5 weeks or regular classroom instruction. Again, students in the intervention condition made good gains, but the components of the regular classroom instruction were not specified, so it is difficult to draw conclusions about what contributed to the progress of the experimental group (Fälth et al., 2020).

More recently, Norwegian researchers conducted a randomized controlled trial investigating the effect of articulation training for at-risk grade one students (Thurmann-Moe et al., 2021). A sample of 129 first graders with weak phonological processing abilities was randomly assigned to an intervention condition or a control condition, in which students received their schools’ usual interventions. Children in the experimental condition received 20 sessions of 40 minutes, in small groups, focused on exploring their own articulation using mirrors and on learning a symbol system that represents the features of Norwegian speech sounds (see the example figure below). They then used these symbols to read and spell words and practiced transferring to Norwegian orthography. Although the children were able to learn the articulation symbols, there was no difference in their performance on any of the reading or phonological awareness measures, compared to controls. However, there were a number of limitations, including that the intervention took place over the last 11 weeks of the school year, leaving perhaps insufficient time for children to apply their learning to real reading and spelling.

Pictographic Articulatory System (Kausrud, 2003) for the sounds /g/ and /s/, used by Thurmann-Moe et al., 2021. The pictographs depict the place, manner, and voicing of consonant sounds, and lip shape for vowels.

So far, we have seen studies that focused on children requiring literacy interventions, and none has made a compelling case for incorporating articulatory gestures. Now we will look at two studies of typically developing beginning readers. Reading researcher Linnea Ehri and colleagues published two studies, in 2003 and 2011, that are very often cited as support for incorporating articulatory gestures into early literacy instruction (Castiglioni-Spalten & Ehri, 2003; Boyer & Ehri, 2011). We should understand the details of these studies in order to draw conclusions about their practical implications.

The 2003 study involved 45 Kindergarteners who were assigned to three conditions: an ear condition, a mouth condition, and a control condition. In the ear condition, the children were taught to segment words into phonemes using blocks with a picture of an ear, to cue them to pay attention to the sounds in words. In the mouth condition, children learned to segment words into phonemes using blocks with pictures of eight different mouth positions, depicting 13 consonant sounds and three vowel sounds. For example, /t, d, l, n/ were all represented by the same picture of an open mouth with the tongue tip lifted to the alveolar ridge. The three vowel sounds included were long E, long A, and long O. Prior to segmenting words, the children in the ear and mouth conditions were taught the correspondences between the blocks and the phonemes in words; the children in the mouth condition were taught the correspondence of the mouth pictures to phonemes using a mirror and discussing their own production of these sounds. The control group participated in regular Kindergarten instruction, where reading was not formally taught.

Mouth pictures used in Castiglioni-Spalten & Ehri (2003) and Boyer & Ehri (2011) (from Lindamood & Lindamood, 1975)

After six sessions of instruction, the ear and mouth groups performed equally well on segmenting words and invented spelling, and both outperformed controls. The experimenters also looked at the transfer of these skills to the reading of novel words that contained taught sounds, and again there was no advantage for the mouth group. However, in a post-hoc analysis of how many words were read partially correctly (e.g., reading feel as “feet” would count as partially correct), the mouth group did outperform both the ear and control groups. The authors concluded that phonemic awareness training was successful with both methods, with and without attention to articulatory gestures. Based on the mouth group’s advantage in reading words partially correctly, they suggested that “articulatory training facilitated the graphophonemic, connection-forming process that is involved in bonding spellings to phonological representations of words to secure them in memory.” The authors indicated that the findings were suggestive and could not conclusively inform instructional practices.

Boyer and Ehri’s 2011 study examined 60 preschoolers aged 4-5, split into three groups. The two treatment groups were taught to segment words into phonemes using tiles with letters only (LO), versus letter tiles plus pictures of articulatory gestures (LPA), using the same pictures as the 2003 study. Again, a third control group received regular preschool teaching. Children in the LO and LPA conditions were taught letter-sound correspondences for the 15 letters and were taught to segment words using letter tiles to represent the phonemes in words. In the LPA condition, children were additionally taught to segment words using the mouth pictures. Children participated until they reached preset criteria for letter naming and phoneme segmenting, which took 4-11 sessions.

Children in both the LO and LPA conditions performed better than controls on all outcome measures of reading, spelling, and phonemic awareness, both one day and seven days later, suggesting both conditions were effective. Children in the LPA condition were able to match mouth pictures to their corresponding sounds, whereas the LO group was not, indicating that they had learned articulatory awareness from this kind of instruction. To assess specifically the contribution of learning the mouth pictures, the outcomes of the LO and LPA groups were compared. After one week, there was no significant difference between these groups in nonword spelling, the number of sounds spelled correctly, recall of the words taught during the training period, the number of phonemes segmented (in a phoneme segmenting task using blank tiles), or nonword repetition (phonological memory). On the other hand, children in the LPA group outperformed their LO peers on some measures one week after the training. The LPA children were able to segment more words correctly (an average of 3.6/14 words for LO versus 5.1/14 for LPA). Further, on a word-learning task in which children received up to eight trials to learn to read six new words made of taught sounds, the LPA group required fewer trials to learn to read the words, and one week later they were able to read significantly more words than the LO group. However, both groups performed the same on spelling of the learned words.

Boyer and Ehri concluded that the addition of teaching articulatory gestures boosted children’s word learning in this study. Given that study participants were 4- and 5-year-olds in the early stages of literacy acquisition, with large vocabularies, from mid-upper socioeconomic status families, the authors indicated that it remained to be investigated whether findings can be generalized to other circumstances; how exactly articulatory gestures should be incorporated into reading instruction was suggested to be a question for further research (Boyer & Ehri, 2011).  

A more recent 2021 study examined the effect of a collaborative approach to instruction with a group of 17 preschoolers, over a period of seven weeks (Becker & Sylvan, 2021). A speech-language pathologist (SLP) worked with a classroom teacher to provide instruction that included articulation placement, phoneme segmentation, letter-sound knowledge, word and nonword decoding, and phoneme segmentation using mouth pictures and letters. Children who had difficulty were seen for small-group instruction with the SLP. Prior to the collaboration, the usual classroom practice involved daily thematic stories with some phonics infused, but without explicit phonemic awareness instruction. The researchers measured children’s phoneme segmentation and word reading at three time points: at baseline, after seven weeks of the usual classroom instruction, and after seven weeks of the collaborative instruction. The children made no gains on these measures after seven weeks of the usual instruction, but impressive gains on all measures after seven weeks of the SLP-teacher collaboration, suggesting that the SLP and teacher were able to work together to improve early literacy instruction. However, it is not possible to know which elements of the systematic classroom instruction were responsible for the children’s growth, not to mention the contribution of the additional small-group support from the SLP.

And finally, a very recent study was the first to clearly examine the role of attention to articulation in the learning of letter-sound correspondences. Five preschoolers who did not know any letter-sound correspondences and had no developmental concerns were explicitly taught letter-sound correspondences using flashcards, in twice-daily 5-minute sessions, for about 36 days. Researchers compared the children’s ability to learn letter-sounds under two experimental conditions: with the researcher wearing a blue procedural face mask (as was commonplace during the Covid-19 pandemic), versus without a face mask. In the masked condition, the researcher cued the children to listen to the sound, pointing to her ear before presenting the sound and letter. In the no-mask condition, children were instead prompted to look at the researcher’s mouth, though there was no explicit discussion of the speech gestures. Children were able to learn the letter-sounds in both conditions, but they mastered them after much less instructional time when the researcher was not masked and they were cued to look at her mouth (Novelli et al., 2023). This study suggests that simply being cued to look at a teacher’s mouth may help children make the links between sounds and letters.

That about summarizes the research evidence on articulatory gestures in literacy instruction, but if I have missed any studies, please do share. While some studies show promise for this approach, they are few, their findings have not yet been replicated, and the instructional implications are often limited, given that the studies often do not isolate the contribution of the focus on articulatory gestures specifically. And importantly, the findings do not necessarily reflect what is seen in some current trends. So, what do we do with all of this information? That is the big question, and different people will have different ideas about how to proceed. Next, I will share my view on how we may translate the theory and research to practice.

 

Part 3—What now?

These days, articulatory gestures and sound walls with mouth pictures are the subject of much discussion in the literacy world. In part two, we saw that the few studies that show promise for this approach had limited instructional implications. And yet, current trends have not only embraced this concept but have taken it to a level not investigated by any of the relevant studies. Practices popularized over the past several years recommend teaching students about concepts such as liquid, fricative, affricate, nasal, and voiced/voiceless consonant phonemes, as well as subtle gradations in vowel height and the front-back position of the tongue in the mouth. For example, teachers may be encouraged to tell children to “remember to keep your voice on, because /u/ is voiced, like all vowels,” or to give detailed placement cues such as “put your teeth together and round your lips like this, and pull your tongue toward the back of your mouth and make sure your voice is off” (to make the /sh/ sound). There are also dubious cues, such as “fit one finger in your mouth to say short i (as in ‘bit’) and two fingers to say short e (as in ‘bet’),” and suggestions for metalinguistic activities that are rather removed from actual reading or spelling, such as asking students, “What is the nasal sound you make with your two lips?” I recently read a description of how to produce “j” that was very nearly 100 words long. Further, some instructional materials reference articulatory gestures throughout the first few years of literacy instruction. All of this represents a very significant overextension of the findings, in my view.

True, perhaps this depth of instruction is indeed more effective, even if that has yet to be demonstrated by experimental evidence. So, if it might help, then why not? A couple of responses here. First, spending class time on instructional routines that are not evidence-based carries an opportunity cost. If we allot an average of even 3 minutes daily to a low-impact activity, that is 9 hours over the course of the school year that could be devoted to higher-impact activities. Second, and probably more of a problem in my estimation, learning to teach the articulatory gestures requires a serious investment of time, energy, and resources on the part of very busy and underfunded teachers. Is it possible that teachers will miss out on deepening their expertise in higher-impact concepts, such as the benefits of direct instruction, while they are devoting precious professional learning time to understanding the details of phonology and speech articulation? I would argue that we need to be reasonably sure that the benefits of this approach are worth the costs associated with its implementation, and I’m not sold, based on the theoretical considerations and empirical findings to date.
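For anyone who wants the arithmetic behind that estimate spelled out, here it is, assuming a 180-day school year (the figure that yields the 9 hours mentioned above):

```python
# Opportunity-cost arithmetic from the paragraph above.
# The 180-day school year is an assumption (it is the value that
# recovers the 9-hour figure); adjust for your own calendar.

minutes_per_day = 3
school_days = 180

total_minutes = minutes_per_day * school_days   # 540 minutes
total_hours = total_minutes / 60                # 9.0 hours

print(f"{minutes_per_day} min/day x {school_days} days = {total_hours:g} hours per year")
```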

Although I am skeptical of many current practices, I’m not ready to abandon the whole idea, given especially the findings of Boyer & Ehri (2011). For example, while doing phoneme segmenting for spelling, we may cue children to think about the sounds in their words and draw their attention to our mouths as we model the spoken words, but without much explanation of the gesture. And if a child writes “hug” as “hg,” I would definitely cue them to notice how their mouth opens to make a vowel sound, and I might prompt them to look in the mirror to see it happen. In my view, this approach is responsive to needs. On the other hand, there are some contrasts for which universally teaching the gestures likely makes sense, e.g., contrasts that are very similar acoustically, such as /f/ versus /th/ or /m/ versus /n/, which are much more distinct in terms of their articulatory gestures. Finally, I have often referenced the gesture of a sound when teaching children new sound contrasts in their second language, for example teaching French Immersion students that French “u” (as in “tu”) is like “eee” but with rounded lips. However, even for these second-language learners, I would personally stick to the basics and reference gestures only for sounds that pose some difficulty.

Spectrograms (visual representations of the acoustic signal) for the words “fin” and “thin.” The /f/ and /th/ sounds are very similar acoustically, and children at the age of early literacy acquisition commonly misarticulate /th/, so it likely makes sense to reference the articulatory gestures to differentiate these sounds.

There are several sets of published materials for teaching children about articulatory gestures: some are embedded in more comprehensive programs, some are more-or-less stand-alone, and some are low-budget printable versions available on Teachers Pay Teachers. I have heard some people argue that by embedding more detail into instructional content, teachers learn along with their students and develop knowledge that will improve their teaching. For example, when teachers learn about continuant consonants, they can learn how to teach continuous blending, a research-based strategy that involves selecting words that start with “stretchy sounds,” such as ssssat but not cat (Gonzalez-Frey & Ehri, 2021). Indeed, this particular point is highly practical knowledge for teachers, but I do wonder if it gets lost in a sea of not-so-useful details.

Since sharing my take on this trend, a few people have suggested that incorporating this kind of teaching into the classroom may reduce the number of children who will require individual speech therapy, and so it might be justified as a preventative approach. It’s possible. We do know that a child with speech difficulties who is learning to read will benefit from an approach that targets both speech sound production and reading skills in a coherent fashion (Rvachew & Brosseau-Lapré, 2015). However, I remain unconvinced that the heavy emphasis on the gestures makes sense. In fact, most speech disorders in children, about 85-90%, are primarily phonological in nature (Dodd, 1995), meaning that the children have a delayed or disordered concept of the system of sounds (i.e., the phonology) of their language. Consequently, speech therapy involves much less discussion of the articulatory gestures than most people would probably imagine. For these children, it’s less about how to move your mouth and more about creating opportunities and providing feedback to help establish the meaningful sound contrasts of the language, i.e., building their phonological system. I would argue that using a sound wall with lengthy descriptions of the articulatory gestures to help a child with a phonological disorder would conflate articulation and phonology, which SLPs know are different things, the former in the domain of speech and the latter in the domain of language, requiring different approaches to intervention (Rvachew & Brosseau-Lapré, 2018). True, speech articulation difficulties, such as a dental lisp on /s/ and /z/, are a different matter and typically would require placement cues, but these errors are commonly present on only a subset of speech sounds. Furthermore, when teaching articulation, we teach only those features required to differentiate the error from the target sound, without getting into every feature.

I admit, when I first heard about mouth pictures and articulatory gestures and sound walls in literacy several years ago, I was excited that my expert knowledge about speech might be put to use in this way. Like a lot of SLPs with a passion for literacy, I was ready to dive in, having been assured that there was ample evidence for this approach. I have since changed my mind, after spending considerable time investigating this issue. Of course, SLPs are still critical to the discussion on literacy, but we need to do our homework and try to check our bias at the door, just like everyone else.  

In a field as vast as literacy, it’s easy to get caught up in trends, because there is so much going on at all times, amplified by the din of social media. If we find ourselves questioning the utility of a certain trend, we might take a lesson from the recent discussion regarding the teaching of phonemic awareness. For ages, many people (I was one of them) advocated for teaching phonemic awareness in isolation from printed words, to be sure that children were really focusing on the phonemes. Well, this seems to be rather a misinterpretation of the literature that became entrenched in practice, as some researchers have pointed out (e.g., Seidenberg, 2021; Brady, 2020), leading many people to reconsider aspects of their instruction and intervention. There likely are some circumstances in which we would want to zero in on phonemic awareness specifically, to achieve a higher volume of practice in a short time, though research has yet to identify exactly what those circumstances are; what we can say is that extensive oral-only phonemic awareness instruction is probably not an efficient use of classroom time. In fact, an extremely popular top-selling program that teaches in precisely this way has been the subject of some criticism, and recent research findings suggest that there may not be an advantage to incorporating this program over teaching a quality phonics program alone (Little et al., 2023). I’m no social psychologist, but it seems to me that trends and bandwagons are simply a fact of life for us fallible humans. As long as we are aware of this fact, commit to doing the homework, and are willing to adapt in the face of new information, then everything should be fine.

An excerpt of a different type of sound wall (in French), one that shows the corresponding graphemes for each phoneme, without highlighting the articulatory gestures (École des Bâtisseurs, used with permission)

Back to the articulatory gestures. Let’s put this trend in its current context. Over the past several years, lots of people have realized that the predominant approach to early literacy needs serious work. Very important here: the present critique, specifically of the use of articulatory gestures, should not be taken as criticism of the broader shift towards explicit, systematic foundational literacy instruction. There is finally general agreement that we need to move toward a more systematic approach to foundational literacy, so that the greatest number of children learn to read during the first few years of school, and major changes are underway, in some places at least. This is an important moment that we really can’t afford to mess up. We should aim to be sure that our implementation of best practices is not inadvertently derailed by promoting elements that are needlessly burdensome and distract from higher-impact instruction and interventions. As with everything, the devil is in the details, and keeping up with this field is time-consuming and, frankly, exhausting, given the volume of information out there. So, what’s a teacher, interventionist, SLP, or other literacy leader to do? As Anita Archer famously said, “Teach the stuff and cut the fluff,” reminding us to keep it as simple as we can and as complicated as we must, based on the best available evidence. So, I will continue to teach the linking of speech sounds to their corresponding graphemes, but for now I’ll hold off on vowel valleys and complex displays and descriptions of articulatory gestures, until there is evidence that these result in better literacy outcomes.

 

Laura MacGrath is a speech-language pathologist in Montreal, Canada.  She works in public schools and is a course lecturer at the McGill University School of Communication Sciences and Disorders, on the topics of reading disabilities and speech development and disorders.  She can be found at www.doyoureadme.ca

 

References: 

Becker, R., & Sylvan, L. (2021). Coupling articulatory placement strategies with phonemic awareness instruction to support emergent literacy skills in preschool children: A collaborative approach. Language, Speech, and Hearing Services in Schools, 52(2), 661–674. https://doi.org/10.1044/2020_LSHSS-20-00095

Boyer, N., & Ehri, L. C. (2011). Contribution of phonemic segmentation instruction with letters and articulation pictures to word reading and spelling in beginners. Scientific Studies of Reading, 15(5), 440–470. https://doi.org/10.1080/10888438.2010.520778

Brady, S. (2020). A 2020 perspective on research findings on alphabetics (phoneme awareness and phonics): Implications for instruction.

Bruderer, A. G., Danielson, D. K., Kandhadai, P., & Werker, J. F. (2015). Sensorimotor influences on speech perception in infancy. Proceedings of the National Academy of Sciences, 112(44), 13531–13536. https://doi.org/10.1073/pnas.1508631112

Castiglioni-Spalten, M. L., & Ehri, L. C. (2003). Phonemic awareness instruction: Contribution of articulatory segmentation to novice beginners’ reading and spelling. Scientific Studies of Reading, 7(1), 25–52. https://doi.org/10.1207/S1532799XSSR0701_03

Diehl, R. L., Lotto, A. J., & Holt, L. L. (2004). Speech perception. Annual Review of Psychology, 55(1), 149–179. https://doi.org/10.1146/annurev.psych.55.090902.142028

Dodd, B. (1995). The differential diagnosis and treatment of children with speech disorder (Studies in Disorders of Communication). Whurr.

Eimas, P. D. (1975). Auditory and phonetic coding of the cues for speech: Discrimination of the [r-l] distinction by young infants. Perception & Psychophysics, 18(5), 341–347. https://doi.org/10.3758/BF03211210

Fälth, L., Gustafson, S., & Svensson, I. (2017). Phonological awareness training with articulation promotes early reading development. Education, 137(3), 261–276.

Fälth, L., Svensson, E., & Ström, A. (2020). Intensive phonological training with articulation—An intervention study to boost pupils’ word decoding in Grade 1. Journal of Cognitive Education and Psychology, 19(2), 161–171. https://doi.org/10.1891/JCEP-D-20-00015

Fridriksson, J., Moser, D., Ryalls, J., Bonilha, L., Rorden, C., & Baylis, G. (2009). Modulation of frontal lobe speech areas associated with the production and perception of speech movements. Journal of Speech, Language, and Hearing Research, 52(3), 812–819. https://doi.org/10.1044/1092-4388(2008/06-0197)

Gonzalez-Frey, S. M., & Ehri, L. C. (2021). Connected phonation is more effective than segmented phonation for teaching beginning readers to decode unfamiliar words. Scientific Studies of Reading, 25(3), 272–285. https://doi.org/10.1080/10888438.2020.1776290

Hickok, G., Costanzo, M., Capasso, R., & Miceli, G. (2011). The role of Broca’s area in speech perception: Evidence from aphasia revisited. Brain and Language, 119(3), 214–220. https://doi.org/10.1016/j.bandl.2011.08.001

Joly-Pottuz, B., Mercier, M., Leynaud, A., & Habib, M. (2008). Combined auditory and articulatory training improves phonological deficit in children with dyslexia. Neuropsychological Rehabilitation, 18(4), 402–429. https://doi.org/10.1080/09602010701529341

Kuhl, P. K., & Miller, J. D. (1975). Speech perception by the chinchilla: Voiced-voiceless distinction in alveolar plosive consonants. Science, 190(4209), 69–72. https://doi.org/10.1126/science.1166301

Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6), 431–461. https://doi.org/10.1037/h0020279

Liberman, A. M., Harris, K. S., Hoffman, H. S., & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54(5), 358–368. https://doi.org/10.1037/h0044417

Lindamood, C. H., & Lindamood, P. C. (1975). The ADD program: Auditory discrimination in depth (2nd ed.). Austin, TX: PRO-ED.

Lindamood, C. H., & Lindamood, P. C. (1998). The Lindamood phoneme sequencing program for reading, spelling, and speech. Austin, TX: PRO-ED.

Little, C. E., Edwards, A., Harris, M., Santangelo, D., & Patton Terry, N. (2023). Examining student reading achievement in the Heggerty Phonemic Awareness Curriculum (Version 1). https://doi.org/10.6084/m9.figshare.24585075.v1

Novelli, C., Ardoin, S. P., & Rodgers, D. B. (2023). Seeing the mouth: The importance of articulatory gestures during phonics training. Reading and Writing. https://doi.org/10.1007/s11145-023-10487-3

Redford, M., & Baese-Berk, M. (2023). Acoustic theories of speech perception. In Oxford Research Encyclopedia of Linguistics. Oxford University Press. https://doi.org/10.1093/acrefore/9780199384655.013.742

Rvachew, S., & Brosseau-Lapré, F. (2015). A randomized trial of 12-week interventions for the treatment of developmental phonological disorder in francophone children. American Journal of Speech-Language Pathology, 24(4), 637–658. https://doi.org/10.1044/2015_AJSLP-14-0056

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental phonological disorders: Foundations of clinical practice (2nd ed.). Plural Publishing.

Seidenberg, M. (2021). Miniseries on phonemes and phoneme awareness. Reading Matters.

Skipper, J. I., Devlin, J. T., & Lametti, D. R. (2017). The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain and Language, 164, 77–105. https://doi.org/10.1016/j.bandl.2016.10.004

Stokes, R. C., Venezia, J. H., & Hickok, G. (2019). The motor system’s [modest] contribution to speech perception. Psychonomic Bulletin & Review, 26(4), 1354–1366. https://doi.org/10.3758/s13423-019-01580-2

Thurmann-Moe, A. C., Melby-Lervåg, M., & Lervåg, A. (2021). The impact of articulatory consciousness training on reading and spelling literacy in students with severe dyslexia: An experimental single case study. Annals of Dyslexia, 71(3), 373–398. https://doi.org/10.1007/s11881-021-00225-1

Torgesen, J. K., Alexander, A., Wagner, R. K., & Rashotte, C. A. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34(1), 33–58.

Torgesen, J. K., Wagner, R. K., Rashotte, C. A., & Herron, J. (2018). Summary of outcomes from first grade study with Read, Write, and Type and Auditory Discrimination in Depth instruction and software with at-risk children. Florida Center for Reading Research.

Torgesen, J. K., Wagner, R. K., Rashotte, C. A., Rose, E., Lindamood, P., Conway, T., & Garvan, C. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual responses to instruction. Journal of Educational Psychology, 91(4), 579–593.

Trainin, G., Wilson, K. M., Murphy-Yagil, M., & Rankin-Erickson, J. L. (2014). Taking a different route: Contribution of articulation and metacognition to intervention with at-risk third-grade readers. Journal of Education for Students Placed at Risk (JESPAR), 19(3–4), 183–195. https://doi.org/10.1080/10824669.2014.972103

Whalen, D. H. (2019). The Motor Theory of Speech Perception. In Oxford Research Encyclopedia of Linguistics. Oxford University Press. https://doi.org/10.1093/acrefore/9780199384655.013.404

Wise, B. W., Ring, J., & Olson, R. K. (1999). Training phonological awareness with and without explicit attention to articulation. Journal of Experimental Child Psychology, 72(4), 271–304. https://doi.org/10.1006/jecp.1999.2490
