Communication Series: Using Neutral Language

Emotions run high when issues are near and dear to our hearts. As a listener or facilitator, you can help a speaker both by using neutral language yourself AND by summarizing what you hear in neutral language. Listen in as our novice facilitator receives coaching on this critical skill. Think about how you can use it with colleagues, in meetings, and with family members to guide productive discussions.

This video is the second in a series on using effective communication skills.

Funded by the FIEP (Facilitating IEPs) Project at ESC Region XIII.

Questions?
Contact Linda McDaniel at 512-919-5225 or at linda.mcdaniel@esc13.txed.net.

Course Codes for Modified Curriculum

The PEIMS C022 Table has been revised to include course codes (Service ID Codes) for students who need modified curriculum in courses linked to STAAR EOC (End of Course) assessments.

Region XIII has created a document to help you review the new codes and their corresponding assessment.

You can access the C022 Table here.

Language Impairment Eligibility: Which test should I use?

The current language eligibility guidelines provided by TSHA, and adopted by many districts across the state, consider students with a language disorder eligible for speech-language pathology services in the schools when their standard scores on a language battery fall 1.5 or more standard deviations below the mean. This is consistent with the eligibility criteria of many other states, as well as the practice of many researchers in the field (Spaulding, Plante, & Farinella, 2006). Yet how do we know that children with language disorders will actually score that low on any given language test? This is the question Spaulding, Plante, and Farinella asked in their 2006 article in Language, Speech, and Hearing Services in Schools. In their study of a large number of language tests, they were interested in two things: (1) Did children with language disorders typically score at the low end of “normal” relative to the norming sample? (2) Which tests provided specificity information (i.e., the percentage of typical children diagnosed as having typical language) and sensitivity information (i.e., the percentage of disordered children diagnosed as having a language impairment) in their manuals?
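As a quick illustration of what that criterion means in practice, here is a minimal sketch of the cutoff arithmetic, assuming the common standard-score scale with a mean of 100 and a standard deviation of 15 (always check the specific test's manual, since scales can differ):

```python
# Standard score cutoff under a "1.5 SD below the mean" criterion,
# assuming the common standard-score scale (mean 100, SD 15).
# These values are illustrative; individual tests may use other scales.

MEAN = 100
SD = 15

def eligibility_cutoff(mean=MEAN, sd=SD, criterion=1.5):
    """Return the standard score that sits `criterion` SDs below the mean."""
    return mean - criterion * sd

print(eligibility_cutoff())               # 77.5 under the 1.5 SD criterion
print(eligibility_cutoff(criterion=1.0))  # 85.0 under a 1 SD criterion
```

So a student would need a standard score of 77.5 or below to meet a strict 1.5 SD cutoff on this scale.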

So, what did they find? After looking at 43 tests, they found that only 10 reported score differences greater than 1.5 standard deviations between children with and without language impairments. The average mean difference between typical and disordered children’s scores in the norming samples was 1.34 SD. On 9 of the tests, most of the children with language impairment scored within 1 SD of the mean! Are there students with language impairments who aren’t made eligible simply because of the test we selected to administer?

Research indicates that sensitivity and specificity are a better way to determine the accuracy of a test for the purpose of identifying a disorder (Spaulding et al., 2006). Specificity is how accurately the test identifies typical children as typical, and sensitivity is how accurately it identifies disordered children as disordered. Specificity and sensitivity information was provided in the manuals of only 9 of the tests, and only 5 of those reported sensitivity and specificity good enough to support their use in identifying language disorders: the CELF-4, PLS-4, Test of Narrative Language, Test of Early Grammatical Impairment, and Test of Language Competence – Expanded Edition. I frequently see the CELF and PLS in use in the schools, but the others are not used nearly as often; in fact, the TLC-E hasn’t been updated since 1989!
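To make those two definitions concrete, here is a minimal sketch of how sensitivity and specificity are computed from identification counts. The numbers below are hypothetical, invented purely for illustration, and do not come from the Spaulding et al. study:

```python
# Sensitivity and specificity from identification counts.
# All counts below are made-up illustrative numbers.

def sensitivity(true_positives, false_negatives):
    """Proportion of children with a disorder whom the test flags as disordered."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Proportion of typically developing children whom the test classifies as typical."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical validation sample: 100 children with confirmed language
# impairment and 100 typically developing children.
print(sensitivity(true_positives=85, false_negatives=15))  # 0.85
print(specificity(true_negatives=90, false_positives=10))  # 0.9
```

In this made-up sample, the test catches 85% of children who truly have an impairment and correctly clears 90% of typical children; a test manual that reports figures in this range is giving you evidence you can weigh.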

So what does this mean for us?  The authors of this study recommend that if we are using a test to determine eligibility for services, or to identify a disorder, we should make sure that the specificity and sensitivity data from the test support its use in that way.  Further, we should not consider that 1.5 standard deviation score to be a “cut off” for services without gathering other data to support our decision.  If everything says the child is disordered, but the test does not, maybe the problem is with the test.

Spaulding, T. J., Plante, E., & Farinella, K. A. (2006). Eligibility criteria for language impairment: Is the low end of normal always appropriate? Language, Speech, and Hearing Services in Schools, 37, 61-72.

How Do I Interpret the Research I Read? What to Know about Effect Size

We all (at least vaguely) remember those statistics and research methods classes in which we learned about ANOVA, p values, and all of that other complicated statistical stuff. At our SLP Hill Country Institute this week, Dr. Ron Gillam, from Utah State University, reminded me of the one statistic to which we SHOULD pay attention: effect size. All of the American Speech-Language-Hearing Association journals now require that intervention studies report effect size. So, what exactly is it, and what do you need to know?

Effect sizes “estimate the magnitude of effect or association between two or more variables” (Ferguson, 2009, p. 1). Basically, an effect size tells you how big a change was seen in the study’s participants. While statistics like p values tell you how likely a result would be by chance, effect size tells you whether the change was meaningful.

While there are a number of different indexes of effect size, one of the most commonly used is Cohen’s d: the difference between two means divided by the standard deviation. Here’s a simple example. Suppose you planned an intervention and were pre- and post-testing with the CELF. On the pre-test, the student got a standard score of 80, and after intervention, on the post-test, the student got a standard score of 90. The difference between the pre-test and post-test scores is 10 points. To determine the effect size, you divide that difference by the standard deviation, which is 15 for CELF standard scores. So, 10/15 is about .67. What does that mean?

Generally, a large effect size is .8 or higher, which is almost 1 standard deviation of change.  That kind of effect can be observed just by looking at the child.  A medium effect is .5 to .79.  That effect could be seen through the administration of some kind of test, but is not readily observable.  A small effect is .2 to .49.  A small effect is a barely noticeable change, even with a test.
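Putting the worked example and the size bands together, here is a minimal sketch of the calculation. The .2/.5/.8 labels follow the conventional bands described above; the pre/post scores and SD of 15 come from the CELF example:

```python
# Cohen's d for the pre/post example above, using the CELF's
# standard-score SD of 15. Size labels follow the conventional
# .2 / .5 / .8 bands.

def cohens_d(pre, post, sd=15.0):
    """Difference between two means divided by the standard deviation."""
    return (post - pre) / sd

def describe_effect(d):
    """Map an effect size to the conventional small/medium/large bands."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

d = cohens_d(pre=80, post=90)
print(round(d, 2))         # 0.67
print(describe_effect(d))  # medium
```

A 10-point gain on a test with an SD of 15 works out to a medium effect: real change you could detect with a test, but probably not obvious from simply observing the child.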

So, the next time you are reading an article on an intervention in your ASHA journals, I hope you take a moment to look for the effect size to help you determine if the evidence indicates that it is an intervention you might want to use in therapy.

Ferguson, C. J. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40(5), 532-538.

Inclusion Institute for Secondary Educators 2011

Many of us were able to escape Austin’s record-breaking heatwave by attending the first annual Inclusion Institute for Secondary Educators at Education Service Center Region XIII on June 22, 2011. We appreciate the contributions all of our speakers made to the day’s success, especially Dr. Lisa Dieker and Dr. Kelly Grillo from the University of Central Florida. Their personal experiences and expertise in serving all students at the secondary level made an impact on every participant. Below are links and documents graciously shared by some of our speakers.

Mark your calendar for June 18 & 19, 2012 for the expanded version of the Inclusion Institute for Secondary Educators.

If you have questions regarding the 2011 or the 2012 institutes, do not hesitate to contact me.

Contact Cathy Miller at 512-919-5160 or at cathy.miller@esc13.txed.net.