#135: Assessment 101: Formal Assessments


Listen on Apple Podcasts | Listen on Spotify

This week, we are continuing our series on assessments with a focus on formal assessments. I will touch on some things to consider when administering and interpreting scores for these norm-referenced assessments.

Let’s dive into some strategies for formal assessments, and keep reading for some excellent resources to add to your toolbox!

Interpreting Scores

1. Diagnostic Accuracy: How well does the test identify the presence/absence of a disorder?

🍏 Sensitivity: How well does the test identify children with language impairments?

🍏 Specificity: How well does the test identify children with typical language? 

✨ A test should not be used if sensitivity or specificity is below .80 ✨
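To make those two numbers concrete, here’s a minimal sketch in Python (with made-up counts, not from any real test manual) of how sensitivity and specificity fall out of a test’s identification results:

```python
# Hypothetical counts from a validation study of a language test.
true_positives = 45   # children with impairment the test correctly flagged
false_negatives = 5   # children with impairment the test missed
true_negatives = 42   # typical-language children the test correctly cleared
false_positives = 8   # typical-language children the test wrongly flagged

# Sensitivity: of the children who truly have an impairment,
# what proportion did the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of the children with typical language,
# what proportion did the test correctly identify as typical?
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.90 -> meets the .80 bar
print(f"Specificity: {specificity:.2f}")  # 0.84 -> meets the .80 bar
```

If either number came out below .80, the test would misclassify too many children to be trusted for eligibility decisions.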

2. Reliability: If you repeat the test, do you get the same score?

🍎 Inter-Examiner and Test-Retest

✨ Above .90 = Good ✨

🎯 Dart Board: Darts all over the board? Or always hit the same spot?
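Reliability is reported as a correlation coefficient between two sets of scores. Here’s a rough sketch (invented scores, purely illustrative) of how a test-retest coefficient could be computed:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical standard scores for the same ten students,
# tested twice a few weeks apart.
first_administration = [85, 92, 78, 101, 88, 95, 70, 110, 83, 99]
second_administration = [87, 90, 80, 103, 86, 97, 72, 108, 85, 101]

# Test-retest reliability: the Pearson correlation between the two
# administrations. Above .90 = good.
reliability = correlation(first_administration, second_administration)
print(f"Test-retest reliability: {reliability:.2f}")  # ~0.99 here
```

High reliability means the darts keep landing in the same spot, whether or not that spot is the bullseye.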

3. Validity: Does the test measure what it is supposed to?

🍏 Predictive (predicts later performance)

🍏 Concurrent (correlates with scores of other tests that measure the same thing)

🎯 Dart Board: Do we hit the bull’s eye?
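Concurrent validity can be checked with the same correlation machinery, this time against an established test of the same skill. A minimal sketch with invented numbers, showing how a test can be reliable yet still miss the bullseye:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores for eight students on a new vocabulary test
# and on an established vocabulary measure.
new_test = [82, 95, 77, 104, 90, 98, 68, 112]
established = [101, 79, 96, 84, 73, 108, 99, 87]

# Concurrent validity: two tests of the same construct should
# correlate strongly. A weak correlation suggests the new test may
# be measuring something else (working memory, say), even if its
# test-retest reliability is excellent.
validity = correlation(new_test, established)
print(f"Concurrent validity: {validity:.2f}")  # weak -> questionable
```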

Additional Links

The Informed SLP Article: Standardized language tests: That score might not mean what you think it means

The Informed SLP Cheat Sheet: Evaluating Standardized Language Tests: Simplified Checklist of Psychometric Properties

Virginia DOE Chart: SLP Comprehensive Assessment Card

✨ If you’re using standardized scores that are normed on a population that would not be representative of your student, you must include a disclaimer in your report that the test was not normed on that population. ✨

Check out Episode 114, Evaluations for Culturally & Linguistically Diverse Students: The Why, for more discussion!

🎧 Stay tuned for future episodes that dive into reviews of specific assessments for specific areas!

Next Up in this Pod Series

9/6/22 Assessment 101: A Checklist
9/13/22 Assessment 101: Formal Assessments
9/20/22 Assessment 101: Informal Assessments
9/27/22 Assessment 101: Language Samples

Subscribe & Review on iTunes

Are you subscribed to the podcast? If you’re not, subscribe today to get the latest episodes sent directly to you! Click here to make your listening experience auto-magic and as easy as possible.

Bonus points if you leave us a review over on iTunes → Those reviews help other SLPs find the podcast, and I love reading your feedback! Just click here to review, select “Ratings and Reviews,” “Write a Review,” and let me know what your favorite part of the podcast is.

Thanks so much!

Transcript

Speaker 1: Hello there and welcome to the SLP Now Podcast, where we share practical therapy tips and ideas for busy speech-language pathologists. Grab your favorite beverage and sit back as we dive into this week's episode.

Speaker 1: Hello there and welcome to the SLP Now Podcast. This week, we are continuing our series on assessment and diving into formal assessments, and some things that we want to consider when administering these norm-referenced assessments. One of the most important things that we want to do is make sure that we're interpreting the scores correctly. There are few worse things I think we can do as SLPs, at least in the schools, than qualifying a student who doesn't need to be qualified, or vice versa. Especially if we are identifying students as qualifying for special education when they don't actually need the services, we're doing them harm.

Speaker 1: We are removing them from the least restrictive environment. They're missing out on classroom instruction. There are statistics on having that special education label and outcomes for students. And you might be thinking, "Oh, well, if I just qualify them for a speech sound disorder, that's no problem." But they are still missing out on class time, they are getting that label, and there are a lot of other potential consequences. We just want to make sure that we're using our district's resources wisely, that we're using our resources wisely, and that we're doing what's best for students. I know that all of our hearts are absolutely in the right place, and I just think this is a helpful check-in discussion, just to make sure that we're doing our very best. We do better when we know better. So I think this is a good check-in conversation, even if it's something that we are already doing and implementing beautifully.

Speaker 1: So when we are interpreting scores, there are three measures that we can really look at. This is often in the assessment manual. It's not always there, which is a little frustrating, but I want to break down the three measures that we want to be looking at, and then we'll dive into some other strategies. The first measure is diagnostic accuracy, and that talks about how well the test identifies the presence or absence of a disorder. With sensitivity, we're asking if the test identifies children with language impairment. With specificity, we're looking at how well the test identifies children with typical language. So sensitivity: identifying the delay or disorder or the impairment. And specificity: is it identifying typical language? We should not be using an assessment if sensitivity or specificity is below 0.8, because we don't want to be inaccurately assessing or diagnosing students.

Speaker 1: The next measure that we want to look at is reliability. Reliability refers to whether, if we repeat the test, we'll get the same score again. There's inter-examiner reliability: if I give the assessment and you give the assessment, do we get the same score? And there's also test-retest reliability: if I test a student twice, do we have reliability between those scores? We want our test to have above 0.9 for reliability. Then with validity, we're asking if the test measures what it's supposed to measure, and there are two types of validity. Predictive validity predicts later performance. So if we give an assessment in preschool, does it have predictive validity for how the student is going to do later in elementary school or later in their educational career? Whereas concurrent validity looks at whether the scores correlate with the scores of other tests that measure the same thing. Is there concurrent validity between vocabulary assessments, for example? Do they measure the same thing?

Speaker 1: I always got a little bit confused between reliability and validity, so I like to think about a dart board. If a measure is reliable, then when I'm throwing darts at the board, I consistently hit the same spot instead of scattering darts all over. If I am a reliable dart thrower, all of my darts will land in the same spot. But just because it's reliable doesn't mean it's valid. I can be consistently hitting the top right of the board, but that doesn't mean I'm hitting the bullseye, and that doesn't mean I'm valid. If I am a valid dart thrower, I'm always hitting the bullseye, which means we're always measuring exactly what we want to measure. So if I'm doing a vocabulary assessment and I'm always hitting the bullseye, that means I'm always measuring vocabulary. But if I'm in the top right, maybe I'm actually measuring working memory or something else instead. That's just a visual to help with reliability versus validity.

Speaker 1: The Informed SLP has a phenomenal article and a cheat sheet, so I will link to those in the show notes at slpnow.com/135. That is a fabulous resource. The Virginia Department of Education also has a beautiful comprehensive assessment card, and it lists a lot of the measures like diagnostic accuracy, reliability, and validity, so it can be a good reference. If you have assessments and you're curious whether they're meeting the metrics that we want to hit, and you can't find it in the manual, you can reach out to the test publisher to request that information as well. You're always allowed to return a test if it doesn't meet your expectations, and sometimes the only way to find those scores is to order the test, open up the manual, and get that information. So maybe the action item for this is to open up one of your assessments and look at the measures.

Speaker 1: It's also very, very important that if we're using standardized scores that are normed on a population that's not representative of the student, we do need to include a disclaimer in the report that the test was not normed on that population. We may not even want to report the scores in that case; it may actually not be appropriate. If you want more of a discussion on that, head to Episode 114. And stay tuned for future episodes that dive into more on informal assessments next week and language samples the following week. Hope you have a fabulous rest of your day, and we'll see you soon.

Speaker 1: Thanks for listening to the SLP Now Podcast. If you enjoyed this episode, please share with your SLP friends and don't forget to subscribe to the podcast to get the latest episode sent directly to you. See you next time.