Microsoft's Resident Psychometrician
The Microsoft Learning Group's exam maven, Liberty Munson, offers some insights into exam development, piracy, and the innovations that come from asking the right questions.
MCP exams are unlike laws or sausages: it's better to know what goes into making them. For that insight, we asked Liberty Munson, the Microsoft Learning Group's exam psychometrician, to explain her role in the exam-making process in an e-mail exchange conducted over the course of a few weeks. The result is an examination of the complex issues and challenges involved in keeping exams relevant as technology evolves quickly.
MCPmag.com: How early in the process does a psychometrician contribute to test development and assess an exam's questions?
Liberty Munson: I am involved in exam development from beginning to end, but my involvement ranges from strategic to hands-on depending on the phase of development and the innovativeness of the approach being used at that particular phase. In a typical exam development cycle, my involvement during the development of the content domain (i.e., objective domain), blueprinting, and item writing tends to be strategic because the processes we have in place are robust and repeatable.
I have more hands-on involvement after the beta exam, where I closely review the psychometric performance of each item -- Is it too easy or too hard? Does it differentiate between high and low performers? Is there one psychometrically correct answer, and is it the same as the keyed correct answer? -- work with subject matter experts to set the cut score, and monitor the validity and reliability of the exam over its lifetime.
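The item checks Munson describes correspond to standard classical test theory statistics: an item's difficulty is simply the proportion of candidates who answer it correctly, and its discrimination is the correlation between getting the item right and doing well on the rest of the exam. The sketch below illustrates both; the data, function names, and the 0.2 cutoff are hypothetical illustrations, not Microsoft's actual analysis code.

    # Classical item analysis: difficulty (p-value) and discrimination
    # (corrected point-biserial). Synthetic data, illustrative only.
    import numpy as np

    def item_statistics(responses: np.ndarray):
        """responses: 0/1 matrix, rows = candidates, columns = items."""
        total_scores = responses.sum(axis=1)
        stats = []
        for i in range(responses.shape[1]):
            item = responses[:, i]
            difficulty = item.mean()            # near 1.0 = too easy, near 0.0 = too hard
            rest_score = total_scores - item    # exclude the item from the total it's compared against
            discrimination = np.corrcoef(item, rest_score)[0, 1]
            stats.append((i, difficulty, discrimination))
        return stats

    rng = np.random.default_rng(0)
    demo = (rng.random((200, 10)) > 0.4).astype(int)      # synthetic responses
    for i, p, r in item_statistics(demo):
        if r < 0.2:                                        # a common rule-of-thumb flag
            print(f"Item {i} may need review: difficulty={p:.2f}, discrimination={r:.2f}")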
In atypical exam development cycles, such as when we introduce some type of innovation to the exam development process, I'm actively involved in those sessions to ensure that the innovation doesn't undermine the quality of the exam; as these processes become repeatable, I become less involved in actual facilitation of the session.
Even when I'm not actively involved in a phase of exam development, I spend a lot of time answering questions about tweaks to the process and how they are likely to impact the quality of the exam and our compliance with ANSI.
"An interesting challenge for psychometricians is the impact of piracy on our traditional psychometric analyses and interpretations because our psychometric standards and processes assume that the exam is secure." -- Liberty Munson, exam psychometrician, Microsoft Learning Group |
How many psychometricians work for Microsoft Learning?
Just one -- me.
What does a psychometrician do, exactly, and why is it so important to the process?
A psychometrician's job is to ensure the validity and reliability of an exam. Every choice that we make about how an exam is developed and maintained over its lifetime could have an impact on its validity and reliability. A good psychometrician understands the impact of each choice and ensures that the integrity of the exam is at the forefront of the decision maker's mind -- and the decision maker is not always the psychometrician. An even better psychometrician can find the compromise between the needs of the business and the needs of the exam -- which often compete against each other -- without sacrificing the validity of the exam.
Do you compare notes with psychometricians from other programs? What are top issues for psychometricians these days?
Performance-based testing (PBT) is a challenging psychometric issue because psychometric standards were not written with these item types in mind. As a result, many psychometricians interpret and apply these standards differently when it comes to PBT items.
Another interesting challenge for psychometricians is the impact of piracy on our traditional psychometric analyses and interpretations because our psychometric standards and processes assume that the exam is secure. In fact, several important psychometric analyses are based on the relationship between item and overall exam performance. As a result, new items that are added to a compromised exam tend to perform poorly from a psychometric perspective because high-performing cheaters don't know the answers to these questions.
Following the strict interpretation of psychometric guidelines requires removing the item from the pool, but odds are the item isn't bad ... it's a good item in a "bad" situation. Psychometricians need to think differently about how to do these analyses, and because these decisions impact the interpretations of the validity of an exam, we need to agree on the appropriate standards and analyses.
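To see why, consider a toy example: one group of honest candidates with middling ability and one group of cheaters who have memorized the leaked item pool. The simulation below uses entirely made-up numbers -- not data from any real exam -- to show how a perfectly sound new item can end up with a weak or even negative item-total correlation.

    # Synthetic illustration of a "good item in a bad situation":
    # cheaters ace the leaked items but miss the new one, dragging
    # its item-total correlation down.
    import numpy as np

    rng = np.random.default_rng(1)
    n_honest, n_cheaters, n_old_items = 300, 200, 40

    honest_old = (rng.random((n_honest, n_old_items)) < 0.65).astype(int)
    honest_new = (rng.random(n_honest) < 0.65).astype(int)

    cheat_old = (rng.random((n_cheaters, n_old_items)) < 0.95).astype(int)  # memorized items
    cheat_new = (rng.random(n_cheaters) < 0.25).astype(int)                 # unseen new item

    old_totals = np.concatenate([honest_old.sum(axis=1), cheat_old.sum(axis=1)])
    new_item = np.concatenate([honest_new, cheat_new])

    print("item-total correlation:", np.corrcoef(new_item, old_totals)[0, 1])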
Speaking of piracy, this is a huge concern for not only psychometricians but for every certifying body. Psychometricians in all certification industries are constantly looking for ways to proactively mitigate piracy through innovative delivery and forms assembly methods that also ensure the validity and reliability of the exam.
There are quite a few exams out and live. Does a psychometrician continue to monitor the results, and if so, what do you learn from them?
Yes, I review the psychometric performance of every exam in every language at least once per year to ensure that it remains a valid and reliable measure of the content area. From this analysis, I am able to identify items that are no longer functioning from a psychometric perspective (those that have become too easy, are no longer relevant, or no longer differentiate between high and low performers) and to determine whether the exam is a valid and reliable measure of the content area -- and whether it will continue to be if we make changes to it.
Additionally, I review the performance of each form in each language to ensure that they're psychometrically equivalent in terms of difficulty, reliability, passing rate, and mean, median, and modal scores, etc. Depending on the results of that analysis, we will remove or fix the content that doesn't meet Microsoft's psychometric standards or create new content for the exam.
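The form-level statistics Munson mentions also have textbook counterparts. The sketch below -- with synthetic response data and an assumed cut score, not Microsoft's actual process or thresholds -- shows how reliability, passing rate, and the score distribution might be summarized and compared across two forms.

    # Form-equivalence summary: reliability (KR-20 / Cronbach's alpha for
    # 0/1-scored items), passing rate, and central tendency. Synthetic data only.
    import numpy as np
    from statistics import median, mode

    def cronbach_alpha(responses: np.ndarray) -> float:
        k = responses.shape[1]
        item_var = responses.var(axis=0, ddof=1).sum()
        total_var = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    def form_summary(responses: np.ndarray, cut_score: int) -> dict:
        totals = responses.sum(axis=1)
        return {
            "reliability": round(cronbach_alpha(responses), 3),
            "passing_rate": round(float((totals >= cut_score).mean()), 3),
            "mean": round(float(totals.mean()), 1),
            "median": median(totals.tolist()),
            "mode": mode(totals.tolist()),
        }

    rng = np.random.default_rng(2)
    form_a = (rng.random((500, 50)) > 0.35).astype(int)   # hypothetical form A
    form_b = (rng.random((500, 50)) > 0.40).astype(int)   # hypothetical form B
    print(form_summary(form_a, cut_score=33))
    print(form_summary(form_b, cut_score=33))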
I will psychometrically review exams more frequently if I see a pattern in the item challenges that we receive about a particular exam, if those item challenges reveal technically inaccurate content, or if satisfaction with the exam is below our threshold or drops dramatically from one quarter to the next.
If a service pack is released, we review the content to ensure that these changes don't impact the validity, reliability, and psychometric quality of the exam. If we make changes based on these service packs, we will indicate that in both the prep guide and the items themselves. If it isn't specified in either the prep guide or the item, candidates should assume that the question covers RTM functionality.
Stay tuned: In the next part of this interview, Munson addresses piracy, cheating, and the difficulty of developing questions for developer and business-oriented exams.
About the Author
Michael Domingo has held several positions at 1105 Media, and is currently the editor in chief of Visual Studio Magazine.