August 29, 2013

The Pygmalion Effect

I think I've mentioned in previous posts that I'm earning a second master's (MS) in Instructional Design and Educational Technology; a new update is that I'm also earning a certificate in motivation + learning environments through the Educational Psychology department to coincide with my degree. My Ed Psych course for this semester is Seminal Readings in Education and Educational Psychology, so I might blog about either Ed Tech or Ed Psych as I go along.

Image from theinsideouteffect.com

Today we discussed some readings we did on the Pygmalion Effect. This is the notion that preconceived expectations for others impact performance or an outcome; essentially, it's the self-fulfilling prophecy. What's interesting is that these preconceived expectations have the same effect whether they are self-generated or imposed by an outside source (though naturalistic expectations are stronger). For example, foremen in a warehouse were told that certain employees had done well or poorly on an exam for the job (regardless of how they actually did), and the foremen rated the employees they believed to be smarter as better and more efficient. Another study experimented on mice (I am not a fan of this, but...) where mice were either lesioned through lobotomy or made to look like they were, so the mice handlers could not tell the difference. Handlers were told the mice were either bright or dull, regardless of the lobotomy. Unsurprisingly, lesion-free mice whose handlers were led to believe they were bright performed the best. What was surprising was that lesion-free mice whose handlers were led to believe they were dull performed just as poorly as the lesioned mice deemed dull.

We are looking at this research as it relates more directly to classrooms and formal education next week, but there are huge implications. Visual cues are one of the most important factors in all of this. There is a study in psychology on "thin-slicing" (person perception based on superficial aspects in a short period of time... so, first impressions, essentially) for student perceptions of teachers, where students watched 30-second clips of teachers teaching with no sound and had to rate their effectiveness as teachers based on that video clip alone. The study found that the students watching the clips gave nearly the same ratings of the teachers as the students who actually completed the class and filled out TCEs.
Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology,64(3), 431-441.
Another (and very recent) study by Chia-Jung Tsay did something similar: people received short, silent clips of musicians competing in formal events and had to predict who won based on the videos alone. Accuracy in guessing was astounding, and visual impressions clearly had a greater impact than actual talent. When participants tried to base their ranking guesses on audio alone, they were not able to distinguish who won. Tsay points out that this "suggests that the visual trumps the audio, even in a setting where audio information should matter much more."

In looking at perception of self and perception of librarians by patrons, students, faculty, etc., this is important to think about (and something we are examining in the Librarian Wardrobe book). How we are perceived by others might influence how they evaluate us, and how we perceive others might influence how we evaluate them. If visual cues are especially important, then understanding how we present ourselves, whether in gesturing, other physical movements, or clothing, matters, and studying how we dress and how the public perceives us would be quite significant.

August 10, 2013

IRB: Your research isn't Research


Image from Southern Fried Science
I'm writing this post in part to procrastinate on my 3rd-year review packet candidate statement (overwhelming!) and also to share some interesting facts I learned from meeting with the IRB on campus to discuss restrictions on my study of the effectiveness of digital badges in an IL course for student success.

There was a lot of confusion at first when filling out the IRB application because our study was so low risk, yet we had to deal with so many restrictions. Our IRB rep explained that, historically, any research involving human subjects needed to come through them, and this was a ton of work for them to handle even when proposed studies didn't really need to be under their jurisdiction. More recently, "Research" with a capital R has been defined by the federal government as being generalizable (some of this info might be here, though our rep said there isn't one definitive page or site explaining this yet for the general public). This is wholly different from research that is not meant to be generalizable, for example: program evaluations, quality improvement, case studies, etc.

It was funny because, after providing the explanation, our rep wanted to backtrack and say that she wasn't implying we aren't doing "research"; we just aren't doing "Research." We are in fact doing program evaluation research because we are measuring a specific program at the UA in order to improve it and will be showing what was successful or problematic for us. We would not be saying our results clearly apply to all IL programs or credit courses across the country. However, if we did want to try to prove that somehow, we would need to stay under their jurisdiction (we have been approved as Exempt level 2, I believe). If we were to stay under the IRB, we would have to keep them updated on any changes to our methods and the student consent form, as well as any changes in how we obtain data. We would also have a ton of restrictions on what we can and cannot access in student info.

Filling out what's called the "309 form" here at the UA to essentially rescind our IRB application (I am thrilled to do this, though for my own sanity it's probably best not to think about how much time I spent on the application) will then move us to program evaluation, and we can essentially do whatever we want so long as it's generally ethical and follows FERPA regulations.

Under the IRB, we would have had to make the study opt-in for students, anonymize data (potentially missing out on seeing trends), and be cautious about asking certain questions in our survey. As program evaluation, we can really gather data any way we want.

My co-researcher made a good point that the distinction between Research and research is in the eye of the beholder (the reader). We won't be saying our research is generalizable, but obviously if someone is reading the article (if we are able to get published, of course), that means they are considering how they might apply our findings to their own program or credit course. It can be very nuanced.

So anyhow, I just wanted to share this. It sounds like this distinction is getting a big push not just on our campus but all over. It conflicts with the LIS anxiety over publishing Research, but program evaluation is not any less important; the only distinction is that it does not go through the IRB.