
Craps dice, like the state of education reform via numbers crunching these days, August 6, 2006. (Source/Roland Scheicher via Wikipedia). In public domain.

This could just as easily be titled, “Why your multivariate regression analysis isn’t better than my chi-square test,” because that is the state of mainstream education research these days. I find it stifling, like being wrapped in Saran Wrap covered by a condom lined in sheep’s intestines.

Numbers have their place, but the field’s obsession with crunching numbers for trends that defy quantification has increased as a result of federal mandates like NCLB and philanthropy’s accountability movement over the past fifteen years. What’s the long-term impact of the thousands of studies and the deployment of thousands of psychometricians and research analysts in P-20 education reform (that’s early childhood education, K-12, undergraduate and graduate education combined)? Not much, because our politicians and philanthropists are staking themselves to trends almost regardless of numbers.

It all started for me about this time twenty years ago. I did an independent study with Bruce Anthony Jones, then an assistant professor in the University of Pittsburgh School of Education's ed policy and administration department. In that one semester, I quickly learned that folks in the education field defined research in only two ways: quantitative and qualitative. And by qualitative, they meant soft research, like Carvel's soft-serve ice cream. What I didn't know was that many in the field were working to make the qualitative — surveys, focus groups, oral interviews/transcripts — quantifiable.

Linear regression graph with over 200 data points, February 22, 2009. (Source/Michael Hardy via Wikipedia). In public domain.

Today, everything that can be tracked in American education usually has a number attached to it. It's hardly just grades and standardized test scores anymore: homework hours, time on task for a single problem that may appear on a high-stakes state exam, teacher effectiveness ratings, suspensions and disciplinary reports disaggregated by race and gender. It drives me nuts, and I've used SAS and SPSS before, during my grad school days. I can only imagine how a teacher who just wants their students to learn and do well must feel about this numbers game.

But if education has become a numbers game, it most resembles the game of craps. Take the issue of teacher effectiveness, often tied to state mandates around test scores and students meeting or exceeding a percentile at a given school on these tests. Say a school as a whole actually exceeds the proficiency percentile. It may well receive more money, and teachers may well get a bonus (depending on the state, the school district, and union contracts, which, by the way, may also be part of a statistical formula). None of this actually proves that these students are better prepared for, say, thinking independently or critically, because critical reasoning isn't tested by most of the high-stakes state tests.

Michelle Rhee, former DCPS Chancellor, one of many who've taken advantage of education as a craps game, Washington, DC, February 19, 2008. (Source/US Department of Labor). In public domain.

Nor can these scores show the writing skills necessary for student success later in their education, as most of these tests don't assess writing either. Most importantly, where does teacher effectiveness come in as a factor? Do we have to account for time on task for each exam item, like a psychometrician at the Educational Testing Service (ETS) would? Do we factor out home studying/homework time, parents' education, income and race, or whether students eat a hearty breakfast the morning of the exam? Or do we continue to simply say that if Teacher X gets Class A to raise its state test score by 25 percent, they get a raise and a pat on the head? Really?

What’s more, whether the issue is teacher effectiveness, student success, or free and reduced lunch programs, politicians, parents and pundits hardly look at any numbers beyond a report’s executive summary. We all insist that our schools, community colleges and universities get better at graduating students ready for the real world of work. Fine. Then we insist on lower taxes, blame teachers, destroy unions, and complain about the state of things while doing nothing to make education work for all of our students. Not fine.

It doesn’t take a two-year study from The Education Trust to realize that there’s no one-to-one correlation between an effective teacher and higher student test scores. Or a report from the Institute of Education Sciences at the US Department of Education to know that a lunch of murder burgers and suicide fries with ketchup as a vegetable is about as nutritious as a Taco Bell gordita. School districts and many a college have gone without even adequate resources for years. But instead of providing those resources, we make them kneel and beg for them, and then expect them to perform Lazarus-type miracles in the process.

Taco Bell's Gordita Supreme, September 22, 2011. (Source/TacoBell.com).

We waste time with numbers and spend little time on causes and solutions that make sense in education. I think about that weak +0.4 correlation number that ETS has put out for years regarding the SAT: the relationship between doing well on the exam and finishing the freshman year of college with something like a 3.0 GPA. I scored an 1120 on the SAT in October ’86, not exactly the greatest score. But I did manage a 3.02 average my first year (and a 2.63 my first semester, by the way), and still came within a few days of dropping out because I was homeless at the beginning of my sophomore year.
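For what it's worth, the weakness of that +0.4 figure takes one line of arithmetic to see: squaring a correlation gives the share of variance in the outcome that the predictor accounts for. A minimal sketch (the 0.4 value is the ETS figure cited above; the squaring is just standard statistics, not anything from their reports):

```python
# Squaring a correlation coefficient r gives r^2, the proportion of
# variance in the outcome (freshman GPA) that the predictor (SAT score)
# accounts for. With r = 0.4, that's 16% -- leaving 84% to everything
# else: homelessness, homesickness, crushes, breakfast.
r = 0.4
r_squared = r ** 2

print(f"r = {r}, r^2 = {r_squared:.2f}")
print(f"variance explained: {r_squared:.0%}")
print(f"variance unexplained: {1 - r_squared:.0%}")
```

In other words, even taking the number at face value, the test leaves roughly five-sixths of the story untold.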

I dare say the numbers crunchers at ETS didn’t factor that into their multivariate analysis. Or my homesickness, or my obsession with a former high school crush. Mark Twain was right about statistics — they’re “lies, damned lies, and statistics” (or “a good walk spoiled,” I think).