A Fulton County grand jury indicted former Atlanta Public Schools Superintendent Beverly Hall and 34 others — top aides, principals, teachers, and a secretary — on charges of racketeering, theft by taking (for the bonuses they received for good test scores), and making false statements or writings, charges that provided the basis for the Racketeer Influenced and Corrupt Organizations (RICO) count.
The indictment came about 20 months after Governor Deal released the results of an investigation carried out by two attorneys appointed by former Governor Sonny Perdue and an investigative team of about 50 GBI agents, who fanned out into schools and classrooms to interrogate educators suspected of involvement in the “Atlanta Cheating Scandal.”
As stated in the Governor’s Investigative Report, a “culture of fear” took over the Atlanta School System and led to a conspiracy of silence that enabled the bubble-sheet erasure scandal to happen.
In light of the grand jury indictments, I am re-posting a blog article published a year ago, on March 26, 2012. My post was a reaction to an article written by the Atlanta Journal-Constitution’s cheating investigative team, which has published 30 articles on the cheating scandal, not only in Atlanta but throughout the country, over the past five years.
Although cheating did occur in some Atlanta schools, testing experts have questioned the data analysis conducted by the AJC, suggesting that the paper should have been very clear that factors other than cheating could have accounted for wide swings in test results.
Here’s the article published last year:
Suspicions About the Atlanta Journal’s Investigation into Cheating Across the Nation
March 26, 2012 | Filed under: Assessment
On a Sunday in March 2012, the Atlanta Journal-Constitution (AJC), a Cox newspaper, published the results of its investigation into “cheating” in American schools. The article was entitled Suspicious Scores Across the Nation, and you can read it by following the link. The article was subtitled “Cheating Our Children.”
I was immediately suspicious of the report that the AJC published. The paper has put into place an aggressive team of watchdog reporters, database specialists, and investigative reporters. Whose interests motivate this team and this newspaper? After reading the report, my suspicions were reaffirmed. Let me explain.
The data (a collection of facts, observations, and measurements) available to this team is a database specialist’s fantasy. Fifty states provided standardized testing data from 69,000 schools and 14,743 districts, covering 13 million students and representing 1.6 million records. Most states provided the data immediately (public records law); a few Departments of Education hedged a bit but eventually sent their data.
Comparing Average Scores, Not Individual Students
According to the researchers,
For each state, grade, cohort and year, we created a linear regression model, weighted by the number of students in a class, and compared the average score for a class with the score predicted by the model based on the previous year’s average score. We then calculated a p-value — an estimated probability that such a difference would occur by chance — using standardized residuals and the “T” probability distribution, which adjusts the probability upward for classes with fewer students. (Links Mine)
What the researchers looked for were scores rising or falling with probabilities less than 0.05. These were flagged. Maybe the bubble sheets were erased and correct answers added? That would mean someone cheated, wouldn’t it? Or could there be other explanations?
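To make the quoted method concrete, here is a minimal sketch of that kind of flagging analysis. It is my own illustration with made-up class averages and enrollment counts, not the AJC’s actual code or data: it fits a weighted regression of this year’s class average on last year’s, standardizes the residuals, and flags classes whose change from the prediction would be improbable (p < 0.05) under the t distribution.

```python
# Hypothetical illustration of the kind of flagging analysis the AJC describes
# (my own invented numbers and code, not the newspaper's data or model).
import numpy as np
from scipy import stats

# One row per class/cohort: last year's class average, this year's class average,
# and the number of students (used as the regression weight).
prior_avg   = np.array([45.1, 48.3, 50.0, 52.4, 54.2, 55.0, 56.3, 58.1, 60.0, 61.8, 63.5, 66.2])
current_avg = np.array([46.0, 49.1, 51.2, 53.0, 55.3, 76.4, 57.1, 59.4, 60.8, 62.9, 64.3, 67.5])
n_students  = np.array([28,   31,   25,   34,   30,   29,   27,   33,   26,   32,   30,   24])

# Weighted linear regression: predict this year's class average from last year's.
X = np.column_stack([np.ones_like(prior_avg), prior_avg])
W = np.diag(n_students)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ current_avg)
predicted = X @ beta

# Standardized residuals and two-sided p-values from the t distribution;
# smaller classes get larger p-values because their averages are noisier.
residuals = current_avg - predicted
dof = len(current_avg) - 2
sigma = np.sqrt(np.sum(n_students * residuals**2) / dof)
std_resid = residuals * np.sqrt(n_students) / sigma
p_values = 2 * stats.t.sf(np.abs(std_resid), df=dof)

# Flag classes whose change from the predicted average is improbable at the 0.05 level.
for i, p in enumerate(p_values):
    flag = "FLAGGED" if p < 0.05 else ""
    print(f"class {i}: predicted {predicted[i]:5.1f}, actual {current_avg[i]:5.1f}, p = {p:.3f} {flag}")
```

In this toy data set only the class with the 20-point jump gets flagged, which is the whole output of such a model: an improbable change, not a finding of who touched the answer sheets.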
Is cheating what caused some scores to change at a probability level that you wouldn’t expect? According to the Journal-Constitution,
A statistical analysis cannot prove cheating. It can only identify improbable events that can be caused by cheating and should be investigated.
If it smells like cheating, it must be cheating
But if you read the AJC’s initial article and the one they published today (Cheating our children: AJC’s testing investigation spurs action), the only suggestion the Journal makes is that the t-scores might indicate cheating. And indeed, in the latter article, Georgia U.S. Senator Johnny Isakson said that the report is troubling, and that if the districts don’t do something about it, then the governors should; if they don’t, then “I don’t think Congress should look the other way.” I am sorry, Mr. Isakson, but Congress has looked the other way since the NCLB Act was signed into law ten years ago.
Here is what the Atlanta Journal staff had to say about their research model:
A statistical analysis cannot prove cheating. It can only identify improbable events that can be caused by cheating and should be investigated.
Ideally we would look at how individual student test scores change from year to year, but federal privacy regulations precluded access to that data. The approximate cohorts we used were the only available substitute. It is unlikely that two groups of students in a cohort are perfectly identical. Urban districts in particular have high student mobility.
In the model the data analysts used, the average scores from one year to the next were not necessarily based on the same population of students. They didn’t have student-level data; they only had average data for a school, by subject. So it is possible that they are making predictions based on unreliable data.
High student mobility might give us a clue about other possible explanations for score changes, given the demographics of the schools and districts that were highlighted. For example, if you look at the districts highlighted in the article (Amarillo, Atlanta, Baltimore, Chicago, Dallas, Detroit, East St. Louis, Fresno, Gary, IN, Houston, Los Angeles, and Mobile County, AL), these districts reported that at least 64% of their students were eligible for free or reduced-price meals. The poverty concentration of these schools averaged 80%. Mobility is high in these districts.
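To see how much churn alone can matter, here is a toy simulation of my own. The numbers are invented, not drawn from any district: a class of 30 in which 12 seats turn over between years, with incoming students drawn from a lower-scoring, more variable pool. No individual score is inflated, yet the class average still moves from year to year.

```python
# Toy mobility simulation (invented assumptions, not data from any district).
import numpy as np

rng = np.random.default_rng(42)

def two_year_swing(n_stable=18, n_mobile=12):
    # Students who sit both tests; assume their own performance barely changes.
    stable = rng.normal(loc=55, scale=10, size=n_stable)
    # The mobile seats are filled by different students each year.
    mobile_year1 = rng.normal(loc=45, scale=15, size=n_mobile)
    mobile_year2 = rng.normal(loc=45, scale=15, size=n_mobile)
    avg_year1 = np.concatenate([stable, mobile_year1]).mean()
    avg_year2 = np.concatenate([stable, mobile_year2]).mean()
    return avg_year2 - avg_year1

swings = np.array([two_year_swing() for _ in range(10_000)])
print(f"typical year-to-year swing (std dev): {swings.std():.1f} points")
print(f"share of classes swinging more than 5 points: {(np.abs(swings) > 5).mean():.1%}")
```

Even in this tame setup, a noticeable share of classes swing by five or more points purely because different children filled the seats, which is exactly the kind of movement a school-average model cannot distinguish from tampering.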
Critique of Methodology used by the Atlanta Journal Data Analysts
Gary Miron, professor of education at Western Michigan University, was one of four academics and testing specialists who advised USA Today and its affiliated Gannett newspapers on a multi-state analysis of irregularities in assessment data. In his article published on The Answer Sheet, Dr. Miron writes about some of his concerns with the research method used by the Atlanta Journal-Constitution.
My review, however, yielded serious concerns about the data used, the methods of analysis employed, and the conclusions drawn. I shared these concerns with journalists at the Dayton Daily News, which is one of the Cox affiliates involved in this story.
To be clear, the Cox analysis may accurately detect large variations in assessment results from year to year. But my analysis of the data suggests that these irregularities are less likely due to real cheating than due to mobility in student population (recall the lack of student-level data). Although the Cox news articles on this study offer a disclaimer that their analysis does not actually prove cheating, this disclaimer should be expanded considerably.
The evidence is that the AJC may be practicing sensationalism, throwing the data up against the wall and hoping that some of it will stick. The map they published is impressive, but it raises more questions than it answers, and the writers of the article were quick to draw the “cheat” card. “Cheating” was mentioned 57 times in the first article of 3,000 words and 31 times in the second article, which was 978 words long. Mobility, or any mention of changing student populations, came up once in both articles. One might raise the issue that the publication of this article is self-serving. Are they really interested in uncovering serious issues facing our nation’s schools, or are they more interested in selling newspapers and receiving awards for investigative reporting? I really don’t know. But the thought crossed my mind.
Dr. Miron conveys his concerns about the method used by the AJC. Here are his concerns:
- As noted, the analysis is based on school-level data and not individual student-level data. Accordingly, it was not possible to ensure that the same students were in the group in both years.
- The analysis of irregular jumps in test scores should have been coupled with irregularities in erasure data where this data was available.
- The analysis by Cox generates predicted values for schools, but it does not incorporate demographic characteristics of the student population (a sketch of this idea follows the list).
- The limited details available on the study methods made it impossible to replicate and verify what the journalists were doing. Further, the rationale was unclear for some of the steps they took.
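As a rough illustration of that third concern, the earlier regression sketch could be extended with a demographic covariate. Again, this is my own hypothetical example with invented numbers; it only shows where a variable such as the percentage of students eligible for free or reduced-price meals would enter the model.

```python
# Hypothetical extension of the earlier sketch: add a demographic covariate so the
# predicted class average is adjusted for the demographic mix of each class.
import numpy as np

prior_avg   = np.array([48.2, 55.3, 60.1, 52.8, 58.7])   # made-up class averages
pct_frl     = np.array([0.92, 0.85, 0.40, 0.88, 0.55])   # made-up free/reduced-meal rates
current_avg = np.array([49.0, 56.1, 61.0, 53.1, 59.3])
n_students  = np.array([28, 25, 40, 35, 29])

# Design matrix: intercept, last year's average, and the poverty rate.
X = np.column_stack([np.ones_like(prior_avg), prior_avg, pct_frl])
W = np.diag(n_students)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ current_avg)
predicted = X @ beta
print(np.round(predicted, 1))
```

A real analysis would of course need many more classes than this to estimate the extra coefficient reliably, but the point stands: a prediction that ignores who the students are will blame the school for differences that belong to demographics.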
Lisa Delpit writes: “I am angry at the machinations of those who, with so little knowledge of learning, of teachers, or of children, are twisting the life out of schools.”
She adds that the current use of standardized tests, which promotes competition between schools and is used to evaluate teachers and principals and to decide their salaries, brings out the worst in adults. This should be no surprise to the Atlanta Journal staff, but they fail to pursue the data other than to blame schools for their current state. This is not about cheating.
According to Dr. Delpit, “the problem is that the cultural framework of our country has, almost since its start, dictated that ‘black’ is bad and less than and in all arenas ‘white’ is good and superior.”
To point to urban schools as the center of cheating and surprising test results leaves us with many questions.
- Why don’t we talk about how we got ourselves into this mess in the first place?
- What has been the effect of the NCLB policy that has turned schools into test factories, interested only in finding out what low-level knowledge students might have learned in school?
We simply cannot continue to test without really reforming education. And writing common standards (goals) for all kids regardless of where they live, with little flexibility for or input from teachers, will only reinforce the authoritarian nature of schooling while doing little to improve the lives of teachers and students. Testing students doesn’t make them smarter, any more than weighing cows makes them heavier.
I agree with Dr. Delpit when she calls for us to create excellence in urban classrooms, and she suggests that we must do the following:
- Recognize the importance of a teacher and good teaching, especially for the “school dependent” children of low-income communities.
- Recognize the brilliance of poor, urban children and teach them more content, not less.
- Whatever method or instructional program is used, demand critical thinking while at the same time assuring that all children gain access to “basic skills”—the conventions and strategies that are essential to success in American society.
- Provide children with the emotional ego strength to challenge racist societal views of their own competence and worthiness and that of their families and communities.
- Recognize and build on children’s strengths.
- Use familiar metaphors and experiences from the children’s world to connect what students already know to school-taught knowledge.
- Create a sense of family and caring in the classroom.
- Monitor and assess students’ needs and then address them with a wealth of diverse strategies.
- Honor and respect the children’s home cultures.
- Foster a sense of children’s connection to community, to something greater than themselves. (Delpit, Lisa (2012-03-20). “Multiplication Is for White People”: Raising Expectations for Other People’s Children. Perseus Books Group. Kindle Edition.)
What do you think about the Atlanta Journal’s inference that cheating might be the reason for the unexpected changes in test scores? Do you think the questions raised in this post have implications for the Atlanta educators indicted by the Fulton County grand jury?