Fordham Institute Review of the State Science Standards: Use with Caution!

Written by Jack Hassard

On February 2, 2012

The Fordham Institute published The State of State Science Standards 2012, a 217-page report that evaluates the science standards developed by each U.S. state, the District of Columbia, and even the NAEP.  The authors evaluated each state by assigning a score to two attributes of the standards: content and rigor (scored 0 – 7) and clarity and specificity (scored 0 – 3).  Adding the scores gave each state a total, which was used to “grade” the state standards (A – F).
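To make the grading arithmetic concrete, here is a minimal sketch in Python. The letter-grade cutoffs are hypothetical, used only to illustrate how a two-part score can be collapsed into a single grade; the report's own bands are not reproduced here.

```python
# A minimal sketch of Fordham-style scoring arithmetic.
# The letter-grade cutoffs below are hypothetical, for illustration only.

def grade_state(content_and_rigor: int, clarity_and_specificity: int) -> str:
    """Sum the two subscores (0-7 and 0-3 scales) and map the 10-point
    total to a letter grade using illustrative cutoffs."""
    total = content_and_rigor + clarity_and_specificity
    if total >= 9:
        return "A"
    if total >= 7:
        return "B"
    if total >= 5:
        return "C"
    if total >= 3:
        return "D"
    return "F"

print(grade_state(5, 2))  # -> "B" under these illustrative cutoffs
```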

According to the report,

a majority of the states’ standards remain mediocre to awful. In fact, the average grade across all states is—once again—a thoroughly undistinguished C.

Image by http://www.tagxedo.com

The blogosphere was filled with articles pointing to the Fordham report on the science standards.  Scientific American reported in a post that the “new report paints a grim picture of state science standards.”  The National Center for Science Education posted an article on its website focusing on one of the report’s conclusions, that evolution is being undermined throughout the science standards.  Another blogger commented on how his state did on the standards.

The 2012 report is a follow-up to the Fordham Institute’s 2005 review of science standards.  Comparative data are available for each state, enabling you to see how each state’s science standards evaluation changed over the past seven years.

Last Fall’s Report on the Framework for K-12 Science

Last fall I reviewed the Fordham Institute’s evaluation of the Framework for K-12 Science Education.  In that post I said:

The report was not a juried review, but written by one individual, who appears to have an ax to grind, especially with the science education research community, as well as those who advocate science inquiry, STS, or student-centered ideology.  Indeed, the only good standard is one that is rigorous, and clearly content and discipline oriented.

The Fordham Institute’s review of the Framework was the personal view of one person.  It is a review that fits the philosophy of the Fordham Institute, which has been in the business of supporting the corporate-style takeover of public education.  It is a conservative think tank, and the report’s author did not subject his evaluation to peer review.  Any findings that are reported need to be viewed in the context of this reviewer’s personal views of science education, which appear to fly in the face of science education research and curriculum development.  You can read my report here.

New Report on the State of the State Science Standards

Now comes a new publication by the Fordham Institute: an evaluation of all of the state science standards in the United States.  You can go to this website to find state profiles and download the report for free.

The report is written in the context of the Fordham Institute’s bias about the state of science education in the nation, especially in terms of achievement test results on PISA, TIMSS, and NAEP.  It is very similar to the Broad Foundation’s view of American youth, which I wrote about here on my blog.  The Broad Foundation has low expectations for American students, and it supports that claim with distorted statistics, using them to paint negative pictures of American youth.

The Fordham Institute’s view is embedded in the “crisis mentality” that began with Sputnik and has carried forward to today.  According to the Fordham report, American youth do not show strong science achievement and produce “woeful” results on international tests.  And yet during the time that American youth posted such dismal scores on science tests, American science and technology innovation and creative development flourished, and still does.  We once thought our nation was at risk because of the technological advances and global economic growth of Russia (then the USSR), Japan, Germany, and China.  Now we have to worry about Finland, South Korea, the Czech Republic, Hungary, and Slovenia.  Their scores are higher on these tests than ours.  They must be doing something different to educate their students in math and science.  The race is on!

The sky is falling

Where the State Science Standards Go Wrong

In the introduction of the report (which everyone reads, plus the section on their own state’s science standards), the Fordham Institute identifies four problem areas where, in its opinion, many state standards are mediocre to poor.

  • Problem 1. An Undermining of Evolution.  The report rightly repeats what many know: science in the public schools has a long history of having to deal with religious groups wanting equal time with evolution, by inserting creation science and intelligent design into the curriculum or by requiring more scrutiny of topics that are “controversial” in science.
  • Problem 2. The Propensity to Be Vague.  The Fordham group has some way of determining whether a standard is vague in order to rate it.  I don’t know whether they really analyzed every standard in each state document, but they seem to like only standards that are “really content oriented.”

For example, here is a good one according to the writers:

  • Students know how to design and build simple series and parallel circuits by using components such as wires, batteries, and bulbs.

And here is a vague one:

  • Demonstrate understanding of the interrelationships among fundamental concepts in the physical, life, and Earth systems sciences. (grade 4)

Now, what do you think about this science standard?  Do you think it is a good one, or do you think it is vague?  What are students really expected to know here?

  • Know how to define “gravity.”

The last one, “know how to define gravity,” was one of the Fordham Institute’s own standards used to “evaluate” the state science standards.  It was taken from the Fordham “content-specific criteria” (which means standards), found on pages 205 – 209 of the report.  The reviewers used this list of objectives (another word for standards), divided into K-8 and grades 9-12.  I am not sure of the origin of the content-specific criteria, but I assume the authors wrote these objectives and have used them in the past.  I wonder how they would evaluate their own objectives?

  • Problem 3. Poor Integration of Scientific Inquiry.  Please keep in mind that one of the reviewers has a real disdain for the notion of inquiry science teaching.  If a state wrote an objective that asked students to make observations in order to raise questions, the reviewers were bent out of shape.  In their view, you cannot write a standard unless it is tied to content.  Period.  They claim that too many states do this, and they are distressed.

  • Problem 4. Where Did All the Numbers Go?  Math is integral to science.  Who could have guessed?  After reading the Fordham report, it was evident that if a state included a quantitative objective like the one pictured below, it got a good review.

[image: a sample quantitative objective]

If a state wrote a qualitative objective, like the one pictured below from Illinois, the reviewers concluded it could not possibly prepare students for college, and the state got a bad review.

[image: a sample qualitative objective from Illinois]

I wonder if they have ever heard of Conceptual Physics?

Where the Report Goes Wrong

First, the report is amazing in its scope.  Imagine: in one report you can find evaluations of every set of state science standards, as well as those of the District of Columbia and the standards that the NAEP uses in the construction of its test items.

Here we have a scorecard on each state’s science standards, graded A – F.  At first glance, this is a report that ought to be read and studied by parents, teachers, administrators, and professors of science and science education.  On one map, you can see at a glance how the states stack up.  Most of the states are colored purple (grade C), green (grade D), or grey (grade F).

There is, however, a problem.  This report is deeply flawed, and the analysis of any state’s science standards should be read with caution.  The language used in the report should raise a red flag.  Here are some comments you will find in it.

  • The results of this rigorous analysis paint a fresh—but still bleak—picture. A majority of the states’ standards remain mediocre to awful.
  • At the high school level, the Georgia standards offer a bewilderingly large number of courses.
  • (Name of State) science standards—unchanged since 1998, in spite of much earlier criticism, ours included—are simply worthless.

Worthless, mediocre, awful, bewildering—not the choice of descriptors that science educators typically use in a scientific report.

Because the report uses numbers to assign a grade of A – F to each state, the results, as presented by Fordham, take on an air of believability.  We live in a culture that is driven by numbers—just look at the effect of high-stakes tests on the well-being of students and teachers.  Consequently, the Fordham report will be cited and admired by many, yet it is another agenda-driven report.  What is the source of the Fordham Institute’s preoccupation with standards in the first place?  Is it trying to influence the direction the country takes with regard to a set of common standards?

State Science Standards Grades, 2012. Source: The State of State Science Standards 2012, Thomas B. Fordham Institute

The evaluations reported by the Fordham experts are their opinions.  Five reviewers were used to compile the “evaluations” of the state science standards.  They divided the work as follows:

  • Reviewer #1.  K-12 physical science and high school physics
  • Reviewer #2. K-12 life science including high school biology
  • Reviewer #3. K-12 scientific inquiry and methodology standards
  • Reviewer #4. K-12 earth and space science standards
  • Reviewer #5. K-12 physical science and high school chemistry

A sixth person compiled the five reviews into a single document.

The data sources used to evaluate the state science standards were the websites of each state’s department of education; the authors also “communicated” with the states’ science experts.  No mention is made of the purpose of that communication, or of whether a standard set of questions was used.

Although detailed statements are made about each state’s science standards, there is really no way of knowing how the evaluators came to their conclusions.  There was no systematic analysis of each state’s standards; that is to say, you cannot find any objective data in the report.  The authors do identify “content-specific criteria” (pp. 206 – 209).  These criteria are lists of objectives against which the authors judged the state standards in scientific inquiry, physical science, physics, chemistry, earth and space science, and life science.  Here are a couple of examples from Earth and Space Science (Fordham Report, p. 206):

  • Describe the organization of matter in the universe into stars and galaxies.
  • Describe the motions of planets in the solar system and recognize our star as one of a multitude in the Milky Way.
  • Recognize Earth as one planet among its solar system neighbors.

The Fordham “Scores” Should Be Questioned

There are more than 100 of these objective-like statements in the report.  They are used to evaluate the “content and rigor” of a state’s standards on a scale from 0 – 7 points, with seven being the most rigorous.  “Clarity and specificity” is evaluated on a 0 – 3 point scale.  Using rubric-style criteria, each evaluator read the assigned section of a state’s science standards (earth science, physical science, etc.) and used the “criteria” to make a judgment (assign a numerical score) about content and rigor and clarity and specificity.  The final science score for each state was a composite of the evaluator scores.
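As an illustration of that composite step, here is a hedged sketch assuming equal weighting of the reviewers’ section scores; the report does not spell out its exact aggregation method, so the weights and numbers below are invented.

```python
# Hypothetical illustration only: equal weighting across sections is assumed,
# since the report does not publish its exact aggregation method.

section_scores = {  # one (content_rigor, clarity) pair per reviewed section
    "physics": (4, 2),
    "life_science": (6, 3),
    "inquiry": (3, 1),
    "earth_space": (5, 2),
    "chemistry": (4, 2),
}

n = len(section_scores)
content_rigor = sum(cr for cr, _ in section_scores.values()) / n  # 0-7 scale
clarity = sum(cl for _, cl in section_scores.values()) / n        # 0-3 scale
print(f"composite score: {content_rigor + clarity:.1f} out of 10")
```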

The authors give us no clues about the reliability of the observations that were made.  Although no content subsection was rated by more than one evaluator, a reliability study would still have been possible, for example by having a second rater score a sample of the same standards.  This was not done, so it is not possible to judge the reliability of the observations.
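Had two raters independently scored even a sample of the same standards, a simple agreement statistic such as Cohen’s kappa could have been reported.  Here is a minimal Python sketch with invented ratings; nothing like this appears in the Fordham report.

```python
# Cohen's kappa for two raters scoring the same standards.
# The ratings below are invented for illustration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: observed agreement corrected for the
    agreement expected by chance from each rater's score frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = [5, 3, 6, 2, 4, 4, 7, 1]  # rater 1's content-and-rigor scores
b = [5, 4, 6, 2, 3, 4, 6, 1]  # rater 2's scores for the same standards
print(round(cohens_kappa(a, b), 2))  # ~0.56: moderate agreement
```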

There is also no information about the validity of the criteria used to judge the science standards.  The Fordham Institute report includes its own set of science standards, which it uses to judge the “content and rigor” of the state science standards, yet no information is given about the content validity of those standards.  To establish it, we would need to know whether their standards reflect the knowledge actually required for K-12 science education.  One way to do this would be to convene a panel of experts (scientists, science educators, etc.) and ask them to judge the validity of the Fordham science standards.  To my knowledge, this was not done.  Therefore any scores the evaluators assigned to the “content and rigor” of a state’s science standards should be called into question.
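One common way to run such a panel is to have each expert vote on whether a criterion is relevant to K-12 science and then compute an item-level content validity index (I-CVI), the share of experts judging the item relevant.  The sketch below uses invented ratings and a commonly cited 0.78 acceptability threshold; it illustrates the method, not anything Fordham actually did.

```python
# Expert-panel content validity sketch. All ratings are invented.
# Each expert votes 1 (relevant to K-12 science) or 0 (not relevant).

panel_ratings = {
    "know how to define gravity":          [1, 0, 1, 0, 1],
    "design series and parallel circuits": [1, 1, 1, 1, 1],
    "describe stars and galaxies":         [1, 1, 1, 0, 1],
}

for criterion, votes in panel_ratings.items():
    i_cvi = sum(votes) / len(votes)  # share of experts judging it relevant
    verdict = "acceptable" if i_cvi >= 0.78 else "needs review"
    print(f"{criterion}: I-CVI = {i_cvi:.2f} ({verdict})")
```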

What is the basis for the standards that the Fordham Institute used to judge the work of 50 states, D.C., and the NAEP?

A Research Study?

Is The State of State Science Standards a research study?

The report does not meet the standards of educational research established by the American Educational Research Association (AERA).  The Fordham Institute report is a type of evaluation research, but it does not meet those standards, as shown below.  In fact, it meets only two of the principles outlined.

AERA Principles of Research

Here is how the report fares against each principle:

  • Development of a logical, evidence-based chain of reasoning: Yes.  The report itself is written logically and organized for easy consumption.
  • Methods appropriate to questions posed: No.  The questions are not directly posed; they are implied, but couched within a lot of rhetoric.
  • Observational or experimental designs and instruments that provide reliable and generalizable findings: No.  Although the evaluators identify criteria, the criteria are not used to make observations; instead, the authors jump to inferences and opinions.
  • Data and analysis adequate to support findings: No.  The authors provide no information about inter-rater reliability, that is, the degree of agreement or disagreement among the raters.  Because of this, the data are suspect.
  • Explication of procedures and results clearly and in detail: Yes.  The procedures used are clearly stated, and the results are found in the state summaries.  However, because the procedures are flawed, the results should be used cautiously.
  • Adherence to professional norms of peer review: No.  The report was not reviewed by outside science educators or any other peers.  This is a serious issue, and one of the problems with using reports from think tanks, whether they be on the left or, in this case, the right.
  • Dissemination of findings to contribute to scientific knowledge: No.  Although the report is being disseminated, the results are flawed and biased, and should be viewed with suspicion.
  • Access to data for reanalysis, replication, and opportunity to build on findings: No.  Because the “data” reported are based on opinions, it is difficult to reanalyze the study.  Perhaps if the authors had subjected their criteria to an outside panel and applied the same methods, we would have more valid and reliable results.

There is no evidence of any data to determine the reliability of the evaluators’ scores.  Furthermore, the validity of the science standards used to make judgments is questionable.  The lists of objectives that the evaluators used were not subjected to any review by outside experts.  How do we know that these lists reflect the knowledge base of science education?  We do not.

Searching the science education research journals, I found a study that was similar in intent to the Fordham study but that would have scored a “yes” in each of the categories above.  In 2002, in the journal Science Education, Gerald Skoog and Kimberly Bilica published a study entitled “The emphasis given to evolution in state science standards: A lever for change in evolution education?”  They analyzed the frameworks of 49 state science standards to determine the emphasis given to evolution in these documents at the middle and secondary levels.  Their methodology included identifying a select group of evolution concepts (from the NSES, 1996), such as species change over time, speciation, diversity, and so forth.  One table in the study shows the detailed analysis of the evolutionary concepts evident in each state’s science standards, and the discussion is based on the data reported for each of the evolution concepts.
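Their kind of analysis is also straightforward to replicate.  Here is an illustrative Python sketch that tallies a fixed concept list against the text of state frameworks; the concept list and framework excerpts are stand-ins, not Skoog and Bilica’s actual data.

```python
# Crude illustration of a Skoog-and-Bilica-style concept tally.
# The concepts and framework excerpts below are invented stand-ins.

evolution_concepts = ["species over time", "speciation", "diversity",
                      "natural selection", "common descent"]

state_frameworks = {
    "State A": "Students explain how species change over time through "
               "natural selection acting on variation.",
    "State B": "Students describe the diversity of living things.",
}

for state, text in state_frameworks.items():
    lower = text.lower()
    # Count a concept as present if all of its words appear in the text
    # (a simple keyword match, for illustration only).
    found = [c for c in evolution_concepts
             if all(word in lower for word in c.split())]
    print(f"{state}: {len(found)}/{len(evolution_concepts)} concepts {found}")
```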

Unlike the Fordham study, this study was peer reviewed, and it could be replicated because the methodology is clear and unambiguous.

Looking Back

I have tried to show that the Fordham Institute report on the state of science standards should be questioned, and that its results should be approached with criticism and caution.  The Fordham report, as most foundation-supported “studies” do, tailors its findings to the goals of the organization.

Science educators should use caution when reading reports about this study, and should not assume that the results in the report are valid.

 What do you think?

If you have the time, take a look at the report and tell us what you think.  Do you think the results are valid and reliable?  Does the report reflect what is going on in science education today?
