Frequently Asked Questions


Why ignore market-based reforms?

Examples of market-based "reforms" include school choice, school competition, and teacher merit pay. Proponents of these "reforms" rarely emphasize the fact that their proposals are designed to indirectly improve Curriculum & Instruction. During a video interview with a Wall Street Journal reporter, a well-known proponent recently explained his conception of this relationship:

Q: Why not focus on better teachers or better curriculum or better principals?

A: I think you get all of those things by expanding choice and competition. I mean choice and competition itself doesn't actually do anything on schools. What it does though is it provides the right incentives for schools to figure out the right kinds of teachers, the right kinds of curriculum, the right kinds of pedagogical techniques. Choice is really just a mechanism for reform. It is not a reform itself…

Dr. Jay P. Greene
Author of Why America Needs School Choice
Department Head in Education Reform at the University of Arkansas

In short, market-based "reforms" spur real reforms in Curriculum & Instruction (C&I)…or so the story goes.

Everyone agrees that reform must ultimately improve C&I. Real Education Reform aims at the heart of the matter by reporting on a few reforms in C&I that have been shown to greatly enhance student learning.

Why trust the Institute of Education Sciences?

The Institute of Education Sciences (IES) is the US Department of Education's research center, and IES affirms the value of evidence-based practice and the need for research and peer review. The preamble to one IES practice guide states:

The Institute of Education Sciences (IES) publishes practice guides in education to bring the best available evidence and expertise to bear on the types of systemic challenges that cannot currently be addressed by single interventions or programs…One unique feature of IES-sponsored practice guides is that they are subjected to rigorous external peer review through the same office that is responsible for independent review of other IES publications. A critical task of the peer reviewers of a practice guide is to determine whether the evidence cited in support of particular recommendations is up-to-date and that studies of similar or better quality that point in a different direction have not been ignored. Peer reviewers are also asked to evaluate whether the evidence grade assigned to particular recommendations by the practice guide authors is appropriate (Pashler et al., 2007, pp. v, vii).

Furthermore, IES strives to maintain an objective outlook on educational matters. On its Web site, IES states, "...by law our activities must be free of partisan political influence."

What's a controlled experiment?

A controlled experiment is a study with the following essential features:

  • Two (or more) separate groups of subjects
  • Random assignment of subjects to groups
  • Manipulation of one variable (the independent variable) and control of all other variables

An example should help illustrate these features. Suppose that two researchers want to investigate how a graphic organizer affects argument quality on a writing task. After designing a rubric for rating arguments, the researchers randomly assign students to two groups. Both groups receive the same instruction and unlimited time to write an essay, but the treatment group receives the graphic organizer while the control group receives only blank paper. The researchers then independently rate each essay, blind to each writer's identity and group assignment, reconcile any differences in scoring, and compare the achievement of the two groups.

This is a controlled experiment. The researchers randomly assign students to groups and intentionally hold every variable constant except one: access to the graphic organizer. The presence or absence of the graphic organizer is deliberately manipulated to determine its effect on argument quality. If there is a significant difference in argument quality between the two groups, the researchers can logically and justifiably conclude that access to the graphic organizer caused the difference. They can also estimate the strength of the effect.
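
To make the final comparison step concrete, here is a minimal Python sketch of how such an analysis might look, assuming SciPy is available for the significance test. The group sizes, score scale, and rubric scores are hypothetical, not data from an actual study.

```python
import random
from statistics import mean

from scipy.stats import ttest_ind  # independent-samples t-test (assumes SciPy is installed)

random.seed(42)

# Hypothetical pool of students, randomly assigned to two equal groups.
students = [f"student_{i}" for i in range(60)]
random.shuffle(students)
treatment, control = students[:30], students[30:]

# Hypothetical rubric scores (1-6 scale): the treatment group wrote with the
# graphic organizer, the control group wrote on blank paper.
treatment_scores = [min(6, max(1, random.gauss(4.3, 0.9))) for _ in treatment]
control_scores = [min(6, max(1, random.gauss(3.8, 0.9))) for _ in control]

# Compare the two group means; a small p-value suggests the difference is
# unlikely to be due to chance alone.
result = ttest_ind(treatment_scores, control_scores)
print(f"Treatment mean: {mean(treatment_scores):.2f}")
print(f"Control mean:   {mean(control_scores):.2f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```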

Researchers perform controlled experiments such as the one described above because this methodology, properly implemented, produces results that are valid and reliable. The results of a well-designed controlled experiment are very likely due to the manipulation of the independent variable rather than to some other factor (validity). And if other researchers replicate the experiment, the results are likely to remain the same (reliability).

What's an effect size?

An effect size indicates the practical significance of an effect. In general, an effect size of 0.8 or more is regarded as strong, an effect size near 0.5 as moderate, and an effect size of 0.2 or less as weak. These values are benchmarks along a spectrum, not strict ranges.

An effect size may be positive or negative. In the education world, a positive effect generally means an increase in student achievement while a negative effect means a decrease in student achievement. Most reforms are, of course, designed to increase student achievement, but relatively few produce large, positive effects.

How to read an effect size:

    Effect size      Level of evidence
    0.8 or more      Strong
    0.5              Moderate
    0.2 or less      Low
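
Read as a rough lookup, the table can be turned into a small labeling function. The sketch below is only illustrative: the exact cutoffs it applies between the benchmarks are a judgment call, not part of any standard.

```python
def interpret_effect_size(d: float) -> str:
    """Crude label for an effect size using the 0.2 / 0.5 / 0.8 benchmarks.

    The table above gives reference points, not strict ranges, so the
    cutoffs chosen here are only one reasonable reading of them.
    """
    magnitude = abs(d)  # the sign indicates direction, not strength
    if magnitude >= 0.8:
        return "strong"
    if magnitude >= 0.5:
        return "moderate"
    return "weak"

print(interpret_effect_size(0.9))    # strong
print(interpret_effect_size(-0.25))  # weak, and negative: achievement decreased
```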

Effect size is closely associated with the statistical concepts of mean and standard deviation. The mean of a data set is commonly called its "center of gravity": if all data points along a number line are assigned the same weight, the mean is the point of balance. (Picture a see-saw.) The standard deviation of a data set is commonly called its "spread." Roughly speaking, one standard deviation is the typical distance of a data point from the mean, and in an approximately normal distribution about 68% of data points fall within one standard deviation of the mean (i.e. within one standard deviation to the left or right of the mean).
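
As a quick illustration of these two quantities and the 68% rule of thumb, the Python sketch below uses hypothetical, roughly normal test scores; the numbers are invented for demonstration.

```python
import random
from statistics import mean, stdev

random.seed(7)

# Hypothetical, approximately normal test scores (invented for demonstration).
scores = [random.gauss(75, 10) for _ in range(1000)]

m, sd = mean(scores), stdev(scores)
within_one_sd = sum(1 for x in scores if m - sd <= x <= m + sd) / len(scores)

print(f"Mean: {m:.1f}")
print(f"Standard deviation: {sd:.1f}")
print(f"Share within one standard deviation: {within_one_sd:.0%}")  # typically close to 68%
```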

The effect size is calculated using two means and a standard deviation. (See the diagrams below.) For a study involving an experimental group and a control group, the effect size is the difference between the experimental and control means divided by the control group's standard deviation (or a pooled standard deviation):

Effect size = (Mean_experimental - Mean_control) / SD_control (or SD_pooled)

[Diagram: calculating an effect size from the two group means and the control standard deviation]
[Diagram: the concept of an effect size]
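
As a minimal worked example of the formula, the Python sketch below computes an effect size from hypothetical rubric scores for the two groups in the essay experiment described earlier; the scores are invented for demonstration.

```python
from statistics import mean, stdev

def effect_size(experimental: list[float], control: list[float]) -> float:
    """Difference between the group means divided by the control group's
    standard deviation (a pooled standard deviation could be used instead)."""
    return (mean(experimental) - mean(control)) / stdev(control)

# Hypothetical rubric scores for the essay experiment described earlier.
treatment_scores = [4.5, 4.0, 5.0, 4.2, 3.8, 4.6]  # wrote with the graphic organizer
control_scores = [3.2, 4.4, 2.9, 4.1, 3.3, 3.7]    # wrote on blank paper

print(f"Effect size: {effect_size(treatment_scores, control_scores):.2f}")
```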

How do a meta-analysis and a practice guide differ?

A meta-analysis is a summary of the findings of a set of related studies. To conduct a meta-analysis, a researcher first searches databases for relevant studies and then reviews each one; a study may be excluded from the meta-analysis if it was poorly designed or conducted. To produce the summary, the researcher extracts data from the selected studies and applies various statistical methods.
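
Those statistical methods are not spelled out above. One common pooling step is a fixed-effect summary that weights each study's effect size by the inverse of its variance, so larger, more precise studies count for more. The sketch below illustrates that idea with hypothetical study data.

```python
# Fixed-effect, inverse-variance pooling of effect sizes (hypothetical data).
studies = [
    {"name": "Study A", "effect_size": 0.45, "variance": 0.020},
    {"name": "Study B", "effect_size": 0.60, "variance": 0.050},
    {"name": "Study C", "effect_size": 0.30, "variance": 0.010},
]

weights = [1 / s["variance"] for s in studies]
pooled = sum(w * s["effect_size"] for w, s in zip(weights, studies)) / sum(weights)
pooled_variance = 1 / sum(weights)

print(f"Pooled effect size: {pooled:.2f}")
print(f"Standard error:     {pooled_variance ** 0.5:.2f}")
```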

The Institute of Education Sciences explains the difference between a meta-analysis and a practice guide in this way:

Authors of practice guides seldom conduct the types of systematic literature searches that are the backbone of a meta-analysis, though they take advantage of such work when it is already published. Instead, they use their expertise to identify the most important research with respect to their recommendations, augmented by a search of recent publications to assure that the research citations are up-to-date. Further, the characterization of the quality and direction of the evidence underlying a recommendation in a practice guide relies less on a tight set of rules and statistical algorithms and more on the judgment of the authors than would be the case in a high quality meta-analysis. Another distinction is that a practice guide, because it aims for a comprehensive and coherent approach, operates with more numerous and more contextualized statements of what works than does a typical meta-analysis (Pashler et al., 2007, p. v).