Common Core State Standards


The Common Core State Standards (CCSS) represent the first moderately effective attempt at creating a unified standard for academic content and improving public schools across the United States. The standards are the latest effort to modify education policy at the national level so as to increase student performance both in an international context and within the United States.

The Elementary and Secondary Education Act of 1965 (ESEA; P.L. 89-10, 1965) was passed during the administration of Lyndon Johnson. The act provided funding for primary and secondary education throughout the United States while at the same time developing an accountability mechanism for schools and forbidding the establishment of a national curriculum. Title I of the ESEA provided additional funding for schools in which 40% or more of the student body was classified as living below the federal poverty level.

The introduction of such a strong federal presence into what had previously been strictly state-level matters began a dramatic shift in the way educational policy was directed in the United States. Once schools began receiving funds for what are now identified as “at-risk” populations, they became dependent upon the funding, as did the districts and states in which those schools were located. Eventually, this dependence upon Title I funding allowed the federal government to place great pressure on schools, and thus districts and states, to comply with new federal policies.

At that time there was a significant gap in standardized test performance between students of different ethnic groups and socioeconomic statuses (see Figure 1). The ESEA was the first massive attempt on the part of the federal government to pressure schools into improving assessment scores in Reading and Mathematics for grades one through six. What was needed to evaluate the performance of the nation as a whole, however, was a standardized assessment administered across all states in a systematic manner to provide comparable results.

To this end, the National Assessment of Educational Progress (NAEP) was developed, along with various new government agencies to oversee the assessment. Prior to NAEP, results from state assessments were available; however, each state administered its own assessments, using its own methods, based upon its own curriculum and content standards. The NAEP assessment allowed the nation, for the first time, to review in a comprehensive manner how the nation as a whole was performing in specific subject areas. Critics argued that the NAEP was not a valid or reliable instrument for measuring such performance because the assessment was voluntary and considered a “no-stakes” assessment. There was no state-, district-, school-, or student-level reporting; the only information provided was at the national level. As a result, questions of bias in the data were very real concerns, as there may have been common factors associated with those states and schools that were willing to participate. Selection effects of such a magnitude would very likely have produced results that were not true estimates of what students knew and could do. For example, schools that were confident in their ability to perform well may have been more likely to participate than poorly performing schools, thus giving overinflated estimates of national abilities.
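To make the selection-bias concern concrete, the following minimal sketch (written in Python, with entirely hypothetical numbers rather than actual NAEP data) simulates schools whose likelihood of volunteering rises with their performance; the mean score of the volunteering schools overstates the true mean of all schools.

```python
import random

# Hypothetical illustration of voluntary-participation bias (not actual NAEP data):
# if stronger schools are more likely to volunteer, the mean of the volunteers
# overstates the true national mean.
random.seed(0)

# Simulate 10,000 school mean scores centered at 250 on an arbitrary scale.
schools = [random.gauss(250, 20) for _ in range(10_000)]

def participates(score):
    """Assumed relationship: participation probability rises with performance."""
    prob = min(1.0, max(0.0, (score - 200) / 100))  # 0% at 200, 100% at 300
    return random.random() < prob

volunteers = [s for s in schools if participates(s)]

print(f"True mean of all schools:     {sum(schools) / len(schools):.1f}")
print(f"Mean of volunteering schools: {sum(volunteers) / len(volunteers):.1f}")
# The second value is systematically higher -- an overinflated national estimate.
```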

The 2002 reauthorization of the Elementary and Secondary Education Act, frequently referred to as the No Child Left Behind Act of 2001 (P.L. 107-110), passed under the Bush administration, included a requirement for any institution receiving federal funds under the auspices of ESEA Title I. All such institutions were required to participate in the Reading and Mathematics NAEP assessments in grades 4 and 8. Institutions that refused to participate could have their Title I funding withheld. The newly reauthorized ESEA also required the expansion of NAEP so as to allow state-level reporting. Unlike state assessments, NAEP is administered to a random sampling of students from a random sampling of schools. There are no population data but rather estimates of how students would perform. To facilitate state-level reporting, dramatic increases in sample size per state were required in order to have valid data for each state.
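The sample-size point can be illustrated with a simple back-of-the-envelope calculation (the figures below are assumptions for illustration, not NAEP parameters): because the margin of error of an estimated mean shrinks only with the square root of the sample size, a reasonably precise estimate for every individual state requires far more students per state than a national estimate spread thinly across all states.

```python
import math

# Illustrative only: margin of error for an estimated mean score at various
# per-state sample sizes, assuming a student-level standard deviation of 35
# scale-score points (an assumed figure, not an official NAEP parameter).
population_sd = 35

for n in (100, 500, 2_500, 10_000):
    standard_error = population_sd / math.sqrt(n)
    margin_of_error = 1.96 * standard_error  # approximate 95% confidence interval
    print(f"n = {n:>6}: 95% margin of error ~ +/-{margin_of_error:.1f} points")
```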

In addition to requiring participation in NAEP, the new ESEA further required that all students in all states reach the status of “proficient” by 2014, again under threat of Title I funding being withheld. This again created problems. The NAEP had been using “proficient” as a performance category for years. Tracking the proficiency rates of those states that had voluntarily been participating in NAEP presumably would allow the use of this common metric to help understand how states were faring in educating their students. Although this was a reasonable assumption, it was not necessarily a valid one.

The text of the ESEA allowed each state to provide its own “reasonable” measure of “proficient.” The term “proficient” in state assessments often refers to a student performing at grade level, while the NAEP term “proficient” is defined entirely differently. Indeed, documentation for this federally mandated assessment clearly indicates that the NAEP term “proficient” is not the equivalent of the state term “proficient”:

…it is important to understand clearly that the Proficient achievement level does not refer to ‘at grade’ performance. Nor is performance at the Proficient level synonymous with ‘proficiency’ in the subject. That is, students who may be considered proficient in a subject, given the common usage of the term, might not satisfy the requirements for performance at the NAEP achievement level. (Loomis & Bourque, 2001)

Arriving at a point where the federal definition of “proficient” differs from the common usage of the term, which in turn differs from each state education department’s definition, led to some political upheaval. The National Center for Education Statistics (NCES), a division of the U.S. Department of Education, commissioned a series of reports comparing state assessment performance to NAEP assessment performance. Data such as those shown in Figure 2 were shared with politicians and the public alike (Braun, 2007).

Figure 2 clearly shows that, across the country on average, as state assessment scores increased, there was a greater discrepancy between state assessment scores and scores on the NAEP. This was interpreted by many to suggest that states were essentially rigging their assessment systems so as to meet the performance requirements of the ESEA. Many assumed that because NAEP scores stayed relatively flat in comparison to state assessment scores, the states had been adjusting their own standards for proficiency to accommodate the increasingly stringent requirements of the Elementary and Secondary Education Act. With Title I funding at stake for states that did not meet the requirement of all students being proficient, and with such a threat also looming if states did not make Adequate Yearly Progress (AYP) towards that goal, it was assumed that states had been setting their own standards to look good and that the NAEP results were revealing that behavior.

This was only possible because the ESEA allowed each state to set its own assessment and proficiency standards. The result was that states did not have comparable assessment systems. States were permitted to set their own content standards, which determined the target of teaching: what students should know by the time they were assessed in any given grade. The real problem has been proficiency standards. Proficiency standards, that is, the formulae used to determine the cut scores that define proficiency, were set by each individual state. State scores therefore could not be validly compared to one another, because that would be a comparison of dozens of different assessments, each with its own scale and scoring system, and each based upon disparate standards. NAEP, being administered in a standardized manner across all states, was thought to provide a common metric with which to compare state performance as well as giving an overall picture of the nation. While states can be compared on this common metric, methodological issues exist that prompted some to claim such comparisons were invalid or at least of dubious value (Stoneberg, 2005).
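A small hypothetical example illustrates why state-set cut scores make “percent proficient” figures incomparable. The sketch below (an assumed score scale and assumed cut scores, not any state's actual values) applies three different cut scores to the very same set of student scores and arrives at three very different proficiency rates.

```python
import random

# Hypothetical illustration: identical student performance, three different
# state-chosen cut scores, three different "percent proficient" results.
random.seed(1)
scores = [random.gauss(500, 100) for _ in range(5_000)]  # one shared score scale

state_cut_scores = {"State A": 450, "State B": 500, "State C": 550}  # assumed cuts

for state, cut in state_cut_scores.items():
    pct_proficient = 100 * sum(score >= cut for score in scores) / len(scores)
    print(f"{state}: cut score {cut} -> {pct_proficient:.0f}% proficient")
# Same students, same scores, yet the reported proficiency rates diverge widely.
```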

As the 2014 deadline for all students being proficient approached, and as the dramatic consequences for schools failing to meet annual AYP evaluations loomed, states and critics alike became increasingly vocal in claiming the Act was a well-intentioned failure. Ultimately this led the Obama administration, in conjunction with the new Secretary of Education, Arne Duncan, to offer waivers to states that were in jeopardy of losing Title I funding due to an inability to meet the requirements of the ESEA. A great many requirements needed to be satisfied in each state's application for this waiver. It was at this time that the Common Core State Standards started to be discussed.

The Common Core State Standards, only recently having been implemented, are an attempt to put all states on a common metric. Although federal legislation prohibits the establishment of a national curriculum, the Common Core State Standards circumvent that prohibition to a certain degree. The CCSS are a collection of standards: a common set of standards across participating states that dictates what students should know and be able to do. They are not, however, a curriculum; they do not dictate what specific material should be taught to students. States are therefore free to satisfy the standards in whatever manner they see fit; however, the end result must be the same. Participation in the CCSS is voluntary. States are not required to adopt them; however, the Administration has closely tied the ability to obtain a waiver from the ESEA requirements to adoption of the CCSS. Once again the federal government has extended its leverage over the states by threatening the removal of federal funding that states have grown dependent upon over the past five decades. In light of recent economic collapses, cities having to declare bankruptcy, and general decreases in education funding, dependence upon federal financial assistance for education is absolute, particularly with regard to students historically defined as “at-risk.”

At this point, 43 states have voluntarily adopted the Common Core State Standards. The question has become one of validly evaluating the performance of states in meeting the standards. Two state-led assessment consortia have been created: the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC). There has been great debate within and between states regarding which consortium to join. There has also been some indecision within states as to whether participation is in their best interest. Utah, for example, dropped out of the SBAC. Part of the discontent with participating in the consortia stems from federal funding again being part of the system; the consortia are funded, in part, by federal dollars. Withdrawing from the consortium, however, does not exclude Utah from the consortium’s resources; it only eliminates Utah’s ability to have a voice in the crafting of the assessments.

To some degree, the division of the states into separate assessment systems has again created a scenario in which direct comparisons across states may be problematic. The PARCC assessment system and the SBAC assessment system are both the result of the development of the CCSS; they are, however, based upon two very different approaches to assessment, and to some degree they appear to be a competition between two assessment development institutions, Achieve and WestEd, to eventually own the market for a nearly national assessment system that would assess at least 86% of students in the U.S. This is reminiscent of the battle between Sirius and XM Radio for the satellite radio market, or the Sony Betamax versus JVC VHS battle of the 1970s.

The similarities between the systems are clear: both assess students in grades 3-8 to evaluate student performance in meeting the CCSS. Of critical importance is the shift to computer-based assessment, and the form this shift takes is one of the key criticisms leveled at the SBAC relative to PARCC. PARCC will assess students using a traditional format of pre-defined questions that do not change during the assessment. The questions are drawn from a pool of comparable questions for each topic being assessed; this is similar to the format of traditional paper-and-pencil assessments. The SBAC assessment system, however, will use what is referred to as computer adaptive testing, in which a student's performance on one question determines the type and difficulty of the subsequent question. Although this type of testing can more accurately determine student performance and has been applied for years in a variety of assessment systems, including the Graduate Record Examinations (GRE) used for university applications, there has been great backlash against it. Additionally, SBAC will incorporate “responsible flexibility,” which allows individual states within the consortium to make adjustments to their assessment system independent of the rest of the states.

This again leads to the possibility of methodological criticism in directly comparing state levels of proficiency, or whatever equivalent term will be used, relative to adhering to the CCSS and meeting the goal of students being college and career ready at the end of their primary and secondary education. Comparisons of state performance, which will surely be made by legislators and in the press and will therefore become the focal point of the new education policy, will likely devolve into arguments over the different types of assessment systems between consortia and, in the case of SBAC, within states. It seems likely that, as is the case with the current assessment systems across states, the focus within state education departments and across the nation will not truly be on student performance and educational opportunity, but rather on a political debate, timed for election season, over the legislative hot potato referred to as student “proficiency.”
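The adaptive mechanism described above can be sketched in a few lines. The following toy example (a simplified Rasch-style model, not the SBAC consortium's actual algorithm) shows the core idea: each response nudges the ability estimate up or down, and the next item's difficulty is chosen to match the current estimate.

```python
import math
import random

# Toy computer adaptive testing loop (illustrative only; not SBAC's algorithm).
random.seed(2)

def probability_correct(ability, difficulty):
    """Simple Rasch-style model of the chance a student answers an item correctly."""
    return 1 / (1 + math.exp(difficulty - ability))

def adaptive_test(true_ability, num_items=15, step=0.5):
    difficulty = 0.0  # start with an item of average difficulty
    estimate = 0.0
    for _ in range(num_items):
        correct = random.random() < probability_correct(true_ability, difficulty)
        estimate += step if correct else -step  # crude ability update
        difficulty = estimate                   # next item targets the current estimate
    return estimate

print(f"Estimated ability, stronger student: {adaptive_test(1.5):+.2f}")
print(f"Estimated ability, weaker student:   {adaptive_test(-1.0):+.2f}")
```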

Given the conflicting definitions of “proficient,” the various standards states have been teaching to, the incompatible assessment systems used to evaluate student performance across states, and the continual use of withdrawal of federal Title I funding as a means of forcing states to comply with federal policy changes, there is no reason to believe that the continued infiltration of federal education initiatives will decrease. Indeed, aspects of the NAEP assessment are being incorporated into the SBAC and PARCC assessment systems to provide some type of alignment with the federal NAEP assessment.

The overarching goal of the CCSS is to increase students' readiness for successful careers and to ensure adequate academic preparation for college. These goals, captured by the new catchphrase “College and Career Readiness,” should in theory also improve the international standing of the United States on global assessments such as the Trends in International Mathematics and Science Study, on which the U.S. sometimes ranks in the top 10 and sometimes in the top 25 or lower. While these are productive and well-intended goals, there continues to be a great deal of discontent among a wide array of individual and institutional critics regarding the increasing role the federal government is playing in issues legislatively mandated to be those of state government.

With the purportedly increased rigor of the SBAC and PARCC assessments relative to existing state assessments, we can expect a drop in the percentage of students performing at the proficient level. This, however, will not be a meaningful evaluation, as the break in trend between the old assessments and the new will mean the results are not directly comparable. Similar situations, though, have not historically prevented reporters, and even governors and legislators, from misinterpreting results and publicly speaking out against federal policies from a misinformed perspective.

Ultimately, as Porter, McMaken, Hwang, and Yang (2011) imply, it seems reasonable to consider that we are on our way to a repeal of the ban on a national curriculum. The increased involvement of federal education policy in state-level education activities, in conjunction with what will assuredly be a political free-for-all when the CCSS-prompted SBAC and PARCC assessment results start coming out, may be the final impetus for that development.

References

Braun, H. (2007). Mapping 2005 state proficiency standards onto the NAEP scales (NCES 2007-482). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

"Elementary and Secondary Education Act of 1965" (PL 89-10, 11 April 1965)

Loomis, S.C. & Bourque M.L. (Eds.) (2001). National assessment of educational progress achievement levels 1992-1998 for mathematics. Washington D.C.: National Assessment Governing Board

No Child Left Behind Act of 2001, Pub. L. No. 107-110 (2002).

Porter, A., McMaken, J., Hwang, J., & Yang, R. (2011). Common Core standards: The new U.S. intended curriculum. Educational Researcher, 40(3), 103–116.

Stoneberg, B. (2005). Please don’t use NAEP scores to rank order the 50 states. Practical Assessment, Research & Evaluation, 10(9).

U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. (2005). NAEP data, 2005 [Data file]. Available from the National Center for Education Statistics website: http://www.nces.ed.gov/nationsreportcard/naepdata/

U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. (2012). NAEP data, long-term trend [Data file]. Available from the National Center for Education Statistics website: http://www.nces.ed.gov/nationsreportcard/naepdata