From Formative Assessment to Formative Instruction

Eleanor Dougherty, Core Skills Corner, September 2, 2015

If you’ve been teaching or serving as an administrator in schools, you have undoubtedly practiced formative assessment. According to ASCD, “Formative assessment delivers information during the instructional process, before the summative assessment. Both the teacher and the student use formative assessment results to make decisions about what actions to take to promote further learning.”[1]

As someone who is using LDC mini-tasks and modules in your coursework, you know that LDC tasks help you scaffold instruction in a way that provides you with information about student learning and instructional effectiveness. With the student work that results from both mini-tasks and modules, you are in a good position to make those decisions and adjustments that “promote further learning.”

Words of Caution

However, let’s be clear about the word “assessment” in “formative assessment.” It doesn’t mean a task given as homework or taught with minimal instruction, and it certainly doesn’t mean a test. “Formative” connotes “determinative, developmental, shaping”; “summative,” in contrast, connotes “that which is final.” If we apply these connotations to our understanding of formative assessment, we will see that formative assessment is really formative instruction: a process of teaching and learning that shapes and develops student learning.

The danger is that the term “assessment” can blur the distinction between what is formative and what is summative. We can fall into the trap of drawing shaky conclusions from student work, conclusions that mislead next-step decisions about student learning, instruction, and supports. Instead, when we say “formative assessment,” think “formative instruction.” In this way we make a clear distinction between how we teach and when we test.

An example of this confusion between “formative” and “summative” occurs with culminating assignments, which come at the end of units: Are they formative or summative? If a culminating assignment is given as homework or with minimal instruction, it allows no time or opportunity for that formative process; the resulting student work presents a blurry picture of learning and instruction, and judgments about student learning can be misguided.

In contrast, if a culminating assignment is taught formatively, we teach it within the unit, engaging in a process of teaching, re-teaching, and frequent feedback. Students in this context have sufficient instruction and support to move through the assignment and, after scoring, have time to revise or relearn weak skills. LDC to the rescue: all LDC mini-tasks and modules engage you in formative instruction. They are explicitly taught, with built-in opportunities for feedback, teacher-to-student interactions, and thoughtful “next step” decisions.

While we are on this topic, I’ll mention an interesting view of formative instruction that appears in scholarly research, particularly in papers from educators abroad. In this view, the formative experience develops self-regulation in students. It occurred to me that mini-tasks and modules can actualize the self-regulating experiences described by Black and Wiliam on page 4 of their paper, “Developing the Theory of Formative Assessment”[2]:

  • Clarifying and sharing learning intentions and criteria for success;
  • Engineering effective classroom discussions and other learning tasks that elicit evidence of student understanding;
  • Providing feedback that moves learners forward through the use of “success criteria” and “learning intentions”;
  • Activating students as instructional resources for one another; and
  • Activating students as the owners of their own learning. 

So as you teach a mini-task or module, you might be more aware of how the instructional process and structure of an LDC task helps students acquire self-regulating skills and independence as learners.


1. Chappuis, S., & Chappuis, J. (2007). The best value in formative assessment. Educational Leadership (ASCD).
2. Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability.

New LDC Rubrics for Scoring Student Work Coming Soon

LDC Modules, June 24, 2015

New and improved LDC Rubrics for scoring student work will soon be finalized and released for the 2015–2016 school year.

In addition to making the rubrics “lighter” and more aligned with teachers’ expectations, these new rubrics have more grade-level precision as well as much more flexibility to help teachers assess content understandings specific to each academic discipline.

Field-Testing and Community Feedback on the Rubrics

In July, various sets of LDC teachers will field-test these newly drafted rubrics to provide more feedback to the Stanford Center for Assessment, Learning, & Equity (SCALE)—our partners who have been working on these rubrics. In addition to those field tests, we’d love to provide the drafts of these rubrics to the LDC Community of Practice so that you can test them out as well and provide LDC and SCALE with your feedback before the rubrics are finalized. Once the rubrics have been validated by the community, we will release them on LDC.org and incorporate them into LDC CoreTools. 

Take a look at the draft rubrics, try them out, and let us know what you think:

Note that at the moment the following draft rubrics are not yet available, but we will update the community when they are ready.

  • K–5 Informational/Explanatory Rubric
  • K–5 Argumentation Rubric
  • K–5 Social Studies Content Understandings
  • K–5, 6–8, 9–12 ELA Content Understandings

Please provide your feedback by August 10th so that we can ensure the new field-tested rubrics are ready for back-to-school.

Details About Changes to the LDC Student Work Rubrics

Among other things, the revised LDC Rubrics include the following changes:

Separate Rubrics for Grades K–5, Grades 6–8, and Grades 9–12
The original LDC rubrics were designed with the Common Core Literacy “Anchor Standards” in mind. However, there are distinct expectations for the level of literacy and writing skills across grade levels (hence, grade-level standards). Having separate rubrics for different ranges of grade levels communicates that there are real, qualitative differences in expectations for each.

A Reduction in the Number of Scoring Dimensions from Seven Dimensions to Four
Feedback indicated a perceived overlap or repetition of ideas between Focus and Controlling Idea, and between Reading/Research and Development. In addition, some teachers found scoring seven dimensions time-consuming and onerous. Evidence from studies conducted by SCALE/Measured Progress indicated that reducing the number of dimensions would not hurt overall reliability. To simplify the scoring process and reduce confusion, these four dimensions have been collapsed into two: Controlling Idea and Development/Use of Sources. Ideas from Focus and Reading/Research are retained within the new dimensions.

A New Scoring Dimension for “Additional Demands”
Because there will no longer be a “Focus” dimension, which evaluated the completion of “all aspects of the prompt,” a dimension for Additional Demands has been added. The additional demands will be scored separately to provide more accurate analytic scoring (without muddling what is being scored in “Controlling Idea”) and more productive feedback to students.

Flexibility to Add “Content Understanding” Dimensions Specific to Each Academic Discipline
In addition to updated rubrics, we will be introducing a variety of options for “Content Understanding” that can be dropped into the rubric depending on the discipline and the discipline-specific cognitive demands related to the content of the specific teaching task. For example, the options for science come from the Next Generation Science Standards (NGSS). Right now, the options for social studies 9–12 are available, as are the options for science K–12.

Read more about the Rationale for Proposed Revisions to the LDC Rubrics. We look forward to hearing your feedback. Questions? Contact us at questions@ldc.org.

Collaborate and Score! LDC as a Model of Collaboration on Student Work

Shelia Banks, LDC Modules, June 22, 2015

I could go on and on about the beauty and usefulness of the LDC design system and what it brings to the classroom experience and to the development of teachers. In terms of teacher development, LDC offers multiple opportunities for professional growth from training on how to create and implement modules to providing a platform where student results can be discussed.

One of the most powerful occurrences I have experienced and seen with LDC is being able to collaborate with other teachers and trainers on calibrating the scoring process. In fact, I just attended a session where teachers from different schools and content areas were brought together for a scoring calibration experience and one of the teachers expressed appreciation that she was able to hear the ideas of a variety of teachers.

When teachers keep students at the center of their work and discuss how skills and instruction can have a positive impact on them, everyone wins. That’s not to say that calibrating on scoring is easy; the honest truth is that calibration can be a challenge. However, it’s a challenge well worth the effort in clearly defining what work that meets expectations looks like.


Here are some things to consider that can make the calibration experience more powerful and less arduous.

Distinguish “Grading” from “Scoring.” One of the hard conversations to have with teachers is the one that distinguishes “grading” from “scoring.” Grading includes measures that aren’t necessarily part of learning outcomes, such as effort, participation, or improvement. You could say that grading is quasi-subjective and varies slightly from teacher to teacher. In many cases, the grade a teacher would assign to a student differs from the scoring feedback the student would receive.

Adhering to a common rubric facilitates this approach as teachers are provided with clearly defined expectations for the elements of writing. Scoring has the advantage of identifying the areas in need of further instruction that would best support the students. At times it is necessary to gently push teachers away from mentally assigning a grade to a paper that is used for scoring calibration and remind them of the purpose of using a common rubric.

Steer the Conversation Back to Instructional Implications. Speaking of scoring, one powerful use of scoring indicators like the elements on the LDC Rubric is to identify the areas that need additional or supplemental instruction. Typically, LDC teachers don’t score just for the sake of scoring. They are using student work to inform them of what needs to happen in the classroom.  After scoring a batch of student writing products, a teacher can identify areas in need of intentional instruction.

For example, if the majority of students struggled with constructing controlling ideas, one implication for instruction would be to teach a lesson (mini-task) that targets that skill (see this Controlling Idea mini-task in LDC CoreTools). The calibration experience permits teachers to share their insights regarding what they would choose for further instruction and how they would go about doing that.

Encourage Constructive Conversations with Evidence. The more teachers share their insights regarding the task, the rubric, and student work, the more opportunities there are for making the best instructional decisions. Sometimes these conversations lead to a debate-type situation. Of course, good manners and consideration for others are always in play, and if norms are set for respectful interactions, teachers will be more willing to share without fear of judgment.

It is okay for educators to present an opposing viewpoint and disagree. That is sometimes part of the process. Constructive debates may actually present the opportunity for teachers to provide reasons for their stance that someone else in the room hadn’t considered. It is very useful to prompt teachers to present evidence to support their claims (sounds familiar, right?) to keep the conversation moving and productive. Remember that everyone brings her or his own frame of reference to the conversation; as long as students remain at the center of the work, even a heated discussion can be productive and meaningful.

Recently, I was part of a session aimed at calibrating scoring among teachers from different schools and content areas in Jefferson Parish. There were times during the session when those constructive debates took place. It was thrilling to note how they enlightened teachers’ mindsets. I could see professional growth right before my eyes. By the end of the session, we all walked away knowing something we hadn’t known before. Participants left with a much clearer idea of how to identify work that aligns with each of the performance indicators on the LDC Rubric and deepened their knowledge about how to identify instructional needs using data from authentic products in a professional learning community.

This is a very powerful thing indeed.
