Why attend an LDC Peer Review training?
Nicole Renner | Professional Development | April 25, 2017
LDC was founded on the idea that quality assignments matter, and the research supports this: the quality and rigor of a task predicts the quality of student work.
Whether you want to join the national community of LDC Peer Reviewers or renew your thinking about curriculum quality in your school or district, LDC Peer Review training provides a rigorous, validated way to vet and improve modules to the level of quality that drives outstanding teaching and supports deeper student learning.
Attending a Peer Review training also unlocks the pathway to earning Certified and Expert Reviewer badges.
This summer's workshop in Nashville will also include the first-ever LDC Advanced Peer Review training and "Review Jam"! If you have attended an LDC Peer Review training in the past, sign up for this advanced session to:
Explore the updated 2017 UL-SCALE-created Curriculum Alignment Rubric
Practice giving effective feedback
Engage in Summer 2017 National Peer Review with in-person consensus partners.
The Advanced Peer Review training is the first step toward becoming a member of the new LDC Peer Review National Faculty! The National Faculty is a network of educators dedicated to providing consistent, rigorous, and supportive reviews and feedback to LDC module authors nationwide. Click here to request more information.
Literacy Design Collaborative | LDC Modules | November 28, 2016
Hot on the heels of the IU13 LDC National Peer Review Studio, the team at LDC wanted to share with the community ten things you should know to help finesse your module-reviewing skills. They are*:
1. Know the rubric.
It is your Constitution. Granted, that means it is sometimes hard to interpret, but every score must be an attempt to apply the rubric's specific language and meaning.
2. Trust evidence, not intuition.
Intuition is a powerful force, but it is also highly subjective. Calibration with other scorers requires us to base our judgments on evidence that everyone can see, not on what a particular person feels.
3. Match evidence to language in the rubric.
A safe rule of thumb: If you select an indicator on the rubric, be sure you can identify evidence that justifies that selection in the module itself.
4. Begin with the end in mind.
Since a Good-to-Go score is the goal for any dimension, start with the Good-to-Go criteria when matching evidence to the rubric. Approach each dimension with these criteria in mind, then consider whether the module design meets them.
5. When scoring each dimension, isolate your judgment.
A dimensional rubric like the peer review rubric is not designed to assess one's overall impression of a module. Rather, it isolates variables, distinguishing between relative strengths and weaknesses. Do not let a module's low score in one dimension cloud your judgment when scoring other, unrelated dimensions.
6. When scoring holistically, base judgments on the preponderance of evidence.
After scoring each dimension, we ask peer reviewers to make two holistic judgments: a holistic score for the Teaching Task and a holistic score for the Instructional Ladder. The holistic score should be based on the preponderance, or strength, of the body of evidence. Sometimes, however, a single problem is serious enough that it must be weighted more heavily in the holistic score. We call this a "fatal flaw" or a "deal breaker."
7. Score only what you can see.
Scorers who attend to exactly what is presented on the page rather than making inferences about the module author’s intent tend to score more accurately. That shouldn't surprise us: It is easier to agree on what is than on what could be. A score is always based on what is. Avoid “filling in the gaps” of a task or instructional ladder with strong potential. Instead, score what is present and give formative feedback to help the module author fully achieve that potential.
8. Resist seduction: one good element does not equal a good module.
You read a particularly well-designed task prompt, and from then on the module designer can do no wrong. (This is known as the "halo effect.") One exceptional strength does not cancel out the weaknesses.
9. Recognize pre-loaded template elements.
Some LDC module templates provide a standardized set of Academic Standards, Skills List, and Instructional Ladder Elements that are meant to be selected and adapted for a particular module. Focus on how well aligned those standards, skills, and instructional elements are to the demands of the teaching task and whether the teacher has sufficiently customized those elements for the specific purposes of the module.
10. Know your biases; minimize their impact.
The trick is not to rid yourself of bias; that's impossible. But you do need to recognize what your biases are, and be mindful of how they can trigger first impressions that color all judgments that follow.
To learn more, please contact firstname.lastname@example.org.
*Adapted from TeaMSS "Top Ten Scoring Principles" (Measured Progress and the Stanford Center for Assessment, Learning, & Equity for the Literacy Design Collaborative, 2013).
Literacy Design Collaborative | Professional Development | September 20, 2016
In collaboration with the Literacy Design Collaborative, Lancaster-Lebanon Intermediate Unit 13 is offering an in-depth workshop on the LDC “looking at teacher work” peer review and feedback system to be held on November 1-2 at The Conference and Training Center at IU13 in Lancaster, PA. Registration is open on My Learning Plan!
This two-day LDC Peer Review Studio is open to LDC teachers, instructional coaches, administrators, and LDC professional development facilitators working toward LDC coach certification who already use the LDC Design System and seek to deepen their understanding of and their own alignment to the LDC Peer Review Rubric.
By completing the work of the Studio, participants earn the Nationally Trained LDC Peer Reviewer badge. This LDC badge signifies:
(1) an ability to design modules that will get stronger results with students;
(2) an ability to provide actionable, rubric-aligned feedback to colleagues on their modules; and
(3) the beginning of the rubric-calibration work that leads to the Nationally Certified LDC Peer Reviewer badge, which enables practitioners to participate in national LDC peer-review events.
The Studio will be held from 8:30 a.m. to 4:00 p.m. on November 1 and from 8:30 a.m. to 12:30 p.m. on November 2. A light breakfast and lunch will be provided on both days.
Learn more about the LDC Peer Review system and the rubric developed by the Stanford Center for Assessment, Learning, & Equity (SCALE).
Event registration is open on My Learning Plan. After you complete your My Learning Plan registration, please complete this form to provide us with additional information that will inform our planning.
We encourage you to register as soon as possible. There are limited spaces available for the National LDC Peer Review Studio. Registration will close when all spots are filled.
If you have any questions about the National LDC Peer Review Studio, please email Kelly Galbraith at email@example.com or call (717) 606-1667.