
15 Helpful Scoring Rubric Examples for All Grades and Subjects

In the end, they actually make grading easier.

Collage of scoring rubric examples including written response rubric and interactive notebook rubric

When it comes to student assessment and evaluation, there are a lot of methods to consider. In some cases, testing is the best way to assess a student’s knowledge, and the answers are either right or wrong. But often, assessing a student’s performance is much less clear-cut. In these situations, a scoring rubric is often the way to go, especially if you’re using standards-based grading. Here’s what you need to know about this useful tool, along with lots of rubric examples to get you started.

What is a scoring rubric?

In the United States, a rubric is a guide that lays out the performance expectations for an assignment. It helps students understand what’s required of them, and guides teachers through the evaluation process. (Note that in other countries, the term “rubric” may instead refer to the set of instructions at the beginning of an exam. To avoid confusion, some people use the term “scoring rubric” instead.)

A rubric generally has three parts:

  • Performance criteria: These are the various aspects on which the assignment will be evaluated. They should align with the desired learning outcomes for the assignment.
  • Rating scale: This could be a number system (often 1 to 4) or words like “exceeds expectations, meets expectations, below expectations,” etc.
  • Indicators: These describe the qualities needed to earn a specific rating for each of the performance criteria. The level of detail may vary depending on the assignment and the purpose of the rubric itself.

Rubrics take more time to develop up front, but they help ensure more consistent assessment, especially when the skills being assessed are more subjective. A well-developed rubric can actually save teachers a lot of time when it comes to grading. What’s more, sharing your scoring rubric with students in advance often helps improve performance. This way, students have a clear picture of what’s expected of them and what they need to do to achieve a specific grade or performance rating.

Learn more about why and how to use a rubric here.

Types of Rubrics

There are three basic rubric categories, each with its own purpose.

Holistic Rubric

A holistic scoring rubric laying out the criteria for a rating of 1 to 4 when creating an infographic

Source: Cambrian College

This type of rubric combines all the scoring criteria into a single scale. Holistic rubrics are quick to create and use, but they have drawbacks. If a student’s work spans different levels, it can be difficult to decide which score to assign. They also make it harder to provide feedback on specific aspects.

Traditional letter grades are a type of holistic rubric. So are the popular “hamburger rubric” and “cupcake rubric” examples. Learn more about holistic rubrics here.

Analytic Rubric

Layout of an analytic scoring rubric, describing the different sections like criteria, rating, and indicators

Source: University of Nebraska

Analytic rubrics are much more complex and generally take a great deal more time up front to design. They include specific details of the expected learning outcomes, and descriptions of what criteria are required to meet various performance ratings in each. Each rating is assigned a point value, and the total number of points earned determines the overall grade for the assignment.

Though they’re more time-intensive to create, analytic rubrics actually save time while grading. Teachers can simply circle or highlight any relevant phrases in each rating, and add a comment or two if needed. They also help ensure consistency in grading, and make it much easier for students to understand what’s expected of them.
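To make the point-totaling concrete, here is a minimal Python sketch of how an analytic rubric score might be tallied. The criteria, rating levels, and point values are hypothetical, not taken from any rubric in this article.

# Minimal sketch: totaling an analytic rubric score.
# Criteria, ratings, and point values below are hypothetical examples.
RUBRIC_POINTS = {1: 5, 2: 10, 3: 15, 4: 20}  # points earned at each rating level

def score_assignment(ratings):
    """Sum the points earned for each criterion's rating."""
    return sum(RUBRIC_POINTS[rating] for rating in ratings.values())

ratings = {"thesis": 4, "evidence": 3, "organization": 3, "mechanics": 2}
total = score_assignment(ratings)
maximum = len(ratings) * max(RUBRIC_POINTS.values())
print(f"Total: {total} / {maximum}")  # Total: 60 / 80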

Learn more about analytic rubrics here.

Developmental Rubric

A developmental rubric for kindergarten skills, with illustrations to describe the indicators of criteria

Source: Deb’s Data Digest

A developmental rubric is a type of analytic rubric, but it’s used to assess progress along the way rather than to determine a final score on an assignment. The details in these rubrics help students understand their achievements, as well as highlight the specific skills they still need to improve.

Developmental rubrics leave off the point values, though, and focus instead on giving feedback using the criteria and indicators of performance.

Learn how to use developmental rubrics here.

Ready to create your own rubrics? Find general tips on designing rubrics here. Then, check out these examples across all grades and subjects to inspire you.

Elementary School Rubric Examples

These elementary school rubric examples come from real teachers who use them with their students. Adapt them to fit your needs and grade level.

Reading Fluency Rubric

A developmental rubric example for reading fluency

You can use this one as an analytic rubric by counting up points to earn a final score, or just to provide developmental feedback. There’s a second rubric page available specifically to assess prosody (reading with expression).

Learn more: Teacher Thrive

Reading Comprehension Rubric

Reading comprehension rubric, with criteria and indicators for different comprehension skills

The nice thing about this rubric is that you can use it at any grade level, for any text. If you like this style, you can get a reading fluency rubric here too.

Learn more: Pawprints Resource Center

Written Response Rubric

Two anchor charts, one showing a written response rubric

Rubrics aren’t just for huge projects. They can also help kids work on very specific skills, like this one for improving written responses on assessments.

Learn more: Dianna Radcliffe: Teaching Upper Elementary and More

Interactive Notebook Rubric

Interactive Notebook rubric example, with criteria and indicators for assessment

If you use interactive notebooks as a learning tool, this rubric can help kids stay on track and meet your expectations.

Learn more: Classroom Nook

Project Rubric

Rubric that can be used for assessing any elementary school project

Use this simple rubric as it is, or tweak it to include more specific indicators for the project you have in mind.

Learn more: Tales of a Title One Teacher

Behavior Rubric

Rubric for assessing student behavior in school and classroom

Developmental rubrics are perfect for assessing behavior and helping students identify opportunities for improvement. Send these home regularly to keep parents in the loop.

Learn more: Teachers.net Gazette

Middle School Rubric Examples

In middle school, use rubrics to offer detailed feedback on projects, presentations, and more. Be sure to share them with students in advance, and encourage students to refer to the rubric as they work so they’ll know whether they’re meeting expectations.

Argumentative Writing Rubric

An argumentative rubric example to use with middle school students

Argumentative writing is a part of language arts, social studies, science, and more. That makes this rubric especially useful.

Learn more: Dr. Caitlyn Tucker

Role-Play Rubric

A rubric example for assessing student role play in the classroom

Role-plays can be really useful when teaching social and critical thinking skills, but it’s hard to assess them. Try a rubric like this one to evaluate and provide useful feedback.

Learn more: A Question of Influence

Art Project Rubric

A rubric used to grade middle school art projects

Art is one of those subjects where grading can feel very subjective. Bring some objectivity to the process with a rubric like this.

Source: Art Ed Guru

Diorama Project Rubric

A rubric for grading middle school diorama projects

You can use diorama projects in almost any subject, and they’re a great chance to encourage creativity. Simplify the grading process and help kids know how to make their projects shine with this scoring rubric.

Learn more: Historyourstory.com

Oral Presentation Rubric

Rubric example for grading oral presentations given by middle school students

Rubrics are terrific for grading presentations, since you can include a variety of skills and other criteria. Consider letting students use a rubric like this to offer peer feedback too.

Learn more: Bright Hub Education

High School Rubric Examples

In high school, it’s important to include your grading rubrics when you give assignments like presentations, research projects, or essays. Kids who go on to college will definitely encounter rubrics, so helping them become familiar with them now will help in the future.

Presentation Rubric

Example of a rubric used to grade a high school project presentation

Analyze a student’s presentation for both content and communication skills with a rubric like this one. If needed, create a separate one for content knowledge with even more criteria and indicators.

Learn more: Michael A. Pena Jr.

Debate Rubric

A rubric for assessing a student's performance in a high school debate

Debate is a valuable learning tool that encourages critical thinking and oral communication skills. This rubric can help you assess those skills objectively.

Learn more: Education World

Project-Based Learning Rubric

A rubric for assessing high school project based learning assignments

Implementing project-based learning can be time-intensive, but the payoffs are worth it. Try this rubric to make student expectations clear and end-of-project assessment easier.

Learn more: Free Technology for Teachers

100-Point Essay Rubric

Rubric for scoring an essay with a final score out of 100 points

Need an easy way to convert a scoring rubric to a letter grade? This example for essay writing earns students a final score out of 100 points.

Learn more: Learn for Your Life
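If you ever want to automate that conversion, a tiny Python sketch like the one below maps a 100-point rubric total to a letter grade. The 90/80/70/60 cutoffs are an assumption for illustration; adjust them to match your own grading scale.

# Hypothetical conversion from a 100-point rubric score to a letter grade.
def letter_grade(points):
    if points >= 90:
        return "A"
    if points >= 80:
        return "B"
    if points >= 70:
        return "C"
    if points >= 60:
        return "D"
    return "F"

print(letter_grade(87))  # B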

Drama Performance Rubric

A rubric teachers can use to evaluate a student's participation and performance in a theater production

If you’re unsure how to grade a student’s participation and performance in drama class, consider this example. It offers lots of objective criteria and indicators to evaluate.

Learn more: Chase March

How do you use rubrics in your classroom? Come share your thoughts and exchange ideas in the WeAreTeachers HELPLINE group on Facebook.

Plus, check out 25 of the best alternative assessment ideas.

Scoring rubrics help establish expectations and ensure assessment consistency. Use these rubric examples to help you design your own.


Essay Rubric

About this printout

This rubric delineates specific expectations about an essay assignment to students and provides a means of assessing completed student essays.

Teaching with this printout

Grading rubrics can be of great benefit to both you and your students. For you, a rubric saves time and decreases subjectivity. Specific criteria are explicitly stated, facilitating the grading process and increasing your objectivity. For students, the use of grading rubrics helps them to meet or exceed expectations, to view the grading process as being “fair,” and to set goals for future learning. In order to help your students meet or exceed expectations of the assignment, be sure to discuss the rubric with your students when you assign an essay. It is helpful to show them examples of written pieces that meet and do not meet the expectations. As an added benefit, because the criteria are explicitly stated, the use of the rubric decreases the likelihood that students will argue about the grade they receive. The explicitness of the expectations helps students know exactly why they lost points on the assignment and aids them in setting goals for future improvement.

More ideas to try

  • Routinely have students score peers’ essays using the rubric as the assessment tool. This increases their level of awareness of the traits that distinguish successful essays from those that fail to meet the criteria. Have peer editors use the Reviewer’s Comments section to add any praise, constructive criticism, or questions.
  • Alter some expectations or add additional traits on the rubric as needed. Students’ needs may necessitate making more rigorous criteria for advanced learners or less stringent guidelines for younger or special needs students. Furthermore, the content area for which the essay is written may require some alterations to the rubric. In social studies, for example, an essay about geographical landforms and their effect on the culture of a region might necessitate additional criteria about the use of specific terminology.
  • After you and your students have used the rubric, have them work in groups to make suggested alterations to the rubric to more precisely match their needs or the parameters of a particular writing assignment.


SAT Essay Rubric: Full Analysis and Writing Strategies


We're about to dive deep into the details of that least beloved* of SAT sections, the SAT essay. Prepare for a discussion of the SAT essay rubric and how the SAT essay is graded based on that. I'll break down what each item on the rubric means and what you need to do to meet those requirements.

On the SAT, the last section you'll encounter is the (optional) essay. You have 50 minutes to read a passage, analyze the author's argument, and write an essay. If you don't write on the assignment, plagiarize, or don't use your own original work, you'll get a 0 on your essay. Otherwise, your essay scoring is done by two graders - each one grades you on a scale of 1-4 in Reading, Analysis, and Writing, for a total essay score out of 8 in each of those three areas. But how do these graders assign your writing a numerical grade? By using an essay scoring guide, or rubric.

*may not actually be the least belovèd.


UPDATE: SAT Essay No Longer Offered


In January 2021, the College Board announced that after June 2021, it would no longer offer the Essay portion of the SAT (except at schools that opt in during School Day Testing). It is no longer possible to take the SAT Essay unless your school is one of the small number that choose to offer it during SAT School Day Testing.

While most colleges had already made SAT Essay scores optional, this move by the College Board means no colleges now require the SAT Essay. It will also likely lead to additional college application changes, such as not looking at essay scores at all for the SAT or ACT, as well as potentially requiring additional writing samples for placement.

What does the end of the SAT Essay mean for your college applications? Check out our article on the College Board's SAT Essay decision for everything you need to know.

The Complete SAT Essay Grading Rubric: Item-by-Item Breakdown

Based on the College Board’s stated Reading, Analysis, and Writing criteria, I’ve created the charts below (for easier comparison across score points). For the purpose of going deeper into just what the SAT is looking for in your essay, I’ve then broken down each category further (with examples).

The information in all three charts is taken from the College Board site.

The biggest change to the SAT essay (and the thing that really distinguishes it from the ACT essay) is that you are required to read and analyze a text , then write about your analysis of the author's argument in your essay. Your "Reading" grade on the SAT essay reflects how well you were able to demonstrate your understanding of the text and the author's argument in your essay.

You'll need to show your understanding of the text on two different levels: the surface level of getting your facts right and the deeper level of getting the relationship of the details and the central ideas right.

Surface Level: Factual Accuracy

One of the most important ways you can show you've actually read the passage is making sure you stick to what is said in the text. If you’re writing about things the author didn’t say, or things that contradict other things the author said, your argument will be fundamentally flawed.

For instance, take this quotation from a (made-up) passage about why a hot dog is not a sandwich:

“The fact that you can’t, or wouldn’t, cut a hot dog in half and eat it that way, proves that a hot dog is once and for all NOT a sandwich”

Here's an example of a factually inaccurate paraphrasing of this quotation:

The author builds his argument by discussing how, since hot-dogs are often served cut in half, this makes them different from sandwiches.

The paraphrase contradicts the passage, and so would negatively affect your reading score. Now let's look at an accurate paraphrasing of the quotation:

The author builds his argument by discussing how, since hot-dogs are never served cut in half, they are therefore different from sandwiches.

It's also important to be faithful to the text when you're using direct quotations from the passage. Misquoting or badly paraphrasing the author’s words weakens your essay, because the evidence you’re using to support your points is faulty.

Higher Level: Understanding of Central Ideas

The next step beyond being factually accurate about the passage is showing that you understand the central ideas of the text and how details of the passage relate back to this central idea.

Why does this matter? In order to be able to explain why the author is persuasive, you need to be able to explain the structure of the argument. And you can’t deconstruct the author's argument if you don’t understand the central idea of the passage and how the details relate to it.

Here's an example of a statement about our fictional "hot dogs are sandwiches" passage that shows understanding of the central idea of the passage:

Hodgman’s third primary defense of why hot dogs are not sandwiches is that a hot dog is not a subset of any other type of food. He uses the analogy of asking the question “is cereal milk a broth, sauce, or gravy?” to show that making such a comparison between hot dogs and sandwiches is patently illogical.

The above statement takes one step beyond merely being factually accurate to explain the relation between different parts of the passage (in this case, the relation between the "what is cereal milk?" analogy and the hot dog/sandwich debate).

Of course, if you want to score well in all three essay areas, you’ll need to do more in your essay than merely summarizing the author’s argument. This leads directly into the next grading area of the SAT Essay.

The items covered under this criterion are the most important when it comes to writing a strong essay. You can use well-spelled vocabulary in sentences with varied structure all you want, but if you don't analyze the author's argument, demonstrate critical thinking, and support your position, you will not get a high Analysis score.

Because this category is so important, I've broken it down even further into its two different (but equally important) component parts to make sure everything is as clearly explained as possible.

Part I: Critical Thinking (Logic)

Critical thinking, also known as critical reasoning, also known as logic, is the skill that SAT essay graders are really looking to see displayed in the essay. You need to be able to evaluate and analyze the claim put forward in the prompt. This is where a lot of students may get tripped up, because they think “oh, well, if I can just write a lot, then I’ll do well.” While there is some truth to the assertion that longer essays tend to score higher, if you don’t display critical thinking you won’t be able to get a top score on your essay.

What do I mean by critical thinking? Let's take the previous prompt example:

Write an essay in which you explain how Hodgman builds an argument to persuade his audience that the hot dog cannot, and never should be, considered a sandwich.

An answer to this prompt that does not display critical thinking (and would fall into a 1 or 2 on the rubric) would be something like:

The author argues that hot dogs aren’t sandwiches, which is persuasive to the reader.

While this does evaluate the prompt (by providing a statement that the author's claim "is persuasive to the reader"), there is no corresponding analysis. An answer to this prompt that displays critical thinking (and would net a higher score on the rubric) could be something like this:

The author uses analogies to hammer home his point that hot dogs are not sandwiches. Because the readers will readily believe the first part of the analogy is true, they will be more likely to accept that the second part (that hot dogs aren't sandwiches) is true as well.

See the difference? Critical thinking involves reasoning your way through a situation (analysis) as well as making a judgement (evaluation). On the SAT essay, however, you can’t just stop at abstract critical reasoning - analysis involves one more crucial step...

Part II: Examples, Reasons, and Other Evidence (Support)

The other piece of the puzzle (apparently this is a tiny puzzle) is making sure you are able to back up your point of view and critical thinking with concrete evidence. The SAT essay rubric says that the best (that is, 4-scoring) essay uses “relevant, sufficient, and strategically chosen support for claim(s) or point(s) made.” This means you can’t just stick to abstract reasoning.

Abstract reasoning is a good starting point, but if you don't back up your point of view with quoted or paraphrased information from the text to support your discussion of the way the author builds his/her argument, you will not be able to get above a 3 on the Analysis portion of the essay (and possibly the Reading portion as well, if you don't show you've read the passage). Let's take a look at an example of how you might support an interpretation of the author's effect on the reader using facts from the passage:

The author’s reference to the Biblical story about King Solomon elevates the debate about hot dogs from a petty squabble between friends to a life-or-death disagreement. The reader cannot help but see the parallels between the two situations and thus find themselves agreeing with the author on this point.

Does the author's reference to King Solomon actually "elevate the debate," causing the reader to agree with the author? From the sentences above, it certainly seems plausible that it might. While your facts do need to be correct,  you get a little more leeway with your interpretations of how the author’s persuasive techniques might affect the audience. As long as you can make a convincing argument for the effect a technique the author uses might have on the reader, you’ll be good.


Did I just blow your mind? Read more about the secrets the SAT doesn’t want you to know in this article.

Your Writing score on the SAT essay is not just a reflection of your grasp of the conventions of written English (although it is that as well). You'll also need to be focused, organized, and precise.

Because there are a lot of different factors that go into calculating your Writing score, I've divided the discussion of this rubric area into five separate items:

  • Precise Central Claim
  • Organization
  • Vocab and Word Choice
  • Sentence Structure
  • Grammar, Punctuation, and Spelling

One of the most basic rules of the SAT essay is that you need to express a clear opinion on the "assignment" (the prompt). While in school (and everywhere else in life, pretty much) you’re encouraged to take into account all sides of a topic, it behooves you to NOT do this on the SAT essay. Why? Because you only have 50 minutes to read the passage, analyze the author's argument, and write the essay, there's no way you can discuss every single way in which the author builds his/her argument, every single detail of the passage, or a nuanced argument about what works and what doesn't work.

Instead, I recommend focusing your discussion on a few key ways the author is successful in persuading his/her audience of his/her claim.

Let’s go back to the assignment we've been using as an example throughout this article:

"Write an essay in which you explain how Hodgman builds an argument to persuade his audience that the hot dog cannot, and never should be, considered a sandwich."

Your instinct (trained from many years of schooling) might be to answer:

"There are a variety of ways in which the author builds his argument."

This is a nice, vague statement that leaves you a lot of wiggle room. If you disagree with the author, it's also a way of avoiding having to say that the author is persuasive. Don't fall into this trap! You do not necessarily have to agree with the author's claim in order to analyze how the author persuades his/her readers that the claim is true.

Here's an example of a precise central claim about the example assignment:

The author effectively builds his argument that hot dogs are not sandwiches by using logic, allusions to history and mythology, and factual evidence.

In contrast to the vague claim that "There are a variety of ways in which the author builds his argument," this thesis both specifies what the author's argument is and the ways in which he builds the argument (that you'll be discussing in the essay).

While it's extremely important to make sure your essay has a clear point of view, strong critical reasoning, and support for your position, that's not enough to get you a top score. You need to make sure that your essay  "demonstrates a deliberate and highly effective progression of ideas both within paragraphs and throughout the essay."

What does this mean? Part of the way you can make sure your essay is "well organized" has to do with following standard essay construction points. Don't write your essay in one huge paragraph; instead, include an introduction (with your thesis stating your point of view), body paragraphs (one for each example, usually), and a conclusion. This structure might seem boring, but it really works to keep your essay organized, and the more clearly organized your essay is, the easier it will be for the essay grader to understand your critical reasoning.

The second part of this criterion has to do with keeping your essay focused, making sure it contains "a deliberate and highly effective progression of ideas." You can't just say "well, I have an introduction, body paragraphs, and a conclusion, so I guess my essay is organized" and expect to get a 4/4 on your essay. You need to make sure that each paragraph is also organized. Recall the sample prompt:

“Write an essay in which you explain how Hodgman builds an argument to persuade his audience that the hot dog cannot, and never should be, considered a sandwich.”

And our hypothetical thesis: “The author effectively builds his argument that hot dogs are not sandwiches by using logic, allusions to history and mythology, and factual evidence.”

Let's say that you're writing the paragraph about the author's use of logic to persuade his reader that hot dogs aren't sandwiches. You should NOT just list ways that the author is logical in support of his claim, then explain why logic in general is an effective persuasive device. While your points might all be valid, your essay would be better served by connecting each instance of logic in the passage with an explanation of how that example of logic persuades the reader to agree with the author.

Above all, it is imperative that you make your thesis (your central claim) clear in the opening paragraph of your essay - this helps the grader keep track of your argument. There's no reason you’d want to make following your reasoning more difficult for the person grading your essay (unless you’re cranky and don’t want to do well on the essay. Listen, I don’t want to tell you how to live your life).

In your essay, you should use a wide array of vocabulary (and use it correctly). An essay that scores a 4 in Writing on the grading rubric “demonstrates a consistent use of precise word choice.”

You’re allowed a few errors, even on a 4-scoring essay, so you can sometimes get away with misusing a word or two. In general, though, it’s best to stick to using words you are certain you not only know the meaning of, but also know how to use. If you’ve been studying up on vocab, make sure you practice using the words you’ve learned in sentences, and have those sentences checked by someone who is good at writing (in English), before you use those words in an SAT essay.

Creating elegant, non-awkward sentences is the thing I struggle most with under time pressure. For instance, here’s my first try at the previous sentence: “Making sure a sentence structure makes sense is the thing that I have the most problems with when I’m writing in a short amount of time” (hahaha NOPE - way too convoluted and wordy, self). As another example, take a look at these two excerpts from the hypothetical essay discussing how the author persuaded his readers that a hot dog is not a sandwich:

Score of 2: "The author makes his point by critiquing the argument against him. The author pointed out the logical fallacy of saying a hot dog was a sandwich because it was meat "sandwiched" between two breads. The author thus persuades the reader his point makes sense to be agreed with and convinces them."

The above sentences lack variety in structure (they all begin with the words "the author"), and the last sentence has serious flaws in its structure (it makes no sense).

Score of 4: "The author's rigorous examination of his opponent's position invites the reader, too, to consider this issue seriously. By laying out his reasoning, step by step, Hodgman makes it easy for the reader to follow along with his train of thought and arrive at the same destination that he has. This destination is Hodgman's claim that a hot dog is not a sandwich."

The above sentences demonstrate variety in sentence structure (they don't all begin with the same word and don't have the same underlying structure) that presumably forward the point of the essay.

In general, if you're doing well in all the other Writing areas, your sentence structures will also naturally vary. If you're really worried that your sentences are not varied enough, however, my advice for working on "demonstrating meaningful variety in sentence structure" (without ending up with terribly worded sentences) is twofold:

  • Read over what you’ve written before you hand it in and change any wordings that seem awkward, clunky, or just plain incorrect.
  • As you’re doing practice essays, have a friend, family member, or teacher who is good at (English) writing look over your essays and point out any issues that arise. 

This part of the Writing grade is all about the nitty gritty details of writing: grammar, punctuation, and spelling . It's rare that an essay with serious flaws in this area can score a 4/4 in Reading, Analysis, or Writing, because such persistent errors often "interfere with meaning" (that is, persistent errors make it difficult for the grader to understand what you're trying to get across).

On the other hand, if they occur in small quantities, grammar/punctuation/spelling errors are also the things that are most likely to be overlooked. If two essays are otherwise of equal quality, but one writer misspells "definitely" as "definately" and the other writer fails to explain how one of her examples supports her thesis, the first writer will receive a higher essay score. It's only when poor grammar, use of punctuation, and spelling start to make it difficult to understand your essay that the graders start penalizing you.

My advice for working on this rubric area is the same advice as for sentence structure: look over what you’ve written to double check for mistakes, and ask someone who’s good at writing to look over your practice essays and point out your errors. If you're really struggling with spelling, simply typing up your (handwritten) essay into a program like Microsoft Word and running spellcheck can alert you to problems. We've also got a great set of articles up on our blog about SAT Writing questions that may help you better understand any grammatical errors you are making.

How Do I Use The SAT Essay Grading Rubric?

Now that you understand the SAT essay rubric, how can you use it in your SAT prep? There are a couple of different ways.

Use The SAT Essay Rubric To...Shape Your Essays

Since you know what the SAT is looking for in an essay, you can now use that knowledge to guide what you write about in your essays!

A tale from my youth: when I was preparing to take the SAT for the first time, I did not really know what the essay was looking for, and assumed that since I was a good writer, I’d be fine.

Not true! The most important part of the SAT essay is using specific examples from the passage and explaining how they convince the reader of the author's point. By reading this article and realizing there's more to the essay than "being a strong writer," you’re already doing better than high school me.


Use The SAT Essay Rubric To...Grade Your Practice Essays

The SAT can’t exactly give you an answer key to the essay. Even when an example of an essay that scored a particular score is provided, that essay will probably use different examples than you did, make different arguments, maybe even argue different interpretations of the text...making it difficult to compare the two. The SAT essay rubric is the next best thing to an answer key for the essay - use it as a lens through which to view and assess your essay.

Of course, you don’t have the time to become an expert SAT essay grader - that’s not your job. You just have to apply the rubric as best as you can to your essays and work on fixing your weak areas . For the sentence structure, grammar, usage, and mechanics stuff I highly recommend asking a friend, teacher, or family member who is really good at (English) writing to take a look over your practice essays and point out the mistakes.

If you really want custom feedback on your practice essays from experienced essay graders, may I also suggest the PrepScholar test prep platform? I manage the essay grading and so happen to know quite a bit about the essay part of this platform, which gives you both an essay grade and custom feedback for each essay you complete. Learn more about how it all works here.

What’s Next?

Are you so excited by this article that you want to read even more articles on the SAT essay? Of course you are. Don't worry, I’ve got you covered. Learn how to write an SAT essay step-by-step and read about the 6 types of SAT essay prompts.

Want to go even more in depth with the SAT essay? We have a complete list of past SAT essay prompts as well as tips and strategies for how to get a 12 on the SAT essay.

Still not satisfied? Maybe a five-day free trial of our very own PrepScholar test prep platform (which includes essay practice and feedback) is just what you need.

Trying to figure out whether the old or new SAT essay is better for you? Take a look at our article on the new SAT essay assignment to find out!


Rubric Design

Articulating your assessment values

Reading, commenting on, and then assigning a grade to a piece of student writing requires intense attention and difficult judgment calls. Some faculty dread “the stack.” Students may share the faculty’s dim view of writing assessment, perceiving it as highly subjective. They wonder why one faculty member values evidence and correctness before all else, while another seeks a vaguely defined originality.

Writing rubrics can help address the concerns of both faculty and students by making writing assessment more efficient, consistent, and public. Whether it is called a grading rubric, a grading sheet, or a scoring guide, a writing assignment rubric lists criteria by which the writing is graded.

Why create a writing rubric?

  • It makes your tacit rhetorical knowledge explicit
  • It articulates community- and discipline-specific standards of excellence
  • It links the grade you give the assignment to the criteria
  • It can make your grading more efficient, consistent, and fair as you can read and comment with your criteria in mind
  • It can help you reverse engineer your course: once you have the rubrics created, you can align your readings, activities, and lectures with the rubrics to set your students up for success
  • It can help your students produce writing that you look forward to reading

How to create a writing rubric

Create a rubric at the same time you create the assignment. It will help you explain to the students what your goals are for the assignment.

  • Consider your purpose: do you need a rubric that addresses the standards for all the writing in the course? Or do you need to address the writing requirements and standards for just one assignment?  Task-specific rubrics are written to help teachers assess individual assignments or genres, whereas generic rubrics are written to help teachers assess multiple assignments.
  • Begin by listing the important qualities of the writing that will be produced in response to a particular assignment. It may be helpful to have several examples of excellent versions of the assignment in front of you: what writing elements do they all have in common? Among other things, these may include features of the argument, such as a main claim or thesis; use and presentation of sources, including visuals; and formatting guidelines such as the requirement of a works cited.
  • Then consider how the criteria will be weighted in grading. Perhaps all criteria are equally important, or perhaps there are two or three that all students must achieve to earn a passing grade. Decide what best fits the class and requirements of the assignment.

Consider involving students in Steps 2 and 3. A class session devoted to developing a rubric can provoke many important discussions about the ways the features of the language serve the purpose of the writing. And when students themselves work to describe the writing they are expected to produce, they are more likely to achieve it.

At this point, you will need to decide if you want to create a holistic or an analytic rubric. There is much debate about these two approaches to assessment.

Comparing Holistic and Analytic Rubrics

Holistic Scoring

Holistic scoring aims to rate overall proficiency in a given student writing sample. It is often used in large-scale writing program assessment and impromptu classroom writing for diagnostic purposes.

General tenets of holistic scoring:

  • Responding to drafts is part of evaluation
  • Responses do not focus on grammar and mechanics during drafting and there is little correction
  • Marginal comments are kept to 2-3 per page with summative comments at end
  • End commentary attends to students’ overall performance across learning objectives as articulated in the assignment
  • Response language aims to foster students’ self-assessment

Holistic rubrics emphasize what students do well and generally increase efficiency; they may also be more valid because scoring includes the authentic, personal reaction of the reader. But holistic scores won’t tell a student how they’ve progressed relative to previous assignments and may be rater-dependent, reducing reliability. (For a summary of advantages and disadvantages of holistic scoring, see Becker, 2011, p. 116.)

Here is an example of a partial holistic rubric:

Summary meets all the criteria. The writer understands the article thoroughly. The main points in the article appear in the summary with all main points proportionately developed. The summary should be as comprehensive as possible and should read smoothly, with appropriate transitions between ideas. Sentences should be clear, without vagueness or ambiguity and without grammatical or mechanical errors.

A complete holistic rubric for a research paper (authored by Jonah Willihnganz) can be  downloaded here.

Analytic Scoring

Analytic scoring makes explicit the contribution to the final grade of each element of writing. For example, an instructor may choose to give 30 points for an essay whose ideas are sufficiently complex, that marshals good reasons in support of a thesis, and whose argument is logical; and 20 points for well-constructed sentences and careful copy editing.
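To illustrate the arithmetic, here is a minimal Python sketch using the two elements from the example above; the earned points are invented for illustration.

# Weighted analytic scoring: each element contributes a fixed maximum to the grade.
# The earned points below are hypothetical.
max_points = {"ideas_and_argument": 30, "sentences_and_copyediting": 20}
earned = {"ideas_and_argument": 26, "sentences_and_copyediting": 15}

total = sum(earned.values())
possible = sum(max_points.values())
print(f"{total}/{possible} = {total / possible:.0%}")  # 41/50 = 82%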

General tenets of analytic scoring:

  • Reflect emphases in your teaching and communicate the learning goals for the course
  • Emphasize student performance across criteria, which are established as central to the assignment in advance, usually on an assignment sheet
  • Typically take a quantitative approach, providing a scaled set of points for each criterion
  • Make the analytic framework available to students before they write  

Advantages of an analytic rubric include ease of training raters and improved reliability. Meanwhile, writers often can more easily diagnose the strengths and weaknesses of their work. But analytic rubrics can be time-consuming to produce, and raters may judge the writing holistically anyway. Moreover, many readers believe that writing traits cannot be separated. (For a summary of the advantages and disadvantages of analytic scoring, see Becker, 2011, p. 115.)

For example, a partial analytic rubric for a single trait, “addresses a significant issue”:

  • Excellent: Elegantly establishes the current problem, why it matters, to whom
  • Above Average: Identifies the problem; explains why it matters and to whom
  • Competent: Describes topic but relevance unclear or cursory
  • Developing: Unclear issue and relevance

A  complete analytic rubric for a research paper can be downloaded here.  In WIM courses, this language should be revised to name specific disciplinary conventions.

Whichever type of rubric you write, your goal is to avoid pushing students into prescriptive formulas and limiting thinking (e.g., “each paragraph has five sentences”). By carefully describing the writing you want to read, you give students a clear target, and, as Ed White puts it, “describe the ongoing work of the class” (75).

Writing rubrics contribute meaningfully to the teaching of writing. Think of them as a coaching aide. In class and in conferences, you can use the language of the rubric to help you move past generic statements about what makes good writing good to statements about what constitutes success on the assignment and in the genre or discourse community. The rubric articulates what you are asking students to produce on the page; once that work is accomplished, you can turn your attention to explaining how students can achieve it.

Works Cited

Becker, Anthony.  “Examining Rubrics Used to Measure Writing Performance in U.S. Intensive English Programs.”   The CATESOL Journal  22.1 (2010/2011):113-30. Web.

White, Edward M.  Teaching and Assessing Writing . Proquest Info and Learning, 1985. Print.

Further Resources

CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” November 2006 (Revised March 2009). Conference on College Composition and Communication. Web.

Gallagher, Chris W. “Assess Locally, Validate Globally: Heuristics for Validating Local Writing Assessments.” Writing Program Administration 34.1 (2010): 10-32. Web.

Huot, Brian.  (Re)Articulating Writing Assessment for Teaching and Learning.  Logan: Utah State UP, 2002. Print.

Kelly-Reilly, Diane, and Peggy O’Neil, eds. Journal of Writing Assessment. Web.

  • McKee, Heidi A., and Dànielle Nicole DeVoss, eds. Digital Writing Assessment & Evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web.

O’Neill, Peggy, Cindy Moore, and Brian Huot.  A Guide to College Writing Assessment . Logan: Utah State UP, 2009. Print.

Sommers, Nancy.  Responding to Student Writers . Macmillan Higher Education, 2013.

Straub, Richard. “Responding, Really Responding to Other Students’ Writing.” The Subject is Writing: Essays by Teachers and Students. Ed. Wendy Bishop. Boynton/Cook, 1999. Web.

White, Edward M., and Cassie A. Wright.  Assigning, Responding, Evaluating: A Writing Teacher’s Guide . 5th ed. Bedford/St. Martin’s, 2015. Print.

Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks equally important as the main assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google “rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but work through the remaining steps to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Can they be observed and measured?
  • Are they important and essential?
  • Are they distinct from other criteria?
  • Are they phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some are more important than others, and how you will weight them.

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (For example: 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc. See the sketch after this list.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.
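As a concrete illustration of pairing numeric scores with descriptive labels, here is a minimal Python sketch; the five labels are just one common example, not a required scale.

# One possible 5-level rating scale pairing numeric scores with descriptive labels.
RATING_SCALE = {
    5: "Exceeds expectations",
    4: "Accomplished",
    3: "Proficient",
    2: "Developing",
    1: "Beginning",
}

for score in sorted(RATING_SCALE, reverse=True):
    print(f"{score}: {RATING_SCALE[score]}")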

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven useful for creating rubrics. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.
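For instance, a prompt along these lines gives the assistant the pieces it needs; the course, criteria, and levels in this sketch are placeholders to swap for your own.

# Illustrative prompt for an AI assistant; the course, criteria, and levels are placeholders.
prompt = (
    "Create an analytic grading rubric for a 5-page persuasive essay in a "
    "first-year writing course. Use these criteria: thesis, evidence, "
    "organization, style, and mechanics. Include four levels of performance "
    "(Exemplary, Proficient, Developing, Beginning) and describe what each "
    "criterion looks like at each level. Format the rubric as a table."
)
print(prompt)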

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work). You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric
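If you would rather draft the table programmatically before pasting it into a spreadsheet or Moodle, a minimal Python sketch like this one writes a rubric out as a CSV file; the criteria and level descriptions are placeholders.

# Draft an analytic rubric as a CSV table that opens in any spreadsheet program.
# Criteria and level descriptions here are placeholders.
import csv

levels = ["Exemplary", "Proficient", "Developing"]
criteria = {
    "Thesis": ["Clear, arguable, and specific", "Clear but broad", "Unclear or missing"],
    "Evidence": ["Relevant and well integrated", "Relevant but thin", "Little or no evidence"],
}

with open("rubric.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criterion"] + levels)
    for criterion, descriptions in criteria.items():
        writer.writerow([criterion] + descriptions)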

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teacher assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

Best Practices

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Examples of analytic, holistic, and single-point rubrics for a final paper, plus more examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

ESL Essay Writing Rubric


Scoring essays written by English learners can be difficult because producing longer written structures in English is itself a challenging task for them. ESL/EFL teachers should expect errors in each area and make appropriate concessions in their scoring. Rubrics should be based on a keen understanding of English learners' communicative levels. This essay writing rubric provides a scoring system that is more appropriate for English learners than standard rubrics: it awards marks not only for organization and structure, but also for important sentence-level features such as the correct use of linking language, spelling, and grammar.

Essay Writing Rubric


Academic Writing Scoring Rubric (weighted), adapted from MELAB (L. Hamp-Lyons, 1992)

Walden University

Writing Assessment: Scoring Criteria



Essay Scoring Rubric

Your Writing Assessment essay will be scored based on the rubric in your DRWA Doctoral Writing Assessment classroom focusing on:

  • Central idea of essay is clear, related to the prompt, and developed
  • Paraphrase and analysis of reading material supports the overall argument
  • Organization of ideas uses a logical structure, clear paragraphs, and appropriate transitions
  • Grammar and mechanics effectively communicate meaning

To view the scoring criteria for each rubric category, visit the DRWA Doctoral Writing Assessment: Essay Score module in your DRWA classroom.

To test out of the required Graduate Writing I and Graduate Writing II courses, you must show mastery of the writing skills represented in the rubric in your DRWA Doctoral Writing Assessment classroom.

If you are required to take Graduate Writing I and/or Graduate Writing II based on your assessment score, you can learn more about the learning outcomes of these courses from Walden University.



  • Open access
  • Published: 26 September 2020

Examining consistency among different rubrics for assessing writing

Enayat A. Shabani & Jaleh Panahi

Language Testing in Asia, volume 10, Article number 12 (2020)


The literature on using scoring rubrics in writing assessment underscores the significance of rubrics as practical and useful means to assess the quality of writing tasks. This study investigates the agreement among the rubrics endorsed and used for assessing the essay writing tasks of internationally recognized tests of English language proficiency. To carry out the study, two hundred essays (Task 2) from the academic IELTS test were randomly selected from about 800 essays collected from an official IELTS center, a representative of IDP Australia, for tests taken between 2015 and 2016. The test takers were 19 to 42 years of age; 120 were female and 80 were male. Four raters were provided with four sets of rubrics used for scoring the essay writing tasks of tests developed by Educational Testing Service (ETS) and Cambridge English Language Assessment (i.e., independent TOEFL iBT, GRE, CPE, and CAE) to score the essays, which had previously been scored officially by a certified IELTS examiner. The data analysis through correlation and factor analysis showed a general agreement among raters and scores; however, some deviant scorings were spotted for two of the raters. Follow-up interviews and a questionnaire survey revealed that the source of the score deviations could be related to the raters’ interests and (un)familiarity with certain exams and their corresponding rubrics. Specifically, the results indicated that despite the significance which can be attached to rubrics in writing assessment, raters themselves can exceed them in terms of impact on scores.

Introduction

Writing effectively is a very crucial part of advancement in academic contexts (Rosenfeld et al. 2004 ; Rosenfeld et al. 2001 ), and generally, it is a leading contributor to anyone’s progress in the professional environment (Tardy and Matsuda 2009 ). It is an essential skill enabling individuals to have a remarkable role in today’s communities (Cumming 2001 ; Dunsmuir and Clifford 2003 ). Capable and competent L2 writers demonstrate their idea in the written form, present and discuss their contentions, and defend their stances in different circumstances (Archibald 2004 ; Bridgeman and Carlson 1983 ; Brown and Abeywickrama 2010 ; Cumming 2001 ; Hinkel 2009 ; Hyland 2004 ). Writing correctly and impressively is vital as it ensures that ideas and beliefs are expressed and transferred effectively. Being capable of writing well in the academic environment leads to better scores (Faigley et al. 1981 ; Graham et al. 2005 ; Harman 2013 ). It also helps those who require admission to different organizations of higher education (Lanteigne 2017 ) and provides them with better opportunities to get better job positions. Business communications, proceedings, legal agreements, and military agreements all have to be well written to transmit information in the most influential way (Canseco and Byrd 1989 ; Grabe and Kaplan 1996 ; Hyland 2004 ; Kroll and Kruchten 2003 ; Matsuda 2002 ). What should be taken into consideration is that even well until the mid-1980s, L2 writing in general, and academic L2 writing in particular, was hardly regarded as a major part of standard language tests desirable of being tested on its own right. Later, principally owing to the announced requirements of some universities, it meandered through its path to first being recognized as an option in these tests and then recently turning into an indispensable and integral part of them.

L2 writing is not the mere adequate use of grammar and vocabulary in composing a text, rather it is more about the content, organization and accurate use of language, and proper use of linguistic and textual parts of the language (Chenoweth and Hayes 2001 ; Cumming 2001 ; Holmes 2006 ; Hughes 2003 ; Sasaki 2000 ; Weissberg 2000 ; Wiseman 2012 ). Essay, as one of the official practices of writing, has become a major part of formal education in different countries. It is used by different universities and institutes in selecting qualified applicants, and the applicants’ mastery and comprehension of L2 writing are evaluated by their performance in essay writing.

Essay, as one of the most formal types of writing, constitutes a setting in which clear explanations and arguments on a given topic are anticipated (Kane 2000 ; Muncie 2002 ; Richards and Schmidt 2002 ; Spurr 2005 ). The first steps in writing an essay are to gain a good grasp of the topic, apprehend the raised question and produce the response in an organized way, select the proper lexicon, and use the best structures (Brown and Abeywickrama 2010 ; Wyldeck 2008 ). To many, writing an essay is hampering, yet is a key to success. It makes students think critically about a topic, gather information, organize and develop an idea, and finally produce a fulfilling written text (Levin 2009 ; Mackenzie 2007 ; McLaren 2006 ; Wyldeck 2008 ).

L2 writing has had a great impact on the field of teaching and learning and is now viewed not only as an independent skill in the classroom but also as an integral aspect of the process of instruction, learning, and most freshly, assessment (Archibald 2001 ; Grabe and Kaplan 1996 ; MacDonald 1994 ; Nystrand et al. 1993 ; Raimes 1991 ). Now, it is not possible to think of a dependable test of English language proficiency without a section on essay writing, especially when academic and educational purposes are of concern. Educational Testing Service (ETS) and Cambridge English Language Assessment offer a particular section on essay writing for their tests of English language proficiency. The independent TOEFL iBT writing section, the objective of which is to gauge and assess learners’ ability to logically and precisely express their opinions using their L2 requires the learners to write well at the sentence, paragraph, and essay level. It is written on a computer using a word processing program with rudimentary qualities which does not have a self-checker and a grammar or spelling checker. Generally, the essay should have an introduction, a body, and a conclusion. A standard essay usually has four paragraphs, five is possibly better, and six is too many (Biber et al. 2004 ; Cumming et al. 2000 ). TOEFL iBT is scored based on the candidates’ performance on two tasks in the writing section. Candidates should at least do one of the writing tasks. Scoring could be done either by human rater or automatically (the eRater). Using human judgment for assessing content and meaning along with automated scoring for evaluating linguistic features ensures the consistency and reliability of scores (Jamieson and Poonpon 2013 ; Kong et al. 2009 ; Weigle 2013 ).

The Graduate Record Examination (GRE) analytical writing section consists of two different essay tasks, an “issue task” and an “argument task”, the latter being the focus of the present study. Akin to the TOEFL iBT, the GRE essay is also written on a computer using very basic features of a word processing program. Each essay has an introduction including some contextual and background information about what is going to be analyzed, and a body in which complex ideas should be articulated clearly and effectively, using enough examples and relevant reasons to support the thesis statement. Finally, the claims and opinions have to be summed up coherently in the concluding part (Broer et al. 2005 ). The GRE is scored twice on a holistic scale, and usually the average score is reported if the two scores are within one point; otherwise, a third reader steps in and examines the essay (Staff 2017 ; Zahler 2011 ).

IELTS essay writing (in both Academic and General Modules) involves developing a formal five-paragraph essay in 40 min. Similar to essays in other exams, it should include an introductory paragraph, two to three body paragraphs, and a concluding paragraph (Aish and Tomlinson 2012 ; Dixon 2015 ; Jakeman 2006 ; Loughead 2010 ; Stewart 2009 ). To score IELTS essay writing, the received scores for the (four) components of the rubric are averaged (Fleming et al. 2011 ).
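As a concrete, purely invented illustration of this averaging (in Python, without modeling the official rounding conventions):

# Hypothetical component scores for one IELTS Task 2 essay; the band is their mean.
criterion_scores = {
    "Task achievement": 7,
    "Coherence and cohesion": 6,
    "Lexical resource": 7,
    "Grammatical range and accuracy": 6,
}
band = sum(criterion_scores.values()) / len(criterion_scores)
print(band)  # 6.5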

The writing sections of the Cambridge Advanced Certificate in English (CAE) and the Cambridge English: Proficiency (CPE) exams have two parts. The first part is compulsory and candidates are asked to write in response to an input text including articles, leaflets, notices, and formal and/or informal letters. In the second part, the candidates must select one of the writing tasks that might be a letter, proposal, report, or a review (Brookhart and Haines 2009 ; Corry 1999 ; Duckworth et al. 2012 ; Evans 2005 ; Moore 2009 ). The essays should include an introduction, a body, and a conclusion (Spratt and Taylor 2000 ). Similar to IELTS essay writing, these exams are scored analytically. The scores are added up and then converted to a scale of 1 to 20 (Brookhart 1999 ; Harrison 2010 ).

Assessing L2 writing proficiency is a flourishing area, and the precise assessment of writing is a critical matter. Practically, learners are generally expected to produce a piece of text so that raters can evaluate the overall quality of their performance using a variety of different scoring systems including holistic and analytic scoring, which are the most common and acceptable ways of assessing essays (Anderson 2005 ; Brossell 1986 ; Brown and Abeywickrama 2010 ; Hamp-Lyons 1990 , 1991 ; Kroll 1990 ). Today, the significance of L2 writing assessment is on an increase not only in language-related fields of studies but also arguably in all disciplines, and it is a very pressing concern in various educational and also vocational settings.

L2 writing assessment is the focal point of an effective teaching process of this complicated skill (Jones 2001 ). A diligent assessment of writing completes the way it is taught (White 1985 ). The challenging and thorny natures of assessment and writing skills impede the reliable assessment of an essay (Muenz et al. 1999 ) such that, to date, a plethora of research studies have been conducted to discern the validity and reliability of writing assessment. Huot ( 1990 ) argues that writing assessment encounters difficulty because usually, there are more than two or three raters assessing essays, which may lead to uncertainty in writing assessment.

L2 writing assessment is generally prone to subjectivity and bias, and “the assessment of writing has always been threatened due to raters’ biasedness” (Fahim and Bijani 2011 , p. 1). Ample studies document that raters’ assessment and judgments are biased (Kondo-Brown 2002 ; Schaefer 2008 ). They also suggested that in order to reduce the bias and subjectivity in assessing L2 writing, standard and well-described rating scales, viz rubrics, should be determined (Brown and Jaquith 2007 ; Diederich et al. 1961 ; Hamp-Lyons 2007 ; Jonsson and Svingby 2007 ; Aryadoust and Riazi 2016 ). Furthermore, there are some studies suggesting the tendency of many raters toward subjectivity in writing assessment (Eckes 2005 ; Lumley 2005 ; O’Neil and Lunz 1996 ; Saeidi et al. 2013 ; Schaefer 2008 ). In light of these considerations, it becomes of prominence to improve consistency among raters’ evaluations of writing proficiency and to increase the reliability and validity of their judgments to avoid bias and subjectivity to produce a greater agreement between raters and ratings. The most notable move toward attaining this objective is using rubrics (Cumming 2001 ; Hamp-Lyons 1990 ; Hyland 2004 ; Raimes 1991 ; Weigle 2002 ). In layman’s terms, rubrics ensure that all the raters evaluate a writing task by the same standards (Biggs and Tang 2007 ; Dunsmuir and Clifford 2003 ; Spurr 2005 ). To curtail the probable subjectivity and personal bias in assessing one’s writing, there should be some determined and standard criteria for assessing different types of writing tasks (Condon 2013 ; Coombe et al. 2012 ; Shermis 2014 ; Weigle 2013 ).

Assessment rubrics (alternatively called instruments) should be reliable, valid, practical, fair, and constructive to learning and teaching (Anderson et al. 2011 ). Moskal and Leydens ( 2000 ) considered validity and reliability as the two significant factors when rubrics are used for assessing an individual’s work. Although researchers may define validity and reliability in various ways (for instance, Archibald 2001 ; Brookhart 1999 ; Bachman and Palmer 1996 ; Coombe et al. 2012 ; Cumming 2001 ; Messick 1994 ; Moskal and Leydens 2000 ; Moss 1994 ; Rezaei and Lovorn 2010 ; Weigle 2002 ; White 1994 ; Wiggan 1994 ), they generally agree that validity in this area of investigation is the degree to which the criteria support the interpretations of what is going to be measured. Reliability, they generally settle, is the consistency of assessment scores regardless of time and place. Rubrics and any rating scales should be so developed to corroborate these two important factors and equip raters and scorers with an authoritative tool to assess writing tasks fairly. Arguably, “the purpose of the essay task, whether for diagnosis, development, or promotion, is significant in deciding which scale is chosen” (Brossell 1986 , p. 2). As rubrics should be conceived and designed with the purpose of assessment of any given type of written task (Crusan 2015 ; Fulcher 2010 ; Knoch 2009 ; Malone and Montee 2014 ; Weigle 2002 ), the development and validation of rating scales are very challenging issues.

Writing rubrics can also help teachers gauge their own teaching (Coombe et al. 2012 ). Rubrics are generally perceived as very significant resources attainable for teachers enabling them to provide insightful feedback on L2 writing performance and assess learners’ writing ability (Brown and Abeywickrama 2010 ; Knoch 2011 ; Shaw and Weir 2007 ; Weigle 2002 ). Similarly, but from another perspective, rubrics help learners to follow a clear route of progress and contribute to their own learning (Brown and Abeywickrama 2010 ; Eckes 2012 ). Well-defined rubrics are constructive criteria, which help learners to understand what the desired performance is (Bachman and Palmer 1996 ; Fulcher and Davidson 2007 ; Weigle 2002 ). Employing rubrics in the realm of writing assessment helps learners understand raters’ and teachers’ expectations better, judge and revise their own work more successfully, promote self-assessment of their learning, and improve the quality of their writing task. Rubrics can be used as an effective tool enabling learners to focus on their efforts, produce works of higher quality, get better grades, find better jobs, and feel more concerned and confident about doing their assignment (Bachman and Palmer 2010 ; Cumming 2013 ; Kane 2006 ).

Rubrics are set to help scorers evaluate writers’ performances and provide them with very clear descriptions about organization and coherence, structure and vocabulary, fluent expressions, ideas and opinions, among other things. They are also practical for the purpose of describing one’s competence in logical sequencing of ideas in producing a paragraph, use of sufficient and proper grammar and vocabulary related to the topic (Kim 2011 ; Pollitt and Hutchinson 1987 ; Weigle 2002 ). Employing rubrics reduces the time required to assess a writing performance and, most importantly, well-defined rubrics clarify criteria in particular terms enabling scorers and raters to judge a work based on standard and unified yardsticks (Gustilo and Magno 2015 ; Kellogg et al. 2016 ; Klein and Boscolo 2016 ).

Selecting and designing an effective rating scale hinges upon the purpose of the test (Alderson et al. 1995 ; Attali et al. 2012 ; Becker 2011 ; East 2009 ). Although rubrics are crucial in essay evaluation, choosing the appropriate rating scale and forming criteria based on the purpose of assessment are as important (Bacha 2001 ; Coombe et al. 2012 ). It seems that a considerable part of scale developers prefers to adapt their scoring scales from a well-established existing one (Cumming 2001 ; Huot et al. 2009 ; Wiseman 2012 ). The relevant literature supports the idea of adapting rating scales used in large-scale tests for academic purposes (Bacha 2001 ; Leki et al. 2008 ). Yet, East ( 2009 ) warned about the adaptation of rating scales from similar tests, especially when they are to be used across languages.

Holistic and analytic scoring systems are now widely used to identify learners’ writing proficiency levels for different purposes (Brown and Abeywickrama 2010 ; Charney 1984 ; Cohen 1994 ; Coombe et al. 2012 ; Cumming 2001 ; Hamp-Lyons 1990 ; Reid 1993 ; Weir 1990 ). Unlike the analytic scoring system, the holistic one takes the whole written text into consideration. This scoring system generally emphasizes what is done well and what is deficient (Brown and Hudson 2002 ; White 1985 ). The analytic scoring system (multi-trait rubrics), however, includes discrete components (Bacha 2001 ; Becker 2011 ; Brown and Abeywickrama 2010 ; Coombe et al. 2012 ; Hamp-Lyons 2007 ; Knoch 2009 ; Kuo 2007 ; Shaw and Weir 2007 ). To Weigle ( 2002 ), accuracy, cohesion, content, organization, register, and appropriacy of language conventions are the key components or traits of an analytic scoring system. One of the early analytic scoring rubrics for writing was employed in the ESL Composition by Jacobs et al. 1981 , which included five components, namely language development, organization, vocabulary, language use, and mechanics).

Each scoring system has its own merits and limitations. One of the advantages of analytic scoring is its distinctive reliability in scoring (Brown et al. 2004 ; Zhang et al. 2008 ). Some researchers (e.g. Johnson et al. 2000 ; McMillan 2001 ; Ward and McCotter 2004 ) contend that analytic scoring provides the maximum opportunity for reliability between raters and ratings since raters can use one scoring criteria for different writing tasks at a time. Yet, Myford and Wolfe ( 2003 ) considered the halo effect as one of the major disadvantages of analytic rubrics. The most commonly recognized merit of holistic scoring is its feasibility as it requires less time. However, it does not encompass different criteria, affecting its validity in comparison to analytic scoring, as it entails the personal reflection of raters (Elder et al. 2007 ; Elder et al. 2005 ; Noonan and Sulsky 2001 ; Roch and O’Sullivan 2003 ). Cohen ( 1994 ) stated that the major demerit of the holistic scoring system is its relative weakness in providing enough diagnostic information about learners’ writing.

Many research studies have been conducted to examine the effect of analytic and holistic scoring systems on writing performance. For instance, more than half a century ago, Diederich et al. ( 1961 ) carried out a study on the holistic scoring system in a large-scale testing context. Three-hundred essays were rated by 53 raters, and the results showed variation in ratings based on three criteria, namely ideas, organization, and language. About two score years later, Borman ( 1979 ) conducted a similar study on 800 written tasks and found that the variations can be attributed to ideas, organizations, and supporting details. Charney ( 1984 ) did a comparison study between analytic and holistic rubrics in assessing writing performance in terms of validity and found a holistic scoring system to be more valid. Bauer ( 1981 ) compared the cost-effectiveness of analytic and holistic rubrics in assessing essay tasks and found the time needed to train raters to be able to employ analytic rubrics was about two times more than the required time to train raters to use the holistic one. Moreover, the time needed to grade the essays using analytic rubrics was four times the time needed to grade essays using holistic rubrics. Some studies reported findings that corroborated that holistic scoring can be the preferred scoring system in large-scale testing context (Bell et al. 2009 ). Chi ( 2001 ) compared analytic and holistic rubrics in terms of their appropriacy, the agreement of the learners’ scores, and the consistency of rater. The findings revealed that raters who used the holistic scoring system outperformed those employing analytic scoring in terms of inter-rater and intra-rater reliability. Thus, there is research to suggest the superiority of analytic rubrics in assessing writing performance in terms of reliability and accuracy in scoring (Birky 2012 ; Brown and Hudson 2002 ; Diab and Balaa 2011 ; Kondo-Brown 2002 ). It is, generally speaking, difficult to decide which one is the best, and the research findings so far can best be described as inconclusive.

Rubrics of internationally recognized tests used in assessing essays have many similar components, including organization and coherence, task achievement, range of vocabulary used, grammatical accuracy, and types of errors. The wording used, however, is usually different in different rubrics, for instance, “task achievement” that is used in the IELTS rubrics is represented as the “realization of tasks” in CPE and CAE, “content coverage” in GRE, and “task accomplishment” in TOEFL iBT. Similarly, it can be argued that the point of focus of the rubrics for different tests may not be the same. Punctuation, spelling, and target readers’ satisfaction, for example, are explicitly emphasized in CAE and CPE while none of them are mentioned in GRE and TOEFL iBT. Instead, idiomaticity and exemplifications are listed in the TOEFL iBT rubrics, and using enough supporting ideas to address the topic and task is the focus of GRE rating scales (Brindley 1998 ; Hamp-Lyons and Kroll 1997 ; White 1984 ).

Broadly speaking, the rubrics employed in assessing L2 writing include the above-mentioned components but as mentioned previously, they are commonly expressed in different wordings. For example, the criteria used in IELTS Task 2 rating scale are task achievement, coherence and cohesion, lexical resources, and grammatical range and accuracy. These criteria are the ones based on which candidates’ work is assessed and scored. Each of these criteria has its own descriptors, which determine the performance expected to secure a certain score on that criterion. The summative outcome, along with the standards, determines if the candidate has attained the required qualification which is established based on the criteria. The summative outcome of IELTS Task 2 rating scale will be between 0 and 9. Similar components are used in other standard exams like CAE and CPE, their summative outcomes being determined from 1 to 5. Their criteria are used to assess content (relevance and completeness), language (vocabulary, grammar, punctuation, and spelling), organization (logic, coherence, variety of expressions and sentences, and proper use of linking words and phrases), and finally communicative achievement (register, tone, clarity, and interest). CAE and CPE have their particular descriptors which demonstrate the achievement of each learners’ standard for each criterion (Betsis et al. 2012 ; Capel and Sharp 2013 ; Dass 2014 ; Obee 2005 ). Similar to the other rubrics, the GRE scoring scale has the main components like the other essay writing scales but in different wordings. In the GRE, the standards and summative outcomes are reported from 0–6, denoting fundamentally deficient, seriously flawed, limited, adequate, strong, and outstanding, respectively. Like the GRE, the TOEFL iBT is scored from 0–5. Akin to the GRE, Independent Writing Rubrics for the TOEFL iBT delineates the descriptors clearly and precisely (Erdosy 2004 ; Gass et al. 2011 ).

Abundant research studies have been carried out to show that idea and content, organization, cohesion and coherence, vocabulary and grammar, and language and mechanics are the main components of essay rubrics (Jacobs et al. 1981 ; Schoonen 2005 ). What has been considered a missing element in the analytic rating scale is the raters’ knowledge of, and familiarity with, rubrics and their corresponding elements as one of the key yardsticks in measuring L2 writing ability (Arter et al. 1994 ; Sasaki and Hirose 1999 ; Weir 1990 ). Raters play a crucial role in assessing writing. There is research to allude to the impact of raters’ judgments on L2 writing assessment (Connor-Linton 1995 ; Sasaki 2000 ; Schoonen 2005 ; Shi 2001 ).

The past few decades have witnessed an increasing growth in research on different scoring systems and raters’ critical role in assessment. There are some recent studies discussing the importance of rubrics in L2 writing assessment (e.g. Deygers et al. 2018 ; Fleckenstein et al. 2018 ; Rupp et al. 2019 ; Trace et al. 2016 ; Wesolowski et al. 2017 ; Wind et al. 2018 ). They commonly consider rubrics as significant tools for measuring L2 learners’ performances and suggest that rubrics enhance the reliability and validity of writing assessment. More importantly, they argue that employing rubrics can increase the consistency among raters.

Shi ( 2001 ) made comparisons between native and non-native, as well as between experienced and novice raters, and found that raters have their own criteria to assess an essay, virtually regardless of whether they are native or non-native and experienced or novice. Lumley ( 2002 ) and Schoonen ( 2005 ) conducted comparison studies between two groups of raters, one group trained expert raters provided with no standard rubrics, the other group novice raters with no training who had standard rubrics. The trained raters with no rubrics outperformed the other group in terms of accuracy in assessing the essays, implying the importance of raters. Rezaei and Lovorn ( 2010 ) compared the use of rubrics between summative and formative assessment. They argued that using rubrics in summative assessment is predominant and that it overshadows the formative aspects of rubrics. Their results showed that rubrics can be more beneficial when used for formative assessment purposes.

Izadpanah et al. ( 2014 ) conducted a study drawing on Jacobs et al. ( 1981 ) to see if the rubrics of one exam can be the predictor of another one. Practically, they wanted to examine whether the same score would be obtained if a rubric for an IELTS exam was used for assessing CPE or any other standard test. Their findings revealed that the rubrics were comparable with each other in terms of their different components by which different standard essays are assessed. Bachman ( 2000 ) compared TOEFL PBT and CPE and found a very meaningful relationship between the scores gained from essay writing tests. He also concluded that scoring CPE was usually more difficult than PBT, and that under similar conditions, exams from UCLES/Cambridge Assessment (like CPE) received lower scores in comparison to the ones from ETS (like PBT). In Fleckenstein et al. ( 2019 ) experts from different countries linked upper secondary students’ writing profiles elicited in a constructed response test (integrated and independent essays from the TOEFL iBT) to CEFR level. The Delphi technic was used to find out the intra- and inter-panelist consistency while scoring students’ writing profiles. The findings showed that panelists are able to provide ratings consistent with the empirical item difficulties and the validity of the estimate of the cut scores.

Schoonen ( 2005 ) and Attali and Burstein ( 2005 ) compared the generalizability of writing scores to different essays using only one set of the rubric. They checked and analyzed three components of writing rubric, including content, language use, and organization and found that the obtained scores from different essays are similar. Wind ( 2020 ) conducted a study to illustrate and explore methods for evaluating the degree to which raters apply a common rating scale consistently in analytic writing assessments. The results indicated a lack of invariance in rating scale category functioning across domains for several raters. Becker ( 2011 ) also examined different rubrics used to measure writing performance. He investigated the three different types of rubrics, namely holistic, analytic, and primary-trait scoring systems, to find which one is more appropriate for assessing L2 writing. He studied the merits and demerits of the three rubrics and concluded that none of them had superiority over the others, making each legitimate for assessing a piece of writing depending on the purpose of writing, the time allocated for assessment, and the raters’ expertise.

In a recent study, Ghaffar et al. ( 2020 ) examined the impact of rubrics and co-constructed rubrics on middle school students’ writing skill performance. The findings of their study indicated that co-constructed rubrics as assessment tools help students to outperform in their writing due to their familiarity with these types of rubrics. In addition, there are researchers who are of the contention that the use of rubrics is inconclusive and can be controversial especially when they are just used for summative assessment purposes and that when rubrics are used for both summative and formative assessment, they are more advantageous (Andrade 2000 ; Broad 2003 ; Ene and Kosobucki 2016 ; Inoue 2004 ; Panadero and Jonsson 2013 ; Schirmer and Bailey 2000 ; Wilson 2006 , 2017 ).

What all of these studies indicated is that employing well-developed rubrics increase equality and fairness in writing assessment. It is also suggested that various factors could affect writing assessment, especially raters’ expertise and time allocated to the rating (Bacha 2001 ; Ghalib and Hattami 2015 ; Knoch 2009 , 2011 ; Lu and Zhang 2013 ; Melendy 2008 ; Nunn 2000 ; Nunn and Adamson 2007 ). The purpose of the present study is twofold. First, it attempts to investigate the consistency among different standard rubrics in writing assessment. Second, it tries to examine whether any of these rubrics could be used as a predictor of others and if they all tap the same underlying construct.

To meet the objectives of the study, 200 samples of Academic IELTS Task 2 (i.e., essay writing) were used. The samples were randomly selected from more than 800 essays written as part of academic IELTS tests taken between 2015 and 2016 at an official IELTS test center, a representative of IDP Australia. The essays were written in response to different prompts. The instructions for IELTS Writing Task 2 require test takers to write at least 250 words, a condition that 21 samples did not meet. Test takers were 19 to 42 years of age; 120 were female and 80 were male.

One of the raters in this study was an (anonymous) official IELTS examiner who had scored the essays officially; the other raters were four experienced IELTS instructors from an English department of a nationally prominent language institute, three males and one female, between 26 and 39 years of age, with 5 to 12 years of English language teaching experience. These four raters were selected based on their qualifications, teaching credentials and certifications, and years of teaching experience, particularly in IELTS classes. All the four raters were M.A. holders in TEFL and had been teaching different writing courses at universities and language institutes and were familiar with different scoring systems and their relevant components. Each rater was invited to an individual briefing session with one of the researchers to ensure their familiarity with the rubrics of interest and discuss some practical considerations pertaining to this study. They were asked to read and score each essay four times, each time based on one of the four rubrics (TOELF iBT, GRE, CPE, and CAE). The raters completed the scorings in 12 weeks during which time they were instructed not to share ideas about the task (the costs of scorings were modestly met).

Instrumentation

Four sets of rubrics for different writing tests (i.e., Independent TOEFL iBT, GRE, CPE, & CAE) were taken from ETS and Cambridge English Language Assessment. The official IELTS scores of the 200 essays were collected from the IELTS center. The rubrics employed for assessing and evaluating the writing tasks of these five standard exams were analytic rubrics with different scales, namely a nine-point scale for assessing IELTS Task 2, five-point scales for GRE and TOEFL iBT, and six-point scales for CAE and CPE writing tasks. They assess the main components of essay writing construct, including the range of vocabulary and grammar used in addressing the task, cohesion and organization, and range of using cohesive devices, which were presented in different wordings in these rubrics.

Another instrument was a questionnaire designed by the researchers, which included both open-ended and closed-ended questions (see Appendix). The aim was to determine the raters’ attitudes toward their rating experience and their familiarity with each exam and its corresponding rubrics. The themes of the questionnaire items were determined based on a review of the literature on the important issues and factors affecting raters’ performances and attitudes (Brown and Abeywickrama 2010 ; Coombe et al. 2012 ; Fulcher and Davidson 2007 ; Weigle 2002 ). In addition, an interview was carried out with the four raters to find out about their interest in rating and to investigate their familiarity with the exams and their corresponding rating scales.

To carry out the study, 200 essay samples were scored once by a certified IELTS examiner. The assigned scores together with the IELTS examiner’s relevant comments were written next to each essay sample. Afterward, all essays were rated by the four other raters, who were kept uninformed of the official IELTS scores. They were provided with the rubrics of the four essay writing tests and were instructed to assess each essay with the four given rubrics. By so doing, in addition to the official IELTS scores, four other scores were given to each essay from each rater; that is to say, each essay received 16 scores plus the official IELTS score. Therefore, all in all, the researchers collected 17 scores for each essay. The researcher-made questionnaire was carried out, and then an interview was conducted whereby the 4 raters were asked about their interest in rating and also their awareness and concerns about each exam and their relevant rubrics.

To do the analysis of the data, the SPSS program, version 22, was employed. Initially, the descriptive statistics of the data were computed, and intercorrelations among the 17 scores were calculated to see if any statistically significant association could be found among the rubrics. To have a better picture of the existing association among the scoring rubrics of the different exams, PCA as a variant of factor analysis was run to examine the extent the rubrics tap the same underlying construct.

To address the first research question, intercorrelations were computed among the IELTS, CAE, CPE, TOEFL iBT, and GRE scores. To answer the second research question, factor analysis was run to examine the extent the standard essay writings in these five tests of English language proficiency tap the same underlying construct. In this section, the results of the intercorrelations and factor analyses computations are reported in detail.

Intercorrelations among ratings

To estimate the intercorrelations among test ratings and raters, alpha was first calculated for the five sets of scores together (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE). Alpha was initially calculated for each rater separately to check the internal consistency of that rater's own ratings. Then, alpha was computed for all the raters together to estimate inter-rater reliability. The intercorrelations were afterward computed between each exam score and the IELTS scores to see which scores correlated most strongly with the IELTS.
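The paper reports these computations from SPSS 22. As an independent illustration only (not the authors' code, and using a hypothetical randomly generated score table with one row per essay and one column per score set), the same quantities could be sketched in Python:

import numpy as np
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of ratings (rows = essays, columns = score sets)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 17-column layout: the official IELTS score plus 4 rubrics x 4 raters.
rng = np.random.default_rng(0)
cols = ["IELTS"] + [f"{test}{r}" for r in range(1, 5)
                    for test in ("CAE", "CPE", "iBT", "GRE")]
scores = pd.DataFrame(rng.integers(4, 10, size=(200, len(cols))), columns=cols)

rater1 = scores[["IELTS", "CAE1", "CPE1", "iBT1", "GRE1"]]
print(cronbach_alpha(rater1))   # alpha across the five score sets involving rater 1
print(cronbach_alpha(scores))   # alpha for all ratings together
print(scores.corr()["IELTS"])   # intercorrelation of each rating with the IELTS scores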

Table 1 presents the alphas as the average of intercorrelations among the five sets of scores including the IELTS scores, and the four scores given by the raters. Evidently, rater 1 has an alpha of about .67, which is lower than the other alphas. However, because there were only five sets of scores correlated in each alpha, this low value of alpha could still be considered acceptable. Nevertheless, this lower value of alpha in comparison to the other alphas could be meaningful since, after all, this rater showed less internal consistency among his ratings.

To see which test ratings given by the four raters agreed the least with the IELTS scores, the intercorrelation of each test rating with the IELTS scores was computed, as shown in Table 2 . As these intercorrelations demonstrate, Rater 1’s CPE rating and Rater 4’s TOEFL iBT rating show lower correlations with the IELTS ratings. Afterward, an alpha was computed for an aggregate of the ratings of all the raters including the IELTS scores.

Table 3 shows an alpha of around .86, which could be considered acceptable with regard to the small number of ratings.

To see which rating had a negative effect on the total alpha, the item-total correlation for each test rating was computed. The item-total correlation shows the extent to which each test rating agrees with the total of the other test ratings, including the IELTS scores. As shown in Table 4 , CPE1 and iBT4 had the lowest correlations with the total ratings. The table also indicates that removing these scores would have increased the total alpha considerably.
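Continuing the illustrative Python sketch above (still on the hypothetical score table, not the study data), the item-total correlation and the "alpha if deleted" value for each rating could be computed as follows:

# For each test rating: correlation with the total of the other ratings,
# and the overall alpha that would result if that rating were removed.
for col in scores.columns:
    others = scores.drop(columns=col)
    item_total_r = scores[col].corr(others.sum(axis=1))
    alpha_if_deleted = cronbach_alpha(others)
    print(f"{col}: item-total r = {item_total_r:.2f}, "
          f"alpha if deleted = {alpha_if_deleted:.2f}")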

These results, as expected, confirmed the results found in each rater’s alpha and inter-test correlations computed in the previous section.

Factor analysis

This study was carried out having hypothesized that the construct of essay writing is similar across different standardized tests (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE), and a given essay is expected to be scored similarly by the rubrics and scales of these different exams. To see whether this was the case, the ratings of these exams were examined. The correlation analyses reported above showed that there is an acceptable agreement among all test ratings except two of them, CPE and TOEFL iBT. That is, rater 1 in CPE and Rater 4 in TOEFL iBT showed the least correlation among other test ratings (.15 and .13, respectively). To have a better picture of this issue, it was decided to run a PCA to examine the extent these exams tap the same underlying construct. Factor analysis provides some factor loadings for each test item (i.e., test rating); if two or more items load on the same factor, it will show that these items (i.e., test ratings) tap the same construct (i.e., essay writing construct).

Table 5 presents the results of the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity on the sampling adequacy for the analysis. The reported KMO is .83, which is larger than the acceptable value (KMO > .5) according to Field ( 2009 ). Bartlett’s test of sphericity [χ²(136) = 1377.12, p < .001] was also found significant, indicating large enough correlations among the items for PCA; therefore, this sample could be considered adequate for running the PCA.
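For illustration only (not the authors' procedure), the same adequacy checks could be sketched in Python with the third-party factor_analyzer package, continuing with the hypothetical score table from the earlier sketch:

# Sampling-adequacy checks with the factor_analyzer package (assumed installed).
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

chi_square, p_value = calculate_bartlett_sphericity(scores)
kmo_per_item, kmo_total = calculate_kmo(scores)

print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_total:.2f}")  # values above .5 are conventionally acceptable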

The next step was to investigate the number of factors to be retained in the PCA. To do so, the scree plot was checked (Fig. 1 ). The first point to identify in the scree plot is the point of inflexion, that is, where the slope of the line changes dramatically. Only those factors which fall to the left of the point of inflexion should be retained. Based on Fig. 1 , the point of inflexion appears to be at the fourth factor; therefore, four factors were retained.

Figure 1. Scree plot

According to Table 6 , the first four retained factors explain around 60 percent of the whole variance, which is quite considerable.

Table 7 presents the four factor loadings after varimax rotation. Obviously, the different test ratings were loaded on 4 factors. In other words, those test ratings that clustered around the same factor seemed to be loading on the same underlying factor or latent variable.

Following the above analysis, it was decided to examine the factor loadings further. The factor structure above was achieved by considering only loadings above .4, as suggested by Stevens ( 2002 ), which explained around 16 percent of the variance in the variable. This criterion was strict, though, resulting in a limited number of factors. Therefore, a second factor analysis was run with a more lenient absolute loading cut-off of .3, as suggested by Field ( 2009 ). By so doing, more factor loadings emerged and more information was obtained. The factor loadings above .3 are presented in Table 8 , which revealed almost the same factor structure as the previous factor analysis with absolute values greater than .4; however, one important finding was that the IELTS ratings this time showed loadings on all the factors on which the other tests also loaded. It can be construed, therefore, that the other tests had significant potential to tap the same construct.
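A purely illustrative Python sketch of a rotated four-factor solution and the two loading cut-offs discussed above (again using the factor_analyzer package and the hypothetical score table, not the authors' data):

# Four-factor solution with varimax rotation; loadings filtered at the two cut-offs.
import pandas as pd
from factor_analyzer import FactorAnalyzer

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(scores)

loadings = pd.DataFrame(fa.loadings_, index=scores.columns,
                        columns=[f"Factor {i + 1}" for i in range(4)])
print(loadings.where(loadings.abs() > 0.4))  # stricter cut-off (Stevens 2002)
print(loadings.where(loadings.abs() > 0.3))  # more lenient cut-off (Field 2009)

eigenvalues, _ = fa.get_eigenvalues()        # eigenvalues underlying a scree plot
print(eigenvalues)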

After estimating reliability using Cronbach’s alpha and then by running a Confirmatory Factor Analysis, it was decided to omit Rater 1 due to his unfamiliarity with the exam and its corresponding rubrics reported by him in the questionnaire.

Table 9 and Fig. 2 (scree plot) demonstrate the factor structure after removing Rater 1. The scree plot shows that 4 factors should be retained in the analysis, and Table 9 indicates that the first four retained factors explain about 70 percent of the whole variance, which was quite satisfactory.

Figure 2. Scree plot (Rater 1 removed)

Finally, Table 10 shows that after removing Rater 1’s data, all the ratings of Raters 3 and 4 have loaded on the same factors with the IELTS. Of course, like the previous factor analysis, the IELTS ratings again showed loadings on all the factors on which other tests loaded except iBT4. All in all, it could be concluded that the results from the factor analysis confirm the previous findings from alpha computations showing iBT4 ratings had the lowest correlations with the total ratings.

Discussion and conclusions

The purpose of the present study is to examine the consistency of the rubrics endorsed for assessing the writing tasks of internationally recognized tests of English language proficiency. Standard rubrics can be considered a constructive tool helping raters to assess different types of essays (Busching, 1998 ). Using rubrics enhances the reliability of the assessment of essays provided that these rubrics are well described and that they tap the same construct (Jonsson & Svingby, 2007 ). The current study is an attempt to examine the reliability among different essay writing rubrics with regard to their major components, namely organization, coherence and cohesion, range of lexical and grammatical complexity, and accuracy.

The results of this study show that all in all, there is a high correlation among raters (i.e., the IELTS examiner and the four other raters) and rating scores (i.e., the official IELTS scores and the other 16 test ratings received from the four raters). The intercorrelations among test ratings and the raters as well as the computation of inter-item correlations between each test rating and the IELTS scores revealed that CPE1 and iBT4 had the least agreement with the official IELTS ratings. Therefore, these low correlations were investigated in a follow-up study by giving the four raters a questionnaire including both open-ended and closed-ended questions. The raters’ responses to the questionnaire denoted the extent to which they were familiar with each exam and their corresponding rubrics.

The responses of two of the raters, that is, Rater 1 for CPE and Rater 4 for TOEFL iBT, proved to be illuminating in explaining their performance. Rater 1’s responses to the questionnaire showed that he had no teaching experience for CPE classes. However, his responses to other questions of the questionnaire indicated his familiarity with this exam and its essay scoring rubrics. The responses of Rater 4 revealed that she had no teaching experience for TOEFL iBT and no familiarity with the exam and its corresponding rating scales. The outcome of the interview with Rater 4 suggests that using well-trained raters leads to fewer problems in rating. What Rater 4 stated in her responses to the questionnaire and interview was in line with the findings of Sasaki and Hirose ( 1999 ), who concluded that familiarity with different tests and their relevant rubrics leads to better scoring. Additionally, the results of the present study are consistent with what Schoonen ( 2005 ), Attali and Burstein ( 2005 ), Wind et al. ( 2018 ), Deygers et al. ( 2018 ), Wesolowski et al. ( 2017 ), Trace et al. ( 2016 ), Fleckenstein et al. ( 2018 ), and Rupp et al. ( 2019 ) found in their studies, that is, that employing rubrics enhances the reliability of writing assessment as well as the consistency among raters.

To this point, the obtained results provide an affirmative answer to the first question of the study, indicating a very high agreement among test ratings and raters. Also, in order to examine whether the construct of essay writing is similar across different standardized tests and identical essays are scored similarly by the internationally recognized rubrics of these different exams, an inter-item correlation analysis was computed, which indicated that CPE1 and iBT4 had the lowest correlations with the total ratings. This could be due either to the raters’ inconsistencies or to the hypothesis that essay writing is conceptualized differently in the scoring rubrics of these exams. The follow-up survey also corroborated that the disagreement between Raters 1 and 4 and the other raters was due either to rater discrepancies or to the way each writing task was conceptualized differently according to the rubrics of each exam. This is supported by Weigle ( 2002 ), who concluded that raters should have a good grasp of scoring and its essential details. She also argued that raters should have a sharp conceptualization of the construct of essay writing.

The results from the rotated component matrix revealed that all the ratings of Raters 3 and 4 loaded on the same factor, meaning that they tap the same construct. Examining the other factor loadings revealed that CAE1, iBT1, CAE2, and iBT2 also loaded on the same factor as the IELTS, suggesting that these raters’ conceptualizations of the construct of essay writing in CAE and TOEFL iBT were more similar to that of the IELTS raters than to those of the CPE and GRE scorers. However, what remained questionable was why CPE1 and GRE1 did not load on the same factor as CAE1 and iBT1, and why CPE1 and GRE1 loaded on the same factor as CPE2 and GRE2. Additionally, why CPE1 also loaded with GRE2 and CPE2 on the same factor remained open to discussion.

The findings above were the results of the PCA considering factor loadings above .4, based on Stevens ( 2002 ). As this value was strict and the number of obtained factors was limited, it was decided to apply a less rigorous loading cut-off of .3, based on Field’s ( 2009 ) suggestion. The findings showed almost the same factor loadings as the previous factor analysis. Again, Raters 3 and 4 loaded on the same factor, but this time the IELTS scores loaded on the same factor as CAE2 and iBT2. CAE1, GRE1, and iBT1 loaded on the same factor, and what was still debatable was why CPE1 loaded with GRE2 and CPE2.

Up to this mentioned point, all the results obtained from alpha computation and factor analysis indicated something different in Rater 1, based on which it was decided to omit Rater 1 from the PCA. It is interesting to note that after interviewing all the four raters and scrutinizing the questionnaire survey, it was found that Rater 1, in his responses to the questionnaire, had indicated that he had no teaching experience in teaching CPE classes, and yet he claimed that he was familiar with this exam and its related rating scales, contrary to other raters’ responses to the questionnaire.

After omitting Rater 1 from the PCA, the findings showed that Rater 3’s and Rater 4’s test ratings loaded on the same factor, and this time the IELTS loaded on the factors on which all the other tests had loaded except iBT4, meaning that Rater 4 had no agreement with the IELTS raters in rating the essays. The questionnaire survey indicated that Rater 4 had no teaching experience for this particular exam. She also had no familiarity with the exam and its corresponding rubrics. This rater also believed that scoring exams like TOEFL iBT and the other exams developed by ETS was more difficult, and that they generally received lower scores in comparison to the Cambridge English Language Assessment exams. What Rater 4 stated was not in line with the findings of Bachman ( 2000 ), who compared the TOEFL PBT and CPE essay tasks and concluded that scoring the CPE is more difficult than scoring the TOEFL PBT. Contrary to the findings of the present study, he also concluded that exams like the CPE received lower scores.

The results from alpha computation and factor analysis showed the noticeable role of raters in assessing writing. The results from this study are in line with the findings of Lumley ( 2002 ) and Schoonen ( 2005 ) who argue that raters need to be considered one of the most remarkable concerns in the process of assessment. Shi ( 2001 ) argued in favor of the significant role of raters in assessing essays using their own criteria in addition to the standard and determined rating scales. Likewise, the outcome of factor analysis in this research study revealed that raters play a remarkable role in assessing essays by showing that all the items (i.e., test ratings) load on the same factor, especially when all the essay writings were rated by the same rater.

This study examined the consistency and reliability among the standard rubrics and rating scales used for assessing writing in internationally recognized tests of English language proficiency. The alpha estimates provide evidence of a strong association among the raters and test ratings, and the PCA indicates that these test ratings tap the same underlying construct. The findings support practical rater-training courses that give raters authentic opportunities to become familiar with different rubrics, and they call for further investigation into how raters themselves affect ratings and how employing trained, certified raters changes the rating process. Test administrators and developers also stand to benefit: if all the test ratings tap the same underlying construct and different essay-writing rating scales can predict one another, it becomes practical to set standard essay-writing rubrics for rating and assessing writing. As the findings also suggest, the developers of these tests' writing rubrics may take on board the implication that certain constructs within writing weigh more heavily when assessed across standardized measures. Finally, teachers and learners benefit as well: they might spend less time parsing rubrics whose descriptors are worded differently and more time practicing writing and essay tasks.

The study examined the reliability of the analytic rubrics used to assess the essay component of five standardized examinations: IELTS, TOEFL iBT, CAE, CPE, and GRE. While the first four are English language proficiency examinations designed to assess the language skills of English as a Second Language (ESL) learners, the GRE is intended for applicants to graduate programs in the U.S., regardless of first-language background. GRE candidates hold at least a bachelor's degree; most are native speakers of English educated in English, while a minority are international applicants to U.S. master's and Ph.D. programs from various language backgrounds. The GRE writing task, in other words, is not aimed at L2 English learners. Juxtaposing the GRE writing requirements, which center on argumentation and critical thinking, with the English language proficiency standards measured by the other four tests may therefore limit the generalizability of the results with respect to this exam, given its divergent assessment purpose and intended candidate profile. Future researchers are encouraged to take heed of this limitation.

Availability of data and materials

The authors were provided with the data for research purposes. Sharing the data with a third party requires obtaining consent from the organization which provided the data. The materials are available in the article.

References

Aish, F., & Tomlinson, J. (2012). Get ready for IELTS writing. London: HarperCollins.

Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation . Cambridge: Cambridge University Press.

Anderson, B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., Galbraith, R., & Roberts, T. (2011). Criteria for assessment: consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher, 33(3), 206–214.

Anderson, C. (2005). Assessing writers . Portsmouth: Heinemann.

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership , 57 (5), 13–18.

Archibald, A. (2001). Targeting L2 writing proficiencies: Instruction and areas of change in students’ writing over time. International Journal of English Studies , 1 (2), 153–174.

Archibald, A. (2004). Writing in a second language. In The higher education academy subject centre for languages, linguistics and area studies Retrieved from http://www.llas.ac.uk/resources/gpg/2175 .

Arter, J. A., Spandel, V., Culham, R., & Pollard, J. (1994). The impact of training students to be self-assessors of writing . New Orleans: Paper presented at the Annual Meeting of the American Educational Research Association.

Aryadoust, V., & Riazi, A. M. (2016). Role of assessment in second language writing research and pedagogy. Educational Psychology , 37 (1), 1–7.

Attali, Y., & Burstein, J. (2005). Automated essay scoring with e-rater.V.2.0. (RR- 04-45) . Princeton: ETS.

Attali, Y., Lewis, W., & Steier, M. (2012). Scoring with the computer: alternative procedures for improving the reliability of holistic essay scoring. Language Testing , 30 (1), 125–141.

Bacha, N. (2001). Writing evaluation: what can analytic versus holistic essay scoring tell? System , 29 (3), 371–383.

Bachman, L., & Palmer, A. S. (2010). Language assessment in practice: developing language assessments and justifying their use in the real world . Oxford: Oxford University Press.

Bachman, L. F. (2000). Modern language testing at turn of the century: assuring that what we count counts. Language Testing , 17 (1), 1–42.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: designing and developing useful language tests . Oxford: Oxford University Press.

Bauer, B. A. (1981). A study of the reliabilities and the cost-efficiencies of three methods of assessment for writing ability . Champaign: University of Illinois.

Becker, A. (2011). Examining rubrics used to measure writing performance in U.S. intensive English programs. The CATESOL Journal , 22 (1), 113–117.

Bell, R. M., Comfort, K., Klein, S. P., McCaffrey, D., Ormseth, T., Othman, A. R., & Stecher, B. M. (2009). Analytic versus holistic scoring of science performance tasks. Applied Measurement in Education, 11(2), 121–137.

Betsis, A., Haughton, L., & Mamas, L. (2012). Succeed in the new Cambridge proficiency (CPE)- student’s book with 8 practice tests . Brighton: GlobalELT.

Biber, D., Byrd, M., Clark, V., Conrad, S. M., Cortes, E., Helt, V., & Urzua, A. (2004). Representing language use in the university: analysis of the TOEFL 2000 spoken and written academic language corpus. In ETS research report series (RM-04-3, TOEFL Report MS-25) . Princeton: ETS.

Biggs, J., & Tang, C. (2007). Teaching for quality learning at university . Maidenhead: McGraw Hill.

Birky, B. (2012). A good solution for assessment strategies. A Journal for Physical and Sport Educators , 25 (7), 19–21.

Borman, W. C. (1979). Format and training effects on rating accuracy and rater errors. Journal of Applied Psychology , 64 (4), 410–421.

Bridgeman, B., & Carlson, S. (1983). Survey of academic writing tasks required of graduate and undergraduate foreign students. In ETS Research Report Series (RR- 83-18, TOELF- RR-15) . Princeton: ETS.

Brindley, G. (1998). Describing language development? Rating scales and SLA. In L. F. Bachman, & A. D. Cohen (Eds.), Interfaces between second language acquisition and language testing research , (pp. 112–140). Cambridge: Cambridge University Press.

Broad, B. (2003). What we really value: beyond rubrics in teaching and assessing writing . Logan: Utah State UP.

Broer, M., Lee, Y. W., Powers, D. E., & Rizavi, S. (2005). Ensuring the fairness of GRE writing prompts: Assessing differential difficulty. In ETS research report series (GREB Report No. 02-07R, RR-05-11) .

Brookhart, G., & Haines, S. (2009). Complete CAE student’s book with answers . Cambridge: Cambridge University Press.

Brookhart, S. M. (1999). The art and science of classroom assessment: the missing part of pedagogy. ASHE-ERIC Higher Education Report , 27 (1), 1–128.

Brossell, G. (1986). Current research and unanswered questions in writing assessment. In K. Greenberg, H. Wiener, & R. Donovan (Eds.), Writing assessment: issues and strategies , (pp. 168–182). New York: Longman.

Brown, A., & Jaquith, P. (2007). Online rater training: perceptions and performance . Dubai: Paper presented at Current Trends in English Language Testing Conference (CTELT).

Brown, G. T. L., Glasswell, K., & Harland, D. (2004). Accuracy in the scoring of writing: studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing , 9 (2), 105–121.

Brown, H. D., & Abeywickrama, P. (2010). Language assessment: Principles and classroom practice . Lewiston: Pearson Longman.

Brown, J. (2002). Training needs assessment: a must for developing an effective training program. Sage Journal , 31 (4), 569–578 https://doi.org/10.1177/009102600203100412 .

Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge applied linguistics series . Cambridge: Cambridge University Press.

Busching, B. (1998). Grading inquiry projects. New Directions for Teaching and Learning , ( 74 ), 89–96.

Canseco, G., & Byrd, P. (1989). Writing required in graduate courses in business administration. TESOL Quarterly , 23 (2), 305–316.

Capel, A., & Sharp, W. (2013). Cambridge english objective proficiency , (2nd ed., ). Cambridge: Cambridge University Press.

Charney, D. (1984). The validity of using holistic scoring to evaluate writing. Research in the Teaching of English , 18 (1), 65–81.

Chenoweth, N. A., & Hayes, J. R. (2001). Fluency in writing: Generating text in L1 and L2. Written Communication , 18 (1), 80–98 https://doi.org/10.1177/0741088301018001004 .

Chi, E. (2001). Comparing holistic and analytic scoring for performance assessment with many facet models. Journal of Applied Measurement , 2 (4), 379–388.

Cohen, A. D. (1994). Assessing language ability in the classroom . Boston: Heinle & Heinle.

Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing , 18 , 100–108.

Connor-Linton, J. (1995). Crosscultural comparison of writing standards: American ESL and Japanese EFL. World English , 14 (1), 99–115.

Coombe, C., Davidson, P., O’Sullivan, B., & Stoynoff, S. (2012). The Cambridge guide to second language assessment . New York: Cambridge University Press.

Corry, H. (1999). Advanced writing with English in use: CAE . Oxford: Oxford University Press.

Crusan, D. (2015). And then a miracle occurs: the use of computers to assess student writing. International Journal of TESOL and Learning , 4 (1), 20–33.

Cumming, A. (2001). Learning to write in a second language: two decades of research. International Journal of English Studies , 1 (2), 1–23.

Cumming, A. (2013). Assessing integrated writing tasks for academic purposes: promises and perils. Language Assessment Quarterly , 10 (1), 1–8.

Cumming, A. H., Kantor, R., Powers, D., Santos, T., & Taylor, C. (2000). TOEFL 2000 writing framework: A working paper , ETS Research Report Series (RM-00-5; TOEFL-MS-18) . Princeton: ETS.

Dass, B. (2014). Adult & continuing professional education practices: CPE among professional providers . Singapore: Partridge Singapore.

Deygers, B., Zeidler, B., Vilcu, D., & Carlsen, C. H. (2018). One framework to unite them all? Use of CEFR in European university entrance policies. Language Assessment Quarterly , 15 (1), 3–15 https://doi.org/10.1080/15434303.2016.1261350 .

Diab, R., & Balaa, L. (2011). Developing detailed rubrics for assessing critique writing: impact on EFL university students’ performance and attitudes. TESOL Journal , 2 (1), 52–72.

Diederich, P. B., French, J. W., & Carlton, S. T. (1961). Factors in judgments of writing ability (Research Bulletin No. RB-61-15) . Princeton: Educational Testing Service https://doi.org/10.1002/j.2333-8504.1961.tb00286.x .

Dixon, N. (2015). Band 9-IELTS writing task 2-real tests . Oxford: Oxford University Press.

Duckworth, M., Gude, K., & Rogers, L. (2012). Cambridge english: proficiency (CPE) masterclass: student’s book . Oxford: Oxford University Press.

Dunsmuir, S., & Clifford, V. (2003). Children’s writing and the use of ICT. Educational Psychology in Practice , 19 (3), 171–187.

East, M. (2009). Evaluating the reliability of a detailed analytic scoring rubric for foreign language writing. Assessing Writing , 14 (2), 88–115.

Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessments: a many-facet Rasch analysis. Language Assessment Quarterly , 2 (3), 197–221.

Eckes, T. (2012). Operational rater types in writing assessment: linking rater cognition to rater behavior. Language Assessment Quarterly , 9 ( 3 ), 270–292.

Elder, C., Barkhuizen, G., Knoch, U., & von Randow, J. (2007). Evaluating rater responses to an online training program for L2 writing assessment. Language Testing , 24 (1), 37–64.

Elder, C., Knoch, U., Barkhuizen, G., & von Randow, J. (2005). Individual feedback to enhance rater training: does it work? Language Assessment Quarterly , 2 (3), 175–196.

Ene, E., & Kosobucki, V. (2016). Rubrics and corrective feedback in ESL writing: a longitudinal case study of an L2 writer. Assessing Writing , 30 , 3–20 https://doi.org/10.1016/j.asw.2016.06.003 .

Erdosy, M. U. (2004). Exploring variability in judging writing ability in a second language: a study of four experienced raters of ESL composition. In ETS research report series (RR-03-17) . Ontario: ETS.

Evans, V. (2005). Entry tests CPE 2 for the revised Cambridge proficiency examination: Student’s book . New York City: Pearson Education.

Fahim, M., & Bijani, H. (2011). The effect of rater training on raters’ severity and bias in second language writing assessment. Iranian Journal of Language Testing , 1 (1), 1–16.

Faigley, L., Daly, J. A., & Witte, S. P. (1981). The role of writing apprehension in writing performance and competence. Journal of Educational Research , 75 (1), 16–21.

Field, A. P. (2009). Discovering statistics using SPSS (and sex and drugs and rock 'n' roll) (3rd ed.). London: Sage Publications.

Fleckenstein, J., Keller, S., Kruger, M., Tannenbaum, R. J., & Köller, O. (2019). Linking TOEFL iBT writing rubrics to CEFR levels: Cut scores and validity evidence from a standard setting study. Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100420 .

Fleckenstein, J., Leucht, M., & Köller, O. (2018). Teachers’ judgement accuracy concerning CEFR levels of prospective university students. Language Assessment Quarterly , 15 (1), 90–101 https://doi.org/10.1080/15434303.2017.1421956 .

Fleming, S., Golder, K., & Reeder, K. (2011). Determination of appropriate IELTS writing and speaking band scores for admission into two programs at a Canadian post-secondary polytechnic institution. The Canadian Journal of Applied Linguistics , 14 (1), 222 – 250 .

Fulcher, G. (2010). Practical language testing . London: Hodder Education.

Fulcher, G., & Davidson, F. (2007). Language testing and assessment: an advanced resource book . New York: Routledge.

Gass, S., Myford, C., & Winke, P. (2011). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing , 30 (2), 231–252.

Ghaffar, M. A., Khairallah, M., & Salloum, S. (2020). Co-constructed rubrics and assessment forlearning: The impact on middle school students’ attitudes and writing skills. Assessing Writing , 45 https://doi.org/10.1016/j.asw.2020.100468 .

Ghalib, T. K., & Hattami, A. A. (2015). Holistic versus analytic evaluation of EFL writing: a case study. English Language Teaching , 8 (7), 225–236.

Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing: an applied linguistic perspective . London: Longman.

Graham, S., Harris, K. R., & Mason, L. (2005). Improving the writing performance, knowledge, and self-efficacy of struggling young writers: the effects of self-regulated strategy development. Contemporary Educational Psychology , 30 (2), 207–241 https://doi.org/10.1016/j.cedpsych.2004.08.001 .

Gustilo, L., & Magno, C. (2015). Explaining L2 Writing performance through a chain of predictors: A SEM approach. 3 L: The Southeast Asian Journal of English Language Studies , 21 (2), 115–130.

Hamp-Lyons, L. (1990). Second language writing assessment. In B. Kroll (Ed.), Second language writing: research insights for the classroom , (pp. 69–87). California: Cambridge University Press.

Hamp-Lyons, L. (1991). Holistic writing assessment of LEP students . Washington, DC: Paper presented at Symposium on limited English proficient student.

Hamp-Lyons, L. (2007). Editorial: worrying about rating. Assessing Writing , 12 , 1–9.

Hamp-Lyons, L., & Kroll, B. (1997). TOEFL 2000 – writing: composition, community and assessment (toefl monograph series no. 5) . Princeton: Educational Testing Service.

Harman, R. (2013). Literary intertextuality in genre-based pedagogies: building lexicon cohesion in fifth-grade L2 writing. Journal of Second Language Writing , 22 (2), 125–140.

Harrison, J. (2010). Certificate of proficiency in English (CPE) test preparation course . Oxford: Oxford University Press.

Hinkel, E. (2009). The effects of essay topics on modal verb uses in L1 and L2 academic writing. Journal of Pragmatics , 41 (4), 667–683.

Holmes, P. (2006). Problematizing intercultural communication competence in the pluricultural classroom: Chinese students in New Zealand University. Journal of Language and Intercultural Communication , 6 (1), 18–34.

Hughes, A. (2003). Testing for language teachers . Cambridge: Cambridge University Press.

Huot, B. (1990). The literature of direct writing assessment: major concerns and prevailing trends. Review of Educational Research , 60 (2), 237–239.

Huot, B., Moore, C., & O’Neill, P. (2009). Creating a culture of assessment in writing programs and beyond. College Composition and Communication , 61 ( 1 ), 107–132.

Hyland, K. (2004). Disciplinary discourses: social interactions in academic writing . Michigan: University of Michigan Press.

Inoue, A. (2004). Community-based assessment pedagogy. Assessing Writing , 9 (3), 208–238 https://doi.org/10.1016/j.asw.2004.12.001 .

Izadpanah, M. A., Rakhshandehroo, F., & Mahmoudikia, M. (2014). On the consensus between holistic rating system and analytical rating system: a comparison between TOEFL iBT and Jacobs’ et al. composition. International Journal of Language Learning and Applied Linguistics World , 6 (1), 170–187.

Jacobs, H. L., Zingraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: a practical approach . Rowley: Newbury House.

Jakeman, V. (2006). Cambridge action plan for IELTS: academic module . Cambridge: Cambridge University Press.

Jamieson, J., & Poonpon, K. (2013). Developing analytic rating guides for TOEFL iBT integrated speaking tasks. In ETS research series (RR-13-13, TOEFLiBT-20) . Princeton: ETS.

Johnson, R. L., Penny, J., & Gordon, B. (2000). The relation between score resolution methods and interrater reliability: An empirical study of an analytic scoring rubric. Applied Measurement in Education , 13 , 121–138 https://doi.org/10.1207/S15324818AME1302_1 .

Jones, C. (2001). The relationship between writing centers and improvement in writing ability: An assessment of the literature. Journal of Education , 122 (1), 3–20.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review , 2 , 130–144.

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement , (4th ed., pp. 17–64). Westport: American Council on Education and Praeger Publishers.

Kane, T. S. (2000). Oxford essential guide to writing . New York: Berkey Publishing Group.

Kellogg, R. T., Turner, C. E., Whiteford, A. P., & Mertens, A. (2016). The role of working memory in planning and generating written sentences. Journal of Writing Research , 7 (3), 397–416.

Kim, Y. H. (2011). Diagnosing EAP writing ability using the reduced reparametrized unified model. Language Testing , 28 (4), 509–541.

Klein, P. D., & Boscolo, P. (2016). Trends in research on writing as a learning activity. Journal of Writing Research , 7 (3), 311–350 https://doi.org/10.17239/jowr-2016.07.3.01 .

Knoch, U. (2009). The assessment of academic style in EAP writing: the case of the rating scale. Melbourne Papers in Language Testing , 13 (1), 35.

Knoch, U. (2011). Rating scales for diagnostic assessment of writing: what should they look like and where should the criteria come from? Assessing Writing , 16 (2), 81–96.

Kondo-Brown, K. (2002). A facet analysis of rater bias in Japanese second language writing performance. Language Testing , 19 (1), 3–31.

Kong, N., Liu, O. L., Malloy, J., & Schedl, M. A. (2009). Does content knowledge affect TOEFL iBT reading performance? A confirmatory approach to differential item functioning. In ETS research report series (RR-09-29, TOEFLiBT-09) . Princeton: ETS.

Kroll, B. (1990). Second language writing (Cambridge Applied Linguistics): research insights for the classroom . Cambridge: Cambridge University Press.

Kroll, B., & Kruchten, P. (2003). The rational unified process made easy: A practitioner's guide to the RUP. Boston: Pearson Education.

Kuo, S. (2007). Which rubric is more suitable for NSS liberal studies? Analytic or holistic? Educational Research Journal , 22 (2), 179–199.

Lanteigne, B. (2017). Unscrambling jumbled sentences: an authentic task for English language assessment? Studies in Second Language Learning and Teaching , 7 (2), 251–273 https://doi.org/10.14746/ssllt.2017.7.2.5 .

Leki, L., Cumming, A., & Silva, T. (2008). A synthesis of research on second language writing in English . New York: Routledge.

Levin, P. (2009). Write great essays . London: McGraw-Hill Education.

Loughead, L. (2010). IELTS practice exam: with audio CDs . Hauppauge: Barron’s Education Series.

Lu, J., & Zhang, Z. (2013). Assessing and supporting argumentation with online rubrics. International Education Studies , 6 (7), 66–77.

Lumley, T. (2002). Assessment criteria in a large-scale writing test: what do they really mean to the raters? Language Testing , 19 (3), 246–276.

Lumley, T. (2005). Assessing second language writing: the rater’s perspective . Frankfurt: Lang.

MacDonald, S. (1994). Professional academic writing in the humanities and social sciences . Carbondale: Southern Illinois University Press.

Mackenzie, J. (2007). Essay writing: Teaching the basics from the ground up. Markham: Pembroke Publishers.

Malone, M. E., & Montee, M. (2014). Stakeholders’ beliefs about the TOEFL iBT test as a measure of academic language ability (TOEFL iBT Report No. 22, ETS Research Report No. RR-14-42) . Princeton: Educational Testing Service https://doi.org/10.1002/ets2.12039 .

Matsuda, P. K. (2002). Basic writing and second language writers: Toward an inclusive definition. Journal of Basic Writing , 22 (2), 67–89.

McLaren, S. (2006). Essay writing made easy . Sydney: Pascal Press.

McMillan, J. H. (2001). Classroom assessment: principles and practice for effective instruction , (2nd ed., ). Boston: Allyn & Bacon.

Melendy, G. A. (2008). Motivating writers: the power of choice. Asian EFL Journal , 20 (3), 187–198.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessment. Educational Researcher , 23 (2), 13–23.

Moore, J. (2009). Common mistakes at proficiency and how to avoid them . Cambridge: Cambridge University Press.

Moskal, B. M., & Leydens, J. (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research & Evaluation , 7 , 10.

Moss, P. A. (1994). Can there be validity without reliability? Educational Researcher , 23 (2), 5–12.

Muenz, T. A., Ouchi, B. Y., & Cole, J. C. (1999). Item analysis of written expression scoring systems from the PIAT-R and WIAT. Psychology and Schools , 36 (1), 31–40.

Muncie, J. (2002). Using written teacher feedback in EFL composition classes. ELT Journal , 54 (1), 47–53 https://doi.org/10.1093/elt/54.1.47 .

Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet rasch measurement: Part I. Journal of Applied Measurement , 4 (4), 386–422.

Noonan, L. E., & Sulsky, L. M. (2001). Impact of frame-of-reference and behavioral observation training on alternative training effectiveness criteria in a Canadian military sample. Human Performance , 14 (1), 3–26.

Nunn, R. C. (2000). Designing rating scales for small-group interaction. ELT Journal , 54 (2), 169–178.

Nunn, R. C., & Adamson, J. (2007). Toward the development of interactional criteria for journal paper evaluation. Asian EFL Journal , 9 (4), 205–228.

Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from? An intellectual history. Written Communication , 10 (3), 267–333.

O’Neil, T. R., & Lunz, M. E. (1996). Examining the invariance of rater and project calibrations using a multi-facet rasch model . New York: Paper presented at the Annual Meeting of the American Educational Research Associations.

Obee, B. (2005). Practice tests for the revised CPE . Berkshire: Express Publishing.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purpose revisited. Educational Research Review , 9 , 129–144.

Pollitt, A., & Hutchinson, C. (1987). Calibrating graded assessments: rasch partial credit analysis of performance in writing. Language Testing , 4 (1), 72–92.

Raimes, A. (1991). Out of the woods: Emerging traditions in the teaching of writing. TESOL Quarterly , 25 (3), 407–430.

Reid, J. (1993). Teaching ESL writing . Englewood Cliffs: Regents Prentice Hall.

Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing , 15 (1), 18–39.

Richards, J. C., & Schmidt, R. (2002). Longman dictionary of language teaching and applied linguistics . New York: Pearson Education.

Roch, S. G., & O’Sullivan, B. J. (2003). Frame of reference rater training issues: recall, time and behavior observation training. International Journal of Training and Development , 7 (2), 93–107.

Rosenfeld, M., Courtney, R., & Fowles, M. (2004). Identifying the writing tasks important for academic success at the undergraduate and graduate levels. Research report 42 . Princeton: Educational Testing Service.

Rosenfeld, M., Leung, S., & Oltman, P. K. (2001). Identifying the reading, writing, speaking, and listening tasks important for academic success at the undergraduate and graduate levels (TOEFL Monograph Series MS-21) . Princeton: Educational Testing Service.

Rupp, A. A., Casabianca, J. M., Krüger, M., Keller, S., & Köller, O. (2019). Automated essay scoring at scale: a case study in Switzerland and Germany (RR-86. ETS RR-19-12) . ETS Research Report Series , 2019 https://doi.org/10.1002/ets2.12249 .

Saeidi, M., Yousefi, M., & Baghayi, P. (2013). Rater bias in assessing Iranian EFL learners’ writing performance. Iranian Journal of Applied Linguistics , 16 (1), 145–175.

Sasaki, M. (2000). Toward an empirical model of EFL writing processes: an explanatory study. Journal of Second Language Writing , 9 (3), 259–291.

Sasaki, M., & Hirose, K. (1999). Development of an analytic rating scale for Japanese L1 writing. Language Testing , 16 (4), 457–478.

Schaefer, E. (2008). Rater bias patterns in an EFL writing assessment. Language Testing , 25 (4), 465–493.

Schirmer, B. R., & Bailey, J. (2000). Writing assessment rubric: an instructional approach for struggling writers. Teaching Exceptional Children , 33 (1), 52–58.

Schoonen, R. (2005). Generalizability of writing scores: an application of structural equation modeling. Language Testing , 22 (1), 1–5.

Shaw, S. D., & Weir, C. J. (2007). Examining writing: research and practice in assessing second language writing . Cambridge: Cambridge University Press.

Shermis, M. (2014). State-of-the-art automated essay scoring: competition, results, and future directions from a United States demonstration. Assessing Writing , 20 , 53–76 https://doi.org/10.1016/j.asw.2013.04.001 .

Shi, L. (2001). Native- and nonnative- speaking EFL teachers’ evaluation of Chinese students’ English writing. Language Testing , 18 (3), 303–325.

Spratt, M., & Taylor, L. B. (2000). The Cambridge CAE course: self-study student’s book . Cambridge: Cambridge University Press.

Spurr, B. (2005). Successful essay writing for senior high school . NSW: New Frontier Publishing.

Staff, M. P. (2017). GRE guide to the use of scores. In Graduate record examination . Princeton: ETS.

Stevens, J. P. (2002). Applied multivariate statistics for the social sciences , (4th ed., ). Hillsdale: Erlbaum.

Stewart, A. (2009). IELTS preparation & practice: reading and writing—academic module . New York: Pearson Education.

Tardy, M. C., & Matsuda, P. K. (2009). The construction of author voice by editorial board members. Written Communication , 26 (1), 32–52.

Trace, J., Meier, V., & Janseen, G. (2016). “I can see that”: developing shared rubric category interpretations through score negotiation. Assessing Writing , 30 , 32–43 https://doi.org/10.1016/j.asw.2016.08.001 .

Ward, J. R., & McCotter, S. S. (2004). Reflection as a visible outcome for preservice teachers. Teaching and Teacher Education , 20 (3), 243–257.

Weigle, S. C. (2002). Assessing writing . Cambridge: Cambridge University Press.

Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing , 18 , 85–99.

Weir, C. J. (1990). Communicative language testing . New Jersey: Prentice Hall, Inc.

Weissberg, B. (2000). Developmental relationship in the acquisition of English syntax: Writing vs. speech. Journal of Learning and Instruction , 10 (1), 37–53 https://doi.org/10.1016/S0959-4752(99)00017-1 .

Wesolowski, B. W., Wind, S. A., & Engelhard, G. (2017). Evaluating differential rater functioning over time in the context of solo music performance assessment. Bulletin of the Council for Research in Music Education , ( 212 ), 75–98 https://doi.org/10.5406/bulcouresmusedu.212.0075 .

White, E. M. (1984). Teaching and assessing writing , (2nd ed., ). San Francisco: Jossey-Bass.

White, E. M. (1985). Teaching and assessing writing . San Francisco: Jossey-Bass.

White, E. M. (1994). Teaching and assessing writing , (2nd ed. ). San Francisco: Jossey-Bass.

Wiggins, G. (1994). The constant danger of sacrificing validity to reliability: Making writing assessment serve writers. Assessing Writing, 1, 129–139 https://doi.org/10.1016/1075-2935(94)90008-6.

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth: Heinemann.

Wilson, M. (2017). Reimagining writing assessment: From scales to stories. Portsmouth: Heinemann.

Wind, S. A. (2020). Do raters use rating scale categories consistently across analytic rubric domains in writing assessment? Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100416 .

Wind, S. A., Tsai, C. L., Grajeda, S. B., & Bergin, C. (2018). Principals’ use of rating scale categories in classroom observation for teacher evaluation. School Effectiveness and School Improvement , 29 (3), 485–510 https://doi.org/10.1080/09243453.2018.1470989 .

Wiseman, C. S. (2012). A comparison of the performance of analytic vs. holistic scoring rubrics to assess L2 writing. Iranian Journal of Language Testing , 2 (1), 59–61.

Wyldeck, K. (2008). Everyday spelling and grammar . Sydney: Pascal Press.

Zahler, K. A. (2011). McGraw-Hill’s conquering the NEW GRE verbal and writing . New York: McGraw-Hill Education.

Zhang, B., Johnson, L., & Kilic, G. B. (2008). Assessing the reliability of self-and-peer rating in student group work. Assessment & Evaluation in Higher Education , 33 (3), 329–340 https://doi.org/10.1080/02602930701293181 .

Acknowledgements

The authors would like to thank the reviewers for their fruitful comments. We would also like to thank the raters who kindly accepted to contribute to this study.

Author information

Authors and affiliations

Department of Foreign Languages, TUMS International College, Tehran University of Medical Sciences (TUMS), Keshavarz Blvd., Tehran, 1415913311, Iran

Enayat A. Shabani & Jaleh Panahi

Contributions

The authors made almost equal contributions to this manuscript, and both read and approved the final manuscript.

Authors’ information

Enayat A. Shabani ([email protected]) holds a Ph.D. in TEFL and is currently the Chair of the Department of Foreign Languages at Tehran University of Medical Sciences (TUMS). His research interests include language testing and assessment and the internationalization of higher education.

Jaleh Panahi ([email protected]) holds an M.A. in TEFL. She has been teaching English for 12 years, with a main focus on IELTS instruction, and is currently a part-time instructor at the Department of Foreign Languages, Tehran University of Medical Sciences. Her research interests are language assessment, and language and cognition.

Corresponding author

Correspondence to Enayat A. Shabani.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Shabani, E.A., Panahi, J. Examining consistency among different rubrics for assessing writing. Lang Test Asia 10, 12 (2020). https://doi.org/10.1186/s40468-020-00111-4

Received: 16 June 2020

Accepted: 03 September 2020

Published: 26 September 2020

DOI: https://doi.org/10.1186/s40468-020-00111-4


Keywords

  • Scoring rubrics
  • Essay writing
  • Tests of English language proficiency
  • Writing assessment

How are computers scoring STAAR essays? Texas superintendents, lawmaker want answers

Educators and legislators are concerned about transparency and a spike in high schoolers scoring zero points on written answers.

By Talia Richman

11:10 AM on Feb 15, 2024 CST — Updated at 8:00 PM on Feb 15, 2024 CST

Texas superintendents — and at least one lawmaker — want answers from the state education commissioner about how computers are scoring STAAR essays.

The Texas Education Agency quietly debuted a new system for examining student answers on the State of Texas Assessments of Academic Readiness, or STAAR, in December . Roughly three-quarters of written responses are scored by a computer rather than a person.

“This is surprising news to me as a member of the House Public Education Committee, as I do not recall ever receiving notice of this novel and experimental method for grading high-stakes, STAAR tests,” Rep. Gina Hinojosa, D-Austin, wrote in a recent letter to Commissioner Mike Morath, which was also shared with The Dallas Morning News .

Superintendents across the state also were caught off guard until recently. Many school districts already are suing the state over changes to the academic accountability system that’s largely based on STAAR performance.

Related: Computers scoring Texas students’ STAAR essay answers, state officials say

The News reported on the rollout of computer scoring Wednesday.

The use of computers to score essays “was never communicated to school districts; yet this seems to be an unprecedented change that a ‘heads up’ would be reasonably warranted,” HD Chambers, director of the Texas School Alliance, wrote to Morath in a letter shared with The News .

TEA spokesman Jake Kobersky said in a statement that the agency is developing a comprehensive presentation for educators, explaining the changes in detail and addressing outstanding questions.

He added that the agency alerted the House Public Education Committee in August 2022 that it was pursuing automated scoring.

The final bullet point on an 18-page slideshow read: “TEA is pursuing automation for scoring where appropriate to reduce costs while ensuring reliability. Full human scoring is not possible under item-level computer-adaptive (B), and full human scoring with no automation under the current system would require at least $15-20M more per year.”

The new scoring method rolled out amid a broader STAAR redesign. The revamped test, which launched last year, caps the number of multiple-choice questions and includes essays at every grade level. State officials say it would cost millions more to have only humans score the test.

The “automated scoring engines” are programmed to emulate how humans would assess an essay, and they don’t learn beyond a single question. The computer determines how to score written answers after analyzing thousands of students’ responses that were previously scored by people.
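
TEA has not published the details of its scoring engines, so the following is only a generic, hypothetical stand-in for the approach described above: train a model, per question, on responses that humans have already scored, then use it to score new responses to the same question. The toy essays, the scores, and the TF-IDF-plus-ridge model are invented for illustration and do not represent the state's actual system.

```python
# Hypothetical per-question essay scorer trained on previously human-scored responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Invented training data for a single prompt: responses and the scores humans gave them.
train_essays = [
    "the passage argues that recycling helps because it reduces waste and saves energy",
    "i liked the story",
    "the author supports the claim with two examples and explains why each one matters",
]
train_scores = [4, 0, 5]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(train_essays, train_scores)

# Score a new response to the same prompt; rounding mimics a rubric-point scale.
new_response = ["recycling reduces waste so the author says it helps the planet"]
print(round(float(model.predict(new_response)[0])))
```

Production engines use far richer features and agreement checks against human scorers; the sketch only shows the train-on-human-scores, score-new-responses loop.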

Among the district leaders’ biggest concerns is a huge spike in low scores among high schoolers under the new system.

Roughly eight in 10 written responses on the most recent English II End of Course exam received zero points this fall.

For the spring test — the first iteration of the redesigned test, but scored only by humans — roughly a quarter of responses scored zero points in the same subject.

Members of the Texas School Alliance, which represents 46 districts, “examined their individual district results and found shockingly consistent scoring differences.”

Chris Rozunick, the director of the state’s assessment development division, previously told The News that she understands why people connect the spike in zeroes to the rollout of automated scoring based on the timing. But she insists that the two are unrelated.

Many students who take STAAR in the fall are “re-testers” who did not meet grade level on a previous test attempt. Spring testers tend to perform better, according to agency officials who were asked to explain the spike in low scores in the fall.

“It really is the population of testers much more than anything else,” Rozunick said.

Kobersky added that, under the previous STAAR design, a score of zero was reserved for “unscorable responses,” meaning the question was left blank or answered in a nonsensical way. Under the redesigned test’s rubric, a zero can mean either that a response was unscorable or that the scorer judged zero to be the value of the response, he said.

Some district leaders requested the state education agency provide them images of students’ responses so that they could “better understand what led to the significant increase in the number of zeroes, and most importantly how to help students write their responses” to receive better scores.

“Each request has been denied,” Chambers wrote in his letter to Morath.

Kobersky said fall questions are not released because they can be reused for other tests.

TEA officials say a technical report, with a detailed overview of the system, will be available later this year.

STAAR scores are of tremendous importance to district leaders, families and communities. Schools are graded on the state’s academic accountability system largely based on how students perform on these standardized tests.

Related: What are Texas’ A-F school grades, and why do they matter?

“As with all aspects of the STAAR test and the A-F accountability system, it is important that there is transparency, accuracy and fairness in these high-stakes results,” Hinojosa wrote.

The DMN Education Lab deepens the coverage and conversation about urgent education issues critical to the future of North Texas.

The DMN Education Lab is a community-funded journalism initiative, with support from Bobby and Lottye Lyle, Communities Foundation of Texas, The Dallas Foundation, Dallas Regional Chamber, Deedie Rose, Garrett and Cecilia Boone, The Meadows Foundation, The Murrell Foundation, Solutions Journalism Network, Southern Methodist University, Sydney Smith Hicks and the University of Texas at Dallas. The Dallas Morning News retains full editorial control of the Education Lab’s journalism.

Talia Richman

Talia Richman , Staff writer . Talia is a reporter for The Dallas Morning News Education Lab. A Dallas native, she attended Richardson High School and graduated from the University of Maryland. She previously covered schools and City Hall for The Baltimore Sun.
