The College Study



Short Essay on Our Examination System

Outline for a 500-word essay: examinations are necessary, but students are afraid of them; reasons for students’ fear of examinations; two systems of examination, the annual system and the semester system (of sessional tests); the use of unfair means in examinations; when examinations can become easy for students.

“Examinations are necessary to bring out what lies in our minds and hearts and our skill in answering all kinds of questions.”

Students are afraid of examinations for different reasons. First of all, they think that examinations are very difficult. They fear they may not be able to answer most of the questions in their examinations and might fail. But it often happens that a student in the examination hall finds his papers unexpectedly easy. Examinations are quite easy for a student who studies his books at least for some time every day. Examinations are a good means of testing the ability of the student. It is through the answers he gives in his examination that we learn about his real ability. The preparation for an examination makes him able and efficient.

There are two systems of examinations. In one system, we have an annual examination. Students take this examination after reading the courses for one or two years. Students find this to be quite difficult, but it helps them gain a full understanding of their books or courses. The other system, the semester system, is that of sessional tests. There are tests in classes after every month or every second or third month. The marks that a student gets in these tests are added up. There is then much less need for a single “total” annual examination. This system is easy for students, but often they cannot gain a good understanding of their whole courses. It is said that teachers can practice favoritism in this system, and it does not work well in developing or corrupt societies.

The use of unfair means in examinations is quite common these days. Most students cannot study their courses regularly all through the year. When the examination gets near, some of them find that they can pass it by copying in the examination hall. Some members of the staff working in examination halls also accept bribes and provide facilities for copying. All this should be stopped forthwith for the sake of genuine (real) educational progress. Luckily, in some main parts of the country, we find more stringent (strict or severe) controls in examination centers than ever before. Examinations become easy if students work and study regularly. Then they will like to take exams and tests more often.


Our Examination System Essay (400 Words)

Examinations are of great use. Examinations are a means (way) of judging or knowing the ability of candidates. Good results in examinations are taken as a sign of knowledge and ability.

One important reason for the decrease in the importance of examinations is the decline in teaching standards. Because of uncertain social conditions, our teachers, like the students, have not been able to attend to their work very well. Sometimes they had to cover long courses over short periods of time, or they had to leave out portions of the courses to prepare students for examinations. Because of the absence of facilities for research or advanced studies, quite a few of our teachers, especially in the sciences, could not improve their knowledge adequately (suitably). All this affected the examination (and education) system very badly.

In our examination system, examinations of most of the classes are held once a year. In most of the schools, colleges and universities, students are prepared for their annual examinations. This gives them a chance to study their books for a few months or weeks before their examinations. Quite a few students can easily neglect their studies for the greater part of the year. Only in some educational institutions do students have to do class or sessional work under the strict supervision of teachers.

In the 1970s, we experimented with the semester system of examinations. It was hoped that sessional tests after every one, two or three months would make our students more careful. The system failed because of overcrowded classes and the absence of good teaching and library facilities. The present experiment with the semester system faces the same challenges. The effort of the authorities to introduce the objective system at the school, Intermediate and higher levels has met with utter failure.

The courses for our examinations should include (contain) the continuing developments in the arts and sciences around the world. That is, these courses should be made as modern and useful as possible. In the arts, they should include proper details of our history, culture and religion and should be in agreement with our national aims and purposes.

All our examinations should be arranged and conducted honestly and efficiently. Examiners and persons responsible for the conduct of examinations should be men of ability and high principles. Then, the results of examinations should be declared as early as possible.



CoursesXpert

Essay on Examination System in India: Striking a Balance Between Assessment and Learning

The examination system in India plays a pivotal role in the education landscape, serving as a crucial mechanism for assessing students’ knowledge, skills, and understanding of various subjects. It has been a longstanding tradition, shaping academic journeys and influencing the futures of countless individuals. However, the system is not without its challenges and criticisms. This essay delves into the examination system in India, examining its strengths, weaknesses, and the ongoing discourse around its effectiveness.

Quick Overview:

  • Assessment of Knowledge: The primary purpose of examinations is to assess students’ comprehension of academic content. They provide a standardized method to evaluate the knowledge acquired during a specific period, acting as a benchmark for academic progress.
  • Pressure and Stress: One of the prominent critiques of the examination system is the stress and pressure it imposes on students. High-stakes exams, such as board exams and entrance tests, can lead to mental health issues, anxiety, and a skewed focus on rote memorization rather than holistic learning.
  • Competitive Nature: The system fosters a competitive environment among students, often emphasizing grades and ranks over a genuine understanding of concepts. This competition can lead to a narrow focus on scoring well rather than nurturing a love for learning and critical thinking.
  • Limited Assessment Methods: Examinations predominantly rely on written tests, which may not effectively capture a student’s overall abilities, creativity, or practical skills. Alternative assessment methods, such as project work, presentations, and practical examinations, are sometimes overshadowed.
  • Role in Career Opportunities: The examination results often play a significant role in determining access to higher education institutions and career opportunities. The emphasis on a single exam determining future paths raises questions about the fairness and inclusivity of the system.

Conclusion: The examination system in India is a double-edged sword, serving as a vital tool for assessment while also bearing the weight of critiques and challenges. While it effectively gauges students’ understanding of academic content, the undue stress, competitive nature, limited assessment methods, and the disproportionate influence on career opportunities raise valid concerns.

Efforts to reform the examination system should focus on striking a balance between assessment and holistic learning. The inclusion of diversified assessment methods, reducing the emphasis on rote memorization, and fostering a culture that values creativity and critical thinking are essential steps. Additionally, addressing the mental health aspect by re-evaluating the pressure associated with examinations is crucial for nurturing well-rounded individuals.

The examination system, when thoughtfully restructured, can evolve into a tool that not only assesses academic knowledge but also encourages a love for learning, innovation, and a broader understanding of the world. As India’s education system continues to adapt to the changing needs of society, a reimagined examination system could play a significant role in shaping a generation of individuals equipped with both knowledge and the skills necessary for success in a rapidly evolving world.


CbseAcademic.in

Essay on Examination 500+ Words

Examinations, often called “exams,” are a common part of education. They are tests that help us learn, measure our knowledge, and prepare for the future. In this essay, we will explore the importance of examinations in education, how they help us grow, and why they are necessary.

Assessing Learning

Examinations are essential for assessing what we have learned. They evaluate our understanding of subjects like math, science, history, and more. Through exams, teachers can identify areas where students excel and where they might need extra help.

Goal Setting

Examinations set goals for students. Knowing that there will be tests encourages us to study and learn. Achieving good results in exams gives us a sense of accomplishment and motivates us to keep learning.

Academic Progress

Exams help track our academic progress. By taking regular tests, teachers and parents can see how we are doing in school. If we are struggling in a particular subject, exams help identify the areas where we need improvement.

Preparing for the Future

Examinations prepare us for the future. As we grow, we face bigger exams like high school finals and college entrance exams. The skills we develop in earlier exams, such as time management and problem-solving, help us succeed in these more significant tests.

Critical Thinking

Exams encourage critical thinking. We are often asked to solve problems, analyze information, and apply what we have learned. These skills are valuable in everyday life and future careers.

Fair Assessment

Examinations provide a fair way to assess students. They are standardized, which means that all students take the same test under the same conditions. This ensures that everyone is evaluated fairly.

Time Management

Exams teach us time management. We have a limited amount of time to complete the test, which helps us learn how to prioritize tasks and work efficiently.

Building Confidence

Exams can boost our confidence. When we prepare well and do our best, we feel proud of our accomplishments. This self-confidence extends beyond exams and into other areas of life.

Identifying Strengths and Weaknesses

Exams help us identify our strengths and weaknesses. If we do well in a particular subject, we may discover a passion for it. On the other hand, if we struggle, we can seek help and improve.

Preparing for Challenges

Exams prepare us for life’s challenges. In the real world, we often face situations where we need to think critically, solve problems, and make decisions. The skills we develop through exams help us tackle these challenges.

Conclusion of Essay on Examination

In conclusion, examinations play a vital role in education. They assess our learning, set goals, track progress, and prepare us for the future. Exams encourage critical thinking, time management, and confidence-building. They provide a fair way to evaluate students and help us identify our strengths and weaknesses. While exams can be challenging, they are a valuable part of our educational journey. Embracing them and approaching them with a positive mindset can lead to personal growth and success. Examinations are not just tests; they are stepping stones to a brighter future.


Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts

Writing Essays for Exams


While most OWL resources recommend a longer writing process (start early, revise often, conduct thorough research, etc.), sometimes you just have to write quickly in test situations. However, these exam essays can be no less important pieces of writing than research papers because they can influence final grades for courses, and/or they can mean the difference between getting into an academic program (GED, SAT, GRE). To that end, this resource will help you prepare and write essays for exams.

What is a well-written answer to an essay question?

Well Focused

Be sure to answer the question completely, that is, answer all parts of the question. Avoid "padding." A lot of rambling and ranting is a sure sign that the writer doesn't really know what the right answer is and hopes that somehow, something in that overgrown jungle of words was the correct answer.

Well Organized

Don't write in a haphazard "think-as-you-go" manner. Do some planning and be sure that what you write has a clearly marked introduction which both states the point(s) you are going to make and also, if possible, how you are going to proceed. In addition, the essay should have a clearly indicated conclusion which summarizes the material covered and emphasizes your thesis or main point.

Well Supported

Do not just assert that something is true; prove it. What facts, figures, examples, tests, etc. prove your point? In many cases, the difference between an A and a B as a grade is due to the effective use of supporting evidence.

Well Packaged

People who do not use the conventions of language are thought of by their readers as less competent and less educated. If you need help with these or other writing skills, come to the Writing Lab.

How do you write an effective essay exam?

  • Read through all the questions carefully.
  • Budget your time and decide which question(s) you will answer first.
  • Underline the key word(s) which tell you what to do for each question.
  • Choose an organizational pattern appropriate for each key word and plan your answers on scratch paper or in the margins.
  • Write your answers as quickly and as legibly as you can; do not take the time to recopy.
  • Begin each answer with a one- or two-sentence thesis which summarizes your answer. If possible, phrase the statement so that it rephrases the question's essential terms into a statement (which therefore directly answers the essay question).
  • Support your thesis with specific references to the material you have studied.
  • Proofread your answer and correct errors in spelling and mechanics.

Specific organizational patterns and "key words"

Most essay questions will have one or more "key words" that indicate which organizational pattern you should use in your answer. The six most common organizational patterns for essay exams are definition, analysis, cause and effect, comparison/contrast, process analysis, and thesis-support.

Definition

Typical questions:

  • "Define X."
  • "What is an X?"
  • "Choose N terms from the following list and define them."

Q: "What is a fanzine?"

A: A fanzine is a magazine written, mimeographed, and distributed by and for science fiction or comic strip enthusiasts.

Avoid constructions such as "An encounter group is where ..." and "General semantics is when ... ."

How to answer:

  • State the term to be defined.
  • State the class of objects or concepts to which the term belongs.
  • Differentiate the term from other members of the class by listing the term's distinguishing characteristics.

Tools you can use

  • Details which describe the term
  • Examples and incidents
  • Comparisons to familiar terms
  • Negation to state what the term is not
  • Classification (i.e., break it down into parts)
  • Examination of origins or causes
  • Examination of results, effects, or uses

Analysis

Analysis involves breaking something down into its components and discovering the parts that make up the whole.

Typical questions:

  • "Analyze X."
  • "What are the components of X?"
  • "What are the five different kinds of X?"
  • "Discuss the different types of X."

Q: "Discuss the different services a junior college offers a community."

A: Thesis: A junior college offers the community at least three main types of educational services: vocational education for young people, continuing education for older people, and personal development for all individuals.

How to answer: Outline the supporting details and examples. For example, if you were answering the example question, an outline might include:

  • Vocational education
  • Continuing education
  • Personal development

Write the essay, describing each part or component and making transitions between each of your descriptions. Some useful transition words include:

  • first, second, third, etc.
  • in addition

Conclude the essay by emphasizing how each part you have described makes up the whole you have been asked to analyze.

Cause and Effect

Cause and effect involves tracing probable or known effects of a certain cause or examining one or more effects and discussing the reasonable or known cause(s).

Typical questions:

  • "What are the causes of X?"
  • "What led to X?"
  • "Why did X occur?"
  • "Why does X happen?"
  • "What would be the effects of X?"

Q: "Define recession and discuss the probable effects a recession would have on today's society."

A: Thesis: A recession, which is a nationwide lull in business activity, would be detrimental to society in the following ways: it would .......A......., it would .......B......., and it would .......C....... .

The rest of the answer would explain, in some detail, the three effects: A, B, and C.

Useful transition words:

  • consequently
  • for this reason
  • as a result

Comparison-Contrast

Typical questions:

  • "How does X differ from Y?"
  • "Compare X and Y."
  • "What are the advantages and disadvantages of X and Y?"

Q: "Which would you rather own—a compact car or a full-sized car?"

A: Thesis: I would own a compact car rather than a full-sized car for the following reasons: .......A......., .......B......., .......C......., and .......D....... .

Two patterns of development:

  • Subject by subject: cover the full-sized car first (its advantages, then its disadvantages), then the compact car (its advantages, then its disadvantages).
  • Point by point: compare the two cars advantage by advantage and disadvantage by disadvantage.

Useful transition words:

  • on the other hand
  • unlike A, B ...
  • in the same way
  • while both A and B are ..., only B ...
  • nevertheless
  • on the contrary
  • while A is ..., B is ...
  • "Describe how X is accomplished."
  • "List the steps involved in X."
  • "Explain what happened in X."
  • "What is the procedure involved in X?"

Process (sometimes called process analysis)

This involves giving directions or telling the reader how to do something. It may involve discussing some complex procedure as a series of discrete steps. The organization is almost always chronological.

Q: "According to Richard Bolles' What Color Is Your Parachute?, what is the best procedure for finding a job?"

A: In What Color Is Your Parachute?, Richard Bolles lists seven steps that all job-hunters should follow: .....A....., .....B....., .....C....., .....D....., .....E....., .....F....., and .....G..... .

The remainder of the answer should discuss each of these seven steps in some detail.

Useful transition words:

  • following this
  • after, afterwards, after this
  • subsequently
  • simultaneously, concurrently

Thesis and Support

Thesis and support involves stating a clearly worded opinion or interpretation and then defending it with all the data, examples, facts, and so on that you can draw from the material you have studied.

Typical questions:

  • "Discuss X."
  • "A noted authority has said X. Do you agree or disagree?"
  • "Defend or refute X."
  • "Do you think that X is valid? Defend your position."

Q: "Despite criticism, television is useful because it aids in the socializing process of our children."

A: Television hinders rather than helps in the socializing process of our children because .......A......., .......B......., and .......C....... .

The rest of the answer is devoted to developing arguments A, B, and C.

Useful transition words:

  • it follows that

Practice

A. Which of the following two answers is the better one? Why?

Question: Discuss the contribution of William Morris to book design, using as an example his edition of the works of Chaucer.

a. William Morris's Chaucer was his masterpiece. It shows his interest in the Middle Ages. The type is based on medieval manuscript writing, and the decoration around the edges of the pages is like that used in medieval books. The large initial letters are typical of medieval design. Those letters were printed from woodcuts, which was the medieval way of printing. The illustrations were by Burne-Jones, one of the best artists in England at the time. Morris was able to get the most competent people to help him because he was so famous as a poet and as a designer of the Morris chair, wallpaper, and other decorative items for the home. He designed the furnishings for his own home, which was widely admired among the sort of people he associated with. In this way he started the arts and crafts movement.

b. Morris's contribution to book design was to approach the problem as an artist or fine craftsman, rather than a mere printer who reproduced texts. He wanted to raise the standards of printing, which had fallen to a low point, by showing that truly beautiful books could be produced. His Chaucer was designed as a unified work of art or high craft. Since Chaucer lived in the Middle Ages, Morris decided to design a new type based on medieval script and to imitate the format of a medieval manuscript. This involved elaborate letters and large initials at the beginnings of verses, as well as wide borders of intertwined vines with leaves, fruit, and flowers in strong colors. The effect was so unusual that the book caused great excitement and inspired other printers to design beautiful rather than purely utilitarian books.

From James M. McCrimmon, Writing with a Purpose, 7th ed. (Boston: Houghton Mifflin Company, 1980), pp. 261-263.

B. How would you plan the structure of the answers to these essay exam questions?

1. Was the X Act a continuation of earlier government policies or did it represent a departure from prior philosophies?

2. What seems to be the source of aggression in human beings? What can be done to lower the level of aggression in our society?

3. Choose one character from Novel X and, with specific references to the work, show how he or she functions as an "existential hero."

4. Define briefly the systems approach to business management. Illustrate how this differs from the traditional approach.

5. What is the cosmological argument? Does it prove that God exists?

6. Civil War historian Andy Bellum once wrote, "Blahblahblah blahed a blahblah, but of course if blahblah blahblahblahed the blah, then blahblahs are not blah but blahblah." To what extent and in what ways is the statement true? How is it false?

For more information on writing exam essays for the GED, please visit our Engagement area and go to the Community Writing and Education Station (CWEST) resources.


The Examination System

By Rui Magone. Last reviewed: 12 April 2019. Last modified: 28 April 2014. DOI: 10.1093/obo/9780199920082-0078.

Introduction

The examination system, also known as “civil service examinations” or “imperial examinations”—and, in Chinese, as keju 科舉, keju zhidu 科舉制度, gongju 貢舉, xuanju 選舉, or zhiju 制舉—was the imperial Chinese bureaucracy’s central institution for recruiting its officials. Following both real and idealized models from previous times, the system was established at the beginning of the 7th century CE, evolving over several dynasties into a complex institution that prevailed for 1,300 years before its abolition in 1905. One of the system’s most salient features, especially in the late imperial period (1400–1900), was its meritocratic structure (at least in principle, if not necessarily in practice): almost anyone from among the empire’s male population could sit for the examinations. Moreover, candidates were selected based on their performance rather than their pedigree. In order to be accessible to candidates anywhere in the empire, the system’s infrastructure spanned the entire territory. In a long sequence of triennial qualifying examinations at the local, provincial, metropolitan, and palace levels, candidates were mainly required to write rhetorically complicated essays elucidating passages from the Confucian canon. Most candidates failed at each level, and only a couple of hundred out of a million or more examinees attained final examination success at the metropolitan and palace levels. Due to its accessibility and ubiquity, the examination system had a decisive impact on the intellectual and social landscapes of imperial China. This impact was reinforced by the rule that candidates were allowed to retake examinations as often as they needed to in order to reach the next level. It was therefore not uncommon for individuals in imperial China to spend the greater part of their lives, occasionally even until their last breath, sitting for the competitions. Indeed, the extant sources reveal, by their sheer quantity alone, that large parts of the population, not only aspiring candidates, were in fact obsessed with the civil service examinations in the same way that modern societies are fascinated by sports leagues. To a great extent, it was this obsession, along with the system’s centripetal force constantly pulling the population in the different regions toward the political center in the capital, which may have held the large territory of imperial China together, providing it with both coherence and cohesion. Modern historiography has tended to have a negative view of the examination system, singling it out, and specifically its predominantly literary curriculum, as the major cause of traditional Chinese society’s failure to develop into a modern nation with a strong scientific and technological tradition of its own. In the late 20th and early 21st centuries, this paradigm has gradually become more nuanced as historians have begun to develop new ways of approaching the extant sources, in particular the large number of examination essays and aids.

Introductory Works

This section addresses readers who have little or no knowledge of the examination system and need both readable and reliable introductions to the subject. These works tend to highlight and describe extensively the Qing civil examinations during the 19th century, thus often creating among readers the impression that the system worked more or less the same in previous periods. While this was clearly not so, it is undeniable that no period in the long history of the civil examinations happens to be as well documented as the 19th century. Readers who desire a historically more nuanced sense of the system are referred to the sections General Overviews and Overviews by Period. Another problem with introductory works concerns the ideal balance between information and narration. Miyazaki 1981 and especially Jackson and Hugus 1999 are focused on telling a good story rather than providing copious evidence in dense footnotes. By contrast, Wilkinson 2012 and Zi 1894 are overtly technical, requiring a slow reading pace. The best way to strike a balance is to combine both approaches by, ideally, pairing Jackson and Hugus 1999 with Wilkinson 2012. A problem that concerns Wang 1988, Qi 2006, and Li 2010, all introductory works written by Chinese scholars in Chinese, is that they often quote passages from original primary sources in classical Chinese without providing a modern Chinese translation. One way to access these passages linguistically is to work directly with the literature cited under Terminological Issues. Finally, even though often neglected, the examination system also included a military branch, of which Zi 1896 provides the most readable account. Compared to their civil counterparts, the military competitions were of minimal significance, but they often served as a platform to obliquely move up the civil examination ladder.

Jackson, Beverley, and David Hugus. Ladder to the Clouds: Intrigue and Tradition in Chinese Rank. Berkeley, CA: Ten Speed, 1999.

Follows the trajectory of a late Qing examination candidate from his birth to his official position. Even though often leaning toward the fictitious, it is definitely a good read and one of the best illustrated books about the late imperial examination system and officialdom.

Li Bing 李兵. Qiannian keju (千年科举). Changsha, China: Yuelu shushe, 2010.

Written in a rather colloquial and therefore accessible style, this well-illustrated book by a renowned expert of the examination system gives answers to questions most frequently asked about this topic, such as whether women were allowed to sit for the examinations. Has a good and sizable list of further readings.

Miyazaki, Ichisada. China’s Examination Hell: The Civil Service Examinations of Imperial China. Translated by Conrad Schirokauer. New Haven, CT: Yale University Press, 1981.

Originally published in 1963 in a longer and more academic version, this is a popular work by one of the most prominent Japanese scholars of the examination system. Packed with vivid anecdotes, this brief and captivating text describes all examination tiers. It is focused on the circumstances of the late Qing period, albeit not always explicitly.

Qi Rushan 齐如山. Zhongguo de keming (中国的科名). Shenyang, China: Liaoning Jiaoyu chubanshe, 2006.

Originally published in Taipei in 1956, this is a very accessible introduction arranged according to key terms used at the examinations.

Wang Daocheng 王道成. Keju shihua (科举史话). Beijing: Zhonghua shuju, 1988.

This is a short, easy to read, yet very informative introduction to the topic by a leading expert. While mainly focused on describing the Qing period, it also devotes a chapter to the system’s history. Has a very valuable appendix containing samples of all Qing examination genres. There are several books with an identical title, so make sure to use the one authored by Wang Daocheng.

Wilkinson, Endymion. Chinese History: A New Manual. Cambridge, MA: Harvard University Asia Center, 2012.

Chapter 22, “Education and Examinations” (pp. 292–304), of this monumental work contains a systematic introduction to the structure and curriculum of the late imperial examination system. It also has a section on primary and secondary sources. There are several editions of this manual; the 2012 version is the one you should use.

Zi, Étienne. Pratique des examens littéraires en Chine. Shanghai: Imprimerie de la Mission Catholique, 1894.

This is the most thorough and reliable description of the late Qing examination system. Has a copious amount of high-quality illustrations, which have been recycled in many other publications. Even though this book is now available online, try to use the original edition if you want to consult or reproduce the illustrative material, in particular the large-scale map of the Jiangnan examination compound. Also available in a 1971 reprint (Taipei: Chengwen, 1971).

Zi, Étienne. Pratique des examens militaires en Chine. Shanghai: Imprimerie de la Mission Catholique, 1896.

This is the best account of the late Qing military examination system available in any language. Describes all tiers and provides samples of examination topics. Richly illustrated, it also includes images of the weaponry used for testing the military candidates. Like the previous text, available in a 1971 reprint (Taipei: Chengwen, 1971).



Zahid Notes

Examination system essay for 2nd year with outline


The difference between try and triumph is a little umph. - Marvin Phillips
Exams are not just tests; they tell you what you possess in your mind. - Saif Ullah Zahid
Your final exam grades must not be damaging to your employment and prospects. - Anonymous
Much good work is lost for the lack of a little more. - Edward H. Harriman


The Writing Center • University of North Carolina at Chapel Hill

Essay Exams

What this handout is about

At some time in your undergraduate career, you’re going to have to write an essay exam. This thought can inspire a fair amount of fear: we struggle enough with essays when they aren’t timed events based on unknown questions. The goal of this handout is to give you some easy and effective strategies that will help you take control of the situation and do your best.

Why do instructors give essay exams?

Essay exams are a useful tool for finding out if you can sort through a large body of information, figure out what is important, and explain why it is important. Essay exams challenge you to come up with key course ideas and put them in your own words and to use the interpretive or analytical skills you’ve practiced in the course. Instructors want to see whether:

  • You understand concepts that provide the basis for the course
  • You can use those concepts to interpret specific materials
  • You can make connections, see relationships, draw comparisons and contrasts
  • You can synthesize diverse information in support of an original assertion
  • You can justify your own evaluations based on appropriate criteria
  • You can argue your own opinions with convincing evidence
  • You can think critically and analytically about a subject

What essay questions require

Exam questions can reach pretty far into the course materials, so you cannot hope to do well on them if you do not keep up with the readings and assignments from the beginning of the course. The most successful essay exam takers are prepared for anything reasonable, and they probably have some intelligent guesses about the content of the exam before they take it. How can you be a prepared exam taker? Try some of the following suggestions during the semester:

  • Do the reading as the syllabus dictates; keeping up with the reading while the related concepts are being discussed in class saves you double the effort later.
  • Go to lectures (and put away your phone, the newspaper, and that crossword puzzle!).
  • Take careful notes that you’ll understand months later. If this is not your strong suit or the conventions for a particular discipline are different from what you are used to, ask your TA or the Learning Center for advice.
  • Participate in your discussion sections; this will help you absorb the material better so you don’t have to study as hard.
  • Organize small study groups with classmates to explore and review course materials throughout the semester. Others will catch things you might miss even when paying attention. This is not cheating. As long as what you write on the essay is your own work, formulating ideas and sharing notes is okay. In fact, it is a big part of the learning process.
  • As an exam approaches, find out what you can about the form it will take. This will help you forecast the questions that will be on the exam, and prepare for them.

These suggestions will save you lots of time and misery later. Remember that you can’t cram weeks of information into a single day or night of study. So why put yourself in that position?

Now let’s focus on studying for the exam. You’ll notice the following suggestions are all based on organizing your study materials into manageable chunks of related material. If you have a plan of attack, you’ll feel more confident and your answers will be clearer. Here are some tips:

  • Don’t just memorize aimlessly; clarify the important issues of the course and use these issues to focus your understanding of specific facts and particular readings.
  • Try to organize and prioritize the information into a thematic pattern. Look at what you’ve studied and find a way to put things into related groups. Find the fundamental ideas that have been emphasized throughout the course and organize your notes into broad categories. Think about how different categories relate to each other.
  • Find out what you don’t know, but need to know, by making up test questions and trying to answer them. Studying in groups helps as well.

Taking the exam

Read the exam carefully.

  • If you are given the entire exam at once and can determine your approach on your own, read the entire exam before you get started.
  • Look at how many points each part earns you, and find hints for how long your answers should be.
  • Figure out how much time you have and how best to use it. Write down the actual clock time that you expect to take in each section, and stick to it. This will help you avoid spending all your time on only one section. One strategy is to divide the available time according to the percentage worth of each question (a worked sketch follows this list). You don’t want to spend half of your time on something that is only worth one tenth of the total points.
  • As you read, make tentative choices of the questions you will answer (if you have a choice). Don’t just answer the first essay question you encounter. Instead, read through all of the options. Jot down really brief ideas for each question before deciding.
  • Remember that the easiest-looking question is not always as easy as it looks. Focus your attention on questions for which you can explain your answer most thoroughly, rather than settle on questions where you know the answer but can’t say why.
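That proportional budgeting is plain arithmetic, and it can help to work it through once before exam day. Below is a minimal sketch in Python; the 90-minute exam, the question names, and the point values are hypothetical examples, not taken from any real exam.

    # Minimal sketch: split the available exam time in proportion to point values.
    # The exam length and the point values below are hypothetical examples.
    def budget_time(total_minutes, points):
        """Return minutes per question, proportional to each question's point worth."""
        total_points = sum(points.values())
        return {q: total_minutes * p / total_points for q, p in points.items()}

    plan = budget_time(90, {"essay 1": 40, "essay 2": 40, "identifications": 20})
    for question, minutes in plan.items():
        print(f"{question}: about {minutes:.0f} minutes")
    # Prints: essay 1: about 36 minutes; essay 2: about 36 minutes;
    # identifications: about 18 minutes.

However you run the numbers, the rule of thumb is the same: a question worth one tenth of the points deserves roughly one tenth of the time.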

Analyze the questions

  • Decide what you are being asked to do. If you skim the question to find the main “topic” and then rush to grasp any related ideas you can recall, you may become flustered, lose concentration, and even go blank. Try looking closely at what the question is directing you to do, and try to understand the sort of writing that will be required.
  • Focus on what you do know about the question, not on what you don’t.
  • Look at the active verbs in the assignment—they tell you what you should be doing. We’ve included some of these below, with some suggestions on what they might mean. (For help with this sort of detective work, see the Writing Center handout titled Reading Assignments.)

Information words, such as who, what, when, where, how, and why ask you to demonstrate what you know about the subject. Information words may include:

  • define—give the subject’s meaning (according to someone or something). Sometimes you have to give more than one view on the subject’s meaning.
  • explain why/how—give reasons why or examples of how something happened.
  • illustrate—give descriptive examples of the subject and show how each is connected with the subject.
  • summarize—briefly cover the important ideas you learned about the subject.
  • trace—outline how something has changed or developed from an earlier time to its current form.
  • research—gather material from outside sources about the subject, often with the implication or requirement that you will analyze what you’ve found.

Relation words ask you to demonstrate how things are connected. Relation words may include:

  • compare—show how two or more things are similar (and, sometimes, different).
  • contrast—show how two or more things are dissimilar.
  • apply—use details that you’ve been given to demonstrate how an idea, theory, or concept works in a particular situation.
  • cause—show how one event or series of events made something else happen.
  • relate—show or describe the connections between things.

Interpretation words ask you to defend ideas of your own about the subject. Don’t see these words as requesting opinion alone (unless the assignment specifically says so), but as requiring opinion that is supported by concrete evidence. Remember examples, principles, definitions, or concepts from class or research and use them in your interpretation. Interpretation words may include:

  • prove, justify—give reasons or examples to demonstrate how or why something is the truth.
  • evaluate, respond, assess—state your opinion of the subject as good, bad, or some combination of the two, with examples and reasons (you may want to compare your subject to something else).
  • support—give reasons or evidence for something you believe (be sure to state clearly what it is that you believe).
  • synthesize—put two or more things together that haven’t been put together before; don’t just summarize one and then the other, and say that they are similar or different—you must provide a reason for putting them together (as opposed to compare and contrast—see above).
  • analyze—look closely at the components of something to figure out how it works, what it might mean, or why it is important.
  • argue—take a side and defend it (with proof) against the other side.

Plan your answers

Think about your time again. How much planning time you should take depends on how much time you have for each question and how many points each question is worth. Here are some general guidelines: 

  • For short-answer definitions and identifications, just take a few seconds. Skip over any you don’t recognize fairly quickly, and come back to them when another question jogs your memory.
  • For answers that require a paragraph or two, jot down several important ideas or specific examples that help to focus your thoughts.
  • For longer answers, you will need to develop a much more definite strategy of organization. You only have time for one draft, so allow a reasonable amount of time—as much as a quarter of the time you’ve allotted for the question—for making notes, determining a thesis, and developing an outline.
  • For questions with several parts (different requests or directions, a sequence of questions), make a list of the parts so that you do not miss or minimize one part. One way to be sure you answer them all is to number them in the question and in your outline.
  • You may have to try two or three outlines or clusters before you hit on a workable plan. But be realistic—you want a plan you can develop within the limited time allotted for your answer. Your outline will have to be selective—not everything you know, but what you know that you can state clearly and keep to the point in the time available.

Again, focus on what you do know about the question, not on what you don’t.

Writing your answers

As with planning, your strategy for writing depends on the length of your answer:

  • For short identifications and definitions, it is usually best to start with a general identifying statement and then move on to describe specific applications or explanations. Two sentences will almost always suffice, but make sure they are complete sentences. Find out whether the instructor wants definition alone, or definition and significance. Why is the identification term or object important?
  • For longer answers, begin by stating your forecasting statement or thesis clearly and explicitly. Strive for focus, simplicity, and clarity. In stating your point and developing your answers, you may want to use important course vocabulary words from the question. For example, if the question is, “How does wisteria function as a representation of memory in Faulkner’s Absalom, Absalom?” you may want to use the words wisteria, representation, memory, and Faulkner in your thesis statement and answer. Use these important words or concepts throughout the answer.
  • If you have devised a promising outline for your answer, then you will be able to forecast your overall plan and its subpoints in your opening sentence. Forecasting impresses readers and has the very practical advantage of making your answer easier to read. Also, if you don’t finish writing, it tells your reader what you would have said if you had finished (and may get you partial points).
  • You might want to use briefer paragraphs than you ordinarily do and signal clear relations between paragraphs with transition phrases or sentences.
  • As you move ahead with the writing, you may think of new subpoints or ideas to include in the essay. Stop briefly to make a note of these on your original outline. If they are most appropriately inserted in a section you’ve already written, write them neatly in the margin, at the top of the page, or on the last page, with arrows or marks to alert the reader to where they fit in your answer. Be as neat and clear as possible.
  • Don’t pad your answer with irrelevancies and repetitions just to fill up space. Within the time available, write a comprehensive, specific answer.
  • Watch the clock carefully to ensure that you do not spend too much time on one answer. You must be realistic about the time constraints of an essay exam. If you write one dazzling answer on an exam with three equally-weighted required questions, you earn only 33 points—not enough to pass at most colleges. This may seem unfair, but keep in mind that instructors plan exams to be reasonably comprehensive. They want you to write about the course materials in two or three or more ways, not just one way. Hint: if you finish a half-hour essay in 10 minutes, you may need to develop some of your ideas more fully.
  • If you run out of time when you are writing an answer, jot down the remaining main ideas from your outline, just to show that you know the material and with more time could have continued your exposition.
  • Double-space to leave room for additions, and strike through errors or changes with one straight line (avoid erasing or scribbling over). Keep things as clean as possible. You never know what will earn you partial credit.
  • Write legibly and proofread. Remember that your instructor will likely be reading a large pile of exams. The more difficult they are to read, the more exasperated the instructor might become. Your instructor also cannot give you credit for what they cannot understand. A few minutes of careful proofreading can improve your grade.

Perhaps the most important thing to keep in mind in writing essay exams is that you have a limited amount of time and space in which to get across the knowledge you have acquired and your ability to use it. Essay exams are not the place to be subtle or vague. It’s okay to have an obvious structure, even the five-paragraph essay format you may have been taught in high school. Introduce your main idea, have several paragraphs of support—each with a single point defended by specific examples, and conclude with a restatement of your main point and its significance.

Some physiological tips

Just think—we expect athletes to practice constantly and use everything in their abilities and situations in order to achieve success. Yet, somehow many students are convinced that one day’s worth of studying, no sleep, and some well-placed compliments (“Gee, Dr. So-and-so, I really enjoyed your last lecture”) are good preparation for a test. Essay exams are like any other testing situation in life: you’ll do best if you are prepared for what is expected of you, have practiced doing it before, and have arrived in the best shape to do it. You may not want to believe this, but it’s true: a good night’s sleep and a relaxed mind and body can do as much or more for you as any last-minute cram session. Colleges abound with tales of woe about students who slept through exams because they stayed up all night, wrote an essay on the wrong topic, forgot everything they studied, or freaked out in the exam and hyperventilated. If you are rested, breathing normally, and have brought along some healthy, energy-boosting snacks that you can eat or drink quietly, you are in a much better position to do a good job on the test. You aren’t going to write a good essay on something you figured out at 4 a.m. that morning. If you prepare yourself well throughout the semester, you don’t risk your whole grade on an overloaded, undernourished brain.

If for some reason you get yourself into this situation, take a minute every once in a while during the test to breathe deeply, stretch, and clear your brain. You need to be especially aware of the likelihood of errors, so check your essays thoroughly before you hand them in to make sure they answer the right questions and don’t have big oversights or mistakes (like saying “Hitler” when you really mean “Churchill”).

If you tend to go blank during exams, try studying in the same classroom in which the test will be given. Some research suggests that people attach ideas to their surroundings, so it might jog your memory to see the same things you were looking at while you studied.

Try good luck charms. Bring in something you associate with success or the support of your loved ones, and use it as a psychological boost.

Take all of the time you’ve been allotted. Reread, rework, and rethink your answers if you have extra time at the end, rather than giving up and handing the exam in the minute you’ve written your last sentence. Use every advantage you are given.

Remember that instructors do not want to see you trip up—they want to see you do well. With this in mind, try to relax and just do the best you can. The more you panic, the more mistakes you are liable to make. Put the test in perspective: will you die from a poor performance? Will you lose all of your friends? Will your entire future be destroyed? Remember: it’s just a test.

Works consulted

We consulted these works while writing this handout. This is not a comprehensive list of resources on the handout’s topic, and we encourage you to do your own research to find additional publications. Please do not use this list as a model for the format of your own reference list, as it may not match the citation style you are using. For guidance on formatting citations, please see the UNC Libraries citation tutorial. We revise these tips periodically and welcome feedback.

Axelrod, Rise B., and Charles R. Cooper. 2016. The St. Martin’s Guide to Writing, 11th ed. Boston: Bedford/St Martin’s.

Fowler, Ramsay H., and Jane E. Aaron. 2016. The Little, Brown Handbook, 13th ed. Boston: Pearson.

Gefvert, Constance J. 1988. The Confident Writer: A Norton Handbook, 2nd ed. New York: W.W. Norton and Company.

Kirszner, Laurie G. 1988. Writing: A College Rhetoric, 2nd ed. New York: Holt, Rinehart, and Winston.

Lunsford, Andrea A. 2015. The St. Martin’s Handbook, 8th ed. Boston: Bedford/St Martin’s.

Woodman, Leonora, and Thomas P. Adler. 1988. The Writer’s Choices, 2nd ed. Northbrook, Illinois: Scott Foresman.

You may reproduce it for non-commercial use if you use the entire handout and attribute the source: The Writing Center, University of North Carolina at Chapel Hill

Essay on “Examination System” Complete Essay for Class 10, Class 12 and Graduation and other classes.

Examination System

Examinations are a necessary evil. It is quite understandable that whenever we put in hard work to make a venture successful, we wait for some time to see, or to guess, the results that might have been achieved or might possibly be achieved. It is in this context that examinations become unavoidable, though the methods and yardsticks employed may differ, and even widely.

A student studies the whole year and then needs to be examined. It is in the interest of the student himself or herself to know where he or she stands and how far his or her efforts have borne fruit.

However, the examination system as we have it today has become a farce in essence. This is because of many reasons and factors. The most distressing among these is the menace of copying. Students who may be dullards but can manage to indulge in large-scale copying get high marks, whereas the really meritorious students who have worked hard get low marks.

Even otherwise, the prevalent examination system encourages cramming. Those who have a good memory or can indulge in cramming steal a march over others who cannot. Then, it is extremely painful to all lovers of transparency that sometimes even the question papers are sold in the market a day or so before an examination.

Some efforts have been made to reform the examination system, such as the introduction of a grading system, the setting of a number of different question papers, objective questions, etc., but much still remains to be desired and done.


An automated essay scoring systems: a systematic literature review

Dadi Ramesh

1 School of Computer Science and Artificial Intelligence, SR University, Warangal, TS India

2 Research Scholar, JNTU, Hyderabad, India

Suresh Kumar Sanampudi

3 Department of Information Technology, JNTUH College of Engineering, Nachupally, Kondagattu, Jagtial, TS India

Abstract

Assessment in the education system plays a significant role in judging student performance. The present evaluation system relies on human assessment. As the student-to-teacher ratio gradually increases, the manual evaluation process becomes more difficult; its drawbacks are that it is time-consuming, lacks reliability, and more. In this context, online examination systems have evolved as an alternative to pen-and-paper methods. Present computer-based evaluation systems work only for multiple-choice questions; there is no proper evaluation system for grading essays and short answers. Many researchers have worked on automated essay grading and short answer scoring over the last few decades, but assessing an essay by considering all parameters, such as the relevance of the content to the prompt, development of ideas, cohesion, and coherence, remains a big challenge. A few researchers focused on content-based evaluation, while many addressed style-based assessment. This paper provides a systematic literature review on automated essay scoring systems. We studied the Artificial Intelligence and Machine Learning techniques used for automatic essay scoring and analyzed the limitations of the current studies and research trends. We observed that current systems do not evaluate essays on the relevance and coherence of their content.

Supplementary Information

The online version contains supplementary material available at 10.1007/s10462-021-10068-2.

Introduction

Due to the COVID-19 outbreak, an online educational system has become inevitable. In the present scenario, almost all educational institutions, from schools to colleges, have adopted online education. Assessment plays a significant role in measuring a student's learning. Automated evaluation is mostly available for multiple-choice questions, but assessing short and essay answers remains a challenge. The education system is shifting to online mode, with computer-based exams and automatic evaluation. This is a crucial application in the education domain, and it uses natural language processing (NLP) and Machine Learning techniques. Evaluating essays is impossible with simple programming techniques like pattern matching and basic language processing: a single question draws many responses from students, each with a different explanation, so we need to evaluate every answer with respect to the question.

Automated essay scoring (AES) is a computer-based assessment system that automatically scores or grades student responses by considering appropriate features. AES research started in 1966 with the Project Essay Grader (PEG) by Ajay et al. (1973). PEG evaluates writing characteristics such as grammar, diction, and construction to grade the essay. A modified version of PEG by Shermis et al. (2001) was released, which focuses on grammar checking with a correlation between human evaluators and the system. Foltz et al. (1999) introduced the Intelligent Essay Assessor (IEA), which evaluates content using latent semantic analysis to produce an overall score. E-rater (Powers et al. 2002), IntelliMetric (Rudner et al. 2006), and the Bayesian Essay Test Scoring sYstem (BETSY) (Rudner and Liang 2002) use natural language processing (NLP) techniques that focus on style and content to obtain the score of an essay. The vast majority of essay scoring systems in the 1990s followed traditional approaches like pattern matching and statistics. Over the last decade, essay grading systems have adopted regression-based and natural language processing techniques. AES systems developed since 2014, like Dong et al. (2017), use deep learning techniques, inducing syntactic and semantic features and producing better results than earlier systems.

Several US states, including Ohio and Utah, use AES systems in school education, such as the Utah Compose tool and the Ohio standardized test (an updated version of PEG), evaluating millions of student responses every year. These systems work for both formative and summative assessments and give students feedback on their essays. Utah provided a basic essay evaluation rubric with six characteristics of essay writing: development of ideas, organization, style, word choice, sentence fluency, and conventions. Educational Testing Service (ETS) has been conducting significant research on AES for more than a decade; it designed an algorithm to evaluate essays in different domains and provides an opportunity for test-takers to improve their writing skills. Its current research also addresses content-based evaluation.

The evaluation of essays and short answers should consider the relevance of the content to the prompt, development of ideas, cohesion, coherence, and domain knowledge. Proper assessment of these parameters defines the accuracy of the evaluation system, but they do not all play an equal role in essay scoring and short answer scoring. In short answer evaluation, domain knowledge is essential; the meaning of "cell", for example, differs between physics and biology. In essay evaluation, the development of ideas with respect to the prompt matters more. The system should also assess the completeness of the responses and provide feedback.

Several studies have examined AES systems, from the earliest to the latest. Blood (2011) provided a literature review of PEG from 1984 to 2010. It covered only general aspects of AES systems, such as ethical considerations and system performance; it did not cover implementation, was not a comparative study, and did not discuss the actual challenges of AES systems.

Burrows et al. (2015) reviewed AES systems on six dimensions: dataset, NLP techniques, model building, grading models, evaluation, and effectiveness of the model. They did not cover feature extraction techniques or the challenges in feature extraction, covered Machine Learning models only superficially, and offered no comparative analysis of AES systems; the levels of relevance, cohesion, and coherence were not covered in their review.

Ke et al. (2019) provided a state of the art of AES systems but covered very few papers, did not list all the challenges, and offered no comparative study of AES models. Hussein et al. (2019) studied two categories of AES systems, four papers using handcrafted features and four papers using neural network approaches; they discussed a few challenges but did not cover feature extraction techniques or the performance of AES models in detail.

Klebanov et al. (2020) reviewed 50 years of AES systems and listed and categorized all the essential features that need to be extracted from essays, but they provided no comparative analysis of the work and did not discuss the challenges.

This paper aims to provide a systematic literature review (SLR) on automated essay grading systems. An SLR is an evidence-based systematic review that summarizes the existing research, critically evaluates and integrates the findings of all relevant studies, and addresses specific research questions in the research domain. Our methodology follows the guidelines given by Kitchenham et al. (2009) for conducting the review process, which provide a well-defined approach for identifying gaps in current research and suggesting further investigation.

We present our research method, research questions, and selection process in Sect. 2; the results of the research questions are discussed in Sect. 3; the synthesis of all the research questions is addressed in Sect. 4; and the conclusion and possible future work are discussed in Sect. 5.

Research method

We framed the research questions with the PICOC criteria:

Population (P): student essay and answer evaluation systems.

Intervention (I): evaluation techniques, datasets, feature extraction methods.

Comparison (C): comparison of various approaches and results.

Outcomes (O): estimating the accuracy of AES systems.

Context (C): not applicable.

Research questions

To collect and provide research evidence from the available studies in the domain of automated essay grading, we framed the following research questions (RQ):

RQ1: What are the datasets available for research on automated essay grading?

The answer can provide a list of the available datasets, their domains, and how to access them. It also gives the number of essays and corresponding prompts in each.

RQ2: What are the features extracted for the assessment of essays?

The answer can provide insight into the various features extracted so far and the libraries used to extract them.

RQ3: Which evaluation metrics are available for measuring the accuracy of algorithms?

The answer will list the evaluation metrics used to measure the accuracy of each Machine Learning approach and identify the most commonly used ones.

RQ4: What are the Machine Learning techniques used for automatic essay grading, and how are they implemented?

The answer can provide insight into the various Machine Learning techniques, such as regression models, classification models, and neural networks, used to implement essay grading systems, and the different assessment approaches they support.

RQ5: What are the challenges/limitations in the current research?

The answer provides the limitations of existing research approaches with respect to cohesion, coherence, completeness, and feedback.

Search process

We conducted an automated search on well-known computer science repositories, namely ACL, ACM, IEEE Xplore, Springer, and ScienceDirect, for the SLR. We considered papers published from 2010 to 2020, as much of the work during these years focused on advanced technologies like deep learning and natural language processing for automated essay grading systems. The availability of free datasets, like Kaggle (2012) and the Cambridge Learner Corpus-First Certificate in English exam (CLC-FCE) by Yannakoudakis et al. (2011), also encouraged research in this domain.

Search strings: We used search strings like “Automated essay grading” OR “Automated essay scoring” OR “short answer scoring systems” OR “essay scoring systems” OR “automatic essay evaluation” and searched on metadata.

Selection criteria

After collecting all relevant documents from the repositories, we prepared selection criteria for including and excluding documents. Clear inclusion and exclusion criteria make the review more accurate and specific.

Inclusion criterion 1: We worked with datasets comprising essays written in English and excluded essays written in other languages.

Inclusion criterion 2: We included papers that implement AI approaches and excluded traditional methods from the review.

Inclusion criterion 3: The study is on essay scoring systems, so we included only research carried out on text datasets rather than other datasets such as image or speech.

Exclusion criterion: We removed review papers, survey papers, and state-of-the-art papers.

Quality assessment

In addition to the inclusion and exclusion criteria, we assessed each paper with quality assessment questions to ensure the article's quality. We included documents that clearly explained the approach used, the result analysis, and the validation.

The quality checklist questions were framed based on the guidelines from Kitchenham et al. (2009). Each quality assessment question was graded as either 1 or 0, so the final score of a study ranges from 0 to 3. The cut-off score for inclusion is 2 points: papers scoring 2 or 3 points were included in the final evaluation. We framed the following quality assessment questions for the final study.

Quality Assessment 1: Internal validity.

Quality Assessment 2: External validity.

Quality Assessment 3: Bias.

Two reviewers reviewed each paper to select the final list of documents. We used the Quadratic Weighted Kappa score to measure the agreement between the two reviewers; the average kappa score is 0.6942, a substantial agreement. The results of the evaluation criteria are shown in Table 1. After quality assessment, the final list of papers for review is shown in Table 2. The complete selection process is shown in Fig. 1, and the number of selected papers per year is shown in Fig. 2.

[Table 1: Quality assessment analysis]

[Table 2: Final list of papers]

[Fig. 1: Selection process]

[Fig. 2: Year-wise publications]

RQ1: What are the datasets available for research on automated essay grading?

To work on a problem statement, especially in the Machine Learning and deep learning domains, we require a considerable amount of data to train the models. To answer this question, we listed all the datasets used for training and testing automated essay grading systems. The Cambridge Learner Corpus-First Certificate in English exam (CLC-FCE) by Yannakoudakis et al. (2011) contains 1244 essays and ten prompts. This corpus evaluates whether a student can write relevant English sentences without grammatical and spelling mistakes, and it helps to test models built for GRE- and TOEFL-type exams. It gives scores between 1 and 40.

Bailey and Meurers (2008) created a dataset (CREE reading comprehension) for language learners and automated short answer scoring systems; the corpus consists of 566 responses from intermediate students. Mohler and Mihalcea (2009) created a dataset for the computer science domain consisting of 630 responses to data structures assignment questions, with scores ranging from 0 to 5 given by two human raters.

Dzikovska et al. (2012) created the Student Response Analysis (SRA) corpus. It consists of two sub-groups: the BEETLE corpus, with 56 questions and approximately 3000 responses from students in the electrical and electronics domain, and the SCIENTSBANK (SemEval-2013) corpus (Dzikovska et al. 2013a; b), with 10,000 responses to 197 prompts in various science domains. The student responses are labeled "correct, partially correct incomplete, contradictory, irrelevant, non-domain."

The Kaggle (2012) Automated Student Assessment Prize (ASAP) competition (https://www.kaggle.com/c/asap-sas/) released three corpora of essays and short answers. It has nearly 17,450 essays and provides up to 3000 essays for each prompt. It has eight prompts that test US students from 7th to 10th grade, with scores in the [0–3] and [0–60] ranges. The limitations of these corpora are that (1) the score range differs across prompts and (2) essays are evaluated using statistical features such as named-entity extraction and lexical features of words. ASAP++ is one more dataset from Kaggle, with six prompts, each with more than 1000 responses, for a total of 10,696 from 8th-grade students. Another corpus contains ten prompts from the science and English domains and a total of 17,207 responses. Two human graders evaluated all these responses.

Correnti et al. (2013) created the Response-to-Text Assessment (RTA) dataset, used to check student writing skills in all directions: style, mechanics, and organization. Students in grades 4–8 provided the responses. Basu et al. (2013) created a power-grading dataset with 700 responses to ten different prompts from US immigration exams; it contains only short answers for assessment.

The TOEFL11 corpus of Blanchard et al. (2013) contains 1100 essays evenly distributed over eight prompts. It is used to test the English language skills of candidates taking the TOEFL exam, and it scores a candidate's language proficiency as low, medium, or high.

The International Corpus of Learner English (ICLE) by Granger et al. (2009) contains 3663 essays covering different dimensions. It has 12 prompts with 1003 essays that test the organizational skill of essay writing and 13 prompts, each with 830 essays, that examine thesis clarity and prompt adherence.

The Argument Annotated Essays (AAE) corpus by Stab and Gurevych (2014) contains 102 essays with 101 prompts taken from the essayforum site; it tests the persuasiveness of student essays. The SCIENTSBANK corpus used by Sakaguchi et al. (2015), available on GitHub, contains 9804 answers to 197 questions in 15 science domains. Table 3 lists all datasets related to AES systems.

[Table 3: Datasets used in automatic scoring systems]

RQ2: What are the features extracted for the assessment of essays?

Features play a major role in neural networks and other supervised Machine Learning approaches. Automatic essay grading systems score student essays based on different types of features, which play a prominent role in training the models. Based on their syntax and semantics, features are categorized into three groups: (1) statistical features (Contreras et al. 2018; Kumar et al. 2019; Mathias and Bhattacharyya 2018a; b), (2) style-based (syntactic) features (Cummins et al. 2016; Darwish and Mohamed 2020; Ke et al. 2019), and (3) content-based features (Dong et al. 2017). A good set of features combined with appropriate models yields better AES systems. The vast majority of researchers use regression models when the features are statistical; for neural network models, researchers use both style-based and content-based features. Table 4 lists the sets of features used for essay grading in existing AES systems.

[Table 4: Types of features]

We studied all the feature-extracting NLP libraries used in the papers, as shown in Fig. 3. NLTK is an NLP tool used to retrieve statistical features like POS tags, word count, sentence count, etc. With NLTK alone, however, we can miss the essay's semantic features. To find semantic features, Word2Vec (Mikolov et al. 2013) and GloVe (Pennington et al. 2014) are the most used libraries for retrieving semantic text from essays, and in some systems the model is trained directly on word embeddings to find the score. Figure 4 shows that non-content-based feature extraction is more common than content-based extraction (a minimal feature-extraction sketch in Python follows the figures below).

[Fig. 3: Usage of NLP tools]

[Fig. 4: Number of papers using content-based features]
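To make the distinction concrete, here is a minimal Python sketch of the kind of statistical (non-semantic) features NLTK can extract; the particular feature set is our own illustration, not the pipeline of any specific system reviewed here:

    import nltk
    from collections import Counter

    # First run only: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

    def statistical_features(essay):
        """Statistical/style features of the kind retrieved with NLTK."""
        sentences = nltk.sent_tokenize(essay)
        words = [w for w in nltk.word_tokenize(essay) if w.isalpha()]
        pos_counts = Counter(tag for _, tag in nltk.pos_tag(words))
        return {
            "word_count": len(words),
            "sentence_count": len(sentences),
            "avg_word_length": sum(map(len, words)) / max(len(words), 1),
            "noun_count": sum(n for tag, n in pos_counts.items() if tag.startswith("NN")),
            "verb_count": sum(n for tag, n in pos_counts.items() if tag.startswith("VB")),
        }

    print(statistical_features("The cell is the basic unit of life. Cells divide by mitosis."))

Note that none of these features looks at meaning, which is exactly why systems built on them must add Word2Vec- or GloVe-style embeddings to capture semantics.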

RQ3: Which evaluation metrics are available for measuring the accuracy of algorithms?

The majority of AES systems use three evaluation metrics: (1) quadratic weighted kappa (QWK), (2) mean absolute error (MAE), and (3) Pearson correlation coefficient (PCC) (Shehab et al. 2016). Quadratic weighted kappa measures the agreement between the human evaluation score and the system evaluation score, producing a value from 0 to 1. Mean absolute error is the average absolute difference between the human-rated score and the system-generated score. The mean square error (MSE) measures the average of the squared differences between the human-rated and system-generated scores and is always non-negative. The Pearson correlation coefficient measures the correlation between the two score variables and ranges from −1 to 1: 0 means the human and system scores are unrelated, values near 1 mean the two scores increase together, and values near −1 indicate a negative relationship between them.
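All of these metrics are available off the shelf; the following sketch computes them with scikit-learn and SciPy on made-up human and system scores:

    from sklearn.metrics import cohen_kappa_score, mean_absolute_error, mean_squared_error
    from scipy.stats import pearsonr

    human_scores = [2, 3, 4, 1, 3, 2]   # made-up rater scores
    system_scores = [2, 4, 4, 2, 3, 1]  # made-up model predictions

    qwk = cohen_kappa_score(human_scores, system_scores, weights="quadratic")
    mae = mean_absolute_error(human_scores, system_scores)
    mse = mean_squared_error(human_scores, system_scores)
    pcc, _ = pearsonr(human_scores, system_scores)
    print(f"QWK={qwk:.3f} MAE={mae:.3f} MSE={mse:.3f} PCC={pcc:.3f}")

In practice, QWK is the headline number reported by most of the systems surveyed below.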

RQ4: What are the Machine Learning techniques used for automatic essay grading, and how are they implemented?

After scrutinizing all the documents, we categorized the techniques used in automated essay grading systems into four groups: (1) regression techniques, (2) classification models, (3) neural networks, and (4) ontology-based approaches.

All the existing AES systems developed in the last ten years employ supervised learning techniques. Researchers using supervised methods view the AES problem as either a regression or a classification task. The goal of the regression task is to predict the score of an essay; the classification task is to classify essays as having low, medium, or high relevance to the question's topic. In the last three years, most AES systems have been built on neural networks.

Regression-based models

Mohler and Mihalcea (2009) proposed text-to-text semantic similarity to assign a score to student essays. They used two families of text similarity measures, knowledge-based and corpus-based, and tested eight knowledge-based measures. Shortest-path similarity is determined by the length of the shortest path between two concepts; Leacock & Chodorow compute similarity from the shortest path length between two concepts using node counting; Lesk similarity finds the overlap between the corresponding definitions; and the Wu & Palmer algorithm computes similarity from the depth of the two given concepts in the WordNet taxonomy. Resnik, Lin, Jiang & Conrath, and Hirst & St-Onge compute similarity from parameters such as concept information, probability, normalization factors, and lexical chains. Among the corpus-based measures, LSA BNC, LSA Wikipedia, and ESA Wikipedia, latent semantic analysis trained on Wikipedia has excellent domain knowledge and achieved the highest correlation scores. These similarity algorithms, however, do not use NLP concepts; they are pre-2010 baseline models from which automated essay grading research continued with updated neural network algorithms and content-based features.

Adamson et al. (2014) proposed an automatic essay grading system with a statistical approach. They retrieved features like POS tags, character count, word count, sentence count, misspelled words, and n-gram representations of words to prepare an essay vector, formed a matrix from all these vectors, and applied LSA to score each essay. It is a statistical approach that does not consider the semantics of the essay. The accuracy compared against human-rater scores is 0.532.
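As a rough sketch of this kind of statistical pipeline (not Adamson et al.'s exact feature set), TF-IDF vectors reduced with truncated SVD, the standard LSA recipe, can feed a linear regressor; the toy essays and scores below are invented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    essays = [
        "Cells divide by mitosis and meiosis.",
        "Mitosis produces two identical daughter cells.",
        "The essay has many mispelled words and few ideas.",
        "Meiosis halves the chromosome number for reproduction.",
    ]
    scores = [3, 4, 1, 4]  # invented human scores

    model = make_pipeline(
        TfidfVectorizer(),             # essay -> sparse term vector
        TruncatedSVD(n_components=2),  # LSA: project to a low-dimensional topic space
        LinearRegression(),            # map the topic space to a score
    )
    model.fit(essays, scores)
    print(model.predict(["Mitosis creates identical cells."]))

The SVD step is what distinguishes LSA from plain bag-of-words regression: essays that use different words for the same topic land near each other in the reduced space.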

Cummins et al. (2016) proposed a Timed Aggregate Perceptron vector model that ranks all the essays and then converts the ranking into score predictions. The model was trained with features like word unigrams and bigrams, POS tags, essay length, grammatical relations, maximum word length, and sentence length. It is a multi-task learner that both ranks the essays and predicts the score for each essay. Performance evaluated through QWK is 0.69, a substantial agreement between the human rater and the system.

Sultan et al. (2016) proposed a Ridge regression model for short answer scoring with question demoting. Question demoting is a concept included in the answer's final assessment to discount words repeated from the question. The extracted features are text similarity (the similarity between the student response and the reference answer), question demoting (the number of question-word repeats in a student response), term weights assigned with inverse document frequency, and the sentence length ratio (based on the number of words in the student response). With these features, the Ridge regression model achieved an accuracy of 0.887.
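The question-demoting idea can be illustrated with a small overlap similarity in which question words are removed before comparing the student response to the reference answer; this is a simplified sketch of the concept, not Sultan et al.'s exact feature computation:

    def demoted_overlap(question, reference, response):
        """Word overlap between response and reference after removing question words."""
        q_words = set(question.lower().split())
        ref = set(reference.lower().split()) - q_words
        resp = set(response.lower().split()) - q_words
        return len(ref & resp) / len(ref) if ref else 0.0

    q = "what does mitosis produce"
    ref = "mitosis produces two identical daughter cells"
    print(demoted_overlap(q, ref, "two identical daughter cells are produced"))

Demoting prevents a student from earning similarity credit merely by parroting the question back.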

Contreras et al. (2018) proposed an ontology-based text-mining model that scores essays in phases. In phase I, they generated ontologies with OntoGen and used an SVM to find the concepts and similarities in the essay. In phase II, they retrieved features from the ontologies, such as essay length, word counts, correctness, vocabulary, types of words used, and domain information. After retrieving these statistics, they used a linear regression model to score the essay. The average accuracy is 0.5.

Darwish and Mohamed (2020) proposed the fusion of a fuzzy ontology with LSA. They retrieve two types of features: syntactic and semantic. For syntactic features, they perform lexical analysis on the tokens and construct a parse tree; if the parse tree is broken, the essay is inconsistent, and a separate grade is assigned for the syntactic features. The semantic features include similarity analysis (to find duplicate sentences) and spatial data analysis (to find the Euclidean distance between the center and a part). Later, they combine the syntactic and semantic feature scores into a final score. The accuracy achieved with the multiple linear regression model, mostly on statistical features, is 0.77.

Süzen Neslihan et al. (2020) proposed a text mining approach for short answer grading. First, they compare the model answer with the student response by calculating the distance between the two sentences; this comparison establishes the completeness of the answer and supports feedback. In this approach, the model vocabulary plays a vital role in grading: based on it, a grade is assigned to the student's response and feedback is provided. The correlation between the student answer and the model answer is 0.81.

Classification-based models

Persing and Ng (2013) used a support vector machine to score essays. The extracted features are POS tags, n-grams, and semantic text features used to train the model, and keywords identified in the essay determine the final score.

Sakaguchi et al. (2015) proposed two methods: response-based and reference-based. In response-based scoring, the extracted features are response length, an n-gram model, and syntactic elements used to train a support vector regression model. In reference-based scoring, features such as sentence similarity computed with word2vec are used to find the cosine similarity of the sentences, which gives the score of the response. The scores were first computed individually and then combined into a final score, and combining the two gave a remarkable increase in performance.

Mathias and Bhattacharyya (2018a; b) proposed an automated essay grading dataset with essay attribute scores. Feature selection depends on the essay type; the common attributes are content, organization, word choice, sentence fluency, and conventions. In this system, each attribute is scored individually, so the strength of each attribute is identified. The model is a random forest classifier that assigns scores to the individual attributes. The accuracy in QWK is 0.74 for prompt 1 of the ASAP dataset (https://www.kaggle.com/c/asap-sas/).

Ke et al. (2019) used a support vector machine to find the response score, with features like agreeability, specificity, clarity, relevance to prompt, conciseness, eloquence, confidence, direction of development, justification of opinion, and justification of importance. The individual parameter scores were obtained first and later combined into a final response score. The features are also used in a neural network to determine whether a sentence is relevant to the topic.

Salim et al. (2019) proposed an XGBoost Machine Learning classifier to assess essays. The algorithm was trained on features like word count, POS tags, parse tree depth, and coherence in the articles with sentence-similarity percentage; cohesion and coherence are considered for training. They implemented K-fold cross-validation, and the average accuracy across the validations is 68.12.
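A minimal sketch of such a classifier with K-fold cross-validation is shown below; the random features and labels are stand-ins for the real word-count, POS, parse-depth, and similarity features:

    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((200, 4))          # stand-in features: word count, POS ratio, ...
    y = rng.integers(0, 3, size=200)  # stand-in grade labels 0..2

    clf = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")  # 5-fold CV
    print(scores.mean())

Cross-validation matters here because essay datasets are small per prompt; a single train/test split can badly over- or under-state accuracy.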

Neural network models

Shehab et al. (2016) proposed a neural network method that uses learning vector quantization trained on human-scored essays. After training, the network can score ungraded essays. First, the essay is spell-checked and preprocessed with document tokenization, stop-word removal, and stemming before being submitted to the neural network. Finally, the model provides feedback on whether the essay is relevant to the topic. The correlation coefficient between the human-rater and system scores is 0.7665.

Kopparapu and De (2016) proposed automatic ranking of essays using structural and semantic features. This approach constructs a super essay from all the responses, and each student essay is ranked against the super essay. The structural and semantic features derived help to obtain the scores: 15 structural features per paragraph, such as the average number of sentences, the average length of sentences, and the counts of words, nouns, verbs, adjectives, etc., yield a syntactic score, while a similarity score serves as the semantic feature for calculating the overall score.

Dong and Zhang (2016) proposed a hierarchical CNN model. The model's first layer uses word embeddings to represent the words; the second layer is a word-level convolution layer with max-pooling to find word vectors; the next layer is a sentence-level convolution layer with max-pooling to capture each sentence's content and synonyms; and a fully connected dense layer produces the output score for an essay. The hierarchical CNN model achieved an average QWK of 0.754.

Taghipour and Ng (2016) proposed the first neural approach to essay scoring, in which convolutional and recurrent neural network concepts help in scoring an essay. The network uses a lookup table with a one-hot representation of the word vectors of an essay. The final network with LSTM achieved an average QWK of 0.708.
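The convolution-plus-recurrence idea can be sketched in a few lines of Keras. This is a loose reconstruction under our own assumptions (vocabulary size, embedding dimensions, final-state pooling instead of the paper's mean-over-time layer) with random toy data, not the authors' exact network:

    import numpy as np
    from tensorflow.keras import layers, models

    VOCAB, MAXLEN, EMB = 4000, 300, 50  # assumed sizes

    model = models.Sequential([
        layers.Embedding(VOCAB, EMB),                             # word ids -> dense vectors
        layers.Conv1D(64, 3, padding="same", activation="relu"),  # local n-gram features
        layers.LSTM(64),                                          # sequence encoding of the essay
        layers.Dense(1, activation="sigmoid"),                    # normalized score in [0, 1]
    ])
    model.compile(optimizer="rmsprop", loss="mse")

    X = np.random.randint(1, VOCAB, size=(8, MAXLEN))  # toy padded word-id sequences
    y = np.random.rand(8)                              # toy min-max normalized scores
    model.fit(X, y, epochs=1, verbose=0)

Predicting a normalized score with a sigmoid output and rescaling to the prompt's score range afterward is the usual trick for handling prompts with different score ranges.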

Dong et al. (2017) proposed an attention-based scoring system with CNN + LSTM to score an essay. For the CNN, the input parameters were character and word embeddings (obtained with NLTK), with attention pooling layers; its output is a sentence vector that provides sentence weights. After the CNN, an LSTM layer with an attention pooling layer produces the final score of the responses. The average QWK score is 0.764.

Riordan et al. (2017) proposed a neural network with CNN and LSTM layers. Word embeddings are given as input to the network; an LSTM layer retrieves the window features and delivers them to the aggregation layer. The aggregation layer is a shallow layer that takes the correct window of words and feeds successive layers to predict the answer's score. The accuracy of the neural network is a QWK of 0.90.

Zhao et al. (2017) proposed a memory-augmented neural network with four layers: an input representation layer, a memory addressing layer, a memory reading layer, and an output layer. The input layer represents each essay in vector form based on essay length. After converting to word vectors, the memory addressing layer takes a sample of the essay and weighs all the terms; the memory reading layer takes the input from the memory addressing segment and finds the content to finalize the score; and the output layer provides the final score of the essay. The accuracy of the essay scores is 0.78, which is far better than the LSTM neural network.

Mathias and Bhattacharyya (2018a; b) proposed deep learning networks using LSTM with a CNN layer and GloVe pre-trained word embeddings. They retrieved features like the sentence count of essays, word count per sentence, number of OOVs in a sentence, language model score, and the text's perplexity. The network predicts a goodness score for each essay: the higher the goodness score, the higher the rank, and vice versa.

Nguyen and Dery (2016) proposed neural networks for automated essay grading. In this method, a single-layer bi-directional LSTM accepts word vectors as input; GloVe vectors used in this method resulted in an accuracy of 90%.

Ruseti et al. (2018) proposed a recurrent neural network that is capable of memorizing the text and generating a summary of an essay. A Bi-GRU network with a max-pooling layer is built on the word embeddings of each document. It scores the essay by comparing it with a summary of the essay from another Bi-GRU network. The result obtained an accuracy of 0.55.

Wang et al. (2018a; b) proposed an automatic scoring system with a bi-LSTM recurrent neural network model, retrieving features with the word2vec technique. This method generates word embeddings from the essay words using the skip-gram model, and the word embeddings are then used to train the neural network to find the final score. The softmax layer in the LSTM obtains the importance of each word. This method achieved a QWK score of 0.83.
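Training skip-gram embeddings of this kind is straightforward with Gensim, where sg=1 selects the skip-gram architecture; the toy corpus and hyperparameters below are assumptions for illustration:

    from gensim.models import Word2Vec

    corpus = [                       # toy tokenized "essays"
        ["the", "cell", "is", "the", "unit", "of", "life"],
        ["cells", "divide", "by", "mitosis"],
    ]
    w2v = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

    vec = w2v.wv["cell"]                                            # one word's embedding
    essay_vec = sum(w2v.wv[w] for w in corpus[0]) / len(corpus[0])  # averaged essay vector

Averaging word vectors into an essay vector is the simplest way to feed an essay to a downstream scorer, at the cost of losing word order.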

Dasgupta et al. (2018) proposed a technique for essay scoring that augments textual qualitative features. It extracts three types of features associated with a text document: linguistic, cognitive, and psychological. The linguistic features are part of speech (POS), universal dependency relations, structural well-formedness, lexical diversity, sentence cohesion, causality, and informativeness of the text; the psychological features are derived from the Linguistic Inquiry and Word Count (LIWC) tool. They implemented a convolutional recurrent neural network that takes as input word embeddings and sentence vectors retrieved from GloVe word vectors; the second layer is a convolution layer to find local features, and the next layer is a recurrent neural network (LSTM) to capture the sequential structure of the text. This method achieved an average QWK of 0.764.

Liang et al. (2018) proposed a symmetrical neural network AES model with Bi-LSTM. They extract features from sample essays and student essays and prepare an embedding layer as input. The embedding layer's output is transferred to a convolution layer, from which the LSTM is trained. Here the LSTM model has a self-feature extraction layer, which finds the essay's coherence. The average QWK score of SBLSTMA is 0.801.

Liu et al. (2019) proposed two-stage learning. In the first stage, a score is assigned based on semantic data from the essay; the second-stage scoring is based on handcrafted features like grammar correction, essay length, number of sentences, etc. The average score of the two stages is 0.709.

Pedro Uria Rodriguez et al. (2019) proposed sequence-to-sequence learning models for automatic essay scoring. They used BERT (Bidirectional Encoder Representations from Transformers), which extracts the semantics of a sentence from both directions, and the XLNet sequence-to-sequence learning model to extract features like the next sentence in an essay. With these pre-trained models, they captured coherence from the essay to give the final score. The average QWK score of the model is 75.5.

Xia et al. (2019) proposed a two-layer bi-directional LSTM neural network for scoring essays. The features were extracted with word2vec to train the LSTM, and the model's accuracy is an average QWK of 0.870.

Kumar et al. (2019) proposed AutoSAS for short answer scoring. It uses pre-trained Word2Vec and Doc2Vec models, trained on the Google News corpus and a Wikipedia dump respectively, to retrieve the features. First, they POS-tagged every word and found weighted words from the response. They also measured prompt overlap, to observe how relevant the answer is to the topic, and defined lexical overlaps like noun overlap, argument overlap, and content overlap. The method also uses statistical features like word frequency, difficulty, diversity, the number of unique words in each response, type-token ratio, sentence statistics, word length, and logical-operator-based features. A random forest model is trained on a dataset of sample responses with their associated scores; the model retrieves the features from both graded and ungraded short answers together with the questions. The accuracy of AutoSAS in QWK is 0.78, and it works on topics like science, arts, biology, and English.

Jiaqi Lun et al. (2020) proposed automatic short answer scoring with BERT, comparing student responses with a reference answer and assigning scores. Data augmentation is performed with a neural network: starting from one correct answer in the dataset, the remaining responses are classified as correct or incorrect.

Zhu and Sun (2020) proposed a multimodal Machine Learning approach for automated essay scoring. First, they compute a grammar score with the spaCy library, along with numerical counts such as the number of words and sentences from the same library. With this input, they trained single and Bi-LSTM neural networks to find the final score. For the LSTM model, they prepared sentence vectors with GloVe and word embeddings with NLTK. The Bi-LSTM checks each sentence in both directions to extract semantics from the essay. The average QWK score across the models is 0.70.

Ontology-based approach

Mohler et al. (2011) proposed a graph-based method to find semantic similarity in short answer scoring. For ranking answers, they used a support vector regression model; the bag of words is the main feature extracted in the system.

Ramachandran et al. (2015) also proposed a graph-based approach to find lexical semantics. Identified phrase patterns and text patterns are the features used to train a random forest regression model to score the essays. The accuracy of the model in QWK is 0.78.

Zupanc et al. (2017) proposed sentence similarity networks to find the essay's score. Ajetunmobi and Daramola (2017) recommended an ontology-based information extraction approach with a domain-based ontology to find the score.

Speech response scoring

Automatic scoring works in two ways: text-based scoring and speech-based scoring. This paper has discussed text-based scoring and its challenges; here we cover speech scoring and the points it shares with text-based scoring. Evanini and Wang (2013) worked on speech scoring of non-native school students, extracted features with a speech rater, and trained a linear regression model, concluding that accuracy varies with voice pitch. Loukina et al. (2015) worked on feature selection from speech data and trained an SVM. Malinin et al. (2016) used neural network models to train the data. Loukina et al. (2017) proposed speech- and text-based automatic scoring: they extracted text-based and speech-based features and trained a deep neural network for speech-based scoring, extracting 33 types of features based on acoustic signals. Malinin et al. (2017) and Wu Xixin et al. (2020) worked on deep neural networks for spoken language assessment, incorporating and testing different types of models. Ramanarayanan et al. (2017) worked on feature extraction methods, extracted punctuation, fluency, and stress features, and trained different Machine Learning models for scoring. Knill et al. (2018) worked on automatic speech recognizers and how their errors impact speech assessment.

The state of the art

This section provides an overview of the existing AES systems with a comparative study with respect to the models, features applied, datasets, and evaluation metrics used for building the automated essay grading systems. We divided all 62 papers into two sets; the first set of reviewed papers appears in Table 5 with a comparative study of the AES systems.

[Table 5: State of the art]

Comparison of all approaches

In our study, we divided the major AES approaches into three categories: regression models, classification models, and neural network models. The regression models fail to capture cohesion and coherence in the essay because they are trained on bag-of-words (BoW) features. In processing data from input to output, the regression models are less complicated than neural networks, but they are unable to find many intricate patterns in the essay or to capture sentence connectivity. Even in the neural network approach, if we train the model with BoW features, the model never considers the essay's cohesion and coherence.

To train a Machine Learning algorithm on essays, all the essays are first converted to vector form. We can form vectors with BoW, Word2vec, or TF-IDF. The BoW and Word2vec vector representations of essays are shown in Table 6. The BoW representation with TF-IDF does not incorporate the essay's semantics; it is just statistical learning from a given vector. A Word2vec vector encodes the essay's semantics, but only in a unidirectional way.

[Table 6: Vector representation of essays]

In BoW, the vector contains the frequency of each word's occurrences in the essay: an entry is 1 or more depending on how often the word appears, and 0 if it is absent. So a BoW vector does not maintain relationships with adjacent words; it covers single words only. In word2vec, the vector represents the relationship of words with other words and with the prompt's sentences in a multi-dimensional way. But word2vec prepares vectors unidirectionally, not bidirectionally: it fails to find the right semantic vector when a word has two meanings and the meaning depends on adjacent words. Table 7 compares Machine Learning models and feature extraction methods.

[Table 7: Comparison of models]
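The BoW frequency vector described above is easy to inspect with scikit-learn's CountVectorizer:

    from sklearn.feature_extraction.text import CountVectorizer

    essays = ["the cell divides", "the cell is the unit of life"]
    bow = CountVectorizer()
    X = bow.fit_transform(essays)

    print(bow.get_feature_names_out())  # vocabulary, one column per word
    print(X.toarray())                  # rows = essays, entries = word frequencies

Note that "the" in the second essay simply gets a count of 3; nothing in the vector records which words it sat next to, which is exactly the limitation discussed above.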

In AES, cohesion and coherence check the essay's content against the essay prompt; these can be extracted from the essay in vector form. Two more parameters for assessing an essay are completeness and feedback. Completeness checks whether the student's response is sufficient, even when what the student wrote is correct. Table 8 compares all four parameters for essay grading, and Table 9 compares all approaches on various features like grammar, spelling, organization of the essay, and relevance.

[Table 8: Comparison of all models with respect to cohesion, coherence, completeness, feedback]

[Table 9: Comparison of all approaches on various features]

RQ5: What are the challenges/limitations in the current research?

From our study and the results discussed in the previous sections, many researchers have worked on automated essay scoring systems with numerous techniques: statistical methods, classification methods, and neural network approaches for evaluating essays automatically. The main goal of an automated essay grading system is to reduce human effort and improve consistency.

The vast majority of essay scoring systems deal with the efficiency of the algorithm, but there are many other challenges in automated essay grading. One should assess an essay by parameters like the relevance of the content to the prompt, development of ideas, cohesion, coherence, and domain knowledge.

No model addresses the relevance of content, that is, whether the student's response or explanation is relevant to the given prompt and, if relevant, how appropriate it is; nor is there much discussion of the cohesion and coherence of the essays. Most research concentrated on extracting features with NLP libraries, training models, and testing the results, with no treatment of consistency and completeness in the essay evaluation system. Palma and Atkinson (2018), however, explained coherence-based essay evaluation, and Zupanc and Bosnic (2014) also used coherence to evaluate essays, finding consistency with latent semantic analysis (LSA); note that the dictionary meaning of coherence is "the quality of being logical and consistent."

Another limitation is that there is no domain-knowledge-based evaluation of essays using Machine Learning models. For example, the meaning of a cell differs between biology and physics. Many Machine Learning models extract features with Word2Vec and GloVe; these NLP libraries cannot disambiguate words with two or more meanings when converting them into vectors.

Other challenges also influence automated essay scoring systems.

All these approaches worked to improve the QWK scores of their models. But QWK does not assess a model in terms of its feature extraction or its handling of constructed irrelevant answers; it does not evaluate whether the model is assessing the answer correctly. There are many challenges concerning students' responses to an automatic scoring system: for instance, no model has examined how to evaluate constructed irrelevant and adversarial answers. Black-box approaches like deep learning models, in particular, give students more options to bluff the automated scoring systems.

Machine Learning models that work on statistical features are very vulnerable. Powers et al. (2001) and Bejar et al. (2014) showed that e-rater failed against construct-irrelevant response strategies (CIRS). The studies of Bejar et al. (2013) and Higgins and Heilman (2014) observed that when a student response contains irrelevant content, or shell language matching the prompt, it influences the final score of the essay in an automated scoring system.

In deep learning approaches, most models learn the essay's features automatically; some work on word-based embeddings and others on character-based embeddings. Riordan et al. (2019) found that character-based embedding systems do not prioritize spelling correction, even though spelling influences the final score of the essay. Horbach and Zesch (2019) showed that various factors influence AES systems, such as dataset size, prompt type, answer length, the training set, and the human scorers used for content-based scoring.

Ding et al. (2020) showed that automated scoring systems are vulnerable when a student response contains many words from the prompt, such as prompt vocabulary repeated in the response. Parekh et al. (2020) and Kumar et al. (2020) tested various neural network AES models by iteratively adding important words, deleting unimportant words, shuffling the words, and repeating sentences in an essay, and found no change in the final scores. These neural network models fail to recognize the lack of common sense in adversarial essays, giving students more ways to bluff the automated systems.
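The kind of test run by Parekh et al. (2020) and Kumar et al. (2020) can be sketched as a harness that perturbs an essay and checks whether the score moves. Here `score_fn` is a hypothetical callable wrapping whatever trained AES model is under test:

```python
# Sketch of adversarial robustness testing in the spirit of
# Parekh et al. (2020) and Kumar et al. (2020): perturb an essay and
# see whether the model's score changes. `score_fn` is a hypothetical
# callable wrapping the AES model under test.
import random

def perturbations(essay: str):
    words = essay.split()
    sentences = essay.split(". ")
    shuffled = words[:]
    random.shuffle(shuffled)
    yield "shuffle_words", " ".join(shuffled)
    yield "repeat_sentences", ". ".join(sentences + sentences)
    yield "drop_every_other_word", " ".join(words[::2])

def robustness_report(score_fn, essay: str, tol: float = 0.25):
    base = score_fn(essay)
    for name, adv in perturbations(essay):
        delta = abs(score_fn(adv) - base)
        # A robust scorer should penalize nonsense; a flat delta is a red flag.
        print(f"{name:24s} delta = {delta:.2f} "
              f"{'SUSPICIOUS (no change)' if delta < tol else 'ok'}")
```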

Beyond NLP and ML techniques, works from Wresch (1993) to Madnani and Cahill (2018) discuss the complexity of AES systems and the standards that need to be followed, such as assessment rubrics to test subject knowledge, handling of irrelevant responses, and ethical aspects of the algorithm, like measuring the fairness of scoring a student response.

Fairness is an essential factor for automated systems. In AES, for example, fairness can be measured as the agreement between the human score and the machine score. Beyond this, Loukina et al. (2019) propose fairness standards that include overall score accuracy, overall score differences, and conditional score differences between human and system scores. In addition, scoring responses separately with respect to construct-relevant and construct-irrelevant content would improve fairness.
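Two of these checks, overall score difference and conditional (per-group) score difference, reduce to simple statistics. A simplified sketch in the spirit of Loukina et al. (2019), where the human scores, machine scores, and group labels are hypothetical placeholders:

```python
# Simplified fairness checks in the spirit of Loukina et al. (2019):
# (1) overall score difference between machine and human scores, and
# (2) conditional score differences per demographic group.
# The data arrays are hypothetical placeholders.
import numpy as np

def overall_score_difference(human, machine):
    # Mean signed difference; near 0 means no systematic over/under-scoring.
    return float(np.mean(np.asarray(machine) - np.asarray(human)))

def conditional_score_differences(human, machine, groups):
    diffs = np.asarray(machine) - np.asarray(human)
    return {g: float(diffs[np.asarray(groups) == g].mean())
            for g in set(groups)}

human   = [3, 4, 2, 5, 3, 4]
machine = [3, 4, 3, 4, 4, 5]
groups  = ["A", "A", "B", "B", "B", "A"]
print(overall_score_difference(human, machine))
print(conditional_score_differences(human, machine, groups))
```

If one group's mean difference departs from the others, the system is over- or under-scoring that group relative to human raters.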

Madnani et al. (2017a, b) discussed the fairness of AES systems for constructed responses and presented an open-source tool (RSMTool) for detecting biases in the models; with it, one can adapt the fairness standards to one's own fairness analysis.

Berzak et al.'s (2018) approach shows that behavioral factors are a significant challenge for automated scoring systems: they can help establish language proficiency, identify word characteristics (essential words in the text), predict critical patterns in the text, find related sentences in an essay, and thereby give a more accurate score.

Rupp (2018) discussed methodologies for designing, evaluating, and deploying AES systems, and identified notable characteristics an AES system must satisfy for deployment: model performance, evaluation metrics for the model, threshold values, dynamically updated models, and the framework.

First, model performance should be checked on different datasets and parameters before operational deployment. The evaluation metrics for AES models are typically QWK, a correlation coefficient, or both. Kelley and Preacher (2012) discussed three categories of threshold values: marginal, borderline, and acceptable; the values can vary with data size, model performance, and model type (single-scoring or multiple-scoring models). Once a model is deployed and evaluates millions of responses, it must be dynamically updated based on the prompts and data it sees. Finally, there is the framework design of the AES model; here, a framework contains the prompts to which test-takers write responses. One can design two kinds of frameworks: a single scoring model for a single methodology, or multiple scoring models for multiple concepts. When deploying multiple scoring models, each prompt can be trained separately, or generalized models can be provided for all prompts, though accuracy may then vary, which is challenging.
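In deployment, the threshold categories translate naturally into routing logic: responses where scoring models disagree beyond a threshold are escalated to a human rater. The following is a hypothetical sketch (the threshold values and routing labels are invented for illustration, not prescribed by Rupp (2018)):

```python
# Hypothetical deployment-time routing: accept the machine score only
# when two scoring models agree within a threshold; otherwise escalate
# to a human rater. The marginal/borderline/acceptable threshold values
# are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Routing:
    final_score: Optional[float]
    route: str  # "auto", "auto+audit", or "human"

def route_response(score_a: float, score_b: float,
                   acceptable: float = 0.5, borderline: float = 1.0) -> Routing:
    gap = abs(score_a - score_b)
    if gap <= acceptable:                       # models agree: auto-score
        return Routing((score_a + score_b) / 2, "auto")
    if gap <= borderline:                       # borderline: score but audit
        return Routing((score_a + score_b) / 2, "auto+audit")
    return Routing(None, "human")               # disagreement: human rater

print(route_response(4.0, 4.2))   # auto
print(route_response(4.0, 5.4))   # human
```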

Our systematic literature review of automated essay grading systems first collected 542 papers with selected keywords from various databases. After applying the inclusion and exclusion criteria, we were left with 139 articles; to these we applied quality assessment criteria with two reviewers, and finally selected 62 papers for the final review.

Our observations on automated essay grading systems from 2010 to 2020 are as follows:

  • The implementation techniques of automated essay grading systems fall into four buckets: (1) regression models, (2) classification models, (3) neural networks, and (4) ontology-based methodologies. Researchers using neural networks obtain more accurate results than the other techniques; the state of the art for all methods is given in Table 3.
  • The majority of regression and classification models for essay scoring used statistical features to find the final score. That is, the models were trained on parameters such as word count and sentence count; although these parameters are extracted from the essay, the algorithm never trains directly on the essay itself. It trains on numbers derived from the essay, and if those numbers match, the composition gets a good score; otherwise, the rating is lower. In these models, the evaluation rests entirely on numbers, irrespective of the essay, so training on statistical parameters is very likely to miss the coherence and relevance of the essay.
  • In the neural network approach, some models trained on Bag of Words (BoW) features. BoW misses the word-to-word relationships and the semantic meaning of the sentence. E.g., Sentence 1: "John killed Bob." Sentence 2: "Bob killed John." Both sentences produce the same BoW, {"John", "killed", "Bob"} (see the sketch after this list).
  • In the Word2Vec library, a word vector prepared from an essay in a unidirectional way does depend on other words and captures semantic relationships with them. But when a word has two or more meanings, as "bank" does in "bank loan" and "river bank," the adjacent words decide the intended sense, and Word2Vec cannot recover the real meaning of the word from the sentence.
  • The features extracted from essays in scoring systems fall into three types: statistical, style-based, and content-based features, explained in RQ2 and Table 3. Statistical features play a significant role in some systems and a negligible role in others. In the systems of Shehab et al. (2016), Cummins et al. (2016), Dong et al. (2017), Dong and Zhang (2016), and Mathias and Bhattacharyya (2018a, b), the assessment rests entirely on statistical and style-based features; no content-based features are retrieved. In other systems that do extract content from the essays, statistical features serve only for preprocessing and are not included in the final grading.
  • In AES systems, coherence is the main feature to consider when evaluating essays. Literally, coherence means "to stick together": the logical connection of sentences (local-level coherence) and paragraphs (global-level coherence) in a text. Without coherence, the sentences in a paragraph are independent and meaningless. Coherence is what lets an essay explain everything in a flow, and it is a powerful feature for finding the semantics of an essay: with it, one can assess whether all sentences connect in a flow and all paragraphs relate back to the prompt. Retrieving the coherence level of an essay remains a critical task for researchers in AES systems.
  • In automatic essay grading systems, assessing an essay with respect to its content is critical, since that yields the student's true score. Most research used statistical features such as sentence length, word count, and the number of sentences, but per our collected results only 32% of the systems used content-based features for essay scoring. Examples of content-based assessment are Taghipour and Ng (2016), Persing and Ng (2013), Wang et al. (2018a, b), Zhao et al. (2017), and Kopparapu and De (2016); Kumar et al. (2019), Mathias and Bhattacharyya (2018a, b), and Mohler and Mihalcea (2009) used content- and statistical-based features together. The results are shown in Fig. 3. Content-based features are mainly extracted with the word2vec NLP library: word2vec captures the context of a word in a document, semantic and syntactic similarity, and relations with other terms, but only in one direction, either left or right. If a word has multiple meanings, there is a chance of missing its context in the essay. After analyzing all the papers, we found that content-based assessment amounts to a qualitative assessment of essays.
  • On the other hand, Horbach and Zesch (2019), Riordan et al. (2019), Ding et al. (2020), and Kumar et al. (2020) proved that neural network models are vulnerable when a student response contains construct-irrelevant, adversarial answers: a student can easily bluff an automated scoring system by, for example, repeating sentences or repeating prompt words in an essay. And as Loukina et al. (2019) and Madnani et al. (2017b) argue, the fairness of the algorithm is an essential factor to consider in AES systems.
  • For speech assessment, the datasets contain audio clips of up to one minute in duration. The feature extraction techniques are entirely different from text assessment, and accuracy varies with speaking fluency, pitch, and male versus female or child versus adult voices, but the training algorithms are the same as for text assessment.
  • Once AES systems can evaluate essays and short answers accurately in all respects, there will be massive demand for automated systems in education and related fields. AES systems are already deployed in the GRE and TOEFL exams; beyond these, they could be deployed in massive open online courses such as Coursera ( https://coursera.org/learn//machine-learning//exam ) and NPTEL ( https://swayam.gov.in/explorer ), which still assess student performance with multiple-choice questions. From another perspective, AES systems could be deployed in information retrieval systems like Quora and Stack Overflow to check whether a retrieved response is appropriate to the question and to rank the retrieved answers.
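The sketch below, referenced in the BoW bullet above, shows the word-order blind spot directly: both orderings yield identical Bag-of-Words vectors.

```python
# The word-order blind spot from the BoW bullet above: "John killed Bob"
# and "Bob killed John" produce identical Bag-of-Words vectors.
from sklearn.feature_extraction.text import CountVectorizer

pair = ["John killed Bob", "Bob killed John"]
bow = CountVectorizer()
X = bow.fit_transform(pair).toarray()

print(bow.get_feature_names_out())  # ['bob' 'john' 'killed'] (lowercased)
print(X[0], X[1])                   # [1 1 1] [1 1 1]
print((X[0] == X[1]).all())         # True: opposite meanings, same vector
```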

Conclusion and future work

As per our systematic literature review, we studied 62 papers. Significant challenges remain for researchers implementing automated essay grading systems, and several researchers are working rigorously on building a robust AES system despite the difficulty of the problem. The existing methods are not evaluated on coherence, relevance, completeness, feedback, or domain knowledge. Moreover, 90% of essay grading systems use the Kaggle ASAP (2012) dataset, which contains general essays from students that require no domain knowledge, so domain-specific essay datasets are needed for training and testing. Feature extraction relies on the NLTK, Word2Vec, and GloVe NLP libraries, which have many limitations when converting a sentence into vector form. Beyond feature extraction and the training of Machine Learning models, no system assesses an essay's completeness, no system provides feedback on the student's response, and none retrieves coherence vectors from the essay. From another perspective, construct-irrelevant and adversarial student responses still call AES systems into question.

Our proposed research will pursue content-based assessment of essays with domain knowledge and score essays for internal and external consistency. We will also create a new dataset for a single domain. Feature extraction techniques are another area in which we can improve.

Because this study used only four digital databases for study selection, it may miss some relevant studies on the topic. However, we hope we covered most of the significant studies, as we also manually collected papers published in relevant journals.


Contributor Information

Dadi Ramesh, Email: dadiramesh44@gmail.com

Suresh Kumar Sanampudi, Email: sureshsanampudi@jntuh.ac.in

  • Adamson, A., Lamb, A., & December, R. M. (2014). Automated Essay Grading.
  • Ajay HB, Tillett PI, Page EB (1973) Analysis of essays by computer (AEC-II) (No. 8-0102). Washington, DC: U.S. Department of Health, Education, and Welfare, Office of Education, National Center for Educational Research and Development
  • Ajetunmobi SA, Daramola O (2017) Ontology-based information extraction for subject-focussed automatic essay evaluation. In: 2017 International Conference on Computing Networking and Informatics (ICCNI) p 1–6. IEEE
  • Alva-Manchego F, et al. (2019) EASSE: Easier Automatic Sentence Simplification Evaluation. ArXiv abs/1908.04567
  • Bailey S, Meurers D (2008) Diagnosing meaning errors in short answers to reading comprehension questions. In: Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications (Columbus), p 107–115
  • Basu S, Jacobs C, Vanderwende L. Powergrading: a clustering approach to amplify human effort for short answer grading. Trans Assoc Comput Linguist (TACL) 2013;1:391–402. doi: 10.1162/tacl_a_00236
  • Bejar, I. I., Flor, M., Futagi, Y., & Ramineni, C. (2014). On the vulnerability of automated scoring to construct-irrelevant response strategies (CIRS): An illustration. Assessing Writing, 22, 48-59.
  • Bejar I, et al. (2013) Length of textual response as a construct-irrelevant response strategy: the case of shell language. Research Report ETS RR-13-07, ETS Research Report Series
  • Berzak Y, et al. (2018) Assessing language proficiency from eye movements in reading. ArXiv abs/1804.07329
  • Blanchard D, Tetreault J, Higgins D, Cahill A, Chodorow M (2013) TOEFL11: A corpus of non-native English. ETS Research Report Series, 2013(2):i–15, 2013
  • Blood, I. (2011). Automated essay scoring: a literature review. Studies in Applied Linguistics and TESOL, 11(2).
  • Burrows S, Gurevych I, Stein B. The eras and trends of automatic short answer grading. Int J Artif Intell Educ. 2015;25:60–117. doi: 10.1007/s40593-014-0026-8
  • Cader, A. (2020, July). The Potential for the Use of Deep Neural Networks in e-Learning Student Evaluation with New Data Augmentation Method. In International Conference on Artificial Intelligence in Education (pp. 37–42). Springer, Cham.
  • Cai C (2019) Automatic essay scoring with recurrent neural network. In: Proceedings of the 3rd International Conference on High Performance Compilation, Computing and Communications (2019): n. pag.
  • Chen M, Li X (2018) "Relevance-Based Automated Essay Scoring via Hierarchical Recurrent Model. In: 2018 International Conference on Asian Language Processing (IALP), Bandung, Indonesia, 2018, p 378–383, doi: 10.1109/IALP.2018.8629256
  • Chen Z, Zhou Y (2019) "Research on Automatic Essay Scoring of Composition Based on CNN and OR. In: 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, p 13–18, doi: 10.1109/ICAIBD.2019.8837007
  • Contreras JO, Hilles SM, Abubakar ZB (2018) Automated essay scoring with ontology based on text mining and NLTK tools. In: 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), 1-6
  • Correnti R, Matsumura LC, Hamilton L, Wang E. Assessing students' skills at writing analytically in response to texts. Elem Sch J. 2013;114(2):142–177. doi: 10.1086/671936
  • Cummins, R., Zhang, M., & Briscoe, E. (2016, August). Constrained multi-task learning for automated essay scoring. Association for Computational Linguistics.
  • Darwish SM, Mohamed SK (2020) Automated essay evaluation based on fusion of fuzzy ontology and latent semantic analysis. In: Hassanien A, Azar A, Gaber T, Bhatnagar RF, Tolba M (eds) The International Conference on Advanced Machine Learning Technologies and Applications
  • Dasgupta T, Naskar A, Dey L, Saha R (2018) Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In: Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications p 93–102
  • Ding Y, et al. (2020) Don't take "nswvtnvakgxpm" for an answer: the surprising vulnerability of automatic content scoring systems to adversarial input. In: Proceedings of the 28th International Conference on Computational Linguistics
  • Dong F, Zhang Y (2016) Automatic features for essay scoring–an empirical study. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing p 1072–1077
  • Dong F, Zhang Y, Yang J (2017) Attention-based recurrent convolutional neural network for automatic essay scoring. In: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017) p 153–162
  • Dzikovska M, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Dang HT (2013a) Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge
  • Dzikovska MO, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Trang Dang H (2013b) SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. *SEM 2013: The First Joint Conference on Lexical and Computational Semantics
  • Educational Testing Service (2008) CriterionSM online writing evaluation service. Retrieved from http://www.ets.org/s/criterion/pdf/9286_CriterionBrochure.pdf .
  • Evanini, K., & Wang, X. (2013, August). Automated speech scoring for non-native middle school students with multiple task types. In INTERSPEECH (pp. 2435–2439).
  • Foltz PW, Laham D, Landauer TK (1999) The Intelligent Essay Assessor: applications to educational technology. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning 1(2), http://imej.wfu.edu/articles/1999/2/04/index.asp
  • Granger, S., Dagneaux, E., Meunier, F., & Paquot, M. (Eds.). (2009). International corpus of learner English. Louvain-la-Neuve: Presses universitaires de Louvain.
  • Higgins D, Heilman M. Managing what we can measure: quantifying the susceptibility of automated scoring systems to gaming behavior. Educ Meas Issues Pract. 2014;33:36–46. doi: 10.1111/emip.12036
  • Horbach A, Zesch T. The influence of variance in learner answers on automatic content scoring. Front Educ. 2019;4:28. doi: 10.3389/feduc.2019.00028
  • https://www.coursera.org/learn/machine-learning/exam/7pytE/linear-regression-with-multiple-variables/attempt
  • Hussein, M. A., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science, 5, e208.
  • Ke Z, Ng V (2019) “Automated essay scoring: a survey of the state of the art.” IJCAI
  • Ke, Z., Inamdar, H., Lin, H., & Ng, V. (2019, July). Give me more feedback II: Annotating thesis strength and related attributes in student essays. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3994-4004).
  • Kelley K, Preacher KJ. On effect size. Psychol Methods. 2012;17(2):137–152. doi: 10.1037/a0028086
  • Kitchenham B, Brereton OP, Budgen D, Turner M, Bailey J, Linkman S. Systematic literature reviews in software engineering–a systematic literature review. Inf Softw Technol. 2009;51(1):7–15. doi: 10.1016/j.infsof.2008.09.009
  • Klebanov, B. B., & Madnani, N. (2020, July). Automated evaluation of writing–50 years and counting. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7796–7810).
  • Knill K, Gales M, Kyriakopoulos K, et al. (4 more authors) (2018) Impact of ASR performance on free speaking language assessment. In: Interspeech 2018.02–06 Sep 2018, Hyderabad, India. International Speech Communication Association (ISCA)
  • Kopparapu SK, De A (2016) Automatic ranking of essays using structural and semantic features. In: 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), p 519–523
  • Kumar, Y., Aggarwal, S., Mahata, D., Shah, R. R., Kumaraguru, P., & Zimmermann, R. (2019, July). Get it scored using autosas—an automated system for scoring short answers. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 9662–9669).
  • Kumar Y, et al. (2020) “Calling out bluff: attacking the robustness of automatic scoring systems with simple adversarial testing.” ArXiv abs/2007.06796
  • Li X, Chen M, Nie J, Liu Z, Feng Z, Cai Y (2018) Coherence-Based Automated Essay Scoring Using Self-attention. In: Sun M, Liu T, Wang X, Liu Z, Liu Y (eds) Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data. CCL 2018, NLP-NABD 2018. Lecture Notes in Computer Science, vol 11221. Springer, Cham. 10.1007/978-3-030-01716-3_32
  • Liang G, On B, Jeong D, Kim H, Choi G. Automated essay scoring: a siamese bidirectional LSTM neural network architecture. Symmetry. 2018;10:682. doi: 10.3390/sym10120682
  • Liua, H., Yeb, Y., & Wu, M. (2018, April). Ensemble Learning on Scoring Student Essay. In 2018 International Conference on Management and Education, Humanities and Social Sciences (MEHSS 2018). Atlantis Press.
  • Liu J, Xu Y, Zhao L (2019) Automated Essay Scoring based on Two-Stage Learning. ArXiv, abs/1901.07744
  • Loukina A, et al. (2015) Feature selection for automated speech scoring.” BEA@NAACL-HLT
  • Loukina A, et al. (2017) “Speech- and Text-driven Features for Automated Scoring of English-Speaking Tasks.” SCNLP@EMNLP 2017
  • Loukina A, et al. (2019) The many dimensions of algorithmic fairness in educational applications. BEA@ACL
  • Lun J, Zhu J, Tang Y, Yang M (2020) Multiple data augmentation strategies for improving performance on automatic short answer scoring. In: Proceedings of the AAAI Conference on Artificial Intelligence, 34(09): 13389-13396
  • Madnani, N., & Cahill, A. (2018, August). Automated scoring: Beyond natural language processing. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1099–1109).
  • Madnani N, et al. (2017b) “Building better open-source tools to support fairness in automated scoring.” EthNLP@EACL
  • Malinin A, et al. (2016) “Off-topic response detection for spontaneous spoken english assessment.” ACL
  • Malinin A, et al. (2017) “Incorporating uncertainty into deep learning for spoken language assessment.” ACL
  • Mathias S, Bhattacharyya P (2018a) Thank “Goodness”! A Way to Measure Style in Student Essays. In: Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications p 35–41
  • Mathias S, Bhattacharyya P (2018b) ASAP++: Enriching the ASAP automated essay grading dataset with essay attribute scores. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
  • Mikolov T, et al. (2013) “Efficient Estimation of Word Representations in Vector Space.” ICLR
  • Mohler M, Mihalcea R (2009) Text-to-text semantic similarity for automatic short answer grading. In: Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009) p 567–575
  • Mohler M, Bunescu R, Mihalcea R (2011) Learning to grade short answer questions using semantic similarity measures and dependency graph alignments. In: Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies p 752–762
  • Muangkammuen P, Fukumoto F (2020) Multi-task Learning for Automated Essay Scoring with Sentiment Analysis. In: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop p 116–123
  • Nguyen, H., & Dery, L. (2016). Neural networks for automated essay grading. CS224d Stanford Reports, 1–11.
  • Palma D, Atkinson J. Coherence-based automatic essay assessment. IEEE Intell Syst. 2018;33(5):26–36. doi: 10.1109/MIS.2018.2877278
  • Parekh S, et al. (2020) My Teacher Thinks the World Is Flat! Interpreting automatic essay scoring mechanism. ArXiv abs/2012.13872
  • Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532–1543).
  • Persing I, Ng V (2013) Modeling thesis clarity in student essays. In:Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) p 260–269
  • Powers DE, Burstein JC, Chodorow M, Fowles ME, Kukich K. Stumping E-rater: challenging the validity of automated essay scoring. ETS Res Rep Ser. 2001;2001(1):i–44
  • Powers DE, Burstein JC, Chodorow M, Fowles ME, Kukich K. Stumping e-rater: challenging the validity of automated essay scoring. Comput Hum Behav. 2002;18(2):103–134. doi: 10.1016/S0747-5632(01)00052-8
  • Ramachandran L, Cheng J, Foltz P (2015) Identifying patterns for short answer scoring using graph-based lexico-semantic text matching. In: Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications p 97–106
  • Ramanarayanan V, et al. (2017) “Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human-Machine Spoken Dialog Interactions.” INTERSPEECH
  • Riordan B, Horbach A, Cahill A, Zesch T, Lee C (2017) Investigating neural architectures for short answer scoring. In: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications p 159–168
  • Riordan B, Flor M, Pugh R (2019) "How to account for misspellings: Quantifying the benefit of character representations in neural content scoring models."In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications
  • Rodriguez P, Jafari A, Ormerod CM (2019) Language models and Automated Essay Scoring. ArXiv, abs/1909.09482
  • Rudner, L. M., & Liang, T. (2002). Automated essay scoring using Bayes' theorem. The Journal of Technology, Learning and Assessment, 1(2).
  • Rudner, L. M., Garcia, V., & Welch, C. (2006). An evaluation of IntelliMetric™ essay scoring system. The Journal of Technology, Learning and Assessment, 4(4).
  • Rupp A. Designing, evaluating, and deploying automated scoring systems with validity in mind: methodological design decisions. Appl Meas Educ. 2018;31:191–214. doi: 10.1080/08957347.2018.1464448
  • Ruseti S, Dascalu M, Johnson AM, McNamara DS, Balyan R, McCarthy KS, Trausan-Matu S (2018) Scoring summaries using recurrent neural networks. In: International Conference on Intelligent Tutoring Systems p 191–201. Springer, Cham
  • Sakaguchi K, Heilman M, Madnani N (2015) Effective feature integration for automated short answer scoring. In: Proceedings of the 2015 conference of the North American Chapter of the association for computational linguistics: Human language technologies p 1049–1054
  • Salim, Y., Stevanus, V., Barlian, E., Sari, A. C., & Suhartono, D. (2019, December). Automated English Digital Essay Grader Using Machine Learning. In 2019 IEEE International Conference on Engineering, Technology and Education (TALE) (pp. 1–6). IEEE.
  • Shehab A, Elhoseny M, Hassanien AE (2016) A hybrid scheme for Automated Essay Grading based on LVQ and NLP techniques. In: 12th International Computer Engineering Conference (ICENCO), Cairo, 2016, p 65-70
  • Shermis MD, Mzumara HR, Olson J, Harrington S. On-line grading of student essays: PEG goes on the World Wide Web. Assess Eval High Educ. 2001;26(3):247–259. doi: 10.1080/02602930120052404
  • Stab C, Gurevych I (2014) Identifying argumentative discourse structures in persuasive essays. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) p 46–56
  • Sultan MA, Salazar C, Sumner T (2016) Fast and easy short answer grading with high accuracy. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies p 1070–1075
  • Süzen, N., Gorban, A. N., Levesley, J., & Mirkes, E. M. (2020). Automatic short answer grading and feedback using text mining methods. Procedia Computer Science, 169, 726–743.
  • Taghipour K, Ng HT (2016) A neural approach to automated essay scoring. In: Proceedings of the 2016 conference on empirical methods in natural language processing p 1882–1891
  • Tashu TM (2020) "Off-Topic Essay Detection Using C-BGRU Siamese. In: 2020 IEEE 14th International Conference on Semantic Computing (ICSC), San Diego, CA, USA, p 221–225, doi: 10.1109/ICSC.2020.00046
  • Tashu TM, Horváth T (2019) A layered approach to automatic essay evaluation using word-embedding. In: McLaren B, Reilly R, Zvacek S, Uhomoibhi J (eds) Computer Supported Education. CSEDU 2018. Communications in Computer and Information Science, vol 1022. Springer, Cham
  • Tashu TM, Horváth T (2020) Semantic-Based Feedback Recommendation for Automatic Essay Evaluation. In: Bi Y, Bhatia R, Kapoor S (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1038. Springer, Cham
  • Uto M, Okano M (2020) Robust Neural Automated Essay Scoring Using Item Response Theory. In: Bittencourt I, Cukurova M, Muldner K, Luckin R, Millán E (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163. Springer, Cham
  • Wang Z, Liu J, Dong R (2018a) Intelligent Auto-grading System. In: 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS) p 430–435. IEEE.
  • Wang Y, et al. (2018b) “Automatic Essay Scoring Incorporating Rating Schema via Reinforcement Learning.” EMNLP
  • Zhu W, Sun Y (2020) Automated essay scoring system using multi-model Machine Learning. In: Wyld DC, et al. (eds) MLNLP, BDIOT, ITCCMA, CSITY, DTMN, AIFZ, SIGPRO
  • Wresch W. The Imminence of Grading Essays by Computer-25 Years Later. Comput Compos. 1993;10:45–58. doi: 10.1016/S8755-4615(05)80058-1
  • Wu, X., Knill, K., Gales, M., & Malinin, A. (2020). Ensemble approaches for uncertainty in spoken language assessment.
  • Xia L, Liu J, Zhang Z (2019) Automatic Essay Scoring Model Based on Two-Layer Bi-directional Long-Short Term Memory Network. In: Proceedings of the 2019 3rd International Conference on Computer Science and Artificial Intelligence p 133–137
  • Yannakoudakis H, Briscoe T, Medlock B (2011) A new dataset and method for automatically grading ESOL texts. In: Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies p 180–189
  • Zhao S, Zhang Y, Xiong X, Botelho A, Heffernan N (2017) A memory-augmented neural model for automated grading. In: Proceedings of the Fourth (2017) ACM Conference on Learning@ Scale p 189–192
  • Zupanc K, Bosnic Z (2014) Automated essay evaluation augmented with semantic coherence measures. In: 2014 IEEE International Conference on Data Mining p 1133–1138. IEEE.
  • Zupanc K, Savić M, Bosnić Z, Ivanović M (2017) Evaluating coherence of essays using sentence-similarity networks. In: Proceedings of the 18th International Conference on Computer Systems and Technologies p 65–72
  • Dzikovska, M. O., Nielsen, R., & Brew, C. (2012, June). Towards effective tutorial feedback for explanation questions: A dataset and baselines. In  Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies  (pp. 200-210).
  • Kumar, N., & Dey, L. (2013, November). Automatic Quality Assessment of documents with application to essay grading. In 2013 12th Mexican International Conference on Artificial Intelligence (pp. 216–222). IEEE.
  • Wu, S. H., & Shih, W. F. (2018, July). A short answer grading system in chinese by support vector approach. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (pp. 125-129).
  • Agung Putri Ratna, A., Lalita Luhurkinanti, D., Ibrahim I., Husna D., Dewi Purnamasari P. (2018). Automatic Essay Grading System for Japanese Language Examination Using Winnowing Algorithm, 2018 International Seminar on Application for Technology of Information and Communication, 2018, pp. 565–569. 10.1109/ISEMANTIC.2018.8549789.
  • Sharma A., & Jayagopi D. B. (2018). Automated Grading of Handwritten Essays 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018, pp 279–284. 10.1109/ICFHR-2018.2018.00056
