In 2016, it was estimated that over four million standardized exams were administered with an essay section.
- The most popular standardized exam of 2016, the ACT, requires an additional two weeks of processing time to return essay scores.
- Typically each essay requires two scorers. Assuming each scorer takes only one minute per essay, scoring four million essays would take roughly 133,000 person-hours.
- Automating part of the process makes it possible to eliminate one of the human scorers while retaining the other as a control, saving the industry an estimated two million dollars per year.
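The workload estimate above is simple to sanity-check (a back-of-the-envelope calculation, assuming two scorers at one minute per essay):

```python
# Back-of-the-envelope check of the scoring workload cited above.
essays = 4_000_000
scorers_per_essay = 2
minutes_per_read = 1

total_minutes = essays * scorers_per_essay * minutes_per_read
person_hours = total_minutes / 60
print(f"{person_hours:,.0f} person-hours")  # ~133,333
```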
Automating the essay scoring requires several steps. First, training samples are needed for each individual topic. This can be accomplished by having the "control" scorer act as the first reader and assign a score to each essay in a database. The model discussed in this blog required a sample of only thirteen thousand essays to effectively grade eight topics drawn from different student age groups. Features are extracted by measuring the proportion of stopwords used within each essay, vectorizing the essay text, and topic modeling.
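That feature-extraction step can be sketched with scikit-learn; the stopword list, vectorizer settings, topic count, and sample essays below are illustrative assumptions, not the original app's configuration:

```python
from sklearn.feature_extraction.text import (
    CountVectorizer, TfidfVectorizer, ENGLISH_STOP_WORDS)
from sklearn.decomposition import LatentDirichletAllocation

essays = [
    "The quick brown fox jumps over the lazy dog near the river bank",
    "Schools should teach students how to manage money and budgets",
    "Money management and budgeting are skills every student needs",
]

# Feature 1: proportion of stopwords in each essay.
def stopword_ratio(text):
    tokens = text.lower().split()
    return sum(t in ENGLISH_STOP_WORDS for t in tokens) / len(tokens)

ratios = [stopword_ratio(e) for e in essays]

# Feature 2: TF-IDF vectorization of the essay text.
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(essays)

# Feature 3: per-essay topic weights from LDA on raw term counts.
counts = CountVectorizer(stop_words="english").fit_transform(essays)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
X_topics = lda.fit_transform(counts)  # one topic-weight row per essay
```

The stopword ratio, TF-IDF matrix, and topic weights can then be concatenated into a single feature row per essay for the downstream models.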
The topics within an essay can be extracted, visualized, and displayed to students and educators for better clarity with tools such as pyLDAvis (a Python module).
These features are then fed into gradient boosted trees and convolutional neural networks to produce a predicted score along with predicted probabilities for each grade assignment (a useful additional signal for examining whether an essay merits a higher or lower grade due to other influences).
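The gradient-boosted-tree branch of that scoring step can be sketched as follows; the features and grade labels are synthetic stand-ins, and the CNN branch is omitted:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 essays x 5 extracted features, grades 1-4.
X = rng.normal(size=(200, 5))
y = rng.integers(1, 5, size=200)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

new_essay = rng.normal(size=(1, 5))
grade = model.predict(new_essay)[0]        # single predicted grade
probs = model.predict_proba(new_essay)[0]  # one probability per grade
print(grade, dict(zip(model.classes_, probs.round(2))))
```

The `predict_proba` output is what enables the "does this essay merit a higher or lower grade" check: a score whose probability mass is split across two adjacent grades is a natural candidate for human review.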
Various models have been tested, but given the current sample size of this training set (only thirteen thousand essays), the power of the neural network could not be properly harnessed. As a result, the best performing model, which powers the app's current predictions, achieved a cross-validated accuracy of 94% with minimal log loss, and its predictions fell within one point of the "control" score in 100% of cases. By typical industry standards, even two human graders routinely disagree by a point, so the model's variation is in line with manual scoring. In addition, with a larger sample size, the convolutional neural network could be called upon, since it "learns" from previous behavior and could eventually adapt to changes in essay format without being fed additional training samples.
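The "within one point" figure above is a form of adjacent agreement, which is straightforward to compute; the helper below and its toy scores are my own illustration:

```python
def adjacent_agreement(predicted, actual, tolerance=1):
    """Fraction of predictions within `tolerance` points of the human score."""
    hits = sum(abs(p - a) <= tolerance for p, a in zip(predicted, actual))
    return hits / len(actual)

# Toy example: four of five predictions land within one point.
human = [3, 2, 4, 1, 3]
model = [3, 3, 4, 3, 2]
print(adjacent_agreement(model, human))  # 0.8
```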
A demonstration of the app can be found here, with the current prototype model grading and assessing one example essay.
In the future, additional features can be added to the app, such as tips for improving writing and personalized tuning that lets educators adjust the weights given to parts of the essay. There are also some shortcomings due to the sample size used to train the app. For example, when a user writes in language far more sophisticated (MS or PhD level) than that of the student population the app was trained on, the vocabulary is often not recognized by the model, and the app grades the essay poorly because it perceives the essay as "off-topic." This can be combated in the future by training on a much larger corpus than the model currently had access to.