This paper aims to develop an effective and reliable machine scoring system for students' English-to-Chinese translations, enabling automatic scoring in large-scale testing. For translations of three genres, the study builds scoring models on training sets of five different proportions; the correlation coefficients between the models' predicted scores and human scores are all above 0.8. Moreover, with a training set of 130 translations, the models' predicted scores for expository and narrative texts are very close to the human scores; with a training set of 100 translations, the model's scores for mixed narrative-argumentative texts are closest to the human scores. The results show that the variables extracted in this study have strong predictive power and that the genre-specific scoring models perform well, predicting students' English-to-Chinese translation performance with reasonable accuracy.
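To make the evaluation procedure described above concrete, the following is a minimal sketch, not the authors' actual system: it fits a scoring model on a feature matrix extracted from student translations and evaluates it by the Pearson correlation between predicted and human scores, for training sets of different sizes (e.g. 100 or 130 scored translations). The linear regression model and the synthetic features are assumptions for illustration; the abstract does not specify the model type or the extracted variables.

# A sketch under stated assumptions: a regression-based scoring model evaluated
# by Pearson correlation with human scores. Data, features, and model choice
# are hypothetical, not taken from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data: each row is one student translation, columns are
# quantitative features; y is the human-assigned score.
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=200)

# Train on training sets of different sizes, score the remaining translations,
# and report the correlation between predicted and human scores.
for n_train in (100, 130):
    model = LinearRegression().fit(X[:n_train], y[:n_train])
    pred = model.predict(X[n_train:])
    r, _ = pearsonr(pred, y[n_train:])
    print(f"training set = {n_train}: Pearson r with human scores = {r:.3f}")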