How far do we agree on the quality of translation?

Authors

Maria Kunilovskaya, Tyumen State University, Tyumen, Russia

DOI:

https://doi.org/10.33919/esnbu.15.1.2

Keywords:

TQA, translation mistakes, inter-rater reliability, error-based evaluation, error-annotated corpus, RusLTC

Abstract

The article aims to describe the inter-rater reliability of translation quality assessment (TQA) in translator training, calculated as a measure of raters’ agreement either on the number of points awarded to each translation under a holistic rating scale or on the types and number of translation mistakes marked by raters in the same translations. We analyze three samples of student translations assessed by several panels of raters using different assessment methods, and draw conclusions about the statistical reliability of real-life TQA results in general and about objective trends in this essentially subjective activity in particular. We also try to identify the more objective components of error-based TQA data and suggest an approach to ranking error-marked translations that can be used for subsequent relative grading in translator training.
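The agreement measures the abstract refers to can be computed with a chance-corrected statistic such as Krippendorff’s alpha (see Krippendorff, 2011, in the references below; Freelon’s ReCal service computes it as well). As a minimal illustration only, here is a sketch of the alpha coefficient for nominal ratings; the rating labels and the toy sample are hypothetical, not the article’s data.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal ratings.

    `units` is a list of lists: all ratings each unit (here, a student
    translation) received; a unit with fewer than two ratings is skipped.
    """
    o = Counter()                      # coincidence matrix o[(c, k)]
    for values in units:
        m = len(values)
        if m < 2:
            continue                   # unpairable unit
        for c, k in permutations(values, 2):
            o[(c, k)] += 1 / (m - 1)   # each ordered pair weighted by 1/(m-1)

    n_c = Counter()                    # marginal totals per category
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())              # total number of pairable values

    # Nominal distance: delta = 1 for c != k, 0 otherwise.
    d_obs = sum(w for (c, k), w in o.items() if c != k)
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 if d_exp == 0 else 1 - d_obs / d_exp

# Four translations, two raters each; the raters disagree on one unit.
print(krippendorff_alpha_nominal([["ok", "ok"], ["ok", "bad"],
                                  ["bad", "bad"], ["bad", "bad"]]))  # ~0.533
```

Because alpha is defined over a coincidence matrix rather than rater pairs, the same function handles any number of raters and tolerates missing ratings (a unit simply lists fewer values); values at or above roughly 0.8 are conventionally read as acceptable agreement.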

Author Biography

Maria Kunilovskaya, Tyumen State University, Tyumen, Russia

Maria Kunilovskaya, Ph.D. (Tyumen State University), is an Associate Professor with the Department of
Translation and Translation Studies, Institute of Philology and Journalism, Tyumen State University
(Russia). She specializes in translation studies and teaches courses in translation and interpreting. Maria
is involved with translation error annotation in the online multiple translation corpus Russian Learner
Translator Corpus (http://rus-ltc.org). Her research interests include translation quality evaluation,
comparative text linguistics, parallel corpora and NLP.

References

Artstein, R. & Poesio, M. (2008). Inter-Coder Agreement for Computational Linguistics. Computational Linguistics, 34(4), 555–596. https://doi.org/10.1162/coli.07-034-R2

Freelon, D. G. (2010). ReCal: Intercoder Reliability Calculation as a Web Service. International Journal of Internet Science, 5(1), 20–33.

Kelly, D. (2005). A Handbook for Translator Trainers. A Guide to Reflective Practice. Manchester: St. Jerome Publishing.

Knyazheva, E. & Pirko, E. (2013). Otsenka kachestva perevoda v rusle metodologii sistemnogo analiza [TQA and Systems Analysis Methodology]. Journal of Voronezh State University. Linguistics and Intercultural Communication Series, 1, 145–151.

Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology. Sage Publications.

Krippendorff, K. (2011). Computing Krippendorff's Alpha-Reliability. Retrieved from http://repository.upenn.edu/asc_papers/43/

Neubert, A. (2000). Competence in Language, in Languages, and in Translation. In Schäffner, C. & Adab, B. (Eds.), Developing Translation Competence (pp. 3–17). Amsterdam/Philadelphia: John Benjamins Publishing Company. https://doi.org/10.1075/btl.38

Strijbos, J.-W. & Stahl, G. (2007). Methodological Issues in Developing a Multidimensional Coding Procedure for Small-group Chat Communication. Learning and Instruction, 17(4), 394–404. https://doi.org/10.1016/j.learninstruc.2007.03.005

Waddington, Ch. (2001). Should Translations be Assessed Holistically or through Error Analysis? Hermes, 26, 15–37. Retrieved from http://download2.hermes.asb.dk/archive/download/H26_03.pdf

Williams, M. (2009). Translation Quality Assessment. Mutatis Mutandis, 2(1), 3–23.

Zwilling, M. (2009). O kriteriiakh otsenki perevoda [On Translation Quality Assessment Criteria]. In Zwilling, M. (Ed.), O perevode i perevodtchikakh [On Translation and Translators] (pp. 56–63). Moskva: Vostotchnaia kniga.

Published

2015-02-01

How to Cite

Kunilovskaya, M. (2015). How far do we agree on the quality of translation? English Studies at NBU, 1(1), 18–31. https://doi.org/10.33919/esnbu.15.1.2

Issue

Section

Articles