
Computer Science > Computation and Language

Title: Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods

Abstract: To facilitate effective translation modeling and translation studies, one of the crucial questions to address is how to assess translation quality. From the perspectives of accuracy, reliability, repeatability and cost, translation quality assessment (TQA) is itself a rich and challenging task. In this work, we present a high-level and concise survey of TQA methods, including both manual judgement criteria and automated evaluation metrics, which we classify into further detailed sub-categories. We hope that this work will be an asset for both translation model researchers and quality assessment researchers. In addition, we hope that it will enable practitioners to quickly develop a better understanding of the conventional TQA field and to find evaluation solutions closely matched to their own needs. This work may also inspire further development of quality assessment and evaluation methodologies for other natural language processing (NLP) tasks beyond machine translation (MT), such as automatic text summarization (ATS), natural language understanding (NLU) and natural language generation (NLG).



Translation Quality Assessment

Ingemar Strandvik

2018, Machine Translation: Technologies and Applications

Related Papers

Jitka Zehnalová

Translation Quality Assessment (TQA) is a delicate issue. Bowker (2000, 183) states that "evaluation is one of the most problematic areas of translation" and quotes Bassnett-McGuire, Mahn, Malmkjaer, and Snell-Hornby, who described it as "a great stumbling block," "a complex challenge," "a most wretched question," and "a thorny problem," respectively. The paper maps the relationship between traditional and current approaches to translation quality evaluation in terms of terminology and assessment methods and procedures. It states what problems have traditionally been associated with this area of translation studies (subjectivity of evaluation, vagueness of assessment criteria, lack of a standardized terminology and of attention from both researchers and practitioners) and how contemporary authors address these issues. It presents the thesis that problems related to translation evaluation can be reduced by researching evaluation processes and by developing assessment procedures appropriate for specific situations and purposes of evaluation. Based on a survey of traditional approaches, the study seeks to explore the current terminology of the discipline and develops a three-level model of TQA and TQA terminology.

thesis on translation quality assessment

The International Journal of Translation and Interpreting Research

Rouhullah Nemati Parsa

Due to its complex nature, providing a comprehensive framework for translation quality assessment (TQA) has always been a challenging task. To address this gap, many scholars spared no effort to provide a framework, approach or theory from philosophical, linguistic, and cultural perspectives, like those by Williams (2004), House (2015) and Reiss and Vermeer (1984), to mention but a few. According to Drugan (2013: 35), “theorists and professionals overwhelmingly agree that there is no single objective way to measure quality”. In the same vein, Dong and Lan note that “translation evaluation [. . .] remains one of the most problematic areas of translation studies as a field of study” (2010: 48). Notwithstanding, there is no consensus among scholars in this regard. Yet it remains one of the most interesting but controversial research areas in Translation Studies. Bittner’s book presents the historical trajectory of this concept by critically reviewing the eclectic and up-to-date viewpoi...

Roberto Martínez Mateo

In spite of the professionalization of translators and their central role in international relations, there is as yet no widely accepted methodology for professional Translation Quality Assessment (TQA). At the same time, there is a pressing need to standardize TQA criteria to mitigate the subjectivity prevailing over quality assessment. Evaluating the quality of a translation first requires defining the concepts of Quality and Translation, which inevitably parallels the approach characteristic of a translation theory (House, 1997). In this case, the functionalist approach is embraced. The aim of this article is to review the Quality Assessment Tool (QAT), a computer-aided tool developed by the Directorate General for Translation (DGT) of the European Commission as an aid in the quality quantification process for external translations. Simultaneously, some of the most representative quantitative models for TQA are analyzed with a view to laying the foundations for an alternative model that improves on the QAT. I begin by analyzing a selection of outstanding quantitative-oriented models of TQA (metrics), identifying their strengths and weaknesses as viable and efficient assessment tools. Having surveyed these models, I then propose suitable refinements and polishing of existing metrics so as to establish a basis for structuring a new theoretical model for TQA. The result is a mixed approach to TQA that takes an integrated view embedding both top-down (rubrics) and bottom-up (metrics) tools.

International Journal of Language and Translation Research,

Hossein Heidari Tabrizi

Translation quality is a central issue in the translation profession as well as in translation education and training, and is one of the most controversial topics in translation studies today. The terms and concepts used in discussing the process of judging translation quality in its various practices and contexts are rather confused by scholars and practitioners in the field. Perhaps the prime example of such confusion is the interchangeable use of the terms "evaluation" and "assessment." Acknowledging the complexity and importance of defining these notions, a shared emphasis is found in the literature on defining and assessing quality in the context of specific situations. In fact, the lack of a universal, unified specialized terminology for judging translations underscores the need to standardize assessment terminology in order to reach a common understanding of the quality standards demanded in both academic and professional settings. In order to differentiate among various practices, translation terminology is gradually evolving. To date, efforts have been made to clarify this terminology and to identify and define different types of translation quality assessment procedures. Through a systematic review of the literature at hand, the present paper attempts to map out the terminology for judging quality in various translation practices as a key disciplinary desideratum.

H. Translation Project Publishing

Majid Khorsand , Bahloul Salmani

The focus of this research-oriented book is the best possible interpretation of House's (1977, 1981, 1997, 2009) translation quality assessment (TQA) model, among the latest approaches to translation evaluation. The book discusses the significance of analyzing original and translated literary and other culture-specific texts through House's revisited TQA model, as well as the quality of the translated versions. It then raises the most pertinent problems and discusses the notion of expertise in translation. Further, the model is applied to Orwell's Animal Farm (1945) and its two Persian translations. Finally, some challenging excerpts are offered for discussion and analysis.

Amin Karimnia

First National Conference on Innovative Multidisciplinary Research in the Humanities

Mohammad Ali Kharmandar

One of the fundamental concerns of translation quality assessment (TQA) is to reach valid and reliable criteria for evaluating translation quality. One of the lines of TQA has relied on argumentation theory, although this specific line has fallen into stagnation. The purpose of this study is to primarily review Williams’s application of Toulmin’s argumentation model to TQA, and secondly highlight certain shortcomings of the model. Two basic problems that seriously attenuate the application of Williams’s model are observed: first, the model does not consider literary translation, and second the model fails to take into account complex argumentation. To solve these problems, this study proposes a novel method based on pragma-dialectics, called argumentation-based literary translation quality assessment, which is in line with recent trends in translation philosophy.

Melek Acikbas Inozu

Íkala, revista de lenguaje y …

Juan Camilo Giraldo

Bahagia Tarigan

Quality is the ultimate goal of any translation practice. However, the quality of a translation is always debatable, given the different methods of assessment, widely known as translation quality assessment (TQA). The issue of relativity and subjectivity is apparent in many TQA models. This study aims at developing a holistic TQA model applicable to assessing translation from English into Bahasa Indonesia. This study used a research and development method. The data were both primary and secondary: the primary data were the results of interviews and a focus group discussion (FGD), and the secondary data were the research report. The data were analyzed using an interactive model. The findings of this research indicate that: (i) TQA should be based on a holistic method, (ii) a holistic-based TQA model should provide clearly distinguishing quality criteria, and (iii) the TQA model developed in this study assesses both translation and linguistic skills. This study concludes that a good TQA should cove...


Translation quality assessment: Naguib Mahfouz's Midaq Alley as case study

Aladwan, Dima Adwan (2012) Translation quality assessment: Naguib Mahfouz's Midaq Alley as case study. PhD thesis, University of Leeds.

This thesis is a descriptive, evaluative and comparative study in the field of translation studies. One of the objectives of this thesis is to explore a valid criterion by which a literary translation can be evaluated efficiently, and to assess the translation of the novel selected for this research. The aim of this study is to measure the shifts which occurred between TT1 and TT2 when compared to the ST. The thesis also aims at highlighting the significance of culture and the way cultures are introduced to the target readership through translation. It is thought that the strategy of foreignization enriches target texts and introduces cultural elements to the target reader. The corpus of the study is Zuqaq al-Midaq, the well-known novel by Naguib Mahfouz, the Nobel Prize Laureate in Literature in 1988. This novel was first translated by Trevor Le Gassick in 1966, and a revision of this translation was published in 1975. The main objective of this study is to explore the translation shifts which were applied in TT1 and TT2. The methodology of this thesis relies on the Nord model (2005), focusing on the translation problems introduced by Nord. The four aspects of translation problems Nord identifies are pragmatic, linguistic, cultural and text-specific translation problems. A final assessment of the quality of both versions of the translation is discussed at the end of the study.



Translation Quality Assessment, pp 9–38

Approaches to Human and Machine Translation Quality Assessment

  • Sheila Castilho
  • Stephen Doherty
  • Federico Gaspari
  • Joss Moorkens

First Online: 14 July 2018


Part of the book series: Machine Translation: Technologies and Applications (MATRA, volume 1)

In both research and practice, translation quality assessment is a complex task involving a range of linguistic and extra-linguistic factors. This chapter provides a critical overview of the established and developing approaches to the definition and measurement of translation quality in human and machine translation workflows across a range of research, educational, and industry scenarios. We intertwine literature from several interrelated disciplines dealing with contemporary translation quality assessment and, while we acknowledge the need for diversity in these approaches, we argue that there are fundamental and widespread issues that remain to be addressed, if we are to consolidate our knowledge and practice of translation quality assessment in increasingly technologised environments across research, teaching, and professional practice.


For a comprehensive review of translation theories in relation to quality, see Munday (2008); Pym (2010); Drugan (2013); House (2015).

http://www.astm.org/Standards/F2575.htm

A new ISO proposal was accepted in 2017 and is under development at the time of writing. The new standard is ISO/AWI 21999 “Translation quality assurance and assessment – Models and metrics”. Details are at https://www.iso.org/standard/72345.html

http://www.qt21.eu/

The notion of ‘recall’ intended here is borrowed from cognitive psychology, and it should not be confused with the concept of ‘recall’ (as opposed to ‘precision’) more commonly used to assess natural language processing tasks and, in particular, the performance of MT systems, e.g. with automatic evaluation metrics, which are discussed in more detail in Sect. 4 (for an introduction to the role of precision and recall in automatic MTE metrics, see Koehn 2009: 222).
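
To make the NLP sense of the terms concrete, precision and recall in automatic MT evaluation compare the words of a system output against a reference translation: precision asks what fraction of the output's words appear in the reference, recall asks what fraction of the reference's words are covered. The following is a minimal unigram sketch of that idea (not any specific published metric; real metrics such as BLEU extend this to higher-order n-grams with clipping and a brevity penalty):

```python
from collections import Counter

def unigram_precision_recall(hypothesis: str, reference: str) -> tuple[float, float]:
    """Clipped unigram precision and recall of an MT output against one reference."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((hyp & ref).values())   # matches, clipped by reference counts
    precision = overlap / sum(hyp.values())  # share of output words found in reference
    recall = overlap / sum(ref.values())     # share of reference words covered by output
    return precision, recall

p, r = unigram_precision_recall("the cat sat", "the cat sat on the mat")
# short but fully correct output: perfect precision, low recall
```

A too-short translation can thus score perfect precision while its recall exposes the missing content, which is why metrics balance the two.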

See http://www.statmt.org/

The notion of ‘usability’ discussed here is different from that of ‘adequacy’ covered in Sect. 3.1, as it involves aspects of the practical operational validity and effectiveness of the translated content, e.g. whether a set of translated instructions enables a user to correctly operate a device to perform a specific function or achieve a particular objective (say, updating the contact list on a mobile phone by adding a new item).

In Daems et al. ( 2015 ), the average number of production units refers to the number of production units of a segment divided by the number of source text words in that segment. The average time per word indicates the total time spent editing a segment, divided by the number of source text words in that segment. The average fixation duration is based on the total fixation duration (in milliseconds) of a segment divided by the number of fixations within that segment. The average number of fixations results from the number of fixations in a segment divided by the number of source text words in that segment. The pause ratio is given by the total time in pauses (in milliseconds) for a segment divided by the total editing time (in milliseconds) for that segment and, finally, the average pause ratio is the average time per pause in a segment divided by the average time per word in a segment.
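
The ratios defined above are simple per-segment divisions over logged keystroke and eye-tracking data. The sketch below restates them as code for clarity; the `Segment` fields and function name are illustrative inventions, not the actual data format used by Daems et al. (2015):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Hypothetical per-segment logging record (field names are illustrative)."""
    src_words: int          # source-text words in the segment
    production_units: int   # production units recorded for the segment
    edit_time_ms: float     # total editing time (ms)
    fixation_time_ms: float # total fixation duration (ms)
    fixations: int          # number of fixations in the segment
    pause_time_ms: float    # total time in pauses (ms)
    pauses: int             # number of pauses

def effort_indicators(seg: Segment) -> dict:
    """Per-segment post-editing effort indicators as defined in the text above."""
    avg_time_per_word = seg.edit_time_ms / seg.src_words
    avg_time_per_pause = seg.pause_time_ms / seg.pauses
    return {
        "avg_production_units": seg.production_units / seg.src_words,
        "avg_time_per_word": avg_time_per_word,
        "avg_fixation_duration": seg.fixation_time_ms / seg.fixations,
        "avg_fixations": seg.fixations / seg.src_words,
        "pause_ratio": seg.pause_time_ms / seg.edit_time_ms,
        "avg_pause_ratio": avg_time_per_pause / avg_time_per_word,
    }

seg = Segment(src_words=10, production_units=4, edit_time_ms=30000,
              fixation_time_ms=12000, fixations=60, pause_time_ms=9000, pauses=3)
ind = effort_indicators(seg)
```

Normalising by source-text words makes segments of different lengths comparable, which is the point of all six indicators.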

Abdallah K (2012) Translators in production networks. Reflections on Agency, Quality and Ethics. Dissertation, University of Eastern Finland


Adab B (2005) Translating into a second language: can we, should we? In: Anderman G, Rogers M (eds) In and out of English for better, for worse? Multilingual Matters, Clevedon, pp 227–241

Alabau V, Bonk R, Buck C, Carl M, Casacuberta F, García-Martínez M, González J, Koehn P, Leiva L, Mesa-Lao B, Ortiz D, Saint-Amand H, Sanchis G, Tsoukala C (2013) CASMACAT: an open source workbench for advanced computer aided translation. Prague Bull Math Linguist 100(1):101–112


Allen J (2003) Post-editing. In: Somers H (ed) Computers and translation: a translator’s guide. John Benjamins, Amsterdam, pp 297–317


Arnold D, Balkan L, Meijer S, Lee Humphreys R, Sadler L (1994) Machine translation: an introductory guide. Blackwell, Manchester

Aziz W, Sousa SCM, Specia L (2012) PET: a tool for post-editing and assessing machine translation. In: Calzolari N, Choukri K, Declerck T, Doğan MU, Maegaard B, Mariani J, Moreno A, Odijk J, Piperidis S (eds) Proceedings of the eighth international conference on language resources and evaluation, Istanbul, pp 3982–3987

Baker M (1992) In other words: a coursebook on translation. Routledge, London


Björnsson CH (1971) Læsbarhed. København, Gad

Bojar O, Ercegovčević M, Popel M, Zaidan OF (2011) A grain of salt for the WMT manual evaluation. In: Proceedings of the 6th workshop on Statistical Machine Translation, Edinburgh, 30–31 July 2011, pp 1–11

Byrne J (2006) Technical translation: usability strategies for translating technical documentation. Springer, Heidelberg

Callison-Burch C, Osborne M, Koehn P (2006) Re-evaluating the role of BLEU in machine translation research. In: Proceedings of 11th conference of the European chapter of the association for computational linguistics 2006, Trento, 3–7 April, pp 249–256

Callison-Burch C, Fordyce C, Koehn P, Monz C, Schroeder J (2007) (Meta-)evaluation of machine translation. In: Proceedings of the second workshop on Statistical Machine Translation, Prague, pp 136–158

Callison-Burch C, Koehn P, Monz C, Schroeder J (2009) Findings of the 2009 workshop on Statistical Machine Translation. In: Proceedings of the 4th EACL workshop on Statistical Machine Translation, Athens, 30–31 March 2009, p 1–28

Callison-Burch C, Koehn P, Monz C, Zaidan OF (2011) Findings of the 2011 Workshop on Statistical Machine Translation. In: Proceedings of the 6th Workshop on Statistical Machine Translation, 30–31 July, 2011, Edinburgh, pp 22–64

Campbell S (1998) Translation into the second language. Longman, New York

Canfora C, Ottmann A (2016) Who’s afraid of translation risks? Paper presented at the 8th EST Congress, Aarhus, 15–17 September 2016

Carl M (2012) Translog-II: a program for recording user activity data for empirical reading and writing research. In: Calzolari N, Choukri K, Declerck T, Doğan MU, Maegaard B, Mariani J, Moreno A, Odijk J, Piperidis S (eds) Proceedings of the eighth international conference on language resources and evaluation, Istanbul, 23–25 May 2012, pp 4108–4112

Carl M, Gutermuth S, Hansen-Schirra S (2015) Post-editing machine translation: a usability test for professional translation settings. In: Ferreira A, Schwieter JW (eds) Psycholinguistic and cognitive inquiries into translation and interpreting. John Benjamins, Amsterdam, pp 145–174

Castilho S (2016) Measuring acceptability of machine translated enterprise content. PhD thesis, Dublin City University

Castilho S, O’Brien S (2016) Evaluating the impact of light post-editing on usability. In: Proceedings of the tenth international conference on language resources and evaluation, Portorož, 23–28 May 2016, pp 310–316

Castilho S, O’Brien S, Alves F, O’Brien M (2014) Does post-editing increase usability? A study with Brazilian Portuguese as target language. In: Proceedings of the seventeenth annual conference of the European Association for Machine Translation, Dubrovnik, 16–18 June 2014, pp 183–190

Catford J (1965) A linguistic theory of translation. Oxford University Press, Oxford

Chan YS, Ng HT (2008) MAXSIM: an automatic metric for machine translation evaluation based on maximum similarity. In: Proceedings of the MetricsMATR workshop of AMTA-2008, Honolulu, Hawaii, pp 55–62

Chomsky N (1969) Aspects of the theory of syntax. MIT Press, Cambridge, MA

Coughlin D (2003) Correlating automated and human assessments of machine translation quality. In: Proceedings of the Machine Translation Summit IX, New Orleans, 23–27 September 2003, pp 63–70

Daems J, Vandepitte S, Hartsuiker R, Macken L (2015) The Impact of machine translation error types on post-editing effort indicators. In: Proceedings of the 4th workshop on post-editing technology and practice, Miami, 3 November 2015, pp 31–45

Dale E, Chall JS (1948) A formula for predicting readability: instructions. Educ Res Bull 27(2):37–54

De Almeida G, O’Brien S (2010) Analysing post-editing performance: correlations with years of translation experience. In: Hansen V, Yvon F (eds) Proceedings of the 14th annual conference of the European Association for Machine Translation, St. Raphaël, 27–28 May 2010. Available via: http://www.mt-archive.info/EAMT-2010-Almeida.pdf . Accessed 10 Jan 2017

De Beaugrande R, Dressler W (1981) Introduction to text linguistics. Longman, New York

Debove A, Furlan S, Depraetere I (2011) A contrastive analysis of five automated QA tools (QA distiller. 6.5.8, Xbench 2.8, ErrorSpy 5.0, SDLTrados 2007 QA checker 2.0 and SDLX 2007 SP2 QA check). In: Depraetere I (ed) Perspectives on translation quality. Walter de Gruyter, Berlin, pp 161–192

DePalma D, Kelly N (2009) The business case for machine translation. Common Sense Advisory, Boston

Depraetere I (2010) What counts as useful advice in a university post-editing training context? Report on a case study. In: Proceedings of the 14th annual conference of the European Association for Machine Translation, St. Raphaël, 27–28 May 2010. Available via: http://www.mt-archive.info/EAMT-2010-Depraetere-2.pdf . Accessed 12 May 2017

Doddington G (2002) Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In: Proceedings of the second international conference on human language technology research, San Diego, pp 138–145

Doherty S (2012) Investigating the effects of controlled language on the reading and comprehension of machine translated texts. PhD dissertation, Dublin City University

Doherty S (2016) The impact of translation technologies on the process and product of translation. Int J Commun 10:947–969

Doherty S (2017) Issues in human and automatic translation quality assessment. In: Kenny D (ed) Human issues in translation technology. Routledge, London, pp 154–178

Doherty S, O’Brien S (2014) Assessing the usability of raw machine translated output: a user-centred study using eye tracking. Int J Hum Comput Interact 30(1):40–51

Doherty S, Gaspari F, Groves D, van Genabith J, Specia L, Burchardt A, Lommel A, Uszkoreit H (2013) Mapping the industry I: findings on translation technologies and quality assessment. Available via: http://www.qt21.eu/launchpad/sites/default/files/QTLP_Survey2i.pdf . Accessed 12 May 2017

Drugan J (2013) Quality in professional translation: assessment and improvement. Bloomsbury, London

Dybkjær L, Bernsen N, Minker W (2004) Evaluation and usability of multimodal spoken language dialogue systems. Speech Comm 43(1):33–54

Federmann C (2012) Appraise: an open-source toolkit for manual evaluation of MT output. Prague Bull Math Linguist 98:25–35

Fields P, Hague D, Koby GS, Lommel A, Melby A (2014) What is quality? A management discipline and the translation industry get acquainted. Revista Tradumàtica 12:404–412. Available via: https://ddd.uab.cat/pub/tradumatica/tradumatica_a2014n12/tradumatica_a2014n12p404.pdf . Accessed 12 May 2017

Flesch R (1948) A new readability yardstick. J Appl Psychol 32(3):221–233

Gaspari F (2004) Online MT services and real users’ needs: an empirical usability evaluation. In: Frederking RE, Taylor KB (eds) Proceedings of AMTA 2004: 6th conference of the Association for Machine Translation in the Americas “Machine translation: from real users to research”. Springer, Berlin, pp 74–85

Gaspari F, Almaghout H, Doherty S (2015) A survey of machine translation competences: insights for translation technology educators and practitioners. Perspect Stud Translatol 23(3):333–358

Giménez J, Màrquez L (2008) A smorgasbord of features for automatic MT evaluation. In: Proceedings of the third workshop on Statistical Machine Translation, Columbus, pp 195–198

Giménez J, Màrquez L, Comelles E, Catellón I, Arranz V (2010) Document-level automatic MT evaluation based on discourse representations. In: Proceedings of the joint fifth workshop on Statistical Machine Translation and MetricsMATR, Uppsala, pp 333–338

Guerberof A (2014) Correlations between productivity and quality when post-editing in a professional context. Mach Transl 28(3–4):165–186

Guzmán F, Joty S, Màrquez L, Nakov P (2014) Using discourse structure improves machine translation evaluation. In: Proceedings of the 52nd annual meeting of the Association for Computational Linguistics, Baltimore, June 23–25 2014, pp 687–698

Harrison C (1980) Readability in the classroom. Cambridge University Press, Cambridge

Holmes JS (1988) Translated! Papers on literary translation and translation studies. Rodopi, Amsterdam

House J (1997) Translation quality assessment. A model revisited. Gunter Narr, Tübingen

House J (2001) Translation quality assessment: linguistic description versus social evaluation. Meta 46(2):243–257

House J (2015) Translation quality assessment: past and present. Routledge, London

Hovy E, King M, Popescu-Belis A (2002) Principles of context-based machine translation evaluation. Mach Transl 17(1):43–75

International Organization for Standardisation (2002) ISO/TR 16982:2002 ergonomics of human-system interaction—usability methods supporting human centred design. International Organization for Standardisation, Geneva. Available via: http://www.iso.org/iso/catalogue_detail?csnumber=31176 . Accessed 20 May 2017

International Organization for Standardisation (2012) ISO/TS 11669:2012 technical specification: translation projects – general guidance. International Organization for Standardisation, Geneva. Available via: https://www.iso.org/standard/50687.html . Accessed 20 May 2017

Jones MJ (1988) A longitudinal study of the readability of the chairman’s narratives in the corporate reports of a UK company. Account Bus Res 18(72):297–305

Karwacka W (2014) Quality assurance in medical translation. JoSTrans 21:19–34

Kincaid JP, Fishburne RP Jr, Rogers RL, Chissom BS (1975) Derivation of new readability formulas (automated readability index, Fog count and Flesch reading ease formula) for navy enlisted personnel (No. RBR-8-75). Naval Technical Training Command Millington TN Research Branch

Klerke S, Castilho S, Barrett M, Søgaard A (2015) Reading metrics for estimating task efficiency with MT output. In: Proceedings of the sixth workshop on cognitive aspects of computational language learning, Lisbon, 18 September 2015, pp 6–13

Koby GS, Fields P, Hague D, Lommel A, Melby A (2014) Defining translation quality. Revista Tradumàtica 12:413–420. Available via: https://ddd.uab.cat/pub/tradumatica/tradumatica_a2014n12/tradumatica_a2014n12p413.pdf . Accessed 12 May 2017

Koehn P (2009) Statistical machine translation. Cambridge University Press, Cambridge

Koehn P (2010) Enabling monolingual translators: post-editing vs. options. In: Proceedings of human language technologies: the 2010 annual conference of the North American chapter of the ACL, Los Angeles, pp 537–545

Koponen M (2012) Comparing human perceptions of post-editing effort with post-editing operations. In: Proceedings of the seventh workshop on Statistical Machine Translation, Montréal, 7–8 June 2012, pp 181–190

Krings HP (2001) Repairing texts: empirical investigations of machine translation post-editing processes. Kent State University Press, Kent

Kushner S (2013) The freelance translation machine: algorithmic culture and the invisible industry. New Media Soc 15(8):1241–1258

Kussmaul P (1995) Training the translator. John Benjamins, Amsterdam

Labaka G, España-Bonet C, Marquez L, Sarasola K (2014) A hybrid machine translation architecture guided by syntax. Mach Transl 28(2):91–125

Lacruz I, Shreve GM (2014) Pauses and cognitive effort in post-editing. In: O’Brien S, Balling LW, Carl M, Simard M, Specia L (eds) Post-editing of machine translation: processes and applications. Cambridge Scholars Publishing, Newcastle-Upon-Tyne, pp 246–272

Lassen I (2003) Accessibility and acceptability in technical manuals: a survey of style and grammatical metaphor. John Benjamins, Amsterdam

Lauscher S (2000) Translation quality assessment: where can theory and practice meet? Translator 6(2):149–168

Lavie A, Agarwal A (2007) METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments. In: Proceedings of the workshop on Statistical Machine Translation, Prague, pp 228–231

Liu C, Dahlmeier D, Ng HT (2011) Better evaluation metrics lead to better machine translation. In: Proceedings of the 2011 conference on empirical methods in natural language processing, Edinburgh, 27–31 July 2011, pp 375–384

Lommel A, Uszkoreit H, Burchardt A (2014) Multidimensional Quality Metrics (MQM): a framework for declaring and describing translation quality metrics. Revista Tradumàtica 12:455–463. Available via: https://ddd.uab.cat/pub/tradumatica/tradumatica_a2014n12/tradumatica_a2014n12p455.pdf . Accessed 12 May 2017

Moorkens J (2017) Under pressure: translation in times of austerity. Perspect Stud Trans Theory Pract 25(3):464–477

Moorkens J, O’Brien S, da Silva IAL, de Lima Fonseca NB, Alves F (2015) Correlations of perceived post-editing effort with measurements of actual effort. Mach Transl 29(3):267–284

Moran J, Lewis D, Saam C (2014) Analysis of post-editing data: a productivity field test using an instrumented CAT tool. In: O’Brien S, Balling LW, Carl M, Simard M, Specia L (eds) Post-editing of machine translation: processes and applications. Cambridge Scholars Publishing, Newcastle-Upon-Tyne, pp 128–169

Muegge U (2015) Do translation standards encourage effective terminology management? Revista Tradumàtica 13:552–560. Available via: https://ddd.uab.cat/pub/tradumatica/tradumatica_a2015n13/tradumatica_a2015n13p552.pdf . Accessed 2 May 2017

Munday J (2008) Introducing translation studies: theories and applications. Routledge, London

Muzii L (2014) The red-pen syndrome. Revista Tradumàtica 12:421–429. Available via: https://ddd.uab.cat/pub/tradumatica/tradumatica_a2014n12/tradumatica_a2014n12p421.pdf . Accessed 30 May 2017

Nida E (1964) Toward a science of translating. Brill, Leiden

Nielsen J (1993) Usability engineering. Morgan Kaufmann, Amsterdam

Nießen S, Och FJ, Leusch G, Ney H (2000) An evaluation tool for machine translation: fast evaluation for MT research. In: Proceedings of the second international conference on language resources and evaluation, Athens, 31 May–2 June 2000, pp 39–45

Nord C (1997) Translating as a purposeful activity. St. Jerome, Manchester

O’Brien S (2011) Towards predicting post-editing productivity. Mach Transl 25(3):197–215

O’Brien S (2012) Towards a dynamic quality evaluation model for translation. JoSTrans 17:55–77

O’Brien S, Roturier J, de Almeida G (2009) Post-editing MT output: views from the researcher, trainer, publisher and practitioner. Paper presented at the Machine Translation Summit XII, Ottawa, 26 August 2009

O’Brien S, Choudhury R, Van der Meer J, Aranberri Monasterio N (2011) Dynamic quality evaluation framework. Available via: https://goo.gl/eyk3Xf. Accessed 21 May 2017

O’Brien S, Simard M, Specia L (eds) (2012) Workshop on post-editing technology and practice (WPTP 2012). In: Conference of the Association for Machine Translation in the Americas (AMTA 2012), San Diego

O’Brien S, Simard M, Specia L (eds) (2013) Workshop on post-editing technology and practice (WPTP 2013). Machine Translation Summit XIV, Nice

O’Brien S, Balling LW, Carl M, Simard M, Specia L (eds) (2014) Post-editing of machine translation: processes and applications. Cambridge Scholars Publishing, Newcastle-Upon-Tyne

O’Hagan M (2012) The impact of new technologies on translation studies: a technological turn. In: Millán C, Bartrina F (eds) The Routledge handbook of translation studies. Routledge, London, pp 503–518

Owczarzak K, van Genabith J, Way A (2007) Evaluating machine translation with LFG dependencies. Mach Transl 21(2):95–119

Padó S, Cer D, Galley M, Jurafsky D, Manning CD (2009) Measuring machine translation quality as semantic equivalence: a metric based on entailment features. Mach Transl 23(2–3):181–193

Papineni K, Roukos S, Ward T, Zhu W (2002) BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th annual meeting on Association for Computational Linguistics, Philadelphia, pp 311–318

Plitt M, Masselot F (2010) A productivity test of Statistical Machine Translation post-editing in a typical localisation context. Prague Bull Math Linguist 93:7–16

Pokorn NK (2005) Challenging the traditional axioms: translation into a non-mother tongue. John Benjamins, Amsterdam

Popović M (2015) chrF: character n-gram F-score for automatic MT evaluation. In: Proceedings of the 10th workshop on Statistical Machine Translation (WMT-15), Lisbon, 17–18 September 2015, pp 392–395

Popović M, Ney H (2009) Syntax-oriented evaluation measures for machine translation output. In: Proceedings of the fourth workshop on Statistical Machine Translation (StatMT ’09), Athens, pp 29–32

Proctor R, Vu K, Salvendy G (2002) Content preparation and management for web design: eliciting, structuring, searching, and displaying information. Int J Hum Comput Interact 14(1):25–92

Pym A (2010) Exploring translation theories. Routledge, Abingdon

Pym A (2015) Translating as risk management. J Pragmat 85:67–80

Ray R, DePalma D, Pielmeier H (2013) The price-quality link. Common Sense Advisory, Boston

Reeder F (2004) Investigation of intelligibility judgments. In: Frederking RE, Taylor KB (eds) Proceedings of the 6th conference of the Association for MT in the Americas, AMTA 2004. Springer, Heidelberg, pp 227–235

Rehm G, Uszkoreit H (2012) META-NET White Paper Series: Europe’s languages in the digital age. Springer, Heidelberg

Reiss K (1971) Möglichkeiten und Grenzen der Übersetzungskritik. Hueber, Munich

Reiss K, Vermeer HJ (1984) Grundlegung einer allgemeinen Translationstheorie. Niemeyer, Tübingen

Roturier J (2006) An investigation into the impact of controlled English rules on the comprehensibility, usefulness, and acceptability of machine-translated technical documentation for French and German users. Dissertation, Dublin City University

Sacher H, Tng T, Loudon G (2001) Beyond translation: approaches to interactive products for Chinese consumers. Int J Hum Comput Interact 13:41–51

Schäffner C (1997) From ‘good’ to ‘functionally appropriate’: assessing translation quality. Curr Issue Lang Soc 4(1):1–5

Secară A (2005) Translation evaluation: a state of the art survey. In: Proceedings of the eCoLoRe/MeLLANGE workshop, Leeds, 21–23 March 2005, pp 39–44

Smith M, Taffler R (1992) Readability and understandability: different measures of the textual complexity of accounting narrative. Account Audit Account J 5(4):84–98

Snover M, Dorr B, Schwartz R, Micciulla L, Makhoul J (2006) A study of translation edit rate with targeted human annotation. In: Proceedings of the 7th conference of the Association for Machine Translation in the Americas: “Visions for the future of Machine Translation”, Cambridge, 8–12 August 2006, pp 223–231

Somers H, Wild E (2000) Evaluating Machine Translation: The Cloze procedure revisited. Paper presented at Translating and the Computer 22, London

Sousa SC, Aziz W, Specia L (2011) Assessing the post-editing effort for automatic and semi-automatic translations of DVD subtitles. Paper presented at the recent advances in natural language processing workshop, Hissar, pp 97–103

Specia L (2011) Exploiting objective annotations for measuring translation post-editing effort. In: Proceedings of the fifteenth annual conference of the European Association for Machine Translation, Leuven, 30–31 May, pp 73–80

Stewart D (2012) Translating tourist texts into English as a foreign language. Liguori, Napoli

Stewart D (2013) From pro loco to pro globo: translating into English for an international readership. Interpret Transl Train 7(2):217–234

Stymne S, Danielsson H, Bremin S, Hu H, Karlsson J, Lillkull AP, Wester M (2012) Eye-tracking as a tool for machine translation error analysis. In: Calzolari N, Choukri K, Declerck T, Doğan MU, Maegaard B, Mariani J, Moreno A, Odijk J, Piperidis S (eds) Proceedings of the eighth international conference on language resources and evaluation, Istanbul, 23–25 May 2012, pp 1121–1126

Stymne S, Tiedemann J, Hardmeier C, Nivre J (2013) Statistical machine translation with readability constraints. In: Proceedings of the 19th Nordic conference of computational linguistics, Oslo, 22–24 May 2013, pp 375–386

Suojanen T, Koskinen K, Tuominen T (2015) User-centered translation. Routledge, Abingdon

Tang J (2017) Translating into English as a non-native language: a translator trainer’s perspective. Translator 23(4):388–403

Tatsumi M (2009) Correlation between automatic evaluation metric scores, post-editing speed, and some other factors. In: Proceedings of MT Summit XII, Ottawa, pp 332–339

Toury G (1995) Descriptive translation studies and beyond. John Benjamins, Amsterdam

Turian JP, Shen L, Melamed ID (2003) Evaluation of machine translation and its evaluation. In: Proceedings of MT Summit IX, New Orleans, pp 386–393

Uszkoreit H, Lommel A (2013) Multidimensional quality metrics: a new unified paradigm for human and machine translation quality assessment. Paper presented at Localisation World, London, 12–14 June 2013

Van Slype G (1979) Critical study of methods for evaluating the quality of machine translation. Bureau Marcel van Dijk, Bruxelles

White J, O’Connell T, O’Mara F (1994) The ARPA MT evaluation methodologies: evolution, lessons and future approaches. In: Technology partnerships for crossing the language barrier, Proceedings of the first conference of the Association for Machine Translation in the Americas, Columbia, pp 193–205

Wilks Y (1994) Stone soup and the French room. In: Zampolli A, Calzolari N, Palmer M (eds) Current issues in computational linguistics: in honour of Don Walker. Linguistica Computazionale IX–X:585–594. Reprinted in Ahmad K, Brewster C, Stevenson M (eds) (2007) Words and intelligence I: selected papers by Yorick Wilks. Springer, Heidelberg, pp 255–265

Williams J (2013) Theories of translation. Palgrave Macmillan, Basingstoke

Wong BTM, Kit C (2012) Extending machine translation evaluation metrics with lexical cohesion to document level. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, Jeju Island, 12–14 July 2012, pp 1060–1068

Acknowledgments

This work has been partly supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Author information

Authors and Affiliations

ADAPT Centre/School of Computing, Dublin City University, Dublin, Ireland

Sheila Castilho & Federico Gaspari

School of Humanities and Languages, The University of New South Wales, Sydney, Australia

Stephen Doherty

University for Foreigners “Dante Alighieri” of Reggio Calabria, Reggio Calabria, Italy

Federico Gaspari

ADAPT Centre/School of Applied Language and Intercultural Studies, Dublin City University, Dublin, Ireland

Joss Moorkens

Corresponding author

Correspondence to Sheila Castilho.

Editor information

Editors and Affiliations

Sheila Castilho

Rights and permissions

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Castilho, S., Doherty, S., Gaspari, F., Moorkens, J. (2018). Approaches to Human and Machine Translation Quality Assessment. In: Moorkens, J., Castilho, S., Gaspari, F., Doherty, S. (eds) Translation Quality Assessment. Machine Translation: Technologies and Applications, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-319-91241-7_2

DOI: https://doi.org/10.1007/978-3-319-91241-7_2

Published: 14 July 2018

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-91240-0

Online ISBN: 978-3-319-91241-7

eBook Packages: Computer Science, Computer Science (R0)


