Readability of texts: Human evaluation versus computer index

Pooneh Heydari, A. Mehdi Riazi

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

This paper reports a study that explored whether expert EFL readers' evaluations of English text difficulty differ from computer-based evaluations. Forty-three participants, including university EFL instructors and graduate students, read 10 English passages and completed a Likert-type scale on their perception of the different components of text difficulty. In parallel, the same 10 texts were processed with Microsoft Word and their Flesch Readability indices were calculated. The readers' evaluations were then compared with the computed indices. Results revealed significant differences between participants' evaluations of text difficulty and the Flesch Readability indices of the texts. Findings also indicated no significant difference between the EFL instructors' and the graduate students' evaluations of text difficulty. These findings imply that while readability formulas are valuable measures for evaluating the level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of readability formulas and the findings of the present study.
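The computer-based measure in the abstract is the Flesch Reading Ease score, which Microsoft Word reports in its readability statistics. As a rough illustration of how such an index is computed, the sketch below implements the standard formula, 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). The vowel-group syllable counter is a crude heuristic of my own for illustration (real tools use pronunciation dictionaries), so scores will only approximate Word's output.

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count vowel groups, with a crude silent-e fix.
    A heuristic stand-in for dictionary-based syllabification."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Higher scores indicate easier text (roughly 0-100 in practice)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, a short monosyllabic sentence such as "The cat sat on the mat." scores far higher (easier) than a sentence built from long Latinate words, which is exactly the kind of surface-feature judgment the study compares against human readers' perceptions.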

Original language: English
Pages (from-to): 177-190
Number of pages: 14
Journal: Mediterranean Journal of Social Sciences
Volume: 3
Issue number: 1
DOIs
Publication status: Published - Jan 2012
Externally published: Yes

Keywords

  • Flesch reading ease readability formula
  • Readability formula
  • Text readability
  • Validity of readability formulas
