The Effects of Context-Dependent and Context-Independent Test Design on Iranian EFL Students' Performance on Vocabulary Tests

Authors

  • Mahmoud Abdi Tabari, Author, Oklahoma State University

DOI:

https://doi.org/10.5294/4587

Keywords:

Context-dependent test, context-independent test, vocabulary test.

Abstract

This study explores the role of context in vocabulary assessment. It also examines how learners in two groups of slightly different proficiency levels performed on two tests, one context-dependent and one context-independent. The performances of 40 university students of English as an L2 on identical items were compared across two tests: a matching test (a context-independent test) and a C-test (a context-dependent test). The results showed that all participants performed slightly better on the matching test than on the C-test, which suggests that context did not play an important role in their C-test performance. Moreover, the high-intermediate learners performed considerably better on both tests than the intermediate participants. It was therefore concluded that higher-proficiency learners make more use of context when responding to test items than lower-proficiency learners do. These results help identify the most suitable format for vocabulary assessment.

Downloads

Download data is not yet available.

Author Biography

Mahmoud Abdi Tabari, Oklahoma State University

Mahmoud Abdi Tabari is a PhD student in Education at Oklahoma State University, USA, and a member of the Young Researchers' Club. His research interests include second language acquisition, second language writing, and vocabulary assessment.

References

Babaii, E., & Ansari, H. (2001). The C-test: A valid operationalization of reduced redundancy principle. System, 29, 209-219.

Bachman, L. F. (1990). Fundamental concepts in language testing. Oxford: Oxford University Press.

Bachman, L. F. & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.

Bachman, L. F. (2000). Modern language testing at the turn of the century: Assuring that what we count counts. Language Testing, 17(1), 1-42.

Laufer, B., Elder, C., Hill, K., & Congdon, P. (2004). Size and strength: Do we need both to measure vocabulary knowledge? Language Testing, 21(2), 202-220.

Brown, J. D. (1985). Tailored cloze: Improved with classical item analysis technique. Language Testing, 4(1), 19-31.

Brown, J. D. (1996). Testing in language programs. New Jersey: Prentice Hall Regents.

Cameron, L. (2002). Measuring vocabulary size in English as an additional language. Language Testing, 19(2), 145-173.

Chapelle, C. A., & Abraham, R. G. (1990). Cloze method: What difference does it make? Language Testing, 7(2), 121-146.

Connelly, M. (1997). Using C-tests in English with postgraduate students. English for Specific Purposes, 16(2), 139-150.

De la Fuente, M. J. (2002). Negotiation and oral acquisition of L2 vocabulary: The roles of input and output in the receptive and productive acquisition of words. Studies in Second Language Acquisition, 24(1), 81-112.

Eckes, T., & Grotjahn, R. (2006). A closer look at the construct validity of C-tests. Language Testing, 23(3), 291-319.

Farhadi, H., & Keramati, M. N. (1996). A text-driven method for the deletion procedure in cloze passages. Language Testing, 13(1), 191-207.

Hatch, E., & Farhady, H. (1981). Research design and statistics for applied linguistics. Tehran: Rahnama Publication.

Hedge, T. (2000). Teaching and learning in the language classroom. Oxford: Oxford University Press.

Hughes, A. (2003). Testing for language teachers. Cambridge: Cambridge University Press.

Jafarpour, A. (1999). Can the C-test be improved with classical item analysis? System, 27, 79-89.

Klein-Braley, C., & Raatz, U. (1984). A survey of research on the C-test. Language Testing, 1(2), 135-146.

Klein-Braley, C. (1997). C-tests in the context of reduced redundancy testing: An appraisal. Language Testing, 14(1), 47-84.

Khoii, R., Fotovat Ahmadi, P., & Shokouhian, M. (2007). Reading for Ideas, I. Tehran: Rahnama Publication.

Khoii, R., Fotovat Ahmadi, P., & Shokouhian, M. (2007). Reading For Ideas, II. Tehran: Rahnama Publication.

Laufer, B., & Goldstein, Z. (2004). Testing vocabulary knowledge: Size, strength, and computer adaptiveness. Language Learning, 54(3), 399-436.

Meara, P. (2000). The rediscovery of vocabulary. Second Language Research, 18(1).

Meara, P., & Buxton, B. (1983). An alternative to multiple choice vocabulary tests. Language Testing, 2(3), 143-154.

Meara, P., & Fitzpatrick, T. (2000). Lex30: An improved method of assessing productive vocabulary in an L2. System, 28(3), 19-30.

Meara, P., & Nation, I. S. P. (2002). Vocabulary. In N. Schmitt (Ed). An introduction to applied linguistics. London: Arnold Press.

Nation, I. S. P. (1982). Beginning to learn foreign vocabulary: A review of the research. RELC Journal, 13, 14-22.

Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.

Oxford, R. L. (1990). Language learning strategies. Boston: Heinle & Heinle Publishers.

Perkin, K., & Lineville, S. (1984). A construct definition study of a standardized vocabulary test. Language Testing, 1(2), 125-141.

Qian, D. D., & Schedl, M. (2004). Evaluation of an in-depth vocabulary knowledge measure for assessing reading performance. Language Testing, 21(1), 28-52.

Read, J. (1988). Measuring the vocabulary knowledge of second language learners. RELC Journal, 19, 12.

Read, J. (1993). The development of a new measure of L2 vocabulary knowledge. Language Testing, 10(3), 355-371.

Read, J. (2000). Assessing vocabulary. Cambridge: Cambridge University Press.

Read, J., & Chapelle, C. A. (2001). A framework for second language vocabulary assessment. Language Testing, 18(1), 1-32.

Sasaki, M. (2000). Effects of cultural schemata on students' test taking processes for cloze tests: A multiple data source approach. Language Testing, 17(1), 85-114.

Schmitt, N. (1999). The relationship between TOEFL vocabulary items and meaning, association, collocation and word-class knowledge. Language Testing, 16(2), 189-216.

Schmitt, N. (2000). Vocabulary in language teaching. Cambridge: Cambridge University Press.

Schmitt, N., Schmitt, D., & Clapham, C. (2001). Developing and exploring the behaviour of two new versions of the Vocabulary Levels Test. Language Testing, 18(1), 55-88.

Shiotsu, T., & Weir, C. J. (2007). The relative significance of syntactic knowledge and vocabulary breadth in the prediction of reading comprehension test performance. Language Testing, 24(1), 99-128.

Published

2014-10-29

How to Cite

Abdi Tabari, M. (2014). The effects of context-dependent and context-independent test design on Iranian EFL students' performance on vocabulary tests. Latin American Journal of Content & Language Integrated Learning, 7(2), 83–102. https://doi.org/10.5294/4587

Issue

Section

Articles