7.1 Assessment and the CEFR (1)

  • If we want assessment to be valid, reliable, and feasible, we need to specify:
    What is assessed: according to the CEFR, communicative activities (contexts, texts, and tasks). See examples.
    How performance is interpreted: assessment criteria. See examples
    How to make comparisons between different tests and ways of assessment (for example, between public examinations and teacher assessment). Two main procedures:
    Social moderation: discussion between experts
    Benchmarking: comparison of samples in relation to standardized definitions and examples, which become reference points (benchmarks)
    Guidelines for good practice: EALTA
  • Types of tests:

    Proficiency tests: designed to measure people’s ability in a language, regardless of any training. “Proficient”: command of the language, for a particular purpose or for general purposes.

    Achievement tests: most teachers are not responsible for proficiency tests, but for achievement tests. They are normally related to language courses. Two approaches:
    to base achievement tests on the textbook (or the syllabus), so that only what is covered in the classes is tested,
    or, much better, to base test content on course objectives. More beneficial washback. The long-term interests of the students are best served by this approach.
    Two types: final achievement tests, and progress achievement tests (formative assessment)

    Diagnostic tests: Used to identify learners’ strengths and weaknesses (example: Dialang)

    Placement tests: to place students at the stage most appropriate to their abilities
  • A test is valid if it measures accurately what it is intended to measure; in other words, the information gained is an accurate representation of the proficiency of the candidate. This general type of validity is called “construct validity”: the validity of the construct, the thing we want to measure.

    Content validity: A test has it if its content constitutes a representative sample of the language skills, structures, etc. that it wants to measure. So, first, we need a specification of the skills or structures that we want to cover, and then we compare that specification with the test itself. For example, for B2 writing skills, writing formal letters is one of the subskills shown in the specification; there are more, and the more of them we cover, the more valid the test will be. The more content validity, the more construct validity and the more beneficial the backwash effect.

    Criterion-related validity: Results on the test agree with other (independent and highly dependable) results. This independent assessment is the criterion measure.

    Two types:

    Concurrent validity: we compare the criterion test and the test that we want to check. They both take place at about the same time.
    Example 1: we administer a 45-minute oral test in which all the subskills, tasks, and operations are tested, but only to a sample of the students. This is the criterion test. Then we give 10-minute interviews to all the students at that level. We compare the results, and they tell us whether 10 minutes is enough or not. This is expressed as a “correlation coefficient” between the criterion and the test being validated (a sketch of this calculation appears after this list).
    Example 2: we compare the results of a general test (Pruebas Estandarizadas) with the teachers’ assessment.

    Predictive validity: the test predicts future performance of the students. A placement test can easily be validated by the teachers teaching the students by checking if the students are well placed or not.

    Validity in scoring: not only the items need to be valid, but also the way in which the responses are scored. For example, a reading test may call for short written responses. If the scoring of these responses takes into account spelling and grammar, then it is not valid (it is not measuring what it is intended to measure). Same for the scoring of writing or speaking.

    Face validity: the test has to look as if it measures what it is supposed to measure. It is not a scientific notion, but it is important (for candidates, teachers, employers). For example, a written test to check pronunciation has little face validity.
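
A minimal sketch of the concurrent-validity check described above, in Python, using entirely hypothetical scores for the ten sampled students (the numbers, like the test lengths, are only illustrative): it computes Pearson's r between the long criterion oral test and the short interview.

```python
# Hedged sketch: concurrent validity as a correlation between a long
# criterion test and a shorter test, using hypothetical scores (0-100).
from statistics import correlation  # available in Python 3.10+

criterion_scores = [82, 74, 91, 60, 55, 78, 88, 67, 72, 95]  # 45-minute criterion oral test
interview_scores = [80, 70, 93, 58, 60, 75, 85, 70, 69, 92]  # 10-minute interview

r = correlation(criterion_scores, interview_scores)  # Pearson's r
print(f"Validity coefficient (Pearson r): {r:.2f}")
```

The closer the coefficient is to 1, the more confident we can be that the short interview ranks the students in much the same way as the full criterion test.
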
  • Reliability: A student being tested twice will get the same result (technical concept: the rank order of the candidates is replicated in two separate—real or simulated—administrations of the same assessment)

    We compare two tests taken by the same group of students and get a reliability coefficient: if all the students get exactly the same results, the coefficient is 1 (this never happens in practice). High-stakes tests need a higher coefficient than lower-stakes exams. Results shouldn’t depend on chance or on particular circumstances.

    In order to get two comparable sets of results, there are three procedures:

    Test-retest method: the students take the same test again
    Alternate forms method: the students take two alternate forms of the same test
    Split-half method: you split the test into two (equivalent) halves and compare them as if they were two different tests. You get a “coefficient of internal consistency”.

    We also need to know the standard error of measurement (SEM) of a test. It is inversely related to the reliability coefficient and can be obtained through statistical analysis. With this number, we can estimate where a student’s true score lies. For example, a very reliable test will have a low standard error of measurement, and therefore the student will always get a very similar result no matter how many times they take the test. In a less reliable test, the true score is less well defined. The true score lies in a range that varies depending on the standard error of measurement of the test (a sketch of this calculation appears after this list).

    These numbers are important in order to compare tests and to make decisions (by companies, governments, etc.) based on those results.

    Another statistical procedure commonly used now is Item Response Theory. Very technical.

    Scorer reliability: there is also a scorer reliability coefficient, the level of agreement given by the same or different scorers on different occasions. If the scoring is not reliable, the test results cannot be reliable.
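
A minimal sketch of the split-half method and the standard error of measurement, with hypothetical half-test scores for twelve candidates. It uses two standard formulas: the Spearman-Brown correction, r = 2r_half / (1 + r_half), to estimate full-test reliability from the correlation between the halves, and SEM = SD × sqrt(1 − r) to estimate the band within which a candidate's true score is likely to lie.

```python
# Hedged sketch: split-half reliability, SEM, and a true-score band,
# using hypothetical item totals for the odd and even halves of a test.
from statistics import correlation, stdev
from math import sqrt

odd_half  = [14, 11, 18,  9, 16, 12, 17, 10, 15, 13,  8, 19]
even_half = [13, 12, 17,  8, 15, 11, 18,  9, 14, 12,  9, 18]

# Correlation between the halves, then the Spearman-Brown correction
# to estimate the reliability of the full-length test.
r_half = correlation(odd_half, even_half)
reliability = 2 * r_half / (1 + r_half)

# SEM = standard deviation of the total scores * sqrt(1 - reliability).
totals = [o + e for o, e in zip(odd_half, even_half)]
sem = stdev(totals) * sqrt(1 - reliability)

observed = 26  # one candidate's observed total score
print(f"Split-half reliability (Spearman-Brown): {reliability:.2f}")
print(f"Standard error of measurement: {sem:.2f}")
# The true score lies, with roughly 95% confidence, within about 2 SEM
# of the observed score.
print(f"Likely true-score range: {observed - 2*sem:.1f} to {observed + 2*sem:.1f}")
```
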

  • Item analysis (a sketch of the first two calculations follows this list):
    Facility value: the proportion of candidates who answer an item correctly
    Discrimination indices: how well an item separates stronger from weaker candidates; drop some items, improve others
    Analyse distractors
    Item banking
    See example from Fuensanta
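
A minimal Python sketch of the first two steps (facility value and a simple top-group versus bottom-group discrimination index), with hypothetical right/wrong response data; distractor analysis and item banking are not shown.

```python
# Hedged sketch: facility value and a simple discrimination index
# for each item, using hypothetical True/False (correct/incorrect) data.
responses = [                      # one row per candidate, one column per item
    [True,  True,  False, True],
    [True,  False, False, True],
    [True,  True,  True,  True],
    [False, False, False, True],
    [True,  True,  False, False],
    [False, False, False, True],
]
totals = [sum(row) for row in responses]                 # total score per candidate
ranked = sorted(range(len(responses)), key=lambda i: totals[i], reverse=True)
half = len(ranked) // 2                                  # size of the top/bottom groups

for item in range(len(responses[0])):
    # Facility value: proportion of all candidates answering the item correctly.
    facility = sum(row[item] for row in responses) / len(responses)
    # Discrimination: facility in the top half minus facility in the bottom half.
    top = sum(responses[i][item] for i in ranked[:half]) / half
    bottom = sum(responses[i][item] for i in ranked[-half:]) / half
    print(f"Item {item + 1}: facility {facility:.2f}, discrimination {top - bottom:.2f}")
```

Items with a very high or very low facility value, or with a low (or negative) discrimination index, are candidates for dropping or rewriting.
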
  • How to make tests more reliable (Hughes)
    Take enough samples of behaviour: the more items, the more reliable the test. The higher the stakes, the longer the test should be. Example from the Bible (Hughes, p. 45)
    Exclude items which do not discriminate well between weaker and stronger students
    Do not allow candidates too much freedom. Example p. 46
    Write unambiguous items: Critical scrutiny of colleagues, pre-testing (trialling, piloting)
    Provide clear and explicit instructions: write them down, read them aloud. No problem with writing them in L1.
    Ensure that tests are well laid out and perfectly legible
    Make candidates familiar with format and testing techniques
    Provide uniform and non-distracting conditions of administration (specified timing, good acoustic conditions)
  • Use items which permit scoring which is as objective as possible (better one-word response than multiple choice)
    Make comparisons between candidates as direct as possible (no choice of items)
    Provide a detailed scoring key
    Train scorers
    Agree acceptable responses and appropriate scores at the beginning of the scoring process. Score a sample. Choose representative examples. Agree. Then scorers can begin to score.
    Identify candidates by number, not by name
    Employ multiple, independent scorers: at least two, working independently. Then a third, senior scorer gets the results and investigates discrepancies (as sketched below).
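
A minimal sketch of that double-marking check in Python, with hypothetical scores and an assumed tolerance of two points (both are illustrative choices, not a prescribed standard): pairs of marks that differ by more than the tolerance are referred to the senior scorer.

```python
# Hedged sketch: flag score discrepancies between two independent scorers.
TOLERANCE = 2  # assumed maximum acceptable difference, in score points

scores = {                       # candidates identified by number, not name
    "candidate_017": (14, 15),
    "candidate_023": (11, 16),
    "candidate_031": (18, 18),
}

for candidate, (scorer_a, scorer_b) in scores.items():
    if abs(scorer_a - scorer_b) > TOLERANCE:
        print(f"{candidate}: {scorer_a} vs {scorer_b} -> refer to senior scorer")
    else:
        final = (scorer_a + scorer_b) / 2   # e.g. average the two marks
        print(f"{candidate}: agreed score {final}")
```
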

  • Washback/Backwash: (One of the) main reasons for a language teacher/school/department to use appropriate forms of assessment.

    Test the abilities/skills you want to encourage. Give them sufficient weight in relation to other skills.
    Sample widely and unpredictably: Test across the full range of the specifications
    Use direct testing
    Make testing criterion-referenced (CEFR)
    Base achievement tests on objectives
    Ensure that the test is known and understood by students and teachers (the more transparent, the better)
    (Where necessary, provide assistance to teachers)
    Counting the cost: individual direct testing is expensive, but what is the cost of not achieving beneficial washback?

  • Calibrate scales: collect samples of performance, and use them as models, reference points (European Study)
  • 7.1 Assessment and the CEFR (1)

    1. Jesús Ángel González
    2. What does the word suggest? What sort of emotions does it convey? Try to write a definition. What does it imply? Which characteristics should it have?
    3. – What does the word suggest? – What sort of emotions does it convey? – Try to write a definition. What does it imply? • Collecting information • Analyzing the information and making an assessment • Taking decisions according to the assessment made: – Pedagogical decisions (formative assessment) – Social decisions – Which characteristics should it have? • Validity, reliability, feasibility
    4. – Assessment: Assessment of the proficiency of the language user – 3 key concepts: • Validity: the information gained is an accurate representation of the proficiency of the candidates • Reliability: A student being tested twice will get the same result (technical concept: the rank order of the candidates is replicated in two separate—real or simulated—administrations of the same assessment) • Feasibility: The procedure needs to be practical, adapted to the available elements and features
    5. – If we want assessment to be valid, reliable, and feasible, we need to specify: • What is assessed: according to the CEFR, communicative activities (contexts, texts, and tasks). See examples. • How performance is interpreted: assessment criteria. See examples • How to make comparisons between different tests and ways of assessment (for example, between public examinations and teacher assessment). Two main procedures: – Social moderation: discussion between experts – Benchmarking: comparison of samples in relation to standardized definitions and examples, which become reference points (benchmarks) • Guidelines for good practice: EALTA
    6. TYPES OF ASSESSMENT 1 Achievement assessment / Proficiency assessment 2 Norm-referencing (NR) / Criterion-referencing (CR) 3 Mastery learning CR / Continuum CR 4 Continuous assessment / Fixed assessment points 5 Formative assessment / Summative assessment 6 Direct assessment / Indirect assessment 7 Performance assessment / Knowledge assessment 8 Subjective assessment / Objective assessment 9 Checklist rating / Performance rating 10 Impression / Guided judgement 11 Holistic assessment / Analytic assessment 12 Series assessment / Category assessment 13 Assessment by others / Self-assessment
    7. Types of tests: • Proficiency tests • Achievement tests. 2 approaches: – To base achievement tests on the textbook/syllabus (contents) – To base them on course objectives. More beneficial washback. • Diagnostic tests • Placement tests
    8. – Validity: the information gained is an accurate representation of the proficiency of the candidates – Validity Types: • Construct validity (very general: the information gained is an accurate representation of the proficiency of the candidate. It checks the validity of the construct, the thing we want to measure) • Content validity. This checks whether the test’s content is a representative sample of the skills or structures that it wants to measure. In order to check this we need a complete specification of all the skills or structures we want to cover. If it covers only 5%, it has less content validity than if it covers 25%.
    9. – Validity Types: • Criterion-related validity: results on the test agree with other dependable results (criterion test) – Concurrent validity. We compare the test results with the criterion test. – Predictive validity. The test predicts future performance. A placement test is validated by the teachers who teach the selected students. • Validity in scoring. Not only the items need to be valid, but also the way in which responses are scored (taking into account grammar mistakes in a reading comprehension exam is not valid) • Face validity: the test has to look as if it measures what it is supposed to measure. A written test to check pronunciation has little face validity.
    10. How to make tests more valid (Hughes): Write specifications for the test. Include a representative sample of the content of the specifications in the test. Whenever feasible, use direct testing. Make sure that the scoring relates directly to what is being tested. Try to make the test reliable.
    11. Reliability: A student being tested twice will get the same result (technical concept: the rank order of the candidates is replicated in two separate—real or simulated—administrations of the same assessment. Result: a reliability coefficient, theoretical maximum 1, if all the students get exactly the same result) - We compare two tests. Methods: - Test-retest: the student takes the same test again - Alternate forms: the students take two alternate forms of the same test - Split-half: you split the test into two equivalent halves and compare them as if they were two different tests.
    12. - Reliability coefficient / Standard error of measurement: A high-stakes test needs a high reliability coefficient (highest is 1), and therefore a very low standard error of measurement (a number obtained by statistical analysis). A lower-stakes exam does not need those coefficients. - True score: the real score that a student would get in a perfectly reliable test. In a very reliable test, the true score is clearly defined (the student will always get a similar result, for example 65-67). In a less reliable test, the range is wider (55-75). - Scorer reliability (coefficient). You compare the scores given by different scorers (examiners). The more agreement, the higher the scorer reliability coefficient.
    13. Item analysis: – Facility value – Discrimination indices: drop some, improve others – Analyse distractors – Item banking
    14. 1. Take enough samples of behaviour. 2. Exclude items which do not discriminate well 3. Do not allow candidates too much freedom. 4. Write unambiguous items 5. Provide clear and explicit instructions 6. Ensure that tests are well laid out and perfectly legible 7. Make candidates familiar with format and testing techniques 8. Provide uniform and non-distracting conditions of administration
    15. 9. Use items which permit scoring which is as objective as possible 10. Make comparisons between candidates as direct as possible 11. Provide a detailed scoring key 12. Train scorers 13. Agree acceptable responses and appropriate scores at the beginning of the scoring process. 14. Identify candidates by number, not by name 15. Employ multiple, independent scorers.
    16. To be valid, a test must be reliable (provide accurate measurement). A reliable test may not be valid at all (technically perfect, but globally wrong: it does not test what it is supposed to test).
    17. – Test the abilities/skills you want to encourage. – Sample widely and unpredictably – Use direct testing – Make testing criterion-referenced (CEFR) – Base achievement tests on objectives – Ensure that the test is known and understood by students and teachers – Counting the cost
    18. 1. Make a full and clear statement of the testing ‘problem’. 2. Write complete specifications for the test. 3. Write and moderate items. 4. Trial the items informally on native speakers and reject or modify problematic ones as necessary. 5. Trial the test on a group of non-native speakers similar to those for whom the test is intended. 6. Analyse the results of the trial and make any necessary changes. 7. Calibrate scales: collect samples of performance, use them as models (benchmarking) 8. Validate. 9. Write handbooks for test takers, test users and staff. 10. Train any necessary staff (interviewers, raters, etc.).
    19. Chapters from Hughes’ Testing for Language Teachers: 8. Common Test Techniques: Elaine, 24th 9. Testing Writing: Marta, Idoia, 22nd 10. Testing Oral Abilities: Paula, Ángela, 24th 11. Testing Reading: Lucía, 24th 12. Testing Listening: Lorena, 22nd 13. Testing Grammar and Vocabulary: Clara, Cristina, 22nd 14. Testing Overall Ability: Jefferson, 22nd 15. Tests for Young Learners: Tania, Diego, 24th
