48,95 €
incl. VAT
Expected publication: 20 January 2025
Our pre-order service - your advantage at no risk:
If we lower the price of this item before its publication date, we will automatically charge you the lower price on delivery.
- Format: ePub
This book addresses the topic of "next generation" assessment design head-on by proposing a new perspective on, and a new understanding of, the challenge of designing, developing, and implementing large- (and small-) scale educational testing programs.
- Devices: eReader
- with copy protection (DRM)
Other customers were also interested in
- Nathan Haselbauer: The Everything Test Your I.Q. Book (eBook, ePUB), 9,84 €
- Francesca Fantini: Therapeutic Assessment with Adults (eBook, ePUB), 42,95 €
- Jim Barrett: The Aptitude Test Workbook (eBook, ePUB), 11,95 €
- Essential Research Methods in Psychology (eBook, ePUB), 40,95 €
- Thomas Mrotzek: Der Test in Theorie und Praxis (eBook, ePUB), 12,99 €
- Jonathan Peirce: Building Experiments in PsychoPy (eBook, ePUB), 41,95 €
For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
Product details
- Publisher: Taylor & Francis
- Publication date: 20 January 2025
- Language: English
- ISBN-13: 9781040264270
- Item no.: 72259070
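As a quick sanity check on the listing data, the ISBN-13 above can be verified with the standard check-digit formula: weight the first twelve digits alternately by 1 and 3, and the check digit is whatever brings the total to a multiple of 10. A minimal Python sketch (the function name is illustrative):

```python
def isbn13_check_digit(first12: str) -> int:
    """Compute the ISBN-13 check digit from the first 12 digits."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# The listed ISBN 9781040264270: check digit of the first 12 digits is 0,
# matching the final digit, so the ISBN is internally consistent.
print(isbn13_check_digit("978104026427"))  # → 0
```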
- Manufacturer identification: manufacturer information is currently not available.
Richard M. Luecht is a Professor Emeritus of Educational Research Methodology at UNC-Greensboro. He has designed numerous algorithms and software programs for automated test assembly and devised a computerized adaptive multistage testing framework used by several large-scale testing programs.
1. An Overview of Assessment Engineering
1.1. Some Definitions of AE
1.2. Limitations of Traditional Test Design
1.2.1. A Test Assembly Example
1.2.2. An AE Perspective on Test Design and Specifications
1.3. AE for Comprehensive Test Design Aligned to Proficiency Claims and Score Interpretations
1.4. Chapter Summary
2. Construct Mapping and Evidence Modeling
2.1. What Are Constructs?
2.2. Limitations of Traditional Score Scale Construction and Their Interpretations
2.2.1. Traditional Development of a Score Scale
2.2.2. Content Blueprints and Standards-Based Alignment
2.2.3. Achievement Level Descriptors and Standard Setting
2.2.4. Item Mapping
2.2.5. Limitations of Traditional Blueprints, Standard Setting, and Item Mapping
2.3. Evidence-Centered Design
2.4. Construct Mapping: An Ordered Progression of Proficiency Claims and Evidence
2.4.1. Choosing a Construct Trajectory
2.4.2. Evidence Models and Proficiency Claims as Building Blocks
2.5. Creating a Construct Map
2.6. Chapter Summary
3. Task Models and Task Model Families
3.1. What Is a Task Model?
3.2. Task Modeling and Cognitive Complexity
3.2.1. Task Model Grammars and the Structure of Task Models
3.2.2. Graphical Representations of Task Model Complexity
3.3. Item Scale Location (Difficulty) and Task Complexity
3.4. Complexity Design Layers
3.5. Chapter Summary
4. Task and Item Difficulty Modeling
4.1. The Need for IDM Research
4.2. Isomorphism and Composability in IDM Research
4.3. Characterizing Statistical Item Difficulty for IDM Research
4.4. Foundations of IDM Research
4.4.1. The Roles of Item Features and Complexity Design Layers
4.4.2. Some Methods for Statistically Modeling Item Difficulty
4.5. Phases in Implementing IDM
4.6. Chapter Summary
5. Task Model Mapping
5.1. Some Limitations of Traditional Test Blueprints and Test Assembly Methods
5.1.1. Conditional Measurement Precision in Test Design and Assembly
5.1.2. The Consequences for Test Assembly of Using Fallible Content- and Cognitive-Coded Constraints
5.2. Building Task Model Maps (TMMs)
5.3. The Alignment of Complexity and Measurement Precision with a TMM
5.4. Chapter Summary
6. Item Model Families and Automatic Item Generation
6.1. Components and Types of Automatic Item Generation
6.1.1. AI and GPT Applications for AIG
6.1.2. Some Limitations of AIG
6.2. AE-Based Item Models
6.2.1. Item Model Structures and Content
6.2.2. Item Model Quality Control
6.3. Chapter Summary
7. AE Analytics and Quality Control
7.1. Object Analytics
7.1.1. Text Analytics
7.1.2. Image Analytics
7.1.3. Analytics for Audio/Visual Segments
7.1.4. Analytics for Tables and Tabular Data
7.1.5. Analytics for Mathematical Expressions and Equations
7.1.6. Analytics for Numbers/Numerical Sets
7.1.7. Analytics for Item Structures
7.2. Complexity Scoring Protocols
7.3. Psychometric QC Using Conditional Residual Analyses
7.4. An Object Analytic Architecture
7.5. Chapter Summary
8. AE Implementation and Future Directions
8.1. What Problems Does AE Actually Solve?
8.2. Implementing AE: A System of Integrated Systems
8.2.1. Versioning and Robust Integration
8.2.2. Quality Improvement Metrics
8.2.3. High-Level Procedures and Systems
8.3. Future Directions
References