


Construction Versus Choice in Cognitive Measurement

Author: William C. Ward
Publisher: Routledge
ISBN: 1136473017
Size: 51.84 MB
Format: PDF, Kindle
View: 5131
This book brings together psychometric, cognitive science, policy, and content-domain perspectives on new approaches to educational assessment, in particular constructed-response items, performance testing, and portfolio assessment. These new approaches, which span the full range of alternatives to traditional multiple-choice tests, are useful in all types of large-scale testing programs, including educational admissions, school accountability, and placement. The book's multidisciplinary perspective identifies the potential advantages and pitfalls of these new assessment forms, as well as the critical research questions that must be addressed if they are to benefit education.

Developing and Validating Test Items

Author: Thomas M. Haladyna
Publisher: Routledge
ISBN: 1136961976
Size: 27.80 MB
Format: PDF, ePub
View: 102
Since test items are the building blocks of any test, learning how to develop and validate them has always been critical to the teaching-learning process. As constructed-response formats grow in importance and use, testing programs increasingly supplement selected-response (multiple-choice) items with them, a trend that is expected to continue. The result is a need for a new item-writing book, one that provides comprehensive coverage of both item types and of the validity theory underlying them. This book is an outgrowth of the author's previous book, Developing and Validating Multiple-Choice Test Items, 3e (Haladyna, 2004), which achieved distinction as the leading source of guidance on creating and validating selected-response test items. Like its predecessor, this new book is based on both an extensive review of the literature and the author's long experience in the testing field. It is very timely in this era of burgeoning testing programs, especially as items are increasingly delivered in computer-based environments. Key features include:
* Comprehensive and Flexible. No other book so thoroughly covers the field of test item development and its various applications.
* Focus on Validity. Validity, the most important consideration in testing, is stressed throughout and is based on the Standards for Educational and Psychological Testing, currently under revision by AERA, APA, and NCME.
* Illustrative Examples. The book presents various selected- and constructed-response formats and uses many examples to illustrate correct and incorrect ways of writing items. Strategies for training item writers and for developing large numbers of items using algorithms and other item-generating methods are also presented.
* Based on Theory and Research. A comprehensive review and synthesis of existing research runs throughout the book and complements the author's expertise.

Automated Scoring of Complex Tasks in Computer-Based Testing

Author: Isaac I. Bejar
Publisher: Psychology Press
ISBN: 0805846344
Size: 15.95 MB
Format: PDF, ePub, Mobi
View: 6713
The use of computers and the Internet in the testing community has expanded the opportunity for innovative testing. Until now, no single source reviewed the latest methods of automated scoring for complex assessments. This is the first volume to provide that coverage, along with examples of "best practices" in the design, implementation, and evaluation of automated complex assessment. The contributing authors, all noted leaders in the field, introduce each method in the context of actual applications in real assessments, providing a realistic view of current industry practice. Evidence-Centered Design, an innovative approach to assessment design, serves as the book's conceptual framework. The chapters review well-known methods for automated scoring, such as rule-based logic, regression-based systems, and IRT systems, as well as more recent procedures such as Bayesian networks and neural networks. The concluding chapters compare and contrast the various methods and offer a vision for the future. Each chapter discusses the philosophical and practical approaches of the method, the associated implications for validity, reliability, and implementation, and the calculations and processes of each technique. Intended for researchers, practitioners, and advanced students in educational testing and measurement, psychometrics, cognitive science, technical training and assessment, diagnostic, licensing, and certification exams, and expert systems, the book also serves as a resource for advanced courses in educational measurement or psychometrics.
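To make the "regression-based" scoring idea concrete, here is a minimal, hypothetical sketch, not taken from the book: a linear model fit to a handful of invented essay-feature/score pairs. The features, training data, and score scale are all illustrative assumptions; production engines use far richer linguistic features and much larger training sets.

```python
# Hypothetical illustration of regression-based automated scoring:
# predict a human-assigned holistic score from crude surface features.
import numpy as np

def features(essay: str) -> np.ndarray:
    words = essay.split()
    n_words = len(words)
    avg_len = sum(len(w) for w in words) / max(n_words, 1)
    n_sent = max(essay.count(".") + essay.count("!") + essay.count("?"), 1)
    # Bias term plus three toy features: length, word length, sentence length.
    return np.array([1.0, n_words, avg_len, n_words / n_sent])

# Fabricated training essays paired with human holistic scores on a 1-6 scale.
train = [
    ("Short answer.", 1.0),
    ("A longer response with several sentences. It develops one idea. It ends.", 3.0),
    ("An extended response that states a claim, supports it with two pieces of "
     "evidence, anticipates a counterargument, and closes with a summary.", 5.0),
]
X = np.vstack([features(e) for e, _ in train])
y = np.array([s for _, s in train])

# Ordinary least squares fit (minimum-norm solution if underdetermined).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

new_essay = "This response makes a claim. It supports the claim with evidence."
print(f"predicted score: {features(new_essay) @ w:.2f}")
```

In real systems the fitted weights are then validated against held-out human ratings, which is where the validity and reliability questions the book takes up come into play.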

Constructing Test Items

Author: Steven J. Osterlind
Publisher: Springer Science & Business Media
ISBN: 0792380770
Size: 40.89 MB
Format: PDF, Mobi
View: 1660
Constructing test items for standardized tests of achievement, ability, and aptitude is a task of enormous importance. The interpretability of a test's scores flows directly from the quality of its items and exercises. Concomitant with score interpretability is the notion that including only carefully crafted items on a test is the primary means by which the skilled test developer reduces unwanted error variance, or errors of measurement, and thereby increases a test score's reliability. The aim of this book is to increase the test constructor's awareness of this source of measurement error and to describe methods for identifying and minimizing it during item construction and later review. Persons involved in assessment are keenly aware of the increased attention given to alternative formats for test items in recent years. Yet, in many writers' zeal to be 'curriculum-relevant' or 'authentic' or 'realistic', items are often developed seemingly without conscious thought to the interpretations that may be drawn from them. This book argues that alternative items and exercises also require rigor in their construction, and it offers some solutions, devoting a chapter to these alternative formats. The book addresses major issues in constructing test items by focusing on four ideas. First, it describes the characteristics and functions of test items. Second, it presents editorial guidelines for writing test items in all of the commonly used item formats, including constructed-response formats and performance tests. Third, it presents methods for determining the quality of test items. Finally, it offers a compendium of important issues about test items, including procedures for ordering items in a test, ethical and legal concerns over using copyrighted test items, item scoring schemes, computer-generated items, and more.
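The relationship the description leans on, that trimming error variance raises reliability, is the standard classical-test-theory identity; as a reference point (a textbook formula, not quoted from this book):

```latex
% Classical test theory: observed score X = true score T + error E,
% with T and E uncorrelated, so variances add.
% Reliability is the fraction of observed-score variance that is
% true-score variance; shrinking \sigma_E^2 (better-crafted items)
% pushes reliability toward 1.
\[
  \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2}
             = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}
\]
```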

Handbook of Automated Essay Evaluation

Author: Mark D. Shermis
Publisher: Routledge
ISBN: 1136334793
Size: 80.34 MB
Format: PDF
View: 7221
This comprehensive, interdisciplinary handbook reviews the latest methods and technologies used in automated essay evaluation (AEE). Highlights include the latest in the evaluation of performance-based writing assessments and recent advances in the teaching of writing, language testing, cognitive psychology, and computational linguistics. This greatly expanded follow-up to Automated Essay Scoring reflects the numerous advances that have taken place in the field since 2003, including automated essay scoring and diagnostic feedback. Each chapter features a common structure, including an introduction and a conclusion, and ideas for diagnostic and evaluative feedback are sprinkled throughout the book. Highlights of the book's coverage include:
* The latest research on automated essay evaluation.
* Descriptions of the major scoring engines, including E-rater®, the Intelligent Essay Assessor, the Intellimetric™ engine, c-rater™, and LightSIDE.
* Applications of the technology, including a large-scale system used in West Virginia.
* A systematic framework for evaluating research and technological results.
* Descriptions of AEE methods that can be replicated for languages other than English, as seen in an example from China.
* Chapters from key researchers in the field.
The book opens with an introduction to AEE and a review of the "best practices" of teaching writing, along with tips on the use of automated analysis in the classroom. Next, the book highlights the capabilities and applications of several scoring engines, including E-rater®, the Intelligent Essay Assessor, the Intellimetric™ engine, c-rater™, and LightSIDE. Readers will find an actual application of AEE in West Virginia; psychometric issues related to AEE, such as validity, reliability, and scaling; and the use of automated scoring to detect reader drift, grammatical errors, and discourse coherence quality, along with the impact of human rating on AEE. A review of the cognitive foundations underlying methods used in AEE is also provided. The book concludes with a comparison of the various AEE systems and speculation about the future of the field in light of current educational policy. Ideal for educators, professionals, curriculum specialists, and administrators responsible for developing writing programs or distance-learning curricula, those who teach using AEE technologies, policy makers, and researchers in education, writing, psychometrics, cognitive psychology, and computational linguistics, this book also serves as a reference for graduate courses on automated essay evaluation taught in education, computer science, language, linguistics, and cognitive psychology.

Writing Test Items to Evaluate Higher Order Thinking

Author: Thomas M. Haladyna
Publisher: Prentice Hall
ISBN:
Size: 72.96 MB
Format: PDF, Kindle
View: 5563
Here's a book intended to help readers develop better test questions, ones aimed at measuring their students' (or future students') higher-level thinking abilities such as writing, reading, mathematical or scientific problem solving, critical thinking, and creative thinking. The book is practical in its approach, replete with examples, and focuses on many different question types, with the main objective of selecting the item type most appropriate for the material being measured. Coverage includes multiple-choice items, designing performance test items, creating and scoring portfolios, and writing survey items. Item-writing templates are provided in each chapter. Intended for preservice and inservice teachers.

Educational Assessment

Author: Thomas P. Hogan
Publisher: Wiley
ISBN: 9780471472483
Size: 55.99 MB
Format: PDF, Mobi
View: 7091
Thomas P. Hogan's Educational Assessment: A Practical Introduction brings you to the front lines of educational assessment as it is actually practiced in today's classrooms, school systems, state departments, and national organizations.

Fragile Evidence

Author: Sharon Murphy
Publisher: Lawrence Erlbaum
ISBN: 9780805825299
Size: 24.18 MB
Format: PDF, ePub
View: 7570
Fragile Evidence--a critique of reading assessment informed by newly emerging conceptualizations of validity and reliability--brings psychometric theory, reading theory, and social critique to bear on reading assessment. Taking its lead from contemporary psychological theory and other fields which ponder the role of evidence and argumentation in making claims about social issues, this text examines the historical and contemporary ways in which such claims have been made for reading assessment. Traditional individualized and standardized tests are critiqued from a variety of perspectives. The assumptions and operational bases of contemporary revisionist assessments (e.g., large-scale performance-based assessments, authentic assessments, etc.) are considered in terms of what they include and what they omit. Collected here in one volume is a systematic analysis of several different reading assessment instruments and conceptualizations of reading assessment, with particular emphasis on the evidence of reading they provide--a type of analysis usually found only in separate articles in journals and edited volumes. This important volume: * Offers a systematic (rather than generalized) critique of popular standardized norm-referenced group and individualized measures of reading. *Looks at the consequential validity of standardized tests. * Includes interviews with stakeholders who consider the question of how to describe reading without making reference to standardized tests. * Considers how tools such as miscue analysis influence reform. * Provides a critical analysis of contemporary reform efforts.