TY - BOOK
AU - Nkwake, Apollo M.
ED - SpringerLink (Online service)
TI - Credibility, Validity, and Assumptions in Program Evaluation Methodology
SN - 9783031456145
PY - 2023///
CY - Cham
PB - Springer Nature Switzerland, Imprint: Springer
KW - Social psychology
KW - Psychology, Industrial
KW - Social psychiatry
KW - Industrial organization
KW - Psychology
KW - Social Psychology
KW - Industrial Psychology
KW - Clinical Social Work
KW - Industrial Organization
KW - Behavioral Sciences and Psychology
N1 - Chapter 1. Constituents of Evaluation Practice -- Chapter 2. Credible Methodology -- Chapter 3. Validity in Framing an Evaluation's Purpose and Questions -- Chapter 4. Validity in Evaluation Designs and Methods -- Chapter 5. Validity in Measures and Data Collection -- Chapter 6. Validity in Analysis, Interpretation, and Conclusions -- Chapter 7. Validity in Evaluation Utilization -- Chapter 8. Validity in Performance Measurement -- Chapter 9. Explication of Methodological Assumptions: A Metaevaluation -- Chapter 10. Working with Assumptions in Humanitarian Assistance Evaluation -- Chapter 11. Conclusion
N2 - This book focuses on the choice of methods in program evaluation. Credible method choice rests on the assumptions evaluators make about the appropriateness and validity of the selected methods, and on the validity of those assumptions. As evaluators make methodological decisions at various stages of the evaluation process, a number of validity questions arise, and unexamined assumptions are a risk to useful evaluation. The first edition of this book discussed the formulation of credible methodological arguments and methods of examining validity assumptions. Whereas previous publications suggest the advantages and disadvantages of various methods and when to use them, this book analyzes the assumptions underlying actual methodological choices in evaluation studies and how these influence evaluation quality. This analysis is the basis of the suggested tools.
The second edition extends the review of methodological assumptions to the evaluation of humanitarian assistance. While evaluators of humanitarian action apply conventional research methods and standards, they must adapt these methods to the challenges and constraints of crisis contexts. For example, the urgency and chaos of humanitarian emergencies make it difficult to obtain program documentation; objectives may be unclear, and early plans may quickly become outdated as the context changes or is clarified. A lack of up-to-date baseline data is common, as is staff turnover. Differences in perspective may intensify and undermine trust. Such deviation from ideal circumstances challenges evaluation and calls for methodological innovation. How do evaluators work with assumptions in non-ideal settings? Which tools are most relevant and effective? This revised edition reviews major evaluations of humanitarian action and discusses strategies for working with evaluation assumptions in both crisis and stable program settings.
ER -