Evaluation Review

Andrew P Jaciw, Li Lin, Boya Ma
BACKGROUND: Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. OBJECTIVES: This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. RESEARCH DESIGN: Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates using results from several CRTs...
October 18, 2016: Evaluation Review
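Where the abstract above refers to minimum detectable effect sizes (MDES) for average impacts in a two-level CRT, the standard formula can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, defaults, and the multiplier M = 2.8 (roughly alpha = .05 two-tailed with power = .80) are assumptions for the sketch.

    import math

    def mdes_cluster(J, n, icc, r2_between=0.0, r2_within=0.0, P=0.5, M=2.8):
        # J clusters of n students each, proportion P assigned to treatment.
        # r2_between / r2_within: variance explained by covariates at each level.
        # M ~ 2.8 approximates the multiplier for alpha = .05 (two-tailed), power = .80.
        var_between = icc * (1 - r2_between) / (P * (1 - P) * J)
        var_within = (1 - icc) * (1 - r2_within) / (P * (1 - P) * J * n)
        return M * math.sqrt(var_between + var_within)

    # Example: 40 schools of 60 students, ICC = .20, school-level covariate R^2 = .50
    print(round(mdes_cluster(J=40, n=60, icc=0.20, r2_between=0.5), 3))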
Andrew P Jaciw
BACKGROUND: Past studies have examined factors associated with reductions in bias in comparison group studies (CGSs). The companion work to this article extends the framework to investigate the accuracy of generalized inferences from CGS. OBJECTIVES: This article empirically examines levels of bias in CGS-based impact estimates when used for generalization, and reductions in bias resulting from covariate adjustment. It assesses potential for bias reduction against criteria from past studies...
October 12, 2016: Evaluation Review
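The bias assessment described above rests on within-study comparison logic: estimate the impact from the comparison group study, then benchmark it against an experimental estimate of the same quantity. A hedged sketch of that arithmetic, with placeholder column names:

    import statsmodels.api as sm

    def cgs_bias(df, benchmark, covariates=()):
        # df: comparison group study data with outcome y and 0/1 indicator treat.
        # benchmark: experimental impact estimate for the same target quantity.
        X = sm.add_constant(df[["treat", *covariates]])
        est = sm.OLS(df["y"], X).fit().params["treat"]
        return est - benchmark  # bias of the CGS-based estimate

    # bias_unadjusted = cgs_bias(df, benchmark)
    # bias_adjusted = cgs_bias(df, benchmark, covariates=["pretest"])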
Geoffrey Phelps, Benjamin Kelcey, Nathan Jones, Shuangshuang Liu
Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect...
October 3, 2016: Evaluation Review
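For the power planning this kind of study supports, a back-of-envelope sketch: the approximate number of teachers needed to detect a standardized pre-post change d under a one-sample normal approximation (illustrative assumptions throughout):

    import math

    def n_for_change(d, z_alpha=1.96, z_power=0.84):
        # Normal approximation for a one-sample test of mean change:
        # n = ((z_alpha + z_power) / d)^2, with d the standardized mean change.
        return math.ceil(((z_alpha + z_power) / d) ** 2)

    print(n_for_change(0.25))  # about 126 teachers to detect d = 0.25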
T'Pring R Westbrook, Sarah A Avellar, Neil Seftor
BACKGROUND: The federal government's emphasis on supporting the implementation of evidence-based programs has fueled a need to conduct and assess rigorous evaluations of programs. Through partnerships with researchers, policy makers, and practitioners, evidence reviews (projects that identify, assess, and summarize existing research in a given area) play an important role in supporting the quality of these evaluations and how the findings are used. These reviews encourage the use of sound scientific principles to identify, select, and implement evidence-based programs...
September 30, 2016: Evaluation Review
Nianbo Dong, Wendy M Reinke, Keith C Herman, Catherine P Bradshaw, Desiree W Murray
BACKGROUND: There is a need for greater guidance regarding design parameters and empirical benchmarks for social and behavioral outcomes to inform assumptions in the design and interpretation of cluster randomized trials (CRTs). OBJECTIVES: We calculated empirical reference values for critical research design parameters associated with statistical power for children's social and behavioral outcomes, including effect sizes, intraclass correlations (ICCs), and proportions of variance explained by a covariate at different levels (R²)...
September 30, 2016: Evaluation Review
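A minimal sketch of how ICCs of this kind are commonly estimated, via a random-intercept model; the data frame and column names are placeholders, and statsmodels is just one tool that could be used:

    import statsmodels.formula.api as smf

    def estimate_icc(df):
        # Random-intercept model y_ij = mu + u_j + e_ij; ICC = tau2 / (tau2 + sigma2).
        fit = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
        tau2 = float(fit.cov_re.iloc[0, 0])  # between-school variance
        sigma2 = fit.scale                   # within-school residual variance
        return tau2 / (tau2 + sigma2)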
John Deke, Hanley Chiang
BACKGROUND: To limit the influence of attrition bias in assessments of intervention effectiveness, several federal evidence reviews have established a standard for acceptable levels of sample attrition in randomized controlled trials. These evidence reviews include the What Works Clearinghouse (WWC), the Home Visiting Evidence of Effectiveness Review, and the Teen Pregnancy Prevention Evidence Review. We believe the WWC attrition standard may constitute the first use of model-based, empirically supported bounds on attrition bias in the context of a federally sponsored systematic evidence review...
September 26, 2016: Evaluation Review
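The attrition standards discussed above turn on two simple quantities, overall and differential attrition; the acceptable-bias boundary itself comes from the WWC's published tables and is not reproduced here. A sketch with hypothetical counts:

    def attrition_rates(n_rand_t, n_resp_t, n_rand_c, n_resp_c):
        # Overall attrition: share of all randomized units lost to follow-up.
        overall = 1 - (n_resp_t + n_resp_c) / (n_rand_t + n_rand_c)
        # Differential attrition: gap in response rates between arms.
        differential = abs(n_resp_t / n_rand_t - n_resp_c / n_rand_c)
        return overall, differential

    # e.g., 500 randomized per arm; 430 treatment and 450 control respondents
    print(attrition_rates(500, 430, 500, 450))  # about (0.12, 0.04)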
Suthinee Supanantaroek, Robert Lensink, Nina Hansen
BACKGROUND: Saving plays a crucial role in the process of economic growth. However, one main reason why poor people often do not save is that they lack financial knowledge. Improving the savings culture of children through financial education is a promising way to develop savings attitudes and behavior early in life. OBJECTIVES: This study is one of the first, alongside Berry, Karlan, and Pradhan, to examine the effects of social and financial education training and a children's club developed by Aflatoun on savings attitudes and behavior among primary school children in Uganda...
September 7, 2016: Evaluation Review
Neil Seftor
BACKGROUND: In 2002, the U.S. Department of Education's Institute of Education Sciences (IES) established the What Works Clearinghouse (WWC) at the confluence of a push to improve education research quality, a shift toward evidence-based decision-making, and an expansion of systematic reviews. In addition to providing decision makers with evidence to inform their choices, a systematic review sets expectations regarding study quality and execution for research on program efficacy. In this article, we examine education research through the filter of a long-running systematic review to assess research quality over time and the role of the systematic review in producing evidence...
September 7, 2016: Evaluation Review
Tamara M Haegerich, Corinne David-Ferdon, Rita K Noonan, Brian J Manns, Holly C Billie
Injury and violence prevention strategies have greater potential for impact when they are based on scientific evidence. Systematic reviews of the scientific evidence can contribute key information about which policies and programs might have the greatest impact when implemented. However, systematic reviews have limitations, such as lack of implementation guidance and contextual information, that can limit the application of knowledge. "Technical packages," developed by knowledge brokers such as the federal government, nonprofit agencies, and academic institutions, have the potential to be an efficient mechanism for making information from systematic reviews actionable...
September 7, 2016: Evaluation Review
Natalie Rebelo Da Silva, Hazel Zaranyika, Laurenz Langer, Nicola Randall, Evans Muchiri, Ruth Stewart
BACKGROUND: Conducting a systematic review in social policy is a resource-intensive process in terms of time and funds. It is thus important to understand the scope of the evidence base of a topic area prior to conducting a synthesis of primary research in order to maximize these resources. One approach to conserving resources is to map out the available evidence prior to undertaking a traditional synthesis. A few examples of this approach exist in the form of gap maps, overviews of reviews, and systematic maps supported by social policy and systematic review agencies alike...
September 6, 2016: Evaluation Review
Diane Paulsell, Jaime Thomas, Shannon Monahan, Neil S Seftor
BACKGROUND: Systematic reviews sponsored by federal departments or agencies play an increasingly important role in disseminating information about evidence-based programs and have become a trusted source of information for administrators and practitioners seeking evidence-based programs to implement. These users vary in their knowledge of evaluation methods and their ability to interpret systematic review findings. They must consider factors beyond program effectiveness when selecting an intervention, such as the relevance of the intervention to their target population, community context, and service delivery system; readiness for replication and scale-up; and the ability of their service delivery system or agency to implement the intervention...
September 2, 2016: Evaluation Review
Sarah A Avellar, Jaime Thomas, Rebecca Kleinman, Emily Sama-Miller, Sara E Woodruff, Rebecca Coughlin, T'Pring R Westbrook
BACKGROUND: Systematic reviews (which identify, assess, and summarize existing research) are usually designed to determine whether research shows that an intervention has evidence of effectiveness, rather than whether an intervention will work under different circumstances. The reviews typically focus on the internal validity of the research and do not consistently incorporate information on external validity into their conclusions. OBJECTIVES: In this article, we focus on how systematic reviews address external validity...
August 31, 2016: Evaluation Review
Robin Jacob, Marie-Andree Somers, Pei Zhu, Howard Bloom
OBJECTIVE: In this article, we examine whether a well-executed comparative interrupted time series (CITS) design can produce valid inferences about the effectiveness of a school-level intervention. This article also explores the trade-off between bias reduction and precision loss across different methods of selecting comparison groups for the CITS design and assesses whether choosing matched comparison schools based only on preintervention test scores is sufficient to produce internally valid impact estimates...
August 23, 2016: Evaluation Review
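A minimal sketch of the CITS logic being tested above: fit each group's preintervention trend, project it into the post period, and difference the two groups' deviations from trend. The linear-trend assumption and all names are illustrative:

    import numpy as np

    def cits_impact(years_pre, y_t_pre, y_c_pre, year_post, y_t_post, y_c_post):
        # Fit each group's preintervention linear trend and project it forward.
        proj_t = np.polyval(np.polyfit(years_pre, y_t_pre, 1), year_post)
        proj_c = np.polyval(np.polyfit(years_pre, y_c_pre, 1), year_post)
        # Impact = treatment deviation from its trend minus comparison deviation.
        return (y_t_post - proj_t) - (y_c_post - proj_c)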
Alexandra Bonardi, Christine J Clifford, Nira Hadar
BACKGROUND: This review describes the methods used for a systematic review of oral health intervention literature in a target population, people with intellectual and developmental disabilities (I/DD), spanning a broad range of interventions and study types and conducted with specialized software. OBJECTIVE: The aim of this article is to demonstrate the review strategy, using the free, online Systematic Review Data Repository (SRDR) tool, for oral health interventions aimed at reducing disparities between people with I/DD and the general population...
August 19, 2016: Evaluation Review
Brian Goesling, Sarah Oberlander, Lisa Trivits
BACKGROUND: Systematic reviews help policy makers and practitioners make sense of research findings in a particular program, policy, or practice area by synthesizing evidence across multiple studies. However, the link between review findings and practical decision-making is rarely one-to-one. Policy makers and practitioners may use systematic review findings to help guide their decisions, but they may also rely on other information sources or personal judgment. OBJECTIVES: To describe a recent effort by the U...
August 19, 2016: Evaluation Review
Elizabeth A Stuart, Anna Rhodes
BACKGROUND: Given increasing concerns about the relevance of research to policy and practice, there is growing interest in assessing and enhancing the external validity of randomized trials: determining how useful a given randomized trial is for informing a policy question for a specific target population. OBJECTIVES: This article highlights recent advances in assessing and enhancing external validity, with a focus on the data needed to make ex post statistical adjustments to enhance the applicability of experimental findings to populations potentially different from their study sample...
August 4, 2016: Evaluation Review
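One common ex post adjustment in this family is post-stratification: reweight stratum-specific impacts from the trial by the target population's stratum shares. A sketch under the simplifying assumption that a single categorical covariate defines the strata:

    def poststratified_impact(stratum_effects, pop_shares):
        # stratum_effects: {stratum: impact estimated within the trial}
        # pop_shares: {stratum: share of the target population}; must sum to 1,
        # and every population stratum must be represented in the trial.
        return sum(pop_shares[s] * stratum_effects[s] for s in pop_shares)

    effects = {"urban": 0.30, "rural": 0.10}
    shares = {"urban": 0.40, "rural": 0.60}
    print(poststratified_impact(effects, shares))  # about 0.18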
Judy Geyer, Mikal Davis, Tulika Narayan
BACKGROUND: This article offers important statistics to evaluators planning future evaluations in southeast Africa, where few published statistics describe the variance of agricultural and household indicators. OBJECTIVE: We seek to publish the standard deviations, intracluster correlation coefficients (ICCs), and R² values from outcomes and covariates used in a 2014 quasi-experimental evaluation of the Millennium Challenge Corporation's Mozambique Farmer Income Support Project (FISP) and thus guide researchers in their calculation of design effects relevant to future evaluations in the region...
August 3, 2016: Evaluation Review
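The ICCs reported above feed directly into the usual design-effect correction for clustered samples; a sketch of the standard formula with illustrative numbers:

    def design_effect(m, icc):
        # Variance inflation from sampling clusters of average size m.
        return 1 + (m - 1) * icc

    def effective_n(n_total, m, icc):
        # Sample size after discounting for within-cluster correlation.
        return n_total / design_effect(m, icc)

    print(design_effect(20, 0.05))             # 1.95
    print(round(effective_n(2000, 20, 0.05)))  # about 1026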
Elizabeth Tipton, Laura R Peck
BACKGROUND: Large-scale randomized experiments are important for determining how policy interventions change average outcomes. Researchers have begun developing methods to improve the external validity of these experiments. One new approach is a balanced sampling method for site selection, which does not require random sampling and takes into account the practicalities of site recruitment including high nonresponse. METHOD: The goal of balanced sampling is to develop a strategic sample selection plan that results in a sample that is compositionally similar to a well-defined inference population...
July 29, 2016: Evaluation Review
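A hedged sketch of the balanced-sampling idea: greedily choose the candidate site that moves the sample's covariate means closest to the population's. This is a simplification of the stratified procedure the authors develop, with placeholder names:

    import numpy as np

    def greedy_balanced_sample(site_covs, pop_means, k):
        # site_covs: (n_sites, p) array of candidate-site covariate means.
        # pop_means: (p,) covariate means of the inference population.
        chosen, remaining = [], list(range(len(site_covs)))
        for _ in range(k):
            best = min(remaining, key=lambda i: np.linalg.norm(
                site_covs[chosen + [i]].mean(axis=0) - pop_means))
            chosen.append(best)
            remaining.remove(best)
        return chosen  # indices of k sites approximating population composition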
Elizabeth Tipton, Kelly Hallberg, Larry V Hedges, Wendy Chan
BACKGROUND: Policy makers and researchers are frequently interested in understanding how effective a particular intervention may be for a specific population. One approach is to assess the degree of similarity between the sample in an experiment and the population. Another approach is to combine information from the experiment and the population to estimate the population average treatment effect (PATE). METHOD: Several methods for assessing the similarity between a sample and a population currently exist, as well as methods for estimating the PATE...
July 8, 2016: Evaluation Review
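One simple diagnostic in this family compares sample and population covariate means as standardized mean differences (SMDs); the authors' generalizability index is more involved, so treat this as a rough sketch with placeholder inputs:

    import numpy as np

    def smd(sample, population):
        # Rows are units, columns are covariates; one standardized difference each.
        diff = sample.mean(axis=0) - population.mean(axis=0)
        pooled_sd = np.sqrt((sample.var(axis=0) + population.var(axis=0)) / 2)
        return diff / pooled_sd

    # Values near 0 suggest similarity on that covariate; |SMD| > 0.25 is a
    # common rough flag for imbalance worth adjusting or reporting.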
E C Hedberg
BACKGROUND: There is an increased focus on randomized trials for proximal behavioral outcomes in early childhood research. However, planning sample sizes for such designs requires extant information on the size of effect, variance decomposition, and effectiveness of covariates. OBJECTIVES: The purpose of this article is to employ a recent large representative sample of Early Childhood Longitudinal Study kindergartners to estimate design parameters for use in planning cluster randomized trials...
June 28, 2016: Evaluation Review

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Combine operators to build complex queries

(heart or cardiac or cardio*) AND arrest -"American Heart Association"