Evaluation Review

https://www.readbyqxmd.com/read/30223677/the-importance-of-using-multiple-data-sources-in-policy-assessments-lessons-from-two-conditional-cash-transfer-programs-in-new-york-city
#1. The Importance of Using Multiple Data Sources in Policy Assessments: Lessons From Two Conditional Cash Transfer Programs in New York City
Edith Yang, Richard Hendra
BACKGROUND: The high costs of implementing surveys are increasingly leading research teams either to cut back on surveys or to rely on administrative records. Yet no policy should be based on a single set of estimates, and every approach has its weaknesses. A mixture of approaches, each with its own biases, should provide the analyst with a better understanding of the underlying phenomenon. This claim is illustrated with a comparison of employment effect estimates of two conditional cash transfer programs in New York City using survey and administrative unemployment insurance (UI) data...
September 17, 2018: Evaluation Review
https://www.readbyqxmd.com/read/30213214/the-sensitivity-of-impact-estimates-to-data-sources-used-analysis-from-an-access-to-postsecondary-education-experiment
#2. The Sensitivity of Impact Estimates to Data Sources Used: Analysis From an Access to Postsecondary Education Experiment
Reuben Ford, Douwêrê Grékou, Isaac Kwakye, Taylor Shek-Wai Hui
BACKGROUND: This article reports on the Future to Discover Project-a Canadian randomized controlled trial of two high school interventions-where data on key postsecondary enrollment outcomes were collected for two phases. During the initial phase, outcomes were recorded from administrative data and follow-up surveys. During the later phase, data came from administrative records only. OBJECTIVES: The article provides analyses that are informative about the consequences of a change from administrative-only data to survey-only data (and vice versa) for the estimation of impacts...
September 13, 2018: Evaluation Review
https://www.readbyqxmd.com/read/30126296/rd-or-not-rd-using-experimental-studies-to-assess-the-performance-of-the-regression-discontinuity-approach
#3. RD or Not RD: Using Experimental Studies to Assess the Performance of the Regression Discontinuity Approach
Philip Gleason, Alexandra Resch, Jillian Berk
BACKGROUND: This article explores the performance of regression discontinuity (RD) designs for measuring program impacts using a synthetic within-study comparison design. We generate synthetic RD data sets from experimental data sets from two recent evaluations of educational interventions (the Educational Technology Study and the Teach for America Study) and compare the RD impact estimates to the experimental estimates of the same intervention. OBJECTIVES: This article examines the performance of the RD estimator when the design is well implemented and also examines the extent of bias introduced by manipulation of the assignment variable in an RD design...
August 20, 2018: Evaluation Review
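The construction Gleason, Resch, and Berk describe can be sketched quickly. The Python fragment below uses simulated data; the variable names, the linear outcome model, and the effect size are illustrative assumptions, not the authors' code or data. It builds a synthetic RD sample from an experiment by keeping treated cases above a cutoff on a pseudo-assignment variable and control cases below it, then compares the RD estimate at the cutoff with the experimental benchmark:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 4000
pretest = rng.normal(size=n)           # pseudo-assignment variable
treat = rng.integers(0, 2, size=n)     # true random assignment
posttest = 0.5 * pretest + 0.3 * treat + rng.normal(size=n)

# Experimental benchmark: simple difference in treatment/control means.
exp_impact = posttest[treat == 1].mean() - posttest[treat == 0].mean()

# Synthetic RD sample: treated cases above the cutoff, controls below it,
# so "assignment" becomes a deterministic function of the pretest.
cutoff = 0.0
keep = ((treat == 1) & (pretest >= cutoff)) | ((treat == 0) & (pretest < cutoff))
y, x, d = posttest[keep], pretest[keep] - cutoff, treat[keep]

# RD estimate at the cutoff: linear trend with separate slopes on each side.
X = sm.add_constant(np.column_stack([d, x, d * x]))
rd_impact = sm.OLS(y, X).fit().params[1]
print(f"experimental: {exp_impact:.3f}  synthetic RD: {rd_impact:.3f}")
```

Because treatment in the synthetic sample is deterministic in the pretest, any divergence between the two estimates reflects the RD approach itself rather than differences in the target population.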
https://www.readbyqxmd.com/read/30081667/the-sequential-scale-up-of-an-evidence-based-intervention-a-case-study
#4. The Sequential Scale-Up of an Evidence-Based Intervention: A Case Study
Jaime Thomas, Thomas D Cook, Alice Klein, Prentice Starkey, Lydia DeFlorio
Policy makers face dilemmas when choosing a policy, program, or practice to implement. Researchers in education, public health, and other fields have proposed a sequential approach to identifying interventions worthy of broader adoption, involving pilot, efficacy, effectiveness, and scale-up studies. In this article, we examine a scale-up of an early math intervention to the state level, using a cluster randomized controlled trial. The intervention, Pre-K Mathematics, has produced robust positive effects on children's math ability in prior pilot, efficacy, and effectiveness studies...
August 6, 2018: Evaluation Review
https://www.readbyqxmd.com/read/30060688/using-bayesian-correspondence-criteria-to-compare-results-from-a-randomized-experiment-and-a-quasi-experiment-allowing-self-selection
#5. Using Bayesian Correspondence Criteria to Compare Results From a Randomized Experiment and a Quasi-Experiment Allowing Self-Selection
David M Rindskopf, William R Shadish, M H Clark
BACKGROUND: Randomized experiments yield unbiased estimates of treatment effect, but such experiments are not always feasible. So researchers have searched for conditions under which randomized and nonrandomized experiments can yield the same answer. This search requires well-justified and informative correspondence criteria, that is, criteria by which we can judge if the results from an appropriately adjusted nonrandomized experiment well-approximate results from randomized experiments...
July 30, 2018: Evaluation Review
https://www.readbyqxmd.com/read/30033752/double-down-or-switch-it-up-should-low-income-children-stay-in-head-start-for-2-years-or-switch-programs
#6. Double Down or Switch It Up: Should Low-Income Children Stay in Head Start for 2 Years or Switch Programs?
Jade Marcus Jenkins, Terri J Sabol, George Farkas
BACKGROUND: Recent growth in subsidized preschool opportunities in the United States for low-income 4-year-old children has allowed federal Head Start programs to fund more slots for 3-year-old children. In turn, when Age-3 Head Start participants turn four, they may either switch into one of the many alternative care options or stay in Head Start for a second year. OBJECTIVES: We analyze a nationally representative sample of Age-3 Head Start participants to examine whether children who stay in Head Start for a second year at Age 4 exhibit greater school readiness and subsequent cognitive and behavioral performance compared with children who switch out of Head Start into alternative care...
January 1, 2018: Evaluation Review
https://www.readbyqxmd.com/read/29954223/designs-of-empirical-evaluations-of-nonexperimental-methods-in-field-settings
#7. Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings
Vivian C Wong, Peter M Steiner
Over the last three decades, a research design has emerged to evaluate the performance of nonexperimental (NE) designs and design features in field settings. It is called the within-study comparison (WSC) approach or the design replication study. In the traditional WSC design, treatment effects from a randomized experiment are compared to those produced by an NE approach that shares the same target population. The nonexperiment may be a quasi-experimental design, such as a regression-discontinuity or an interrupted time-series design, or an observational study approach that includes matching methods, standard regression adjustments, and difference-in-differences methods...
January 1, 2018: Evaluation Review
https://www.readbyqxmd.com/read/29888613/statistical-power-for-the-comparative-regression-discontinuity-design-with-a-pretest-no-treatment-control-function-theory-and-evidence-from-the-national-head-start-impact-study
#8. Statistical Power for the Comparative Regression Discontinuity Design With a Pretest No-Treatment Control Function: Theory and Evidence From the National Head Start Impact Study
Yang Tang, Thomas D Cook
The basic regression discontinuity design (RDD) has less statistical power than a randomized control trial (RCT) with the same sample size. Adding a no-treatment comparison function to the basic RDD creates a comparative RDD (CRD); and when this function comes from the pretest value of the study outcome, a CRD-Pre design results. We use a within-study comparison (WSC) to examine the power of CRD-Pre relative to both basic RDD and RCT. We first build the theoretical foundation for power in CRD-Pre, then derive the relevant variance formulae, and finally compare them to the theoretical RCT variance...
January 1, 2018: Evaluation Review
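The flavor of the power comparison can be conveyed with the standard design-effect result for RDDs (the Goldberger/Schochet-style inflation factor), which is not necessarily the variance formulae Tang and Cook derive: a basic RDD inflates the RCT variance by roughly 1/(1 - ρ²), where ρ is the correlation between treatment status and the assignment variable, and a pretest strongly correlated with the outcome can recover much of that precision. A back-of-the-envelope sketch, with the pretest gain modeled as a simple residual-variance reduction (an assumption for illustration):

```python
def rdd_design_effect(rho_ts: float) -> float:
    """Variance inflation of a basic RDD relative to an RCT of equal size."""
    return 1.0 / (1.0 - rho_ts ** 2)

def pretest_multiplier(r_pre: float) -> float:
    """Residual-variance multiplier from a pretest correlated r_pre with the
    outcome (a simplified stand-in for the CRD-Pre comparison function)."""
    return 1.0 - r_pre ** 2

rho_ts, r_pre = 0.87, 0.70     # illustrative values, not estimates
base = rdd_design_effect(rho_ts)            # roughly 4.1x the RCT variance
crd_pre = base * pretest_multiplier(r_pre)  # roughly 2.1x once the pretest helps
print(f"RDD needs ~{base:.1f}x the RCT sample; CRD-Pre here needs ~{crd_pre:.1f}x")
```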
https://www.readbyqxmd.com/read/29852743/comparative-regression-discontinuity-a-stress-test-with-small-samples
#9. Comparative Regression Discontinuity: A Stress Test With Small Samples
Yasemin Kisbu-Sakarya, Thomas D Cook, Yang Tang, M H Clark
Compared to the randomized experiment (RE), the regression discontinuity design (RDD) has three main limitations: (1) In expectation, its results are unbiased only at the treatment cutoff and not for the entire study population; (2) it is less efficient than the RE and so requires more cases for the same statistical power; and (3) it requires correctly specifying the functional form that relates the assignment and outcome variables. One way to overcome these limitations might be to add a no-treatment functional form to the basic RDD and include it in the outcome analysis as a comparison function, rather than as a covariate, in order to increase power...
January 1, 2018: Evaluation Review
https://www.readbyqxmd.com/read/29772913/assessing-correspondence-between-experimental-and-nonexperimental-estimates-in-within-study-comparisons
#10. Assessing Correspondence Between Experimental and Nonexperimental Estimates in Within-Study Comparisons
Peter M Steiner, Vivian C Wong
In within-study comparison (WSC) designs, treatment effects from a nonexperimental design, such as an observational study or a regression-discontinuity design, are compared to results obtained from a well-designed randomized control trial with the same target population. The goal of the WSC is to assess whether nonexperimental and experimental designs yield the same results in field settings. A common analytic challenge with WSCs, however, is the choice of appropriate criteria for determining whether nonexperimental and experimental results replicate...
January 1, 2018: Evaluation Review
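One simple correspondence criterion, offered here only to illustrate the analytic challenge (the authors' proposed criteria may differ), is to pair a difference test with an equivalence test so that "no significant difference" is not mistaken for replication. A minimal sketch, assuming independent samples and normal sampling distributions:

```python
from math import sqrt
from scipy.stats import norm

def correspondence(b_ne, se_ne, b_rct, se_rct, margin):
    """Difference test plus TOST equivalence test for two impact estimates.
    Assumes the NE and RCT estimates come from independent samples."""
    diff = b_ne - b_rct
    se = sqrt(se_ne ** 2 + se_rct ** 2)
    p_diff = 2 * (1 - norm.cdf(abs(diff) / se))     # H0: estimates are equal
    # TOST: both one-sided tests against +/- margin must reject before the
    # estimates can be declared equivalent.
    p_equiv = max(1 - norm.cdf((margin - diff) / se),
                  1 - norm.cdf((diff + margin) / se))
    return diff, p_diff, p_equiv

diff, p_diff, p_equiv = correspondence(0.21, 0.05, 0.15, 0.04, margin=0.10)
print(f"diff={diff:.2f}  p(difference)={p_diff:.3f}  p(equivalence)={p_equiv:.3f}")
```

With the example values, neither test is significant: the estimates are neither demonstrably different nor demonstrably equivalent, which is exactly the ambiguity that well-chosen correspondence criteria must resolve.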
https://www.readbyqxmd.com/read/29642717/optimizing-prediction-using-bayesian-model-averaging-examples-using-large-scale-educational-assessments
#11. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments
David Kaplan, Chansoon Lee
This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP)...
January 1, 2018: Evaluation Review
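The averaging step is mechanical once posterior model probabilities (PMPs) are in hand. Below is a minimal sketch using the common BIC approximation to the PMPs, one standard shortcut rather than the full Bayesian machinery the article reviews; the data and model space are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))
y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

# Model space: every non-empty subset of the three predictors.
subsets = [s for k in (1, 2, 3) for s in combinations(range(3), k)]
fits = [sm.OLS(y, sm.add_constant(X[:, list(s)])).fit() for s in subsets]

# BIC approximation to the posterior model probabilities: PMP ~ exp(-BIC/2),
# normalized over the model space.
bics = np.array([f.bic for f in fits])
w = np.exp(-(bics - bics.min()) / 2)
pmp = w / w.sum()

# BMA prediction: each submodel's prediction, weighted by its PMP.
x_new = np.array([0.5, -1.0, 0.2])
preds = [f.predict(sm.add_constant(x_new[list(s)].reshape(1, -1),
                                   has_constant='add'))[0]
         for f, s in zip(fits, subsets)]
print("best PMP:", round(pmp.max(), 3), " BMA prediction:", float(np.dot(pmp, preds)))
```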
https://www.readbyqxmd.com/read/29232999/using-synthetic-controls-to-evaluate-the-effect-of-unique-interventions-the-case-of-say-yes-to-education
#12. Using Synthetic Controls to Evaluate the Effect of Unique Interventions: The Case of Say Yes to Education
REVIEW
Robert Bifulco, Ross Rubenstein, Hosung Sohn
BACKGROUND: "Place-based" scholarships seek to improve student outcomes in urban school districts and promote urban revitalization in economically challenged cities. Say Yes to Education is a unique district-wide school reform effort adopted in Syracuse, NY, in 2008. It includes full-tuition scholarships for public and private universities, coupled with extensive wraparound support services in schools. OBJECTIVES: This study uses synthetic control methods to evaluate the effect of Say Yes on district enrollment and graduation rates...
December 2017: Evaluation Review
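The synthetic control logic is compact enough to sketch. In the fragment below, the donor pool, outcome levels, and the +2-point effect are all invented, and this is not the authors' implementation: the idea is to choose nonnegative donor-district weights that sum to one and best reproduce the treated district's pre-adoption outcomes, then read the effect off the post-adoption gap between the treated district and its synthetic counterpart:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T0, T1, J = 12, 4, 15                    # pre/post years, donor districts
mu = rng.normal(70, 5, size=J)           # stable district outcome levels
donors_pre = mu + rng.normal(0, 1, size=(T0, J))
donors_post = mu + rng.normal(0, 1, size=(T1, J))
true_w = np.zeros(J)
true_w[:3] = 1 / 3                       # treated district resembles 3 donors
treated_pre = donors_pre @ true_w + rng.normal(0, 0.5, size=T0)
treated_post = donors_post @ true_w + 2.0    # invented +2-point effect

# Donor weights: nonnegative, summing to one, minimizing pre-period mismatch.
fit = minimize(lambda w: ((treated_pre - donors_pre @ w) ** 2).mean(),
               np.full(J, 1 / J), method='SLSQP',
               bounds=[(0, 1)] * J,
               constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1})

gap = treated_post - donors_post @ fit.x     # treated minus synthetic control
print("estimated yearly effects:", np.round(gap, 2))   # should sit near 2
```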
https://www.readbyqxmd.com/read/29232974/evaluation-influence-the-evaluation-event-and-capital-flow-in-international-development
#13. Evaluation Influence: The Evaluation Event and Capital Flow in International Development
REVIEW
David A Bell
BACKGROUND: Assessing program effectiveness in human development is central to informing foreign aid policy-making and organizational learning. Foreign aid effectiveness discussions have increasingly given attention to the devaluing effects of aid flow volatility. This study reveals that the external evaluation event influences actor behavior, serving as a volatility-constraining tool. METHOD: A case study of a multidonor aid development mechanism was used to examine the influence of an evaluation event, taking anticipatory effects into account...
December 2017: Evaluation Review
https://www.readbyqxmd.com/read/29232964/a-randomized-controlled-trial-of-family-finding-a-relative-search-and-engagement-intervention-for-youth-lingering-in-foster-care
#14. A Randomized Controlled Trial of Family Finding: A Relative Search and Engagement Intervention for Youth Lingering in Foster Care
RANDOMIZED CONTROLLED TRIAL
Sharon Vandivere, Karin E Malm, Tiffany J Allen, Sarah Catherine Williams, Amy McKlindon
BACKGROUND: Youth who have experienced foster care are at risk of negative outcomes in adulthood. The family finding model aims to promote more positive outcomes by finding and engaging relatives of children in foster care in order to provide options for legal and emotional permanency. OBJECTIVES: The present study tested whether family finding, as implemented in North Carolina from 2008 through 2011, improved child welfare outcomes for youth at risk of emancipating from foster care without permanency...
December 2017: Evaluation Review
https://www.readbyqxmd.com/read/30231693/editor-in-chief-s-comment-external-validity-in-systematic-reviews
#15. Editor-in-Chief's Comment: External Validity in Systematic Reviews
Jacob Alex Klerman
No abstract text is available yet for this article.
October 2017: Evaluation Review
https://www.readbyqxmd.com/read/29233010/introduction-to-special-issue-external-validity-and-policy
#16. Introduction to Special Issue: External Validity and Policy
T'Pring R Westbrook
No abstract text is available yet for this article.
October 2017: Evaluation Review
https://www.readbyqxmd.com/read/30208741/special-issue-editor-s-overview-essay
#17. Special Issue Editor's Overview Essay
Jacob Alex Klerman
No abstract text is available yet for this article.
June 2017: Evaluation Review
https://www.readbyqxmd.com/read/29233004/matched-comparison-group-design-standards-in-systematic-reviews-of-early-childhood-interventions
#18. Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions
REVIEW
Jaime Thomas, Sarah A Avellar, John Deke, Philip Gleason
BACKGROUND: Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. OBJECTIVE: To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? METHODS: We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1...
June 2017: Evaluation Review
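The stakes of the second question are easy to demonstrate. A minimal sketch with simulated data (the selection mechanism, variable names, and effect size are invented) shows how far an unadjusted comparison-group contrast can drift from the truth when enrollment is related to a baseline characteristic, and how a simple covariate adjustment recovers it:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
baseline = rng.normal(size=n)             # e.g., baseline skill measure
# Self-selection: higher-baseline families are likelier to enroll.
enrolled = (baseline + rng.normal(size=n) > 0).astype(float)
outcome = 0.6 * baseline + 0.25 * enrolled + rng.normal(size=n)

# Unadjusted contrast confounds the program effect with selection.
naive = outcome[enrolled == 1].mean() - outcome[enrolled == 0].mean()
# Regression adjustment for the baseline covariate removes the confound.
adjusted = sm.OLS(outcome,
                  sm.add_constant(np.column_stack([enrolled, baseline]))
                  ).fit().params[1]
print(f"unadjusted: {naive:.2f}  baseline-adjusted: {adjusted:.2f}  (truth: 0.25)")
```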
https://www.readbyqxmd.com/read/27694128/reviewing-the-reviews-examining-similarities-and-differences-between-federally-funded-evidence-reviews
#19. Reviewing the Reviews: Examining Similarities and Differences Between Federally Funded Evidence Reviews
T'Pring R Westbrook, Sarah A Avellar, Neil Seftor
BACKGROUND: The federal government's emphasis on supporting the implementation of evidence-based programs has fueled a need to conduct and assess rigorous evaluations of programs. Through partnerships with researchers, policy makers, and practitioners, evidence reviews-projects that identify, assess, and summarize existing research in a given area-play an important role in supporting the quality of these evaluations and how the findings are used. These reviews encourage the use of sound scientific principles to identify, select, and implement evidence-based programs...
June 2017: Evaluation Review
https://www.readbyqxmd.com/read/27604301/technical-packages-in-injury-and-violence-prevention-to-move-evidence-into-practice-systematic-reviews-and-beyond
#20. Technical Packages in Injury and Violence Prevention to Move Evidence Into Practice: Systematic Reviews and Beyond
Tamara M Haegerich, Corinne David-Ferdon, Rita K Noonan, Brian J Manns, Holly C Billie
Injury and violence prevention strategies have greater potential for impact when they are based on scientific evidence. Systematic reviews of the scientific evidence can contribute key information about which policies and programs might have the greatest impact when implemented. However, systematic reviews have limitations, such as lack of implementation guidance and contextual information, that can limit the application of knowledge. "Technical packages," developed by knowledge brokers such as the federal government, nonprofit agencies, and academic institutions, have the potential to be an efficient mechanism for making information from systematic reviews actionable...
February 2017: Evaluation Review
