
Using WWC-sanctioned rigorous methods to develop comparison groups for evaluation.

Evaluating program impact in the field of education has long been a controversial topic. Although randomized controlled trials have clear advantages for causal inference, they often raise ethical and economic concerns in practice. As an alternative, well-designed quasi-experimental designs can provide valid evidence of impact. In this article, we present an evaluation case from a district-wide early learning improvement program. To strike a balance between practicality and academic rigor, we developed comparison groups from multiple perspectives and applied a series of tests consistent with What Works Clearinghouse (WWC) 3.0 standards to arrive at the most valid comparisons. Implications for evaluation practice are discussed.
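As a rough illustration of the kind of baseline-equivalence check implied by WWC-style comparison-group testing (the abstract does not provide code, so the data, function names, and threshold handling below are assumptions for illustration), the sketch computes a Hedges' g standardized mean difference on a baseline measure and classifies it against the commonly cited WWC cutoffs of 0.05 and 0.25 standard deviations.

```python
import numpy as np

def hedges_g(treatment, comparison):
    """Standardized mean difference (Hedges' g) between two groups."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(comparison, dtype=float)
    n_t, n_c = len(t), len(c)
    # Pooled standard deviation across the two groups
    pooled_var = ((n_t - 1) * t.var(ddof=1) + (n_c - 1) * c.var(ddof=1)) / (n_t + n_c - 2)
    d = (t.mean() - c.mean()) / np.sqrt(pooled_var)
    # Small-sample correction factor
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * correction

def baseline_equivalence(g, low=0.05, high=0.25):
    """Classify a baseline effect size against WWC-style thresholds.

    |g| <= 0.05: equivalence satisfied;
    0.05 < |g| <= 0.25: satisfied only with statistical adjustment for the baseline measure;
    |g| > 0.25: equivalence not satisfied.
    """
    a = abs(g)
    if a <= low:
        return "satisfied"
    if a <= high:
        return "satisfied with statistical adjustment"
    return "not satisfied"

# Hypothetical pretest scores for treated and matched comparison students
rng = np.random.default_rng(0)
treated = rng.normal(50, 10, 200)
matched_comparison = rng.normal(51, 10, 200)
g = hedges_g(treated, matched_comparison)
print(f"Hedges' g = {g:.3f} -> baseline equivalence {baseline_equivalence(g)}")
```

In practice, a check of this kind would be run on each baseline measure for every candidate comparison group, and the group(s) meeting the equivalence criteria would be retained for the impact analysis.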
