Read by QxMD

Psychological Methods

Mark H C Lai, Oi-Man Kwok, Yu-Yu Hsiao, Qian Cao
The research literature has paid little attention to the issue of a finite population at a higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level 2 is incorrectly assumed to be infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring finite population correction is illustrated by using both a real data example and a simulation study with a random intercept model and a random slope model...
March 16, 2017: Psychological Methods
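The abstract turns on a simple idea: once you have sampled most of a finite Level-2 population, infinite-population formulas overstate your uncertainty. A minimal sketch of the classic finite population correction factor (the authors' Level-1/Level-2 adjustments are more involved than this; the function name and numbers here are illustrative):

```python
import math

def fpc_adjusted_se(se, n_sampled, n_population):
    """Shrink a naive standard error by the classic finite population
    correction factor sqrt((N - n) / (N - 1))."""
    fpc = math.sqrt((n_population - n_sampled) / (n_population - 1))
    return se * fpc

# Sampling 40 of 50 clusters: the infinite-population SE overstates uncertainty.
adjusted = fpc_adjusted_se(0.10, 40, 50)
```

The correction shrinks toward zero as the sample exhausts the population, which is why ignoring it yields overly wide confidence intervals.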
Morten Moshagen, Max Auerswald
Guidelines to evaluate the fit of structural equation models can only offer meaningful insights to the extent that they apply equally to a wide range of situations. However, a number of previous studies found that statistical power to reject a misspecified model increases and descriptive fit indices deteriorate when loadings are high, thereby inappropriately penalizing high-reliability indicators. Based on both theoretical considerations and empirical simulation studies, we show that previous results only hold for a particular definition and a particular type of model error...
March 16, 2017: Psychological Methods
Rumen Manolov, Patrick Onghena
Alternating treatments designs (ATDs) have received comparatively less attention than other single-case experimental designs in terms of data analysis, as most analytical proposals and illustrations have been made in the context of designs including phases with several consecutive measurements in the same condition. One of the specific features of ATDs is the rapid (and usually randomly determined) alternation of conditions, which requires adapting the analytical techniques. First, we review the methodologically desirable features of ATDs, as well as the characteristics of the published single-case research using an ATD, which are relevant for data analysis...
March 16, 2017: Psychological Methods
Jason D Rights, Sonya K Sterba
Psychologists commonly apply regression mixture models in single-level (i.e., unclustered) and multilevel (i.e., clustered) data analysis contexts. Though researchers applying nonmixture regression models typically report R-squared measures of explained variance, there has been no general treatment of R-squared measures for single-level and multilevel regression mixtures. Consequently, it is common for researchers to summarize results of a fitted regression mixture by simply reporting class-specific regression coefficients and their associated p values, rather than considering measures of effect size...
March 16, 2017: Psychological Methods
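For readers unfamiliar with the baseline these measures generalize, here is a sketch of ordinary single-model R-squared (the article's class-specific and multilevel versions decompose variance further; the data below are invented):

```python
def r_squared(y, y_hat):
    """Proportion of outcome variance explained by model predictions:
    1 - SS_residual / SS_total."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

r2 = r_squared([1.0, 2.0, 3.0, 4.0], [1.5, 1.5, 3.5, 3.5])
```

Reporting such an effect-size summary alongside class-specific coefficients is exactly the practice the abstract argues for.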
Yang Tang, Thomas D Cook, Yasemin Kisbu-Sakarya
In the "sharp" regression discontinuity design (RD), all units scoring on one side of a designated score on an assignment variable receive treatment, whereas those scoring on the other side become controls. Thus, the continuous assignment variable and binary treatment indicator are measured on the same scale. Because each must be in the impact model, the resulting multicollinearity reduces the efficiency of the RD design. However, untreated comparison data can be added along the assignment variable, and a comparative regression discontinuity design (CRD) is then created...
March 16, 2017: Psychological Methods
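The "sharp" assignment rule the abstract describes is deterministic, which is why the assignment score and the treatment indicator are so strongly related. A minimal sketch (the cutoff and scores are invented):

```python
def sharp_rd_assign(score, cutoff):
    """Sharp RD: every unit at or above the cutoff is treated,
    every unit below it is a control."""
    return 1 if score >= cutoff else 0

treatment = [sharp_rd_assign(s, cutoff=50.0) for s in [42.0, 50.0, 63.0]]
```

Because treatment is a step function of the score, both variables carry overlapping information when entered into the same impact model, producing the multicollinearity the abstract discusses.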
Jolynn Pek, David B Flora
Statistical practice in psychological science is undergoing reform, which is reflected in part by strong recommendations for reporting and interpreting effect sizes and their confidence intervals. We present principles and recommendations for research reporting and emphasize the variety of ways effect sizes can be reported. Additionally, we emphasize interpreting and reporting unstandardized effect sizes because of common misconceptions regarding standardized effect sizes, which we elucidate. Effect sizes should directly answer their motivating research questions, be comprehensible to the average reader, and be based on meaningful metrics of their constituent variables...
March 9, 2017: Psychological Methods
Walter P Vispoel, Carrie A Morris, Murat Kilinc
Although widely recognized as a comprehensive framework for representing score reliability, generalizability theory (G-theory), despite its potential benefits, has been used sparingly in reporting of results for measures of individual differences. In this article, we highlight many valuable ways that G-theory can be used to quantify, evaluate, and improve psychometric properties of scores. Our illustrations encompass assessment of overall reliability, percentages of score variation accounted for by individual sources of measurement error, dependability of cut-scores for decision making, estimation of reliability and dependability for changes made to measurement procedures, disattenuation of validity coefficients for measurement error, and linkages of G-theory with classical test theory and structural equation modeling...
January 23, 2017: Psychological Methods
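One of the uses the abstract lists, percentages of score variation accounted for by individual sources of measurement error, reduces to normalizing estimated variance components. A toy sketch (the component names and values are invented; estimating the components is the actual G-study work):

```python
def variance_percentages(components):
    """Express each G-theory variance component as a proportion of the
    total observed-score variance."""
    total = sum(components.values())
    return {source: value / total for source, value in components.items()}

shares = variance_percentages({"person": 6.0, "item": 1.0, "residual": 3.0})
```

Here 60% of observed-score variance is attributable to true person differences, with the remainder split between item effects and residual error.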
Herbert W Marsh, Jiesi Guo, Philip D Parker, Benjamin Nagengast, Tihomir Asparouhov, Bengt Muthén, Theresa Dicke
Scalar invariance is an unachievable ideal that in practice can only be approximated, often using potentially questionable approaches such as partial invariance based on a stepwise selection of parameter estimates with large modification indices. Study 1 demonstrates an extension of the power and flexibility of the alignment approach for comparing latent factor means in large-scale studies (30 OECD countries, 8 factors, 44 items, N = 249,840), for which scalar invariance is typically not supported in the traditional confirmatory factor analysis approach to measurement invariance (CFA-MI)...
January 12, 2017: Psychological Methods
Larry V Hedges
I discuss how methods that adjust for publication selection involve implicit or explicit selection models. Such models describe the relation between the studies conducted and those actually observed. I argue that the evaluation of selection models should include an evaluation of the plausibility of the empirical implications of that model. This includes how many studies would have had to exist to yield the observed sample of studies. I also argue that the amount of influence that one or a small number of studies might have on the overall results is also important to understand...
March 2017: Psychological Methods
Martyna Citkowicz, Jack L Vevea
Quantitative research literature is often biased because studies that fail to find a significant effect (or that demonstrate effects in an undesired or unexpected direction) are less likely to be published. This phenomenon, termed publication bias, can cause problems when researchers attempt to synthesize results using meta-analytic methods. Various techniques exist that attempt to estimate and correct meta-analyses for publication bias. However, there is no single method that can (a) account for continuous moderators by including them within the model, (b) allow for substantial data heterogeneity, (c) produce an adjusted mean effect size, (d) include a formal test for publication bias, and (e) allow for correction when only a small number of effects is included in the analysis...
March 2017: Psychological Methods
Lisa L Harlow
Psychological Methods celebrated its 20-year anniversary recently, having published its first quarterly issue in March 1996. It seemed time to provide a brief overview of the history, the highlights over the years, and the current state of the journal, along with tips for submissions. The article is organized to discuss (a) the background and development of the journal; (b) the top articles, authors, and topics over the years; (c) an overview of the journal today; and (d) a summary of the features of successful articles that usually entail rigorous and novel methodology described in clear and understandable writing and that can be applied in meaningful and relevant areas of psychological research...
March 2017: Psychological Methods
Patrick E Shrout, Marika Yip-Bannicq
An important step in demonstrating the validity of a new measure is to show that it is a better predictor of outcomes than existing measures, often called incremental validity. Investigators can use regression methods to argue for the incremental validity of new measures, while adjusting for competing or existing measures. The argument is often based on patterns of binary significance tests (BST): (a) both measures are significantly related to the outcome, (b) when adjusted for the new measure the competing measure is no longer significantly related to the outcome, but (c) when adjusted for the competing measure the new measure is still significantly related to the outcome...
March 2017: Psychological Methods
Ulf Böckenholt
The recently proposed class of item response tree models provides a flexible framework for modeling multiple response processes. This feature is particularly attractive for understanding how response styles may affect answers to attitudinal questions. Facilitating the disassociation of response styles and attitudinal traits, item response tree models can provide powerful process tests of how different response formats may affect the measurement of substantive traits. In an empirical study, 3 response formats were used to measure the 2-dimensional Personal Need for Structure traits...
March 2017: Psychological Methods
Nathan T Carter, Dev K Dalal, Li Guan, Alexander C LoPilato, Scott A Withrow
Psychologists are increasingly positing theories of behavior that suggest psychological constructs are curvilinearly related to outcomes. However, results from empirical tests for such curvilinear relations have been mixed. We propose that correctly identifying the response process underlying responses to measures is important for the accuracy of these tests. Indeed, past research has indicated that item responses to many self-report measures follow an ideal point response process, wherein respondents agree only to items that reflect their own standing on the measured variable, as opposed to a dominance process, wherein stronger agreement, regardless of item content, is always indicative of higher standing on the construct...
March 2017: Psychological Methods
Thierno M O Diallo, Alexandre J S Morin, HuiZhong Lu
This article evaluates the impact of partial or total covariate inclusion or exclusion on the class enumeration performance of growth mixture models (GMMs). Study 1 examines the effect of including an inactive covariate when the population model is specified without covariates. Study 2 examines the case in which the population model is specified with 2 covariates influencing only the class membership. Study 3 examines a population model including 2 covariates influencing the class membership and the growth factors...
March 2017: Psychological Methods
Bhargab Chattopadhyay, Ken Kelley
The standardized mean difference is a widely used effect size measure. In this article, we develop a general theory for estimating the population standardized mean difference by minimizing both the mean square error of the estimator and the total sampling cost. Fixed sample size methods, in which the sample size is planned before the start of a study, cannot simultaneously minimize both the mean square error of the estimator and the total sampling cost. To overcome this limitation of the current state of affairs, this article develops a purely sequential sampling procedure, which provides an estimate of the sample size required to achieve a sufficiently accurate estimate with minimum expected sampling cost...
March 2017: Psychological Methods
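The estimand here is the familiar two-group standardized mean difference. A sketch of the pooled-SD version (the article's contribution is the sequential stopping rule, not this formula; the numbers below are illustrative):

```python
import math

def standardized_mean_difference(m1, m2, s1, s2, n1, n2):
    """Cohen's d: raw mean difference divided by the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

d = standardized_mean_difference(10.0, 8.0, 4.0, 4.0, 30, 30)
```

A sequential procedure would recompute an estimate like this after each observation (or batch) and stop once a precision criterion, balanced against sampling cost, is met.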
Oliver Lüdtke, Alexander Robitzsch, Simon Grund
Multiple imputation is a widely recommended means of addressing the problem of missing data in psychological research. An often-neglected requirement of this approach is that the imputation model used to generate the imputed values must be at least as general as the analysis model. For multilevel designs in which lower level units (e.g., students) are nested within higher level units (e.g., classrooms), this means that the multilevel structure must be taken into account in the imputation model. In the present article, we compare different strategies for multiply imputing incomplete multilevel data using mathematical derivations and computer simulations...
March 2017: Psychological Methods
Amanda K Montoya, Andrew F Hayes
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in each of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis...
March 2017: Psychological Methods
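The indirect effect the authors formalize inference for is, in the simplest between-participant case, the product of the condition-to-mediator path (a) and the mediator-to-outcome path adjusting for condition (b). A pure-Python sketch of that product-of-coefficients idea (this is not Montoya and Hayes's within-participant path model; the data are invented):

```python
def two_predictor_slopes(x1, x2, y):
    """OLS slopes of y on two predictors, solved via the normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

def simple_slope(x, y):
    """OLS slope of y on a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

X = [0, 0, 1, 1]          # condition indicator
M = [0.0, 1.0, 2.0, 3.0]  # mediator
Y = [0.0, 3.0, 6.0, 9.0]  # outcome
a = simple_slope(X, M)                # condition -> mediator path
_, b = two_predictor_slopes(X, M, Y)  # mediator -> outcome, adjusting for condition
indirect = a * b
```

Basing inference on `indirect` directly, rather than on a sequence of significance tests for its components, is the shift in emphasis the abstract describes.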
Barry H Cohen
Sharpe's (2013) article considered reasons for the apparent resistance of substantive researchers to the adoption of newer statistical methods recommended by quantitative methodologists, and possible ways to reduce that resistance, focusing on improved communication. The important point that Sharpe missed, however, is that because research methods vary radically from one subarea of psychology to another, a particular statistical innovation may be much better suited to some subareas than others. Although there may be some psychological or logistical explanations that account for resistance to innovation in general, to fully understand the resistance to any particular innovation, it is necessary to consider how that innovation impacts specific subareas of psychology...
March 2017: Psychological Methods
Daniel McNeish, Laura M Stapleton, Rebecca D Silverman
In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors...
March 2017: Psychological Methods

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"