Linear mixed-effects models are often used for their ability to handle missing data using maximum likelihood estimation. In this post I will present a simple example of when the LMM fails, and illustrate two MNAR sensitivity analyses: the pattern-mixture method and the joint model (shared parameter model). This post is based on a small example from my PhD thesis.

Rubin (1976) presented three types of missing data mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). LMMs provide unbiased estimates under MAR missingness. If we have the complete outcome variable Y (which is made up of the observed data Y<sub>obs</sub> and the missing values Y<sub>mis</sub>) and a missing data indicator R (Little and Rubin 2014; Schafer and Graham 2002), then we can write the MCAR and MAR mechanisms as,

$$
\begin{aligned}
\text{MCAR} &: P(R \mid Y) = P(R) \\
\text{MAR} &: P(R \mid Y) = P(R \mid Y_{\text{obs}}).
\end{aligned}
$$

If the missingness depends on Y<sub>mis</sub>, the missing values in Y, then the mechanism is MNAR. MCAR and MAR are called ignorable because the precise model describing the missing data process is not needed. In theory, valid inference under MNAR missingness requires specifying a joint distribution for both the data and the missingness mechanism (R. J. A. Little 1995). There are no ways to test if the missing data are MAR or MNAR (Molenberghs et al. 2008; Rhoads 2012), and it is therefore recommended to perform sensitivity analyses using different MNAR mechanisms (Schafer and Graham 2002; R. J. A. Little 1995).

LMMs are frequently used by researchers to try to deal with missing data problems. However, researchers frequently misunderstand the MAR assumption and often fail to build a model that would make the assumption more plausible. A common problem is that researchers do not include covariates that potentially predict dropout; thus, it is assumed that missingness depends only on the previously observed values of the outcome. Sometimes you even see researchers using tests, e.g., Little's MCAR test, to prove that the missing data mechanism is either MCAR or MAR and hence ignorable, which is clearly a misunderstanding and builds on faulty logic.

A related misunderstanding is that the LMM's missing data assumption is more liberal because it allows participants' slopes to vary. It is sometimes assumed that if a random slope is included in the model it can also be used to satisfy the MAR assumption. Clearly, it would be very practical if the inclusion of random slopes would allow missingness to depend on patients' latent change over time, because it is probably true that some participants' dropout is related to their symptoms' rate of change over time. Unfortunately, the random effects are latent variables and not observed variables; hence, such a missingness mechanism would also be MNAR (R. J. A. Little 1995). The figure below illustrates the MAR, outcome-based MNAR, and random coefficient-based MNAR mechanisms.

Figure 1. Three different dropout mechanisms in longitudinal data from one patient. a) Illustrates a MAR mechanism where the patient's likelihood of dropping out is related to an observed large value. b) Shows an outcome-related MNAR mechanism, where dropout is related to a large unobserved value. c) Shows a random-slope MNAR mechanism where the likelihood of dropping out is related to the patient's unobserved slope.

To illustrate these concepts, let's generate data from a two-level LMM with random intercepts and slopes, and include an MNAR missing data mechanism where the likelihood of dropping out depends on the patient-specific random slopes. Moreover, let's assume that the missingness differs between the treatment and control group. This isn't that unlikely in unblinded studies (e.g., wait-list controls).

Figure 2. A differential MNAR dropout process where the probability of dropping out from a trial depends on the patient-specific slopes, which interact with the treatment allocation. The probability of dropout is assumed to be constant over time.

Figure 3. A sample of patients drawn from the MNAR (random slope) data-generating process. Circles represent complete observations, and the bold line represents the slope before dropping out.

The equations for the dropout can be written as,

$$
\begin{aligned}
\operatorname{logit}\left(\Pr(R_{ij} = 1 \mid TX_{ij} = 1)\right) &= -\frac{u_{1i}}{\sigma_{u_1}} + \operatorname{logit}\left(\text{P(dropout)}\right) \\
\operatorname{logit}\left(\Pr(R_{ij} = 1 \mid TX_{ij} = 0)\right) &= \operatorname{logit}\left(\text{P(dropout)}\right),
\end{aligned}
$$

where P(dropout) gives the probability of dropout, which is assumed to be constant at all time points, and u<sub>1i</sub> is patient i's random slope.

We can see from the output that the estimate of the treatment effect is really close to the estimate from the complete data (-0.23 vs -0.25). There's only one small problem with the joint model, and that is that we almost never know what the correct model is…

## A small simulation

Now let's run a small simulation to show the consequences of this random-slope dependent MNAR scenario.
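To make the dropout process concrete, here is a small self-contained sketch of such a random-slope dependent MNAR data-generating process in Python. The parameter values, sample sizes, and the simple pooled OLS estimator (used instead of a full LMM to keep the snippet dependency-free) are illustrative choices, not the models from the analysis above; they are only meant to show the direction of the bias.

```python
import numpy as np

rng = np.random.default_rng(2024)

# --- illustrative parameters (arbitrary, for demonstration only) ---
n_per_arm, n_time = 2000, 10
t = np.arange(n_time, dtype=float)
beta0, beta1 = 10.0, -0.25   # intercept and control-group slope
tx_effect = -0.25            # treatment effect on the slope (the estimand)
sd_u0, sd_u1, sd_e = 2.0, 0.5, 1.0
p_drop = 0.10                # constant baseline P(dropout) per time point

n = 2 * n_per_arm
tx = np.repeat([0.0, 1.0], n_per_arm)
u0 = rng.normal(0.0, sd_u0, n)   # random intercepts
u1 = rng.normal(0.0, sd_u1, n)   # random slopes

# Complete data: y_ij = (beta0 + u0_i) + (beta1 + tx_effect*TX_i + u1_i)*t_j + e_ij
slope_i = beta1 + tx_effect * tx + u1
y = (beta0 + u0)[:, None] + slope_i[:, None] * t[None, :] \
    + rng.normal(0.0, sd_e, (n, n_time))

def logit(p):
    return np.log(p / (1.0 - p))

def invlogit(x):
    return 1.0 / (1.0 + np.exp(-x))

# MNAR dropout: in the treated arm the per-visit dropout hazard rises as the
# patient's standardized random slope gets steeper downwards (fast improvers
# leave); controls drop out with a constant probability.
hazard = invlogit(logit(p_drop) - (u1 / sd_u1) * tx)
dropped_by = np.cumsum(rng.random((n, n_time - 1)) < hazard[:, None], axis=1) > 0
observed = np.ones((n, n_time), dtype=bool)
observed[:, 1:] = ~dropped_by    # monotone missingness after dropout

def interaction_ols(mask):
    """Pooled OLS of y on [1, t, TX, t*TX]; returns the t-by-TX coefficient."""
    T = np.tile(t, (n, 1))
    TX = np.tile(tx[:, None], (1, n_time))
    X = np.column_stack([np.ones(mask.sum()), T[mask], TX[mask], (T * TX)[mask]])
    coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return coef[3]

est_complete = interaction_ols(np.ones_like(observed))
est_observed = interaction_ols(observed)
print(f"complete data: {est_complete:.3f}, after MNAR dropout: {est_observed:.3f}")
```

On the complete data the estimator recovers the true slope difference of -0.25; on the observed data the treatment effect is attenuated toward zero, because the fastest-improving treated patients stop contributing follow-up measurements and the remaining treated observations look worse than the full cohort.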