Estimated duration of disease approached significance for global function

Results exploring the associations between HIV disease characteristics and global neurocognitive function indicated a significant negative association between estimated duration of HIV disease and global neurocognitive function. Therefore, estimated duration of disease was included as a covariate in the linear regression model for PWH. The number of total drinks was not associated with neurocognition in PWH. In the HIV- group, results indicated significant quadratic effects of total drinks on global function, executive function, learning, delayed recall, and motor skills. We applied the Johnson-Neyman (J-N) technique to inspect these significant changes in the slope of total drinks on neurocognition as a function of total drinks within the HIV- group. Total drinks demonstrated positive, statistically significant associations with neurocognition at the lower end of “low-risk” drinking. Conversely, total drinks demonstrated negative, statistically significant associations with neurocognition at the higher end of “low-risk” drinking. Although there was a significant quadratic association between total drinks and delayed recall, the negative slope did not reach statistical significance. Finally, to examine potential ongoing neurocognitive effects of lifetime AUD among alcohol abstainers, a Chi-square statistic was calculated.

Our study is among the first to examine the curvilinear association between recent “low-risk” alcohol consumption and neurocognition among persons with and without HIV. Among HIV- individuals, the association between low-risk drinking and neurocognition followed the expected inverted-J-shaped pattern, with better neurocognition occurring at intermediate levels of “low-risk” drinking compared to alcohol abstinence and heavier consumption. Specifically, region-of-significance analyses indicated a positive slope of alcohol consumption on global neurocognitive function when total drinks ranged from zero to 18, whereas a negative slope emerged when total drinks ranged from 52 to 60, suggesting a potentially innocuous range between 18 and 52 drinks per month for HIV- individuals.
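
To make the analytic approach concrete, the sketch below fits a quadratic regression of a neurocognitive T-score on total drinks and then searches for Johnson-Neyman regions of significance for the simple slope. This is an illustrative reconstruction under stated assumptions, not the authors' analysis code: the variable names, simulated data, and two-sided .05 criterion are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
total_drinks = rng.uniform(0, 60, n)          # past-30-day total drinks (assumed range)
tscore = 50 + 0.4 * total_drinks - 0.006 * total_drinks**2 + rng.normal(0, 5, n)

# Quadratic model: T-score ~ drinks + drinks^2
X = sm.add_constant(pd.DataFrame({"drinks": total_drinks,
                                  "drinks_sq": total_drinks**2}))
fit = sm.OLS(tscore, X).fit()

b1, b2 = fit.params["drinks"], fit.params["drinks_sq"]
cov = fit.cov_params()

def simple_slope(x):
    """Simple slope of drinks at value x (b1 + 2*b2*x) and its t-statistic."""
    slope = b1 + 2 * b2 * x
    var = (cov.loc["drinks", "drinks"]
           + 4 * x**2 * cov.loc["drinks_sq", "drinks_sq"]
           + 4 * x * cov.loc["drinks", "drinks_sq"])
    return slope, slope / np.sqrt(var)

tcrit = stats.t.ppf(0.975, df=fit.df_resid)   # two-sided alpha = .05
grid = np.linspace(0, 60, 121)
regions = [(x,) + simple_slope(x) for x in grid if abs(simple_slope(x)[1]) > tcrit]
# Contiguous runs of x in `regions` with positive vs. negative slopes correspond
# to the lower and upper regions of significance described in the text.
```

In a real analysis the slope and its standard error would come from the fitted covariate-adjusted model rather than simulated data, but the region-of-significance logic is the same.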

This global effect was driven by abilities supported by frontal brain regions, where alcohol metabolism is thought to be particularly active. Additionally, consistent with our hypotheses, there was no quadratic association between level of low-risk alcohol consumption and neurocognition among PWH. This suggests the presence of other factors that may supersede the potentially beneficial neurocognitive effects of low-risk alcohol consumption in the context of HIV. For example, age was significantly associated with global function, executive function, learning, and delayed recall in PWH, despite using age-adjusted T-scores in analyses. Extant literature suggests that the inverted-J-shaped association is not unique to neurocognition, which may point toward possible mechanisms underlying the neuroprotective effect of low-risk alcohol consumption. For example, evidence supports a cardioprotective effect of low-risk alcohol consumption, including a reduced risk of coronary heart disease, myocardial infarction, ischemic stroke, peripheral arterial disease, and all-cause mortality. Risk is higher among alcohol abstainers and when alcohol consumption is high, and lower when alcohol consumption is low. Although our data do not directly measure the pathways underlying a potential neuroprotective effect of low-risk alcohol consumption, or its apparent specificity to HIV- adults, several plausible bio-psychosocial mechanisms can be drawn from the extant literature. From a biological perspective, low-risk alcohol use has been linked to increased high-density lipoprotein (HDL) levels and may carry antithrombotic, antioxidative, and anti-inflammatory effects that benefit the neurovascular unit. Additionally, alcohol may directly enhance learning and executive function via stimulation of acetylcholine in the prefrontal cortex and hippocampus. Considering that alcohol consumption increases HDL cholesterol levels, it has been proposed that the association between HDL cholesterol and lowered risk of coronary heart disease is mediated in part by alcohol-induced increases in HDL cholesterol. Other possible mechanisms underlying the observed beneficial effect of low-risk drinking on neurocognition among HIV- individuals in our sample may involve lifestyle factors and/or indicators of socioeconomic status not measured in the current study. For example, previous research exploring beneficial effects of drinking has suggested that low-risk alcohol consumption may be an indicator of higher socioeconomic status and engagement in a healthier lifestyle that includes better nutrition and physical activity.

Moreover, persons of lower socioeconomic status may not have the means to afford alcohol and may be more medically compromised, which could lead to voluntary or medically recommended abstinence. It is also well known that individuals of higher socioeconomic status are less likely to experience negative consequences from alcohol use compared to those of lower socioeconomic status who drink the same amount. It is possible that our sample of HIV- participants was of relatively high socioeconomic status, especially compared to our sample of PWH, as HIV disproportionately affects individuals from lower-income areas with fewer resources. Although we examined associations between certain HIV disease characteristics, alcohol use, and neurocognition, PWH face additional bio-psychosocial disadvantages that may explain the lack of beneficial effects of low-risk drinking in this group. Even in the context of low-risk use, the immunosuppressant properties of alcohol may counteract the cardioprotective effects on downstream neurocognitive health among PWH, as immunosuppression leads to greater viral infectivity, replication, and subsequently poorer neuronal integrity. Furthermore, our HIV groups had different proportions of individuals with current and lifetime depression, with significantly higher rates among PWH. Depression is known to have adverse effects on neurocognitive performance in HIV, possibly limiting the expression of potentially beneficial effects of low-risk drinking among our PWH sample.

The current study has several limitations. Although we detected effects that remained statistically significant after adjusting for relevant covariates, there could be unmeasured health and lifestyle confounders, such as disability, social status, and reasons for drinking, that may mediate the association between alcohol consumption and neurocognition. Next, our sample of low-risk drinkers, especially in the HIV- group, had fewer drinkers on the high end of the low-risk drinking range, more alcohol abstainers, and more drinkers on the lower end of the low-risk drinking range. Furthermore, we did not have any method to verify self-reported alcohol abstinence. Despite our skewed sample in terms of levels of alcohol consumption, we still detected robust effects even after adjusting for relevant covariates.

Objectively measured recent alcohol consumption would have reduced the possibility of misreporting alcohol abstinence, drinking quantities, and frequency; however, we believe structured interviews remain clinically relevant given that our timeline followback covered only the prior 30 days. Future alcohol consumption research should employ methodologies that capture real-time and ecologically valid data, rather than relying on retrospective recall. While the full range of “low-risk” drinking does not have discretely defined cut-points for minimal, light, and moderate alcohol use, our inclusion of the J-N technique allowed us to identify specific boundaries of recent alcohol consumption within which alcohol confers neurocognitive benefits or risks among HIV- individuals. Although these analyses may help clinical efforts to identify intervals of safe drinking for certain populations, interpretations must remain cautious given the differences in low-risk drinking criteria for men and women. According to the NIAAA criteria for low-risk drinking, we included women who self-reported 0-30 drinks in the last 30 days and men who self-reported 0-60 drinks. Therefore, the results of the J-N technique for the lower regions of significance are applicable to both men and women, whereas the results in the upper regions of significance are applicable only to men. Future work with equal sample sizes by sex should investigate the associations between recent drinking and neurocognitive function to further adjust for sex differences. In conclusion, our results are consistent with the hypothesis of a curvilinear association between recent alcohol consumption and neurocognition within the range of low-risk drinking and only among HIV- older adults, such that intermediate levels of recent alcohol use were associated with better neurocognition compared to alcohol abstinence as well as lower and higher ranges of low-risk consumption. Among PWH, there were no detected beneficial or deleterious effects of low-risk alcohol consumption on neurocognition, suggesting that other factors may supersede the neurocognitive effects of low-risk alcohol consumption in the context of HIV.

Whereas genetic studies have traditionally ascertained cases for a particular disorder, PBCs may contain individuals who can serve as cases for numerous different disorders. However, several limitations need to be considered. The ascertainment of PBCs, while not focused on a specific diagnosis, is never random and therefore does not represent the general population19. For example, 23andMe and UKB research participants are more highly educated and have higher SES than the general population. In addition, similar to traditionally ascertained genetic cohorts, current PBCs are overwhelmingly made up of individuals of European ancestry, although MVP is a notable exception. Another limitation of PBCs is that certain disorders are underrepresented; for example, in UKB, the frequency of schizophrenia is lower than in the general population, perhaps reflecting the lower rate at which schizophrenia patients volunteered to participate in such a rigorous study. The age of subjects in PBCs is another potential limitation. For example, the use of diagnoses for childhood-onset disorders such as ADHD and autism has changed dramatically over the past few decades, meaning that older subjects will have a lower than expected prevalence of these diagnoses.
In addition, the prevalence of environmental exposures, which modulate the prevalence of many traits and diseases, has changed over time, which may confound various genetic studies. Lastly, privacy and intellectual property concerns restrict the sharing of raw data, and even of the results obtained, from some PBCs. Despite these limitations, PBCs are attractive because they are economical, offer the potential to dramatically increase sample size, provide a much greater diversity of phenotypes, and lend themselves to innovative study designs. In some PBCs, clinical diagnoses are not available; however, self-reported clinical diagnoses may be.

For obvious reasons, these self-reported diagnoses must be interpreted with caution; however, the strength of the genetic correlation between gold-standard diagnoses and self-reported diagnoses helps to address this concern. For example, self-reported MDD and clinician-assigned MDD showed a robust genetic correlation. In other cases, self-reported diagnoses are unavailable, but screening tools can be used to approximate diagnoses. For example, scores from the Alcohol Use Disorder Identification Test (AUDIT), which is a screening tool for AUD, were available in research participants from 23andMe and UKB. Sanchez-Roige et al.24 found that when AUDIT scores were converted into a case-control phenotype, they were highly genetically correlated with AUD25. These examples demonstrate that, even when clinical diagnoses are not available, there is still significant value in using self-reported information from PBCs for genetic studies of psychiatric disorders.

In general, there is a tradeoff between phenotyping depth and sample size. The quest for larger sample sizes has led to the adoption of “minimal phenotyping,” where a complex disease or trait may be reduced to a single yes-or-no question. Minimal phenotyping is sometimes criticized because it implicitly assumes that minimal phenotypes are merely noisy measurements of a true underlying phenotype. Cai et al. sought to examine this question empirically by considering both self-reported diagnosis of MDD and clinician measurements of the cardinal symptoms of MDD and found that minimal phenotyping yielded a qualitatively different trait. Another empirical examination of minimal phenotyping used a multivariate framework to evaluate several psychiatric disorders and self-report measures of their cardinal symptoms. That study identified large genetic correlations between some disorder-symptom pairs, but very modest genetic correlations between others. Despite these limitations, robust genetic signals — of something — can be obtained using minimal phenotyping; how useful these signals will be for understanding the pathophysiology of psychiatric disorders is a matter of ongoing debate, but when large, minimally phenotyped datasets exist, it seems natural that they should be analyzed.

Regardless of whether diagnoses are made by an expert clinician, a structured interview, or self-report, there is a broader question about whether the current diagnostic categories are optimal for genetic research, given that the DSM was never intended to be a research tool. A recent review summarized this issue with the memorable phrase “our genes don’t seem to have read the DSM”. Initiatives such as the National Institute of Mental Health Research Domain Criteria (RDoC) and the Hierarchical Taxonomy of Psychopathology provide new ways of classifying psychiatric disorders based on dimensions of observable behavioral and neurobiological measures, rather than diagnostic categories. These approaches have not been universally accepted. Even before RDoC, there was widespread enthusiasm for genetic studies of endophenotypes; however, studies of endophenotypes flourished in the era of candidate genes, when the necessity of large sample sizes was not generally understood. This may have fostered undue skepticism about the utility of endophenotypes for genetic research.
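
As an illustration of the score-to-diagnosis conversion described above for AUDIT, the sketch below dichotomizes hypothetical AUDIT totals into a case-control phenotype. The dataframe, column names, and the cutoff of 8 (the conventional AUDIT screening threshold for hazardous drinking) are assumptions for illustration only and are not necessarily the criteria used by Sanchez-Roige et al.

```python
import pandas as pd

# Hypothetical per-participant AUDIT totals; real data would come from the cohort.
audit = pd.DataFrame({"participant_id": [101, 102, 103, 104],
                      "audit_total": [2, 9, 15, 5]})

AUDIT_CUTOFF = 8  # conventional screening threshold for hazardous drinking (illustrative)
audit["aud_case"] = (audit["audit_total"] >= AUDIT_CUTOFF).astype(int)

# `aud_case` can then be treated as a minimal case-control phenotype (e.g., in a GWAS),
# and its genetic correlation with clinically ascertained AUD estimated downstream.
```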