Our findings in Fresno County identified both shared and region-specific risk factors for preterm birth.

Data used for the study were received by the California Preterm Birth Initiative at the University of California San Francisco by June 2016.

Not only did the risk models differ by residence within Fresno County, but the percentage of women with each risk factor varied greatly for some factors. In urban residences, 12.2% of women with preterm births smoked, while 6.6% of women in rural residences with preterm births smoked. Similarly, 8.9% of urban women with a preterm birth used drugs or alcohol, compared with 4.4% of women in rural residences. Nearly five percent of urban women delivering preterm had fewer than three prenatal care visits, while 2.3% of women in suburban residences had this few visits. The percentage of women with a preterm birth and an interpregnancy interval of less than six months ranged from 7.7% to 11.2%. When these risk factors are examined in more geographic detail, appropriate targets for preterm birth reduction emerge. For instance, in six census tracts, 15% or more of mothers of preterm infants smoked during their pregnancy – four in urban residences and two in suburban residences. Also, five census tracts in urban residences showed that over 10% of mothers who delivered preterm used drugs or alcohol. Over 2,600 women delivering in Fresno County had a cumulative risk score for preterm birth of 3.0 or greater: 2.2% of women living in urban residences, 4.1% in suburban, and 3.7% in rural residences had this high risk score. In this study of preterm births in Fresno County, we found that the type and magnitude of risk and protective factors differed by women's region of residence.
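The cumulative risk score described above lends itself to a simple additive formulation: sum a weight for each risk factor a woman carries and flag scores of 3.0 or greater as high risk. The sketch below is a hypothetical illustration only; the factor names and unit weights are assumptions, not the study's actual scoring scheme.

```python
# Hypothetical cumulative risk score: the factor list and the unit
# weights are illustrative assumptions, not the study's scoring scheme.
RISK_WEIGHTS = {
    "smoking": 1.0,
    "drug_or_alcohol_use": 1.0,
    "fewer_than_three_prenatal_visits": 1.0,
    "short_interpregnancy_interval": 1.0,
    "previous_preterm_birth": 1.0,
}

def cumulative_risk_score(factors):
    """Sum the weights of the risk factors present for one woman."""
    return sum(RISK_WEIGHTS[f] for f in factors)

def is_high_risk(factors, threshold=3.0):
    """Flag women whose cumulative score meets the 3.0 cut-point."""
    return cumulative_risk_score(factors) >= threshold
```

Under this additive assumption, any combination of three or more factors crosses the 3.0 threshold, which is one way a small minority of women (2.2–4.1% by region) could accumulate a high score.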

Black women and women with diabetes, hypertension, infection, fewer than three prenatal care visits, a previous preterm birth, or an interpregnancy interval of less than six months were at increased risk of preterm birth, regardless of location of residence. Public insurance, maternal education of less than 12 years, underweight BMI, and an interpregnancy interval of five years or more were identified as risk factors only for women in urban residences. Women living in urban locations who were born in Mexico and who were overweight by BMI were at lower risk for preterm birth; WIC participation was protective for women in both urban and rural locations. Taken together, these findings suggest targeted place-based interventions and policy recommendations can be pursued. The preterm birth risk factors identified in these analyses are not unique to Fresno County: previous work has also shown that women of color, women with lower education or socioeconomic status, women with co-morbidities such as hypertension and diabetes, women who smoke, and women with short interpregnancy intervals are at elevated risk of preterm birth. In Fresno County, however, we observed that these risks differ in magnitude. This is critical, as the percentage of women in each region with a given risk factor can vary greatly. For Hispanic women, for example, the degree of risk was mild – only a 1.1-fold increase in risk. However, 72% of the population giving birth in rural Fresno County is Hispanic, suggesting that interventions reaching this population may provide the most impact. Similarly, Black women were at elevated risk of preterm birth regardless of location of residence. Since urban residences have the highest percentage of Black women and rural residences have the lowest, focusing prevention efforts for Black women in urban residences may be an effective approach.
Others have found that pre-pregnancy initiation of Medicaid is associated with earlier initiation of prenatal care, a factor that may reduce preterm birth rates. In addition, participation in the WIC program has shown a moderate reduction in the risk of a small-for-gestational-age infant and has been associated with reduced infant mortality in Black populations.

Fresno women from both urban and rural residences who participated in the WIC program were less likely to deliver preterm, while women living in urban locations who were publicly insured through Medi-Cal coverage for delivery were at increased risk for preterm birth. Low income is a criterion for both public assistance programs, and over 32% of families in this region live below the poverty line; it is apparent that socioeconomic status is a complex risk factor for preterm birth. A key takeaway from this study is that women who accessed prenatal care more frequently – three or more prenatal care visits – were less likely to deliver preterm. Fresno County may be able to improve preterm birth rates by addressing factors that encourage prenatal care access, which may include enrollment in Medi-Cal during the preconception period and increased WIC participation. Identifying regions where a high percentage of women do not access three or more prenatal care visits may suggest locations for interventions such as home visits or mobile clinics. Using a large administrative database allows for examination of rates and risks that would not be possible with other data sources. Despite these strengths, the study has some critical limitations. By design, the findings are very specific to one area of California and may not be as applicable to other areas of the state, country, or world. In fact, we recently conducted a similar study examining preterm birth risk factors by sub-type for all of California. Similar to the entire California population, we demonstrated increased risk of preterm birth for Fresno County women who were of Black race/ethnicity, who had diabetes or hypertension during pregnancy, or who had a previous preterm birth. However, Fresno County differed from the state as a whole in a few ways.
Unlike the state of California as a whole, Hispanic women, women over 34 years at delivery, and underweight women in urban residences in Fresno County were at increased risk for preterm birth. Also, education over 12 years did not provide protection against preterm birth in any of the Fresno County residences, although higher education did provide protection when we looked at the whole state of California. These differences point to specific pathways occurring in Fresno County that may be distinct from the state as a whole, and demonstrate the value of place-based investigation of risk factors when examining a complex outcome such as preterm birth. Other residences may benefit from similar analyses to identify risk and protective factors that are important on a local level.

An additional limitation, as with most administrative databases, is that the accuracy and ascertainment of variables are not easily validated. Previous studies of California birth certificate data suggest that race/ethnicity is a valid measure of self-identified race/ethnicity for all but Native Americans, and that the best obstetric estimate of gestation may underestimate preterm delivery rates. Previously reported rates of preterm birth in Fresno County are around 9.5%, whereas the rate was 8.4% overall in our population after removing multiple-gestation pregnancies and pregnancies with major birth defects. Additionally, United States estimates for drug dependence/use during pregnancy are 5.0% to 5.4%, whereas the rate was only 2.5% in our population. This under-ascertainment may mean that we are capturing the most severe diagnoses, potentially overestimating our risk calculations. Alternatively, under-ascertainment also implies that drug users were likely in our referent population, which would underestimate our risk calculations. This examination of Fresno County preterm birth may provide important opportunities for local intervention. Several populations at risk regardless of location of maternal residence were identified that deserve targeted interventions. Interventions focused on diabetes, hypertension, and drug or alcohol dependence/abuse across the county may be effective for preterm birth reduction. We identified several modifiable risk and resilience factors across the reproductive life course that can be addressed to reduce preterm birth rates. Given the complex clinical and social determinants that influence preterm birth, cross-sector collaborative efforts that take into account place-based contextual factors may be helpful and are actively being pursued in Fresno County.
Ultimately, refining our understanding of risk and resilience and how these factors vary across a geography is a fundamental step in pursuing a precision public health approach to achieve health equity.

The smoking prevalence among the general U.S. population is estimated to be 14%; however, the prevalence of smoking among individuals experiencing homelessness in the U.S. is 70%. Smoking-caused cancer and cardiovascular disease are the leading causes of death among individuals experiencing homelessness. Previous studies estimating tobacco prevalence among homeless adults have focused exclusively on cigarette smoking. However, with the increasing availability and popularity of alternative tobacco products (ATPs), defined as flavored and unflavored non-cigarette tobacco products such as electronic cigarettes, cigars, or blunts, use of these products has increased among individuals experiencing homelessness. Between 51% and 68% of individuals experiencing homelessness have used one or more forms of ATP in the past 30 days. More than 50% of homeless smokers acknowledge a high risk to health from non-cigarette combustible tobacco.

Studies have explored associations between ATP use and past-year cigarette quit attempts and have found mixed results. In one study among homeless smokers, ATP use was not associated with readiness to quit or past-year quit attempts, whereas in a more recent study, ATP use was associated with a higher number of past-year quit attempts compared to cigarette-only smokers. While these studies have contributed to tobacco research by showing that ATP use is common among individuals experiencing homelessness, there are still gaps in our understanding of patterns of ATP use and its consequences. Flavored non-cigarette tobacco use is increasing among the general population, and flavors are the primary motivators for initiation and continued use of ATPs. However, flavors are also associated with long-term addiction and difficulty with smoking cessation. We know of no studies to date that have examined flavored non-cigarette tobacco use among individuals experiencing homelessness. Individuals experiencing homelessness face substantial barriers to smoking cessation, and the use of ATPs could make smoking cessation more difficult. However, some people may consider ATPs such as e-cigarettes a lower-risk alternative to cigarettes, potentially reducing harm. Given the varied uses of ATPs, there is a need for studies that explore how ATP use intersects with cigarette smoking behaviors among individuals experiencing homelessness. Moreover, ATP use is high among persons with mental health and substance use disorders and may be used to alleviate mental health and/or substance use cravings or may be a marker of severity of illness. ATP users have reported severe and pervasive externalizing-outcome comorbidity compared to cigarette-only users. ATP use may also increase the risk of developing substance use disorders compared to cigarette-only or e-cigarette-only users.
Mental health disorders such as depression, anxiety, bipolar disorder, schizophrenia, and post-traumatic stress disorder are common among populations experiencing homelessness and have also been shown to be associated with tobacco use. Moreover, substance use disorders are highly prevalent among homeless adults. Given the high rates of mental health and substance use disorders among people experiencing homelessness, examining use patterns of ATPs in this sub-population of smokers could help with developing targeted interventions. In this study, we recruited a community-based sample of individuals experiencing homelessness who were current cigarette smokers to explore patterns of ATP use, including in-depth patterns of e-cigarette use, and their association with past-year quit attempts. In addition to providing a larger sample size than previous studies to explore these associations, this study is also the first to report on the use of flavored tobacco and absolute perceptions of harm and addiction among individuals experiencing homelessness. Homeless adults are motivated to quit cigarette smoking and may use ATPs as a cessation method; therefore, we hypothesized that ATP use would be associated with increased past-year quit attempts.

We conducted a cross-sectional study of individuals experiencing homelessness who were recruited from eight sites, including emergency shelters, navigation centers, day-time referral programs, and community centers serving homeless adults in San Francisco, California. These sites primarily offered emergency shelter or referral services for individuals experiencing homelessness; no study site offered an on-site smoking cessation program.
Individuals were eligible to participate if they were 18 years or older, had smoked at least 100 cigarettes in their lifetime, currently smoked cigarettes, were receiving services at the recruitment site, and were currently homeless as defined by the Homeless Emergency Assistance and Rapid Transition to Housing Act. We recruited participants between November 2017 and July 2018. We aimed to include participants who would express “typical” or “average” perspectives, and therefore recruited participants using typical case sampling.

Use of several CPUs allowed processing of multiple subjects’ scans to occur in parallel.
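Parallelizing per-subject reconstruction is straightforward when each scan is processed by an independent, long-running external command. The sketch below is a hypothetical illustration: `reconstruct` is a stub standing in for the real reconstruction call (in practice, a subprocess invocation), and the subject identifiers are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Invented subject identifiers; in practice these would be scan directories.
SUBJECTS = ["sub-01", "sub-02", "sub-03", "sub-04"]

def reconstruct(subject):
    """Stub for one subject's reconstruction.

    A real pipeline would launch the external reconstruction command
    here (e.g. via subprocess.run); because that work runs in a
    separate OS process, a thread pool is enough to keep several CPU
    cores busy at once.
    """
    return subject, "done"

# One worker per available CPU keeps all cores occupied.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(reconstruct, SUBJECTS))
```

With four workers and roughly 24 hours of compute per subject, a batch of four subjects finishes in about the time one subject would take serially.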

No differences were found between groups before initiation, suggesting alcohol use was related to aberrant cortical thinning, as opposed to cortical thickness being predictive of initiation of alcohol use. Furthermore, widespread cortical thinning and volume reduction have also been reported in alcohol-dependent adults in frontal, temporal, and occipital regions. The goals of this study were to use a set of novel analytic approaches to carefully examine within-subjects changes in morphometry and to quantify cortical volume changes over time in youth who remained non-drinkers compared to those who initiated heavy drinking. We hypothesized that adolescents who transitioned into moderate to heavy drinking would show smaller cortical volumes, similar to what has been seen in adolescent drinkers and adult alcoholics, but after a brief period of heavy alcohol exposure.

The sample was obtained from a larger ongoing neuroimaging study of 296 adolescents examining neurocognition in youth at risk for substance use disorders. Participants were recruited through flyers sent to households of students attending local middle schools, describing the study as a project looking at adolescent brain development in youth who do or do not use alcohol, and including major eligibility criteria, financial compensation, and contact information. Informed consent and assent were obtained and included approval for youth and parents to be contacted for follow-up interviews and scans. Eligibility criteria, substance use history, family history of substance use, and developmental and mental health functioning data were obtained from the youth, their biological parent, and one other parent or close relative.

The study protocol was executed in accordance with the standards approved by the University of California, San Diego Human Research Protections Program. Participants for this study each had one brain scan acquired before the adolescent had any significant alcohol or drug use, and one follow-up scan approximately 3 years later, after half had transitioned into heavy substance use, for a total of 80 scans. At baseline, inclusion criteria included being between the ages of 12 and 17 and having minimal to no experience with substances: ≤10 total drinks in their life, never more than 2 drinks in a week; ≤5 lifetime experiences with marijuana and none in the past three months; ≤5 lifetime cigarette uses; and no history of other intoxicant use. Youth with any indication of a history of a DSM-IV Axis I disorder, determined by the NIMH Diagnostic Interview Schedule for Children – version 4.0, were excluded, as were youth who had any indicator of prenatal substance exposure, any history of traumatic brain injury, loss of consciousness, learning disorder, migraine, neurological problem, or serious medical condition, or who were taking a medication that could alter brain functioning or brain blood flow. After screening, approximately 12% remained eligible. Participants in the larger study completed substance use interviews every 3 months, and those who started heavy drinking were selected for a comprehensive annual follow-up with neuroimaging and matched to a non-using control subject on baseline and follow-up age, pubertal development level, gender, race, family history of alcohol use disorders, and socioeconomic status. At follow-up, 20 were defined as heavy drinkers; 20 continuous non-drinkers were selected to match the characteristics of the heavy drinkers. Participants were assessed using rigorous follow-up procedures, with an overall follow-up rate of 99% through Year 6.
Specifically, every three months after the baseline interview and imaging were complete, participants were interviewed to assess current substance use and psychiatric functioning. Those who met criteria for heavy drinking were invited to return and complete annual full in-person assessments, including neuroimaging. Each participant who endorsed heavy drinking was matched to a demographically similar participant who continued to endorse no substance use throughout the follow-up for comparison. Moderate drinkers were excluded from analysis in this paper.

FreeSurfer 4.5.0 was used and required ~24 h of computational time for image reconstruction, using a dual quad-core Intel Xeon CPU E5420 with a processing speed of 2.50 GHz and 16 GB of RAM. Subtle longitudinal morphometric changes in brain structure were measured using a method developed at UCSD’s Multi-modal Imaging Laboratory called “quantitative anatomic regional change analysis,” or QUARC. In the QUARC procedure, each subject’s follow-up image is registered to the baseline image using a 12-parameter affine registration and then intensity-normalized to the baseline image by an iterative procedure. A deformation field is then calculated from the nonlinear registration and used to align the images at the sub-voxel level, resulting in a one-to-one correspondence between each vertex on the baseline and follow-up images. Subcortical segmentation and cortical parcellation labels from the baseline image were used to extract an average volume change for each region of interest. A visual quality control of the volume-change field was performed by a trained technician and supervised by an image analysis expert.

The goal of the present study was to use a recently developed longitudinal MRI paradigm to investigate brain volume differences pre- and post-substance use initiation, to disentangle normal adolescent cortical thinning from alcohol-related brain changes. Cortical pruning is a key component of adolescent neural development; however, the heavy drinking group showed exaggerated volume reductions in these areas when compared to controls, consistent with findings from adolescent and adult populations. Overall, adolescent drinkers showed greater volume reductions than demographically matched controls over the ~3-year follow-up period in the left ventral diencephalon, left inferior and middle temporal gyrus, left caudate, and brain stem.
These volumetric changes were positively correlated with lifetime alcohol use and peak number of drinks on an occasion in the past year, suggesting a dose-dependent effect of substance use on cortical thinning. These findings suggest a possible effect of alcohol on neural pruning that amplifies cortical volume reductions during adolescence. These results parallel previous longitudinal functional MRI findings showing increasing brain activation over time in adolescents who initiate heavy drinking. The observed alcohol-related cortical reductions may help explain why these youth required greater brain activation to perform at the same level as abstinent youth.
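The per-region volume change that QUARC extracts from a deformation field can be illustrated with a toy calculation: the local volume ratio at each voxel is the determinant of the Jacobian of the baseline-to-follow-up mapping, averaged within each labeled region. This is a minimal numpy sketch under that assumption, not the actual QUARC implementation; the uniform 10% expansion and the region label are invented.

```python
import numpy as np

# Toy deformation field on a small grid: a uniform 10% expansion
# along the x axis (invented for illustration).
shape = (8, 8, 8)
zz, yy, xx = np.meshgrid(*[np.arange(s, dtype=float) for s in shape],
                         indexing="ij")
warped = [zz, yy, 1.10 * xx]   # follow-up coordinates of each baseline voxel

# Jacobian of the mapping: d(warped_i)/d(axis_j) at every voxel.
J = np.stack([np.stack(np.gradient(c)) for c in warped])   # (3, 3, z, y, x)
J = np.moveaxis(J, [0, 1], [-2, -1])                       # (z, y, x, 3, 3)
volume_ratio = np.linalg.det(J)        # local volume change per voxel

# Average the change within a labeled region of interest.
labels = np.zeros(shape, dtype=int)
labels[:, :, :4] = 1                   # hypothetical ROI carrying label 1
roi_change = volume_ratio[labels == 1].mean()
```

For this linear expansion the determinant is 1.10 everywhere, i.e. a 10% volume gain; a real deformation field would yield spatially varying ratios, and values below 1 within a region would indicate the kind of volume reduction reported here.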

The regions showing alcohol-related volume reductions included subcortical structures, which are important for sensory integration, motor control, feedback processing, and habit learning, as well as inferior and middle temporal cortical structures important in visual object recognition and language comprehension. Previous findings suggest alcohol use interferes with language and visuospatial abilities during adolescence, which is consistent with the brain regions found in this study; continued volume reductions related to sustained drinking during adulthood might also relate to the motor issues and spatial impairments found in adult alcoholics. Volume reductions in the caudate parallel findings from adult alcoholics, while reduced medial temporal volumes parallel previous results seen in adolescent heavy drinkers. While the cause of the accelerated cortical thinning is unclear, alcohol-induced dysregulation of developmental timing may be responsible for the observed effects. NMDA receptor functioning could help explain accelerated thinning in heavy drinkers, as NMDA is vital for strengthening synapses and contributing to the loss of less important connections throughout development. Thus, it is possible that repeated alcohol exposure during adolescence may interfere with normal NMDA-mediated synaptic pruning. Baseline group differences were found in several frontal cortical volumes. Specifically, youth who initiated heavy drinking over the follow-up showed smaller cortical volume in three frontal regions, as well as less cerebellar white matter volume, when compared to youth who remained substance-naïve over the follow-up. At baseline, smaller right rostral anterior cingulate volume was related to poorer performance on a test of executive functioning. These findings suggest heavy drinking youth have subtle brain abnormalities that exist prior to the onset of drinking.
These findings are highly consistent with other recent functional MRI findings of pre-existing lower frontal brain activation in teens who later initiated heavy drinking when compared to continuous controls over a three-year follow-up. The current findings might help explain previous findings in which heavy-drinking transitioners showed less brain activation in frontal regions before they initiated alcohol use. Furthermore, the frontal regions found in this study are important for executive control, including inhibitory functioning, attention, impulsivity, and self-regulation. Poorer inhibitory functioning in substance-naïve youth has been found to be predictive of future substance use, and structural brain differences could help explain these behavioral findings. Limitations should be noted. Although overall the groups were very well matched, follow-up lifetime cannabis use days significantly differed between groups. Cannabis use was related to increasing volume over time, possibly countering the volume reductions related to alcohol use. There is research suggesting cannabis may act as a protective factor for white matter integrity in binge drinking; therefore, volume reductions may have been even more pronounced if we had had a completely non-cannabis-using comparison group. There are also statistical limitations to be considered in this preliminary study.
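One of these statistical considerations is correction for testing many regions at once. For reference, the two standard procedures discussed in this context, Bonferroni family-wise correction and Benjamini-Hochberg false discovery rate (FDR) control, can be sketched as follows; the p-values are hypothetical.

```python
import numpy as np

def bonferroni_reject(pvals, alpha=0.05):
    """Family-wise control: reject only where p < alpha / m."""
    p = np.asarray(pvals)
    return p < alpha / p.size

def benjamini_hochberg_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure for FDR control."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Rank-dependent thresholds: alpha * k / m for the k-th smallest p.
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject

# Hypothetical p-values from five regional tests.
pvals = [0.001, 0.008, 0.020, 0.041, 0.300]
```

Bonferroni controls the chance of any false positive and is the more conservative of the two; Benjamini-Hochberg controls the expected proportion of false positives among the rejections and therefore retains more findings at the same alpha, which is why results can fail both corrections yet still differ in how close they come to surviving.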

Findings did not survive Bonferroni or false discovery rate correction; however, the processing technique utilized is highly sensitive to morphometric brain changes, as each subject’s follow-up image was registered to the baseline image. Furthermore, a typical cubic centimeter of gray matter in an adult contains 35 to 70 million neurons and almost twice as many glial cells, as well as over 500 billion synapses, so even slight differences in cortical thickness could be associated with significant divergence from typical synaptic pruning and gray matter loss across adolescent development. Previous findings suggest that female heavy drinkers may be more vulnerable to aberrant cortical thinning than male drinkers. Unfortunately, our sample size did not provide sufficient power to detect gender effects. The parent study is ongoing and will offer larger sample sizes with more equal gender distributions, which will allow us to more fully address the moderating role of gender in the relationship between drinking and cortical thinning during adolescence. Additionally, the sample is comprised of healthy, high-functioning adolescents, so findings may not generalize to clinical or lower-functioning samples. The observed pattern of results may be more pronounced in those with higher levels of drinking. Despite these limitations, these findings have important clinical and public health implications, particularly given the participants’ limited, sub-diagnostic alcohol use, limited other substance use, and absence of psychopathology. Further work with larger populations is needed to increase statistical power to observe moderating effects of variables of interest and to help advance understanding of the relationship between alcohol exposure and brain morphometry, and subsequent cognitive functioning.

The prevalence of alcohol, tobacco, and other substance use is higher among gay, bisexual, and other men who have sex with men (GBM) than in the overall population.
Although Hughes and Eliason noted that substance and alcohol use have declined in lesbian, gay, bisexual, and transgender populations, the prevalence of heavy alcohol and substance use remains high among younger lesbians and gay men and, in some cases, older lesbians and gay men. Marginalization on the basis of sexual orientation increases the risk for problematic substance use. For example, GBM were approximately one and a half times more likely to have reported being diagnosed with a substance use disorder during their lifetime than heterosexual men, and one and a half times more likely to have been dependent on alcohol or other substances in the past year. GBM also have higher rates of mental health issues than their heterosexual counterparts. In a review of 10 studies, Meyer found that gay men were twice as likely to have experienced a mental disorder during their lives as heterosexual men. More specifically, gay men were approximately two and a half times more likely to have reported a mood disorder or an anxiety disorder than heterosexual men. A review by King and colleagues found that lesbian, gay, and bisexual individuals were more than twice as likely as heterosexuals to attempt suicide over their lifetime and one and a half times more likely to experience depression and anxiety disorders in the past year, as well as over their lifetime. Few Canadian studies have explored population-based estimates for mental health outcomes among GBM. In one cross-sectional study of Canadian gay/“homosexual” and bisexual men using 2003 Canadian Community Health Survey data, Brennan and colleagues found participants were nearly three times as likely to report a mood or anxiety disorder as heterosexual men. Pakula and Shoveller conducted a more recent cross-sectional analysis using 2007–2008 Canadian Community Health Survey data and found again that GBM were 3.5 times more likely to report a mood disorder compared with heterosexual males.

These domains and issues are particularly relevant for the SUD workforce as well.

The past two decades have seen significant advances in our understanding of the neuroscience of addiction and its implications for practice. However, despite such insights, there is a substantial lag in translating these findings into everyday practice, with few clinicians incorporating neuroscience-informed interventions in their routine practice. We recently launched the Neuroscience Interest Group within the International Society of Addiction Medicine (ISAM-NIG) to promote initiatives to bridge this gap between knowledge and practice. This article introduces the ISAM-NIG key priorities and strategies to achieve implementation of addiction neuroscience knowledge and tools in the assessment and treatment of substance use disorders (SUD). We cover four broad areas: cognitive assessment, neuroimaging, cognitive training and remediation, and neuromodulation. Cognitive assessment and neuroimaging provide multilevel biomarkers to be targeted with cognitive and neuromodulation interventions. Cognitive training/remediation and neuromodulation provide neuroscience-informed interventions to ameliorate neural, cognitive, and related behavioral alterations and potentially improve clinical outcomes in people with SUD. In the following sections, we review the current knowledge and challenges in each of these areas and provide ISAM-NIG recommendations to link knowledge and practice. Our goal is for researchers and clinicians to work collaboratively to address these challenges and recommendations. Cutting across the four areas, we focus on cognitive and neural systems that predict meaningful clinical outcomes for people with SUD and on opportunities for harmonized assessment and intervention protocols.

Neuropsychological studies consistently demonstrate that many people with SUD exhibit mild to moderately severe cognitive deficits in processing speed, selective and sustained attention, episodic memory, executive functions (EF), decision-making, and social cognition. Furthermore, neurobiologically informed theories and expert consensus have identified additional cognitive changes not typically assessed by traditional neuropsychological measures, namely negative affectivity and reward-related processes. Cognitive deficits in SUD have moderate longevity, and although there is abstinence-related recovery, these deficits may significantly complicate treatment efforts during the first 3 to 6 months after discontinuation of drug use. Thus, one of the most critical implications of cognitive deficits for SUD is their potential negative impact on treatment retention and adherence, in addition to clinical outcomes such as craving, relapse, and quality of life. A systematic review of prospective cognitive studies measuring treatment retention and relapse across different SUD suggested that measures of processing speed and accuracy during attention and reasoning tasks were the only consistent predictors of treatment retention, whereas tests of decision-making were the only consistent predictors of relapse. A later review that focused on substance-specific cognitive predictors of relapse found that long-term episodic memory and higher-order EF predicted alcohol relapse, attention and higher-order EF predicted stimulant relapse, and only higher-order EF predicted opioid relapse. Working memory and response inhibition have also been associated with increased risk of relapse among cannabis and stimulant users. Additionally, variation in response inhibition has been shown to predict poorer recovery of quality of life during SUD treatment.
Therefore, consistent evidence suggests that processing speed, attention, and reasoning are critical targets for current SUD treatments, whereas higher-order EF and decision-making are critical for maintaining abstinence. Response inhibition deficits seem to be specifically associated with relapse in cannabis and stimulant users and also predict quality of life.

The workforce in the SUD specialist treatment sector is diverse, encompassing medical specialists, allied health professionals, generalist health workers, and peer and volunteer workers.

For instance, in the Australian context, multiple workforce surveys over the past decade suggest that around half the workforce have attained a tertiary-level Bachelor degree or higher. Similarly, US and European data have shown that educational qualifications in the SUD workforce are lower than in other health services. Because the administration and interpretation of many cognitive tests are restricted to individuals with specialist qualifications, this limits their adoption in the sector. In addition, when screening does occur in SUD treatment settings, its primary function is to identify individuals requiring referral to specialist service providers for more comprehensive assessment and intervention, rather than to inform individual treatment plans. Two fields in particular have driven progress in cognitive assessment practice for generalist workers: dementia, with an increasing emphasis on screening in primary care, and schizophrenia, where cognitive impairment is an established predictor of functional outcome, necessitating the development of a standardized assessment battery specifically for this disorder. In the selection of domain-specific tests for the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) standard battery, a particular emphasis was placed on test practicality and tolerability, as well as psychometric quality. Pragmatic issues of administration time, scoring time and complexity, and test difficulty and unpleasantness for the client should be considered. The dementia screening literature has also emphasized these pragmatic issues, leading to greater awareness of and access to general cognitive screening tools. To date, the majority of the published literature on routine cognitive screening in SUD contexts has focused on three tests commonly used in dementia screening: the Mini-Mental State Examination (MMSE), Addenbrooke’s Cognitive Examination (ACE), and the Montreal Cognitive Assessment (MoCA).
Because these screening tools were developed for dementia contexts, they place a heavy emphasis on memory, attention, language, and visuospatial functioning. Multiple studies have demonstrated superior sensitivity of the MoCA and the ACE scales compared to the MMSE. It is possible that this arises from the MoCA and ACE including at least some items assessing EF, which are absent in the MMSE.

Indeed, this may demonstrate an important limitation of adopting existing screening tools designed for dementia in the context of SUD treatment. It can be argued that cognitive screening is most beneficial in SUD contexts when focused on SUD-relevant domains, rather than the identification of general cognitive deficits. Accordingly, current neuroscience-based frameworks emphasize the importance of assessing EF, incentive salience, and decision-making in SUD. As such, there is much to be gained by applying a process similar to the MATRICS effort in the SUD field to identify a ‘gold-standard’ set of practical and sensitive cognitive tests that can be routinely used in clinical practice.

The most commonly used cognitive assessment approach in SUD research has been the “flexible test battery”. This approach combines different types of tests to measure selected cognitive domains. Attention, memory, EF, and decision-making are the most commonly assessed domains, although there is considerable discrepancy in the tests selected to assess these constructs. Even within specific tests, different studies have used several different versions; for example, at least four different versions of the Stroop test have been employed in the SUD literature. Another commonly used approach is the “fixed test battery”, which involves a comprehensive suite of tests that have been jointly standardized and provide a general profile of cognitive impairment. The Cambridge Automated Neuropsychological Test Battery, the Repeatable Battery for the Assessment of Neuropsychological Status, the Neuropsychological Assessment Battery – Screening Module, and the MicroCog™ are examples of fixed test batteries utilized in SUD research, although these too have limited assessment of EF. Another limitation of these assessment modules is their lack of construct validity, as they were not originally designed to measure SUD-related cognitive deficits.
As a result, they overemphasize assessment of cognitive domains that are relatively irrelevant in the context of SUD and neglect other domains that are pivotal. A common limitation of flexible and fixed batteries is their reliance on face-to-face testing, normally involving a researcher or clinician, and their duration, which is typically around 60-90 min. To address this gap, a number of semi-automated tests of cognitive performance have been developed, including the Automated Neuropsychological Assessment Metrics, the Immediate Post-Concussion Assessment and Cognitive Testing battery, and the CogState brief battery. These have been used more widely, although validation studies to date suggest they may not yet have sufficient psychometric evidence to support clinical use. Research specifically in addictions has begun to develop and validate cognitive tests that can be delivered in clients’/participants’ homes or via smartphone devices. Evaluations of the reliability, validity, and feasibility of mobile cognitive assessment in individuals with SUD have been scarce, but promising. Cognitive assessment via smartphone applications and web-based computing is a rapidly developing field, following many of the procedures and traditions of Ecological Momentary Assessment (EMA). The flexibility and rapidity of assessment offered by mobile applications make them particularly suited to questions assessing change in cognitive performance over various time scales. For example, cognitive performance can be assessed in event-based, time-based, and randomly prompted procedures that were not previously feasible, or valid, in laboratory testing. While the benefits of mobile testing to longitudinal research, particularly large-scale clinical trials, appear obvious, the rapidity and frequency of deployment also provide opportunities to test questions of much shorter delays between drug use behavior and cognition.
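The random-prompting design mentioned above can be sketched as a simple scheduler. The following is a minimal, purely illustrative example; the waking-hours window, prompt count, and minimum-gap values are hypothetical and not drawn from any published EMA protocol:

```python
import random

def random_prompt_times(n_prompts, start_hour=9, end_hour=21,
                        min_gap_minutes=60, seed=None):
    """Draw n_prompts random assessment times (minutes since midnight)
    within a waking-hours window, enforcing a minimum gap between prompts.
    All parameter defaults are illustrative, not from any real protocol."""
    rng = random.Random(seed)
    window = range(start_hour * 60, end_hour * 60)
    while True:  # rejection-sample until the minimum gap is satisfied
        times = sorted(rng.sample(window, n_prompts))
        gaps = [b - a for a, b in zip(times, times[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return times

# Four random prompts between 09:00 and 21:00, at least an hour apart
times = random_prompt_times(4, seed=1)
print([f"{t // 60:02d}:{t % 60:02d}" for t in times])
```

An event-based variant would trigger the assessment from a participant-initiated report (e.g., a logged drinking episode) rather than from a pre-drawn schedule, and a time-based variant would use fixed clock times; the gap constraint matters mainly for the random design, where unconstrained draws can cluster prompts too closely to be tolerable.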

For example, recent studies have examined whether daily within-individual variability in cognitive performance, principally response inhibition, was associated with variable likelihood of binge alcohol consumption. Similarly, the immediate dynamic relationship between cognition and drug use has also been leveraged for intervention purposes. Web and smartphone platforms have been used to administer cognitive-task-based interventions, such as cognitive bias modification training, where cognitive performance is routinely measured as a central element of interventions that span several weeks. The outcomes of these trials show that mobile cognitive-task-based interventions are feasible but not efficacious in a stand-alone context. However, the combination of cognitive bias modification and normative feedback significantly reduces weekly alcohol consumption in excessive drinkers.

A substantial proportion of people with SUD have cognitive deficits. Alcohol, stimulant, and opioid users have overlapping deficits in EF and decision-making. Alcohol users have additional deficits in learning and memory and psychomotor speed. Heavy cannabis users have specific deficits in episodic memory and attention. Cognitive assessments of speed/attention, EF, and decision-making are meaningfully associated with addiction treatment outcomes such as treatment retention, relapse, and quality of life. In addition, there is growing evidence that motivational and affective domains are also implicated in SUD pathophysiology and clinical symptoms. For example, both reward expectancy and valuation and negative affect have been proposed to explain SUD chronicity. However, to date, there have been no studies linking these “novel domains” with clinical outcomes. Thus, it is important to explore the predictive validity of non-traditional cognitive-motivational and cognitive-affective domains in relation to treatment response.
While flexible and fixed test batteries are the most common assessment approaches, data comparability is alarmingly low, and future studies should aim to apply harmonized methods. Remote monitoring and mobile cognitive assessment remain in a nascent stage for SUD research and clinical care. It is too early to make accurate cost-benefit assessments of different mobile methodologies. Yet their potential to provide more cost-effective assessment with larger and more representative samples, and in greater proximity to drug use behavior, justifies continued investment in their development.

One of the main challenges for the cognitive assessment of people with SUD is the disparity of tests applied across sites and studies, and the lack of a common ontology and harmonized assessment approach. Furthermore, harmonization efforts must accommodate clinicians’ needs, including brevity, simplicity, and automated scoring and interpretation. Mobile cognitive testing is a highly promising approach, although its reliability and validity are influenced by a number of key factors. Test compliance, or lack thereof, seems to be problematic. A recent meta-analysis suggested that the compliance rate for EMA with SUD samples was below the recommended rate of 80%. Designs including participant-initiated, event-based assessments were associated with test compliance issues, whereas duration and frequency of assessment were not. While the latter finding suggests that extensive cognitive assessment may be feasible with mobile methods, caution is advised with regard to the scope and depth of the data that can be obtained with these brief assessments and the validity of the data sets collected. Remote methods for assessing confounds such as task distraction, malingering, and “cheating” are not well established or validated. As the capability of smartphones, for example, increases, so will the potential to minimize or control for such variables.
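The 80% compliance benchmark cited above is simply the share of delivered prompts that a participant completed. A minimal sketch, with participant IDs and prompt counts invented purely for illustration:

```python
def compliance_rate(completed, delivered):
    """Proportion of delivered EMA prompts the participant completed."""
    if delivered == 0:
        raise ValueError("no prompts delivered")
    return completed / delivered

# Hypothetical participants: (prompts completed, prompts delivered)
participants = {"p01": (50, 56), "p02": (38, 56), "p03": (45, 56)}

RECOMMENDED = 0.80  # benchmark cited in the EMA literature
for pid, (done, sent) in participants.items():
    rate = compliance_rate(done, sent)
    flag = "ok" if rate >= RECOMMENDED else "below benchmark"
    print(f"{pid}: {rate:.0%} ({flag})")
```

In practice the denominator itself requires care: event-based designs have no fixed number of "delivered" prompts, which is one reason participant-initiated assessments are harder to audit for compliance than scheduled ones.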
Face-recognition and fingerprint technologies have been proposed for ensuring identity compliance, although this presents ethical issues regarding confidential and de-identified data collection from samples that engage in illicit drug use.

E-cigarette use continues to grow among the U.S. adult population

Furthermore, studies using obese Zucker rats demonstrated that the CB1 inverse agonist, rimonabant, ameliorated proteinuria in an animal model of obesity-induced nephropathy. Treatment with rimonabant partially restored creatinine clearance, reduced glomerulosclerosis and tubular-interstitial fibrosis, and lowered tubular damage and renal hypertrophy. It should be noted, however, that these findings may have been mediated by effects of rimonabant unrelated to the EC system. While obesity in fa/fa Zucker rats is caused by a mutation of the leptin receptor, rimonabant acts to increase leptin uptake by the kidney, which has been shown to reduce proximal tubule metabolic activity. Therefore, improvement in renal function in these rats may have occurred through mechanisms related to leptin’s role in proximal tubule cell metabolism, as opposed to a direct action on the EC system. Using a novel mouse strain lacking CB1 receptors in renal proximal tubule cells, Udi et al. found that CB1 receptor deletion did not protect the mice from the deleterious metabolic effects associated with obesity, but significantly diminished obesity-induced lipid accumulation in the kidney. Furthermore, the stimulation of CB1 receptors in renal proximal tubule cells was found to be associated with decreased activation of liver kinase B1 and decreased activity of AMP-activated protein kinase, as well as reduced fatty acid beta-oxidation. These findings indicate a potential relationship between the renal proximal tubular epithelial cell CB1 receptor and the pathologic effects of obesity-induced renal lipotoxicity and nephropathy. In summary, the findings related to the CB1 receptor highlight its potential as a therapeutic target for obesity-induced renal disease. Further studies are needed to ascertain the efficacy of modulating CB1 in the kidney to improve renal dysfunction independent of its effects on weight.

The CB1 receptor has been shown to be upregulated in other renal disorders marked by interstitial inflammation and fibrosis, including acute interstitial nephritis. Using unilateral ureteral obstruction (UUO) as an experimental model for renal fibrosis in mice, Lecru et al. showed that CB1 receptor expression was upregulated in UUO animals compared with controls. Treatment of UUO mice with rimonabant reduced monocyte chemoattractant protein-1 synthesis and decreased macrophage infiltration. It was also shown that CB1 receptor activation led to enhanced VEGF levels, which subsequently reduced nephrin expression and protein levels. There is accumulating evidence indicating the important role of CB1 and CB2 receptors, and their modulation, in the pathogenesis of various forms of AKI. With regard to ischemic AKI, selective CB1 and CB2 receptor agonists were found to have a dose-dependent effect in preventing tubular damage following renal ischemia/reperfusion injury in mouse kidney. In a separate study, the administration of cannabidiol, a non-psychoactive constituent of cannabis with poorly defined pharmacological properties, led to a reduction in renal tubular injury in rats following bilateral renal ischemia/reperfusion. Cannabidiol significantly attenuated the elevation of serum creatinine and renal malondialdehyde and nitric oxide levels associated with this condition. In a more recent study, a triazolopyrimidine-derived CB2 receptor agonist was demonstrated to play a protective role in inflammatory renal injury following bilateral kidney ischemia/reperfusion. A series of studies have demonstrated the deleterious role of CB1 activation and the protective effects of CB2 activation in a nephrotoxic model of AKI, cisplatin-induced renal injury. Inhibiting the CB1 receptor or activating the CB2 receptor limited oxidative stress and inflammation and reduced tubular damage in kidneys of animals with cisplatin-induced AKI.
In addition, β-caryophyllene, a natural agonist of the CB2 receptor, dose-dependently protected against the deleterious effects of cisplatin-induced nephrotoxicity. Furthermore, CB1 and CB2 receptors have been shown to play a role in renal apoptotic and inflammatory signaling pathways. Indeed, activating CB1 receptors is known to result in enhanced expression of oxidative/nitrosative stress markers, which activate the p38 MAPK and c-Jun N-terminal kinase pathways, as well as nuclear factor kappa-light-chain-enhancer of activated B cells-dependent transcription of downstream proinflammatory target genes.

Ultimately, the activation of either route leads to apoptotic cell death and inflammation in the kidney. Conversely, CB2 receptor activation has been found to reduce proapoptotic signaling and mediate anti-inflammatory effects by attenuating immune cell infiltrates and inflammatory cytokine release. Mukhopadhyay et al. showed that a peripherally restricted CB2 receptor agonist, in a mouse model of cisplatin-induced nephrotoxicity, dose-dependently attenuated renal dysfunction as measured by serum concentrations of blood urea nitrogen and creatinine. The protective effects of CB2 receptor activation in these studies were absent in CB2 receptor knockout mice, suggesting that CB2 receptors are a promising therapeutic target for reducing renal inflammation, oxidative/nitrosative stress, and apoptosis. Another major contributor to AKI, which is associated with significant morbidity and mortality, is sepsis-associated kidney injury (SA-AKI). In a study using a cecal ligation and puncture (CLP) mouse model of sepsis, CB2 receptor knockout mice demonstrated increased mortality, lung injury, bacteremia, and neutrophil recruitment, and decreased p38 MAPK activity at the site of infection. Treatment with a selective CB2 receptor agonist reduced the effects caused by CLP, such as inflammation, lung damage, and neutrophil recruitment, and ultimately improved survival. These findings are in line with evidence demonstrating that, given CB2 localization to leukocytes, their activation mitigates leukocyte tumor necrosis factor-α-induced endothelial cell activation, adhesion and migration of leukocytes, and release of proinflammatory modulators. Therefore, CB2 receptor modulation may represent a novel therapeutic target in the treatment of SA-AKI.

The mechanisms by which the cannabinoid receptors modulate or recover tubular cell survival following acute damage are not well defined at this time.
However, molecular differences in cannabinoid receptor mRNA and protein levels, as well as differences in the physiological outcome of receptor activation, are likely related to the type of AKI and to the abundance and localization of receptors. While many of the studies evaluating the role of the EC system in renal homeostasis and pathophysiology focused on CB receptors and their modulation, it is important to keep in mind that the overall effects of activation and inhibition of the EC system depend on various factors, only a portion of which relate to the activity of CB receptors.

For example, the chief endogenous activators of the CB receptors, AEA and 2-AG, are present in substantial concentrations in the kidney; however, the physiological responses elicited by these ligands under normal or pathological conditions have not been fully elucidated. Furthermore, detailed studies on how elevated or decreased levels of these ligands may impact renal function and pathology are scarce. For example, it is well known that AEA plays a role in the modulation of renal hemodynamics. Infusion of this ligand was found to be associated with vasorelaxation of juxtamedullary afferent arterioles in vitro, increased renal blood flow in rodents, and alteration of tubular sodium transport. While these effects may be partly mediated through the activation of CB1 and CB2 receptors, it is important to highlight that these findings reflect the total effect of this ligand, and it is difficult to identify exactly which receptors are activated in each segment of the nephron. Furthermore, there are CB receptor-independent effects that are not accounted for when the role of these ligands is assessed only in the context of CB receptors. Recent studies have begun to address this important point by attempting to define the impact of these ligands in renal disease states. Biernacki et al. described alterations to the EC system in primary and secondary HTN, noting that these conditions resulted in renal oxidative stress through increased reactive oxygen species (ROS) and diminished levels of antioxidant enzymes. Despite the enhanced activity of FAAH and MGL in primary and secondary hypertensive rats, the levels of AEA and 2-AG in the kidney were significantly increased. Increasing endogenous levels of AEA by pharmacologically inhibiting its degradative enzyme, FAAH, with the selective FAAH inhibitor URB597 resulted in the inhibition of ROS generation in both types of hypertensive rats.
These effects were mediated through improvement in antioxidant defense in the primary spontaneously hypertensive rat (SHR) kidney via the Nrf2 pathway, as well as through reduced proinflammatory responses in secondary hypertensive rats. Furthermore, URB597 augmented ROS-dependent phospholipid peroxidation products and levels of ECs in both types of hypertensive kidneys, which resulted in enhanced CB receptor expression in SHR rats and enhanced expression of CB2 and TRPV1 receptors in DOCA-salt rats.

Chronic treatment of Wistar normotensive control rats with URB597 similarly enhanced phospholipid oxidation in the kidney, comparable to its administration in DOCA-salt rats. Thus, while the EC system appears to play a protective role in HTN, the administration of a FAAH inhibitor did not significantly alter the proinflammatory or oxidative conditions caused by primary HTN, and only created imbalances between ECs, oxidants, and proinflammatory factors in secondary HTN, potentially leading to the development of kidney dysfunction. With regard to other renal conditions, such as AKI, studies have shown varied responses of EC expression levels to kidney injury. Moradi et al. demonstrated that renal ischemia/reperfusion injury is associated with a significant increase in renal 2-AG content, using a bilateral ischemia/reperfusion mouse model of AKI. It was found that the augmentation of kidney 2-AG concentrations following MGL inhibitor administration resulted in improved serum BUN, creatinine, and tubular damage score; however, the mRNA gene expression of renal inflammation and oxidative stress markers was not altered. Conversely, in a cisplatin-induced nephrotoxic model of AKI, cisplatin enhanced AEA but not 2-AG levels in renal tissue. To date, the mechanisms and conditions under which CB receptors are activated by ECs in the kidney, and the signaling cascades that result from this activation, have not been fully described. Studies have demonstrated conflicting results describing the role of AEA and CB1 receptor activation in mediating glomerular podocyte injury. Jourdan et al. showed that chronic exposure of human cultured podocytes to high glucose resulted in a significant upregulation of CB1 receptor gene expression, which was also associated with an increase in cellular AEA and 2-AG.
This was associated with signs of inflammation and podocyte injury, manifesting as decreased podocin and nephrin and increased desmin gene expression. In contrast, Li et al. reported protective functions of AEA following L-homocysteine (Hcys)-induced podocyte injury. AEA blocked Hcys-induced NLRP3 inflammasome activation in cultured podocytes and ameliorated podocyte dysfunction, ultimately precluding glomerular damage. Therefore, while the former study demonstrated that an increase in CB1 receptor gene expression accompanied by an upregulation of AEA and 2-AG is associated with podocyte injury, the latter study suggests that AEA exerts protective and anti-inflammatory effects in podocytes. Future studies are needed to investigate the role of EC ligands in CB receptor activation under varied conditions in renal health and disease.

Among US adults in 2019, e-cigarettes were the most commonly used non-cigarette tobacco product, with use of e-cigarettes highest among adults aged 18–24 years. The convenience of newer pod-like devices, the use of nicotine salts to provide higher doses of nicotine with less throat irritation, and the marketing of e-cigarettes as smoking cessation aids all contribute to lower perceived harm and may account for recent increases in use among adults. Over a third of adult e-cigarette users also self-identify as current users of tobacco cigarettes, and the most common reasons given for e-cigarette use are cessation of tobacco cigarettes or health-related concerns. While smokers may use e-cigarettes to reduce their exposure to the toxic chemicals found in tobacco cigarettes, there is insufficient evidence to conclude that e-cigarettes are effective smoking cessation aids. Little is known about e-cigarette use among vulnerable populations such as those with SUDs. High rates of current cigarette smoking have been reported among SUD treatment clients, with lower rates of past 30-day e-cigarette use.
Similar to the general population, the majority of current e-cigarette users receiving care in SUD treatment programs are also current users of tobacco cigarettes, and many report that they use e-cigarettes to either quit or reduce cigarette smoking. These findings highlight the importance of considering cigarette smoking status to understand e-cigarette use among clients in SUD treatment, and to inform the development of anti-vaping messages that support smoking cessation in this population. Although the studies by Gubner et al. and Baldassarri et al. examined the correlates of current e-cigarette use among patients in SUD treatment, these studies did not focus on beliefs and attitudes regarding the use of e-cigarettes as a smoking cessation aid.

Alcohol and water were delivered through Teflon tubing using a computer-controlled delivery system

Human studies examining CNS sequelae of chronic marijuana use provide evidence for increased metabolism and activation of alternate neural pathways within these regions. Further adverse effects may result from the pharmacological interaction of alcohol and marijuana, where THC has been reported to markedly enhance the apoptotic properties of ethanol. In infant rats, administration of THC alone did not result in neurodegeneration; however, the combination of THC and a mildly intoxicating dose of ethanol induced significant apoptotic neuronal cell death, similar to that observed at high doses of ethanol alone. In sum, studies of adolescent alcohol and marijuana use indicate weaknesses in neuropsychological functioning in the areas of attention, speeded information processing, spatial skills, learning and memory, and complex behaviors such as planning and problem solving, even after 28 days of sustained abstinence. There are also associated changes in brain structure and function that include altered prefrontal, cerebellar, and hippocampal volumes, reduced white matter microstructural integrity, and atypical brain activation patterns. There may be potential reversibility of brain structural changes with long-term abstinence, though additional studies are needed to understand the extent to which abnormalities persist or remit with time. Further, the potential interactions of alcohol and marijuana are of concern considering that comorbid use is common.

It is postulated that there is an asynchronous development of reward and control systems that enhances adolescents’ responsivity to incentives and risky behaviors. Bottom-up limbic systems involved in emotional and incentive processing purportedly develop earlier than top-down prefrontal systems involved in behavioral control. In situations with high emotional salience, the more mature limbic regions will override prefrontal regions, resulting in poor decisions.

This developmental imbalance is unique to adolescents, as children have equally immature limbic and prefrontal regions, while adults benefit from fully developed systems. Within this model, the risky behavior of adolescents is understood in light of limbic-system-driven choices to seek immediate gratification rather than long-term gains. Moreover, this relationship may be more pronounced in adolescents with increased emotional reactivity. Behavioral and fMRI studies show increased subcortical activation when making risky choices and less activation of prefrontal cortex, as well as immature connectivity between emotion processing and control systems overall. A more specific characterization of these patterns, using comparisons of low- and high-risk gambles, indicated that high-risk choices activate reward-related ventral striatum and medial prefrontal cortex, whereas low-risk choices activate control-related dorsolateral prefrontal cortex. Interestingly, activation of the ventral medial prefrontal cortex was positively associated with risk-taking propensity, whereas activation of the dorsal medial prefrontal cortex was negatively associated with risk-taking propensity, suggesting that distinct neural profiles may contribute to the inhibition or facilitation of risky behaviors.

Development of effective treatments for alcohol use disorder (AUD) remains a high-priority area, which involves screening compounds in the laboratory before proceeding to clinical trials. Within this process, there is a need to develop and understand relationships among human laboratory paradigms to assess the potential efficacy of novel AUD treatments in early-stage clinical trials.
To date, reviews of the human laboratory literature in AUD pharmacotherapy development indicate significant outcome variability based on experimental paradigm parameters, population of interest, and sample size, and suggest that these myriad variables contribute to the disconnect between laboratory effect sizes and treatment outcomes. Amidst the efforts to develop translational experimental paradigms, neuroimaging tasks are increasingly used to explore potential pharmacotherapy effects on neural correlates of alcohol-induced craving. Alcohol consumption produces neuroadaptations in multiple circuits, including GABA-ergic regulation of traditional reward circuitry; alcohol craving is mediated by cortico-striatal-limbic activation, heightens relapse risk, and can be triggered by internal and external stimuli associated with alcohol consumption.

For this reason, neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), have been used to explore these circuits as potential medication targets. Recent qualitative reviews and meta-analyses suggested that while such fMRI tasks vary in sensory experiences and scan parameters, mesocorticolimbic areas consistently exhibit task-based neural activity and may be viable tools for understanding mechanisms of AUD pharmacotherapy. Based on this emerging literature, there is growing evidence that neural responses to alcohol cues and associated contexts are predictive of real-world consumption behavior and, potentially, clinical outcomes. For instance, among college students, alcohol cue-elicited blood oxygen level-dependent (BOLD) response in caudate, frontal cortex, and left insula predicted escalation to heavy drinking over a 1-year period. Further, insula and frontal gyrus activation in response to an emotion face recognition task similarly predicted alcohol-related problems five years later in young adults. Regarding treatment outcomes, increased ventral striatum activation in response to alcohol cues was associated with a faster time to relapse in a sample of abstinent AUD individuals. Comparisons of AUD treatment completers and non-completers in a community sample indicated that non-completers showed stronger associations between reported alcohol craving intensity and resting-state functional connectivity between striatum and insula, relative to completers. Of note, one study reported contradictory results: relapsers, compared to successful alcohol abstainers and healthy controls, exhibited reduced alcohol cue-elicited activation in ventral striatum and midbrain. Several studies have examined whether AUD pharmacotherapies alter neural responses to contexts that elicit alcohol craving, including alcohol cues, exposure to reward and emotional faces, and stress exposure.
While significant variability exists in sample populations, examined tasks, modified areas of activation, and molecular targets of treatments, there is some consistent evidence that AUD pharmacotherapies may reduce reward-related activation in regions such as the ventral striatum, precuneus, and anterior cingulate. Importantly, in one study of naltrexone, the magnitude of reduction in alcohol cue-induced ventral striatum activation was associated with fewer instances of subsequent heavy drinking. In support, Mann and colleagues found that individuals with high ventral striatum (VS) cue reactivity demonstrate lower relapse rates when treated with naltrexone than those with low VS reactivity. Bach and colleagues also identified that individuals with high alcohol cue-reactivity in the left putamen exhibit a longer time to relapse when treated with naltrexone, compared to those with low reactivity.

Together, these studies underscore reward circuitry as a key area in the translation of neural responses to clinical outcomes in AUD medication development. Alcohol self-administration tasks in the laboratory are thought to capture alcohol use behavior in controlled settings that approximate consumption in real-world settings. Studies have tested multiple variants of self-administration paradigms, including tasks that require participants to orally consume alcohol at the cost of monetary rewards per drink, and intravenous methods that can closely control breath alcohol concentration (BrAC) levels, e.g., computer-assisted self-infusion of ethanol. Studies have used self-administration methods to test genetic, physiological, and psychological risk factors for heavy drinking. While both fMRI cue-reactivity tasks and alcohol self-administration tasks are widely used in alcohol research, the extent to which cue-reactivity predicts self-administration in the laboratory remains unknown. In light of the emerging role of functional neuroimaging in predicting drinking behavior and AUD treatment outcomes, a remaining question is the nature of the relationship between neuroimaging task-induced neural activation and widely utilized laboratory paradigms considered proximal to real-world consumption, including self-administration tasks. To date, several studies have examined relationships of response across different laboratory paradigms and have consistently identified that alcohol craving during intravenous alcohol administration mediates the relationship between alcohol-induced stimulatory effects and subsequent oral alcohol consumption. While relationships across human laboratory paradigms have recently been delineated, no studies have yet investigated whether alcohol cue-induced BOLD response is predictive of responses within laboratory self-administration paradigms.
To address this gap in the literature and to further integrate neuroimaging and human laboratory paradigms for AUD, the current study examines whether alcohol taste cue-induced ventral striatum activation predicts subsequent oral alcohol self-administration in the laboratory. These secondary analyses are conducted in a within-subjects design whereby the same participants completed an fMRI cue-reactivity task followed by an alcohol self-administration task. As striatal activation is thought to underlie craving responses, we hypothesized that those with greater ventral striatum activation would consume their first drink faster than those with lower activation. Similarly, as previous research has demonstrated that mesolimbic activity predicts real-world heavy drinking, we hypothesized that ventral striatum activation would also be positively associated with the total number of drinks consumed during the self-administration paradigm. Participants for this secondary analysis of an experimental laboratory study on naltrexone were required to have a score of 8 or higher on the Alcohol Use Disorders Identification Test (AUDIT) and to self-identify as being of East Asian ethnicity. Exclusion criteria included lifetime non-alcohol substance use disorder; clinically significant levels of alcohol withdrawal (indicated by a score of 10 or higher on the Clinical Institute Withdrawal Assessment-Revised; CIWA-AR); and, for women, pregnancy.

Interested individuals completed an in-person laboratory screening visit to learn about the study, provide written informed consent, and be assessed for inclusion and exclusion criteria. Of note, this study collected information on genotypes encoding endogenous opioid receptors thought to mediate the stimulating effects of alcohol, as well as those associated with metabolism of alcohol. Participants provided a saliva sample for DNA analyses and completed a medical screening that included a physical examination. Detailed information on recruitment procedures is available in the primary manuscripts on which the current study is based. A study procedure flowchart can be seen in Figure 1. Study procedures followed a double-blind, randomized, placebo-controlled, and counterbalanced design. Within each medication condition, participants were titrated to the medication for 5 days. Participants completed an fMRI scan on day 4 and an alcohol self-administration session on day 5 of the medication regimen. At the start of each experimental session, participants completed a urine toxicology screening; all participants tested negative for exclusionary substances during these screening periods. There was a minimum wash-out period between medication conditions of 7 days, with a range of 7-10 days. Regarding medication adherence, naltrexone and placebo capsules were packaged with 50 mg of riboflavin. A visual inspection of riboflavin content under ultraviolet light indicated that all urine samples tested positive for riboflavin content. At the start of the scanning session, participants were required to have a BrAC of 0.00 g/dL, a negative urine toxicology screen for all substances except cannabis, and a negative pregnancy screen. Participants who smoked cigarettes were allowed to smoke 30 minutes prior to the scan to prevent acute nicotine withdrawal and craving. Within each task trial, participants initially viewed a visual cue for 2 seconds, followed by a fixation cross.
The word “Taste” then appeared, corresponding to oral delivery of the indicated liquid at the start of the trial. Participants were also instructed to press a button on a button box to indicate the point at which the bolus of liquid was swallowed; this information was used to model motion associated with swallowing. There were two runs of this task, with 50 trials per run. Red or white wine, based on participant preference, was used as the alcohol stimulus; previous work from our group has demonstrated that this paradigm effectively elicits alcohol-related neural activation. Carbonated alcohol, such as beer, could not be systematically administered with the paradigm apparatus and was not offered as a drink option to participants. Visual stimuli and response collection were programmed using MATLAB and Psychtoolbox, and visual stimuli were presented using MRI-compatible goggles. Participants completed an oral alcohol self-administration paradigm on day 5 of medication titration. At the start of this session, participants were required to test negative for substance use and to have a BrAC of 0.00 g/dL. Female participants were also required to test negative on a pregnancy test. Participants fasted for two hours prior to the session and were given a standardized meal before the alcohol administration. Participants initially completed an intravenous alcohol administration discussed in the primary manuscript. After completing the alcohol infusion paradigm and reaching a target BrAC of 0.06 g/dL, the IV was removed and, after a standardized period of five minutes, participants subsequently began an oral self-administration session at the testing center.
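The trial structure of the taste cue-reactivity task described above (a 2-second visual cue, a fixation period, then liquid delivery, over two runs of 50 trials) can be sketched as a simple schedule generator. This is a minimal illustrative sketch: the fixation jitter range, the taste-delivery duration, and the alcohol/control cue proportion are assumptions, not values from the study.

```python
import random

# Illustrative sketch of a taste cue-reactivity trial schedule.
# The 2 s cue and the 2-run x 50-trial structure come from the text;
# the jitter window, taste duration, and cue mix are assumed.
CUE_S = 2.0
FIX_JITTER_S = (2.0, 6.0)   # assumed jittered fixation window (seconds)
TASTE_S = 4.0               # assumed liquid-delivery window (seconds)

def build_run(n_trials=50, p_alcohol=0.5, seed=0):
    rng = random.Random(seed)
    schedule, t = [], 0.0
    for i in range(n_trials):
        cue = "alcohol" if rng.random() < p_alcohol else "control"
        fix = rng.uniform(*FIX_JITTER_S)
        schedule.append({"trial": i, "cue": cue, "onset": t,
                         "cue_dur": CUE_S, "fix_dur": fix, "taste_dur": TASTE_S})
        t += CUE_S + fix + TASTE_S
    return schedule

runs = [build_run(seed=r) for r in range(2)]  # two runs, 50 trials each
```

In an actual experiment, a schedule like this would be handed to the stimulus-presentation software (MATLAB/Psychtoolbox in the study) along with the swallow button-press events used to model swallowing-related motion.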
Notably, the alcohol dose of 0.06 g/dL prior to the self-administration period was higher than the typical 0.03 g/dL priming dose implemented in self-administration tasks. During the self-administration period, participants were provided 4 mini-drinks of their preferred alcoholic beverage and were allowed to watch a movie over a 1-hour period.

Marijuana use is also associated with atypical neural profiles

Once respondents had been classified into groups based on the trajectory analysis, we reviewed the characteristics of the respondents assigned to each trajectory to identify between-trajectory differences; dichotomous variables are reported as percentages, and ordinal variables are reported with means and confidence intervals. Participants identifying as male were more likely to have established smoking habits at a younger age: while less than half of never smokers, experimenters, and late escalators were male, 59% of quitters and 72% of early established smokers were male. While 31% of never smokers reported ever drinking alcohol at baseline, 39% of late escalators, 48% of experimenters, 59% of quitters, and 66% of early established smokers reported ever drinking alcohol. Similarly, 8% of never smokers reported cannabis use at baseline, relative to 25% of late escalators, 20% of experimenters, 33% of quitters, and 42% of early established smokers. For the depression, peer smoking, and rule-breaking scales, the differences between never smokers and early established smokers ranged between 0.10 and 1.85 on a 5-point scale. We compared the means and confidence intervals for all variables in the entire NLSY cohort and in the subset included in the trajectories analysis to assess potential bias in the sample due to missing data.
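The per-trajectory summaries described above (percentages for dichotomous variables, means with 95% confidence intervals for scale variables) can be sketched as follows. The variable names and the tiny example records are hypothetical, and the normal-approximation CI is one reasonable choice, not necessarily the one used in the study.

```python
from math import sqrt
from statistics import mean, stdev

# Summaries per trajectory group: percentage for 0/1 variables,
# mean with a normal-approximation 95% CI for scale variables.
def pct(values):
    return 100.0 * sum(values) / len(values)

def mean_ci(values, z=1.96):
    m = mean(values)
    se = stdev(values) / sqrt(len(values))
    return m, (m - z * se, m + z * se)

# Hypothetical records; real data would have one row per respondent.
records = [
    {"trajectory": "never", "male": 0, "depression": 1.2},
    {"trajectory": "never", "male": 1, "depression": 1.6},
    {"trajectory": "early", "male": 1, "depression": 2.9},
    {"trajectory": "early", "male": 1, "depression": 3.3},
]

for traj in ("never", "early"):
    grp = [r for r in records if r["trajectory"] == traj]
    print(traj, f"{pct([r['male'] for r in grp]):.0f}% male",
          mean_ci([r["depression"] for r in grp]))
```

The same pattern, applied to the full cohort versus the analytic subset, supports the missing-data bias check mentioned at the end of the paragraph.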
We found that, among the sociodemographic indicators, the subset of observations included in the trajectories analysis had a larger share of respondents identifying as non-Hispanic White and as being both enrolled in school and employed, while a smaller share identified as Hispanic, reported not living with both biological parents, or had a mother with less than a GED/high school diploma. Overall, as anticipated, we identified significant associations between smoking trajectories, tobacco control policy interventions, and known risk factors for progression to established smoking. Our findings were consistent with and expand on results from prior research by adding the time-varying effects of two important tobacco policy interventions, smoke-free laws and taxes. In addition, our results demonstrate the effects of sociodemographic variables on patterns of smoking. Our results suggest that policy has different influences on the patterns of smoking behavior of different types of smokers. Our analysis also demonstrated stronger effects of smoke-free laws than of tobacco taxes on frequency of smoking.

Our findings with respect to risk factors for smoking frequency were generally consistent with previous research, which suggested that white men were more likely to be daily smokers; smoking is associated with alcohol and drug use, peer smoking, and a history of rule breaking; established smoking is associated with lower socioeconomic status; and depression and anxiety are associated with smoking. A previous trajectory analysis using NLSY97 data that did not include time-varying covariates and relied on latent class growth analysis identified roughly comparable shares of experimenters and quitters, smaller shares of never smokers, and larger shares of late escalators and early established smokers. With respect to risk factors, confidence intervals in this updated analysis were narrower, identifying significant associations for additional variables in one or more trajectories, including being male, being employed and not in school, ever using cannabis, age, depression/anxiety, peer smoking, rule breaking, and having at least one child. In this analysis, non-Hispanic Black participants were also more likely to be experimenters, and Hispanic participants were less likely to be early established smokers. Our analysis expands upon the existing literature on tobacco control policies and smoking behavior, which focuses on measures of smoking behavior at specific time points, such as initiation, smoking status, and cessation. This is the first analysis to examine the relationship between tobacco control policies and patterns of smoking behavior over time. Our finding that policies have differential effects on smoking trajectories establishes that smokers are heterogeneous: not all smokers follow the same progression to established smoking. It may be necessary to tailor cessation interventions to different types of smokers to increase the efficacy of these approaches.
The results also support the importance of tobacco control policy interventions in modifying smoking behavior across all trajectories of use. Comprehensive smoke-free laws were associated with decreased risk of initiation, decreased use, and reduced likelihood of return to use across four out of five trajectories. The effects were greatest for never smokers and quitters, while still evident among established smokers, whether they began smoking as adolescents or as young adults. The only trajectory whose tobacco exposure was not reduced by coverage under comprehensive smoke-free laws was experimenters. Our finding that smoke-free laws were not associated with patterns of use among experimenters is consistent with previous literature that established varying effects of smoke-free laws across different patterns of smoking behavior. Siegel et al. found that strong smoke-free restaurant laws were associated with lower odds of transitioning from experimentation to established smoking, but not of transitioning from nonsmoking to experimentation. Song et al. found that smoke-free laws had different relationships with smoking initiation, smoking status, and days smoked.

Specifically, Song et al. found that smoke-free bar laws were associated with lower odds of being a current smoker and fewer days of smoking, but not with lower odds of smoking initiation. Our findings are also consistent with the intention of smoke-free laws: not to prevent experimentation, but to prevent progression from experimentation to established smoking, in addition to protecting nonsmokers from secondhand smoke exposure. The knowledge that experimenters are more likely to have counterintuitive responses to smoke-free laws has the potential to influence tobacco cessation efforts. When a state or locality improves its smoke-free law coverage, it may wish to supplement these changes with smoking prevention and cessation efforts targeting experimenters to ensure that no group fails to benefit from these policy improvements. The analysis that this one builds upon revealed that, compared to never smokers, experimenters were more likely to be neither in school nor working. This finding suggests that school-based tobacco control efforts are less likely to be effective for experimenters than for some other types of smokers. Tobacco control programs targeting these youth should be tailored to their use patterns by promoting complete cessation and elimination of occasional or social smoking. These efforts should be placed in locations likely to be frequented by youth who are neither in school nor employed, such as community centers and athletic courts. While increased tax rates were associated with reduced risk of initiation among never smokers, reduced days of smoking among experimenters, and reduced likelihood of return to use among quitters, they were associated with increased days of smoking among early established users and late escalators. The finding that established users increase smoking after tax increases is contrary to the intended effect of tobacco tax increases.
In general, because cigarettes are addictive, the relationship between changes in the price of cigarettes and their consumption tends to differ from that of many other goods. In addition, previous research suggests that, when tax increases occur, smokers increasingly engage in price-minimization strategies such as coupons, bulk purchasing, and switching to discount brands to maintain their prior levels of use. Tobacco manufacturers also use price promotions to reduce the post-tax consumer prices of cigarettes to levels below the pre-tax prices.

Because these changes result in smokers purchasing cigarettes in larger quantities, they also have the potential to result in increased availability and, therefore, increased use. In addition, the use of price-minimization strategies and coverage by tobacco-free policies tend to vary by socioeconomic status, and we found differences in some indicators of SES across classes. Policy interventions such as tobacco minimum floor prices or sudden, large tax increases might circumvent the price-minimization strategies likely to be used by established users and late escalators. Future research could consider the effectiveness of these policies by considering changes in smoking trajectories in years beyond 2011, after the introduction of substantial state-level annual tax increases and local tobacco minimum floor prices. In addition, to ensure that all youth benefit from tax increases, states and localities planning tax increases could supplement these increases with cessation methods targeting early established smokers and late escalators. These were the only two trajectories that were significantly more likely to report having frequently broken rules in school compared to never smokers in a previous analysis. School-based efforts and tobacco educational campaigns targeting youth who self-report higher rates of rule-breaking would be most likely to be effective for these types of smokers. In addition, because late escalators do not become established smokers until late adolescence or early adulthood, these programs should extend their reach beyond youth to include young adults by utilizing not only school-based, but also community- and higher-education-based smoking prevention and cessation approaches. Our study has limitations. Our analysis considered annual changes and does not consider policy changes after 2011, when NLSY97 data collection became biennial, because the analytic method could not support missing years of data.
Because we did not analyze NLSY97 data beyond 2011, we were unable to assess potential interactions between combustible cigarette use and new products such as e-cigarettes, and possible complementary use of other substances such as cannabis, which has been increasingly legalized for medical and recreational use. Research using data from the CDC National Youth Tobacco Survey showed that the advent of e-cigarettes had not affected the rate of decline in youth cigarette use from 2004 through 2014, but that e-cigarettes were adding to nicotine product use. The market for new tobacco products has continued to change, and caution is warranted in attempting to apply these findings to the current market. Our analysis did not include data on Tobacco 21 laws due to similar limitations. Although biennial NLSY datasets were available through 2018 at the time of writing, the only strong state Tobacco 21 law in effect before 2019 was California's, and organizations that code the strength of Tobacco 21 laws were unable to supply data on local Tobacco 21 laws for any time period. In addition, the switch to a biennial analysis would increase the share of missing data. Our analysis did not include data on state tobacco control funding for at least two reasons: first, there are multiple differences between states relating to population and to the focus and quality of programs that make it unclear how to normalize such a measure; second, there is likely collinearity between program funding and enactment of smoke-free laws and tax increases, given that stimulating such policy change is often among the goals of state tobacco control programs. We used only a subset of the entire sample, due in part to missing geographic identifiers in the underlying data and in part to incomplete risk factor data; it is unclear whether or how observations excluded due to missing geocodes or incomplete reporting might affect estimates.
We relied on listwise deletion as a strategy to handle missing data, given that this method is linked to loss of statistical power rather than to biased estimates. The fact that we identified statistically significant determinants of the trajectories suggests that this loss in power did not compromise the overall analysis. Another consequence of missing data is that we were unable to include a variable indicating ever use of cocaine/hard drugs, which dropped out of the analysis entirely. Another limitation of the analysis is the composition of the sample. A large proportion of the sample was non-Hispanic white and both enrolled in school and employed; a small proportion was Hispanic, not living with both biological parents, or had a mother with less than a GED/high school diploma. As a result, the analysis may not have identified some associations among respondents with underrepresented characteristics. Future research should consider questions left unanswered by this analysis, including further analysis of the identified increase in smoking days among experimenters under comprehensive smoke-free laws, and among early established smokers and late escalators under higher excise taxes. Fragile X syndrome is an X-linked dominant disorder caused by the expansion of a (CGG) trinucleotide repeat within the first exon of the fragile X mental retardation 1 gene, which silences the expression of the fragile X mental retardation protein (FMRP). The absence of FMRP, an important regulator of translation of many messenger RNAs involved in synaptic plasticity, leads to substantial intellectual deficits.
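Listwise (complete-case) deletion, the missing-data strategy named above, simply drops any record with a missing value on any analysis variable, trading statistical power for unbiasedness under the stated assumption. A minimal sketch, with hypothetical field names:

```python
# Listwise deletion: keep only records that are complete on every
# analysis variable. Field names and example rows are hypothetical.
def listwise_delete(records, variables):
    return [r for r in records
            if all(r.get(v) is not None for v in variables)]

rows = [
    {"id": 1, "smoked": 1, "alcohol": 0, "geo": "A"},
    {"id": 2, "smoked": 0, "alcohol": None, "geo": "B"},  # dropped: missing alcohol
    {"id": 3, "smoked": 1, "alcohol": 1, "geo": None},    # dropped: missing geocode
]
complete = listwise_delete(rows, ["smoked", "alcohol", "geo"])
```

This also illustrates how a variable with pervasive missingness (like the cocaine/hard-drugs item mentioned above) can empty the analytic sample unless it is excluded from the variable list.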

Participants are compensated for their time with cash and/or gift cards

They further suggest that attentional bias to threat may mediate the association between CB1 receptor availability in the amygdala and threat symptomatology, with greater CB1 receptor availability being linked to greater attentional bias to threat that is in turn linked to greater severity of threat symptomatology. Results of the current study build on extant neurobiological studies that have implicated the endocannabinoid system in the amygdala as an important modulator of anxiety, as well as functional activation of the amygdala in mediating attentional bias to threat among individuals with PTSD. Specifically, results of this study suggest that CB1 receptor availability in the amygdala may directly mediate this endophenotype and its associated phenotypic expression of trauma-related threat symptomatology. Preclinical work suggests that the activation of membrane glucocorticoid receptors engages a G-protein-mediated cascade through the activation of Gs proteins that, in turn, increases the activity of cAMP and protein kinase A. This increase in protein kinase A appears to induce the rapid synthesis of an endocannabinoid signal through an as yet unknown mechanism, possibly an increase in intracellular calcium signaling; this endocannabinoid is then released from principal neurons in the amygdala and activates CB1 receptors localized on the terminals of GABAergic neurons in the amygdala. It should be noted, however, that mechanisms other than CB1 receptor stimulation by anandamide could contribute to the etiology of attentional bias to threat and threat symptomatology. First, the two endocannabinoids, anandamide and 2-arachidonoylglycerol, have differential roles in endocannabinoid signaling and distinctly different metabolic pathways (fatty acid amide hydrolase for anandamide and monoacylglycerol lipase for 2-arachidonoylglycerol). To date, the relative contribution of these two endocannabinoids and their pathways to the modulation of anxiety remains unclear.
Furthermore, recent evidence suggests that CB1 receptor signaling varies across brain regions, and that diverse effects of anandamide–CB1 receptor signaling mechanisms are evident even within the extended amygdala. Finally, the actions of anandamide are not restricted to CB1 receptors, as endocannabinoids also act on CB2 receptors, GPR55, transient receptor potential vanilloid type 1 channels, and other G-protein subtypes.

Although additional research is needed to further evaluate how the endocannabinoid system mediates attentional bias to threat, the results of this study suggest that greater CB1 receptor availability in the amygdala, as well as lower levels of peripheral anandamide, are associated with a greater attentional bias to threat in trauma-exposed individuals. However, we acknowledge that no human studies that we are aware of have found that anandamide concentrations directly influence CB1 receptor availability, and hence additional work is needed to ascertain how these variables are causally related. Nevertheless, the present data extend prior work linking attentional bias to threat to hyperarousal symptoms by suggesting that the CB1 receptor system in the amygdala is implicated in modulating attentional bias to threat, which is in turn linked to the transdiagnostic and dimensional phenotypic expression of trauma-related threat symptomatology. Further research will be useful in elucidating molecular mechanisms that account for the observed association between CB1 receptor availability and the endophenotypic and phenotypic expression of threat processing in humans. An important question to be addressed in future work is whether pharmacotherapies that act on catabolic enzymes for endocannabinoids may be useful in the prevention and treatment of endophenotypic and phenotypic aspects of trauma-related threat symptomatology. Emerging evidence supports the potential utility of such targets, suggesting that variation in the FAAH gene is linked to reduced expression of FAAH that consequently results in elevations in circulating levels of anandamide, as well as decreased amygdala response to threat and more rapid habituation of the amygdala to repeated threat.
Notably, elevating anandamide levels via FAAH inhibition appears to produce a more circumscribed spectrum of behavioral effects than blocking MAGL, which could result in a more beneficial side-effect profile, as anandamide is less prone to CB1 receptor desensitization and resultant behavioral tolerance. These classes of compounds are currently being investigated for their potential efficacy in treating mood and anxiety disorders. Given that core aspects of threat symptomatology such as hyperarousal are key drivers of more disabling aspects of the trauma-related phenotype such as emotional numbing, pharmacotherapeutic targeting of threat symptomatology in symptomatic trauma survivors may have utility in reducing the chronicity and morbidity of trauma-related psychiatric disorders such as PTSD, MDD, and GAD. Methodological limitations of this study must be noted. First, we studied a cohort of individuals with heterogeneous trauma histories.

Although this is typical of most PTSD studies and we endeavored to recruit individuals who represented a broad and representative spectrum of trauma-related psychopathology, additional studies of samples with noncivilian trauma histories will be useful in extending these results. Second, 95% confidence intervals for coefficients in the mediation analysis were markedly wide, and hence additional studies in larger samples will be useful in ascertaining magnitudes of the observed associations. Third, we observed a high correlation between threat and loss symptomatology that may call into question the extent to which these symptom clusters reflect separable components of trauma-related psychopathology that are uniquely related to CB1 receptor availability in the amygdala and attentional bias to threat. Nevertheless, high correlations among symptom clusters of trauma-related psychopathology are not uncommon, with confirmatory factor analytic studies of substantially larger samples often observing high intercorrelations among symptom clusters. Furthermore, the finding that CB1 receptor availability in the amygdala was associated only with threat, but not loss, symptomatology suggests greater specificity of association that accords with prior work. Fourth, it is important to recognize that our outcome measure in this study, VT, represents specific plus nondisplaceable binding. Because of the lack of a suitable reference region devoid of CB1 receptors, we and others using different CB1 receptor ligands cannot directly calculate binding potential, a measure of specific binding. Thus, an implicit assumption in the interpretation of our results is that there are no group differences in VND, the distribution volume of nondisplaceable tracer uptake. An alternative assumption would be that the magnitude of nondisplaceable binding is small compared with the total binding. To definitively address this issue would require a blocking study in humans to estimate VND.
To the best of our knowledge, such data are not currently available because of the lack of suitable selective CB1 antagonist drugs approved for human use. Blocking data with the CB1 receptor antagonist rimonabant in baboons, however, did show a large reduction in tracer uptake, suggesting that a substantial fraction of VT can be attributed to specific binding. Notwithstanding these limitations, the results of this study provide the first known in vivo molecular evidence of how a candidate neuroreceptor system—CB1—relates to attentional bias to threat and the dimensional expression of trauma-related psychopathology. Results revealed that greater CB1 receptor availability in the amygdala is associated with increased attentional bias to threat, as well as the phenotypic expression of threat-related symptomatology, particularly hyperarousal symptoms. Given that these results were based on a relatively small sample, further research in larger, transdiagnostic cohorts with elevated threat symptomatology will be useful in evaluating the generalizability of these results, as well as in examining the efficacy of candidate pharmacotherapies that target the anandamide–CB1 receptor system in mitigating both the endophenotypic and phenotypic expression of threat symptomatology in symptomatic trauma survivors.
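The distribution-volume quantities discussed above can be written out explicitly using standard PET kinetic-modeling nomenclature, which makes the implicit assumption concrete:

```latex
V_T = V_{ND} + V_S, \qquad
BP_{ND} \;=\; \frac{V_S}{V_{ND}} \;=\; \frac{V_T - V_{ND}}{V_{ND}}
```

Here $V_T$ is the total distribution volume (the measured outcome), $V_{ND}$ the nondisplaceable component, and $V_S$ the specific-binding component. Because $V_{ND}$ cannot be estimated without a reference region or a blocking study, group comparisons of $V_T$ reflect differences in specific binding only under the text's stated assumption that $V_{ND}$ does not differ between groups (or is small relative to $V_T$).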

SU initiation typically begins in the teens. According to the 2016 Monitoring the Future Survey, 7.3% of 8th graders have used alcohol, 5.4% have used marijuana, and 2.6% have used tobacco within the last 30 days. Marijuana, alcohol, and other substances of abuse are known to negatively impact neurodevelopment in adolescents, suggesting that the adolescent brain may have heightened vulnerability to toxic substance effects. The Adolescent Brain Cognitive Development (ABCD) Study is a large-scale, prospective, longitudinal, multi-site project designed to study brain and cognitive development in youth across the United States as they transition into adolescence and young adulthood. Participating families are recruited through school and community events at each respective ABCD study site, online and paper ads, and word of mouth. This is the first study of its kind in the US to focus on factors of critical importance to trajectories of developmental change in brain and cognition during a period of vulnerability to substance use and other mental health problems. Study outcomes will inform evidence-based standards for normal cognitive and brain development, as well as provide large repositories of data and bio-materials for the study of experiential and environmental influences on brain and cognitive development in youth. The contributions of pubertal hormones, genomic and epigenomic factors, and the interactions across these many influences serve as key biological measures for informing our understanding of developmental and behavioral outcomes in the ABCD study. To this end, along with brain imaging, neurocognitive, and other measures, a comprehensive battery of mental and physical health assessments, history of SU, behavioral assessments, and key bio-samples are being obtained from participating youth.
The present manuscript describes the rationales for inclusion and selection of the specific bio-specimens, methodological considerations for each measure, future plans for assessment of bio-specimens during follow-up visits, and preliminary ABCD data for some example topics. All procedures are approved by each site’s Institutional Review Board, and all participants undergo a verbal and written consent/assent procedure. These considerations apply to all biological samples under collection from youth, which include breath, saliva, urine, hair, blood, and baby teeth, collected for purposes of: screening for SU; measurement of pubertal hormone levels; characterization of genetic and epigenetic factors; and analyses of environmental exposures during development from baby teeth. Bio-material obtained from the ABCD Study is being stored in repositories, such as the Rutgers University Cell and DNA Repository, and baby teeth at the Icahn School of Medicine at Mount Sinai in the laboratory of Dr. Manish Arora. These stored bio-materials include measures from the assay of bio-specimens, genotyping, and bio-samples collected for future research. Preparation and processing of bio-samples at the ABCD data collection sites occurs during or just following baseline assessment of youth, and is planned for follow-up assessments, for utilization by members of the scientific community. Results from analyses of all ABCD bio-specimens will be made available through the ABCD Data Repository. Although we anticipate core specimens to be the same across the ten years of ABCD, the kinds and/or amounts of specimens to be collected in subsequent follow-up years may be adjusted to account for: 1. changes in technology; 2.
shifts in the scientific questions being addressed; or 3. additional funding for analyses. Given these considerations, future specimen collections may include measures of the microbiome, parental specimens, or other types of specimens in subsequent follow-up years. The ABCD study baseline visits occur at 9–10 years of age, prior to initiation of SU for most youth, allowing for measures of brain, cognitive, environmental, and genetic variability that may precede SU or other negative developmental influences. The ABCD study uses a combination of bio-specimens and self-report to evaluate consistency between biological testing, participant self-report, and research assistant assessment of intoxication. Self-report alone may lead to biases due to under-reporting related to individualized motivations, or to errors in recall. Further, as seen with youth self-reporting risky sexual behaviors, inaccurate self-reporting may vary as a function of race, gender, or age. However, bio-specimen sampling itself is subject to experimental error, and therefore reporting bio-specimen measures in a thorough and standardized manner across published ABCD studies is important for accurate and reproducible results. Self- and parent/guardian-report of SU is conducted through interview and questionnaire survey. Bio-specimens include the annual collection of hair samples to evaluate recent and repeated use of alcohol and other drugs during the 1–3 months prior to testing, and testing of body fluids and breath prior to onsite assessment. Because of the low levels of SU among 9–10 year olds, only a small subset of youth participants is tested in the first two years of the ABCD Study, with increasingly larger proportions of youth selected randomly for testing as the cohort ages into adolescence, when experimentation and regular and problem use of substances become more prevalent.
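The age-scaled random selection for substance testing described above can be sketched as follows. This is an illustrative sketch only: the specific selection probabilities are assumptions, not ABCD protocol values.

```python
import random

# Sketch of age-scaled random selection for bio-specimen substance testing:
# a small fraction of 9-10-year-olds is selected, with the probability
# rising as the cohort ages. Probabilities below are assumed, not ABCD's.
SELECTION_P = {9: 0.05, 10: 0.05, 11: 0.10, 12: 0.20, 13: 0.35, 14: 0.50}

def select_for_testing(participants, seed=42):
    rng = random.Random(seed)
    return [p["id"] for p in participants
            if rng.random() < SELECTION_P.get(p["age"], 0.50)]

cohort = [{"id": i, "age": 9} for i in range(1000)]
tested = select_for_testing(cohort)  # roughly 5% of the 9-year-old cohort
```

Because selection is random rather than suspicion-based, the tested subset supports unbiased comparisons between biological results and self-report at each age.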

Early research suggests that cannabis legalization does not lead to increases in adolescent cannabis use

The first column of Fig. 3 illustrates this data type using the Area Deprivation Index, a measurement of neighborhood deprivation derived from the American Community Survey. The second column in Fig. 3 illustrates traffic counts, which are point data that were obtained by surveying stations at various geographical locations. In contrast, raster data are usually obtained by model estimation, incorporating multiple sources such as satellite imaging and ground station surveys, as is seen for fine particulate matter in the third column of Fig. 3. The curated GIS database compiled by the ABCD Study LED Environment Working Group includes both vector and raster data on multiple built and natural environmental contextual variables. As shown in Table 1 and outlined in greater detail below, various environmental datasets have been used to map environmental factors at the state, census-tract, and residential level for ABCD Study participants to date. Youth grow up in overlapping circles of cultural and socio-political contexts, from their local family and neighborhoods to the states and countries in which they live. We typically focus on the experience of stigma and bias at a relatively local level. Critically, there are also important indicators of more systemic or structural bias reflected in social norms at the community level or in institutionalized laws, policies and practices that may either reflect the behavior of individuals or shape the behavior of individuals in youths' local environment. However, we rarely directly examine the relationship between objective measures of systemic/structural bias and functioning in youth. The ABCD Study provides a novel opportunity to address such critical questions with empirical data, given the geographic variability of the sites involved in the ABCD Study, which affords significant divergence across youth in their exposure to such systemic biases.
To address such questions, colleagues at Harvard University created state-level indicators of three types of structural stigma: gender, race, and ethnicity. This information was linked to each youth in the ABCD Study as a function of their baseline site of participation and does not yet include information about whether the child moved to a different state, which may have different state-level indicators, at later visits. To create these state-level measures, they used several types of data, described in detail elsewhere. First, they obtained data about implicit and explicit attitudes about each of these three identity groups aggregated at the state level, derived from large-scale projects that spanned several years: Project Implicit, the General Social Survey, and the American National Election Survey. Second, for information on gender, they obtained state-level data on women's economic and political statuses and information about reproductive policies, such as information about the availability of abortion providers. Third, for information on attitudes towards Latinx individuals, they examined state-level policies on immigration, recognizing that many Latinx individuals are not immigrants but that such state-level policies likely influence the experience of all individuals in the community with that identity.
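As a sketch of how such state-level indicators can be joined to participant records, the snippet below performs the baseline-site-state lookup described above. All values, state choices, and field names are hypothetical assumptions for illustration, not actual ABCD or Harvard data.

```python
# Hypothetical state-level structural stigma scores (illustrative values only).
structural_stigma = {
    "CA": {"gender": -0.42, "race": -0.31, "ethnicity": -0.55},
    "UT": {"gender": 0.38, "race": 0.12, "ethnicity": 0.27},
}

# Hypothetical participant records keyed by baseline site state; as noted in
# the text, later moves to another state are not reflected in the linkage.
participants = [
    {"id": "P001", "baseline_site_state": "CA"},
    {"id": "P002", "baseline_site_state": "UT"},
]

def link_stigma(participants, stigma_by_state):
    """Attach each state-level indicator to its participant record."""
    linked = []
    for p in participants:
        scores = stigma_by_state[p["baseline_site_state"]]
        linked.append({**p, **{f"stigma_{k}": v for k, v in scores.items()}})
    return linked

linked = link_stigma(participants, structural_stigma)
```

A production linkage would also track residential moves across study waves so that the state indicators could be updated at later visits.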

These data can be used to examine how these state-level biases interact with youth's identities to predict a range of factors, such as educational experience, mental health, brain development, and substance use/abuse. In the United States, public acceptance of cannabis use has increased alongside increased access because of broader cannabis legalization. Currently, 36 states have legalized either recreational or medical cannabis use. However, among younger adolescents, greater exposure to cannabis advertisements was associated with greater use, intention to use, and positive expectancies. The difference in results as a function of age highlights the importance of understanding how cannabis regulations affect younger cohorts of children and adolescents, who may have greater exposure to cannabis advertisement after living in an environment with legal access to cannabis for a longer period. Furthermore, the ABCD Study is an ideal dataset to examine the effects of cannabis legalization because there are 21 sites located in 17 states with various state cannabis policies. In addition, the ABCD Study is collecting detailed substance use data, unlike other national surveys. Cannabis legalization categories were assigned to participants based on their state of residence. The four cannabis legalization categories are: 1. Recreational – allows adults to use cannabis for recreational purposes; 2. Medical – allows adults to use cannabis for medical conditions; 3. Low THC/CBD – allows adults to use cannabis that is low in THC and high in CBD for medical conditions; and 4. No legal access to cannabis – forbids access to cannabis. Urbanicity can provide information as to the impact of living in urban areas. Urbanicity indices may reflect the presence of environmental and social conditions that are more common in urban areas, such as pollution, congestion, and increased rates of social interactions.
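The four-category legalization assignment described above amounts to a state-of-residence lookup. A minimal sketch follows; the state lists are purely illustrative assumptions, not the actual policy coding used by the study.

```python
# Illustrative state sets; NOT the study's actual policy coding.
RECREATIONAL = {"CA", "CO", "WA"}
MEDICAL = {"FL", "NY"}
LOW_THC_CBD = {"GA", "TX"}

def legalization_category(state: str) -> str:
    """Assign one of the four cannabis legalization categories by state.

    Order matters: states with recreational access typically allow medical
    use as well, so the most permissive category is checked first.
    """
    if state in RECREATIONAL:
        return "Recreational"
    if state in MEDICAL:
        return "Medical"
    if state in LOW_THC_CBD:
        return "Low THC/CBD"
    return "No legal access"
```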
To date, various health factors have been linked to urbanicity, such as increases in overweight/obesity, increased calorie intake, decreased physical activity, increased drug and alcohol use, and mental health disorders, among many others. In the ABCD Study, we have linked five measures of urbanicity to residential addresses, including two density measures, census-tract-derived metrics classifying locations as urban or non-urban areas, walkability, and motor vehicle information including distance to roadway and traffic volumes. Population density refers to the number of people living in a given unit of area. Differences in population density have been linked to psychological and environmental quality of life, and have been shown to moderate relationships between the built environment and health outcomes.

Thus, information about variability in population density may be important for contextualizing relationships between the built environment and health outcomes in the ABCD Study. As such, the population density from the Gridded Population of the World, provided by the Socioeconomic Data and Applications Center, has been linked to ABCD individual participant address information. National-level population estimates from 2010 used in this metric have been adjusted to the United Nations World Population estimates, which correct for over- or under-reporting, and mapped to an ~1-km grid. Population density values represent persons per km2. Similarly, gross residential density is a measure of housing units per acre on unprotected land and is an alternative measure of crowding. This measure was obtained from the Smart Location Database created by the United States Environmental Protection Agency based on the 2010 Census data and was also linked to ABCD Study individual addresses. While many studies have documented the effects of increased urbanicity on child and adolescent health outcomes, few studies have focused on the differential risk associated with living in a rural area relative to an urban area. Although the number of studies devoted to this topic is small, linking this information to the ABCD Study may provide an opportunity to further investigate both positive and negative impacts of living in a rural area. To classify individuals as living in a rural or urban area, urban-rural census tract variables from 2010 were mapped to each address. Based on this external database, the Census Bureau identifies two types of urban areas: Urbanized Areas of 50,000 or more people and Urban Clusters of at least 2,500 but less than 50,000 people. Rural areas are those that encompass all population, housing, and territory not included within an urban area.
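A simplified sketch of the Census urban-area thresholds cited above is shown below. Actual classification uses delineated urban-area boundaries rather than a raw population count, so this function is only illustrative.

```python
def urban_rural_class(area_population: int) -> str:
    """Classify an area by the population thresholds described above:
    >= 50,000 people -> Urbanized Area; 2,500-49,999 -> Urban Cluster;
    otherwise Rural."""
    if area_population >= 50_000:
        return "Urbanized Area"
    if area_population >= 2_500:
        return "Urban Cluster"
    return "Rural"
```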
In urban places, city planning designs have limited the walkability between work, home, and recreational spaces, with distances too great to walk. Such reduction in walkability leads to fewer opportunities for physical activity and a risk to health. Understanding potential links between the walkability of the child's built environment and physical and mental health outcomes is important in the context of the ABCD Study. A measure of walkability was linked to ABCD participant addresses using the National Walkability Index from the Smart Location Database created by the United States Environmental Protection Agency based on 2010 census data. Walkability scores were calculated at the census-tract level, ranking each census tract on a range from 1 to 20 according to relative walkability. The walkability score is based on a weighted formula that uses ranked indicators related to the propensity of walk trips.

The ranked-indicator scores used in the weighted formula include a combination of diversity of employment types plus the number of occupied housing units, pedestrian-oriented intersections, and the proportion of workers who carpool. Beyond population density and walkability, epidemiological studies have also reported associations between road proximity and brain health. Various neurodevelopmental, cognitive functioning, and mental health outcomes have been linked to living near major roadways. As such, the ABCD Study may be valuable to help understand how the distance of a child's home to major roadways, as well as the daily traffic patterns on nearby roadways, impacts cognitive and neurodevelopmental trajectories over time. Therefore, we have mapped road proximity and traffic volume estimates to residential addresses of the child in the ABCD Study to provide insight into both the major roadways nearby and how many cars and trucks typically utilize these roads. The geospatial coordinates of the major roads were obtained through the North American Atlas for roads, as last updated July 2012, and the shortest distance to a major roadway in meters was linked to participants' residential addresses. In the field of developmental cognitive neuroscience, socioeconomic status has traditionally been treated as an individual-level variable, specific to each family or person. However, socioeconomic status can also be attributed to neighborhoods and communities, which may represent an independent construct from family-level socioeconomic status with considerable effects on child development. In the ABCD Study, detailed questions are asked about socioeconomic and social factors at the family level. Thus, the ABCD Study is an ideal dataset to examine the independent and multiplicative associations of family- and neighborhood-level socioeconomic status with adolescent health.
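Returning to the roadway-distance linkage described above: the shortest distance from a residence to a set of major-road points can be sketched with the haversine formula. The coordinates below are hypothetical, and a real linkage would measure distance to road segments rather than isolated vertices.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6_371_000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def distance_to_nearest_road(home, road_points):
    """Shortest haversine distance in meters from `home` to any road point."""
    return min(haversine_m(*home, *pt) for pt in road_points)
```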
Investigations with these ABCD data can elucidate the underlying mechanisms by which various contexts uniquely influence development and potential emerging health disparities. Accordingly, the ABCD Study has incorporated the Area Deprivation Index measure of neighborhood-level socioeconomic status in past data releases, as well as information on crime and risk of lead exposure. Moving forward, three additional metrics, including the Social Vulnerability Index, Opportunity Atlas, and the Child Opportunity Index, have been linked in the 4.0 annual data release. The neighborhoods in which children in America grow up can influence outcomes in adulthood. As such, the Opportunity Atlas estimates measures of average adult outcomes across 20,000 people according to the census tracts in which they grew up. The ABCD Study includes scores from the Opportunity Atlas that indicate the predicted 2014–2015 mean income earnings of adults aged 31–37 years who grew up in that census tract as children. Scores are provided based on the childhood census tracts of the Opportunity Atlas cohort, but we also provide the adult mean earnings disaggregated by parental household income percentiles based on the national income distribution during their childhood. For example, the mean income earnings at the 25th percentile rank correspond to the mean income earnings of adults whose parents were at the 25th percentile of the national income distribution. Although the outcomes for census tracts are based on children who were born in those tracts between 1978 and 1983, Chetty et al. suggest that these longitudinal outcomes are best suited for measuring stable outcomes in earnings in adulthood. Linking measures from the Opportunity Atlas to the ABCD Study allows objective measures of neighborhood economic opportunity to be studied in relation to health outcomes in ABCD youth.
However, while the Opportunity Atlas estimates can be used as predictors of economic opportunity for children today, it is important to combine these estimates with additional data to determine applicability to neighborhoods that have undergone substantial change in the last several decades. There are vast differences in neighborhood access to opportunities and quality of conditions for children across America, including access to good schools and healthy foods, green spaces such as safe parks and playgrounds, safe housing and cleaner air. These inequitable neighborhood differences can negatively influence the current living conditions of a child, as well as development throughout childhood and subsequent health outcomes in adulthood . Children who grow up in neighborhoods with access to more educational and health opportunities are more likely to grow up to be healthy adults.

Our syndemic count likely contains factors with non-additive effects on NCI

Significant sex differences in these domain summary scores were followed by analyses of covariance adjusting for covariates and biopsychosocial factors. Lastly, among HIV-positive and HIV-negative participants, we examined the moderating role of sex in the relationship between HIV serostatus and the likelihood of NCI in two logistic regression approaches: inclusion of an HIV-serostatus by sex interaction term, and examination of the HIV and NCI relationship in sex-stratified analyses. Significant effects were followed by stepwise logistic regressions adjusting for covariates and individual biopsychosocial factors. Significance was set at a P value less than 0.05. Analyses were performed using SPSS. Given race/ethnicity differences in the prevalence of biopsychosocial factors and NCI, analyses were repeated within non-Hispanic whites and blacks.

Our findings present evidence supporting greater NCI among HIV-positive women than HIV-positive men. Race/ethnicity-stratified analyses indicated that this sex difference was primarily due to a higher proportion of black women in the HIV-positive, versus HIV-negative, group. We extend previous findings by determining whether discrepancies in biopsychosocial factors may explain higher rates of HIV-associated NCI in women. In partial support of hypotheses, adjusting for low reading level eliminated the sex difference in HIV-associated NCI. The race disparity in findings may be due to the overall higher rates and larger sex difference in biopsychosocial factors in blacks versus whites. The race disparity may also be partially due to race/ethnicity differences in health literacy, which have accounted for racial disparities in age-associated cognitive decline. Perhaps sex differences in HIV-associated NCI are more common in the context of low health literacy, possibly due to sub-optimal engagement in HIV care/treatment.
Alternatively, reading level may represent other NCI risk factors that show a female preponderance among black, HIV-positive individuals but were unmeasured.
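The two regression approaches in the methods above (an HIV-by-sex interaction term, and sex-stratified models) can be sketched with synthetic data. The paper's analyses used SPSS; the self-contained Newton-Raphson logistic fit below is only an illustration, and every coefficient and coding choice is an invented assumption.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                    # score vector
        H = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        beta += np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(1)
n = 5000
hiv = rng.integers(0, 2, n)
sex = rng.integers(0, 2, n)  # 1 = female (illustrative coding)
true = np.array([-1.0, 0.6, 0.2, 0.5])  # intercept, HIV, sex, HIV*sex
X = np.column_stack([np.ones(n), hiv, sex, hiv * sex])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true)))

# Approach 1: interaction model. beta[3] is the HIV-by-sex term on the
# log-odds scale; exp(beta[3]) is the ratio of HIV odds ratios, women vs. men.
beta = fit_logit(X, y)

# Approach 2: sex-stratified models of NCI on HIV serostatus.
beta_f = fit_logit(X[sex == 1][:, :2], y[sex == 1])
beta_m = fit_logit(X[sex == 0][:, :2], y[sex == 0])
```

With a positive interaction built into the synthetic data, the stratified fits recover a larger HIV coefficient in women than in men, mirroring the moderation pattern the text describes.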

Our race/ethnicity differences may explain inconsistent findings. Because our sample included more black women with lower reading level, the sex differences in HIV-related NCI may be more evident than in predominantly white samples. Low reading level was the only biopsychosocial factor to attenuate the sex difference in HIV-associated NCI in the overall sample and in blacks. This may be because, among the biopsychosocial factors, low reading level demonstrated the strongest relationship with NCI and the largest sex difference. Low education was not associated with NCI, likely because NCI was determined using education-adjusted neurocognitive T-scores. SUDs were not associated with NCI, possibly because most SUDs in our sample were not current. Furthermore, reading level may better reflect education quality than years of education, especially in lower socioeconomic populations, because of the many factors impacting education quality. Notably, low reading level, but not education, was a risk factor for cognitive decline among ethnically diverse elders in the general population. In addition, reading level is associated with health outcomes, including hospitalizations and outpatient doctor visits, and, thus, may be a proxy for biopsychosocial factors underlying general health. Although mean syndemic count was higher in HIV-positive women versus HIV-positive men, adjustment for syndemic count did not attenuate the sex difference in HIV-associated NCI, suggesting that the other biopsychosocial factors dilute the sex-related variance explained by reading level. Given findings that stress is more strongly associated with trajectories of cognitive impairment than depression in HIV-positive women, a syndemic count that included factors such as early life trauma and perceived stress may better capture biopsychosocial differences between HIV-positive men and women. Sex differences in the profile of HIV-associated NCI were only observed within race/ethnicity groups.
Among whites, women demonstrated poorer learning than men, and this difference was attenuated after adjusting for reading level. White women also demonstrated poorer verbal fluency than white men, and this difference was unchanged after adjustments. The sex difference in HIV-associated NCI among blacks was not driven by specific cognitive domains. In fact, in contrast to whites, black women outperformed black men in verbal fluency, and this difference was unchanged after adjustments. Sex differences in verbal fluency were likely masked in the overall sample due to differing associations within race/ethnicity groups.

The sex by HIV interaction was not significant; however, sex-stratified analyses suggest a moderating role of sex in the HIV and NCI association, particularly in blacks. Compared with their HIV-negative counterparts, NCI was 3.5 times more likely in black, HIV-positive men but six times more likely in black, HIV-positive women. Adjustment for reading level marginally attenuated the HIV and NCI association in black women, suggesting that the large discrepancy in reading level between HIV-positive and HIV-negative black women contributes to the higher risk of NCI in HIV-positive black women. Conversely, the HIV and NCI association was unchanged with adjustments in black men. Previously reported sex differences in disease characteristics unmeasured herein may contribute to sex differences in HIV and NCI associations. Overall, results suggest that HIV-positive black women are at the highest risk for NCI. Our study has limitations, including the small proportion of women and, thereby, the potential of being underpowered to detect an HIV by sex interaction. We were also unable to explore certain biopsychosocial factors. Study strengths include the large sample, an HIV-negative control group, race/ethnicity-stratified analyses, and a comprehensive test battery that defined NCI. Demographically adjusted T-scores based on HIV-negative data allowed us to examine sex differences in HIV-specific cognitive profiles; however, by restricting this analysis to HIV-positive individuals with NCI, our statistical power was limited. In conclusion, we contribute evidence that HIV-associated NCI is more prevalent in women versus men and indicate that this difference is accounted for by a lower reading level among HIV-positive women. The frequently sub-optimal educational experience of HIV-positive women and the resulting lower cognitive reserve may make HIV-positive women more susceptible to HIV-associated NCI. The effect of HIV on NCI was also greater in women versus men, particularly among blacks.
Adjusting for education quality rather than years of education may improve the specificity of neuropsychological tests for measuring cerebral dysfunction and sex differences in HIV. Clinically, practitioners should be advised that black HIV-positive women appear to be particularly at risk for NCI, and they should provide resources to accommodate these possible impairments.

Alcohol and SUDs in general are associated with dysfunction, primarily in the domains of learning/memory, working memory, and other executive-based skills, including cognitive/inhibitory control. Persons with alcohol use disorder (AUD) have been studied most, with the nature and level of impairment showing considerable variability. Approximately 55% of AUD manifest clinically significant neurocognitive deficits after acute detoxification, but some degree of recovery from these deficits is apparent with short-term, intermediate-term, and long-term abstinence from alcohol.

Some dysfunction has been reported to persist into long-term abstinence from alcohol, particularly in the domains of executive and visuospatial skills, learning and memory, and postural stability. The degree of cognitive dysfunction and the rate of recovery during abstinence appear to be influenced by many factors, such as age, sex, family history of AUD, treatment history, pretreatment alcohol consumption level, number of detoxifications, nutritional status, comorbid psychiatric and biomedical conditions, and comorbid SUD. Most treatment-seeking substance users today concurrently and/or simultaneously consume more than one illicit/licit compound, so-called polysubstance users (PSU). Among PSU, comorbid tobacco use disorder is most prevalent in AUD, and chronic cigarette smoking itself is associated with significant neurocognitive deficiencies in both AUD and non-AUD cohorts. Smoking AUD performed worse than their nonsmoking counterparts on domains of auditory verbal learning and memory, processing speed, cognitive efficiency, and working memory during the first month of abstinence from alcohol. Smoking AUD also demonstrated poorer neurocognition with increasing age than never-smoking AUD, and the performance of former-smoking AUD on several domains was intermediate to that of never-smoking and actively smoking AUD. Particularly common among PSU is the simultaneous and/or concurrent abuse of alcohol, tobacco, and psychostimulants. Therefore, in many published research reports on the neurobiological and neurocognitive consequences of AUD, many individuals were also likely nicotine-dependent, and in studies of cocaine use disorder, participants were also likely heavy drinkers. This was both likely and most apparent in the literature before about 2010, when polysubstance abuse had not been attended to more widely in substance abuse research, and other substance use was largely treated as a nuisance variable.
More recently, poorer health outcomes and greater treatment resistance have been reported for PSU compared to monosubstance users. Despite its prevalence, however, few studies have directly examined the neuropsychological or neurobiological consequences of polysubstance abuse. In early studies, cocaine-dependent individuals with and without AUD showed cognitive deficits at 3 months of abstinence, and decision-making was still impaired in similar individuals abstinent for 8 months. Even after several years of abstinence, psychostimulant-related deficits of episodic memory, planning, and cognitive flexibility were persistent in PSU. These relatively persistent cognitive deficits were associated with the amount of cocaine and cannabis consumed as well as with relapse risk.

Cognitive efficiency, processing speed, and visuospatial learning were less impaired in 1-month-abstinent PSU who continued to abstain versus those who subsequently relapsed between 1 and 4 months of abstinence; similarly, 1-month-abstinent AUD with the lowest processing speed showed a significantly increased risk for relapse following treatment. In comparisons to 1-month-abstinent AUD, 1-month-abstinent PSU performed worse on measures of auditory verbal memory and learning and general intelligence, suggesting a diminished capacity for learning, memorizing, and integrating new skills presented in clinical treatment settings. In addition, PSU exhibited worse decision-making and higher self-reported impulsivity than AUD, potentially placing them at a greater relapse risk during early recovery. Finally, PSU between the ages of 25 and 70 years showed greater age-related declines in processing speed, general intelligence, cognitive efficiency, and global intelligence than controls, indicating the detrimental cumulative effects of polysubstance use on neurocognition. As described, the degree and nature of neurocognitive deficits vary considerably among the substance-using groups investigated, critically related to the combination of both illicit and licit substances abused and their use histories. However, it is noteworthy that even mild neurocognitive deficits can impact quality of life and relapse risk. Impaired neurocognition and inhibition may adversely affect maintenance of abstinence during treatment and long-term treatment efficacy; specifically, neurocognitive deficits can interfere with treatment efficacy by reducing the individual's ability to encode, process, recall, integrate, and apply program information during and following treatment.
As such, assessment of cognitive abilities during treatment may improve treatment outcomes by providing clinicians an understanding of the individual's capabilities during treatment and informing appropriate post-treatment follow-up care. Studies of longitudinal neurocognitive changes during abstinence from alcohol and other substances are far less common than cross-sectional studies. Most longitudinal studies assessed neurocognition several weeks after detoxification and then 6–12 months later; they demonstrated that several neurocognitive functions improve at least partially during sustained abstinence, whereas some cognitive dysfunction persists for years after detoxification. Psychological changes in AUD during a residential rehabilitation program have recently been documented and include significant decreases in anxiety, depression, and psychological distress within about 1 month of detoxification in those with substance-induced mood disorders. In studies on the effects of comorbid tobacco use on neurocognitive recovery in AUD, we found that smoking was associated with significantly diminished improvement of visuospatial learning and processing speed within the first year of abstinence from alcohol. We analyzed neurocognition across three different time points during abstinence from alcohol and as a function of smoking status. Over 8 months of abstinence, AUD as a group showed significant improvements of visuospatial learning and memory, processing speed, and working memory, with less pronounced changes in executive functions, postural stability, and auditory verbal learning and memory. Overall, the recovery rates were nonlinear over time, showing faster recovery between 1 and 4 weeks than between 1 and 8 months of abstinence. Improvements in the foregoing domains in AUD were driven by never-smoking AUD, as both former-smoking and actively smoking AUD showed significantly less recovery than never-smoking AUD.
Additionally, active smokers showed significantly less improvement with increasing age than never-smoking AUD over 8 months on measures of processing speed and learning and memory. At 8 months of abstinence, currently smoking AUD remained inferior to controls and never-smoking AUD on multiple measures, former smokers performed worse than never-smoking AUD on several tests, but never-smoking AUD were not significantly different from controls on any measure.

Prop 47 led to substantially fewer drug arrests across all racial/ethnic groups

We calculated change in both absolute and relative differences across race/ethnicity, because relative differences can increase even as absolute differences narrow. Relative differences depend on the rate in the reference group; as rates in the reference group decline, a shrinking absolute difference may correspond to a widening relative difference. To calculate the difference in expected vs. observed counts, we subtracted model-predicted counts from actual arrests during the corresponding period. Confidence intervals were obtained by bootstrapping using 1,000 replications. All analyses were conducted using Stata Version 15.

Felony drug arrest rates dropped immediately and precipitously for all racial/ethnic groups during the first month, and continued to decrease over time. In the year prior to Prop 47, the proportion of drug arrests that were felonies was highest for Blacks, suggesting that Blacks would have the most to gain from reclassification of drug felonies to misdemeanors. However, Prop 47 did not reclassify all felony drug offenses, only drug possession. In the year before Prop 47 was passed, the proportion of felony drug arrests for offenses that would be reclassified was 73% among Blacks, vs. 86% among Whites and 83% among Latinos. Thus, relative to Whites and Latinos, we would expect a smaller percentage decline in felony drug arrests of Blacks resulting from Prop 47. Consistent with these predictions, the models adjusted for secular and seasonal trends showed a significant, immediate decline in monthly felony drug arrest rates of 81 per 100,000 among Blacks, compared to declines of 44 among both Whites and Latinos. As a result, the absolute disparity in Black-White felony arrest rates declined by 37 per 100,000 in the first month, from a difference of 81 per 100,000 in the month pre-passage, and continued to decrease over time.
Felony drug arrest rates among Whites and Latinos were nearly equivalent in the month pre-passage and no difference was evident by one year post-passage.
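The expected-vs-observed calculation and bootstrap described in the methods above can be sketched as follows, using synthetic monthly counts rather than the actual arrest data (the study's analysis itself was done in Stata).

```python
import numpy as np

rng = np.random.default_rng(42)
observed = rng.poisson(60, size=12)    # synthetic post-policy monthly arrests
predicted = rng.poisson(100, size=12)  # synthetic model-predicted counterfactual

# Observed minus expected: negative values mean fewer arrests than predicted.
diff = int(observed.sum() - predicted.sum())

# Percentile bootstrap over months with 1,000 replications, as in the text.
boot = []
for _ in range(1000):
    idx = rng.integers(0, 12, size=12)  # resample months with replacement
    boot.append(observed[idx].sum() - predicted[idx].sum())
lo, hi = np.percentile(boot, [2.5, 97.5])
```

With every synthetic month well below its predicted count, the whole bootstrap interval falls below zero, i.e., a statistically clear reduction in arrests.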

Though Blacks had the largest decline in absolute felony drug arrest rates, they had the smallest proportional decline, likely because they had lower proportions of felony drug offenses that were reclassified to misdemeanors by Prop 47. In the first month, felony drug arrests among Blacks declined by 60%, compared to 69% among Whites and Latinos. With a larger percentage decline among Whites, the relative Black-White disparity increased by 27%. Though proportional declines in felonies continued to grow over time for all racial/ethnic groups, differences in declines across groups grew as well. By one year post-passage, felony rates among Blacks were 3.09 times higher than those of Whites, compared to 2.16 times higher in the month pre-policy. The difference in the number of felony arrests made, compared to the number expected based on secular and seasonal trends, was substantial. During the first year of adoption, Prop 47 led to an estimated 51,985 fewer felony arrests among Whites, 15,028 fewer among Blacks, and 50,113 fewer among Latinos. These represent reductions of 75.9%, 66.1%, and 73.7%, respectively. Our analysis of 60 months of county-, race-, and offense-specific arrest rates in California points to several notable effects of Prop 47. There was little indication of elevated drug arrest rates in Latinos compared to Whites before or after Prop 47, while the large absolute Black-White difference in felony drug arrest rates was reduced. With a higher proportion of felony drug offenses affected by Prop 47, Whites had the greatest proportional decline in drug felonies, contributing to an increase in the relative Black-White disparity. Prop 47 also appears to have led to reductions in arrests for drug offenses overall, with a decrease in the absolute Black-White difference, while relative disparities remained the same. There has been little study of the impacts of reducing offense severity on racial disparities in criminal justice involvement.
In one exception, researchers found that reforming marijuana laws reduced arrests across all racial/ethnic groups, with no change in relative disparities between Blacks and other racial/ethnic groups. This aligns with our finding on Prop 47’s effect on total drug arrests, though we find increases in relative disparities for felonies.

Why did reducing the classification of some drug offenses reduce drug arrests overall? Reductions in drug arrests were unlikely a reflection of underlying crime trends – violent and property crime rates increased during this period. Prop 47 was a ballot initiative, and law enforcement may be responding to perceptions of public opinion about public safety priorities. In areas with high rates of violent crime, police agencies may welcome a freeing up of resources to focus on these offenses. Officers may use their discretion to opt out of drug arrests and focus on offenses their department prioritizes. The initial drop in total drug arrests, followed by the rise in month three, suggests officers may also be responding to feedback from the courts regarding how to interpret and act on the legislative change. Reductions in arrests for all drug offenses may also reflect fluid lines between drug possession and sale. One lieutenant explained to the first author that in his city, sellers typically plead out to possession up to the third arrest, while arrests for possession were used to get information on sellers. Reducing possession to a misdemeanor may have diminished tools police and prosecutors used to enforce drug laws, contributing to a de-emphasis on arrests. These impacts warrant further investigation.

While the absolute Black-White disparity in felony drug arrests decreased, the relative disparity increased, in part because of differences in pre-existing felony offense composition by race/ethnicity. Blacks had larger proportions of felony drug offenses that Prop 47 did not alter, such as sale. Whether this reflects racial differences in offending, or racial biases and practices in drug law enforcement, cannot be determined from our data, but other studies point to the latter. Prop 47 targeted drug possession, with the aim of decriminalizing substance use disorder.
However, distinctions between sale and possession can be murky and influenced by prosecutorial discretion concerning which charges to file. Further study of racial inequalities in drug charges could help to address this unintended effect in California and in states considering similar policies. Given substantial evidence of the role of social and economic factors in health outcomes, reducing incarceration and felony convictions through policy reform may be a critical component of addressing racial disparities in health. Our findings suggest that reclassifying drug offenses to misdemeanors is an effective approach to decreasing felony arrests across racial/ethnic groups, and to decreasing absolute differences between Blacks and Whites.

However, a full assessment of how reducing criminal penalties affected racial/ethnic disparities in criminal justice involvement must go beyond the stage of arrest, particularly since groups may differ in the prevalence of prior convictions, which affect the likelihood of prosecution. Still, there is clear evidence that, on a population level, there were declines in incarceration resulting from Prop 47, providing an opportunity to evaluate how reducing exposure affects health and associated racial/ethnic disparities, including the health of families and communities most affected by high rates of incarceration. Regarding more direct health impacts, a core component of Prop 47 was to reinvest savings from reduced incarceration to buttress substance use disorder and mental health treatment, with grants totaling $103 million awarded to 23 city and county agencies in mid-2017. Prop 47 generated debate about whether arrestees would lose the incentive to enroll in treatment without a felony threat, which remains to be evaluated. Alternatively, the populations accessing treatment, or the proportions entering through voluntary vs. court-referred admissions, may change. Racial disparities in substance treatment access could be impacted by Prop 47 as well. Blacks and Latinos arrested for drug offenses are more likely than Whites to receive incarceration, rather than drug treatment diversion. Sentence standardization initiated by Prop 36 in 2001 reduced disparities, but had a greater impact on Latinos than Blacks, perhaps because Blacks had more prior drug and violent offenses that precluded eligibility for diversion. Treatment resources generated by Prop 47 may have more promise for reducing disparities, given the broader participant eligibility criteria stipulated in grant requirements.
Critical questions remain regarding how shifting funds from a criminal justice to a public health approach to substance use disorders will influence treatment enrollment and outcomes for health, well-being, productivity, and public safety.

New programs funded by Prop 47 offer opportunities to evaluate these questions and identify the most effective models for improving public health.

Our study has several limitations. In arrests with multiple offenses, only the most severe is included in the dataset. Since Prop 47 reduced the severity of drug possession offenses, one concern is whether some reduction in arrests could be attributed to co-occurring offenses that became comparatively more severe than drug possession post-Prop 47. This could occur only in measures incorporating offenses classified as felonies pre-Prop 47 and misdemeanors post-Prop 47. These would include drug arrests reclassified by Prop 47 and total drug arrests, but would exclude felony drug arrests, misdemeanor drug arrests, and felony drug arrests unaffected by Prop 47. We used data on juvenile arrests, which contain up to five co-occurring offenses, to estimate possible bias, and found that potential masking was minimal and would not alter our findings. Approximately five percent of “drug arrests reclassified by Prop 47” may have been masked post-policy – far less than the 46-51% declines in this category 12 months post-policy. Masking in the “total drug arrests” measure was estimated at three percent, compared to declines of 17-22% at 12 months post-policy. Data are also event-, rather than person-level. Some arrests may represent the same person, though we minimized this possibility by using monthly rates. We assessed the extent of possible bias by linking individuals on name, date of birth, and jurisdiction for July 2013; just 1.2% were arrested more than once and 0.1% more than twice. We also lacked data on prior convictions; Prop 47 offenses retain felony classification for individuals with serious and/or violent convictions such as homicide and sexually violent offenses, or convictions requiring registration as a sex offender.
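The masking check described above can be sketched as follows. The record layout, offense codes, and severity ranks here are hypothetical stand-ins (the actual dataset schema is not shown in this paper); the point is the logic of counting possession arrests that a more severe co-occurring offense would hide:

```python
# Hypothetical sketch of the masking estimate: among arrests that include a
# drug-possession offense reclassified by Prop 47, what share would be hidden
# in most-severe-offense-only data because a co-occurring offense now outranks
# possession? Toy records, not real data.

records = [
    # each record: up to five co-occurring offense codes per arrest event
    ["drug_possession"],
    ["drug_possession", "petty_theft"],
    ["drug_possession", "robbery"],   # robbery outranks a misdemeanor
    ["drug_possession", "assault"],   # assault outranks a misdemeanor
    ["burglary"],
]

# Post-Prop 47 severity ranks (higher = more severe); possession is now a misdemeanor.
severity = {"drug_possession": 1, "petty_theft": 1,
            "assault": 3, "robbery": 4, "burglary": 3}

with_possession = [r for r in records if "drug_possession" in r]
masked = [r for r in with_possession
          if max(severity[o] for o in r) > severity["drug_possession"]]

print(len(masked) / len(with_possession))  # → 0.5 in this toy example
```

In the study's juvenile data, the analogous share was about five percent, well below the 46-51% observed declines, which is why masking could not explain the results.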
The history of more frequent arrests and severe charges among Blacks arrested for drug offenses may have minimized the effects of Prop 47 on reducing Black-White disparities in felony convictions. Given that other states are pursuing similar policy changes to reduce racial disparities, this effect should be further explored, and limits on prior-record exclusion criteria considered. Race/ethnicity in arrest data may be based on officers’ observations rather than self-report, unlike in population denominators. This could lead to misclassification in the numerator of arrest rates, though sensitivity analyses indicated our findings were robust. We also analyzed only three racial/ethnic groups; though these groups make up 95% of arrests in California, further research could assess disparities and impacts among other populations.

State-level criminal justice reforms often leave a great deal of room for interpretation and discretion. There can be a tension between the goals of state legislators enacting criminal justice laws and the county-level officials who administer them, leading to highly county-specific implementation. As an example, mandatory minimum sentencing laws were seen as tough on crime and therefore historically supported by state legislators as a symbolic statement. They were opposed by courts, however, because they increased trial rates and case processing times, and because their penalties were considered disproportionately severe. In his review of two centuries of mandatory minimum sentencing laws, Tonry found a long history of courts using devices to circumvent them: prosecutors refused to file charges, plea bargaining was used to reduce charges, and judges refused to convict or ignored the statute and imposed a different sentence.
When severe mandatory minimums for drug sale were in place in Michigan, for example, nearly every charge was reduced to possession, while harsh minimums for felony possession during the Rockefeller drug law era in New York were circumvented by reducing charges to misdemeanors or referring defendants to drug courts. Scholars have proposed that differences in local contexts determine how discretionary options are used within locales, producing geographic variation in case dispositions. Specifically, cases are prosecuted in accordance with personal and local principles of proportionality, local political leanings, resources for prosecution, and community priorities.