Hippocampal and prefrontal white matter volumes appear smaller in heavy alcohol-using adolescents

Approximately 8% of those ages 12–17 meet criteria for substance abuse or dependence in the past year, but this peaks between ages 18–25, when 21% meet diagnostic criteria for a substance use disorder. Those with early substance use onset are more likely to continue use into adulthood; individuals who first used alcohol at age 14 or younger have a more than fivefold increased risk of lifetime alcohol use disorder compared to those who first used alcohol after the U.S. legal drinking age of 21. Adolescent alcohol and marijuana use has been linked to harmful effects on physiological, social, and psychological functioning, including increased delinquency, aggression, risky sexual behaviors, hazardous driving, and comorbid substance use. Given the extent of brain maturation occurring during this phase of life, adolescents who use substances appear to be vulnerable to alterations in brain functioning, cognition, and behavior. Indication that alcohol and marijuana use may detrimentally influence the developing brain comes from studies showing diminutions in neurocognitive functioning, especially attention, visuospatial functioning, and learning and retrieval of verbal and nonverbal information; morphological changes; anisotropic differences in white matter; and a more distributed functional network and recruitment of alternate brain regions. Heavy alcohol use is associated with a wide range of neural consequences in adults, and similar sequelae are implicated in adolescent users. Alterations in anisotropy in the genu and isthmus of the corpus callosum in alcohol-using teens, and in frontal, cerebellar, temporal, and parietal regions in adolescent binge-drinkers, lend further support to atypical developmental trajectories.

White matter quality appears to relate to drinking in a dose-dependent manner, where higher blood alcohol concentrations are associated with poorer tissue integrity in the corpus callosum, internal and external capsules, and superior corona radiata. Functional consequences of adolescent heavy drinking are seen in attenuated frontal cortex response during spatial working memory, and deficits on neuropsychological measures of attention, information retrieval, and visuospatial functioning, with some studies showing sustained effects into adulthood. Drinking so much that hangover or withdrawal symptoms are experienced is associated with decreased performance over time. Overall, these studies indicate that heavy drinking during adolescence may be associated with decrements in cognitive performance and brain health. However, longitudinal studies are critical to determine if substance use causes these abnormalities, or if these features predated the onset of regular substance use. One such study prospectively examined the influence of alcohol on neuropsychological functioning prior to initiation of drinking. For girls who transitioned into moderate or heavy drinking, more drinking days in the past year predicted a greater reduction in visuospatial task performance from baseline to 3-year follow-up. For boys, a tendency was seen for more past-year hangover symptoms to predict poorer sustained attention. Gender differences are seen in prefrontal cortex volumes of adolescents with alcohol use disorders, where females show smaller, and males larger, volumes than controls. In addition, limited frontal response to a spatial working memory task and reduced grey matter volume in females with alcohol use disorders compared to males suggest that females may be more vulnerable to the impairing effects of alcohol. Marijuana use is also associated with atypical neural profiles. Adolescent marijuana users show a less efficient pattern of activation compared to non-users on working memory, verbal learning, and cognitive control tasks using fMRI. Brain response patterns in marijuana-using teens consistently indicate increased utilization of alternate brain networks.

In addition, users have demonstrated larger cerebellar volumes than non-users, and female marijuana users showed larger prefrontal cortex volumes than same-gender non-users, suggesting the possibility of attenuated synaptic pruning. White matter integrity is typically poorer in users than non-users, particularly in fronto-parietal circuitry and pathways connecting the frontal and temporal lobes. The functional implications of these differences appear disadvantageous, as marijuana-using teens show an increased susceptibility to depressive symptoms and poorer performance than non-users on neuropsychological tests of psychomotor speed, complex attention, verbal memory, planning, and sequencing ability, even after a month of sustained abstinence. The pharmacodynamics of alcohol and marijuana are the subject of study in several empirical works examining their physiological and behavioral effects. Chronic alcohol exposure is associated with cortical and white matter volume loss secondary to decreases in choline and N-acetylaspartate, reduced GABAa receptor efficacy, and impaired neurogenesis. Similarly, the principal active component of marijuana, delta-9-tetrahydrocannabinol (THC), produces complex alterations in cognition and behavior that involve several neuronal substrates. Brain regions with high densities of CB1 receptors, and thus susceptible to the effects of THC, include the frontal regions, hippocampus, basal ganglia, cerebellum, amygdala, and striatum. Human studies examining CNS sequelae of chronic marijuana use provide evidence for increased metabolism and activation of alternate neural pathways within these regions. Further adverse effects may result from the pharmacological interaction of alcohol and marijuana, where THC has been reported to markedly enhance the apoptotic properties of ethanol. In infant rats, administration of THC alone did not result in neurodegeneration; however, the combination of THC and a mildly intoxicating dose of ethanol induced significant apoptotic neuronal cell death, similar to that observed at high doses of ethanol alone.

In sum, studies of adolescent alcohol and marijuana use indicate weaknesses in neuropsychological functioning in the areas of attention, speeded information processing, spatial skills, learning and memory, and complex behaviors such as planning and problem solving, even after 28 days of sustained abstinence. There are also associated changes in brain structure and function that include altered prefrontal, cerebellar, and hippocampal volumes, reduced white matter microstructural integrity, and atypical brain activation patterns. There may be potential reversibility of brain structural changes with long-term abstinence, though additional studies are needed to understand the extent to which abnormalities persist or remit with time. Further, the potential interaction of alcohol and marijuana is of concern considering that comorbid use is common. It is postulated that there is an asynchronous development of reward and control systems that enhances adolescents' responsivity to incentives and risky behaviors. Bottom-up limbic systems involved in emotional and incentive processing purportedly develop earlier than top-down prefrontal systems involved in behavioral control. In situations with high emotional salience, the more mature limbic regions will override prefrontal regions, resulting in poor decisions. The developmental imbalance is unique to adolescents, as children have equally immature limbic and prefrontal regions, while adults benefit from fully developed systems. Within this model, risky behaviors of adolescents are understood in light of limbic-system-driven choices to seek immediate gratification rather than long-term gains. Moreover, this relationship may be more pronounced in adolescents with increased emotional reactivity. Behavioral and fMRI studies show increased subcortical activation when making risky choices and less activation of prefrontal cortex, as well as immature connectivity between emotion processing and control systems overall. A more specific characterization of these patterns using comparisons of low- and high-risk gambles indicated that high-risk choices activate reward-related ventral striatum and medial prefrontal cortex, whereas low-risk choices activate control-related dorsolateral prefrontal cortex. Interestingly, activation of the ventral medial prefrontal cortex was positively associated with risk-taking propensity, whereas activation of the dorsal medial prefrontal cortex was negatively associated with risk-taking propensity, suggesting that distinct neural profiles may contribute to the inhibition or facilitation of risky behaviors. Development of effective treatments for alcohol use disorder (AUD) remains a high-priority area, one that involves screening compounds in the laboratory before proceeding to clinical trials. Within this process, there is a need to develop and understand relationships among human laboratory paradigms to assess the potential efficacy of novel AUD treatments in early-stage clinical trials. To date, reviews of the human laboratory literature in AUD pharmacotherapy development indicate significant outcome variability based on experimental paradigm parameters, population of interest, and sample size, and suggest that these myriad variables contribute to the disconnect between laboratory effect sizes and treatment outcomes.
Amidst the efforts to develop translational experimental paradigms, neuroimaging tasks are increasingly used to explore potential pharmacotherapy effects on neural correlates of alcohol-induced craving. Alcohol consumption produces neuroadaptations in multiple circuits, including GABA-ergic regulation of traditional reward circuitry; alcohol craving is mediated by cortico-striatal-limbic activation, heightens relapse risk, and can be triggered through internal and external stimuli associated with alcohol consumption.

For this reason, neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), have been used to explore these circuits as potential medication targets. Recent qualitative reviews and meta-analyses suggested that while such fMRI tasks vary in sensory experiences and scan parameters, mesocorticolimbic areas consistently exhibit task-based neural activity and may be viable tools in understanding mechanisms of AUD pharmacotherapy. Based on this emerging literature, there is growing evidence that neural responses to alcohol cues and associated contexts are predictive of real-world consumption behavior and, potentially, clinical outcomes. For instance, among college students, alcohol cue-elicited blood oxygen level-dependent (BOLD) response in caudate, frontal cortex, and left insula predicted escalation to heavy drinking over a 1-year period. Further, insula and frontal gyrus activation in response to an emotion face recognition task similarly predicted alcohol-related problems five years later in young adults. Regarding treatment outcomes, increased ventral striatum (VS) activation in response to alcohol cues was associated with a faster time to relapse in a sample of abstinent AUD individuals. Comparisons of AUD treatment completers and non-completers in a community sample indicated that non-completers showed stronger associations between reported alcohol craving intensity and resting state functional connectivity between striatum and insula, relative to completers. Of note, one study reported contradicting results: relapsers, compared to successful alcohol abstainers and healthy controls, exhibited reduced alcohol cue-elicited activation in ventral striatum and midbrain. Several studies have examined whether AUD pharmacotherapies alter neural responses to contexts that elicit alcohol craving, including alcohol cues, exposure to reward and emotional faces, and stress exposure. While significant variability exists in sample populations, examined tasks, modified areas of activation, and molecular targets of treatments, there is some consistent evidence that AUD pharmacotherapies may reduce reward-related activation in regions such as the ventral striatum, precuneus, and anterior cingulate. Importantly, in one study of naltrexone, the magnitude of reduction in alcohol cue-induced ventral striatum activation was associated with fewer instances of subsequent heavy drinking. In support, Mann and colleagues have found that individuals with high ventral striatum cue reactivity demonstrate lower relapse rates when treated with naltrexone than those with low VS reactivity. Bach and colleagues have also identified that individuals with high alcohol cue-reactivity in the left putamen exhibit longer time to relapse when treated with naltrexone, compared to those with low reactivity. Together, these studies underscore reward circuitry as a key area in the translation of neural responses to clinical outcomes in AUD medication development. Alcohol self-administration tasks in the laboratory are thought to capture alcohol use behavior in controlled settings that approximate consumption in real-world settings. Studies have tested multiple variants of self-administration paradigms, including tasks that require participants to orally consume alcohol at the cost of monetary rewards per drink, and intravenous methods that can closely control breath alcohol concentration levels.
Studies have used self-administration methods to test genetic, physiological, and psychological risk factors for heavy drinking. Self-administration tasks have also been used extensively in developing effective AUD pharmacotherapies. While both fMRI cue-reactivity tasks and alcohol self-administration tasks are widely used in alcohol research, the extent to which cue-reactivity predicts self-administration in the laboratory remains unknown. In light of the emerging role of functional neuroimaging in predicting drinking behavior and AUD treatment outcomes, a remaining question is the nature of the relationship between neuroimaging task-induced neural activation and widely utilized laboratory paradigms considered proximal to real-world consumption, including self-administration tasks. To date, several studies have examined relationships of response across different laboratory paradigms and have consistently identified that alcohol craving during intravenous alcohol administration mediates the relationship between alcohol-induced stimulatory effects and subsequent oral alcohol consumption. While relationships across human laboratory paradigms have recently been delineated, no studies have yet investigated whether alcohol cue-induced BOLD response is predictive of responses within laboratory self-administration paradigms.

Stimulus presentation and data collection used E-Prime software

Our results indicate that screening, along with brief interventions and referrals to treatment, may be particularly important for identifying and providing early intervention for women of reproductive age prior to conception who may be at greater risk for prenatal alcohol and nicotine use. It is important that these conversations are supportive and non-punitive, focused on providing education and support to help women make informed decisions about substance use to increase the likelihood of future substance-free pregnancies. Women's health clinicians should also discuss risks associated with prenatal substance use at prenatal intake appointments to ensure that all patients receive the recommendation for complete abstinence throughout the pregnancy. Further, education about tracking one's menstrual cycle for earlier recognition of pregnancy could potentially help women stop alcohol or nicotine use earlier, particularly in cases where women are not actively trying to conceive. Future studies that examine pregnancy intentions may be useful to understand whether trends in prenatal alcohol and nicotine use vary among women whose pregnancies are intended versus unintended. In contrast to declines in the prevalence and frequency of alcohol and nicotine use among pregnant women seen in the current study, recent studies have found increases in the frequency and prevalence of cannabis use during pregnancy. Cannabis use during pregnancy commonly co-occurs with alcohol and nicotine use, and additional studies are needed to better understand patterns of co-use of alcohol, nicotine, and cannabis among pregnant women over time. This study has a number of strengths, including a large sample of diverse pregnant women universally screened for alcohol and nicotine use as part of standard prenatal care, data on self-reported frequency of use both in the year before pregnancy and during pregnancy, and repeated cross-sectional data spanning nine years.

There are also several study limitations. Our sample was limited to pregnant women who completed the self-reported substance use screening questionnaire as part of standard prenatal care. Findings may not be generalizable to pregnant women who did not complete the self-reported substance use screening questionnaire or to those who do not receive prenatal care, who may be more likely to use substances during pregnancy. Data on self-reported alcohol and nicotine use came from the initial prenatal visit, and do not reflect continued use throughout pregnancy. We were unable to differentiate alcohol and nicotine use in pregnancy that occurred before versus after women realized they were pregnant, and many women in our sample who used alcohol or nicotine while pregnant may have stopped as soon as they became aware of their pregnancy. Finally, our study may underestimate both the prevalence and frequency of alcohol and nicotine use before and during pregnancy, as women may choose not to disclose their use to their healthcare provider. Studies of how neurobiological systems are linked to the transdiagnostic endophenotypic and phenotypic expression of psychopathology are particularly relevant to trauma-related psychopathology, as three of the most common trauma-related disorders—posttraumatic stress disorder (PTSD), major depressive disorder, and generalized anxiety disorder—are highly comorbid and share common transdiagnostic dimensions of threat and loss symptomatology. Trauma-related threat symptomatology includes intrusive thoughts and memories, and hyperarousal symptoms such as sleep disturbance and hypervigilance, whereas trauma-related loss symptomatology includes emotional numbing and depressive/dysphoric and generalized anxiety symptoms. Elucidation of neurobiological systems implicated in trauma-related endophenotypes can inform etiologic models of trauma-related psychopathology, as well as the development of more targeted, mechanism-based prevention and treatment strategies.

Attentional bias to threat is one of the core endophenotypic characteristics of trauma-related psychopathology. Attentional biases to threatening information, such as faces and words, which are often assessed using a dot-probe paradigm, have been found to contribute to and maintain the persistence of trauma-related threat symptomatology, even months to years after trauma exposure. Greater attentional bias to threat is also associated with exaggerated fear expression and impaired extinction in individuals with PTSD. Hyperarousal symptoms, such as exaggerated startle response during fear learning, in particular, have been found to contribute to attentional bias to threat in symptomatic trauma survivors. Recent functional neuroimaging work has implicated increased amygdala activation in relation to attentional bias to threat among individuals with PTSD, suggesting that the amygdala modulates the orientation of attention toward and processing of threatening information in this population. Although cannabinoid type 1 (CB1) receptors are widely distributed in the human brain, they are found in particularly high concentrations in the amygdala, and have been associated with the processing and storage of threat-related memories, as well as the coordination of threat-related behaviors. Recently, we reported in vivo evidence of abnormal CB1 receptor-mediated endocannabinoid signaling in individuals with PTSD and suggested that increased CB1 receptor availability may be a molecular adaptation to reduced endocannabinoid availability. In addition to this work, a large body of preclinical studies has found strong support for a major role of the endocannabinoid anandamide and CB1 receptor signaling in the amygdala in modulating stress-induced threat behaviors. Understanding how key neuroreceptor systems such as CB1 relate to intermediate endophenotypic and phenotypic expression of trauma-related psychopathology may thus provide insight into molecular targets that could inform the development of mechanism-based treatment approaches. To date, however, human data evaluating this possibility are lacking.

In the current study, we aimed to address this gap in the literature by using the CB1 receptor antagonist radiotracer [11C]OMAR, which measures volume of distribution (VT), a quantity linearly related to CB1 receptor availability, to evaluate the relationship between CB1 receptor availability in the amygdala, objectively assessed attentional bias to threat, and the transdiagnostic and dimensional expression of trauma-related threat and loss symptomatology. To obtain a sample that encompassed the full dimensional range of study measures, we employed an inclusive sampling approach by recruiting a sample of individuals who represented a broad transdiagnostic and dimensional spectrum of trauma-related psychopathology, ranging from healthy, nontrauma-exposed individuals to trauma-exposed individuals with severe trauma-related psychopathology. On the basis of prior work linking CB1 receptor availability in the amygdala to threat processing and threat symptomatology to attentional bias to threat, we hypothesized that greater CB1 receptor availability in the amygdala would be associated with greater attentional bias to threat, as well as increased severity of threat symptomatology, particularly hyperarousal. We then evaluated a mediational model to examine whether attentional bias to threat mediated the relationship between CB1 receptor availability in the amygdala and trauma-related psychopathology. Lifetime traumatic events were assessed using the Traumatic Life Events Questionnaire (TLEQ), and psychiatric diagnoses were established using DSM-IV-TR criteria and the Structured Clinical Interview for DSM-IV, which was administered by an experienced master's- or doctoral-level psychiatric clinician. Only traumatic events that met criteria A1 and A2 for a DSM-IV-TR-based diagnosis of PTSD were counted toward participants' trauma histories in this study. Nontrauma-exposed healthy adults did not report any trauma exposures on the TLEQ and did not have any lifetime psychiatric diagnosis, including substance abuse or dependence or nicotine dependence. Severity of trauma-related threat and loss symptomatology was assessed using the Clinician-Administered PTSD Scale for DSM-IV (CAPS); the Hamilton Rating Scale for Depression (HAM-D) to assess depressive symptoms; and the Hamilton Rating Scale for Anxiety (HAM-A) to assess nonspecific anxiety symptoms. Scores on these structured clinician-administered measures of trauma-related psychopathology represented a transdiagnostic and dimensional spectrum of trauma-related psychopathology, ranging from nontrauma-exposed asymptomatic adults to trauma-exposed adults with severe trauma-related psychopathology. All participants were evaluated by physical examination, electrocardiogram, standard blood chemistry, hematology laboratory testing, toxicology testing, and urinalysis. All but two participants were psychotropic medication naive; the two who had taken antidepressants did so for less than a week and were medication free for at least 6 months before the study. Participants with significant medical or neurologic conditions, with substance abuse within 12 months of the scan, lifetime history of intravenous substance dependence, or with a history of head injury that involved loss of consciousness were excluded from the study. Lifetime cannabis abuse/dependence was an exclusion criterion; occasional cannabis users were eligible to participate, but not if they had used cannabis within 12 months of the scan.
The absence of substance use was determined by self-report and confirmed by the results of urine toxicology and breathalyzer tests at screening and on the days when magnetic resonance imaging (MRI) and positron emission tomography (PET) scans were conducted.

The medical and psychiatric evaluation was followed by MRI and a resting state PET scan on a High Resolution Research Tomograph PET scanner with the CB1-selective radioligand [11C]OMAR. To obtain plasma anandamide levels, blood samples were collected at the time of tracer injection, processed immediately after collection in the laboratory adjacent to the scan room, and frozen at −80 °C until analyzed, as previously described. The dot-probe task was composed of 160 trials. Each trial started with a fixation cross presented in the center of the screen for 500 ms. When the fixation cross disappeared, two words in 12-pt Arial font immediately appeared in the center of the screen for 500 ms, one above and one below the location of the fixation cross, separated by 1.5 cm. Following the presentation of the words, a target probe appeared in the location previously occupied by one of the words. The probe remained on the screen until participants responded, after which the next trial started. Participants were instructed to focus their attention on the fixation cross at the start of each trial, and when a probe appeared they were to identify the probe letter using a designated mouse button, as quickly as possible. Given the heterogeneity of trauma histories in our sample, the stimuli used were 32 trauma-related and 64 neutral words that were selected from a larger list developed by MacLeod et al. Word pairs were chosen for salience to the experience of traumatic life events. Word pairs were matched in terms of first letter, number of letters, and frequency of usage in the English language, as suggested by MacLeod et al., and were presented in random order. To reduce the effect of anticipatory responding and outliers, response times (RTs) <200 ms and >3 SD above the mean for each trial were discarded. Attentional bias to threat was calculated as the difference between average RT to targets at neutral word locations and average RT to targets at threat word locations. Negative scores indicate attentional bias away from threat, whereas positive scores indicate attentional bias toward threat. Simple descriptive statistics were computed to summarize demographic, trauma-related, and clinical variables for the sample. To reduce symptom clusters into composite measures of trauma-related threat and loss symptomatology based on prior work, we conducted two principal components analyses: the first contained CAPS measures of re-experiencing and hyperarousal symptoms, and the second contained CAPS measures of avoidance/numbing symptoms and HAM-D and HAM-A measures of major depressive and anxiety symptoms. Pearson or Spearman correlations, as appropriate based on data distributions, were then computed to evaluate associations between [11C]OMAR VT values in the amygdala, attentional bias to threat, and composite measures of trauma-related threat and loss symptomatology. If significant associations were observed, exploratory post hoc analyses were conducted to evaluate associations between component aspects of composite measures; exploratory post hoc analyses were also conducted to evaluate associations between [11C]OMAR VT values in brain regions other than the amygdala in relation to attentional bias to threat; α was set to 0.01 for all of these analyses to reduce the likelihood of type I error.
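The trial-trimming and bias-score arithmetic just described reduces to a few lines of code. The sketch below is illustrative only (it is not the authors' analysis code), and it assumes per-condition rather than per-trial outlier trimming; the function and variable names are hypothetical.

```python
import numpy as np

def attentional_bias(rt_neutral, rt_threat):
    """Dot-probe attentional bias score in ms.

    Positive values indicate bias toward threat (faster responses to probes
    at threat-word locations); negative values indicate bias away from threat.
    """
    def clean(rts):
        rts = np.asarray(rts, dtype=float)
        rts = rts[rts >= 200.0]                # drop anticipatory responses (<200 ms)
        cutoff = rts.mean() + 3.0 * rts.std()  # drop outliers (>3 SD above the mean)
        return rts[rts <= cutoff]

    return clean(rt_neutral).mean() - clean(rt_threat).mean()

# Toy example: faster probe responses at threat locations yield a positive score.
print(attentional_bias([520.0, 540.0, 510.0, 530.0], [480.0, 470.0, 490.0, 485.0]))
```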
To evaluate whether attentional bias to threat mediated the relation between CB1 receptor availability in the amygdala and the phenotypic expression of trauma-related psychopathology, we conducted a bootstrapped mediation analysis with 10,000 replicates using Mplus version 7.11. Model fit was assessed using χ², comparative fit index (CFI), and standardized root mean square residual (SRMR) fit statistics; by convention, nonsignificant χ² values, CFI values ≥0.90, and SRMR values <0.05 indicate a good fit to the data. Using the CB1 receptor radiotracer [11C]OMAR, we found that greater CB1 receptor availability in the amygdala was associated with increased attentional bias to threat, as well as increased severity of trauma-related threat symptomatology, in humans presenting with a broad dimensional spectrum of trauma-related psychopathology.
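The mediation model itself was fit in Mplus; for readers working in Python, the indirect effect and its percentile-bootstrap confidence interval can be approximated as below. Variable names are hypothetical, and the two ordinary-least-squares paths are a simplification of the fitted structural model.

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=10_000, seed=0):
    """Percentile-bootstrap estimate of the indirect effect a*b,
    where a is the slope of m ~ x and b is the slope of m in y ~ m + x."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    rng = np.random.default_rng(seed)
    n, est = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample subjects with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                  # path a: m ~ x
        X = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][1]  # path b: y ~ m + x
        est[i] = a * b
    return est.mean(), np.percentile(est, [2.5, 97.5])

# Hypothetical usage: x = amygdala [11C]OMAR VT, m = attentional bias score,
# y = threat-symptom composite.
# indirect, ci = boot_indirect(vt_amygdala, bias_scores, threat_composite)
```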

Analyzing the DNA of natural fiber rope components can be valuable for several reasons

The results of the restriction digest of the grass-stain-covered sisal rope indicated that if this were an unknown piece of rope, it would be difficult to determine its identity. Through analysis of restriction bands, three out of five types of rope could be eliminated, resulting in the unknown being either sisal rope with contamination or abaca rope with contamination. A detailed analysis of the base sequence of the mixed DNA amplicons was performed, but was not helpful. It is possible that other restriction enzymes could differentiate between these two types of rope. Cloning the PCR amplicons and analyzing individual clones to separate DNA components might be necessary. As a general rule to minimize contamination, samples should be taken from the interior strands of a rope. However, it will still be difficult to distinguish between a "mixed" rope, for example one containing jute and Hibiscus, and a rope where contamination is present. Extensive contamination by the same unique, complex mixture of contaminants might be used to support, although not to prove, a finding that two samples represent segments of the same rope. First, DNA analysis may require less experience than microscopy. In microscopy, rope is identified through crystals, pits, the color, lumen, cell wall, and cross markings. According to Wiggins, a considerable amount of experience and skill is needed to identify rope fibers through microscopy. Second, the current microscopic examination method may not be capable of unambiguously characterizing all natural fibers.

DNA analysis can strengthen identification. Third, DNA analysis may have other applications, such as in archaeology—determining the source, local or imported, of cordage found at an excavation. Finally, with advancements in technology, DNA analysis could eventually provide a background for identifying individual samples of rope, in addition to the rope's botanical origin. Human immunodeficiency virus (HIV) infection is often accompanied by chronic fatigue and disrupted sleep patterns. A considerable number of people with HIV (PWH) report sleep disturbance, with some studies estimating the prevalence of symptoms to be up to five times that of the general population. PWH face a variety of unique social, physical, and socioeconomic stressors, which may influence sleep disturbance. Prolonged poor sleep quality among PWH is associated with anxiety, depression, loss of productivity, interference with employment, physiologic stress, and poorer quality of life. Methamphetamine (MA), a central nervous system stimulant, is among the most commonly used addictive drugs, with 35 million users per year worldwide. PWH and individuals at risk for HIV transmission, such as men who have sex with men, have a particularly high prevalence of MA use. A recent study of MA-dependent gay and bisexual men reported the prevalence of HIV infection to be 63%. Among PWH, MA is linked to accelerated viral replication, more rapid progression to AIDS, reduced effectiveness of antiretroviral therapy (ART), and increased immune suppression. Global neuropsychological impairment and dependence on basic and instrumental activities of daily living are more common among PWH who also use MA than among those who do not, with an additive effect of HIV and MA on neuronal injury and glial activation. Despite these negative effects, perceived benefits, such as sexual enhancement and relief of negative psychosocial symptoms, continue to drive MA use among PWH. MA functions by stimulating monoamine release, and facilitates hyperactivity, euphoria, feelings of increased mental and physical capacity, and riskier sexual behavior. Among the general population, prolonged MA use can have detrimental effects on alertness, mood, cognition, and activity levels.

MA use also has been associated with poor sleep quality, increased sleep latency, and daytime sleepiness. Cessation of MA is often accompanied by withdrawal symptoms such as anxiety, depression, and craving that can further contribute to poor sleep quality. The adverse effects of MA also contribute to functional decline, such as unemployment, which also may exacerbate sleep disturbance. Among MA-using PWH, poorer adherence and missing ART doses after MA use have been reported, in part due to disrupted sleep-wake cycles. Taken together, acute and chronic MA use can have multiple direct and indirect effects on sleep quality. Few studies have examined the combined associations of MA use and HIV on sleep disturbance. This study evaluates effects of lifetime MA use on self-reported sleep quality among participants with or without HIV infection. The hypothesis was that lifetime MA use disorder would be associated with poorer sleep quality, particularly among PWH, and that this would relate to poor outcomes, including poorer cognition, reduced independence in activities of daily living, unemployment, and poorer life quality. Participants included 225 HIV-seropositive and 88 HIV-seronegative adults enrolled in NIH-funded research studies at the UC San Diego HIV Neurobehavioral Research Center. All participants completed a standard, self-report evaluation of sleep quality as well as comprehensive neurobehavioral and neuromedical assessments. Exclusion criteria were: 1) sleep apnea or restless leg syndrome; 2) disruptions to sleep due to temporary circumstances; 3) history of comorbid neurological illness or injury that would affect cognitive functioning; 4) history of psychotic disorder; 5) alcohol dependence within the past year; and 6) low premorbid verbal IQ as estimated by a Wide Range Achievement Test-4 score less than 80. The study protocol was approved by the UC San Diego Institutional Review Board and each participant provided written, informed consent. The Composite International Diagnostic Interview was administered to diagnose participants for current and lifetime substance use and mood disorders, as defined by the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders. For the initial analyses, participants were stratified into four groups based on HIV status and lifetime MA use disorder diagnosis: HIV+/MA+; HIV+/MA−; HIV−/MA+; and HIV−/MA−.

Current depressive symptoms were assessed using the Beck Depression Inventory, Second Edition (BDI-II). Item 16 on the BDI-II, which assesses change in sleep pattern in the last two weeks, was excluded from the BDI-II total score to avoid collinearity with our outcome of interest, perceived sleep quality. All participants completed the Pittsburgh Sleep Quality Index (PSQI), a self-report questionnaire that assesses perceptions of average sleep quality and disturbances over the past 30 days. The PSQI is a widely used and well-validated measure of subjective sleep quality in adults. The PSQI has 19 items that assess seven components of sleep: quality, latency, duration, efficiency, disturbances, use of medications to aid sleeping, and daytime sleepiness. Component scores range from 0 to 3. Items were summed to generate a continuous global sleep score ranging from 0 to 21. Global scores > 5 indicate problematic sleep. For purposes of the present study, the continuous global PSQI score and dichotomous sleep quality classification were used as outcome variables. All participants underwent a standardized medical history interview, neuromedical examination, and blood and urine collection. HIV serological status was confirmed via ELISA and Western blot test, and HIV RNA levels were measured in plasma by rtPCR. Current CD4+ T-cell count was measured in blood by clinical flow cytometry. Additional HIV disease and treatment variables included nadir CD4+ T-cell count, AIDS diagnosis, estimated duration of HIV disease, and current ART regimen. MA use characteristics were self-reported. Comorbid medical conditions and current medication use were determined by self-report and medical chart review. All participants completed a comprehensive and validated neurocognitive assessment across seven neurocognitive domains commonly affected by HIV and MA use; these include verbal fluency, executive functioning, speed of information processing, learning and memory, working memory/attention, and motor. Using established normative standards, test scores were adjusted for known influences on neurocognitive performance. Deficit scores were calculated for each domain and averaged across the test battery to derive a global deficit score ranging from 0 to 5. Dependence in instrumental activities of daily living (IADL) was determined using a revised version of the Lawton and Brody ADL questionnaire, in which participants rated current degree of independence as compared to prior best level of independence across 13 IADL domains. Participants were classified as IADL "dependent" if they endorsed requiring increased assistance in at least 2 IADL domains. Employment status and symptoms of cognitive difficulties in daily life were determined via the Patient's Assessment of Own Functioning Inventory. The Karnofsky Performance Status Scale is a clinician-administered assessment of disease-related functional impairment with a range from 0 to 100 with standard intervals of 10. Self-reported physical and mental health quality of life were assessed using the Medical Outcomes Study Short-Form Survey. Physical and mental health composite scores were calculated via validated summary score formulas derived from an obliquely rotated factor solution.
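The PSQI scoring rule described above (seven 0-3 components summed to a 0-21 global score, dichotomized at >5) can be made concrete with a short sketch; the component names below are descriptive labels, not the instrument's official item codes.

```python
PSQI_COMPONENTS = ("quality", "latency", "duration", "efficiency",
                   "disturbances", "sleep_medication", "daytime_sleepiness")

def psqi_global(components):
    """Sum the seven PSQI component scores (each 0-3) into the global
    score (0-21) and dichotomize at the conventional cutoff (>5)."""
    assert set(components) == set(PSQI_COMPONENTS), "need all seven components"
    assert all(0 <= v <= 3 for v in components.values()), "components are scored 0-3"
    total = sum(components.values())
    return total, total > 5        # (continuous score, problematic-sleep flag)

score, problematic = psqi_global(dict(quality=1, latency=2, duration=1,
                                      efficiency=0, disturbances=1,
                                      sleep_medication=0, daytime_sleepiness=2))
print(score, problematic)          # 7 True
```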
Group differences on demographics, neuropsychiatric and neuromedical characteristics, HIV disease and treatment parameters, MA use history, and global sleep outcomes were tested using analysis of variance (ANOVA), Kruskal-Wallis tests, Chi-square statistics, or Fisher's exact test. Two-tailed t-tests were used to compare groups on HIV disease and methamphetamine use characteristics. Follow-up pairwise comparisons were conducted using Tukey's Honest Significant Difference or Wilcoxon tests for continuous outcomes, or Bonferroni corrections for categorical outcomes.
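As a rough illustration of this analytic toolkit, the scipy calls below run each of the named tests on toy data; the numbers are invented and merely stand in for the study's group-level variables.

```python
from scipy.stats import f_oneway, kruskal, chi2_contingency, fisher_exact

# Continuous outcome across the four HIV/MA groups (toy data):
g1, g2, g3, g4 = [7, 9, 8], [5, 6, 7], [6, 8, 9], [3, 4, 5]
print(f_oneway(g1, g2, g3, g4))            # ANOVA
print(kruskal(g1, g2, g3, g4))             # non-parametric alternative

# Categorical outcome (e.g. problematic sleep yes/no by group):
table = [[10, 5, 6, 2],                    # problematic
         [15, 20, 12, 18]]                 # not problematic
print(chi2_contingency(table)[:2])         # chi-square statistic, p value
print(fisher_exact([[10, 5], [15, 20]]))   # 2 x 2 case with small cells
```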

Cohen's d measured effect size for pairwise comparisons of means. Based on the pattern of univariable group differences in global sleep health and the small sample size of the HIV−/MA+ group, multiple linear regression examined global sleep scores as a function of MA status and clinical covariates specifically within PWH. Covariates included clinical variables from Table I with univariable associations with the primary independent variable (MA status), as well as associations with the primary dependent variable, with p values < 0.10. Sex and sexual orientation were included based on theoretical evidence. Additionally, HIV disease and treatment covariates were included to determine if HIV-specific factors attenuated the effects of MA status on global sleep in PWH. Stepwise regression models used backward selection based on the Akaike Information Criterion (AIC) to select the optimal model. To determine potential co-occurring neurobehavioral functional impairments associated with poor sleep quality within the dual-risk HIV+/MA+ group, additional nominal logistic regression models based on AIC were run to examine the association between problematic sleep membership and neurobehavioral outcomes. Covariates were selected based on univariable associations with global PSQI and did not include HIV or methamphetamine characteristics. Multiple regression analysis within PWH examined the independent contribution of MA status to global PSQI scores while adjusting for clinical covariates and HIV-disease-specific factors. Based on univariable associations with the primary independent variable (MA status), the following were included as covariates in AIC-based regression: age, sex, education, BDI-II scores, lifetime alcohol use disorder, lifetime cocaine use disorder, lifetime cannabis use disorder, MA use in the last 30 days, and HCV. In addition, body mass index (BMI) was added to the model based on its association with the primary dependent variable, along with sexual orientation and HIV-specific covariates. The selected model retained lifetime MA use disorder, higher BDI-II scores, higher BMI, and detectable HIV RNA as associated with higher global PSQI scores. In considering the possible contribution of extraneous variables that may be common among participants who reported recent MA use, the regression model was rerun after excluding those who endorsed MA use within the last 30 days. Using AIC selection criteria, lifetime MA use disorder continued to contribute significantly to the variance in sleep quality. Similarly, to focus on a clinically relevant subgroup, the regression model was rerun after excluding participants who were off ART or had HIV RNA levels above 200 copies/ml. In this virologically suppressed subgroup, lifetime MA dependence again remained associated with global sleep based on AIC selection. Rates of MA use are elevated among PWH, and MA use is associated with poorer sleep quality in the general population. The present study is the first to explore the relationships between past MA use disorder, HIV disease, and sleep quality. Our results demonstrate that PWH who have a history of prior MA use disorder had significantly poorer sleep quality and were more likely to be classified as problematic sleepers than those without a lifetime disorder.
This relationship between lifetime MA use disorder and sleep quality among PWH is robust to MA group differences in biopsychosocial factors, and lifetime MA use disorder is linked to sleep quality above and beyond the effects of HIV disease severity and other established risk factors for poor sleep.
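The backward AIC selection used in these models is straightforward to sketch with statsmodels. The function below is a generic illustration under assumed column names, not the study's code; a real analysis would also need to handle categorical coding and missing data.

```python
import statsmodels.formula.api as smf

def backward_aic(df, outcome, candidates):
    """Backward elimination: repeatedly drop the predictor whose removal
    most lowers AIC, stopping when no removal improves the model."""
    kept = list(candidates)
    best = smf.ols(f"{outcome} ~ {' + '.join(kept)}", data=df).fit().aic
    while len(kept) > 1:
        trial = {t: smf.ols(f"{outcome} ~ {' + '.join(k for k in kept if k != t)}",
                            data=df).fit().aic
                 for t in kept}
        drop = min(trial, key=trial.get)
        if trial[drop] >= best:
            break                      # no single removal lowers AIC further
        best = trial[drop]
        kept.remove(drop)
    return kept, best

# Hypothetical usage mirroring the covariate set described above:
# kept, aic = backward_aic(pwh_df, "psqi_global",
#                          ["ma_status", "age", "sex", "education", "bdi",
#                           "bmi", "hcv", "detectable_rna"])
```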

The same age grouping was used for regular alcohol use for comparative purposes

The amplitudes were calculated using the S-transform applied to the recorded data for the delta frequency band, extending from 300 to 700 ms post-stimulus. Jones et al. provide a complete description of the experiment and the calculation of the values. The values were log transformed, non-parametric age regression was performed on the variables, and the standardized residuals were used for further analysis. Since the principal objective is to determine whether there are age-varying effects of the predictive variables, survival analysis using standard Cox proportional hazards models, in which effects are age invariant, is not appropriate. In addition, such models cannot account for differential effects on survival which are the result of unmeasured heterogeneity in the sample. Discrete time survival analysis (DTSA) provides an alternative model which avoids these problems and which can be implemented with logistic regression methods. By dividing subjects into groups based upon age of onset, a single logistic regression model can be applied to estimate the probability of those at risk in each age group becoming alcohol dependent as a function of the predictive variables. The functional form of the model can be set to determine age-specific effects and/or age-independent effects, and use age-invariant and/or age-dependent covariates. A weighted model was employed to enable the use of all members of multi-member families. The output of a DTSA calculation is the same as the output from a logistic regression calculation. Each DTSA model had the following structure: the outcomes, or dependent variables, were either alcohol dependence or regular alcohol use.
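Concretely, DTSA of this kind is usually estimated by expanding the data to one record per subject per age range at risk and fitting an ordinary logistic regression with age-range indicators. The sketch below illustrates that person-period expansion under hypothetical column names; the handling of missing follow-up is simplified, and the family-based weighting mentioned above would be added via a weighted GLM.

```python
import pandas as pd
import statsmodels.formula.api as smf  # used in the commented fit below

AGE_BINS = ["under16", "16_17", "18_19", "over19"]

def person_period(df):
    """Expand one row per subject into one row per age range at risk.

    `df` has one row per subject with `onset_bin` = index into AGE_BINS
    at which the outcome occurred (or None if it never did), plus any
    age-invariant covariates. Subjects leave the risk set after onset.
    """
    rows = []
    for _, subj in df.iterrows():
        for j, age_bin in enumerate(AGE_BINS):
            rec = subj.to_dict()
            rec["age_bin"] = age_bin
            rec["event"] = int(subj["onset_bin"] == j)
            rows.append(rec)
            if rec["event"]:
                break                  # no longer at risk after onset
    return pd.DataFrame(rows)

# Age-specific covariate effects come from interacting covariates with the
# age-range indicators (column names hypothetical):
# fit = smf.logit("event ~ C(age_bin) + C(age_bin):genotype + ero + gender",
#                 data=person_period(subjects)).fit()
```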

Regular alcohol use was defined as consumption at least once a month for 6 or more consecutive months. In all cases four distinct age ranges were used: under 16, 16 and 17, 18 and 19, over 19. These age groups were determined by the fact that ages of onset were whole numbers of years, that the numbers of those who became alcohol dependent be about the same in each group, and that there be at least 50 subjects in each group who became alcohol dependent to provide a reasonable degree of statistical reliability in the calculations. The covariates were a genotype from a CHRM2 SNP, ERO power from one of the leads, family type, number of parents who smoke, gender, and scores on principal components 1 and 2 derived from the stratification analysis of the sample genome. The CHRM2 SNPs analyzed here, rs978437, rs7800170, rs1824024, rs2061174, and rs2350786, include the three most significant of those for alcohol dependence with comorbid drug dependence in Dick et al., as well as two others that appear to be in a range of significance indicated by that table. From preliminary statistical screening of the genotypic distributions in the sample, a recessive model was employed which contrasted major allele homozygotes with those who were not. The electrophysiological phenotypes used in the analysis were found to be significant in previous studies; these studies showed reduced amplitudes in alcoholics and in those offspring at high risk. The number of parents who smoke was selected in part because the Kaplan-Meier curves with different values showed considerable variation; see the cited literature for a discussion of the effects of parental smoking on adolescent behavior. DTSA results were calculated for the entire sample. Our fourth item for investigation, whether the influence of these SNPs would be greater in a behaviorally defined sub-sample comprising a putatively more genetically vulnerable group, was suggested by the results of Dick et al. and King and Chassin. Given the prevalence of various substance abuse categories in the sample and the number of subjects in each category who become alcohol dependent during the age range of the study, the broad criterion of the use of an illicit drug, regardless of age of onset or frequency of use, was employed to define the more genetically vulnerable group. This sub-sample will be called the "illicit drug use" sub-sample. Unlike the definition of illicit drug use in Dick et al., this definition does not categorize regular use of cannabis as illicit drug use.

Since more than half the sample are characterized as regular users of cannabis at some time during the age range of the study, regular use of cannabis cannot be considered a practice that violates norms of age-related behavior or involves enhanced risk taking, and thus it is not an element of "externalizing psychopathology". We note that 90% of cannabis dependent subjects who are also alcohol dependent are included in the sub-sample, so although our criterion does not span regular cannabis use, we are probably picking up those more genetically vulnerable cannabis dependent subjects and thus paralleling the group used in Dick et al. For the regular alcohol use outcome, there were a sufficient number of illicit drug non-users who became regular users of alcohol to provide a sub-sample to contrast with the illicit drug use sub-sample. Since about 75% of the alcohol dependent subjects were members of the illicit drug use sub-sample, there were too few alcohol dependent subjects with no illicit drug use to provide a contrasting sub-sample. However, some inferences about the significance of illicit drug use for the onset of alcohol dependence can be drawn from the differences between the DTSA results for the entire sample and the results for the illicit drug use sub-sample. Since regular alcohol use is a necessary condition of alcohol dependence, it could not be used as a covariate in the DTSA calculation of the onset of alcohol dependence. In order to investigate the duration of the transition from regular alcohol use to alcohol dependence as a function of the age of onset of alcohol dependence, the third item for investigation, logistic regression analyses of the onset of alcohol dependence as the outcome in each of the age ranges, restricted to the sample of those who are regular users of alcohol within that age range, were carried out. All covariates used in the DTSA calculations were used, with duration of drinking as an additional covariate. Although those who become alcohol dependent are removed from the sample at each age range, this is not a survival analysis method because new regular users of alcohol are added to the sample at each age range. However, the results of these tests can be compared to the DTSA results for the illicit drug use sub-sample to examine the effect of including all alcohol dependent subjects in the sample, as opposed to a restricted sub-sample as found in the illicit drug use sub-sample.

In order to investigate the duration of the transition from regular alcohol use to alcohol dependence as a function of the age of onset of regular alcohol use, both Fisher’s exact test and the Cochran-Armitage trend test were applied to the distribution in each of the first three age ranges of the proportion of those who became alcohol dependent in the same or subsequent age range for those who became regular users of alcohol in that age range. We investigated whether there were age-related trends in the genotypic distributions which underlie the results of the DTSA for the SNP covariates and the rapidity of the transition from regular alcohol use to alcohol dependence. Two separate Cochran-Armitage trend tests were carried out on genotypic distributions of the SNPs of the illicit drug use sub-sample. Given the use of the recessive genetic model in the DTSA tests, subjects in the illicit drug use sub-sample were divided into two genotypic groups, those who had two copies of the major allele and those who did not. The first trend test was of the genotypic distribution of those who became alcohol dependent as a function of age of onset of alcohol dependence, comparing those who had two copies of the major allele with those who did not. The null hypothesis is that the relative effect of having a particular genotype does not vary linearly between ages of onset; that is, that the ratio of different genotypes of those who become alcohol dependent does not display a linear trend between ages of onset. To test whether there was trend in the genotypic distributions as a function of the rapidity of the transition from regular alcohol use to alcohol dependence, a second trend test was carried out. This test was of the genotypic distribution of those who began regular alcohol use in the youngest age range and became alcohol dependent at any age as a function of age of onset of alcohol dependence, comparing those who had two copies of the major allele with those who did not. The null hypothesis is that the ratio of different genotypes of those who become alcohol dependent does not show a trend between different time spans from the onset of regular alcohol use to the onset of alcohol dependence.
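A minimal implementation of the Cochran-Armitage trend test for a 2 x k table (without continuity correction) is shown below; the genotype counts in the example are invented, not the study's data.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(counts_a, counts_b, scores=None):
    """Two-sided Cochran-Armitage trend test on a 2 x k table.

    counts_a / counts_b: counts per ordered category for the two genotype
    groups (e.g. major-allele homozygotes vs. all others across the
    age-of-onset groups). Returns (z statistic, two-sided p value).
    """
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    n = a + b                                  # column totals
    N, R = n.sum(), a.sum()
    s = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, float)
    t = np.sum(s * (a - n * R / N))            # trend statistic
    p = R / N
    var = p * (1 - p) * (np.sum(n * s**2) - np.sum(n * s)**2 / N)
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Made-up genotype counts across the four age-of-onset groups:
print(cochran_armitage([30, 25, 18, 10], [20, 25, 32, 40]))
```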

We restricted our analysis to those who became regular alcohol users in the youngest age range in order to obtain results for those who might take a relatively long time to develop alcohol dependence. The prevalence of alcohol use and dependence in the sample being studied is shown in Table 1 in a form relevant to DTSA. In DTSA, for each outcome, those who have the possibility of suffering the outcome in each age range are the at-risk group. The at-risk group in the youngest age range is the entire sample. In each succeeding age range those who have suffered the outcome previously or for whom no information for that age range is available are removed from the at-risk group. Consequently the at-risk group diminishes in size in each successive age range. Because more subjects become regular users of alcohol than become alcohol dependent in each age range, the at-risk group for alcohol dependence is increasingly larger than the at-risk group for regular alcohol use in each subsequent age range. The illicit drug use sub-sample is also characterized in the table. For each of the five SNPs an analysis was run with the ERO measure taken from each of the three leads, as described in section 2.4, for a total of fifteen models. An examination of the logistic regression results showed that for each SNP, the beta coefficients had little difference when different leads were used; similarly, for each ERO measure the beta coefficients had little difference when different SNPs were used. The same was true of coefficients for the clinical variables. We conclude that the effect of each covariate is essentially independent of the effect of any of the others. Thus results from SNPs, electrophysiological variables, and other variables can be reported seriatim without any distortion. Applying the Nyholt correction derived from the LD matrix, we obtain 3.2 effective SNPs. The independence of the covariates also implies that the effective number of tests is no more than the number of age ranges times the sum of the effective number of SNPs and electrophysiological variables in each sample group. Considering that the overall pattern of results is of primary interest, not only the positive results, and that no consensus exists for the most appropriate way to handle the analysis of correlated phenotypes and correlated SNPs in these circumstances, we do not enter any corrections for multiple testing. Table 2 provides all significant results for the youngest and oldest age ranges. Tables 5 and 6 provide more complete results. Significant CHRM2 SNP associations were noted for the onset of alcohol dependence, and were found only in those with age of onset younger than 16. These results were obtained both in the entire sample and the illicit drug use sub-sample. In all cases with significant results, occurrence of the major allele was the risk factor. No CHRM2 SNPs were found to be significant predictors of the onset of regular alcohol use for any age range. In comparing the entire sample with the sub-sample, the CHRM2 effects are greater in the illicit drug use sub-sample than in the sample as a whole. In particular, restricting the sample to those most genetically vulnerable enables two more SNPs to become significant at the 0.05 level.
If the risk of the onset of alcohol dependence as a function of genotype were as great in the drug non-users as in the illicit drug use sub-sample, then, taking into account the lower rate of regular alcohol use among the drug non-users, there would be almost twice as many alcohol-dependent subjects among the drug non-users as there in fact are.
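For reference, the Nyholt correction mentioned above estimates the effective number of independent tests from the eigenvalues of the SNP correlation (LD) matrix as Meff = 1 + (M - 1)(1 - Var(lambda)/M). The sketch below applies that formula to an invented 5-SNP matrix; the study's own LD matrix (not reproduced here) gave approximately 3.2 effective SNPs.

```python
import numpy as np

def nyholt_meff(ld_corr):
    """Nyholt (2004) effective number of independent tests from an LD
    correlation matrix: Meff = 1 + (M - 1) * (1 - Var(lambda) / M),
    where lambda are the eigenvalues of the matrix."""
    lam = np.linalg.eigvalsh(np.asarray(ld_corr, dtype=float))
    M = len(lam)
    return 1 + (M - 1) * (1 - lam.var(ddof=1) / M)

# Illustrative 5-SNP correlation matrix (values invented):
R = np.array([[1.0, 0.8, 0.6, 0.2, 0.1],
              [0.8, 1.0, 0.7, 0.3, 0.2],
              [0.6, 0.7, 1.0, 0.4, 0.2],
              [0.2, 0.3, 0.4, 1.0, 0.5],
              [0.1, 0.2, 0.2, 0.5, 1.0]])
print(nyholt_meff(R))
```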

Geographic disparities in the severity of punishment for drug offenses are also stark

Research conducted in 2010 in California found that the proportion of arrests for possession of a controlled substance that were charged as felonies varied across California counties from 25 to 100 percent. Even after controlling for case characteristics and criminal history, county of residence was a strong predictor of felony filings following arrest. Many states are beginning to reduce criminal penalties for drug possession. While many drug law reforms have focused on marijuana, California passed Proposition 47: The Safe Neighborhoods and Schools Act in 2014, which made more expansive changes by reducing possession of narcotics, possession of a controlled substance, and possession of concentrated cannabis to misdemeanors, as well as several property offenses. Prior to Prop 47, these offenses were classified as felonies, with prosecutorial discretion to reduce charges to misdemeanors. The aim of Prop 47 was to focus spending on serious offenses in a state with overcrowded prisons, invest the savings to support mental health and substance use disorder treatment, and increase alternatives to incarceration for low-level crimes. Since the passage of Prop 47, five additional states have reduced non-marijuana drug possession penalties, as priorities shift towards reserving incarceration for serious, violent offenses and improving treatment access for substance use disorders. The following three chapters examine the effects of reducing drug possession to a misdemeanor offense on disparities in criminal justice involvement and drug-related hospital visits. First, we assess whether racial/ethnic disparities in felony drug arrests declined, and whether shifts away from the policing of substance use overall occurred differentially across race/ethnicity. Second, we investigate whether eliminating prosecutorial discretion for charging drug possession as either a felony or misdemeanor reduced geographic disparities in felony convictions following a drug arrest, or whether the effect was buffered by increases in felony convictions for concurrent or non-Prop 47 felony drug offenses. Third, we use variation in Prop 47's impact on county drug arrest rates to evaluate whether there is evidence that reducing criminal penalties for drug possession had unintended consequences with regards to drug-related hospital visits.

Our analysis of 60 months of county-, race-, and offense-specific arrest rates in California points to several notable effects of Prop 47. Prop 47 led to substantially fewer drug arrests across all racial/ethnic groups. There was little indication of elevated drug arrest rates in Latinos compared to Whites before or after Prop 47, while the large absolute Black-White difference in felony drug arrest rates was reduced. With a higher proportion of felony drug offenses affected by Prop 47, Whites had the greatest proportional decline in drug felonies, contributing to an increase in the relative Black-White disparity. Prop 47 appears to have led to reductions in arrests for drug offenses overall, with a decrease in the absolute Black-White difference while relative disparities remained the same.

There has been little study of the impacts of reducing offense severity on racial disparities in criminal justice involvement. In one exception, researchers found that reforming marijuana laws reduced arrests across all racial/ethnic groups, with no change in relative disparities between Blacks and other racial/ethnic groups. This aligns with our finding on Prop 47's effect on total drug arrests, though we find increases in relative disparities for felonies. Why did reducing the classification of some drug offenses reduce drug arrests overall? Reductions in drug arrests were unlikely to be a reflection of underlying crime trends; violent and property crime rates increased during this period. Prop 47 was a ballot initiative, and law enforcement may be responding to perceptions of public opinion about public safety priorities. In areas with high rates of violent crime, police agencies may welcome a freeing up of resources to focus on these offenses. Officers may use their discretion to opt out of drug arrests and focus on offenses their department prioritizes. The initial drop in total drug arrests followed by the rise in month three suggests officers may also be responding to feedback from the courts regarding how to interpret and act on the legislative change. Reductions in arrests for all drug offenses may also reflect fluid lines between drug possession and sale. One lieutenant explained to the first author that in his city, sellers typically plead out to possession up to the third arrest, while arrests for possession were used to get information on sellers. Reducing possession to a misdemeanor may have diminished tools police and prosecutors used to enforce drug laws, contributing to a de-emphasis on arrests. These impacts warrant further investigation. While the absolute Black-White disparity in felony drug arrests decreased, the relative disparity increased, in part because of differences in pre-existing felony offense composition by race/ethnicity.

Blacks had larger proportions of felony drug offenses that Prop 47 did not alter, such as sale. Whether this reflects racial differences in offending, or racial biases and practices in drug law enforcement, cannot be determined from our data, but other studies point to the latter. Prop 47 targeted drug possession, with the aim of decriminalizing substance use disorder. However, distinctions between sale and possession can be murky and influenced by prosecutorial discretion concerning which charges to file. Further study of racial inequalities in drug charges could help to address this unintended effect in California and in states considering similar policies. Given substantial evidence of the role of social and economic factors in health outcomes, reducing incarceration and felony convictions through policy reform may be a critical component of addressing racial disparities in health. Our findings suggest that reclassifying drug offenses to misdemeanors is an effective approach to decreasing felony arrests across racial/ethnic groups, and absolute differences between Blacks and Whites. However, a full assessment of how reducing criminal penalties affected racial/ethnic disparities in criminal justice involvement must go beyond the stage of arrest, particularly since groups may differ in the prevalence of prior convictions, which affect the likelihood of prosecution. Still, there is clear evidence that on a population level, there were declines in incarceration resulting from Prop 47, providing an opportunity to evaluate how reducing exposure affects health and associated racial/ethnic disparities, including the health of families and communities most affected by high rates of incarceration. Regarding more direct health impacts, a core component of Prop 47 was to reinvest savings from reduced incarceration to buttress substance use disorder and mental health treatment, with grants totaling $103 million awarded to 23 city and county agencies in mid-2017. Prop 47 generated debate about whether arrestees would lose the incentive to enroll in treatment without a felony threat, which remains to be evaluated. Alternatively, the populations accessing treatment or the proportions entering through voluntary vs. court-referred admissions may change. Racial disparities in substance treatment access could be impacted by Prop 47 as well. Blacks and Latinos arrested for drug offenses are more likely than Whites to receive incarceration, rather than drug treatment diversion.

Sentence standardization initiated by Prop 36 in 2001 reduced disparities, but had a greater impact on Latinos than Blacks, perhaps because Blacks had more prior drug and violent offenses that precluded eligibility for diversion. Treatment resources generated by Prop 47 may have more promise for reducing disparities, given the broader participant eligibility criteria stipulated in grant requirements. Critical questions remain regarding how shifting funds from a criminal justice to a public health approach to substance use disorders will influence treatment enrollment and outcomes for health, well-being, productivity, and public safety. New programs funded by Prop 47 offer opportunities to evaluate these questions and identify the most effective models for improving public health.

In arrests with multiple offenses, only the most severe is included in the dataset. Since Prop 47 reduced the severity of drug possession offenses, one concern is whether some reduction in arrests could be attributed to co-occurring offenses that became comparatively more severe than drug possession post-Prop 47. This could occur only in measures incorporating offenses classified as felonies pre-Prop 47 and misdemeanors post-Prop 47. These would include drug arrests reclassified by Prop 47 and total drug arrests, but would exclude felony drug arrests, misdemeanor drug arrests, and felony drug arrests unaffected by Prop 47. We used data on juvenile arrests, which contain up to five co-occurring offenses, to estimate possible bias. We found that potential masking was minimal and would not alter findings. Approximately five percent of “drug arrests reclassified by Prop 47” may have been masked post-policy, far less than the 46-51% declines in this category 12 months post-policy. Masking in the “total drug arrests” measure was estimated at three percent, compared to declines of 17-22% at 12 months post-policy. Data are also event-, rather than person-level. Some arrests may represent the same person, though we minimized this possibility by using monthly rates. We assessed the extent of possible bias by linking individuals on name, date of birth, and jurisdiction for July 2013; just 1.2% were arrested more than once and 0.1% more than twice. We also lacked data on prior convictions; Prop 47 offenses retain felony classification for individuals with serious and/or violent convictions such as homicide and sexually violent offenses, or convictions requiring registration as a sex offender. The history of more frequent arrests and severe charges among Blacks arrested for drug offenses may have minimized the effects of Prop 47 on reducing Black-White disparities in felony convictions. Given that other states are pursuing similar policy changes to reduce racial disparities, this effect should be further explored, and limiting prior-record exclusion criteria considered. Race/ethnicity in arrest data may be based on officers' observations, rather than the self-report used in population denominators. This could lead to misclassification of the numerator in arrest rates, though sensitivity analyses indicated findings were robust. We also analyzed only three racial/ethnic groups; though these groups make up 95% of arrests in California, further research could assess disparities and impacts among other populations.

State-level criminal justice reforms often leave a great deal of room for interpretation and discretion.
There can be a tension between the goals of state legislators enacting criminal justice laws and county level officials who administer them, leading to highly county-specific implementation . As an example, mandatory minimum sentencing laws were seen as tough on crime and therefore historically supported by state legislators as a symbolic statement. They were opposed by courts, however, because they increased trial rates and case processing times, and penalties were considered disproportionately severe . In his review of two centuries of mandatory minimum sentencing laws, Tonry found a long history of courts using devices to circumvent them: prosecutors refused to file charges, plea bargaining was used to reduce charges, and judges refused to convict or ignored the statute and imposed a different sentence . When severe mandatory minimums for drug sale were in place in Michigan, for example, nearly every charge was reduced to possession , while harsh minimums for felony possession during the Rockefeller drug law era in New York were circumvented by reducing charges to misdemeanors or referring defendants to drug courts . Scholars have proposed that differences in local contexts determine how discretionary options are used within locales, producing geographic variation in case dispositions. Specifically, cases are prosecuted in accordance with personal and local principles of proportionality , local political leanings, resources for prosecution, and community priorities . A study of the disparate prosecution of drug possession across California in 2010 is illustrative, and the case to which we will return: charging policies and decisions were influenced by community and judicial attitudes toward the crime and the political and philosophical beliefs of district attorneys and charging deputies . Studies of the use of prosecutorial discretion to mitigate or maximize penalties in the context of three strikes laws have also found that more politically conservative environments tend to be more punitive, and counties with a high case flow relative to the budget for prosecution have lower average sentence severities . Prosecutors and judges may appropriately use discretion to align a punishment with the characteristics of a case and the local community’s priorities for law enforcement. However, unequal application of the law to equivalent cases calls into question the integrity and equity of the law, and can undermine public trust in law enforcement . For example, after controlling for case characteristics, third strike sentences in California were disproportionately imposed upon black defendants, with the largest gaps evident for offenses that could be charged as felonies or misdemeanors at the prosecutor’s discretion . These geographic differences may stabilize or exacerbate social and health inequalities.

A significant indirect effect is indicated when the CI does not include the value zero

The retrospective nature of this study limits the conclusions that can be drawn, as the methodology was not able to ascertain any measure of acute versus chronic marijuana use. In urine toxicology screens, such as those used in the ED, detectable levels of THC can be present for up to 4.6 days after the last noted use for individuals who do not use marijuana frequently, or up to 15.4 days after last use for those who are frequent users. Therefore, the presence of marijuana at the time of exposure may not correlate with recent use. Timing of exposure may be a factor and is an important limitation in this study. Additionally, study findings are based on patients with TBI who had a urine THC test performed. Since not all patients with moderate or severe TBI were tested for the presence of THC, bias is introduced. There was a large percentage of study participants who were not tested or had missing test results for THC. Consequently, a more accurate analysis of THC prevalence and association was not possible, as there was no way to determine which of the cases that were not tested or had no results documented were positive for THC. It is important to note that, despite the low THC prevalence observed, this study reflects only one year's worth of data, from 2017, and previous prevalence rates for comparison cannot be established from the NTDB. This is because the presence of THC was never abstracted nor documented in the data set prior to 2017. Future studies examining prevalence rates over a series of years are warranted. Observational research has been shown to provide mis-estimations of the outcome of interest. Data analyzed from the NTDB are extracted from various trauma registries across the United States and Canada.

Each hospital employs its own registry abstractors who input the data collected from the electronic medical record into the registry, which then feeds into the NTDB. This is an important limitation, as the data inputted may be inaccurate, incomplete, or inconsistent. This can result in information bias. Furthermore, systematic underreporting of data by participating hospitals can result in selection bias and create an inconsistent database. An example of this was the lack of consistency in the measurement and documentation of blood alcohol levels at time of hospital admission, and the missed opportunities for urine testing. This contributed to a large percentage of missing data, which may have also introduced information bias. Additionally, this variation in reporting results in incomplete data, as seen in this study, as well as conflicting data. There were two occasions where participants were documented as not having been tested for any substances yet were each found to have been positive for THC and/or cocaine. Such practices and variations between trauma registries lead to a lack of confidence regarding data accuracy and the resulting analyses.

Successful viral suppression from combination antiretroviral therapy has led to an increase in life expectancy among persons living with HIV. While severe HIV-associated neurocognitive disorder is less prevalent in the cART era, mild to moderate HAND persists despite virologic suppression. HAND affects up to 50% of HIV-positive persons, with older HIV-positive adults at greater risk for neurocognitive impairment than their younger counterparts. Among neurocognitive domains affected by HAND, complex motor skills are consistently compromised across time. Complex motor skills refer to a combination of cognitive and perceptual-motor abilities, including perception, planning, continuous tracking, and sequential movements. Although the prevalence of complex motor impairment has receded in comparison to the pre-cART era, deficits in complex motor functioning are still observed in approximately 30% of those with HAND.

Complex motor impairment is related to everyday functioning impairment, including driving ability, highlighting the clinical relevance of understanding mechanistic pathways underlying HIV-associated motor dysfunction. A recent longitudinal study found that complex motor function is particularly vulnerable to the effects of age and stage of HIV infection, and implicated the basal ganglia as a neural correlate of interest. The effects of acute HIV infection on the basal ganglia are well documented, with greater atrophy associated with psychomotor slowing. Inflammatory processes are one putative factor that may contribute to central nervous system injury, including deficits in complex motor skills. Biomarkers of inflammation, such as cytokines and monocytes, are elevated in the context of HIV infection. HIV, viral products, and activated immune cells are able to cross the blood brain barrier and contribute to inflammation in the CNS. Neuroimaging studies have shown that peripheral inflammatory biomarkers are able to alter neural activity in the basal ganglia, including dopaminergic activity, which is reflected by psychomotor slowing in HIV-negative adults. Among HIV-positive persons, global neurocognitive impairment is associated with elevation of various peripheral biomarkers of inflammation and coagulation. Taken together, deficits in complex motor performance are commonly observed among HIV-positive persons, and elevation in peripheral biomarkers of inflammation may be a contributing factor. Thus, we hypothesize that HIV will have negative direct and indirect effects, via inflammation, on complex motor performance.

Comparisons of demographic, neuromedical, psychiatric, and biomarker data between the HIV-positive and HIV-negative groups were performed with two-tailed t-tests, Wilcoxon rank-sum tests, likelihood ratio chi-square tests, or Fisher's exact tests (FET), as appropriate. FET and Kruskal-Wallis tests were conducted to examine whether complex motor performance, inflammatory biomarkers, and HIV disease characteristics differed by age decade. When appropriate, biomarker values were log10 transformed for group comparisons. Hedges' g for continuous variables and odds ratios for binary variables were used to generate effect sizes for group comparisons. To adjust for multiple comparisons, the Benjamini-Hochberg method was used to limit the false discovery rate to 5%. Group comparisons were performed with JMP 11.0.0.
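A minimal sketch of the Benjamini-Hochberg step-up procedure used to cap the false discovery rate at 5%; the p-values below are invented for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * q; reject hypotheses 1..k
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])
        reject[order[:k + 1]] = True
    return reject

# Hypothetical p-values from a family of group comparisons
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30, 0.80]))
# -> [ True  True False False False]
```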

Path analysis was used to test the indirect effect of HIV on complex motor performance through the pathway of inflammation. To conduct the path analysis, we calculated bias-corrected 95% confidence intervals using bootstrapping with the Process Procedure. The Process Procedure calculates the unstandardized path coefficients for all paths in the model. Covariates were selected based on which variables in Table 1 demonstrated univariable associations with the primary dependent variable at a critical α = .10. The following covariates met our criterion for inclusion in the analysis as control variables: hypertension, hyperlipidemia, diabetes mellitus, lifetime cannabis use disorder, and lifetime methamphetamine use disorder. Given that demographic variables were accounted for in the adjusted T-scores for complex motor performance, we did not include demographic variables as covariates for analyses involving complex motor performance as the outcome variable of interest. Path analyses were performed using IBM SPSS Statistics for Windows, Version 24.

Although neurologic findings commonly associated with HIV infection have been suggested to largely remit with initiation of cART, our cross-sectional study observed worse complex motor skills across the adult age continuum of HIV-positive, relative to HIV-negative, adults. Inflammation burden was higher among HIV-positive adults compared to the HIV-negative comparison group. Consistent with our hypothesis, HIV infection was observed to have both direct and indirect effects, via inflammation, on complex motor performance, such that inflammation burden accounted for 15.1% of the effect of HIV infection on motor performance when controlling for relevant covariates. These results indicate that inflammatory processes may contribute to worse complex motor skills in the context of cART-treated HIV. Our sample consisted of virally suppressed, chronic HIV-positive patients; however, impairment in complex motor skills was still observed in 20% of the HIV-positive sample. This observed impairment rate is consistent with motor impairment rates reported in the previous literature. Some evidence indicates a higher impairment rate in complex motor performance among persons with chronic HIV compared to persons with acute or early HIV infection, which may reflect a history of immunosuppression and/or greater inflammation burden. For example, persons with AIDS performed significantly worse on a fine motor speed test than those without AIDS. Deficits in motor skills may indicate injury to the basal ganglia, which are part of the motor control pathways. The basal ganglia appear to be particularly vulnerable to alterations in BBB permeability, immune cellular infiltration, and accumulation of HIV viral RNA. Neuropathological studies have observed higher concentrations of macrophages, microglia, and viral proteins in the basal ganglia. Our path analyses indicate that inflammation burden may play a role in the association between HIV infection and worse complex motor performance. This finding is consistent with previous research demonstrating the detrimental impact of HIV and its proteins on the brain through peripheral and CNS pathways. Monocytes and macrophages are observed to infiltrate the CNS in HIV infection.
Elevations in soluble markers of monocyte and cytokine activation, including sCD14, MCP-1/CCL2, and IL-6, have been observed among HIV-positive adults with neurocognitive impairment. Expression of MCP-1/CCL2 may contribute to upregulation of HIV-1 replication, thereby contributing to an increased risk of neurocognitive impairment.
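A rough sketch of the bootstrapped indirect-effect test described above, using a simple percentile bootstrap on simulated data (the Process Procedure additionally applies a bias correction); all variables and effect sizes here are invented stand-ins. Per the criterion noted earlier, the indirect effect is deemed significant when the confidence interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the variables above: x = HIV serostatus (0/1),
# m = inflammation burden composite, y = complex motor performance score.
n = 200
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(size=n)            # mediator depends on exposure
y = -0.4 * m - 0.3 * x + rng.normal(size=n)  # outcome depends on both

def indirect_effect(x, m, y):
    # a-path: x -> m; b-path: m -> y controlling for x (both via OLS)
    a = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x, m]), y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)              # resample subjects with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes zero here
```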

In addition to inflammation, coagulation imbalance, which includes upregulation of D-dimer, is associated with global neurocognitive functioning among HIV-positive adults. In the current analysis, D-dimer was included in the inflammation burden composite given the bidirectional relationship between inflammation and coagulation. Multiple factors likely contribute to activation of inflammatory and coagulation pathways observed among HIV-positive persons on cART, such as viral replication, excess levels of translocated microbial products and other chronic pathogens, and loss of immunoregulatory responses. HIV was observed to have both direct and indirect effects via inflammation burden on complex motor performance; however, inflammation burden accounted for only 15.1% of the effect of HIV on complex motor performance. These results suggest there are additional mechanisms by which HIV may have deleterious effects on complex motor performance. Other factors contributing to neurocognitive impairment may include vascular remodeling, metabolic disorders, and co-infections. The present study evaluated a model that identified one plausible indirect pathway between HIV and worse complex motor performance; future research may build upon this work by evaluating models with multiple pathways in order to estimate the relative contributions of various plausible mediators. A better understanding of the interplay of factors contributing to neurologic dysfunction in HIV may lead to more accurate prognosis and/or risk stratification of HIV-positive adults with regard to neurologic dysfunction. Although some brain metabolite abnormalities may improve after initiating cART, some abnormalities persist, including ongoing inflammatory processes. A long-term prospective cohort study found interacting effects of aging and HIV disease stage, such that the magnitude of motor performance impairment was greater than the sum of the independent effects of age and HIV disease stage. The interaction between aging and HIV disease stage suggests that complex motor skills may be particularly susceptible to aging-related progression of neurocognitive impairment among HIV-positive adults. The Grooved Pegboard Test appears to be particularly sensitive in detecting neurocognitive decline among HIV-positive persons.

Our study findings should be considered in light of their limitations. First, although we used path analysis, this study was cross-sectional in nature, which precludes us from making inferences with regard to causation or mediation. Our results are also consistent with the alternative hypothesis that both inflammatory processes and complex motor skills may be affected by an unobserved third variable. Given that the parent study involves repeated assessment of neurocognitive functioning and inflammation over five years, future analyses will examine whether changes in inflammation are associated with changes in complex motor functioning. Second, our study could not rule out other pathology underlying complex motor performance deficits. Third, it is unclear how best to conceptualize inflammation burden, as normative standards regarding biomarker measurement and conceptualization have not been established. However, our calculation of a composite inflammation burden score may be a viable method compared to reliance on a single biomarker, given that our analysis indicated that complex motor performance was significantly associated with the composite but not with any individual biomarker.
Furthermore, conceptualization of an inflammation burden composite adds to the body of research aimed at developing clinically relevant risk indices. Fourth, our HIV-negative comparison group was relatively healthy and differed from the HIV-positive group on multiple characteristics. Fifth, our HIV-positive group consisted of mostly non-Hispanic white males with some college education, which is not fully representative of HIV-positive persons in the United States. Sixth, prior research indicates that multitask paradigms may better detect motor impairment than a single motor task; the Grooved Pegboard Test, however, involves the use of many complex operations and is correlated with a range of cognitive functions. In summary, HIV has a deleterious impact on complex motor skills, which may be partially explained by inflammatory processes.

Correlates that are identified as significant will become covariates in the adjusted prevalence analysis

These basic summary statistics will be calculated for continuous variables and binary categorical variables. Continuous variables will be plotted to assess for normality; tests to assess for normality will include kurtosis and skewness. If data are normally distributed, then parametric statistics will be utilized; if not, non-parametric statistics will be utilized. Frequency distributions, including numbers and percentages, will be generated for each of the categorical variables/correlates; scatter plots will be created so that outliers can be identified. All correlate variables presented in Table 6 will be examined; all the variables but one are categorical. Categorical variables will be mapped against presence of marijuana exposure and TBI severity to determine if significant differences are present across each of the categories. Tests to determine significant differences across categories include the chi-square test or Fisher's exact test, depending on the data, as sketched below. Variables identified as significant will be used as covariates in the adjusted prevalence rates. The variable of age is continuous. The literature suggests that the relationship between age and drug exposure is not linear, so this relationship will be tested in this study: a bar plot of age against marijuana exposure will be used to determine whether a linear relationship exists. If there is not a linear relationship, the variable will be categorized. Prior to the adjusted prevalence analysis, these covariates will be examined for multi-collinearity.
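A brief sketch of the screening steps described above: the skewness/kurtosis normality check and the chi-square versus Fisher's exact decision. All data and cell counts below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age = rng.normal(loc=45, scale=15, size=500)   # hypothetical continuous variable

# Normality screen via skewness and excess kurtosis (values near 0 suggest normality)
print("skew:", round(stats.skew(age), 2), "kurtosis:", round(stats.kurtosis(age), 2))

# Hypothetical 2x2 table: marijuana exposure (rows) by TBI severity (columns)
table = np.array([[120, 80],
                  [ 90, 110]])
chi2, p, dof, expected = stats.chi2_contingency(table)
# Fisher's exact test is the fallback when expected cell counts are small (<5)
if (expected < 5).any():
    _, p = stats.fisher_exact(table)
print("association p-value:", round(p, 4))
```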

For Aim 3, the objective is to determine the relationship between marijuana exposure at the time of injury, the mechanism of injury, and TBI severity. The null hypothesis is that a relationship between marijuana exposure at the time of injury, the mechanism of injury, and severity of TBI does not exist. As illustrated in the conceptual framework, mechanism of injury is considered a mediating variable; it potentially mediates the relationship between marijuana exposure at time of injury and TBI severity. First, an estimate of the effect between marijuana exposure and TBI severity will be obtained without the mediator variable of mechanism of injury. To test for mediation, several regression analyses that include the mediator variable will be conducted, and the significance of the coefficients will be examined at each step to assess direct and indirect effects. First, I will test for a direct relationship between marijuana exposure and TBI severity. Assuming there is a significant relationship between the two variables, I will then conduct an analysis to determine if marijuana exposure affects mechanism of injury. Assuming there is a significant effect, I will then conduct an analysis to determine if mechanism of injury affects TBI severity, and whether the mediation effect is complete or partial. To determine if the mediation effect is statistically significant, I will use either the Sobel test or bootstrapping methods. All analyses will be conducted unadjusted and then adjusted for covariates and confounders identified a priori and via Aim 2. The analyses will use logistic regression modeling because the dependent variable, TBI severity, is dichotomous, with only two categories: moderate or severe TBI. While TBI severity could be treated as a continuous variable using the numeric GCS score, a binary variable will be used because it is easier for clinicians to interpret: clinicians base treatment not on subtle gradations of TBI severity but on whether the injury is moderate or severe according to GCS threshold cut-offs.
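A hedged sketch of the stepwise mediation procedure laid out above, fitted with logistic regressions because the outcome is binary; the simulated data and effect sizes are invented, and a real analysis would add the covariates identified via Aim 2.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
# Invented variables: thc = marijuana exposure, mech = mechanism of injury
# (binary here for simplicity), severe = severe (1) vs. moderate (0) TBI.
thc = rng.integers(0, 2, n)
mech = (rng.random(n) < 0.40 + 0.10 * thc).astype(int)
severe = (rng.random(n) < 0.30 + 0.10 * mech + 0.05 * thc).astype(int)
df = pd.DataFrame({"thc": thc, "mech": mech, "severe": severe})

def fit_logit(y, X):
    return sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Step 1: total effect of exposure on outcome
print(fit_logit(df["severe"], df[["thc"]]).params)
# Step 2: exposure -> mediator
print(fit_logit(df["mech"], df[["thc"]]).params)
# Step 3: mediator -> outcome controlling for exposure; if the exposure
# coefficient shrinks toward zero, mediation is partial or complete
print(fit_logit(df["severe"], df[["thc", "mech"]]).params)
```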

Dummy variables will be used to input non-binary categorical variables into the analysis. However, with the predicted large sample size, and understanding the potentially significant confounding effects of certain variables such as other drugs, I hope to create binary variables for each drug listed in the NTDB. If this cannot be done, another approach would be to code all drug use into 3 categories: a value of 0 assigned for 'no drug use', a value of 1 for 'stimulants' only. Observational studies offer valuable methods for studying various problems within healthcare where other study designs, such as randomized controlled designs, may not be feasible or may even be unethical. High-quality observational studies can render invaluable and credible results that positively impact healthcare when studying clinically relevant topics in patient populations of interest to practicing clinicians. Despite this, observational studies can be subject to a few potential problems within the design and analytical phases that can render results highly compromised. Potential problems that will be encountered in this study design are selection bias, information bias, and confounding. Possible countermeasures to address these problems will be discussed in this section. A potential problem regarding selection bias is present in the current study. The target study population is comprised of a purposive sample of patients registered in the NTDB. The NTDB is a centralized national trauma registry developed by the American College of Surgeons, with the largest repository of trauma-related data and metrics, reported by 65% of trauma centers across the U.S. and Canada. The main advantage of utilizing such a registry for this study is that it constitutes the largest trauma database in the U.S. Furthermore, the NTDB allows for risk-adjusted analyses, which can be important when evaluating outcomes in trauma. Despite its incredible potential in informing trauma-related research, the selection of participants from the NTDB is not without its own biases. The reporting of data into the NTDB is done on a voluntary basis by participating trauma centers, rendering a convenience sample that may not be representative of all trauma patients, and may also not be representative of all trauma centers across the U.S. This creates the problem of selection bias.

Furthermore, a limitation of the NTDB related to selection bias is that it includes a larger number of trauma centers with typically more severely injured patients, potentially under-representing patients with milder traumatic injuries and injury scores. Additionally, patients who may be traumatically injured and who are not admitted to a participating trauma center will not be included in the NTDB, nor will trauma patients who died on scene before being transported. Another consideration to note is that participating hospitals may differ in their criteria for which patients to include in the database, specifically patients who are dead on arrival or those who die in the Emergency Department. This discrepancy in inclusion and exclusion criteria between hospitals regarding specific injuries makes representative comparisons potentially difficult. Lastly, it is important to mention that large databases such as the NTDB are subject to missing or disparate data. This is often due to a multitude of factors; for example, various demographic data points, test results, and other key information, such as procedures, may not be documented in the health record and are therefore omitted from the database. Missing data often contribute to information bias; however, they can also contribute to selection bias, because one of the methods of dealing with missing data is excluding participants for whom data are missing. Missing data may undermine the ability to make valid inferences; therefore, steps will be taken throughout the design and operational stages and methods within this study to avoid or minimize missing data. Methods to reduce information bias that can lead to selection bias will be discussed in the analysis section of this paper. Due to the methods by which data are collected and inputted into the NTDB, potential problems are encountered in terms of data accuracy. Underreporting of variables obtained from the NTDB has often been noted as a problem due to the reliability of data extraction by participating hospitals. The data are self-reported and often inputted by staff dedicated to data collection. A major variance between participating hospitals is that hospitals with more resources are more likely to have dedicated staff for data collection. This can lead to information bias in those hospitals that are more compliant in reporting data metrics when compared to others that are not. For example, hospital data registries that have incomplete data on complications may appear to deliver better care than hospitals that consistently record all complications. A recent study by Arabian et al. revealed the presence of inaccuracy and variability between hospitals, specifically in the areas of data coding and injury severity scoring. Additionally, the type of registry software a hospital utilizes can report injury severity scores differently. This, too, renders data subject to information bias. Information bias is due to inaccurate or incorrect recording of individual data points. When continuous variables are involved, it is called measurement error; when categorical variables are involved, it is called misclassification. In this study, the potential for information bias is mostly due to 1) incomplete data documented in the medical record, or 2) inaccurate entry into the hospital trauma database by hospital staff.

Missing data will be analyzed in terms of potential effect for both the independent and dependent variables. While the database captures marijuana exposure through the first recorded positive drug screen within the first 24 hours after the first hospital encounter, it is recognized that at times patients will not be screened, even if they have been exposed to marijuana. Marijuana exposure is identified through the presence of cannabinoids in a urine toxicology screen. Marijuana presence can be detected in the urine up to 3-5 days from exposure in infrequent users; marijuana can be detected up to 30 days in chronic users. Therefore, patients could potentially have a positive marijuana toxicology screen even though they may not have ingested marijuana on the day of the event. A positive marijuana urine toxicology screen indicates the probability of prior use, not immediate use. This is an important limitation to note. In clinical practice, the decision to order a toxicology screen is often based on symptomatology, so it is reasonable to assume that patients who ingested marijuana a week prior to the event date may not exhibit the expected symptomatology. Unlike other observational cohort studies, the potential for recall bias is minimal due to the availability of an objective marker to measure the independent variable, namely, the presence of marijuana. The presence of marijuana is captured from the hospital lab urinalysis results and is recorded as present within 24 hours after the first hospital encounter. Similarly, the data entered to measure the GCS score are also captured objectively through a numeric score recorded in the medical record. See the analysis section for how this type of bias will be addressed.

The final sample for this study comprised 7,875 unique cases, representing individuals in the NTDB who sustained a moderate or severe TBI (see the arithmetic check after this paragraph). Of the 997,970 total cases for 2017, 32,896 were identified as having sustained some form of traumatic brain injury, ranging from a concussion to severe injury, using the ICD-10 diagnosis codes listed below. Of the 32,896 cases, 25,021 were identified as having a concussion diagnosis and were ultimately excluded from the final sample. This was because mild concussion diagnoses were found to suffer from large underestimates in documented incidence. A World Health Organization systematic review of mild TBI found that up to 90% of overall TBIs were mild in nature. The WHO has also estimated a yearly incidence of mild TBI anywhere from 100-600 per 100,000 cases (0.1 to 0.6%, respectively). Furthermore, up to 40% of individuals who sustain a mild TBI, or concussion, do not seek the attention of a physician. Another study found that 57% of veterans who had returned from Iraq and/or Afghanistan, and had sustained a possible TBI, were not evaluated or seen by a physician. According to the WHO and CDC reports, these numbers may still not represent the actual incidence of TBI worldwide. Furthermore, the data suggest that individuals with a mild TBI for the most part do not seek medical attention, and this study focuses on individuals who sustain a moderate or severe TBI, as those individuals suffer devastating, life-long debilitating effects and are the targets of public health initiatives and injury prevention measures.
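The cohort derivation above reduces to simple arithmetic; as a transparency check, with the figures copied from the text:

```python
total_2017_cases = 997_970     # all NTDB cases for 2017
any_tbi_cases = 32_896         # cases with any TBI diagnosis (ICD-10 codes)
concussion_cases = 25_021      # mild TBI/concussion diagnoses, excluded
final_sample = any_tbi_cases - concussion_cases
print(final_sample)            # 7875, matching the reported final sample
```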

There was no association between dose or length of CBD usage and ALT levels

The 839 individuals who completed the study came from 43 states, including Alaska, and the District of Columbia. Of these, 548 were female and 291 were male. Age ranged from 18 to 75, with a mean ± SD of 45.5 ± 13.1 years. There was no statistical difference in age between females and males. The percent of individuals versus the length of their CBD use is shown in Figure 1. Table 1 shows the number and percent of individuals taking the various compositions of CBD and compares the average daily doses. Full-spectrum hemp oil was taken by 55.7%, CBD-isolate by 40.5%, and broad-spectrum CBD by 3.8%. Overall, the mean ± SD daily dose of CBD was 50.3 ± 40.7 mg/day. Full-spectrum users' daily dose was 40.0 ± 36.8 mg; doses for CBD-isolate and broad-spectrum users are given in Table 1. The upper limit of the range for each group is several times that of the mean. The forms by which the different compositions of CBD were taken are listed in Table 2. Almost half of the participants used a tincture, whereas 22.1% used a capsule or pill, 13.9% used an edible formulation, 12.6% used a nanotechnology-treated product, and 1.7% used an additive that could be added to a slushy or food. The average daily dose of the nanotechnology-treated CBD was significantly lower than that of any of the other forms of CBD used. This nanotechnology-treated CBD was a full-spectrum product. When it was removed from the analysis, the average daily dose of full-spectrum CBD increased to 63.2 ± 41.8 mg. There was no statistical difference in average dosage between the different compositions or forms of CBD used when the nanotechnology-treated CBD was excluded from the analysis. Table 3 shows the prevalence of elevated LT in this study. The number and percentage of individuals with elevated LT were: ALT 9.1%, AST 4.0%, ALP 1.8%, and TB 1.4%.

The prevalences of elevated ALT and AST were significantly higher than the 2.5% prevalence in a normal population with no medical conditions. However, they were not significantly different from their reported prevalences in the general adult U.S. population. The prevalence of elevated TB was significantly less than the normal population prevalence of 2.5%, and the prevalence of elevated ALP was not different from the normal population prevalence of 2.5%. The prevalence of those having either an elevated ALT or AST was 10.2%, which was not statistically different from the reported prevalence in the general adult U.S. population. Although BMI, age, and gender were highly correlated with ALT percentile level, multiple regression adjustment found that only BMI and age had an effect on predicting elevated ALT percentile levels. As is illustrated in Figure 2, there was no statistical correlation between the percentile level of ALT and the daily dose of CBD. In addition, there was no significant correlation with any of the other values, including length of use and percentile ALT level. The mean daily dose of individuals with elevated AST was 52.3 ± 41.0 mg/day, versus 50.1 ± 36.3 mg/day for those with normal AST. There was no significant difference in the prevalence of elevated LT between companies, nor was there any difference between CBD compositions. There were, however, significant correlations between ALT levels and AST and ALP levels, but not with TB. All the LT values were reasonably approximated by a normal distribution, but the ALT distribution had an extended right tail, which corresponds to the increased incidence of elevated ALT levels as compared with a normal, healthy population. It should be noted that one individual had LTs drawn at week 2 of the study for reasons that are unknown and reported that they were abnormally high. However, he continued in the study, and at week 4, when his LTs were drawn for this study, they had returned to normal. The number of individuals with medical conditions was 585 of the total subjects, and the average number of medical conditions per person was 2.7 ± 2.34. The number of subjects with medical conditions who had normal ALT values was 539, with a mean of 2.7 ± 2.3 conditions, while 46 had elevated ALT values, with a mean of 2.8 ± 2.72. Similarly, the number of subjects taking prescription drugs was 525 overall: 476 with normal ALT levels and 49 with elevated ALT levels. Overall, an average of 2.4 ± 2.3 drugs per person was taken; those with normal ALT levels took 2.4 ± 2.3 drugs and those with elevated ALT levels took 2.3 ± 1.8 drugs.
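The prevalence comparisons above can be illustrated with a one-sample binomial test against the 2.5% reference value; the count below is an approximate back-calculation from the reported percentage, so this is illustrative rather than a reproduction of the study's exact method.

```python
from scipy import stats

n = 839
elevated_alt = 76   # ~9.1% of 839, back-calculated from the reported rate
result = stats.binomtest(elevated_alt, n, p=0.025, alternative="greater")
print(f"observed {elevated_alt / n:.1%}, p = {result.pvalue:.2e}")
```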

There were no significant differences between any of these values. Of the 76 individuals with elevated LTs, 33 agreed to have follow-up LTs performed by our laboratories or by their local physician. The remaining 43 individuals either refused to have a follow-up LT or never showed up at the laboratory for the test. Of the individuals with follow-up LT data, one individual had an ALT > 3 × ULN, an elevated AST, and normal ALP and bilirubin levels on the first set of LTs, stopped CBD for 4 months, and continues to have essentially the same LT levels. Two individuals had an initial ALT > 2 × ULN and < 3 × ULN, and both had normal LTs on follow-up. One continued on CBD and the other, who was also consuming large amounts of alcohol throughout the study, stopped taking CBD and reduced her alcohol intake. All the remaining individuals had initial ALT levels > 1 × ULN and < 2 × ULN, and none of these individuals had stopped CBD because of their ALT levels. Of these, seven continued to have an elevated ALT > 1 × ULN and < 2 × ULN. One individual's ALT increased to > 2 × ULN and < 3 × ULN. This individual had elevated LTs before starting the study due to acetaminophen toxicity and restarted the acetaminophen between the first and the second LT. Three other individuals also admitted to having had elevated LTs in the past, but claimed they had not indicated this when being screened for the study because they knew it would exclude them from participating. Three individuals' repeat ALT increased to > 3 × ULN; two of them had started consuming marijuana products between the two tests, and the other had a third set of LTs that was normal even though she was continuing on her CBD. This last individual was taking three drugs that are known to be associated with elevated LTs. There was no relationship between continuing to take CBD or the daily dose of CBD and ALT levels or change in ALT elevation severity. There were no differences in any of the initial data on the severity of LT elevation between those who had follow-up LTs performed and those who did not. Therefore, 30 of the 33 individuals ultimately had their ALT levels return to normal or remain minimally elevated. The three individuals with significantly elevated ALT had reasons that explained their continued elevation unrelated to CBD consumption, as described above. Of the 1475 individuals enrolled in the study, 33 reported an adverse reaction, of which 31 were classified as unrelated to CBD ingestion, with two being classified as possibly related. These two consisted of one case of atrial fibrillation and one case of constipation and psychoactive effects. Although it would have been ideal to study this issue with a controlled, double-blinded clinical trial, at the time this study was designed and begun, CBD was considered a class 5 drug in the United States; therefore, even though CBD use was widespread, such a trial was difficult, if not impossible, to perform using commercial CBD. This observational study was performed in its stead.

The ALT data in this population study are similar to other population studies, which found that the level of ALT is affected by BMI, age, and gender. These studies also found significant correlations among ALT, AST, and ALP levels, which were also found in this study. These similarities suggest that the sample in this study is representative of the general adult population. In this self-selected sample of individuals who were self-dosing CBD, there was an increased prevalence of LT elevation as compared with a normal healthy population with no medical conditions. However, individuals in this study were taking CBD primarily for medical reasons, making laboratory comparisons of this population to a normal healthy population unrealistic, as a large proportion of the individuals with medical conditions in the United States will have abnormal values. When compared with general adult population norms in the United States, the prevalence of elevated ALT and/or AST was no different. Although comparing the prevalence of elevated ALT to the general population seems illogical when the exclusion criteria for this study included “having a history of elevated LT,” such a comparison was made because several individuals knowingly falsified their history of elevated LT so that they could be included in the study. The transient nature of elevated LT in most individuals in this study is similar to that seen and reported in the general population, and, in the majority, LTs reverted to normal even though CBD ingestion was continued. In the few individuals with persistently severely elevated or worsening ALT elevations, the cause can easily be attributed to factors other than CBD. In this study, the vast majority of participants with elevated ALT and/or AST had levels < 2 × ULN, and among the few individuals with levels > 2 × ULN, none had any elevation of ALP or TB. Even though a number of individuals consumed large amounts of CBD, there was no increase in the prevalence of elevated AST levels. In fact, the daily mean dose and standard deviation were essentially the same in those with normal and those with elevated AST levels. CBD was not found to be a factor in determining ALT levels; not a single individual in this study had liver disease, and the prevalences of elevated ALP and TB in this population of CBD users were lower than those found in the normal healthy population. This suggests that self-medication with CBD in some individuals may help prevent liver disease, as has been suggested in animal and in vitro studies. This paradoxical effect on drug-induced liver damage at different dosing levels of CBD is not new. Many of the individuals in this study had been taking multiple drugs, including many that are known to cause LT elevations, and this has been a common theme in CBD-associated LT elevations in most studies in which this association has been examined. However, in those studies, the daily dose was a significant factor in the association. In this study, the average daily dose was 0.65 ± 0.57 mg/kg/day, which is an order of magnitude lower and comparable to other studies that examined the daily dose typically consumed by a self-dosing CBD user. Although it may be possible that CBD at lower doses can cause transient elevations in LT, the findings of this study support that they are more likely due to demographic, physical, and medical conditions already suffered by the individuals, for which they are self-medicating with CBD for relief of associated symptoms.
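As a quick consistency check on the dosing figures above (a hypothetical back-calculation; the study's mean body weight is not reported here):

```python
mean_dose_mg_per_day = 50.3      # reported overall mean daily dose
mean_dose_mg_per_kg = 0.65       # reported weight-normalized mean dose
implied_mean_weight_kg = mean_dose_mg_per_day / mean_dose_mg_per_kg
print(round(implied_mean_weight_kg, 1))  # ~77.4 kg, a plausible adult mean
```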
In addition, only 0.14% of individuals had an adverse reaction that was considered to possibly have any causal relationship to taking CBD.

Traumatic brain injury is a significant public health concern, as it is a leading cause of mortality, morbidity, and disability in the United States. According to the World Health Organization, TBI is expected to become the third leading cause of death and disability in the world by 2020. In the United States, TBI contributes to a third of all injury-related deaths. A traumatic brain injury, as defined by the Centers for Disease Control and Prevention, is a disturbance of the brain's normal function that occurs when an individual sustains a blow, jolt, or bump to the head, or sustains a penetrating head injury.

These combined studies generally support our hypothesized model

Alcohol-addicted non-smokers showed the highest and most widespread differences from controls at the 10-day assessment versus the 3-day and 4-week assessments, whereas the alcohol-addicted smokers had a more consistent pattern of differences from controls across all assessment time points. In the alcohol-addicted smokers, higher GABAA receptor availability was correlated with more craving for alcohol and cigarettes. Overall, GABA was less studied than glutamate. MRS studies suggested lower GABA concentrations in abusers of alcohol, nicotine, and cocaine, which was also the typical direction of MRS effects for glutamate. Perhaps because more radiotracers are available for GABA, and/or because they have been available for a longer period of time, there were more PET/SPECT studies related to GABA than to glutamate, particularly for alcohol. These studies generally showed decreased GABAA receptor availability/distribution volume in the addicted individuals compared with controls. Nicotine, however, showed an opposite pattern of effects. History of smoking was not only associated with higher GABAA receptor availability on its own, but smoking also modulated the early abstinence course of individuals with alcohol dependence. Interestingly, the effects of smoking on alcohol dependence showed a pattern opposite to that seen for glutamate. Examining the joint effects of smoking and alcohol abuse, while incorporating markers of both glutamate and GABA neurotransmission, will be an interesting and important direction for future research. It is also important to note that, similarly to glutamate, GABA effects appeared to be sensitive to study participant characteristics, such as the length of abstinence and/or drug-related medical diseases.

We did not locate any PET/SPECT studies labeling the GABAB receptor, which, unlike the fast ligand-gated action of the GABAA receptor, is instead associated with long-term modulation through G protein-regulated gene transcription and protein synthesis. More research, both MRS and PET, is also needed in opiates, methamphetamine, and cannabis.

A large literature has examined RSFC deficits in drug addiction, and we did not reprise all of this important work here. Rather, our current goal was to provide evidence that some of the same regions implicated in glutamate and GABA MRS and PET studies in addiction are also functionally disrupted as revealed by RSFC. We focused on studies that examined RSFC differences between addicted individuals and healthy controls [using approaches that were seed-based and/or whole-brain, or the graph theory-based metric degree] in the ACC extending into the dorsomedial and/or ventromedial PFC, insula, and striatum. The rationales for focusing on these regions are as follows. The ACC and adjacent medial PFC form part of the default mode network, which is activated during the resting state. Moreover, the resting state is a condition replete with mind-wandering and self-generated thinking, and these self-referential functions have been linked with activation of cortical midline regions, including the pACC and medial PFC, in healthy individuals and addicted individuals. Thus, although larger regions, such as the ACC and medial PFC, have sometimes been selected as regions of interest in MRS studies for practical reasons, effects in these regions are nonetheless highly anticipated for both MRS studies and RSFC studies; recent combined fMRI-MRS studies in healthy participants further speak to this point. The insula has a critical role in mediating interoception and the detection of behaviorally relevant stimuli. In drug addiction, these functions subserved by the insula appear vital for the experience of drug craving.

The striatum forms a key part of the mesocorticolimbic dopamine projections that mediate the reinforcing effects of addictive drugs; chronic perturbation of this system ultimately leads to enduring changes in striatal-PFC glutamatergic projections. Although MRS measurement of glutamate and GABA is more difficult in the striatum than in the insula, some studies included in this review have indeed reported striatal effects. Importantly, prior resting-state studies of healthy individuals have revealed functional connections between these three regions. Taken together, the literature indicates that drug-addicted individuals exhibit abnormal neurotransmission involving glutamate and GABA in corticolimbic brain regions of core relevance to their disease, and that these same regions also show disruptions in RSFC. Because glutamatergic and GABAergic neurotransmission in such regions also drives the resting state in health, we raise the hypothesis that corticolimbic RSFC can provide an intermediate phenotype to explain associations between addiction-relevant glutamatergic and/or GABAergic dysregulation and addiction symptomatology. Future work can center on the following areas. It is crucial to incorporate the modulating influences of clinical characteristics, especially withdrawal/abstinence and smoking. Withdrawal carries a high vulnerability to relapse, which may partially stem from associated perturbations in brain glutamate or GABA. Smoking history, as shown above, exerts important independent effects on brain glutamate and GABA metabolites. Current smoking also modulates the effects of other substances, such as alcohol, and the resulting effects on brain glutamate and GABA may differ depending on which neurotransmitter is examined. Future studies might also investigate whether neurochemical deficits in one corticolimbic brain region have reverberations across the brain. This may be especially true for deficits in glutamate, which has more global effects. RSFC methods, especially those using whole-brain graph theory approaches, are ideally suited to test such hypotheses.
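As an illustration of the graph-theoretic degree metric mentioned above, the sketch below computes a functional connectivity matrix from simulated region time series and counts suprathreshold connections per region; the region names, the threshold, and the data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
regions = ["ACC", "mPFC", "insula", "striatum"]
ts = rng.normal(size=(len(regions), 150))   # simulated resting-state time series

fc = np.corrcoef(ts)                        # region-by-region correlation (RSFC)

# Degree: number of suprathreshold connections per region; the |r| > 0.2
# threshold is arbitrary and chosen here for illustration only
adj = (np.abs(fc) > 0.2) & ~np.eye(len(regions), dtype=bool)
for name, deg in zip(regions, adj.sum(axis=1)):
    print(f"{name}: degree = {deg}")
```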

Finally, future studies can incorporate direct measures of brain metabolism, such as PET with [18F]fluorodeoxyglucose. Indeed, energy metabolism may represent an intermediary process between fast neurotransmission and the slow RSFC blood-oxygen-level-dependent response, and this kind of precision would increase mechanistic understanding. Another important future direction for enhancing mechanistic understanding is to conduct studies with tighter experimental control, as can be achieved in animal models. Animal models offer the advantages of more controlled drug histories and more invasive assessments, which could clarify how addiction may causally change glutamate/GABA neurotransmission and metabolite levels in select brain regions, as well as their consequent associations with RSFC. In such animal studies, lower Glx levels in the dorsal striatum of rhesus monkeys due to chronic methamphetamine exposure showed a linear pattern of recovery with abstinence over one year [but see , where cocaine administration over the course of 9 months increased levels of glutamate and glutamine in squirrel monkeys]. In another study, rats received subcutaneous twice-daily injections of 2.5 mg/kg methamphetamine for one week. This drug exposure resulted in decreased MRS-measured glutamate, glutamine, and GABA in the hippocampus, nucleus accumbens, and PFC. Interestingly, a different study revealed decreased RSFC in cocaine-exposed rats between the nucleus accumbens and the dorsomedial PFC as a function of the degree of cocaine self-administration escalation. Alternatively, drug-administration schedules not intended to produce addiction have largely produced opposite results. For example, following short-term administrations of cocaine or alcohol, rats showed transient striatal or whole-brain increases in glutamate and/or GABA [but see ]. Such results are consistent with the idea that addiction-related decreases in glutamate or GABA could reflect neuroadaptations to chronic drug exposure. Such conclusions are difficult, if not impossible, to reach in studies of already-addicted humans. Because human studies cannot achieve the level of precision attained in animal studies, mechanistic clarity needs to rely on more comprehensive and innovative experimental methods. A drug challenge model, if employed in combination with fMRI and with MRS or PET, can address causality by modulating underlying glutamate/GABA neurotransmission, which can then be correlated with resting-state fMRI and, in turn, with other clinical variables. We are aware of no previous studies in this field that have attempted this kind of ambitious design, though some have incorporated various components. For example, one study showed that acute alcohol administration reduced occipital GABA levels. However, because this experiment was conducted in social drinkers, the potential relevance to addiction is unclear. Another study found that a heroin challenge in opiate-addicted individuals strengthened connectivity within an ICA-defined basal ganglia network. Similarly, opiate-addicted individuals receiving high methadone doses showed higher ACC glutamate levels. However, these studies did not incorporate both neurochemical measurements and RSFC. Finally, perhaps the most methodologically rich study to date evaluated the effects of 12-week varenicline administration on dACC Glx levels and fMRI BOLD response.

The varenicline regimen decreased dACC Glx levels, modulated DMN regions during task performance, and changed dACC-DMN connectivity as revealed by psychophysiological interaction analysis. Future iterations of this study type would need to include a control group and could benefit from using a pharmacological probe that modulates the neurotransmitter system of interest more directly. A future study that integrates these various components within a single design promises to be highly informative. It would also be interesting to test whether addiction-related effects on brain glutamate and GABA are specific to substance addictions rather than behavioral addictions. One could compare and contrast effects in individuals with substance use disorders with those in individuals who have behavioral addictions, such as gambling. We are aware of no MRS or PET studies that contrasted substance addiction and gambling addiction, but several studies on this front have been conducted using RSFC. For example, whereas increased intrinsic local connectivity of the PCC was observed for both behavioral and substance addictions, decreased connectivity of the ACC was specific to alcohol addiction. Moreover, cocaine addiction was uniquely associated with enhanced connectivity between the subgenual ACC and the OFC or striatum. In contrast, connectivity in cocaine addiction overlapped with that in gambling addiction in the OFC and dorsomedial PFC, and in the amygdala and insula [note that this latter connection was also reported in opiate dependence]. A relatively small but growing literature suggests that glutamatergic and/or GABAergic medications modulate neural activity in brain regions spotlighted in this review. In smokers, the GABAB receptor agonist baclofen, given both acutely and after 3 weeks of treatment, decreased cerebral blood flow during perfusion fMRI in several regions including the dACC. In an animal model, baclofen reversed neuropsychological deficits caused by acute cocaine injections, in association with normalized metabolic activation in the PFC. Acamprosate, despite continuing debates regarding its clinical mechanism of action, appears to exert effects on brain glutamate. Consistent with this idea, 4-week treatment with acamprosate reduced MRS-measured pACC glutamate levels in recently abstinent alcohol-addicted individuals; such reductions appeared to be clinically warranted, as glutamate levels in cerebrospinal fluid were positively correlated with alcohol dependence severity. Moreover, in an animal model, acamprosate reduced Glx levels in the ventral striatum during alcohol withdrawal. In healthy controls, the GABA reuptake inhibitor tiagabine, which notably has been shown to decrease cocaine-positive urines in pilot clinical trials, increased the VT and/or BPND of [11C]flumazenil and [11C]Ro15-4513 in multiple PFC regions, including the ACC. We hypothesize that these medications – as well as potentially novel medications yet to be developed that act on these respective systems – could also modulate corticolimbic RSFC, providing a potential therapeutic target for intervention in drug addiction. In this regard, modulation of brain glutamate and GABA signaling may be particularly important during acute withdrawal, a time period when neurotransmission seems especially perturbed.
Some researchers and advocates have raised concerns that alternative nicotine delivery systems (ANDS) act as a gateway into cigarette smoking and promote nicotine dependence among youth. However, other researchers argue that ANDS are important for harm minimization because they may replace higher-risk combustible tobacco products, ultimately supporting goals related to the cigarette smoking endgame. Despite these debates, we still know little about how youth make sense of their transitions between ANDS and cigarettes and justify their unique initiation pathways of use. Existing research on pathways of nicotine and tobacco use has primarily focused on examining whether youth initiation of vaping encourages progression to smoking initiation. A few studies suggest that, compared to never vapers, youth who use ANDS are likely to progress to smoking, and that adolescent smokers who then initiate vaping are likely to adopt dual-use practices of smoking and vaping. For example, cross-sectional studies have found that among never-smoking adolescents, ever use of e-cigarettes was associated with increased susceptibility to initiate smoking, and that e-cigarette use was not associated with intentions to quit smoking. Recent longitudinal studies suggest that youth e-cigarette use was associated with future cigarette initiation and current cigarette use, suggesting that e-cigarette use is a risk factor for cigarette smoking.

Considered from another perspective, the interaction effects suggest that certain temperamental traits are risk factors for substance use when parental monitoring is low, but not when it is high. Either interpretation is consistent with the findings and points to a similar conclusion about how temperament and parenting work together to increase risk for early substance use. Being raised in a home perceived as minimally monitored by parents may be a more salient risk factor for substance use for adolescents with dispositional proclivities toward substance use, and possessing a disposition toward substance use may be a stronger risk factor when youth do not believe they are closely monitored by their parents. The broader developmental consideration is that temperamental factors and family variables should be considered jointly in models that attempt to understand early risk for substance use. Although the current study was notable for its multi-informant longitudinal design, and for the size and ethnic composition of the sample, there are limitations that merit consideration. For instance, our ability to detect effects for surgency was hampered by the low reliability of the scale in the 5th grade; thus, results involving surgency should be interpreted with caution. Also, we relied exclusively on youth reports of their substance use, intentions, and expectancies. However, intentions and expectancies are inherently subjective variables and are thus best assessed via self-report. Likewise, focal youth might be in the best position to report on their actual use, given understandable motivations to hide substance use from parents, teachers, and other potential informants.
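To make the temperament-by-monitoring interaction described above concrete, here is a minimal sketch of a moderated regression with simple-slopes probing. The data are synthetic and all variable names and coefficients are invented for illustration, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
temperament = rng.standard_normal(n)   # dispositional risk trait (standardized)
monitoring = rng.standard_normal(n)    # perceived parental monitoring (standardized)

# Simulate the reported pattern: temperament predicts use mainly when
# monitoring is low, i.e., a negative interaction term (coefficients invented).
substance_use = (0.4 * temperament - 0.3 * monitoring
                 - 0.35 * temperament * monitoring
                 + rng.standard_normal(n))

X = sm.add_constant(np.column_stack(
    [temperament, monitoring, temperament * monitoring]))
fit = sm.OLS(substance_use, X).fit()
print(fit.params)  # [intercept, temperament, monitoring, interaction]

# Simple slopes: effect of temperament at low (-1 SD) vs high (+1 SD) monitoring.
b_temp, b_int = fit.params[1], fit.params[3]
for level in (-1.0, 1.0):
    print(f"monitoring {level:+.0f} SD: temperament slope = {b_temp + b_int * level:.2f}")
```

The simple-slopes output shows a steeper temperament effect at low monitoring, which is the statistical form of the protective-monitoring interpretation given above.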

In closing, we found evidence from a longitudinal study of Mexican-origin youth that temperament and parental monitoring assessed in 5th grade are prospectively related to substance use outcomes in 9th grade. These findings are important because they suggest that theoretical models concerning the influence of temperament on substance use can be applied to adolescents of Mexican origin. Indeed, we suspect that factors like temperament and parental monitoring have transcontextual validity to the extent that they are risk factors for early substance use for a diverse range of youth. Of particular importance, we also found that relatively high levels of perceived monitoring might attenuate some of the risks associated with dispositional tendencies toward substance use. Although the current results should be replicated, we suggest that future intervention and prevention efforts could be enhanced by attending to individual differences in temperament. Such attention might be especially important when considering efforts to increase parental monitoring.

Neuropathic pain, caused by a lesion or disease affecting the somatosensory nervous system, has a considerable impact on patients’ quality of life and is associated with a high economic burden on the individual and society. Epidemiological surveys have shown that many patients with neuropathic pain do not receive appropriate treatment for their pain. This may be due to a lack of diagnostic accuracy and relatively ineffective drugs, but also to insufficient knowledge about effective drugs and their appropriate use in clinical practice. Evidence-based recommendations for the pharmacotherapy of neuropathic pain are therefore essential. Over the past 10 years, a few recommendations have been proposed for the pharmacotherapy of neuropathic pain or of specific neuropathic pain conditions, such as painful diabetic neuropathies and postherpetic neuralgia. In the interim, new pharmacological therapies and high-quality clinical trials have appeared.

Previously hidden and unpublished large trials can now be identified on the web, which, together with analysis of publication bias, may limit the risk of bias in reporting data. Furthermore, prior recommendations sometimes came to discrepant conclusions because of inconsistencies in the methods used to assess the quality of evidence. In order to address these inconsistencies, the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system was introduced in 2000 and has received widespread international acceptance. All these reasons justify an update of evidence-based recommendations for the pharmacotherapy of neuropathic pain. The present work aimed to update the recommendations of the Special Interest Group on Neuropathic Pain (NeuPSIG) of the International Association for the Study of Pain (IASP) on the systemic and topical pharmacological treatments of neuropathic pain. Non-pharmacological management, such as neurostimulation techniques, was beyond the scope of this work. We conducted a systematic review and meta-analysis of randomised controlled trials of all drug treatments for neuropathic pain published since 1966 and of unpublished trials with available results, and assessed publication bias. We used GRADE to rate the quality of evidence and the strength of recommendations. The systematic review of the literature complied with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. We used a standardised review and data extraction protocol. The full reports of randomised, controlled, double-blind studies published in peer-reviewed journals between 1966 and April 2013 were identified using searches of PubMed/Medline, the Cochrane Central Register of Controlled Trials, and Embase. Additional papers were identified from published reviews and the reference lists of selected papers. The target population was patients of any age with neuropathic pain according to the IASP definition; this included postherpetic neuralgia, diabetic and non-diabetic painful polyneuropathy, postamputation pain, post-traumatic/postsurgical neuropathic pain including plexus avulsion and complex regional pain syndrome type II, central post-stroke pain, spinal cord injury pain, and multiple sclerosis-associated pain. Neuropathic pains pertaining to multiple aetiologies were also considered. Neuropathic pain associated with nociceptive components was included provided that the primary outcome was neuropathic pain. Conditions such as complex regional pain syndrome type I, low back pain without radicular pain, fibromyalgia, and atypical facial pain were not included because they do not fulfil the current definition of neuropathic pain.

Trigeminal neuralgia was considered separately because of its generally distinct response to drug treatment. The interventions were systemic or topical treatments with at least 3 weeks duration of treatment. Single-administration treatments with long-term efficacy were included if there was a minimum follow-up of 3 weeks. Studies using intramuscular, intravenous, or neuraxial routes of administration and preemptive analgesia studies were excluded. Randomised, double-blind, placebo-controlled studies with parallel-group or crossover designs that had at least 10 patients per arm were included. Enriched-enrolment, randomised withdrawal trials were summarised separately. Studies published only as abstracts were excluded. Double-blind active-comparator trials of drugs generally proposed as first- or second-line treatments were included. The study outcome was based on the effect on the primary outcome measure, eg, neuropathic pain intensity. Studies in which the primary outcome included a composite score of pain and paraesthesia, or paraesthesia only, were not included. Studies were assessed for methodological quality using the five-point Oxford Quality Scale by two independent authors; a minimum score of 2 out of 5 was required for inclusion. We also assessed serious risk of bias relating to lack of allocation concealment, incomplete accounting of outcome events, selective outcome reporting, stopping early for benefit, use of invalidated outcome measures, and carryover effects in crossover trials. The results of the database and registry search are shown in figure 1. In total, 191 published articles and 21 unpublished studies were included in the quantitative synthesis. Study characteristics are summarised in appendices 4 and 5. In addition, five published and 12 unpublished studies were retrieved between April 2013 and January 2014. Thus, a total of 229 articles/studies were included. References are presented in appendix 7. Eligible studies investigated tricyclic antidepressants (TCAs), serotonin-noradrenaline reuptake inhibitor (SNRI) antidepressants, other antidepressants, pregabalin, gabapentin/gabapentin extended release and enacarbil, other antiepileptics, tramadol, opioids, cannabinoids, lidocaine 5% patch, capsaicin 8% patch and cream, subcutaneous botulinum toxin type A (BTX-A), NMDA antagonists, mexiletine, miscellaneous topical agents, newer systemic drugs, and combination therapies. Fifty-five percent of the trials were conducted in diabetic painful polyneuropathy or postherpetic neuralgia. Numbers needed to treat (NNT) and numbers needed to harm (NNH) could be calculated for 77% of published placebo-controlled trials (see the sketch after this paragraph). There was generally no evidence for efficacy of particular drugs in specific conditions; therefore these recommendations apply to neuropathic pain in general. However, they may not be applicable to trigeminal neuralgia, for which we could extract only one study complying with our inclusion criteria. We therefore recommend referring to previous specific guidelines regarding this condition. Few studies included cancer-related neuropathic pain; the recommendations for the use of opioids may be different in certain cancer populations. Similarly, these recommendations do not apply to acute pain or acute pain exacerbation. Treatment of neuropathic pain in children is a neglected area. However, none of the studies assessed pediatric neuropathic pain, and the present guidelines therefore only apply to adults. Details regarding GRADE recommendations and practical use are provided in tables 2, 3 and appendix 10.
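As a brief illustration of how NNT and NNH are derived from placebo-controlled trial data, consider the following sketch; the responder counts are invented placeholders, not values from the included trials.

```python
def nnt(responders_active: int, n_active: int,
        responders_placebo: int, n_placebo: int) -> float:
    """NNT = 1 / absolute risk difference for the response criterion
    (here, e.g., >=50% pain relief). NNH uses the same formula with
    harm events (e.g., withdrawals due to adverse effects)."""
    risk_difference = (responders_active / n_active
                       - responders_placebo / n_placebo)
    if risk_difference <= 0:
        raise ValueError("No benefit over placebo; NNT is undefined")
    return 1.0 / risk_difference

# Hypothetical trial: 45/100 responders on drug vs 25/100 on placebo.
# Risk difference = 0.20, so NNT = 5 patients treated per extra responder.
print(nnt(45, 100, 25, 100))
```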
Few relevant trials have appeared since our meta-analysis, and none affected the recommendations. TCAs, the SNRI antidepressants duloxetine and venlafaxine, pregabalin, gabapentin, and gabapentin ER/enacarbil have strong GRADE recommendations for use in neuropathic pain and are proposed as first-line, with caution regarding most TCAs.

Tramadol, lidocaine patches, and high-concentration capsaicin patches have weak GRADE recommendations for use and are proposed as generally second-line. Topical treatments are recommended for peripheral neuropathic pain with a presumed local pain generator. In select circumstances, eg, when there are concerns about the side effects or safety of first-line treatments, particularly in frail and elderly patients, lidocaine patches may be considered first-line. Strong opioids and BTX-A have weak GRADE recommendations for use and are recommended as third-line. Prescription of opioids should be strictly monitored, particularly for patients requiring high dosages. Tapentadol, other antiepileptics, capsaicin cream, topical clonidine, SSRI antidepressants, NMDA antagonists, and combination therapy have inconclusive GRADE recommendations. Combination of pregabalin/gabapentin and duloxetine/TCAs may be considered as an alternative to increasing dosages in monotherapy for patients unresponsive to monotherapy at moderate dosages. Cannabinoids and valproate have weak recommendations against their use in neuropathic pain, and levetiracetam and mexiletine have strong recommendations against their use. The present manuscript presents the revised NeuPSIG recommendations for the pharmacotherapy of neuropathic pain based on an updated systematic review and meta-analysis of systemic and topical drug treatments. We used the GRADE system to assess the quality of evidence for all treatments, and the recommendations comply with the AGREE II guidelines. The present recommendations are driven by drug treatments rather than by the aetiology of pain, akin to prior NeuPSIG recommendations. Neuropathic pain is increasingly recognised as a specific multi-aetiology entity across neuropathic syndromes. In accordance with previous reports, results of our meta-analysis show that the efficacy of systemic drug treatments is generally not dependent on the aetiology of the underlying disorder. Side effects may, however, to some degree depend on the aetiology; eg, drugs with CNS-related side effects may be less well tolerated in patients with CNS lesions. Pain due to HIV-related painful polyneuropathy and radiculopathy seems more refractory than other pain conditions in our meta-analysis. This may be due to large placebo responses in HIV-related neuropathy trials, a distinct clinical phenotype in subgroups of patients with radiculopathy, or psychological/psychosocial comorbidities, often neglected in large trials. Topical agents have no known relevance for use in central neuropathic pain, and this is clearly stated in our recommendations. The strengths of this systematic review and meta-analysis are the analysis of publication bias and of unpublished trials. Publication bias may be present if studies with positive results are published while those with no data or negative results are not. It may lead to major overestimation of efficacy in therapeutic studies. Our results showed that the effect sizes estimated from studies published in peer-reviewed journals were higher than those estimated from studies available in open databases. This finding emphasises the need to search these databases in systematic reviews.
Analysis of further publication bias suggested a limited overstatement of the overall efficacy of drug treatments, although available methods to assess publication bias have limitations. Here, we found that high-concentration capsaicin patches were the most susceptible to publication bias; ie, a new null study with fewer than 400 participants could increase the NNT to an unacceptable level (a sensitivity analysis of this kind is sketched below). This supports the robustness of a meta-analysis taking unpublished trials into account, and suggests that effect sizes were overestimated in previous meta-analyses of pharmacotherapy for neuropathic pain. Quantitative results for individual drugs, with NNTs for 50% pain relief ranging from around 4 to 10 across most positive trials, emphasise the overall modest study outcomes in neuropathic pain. Inadequate response of neuropathic pain to drug therapy constitutes a highly unmet need and may have substantial consequences in terms of psychological or social adjustment. However, these results may also reflect insufficient assay sensitivity of clinical trials of neuropathic pain.
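The following sketch illustrates the logic of that sensitivity analysis: how a single unpublished null trial would shift a pooled NNT. The trial counts are invented placeholders, and simple event pooling is used as a crude stand-in for the formal fixed-effect methods of the meta-analysis.

```python
def pooled_nnt(trials) -> float:
    """trials: list of (responders_active, n_active, responders_placebo,
    n_placebo) tuples; pools responder rates by simple event pooling
    and returns NNT = 1 / pooled risk difference."""
    rate_active = sum(t[0] for t in trials) / sum(t[1] for t in trials)
    rate_placebo = sum(t[2] for t in trials) / sum(t[3] for t in trials)
    return 1.0 / (rate_active - rate_placebo)

# Two hypothetical published positive trials.
published = [(60, 150, 30, 150), (50, 120, 28, 120)]
print(f"published only: NNT = {pooled_nnt(published):.1f}")   # ~5.2

# Add one hypothetical null trial (equal responder rates, 200 per arm)
# and observe the pooled NNT worsen substantially.
null_trial = (50, 200, 50, 200)
print(f"with null trial: NNT = {pooled_nnt(published + [null_trial]):.1f}")  # ~9.0
```

If the post-addition NNT crosses a prespecified clinical threshold, the pooled estimate is judged fragile to publication bias, which is the sense in which the capsaicin patch result was the most susceptible.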