The TreeNet analysis revealed a set of predictors for SUD containing those derived by CART

Each of the four types of substances, as well as daily use of tobacco and the number of substance use classes endorsed, were explored in secondary analyses. DSM-IV abuse or dependence was based on a positive parent or child report on the Diagnostic Interview Schedule for Children (DISC) version 2.3/3.0 at the 6- and 8-year follow-up assessments. The DISC includes both lifetime and past-year diagnoses. The Diagnostic Interview Schedule-IV was used at the 8-year follow-up for participants aged 18 years and older. SUD was defined as the lifetime presence of any abuse or dependence. Additional analyses explored SUD for alcohol, tobacco, and cannabis/other drugs separately. All patients in this study provided informed written consent as approved by the NIH Ethics Committee.

A sample of 560 inpatients and outpatients with severe SUD from Central Kentucky psychiatric facilities was collected during a pharmacogenetics investigation. Patient interviews and medical record information were used by the research nurse to assess the Clinician Rating of Alcohol and Drug Use Disorder, which provides a score from 1 (abstinence) to 5 (severe dependence). Scores of 3 and higher are pathological and were considered positive in our analyses. All drugs were combined into one rating. Descriptions of the training provided to research nurses to assess the CRAUD and CRDUD were published elsewhere. DNA was available from 533 of 560 study subjects. Of the 533 subjects with available DNA, 53% were male, 82% were Caucasian, 16% were African American, and 2% were from other ethnicities.

Additional clinical information for this sample has been described elsewhere and included: clinical diagnosis obtained from medical records, prior psychiatric history, history of daily smoking, review of current medical and psychiatric medication use, and body mass index. All participants in the Kentucky study provided informed written consent as approved by the University of Kentucky IRB.

Association studies of ADGRL3 variants with ADHD, ODD, CD, response to stimulant treatment, and severity outcome have been published elsewhere for the Paisa and Spanish populations. We used ARPA to build a predictive framework to forecast the behavioral outcome of children with ADHD, suitable for translational applications. Our goal was to test the hypothesis that ADGRL3 variants predisposing to ADHD also increase the risk of co-morbid disruptive symptoms, including SUD. ARPA is a tree-based method widely used in predictive analyses because it accounts for non-linear and interaction effects, offers fast solutions to reveal hidden complex substructures, and provides unbiased statistical analyses of high-dimension, seemingly unrelated data. In a visionary manuscript, D.C. Rao suggested that recursive-partitioning techniques could be useful for genetic dissection of complex traits. ARPA accounts for the effect of hidden interactions better than alternative methods and is independent of the type of data and of the type of data distribution. Furthermore, results supplied by tree-based analytics are easy to interpret visually and logically. Therefore, to generate the most comprehensive and parsimonious classificatory model to predict susceptibility to disruptive behaviors, we applied ARPA using a set of different modules implemented in the Salford Predictive Modeler® software, namely, Classification and Regression Trees (CART), Random Forest, and TreeNet.

One important advantage of SPM when compared to other available data-mining software is its ability to use raw data with sparse or empty cells, a problem frequently encountered in genetic data. Briefly, CART is a non-parametric approach whereby a series of recursive subdivisions separate the data by dichotomization. The aim is to identify, at each partition step, the best predictive variable and its best corresponding splitting value while optimizing a splitting statistical criterion, so that the dataset can be successfully split into increasingly homogeneous subgroups. We evaluated a battery of different statistical criteria to determine the best splitting rule, maximally decreasing the relative cost of the tree while increasing the prediction accuracy of target variable categories. The best split at each dichotomous node was chosen by either a measure of between-node dissimilarity or iterative hypothesis testing of all possible splits to find the most homogeneous split. Similarly, we used a wide range of empirical probabilities to model numerous scenarios recreating the distribution of the targeted variable categories in the population. Following this iterative process, each terminal node was assigned to a class outcome. To avoid finishing with an over-fitted CART predictive model, and to ensure that the final splits were well substantiated, we applied tree pruning. During this procedure, predictor variables that were close competitors were pruned to eliminate redundant commonalities among variables, so that the most parsimonious tree would have the lowest misclassification rate for an individual not included in the original data. Additionally, we applied the Random Forest (RF) methodology using a bagging strategy to identify precisely the most important set of variables predicting disruptive behaviors. The RF strategy differs from CART in the use of a limited number of variables to derive each node while creating hundreds to thousands of trees.
This strategy has proved to be immune to the overfitting generated by CART. In RF, variables that appeared repeatedly as predictors in the trees were identified. The misclassification rate was recorded for each approach. The TreeNet strategy was used as a complement to the CART and RF strategies because it reaches a level of accuracy that is usually not attainable by single models such as CART or by ensembles such as bagging.
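As a rough illustration of the single-tree versus ensemble distinction described above (scikit-learn standing in for the SPM modules; the data are synthetic, not the study's cohorts):

```python
# Sketch of a pruned single classification tree (CART-style) vs. a Random
# Forest. Synthetic data stand in for the genotype/clinical predictors, and
# scikit-learn stands in for the Salford Predictive Modeler modules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Single tree with cost-complexity pruning (analogous to CART tree pruning)
cart = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_tr, y_tr)

# Forest: hundreds of trees, each split drawn from a random feature subset
rf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                            random_state=0).fit(X_tr, y_tr)

for name, model in [("CART", cart), ("Random Forest", rf)]:
    print(f"{name} misclassification rate: {1 - model.score(X_te, y_te):.3f}")
```

On held-out data the bagged forest typically misclassifies fewer cases than the single pruned tree, which is the motivation for complementing CART with RF.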

The TreeNet algorithm generates thousands of small decision trees built in a sequential error-correcting process converging on an accurate model. To derive honest assessments of the derived models and obtain a better view of their performance on future unseen data, we applied a cross-validation strategy in which both training with all the data and then indirectly testing with all the data were performed. To do so, we randomly divided the data into separate partitions of different sizes. This strategy allowed us to review the stability of results across multiple replications. We used a 10-fold cross-validation as implemented in the SPM software. A fixed-effects meta-analysis of the overall fraction of correctly classified individuals using the derived models from each of the four samples was applied to derive a general perspective of the SUD predictive capacity of this demographic-clinical-genetic framework.

A series of predictive models were built on our data using combinations of the following criteria: the rules of splitting; the priors; the size of the terminal nodes; the costs; the depth of branching; and the size of the folds for cross-validation, to maximize the accuracy of the derived classification tree while considering class assignment, tree pruning, testing, and cross-validation. A parsimonious and informative reconstructed predictive tree derived from CART for the Paisa sample revealed demographic, clinical, and genetic variables. The importance of these variables was corroborated, and their potential overfitting discarded, by the TreeNet analyses, which revealed a set of predictors for SUD containing those derived by CART. This predictive model displays good sensitivity and specificity, as shown by areas under the receiver operating characteristic (ROC) curve during TreeNet cross-validation using folding.
The proportions of misclassification for SUD cases in the cross-validation experiment for the learning and testing data were 0.124 and 0.177, respectively. In the case of the Spanish sample, a parsimonious and informative tree was reconstructed with CART, revealing demographic, clinical, and genetic variables. This predictive model displayed good sensitivity and specificity, as shown by areas under the ROC curve of 0.911 and 0.897 for learning and testing samples, respectively, during TreeNet cross-validation using folding. The proportions of misclassification for SUD cases obtained by TreeNet analysis for learning and testing data were 0.151 and 0.175, respectively. As in the previous cohorts, for the MTA sample we derived a parsimonious and informative predictive tree with CART depicting demographic and genetic variables. The TreeNet analyses revealed a set of predictors for SUD containing those derived by CART. This predictive model displays good sensitivity and specificity, as shown by AUCs of 0.808 and 0.643 for learning and testing samples, respectively, during TreeNet cross-validation using folding. The proportions of misclassification for SUD cases obtained by TreeNet analysis for learning and testing data were 0.314 and 0.358, respectively. Finally, for the Kentucky sample, we derived a parsimonious and informative predictive tree with CART involving demographic, clinical (including schizophrenia diagnosis), and genetic variables.
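The cross-validated AUC and misclassification figures reported above can be reproduced in spirit with a small sketch; scikit-learn and synthetic data serve as stand-ins here, not the cohort data or the SPM implementation:

```python
# Sketch of 10-fold cross-validation with AUC and misclassification-rate
# reporting, on synthetic data (not the study's cohorts).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_features=15, n_informative=4,
                           random_state=1)
clf = DecisionTreeClassifier(ccp_alpha=0.005, random_state=1)

# Out-of-fold probability estimates from 10-fold cross-validation
proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]

auc = roc_auc_score(y, proba)            # area under the ROC curve
miscls = np.mean((proba >= 0.5) != y)    # proportion misclassified
print(f"cross-validated AUC {auc:.3f}, misclassification {miscls:.3f}")
```

Because every prediction is made on a fold the model never trained on, the AUC and misclassification rate estimate performance on unseen data, as described above.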

The TreeNet analyses revealed a set of predictors for SUD containing those derived by CART. This predictive model displays good sensitivity and specificity, as shown by AUCs of 0.811 and 0.744 for learning and testing samples, respectively, during TreeNet cross-validation using folding. The proportions of misclassification for SUD cases obtained by TreeNet analysis for learning and testing data were 0.285 and 0.252, respectively. The results from the RF analysis were consistent with those produced by TreeNet cross-validation using folding. A fixed-effects meta-analysis for overall accuracy returned a value of 0.727, suggesting potential clinical utility of the predictive values. Overall, ADGRL3 marker rs4860437 was the most important variant predicting susceptibility to SUD, a commonality suggesting that these networks may be accurate in predicting the development of SUD based on ADGRL3 genotypes. We conducted independent analyses for alcohol or nicotine dependence and compared these results with those of our composite SUD phenotype, as defined by the disjunctive presence of substance use phenotypes and explained by likely common neuropathophysiological mechanisms. In general, across cohorts, we found significant alcohol and nicotine risk variants, some of which have reasonably high odds ratios. For instance, in the Spanish sample, marker rs2271339 conferred significant risk of nicotine use: the heterozygote A/G genotype confers a 43% increased risk of being diagnosed with nicotine use. In the same vein, we found in the Paisa sample that the heterozygote A/T genotype for rs1456862 confers an 83% increased risk of nicotine use relative to the A/A genotype. Regarding alcohol use, we found in the Paisas that the heterozygote C/T genotype for rs2159140 confers susceptibility, whereas the C/C genotype does not. Supplemental Fig. 1 shows the ROC curves of nicotine and alcohol use prediction in the Paisa sample.
Note that the AUC is greater than 0.7 in both cases, which suggests strong performance of markers rs1456862 and rs2159140 in predicting nicotine and alcohol use, respectively. To determine the significance of the improvement in prediction when genetic markers are introduced into the ARPA-based predictive model for SUD, we compared the performance measures across all cohorts under two disjunctive scenarios: inclusion of genetic markers or not. We found that including genetic markers improved the performance measures of the resulting ARPA-based model obtained for the Paisa sample: the AUC was 90% when genetic information was included versus 78.8% when it was not. Improvements were also observed in the correct classification rate for the Spanish and Paisa samples, the sensitivity values in all samples, the specificity in the Spanish and Paisa samples, and the lift in the Paisa sample. Similar results were observed for the MTA and Kentucky samples, where including genetic information in the predictive model for SUD drastically improved these performance measures.

SUD genetic epidemiological studies across multiple substances have been plagued by inconsistency in the replication of genetic association results. This may be due to reasons such as: small effect size of variants expected to influence the SUD phenotype, as with any complex disease; insufficient power to detect significant associations due to small sample size; phenotypic heterogeneity of SUD across samples that may reflect different disease stages or multiple subtypes; genetic heterogeneity arising from distinct risk gene sets; ethnicity inconsistencies between discovery and replication samples; and comorbidity with other psychiatric conditions with shared genetic and environmental architecture. Consequently, additional studies are required to identify new SUD candidate genes and to help dissect genetic contributions in the context of complex interactions with co-morbid conditions.
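The fixed-effects pooling of correctly classified fractions described above works by inverse-variance weighting. A minimal sketch follows; the cohort accuracies and sample sizes used here are hypothetical placeholders, not the study's values:

```python
# Fixed-effects (inverse-variance) meta-analysis of correctly classified
# fractions across cohorts. All numbers below are hypothetical, for
# illustration only.
import math

cohorts = [(0.85, 300), (0.84, 250), (0.67, 550), (0.73, 530)]  # (accuracy, n)

# Weight each cohort by 1 / Var(p), with Var(p) = p(1-p)/n for a proportion
weights = [n / (p * (1 - p)) for p, n in cohorts]
pooled = sum(w * p for w, (p, _) in zip(weights, cohorts)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled accuracy {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Larger, more precise cohorts dominate the pooled estimate, which is why the fixed-effects summary gives a single overall view of predictive capacity across samples of unequal size.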
In this study, we present a demographic, clinical, and genetic framework generated using ARPA that is able to predict the risk of developing SUD. Interestingly, marker rs4860437 showed a differential splitting pattern in the Paisa, Spain, and Kentucky cohorts: in Fig. 1a, rs4860437 splits on the T/T genotype; in Fig. 2a, the same variable splits on G/T; and in Fig. 4a, it splits on G/G. The most parsimonious and plausible explanation of this splitting pattern is the presence of genomic variability surrounding this proxy marker, reflecting ancestral composition. Future studies of genomic regions surrounding rs4860437 might reveal a cryptic mechanism.

Nicotine patches are usually placed on the skin in the morning and deliver nicotine over 16 to 24 hours

However, a 2015 review of 225 Android apps for quitting smoking found that most provide simplistic tools; use of tailoring was limited, although positively related to app popularity and user ratings of quality. Notably, one randomized trial found that a simpler, direct texting program outperformed a smoking cessation app.

Social media. Social media, such as Twitter and Facebook, are being explored for delivering cessation treatment. In the United States, 74% of online adults use social media, 80% of whom are seeking health information, and a majority access the sites daily. Although social media are a promising technology, efforts to sustain engagement are key and can be challenging; as with predecessor technologies such as bulletin boards and listservs, initial interest may be high but then tends to wane. There is preliminary evidence, however, of good acceptability and efficacy. Using Twitter, small, private groups of 20 smokers who interact for 100 days have been studied. The intervention seeds the groups with twice-daily automessages to encourage group sharing and support. Similar efforts are being developed on Facebook, with a focus on engaging young adults in cessation treatment. In a randomized trial, a novel Facebook smoking cessation intervention increased abstinence at the end of treatment, although effects were not sustained out to 1-year follow-up. Having tried and failed to quit smoking in the past, smokers may not initially publicize their quit attempts within their main social circle. With social media sites that are largely uncurated and not expert-moderated, however, users should be forewarned that inaccurate information may be posted. For example, online communities may encourage use of non–evidence-based treatments.

Mobile technologies for treating nicotine addiction remain a heterogeneous group of emerging applications, and knowledge gaps remain concerning the best strategies for maximizing their reach and efficacy, as well as their comparative effectiveness relative to in-person approaches.

Monetary incentives that reward outcome or engagement have been evaluated in 33 trials, with a meta-analysis finding evidence of increased abstinence that persisted after the incentives ceased. The level of the incentives ranged from zero to between $45 and $1185, with no clear direction of effect by level of incentive. Conditional payments outperformed nonconditional payments. Findings from a subgroup analysis of eight trials conducted with smokers with substance use problems were consistent with the overall analysis. A summary of nine trials with pregnant smokers reported more than twofold greater odds of abstinence at the longest follow-up assessment. The findings are particularly important given the substantial health harms of smoking to mother and baby and that, currently, there is no other effective cessation intervention for pregnant smokers.

While counseling and psychosocial treatments help promote cessation, medications that address the neuropharmacological effects of nicotine and nicotine withdrawal further enhance the likelihood of quitting. E-cigarettes, which allow continued self-administration of nicotine without combustion, can also promote quitting smoking. Smoking cessation guidelines, such as those from the U.S. Public Health Service and National Comprehensive Cancer Network, recommend smoking cessation medications for all daily smokers where feasible and safe. Pharmacotherapy can be considered for nondaily smokers as well, although there are few clinical trials to guide treatment in this group. The mechanism of benefit in nondaily smokers would be reduction of nicotine reward from cigarettes by nicotinic receptor desensitization or antagonism, as discussed below.
Table 2 presents the FDA-approved smoking cessation medications, including dosing guidelines, advantages, disadvantages, adverse effects, and precautions. FDA-approved medications are NRT (in the form of gum, patches, lozenges, nasal spray, and inhaler), varenicline, and bupropion. Nicotine gums, lozenges, and patches are available over the counter in the United States, while the nicotine nasal spray, nicotine inhaler, varenicline, and bupropion are by prescription only.

Nicotine mouth spray is available outside of the United States and has evidence of acceptability, efficacy, and safety, including with minimal behavioral support. In general, medications serve to make smokers more comfortable while they learn to live and cope with daily cues/triggers and life stressors without smoking cigarettes. There are three main mechanisms by which medications can facilitate smoking cessation: reduction of nicotine withdrawal symptoms; reduction of the rewarding effects of nicotine from smoking by blocking or desensitizing nicotine receptors; and provision of an alternative source of nicotine with the desired pharmacologic effects previously provided by nicotine from cigarettes. NRT medications are not as satisfying as cigarette smoking because of slower absorption of nicotine; nicotine delivery from e-cigarettes can resemble that of a cigarette, and these devices tend to be much more satisfying. Most smoking cessation medications are recommended for 8 to 12 weeks, although use for 6 months or longer may be necessary to achieve optimal quit rates. It makes sense to use medications to support smoking cessation for as long as the individual feels at risk for relapse. For those switching to e-cigarettes as a less harmful substitute for cigarette smoking, use sometimes continues for many months or years.

Nicotine medications consist of purified nicotine that is administered to ameliorate symptoms of physical dependence on nicotine. The particular actions of different products vary according to route of administration and rate of nicotine absorption into the bloodstream. For example, nicotine patches deliver nicotine slowly, relieving nicotine withdrawal symptoms and reducing positive effects of cigarette smoking, without providing much, if any, direct positive effects of nicotine. Nicotine gums, lozenges, sprays, and inhalers deliver nicotine more rapidly, providing some acute nicotine effects that may serve as a substitute for smoking a cigarette.
Combining a short-acting with a long-acting NRT results in superior quit rates compared to any NRT product alone and is recommended as a first-line treatment.

NRT products are marketed in different strengths, with higher doses recommended for more dependent smokers based on the number of cigarettes smoked per day or the time to first cigarette upon waking. A 2019 Cochrane review concluded that 4-mg gum is more effective than 2-mg gum in more highly dependent smokers and that the 21-mg patch is more effective than the 14-mg patch in general. While clinical trials do not demonstrate superiority of the 42-mg over the 21-mg nicotine patch, some clinicians do use high-dose patches for smokers with particularly severe withdrawal symptoms. Tapering of nicotine doses over time is an option for nicotine patches but does not appear to affect outcome in clinical trials. All forms of NRT have shown similar efficacy in clinical trials, increasing quit rates by 50 to 100% compared to behavioral treatment alone. Among the NRTs, compliance is greatest with nicotine patches, lower with gum and lozenge, and lowest with the nasal spray and inhaler. Some smokers experience nicotine patch–related insomnia and/or abnormal dreams and do better removing the patch at bedtime. Use of patches for 16 or 24 hours is equally effective in promoting quitting smoking. The pharmacokinetics of nicotine gum, lozenge, and inhaler are similar, with gradual absorption of relatively low doses of nicotine over 15 to 30 min. Use every 1 to 2 hours provides the best pharmacologic response. The nicotine inhaler is a plastic device inhaled like a cigarette, but it delivers nicotine to the oropharyngeal area rather than to the lungs, which explains its slow absorption. All oral nicotine products have an alkaline pH, which results in a high proportion of nicotine in the free base form, which is rapidly absorbed across mucous membranes. Acidic beverages lower the pH and reduce nicotine absorption and should be avoided for >10 min before using oral NRT products.
The nicotine nasal spray is absorbed much faster than the other rapid-release products, most closely resembling a cigarette. More dependent smokers may find nicotine nasal spray to be more effective than other NRT products for smoking cessation. The spray is associated with more local toxicity, including a burning sensation, watery eyes, and sneezing; however, tolerance develops to these effects with regular use of the spray over 1 to 2 days.

Overall, NRT products are well tolerated and present few safety concerns. Safety concerns with NRT are primarily skin irritation with patches, gastrointestinal symptoms with oral products, and nasal/throat burning and irritation with nasal spray. Nicotine’s cardiovascular effects raised concern about NRT cardiovascular safety. Nicotine enhances sympathetic neural activity, resulting in increased heart rate, constriction of blood vessels, induction of proatherogenic lipid profiles, development of insulin resistance, and possible promotion of arrhythmias. Cigarette smoke delivers not only nicotine but also many oxidant, prothrombotic, and other toxic chemicals, making smoking much more toxic than nicotine alone. Clinical trials and other studies of NRT in patients with cardiovascular disease find no increase in adverse cardiovascular events due to NRT.

Varenicline is a partial agonist at the nicotinic α4β2 receptor, the major receptor mediating nicotine addiction. Varenicline both activates and blocks the effects of nicotine on the α4β2 receptor. The agonist effect serves to reduce withdrawal symptoms, while the antagonist effect reduces the rewarding effects of nicotine from cigarette smoke. Varenicline treatment before smoking cessation is often associated with reduced smoking, presumably because smoking is less satisfying, an effect that can promote subsequent cessation. In clinical trials, varenicline is more effective than bupropion or the nicotine patch in promoting smoking cessation and is comparably effective to combined NRT. The EAGLES trial, the largest smoking cessation trial to date, with 8000 smokers, directly compared varenicline, bupropion, nicotine patch, and placebo. Varenicline outperformed all conditions; bupropion and nicotine patch were comparable to each other and were significantly better than placebo. EAGLES included smokers without and with psychiatric diagnoses.
Quit rates were higher in those without psychiatric diagnoses, but the relative efficacy of the various treatments was similar. Extended treatment with varenicline for 6 months is superior to the standard 12-week treatment and is recommended for smokers who feel at risk of relapse. The most common adverse effect of varenicline is nausea, which is dose related and to which tolerance develops over time. Concern about nausea is the rationale for starting at lower doses for a week before starting the full dose. Some smokers cannot tolerate the normal dose but do well on continued use of the lower dose. Anecdotal neuropsychiatric adverse effects of varenicline used for smoking cessation were reported, prompting a black box warning in the label after the drug was marketed. The putative neuropsychiatric side effects included depression, psychosis, and suicide, with potentially higher risk in smokers with psychiatric disease. However, the EAGLES trial found no evidence of increased neuropsychiatric adverse events for varenicline or bupropion relative to nicotine patch or placebo, in smokers with or without psychiatric illness, and in 2016 the black box warnings were removed for both varenicline and bupropion. Varenicline has been shown to enhance smoking cessation in patients with cardiovascular disease, including stable coronary heart disease and acute coronary syndrome. Concern was raised about possible cardiovascular toxicity due to the nicotine-like effects of varenicline and anecdotal reports of adverse cardiovascular events, but several meta-analyses, a large retrospective cohort study, and clinical trials in smokers with cardiovascular disease, as well as the EAGLES trial, showed no increase in cardiovascular risk. Varenicline has also been found efficacious for cessation of smokeless tobacco use.

Bupropion is a stimulant drug originally marketed as an antidepressant.
Bupropion blocks neuronal uptake of dopamine and norepinephrine and has antagonist activity at the α4β2 nicotinic receptor. By blocking reuptake, bupropion increases brain levels of dopamine and norepinephrine, simulating effects of nicotine. Bupropion is marketed for smoking cessation as a sustained-release preparation. The drug works in both depressed and non-depressed smokers. The usual duration of bupropion treatment is 12 weeks, but extended bupropion therapy for a year reduces relapse and enhances long-term quit rates. With lower quit rates, bupropion is considered to be second-line, after combination NRT and varenicline. The main adverse effects of bupropion relate to its nervous system stimulant actions. Some smokers are intolerant of bupropion because of anxiety, agitation, and insomnia. Bupropion reduces the seizure threshold and should not be used in smokers who are at risk for seizures. In overdose, bupropion causes tachycardia and hypertension, but there is no evidence of increased cardiovascular events in smokers with preexisting stable cardiovascular disease.

Capsaicin may reduce release of neuropeptides that are active at neurogenic pain onset

Of note, CGRP plasma levels positively correlate with headache intensity and timing, and CGRP intravenous infusion causes migraine-like symptoms in migraine patients. The acute treatment of migraine relies on compounds specifically developed for the disease, such as the triptans. In contrast, prophylactic treatment has relied on drugs originally developed for other disorders, such as antihypertensive or antiepileptic drugs. Unfortunately, all of these compounds are only moderately effective, with side effects that limit adherence. The identification of CGRP as causative in migraine events has led to the development of different therapeutic strategies to specifically treat or prevent migraine. Four monoclonal antibodies (mAbs) targeting CGRP or its receptor have been approved by the U.S. Food and Drug Administration or the European Medicines Agency for the prophylactic treatment of migraine. With their extended time to reach maximal concentration and long plasma elimination time, mAbs have proven to be effective. mAbs are relatively large molecules that usually do not cross the blood–brain barrier, so they are more likely to act outside of the brain. Sites of action include the meningeal vasculature and certainly also the trigeminal ganglion, which is not protected by the blood–brain barrier. The results of a recent retrospective longitudinal study suggested that patients discontinued use of prophylactic therapies when they initiated CGRP receptor antagonist therapy with erenumab. Adherence to these novel therapies is higher than to the older medications, suggesting increased real-world effectiveness. However, a proportion of migraine patients do not experience benefit from CGRP-targeted treatments, suggesting the involvement of other pathways in migraine.

A preliminary study has indicated that those who do not benefit from their current CGRP-targeted therapy fare better by switching antibody class, but more rigorous double-blind studies are necessary. Gepants are small-molecule CGRP receptor antagonists. The development of the so-called first-generation gepants was halted because of pharmacokinetic limitations or hepatotoxicity, but the second generation appears to be safe and tolerable, with several different compounds already FDA approved. Zavegepant is currently under clinical investigation for acute treatment or prophylaxis of migraine. Gepants with different pharmacokinetics, along with mAbs, could provide a continuum between acute and prophylactic approaches. Although these drugs all are effective in prophylactic or acute migraine treatment, real-life efficacy, long-term safety, and durability of the effects remain to be established.

OA affects about 9.6% of older men and 18% of older women. Cartilage degradation is a hallmark of OA, with pain and reduced joint functionality. Treatment of OA pain remains an unmet medical need. This pain probably stems from the inflammatory response and release of inflammatory cytokines, including NGF. Indeed, NGF is present in subchondral bone of the human tibial plateau, cartilage, and synovium in OA and rheumatoid arthritis. Increased synovial NGF immunoreactivity, along with synovitis and morphological changes in chondrocytes, also has been associated with symptomatic knee OA. It has become clearer that the NGF/TrkA pathway has a central role during the development of pain in OA, which has prompted research into therapeutic strategies targeting NGF in both animal studies and clinical trials. Preclinical studies have shown encouraging results. Although several anti-NGF mAbs have been tested in the clinical setting, only fasinumab is currently in clinical development for treatment of painful lower-extremity OA and low back pain.
Tanezumab is another anti-NGF mAb, but its testing was stopped by the manufacturer in late 2021. Initially, following a proof-of-concept study showing improvement in joint pain and functionality, tanezumab was administered intravenously with and without NSAIDs. After the treatment, patients with knee or hip OA had significant pain reduction and improvement in joint function compared to either placebo or NSAID alone.

In 2010, however, the clinical development program for anti-NGF antibodies was put on hold after several serious adverse events emerged resembling joint osteonecrosis or rapidly progressive OA (RPOA) in patients treated with higher doses of both NGF antagonists, alone or combined with NSAIDs. A detailed analysis identified most of these events as RPOA. This disorder is considered to be an accelerated form of OA leading to joint replacement. After a second clinical hold because of suspected peripheral neuropathy, clinical trials were eventually continued. Additional phase III studies with fasinumab and tanezumab were continued with a subcutaneous formulation. As in the previous studies with an intravenous formulation, these results also showed significantly reduced joint pain, but adverse events including RPOA were still present with the higher dose. The mechanism behind the development of RPOA is not completely understood, and the FDA and EMA both rejected approval of tanezumab for the treatment of OA pain, leading to the termination of the clinical tanezumab program in 2021. However, targeting patients with chronic low back pain and excluding medium to severe OA might be a promising alternative strategy for anti-NGF regimens. Fasinumab has completed phase II/III studies in patients with knee OA. Fasinumab also resulted in significant reduction in joint pain, with similar incidences of RPOA. Currently, studies with fasinumab at 1 mg every 4 weeks and 1 mg every 8 weeks are underway, and results should be reported soon. In summary, inhibition of NGF by systemic administration of antibodies appears to reduce pain and improve function in individuals with lower-extremity OA. If fasinumab is approved, a longer-acting non-opioid alternative will be available for the treatment of OA or perhaps chronic low back pain.

CGRP and NGF have been successfully targeted in chronic pain pathologies.
In contrast, SP has not been as promising, and the results of studies are variable, preventing definite conclusions. SP is expressed in the spinal ganglion and released after stimulation of primary nociceptive neurons. SP binds the neurokinin-1 (NK-1) receptor, which is found in the CNS, on the trigeminal and spinal ganglia, and on immune cells such as lymphocytes, macrophages, and mast cells.

SP leads to inflammation with vasodilation and edema after repetitive stimulation and antidromal transport with release at the nociceptor. In this way, SP causes neurogenic inflammation with recruitment of immune cells and the release of pro-inflammatory mediators. Genetic studies have pointed toward a role of the SP/NK-1 pathway in the development of pain. For instance, the absence of the NK-1 receptor in mice does not alter the perception of acute pain, but pain wind-up is absent. Furthermore, African naked mole rats have limited SP fibers in the skin and are not susceptible to some types of pain. Consequently, NK-1 receptor antagonists were developed with the idea of blocking SP neurotransmission. Although these antagonists showed promising results in animal models, results of clinical studies were disappointing. Aprepitant is a SP/NK-1 receptor antagonist approved as an antiemetic drug. Surprisingly, aprepitant did not attenuate pain after 2 weeks of treatment in patients with post-herpetic neuralgia, nor did it decrease sensitization in a human model of electrical hyperalgesia. This outcome raised the question of compound specificity in humans. Furthermore, blocking the NK-1 pathway might upregulate other neurotransmitters involved in nociception or activate alternative pathways such as NK-2 or NK-3. Findings of a recent study suggest that SP released from primary afferents binds the Mas-related G-protein-coupled receptor MrgprB2 on mast cells, triggering the release of inflammatory mediators as well as immune cell recruitment. Thus, SP-mediated nociception involves not only the NK-1 receptor but also MrgprB2, suggesting that perhaps the latter receptor should be a focus. MrgprX2, the human homolog of MrgprB2, may be a promising new target for chronic pain.

Despite our growing understanding of the pain landscape, novel treatment targets are necessary in light of the current scarcity of treatment options. Novel therapeutic regimens targeting more disease-specific key molecules of chronic pain will have to be a focus in neuroscience and in interdisciplinary research across rheumatology, neurology, and pain medicine. The lack of disease-specific treatment targets for chronic pain syndromes is in astounding contrast to the situation for a "minority" of inflammatory autoimmune diseases, which have targeted therapies against TNF-α, IL-1β, IL-6, IL-12/23, IL-17, IL-23, CTLA4, CD20, PDE4, or Janus kinases. Animal models of pain and the identification of novel therapeutic targets have led to the development of new treatment strategies in the pre-clinical setting, but the translation into the clinic has been disappointing. Nonetheless, evidence is gradually accumulating of beneficial effects in subtypes of pain syndromes. The poor translation between experimental models and clinical conditions may trace to the clinical relevance of the model and the assessment methods used. Several studies also suggest that the inflammatory response has a critical role during neuropathic and inflammatory pain by enhancing the expression or release of prostaglandins, sympathetic amines, endothelin, and NGF. With neurogenic inflammatory pain, target molecules can be related to inflammation itself or to pain, and it is crucial to evaluate the underlying inflammatory mechanisms. One of the mediators of inflammation in pain is TNF-α, a classical pro-inflammatory cytokine that enhances release of second-order pro-inflammatory cytokines, such as IL-6 and other mediators, amplifying the inflammatory response. Like several other cytokines, IL-6 exerts its effects through JAK/STAT (Janus kinase/signal transducer and activator of transcription) signaling.
A meta-analysis revealed that JAK inhibitors used for joint pain from rheumatoid arthritis improve outcomes, but in several patients, pain persists even when inflammation is contained. Patient-reported outcomes such as health assessment questionnaire responses, physical function, and patient assessment of pain showed significant improvements in patients treated with baricitinib alone and in combination with methotrexate as compared to methotrexate alone. Therapies targeting inflammation in degenerative chronic pain syndromes have failed thus far. These chronic pain syndromes do not all rely on the same pathways as systemic inflammation, but anti-inflammatory regimens may be beneficial for some of them. For example, anti-TNF-α antibodies have been tested in patients with knee OA, and the treatment was well tolerated, with a significant improvement in pain. However, in another clinical trial, pain did not improve in patients with hand OA that had not responded to analgesics and NSAIDs. Thus, further studies are necessary to assess the therapeutic benefits of anti-inflammatory compounds. Recently, the approach to pain has included revisiting old molecules, such as cannabinoids and capsaicin, and new synthetic compounds. Results of these studies suggest that cannabis and cannabis-based molecules may be effective and improve quality of life in a variety of chronic pain conditions. Cannabinoid receptors are localized on the sensory nerve, and their dual ability to reduce inflammation and neuronal activity might be a crucial mechanism in modulating neurogenic inflammation and pain. The legal consumption of cannabis and its use for pain management are already approved in several countries, despite limited evidence of efficacy in pain management. Further clinical studies are needed to properly assess the benefits and pitfalls of cannabis-based therapies in pain management.
Another relevant treatment for pain is capsaicin, the active ingredient in chili peppers. Capsaicin targets the TRPV1 receptor, which is prominent on nociceptors containing neuropeptides and, more specifically, is enriched in nociceptors expressing SP and CGRP. A double-blind multicenter study recently showed that intra-articularly injected synthetic capsaicin yielded improvement in pain in patients with knee OA. Of note, intra-articular injection of capsaicin did not cause the side effects seen with subcutaneous or intravenous administration of anti-NGF compounds. However, high doses of capsaicin can desensitize nociceptors, causing the loss of axon terminals. Another emerging therapeutic target candidate for chronic pain is BDNF, a crucial modulator of nociception. Although BDNF expression can contribute to plasticity in spinal neurons during controllable pain, spinal injury or chronic pain can lead to an altered response to BDNF, triggering central sensitization and pain hypersensitivity. Sustained BDNF levels may show noxious properties in chronic pain. Pro-inflammatory conditions in hyperalgesia can also induce upregulation of BDNF. A recent study showed that BDNF also plays a role in OA, with synovial expression of the BDNF receptor TrkB associated with higher OA pain. However, when administered as a pharmacological treatment in the CNS, BDNF showed anti-inflammatory effects, suggesting an anti-nociceptive role.


These results indicate that clinicians and researchers should give adequate consideration to which AUD criteria might be most salient in identifying older versus younger drinkers with persistent problems who might benefit from interventions and treatment. The need to consider changes in endorsement patterns of specific items in different age groups could also be important for future iterations of the DSM. The specific criterion items that change with time might differ across populations, a question that requires further research. However, the current prospective findings across 2 generations of the SDPS, using the same diagnostic instrument and led by the same research team, indicate that the changes observed here are not likely to be an artifact of the measures used and suggest that support for Hypothesis 1 is likely to be observed in other populations as well. As predicted in Hypothesis 2, the prospective SDPS data revealed a decreased prevalence of endorsement of tolerance over time among probands with persistent AUDs. AUD offspring demonstrated a non-significant overall pattern of decreased endorsement of this criterion over time. The pattern of decreases in tolerance over time among individuals with AUDs is consistent with several cross-sectional and retrospective investigations in the literature. One possible explanation for this decreased endorsement with increasing age might reflect how tolerance is likely to be defined in research and clinical situations, where the emphasis is usually on a recent time frame. Tolerance might have occurred earlier in the drinking career and been relatively constant throughout the heavy drinking years, with the result that an individual with an AUD might indicate that tolerance, while technically present, was not newly observed in the more recent past.

Another potential contributor to decreased acknowledgment of D1 over time is the fact that at older ages, drinkers are likely to actually experience an increased intensity of reaction to alcohol, because they reach higher blood alcohol levels per drink as a consequence of: slower oxidation of ethanol in the liver; lower body water with age related to decreased water-rich body muscle; and age-related increased GABA sensitivity. However, Hypothesis 3 regarding a potential increase with age in endorsement of the criterion of withdrawal was not supported in these analyses, where probands with persistent AUDs demonstrated a significant decrease, rather than an increase, in self-reported symptoms of withdrawal between ages 31 and 43. Hypothesis 3 might be more relevant to individuals with AUDs in their 50s and/or those with more medical problems compared to SDPS participants. However, the current results were similar to those of the cross-sectional general population survey by Harford et al that suggested that rates of self-reported withdrawal phenomena did not change dramatically during midlife. Hypothesis 4 predicted age-related decreasing rates of endorsement of criterion A2, which related to using alcohol in hazardous situations. Consistent with this projection, our prospective data for probands with persistent AUDs documented significant decreases in endorsement rates of A2 over time, perhaps as a consequence of decreases in most risky behaviors with increasing age. The SDPS offspring with persistent AUDs demonstrated non-significant decreases in endorsement of A2 over time. Hypothesis 4 is consistent with the potentially normal distribution of the age of endorsement of this item in the general population cross-sectional study, although the adolescent data indicated no change in endorsement of hazardous use.
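The point above that age-related loss of body water raises blood alcohol levels per drink can be illustrated with the classic Widmark relation. The sketch below uses purely illustrative round numbers (a 14-g standard drink, a 70-kg drinker, and Widmark distribution factors of 0.68 versus 0.58 for a younger versus older person of the same weight); these are not values taken from the study.

```python
def peak_bac_percent(alcohol_g, weight_kg, r):
    """Widmark estimate of peak blood alcohol concentration (g/100 mL),
    ignoring elimination: BAC% = alcohol / (body weight in g * r) * 100."""
    return alcohol_g / (weight_kg * 1000.0 * r) * 100.0

# Same 14-g drink and 70-kg body weight; only the water-distribution
# factor r differs (it declines as lean, water-rich mass is lost with age).
younger = peak_bac_percent(14, 70, 0.68)  # larger distribution volume
older = peak_bac_percent(14, 70, 0.58)    # smaller distribution volume
```

With the smaller distribution volume, the same drink yields a higher peak concentration, consistent with the stronger per-drink reaction described for older drinkers.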
Except for the specific criterion items described immediately above, our group did not believe there was sufficient evidence in the prior literature to support specific hypotheses for changes in rates of endorsement of the remaining AUD criterion items in individuals with persistent AUDs.

However, the current data indicated that endorsement of spending a great deal of time involved with alcohol increased significantly over time in both generations of SDPS participants with persistent AUDs. Two additional DSM criteria demonstrated significant increases in endorsements for probands, with a similar direction of change in offspring. These included drinking alcohol in higher amounts or for longer periods than intended and continued drinking despite social or interpersonal problems. The non-significant results in offspring might reflect the limitation of analyses to 2 time points and the smaller sample in the younger generation. The final significant change in endorsements over time for probands involved decreases in A1 regarding failure to fulfill obligations because of alcohol, but no change in rates of reports of this item was noted for offspring. The decreased rate over time for probands had not been originally hypothesized, and the result might be spurious. However, one might speculate that the decreasing endorsement of A1 for AUD probands might reflect the combination of higher levels of maturity between the early 30s and mid-40s along with decreasing pressures with age of raising young children and potentially diminished worrying about seeking success in one's vocation. Similarly, while another change was not hypothesized and any conclusions should be drawn with caution, the only criterion for which the change over time was significant for offspring but not probands was an increase in the rate of endorsement of giving up important activities due to alcohol. This result in the younger generation might reflect the mirror image of some of the same changes over time noted in probands, where the lower level of maturity and higher levels of impulsivity in the 20s contributed to the younger participants being more likely to take time to drink rather than attend to other activities.

The current analyses focused on DSM-IV criteria, but the results might be relevant to DSM-III-R as well. As to DSM-5, 10 of the 11 diagnostic criteria are the same across the 3 systems, and the only change in criteria occurred when DSM-5 deleted legal problems and added a new criterion involving craving. Although in prior item response theory analyses, legal problems and craving were both found to add relatively little information to the latent concept of the DSM diagnosis or to not fit into a continuum with the other criteria, those analyses were largely based on cross-sectional data, and it is not clear whether the results of the current analyses would be similar if data on craving had been available. Another difference between the 3 DSM approaches to an AUD diagnosis is that DSM-5 required endorsement of at least 2 criteria for a diagnosis, while DSM-III-R and DSM-IV abuse required only a single item. In light of this difference, it is noteworthy that the re-analyses of the data from Tables 2 and 3 after limiting the sample to subjects who endorsed at least 2 criteria did not produce major changes in the current results. However, direct testing of the applicability of our findings to other diagnostic systems will be important. Data from both proband and offspring samples supported Hypothesis 1, and both subgroups demonstrated increases over time in the proportions endorsing criterion D5, spending a great deal of time involved with alcohol. However, although some findings in probands were similar in offspring, patterns for other criteria followed a different course regarding significant changes in the 2 generations. Such differences across older and younger individuals with persistent AUDs are not surprising in light of prior studies that documented differences in the typical ages of onset of some alcohol problems over time. In summary, the current analyses support 2 major conclusions.
First, the salience of many DSM AUD criterion items among individuals with persistent AUDs changed significantly with age in both SDPS generations. This finding occurred despite the use of the same interview instrument and the same principal investigators who prospectively evaluated the same individuals in both generations at all relevant follow-ups. These consistencies are important because differences in each of these items can affect the pattern of diagnostic item performance. The second overarching conclusion is the relative paucity of data on the question of whether the psychopathological process involved in AUDs manifests in different ways across development in the same individuals with persistent AUDs, a question that requires more study. In the current analyses, both generations demonstrated decreases in endorsement of tolerance and an increasing prevalence of spending a great deal of time obtaining, using, or recovering from the effects of alcohol.

There was more modest consistency across generations for endorsement of the use of alcohol in hazardous situations, but additional studies are needed to evaluate whether the specific items that change with age during persistent AUDs might differ in different populations. The relatively unique nature of the data presented here regarding changes in endorsement of specific DSM AUD items across multiple evaluations of individuals with persistent AUDs contributes to the need to emphasize some important caveats. First, the data regarding the offspring across 2 time points are especially tentative because of their relatively short follow-up. Second, it is important to remember that 2 different groups of subjects provided the data for the changes in endorsements for samples aged 21 to 27 and 31 to 43 years. Third, the samples for both AUD probands and offspring are somewhat small. Fourth, the current population is almost exclusively European American and relatively well educated, and thus, the generalizability to other ethnic and education groups is not clear. Similarly, our requirement that an AUD had to be present at multiple time points excluded subjects with less persistent AUDs and those with no alcohol diagnoses, and it is possible that the current results do not represent a more general pattern of AUD item endorsement over time. Fifth, the SDPS selected male but not female probands, and no data are available on females in their fourth and fifth decades. Also, in light of the variability in the course of AUDs over time, as clinical researchers and clinicians we believe that focusing on a 5-year course of alcohol problems, rather than limiting the analyses to the prior 12 months as a current condition, offers advantages for understanding both recent histories and vulnerabilities toward future alcohol problems. It is possible that a different pattern of results might have been seen with a 12-month window of alcohol problems.
Next, the data collection and subsequent analyses did not include the item of craving, and thus, the applicability of the current data to DSM-5 is not clear. Finally, the data relate only to alcohol, and additional studies are needed to determine the age-related patterns of changes in endorsement that are likely to be seen in individuals with other substance use disorders. The discovery of the cannabinoid receptors and endocannabinoid ligands has generated a great deal of interest in identifying opportunities for the development of novel cannabinergic therapeutic drugs. Such an effort was first undertaken three decades ago by a number of pharmaceutical industries, but was rewarded with only modest success. However, the newly acquired knowledge on the physiological roles of the endocannabinoid system has significantly enhanced these prospects. At the June 27, 2004 workshop "Future Directions in Cannabinoid Therapeutics II: From the Bench to the Clinic", sponsored by the University of California Center for Medicinal Cannabis Research, we on the Scientific Planning Committee were asked to identify the areas of research with the most immediate promise for the development of novel therapeutic agents. The Committee identified four broad areas involving modulation of the endocannabinoid system as particularly promising in this regard: agonists for central CB1 cannabinoid receptors and peripheral CB2 receptors, antagonists of CB1 receptors, inhibitors of endocannabinoid deactivation, and endocannabinoid-like compounds that act through mechanisms distinct from CB1 and CB2 receptor activation. Below, we summarize the data presented at the Workshop and the consensus of its participants on the most exciting opportunities for drug discovery. One example is BAY-387271, a centrally acting cannabinoid agonist in Phase II clinical studies for the treatment of stroke.
The interest of the pharmaceutical industry in the application of cannabinoid agonists to the treatment of pain conditions is not recent. Indeed, most of the compounds now in experimental use derive from such an interest. Historically, however, cannabinoid agonist development has not proved clinically fruitful, largely because of the profound psychotropic side effects of centrally active cannabinoid agonists; hence the attention given to peripherally acting cannabinoids, which exhibit significant analgesic efficacy and low central activity in animal models. Neuroprotection is a relatively new area for cannabinoid agonists, but one that appears to be already well advanced.


The proportion of the at-risk sample who become regular users of alcohol increased from 15 to 43% between the two age ranges. Biological factors are significant both in the onset of regular alcohol use and in the onset of alcohol dependence in the youngest age range. By the oldest age range, the prevalence of regular drinking has eliminated the effect of the biological factors on its onset; only the onset of alcohol dependence is affected by biological factors. In the older age range, much of the onset of alcohol dependence is likely driven by past drinking, particularly since relatively few of those who become alcohol dependent in the oldest age range have been drinking for only a short time; accordingly, the factors that are significant for regular alcohol use in the youngest age range are significant for alcohol dependence. Furthermore, it is likely that a biologically specific sub-population of the youngest group particularly sensitive to the effects of alcohol has been effectively eliminated from the at-risk group in the oldest age range. In the illicit drug use sub-sample in the youngest age range, CHRM2 is a greater factor for the onset of alcohol dependence than in the entire sample. However, EROs are not a factor in the onset of alcohol dependence in this group. The range of ERO values in the illicit drug use sub-sample does not differentiate those who become alcohol dependent from those who do not, although ERO values differentiate the illicit drug sub-sample from their complement in the entire sample. The illicit drug use sample shows greater and more extensive genetic effects than the entire sample, since the result of selecting the illicit drug use sub-sample is to remove from the analysis those subjects whose alcohol dependence is unlikely to be genetically affected.
In examining the results of the logistic regression analysis of the transition from regular alcohol use to alcohol dependence in the youngest age range, the U-shaped effect of the duration of drinking suggests the presence of two distinct factors: one, a susceptibility to become dependent rapidly after the onset of regular alcohol use, and the other, a gradual effect of continued alcohol consumption.
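A U-shaped duration effect of this kind is commonly captured in a discrete-time logistic model by entering duration with both a linear and a quadratic term. The sketch below uses purely illustrative coefficients (not estimates from this study) to show how a negative linear and positive quadratic coefficient produce a hazard that is high shortly after onset, dips, and then rises again with continued drinking.

```python
import math

def hazard(duration_years, b0=-2.0, b1=-0.8, b2=0.08):
    """Discrete-time hazard of transitioning to dependence as a function of
    years of regular drinking; b1 < 0 and b2 > 0 give a U-shaped curve.
    All coefficients are illustrative, not fitted values."""
    logit = b0 + b1 * duration_years + b2 * duration_years ** 2
    return 1.0 / (1.0 + math.exp(-logit))

# Hazard is elevated right after onset (rapid-susceptibility component),
# lowest near duration = -b1 / (2 * b2) = 5 years, and rising again
# afterwards (cumulative-consumption component).
```

In a fitted model, the two components would be read off the signs and significance of the linear and quadratic duration coefficients.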

The masking of the ERO effect by the rising component of the duration factor suggests that ERO is associated with a long-term behavior pattern involving substance abuse. The absence of a genotypic effect is the result of including in the analysis all those who become alcohol dependent, not just the genetically more vulnerable, as can be observed by comparing the under-16 results between the regular alcohol user group and the illicit drug user group. In summary, for the youngest age range, the pattern of significance of the ERO and SNP phenotypes for the onset of regular alcohol use and of alcohol dependence, as well as the pattern of significance in the transition from alcohol use to alcohol dependence, suggests that the delta ERO value indexes an element of propensity to use drugs to excess, while the CHRM2 SNPs index an age-related effect of alcohol consumption on the brain with the behavioral outcome of dependence, as we explain below. We view the age-varying genotypic effect of the CHRM2 SNPs as an instance of a gene–environment interaction. In our case, the immediate genotypic effects are upon the activation level of the type 2 muscarinic receptors, and the environment is the neuroanatomic and neurophysiological context in which the action of the muscarinic receptors is taking place. This environment undergoes significant changes as the brain develops from the early teens into the early twenties, as we have noted above. Regarding the transition from alcohol non-use to regular use of alcohol to alcohol dependence, we note that alcohol consumption has significant effects on the development of addiction in adolescent animals and humans. The cholinergic M2 receptor gene belongs to a family of muscarinic acetylcholine G-protein coupled receptors with five known subtypes. The M2 receptors in the mesolimbic dopaminergic system play a significant role in modulating the level of dopamine release.
This has an important effect in governing the reward system, including modulating the effects of alcohol on it. It is not possible to determine the precise nature of the interaction between the genotypic effect on the cholinergic M2 receptors and the age-varying neuroanatomic/neurophysiological environment given the data at our disposal.

Given the age-related patterns of genotypic action we have described above, it is possible that the effect of alcohol consumption on the brain varies with the genotype of the cholinergic M2 receptors and the age of onset of regular drinking. Specifically, when alcohol is consumed regularly in the youngest age range (perhaps better described as a particular stage in brain maturation centered in this age range), the addiction-producing effects on those who have two copies of the major allele are accelerated compared to those who do not, leading to rapid transition from regular alcohol use to alcohol dependence. [This may be in part responsible for the "telescoping of trajectory" effects reported in Hussong et al.] Those without two copies of the major allele may take longer to manifest the effects of alcohol use. As the age of the initiation of alcohol use increases, it appears that the cumulative risk for alcohol dependence when carried into the adult years is greater in those without two copies of the major allele than in those with two copies. We draw this last conclusion on the basis of the trend tests on our own data and the results of the studies of Wang et al. and Dick et al. In those who become regular users of alcohol under the age of 16, a majority of those who became alcohol dependent within two years had the risk genotype; the majority of those who become alcohol dependent four years or more after their onset of regular drinking did not have the initial risk genotype. The frailty effect would play a role if there were relatively easy access to alcohol in the youngest age range, at least for those most at risk. Among those who have the major alleles, those who are genetically most vulnerable become alcohol dependent rapidly, leaving only those who have some protective factor.
Thus risk for those with the major alleles will decrease with age, since those without the protective factors will have become alcohol dependent, leaving primarily those with protective factors at potential risk. We also note that if the illicit drug user population had easier access to alcohol than the population as a whole, the greater genetic effects seen in the illicit drug user sub-sample might in part be the result of a gene–environment interaction, akin to those described in Dick and Kendler, in which looser social controls over behavior accentuate genetic effects.

Since 80% of the illicit drug use sub-sample are from COGA rather than community families, this is a plausible hypothesis. The specific environment of the most vulnerable group is more likely to accentuate genetic effects than to diminish them. We found that SNPs reported to be significant in adults were significant in adolescents in this sample, particularly for those in the youngest age ranges and for those who had ever used an illicit drug. However, in our results, the major allele was the risk allele, while in the results of Wang et al. and consequently of Dick et al., the minor allele was the risk allele. Our results do not contradict those of Wang et al. and Dick et al.; the results are mutually consistent. Instead, they reveal a novel age-specific risk factor undetectable by solely examining the condition of alcohol dependence rather than its age of onset. In view of the age differences between the sample studied in this paper and the sample used in the studies of Wang et al. and Dick et al., it is not possible for them to contradict one another. In the Wang et al. study, about 5% of the alcohol dependent subjects had ages of onset of less than 16 years. This is too small a fraction to have an effect on the results. As we noted in our discussion of the trend tests, in our study the genotypic distributions of the alcohol dependent subjects change with age of onset. While we do not observe a significant SNP effect in the oldest age range with DTSA, the fraction of subjects with the minor allele among those who become alcohol dependent is greater than the fraction with the minor allele among those who do not. This trend acts to produce similar genotypic distributions for alcohol dependent and non-alcohol-dependent subjects when considered regardless of age of onset.
In terms of the methodology, DTSA requires that there be differences in genotypic distributions between alcohol dependent and non-alcohol-dependent subjects to give a statistically significant result for a SNP; this is not true for the family-based method used by Wang et al. Our interpretation is that family-based studies are more powerful than the type of association study employed here; the absence of a distributional difference does not mean that there is no genetic effect. In the age ranges and samples in which we found that ERO was significant for the onset of alcohol use or alcohol dependence, it was the lower values which characterized the risk factor, which is consistent with the results in adolescents and young adults in the studies by Rangaswamy et al., Kamarajan et al., and Gilmore et al.

That no effects of regular alcohol use on ERO values were found is consistent with similar results obtained by Perlman et al. It is important to note that the objectives of the twin studies considered here and of this study are quite different. The twin studies investigate the presence of a "disease" condition, although exactly which condition varies considerably among studies. The objective of this study, as a survival analysis, is to analyze the factors contributing to an event, the onset of a condition. Once the condition has come to pass, it is not of further interest in survival analysis. The genetic effects which produce the condition are only significant at the onset of the condition, and their effects persist only if the subsequent onset of the condition in other subjects is attributable to them. That is, in the longitudinal studies using multistage models, the affected subjects are retained throughout the study subsequent to their becoming affected, while in the survival analysis method used in this study, the affected subjects are removed from consideration once they have become affected, and no longer influence the results. Therefore, although the use of a longitudinal multi-stage model in van Beek et al. and Baker et al. enables genetic influences to have age-specific characteristics, these effects are modeled as persisting through time as a result of an effect at a single age range. If early onset alcohol use is associated with the more genetically determined form of alcoholism, then it would be expected that genetic factors leading to early drinking and dependence would be manifest. Our results are consistent with this hypothesis.
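The distinction drawn above, that in a discrete-time survival analysis subjects stop contributing observations once the event has occurred, can be made concrete by sketching how a person-period data set is typically built. The function below is a generic illustration, not the authors' code; the starting age of 12 and the record layout are assumptions for the example.

```python
def person_period_rows(subjects, start_age=12):
    """Expand (subject_id, onset_age, last_observed_age) records into one
    row per year at risk; onset_age is None for subjects never affected.
    A subject contributes rows only up to and including the year of onset;
    after onset they are removed, unlike in a multistage longitudinal model
    where affected subjects remain in the data."""
    rows = []
    for sid, onset_age, last_age in subjects:
        end = onset_age if onset_age is not None else last_age
        for age in range(start_age, end + 1):
            event = int(onset_age is not None and age == onset_age)
            rows.append((sid, age, event))  # (id, age, event indicator)
    return rows
```

Each row can then carry time-varying covariates (for example, ERO values or genotype-by-age-range terms) and be fed to an ordinary logistic regression, the standard estimation device for discrete-time survival models.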
The pattern of genetic results obtained here, albeit from a single gene, is weighted towards the strongest effects manifesting early.

Originally formulated over twenty years ago, and recently updated, the neural diathesis-stress model proposes that the hypothalamic-pituitary-adrenal (HPA) axis is the central physiological mechanism linking psychosocial stress to the onset and exacerbation of schizophrenia and related psychotic disorders. A central tenet of this model is that individuals with increased vulnerability to psychosis are more sensitive to the effects of psychosocial stressors due to abnormalities within the HPA axis, which in turn contribute to dopaminergic and glutamatergic abnormalities that eventually trigger expression of psychotic illness. In support of the model, accumulated evidence indicates that patients with psychosis exhibit elevated basal cortisol relative to healthy controls, but a blunted cortisol awakening response (CAR), the latter thought to represent a distinct HPA axis component, independent of stress-induced cortisol secretion.

Individuals with severe mental illness may be prone to develop SUDs

For participants who reported regular drinking, we asked how many drinks they consumed in a typical day, using a question modified from NSHAPC. If participants drank 5 or more drink-equivalents in a day, we classified this as binge drinking. We asked participants to report whether they had ever received treatment for use of alcohol or illicit substances. Participants self-reported their health status, which we dichotomized as fair or poor versus good, very good, or excellent. Based on the National Health and Nutrition Examination Survey, we asked participants to report whether a health care provider had told them that they had diabetes, emphysema or chronic obstructive pulmonary disease, asthma, stroke, coronary artery disease or a heart attack, congestive heart failure, cirrhosis, or cancer. We asked participants whether they had tested positive for HIV infection or had ever been told they had AIDS. To assess functional status, we asked participants if they had difficulty completing individual ADLs because of a physical, mental, emotional, or memory problem, using the questionnaire developed by Katz. Using logistic regression, we assessed bivariate relationships between independent variables and the two dependent variables: moderate or greater severity alcohol symptoms, and moderate or greater severity symptoms for any illicit substance. We dichotomized race as African American versus non-African American because there were no significant differences among white, Asian, Latino, and other participants in the odds of substance use outcomes when only these participants were included. To construct multivariate models, we selected independent variables based on bivariate relationships with p≤0.20 and performed backward stepwise elimination, retaining independent variables with multivariate p≤0.05.
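The two-stage model-building procedure described above (bivariate screening at p≤0.20, then backward stepwise elimination retaining p≤0.05) can be sketched as follows. This is an illustration, not the study's code: the variable names, the p-values, and the `fit_pvalues` callback standing in for an actual logistic regression refit are all hypothetical.

```python
def screen_candidates(bivariate_p, alpha=0.20):
    """Stage 1: keep variables whose bivariate p-value with the outcome is <= alpha."""
    return [v for v, p in bivariate_p.items() if p <= alpha]

def backward_eliminate(variables, fit_pvalues, alpha=0.05):
    """Stage 2: repeatedly refit the multivariate model and drop the least
    significant variable until every remaining p-value is <= alpha."""
    kept = list(variables)
    while kept:
        pvals = fit_pvalues(kept)                # one p-value per variable in this fit
        worst = max(kept, key=lambda v: pvals[v])
        if pvals[worst] <= alpha:
            break                                # all remaining variables significant
        kept.remove(worst)
    return kept

# Made-up p-values for illustration only (not study data).
bivariate = {"age": 0.45, "male_sex": 0.08, "depression": 0.01, "school_expulsion": 0.003}
candidates = screen_candidates(bivariate)        # "age" fails the 0.20 screen

def fake_fit(vars_):
    # Stand-in for refitting the logistic model on the kept variables.
    fixed = {"male_sex": 0.03, "depression": 0.04, "school_expulsion": 0.12}
    return {v: fixed[v] for v in vars_}

final = backward_eliminate(candidates, fake_fit)  # drops "school_expulsion" (p=0.12)
```

In practice `fit_pvalues` would wrap a real logistic regression fit (e.g. with statsmodels), but the selection logic is the same.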

As a sensitivity analysis, we created an alternate model including only participants with active illicit drug use in the previous six months, to examine whether participants scored as having moderate severity ASSIST symptoms on the basis of significant cravings alone differed significantly from those who were actively using. HOPE HOME participants had a median age of 58 years. Over three-quarters, 77.1%, were male, and 79.1% were African American; 21.7% were veterans. Approximately a third had PTSD symptoms and 38.3% had major depressive symptoms; 22.3% had both depressive and PTSD symptoms. Nearly half experienced their first episode of homelessness after they turned 50 years old. Almost a third had ever been suspended or expelled from school. About a third experienced physical abuse as a child, while 13.2% experienced sexual abuse as a child. Approximately half experienced physical abuse as an adult, while 13.2% experienced sexual abuse as an adult. The majority, 55.7%, reported fair or poor health status, and 38.9% reported being dependent in one or more ADLs. Almost two-thirds of our sample, 63.1%, had used an illicit substance in the last 6 months, and 64.6% had moderate or greater severity symptoms for at least one illicit drug, with 14.5% reporting severe symptoms. Almost half, 49.2%, had used alcohol in the last 6 months, while 25.8% had moderate or greater severity alcohol use, with 14.6% reporting severe symptoms. Approximately a tenth of HOPE HOME participants reported binge drinking, and 60% had drunk at least three times a week to get drunk at some point in their life. For illicit drugs, those most commonly used in the last 6 months were cannabis (48.0%), cocaine (37.7%), opioids (7.4%), and amphetamines (7.1%). The three most commonly reported drugs with moderate or greater severity symptoms were cocaine, cannabis, and opioids.
For participants with ASSIST-defined moderate or higher severity illicit drug symptoms, 91.6% had used an illicit substance in the last six months. For cannabis, 92.7% of participants with moderate or higher severity symptoms had used that specific substance in the last 6 months: 79.5% for cocaine, 67.8% for opioids, and 67.9% for amphetamines. All participants with high severity illicit drug use, and all participants with moderate or greater severity alcohol use had used in the last 6 months. A smaller proportion reported injecting drugs in the last 6 months.

Almost a third reported moderate or greater severity symptoms for more than one illicit substance; 21.4% reported moderate or greater severity symptoms for both cannabis and cocaine. Less than a tenth of the sample used illicit non-cannabis drugs regularly (three times a week or more) prior to age 18, while 36.6% used cannabis regularly and 15.1% used alcohol regularly before age 18.

1.3.3 Multivariate Analysis

In multivariate models, having been expelled or suspended from school was associated with moderate or greater severity illicit drug symptoms, as were having a history of psychiatric hospitalization and having one's first episode of adult homelessness prior to the age of 50. In a multivariate model examining factors associated with moderate or greater severity alcohol symptoms, expulsion/suspension from school, male sex, and the presence of major depressive symptoms were associated. None of age, race/ethnicity, veteran status, education, PTSD symptoms, or a history of sexual or physical abuse was associated in either model. An alternate modeling strategy that restricted moderate or greater severity illicit drug symptoms to those with active current use, as opposed to including all who met moderate severity ASSIST scores, changed p-values by less than 10%. In the alternate model, history of psychiatric hospitalization was no longer significant. In a sample of homeless adults aged 50 and older, we found a high prevalence of substance use, with 14.5% reporting severe illicit substance symptoms compared to 6% of a sample of homeless-experienced adults of all ages engaged in primary care. When comparing current illicit substance use of HOPE HOME participants to a 1996 community-based sample that included all adults, NSHAPC, we find higher current illicit substance use, 63.1% vs.
23%. Moderate or greater severity alcohol use was high, with 25.6% of participants reporting moderate or greater severity alcohol use compared to 4.6% of adults of all ages in a VA primary care clinic. When comparing current binge drinking of HOPE HOME participants to NSHAPC, 10.3% of HOPE HOME participants reported binge drinking compared to 19% of NSHAPC participants. Approximately three-fifths of both HOPE HOME participants and NSHAPC participants drank at least three times a week to get drunk at some point in their life. Participants were medically complex. They reported a high prevalence of chronic conditions, ADL dependencies, and poor health status, and therefore had an elevated risk of harm from substance use.

Consistent with prior research, we found that psychiatric hospitalization and depression were associated with substance use. The association between psychiatric hospitalization and illicit substance use is likely bidirectional. Individuals with SUDs may exhibit severe behavioral symptoms such as hallucinations, disorganization, or aggressive behavior that may lead to psychiatric hospitalization. The association of depressive symptoms with moderate or greater severity alcohol symptoms, and of psychiatric hospitalization with moderate or greater severity illicit drug symptoms, has clinical relevance for programs designed to address substance use in older adults. An ideal program would integrate treatment of mental health conditions with treatment of substance use disorders. There are limited numbers of geriatric psychiatrists, particularly those who work in urban underserved populations. Existing programs may be ill-equipped to accommodate an older adult with multiple medical comorbidities and functional dependencies. Young adult life experiences prior to age 50, specifically onset of homelessness prior to age 50 and a history of expulsion or suspension from school, were associated with participants' later substance use. Substance use may be an important cause of homelessness for younger adults. In contrast, other causes such as medical illness, financial hardship, or loss of social support may be more likely to cause homelessness for adults who become homeless later in life. The finding that at-risk illicit drug use is associated with a history of homelessness earlier in life is germane to public health programs seeking to prevent incident homelessness in younger adults. The only factor associated with both moderate or greater severity alcohol and illicit substance symptoms was a history of expulsion or suspension from school. Suspension and expulsion could be a marker of early substance use.

Prior research has shown that students attending schools that suspended or expelled students for drug infractions at high rates were, after controlling for baseline substance use prevalence, more likely to have a higher prevalence of substance use at one-year follow-up. Unintended negative consequences of suspension and expulsion include disengagement from school, loss of social support, and increased risk of arrest. Adults who become homeless earlier in life tend to have more early-life adverse experiences involving mental health problems, imprisonment, and substance use.44 It is possible that the association between suspension or expulsion from school and greater severity illicit drug and alcohol use reflects the impact of early life experiences on the risk of developing risky substance use later in life. When we compare HOPE HOME participants' illicit drug use prevalence to NSHAPC, we find a higher prevalence of substance use for HOPE HOME participants, despite HOPE HOME participants reporting on a shorter duration of time and all being age 50 and older. Our primary explanation is that adults born in the baby boom have different substance use patterns than those born in prior generations. This is supported by literature in the general population.45 However, we cannot exclude the possibility that local differences in substance use prevalence could contribute to the observed difference. Other alternative explanations are less likely. A higher percentage of HOPE HOME participants, who were age 50 and older, had been homeless for a year or more compared to NSHAPC adults of all ages, 67.1% vs. 55%. However, this is unlikely to account for the difference.
Other data have shown that, regardless of substance use behaviors, there is a trend of increasing duration of homelessness in the United States.15 Another alternative explanation is that HOPE HOME included individuals who did not access, or had limited contact with, homeless services, and who therefore had less access to treatment interventions. However, only 1.6% of HOPE HOME participants did not access any services in the last six months, so this difference alone is unlikely to account for the differences we found in substance use prevalence. In spite of these possible alternate explanations, we believe that the magnitude of the difference suggests that illicit substance use is higher in older homeless adults than was noted twenty years ago. The older homeless population now is made up of people born in the baby boom. Contemporary older homeless participants experienced the crack cocaine epidemic in their young adult lives. Data from the general population suggest that individuals born in the baby boom have a higher prevalence of substance use than those born in previous decades. We found a lower prevalence of binge drinking in HOPE HOME participants. The younger age distribution of NSHAPC participants may be responsible for the observed differences in binge drinking, as binge drinkers are less likely to survive to late adulthood.46 Our study has several limitations. We were not able to account for clustering of our sample among sites, as we did not have accurate counts of the numbers of unique visitors at each site. We do not consider recruitment sites to be clusters because of the mobility of the population and the tendency of participants to frequent multiple recruitment venues. Given that our sample was predominantly African American and had a narrow age range, we were not powered to examine use by race and ethnicity, or by age.
A potential concern with use of the ASSIST is that some participants were classified as having moderate severity illicit drug symptoms despite not having used that particular substance in the last 6 months, i.e., participants with remote use but active cravings for an illicit substance. The ASSIST is designed to identify patients at risk of harm from substance use and most in need of intervention. We chose to include these participants in our modelling strategy because this ASSIST score is clinically relevant. However, one variable, history of psychiatric hospitalization, was no longer significant when excluding those without active use. Alongside the high prevalence of substance use in older homeless adults, there is increasing evidence of negative effects of substance use. When the median age of the homeless population was younger, the majority of substance use-related mortality in the homeless population occurred in younger homeless adults. However, between 2003 and 2008, older homeless adults' mortality attributed to illicit drugs approached that of younger adults, and older homeless adults' mortality attributed to alcohol exceeded that of younger adults.
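The primary versus sensitivity classification described above (all participants at moderate-or-greater ASSIST severity, versus only those with use in the past six months) can be sketched as a simple filter. The field names and the score cutoff below are illustrative placeholders, not the study's actual data schema or the official ASSIST thresholds.

```python
def flag_moderate(participants, require_active_use=False):
    """Return ids of participants with moderate-or-greater severity symptoms.

    With require_active_use=True this mirrors the sensitivity analysis:
    cravings-only participants (no use in the past 6 months) are excluded.
    Field names and the cutoff of 4 are hypothetical illustrations.
    """
    flagged = []
    for p in participants:
        if p["assist_score"] < 4:               # below moderate severity: skip
            continue
        if require_active_use and not p["used_past_6mo"]:
            continue                            # cravings-only case excluded
        flagged.append(p["id"])
    return flagged

# Toy cohort for illustration.
cohort = [
    {"id": 1, "assist_score": 10, "used_past_6mo": True},
    {"id": 2, "assist_score": 6,  "used_past_6mo": False},  # remote use, cravings only
    {"id": 3, "assist_score": 2,  "used_past_6mo": True},
]
primary = flag_moderate(cohort)                              # includes the cravings-only case
sensitivity = flag_moderate(cohort, require_active_use=True)  # excludes it
```

Running both flags on the same cohort and refitting the model on each yields the comparison reported in the text.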

Drugs of abuse provide rewarding, pleasurable feelings that contribute to their reinforcement

The depth of this understanding, specifically of the molecular consequences of adolescent nicotine use, allows for individualized treatment plans with a greater emphasis on medication interactions, care coordination, community resources, education, and advocacy. These clinical adjustments may contribute to decreases in addiction and drug-related emergencies. Prior to drafting this manuscript, the two authors independently evaluated and summarized research articles that addressed adolescent substance use and nicotine's impact on the developing brain and behavior. We conducted a comprehensive review of the literature using two- to three-word combinations of the following keywords: adolescence, substance use, nicotinic acetylcholine receptors, gateway, reward, smoking, tobacco, nicotine, alcohol, psychostimulant, cocaine, amphetamine, cannabis, opioids. We used the electronic databases PubMed and Google Scholar for research articles published in English between January 1968 and November 2018. Articles were included in the review if they discussed nicotine exposure during adolescence, drug sequence patterns, or adolescent substance use. The references from relevant articles and the websites of relevant organizations were also examined for other potential sources of information. Of roughly 80,000 initial search results, approximately 5,000 were reviewed as relevant, non-duplicate articles. To retain focus on adolescent initiation of nicotine products, studies related to maternal tobacco or nicotine exposure were excluded. Studies evaluating other interventions were also excluded to maintain focus on nicotine's effects on brain function and behavior. We grouped studies together according to their methodological similarities, and findings without substantial support or reproducibility were excluded.
Following exclusion and careful analysis of studies based on key results, limitations, suitability of the methods to test the initial hypothesis, and the quality and interpretation of the results obtained, 174 references were selected.

The use of two reviewers and two extensive electronic databases allows for a widespread range of research articles, which maximizes scientific credibility and minimizes potential bias. Reward and reinforcing efficacy are measured in animals with drug self-administration on fixed and progressive ratio schedules of reinforcement, intracranial self-stimulation, oral intake, inhalation, and/or conditioned place preference. Although common drugs of abuse, like marijuana, cocaine, alcohol, and opioids, act on different neurotransmitter systems, they all exert their reinforcing effects via the mesolimbic pathway, a dopaminergic pathway that connects the ventral tegmental area to the nucleus accumbens. The development, projections, and functions of this pathway are strongly influenced by acetylcholine, glutamate, serotonin, and GABA. Dopamine release into the nucleus accumbens regulates motivation and desire for rewarding stimuli and facilitates reward prediction. As nAChRs modulate dopamine release, the gateway hypothesis posits that adolescent nicotine exposure primes the brain's reward system to enhance the reinforcing efficacy of drugs of abuse. Substantial epidemiological data suggest that teenagers are more vulnerable than adults to nicotine dependence following minimal tobacco exposure, and individuals who begin smoking during adolescence are more likely to experience difficulty quitting than those who start as adults. Indeed, 90 percent of adult smokers started before age 18. Event-related functional neuroimaging studies in children, adolescents, and adults suggest that children and adolescents have over-reactive reward responses and improved task performance when earning rewards, suggesting enhanced engagement in behaviors that result in immediate gratification. Animal models allow for experimenter-controlled administration of nicotine and investigation of its direct consequences on the brain and behavior through neuroimaging, biochemical assays, and behavioral tests.
Early adolescent rats exposed to intravenous nicotine levels equivalent to one to two cigarettes per day for four days self-administer a greater amount of cocaine, methamphetamine, and alcohol compared to adolescent rats not exposed to nicotine, as well as compared to exposed and unexposed adults.

These data strongly suggest that adolescent nicotine use increases the reinforcing effects of other drugs. In addition, adolescent, but not adult, rodents exposed to nicotine display disruptions in hippocampal learning, long-lasting depressive phenotypes, changes in cocaine sensitivity and reward, enhanced drug-related learning, and deficits in impulse control, executive function, and cognition. Enhanced drug-related learning following brief nicotine exposure during early adolescence is characterized by rapid initiation and cue association of cocaine and amphetamine self-administration, which is indicative of an addictive-like phenotype and is not observed in adolescent and adult controls or in adults also pretreated with nicotine. Furthermore, heightened depressive- and anxiety-like behaviors after 30 days of nicotine abstinence in mice exposed as adolescents, but not as adults, indicate that nicotine exposure and withdrawal can have long-term effects on emotional and cognitive functioning, particularly when nicotine exposure occurs during adolescence. The exact timing of exposure during adolescence is also significant, as nicotine's effects are far greater during early adolescence versus late adolescence or adulthood. Behavioral alterations brought on by developmental nicotine exposure are driven by molecular mechanisms, including epigenetic influences, synaptic activity, and receptor signaling and regulation. Adolescent, but not adult, nicotine exposure in rodents results in the expression of distinct subunits of nAChRs and persistent nAChR upregulation in the midbrain, cerebral cortex, and hippocampus. Due to the role of nAChRs in neurotransmitter release and reward processing, alterations in their quantity and function influence reward behavior.
In addition, brief nicotine exposure in early adolescent rats enhances cellular activity, dopamine D2 receptor signaling, and serotonin 5-HT receptor function in brain reward areas compared to adult rats also exposed to nicotine. Moreover, chronic nicotine exposure during, but not after, adolescence alters gene expression in the ventral tegmental area and stimulates hyperresponsiveness of dopaminergic nerve terminals in the medial prefrontal cortex. These nicotine-induced changes in reward-related neurotransmitters and brain regions during adolescence may contribute to alterations in reward regulation and behavior. The changes in brain function and behavior from developmental nicotine exposure are long-lasting and a consequence of manipulation of the brain's reward network, including the prefrontal cortex, nucleus accumbens, ventral tegmental area, hippocampus, and basolateral amygdala.

Specifically, adult rodents exposed to nicotine as adolescents show a persistent increase in deltaFosB in the nucleus accumbens, impaired GABA signaling in the ventral tegmental area, and changes in brain morphology and gene expression in reward regions. Furthermore, adult rodents exposed to nicotine as adolescents have an increased preference for cocaine, amphetamine, opioids, and higher doses of nicotine. The following section reviews in greater detail the impacts of adolescent versus adult nicotine exposure on subsequent drug use in animal models. The development of alcohol and tobacco use patterns is closely related among teenagers, but the order of progression is not universal across cultural and ethnic demographics. Alcohol and nicotine products are more frequently co-abused than consumed separately: a survey of high school seniors revealed that 88 percent of smokers were drinkers, while 55 percent of nonsmokers were drinkers. However, tobacco use predicts subsequent alcohol use better than the reverse. Individuals who initiate smoking before age 17 are at a higher risk of alcohol abuse and dependence than those who begin after 17. These studies lead to the hypothesis that adolescent exposure to nicotine may lead to enhanced alcohol intake later in life.
Adolescent susceptibility to co-use of nicotine and alcohol is also observed in rodents, as concurrent self-administration of both drugs in adolescent, but not adult, rats is reinforcing and leads to an increase in subsequent oral alcohol intake. Moreover, a different nicotine exposure paradigm promotes long-lasting increases in alcohol self-administration exclusively in nicotine-treated adolescents. Nicotine exposure during adulthood can also change subsequent alcohol consumption, which indicates the influence of nicotine on alcohol reward and reinforcement; however, enhanced alcohol intake is more likely to occur if nicotine is administered prior to alcohol access. These findings collectively indicate that nicotine exposure during adolescence enhances alcohol consumption more than if the same exposure occurs later in life. In humans, adolescent exposure to nicotine influences the likelihood of other psychostimulant use, including cocaine and methamphetamine. Data from a 1994 National Household Survey on Drug Abuse report that individuals who smoked cigarettes before age 15 were up to 80 times more likely to use illegal drugs than those who did not, with cocaine being the most likely drug to be used among young cigarette smokers.

A separate study of a cohort representative of the U.S. population revealed that the rate of cocaine dependence was highest among cocaine users who initiated cocaine after having smoked cigarettes, and the rate of dependence was much lower among those who initiated cocaine before smoking. Preclinical studies also demonstrate associations between adolescent nicotine exposure and psychostimulant consumption. Chronic nicotine exposure differentially alters cocaine-induced locomotor activity and intravenous cocaine self-administration in adolescent versus adult rodents. Adolescent rats exposed to nicotine become considerably more sensitized to the locomotor-activating effects of cocaine compared to non-exposed adolescents. Nicotine exposure during adolescence, but not adulthood, also encourages increased self-administration of cocaine during adulthood, suggesting that nicotine use may carry a greater risk during adolescence than adulthood. The effects of adolescent nicotine pretreatment on psychostimulant reinforcement and locomotor activity are mediated by nAChRs and serotonergic receptors. In addition, chronic and sub-chronic nicotine-exposed adolescent rats show greater preference for and self-administration of cocaine and methamphetamine versus saline-exposed rats. Pre-adolescent nicotine exposure in rats also leads to increased cocaine-primed reinstatement, a model of relapse behavior. In contrast, alcohol pre-exposure in rats does not influence subsequent cocaine self-administration or cocaine relapse behavior, highlighting the unique gateway effects of nicotine on psychostimulant use. In addition to the enhanced use of alcohol and psychostimulants following early nicotine use, cigarette smoking in adolescents and young adults is associated with earlier onset of cannabis use, more frequent cannabis use, and a larger number of cannabis use disorder symptoms compared to those who did not smoke cigarettes. Likewise, teens who use e-cigarettes or hookah are more than
three times more likely to use marijuana, and cannabis users report that nicotine enhances the pleasurable effects of tetrahydrocannabinol (THC), the main psychoactive constituent of marijuana, which exerts its effects via cannabinoid receptors. The endocannabinoid system, which comprises cannabinoid receptors and endogenous ligands throughout the central and peripheral nervous system, plays an important role in cognition, learning and memory, pain relief, emotion, stress, and reward processing. Although little research has been done on nAChR interactions with THC specifically during adolescence, preclinical findings in adults suggest that the cholinergic and endocannabinoid systems interact to modulate reward-related processes. Selective antagonism of α7 nAChRs in rats blocks the discriminative effects of THC and reduces intravenous self-administration of a cannabinoid CB1 receptor agonist. This association appears to be bidirectional, as blockade of CB1 receptors reduces nicotine self-administration in rats. THC impacts adolescents and adults distinctly: adolescent rats experience less of THC's anxiogenic, aversive, and locomotor-reducing effects than adult rats. Nicotine also facilitates THC's hypothermic, antinociceptive, and hypolocomotive effects in mice. Sub-chronic nicotine exposure in adolescent rats induces long-lasting effects on cannabinoid CB1 receptors, including increases in the hippocampus and decreases in the striatum. The association between nicotine and cannabis use, and the role of reward processing in both the cholinergic and endocannabinoid systems, supports the hypothesis that nicotine may encourage and perpetuate cannabis use. The endogenous opioid system is primarily involved in pain relief, reward processing, emotion, stress, and autonomic control, and consists of three families of receptors: mu, delta, and kappa. Opioid receptors located in the brain and periphery are activated endogenously by enkephalins, dynorphins, endorphins, and endomorphins, as
well as exogenously by opioids. Enkephalins, endorphins, endomorphins, and opioids act primarily through mu opioid receptors (MORs) to reduce pain perception, while dynorphins preferentially act at kappa opioid receptors (KORs) to regulate appetite, stress, and emotion. Mu and delta opioid receptors play a critical role in drug reward, whereas KORs participate in drug aversion. Although opioid use has not been extensively evaluated during adolescence, an abundance of clinical and preclinical evidence suggests an important bidirectional relationship between nicotine use and opioid reward. Activation of nAChRs can influence the excitability of opioid-containing neurons, and nicotine-induced dopamine release in the nucleus accumbens is dependent on activation of MORs in the ventral tegmental area. Furthermore, nicotine induces a release of endogenous opioids in the brain, and repeated exposure to nicotine can alter the expression and/or functioning of opioid receptors. Perhaps unsurprisingly, given the significant overlap of the cholinergic and opioidergic systems, clinical data show that treatment with naloxone and naltrexone, both opioid receptor antagonists, reduces tobacco smoking and craving for tobacco smoke. In addition, opioid-dependent smokers present with more severe nicotine dependence, respond poorly to smoking cessation medications, and may have a higher risk of relapse compared to non-opioid-dependent smokers. The relationship between nicotine and the opioidergic system is similarly substantial in preclinical studies, which is important given the roles of both systems in reward processing. In addition, blocking nicotinic receptors reduces the rewarding effects of morphine, and activation of MORs decreases nicotine withdrawal symptoms.

Our findings in Fresno County identified both similar and different risk factors for preterm birth

Data used for the study were received by the California Preterm Birth Initiative at the University of California San Francisco by June 2016. Not only did the risk models differ by residence within Fresno County, but the percentage of women with a given risk varied greatly for some factors. In urban residences, 12.2% of women with preterm births smoked, while 6.6% of women in rural residences with preterm birth smoked. Similarly, 8.9% of urban women with a preterm birth used drugs or alcohol, while 4.4% of women in rural residences with preterm birth did. Nearly five percent of urban women delivering preterm had fewer than three prenatal care visits, and 2.3% of women in suburban residences had this few visits. The percentage of women with a preterm birth and with an interpregnancy interval of less than six months ranged from 7.7% to 11.2%. When examining these risk factors in more geographic detail, appropriate targets for preterm birth reduction are elucidated. For instance, in six census tracts, 15% or more of mothers of preterm infants smoked during their pregnancy: four in urban residences and two in suburban residences. Also, five census tracts in urban residences show that over 10% of mothers who delivered preterm used drugs or alcohol. Over 2,600 women delivering in Fresno County had a cumulative risk score for preterm birth of 3.0 or greater: 2.2% of women living in urban residences, 4.1% in suburban, and 3.7% in rural residences had this high risk score. In this study of preterm births in Fresno County, we found that the type and magnitude of risk and protective factors differed by the residence in which women live.
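The residence-stratified percentages reported above (e.g., smoking among mothers of preterm infants by urban versus rural residence) come down to a simple stratified tabulation, which can be sketched as follows. The records and field names here are illustrative toy data, not the Fresno County birth file.

```python
from collections import defaultdict

def pct_with_risk(records, risk_field):
    """Percent of preterm births carrying a given risk factor, by residence type."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        if not r["preterm"]:
            continue                      # denominators are preterm births only
        totals[r["residence"]] += 1
        if r[risk_field]:
            hits[r["residence"]] += 1
    return {res: round(100 * hits[res] / totals[res], 1) for res in totals}

# Toy records for illustration (field names are hypothetical).
births = [
    {"residence": "urban", "preterm": True,  "smoked": True},
    {"residence": "urban", "preterm": True,  "smoked": False},
    {"residence": "rural", "preterm": True,  "smoked": False},
    {"residence": "rural", "preterm": False, "smoked": True},
]
rates = pct_with_risk(births, "smoked")   # {'urban': 50.0, 'rural': 0.0}
```

The same tabulation, keyed on census tract instead of residence type, yields the tract-level targets discussed in the text.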

Black women and women with diabetes, hypertension, infection, fewer than three prenatal care visits, a previous preterm birth, or an interpregnancy interval of less than six months were at increased risk of preterm birth, regardless of location of residence. Public insurance, maternal education of less than 12 years, underweight BMI, and an interpregnancy interval of five years or more were identified as risk factors only for women in urban residences. Women living in urban locations who were born in Mexico or who were overweight by BMI were at lower risk for preterm birth; WIC participation was protective for women in both urban and rural locations. Taken together, these findings suggest that targeted place-based interventions and policy recommendations can be pursued. The preterm birth risk factors identified in these analyses are not unique to Fresno County: previous work has also shown that women of color, women with lower education or lower socioeconomic status, women with co-morbidities such as hypertension and diabetes, women who smoke, and women with a short interpregnancy interval are at elevated risk of preterm birth. In Fresno County, however, we observed that these risks differ in magnitude. This is critical, as the percentage of women in each region with a given risk factor can vary greatly. The degree of risk was mild, only a 1.1-fold increase. However, 72% of the population giving birth in rural Fresno County is Hispanic, suggesting that interventions reaching this population may provide the most impact. Similarly, Black women were at elevated risk of preterm birth regardless of location of residence. Since urban residences have the highest percentage of Black women and rural the lowest, focusing prevention efforts for Black women in urban residences may be an effective approach.
Others have found that pre-pregnancy initiation of Medicaid has been associated with earlier initiation of prenatal care, a factor that may reduce preterm birth rates. In addition, participation in the WIC program has shown a moderate reduction in the risk of a small-for-gestational-age infant and has been associated with reduced infant mortality in Black populations.

Fresno women from both urban and rural residences who participated in the WIC program were less likely to deliver preterm, while those women living in urban locations who were publicly insured through Medi-Cal coverage for delivery were at increased risk for preterm birth. Low income is a criterion for both public assistance programs, and over 32% of families in this region live below the poverty line; it is apparent that socioeconomic status is a complex risk factor for preterm birth. A key takeaway message from this study is that women who accessed prenatal care more frequently – three or more prenatal care visits – were less likely to deliver preterm. Fresno County may be able to improve preterm birth rates by addressing factors that encourage prenatal care access, which may include enrollment in Medi-Cal during the preconception period and increasing WIC participation. Identifying regions where a high percentage of women do not access three or more prenatal care visits may suggest locations for interventions such as home visits or mobile clinics. Using a large administrative database allows for examination of rates and risks that would not be possible with other data sources. Despite these strengths, the study has some critical limitations. By design, the findings are very specific to one area of California and may not be as applicable to other areas of the state, country, or world. In fact, we recently conducted a similar study examining preterm birth risk factors by sub-type for all of California. Similar to the entire California population, we demonstrated increased risk of preterm birth for Fresno County women who were of Black race/ethnicity, who had diabetes or hypertension during pregnancy, or who had a previous preterm birth. However, Fresno County was different from the whole state in a few ways.
Unlike the state of California as a whole, Hispanic women, women over 34 years at delivery, and underweight women in urban residences in Fresno County were at increased risk for preterm birth. Also, education over 12 years did not provide protection against preterm birth in any of the Fresno County residences, although higher education did provide protection when we looked at the whole state of California. These differences point to specific pathways occurring in Fresno County that may be distinct from the state as a whole, and demonstrate the value of place-based investigation of risk factors when examining a complex outcome such as preterm birth. Other residences may benefit from similar analyses to identify risk and protective factors that are important on a local level.

An additional limitation, as with most administrative databases, is that accuracy and ascertainment of variables are not easily validated. Previous studies of California birth certificate data suggest that race/ethnicity is a valid measure of self-identified race/ethnicity for all but Native Americans, and that best obstetric estimate of gestation may underestimate preterm delivery rates. Previously reported rates of preterm birth in Fresno County are around 9.5%, whereas the rate was 8.4% overall in our population after removing multiple gestation pregnancies and pregnancies with major birth defects. Additionally, United States estimates of drug dependence/use during pregnancy range from 5.0% to 5.4%, but the rate was only 2.5% in our population. This under-ascertainment may mean that we are capturing the most severe diagnoses, potentially overestimating our risk calculations. Alternatively, under-ascertainment also implies that drug users were likely in our referent population, which would underestimate our risk calculations. This examination of Fresno County preterm birth may provide important opportunities for local intervention. Several populations that deserve targeted interventions were identified as at risk, regardless of location of maternal residence. Interventions focused on diabetes, hypertension, and drug or alcohol dependence/abuse across the county may be effective for preterm birth reduction. We identified several modifiable risk and resilience factors across the reproductive life course that can be addressed to reduce preterm birth rates. Given the complex clinical and social determinants that influence preterm birth, cross-sector collaborative efforts that take into account place-based contextual factors may be helpful and are actively being pursued in Fresno County.
Ultimately, refining our understanding of risk and resilience and how these factors vary across a geography are fundamental steps in pursuing a precision public health approach to achieve health equity. The smoking prevalence among the general U.S. population is estimated to be 14%; however, the prevalence of smoking among individuals experiencing homelessness in the U.S. is 70%. Smoking-caused cancer and cardiovascular disease are the leading causes of death among individuals experiencing homelessness. Previous studies estimating tobacco prevalence among homeless adults have focused exclusively on cigarette smoking. However, with the increasing availability and popularity of alternative tobacco products (ATPs), defined as flavored and unflavored non-cigarette tobacco products such as electronic cigarettes, cigars, or blunts, use of these products has increased among individuals experiencing homelessness. Between 51% and 68% of individuals experiencing homelessness have used one or more forms of ATP in the past 30 days. More than 50% of homeless smokers acknowledge high risk to health from non-cigarette combustible tobacco.

Studies have explored associations between ATP use and past-year cigarette quit attempts and have found mixed results. In one study among homeless smokers, ATP use was not associated with readiness to quit or past-year quit attempts, whereas in a more recent study, ATP use was associated with a higher number of past-year quit attempts compared to cigarette-only smokers. While these studies have contributed to tobacco research by showing that ATP use is common among individuals experiencing homelessness, there are still gaps in our understanding of patterns of ATP use and its consequences. Flavored non-cigarette tobacco use is increasing among the general population, and flavors are the primary motivators for initiation and continued use of ATPs. However, flavors are also associated with long-term addiction and difficulty with smoking cessation. We know of no studies to date that have examined flavored non-cigarette tobacco use among individuals experiencing homelessness. Individuals experiencing homelessness face substantial barriers to smoking cessation, and the use of ATPs could make smoking cessation more difficult. However, some people may consider ATPs such as e-cigarettes a lower-risk alternative to cigarettes, potentially reducing harm. Given the varied uses for ATPs, there is a need for studies that explore how ATP use intersects with cigarette smoking behaviors among individuals experiencing homelessness. Moreover, ATP use is high among persons with mental health and substance use disorders and may be used to alleviate mental health and/or substance use cravings, or may be a marker of severity of illness. ATP users have reported severe and pervasive externalizing-outcome comorbidity compared to cigarette-only users. ATP use may also increase the risk of developing substance use disorders compared to cigarette-only or e-cigarette-only use.
Mental health disorders such as depression, anxiety, bipolar disorder, schizophrenia, and post-traumatic stress disorder are common among populations experiencing homelessness and have also been shown to be associated with tobacco use. Moreover, substance use disorders are highly prevalent among homeless adults. Given the high rates of mental health and substance use disorders among people experiencing homelessness, examining use patterns of ATPs in this sub-population of smokers could help with developing targeted interventions. In this study, we recruited a community-based sample of individuals experiencing homelessness who were current cigarette smokers to explore patterns of ATP use, including in-depth patterns of e-cigarette use, and associations with past-year quit attempts. In addition to providing a larger sample size than previous studies to explore these associations, this study is also the first to report on the use of flavored tobacco and absolute perceptions of harm and addiction among individuals experiencing homelessness. Homeless adults are motivated to quit cigarette smoking and may use ATPs as a cessation method; therefore, we hypothesized that ATP use would be associated with increased past-year quit attempts. We conducted a cross-sectional study of individuals experiencing homelessness who were recruited from eight sites, including emergency shelters, navigation centers, day-time referral programs, and community centers serving homeless adults in San Francisco, California. These sites primarily offered emergency shelter or referral services for individuals experiencing homelessness; no study site offered an on-site smoking cessation program.
Individuals were eligible to participate if they were 18 years or older, had smoked at least 100 cigarettes in their lifetime, currently smoked cigarettes, were receiving services at the recruitment site, and were currently homeless as defined by the Homeless Emergency Assistance and Rapid Transition to Housing Act. We recruited participants between November 2017 and July 2018. We aimed to include participants who would express “typical” or “average” perspectives, and therefore recruited participants using typical case sampling.

Use of several CPUs allowed processing of multiple subjects’ scans to occur in parallel.

No differences were found between groups before initiation, suggesting alcohol use was related to aberrant cortical thinning, as opposed to cortical thickness being predictive of initiation of alcohol use. Furthermore, widespread cortical thinning and volume reduction have also been reported in alcohol-dependent adults in frontal, temporal, and occipital regions. The goals of this study were to use a set of novel analytic approaches to carefully examine within-subjects changes in morphometry and quantify cortical volume changes over time in youth who remained non-drinkers compared to those who initiated heavy drinking. We hypothesized that adolescents who transitioned into moderate to heavy drinking would show smaller cortical volumes, similar to those seen in adolescent drinkers and adult alcoholics, but after a brief period of heavy alcohol exposure. The sample was obtained from a larger ongoing neuroimaging study of 296 adolescents examining neurocognition in youth at risk for substance use disorders. Participants were recruited through flyers sent to households of students attending local middle schools, describing the study as a project looking at adolescent brain development in youth who do or do not use alcohol, and listing major eligibility criteria, financial compensation, and contact information. Informed consent and assent were obtained, and included approval for youth and parents to be contacted for follow-up interviews and scans. Eligibility criteria, substance use history, family history of substance use, and developmental and mental health functioning data were obtained from the youth, their biological parent, and one other parent or close relative.

The study protocol was executed in accordance with the standards approved by the University of California, San Diego Human Research Protections Program. Participants for this study each had one brain scan acquired before the adolescent had any significant alcohol or drug use, and one follow-up scan approximately 3 years later, after half had transitioned into heavy substance use, for a total of 80 scans. At baseline, inclusionary criteria included being between the ages of 12 and 17 and having minimal to no experience with substances: ≤10 total drinks in their life, never with more than 2 drinks in a week; ≤5 lifetime experiences with marijuana and none in the past three months; ≤5 lifetime cigarette uses; and no history of other intoxicant use. Youth with any indication of a history of a DSM-IV Axis I disorder, determined by the NIMH Diagnostic Interview Schedule for Children – version 4.0, were excluded, as were youth who had any indicator of prenatal substance exposure, any history of traumatic brain injury, loss of consciousness, learning disorder, migraine, neurological problem, or serious medical condition, or who were taking a medication that could alter brain functioning or brain blood flow. After screening, approximately 12% remained eligible. Participants in the larger study completed substance use interviews every 3 months, and those who started heavy drinking were selected for a comprehensive annual follow-up with neuroimaging and matched to a non-using control subject on baseline and follow-up age and pubertal development level, gender, race, family history of alcohol use disorders, and socioeconomic status. At follow-up, 20 were defined as heavy drinkers; 20 continuous non-drinkers were selected to match the characteristics of the heavy drinkers. Participants were assessed using rigorous follow-up procedures, with an overall follow-up rate of 99% through Year 6.
Specifically, every three months after the baseline interview and imaging were complete, participants were interviewed to assess current substance use and psychiatric functioning. Those who met criteria for heavy drinking were invited to return and complete full annual in-person assessments, including neuroimaging. Each participant who endorsed heavy drinking was matched to a demographically similar participant who continued to endorse no substance use throughout the follow-up for comparison. Moderate drinkers were excluded from analysis in this paper.
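The matching procedure described above, pairing each heavy drinker with the most demographically similar continuous non-drinker, can be sketched as a greedy nearest-neighbor search. The feature names, squared-difference distance, and greedy strategy below are illustrative assumptions, not the study's actual algorithm.

```python
# Illustrative sketch of nearest-match control selection: each heavy
# drinker is paired with the closest available non-drinker on a set of
# matching features. Features and distance metric are assumptions.

def match_controls(drinkers, nondrinkers, features):
    """Greedy nearest-neighbor matching without replacement."""
    available = list(nondrinkers)
    pairs = []
    for d in drinkers:
        best = min(available,
                   key=lambda c: sum((d[f] - c[f]) ** 2 for f in features))
        pairs.append((d["id"], best["id"]))
        available.remove(best)  # each control is used at most once
    return pairs

drinkers = [{"id": "HD1", "age": 16.5, "ses": 3}]
controls = [{"id": "C1", "age": 13.0, "ses": 5},
            {"id": "C2", "age": 16.2, "ses": 3}]
print(match_controls(drinkers, controls, ["age", "ses"]))  # [('HD1', 'C2')]
```

In a real design, categorical variables such as gender and family history would be matched exactly and continuous variables matched within tolerances, rather than pooled into one Euclidean distance.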

FreeSurfer 4.5.0 was used and required ~24 h of computational time for image construction, using a dual quad-core Intel Xeon CPU E5420 with a processing speed of 2.50 GHz and 16 GB of RAM. Subtle longitudinal morphometric changes in brain structure were measured using a method developed at UCSD’s Multi-modal Imaging Laboratory, called “quantitative anatomic regional change analysis,” or QUARC. In the QUARC procedure, each subject’s follow-up image is registered to the baseline image using a 12-parameter affine registration and then intensity normalized to the baseline image by an iterative procedure. A deformation field is then calculated from the nonlinear registration and used to align the images at the sub-voxel level, resulting in a one-to-one correspondence between each vertex on the baseline and follow-up images. Subcortical segmentation and cortical parcellation labels from the baseline image were used to extract an average volume change for each region of interest. A visual quality control of the volume change field was performed by a trained technician and supervised by an image analysis expert. The goal of the present study was to use a recently developed longitudinal MRI paradigm to investigate brain volume differences pre- and post-substance-use initiation to disentangle normal adolescent cortical thinning from alcohol-related brain changes. Cortical pruning is a key component of adolescent neural development; however, the heavy drinking group showed exaggerated volume reductions in these areas when compared to controls, consistent with findings from adolescent and adult populations. Overall, adolescent drinkers showed greater volume reductions than demographically matched controls over the ~3 year follow-up period in the left ventral diencephalon, left inferior and middle temporal gyrus, left caudate, and brain stem.
These volumetric changes were positively correlated with lifetime alcohol use and peak number of drinks on an occasion in the past year, suggesting a dose-dependent effect of substance use on cortical thinning. These findings suggest a possible effect of alcohol on neural pruning, in a way that amplifies cortical volume reductions during adolescence. These results parallel previous longitudinal functional MRI findings showing increasing brain activation over time in adolescents who initiate heavy drinking. These observed alcohol-related cortical reductions may help explain why such youth required greater brain activation to perform at the same level as abstinent youth.
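The ROI summary step at the end of the QUARC pipeline, averaging the voxel-wise volume-change field within each baseline label, can be sketched in a few lines. The flattened voxel lists and integer label codes below are illustrative assumptions, not FreeSurfer's actual data structures.

```python
# Minimal sketch of the QUARC ROI summary: average the per-voxel fractional
# volume-change field within each baseline region-of-interest label.
# Flattened lists and label codes are illustrative assumptions.

def roi_volume_change(change_field, labels):
    """Mean fractional volume change per labeled ROI (label 0 = background)."""
    sums, counts = {}, {}
    for change, roi in zip(change_field, labels):
        if roi == 0:
            continue
        sums[roi] = sums.get(roi, 0.0) + change
        counts[roi] = counts.get(roi, 0) + 1
    return {roi: sums[roi] / counts[roi] for roi in sums}

labels = [0, 1, 1, 2]                # voxel -> ROI label
change = [0.0, -0.02, -0.04, 0.01]   # fractional volume change per voxel
result = roi_volume_change(change, labels)
print({r: round(v, 4) for r, v in result.items()})  # {1: -0.03, 2: 0.01}
```

A negative mean for an ROI corresponds to volume reduction between baseline and follow-up, the quantity compared between drinkers and controls above.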

The regions showing alcohol-related volume reductions included subcortical structures, which are important for sensory integration, motor control, feedback processing, and habit learning, as well as inferior and middle temporal cortical structures important in visual object recognition and language comprehension. Previous findings suggest alcohol use interferes with language and visuospatial abilities during adolescence, which is consistent with the brain regions found in this study; continued volume reductions related to sustained drinking during adulthood might also relate to the motor issues and spatial impairments found in adult alcoholics. Volume reductions in the caudate parallel findings from adult alcoholics, while reduced medial temporal volumes parallel previous results seen in adolescent heavy drinkers. While the cause of the accelerated cortical thinning is unclear, alcohol-induced dysregulation of developmental timing may be responsible for the observed effects. NMDA receptor functioning could help explain accelerated thinning in heavy drinkers, as NMDA is vital for strengthening synapses and contributing to the loss of less important connections throughout development. Thus, it is possible that repeated alcohol exposure during adolescence may interfere with normal NMDA-mediated synaptic pruning. Baseline group differences were found in several frontal cortical volumes. Specifically, youth who initiated heavy drinking over the follow-up showed smaller cortical volume in three frontal regions, as well as less cerebellar white matter volume, when compared to youth who remained substance-naïve over the follow-up. At baseline, smaller right rostral anterior cingulate volume was related to poorer performance on a test of executive functioning. These findings suggest heavy drinking youth have subtle brain abnormalities that exist prior to the onset of drinking.
These findings are highly consistent with other recent functional MRI findings of pre-existing lower frontal brain activation in teens who later initiated heavy drinking, compared to continuous controls over a three-year follow-up. The current findings might help explain previous reports in which heavy drinking transitioners showed less brain activation in frontal regions before they initiated alcohol use. Furthermore, the frontal regions identified in this study are important for executive control, including inhibitory functioning, attention, impulsivity, and self-regulation. Poorer inhibitory functioning in substance-naïve youth has been found to be predictive of future substance use, and structural brain differences could help explain these behavioral findings. Limitations should be noted. Although overall the groups were very well matched, follow-up lifetime cannabis use days significantly differed between groups. Cannabis use was related to increasing volume over time, possibly countering the volume reductions related to alcohol use. There is research suggesting cannabis may act as a protective factor for white matter integrity in binge drinking; therefore, volume reductions may have been even more pronounced if we had a completely non-cannabis-using comparison group. There are also statistical limitations to be considered in this preliminary study.
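Two standard multiple-comparison corrections relevant to such statistical considerations, Bonferroni and the Benjamini-Hochberg false discovery rate procedure, can be sketched as follows; the p-values are invented for illustration.

```python
# Minimal sketch of two multiple-comparison corrections: Bonferroni controls
# the family-wise error rate; Benjamini-Hochberg (BH) controls the false
# discovery rate and is typically less conservative. P-values are made up.

def bonferroni(pvals, alpha=0.05):
    """Indices rejected under a Bonferroni-adjusted threshold."""
    return [i for i, p in enumerate(pvals) if p <= alpha / len(pvals)]

def benjamini_hochberg(pvals, alpha=0.05):
    """Indices rejected by the BH step-up procedure at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.012, 0.025, 0.040, 0.600]
print(bonferroni(pvals))          # [0]
print(benjamini_hochberg(pvals))  # [0, 1, 2, 3]
```

The example shows why a finding can survive FDR correction yet fail Bonferroni: with many ROIs tested, the Bonferroni threshold shrinks rapidly while the BH threshold scales with rank.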

Findings did not survive Bonferroni or false discovery rate correction; however, the processing technique utilized is highly sensitive to morphometric brain changes, as each subject’s follow-up image was registered to the baseline image. Furthermore, a typical cubic millimeter of gray matter in an adult contains 35 to 70 million neurons and almost twice as many glial cells, as well as over 500 billion synapses, so even slight differences in cortical thickness could be associated with significant divergence from typical synaptic pruning and gray matter loss across adolescent development. Previous findings suggest that female heavy drinkers may be more vulnerable to aberrant cortical thinning than male drinkers. Unfortunately, our sample size did not provide sufficient power to detect gender effects. The parent study is ongoing and will offer larger sample sizes with more equal gender distributions, which will allow us to more fully address the moderating role of gender in the relationship between drinking and cortical thinning during adolescence. Additionally, the sample is comprised of healthy, high-functioning adolescents, so findings may not generalize to clinical or lower-functioning samples. The observed pattern of results may be more pronounced in those with higher levels of drinking. Despite these limitations, these findings have important clinical and public health implications, particularly given the participants’ limited, sub-diagnostic alcohol use, limited other substance use, and absence of psychopathology. Further work with larger populations is needed to increase statistical power to observe moderating effects of variables of interest and help advance the understanding of the relationship between alcohol exposure and brain morphometry, and subsequent cognitive functioning. The prevalence of alcohol, tobacco, and other substance use is higher among gay, bisexual, and other men who have sex with men than in the overall population.
Although Hughes and Eliason noted that substance and alcohol use have declined in lesbian, gay, bisexual, and transgender populations, the prevalence of heavy alcohol and substance use remains high among younger lesbians and gay men, and in some cases older lesbians and gay men. Marginalization on the basis of sexual orientation increases the risk for problematic substance use. For example, GBM were approximately one and a half times more likely to have reported being diagnosed with a substance use disorder during their lifetime than heterosexual men, and one and a half times more likely to have been dependent on alcohol or other substances in the past year. GBM also have higher rates of mental health issues than their heterosexual counterparts. In a review of 10 studies, Meyer found that gay men were twice as likely to have experienced a mental disorder during their lives as heterosexual men. More specifically, gay men were approximately two and a half times more likely to have reported a mood disorder or an anxiety disorder than heterosexual men. A review by King and colleagues found that lesbian, gay, and bisexual individuals were more than twice as likely as heterosexuals to attempt suicide over their lifetime and one and a half times more likely to experience depression and anxiety disorders in the past year, as well as over their lifetime. Few Canadian studies have explored population-based estimates for mental health outcomes among GBM. In one cross-sectional study of Canadian gay/“homosexual” and bisexual men using 2003 Canadian Community Health Survey data, Brennan and colleagues found participants were nearly three times as likely to report a mood or anxiety disorder as heterosexual men. Pakula & Shoveller conducted a more recent cross-sectional analysis using 2007–2008 Canadian Community Health Survey data and found again that GBM were 3.5 times more likely to report a mood disorder compared with heterosexual males.

These domains and issues are particularly relevant for the SUD workforce as well.

The past two decades have seen significant advances in our understanding of the neuroscience of addiction and its implications for practice [reviewed in ]. However, despite such insights, there is a substantial lag in translating these findings into everyday practice, with few clinicians incorporating neuroscience-informed interventions in their routine practice. We recently launched the Neuroscience Interest Group within the International Society of Addiction Medicine (ISAM-NIG) to promote initiatives to bridge this gap between knowledge and practice. This article introduces the ISAM-NIG key priorities and strategies to achieve implementation of addiction neuroscience knowledge and tools in the assessment and treatment of substance use disorders (SUD). We cover four broad areas: cognitive assessment, neuroimaging, cognitive training and remediation, and neuromodulation. Cognitive assessment and neuroimaging provide multilevel biomarkers to be targeted with cognitive and neuromodulation interventions. Cognitive training/remediation and neuromodulation provide neuroscience-informed interventions to ameliorate neural, cognitive, and related behavioral alterations and potentially improve clinical outcomes in people with SUD. In the following sections, we review the current knowledge and challenges in each of these areas and provide ISAM-NIG recommendations to link knowledge and practice. Our goal is for researchers and clinicians to work collaboratively to address these challenges and recommendations. Cutting across the four areas, we focus on cognitive and neural systems that predict meaningful clinical outcomes for people with SUD and opportunities for harmonized assessment and intervention protocols.

Neuropsychological studies consistently demonstrate that many people with SUD exhibit mild to moderately severe cognitive deficits in processing speed, selective and sustained attention, episodic memory, executive functions (EF), decision-making, and social cognition. Furthermore, neurobiologically informed theories and expert consensus have identified additional cognitive changes not typically assessed by traditional neuropsychological measures, namely, negative affectivity and reward-related processes. Cognitive deficits in SUD have moderate longevity, and although there is abstinence-related recovery, these deficits may significantly complicate treatment efforts during the first 3 to 6 months after discontinuation of drug use. Thus, one of the most critical implications of cognitive deficits for SUD is their potential negative impact on treatment retention and adherence, in addition to clinical outcomes such as craving, relapse, and quality of life. A systematic review of prospective cognitive studies measuring treatment retention and relapse across different SUD suggested that measures of processing speed and accuracy during attention and reasoning tasks were the only consistent predictors of treatment retention, whereas tests of decision-making were the only consistent predictors of relapse. A later review that focused on substance-specific cognitive predictors of relapse found that long-term episodic memory and higher-order EF predicted alcohol relapse, attention and higher-order EF predicted stimulant relapse, and only higher-order EF predicted opioid relapse. Working memory and response inhibition have also been associated with increased risk of relapse among cannabis and stimulant users. Additionally, variation in response inhibition has been shown to predict poorer recovery of quality of life during SUD treatment.
Therefore, consistent evidence suggests that processing speed, attention, and reasoning are critical targets for current SUD treatments, whereas higher-order EF and decision-making are critical for maintaining abstinence. Response inhibition deficits seem to be specifically associated with relapse in cannabis and stimulant users and also predict quality of life. The workforce in the SUD specialist treatment sector is diverse, encompassing medical specialists, allied health professionals, generalist health workers, and peer and volunteer workers.

For instance, in the Australian context, multiple workforce surveys over the past decade suggest that around half the workforce have attained a tertiary-level Bachelor degree or greater. Similarly, US and European data have shown that education qualifications in the SUD workforce are lower than in other health services. Because the administration and interpretation of many cognitive tests are restricted to individuals with specialist qualifications, this limits their adoption in the sector. In addition, when screening does occur in SUD treatment settings, its primary function is to identify individuals requiring referral to specialist service providers for more comprehensive assessment and intervention, rather than to inform individual treatment plans. Two fields in particular have driven progress in cognitive assessment practice for generalist workers: dementia, with an increasing emphasis on screening in primary care, and schizophrenia, where cognitive impairment is an established predictor of functional outcome, necessitating the development of a standardized assessment battery specifically for this disorder. In the selection of domain-specific tests for the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) standard battery, a particular emphasis was placed on test practicality and tolerability, as well as psychometric quality. Pragmatic issues of administration time, scoring time and complexity, and test difficulty and unpleasantness for the client should be considered. The dementia screening literature has also emphasized these pragmatic issues, leading to greater awareness of and access to general cognitive screening tools. To date, the majority of the published literature on routine cognitive screening in SUD contexts has focused on three tests commonly used in dementia screening: the Mini-Mental State Examination, Addenbrooke’s Cognitive Examination, and the Montreal Cognitive Assessment.
Because they were developed for application in dementia contexts, these screening tools place a heavy emphasis on memory, attention, language, and visuospatial functioning. Multiple studies have demonstrated superior sensitivity of the MoCA and the ACE scales compared to the MMSE. It is possible that this arises from the MoCA and ACE including at least some items assessing EF, which are absent from the MMSE.

Indeed, this may demonstrate an important limitation of adopting existing screening tools designed for dementia in the context of SUD treatment. It can be argued that cognitive screening is most beneficial in SUD contexts when focused on SUD-relevant domains, rather than the identification of general cognitive deficits. Accordingly, current neuroscience-based frameworks emphasize the importance of assessing EF, incentive salience, and decision-making in SUD. As such, there is much to be gained by applying a process similar to the MATRICS effort in the SUD field to identify a ‘gold-standard’ set of practical and sensitive cognitive tests that can be routinely used in clinical practice. The most commonly used cognitive assessment approach in SUD research has been the “flexible test battery”. This approach combines different types of tests to measure selected cognitive domains. Attention, memory, EF, and decision-making are the most commonly assessed domains, although there is considerable discrepancy in the tests selected to assess these constructs. Even within specific tests, different studies have used several different versions; for example, at least four different versions of the Stroop test have been employed in the SUD literature. Another commonly used approach is the “fixed test battery”, which involves a comprehensive suite of tests that have been jointly standardized and provide a general profile of cognitive impairment. The Cambridge Automated Neuropsychological Test Battery, the Repeatable Battery for the Assessment of Neuropsychological Status, the Neuropsychological Assessment Battery – Screening Module, and the MicroCog™ are examples of fixed test batteries utilized in SUD research, although these too have limited assessment of EF. Another limitation of these assessment modules is their lack of construct validity, as they were not originally designed to measure SUD-related cognitive deficits.
As a result, they overemphasize the assessment of cognitive domains that are relatively irrelevant in the context of SUD and neglect other domains that are pivotal. A common limitation of flexible and fixed batteries is their reliance on face-to-face testing, normally involving a researcher or clinician, and their duration, which is typically around 60–90 min. To address this gap, a number of semi-automated tests of cognitive performance have been developed and more widely used, including the Automated Neuropsychological Assessment Metrics, the Immediate Post-Concussion Assessment and Cognitive Testing battery, and the CogState brief battery, although validation studies to date suggest they may not yet have sufficient psychometric evidence to support clinical use. Research specifically in addictions has begun to develop and validate cognitive tests that can be delivered in clients’/participants’ homes or via smartphone devices. Evaluations of the reliability, validity, and feasibility of mobile cognitive assessment in individuals with SUD have been scarce, but promising. Cognitive assessment via smartphone applications and web-based computing is a rapidly developing field, following many of the procedures and traditions of Ecological Momentary Assessment (EMA). The flexibility and rapidity of assessment offered by mobile applications make them particularly suited to questions about change in cognitive performance over various time scales. For example, cognitive performance can be assessed in event-based, time-based, and randomly prompted procedures that were not previously feasible, or valid, in laboratory testing. While the benefits of mobile testing for longitudinal research, particularly large-scale clinical trials, appear obvious, the rapidity and frequency of deployment also provide opportunities to examine much shorter delays between drug use behavior and cognition.
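As a rough illustration of the randomly prompted design described above, the sketch below generates a day's prompt schedule within a waking window while enforcing a minimum spacing between prompts. All function names and parameters are illustrative assumptions, not taken from any cited EMA platform:

```python
import random

def schedule_random_prompts(n_prompts, start_hour=9, end_hour=21,
                            min_gap_minutes=60, seed=None):
    """Draw prompt times (minutes since midnight) uniformly within a waking
    window, rejection-sampling until consecutive prompts are spaced apart."""
    rng = random.Random(seed)
    window_start, window_end = start_hour * 60, end_hour * 60
    while True:
        times = sorted(rng.randint(window_start, window_end)
                       for _ in range(n_prompts))
        gaps = [b - a for a, b in zip(times, times[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return times

# Hypothetical participant-day: four random prompts between 09:00 and 21:00.
prompts = schedule_random_prompts(4, seed=42)
print([f"{t // 60:02d}:{t % 60:02d}" for t in prompts])
```

Event-based designs would instead trigger an assessment when the participant logs a drinking or use episode, and time-based designs would replace the random draw with fixed clock times; only the trigger logic differs.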

For example, recent studies have examined whether daily within-individual variability in cognitive performance, principally response inhibition, was associated with the likelihood of binge alcohol consumption. The immediate, dynamic relationship between cognition and drug use has also been targeted for intervention purposes. Web and smartphone platforms have been used to administer cognitive-task-based interventions, such as cognitive bias modification training, in which cognitive performance is routinely measured as a central element of interventions spanning several weeks. The outcomes of these trials show that mobile cognitive-task-based interventions are feasible but not efficacious as stand-alone treatments. However, the combination of cognitive bias modification and normative feedback significantly reduces weekly alcohol consumption in excessive drinkers.

A substantial proportion of people with SUD have cognitive deficits. Alcohol, stimulant, and opioid users have overlapping deficits in EF and decision-making. Alcohol users have additional deficits in learning and memory and in psychomotor speed. Heavy cannabis users have specific deficits in episodic memory and attention. Cognitive assessments of speed/attention, EF, and decision-making are meaningfully associated with addiction treatment outcomes such as treatment retention, relapse, and quality of life. In addition, there is growing evidence that motivational and affective domains are also implicated in SUD pathophysiology and clinical symptoms. For example, both reward expectancy and valuation and negative affect have been proposed to explain SUD chronicity. However, to date, there have been no studies linking these “novel domains” with clinical outcomes. Thus, it is important to explore the predictive validity of non-traditional cognitive-motivational and cognitive-affective domains in relation to treatment response.
While flexible and fixed test batteries are the most common assessment approaches, data comparability is alarmingly low, and future studies should aim to apply harmonized methods. Remote monitoring and mobile cognitive assessment remain at a nascent stage for SUD research and clinical care. It is too early to make accurate cost-benefit assessments of the different mobile methodologies. Yet their potential to provide more cost-effective assessment, with larger and more representative samples and in greater proximity to drug use behavior, justifies continued investment in their development.

One of the main challenges for the cognitive assessment of people with SUD is the disparity of tests applied across sites and studies, and the lack of a common ontology and harmonized assessment approach. Furthermore, harmonization efforts must accommodate clinicians’ needs, including brevity, simplicity, and automated scoring and interpretation. Mobile cognitive testing is a highly promising approach, although its reliability and validity are influenced by a number of key factors. Test compliance, or the lack thereof, can be problematic: a recent meta-analysis suggested that the compliance rate for EMA with SUD samples falls below the recommended rate of 80%. Designs including participant-initiated, event-based assessments were associated with compliance issues, whereas duration and frequency of assessment were not. While the latter finding suggests that extensive cognitive assessment may be feasible with mobile methods, caution is advised regarding the scope and depth of the data that can be obtained with these brief assessments and the validity of the data sets collected. Remote methods for assessing confounds such as task distraction, malingering, and “cheating” are not well established or validated. As the capabilities of smartphones increase, so will the potential to minimize or control for such variables.
Face-recognition and fingerprint technology have been proposed to ensure identity compliance, although this raises ethical issues regarding confidential and de-identified data collection from samples that engage in illicit drug use.
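To make the 80% compliance benchmark mentioned above concrete, the following minimal sketch computes a participant's EMA compliance rate from hypothetical counts (the figures and the function are illustrative only, not drawn from the cited meta-analysis):

```python
def compliance_rate(prompts_delivered, prompts_completed):
    """Proportion of delivered EMA prompts that were completed."""
    if prompts_delivered <= 0:
        raise ValueError("no prompts delivered")
    return prompts_completed / prompts_delivered

# Hypothetical participant: 56 prompts delivered over 14 days, 41 completed.
rate = compliance_rate(56, 41)
print(f"{rate:.0%}, meets 80% benchmark: {rate >= 0.80}")
```

In a real study the same calculation would typically be aggregated per participant and per design condition (event-based vs. time-based), since the text notes that participant-initiated designs are the ones associated with lower compliance.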