This study suggests that in middle-aged PWH without severe confounding medical conditions and with high rates of ART use, there is not a greater than expected decline in delayed recall. However, more research is needed to determine definitively whether there is accelerated memory decline in middle-aged PWH. Lastly, while there was some indication that peripheral CRP may be associated with memory, most biomarkers of inflammation were not associated with episodic memory, and the medial temporal lobe did not mediate a relationship between inflammation and episodic memory. However, given the limitations described above, ongoing research on this topic is needed. In summary, this study found that memory may be more related to HIV disease than to preclinical AD, and delayed recall did not significantly decline over several years. This is positive news given that HIV-associated neurocognitive impairment is usually non-progressive. However, more research is needed in older PWH, in whom aMCI/AD would be more expected.

Brief interventions have empirical support for acutely reducing alcohol use among non-treatment-seeking heavy drinkers. For example, randomized clinical trials of brief interventions have found favorable results among heavy drinkers reached through primary care, trauma centers, and emergency departments. Brief interventions have also shown effectiveness in reducing alcohol use in non-medical settings among a young adult college population. Given this sizable evidence base, there is considerable interest in understanding the underlying mechanisms toward optimizing this approach.

Neuroimaging techniques allow for the examination of the neurobiological effects underlying behavioral interventions, probing brain systems putatively involved in clinical response to treatment. To date, one study has examined the effect of a motivational interviewing-based intervention on the neural substrates of alcohol reward. In this study, neural response to alcohol cues was evaluated while individuals were exposed to change talk and counter-change talk, which are thought to underlie motivation changes during psychosocial intervention. The authors report activation in reward processing areas following counter-change talk, which was not present following exposure to change talk. Feldstein Ewing and colleagues have also probed the origin of change talk in order to better understand the neural underpinnings of change language. In this study, binge drinkers were presented with self-generated and experimenter-selected change and sustain talk. Self-generated change talk and sustain talk resulted in greater activation in regions associated with introspection, including the inferior frontal gyrus and insula, compared to experimenter-elicited client language. These studies employed an active ingredient of MI within the structure of the fMRI task, thus allowing for a more proximal test of treatment effects. Neuroimaging has also been used to explore the effect of psychological interventions specifically focused on alcohol motivation on changes in brain activation. For example, cue-exposure extinction training, a treatment designed to prevent return to use by decreasing conditioned responses to alcohol cue stimuli through repeated exposure to cues without paired reward, has been evaluated using neuroimaging. Alcohol-dependent patients who underwent cue-exposure extinction training had larger decreases in neural alcohol cue-reactivity in mesocorticolimbic reward circuitry than patients who received standard clinic treatment.

Cognitive bias modification training, which similarly trains individuals to reduce attentional bias towards alcohol cues, resulted in decreased neural alcohol cue-reactivity in the amygdala and reduced medial prefrontal cortex activation when approaching alcohol cues. These studies suggest that fMRI tasks may be sensitive to treatment response. Further, neurobiological circuits identified using fMRI can be used to predict treatment and drinking outcomes, providing unique information beyond that of self-report and behavior. Individuals with alcohol use disorder who return to use demonstrate increased activation in the mPFC to alcohol cues compared to individuals with AUD who remain abstinent. Moreover, the degree to which the mPFC was activated was associated with the amount of subsequent alcohol intake, but not with alcohol craving. Activation in the dorsolateral PFC to alcohol visual cues has been associated with a higher percentage of heavy drinking days in treatment-seeking alcohol-dependent individuals. Increased activation in the mPFC, orbitofrontal cortex, and caudate in response to alcohol cues has also been associated with the escalation of drinking in young adults. Mixed findings have been reported for the direction of the association between cue-induced striatal activation and return to use: both increases and decreases in ventral and dorsal striatal activation to alcohol cues have been associated with subsequent return to use. Utilizing a different paradigm, Seo and colleagues found that increased mPFC, ventral striatal, and precuneus activation to individually tailored neutral imagery scripts predicted subsequent return to use in treatment-seeking individuals with AUD. Interestingly, brain activity during individually tailored alcohol and stress imagery scripts was not associated with return to use.

While initial evidence indicates that psychological interventions are effective at reducing mesocorticolimbic response to alcohol-associated cues, few studies have prospectively evaluated whether psychosocial interventions attenuate neural cue-reactivity that in turn reduces drinking in the same population. Furthermore, no previous studies have used neural reactivity to alcohol cues to understand the mechanisms of brief interventions. Therefore, this study aimed to examine the effect of a brief intervention on drinking outcomes, neural alcohol cue-reactivity, and the ability of neural alcohol cue-reactivity to predict drinking outcomes. Specifically, this study investigated: 1) whether the brief intervention would reduce percent heavy drinking days or drinks per week in non-treatment-seeking heavy drinkers in the month following the intervention and 2) whether the brief intervention would attenuate neural alcohol cue-reactivity. In the first case, we predicted significant effects on drinking based on the existing clinical literature and, in the second case, we predicted decrements in alcohol's motivational salience based on the feedback about the participant's drinking levels relative to clinical recommendations and their personal negative consequences of drinking. The effects of neural cue-reactivity on subsequent drinking outcomes were tested in order to elucidate patterns of neural cue-reactivity that predict drinking behavior prospectively.

Participants were recruited between November 2015 and February 2017 from the greater Los Angeles metropolitan area. Study advertisements described a research study investigating the effects of a brief health education session on beliefs about the risks and benefits of alcohol use.
Inclusion criteria were as follows: engaged in regular heavy drinking, as indicated by consuming 5 or more drinks per occasion for men or 4 or more drinks per occasion for women at least 4 times in the month prior to enrollment; and a score of ≥8 on the Alcohol Use Disorders Identification Test. Exclusion criteria included: under the age of 21; currently receiving treatment for alcohol problems, history of treatment in the 30 days before enrollment, or currently seeking treatment; a positive urine toxicology screen for any drug other than cannabis; a lifetime history of schizophrenia, bipolar disorder, or other psychotic disorder; serious alcohol withdrawal symptoms as indicated by a score of ≥10 on the Clinical Institute Withdrawal Assessment for Alcohol-Revised; history of epilepsy, seizures, or severe head trauma; non-removable ferromagnetic objects in the body; claustrophobia; and pregnancy. Initial assessment of the eligibility criteria was conducted through a telephone interview. Eligible participants were invited to the laboratory for additional screening. Upon arrival, participants read and signed an informed consent form. Participants then completed a series of individual differences measures and interviews, including a demographics questionnaire and the Timeline Follow-back to assess quantity and frequency of drinking over the past 30 days. All participants were required to test negative on a urine drug test. A total of 120 participants were screened in the laboratory; 38 did not meet inclusion criteria and 12 decided not to participate in the trial, leaving 60 participants who enrolled and were randomized. Of the 60 individuals randomized, 46 completed the entire study. See Figure 1 for a CONSORT diagram for this trial.

The study was a randomized controlled trial. Participants were assessed at baseline for study eligibility, and eligible participants returned for the randomization visit up to two weeks later.
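As an illustrative sketch (the study's actual screening procedure was interview-based, not automated), the quantitative inclusion rules above can be expressed as a simple check; the function and argument names are hypothetical:

```python
def is_heavy_episode(drinks, sex):
    # 5+ drinks per occasion for men, 4+ for women
    return drinks >= (5 if sex == "M" else 4)

def meets_inclusion(sex, age, drinks_per_occasion, audit_score,
                    in_or_seeking_treatment, positive_non_cannabis_screen):
    """drinks_per_occasion: one drink count per drinking occasion
    in the 30 days prior to enrollment."""
    heavy_episodes = sum(is_heavy_episode(d, sex) for d in drinks_per_occasion)
    return (age >= 21
            and heavy_episodes >= 4            # regular heavy drinking
            and audit_score >= 8               # AUDIT threshold
            and not in_or_seeking_treatment
            and not positive_non_cannabis_screen)
```

The remaining exclusions (psychiatric history, withdrawal severity, MRI contraindications, pregnancy) would be additional boolean checks of the same form.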
During their second visit, participants completed assessments and then were randomly assigned to receive a 1-session brief intervention or an attention-matched control condition. Immediately after the conclusion of the session, participants completed a functional magnetic resonance imaging scan to assess brain activity during exposure to alcohol cues and completed additional assessments. Participants were followed up 4 weeks later to assess alcohol use since the intervention through the 30-day Timeline Follow-back interview. Participants who completed all study measures were compensated $160. The brief intervention consisted of a 30–45 minute individual face-to-face session based on the principles of motivational interviewing. The intervention adhered to the FRAMES model, which includes personalized feedback, emphasizing personal responsibility, providing brief advice, offering a menu of change options, conveying empathy, and encouraging self-efficacy. In accordance with MI principles, the intervention was non-confrontational and emphasized participants' autonomy.

The content of the intervention mirrored brief interventions to reduce alcohol use that have been studied with non-treatment-seeking heavy drinkers. The intervention included the following specific components: 1) giving normative feedback about frequency of drinking and of heavy drinking; 2) reviewing the Alcohol Use Disorders Identification Test score and associated risk level; 3) discussing potential health risks associated with alcohol use; 4) placing the responsibility for change on the individual; 5) discussing the reasons for drinking and downsides of drinking; and 6) setting a goal and change plan if the participant was receptive. The aim of the intervention was to help participants understand their level of risk and to help them initiate changes in their alcohol use. Sessions were delivered by master's-level therapists who received training in MI techniques, including the use of open-ended questions, reflective listening, summarizing, and eliciting change talk, and in the content of the intervention. All sessions were audiotaped and rated by author MPK for fidelity and for quality of MI interventions using the Global Rating of Motivational Interviewing Therapists. On the 7-point scale, session scores ranged from 5.87 to 6.93 with an average rating of 6.61 ± 0.23, indicating that the MI techniques used in the intervention were delivered with good quality. Supervision and feedback were provided to therapists by author MPK following each intervention session. The treatment manual is available from the last author upon request. Participants randomized to the attention-matched control condition viewed a 30-minute video about astronomy. In the control condition there was no mention of alcohol or drug use beyond completion of research assessments.
Both the intervention and attention-matched control sessions took place within the UCLA Center for Cognitive Neuroscience, in separate rooms from the neuroimaging suite.

For the intervention effect on drinking, linear mixed model analyses were conducted to test for the main effect of the intervention on the average number of drinks per week and percent of heavy drinking days in the 4 weeks post intervention. One model was run for each dependent variable. The intercept was a random effect. The models accounted for sex, smoking status, and age as covariates. The intervention effect was evaluated by testing the time-by-condition interaction. Comparative effect size estimates for the effect of the intervention on drinking outcomes were calculated based on adjusted models using d = B(condition × time) / SD(pooled baseline). In addition, the effects of neural cue-reactivity on drinking outcomes were also examined. For the analysis of the cues task, all first-level analyses of imaging data were conducted within the context of the general linear model, modeling the combination of the cue and taste delivery periods convolved with a double-gamma hemodynamic response function and accounting for temporal shifts in the HRF by including the temporal derivative. Alcohol and water taste cues were modeled as separate event types. The onset of each event was set at the cue period with a duration of 11 seconds. Six motion regressors representing translational and rotational head movement were also entered as regressors of no interest. Data for each subject were registered to the MBW, followed by the MPRAGE using affine linear transformations, and then normalized to the Montreal Neurological Institute template. Registration was further refined using FSL's nonlinear registration tool. The Alcohol Taste > Water Taste contrast was specified in the first-level models. Higher-level analyses combined these contrast images within subjects and between subjects.
Age, sex, cigarette smoking status, and positive urine THC were included as covariates. Additional analyses evaluated whether neural response to alcohol taste cues was predictive of drinking outcomes.
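The comparative effect size formula above can be computed directly; this sketch uses the standard two-group pooled baseline standard deviation, and the numbers in the example are invented for illustration:

```python
import math

def pooled_sd(sd_a, n_a, sd_b, n_b):
    # pooled baseline standard deviation across the two arms
    return math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                     / (n_a + n_b - 2))

def comparative_effect_size(b_condition_by_time, sd_a, n_a, sd_b, n_b):
    # d = B(condition x time) / SD(pooled baseline)
    return b_condition_by_time / pooled_sd(sd_a, n_a, sd_b, n_b)

# hypothetical: interaction coefficient of -1.5 drinks/week,
# baseline SD of 3.0 in both arms of 30 participants each
d = comparative_effect_size(-1.5, 3.0, 30, 3.0, 30)  # -0.5
```

The interaction coefficient B comes from the adjusted mixed model, so the resulting d is a model-based standardized effect rather than a raw between-group Cohen's d.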

One strategy to address this potential problem would be referral to a board-certified veterinary nutritionist to ensure any home-prepared diet is complete and balanced. An alternative strategy could be to discuss with the owner their concerns with commercial pet foods. Collecting a comprehensive nutritional history is not only important for ensuring dietary needs are met, but the conversation could lead to discussion regarding perceived problems of commercial pet foods. The current study did not find the accompanying decrease in commercial diets that has been shown elsewhere, with the vast majority of owners using a commercial diet for part or all of their dog's food. As our sample comprised dogs with a recent diagnosis of cancer, this might suggest that inclusion of home-prepared elements precedes the complete exclusion of commercial diets, and our survey was conducted too close to the time of diagnosis to find exclusion of commercial diets. However, among owners feeding a commercial diet both before and after diagnosis, nearly half stopped feeding the pre-diagnosis diet. It is possible that our sample would ultimately have stayed on their second commercial diet, rather than eliminating commercial elements entirely. Among owners feeding commercial diets, we found a decrease in the use of grain-free foods, from 22% to 14% among all 128 respondents, after a cancer diagnosis. While this could seem contrary to the concerns of some owners regarding the role of carbohydrate in promoting cancer progression, possible benefits of the low-carbohydrate approach have not been supported by any studies. Further, grain-free diets can be lower, similar, or higher in carbohydrate content compared to other diet categories.

There has been considerable attention to the association between dilated cardiomyopathy in dogs and the use of grain-free diets, and both veterinarians and pet owners might have increased awareness of this issue. Regardless, given that more than 1 in 5 dogs in the present study were fed a grain-free diet before a cancer diagnosis, these data highlight the need for clinicians to discuss the risk of diet-associated DCM with all dog owners. The most common informational resource for diets and supplements was veterinarians, similar to previous studies of dogs. Veterinarians are a key resource for providing nutritional information, especially after a cancer diagnosis, when veterinarians are actively involved with care and around three quarters of pet owners believe a change is necessary. Additionally, as our data show, many dog owners do alter their dog's diet. These findings underscore the importance of collecting and assessing a thorough diet history. This enables effective client counseling by the veterinary care team to help guide and ensure the safe use of diets, treats, and supplement products. Our study did not differentiate whether veterinary advice was taken from general practitioners, cancer specialists, nutritionists, or elsewhere. Further specifying where owners receive information in a future study would be beneficial for understanding whose dietary advice pet owners value the most. To assess which factors were most likely to result in diet changes, we created a logit model. The model showed that one predictor of owners making diet changes was median census tract income, with the odds of a diet change decreasing as tract income increased. This suggests that people in wealthier areas might be less likely to alter their dog's diet in response to a diagnosis of cancer. Larger studies are warranted to confirm and further investigate this pattern. One limitation of the current study was that it only involved dogs referred to a single hospital's oncology service.
Coupled with time restrictions, this survey might not have recruited a large enough sample size to detect all of the patterns in nutritional alteration after a cancer diagnosis.
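The direction of the reported income effect can be illustrated with the standard logit transform; the coefficients below are hypothetical, chosen only to show how a negative income coefficient lowers the predicted probability of a diet change as tract income rises:

```python
import math

def p_diet_change(intercept, b_income, income_k):
    # logit model: log-odds = intercept + b_income * income (in $1000s)
    log_odds = intercept + b_income * income_k
    return 1.0 / (1.0 + math.exp(-log_odds))

# hypothetical coefficients with a negative income effect
b0, b_income = 0.5, -0.02
p_lower_income = p_diet_change(b0, b_income, 40)    # $40k median tract income
p_higher_income = p_diet_change(b0, b_income, 100)  # $100k median tract income
# p_lower_income > p_higher_income: wealthier tracts, lower predicted change
```

Because the model is linear on the log-odds scale, each $1k increase in median tract income multiplies the odds of a diet change by the same factor, exp(b_income), here below 1.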

Furthermore, dog owners within the geographical area of the survey might not be representative of the greater population of dogs and owners. Additionally, dog owners visiting oncology services are a subset of the overall dog owner population, meaning these data apply only to dogs with a recent cancer diagnosis presenting for evaluation by a specialist. Any owners who decided not to pursue a second opinion or further treatment would not have visited the oncology service, and because of treatment-associated costs, respondents to this survey could have more disposable income. This study sought to capture a single snapshot in time, namely, when a dog initially presented to an oncology service. We do not know whether this sample of dogs would eventually have shown similar or different patterns than other studies, such as exclusion of commercial diets and use of social media groups for dietary and supplement recommendations. It is also possible that these owners would either revert to previously fed diets and supplements or make more extreme changes after treatment. Although we attempted to capture the time point shortly after diagnosis, there was still a median delay of 61 days from diagnosis to survey. This is likely due to the nature of online survey distribution and the wait to get an oncology appointment, which was exacerbated by the pandemic. Additionally, some dogs received cancer-related treatments elsewhere before presenting to the oncology service. As a result, some dogs were already undergoing or had finished treatments at the time of taking the survey, some of which might have caused gastrointestinal issues before survey completion. Nonetheless, we feel that the time frame from diagnosis to survey enables us to capture additional nutritional changes beyond those made simply because of an immediate medical need such as cancer- and treatment-related gastrointestinal signs.

Further study is warranted into how specific treatments might result in changes to what owners feed their dogs. This study also tried to balance the quality and completeness of data obtained with respondents' time and willingness to complete a lengthy survey. One concern was that adding too many questions would result in many owners not reaching the end of the survey. Since owners who made changes were asked additional questions, we were concerned these owners would disproportionately fail to reach the end of the survey, possibly skewing results. Another consideration in interpreting the results of this study was whether owners who changed their dog's diet or supplements could recall what was previously given. Based on initial piloting of the survey, some owners did not recall their dog's previous diets and supplements and were frustrated by the survey. As a result, the survey program did not force a response for these questions. This was done to ensure owners who did not remember previous nutritional information would be able to complete the survey without guessing unknowns. While we feel this goal was achieved, it is also likely that some owners who remembered simply skipped past these questions for the sake of time. This study strove to be inclusive of all answers by providing text boxes, often referred to as "other" within the survey, if the owner felt the listed multiple-choice options for a question did not apply. However, as the owners largely filled out the survey online by themselves, many either did not list what we were looking for or possibly used the text box as an additional place to put information, rather than intending to respond with "other." These factors limited the value of the free-text responses, and we feel that future studies could avoid these issues by either limiting free-text responses in favor of more comprehensive multiple-choice options or by administering the survey in person.
Overall, many dog owners make alterations to diet or supplements after their dog has been diagnosed with cancer. Clinicians should counsel owners regarding cancer treatment and its relation to nutrition, assess the current diet, and enable educated decisions for any changes. Topics of focus could include discussing owner concerns regarding commercial diets, the formulation of home-prepared diets, and the use of certain herbal supplements, including mushrooms and CBD.

Contrary to the hypothesis, medial temporal lobe structures were not significantly associated with odds of being impaired on recognition. Given the limited number of participants who were impaired on recognition, there may not have been enough power to detect an effect; however, the odds ratios were fairly close to 1, indicating the association was neither statistically nor clinically significant. Also contrary to the aim 1a hypothesis, a thinner pars opercularis, part of the prefrontal cortex, was significantly associated with greater odds of being impaired on recognition.

No other prefrontal regions or basal ganglia regions were significantly associated with odds of being impaired on recognition. Aim 1b examined the relationship between continuous delayed recall and the three regions of interest. Delayed recall was hypothesized to be more equally associated with all three regions, given that delayed recall deficits are observed in both aMCI/AD and HAND. Somewhat consistent with the hypothesis, thicker rostral middle frontal gyrus and pars opercularis were associated with better delayed recall. Examining laterality, these findings were somewhat more driven by the right hemisphere. Additionally, thicker right pars triangularis was significantly associated with better delayed recall, whereas the left pars triangularis was not. Contrary to the hypothesis, delayed recall was not significantly associated with the medial temporal lobe or the basal ganglia. In post hoc analyses that excluded participants not on ART or those with a detectable viral load or methamphetamine use disorder (a group of participants closer to those ideally treated in medical care), these associations held: thicker rostral middle frontal gyrus and pars opercularis were associated with better delayed recall, and the relationships were somewhat stronger within this subset of participants. It is important to note that, given that delayed recall was examined continuously, this does not imply that these prefrontal regions are associated with delayed recall impairment, as that was not examined. Moreover, mean cortical thickness was included in the models as a covariate, so these associations are observed while accounting for average cortical thickness. Taken together, the finding that episodic memory was associated with some prefrontal structures may suggest that, at least in middle age, episodic memory performance is more likely related to frontally mediated etiologies, such as HIV, rather than early AD pathology.
The inferior frontal gyrus, which includes the pars opercularis, pars triangularis, and pars orbitalis, as well as the middle frontal gyrus, are not part of the medial limbic circuit implicated in memory formation, but they can still contribute to memory deficits. The prefrontal cortex is well established to be involved in memory retrieval. Additionally, more recent models of memory formation stress the importance of the prefrontal cortex, given research suggesting that the prefrontal cortex aids in enabling long-term memory formation through connections with the anterior thalamic nuclei. These updated models of memory formation could also account for why recognition was associated with prefrontal structures, although there could be several other explanations for this observed association. For example, recognition may also be associated with prefrontal structures due to poor initial encoding, which was not explicitly examined in these analyses. Nevertheless, functional MRI studies have shown alterations in prefrontal and hippocampal regions during memory tasks in PWH compared to controls, further highlighting that prefrontal regions are implicated in memory in PWH. As highlighted in the introduction, HIV studies have found structural changes throughout the brain, including frontal regions, compared to persons without HIV. Additionally, studies have demonstrated accelerated age-related atrophy, or greater than expected "brain age," in middle-aged and older PWH compared to HIV-negative participants. For example, Milanini et al., 2019 found that, in a group of 19 participants with HAND who were on average 64 years old, individuals with HAND showed faster atrophy in the cerebellum and frontal gray matter compared to HIV-negative controls. Additionally, Pfefferbaum et al., 2014 found accelerated changes in the frontal lobe, temporal pole, parietal lobe, and thalamus in PWH compared to HIV-negative controls.
Of these studies examining longitudinal brain changes, all found some involvement of the frontal lobe, but most did not examine which specific regions within the frontal lobe were driving these associations. Additionally, results from these studies were mixed as to whether brain changes were associated with changes in cognition. Given that the current study only examined structural MRI at one time point, we cannot assume that there has been atrophy of the prefrontal cortex; however, given that the literature demonstrates atrophic changes and accelerated aging in the frontal lobe in PWH, it is possible that changes in the prefrontal cortex have occurred in this cohort and are contributing to the observed associations with memory.

The mobility of the amine groups is an important property influencing CO2 adsorption, since CO2 molecules must insert into the metal–amine bond to adsorb at saturation. If the amine groups interchange more quickly, that might make it easier for CO2 to insert into the coordination bond. On the contrary, if there is little or no interconversion, it might be because the amine groups are sterically hindered from "hopping." This steric hindrance could also raise the barrier for CO2 adsorption, and perhaps push the temperatures and pressures at which the CO2 adsorption step takes place to lower and higher values, respectively. In order to study the amine group dynamics for different amines, we built the structures in silico and relaxed them with DFT as described. In addition to mmen, three additional amines were simulated: ethylenediamine, methylethylethylenediamine, and N,N′-diethylethylenediamine, shown in Table 4.1. The same simulation protocol as for mmen was used to measure the simulated rates of reaction at different temperatures and extract the enthalpies and entropies of activation. The computed energies of activation are provided in Table 4.2. The only amine for which an enthalpy and entropy of activation could not be calculated was ethylenediamine, the only primary–primary diamine. When the ethylenediamine-appended MOF was simulated, the ethylenediamine groups virtually all detached from the metal nodes. In NMR experiments carried out by collaborators, loss of diamine was also observed. In the remaining two amines, methylethylethylenediamine and N,N′-diethylethylenediamine, the methyl end groups of mmen were changed, one at a time, into ethyl groups to increase the steric hindrance around the amine tails.
As the steric hindrance increased, from mmen to methylethyl to diethyl, the enthalpy of activation computed from simulation increased monotonically, while the entropy of activation decreased.
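The extraction of activation enthalpies and entropies from rates measured at several temperatures, as described above, can be sketched with a standard Eyring-plot fit; the simulation details differ, and the temperatures and rates in the example are synthetic:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_fit(temperatures, rates):
    """Least-squares fit of ln(k/T) = ln(KB/H) + dS/R - dH/(R*T);
    returns (dH, dS) in J/mol and J/(mol*K)."""
    xs = [1.0 / T for T in temperatures]
    ys = [math.log(k / T) for T, k in zip(temperatures, rates)]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    dH = -slope * R                        # activation enthalpy
    dS = (intercept - math.log(KB / H)) * R  # activation entropy
    return dH, dS
```

Given rate constants k(T) at a handful of temperatures, the slope of ln(k/T) against 1/T yields the activation enthalpy and the intercept yields the activation entropy, mirroring the analysis applied to the simulated hopping rates.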

The increasing activation enthalpy is due to the steric hindrance of the increasingly bulky alkyl end groups in the interconversion transition state. Since both amine groups need to be coordinated to the metal node during the transition state, the larger alkyl groups make this configuration more energetically unfavorable. The decreasing activation entropy could reflect the fact that in the reactant/product states, bulkier amines already have somewhat restricted mobility, since the adjacent ethyl groups along the metal rod crowd each other. In the transition state, more sterically hindered molecules should also have lower entropy, but the loss of entropy in the non-transition states is larger than in the transition state, making the ∆Sa smaller. Molecular dynamics simulations were also used in conjunction with NMR experiments performed by a collaborator to verify that the local structure around the Mg centers in mmen–Mg2 is deformed with respect to the bare Mg2 MOF, in order to explain the distribution in 25Mg NMR parameters when mmen is coordinated to the MOF. Snapshots from molecular dynamics simulations were used to calculate the 25Mg NMR parameters as a function of time over 50 ps. These NMR parameters can be found in the Electronic Supporting Information of Xu et al. and vary between different Mg sites at a single point in time, and for individual Mg sites over time. The reason for this distribution in NMR parameters is thought to be the motion of the amine groups. Simulations show that at experimentally relevant temperatures, amine molecules are mobile within the MOF channels. These amine motions are thought to sterically stabilize more bending fluctuations in the linkers than are present in the bare MOF, and these dynamic motions of the linker result in a dynamic pore deformation and a resulting distribution of NMR parameters.
A movie of the dynamic framework deformation can be found online in the published work.

For an idealized parallel-plate capacitor consisting of metallic electrodes enclosing a dielectric medium of uniform dielectric constant ε, the capacitance can be derived exactly as C = εε0A/d, where ε0 is the vacuum permittivity, A is the cross-sectional area of the capacitor, and d is the separation distance between the electrodes.

If we transpose this expression to an EDLC, A would be the surface area of the electrode which is accessible to electrolyte ions, and d the characteristic thickness of the electrode-electrolyte interface. Equation 5.2 provides some intuition for why EDLCs have increased capacitance over their traditional counterparts: First, the charge separation distance is smaller in an EDLC than in a traditional capacitor, since ions can approach within less than 0.5 nm of the electrode surface, while in a traditional capacitor d is a few nanometers or higher. Second, the accessible surface area of an EDLC can be increased by several orders of magnitude using rough or porous electrodes. Multiple theories have arisen to describe the electrochemical double layer, beginning with the classical theory of Helmholtz, which was subsequently improved by Gouy, Chapman, and Stern to consider discrete ions and complex double layer structures. These theories can accurately predict capacitances of EDLCs whose pores are macroscopic. However, when the pores become comparable in size to the electrolyte ions, so-called “anomalously” high capacitances have been observed that break with both existing theories and empirical trends. Reports of materials with such impressive capacitances have led to considerable growth in the field of microporous materials for EDLCs, to better understand the mechanisms behind capacitance in small pores. A popular choice of material for EDLC electrodes is porous carbon, due to its stability, ease of synthesis, and low cost.
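The intuition above can be made concrete by plugging representative numbers into Equation 5.2. The sketch below (Python) compares an idealized conventional capacitor to an EDLC-like electrode; the areas, separations, and dielectric constant are assumed order-of-magnitude values, not parameters from this work.

```python
# Order-of-magnitude comparison of a conventional parallel-plate
# capacitor and an EDLC electrode using C = eps_r * eps0 * A / d
# (Equation 5.2). All numbers below are illustrative assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, d_m):
    """Ideal parallel-plate capacitance, in farads."""
    return eps_r * EPS0 * area_m2 / d_m

# Conventional capacitor: 1 cm^2 plates, 10 nm dielectric layer
c_trad = capacitance(5.0, 1e-4, 10e-9)

# EDLC: ~1000 m^2 of ion-accessible surface (porous carbon, per
# gram of electrode) and an ion approach distance of ~0.5 nm
c_edlc = capacitance(5.0, 1000.0, 0.5e-9)

print(f"conventional: {c_trad:.2e} F, EDLC-like: {c_edlc:.1f} F")
```

The smaller separation and the vastly larger accessible area together yield the several-orders-of-magnitude gain in capacitance described in the text.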
Porous carbons used in EDLCs include activated carbons, carbide-derived carbons, carbon onions and nanotubes, carbonized precursors such as metal-organic frameworks, and graphene-based composites. Experiments and simulations have shed some light on the charge-storage mechanisms in such materials; however, a major challenge is that most microporous carbon materials have neither a well-defined porosity nor long-range order, making it difficult to draw connections between structure and performance.

A new class of materials called zeolite-templated carbons (ZTCs), which are synthesized using a sacrificial zeolite scaffold, has been demonstrated as a promising EDLC material. Thus far ZTCs have been synthesized from just three zeolites of the 245 frameworks recognized by the IZA Structure Commission. Recently, Braun and coworkers reported a method to computationally synthesize ZTCs from a given zeolite structure. Their predicted ZTCs are composed of sp2-hybridized carbons which tile a surface that is dual to the zeolite. Templating on a crystalline framework confers well-defined pore geometries which could yield insights into the structure-property relationships of electrode materials, motivating further study of ZTCs for energy storage applications.

In this work we use molecular dynamics simulations to screen the ZTC materials of Braun et al. as electrode materials in EDLC cells. We show that the charging timescale of the ZTCs is negatively correlated with pore limiting diameter, and that there is evidence of both progressive charge penetration and kinetic trapping within the ZTCs during charging. We then study the equilibrium capacitance of the ZTCs to investigate the correlation between geometric descriptors, local electrolyte configurations, and charge storage mechanisms within the electrode. Introducing the concept of charge compensation per carbon (CCpC), we find that charge storage is more efficient at ion adsorption sites with high CCpC, which are more likely within pores with a lower radius of curvature. Conversely, charge storage is diminished at high-radius-of-curvature sites and within sites with a mismatch of local pore diameter and ion size.

In our simulations we used an organic electrolyte composed of a mixture of 1-butyl-3-methylimidazolium tetrafluoroborate and acetonitrile with the concentration of ions equal to 1 m.
We modeled the organic electrolyte using a coarse-grained description consisting of a three-site model for BMI+ and ACN, and a single-site model for BF4–, as shown in Supplementary Figure C.1. Non-bonded interactions were described by a pairwise Lennard-Jones potential with Lorentz-Berthelot mixing rules, and electrostatic interactions by a Coulombic potential. For the non-bonded parameters of BMI+ we used those developed by Roy and Maroncelli. The non-bonded parameters for BF4– and ACN were taken from Merlet et al. and from Edwards et al., respectively. Bonds and angles of BMI+, and bonds of ACN, were kept rigid using the SHAKE algorithm. For the angles of ACN we used a harmonic potential with a stiff spring constant of 400 kcal rad−2 mol−1 to keep the molecule close to linear.
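As a concrete illustration of the combining rules, cross-interaction parameters between two unlike coarse-grained sites follow directly from the like-pair parameters. The sketch below uses placeholder σ/ε values, not the actual Roy–Maroncelli or Merlet et al. force-field parameters.

```python
import math

# Lorentz-Berthelot combining rules: sigma_ij = (sigma_i + sigma_j)/2
# and eps_ij = sqrt(eps_i * eps_j). Parameter values are placeholders.
def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

def lj_energy(r, sigma, eps):
    """12-6 Lennard-Jones pair energy at separation r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Cross interaction between two unlike coarse-grained sites
# (sigma in angstroms, eps in kcal/mol; illustrative values)
sigma_ij, eps_ij = lorentz_berthelot(4.5, 0.4, 3.5, 0.1)
print(sigma_ij, eps_ij)  # sigma_ij = 4.0, eps_ij = 0.2
```

The arithmetic mean of the diameters and the geometric mean of the well depths are then used for every unlike pair in the pairwise Lennard-Jones potential.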

The carbon atoms of the electrodes were modeled as rigid. During the constant applied potential simulations, the charges of the electrode atoms were computed at each time step according to the constant-potential method. All force field parameters and further details regarding the constant-potential method are provided in the Supporting Information. ZTC materials were synthesized in silico as described in Braun et al. Carbide-derived carbon (CDC) materials, which are studied in depth computationally by Merlet and coworkers, are used here as a reference material. CDC structures were taken from Palmer et al., who generated them using Quench Molecular Dynamics. In this work, we consider 27 ZTC and 2 CDC materials for the constant-charge simulations and a subset of 19 of the ZTCs for constant-potential simulations. CDCs are named as in the original article by Palmer et al. ZTCs are referred to using the name of the templating zeolite. We indicate hypothetical zeolites using the prefix “h” and the last 2 digits of their 7-digit identifier. Complete names for all the zeolites referenced in the text can be found in the Supporting Information, along with information on framework properties. We used a semi-automated protocol to build two-electrode EDLC cells using the Zeo++ software suite and the VMD script interface with the TopoTools package. Further details are provided in the Supporting Information. This protocol was designed to fill the EDLC cell with an amount of electrolyte such that when the capacitor is equilibrated at either constant-charge or constant-voltage, the electrolyte density and composition in the bulk region matches the experimental values. We present these results in a later section. An example of the simulation setup for FAU_1 ZTC is provided in Figure 5.1. MD simulations were done using the LAMMPS simulation package with Velocity-Verlet time integration using a time step of 1 fs, and a Nosé-Hoover thermostat to maintain a temperature of 300 K.
The initial EDLC cell was equilibrated with a constant charge of 0 e on each carbon atom for 4 ns. Final configurations from the zero-charge simulations were used to initialize all further simulation steps. Molecular simulation is a powerful tool to study EDLCs, as it allows for precise determination of microscopic properties, such as the structure of the electrolyte within the pores, which can be difficult to access experimentally but which play an important role in determining the capacitance of the material. At the same time, simulation of EDLCs presents technical challenges due to long equilibration times and the need to compute the fluctuating charges in the electrode in response to a constant applied potential. The constant-potential approach is more accurate but also much more computationally expensive than simulating an EDLC with constant charges on the electrode atoms. We tested multiple protocols for equilibrating the simulation cells, one with a constant-charge equilibration followed by a short constant-potential equilibration step, and the other with a long constant-potential equilibration. In the constant-charge equilibration method, partial charges of ±0.01 e were applied to all the electrode atoms, positive charges for anode atoms and negative charges for cathode atoms. The EDLC cell was equilibrated with these fixed charges for 8 ns. Then, the effective potential across the cell was calculated using either the 1-D Poisson equation or using the averaged local potentials at each electrode atom, and this potential was applied for the constant-potential equilibration and production runs. In the constant-potential equilibration, a constant potential difference of 1 V was applied across the EDLC cell, and the constant-potential simulation was run for at least 10 ns. During the constant-potential run, the average absolute charge on the electrodes was monitored and fit to an exponential.
The equilibration step was considered completed when the simulation was at least as long as 5τ, where τ is the time constant of the exponential. This equilibration process was found to be the best following a number of tests, which are described in the next section, “Development of Computational Protocol”. Production runs for simulation of capacitance were carried out after equilibration at constant potential.
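The 5τ criterion can be sketched as follows. Here a synthetic, noise-free charge transient stands in for the monitored electrode charge, and the exponential is fit by linearizing its approach to the plateau; the τ and charge values are illustrative, not data from these simulations.

```python
import numpy as np

# Sketch of the 5-tau equilibration check. A synthetic transient
# Q(t) = Q_inf * (1 - exp(-t/tau)) stands in for the monitored
# average absolute electrode charge (assumed values).
tau_true, q_inf = 1.2, 50.0          # ns, e
t = np.linspace(0.0, 10.0, 500)      # simulation time, ns
q = q_inf * (1.0 - np.exp(-t / tau_true))

# Estimate tau by linearizing the approach to the plateau:
# log(q_inf - Q) = log(q_inf) - t/tau. The plateau of the trace
# serves as the q_inf estimate; fit only the early transient.
q_inf_est = q[-1]
mask = (q_inf_est - q) > 0.05 * q_inf_est
slope, _ = np.polyfit(t[mask], np.log(q_inf_est - q[mask]), 1)
tau_est = -1.0 / slope

# Declare the run equilibrated once it is at least 5 tau long.
equilibrated = t[-1] >= 5.0 * tau_est
print(f"tau ~ {tau_est:.2f} ns, equilibrated: {equilibrated}")
```

In practice the fitted time constant would come from a nonlinear fit to the noisy charge trace, but the run-length test against 5τ is the same.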
Using a slightly different approach in a Korean sample, CHR converters were compared to non-converters, full remitters, and HC to assess for baseline neurocognitive differences among the groups and prediction of symptom abatement over a 12- to 24-month follow-up. At baseline, those whose prodromal symptoms subsequently remitted performed better on measures of verbal fluency and memory, immediate visual memory, and attention as compared to converters, and in fact performed equally to healthy controls in all cognitive domains. Over time, CHR remitters demonstrated improvement in semantic fluency while performance of non-remitting, non-converting CHR individuals declined despite the absence of significant baseline differences, implying that investigation of cognitive trajectories over time may clarify probability of transition.

Neuroimaging studies investigating high-risk cohorts have found abnormalities in the white matter organization in the brains of CHR individuals as compared to HCs using diffusion tensor imaging (DTI), particularly in brain regions known to undergo significant changes from adolescence to adulthood such as the superior longitudinal fasciculus. However, to date no baseline differences between subsequent CHR converters and non-converters have been reported utilizing either DTI or volumetric techniques, though very few studies have examined this comparison. Research using positron emission tomography to estimate dopamine synthesis capacity in the striatum found elevated dopamine synthesis capacity in the whole, associative, and sensorimotor striatum, suggesting the presence of dopaminergic abnormalities that precede psychosis onset. These intriguing findings have potential implications for early initiation of anti-psychotic medications in these patients. Findings related to the predictive utility of neuroanatomic findings have been inconsistent, in part due to methodological differences.

Multivariate pattern classification has been used to classify converters and non-converters based on baseline group differences in gray matter volume in cerebellar, prefrontal, cingulate, and striatal structures, with classification algorithms attaining 80% accuracy in test cases. When an independent sample of CHR participants was then classified into low, intermediate, and high risk groups by the multivariate pattern analysis, low versus high risk group transition rates were 8 and 88% respectively, demonstrating fairly accurate conversion predictions from such neuroanatomic algorithms. However, while a recent review of the high-risk literature supports the notion of anatomic changes over time that distinguish CHR+ from CHR− individuals, reports of baseline and follow-up group differences are inconsistent. For example, various studies have found that CHR+ individuals demonstrate baseline volumetric abnormalities in several regions such as the inferior frontal gyrus, prefrontal cortex, cerebellum, and cingulate cortex as compared to non-converters, with particularly converging evidence for the insula and superior temporal gyrus. Converters also have been reported to evidence greater volumetric reductions over time in the insular cortex, superior temporal gyrus, and inferior frontal gyrus as compared to non-converters over a 1- to 4-year follow-up period. Cortical thickness abnormalities have also been investigated, with no significant whole-brain or region of interest differences found between converters and non-converters despite overall decreased cortical thickness in the right parahippocampal gyrus observed in CHRs as compared to controls.
Importantly, findings from the NAPLS consortium in a sample of 135 controls and 274 CHR youth, 35 of whom converted, indicated no cortical thickness or volumetric group differences at baseline, but significantly greater rates of change in cortical thickness in superior frontal, middle frontal, and medial orbitofrontal regions within the right hemisphere in CHR+ versus CHR− and HC groups.

CHR+ individuals also evidenced greater expansion of the third ventricle over time as compared to CHR− and controls. These changes were not due to anti-psychotic medication exposure, as both medicated and non-medicated converters showed similar rates of gray matter loss. Additionally, converters demonstrated stronger correlations between rates of right hemisphere cortical thickness reduction and levels of pro-inflammatory markers measured in blood plasma, although this association was present among the entire sample. This work highlights the need for more research on the role of neuroinflammatory factors in psychosis onset, and their temporal relationship to neurochemical and neuroanatomic changes.

Various quantitative electroencephalogram parameters, such as resting EEG frequencies, have also been assessed for their utility in psychosis prediction. These include alpha, beta, theta, and delta activity. In the European Prediction of Psychosis Study, variables of occipital-parietal alpha peak frequency, frontal delta, and frontal theta power were included in a final model and analyzed for prognostic power. Three classes of participants emerged, with low- and high-risk CHRs demonstrating statistically significantly different rates of conversion. Additionally, CHR+ individuals were found to have higher frontal/central delta and theta and lower occipital-parietal alpha peak frequency, suggesting baseline resting EEG differences that can be used to predict later psychosis and potentially function as a point of individualized intervention. Resting-state EEG microstates, or transient patterns occurring during spontaneous mental operations, have also differentiated CHR patients from other symptomatic groups and HC. Schizophrenic and CHR patients significantly differed in their temporal microstates as compared to controls, as well as from each other.
In particular, microstate class A, one of the four typical microstates that may be active during phonological processing, seemed to most prominently predict transition to psychosis in light of its correlation with positive symptom severity, though it may also simply be a proxy for anxiety and impaired stress tolerance. EEG-based event-related potential work has additionally revealed a promising biomarker via auditory mismatch negativity (MMN), an ERP component resulting from hearing a discordant sound among repeated standard sounds. In a comparison of CHR, FE, and HC individuals, HCs had significantly higher MMN amplitudes compared to FE and CHR groups, while CHR and FE groups did not differ. Within the CHR group, converters showed distinct profiles of lower MMN amplitudes at baseline compared to non-converters, a finding that was not accounted for by anti-psychotic medication. Further analysis suggested only one of two types of deviant MMN predicted psychosis when factoring in the time delay between ERP evaluation and conversion, highlighting more specific potential predictors for further evaluation.

Previous research has implicated stress and underlying neurohormonal factors in the etiology of schizophrenia, such that indicators of hypothalamic-pituitary-adrenal (HPA) axis activity are elevated in individuals with psychotic-spectrum disorders and appear to be affected by both anti-psychotic medications that reduce psychotic symptoms and recreational substances that exacerbate such symptoms. The association between elevated cortisol and dopamine activity in high-risk populations further suggests a role of neurohormonal factors in the emergence of psychosis. Recently, two measures of HPA activity within a CHR cohort were examined. CHR individuals, particularly those who were unmedicated, demonstrated a smaller cortisol awakening response compared to healthy controls. No group differences were observed within daytime cortisol levels, nor were clinical symptoms significantly correlated with cortisol levels. However, the small sample size and confounds of medication plus psychosocial treatment throughout the study suggest notable limitations. To better assess the predictive power of neurohormonal factors, the NAPLS consortium investigated salivary cortisol levels and found significant correlations to baseline symptoms across positive, negative, general, and disorganized domains, with particular significance for dysphoric mood and impaired stress tolerance. Baseline cortisol levels among CHR+ patients were also found to be higher than those of CHR− or healthy control groups. Here too, effects were independent of anti-psychotic or other medication use. Therefore, the role of HPA axis dysfunction as a potential risk biomarker warrants additional attention, particularly given recent reviews highlighting the role of stress and impaired immune functioning in the etiology of psychosis.

Despite the wealth of information we have accumulated from the CHR literature, several issues associated with the reliability and utility of the construct remain.
One of the most prominent is the lack of specificity for determining later psychosis as opposed to other psychiatric disorders, broadly defined poor functioning, or brief psychotic symptoms that ultimately remit. As some suggest, this may be partially due to limited long-term follow-up, research definitions of conversion that are typically based on psychotic-level symptomatology at a single time point, and the diversity of research tools and analytic strategies used among research sites. Variability in long-term outcome, specifically the high number of CHR individuals who do not convert to psychotic disorder, may also reflect access to effective intervention, sampling from heterogeneous help-seeking populations, and conversions occurring outside of study follow-up points. Much of the CHR research to date has focused on relatively short follow-up periods, thereby potentially missing some cases of conversion and confounding predictive algorithms. Some suggestions for increasing specificity have been proposed, such as including the COGDIS criterion in CHR criteria to increase the positive predictive power of conversion. For example, the incorporation of basic symptoms at baseline has indeed been shown to increase the likelihood of predicting schizophrenia over affective psychosis, although this meta-analysis has been criticized for the use of limited, potentially underpowered studies. Regardless, the inclusion of basic symptoms does not address the full spectrum of outlined concerns. Many of the arguments and challenges observed in CHR research put forth above became a part of the recent discussion and controversy regarding the inclusion of an Attenuated Psychosis Syndrome in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition.

Although the full debate is beyond the scope of the current article, we highlight several themes and point the reader in the direction of more comprehensive reviews of the topic. As put forth by the DSM psychotic disorders task force, the defined Attenuated Psychotic Symptoms syndrome significantly increases the likelihood of predicting future psychosis. However, as highlighted above, limitations to the current evidence base exist: the overwhelming presence of comorbid diagnoses, the range of non-psychotic psychiatric outcomes, and the decreased diagnostic reliability in community clinical settings outside of the research or academic domains. Therefore, continued investigation of the syndrome and its connection to other related disorders like Schizotypal Personality Disorder is necessary before inclusion in the DSM as a formal disorder.

Recent research has continued to clarify the CHR state and long-term outcomes, finding that negative symptoms in CHR individuals predict deficits in functioning at baseline and follow-up, and that decreased functioning correlates with neurocognitive factors across all time points. In particular, premorbid social dysfunction appears to have some diagnostic specificity for predicting emergence of schizophrenia over other psychiatric outcomes, including other psychotic disorders. Additionally, the combination of clinical and neuropsychological variables such as IQ, verbal memory, or processing speed increases predictive power. Baseline differences in neuroanatomic structures have also been reported in CHR versus HC groups, with structural differences in the superior temporal gyrus and insula appearing in multiple studies. Progressive gray matter changes within several anatomic regions may be particularly relevant as predictors of psychosis outcome, although the implicated regions vary across studies.
HPA axis dysfunction is also hypothesized to be relevant to psychosis risk; this possibility is supported by the finding of elevated baseline cortisol levels among CHR+ individuals. Lastly, most of the recent work conducted has focused on baseline predictors of psychosis, though it has also been suggested that the field should shift to assessing overall deterioration throughout study duration. Table 1 provides a summary of the clinical and neurocognitive prediction findings, while Table 2 summarizes neuroimaging, psychophysiological, and neurohormonal predictors of transition to psychosis. Despite this progress, findings across studies do not yet fully converge on common factors, highlighting the complex nature of schizophrenia and its etiology. Therefore, there are still many areas requiring clarification within the psychosis risk prediction literature. As with all budding research, many findings need to be replicated using larger sample sizes and extended longitudinal designs to confirm their validity and reliability; multi-site studies such as the NAPLS consortium may prove to be particularly useful here. It will also be imperative to pin down the timing and trajectories of suggested biomarkers in order to facilitate intervention. Although recent publications have highlighted promising interventions that seek to prevent psychosis emergence via symptom reduction, such as medications including Omega-3 fatty acids and glycine, psychosocial therapies, cognitive remediation training, and combined treatment approaches, this field is still in its infancy. Among other factors, potential regional/cultural variability in help-seeking behavior and health care programs has not yet been sufficiently addressed. Additionally, there continues to be a paucity of current research on ethnic and cultural differences in CHR classification and outcomes, as well as whether distinct conversion predictors exist within ethnic groups, as some have suggested.
From a clinical standpoint, delays in obtaining access to care also present a substantial obstacle to receiving accurate early diagnosis and treatment.

Findings may be different in those populations where marijuana use is greater.

The first was coded and therefore allowed examination of the impact of effects on change in the outcome variable from baseline to the first follow-up, the second was coded to model the impact of effects on change in the outcome variable from baseline to the second follow-up, and the third was coded in order to estimate the impact of effects on change in the outcome variable from the first to the third follow-up. In the context of these three dummy codes, effects on the intercept represent effects when all time effects are equal to 0. Of note, as all participants received a BA session in the interim between the true baseline and 6-week assessment, marijuana user status at the 6-week assessment was used as the baseline for these analyses. To address hypothesis 2, Level 2 effects for marijuana user status, treatment condition, and the interaction between marijuana user status and treatment condition were regressed on the three time components. Following recommendations of Aiken and West, prior to forming interactions, marijuana user status and treatment condition were recoded using effects coding, to remove collinearity with interaction terms so that all main effects of time could be evaluated in the context of models including interactions. To control for potential baseline group differences, we also regressed marijuana user status and treatment condition on the intercept. To address hypothesis 3 [i.e., whether treatment group impacts marijuana use frequency at any of the three follow-up time points, among those who reported marijuana use at the 6-week pre-BMI assessment], at Level 2, treatment condition was regressed on the Level 1 intercept and all three time effects of marijuana use frequency.
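The effects-coding step can be illustrated with a minimal sketch (Python/pandas; the data and variable names are hypothetical, not from the study). Recoding the 0/1 dummies to −1/+1 makes a balanced interaction term orthogonal to each main effect, which is the collinearity-removal rationale cited from Aiken and West.

```python
import numpy as np
import pandas as pd

# Hypothetical balanced design: 0/1 dummy codes for marijuana user
# status and treatment condition (names invented for illustration).
df = pd.DataFrame({
    "user": [0, 0, 1, 1, 0, 0, 1, 1],
    "bmi":  [0, 1, 0, 1, 0, 1, 0, 1],
})

# A dummy-coded interaction is collinear with its main effects:
r_dummy = np.corrcoef(df["user"], df["user"] * df["bmi"])[0, 1]

# Effects coding (Aiken & West): recode 0 -> -1, 1 -> +1 so main
# effects remain interpretable when the interaction is in the model.
df["user_e"] = 2 * df["user"] - 1
df["bmi_e"] = 2 * df["bmi"] - 1
df["user_x_bmi"] = df["user_e"] * df["bmi_e"]

# With balanced effects codes the interaction is orthogonal to
# each main effect, removing the collinearity above.
r_effects = np.corrcoef(df["user_e"], df["user_x_bmi"])[0, 1]
print(f"dummy r = {r_dummy:.2f}, effects r = {r_effects:.2f}")
```

With unbalanced real data the orthogonality is only approximate, but the recoding still substantially reduces the main-effect/interaction collinearity.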
In models for both hypotheses 2 and 3, at Level 2, gender also was included as a covariate. Results of the HLM models predicting three alcohol outcomes at each follow-up by marijuana user status, treatment condition, and marijuana user status by condition interactions are displayed in Table 4.

In the prediction of HED frequency, marijuana user status was associated with higher baseline HED frequency; however, being a marijuana user was not associated with more or less change in HED frequency between the pre-BMI assessment and any of the three follow-ups. There were no interactions between marijuana user status and treatment condition at any follow-up, suggesting that the BMI was not more or less effective for marijuana users. In the prediction of pBAC, marijuana user status was associated with higher pre-BMI pBAC. Additionally, those in the BMI condition had significantly lower pre-BMI pBACs. Controlling for these pre-BMI differences, being a marijuana user, treatment condition, and their interaction were all non-significantly associated with change in pBAC from pre-BMI to each of the follow-ups. In the prediction of alcohol consequences, being a marijuana user was associated with higher pre-BMI levels of consequences. There were no significant effects of marijuana user status, treatment condition, or their interaction on change in consequences between baseline and either the 3- or the 6-month follow-ups. At the 9-month follow-up, those in the BMI reported fewer alcohol consequences; however, this was not moderated by marijuana user status. Overall, these findings suggest that, collapsing across treatment condition, marijuana users had heavier alcohol consumption and consequences compared to non-users at the pre-BMI assessment, but they did not increase or decrease their consumption or consequences between pre-BMI and any of the follow-ups.
Additionally, marijuana users responded to the BMI similarly to non-marijuana users at each time point.

The purpose of the current study was to examine whether heavy drinking marijuana users demonstrate poorer response to two different alcohol-focused interventions compared to non-users and to examine the efficacy of an alcohol-focused BMI on marijuana use frequency among marijuana users receiving stepped care for alcohol use. Our findings indicated that marijuana users and non-users evidenced equivalent treatment responses to the alcohol-focused BA session and reported similar alcohol-related outcomes following the BMI. Consistent with prior research, the alcohol-focused BMI did not significantly reduce marijuana use frequency in comparison to the assessment-only group.

In our sample, marijuana users did report higher alcohol consumption and problems at baseline/pre-BMI regardless of condition, and these differences between users and non-users persisted over time. The findings of the current study are somewhat consistent with studies indicating that marijuana use does not decrease the efficacy of alcohol interventions. Although marijuana use did not necessarily lessen the efficacy of the BA and BMI sessions on alcohol use and consequences, regardless of condition, marijuana users reported higher levels of alcohol consumption and consequences at baseline and the pre-BMI assessment. These patterns suggest that heavy drinking marijuana users may still benefit from alcohol use interventions. This is especially noteworthy because dual users typically report increased consequences related to their alcohol use and may have a higher likelihood of being referred to alcohol-focused treatment or mandated to receive intervention for alcohol-related sanctions. Although heavy drinking marijuana users may demonstrate reductions in alcohol consequences following an alcohol-focused intervention, their frequency of marijuana use did not change as a result of receiving a BMI. We can posit several reasons for the participants’ continued use of marijuana, despite a decrease in alcohol-related consequences. First, the parent study found a reduction in alcohol consequences following the alcohol-focused BMI, but not a decrease in alcohol consumption. Prior research examining secondary effects of alcohol BMIs has noted a decrease in marijuana use when there was also a decrease in alcohol consumption. It could be that factors that result in students’ experiencing fewer alcohol-related consequences without changing their drinking differ from ones that would lead to reductions in alcohol or marijuana use.
Although our study did not include a measure of marijuana-related consequences, future research should examine changes in marijuana consequences to investigate whether changes in alcohol-related consequences correspond with changes in marijuana consequences following alcohol-focused BMIs. Second, the lack of effects may be due to the fact that our BMI was focused solely on changing alcohol-related behaviors and did not discuss the participant’s marijuana use. Future research should examine process coding in BMIs that do discuss marijuana use to explore possible in-session processes that may be related to changes in marijuana use and can be targeted in future interventions.

Similarly, although alcohol and marijuana use share similar predictors, they may differ in their mechanisms of change. For example, the underlying motives that drive these two behaviors may vary, so changing one will not ultimately lead to changes in the other, and existing BMIs may not be targeting or altering both. Third, the referral incident in this study may not have been severe enough to warrant an overall re-evaluation of substance use, as may have been the case for those who required a visit to the ED as a result of their alcohol use. Marijuana users may require a more focused intervention or a supplemental session that targets alternative substance-free activities to facilitate changes in marijuana use. Finally, with growing trends in decriminalization and legalization of marijuana in the US, the perceived risk of marijuana has decreased among college students. Marijuana use may be more entrenched in the college social environment and more difficult to change without a targeted marijuana-specific intervention.

The results of this study should be interpreted within the context of its limitations. First, our study is restricted by our measure of marijuana use, which was limited to frequency and did not assess for marijuana-related consequences. Future studies may include assessments of quantity, days smoked, and consequences to get a better understanding of the severity of participants’ marijuana use. Although daily marijuana use is on the rise, with almost 6% of college students reporting daily use, marijuana users in our study were using about 13.7 times in the past month. This is fairly low compared to those seeking treatment for marijuana use or being seen in an emergency department. For example, Metrik et al. found that compared to lighter users, those who reported weekly marijuana use demonstrated a significant decrease in use following treatment.
Furthermore, our measure of pBAC was derived from participants’ reported heaviest drinking event and may not be the best way to capture peak BAC levels. Additionally, the study sample was predominantly white, which may limit our ability to generalize findings to other populations of interest. Finally, we relied on self-reported data collection that did not include corroborating measures. Research using collateral informants indicates that mandated students may under-report alcohol use. Despite these limitations, this study adds to the existing literature on the secondary effects of alcohol-focused BMIs. To our knowledge, it is the first study to examine the influence of two different alcohol interventions on marijuana use in the context of stepped care. Furthermore, findings indicate that heavy-drinking college students who also use marijuana may still benefit from alcohol treatment, especially in reducing their alcohol-related consequences. From a theoretical perspective, our results suggest that changing one behavior does not necessarily mean changes in another will occur, at least with respect to marijuana. However, future work should examine other health behaviors that might change as a result of reducing alcohol consequences. For example, it may be that increases in substance-free activities like exercising, volunteering, or academic-related behaviors occur alongside changes in alcohol-related behaviors.
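For readers unfamiliar with how a peak BAC estimate (pBAC) is typically derived from self-report, the sketch below uses one equation common in the college drinking literature, often attributed to Matthews and Miller (1979). The gender constants, the 0.016/hour metabolism rate, and the example numbers are conventional assumptions for illustration, not values taken from this study:

```python
# Hedged sketch: estimated peak BAC (g/dL) from self-reported drinking,
# using the equation common in college drinking research (attributed to
# Matthews & Miller, 1979). Gender constants and the metabolism rate
# are conventionally assumed values, not taken from the study itself.
def estimated_bac(drinks, weight_lb, hours, female):
    gc = 9.0 if female else 7.5          # gender constant (assumed values)
    bac = (drinks / 2.0) * (gc / weight_lb) - 0.016 * hours
    return max(bac, 0.0)                 # an estimate cannot go negative

# Example: a 160-lb man reporting 5 drinks over 2 hours
peak = estimated_bac(5, 160, 2, female=False)  # ≈ 0.085 g/dL
```

Basing pBAC on the single heaviest reported event, as the study did, inherits all the usual self-report error on top of the approximation error of any such formula.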

Future research examining marijuana-focused interventions of different intensity implemented in a stepped care approach may enhance our understanding of which interventions are most effective for college students with varying levels of involvement with marijuana.

Marijuana has been criminalized since the late 1920s due to a plan orchestrated by the Bureau of Narcotics, which aimed to restrict its importation, consumption, and sale. This focused effort resulted in the plant fading from the spotlight until the early 1960s, when its popularity began to soar. Today, four states and the District of Columbia have legalized recreational marijuana use and nineteen additional states have passed laws that permit the use of medical marijuana. Although permitted in some form in these twenty-three states, it is still a violation of Federal law to possess marijuana, due to its classification as a Schedule I drug under the Controlled Substances Act. Despite its classification, marijuana’s increasing popularity, combined with an increasing demand for legalization, calls for an examination of why the plant is illegal in the first place. The purpose of this paper is to examine the validity of these arguments, as well as provide possible solutions to the complex issue of legalization. Many anti-marijuana groups, such as the American Society of Addiction Medicine, the National Association of Drug Court Professionals, Citizens Against Legalizing Marijuana, Smart Approaches to Marijuana, Parents Opposed to Pot, National Families in Action, and many more, argue that the legalization of recreational marijuana will lead to easier access and increased use among minors. A study published in October 2014 in the Journal of Adolescent Health, however, found that marijuana use among adolescents did not increase.
The study was conducted by Choo, Benz, Zaller, Warren, Rising, and McConnel, who looked at a population sample of 11,703,100 students between 1991 and 2011; the students were of varying ages, but they all resided in states that had medical marijuana legalization laws. They found past-month marijuana consumption was common, but there were no statistically significant differences in use before and after marijuana policy changes for any state. Choo et al. also did not find any overall increased probability of marijuana consumption related to the policy change in the regression analysis. Even though this study examines medical marijuana, it suggests the concern about minors having access to the plant is very limited. In a state where getting a medical marijuana card is fairly easy for anyone twenty-one and older, minors will turn to previous connections for the drug instead of asking older siblings, relatives, etc. The real concern comes from the mentality among youth that marijuana is a safe drug to consume, which is not the case for developing minds. A study conducted by Loyola Medicine reports that early use can lead to lifelong addiction and damaging developmental changes such as impaired thinking, increased likelihood of dropping out of school, and poor educational outcomes. Whether it is medical or recreational marijuana, there is a solution to youth consumption.

The same age grouping was used for regular alcohol use for comparative purposes.

In order to investigate the age specificity of the genetic and endophenotypic factors noted above on the early onset of alcohol use and dependence, we studied adolescents and young adults drawn from the Collaborative Study on the Genetics of Alcoholism sample. Because we wanted to understand the processes that lead from non-drinking to regular drinking to alcohol dependence, we used both the onset of regular alcohol use and the onset of alcohol dependence as dependent variables. As noted above, more severe cases of alcohol dependence in adults are associated with earlier ages of onset of drinking and are more likely to result from genetic factors; we therefore hypothesized that specific genetic and related neurophysiological endophenotypes would have greater predictive power in those with the earliest ages of onset. Since the principal objective is to determine whether there are age-varying effects of the predictive variables, survival analysis using standard Cox proportional hazards models, in which effects are age-invariant, is not appropriate. In addition, such models cannot account for differential effects on survival that result from unmeasured heterogeneity in the sample. Discrete-time survival analysis (DTSA) provides an alternative model that avoids these problems and can be implemented with logistic regression methods. By dividing subjects into groups based upon age of onset, a single logistic regression model can be applied to estimate the probability that those at risk in each age group become alcohol dependent, as a function of the predictive variables. The functional form of the model can be set to determine age-specific and/or age-independent effects, and can use age-invariant and/or age-dependent covariates.
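To illustrate the person-period data layout that makes discrete-time survival analysis estimable with ordinary logistic regression, the sketch below expands hypothetical subjects into one record per age interval at risk and computes the empirical discrete-time hazard per interval. The data, names, and age labels are illustrative stand-ins, not the actual COGA variables or weighting scheme:

```python
# Hypothetical sketch of the DTSA person-period layout: each subject
# contributes one record per age interval in which they are at risk,
# with a binary event indicator; a single logistic regression fit to
# these records then estimates the discrete-time hazard.
AGE_GROUPS = ["<16", "16-17", "18-19", ">19"]

def person_period(subjects):
    """Expand (onset_index, had_event) pairs into person-period records.
    onset_index is the age-group index of onset, or None if the subject
    was observed through all intervals without the event (censored)."""
    rows = []
    for sid, (onset, had_event) in enumerate(subjects):
        last = onset if onset is not None else len(AGE_GROUPS) - 1
        for g in range(last + 1):
            rows.append((sid, AGE_GROUPS[g], int(had_event and g == onset)))
    return rows

def empirical_hazard(rows):
    """Events / at-risk within each age group (the DTSA baseline hazard)."""
    hazard = {}
    for group in AGE_GROUPS:
        at_risk = [r for r in rows if r[1] == group]
        if at_risk:
            hazard[group] = sum(r[2] for r in at_risk) / len(at_risk)
    return hazard

# Three toy subjects: onset in group 0, onset in group 1, censored.
rows = person_period([(0, True), (1, True), (None, False)])
```

Fitting a logistic regression of the event indicator on age-group dummies and covariates over these records recovers the age-specific effects the passage describes; the empirical hazard above is simply the covariate-free special case.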

A weighted model was employed to enable the use of all members of multi-member families. The output of a DTSA calculation is the same as the output from a logistic regression calculation. Each DTSA model had the following structure: the outcomes, or dependent variables, were either alcohol dependence or regular alcohol use. Regular alcohol use was defined as consumption at least once a month for 6 or more consecutive months. In all cases four distinct age ranges were used: under 16, 16 and 17, 18 and 19, and over 19. These age groups were determined by the facts that ages of onset were whole numbers of years, that the numbers of those who became alcohol dependent should be about the same in each group, and that there should be at least 50 subjects in each group who became alcohol dependent, to provide a reasonable degree of statistical reliability in the calculations. The covariates were a genotype from a CHRM2 SNP, ERO power from one of the leads, family type, number of parents who smoke, gender, and scores on principal components 1 and 2 derived from the stratification analysis of the sample genome. The CHRM2 SNPs analyzed here, rs978437, rs7800170, rs1824024, rs2061174, and rs2350786, include the three most significant of those for alcohol dependence with comorbid drug dependence in Dick et al., as well as two others that appear to be in a range of significance indicated by that table. From preliminary statistical screening of the genotypic distributions in the sample, a recessive model was employed which contrasted major allele homozygotes with those who were not. The electrophysiological phenotypes used in the analysis were found to be significant in previous studies; these studies showed reduced amplitudes in alcoholics and in those offspring at high risk. The number of parents who smoke was selected in part because Kaplan–Meier curves for different values showed considerable variation (see the cited literature for a discussion of the effects of parental smoking on adolescent behavior).
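The recessive genetic coding described above (major allele homozygotes versus everyone else) can be sketched as a one-line helper; the allele letters in the example are illustrative, not the actual CHRM2 alleles:

```python
# Minimal sketch of the recessive coding used in the DTSA covariates:
# 1 if homozygous for the major (risk) allele, 0 otherwise.
# Genotypes are two-character strings such as "AA" or "AG" (illustrative).
def recessive_code(genotype, major_allele):
    """Return 1 if homozygous for the major allele, else 0."""
    return int(genotype == major_allele * 2)
```

Under this coding the fitted SNP coefficient is directly the logit change for carrying two copies of the risk allele, which is how the effect sizes are reported later in the passage.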

DTSA results were calculated for the entire sample. Our fourth item for investigation, whether the influence of these SNPs would be greater in a behaviorally defined sub-sample comprising a putatively more genetically vulnerable group, was suggested by the results of Dick et al. and King and Chassin. Given the prevalence of various substance abuse categories in the sample and the number of subjects in each category who became alcohol dependent during the age range of the study, the broad criterion of the use of an illicit drug, regardless of age of onset or frequency of use, was employed to define the more genetically vulnerable group. This sub-sample will be called the ‘‘illicit drug use’’ sub-sample. Unlike the definition of illicit drug use in Dick et al., this definition does not categorize regular use of cannabis as illicit drug use. Since more than half the sample are characterized as regular users of cannabis at some time during the age range of the study, regular use of cannabis cannot be considered a practice that violates norms of age-related behavior or involves enhanced risk taking, and thus is not an element of ‘‘externalizing psychopathology’’. We note that 90% of cannabis dependent subjects who are also alcohol dependent are included in the sub-sample, so although our criterion does not span regular cannabis use, we are probably picking up those more genetically vulnerable cannabis dependent subjects and thus paralleling the group used in Dick et al. For the regular alcohol use outcome, there were a sufficient number of illicit drug non-users who became regular users of alcohol to provide a sub-sample to contrast with the illicit drug use sub-sample. Since about 75% of the alcohol dependent subjects were members of the illicit drug use sub-sample, there were too few alcohol dependent subjects with no illicit drug use to provide a contrasting sub-sample.
However, some inferences about the significance of illicit drug use for the onset of alcohol dependence can be drawn from the differences between the DTSA results for the entire sample and those for the illicit drug use sub-sample.

Since regular alcohol use is a necessary condition of alcohol dependence, it could not be used as a covariate in the DTSA calculation of the onset of alcohol dependence. In order to investigate the duration of the transition from regular alcohol use to alcohol dependence as a function of the age of onset of alcohol dependence, the third item for investigation, we carried out logistic regression analyses with the onset of alcohol dependence as the outcome in each of the age ranges, restricted to the sample of those who were regular users of alcohol within that age range. All covariates used in the DTSA calculations were used, with duration of drinking as an additional covariate. Although those who become alcohol dependent are removed from the sample at each age range, this is not a survival analysis method, because new regular users of alcohol are added to the sample at each age range. However, the results of these tests can be compared to the DTSA results for the illicit drug use sub-sample to examine the effect of including all alcohol dependent subjects in the sample, as opposed to the restricted sub-sample found in the illicit drug use sub-sample. In order to investigate the duration of the transition from regular alcohol use to alcohol dependence as a function of the age of onset of regular alcohol use, both Fisher’s exact test and the Cochran-Armitage trend test were applied to the distribution, in each of the first three age ranges, of the proportion of those who became alcohol dependent in the same or a subsequent age range among those who became regular users of alcohol in that age range. We also investigated whether there were age-related trends in the genotypic distributions which underlie the results of the DTSA for the SNP covariates and the rapidity of the transition from regular alcohol use to alcohol dependence.
Two separate Cochran-Armitage trend tests were carried out on genotypic distributions of the SNPs of the illicit drug use sub-sample. Given the use of the recessive genetic model in the DTSA tests, subjects in the illicit drug use sub-sample were divided into two genotypic groups: those who had two copies of the major allele and those who did not. The first trend test was of the genotypic distribution of those who became alcohol dependent as a function of age of onset of alcohol dependence, comparing those who had two copies of the major allele with those who did not. The null hypothesis is that the relative effect of having a particular genotype does not vary linearly between ages of onset; that is, that the ratio of different genotypes of those who become alcohol dependent does not display a linear trend between ages of onset. To test whether there was a trend in the genotypic distributions as a function of the rapidity of the transition from regular alcohol use to alcohol dependence, a second trend test was carried out. This test was of the genotypic distribution of those who began regular alcohol use in the youngest age range and became alcohol dependent at any age, as a function of age of onset of alcohol dependence, comparing those who had two copies of the major allele with those who did not. The null hypothesis is that the ratio of different genotypes of those who become alcohol dependent does not show a trend between different time spans from the onset of regular alcohol use to the onset of alcohol dependence. We restricted our analysis to those who became regular alcohol users in the youngest age range in order to obtain results for those who might take a relatively long time to develop alcohol dependence. Significant CHRM2 SNP associations with the onset of alcohol dependence were found only in those with age of onset younger than 16.
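The Cochran-Armitage trend test used above can be sketched for a generic 2×k table (e.g., genotype group by ordered age-of-onset category). The implementation below uses the standard score-test form of the statistic, chi-square with 1 degree of freedom under the null of no linear trend; the counts in the example are illustrative, not the study's data:

```python
# Sketch of the Cochran-Armitage trend test for a 2xk table.
# cases[i] / controls[i] are counts in ordered category i; default
# scores are 0, 1, ..., k-1. Statistic reduces to the Pearson
# chi-square when k == 2.
from math import erf, sqrt

def cochran_armitage(cases, controls, scores=None):
    """Return (chi-square statistic, two-sided normal-approx p-value)."""
    k = len(cases)
    scores = scores if scores is not None else list(range(k))
    col = [cases[i] + controls[i] for i in range(k)]
    n1, N = sum(cases), sum(cases) + sum(controls)
    sw = sum(scores[i] * col[i] for i in range(k))          # sum w * column total
    sw2 = sum(scores[i] ** 2 * col[i] for i in range(k))    # sum w^2 * column total
    swc = sum(scores[i] * cases[i] for i in range(k))       # sum w * cases
    chi2 = N * (N * swc - n1 * sw) ** 2 / (n1 * (N - n1) * (N * sw2 - sw ** 2))
    z = sqrt(chi2)
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))              # two-sided p-value
    return chi2, p
```

With two copies of the major allele versus not as the rows and the four age-of-onset groups as the scored columns, this is the shape of both trend tests described in the passage.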

These results were obtained both in the entire sample and in the illicit drug use sub-sample. In all cases with significant results, occurrence of the major allele was the risk factor. No CHRM2 SNPs were found to be significant predictors of the onset of regular alcohol use for any age range. In comparing the entire sample with the sub-sample, the CHRM2 effects are greater in the illicit drug use sub-sample than in the sample as a whole. In particular, restricting the sample to those most genetically vulnerable enables two more SNPs to become significant at the 0.05 level. If the risk of the onset of alcohol dependence as a function of genotype were as great in the drug non-users as in the illicit drug use sub-sample, and taking into account the lower rate of regular alcohol use in the drug non-users, there would be almost twice as many alcohol dependent subjects among the drug non-users as there in fact are. The significance of family type (alcoholic family or community family) and number of parents who smoke was greatest in the younger age ranges. Effects are measured as changes in logit from baseline. When significant, SNP effects were about 1.0 for having two copies of the risk allele in the recessive genetic models, and the delta ERO effect was about 0.5 per standard deviation. When significant, the parental smoking effect was about 0.2 per smoker, the family type effect ranged from 1.0 to almost 2.0, and the gender effect ranged from about 0.5 to 1.0. In the logistic regression analyses used to investigate the duration of the transition from regular alcohol use to alcohol dependence as a function of the age of onset of alcohol dependence, genotype was not significant in any age range in either the linear or the quadratic model for duration. In the linear model for duration, modeled as log, delta ERO values at Fz are significant in the youngest age range, and both Fz and Cz ERO values are significant in the oldest age range.
ERO results are consistent with those obtained in the DTSA models. Duration was significant in the three youngest age ranges. In the quadratic model for duration, modeled as the sum of log and log² terms, the Fz and Cz ERO values are significant only in the oldest age range. The effect of duration of drinking was significant in the three youngest age ranges, with an overall U-shape in the two youngest age ranges. Since the beta value for the log term is negative and the beta value for the log² term is positive, the rising part of the U-shape masks the Fz ERO effect in the youngest age range.
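Since the effect sizes above are reported as changes in logit, a quick way to read them is to exponentiate them into odds ratios. The labels below paraphrase the text; the values are the approximate effects it reports:

```python
# Converting the reported logit changes to odds ratios: OR = exp(beta).
from math import exp

logit_effects = {
    "CHRM2 SNP (two risk alleles, recessive model)": 1.0,
    "delta ERO (per standard deviation)": 0.5,
    "parental smoking (per smoking parent)": 0.2,
}
odds_ratios = {name: exp(beta) for name, beta in logit_effects.items()}
# A logit change of 1.0 corresponds to an odds ratio of about 2.72,
# 0.5 to about 1.65, and 0.2 to about 1.22.
```

On this scale, the family-type effect of 1.0 to almost 2.0 corresponds to odds ratios from roughly 2.7 to 7.4, which conveys how large that covariate is relative to the SNP effects.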

Previous studies have, however, suggested that age at onset can be assessed reliably.

Bipolar disorder (BPD) is a severe, chronic, and disabling mental illness characterized by recurrent episodes of hypomania or mania and depression. It is a clinically defined nosological entity with multi-factorial but poorly understood etiologic mechanisms. Twin, family, and adoption studies provide compelling evidence for a strong genetic predisposition to BPD, with heritability estimated to be as high as ≥80%. Given that BPD is a heterogeneous disease with substantial phenotypic and genetic complexities, the identification of BPD risk loci has proven to be difficult. Some researchers have proposed that dissecting BPD into clinical subgroups with distinct sub-phenotypes may yield genetically homogeneous cohorts to facilitate the mapping of BPD susceptibility genes. Among the sub-phenotypes, early-onset BPD is of particular interest, as several independent cohort studies have demonstrated its existence. Compared with non-early-onset BPD, the early-onset sub-type is associated with a more severe form of clinical manifestations characterized by frequent psychotic features, more mixed episodes, greater psychiatric comorbidity such as drug and alcohol abuse and anxiety disorders, higher risk of suicide attempt, worse cognitive performance, and poorer response to prophylactic lithium treatment. In addition, the pattern of disease inheritance seems to differ between early- and late-onset BPD families, with the former involving greater heritability. These observations indicate that early-onset BPD may be a genetically homogeneous subset and thus could be used in genetic studies to identify its susceptibility genes. A number of BPD genes identified by genome-wide association study (GWAS) have been widely replicated and intensively studied, but these studies did not include early-onset BPD.
Over the past two decades, a host of studies have investigated genetic loci responsible for early-onset BPD through linkage analyses, candidate-gene association, analyses of copy number variants (CNVs), and GWAS, but findings are inconclusive. Candidate-gene association studies have identified a number of genes potentially associated with early-onset BPD, including the glycogen synthase kinase 3-β gene, the circadian clock gene Per3, the serotonin transporter gene, the brain-derived neurotrophic factor gene, and the gene coding the synaptosomal-associated protein SNAP25.

However, very few positive findings of these studies have been replicated independently. Findings from linkage studies suggested chromosomal regions harboring the susceptibility genes at 3p14 and 21q22, plus loci at 18p11, 6q25, 9q34, and 20q11 with nominal significance. Studies of CNVs in early-onset BPD were based on relatively small effect sizes and were irreproducible, suggesting that CNVs are unlikely to be the major source of liability. Finally, GWAS failed to find any risk variant with genome-wide statistical significance in Caucasian populations, although some variants showed suggestive significance. In previous genetic studies, the definition of early onset in BPD typically ranged from 15 to 25 years of age. These association studies were largely conducted with small sample sizes and were underpowered. Most of them compared early-onset BPD vs. healthy controls. Such a case-control design is more likely to identify susceptibility genes for BPD per se, but not for the early-onset sub-type. The optimal strategy to identify genes for early-onset BPD is to include a non-early-onset BPD group for comparison. In this paper, we report findings from a GWAS with high-density SNP chips on early-onset (defined as ≤20 years of age) BPI patients of Han Taiwanese descent. The clinical phenotype assessment of manic and depressive episodes was performed by well-trained psychiatric nurses and psychiatrists using a cross-culturally validated and reliable Chinese version of the Schedules for Clinical Assessment in Neuropsychiatry, supplemented by available medical records. All patients were diagnosed according to the DSM-IV criteria for BPI disorder with recurrent episodes of mania with or without depressive episodes. The assessment of onset age was based on a life chart prepared with a graphic depiction of the lifetime clinical course for each BPI patient recruited.
This life chart included essentially all the mood episodes, with date of onset, duration, and severity. The construction of this life chart was based on integrated information gathered from direct interviews with patients and their family members, interviews with in-charge psychiatrists, and a thorough medical chart review.

Different definitions for early onset of BPI have been proposed in previous work. Our selection of 20 as the age threshold was based on a systematic review of pediatric BPD. The age at onset was defined by the first mood episode. Of all patients, 1306 with genotyping data available were included in the discovery group for GWAS, and the remaining 473 without genotyping data were included in the replication group. In this paper, we have reported one of the largest GWASs to investigate genetic susceptibility to early-onset BPI, with the first mood episode occurring at ≤20 years of age. We employed a standardized psychiatric interview and constructed a life chart with detailed clinical history to ensure the accuracy and homogeneity of the phenotype for genotyping. Our GWAS with high-density SNP chips identified the SNP rs11127876 in the CADM2 gene to be associated with early-onset BPI in both discovery and replication groups, and the meta-analysis for the association was close to genome-wide significance. The gene CADM2 on chromosome 3 encodes a synaptic cell adhesion molecule that is prominently expressed in neurons and plays key roles in the development, differentiation, and maintenance of synaptic circuitry of the central nervous system. In previous GWASs, CADM2 has been found to be associated with a number of mental health related traits, including alcohol consumption, cannabis use, reduced anxiety, neuroticism and conscientiousness, and increased risk-taking behavior. CADM2 has also been reported to be associated with executive functioning and processing speed, general cognitive function, and educational attainment. Though there have been no investigations examining the risk-taking phenotype in early-onset relative to non-early-onset BPD, Homes et al. showed that BPD patients with a past history of alcohol abuse or dependence had a higher risk-taking propensity, suggesting a relationship between early-onset BPD and risk-taking propensity. Of note, Morris et al.
suggested that CADM2 variants may not only be linked with psychological traits, but also influence metabolic-related traits, such as body mass index, blood pressure, and C-reactive protein. In addition, they found that CADM2 variants had genotype-specific effects on CADM2 expression levels in adult brain and adipose tissues.

The finding highlights the potential pleiotropy of the CADM2 gene, i.e., the genetic variants may influence multiple traits and shared biological mechanisms across brain and adipose tissues through the regulation of CADM2 expression. Given that metabolic comorbidities are prevalent in patients with early-onset BPD, it is conceivable that CADM2 variants may influence both psychological and physical traits, further contributing to a more severe clinical sub-type of BPD with an accompanying risk of metabolic adversities. In addition, an association between risk-taking and obesity has also been implicated in previous research, which suggests that risk takers tend to overlook health-related outcomes and are prone to aberrant reward circuitry predisposing them to poor dietary choices and excessive intake. Collectively, in line with the characteristics found to be associated with CADM2 variants, it is likely that CADM2 may exert an effect on the constellation of clinical features related to early-onset BPD with greater symptom severity. Therefore, our findings suggest that CADM2 genetic variants may have significant effects on a sub-type of BPD with early onset. Two previous GWASs comparing early-onset BPD patients with healthy controls did not find any genetic variants reaching genome-wide significance. Our study included a larger sample of early-onset BPI patients to conduct GWAS using high-density genotyping. The statistical power was calculated using a post-hoc power calculator, combining the allelic frequencies of both discovery and replication groups. In this study of two independent samples of BPI with a dichotomous endpoint, the power reached 99.4% and 18.2% under type I error = 0.05 and 5 × 10−8, respectively. Results of our study are thus likely to be underpowered under the more stringent type I error threshold.
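The post-hoc power contrast above (high power at α = 0.05, low power at the genome-wide threshold of 5 × 10⁻⁸) can be sketched with a normal-approximation power calculation for comparing a risk-allele frequency between two groups. The frequencies and sample sizes below are illustrative placeholders, not the study's actual values:

```python
# Hedged sketch: approximate power of a two-sided two-sample proportion
# test (normal approximation), as used in post-hoc power calculators.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def norm_ppf(q):
    """Standard normal inverse CDF by bisection (adequate for power calcs)."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def two_prop_power(p1, n1, p2, n2, alpha):
    """Power to detect a difference between proportions p1 and p2."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_crit = norm_ppf(1 - alpha / 2)
    z = abs(p1 - p2) / se
    return (1 - norm_cdf(z_crit - z)) + norm_cdf(-z_crit - z)

# Illustrative allele frequencies and group sizes (not the study's data):
power_nominal = two_prop_power(0.12, 600, 0.08, 1200, alpha=0.05)
power_gwas = two_prop_power(0.12, 600, 0.08, 1200, alpha=5e-8)
```

Running this with any plausible inputs reproduces the qualitative pattern reported: power under the genome-wide threshold is far lower than under the nominal α = 0.05 for the same data.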
However, the frequency of risk allele T was higher in patients with onset ≤20 than in patients with onset >20 in both discovery and replication groups. We believe all these findings provide strong evidence to confirm the association of this SNP with early-onset BPD. In Table 2, the minor allele frequencies differ considerably between the discovery and replication cohorts. In the NCBI SNP database, the minor allele frequency of rs11127876 is 0.08 in Han Chinese in Beijing, close to our results, suggesting that the different allele frequencies observed in Table 2 may mainly result from our sampling. The over-representative minor allele frequency in the replication group may have come from random sampling or from effects of hidden characteristics of our patients. Genetic variants of CADM2 have been reported to be associated with behavioral and metabolic traits, which were not assessed in this study. Though the minor allele frequencies of rs11127876 were different in the discovery and replication groups, the same direction of the ORs for the rs11127876 minor allele supports the reliability of our findings. The SNP rs75928006, located upstream of MIR522, reached genome-wide significance in the discovery group but failed to show statistical significance in the replication group.

MIR522 promotes glioblastoma cell proliferation, but there was no evidence to suggest its association with any psychiatric disorder. One major limitation of this study is the possibility of recall bias about the exact onset age of the first mood episode of BPI, particularly when there was a long history of the illness. The preparation of a life chart containing the detailed clinical course and treatment, based on a semi-structured clinical interview plus a thorough medical chart review for individual patients, should have overcome this potential limitation satisfactorily. In summary, we have identified a genetic locus, rs11127876 in the CADM2 gene, associated with early-onset BPI. The finding reflects the co-sharing of genetic features between psychiatric disorders and behavioral traits. Further investigations of the biological function of CADM2 in BPI are warranted.

The fatty acid amide, palmitylethanolamide, was previously found to inhibit formalin-evoked pain behavior in rodents (Calignano et al., 1998; Jaggar et al., 1998). In the present study, we have further characterized the antinociceptive activity of this endogenous lipid molecule in several models of phasic and tonic pain. Our initial structure-activity relationship studies suggest that the ability of palmitylethanolamide to reduce formalin-evoked nociception may have distinct structural requirements, insofar as small variations in chemical structure were found to produce substantial losses of biological activity. For example, myristylethanolamide (shorthand fatty acid designation, 14:0) and palmitoleylethanolamide (16:1Δ9 cis) displayed no significant antinociceptive activity, despite their close structural resemblance to palmitylethanolamide (16:0). These findings, which need to be extended by testing a wider range of structural analogs, are consistent with the possibility that the effects of palmitylethanolamide are mediated by activation of a selective receptor site.
This hypothesis is further supported by the ability of the cannabinoid CB2 receptor antagonist, SR144528, to prevent palmitylethanolamide-evoked antinociception (Calignano et al., 1998; present study). The relationship of the putative receptor activated by palmitylethanolamide with the cloned cannabinoid CB2 receptor subtype, if any, currently remains undetermined. There is general agreement that palmitylethanolamide does not productively interact with the cloned cannabinoid CB2 receptor (Devane et al., 1992; Griffin et al., 2000), a negative finding that we have reproduced in our lab (S. Kathuria and D. Piomelli, unpublished observations). However, activation of cannabinoid CB2 receptors by the selective agonist, HU308, was recently shown to inhibit pain in the formalin model (Hanus et al., 1999). Oleylethanolamide was found to exert a weak antinociceptive effect in the formalin test, which was reduced by systemic administration of either cannabinoid CB1 or CB2 receptor antagonists. These results suggest that oleylethanolamide may reduce nociception through a dual mechanism. Oleylethanolamide may weakly interact with a receptor site sensitive to the cannabinoid CB2 receptor antagonist SR144528. Moreover, by inhibiting anandamide inactivation (Desarnaud et al., 1995; Di Tomaso et al., 1996; Piomelli et al., 1999), oleylethanolamide may cause anandamide to accumulate in the injected paw and activate local cannabinoid CB1 receptors. The possibility that oleylethanolamide directly binds to and activates cannabinoid CB1 receptors is unlikely, because oleylethanolamide displays no affinity for these receptors in vitro (S. Kathuria and D. Piomelli, unpublished observations). In previous experiments, using an identical protocol, we failed to observe the antinociceptive effects of oleylethanolamide (50 mg per animal, intraplantar) in mice (Calignano et al., 1998).

Confirmation of such a hypothesis would have substantial public health implications.

Therefore, genome-wide significance may identify loci with larger genetic effects, while others with smaller effects remain undetected for a given population size. Variation in ADGRL3 has been implicated in ADHD in diverse populations. ADGRL3 is a member of the latrophilin subfamily of G-protein-coupled receptors and is most strongly expressed in brain regions implicated in the neurophysiological basis of ADHD. Mouse and zebrafish knockout models also support the implication of ADGRL3 in ADHD pathophysiology. More recently, Martinez et al. identified a brain-specific transcriptional enhancer within ADGRL3 that contains an ADHD risk haplotype associated with reduced ADGRL3 mRNA expression in the thalamus. This haplotype was associated not only with ADHD, but also with disruptive behaviors, including SUD. A member of the family of leucine-rich repeat transmembrane proteins, FLRT3, has been identified as an endogenous postsynaptic ligand for latrophilins. Interference with this interaction reduces excitatory synapse density in cultured neurons and decreases afferent input strength and dendritic spine number in dentate granule cells, which implicates ADGRL3 and FLRT3 in glutamatergic synapse development. Similarly, convergent evidence from a network analysis of a gene set significantly associated with and/or linked to ADHD and SUD revealed pathways involved in axon guidance, regulation of synaptic transmission, and regulation of transmission of nerve impulses. These data altogether suggest that ADGRL3 may be an important SUD susceptibility gene. Strong evidence from clinical and genetic association studies suggests that genetic factors play a crucial role in shaping susceptibility to both ADHD and SUD.

More strikingly, ADHD treatment has been shown to reduce the risk of SUD. Though the neurobiological basis for this association remains unclear, a variety of causal pathways from ADHD to SUD have been proposed that involve conduct problems. Clinical studies have suggested that the link between SUD and ADHD disappears after controlling for co-morbid CD. In agreement with these studies, the presence of CD was a major predictor of SUD in the ARPA-based predictive models for SUD in the Paisa and Spanish cohorts. Some researchers implicate genetically mediated personality traits, such as impulsivity and lack of inhibitory control, as a link between ADHD and SUD resulting from common neurological substrates. Some investigators have proposed that patients with ADHD use addictive substances to self-medicate and that the differential response to drugs of abuse and atypical behavioral regulation in response to social cues may fuel substance use. Others suggest that the poor judgment and impulsivity associated with ADHD contribute to the development of substance dependence. Clinical variables from childhood have also been associated with SUD in patients with ADHD, such as ADHD sub-type, temper characteristics, sexual abuse, suspension from school, and a family history of ADHD. In summary, our results support a possible functional role for ADGRL3 in modulating drug-seeking behavior. Regardless of the type of abused substance, longitudinal studies generally find that the onset of ADHD precedes that of SUD, suggesting that the psychopathology of ADHD is not secondary to SUD in most patients. Accordingly, it is reasonable to consider that timely diagnosis and treatment of ADHD with stimulant medication may reduce the occurrence and/or severity of SUD. Based on the relationship with medication response, we speculate that ADGRL3 variants may underlie a differential genetic susceptibility not only to SUD, but also to the long-term protective effects of medication treatment.
Inasmuch as ADGRL3 participates in synaptic formation and function, its involvement in SUD could be mediated by either influencing brain development or moderating drug-induced changes in synaptic strength. Molecular studies are required to elucidate the pathogenic mechanism associated with ADGRL3 dysfunction in SUD.

Adaptive reward processing is critical for successful goal attainment and functioning across most domains of life.

Emerging research has begun to investigate different aspects of reward processing that may be impaired in schizophrenia and contribute to the diminished motivation and goal-directed behavior that often accompany this disorder. In addition to studies of reward anticipation and learning, investigators have examined reward-related decision-making processes, such as effort-based decision making. Another decision-making process that has received increased attention is "delay discounting" (DD), which refers to whether one is willing to forego a smaller, sooner reward for the sake of a larger, later reward. Delay discounting is well suited for cross-species translational research, as a number of animal models of DD have been developed. Neurobiological studies in animals demonstrate central roles for the nucleus accumbens core and the orbitofrontal cortex in DD. In line with these findings, human fMRI studies of DD implicate a limbic circuit showing activity during selection of smaller, sooner rewards, prefrontal areas associated with cognitive control showing activity during selection of larger but later rewards, and relative activity across these regions associated with behavioral preference. The vast majority of human DD studies use conventional decision-making paradigms in which subjects make a series of forced choices between smaller, sooner and larger, later monetary rewards. In these paradigms, participants either receive no actual rewards, referred to as "hypothetical delay discounting tasks," or are paid out for only one or a few randomly selected trials, referred to as "potentially real reward delay discounting tasks." Numerous studies show that, all things being equal, the more a reward is delayed the less subjective value it has.
People typically display a monotonically decreasing function such that reward value progressively diminishes as the delay to a reward grows longer. There are, however, substantial individual differences in the degree to which reward values are discounted as delays grow longer. For example, individuals with relatively greater discounting show a steeper reward/delay curve, such that smaller/sooner rewards are more readily chosen than larger/later rewards. Such individuals are more susceptible to proximal rewards and have been described as "temporally myopic" or "impulsive". Consistent with this description, steeper discounting curves are associated with impulse control difficulties, including nicotine use, substance use disorders, and unhealthy behaviors.
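The steeper-versus-shallower curves described above are conventionally modeled with Mazur's one-parameter hyperbolic function, V = A/(1 + kD), where a larger discount rate k yields a steeper, more "impulsive" curve. A minimal sketch of this form (the function and parameter names are illustrative, not taken from the studies cited here):

```python
# Hyperbolic discounting (Mazur, 1987 form): subjective value V of a
# reward of amount A delayed by D days, for an individual discount rate k.
def hyperbolic_value(amount, delay, k):
    """Discounted (subjective) value of a delayed reward."""
    return amount / (1.0 + k * delay)

# A discounter with k = 0.01/day values $1,000 delayed one year at
# roughly $215, so a $400 immediate offer would be preferred.
delayed_worth = hyperbolic_value(1000.0, 365, k=0.01)   # ~215.05

# A steeper discounter (larger k) devalues the same delayed reward more.
steeper_worth = hyperbolic_value(1000.0, 365, k=0.10)   # ~27
```

Fitting k per participant from the observed indifference points is what lets studies compare "steep" versus "shallow" discounters.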

In contrast to conventional paradigms, a more recent development in human research is the use of experiential DD paradigms in which subjects make a series of choices between smaller, sooner vs. larger, later monetary rewards and actually receive these rewards in real time on a trial-by-trial basis. This format much more closely parallels DD tasks used in animal research. Like hypothetical tasks, experiential tasks show good sensitivity in differentiating between users and non-users of nicotine and other substances. Further, they may be more sensitive to treatment-related changes than hypothetical tasks, making them potentially attractive paradigms for endpoints in clinical trials. Despite the fact that both hypothetical and experiential tasks assess DD, they often show only small inter-correlations. The handful of DD studies in schizophrenia has used only hypothetical tasks. Findings have been mixed, with several reporting greater DD in schizophrenia than controls but others reporting normal DD. Findings regarding associations between DD and certain clinical symptoms and neurocognition have also been mixed. Although it has been proposed that negative symptoms may partly reflect DD disturbances, support has been inconsistent. Similarly inconsistent findings have been reported for associations with neurocognition. On the other hand, studies consistently indicate that DD is not significantly related to positive or mood-related symptoms or to anti-psychotic medications. Overall, it is difficult to integrate findings across studies, and most of the studies have been underpowered. Further, all studies have been cross-sectional, raising concerns that inconsistencies may reflect problems with the reliability of the paradigms used in these studies. The current study evaluated DD using a hypothetical task and, for the first time, an experiential task, in a relatively large sample of stabilized outpatients with schizophrenia.
We had four primary goals. First, to address concerns about the validity of DD data in schizophrenia, we examined the orderliness of the DD data. In addition, we selected paradigms that enabled us to map the shape of the discounting curves to determine if individuals with schizophrenia show the typical hyperbolic shape. Second, we compared discount rates between the schizophrenia and control groups. Although prior studies using hypothetical tasks are mixed and did not support strong directional hypotheses, they led us to predict that the schizophrenia group would show higher DD rates than controls on both tasks. Third, we evaluated whether DD was related to clinical symptoms and neurocognition. We were particularly interested in whether greater discounting would relate to higher negative symptoms. Further, we determined whether the use of nicotine and other substances was associated with discounting rates, in light of some evidence for steeper discounting rates among individuals with schizophrenia who are smokers. Fourth, in the schizophrenia group, we evaluated the one-month test-retest stability of the two tasks.

The sample included 131 individuals with schizophrenia and 70 demographically matched healthy controls. Individuals with schizophrenia were recruited from outpatient clinics at the University of California, Los Angeles, the Veterans Affairs Greater Los Angeles Healthcare System, and from local clinics and board-and-care facilities.

Selection criteria included: a Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnosis of schizophrenia determined with the Structured Clinical Interview for DSM-IV (SCID); age 18–60 years; no clinically significant neurological disease; no history of serious head injury; no evidence of current alcohol, cannabis, or other substance dependence disorder or current substance abuse disorder (lifetime histories of these disorders were acceptable, and nicotine-related disorders were not formally assessed); no history of mental retardation or developmental disability; and clinical stability. Diagnostic assessments were conducted by interviewers trained according to established procedures. Eighty-five percent of the participants with schizophrenia were taking a second-generation anti-psychotic, 8% a first-generation anti-psychotic, 3% were taking both, and 4% were not taking an anti-psychotic. The mean chlorpromazine equivalent units was 375.95. Control participants were recruited through advertisements posted on websites. Selection criteria for healthy controls included: no psychiatric history involving schizophrenia spectrum disorder or other psychotic or recurrent mood Axis I disorder according to the SCID-I and SCID-II; no family history of a psychotic disorder among first-degree relatives based on participant report; no evidence of current or lifetime history of alcohol, cannabis, or other substance dependence disorder; and no evidence of current substance abuse disorder (nicotine-related disorders were not formally assessed). Criteria concerning age, neurological disease, and head trauma were the same as listed above for the schizophrenia group. Written informed consent was obtained prior to participation in accordance with approval from the local Institutional Review Board. The DD task data were collected as part of a larger grant-funded project on reward processing and negative symptoms in schizophrenia but have not been published elsewhere.
An aim of the project was to examine associations between reward processing measures and negative symptoms among individuals with schizophrenia, and a larger clinical than healthy comparison sample was included to evaluate these within-group relationships. The hypothetical DD task was administered earlier in the assessment battery than the experiential DD task. The schizophrenia group was administered both DD tasks twice; controls received the tasks only at baseline. Both groups completed a neurocognitive battery at baseline. $1,000 Delay-Discounting Task. Participants made a series of choices between receiving a $1,000 delayed hypothetical reward and an adjusting smaller immediate reward. The magnitude of the smaller immediate option was adjusted across trials according to a previously described algorithm until an indifference point was determined. Once an indifference point was determined, the larger later option was delayed further and the adjustment procedure was repeated with that new delay. Seven delays were assessed: 1 day, 1 week, 1 month, 6 months, 1 year, 5 years, and 25 years. Indifference points were expressed as a proportion of the larger later reward. Unlike the monetary choice questionnaire used in several prior studies of schizophrenia, the present task determined an indifference point for each delay, which enabled us to assess the orderliness of indifference points across delays and the shape of the discounting function. The Quick Discounting Operant Task used a coin dispenser for money reward delivery. A visual depiction of the task is shown in Supplemental Figure 1. Before beginning the task, participants were instructed to sit at the desk with eyes open during any waiting periods in the task, and were forbidden from engaging in other behaviors such as reading.
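The adjusting-amount procedure described above can be illustrated with a simple bisection-style sketch. This is an assumption-laden illustration only: the study cites its own previously described algorithm, which may differ in starting value, step rule, and trial count.

```python
# Illustrative adjusting-amount procedure (NOT the study's exact algorithm):
# the immediate offer starts at half the delayed amount and moves by
# successively halved steps toward the participant's indifference point.
def indifference_point(choose_immediate, larger_later=1000.0, n_trials=6):
    """choose_immediate(immediate, larger_later) -> True if the immediate
    offer is chosen; returns the estimated indifference amount."""
    immediate = larger_later / 2.0
    step = larger_later / 4.0
    for _ in range(n_trials):
        if choose_immediate(immediate, larger_later):
            immediate -= step   # immediate offer too attractive; lower it
        else:
            immediate += step   # delayed reward still preferred; raise it
        step /= 2.0
    return immediate

# Simulated hyperbolic discounter (k = 0.01/day) facing a 1-year delay:
# the delayed $1,000 is subjectively worth about $215 to this chooser.
subjective = 1000.0 / (1 + 0.01 * 365)
point = indifference_point(lambda imm, ll: imm > subjective)
proportion = point / 1000.0   # expressed as a proportion, as in the study
```

Repeating this at each of the seven delays yields the set of indifference points whose orderliness and shape the study then examines.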

Brain stimulation reward remains the most obscure aspect of CB-related reward.

The role of release of endocannabinoids is less clear, however, as we have reported that URB597 unmasks some THC-like discriminative effects of nicotine, whereas another study found that URB597 did not potentiate the discriminative effects of nicotine itself. Importantly, unpublished results from company-sponsored clinical trials suggest that blockade of CB1 receptors may be effective in promoting smoking cessation and abstinence. Finally, the few studies available have not supported a role of CB1 receptors in the primary reinforcing effects of cocaine. In addition, in vivo microdialysis studies have found that neither anandamide nor 2-AG is formed in the shell of the NAcc during active self-administration of cocaine. However, CB1 receptors appear to be required for the incentive motivational effects of cocaine, as measured by self-administration under progressive-ratio schedules and by reinstatement of extinguished cocaine self-administration. Food reward also clearly depends on CB1 receptors, and almost every aspect of food reward is affected by activation or inactivation of CB1 receptors. Again, data from clinical trials appear to support the preclinical findings that CB1 blockade is effective in promoting weight loss, although the peripheral and hormonal effects of rimonabant may be more important than the central effects on reward. However, the involvement of released endocannabinoids may be limited to appetitive aspects of food reward, as concentrations of both anandamide and 2-AG increase in the NAcc during fasting but not during consumption of food.

In addition, inhibitors of anandamide uptake do not increase food intake, further indicating that endogenous anandamide, at least, may be insufficient to drive food intake. In contrast to other neurotransmitter systems involved in brain stimulation reward functions, such as dopaminergic and opioid systems, the effects of endocannabinoid system alterations in different studies have been contradictory and, in fact, activation of the endocannabinoid system appears to reduce the rewarding effects of electrical brain stimulation. Further studies are needed, but it is worth noting that in some instances dopaminergic and CB systems appear to have opposing, antagonistic effects. For example, dopamine D2 receptor activation in the dorsal striatum releases anandamide, which might act to modulate or counterbalance the effects of dopamine. Also, glutamate release in the VTA activates dopaminergic neurons and, at the same time, leads to the release of 2-AG, which in turn reduces glutamate release. On the other hand, CB agonists induce release of dopamine in the shell of the NAcc and have rewarding effects when administered locally both in the NAcc and the VTA. Thus, it is possible that, depending on the balance between endocannabinoids and dopamine and the intensity of stimulation of the region, the systems facilitate or oppose each other. This could be a mechanism for fine-tuning of dopaminergic activity. As electrical brain stimulation is a very strong excitatory stimulus, it is possible that the endocannabinoid system acts to counteract and oppose such stimulation.

Alcohol consumption accounts for 5.9% of deaths globally each year, or roughly 3.3 million deaths. Although alcohol use alone represents a serious public health concern, high comorbidity rates have been observed at an epidemiological level between alcohol and nicotine use, such that 6.2 million adults in the United States endorsed both an alcohol use disorder and dependence on nicotine.

Moreover, an individual is three times more likely to be a smoker if he/she is dependent on alcohol, and those who are dependent on nicotine are four times more likely to be dependent on alcohol. Given these statistics, it is evident that heavy drinking smokers comprise a distinct sub-population of substance users that warrants unique investigation. Magnetic resonance imaging (MRI) studies that have focused specifically on the effects of alcohol use on brain morphometry have investigated the relationship between drinking variables, such as lifetime duration of alcohol use or lifetime alcohol intake, and brain structure in current alcohol users. For example, Fein et al. found lifetime duration of alcohol use was negatively associated with total cortical gray matter volume in alcohol-dependent males, but not in light drinkers. Moreover, findings from Taki et al. suggest a significant negative association between lifetime alcohol intake and gray matter volume reductions in the bilateral middle frontal gyri among non-alcohol-dependent Japanese men. A recent study, however, found no significant relationship between lifetime alcohol consumption and gray matter volumes in a sample of 367 non-alcohol-dependent individuals. Given these contrasting findings, it is uncertain whether quantity variables, such as lifetime alcohol intake or duration of alcohol use, account for many of the gray matter volume reductions observed with continued alcohol use. Various studies have implicated several different regions of gray matter atrophy in alcohol-dependent individuals, such as the thalamus, middle frontal gyrus, insula, cerebellum, anterior cingulate cortex (ACC), and several prefrontal cortical areas.
Due to these heterogeneous results, a meta-analysis was conducted, which concluded that there were significant gray matter decreases in the ACC, left dorsal striatum/insula, right dorsal striatum/insula, and the posterior cingulate cortex in alcohol-dependent users relative to healthy controls. This suggests that brain areas implicated in processes such as reward and cognition show the most consistent gray matter atrophy in alcohol-dependent individuals, but it is unclear whether overall amount of alcohol consumption or aspects of dependence severity explain these findings. Furthermore, some of the neuroimaging studies focusing on alcohol users have not mentioned whether the alcohol users also used nicotine, did not examine the effects of nicotine use on brain structure, did not control for nicotine use in their analyses, assessed nicotine use with a dichotomous questionnaire, or simply mentioned the number of smokers in the study.

This makes it difficult to ascertain whether the observed neural effects were attributable to alcohol and/or nicotine use, and further illustrates the necessity and utility of disentangling the neural effects of each substance. Similar to studies of alcohol use effects on brain morphometry, several MR imaging studies have been conducted to specifically examine the effects of nicotine use on brain structure. As with studies of alcohol users, studies of cigarette smokers have attempted to quantify and incorporate a lifetime use variable, such as pack-year smoking history, which has been found to negatively correlate with PFC gray matter densities as well as gray matter volume in the middle frontal gyrus, temporal gyrus, and the cerebellum. Interestingly, Brody et al. found no significant association between pack-year smoking history and regions of interest determined as having significant between-group differences, such as the left dorsolateral PFC, ventrolateral PFC, and left dorsal ACC. Given these conflicting findings, it is uncertain whether quantity variables, such as pack-year smoking history, account for many of the gray matter volume reductions observed in nicotine dependence. Unlike studies of alcohol-dependent individuals, some studies of nicotine-dependent individuals have examined symptoms of dependence severity in relation to brain morphometry. For example, the Fagerström Test for Nicotine Dependence (FTND), which was not associated with pack-year smoking history, was not correlated with PFC or insular gray matter density. The lack of a significant correlation between FTND scores and pack-year smoking history suggests that quantity of use and dependence severity symptoms may be unrelated in nicotine dependence, and thus have distinct relationships with brain structure. Overall, gray matter degradation has been observed in the thalamus, medial frontal cortex, ACC, cerebellum, and nucleus accumbens in nicotine-dependent individuals.
Due to these widespread results, a meta-analysis was conducted, which found that only the left ACC showed significant gray matter reductions in nicotine-dependent individuals compared to healthy controls. While studying primarily alcohol- or nicotine-using populations carries unique benefits, specific investigation is needed into heavy drinking smokers, as past studies have shown compounded neurocognitive effects, as well as pronounced gray matter volume reductions in heavy drinking smokers when compared to nonsmoking light drinkers. Chronic cigarette smoking has been found to have negative consequences on neurocognition during early abstinence from alcohol, and in one particular study, it was found that after 8 months of abstinence, actively smoking alcohol-dependent individuals performed worse on several neurocognitive measures, such as working memory and processing speed, when compared to never-smoking alcohol-dependent individuals.

Additionally, formerly smoking alcohol users were found to perform more poorly than never-smoking alcohol users at this time point. These findings not only illustrate the contribution of smoking status to neurocognitive measures but establish the clinical relevance of nicotine use in heavy drinkers. This relevance, paired with the compounded neurocognitive and morphometric effects, further merits investigation into this unique sub-population of substance users. The present work aimed to ascertain the effects of alcohol and nicotine dependence severity on gray matter density in a sample of 39 non-treatment-seeking heavy drinking smokers using standard voxel-based morphometry. While some imaging studies have previously investigated the relationship of FTND scores with brain structure, to our knowledge, no imaging study to date has examined how alcohol dependence severity relates to gray matter density in heavy drinking smokers. Thus, the goal of this study was to examine whether alcohol or nicotine dependence severity was correlated with gray matter density in heavy drinking smokers, while controlling for age, gender, and total intracranial volume. By examining dependence severity scores in addition to quantity-of-use variables, we may be able to capture how dependence is related to structural changes in the brain in a way that is not captured by variables that focus singularly on quantity of use. Based on previous findings, we hypothesized that gray matter density would be negatively related to quantity of both alcohol and nicotine use in regions such as the middle frontal gyrus. We also hypothesized that dependence severity scores would uniquely relate to gray matter atrophy in several regions previously identified across the meta-analyses of voxel-based morphometry studies, such as the ACC, dorsal striatum, and insula.
The subjects for the present study are a subset of participants from a medication development study of varenicline, naltrexone, and their combination in a sample of heavy drinking smokers. Subjects participated in the medication component of the study, details of which have been described in a previous publication, and a sub-sample was invited to complete a neuroimaging session. Participants were recruited from the greater Los Angeles area through online and print advertisements with the following inclusion criteria: 1) between 21 and 55 years of age; 2) reported smoking at least 7 cigarettes per day; and 3) endorsed heavy drinking per the National Institute on Alcohol Abuse and Alcoholism (NIAAA) guidelines: for men, >14 drinks per week or ≥5 drinks per occasion at least once per month over the last 12 months; for women, >7 drinks per week or ≥4 drinks per occasion at least once per month over the last 12 months. Participants were excluded from the study based on the following criteria: 1) had a period of smoking abstinence greater than 3 months within the past year; 2) reported use of illicit substances within the last 60 days, confirmed via positive urine toxicology screen at the assessment visit; 3) endorsed a lifetime history of psychotic disorders, bipolar disorders, or major depression with suicidal ideation; 4) endorsed moderate or severe depression symptoms as measured by a score of 20 or higher on the Beck Depression Inventory-II; 5) reported current use of psychotropic medications; 6) reported any MRI contraindications, such as any metal fragment in the body or pregnancy; and 7) reported MRI constraints, such as left-handedness or color blindness. As no Structured Clinical Interview for the Diagnostic and Statistical Manual, 4th edition (DSM-IV) or 5th edition (DSM-5), Axis I Disorders was administered, drinking status for participants was determined solely via NIAAA heavy drinking guidelines.
After a telephone screening to determine eligibility, participants came to the laboratory for a screening visit, during which informed, written consent was obtained. A urine cotinine test along with carbon monoxide levels verified self-reported smoking patterns, and a breath alcohol concentration of 0.00 was required at the beginning of each visit. Eligible participants then came in for a physical examination and, if still eligible, began taking medication for nine days, as previously described elsewhere. Participants received varenicline alone, naltrexone alone, their combination, or matched placebo.

Model fit was then explored using the Akaike Information Criterion (AIC) model fit index.

Additionally, receiving a larger number of days' supply of prescription opioids was a predictor of an opioid use disorder diagnosis, as was having a higher average daily dose. In addition to demographic and other markers, behaviorally based criteria have been successfully used to identify problematic cases of prescription drug misuse. In a recent study, clinical expert raters identified key indicators of misuse, including interpersonal problems, arrest history, multiple opioid use, use for no identifiable reason, and comorbid other substance misuse, and used these indicators along with known indicators of misuse to improve accuracy in identifying misuse. This study indicates that multiple sources of data, particularly those regarding different domains of functioning, may best identify those at risk for opioid abuse and dependence.

Previous studies have also linked problematic use of prescription drugs and mental health diagnoses. Nonmedical use of opioids has been associated with panic, depressive, social phobic or agoraphobic symptoms, and the overall number of psychiatric symptoms endorsed. Development of opioid abuse and dependence has also been associated with non-opioid substance use and mental health disorders. Recent prospective research has indicated that non-medical use of prescription medications, including opioids, places individuals at risk for unipolar depressive, bipolar, and anxiety disorders. The converse relationship may also be true: other mental health conditions may predispose individuals to misuse opioids.

In a recent review of the known factors predicting opioid misuse, the authors caution that although many mental health diagnoses may be risk factors for opioid misuse, these conditions are likely to be concealed due to stigma, and some individuals may choose to take prescription opioids to treat undiagnosed co-occurring disorders rather than the appropriate psychiatric medication. This study seeks to identify demographic and healthcare-related variables that predict the development of opioid abuse or dependence, utilizing data obtained from the Thomson Reuters MarketScan Commercial Claims and Encounters (CCAE) database, which contains information about commercially insured and Medicare-eligible patients. The use of a large sample, physician-diagnosed disorders, and comprehensive demographic and health care utilization data enables detailed analysis of individuals at risk for the development of opioid abuse or dependence. First, individuals diagnosed with opioid use disorders will be compared with those who are not given opioid use diagnoses on a variety of domains. Second, the use of mathematical modeling techniques will aid in identifying people who are at risk for the development of opioid abuse or dependence. Patients within the CCAE database who had at least one opioid prescription claim between January 1, 2000 and December 31, 2008 were identified. Patients were included if they maintained continuous insurance eligibility for 6 months prior to, and 2 years beyond, this initial prescription claim. Individuals who subsequently received an ICD-9-CM diagnosis of opioid abuse or dependence were classified as those with opioid use disorders, hereafter referred to as OUDs, and individuals who did not receive a subsequent opioid abuse or dependence diagnosis were classified as those without opioid use disorders, hereafter referred to as non-OUDs. Of the OUDs, 266 received a diagnosis of opioid abuse, and the remaining 2,647 received a diagnosis of opioid dependence.
Abuse and dependence cases were therefore grouped together for all subsequent analyses, for the following reasons: 1) over 90% of the cases fell into the more serious category of dependence, 2) an abuse diagnosis is often a precursor to dependence, and 3) the clinical distinction between abuse and dependence is less important than the presence or absence of an addictive condition.

Furthermore, the distinction between abuse and dependence has been eliminated in the Diagnostic and Statistical Manual of Mental Disorders and replaced with opioid use disorders. The first set of planned comparisons involved conducting either t-tests or chi-square analyses to test for statistically significant differences between cases and controls on a variety of variables present in the database. These analyses also served the purpose of identifying variables of interest for the mathematical modeling to be conducted in the next step. With regard to mental health diagnoses and co-occurring substance use disorders, the predictor variables were not time dependent. Once the variables that statistically discriminated cases from controls were identified, significant interactions between these variables were identified using CHAID analyses. The goal of a CHAID analysis is to find homogeneous clusters of a response variable, where clusters are defined by the levels in a set of predictor variables. Particular emphasis is placed on the interaction of the predictor variables. The algorithm splits the population according to levels in the predictor variable that make the responses within the resultant groups as similar as possible and the averages between groups as different as possible. Significant interactions detected through CHAID were reviewed by the research team and included in the subsequent logistic regression model if the following conditions were met: 1) the levels in the predictor variable split the groups such that there was at least a 10% difference between the resultant groupings, and 2) a minimum of 10 participants per cell would need to result from the interaction split in order to be meaningful for future modeling.
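The chi-square splitting step at the heart of CHAID can be sketched as follows. This is a deliberately minimal illustration of the idea only (real CHAID also merges similar predictor levels, applies Bonferroni-adjusted p-values, and recurses into subgroups); the variable names are hypothetical, not taken from the study's database:

```python
# Minimal sketch of CHAID's core step: choose the categorical predictor
# whose levels best separate a binary response, judged by Pearson chi-square.
def chi_square(table):
    """Pearson chi-square for a dict {level: (n_cases, n_controls)}."""
    total_cases = sum(c for c, _ in table.values())
    total_ctrls = sum(n for _, n in table.values())
    total = total_cases + total_ctrls
    stat = 0.0
    for cases, ctrls in table.values():
        n = cases + ctrls
        for observed, margin in ((cases, total_cases), (ctrls, total_ctrls)):
            expected = n * margin / total
            stat += (observed - expected) ** 2 / expected
    return stat

def best_split(records, predictors, outcome):
    """Return the predictor whose levels give the largest chi-square."""
    scores = {}
    for p in predictors:
        table = {}
        for r in records:
            cases, ctrls = table.get(r[p], (0, 0))
            table[r[p]] = (cases + 1, ctrls) if r[outcome] else (cases, ctrls + 1)
        scores[p] = chi_square(table)
    return max(scores, key=scores.get)

# Toy data: a co-occurring-SUD flag perfectly separates OUD status while
# sex is uninformative, so "other_sud" wins the split.
records = [{"other_sud": 1, "male": i % 2, "oud": 1} for i in range(10)] \
        + [{"other_sud": 0, "male": i % 2, "oud": 0} for i in range(10)]
winner = best_split(records, ["other_sud", "male"], "oud")
```

The interaction terms the study describes correspond to re-running this selection within each resultant subgroup.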
Once the significant variables identified through the bivariate analyses were selected and the CHAID analyses performed, we divided the sample into a build set comprising 70% of the participants and a validation set comprising the remaining 30%. The research team devised and tested a series of 18 logistic regression models to fit the data, with variables selected on the basis of several criteria: 1) varying degrees of parsimony, from the simplest demographic-variables-only model to the all-inclusive model using every significant variable and interaction; 2) clinical setting, with models comprised of all mental health variables, or all pharmacy data, for example; and 3) all models were tested both with and without the interaction variables found through the CHAID analyses. Each model was tested using the validation set.
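This build/validation workflow with AIC-based comparison (AIC = 2k − 2 ln L) can be sketched under stated assumptions: the data, predictor sets, and model names below are synthetic stand-ins, not the study's variables, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def aic_logistic(model, X, y):
    """AIC = 2k - 2*lnL for a fitted binary logistic model."""
    p = model.predict_proba(X)[:, 1]
    eps = 1e-12                       # guard against log(0)
    log_lik = np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    k = model.coef_.size + 1          # coefficients plus intercept
    return 2 * k - 2 * log_lik

# Synthetic stand-in data: two informative predictors, two pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

# 70/30 build/validation split, mirroring the study's design.
X_build, X_val, y_build, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Candidate models of varying parsimony, scored by AIC on the build set.
candidates = {"signal_only": [0, 1], "all_vars": [0, 1, 2, 3], "noise_only": [2, 3]}
aics = {}
for name, cols in candidates.items():
    m = LogisticRegression().fit(X_build[:, cols], y_build)
    aics[name] = aic_logistic(m, X_build[:, cols], y_build)
# The informative models attain a lower (better) AIC than the noise model;
# the preferred model is then checked on X_val / y_val.
```

Swapping the 2k penalty for k·ln(n) gives the BIC, which in samples this large punishes extra variables far more heavily, matching the paper's stated reason for preferring the AIC.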

Global null hypothesis tests were used to determine the presence of one or more significant predictor variables. The AIC was selected a priori over other fit indices because the models were not nested, were built from the same database, and were based on a large sample size. In considering fit indices, we did not want to eliminate variables that could be of potential use in future models, which the BIC is prone to do, particularly with large sample sizes. We wanted to produce a model that favored sensitivity over parsimony, as the AIC does. The choice of the best-fitting model was based on the AIC, the relative overall parsimony of the model, and the predictive ability of the model to identify OUDs in the validation set.

For the first series of analyses, bivariate comparisons of OUDs and non-OUDs were completed on variables related to demographics, medical service utilization, co-occurring conditions, and concomitant medication usage. The exact variables of interest were chosen by a team of researchers with expertise in pharmacoeconomics, public health, substance misuse, and mental health. Table 1 presents demographic metrics for OUDs and non-OUDs. As expected, OUDs were more likely than non-OUDs to be younger and male. OUDs were also more likely to be a spouse or dependent, rather than the primary insured individual in the plan; 60.1% of non-OUDs were the primary insured person, whereas 43.2% of OUDs were. Data regarding participants' opioid utilization are presented in Table 2. The number of opioid classes differed significantly between OUDs and non-OUDs, and OUDs also had a higher mean count of both short-acting and long-acting opioids. Similarly, the number of days of opioid supply prescribed during the study period also differed between groups, with OUDs receiving an average of 272.5 days' supply of opioids and non-OUDs an average of 33.2 days' supply.
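Returning briefly to the fit index above: the AIC trades off fit against complexity as AIC = 2k − 2 ln L̂, where k is the number of parameters and L̂ the maximized likelihood. A minimal sketch, with entirely hypothetical log-likelihoods and parameter counts:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2*ln(L-hat).
    Lower values indicate a better fit/complexity trade-off."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical candidate models: (name, maximized log-likelihood, k).
# These numbers are illustrative only, not from the study.
candidates = [
    ("demographics only", -9500.0, 5),
    ("mental health",     -8200.0, 14),
    ("all variables",     -8150.0, 60),
]
ranked = sorted(candidates, key=lambda m: aic(m[1], m[2]))
```

With these made-up numbers, the all-inclusive model edges out the mental health model on AIC alone (16,420 vs. 16,428), illustrating the kind of near-tie that parsimony and validation performance are then used to break.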
Identified OUDs also had a higher number of opioid units dispensed. Notably, the number of pharmacies visited to fill opioid prescriptions differed significantly between groups, with OUDs visiting an average of 3.3 pharmacies per year, compared to 1.3 for non-OUDs. Annual medical service utilization rates also differed significantly between groups, as shown in Table 2. OUDs had significantly more physician visits, outpatient mental health visits, inpatient admissions, inpatient mental health admissions, hospitalization days, mental health hospitalization days, and emergency department encounters. Mental health diagnoses also significantly differentiated the two groups, as shown in Table 3. OUDs were more likely to have diagnoses of anxiety, mood, pain, personality, somatoform, and psychotic disorders than non-OUDs. Whereas 57.7% of OUDs had another substance use disorder diagnosis, only 3.4% of non-OUDs did. In descending order of frequency, the most commonly given substance misuse diagnoses for OUDs were: alcohol dependence (20.7%); other, mixed, or unspecified drug abuse (16.3%); unspecified drug dependence (13.6%); combinations of drug dependence excluding opioid (12.5%); tobacco dependence (12.1%); alcohol abuse (9.4%); cocaine dependence (7.1%); cannabis dependence (4.1%); cocaine abuse (3.9%); and cannabis abuse (3.0%).

All between-group differences were significant at the p < 0.001 level. Other drug abuse or dependence categories occurred in less than 3% of either group. Medication utilization also significantly differentiated non-OUDs from OUDs. Commensurate with the findings regarding elevated rates of mood and anxiety disorders among OUDs, these individuals were more likely to use SSRI medications and benzodiazepines than non-OUDs. Tricyclic antidepressant use was also much greater for OUDs than non-OUDs; much of this difference was accounted for by the rates of trazodone use, which is often prescribed for insomnia. Rates of anticonvulsant use were also significantly greater among OUDs, with gabapentin accounting for much of this difference. Medications related to pain also differentiated the two groups; OUDs were more likely than non-OUDs to be prescribed skeletal muscle relaxants, including cyclobenzaprine hydrochloride and carisoprodol. The receipt of nonsteroidal anti-inflammatory drugs (NSAIDs) was also more common among OUDs than non-OUDs. Medications are listed by category in Table 4.

The research team devised a series of models using the build set, which were then used to test for predicting OUD status within the validation set. These models were developed to include variables that could reasonably be expected to be present in other data sets. For example, one model was a "diagnostic data only" model including solely ICD-9 diagnoses, which are coded the same as DSM-IV-TR diagnoses for mental health conditions. Another model comprised "medical utilization data only" measures and was designed to use variables that might be available in other insurance or health data sets. A "pharmacy data only" model included utilization variables related to medications and opioid use data, similar to what might be available to a pharmacy benefit manager researcher.
The CHAID procedure identified a number of significant interactions that met the criteria outlined in the Methods section. These interactions were added to the core variables for each model, but only if the variables involved in the interaction were also included in the model individually. The model selected as the best fit for the data comprised mental health variables; specifically, this model included the diagnostic status for ICD-9 mental disorders and the health service utilization variables that focused on mental health. This model provided the best fit for the data, as defined by AIC values and by overall parsimony. The log likelihood ratio test statistic for the selected model was 12,695. The remaining models had LR values ranging from 5,785 to 13,095; all of these ratios were also statistically significant. Likewise, the results for the Wald and Score tests were significant across all models. The only other model with a lower AIC value than the selected model contained all of the variables presented in the bivariate comparisons above, as well as all significant interactions involving those variables, while providing only a modest decrease in AIC. The resulting model is presented in Table 5. This predictive model was 79.5% concordant with actual OUDs in the validation data set, meaning that almost four-fifths of the OUDs were correctly identified when the model was applied to a different sample of participants.
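The concordance figure can be read as a c-statistic: across all (OUD, non-OUD) pairs in the validation set, the proportion in which the model assigns the OUD case the higher predicted probability. A minimal sketch of that pairwise definition, assuming predicted probabilities are already in hand (the probabilities in the test values are hypothetical):

```python
def concordance(case_probs, control_probs):
    """C-statistic: fraction of (case, control) pairs in which the
    model scores the case higher than the control. Ties count as
    half concordant, a common convention."""
    pairs = 0
    concordant = 0.0
    for p_case in case_probs:
        for p_control in control_probs:
            pairs += 1
            if p_case > p_control:
                concordant += 1.0
            elif p_case == p_control:
                concordant += 0.5
    return concordant / pairs
```

A model that ranks every case above every control scores 1.0; one that ranks cases and controls interchangeably scores about 0.5, so the reported 0.795 indicates substantially better-than-chance discrimination.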