Campaigns in local buses and print publications will also be implemented.

To appropriately test and validate this model for AUD, we will use an established, FDA-approved medication. Naltrexone (NTX) is an opioid antagonist with high affinity for mu-opioid and kappa-opioid receptors. Preclinical studies have shown that opioid antagonists at the mu-opioid receptor reduce ethanol consumption. In humans, alcohol consumption increases the release of endogenous opioids in the mesolimbic dopamine reward system, which contributes to the subjective pleasurable effects of alcohol. NTX’s therapeutic benefit as an opioid antagonist is therefore proposed to stem from blocking these rewarding effects and thereby reducing alcohol consumption. Previous studies of NTX have shown that it reduces drinks per drinking day, alcohol craving, rates of relapse, and the subjective pleasurable effects of alcohol. The effects of NTX appear to be moderated by craving, such that higher levels of craving are associated with greater reductions in alcohol consumption. As an established medication for AUD, NTX is an ideal candidate with which to test the novel practice quit attempt model. To further validate this novel early efficacy model, we will also test a promising medication to treat AUD. Varenicline (VAR), which is FDA-approved for smoking cessation, is a partial agonist at α4β2 and a full agonist at α7 nicotinic acetylcholine receptors. In preclinical studies, activation of nicotinic acetylcholine receptors reduced ethanol consumption. In human laboratory studies, VAR reduced alcohol self-administration and craving compared to placebo. In smoking cessation trials, it also reduced alcohol consumption and craving.

Additionally, a multi-site randomized controlled trial of VAR in individuals with AUD found that it reduced drinks per drinking day, alcohol craving, and percentage of heavy drinking days. Together, these studies suggest that VAR is a promising pharmacotherapy for the treatment of AUD. Therefore, including varenicline, a widely studied and promising AUD pharmacotherapy, as a third arm in this study will enable us to further validate this novel alcohol quit paradigm. In designing the current study as a 3-arm trial, we benefit not only from establishing the efficacy of NTX and VAR against placebo, but also from a head-to-head comparison of NTX and VAR in a cost-effective manner. The 3-arm trial design was selected to overcome weaknesses present in non-inferiority trials, in which a novel drug is compared to an active control that is the current standard treatment. In active control trials, efficacy of the novel drug is determined by demonstrating non-inferiority to the active control, which rests on the critical assumption that the active control has an actual drug effect. Because there is no placebo control, however, this assumption cannot be proven; non-inferiority/equivalence trials therefore lack assay sensitivity, that is, the ability to distinguish between effective and ineffective treatments. The 3-arm design essentially combines the advantages of placebo- and active-controlled trials. The placebo arm will allow us to determine whether VAR is an effective or ineffective medication in the context of a good internal standard. Additionally, if neither NTX nor VAR is shown to be superior to placebo, we can conclude that the practice quit paradigm is not a valid method for screening medications for AUD. The purpose of the current study is to develop and validate this novel model to screen novel compounds and advance medications development.
Naltrexone was chosen to evaluate the novel practice quit attempt model because it is one of the few FDA-approved medications for AUD.

RCT studies with oral NTX have shown that it reduced drinks per drinking day, alcohol craving, rates of relapse, and the subjective pleasurable effects of alcohol. As such, NTX represents a well-known, well-studied medication that is ideal for testing a novel paradigm. Varenicline is a promising pharmacotherapy for the treatment of AUD. VAR has been shown to reduce alcohol self-administration, consumption, and craving. A recent RCT of VAR in individuals with AUD found that it reduced drinks per drinking day, alcohol craving, and percentage of heavy drinking days. These studies support VAR as a potential AUD pharmacotherapy. The addition of VAR as a third arm in the current study will allow us to further validate this novel practice quit attempt model. Additionally, the inclusion of a promising pharmacotherapy allows us to compare the efficacy of two medications head-to-head in a cost-effective manner. The 3-arm design of a novel medication, standard treatment, and placebo allows us to establish not only the efficacy of each medication against placebo, but also that of the novel medication against the standard treatment. This study design essentially combines the advantages of placebo and active control studies. Participants who are eligible after the physical exam will be randomized to one of three treatment conditions. Urn randomization will be stratified by gender, smoking status, and drinking status. The UCLA Research Pharmacy will manage the blind. The three treatment conditions will not differ in appearance or method of administration. All participants will undergo a week-long medication titration period prior to the onset of the practice quit attempt as follows: for the naltrexone condition, 12.5 mg will be taken for the first 3 days, followed by a 25 mg dosage on days 4–7.

The target dosage of 50 mg will be taken on days 8–14. For the varenicline condition, a dosage of 0.5 mg will be taken for the first 3 days, followed by an increase to 1 mg on days 4–7. The target dosage of 2 mg will be taken on days 8–14. Participants in each condition will be instructed to take the prescribed medication twice per day, as detailed in Table 1. On study day 1, participants will report to the laboratory to complete the alcohol CR paradigm and receive their first medication dose under direct observation of study staff. They will receive a 7-day supply of study medication in blister packs with AM and PM dosing clearly distinguished. After reaching the target medication dose at the end of 1 week, participants will come to the laboratory on study day 8 to receive their second 7-day supply of study medication and to begin the 7-day practice quit attempt. Participants will be asked to take the AM dose of study medication on study day 8 in the lab under direct observation of study staff. During the practice quit attempt, participants will complete daily online and phone visits to report on their drinking, mood, and craving for alcohol during the previous day in a daily diary assessment (DDA). For each virtual visit, participants will be contacted over the phone by research staff. Participants will first be asked about adverse events and about use of concomitant medications. Research staff will then administer the CIWA-Ar to measure alcohol withdrawal. Next, they will ask participants to report on their past-day drinking as well as cigarette and marijuana use. Finally, while participants are still on the phone, research staff will send a link to the DDA. All participants will meet briefly with a trained study counselor after the second cue exposure session on day 14. This brief intervention draws from motivational interviewing and Screening, Brief Intervention, and Referral to Treatment models.
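The two-week titration schedules above can be summarized as a small lookup table. This is a hypothetical illustration rather than study software, and it assumes the varenicline step-up mirrors the naltrexone pattern (0.5 mg on days 1–3, 1 mg on days 4–7, 2 mg on days 8–14), with each total daily dose split across AM and PM administrations:

```python
# Hypothetical sketch of the 14-day dose schedules described in the protocol text.
# Values are total daily doses in mg; each is split across AM and PM doses.
TITRATION_MG = {
    "naltrexone":  [(1, 3, 12.5), (4, 7, 25.0), (8, 14, 50.0)],
    "varenicline": [(1, 3, 0.5),  (4, 7, 1.0),  (8, 14, 2.0)],
}

def daily_dose_mg(condition: str, day: int) -> float:
    """Return the total daily dose (mg) for study days 1-14."""
    for first, last, dose in TITRATION_MG[condition]:
        if first <= day <= last:
            return dose
    raise ValueError(f"day {day} is outside the 14-day schedule")
```

Encoding the schedule as (first day, last day, dose) ranges keeps the table readable against the prose description and makes out-of-range days an explicit error.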
It uses the therapeutic stance of motivational interviewing, which is collaborative and client-centered. Consistent with the literature on brief interventions, the therapist will seek opportunities to engage in and amplify change talk.

Together, the combination of evidence-based practices and principles applied to AUD, coupled with the experience of change in the context of study participation, is expected to create an opportunity for health behavior change. Criteria for discontinuing or modifying allocated interventions are at the discretion of the study physicians or principal investigator. One week after beginning the medication, physicians will speak with the participant via phone call to check for any adverse events. If any are reported, the participant may either undergo a dose reduction or terminate the medication. Participants will also have the option to voluntarily discontinue all medication at any point. All severe adverse events will be reported to relevant reporting entities immediately. Adherence to interventions is facilitated by dividing the medication into separate blister packs for two distributions, by the daily virtual visits during the practice quit attempt period, and by a completion bonus. The separation of the study medication into two blister packs, each a 7-day supply, will motivate participants to come back to the laboratory for the second supply and reduce the chance of them misplacing the medication at the start of the study. During the practice quit attempt period, participants will be asked to send pictures of their blister packs to the study staff after completion of the daily phone visits. This will allow the study staff to count the medication for compliance. Additionally, a completion bonus will be given to participants on the last day of the study if they have completed at least 7 of the 8 in-person and virtual visits.
This is to motivate participants to complete all daily phone visits and online assessments. Participants will be recruited from the community through online and newspaper advertisements, as well as campaigns on multiple social media platforms. Targeted recruitment will also take place through a lab database of previous study participants who agreed to be contacted for future studies. Data are collected at the behavioral eligibility screening visit, the randomization visit, each of the daily phone visits during the practice quit attempt period, and the in-person study visits. All staff will be trained on relevant assessment procedures, and inter-rater reliability will be monitored continuously by the primary investigator. For the drinking outcomes, data will be collected via participant self-report through the Timeline Followback. The Alcohol Urge Questionnaire (AUQ) will be used in the CR paradigm to measure craving. The AUQ is an 8-item scale in which participants rate their present experience of alcohol craving on a 7-point Likert scale. The AUQ has demonstrated high test-retest reliability, high internal consistency, and construct validity in human laboratory studies. Self-report measures will be completed directly through an electronic data capture and electronic case report form system, Qualtrics. Timeline Followback data will be entered by research staff into Excel in order to generate daily drink averages based on standard drink calculations. All other data will be entered by research staff into SPSS. Data will be held on a secure server at the University of California, Los Angeles. Appropriately qualified personnel designated by the PI will monitor data entry and ensure that missing data are addressed as soon as possible after detection. All Timeline Followback data will be double-checked by research staff to ensure validity.
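The standard drink calculations mentioned above can be illustrated with a minimal sketch. This assumes the NIAAA definition of one US standard drink as 14 g of pure ethanol and an ethanol density of 0.789 g/mL; it is not the study's actual spreadsheet formula:

```python
# Minimal sketch: convert a reported beverage to US standard drinks.
# Assumes 1 standard drink = 14 g pure ethanol (NIAAA definition)
# and an ethanol density of 0.789 g/mL.
ETHANOL_G_PER_ML = 0.789
G_PER_STANDARD_DRINK = 14.0

def standard_drinks(volume_ml: float, abv: float) -> float:
    """abv is a fraction, e.g. 0.05 for a 5% ABV beer."""
    ethanol_g = volume_ml * abv * ETHANOL_G_PER_ML
    return ethanol_g / G_PER_STANDARD_DRINK

# A 355 mL (12 oz) beer at 5% ABV works out to roughly one standard drink.
```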
Excel will also be formulated to detect and flag any abnormal values. Participants will be given a 24-hour telephone number to reach the study physician to discuss side effects, and physician office hours will be available as needed. Adverse events, including signs of sickness, will be collected in an open-ended format and coded using a systematic assessment for treatment-emergent events format at each study visit. Vital signs will be monitored at the beginning of each in-person study visit. Alcohol withdrawal will be monitored at each visit through administration of the CIWA-Ar, and any significant withdrawal, as indicated by a score of 10 or more on the CIWA-Ar, will be reported to the study physician immediately. In the event that significant medical problems are encountered, the study blind will be broken and appropriate medical treatment will be provided. The PI will designate appropriately qualified personnel to periodically perform quality assurance checks at mutually convenient times during and after the study. These monitoring visits provide the opportunity to evaluate the progress of the study and to obtain information about potential problems. The monitor will assure that data are accurate and in agreement with any paper source documentation used, verify that subjects’ consent for study participation has been properly obtained and documented, confirm that research subjects entered into the study meet inclusion and exclusion criteria, verify that study procedures are being conducted according to the protocol guidelines, review AEs and SAEs, and assure that all essential documentation required by GCP guidelines is appropriately filed. At the end of the study, they will confirm that the site has the appropriate essential documents on file and advise on storage of study records. Alcohol use disorder is a chronic condition with both high relapse rates and low treatment rates.

Similar CB1R-dependent effects of HFS were obtained for pROCK.

Secondary antisera from ThermoFisher Scientific included Alexa Fluor 594 anti-rabbit IgG, Alexa Fluor 488 anti-mouse IgG, and Alexa Fluor 488 anti-guinea pig IgG, all used at 1:1000 dilutions. An epifluorescence microscope with a 63× PlanApo objective and ORCA-ER camera was used to capture image z-stacks, through a depth of 2 μm in 0.2 μm z-steps, from the DG outer molecular layer and CA1 stratum radiatum. For slice experiments, 1 z-stack was captured from each of 6 sections per slice. For behavioral/brain studies, 3 z-stacks were captured per section from 3 to 4 spaced sections within a given septo-temporal span of hippocampus. Immunolabeling for the synaptic vesicle protein SYN and for the excitatory synapse postsynaptic scaffold protein PSD-95 served as markers for the presynaptic and postsynaptic compartments, respectively. The incidence and density of immunolabeling for the phosphoprotein co-localized with these compartment markers were then evaluated using wide-field epifluorescence microscopy and fluorescence deconvolution tomography (FDT) as described elsewhere. Briefly, images within each z-stack were processed for iterative deconvolution and then used to construct a 3-dimensional montage of the sample field. Automated systems were then used to normalize background density, identify immunolabeled elements within the size and eccentricity constraints of synapses, and quantify those double-labeled. Elements were considered double-labeled if there was any overlap between the fields labeled with the 2 fluorophores as assessed in 3D. Male Long–Evans rats were handled for 6 sessions, 2 sessions per day, prior to odor discrimination training. Procedures for animal handling, training, and testing were adapted from Martens et al. as described in detail elsewhere. Sessions of ten 30 s trials on a given odor pair were repeated up to twice daily until rats reached a success rate of 80% correct, at which point they were considered to have acquired the odor discrimination task.

On the following day, trained rats were either given 10 training trials on a novel odor pair or transported to, but not placed in, the test apparatus, and killed immediately thereafter for tissue harvest. The CB1R is found on axon terminals throughout the brain, including the field of LPP termination in the outer molecular layer of the DG. We confirmed CB1R localization to glutamatergic terminals in the rat LPP field and then compared the effects of the cannabinoid receptor agonist WIN 55,212–2 on synaptic physiology in the S–C and LPP projections. In accord with prior work, WIN caused a rapid and pronounced depression of S–C fEPSPs in CA1 that was accompanied by an increase in paired-pulse facilitation and the expected severe impairment of LTP. Very different results were obtained in the LPP: WIN had no effect on baseline fEPSPs or on paired-pulse facilitation. Voltage-clamp recordings also detected no effect on EPSCs in the LPP. In contrast to these results for glutamatergic responses in the LPP, WIN produced the canonical depression of IPSCs elicited by single-pulse LPP stimulation. We next asked if, despite the lack of effect on baseline responses, WIN influences the machinery that produces the ECB-dependent potentiation of the LPP, using stimulation that is near threshold for induction. WIN more than doubled the magnitude of lppLTP under these conditions. These results suggest that activation of CB1Rs in the LPP preferentially engages signaling mechanisms leading to potentiated transmission, as opposed to the more commonly observed depression of release. CB1R signaling through ERK1/2 effects phosphorylation and degradation of the vesicular protein Munc18-1, leading to reductions in transmitter release.
In accord with this, using dual immunofluorescence and FDT, we found that treatment with WIN increased levels of phosphorylated Munc18-1 S241 co-localized with the presynaptic marker SYN in the S–C terminal field: WIN caused both a rightward shift in the pMunc18-1 immunolabeling intensity frequency distribution and increased numbers of terminals with dense pMunc18-1 immunoreactivity. In the same slices, WIN had no effect on presynaptic pMunc18-1 immunolabeling in the LPP terminal field.

The above results indicate that WIN-initiated CB1R signaling at LPP terminals is biased “away from” the ERK1/2-to-Munc18-1 cascade through which ECBs suppress neurotransmitter release and toward a route that promotes plasticity. They also raise the question of whether signaling to Munc18-1 and release suppression in CA1 is engaged by normally occurring patterns of physiological activity. Blocking the CB1R with the inverse agonist AM251 had no effect on S–C fEPSPs elicited by single pulses. Thus, we tested for an effect using short trains of low-frequency gamma stimulation. This pattern occurs routinely in hippocampal and entorhinal fields in behaving animals and is thought to be associated with processing of complex information. Within-slice comparisons were made between responses collected before and after 40 min perfusion of vehicle or 5 μM AM251. In CA1, S–C responses to low gamma stimulation showed the rapid, within-train frequency facilitation described in prior work. AM251 did not affect baseline responses but clearly enhanced S–C response facilitation during the gamma train. Effects of AM251 were greatest in later portions of the train, as anticipated for contributions of “on-demand” ECB production. Very different results were obtained for the LPP. Within-train facilitation was less pronounced in the LPP than in the S–C system, and it was not altered by AM251. These findings are consistent with the hypothesis that CB1R signaling leading to a depression of transmitter release is more readily engaged in the S–C projections than in the LPP. The above results were unexpected because prior studies showed that physostigmine causes a suppression of excitatory transmission in the LPP and other hippocampal pathways that is blocked by AM251. Therefore, we tested if physostigmine increases hippocampal 2-AG levels, as anticipated, and then used the FDT technique employed above to determine if it also triggers Munc18-1 S241 phosphorylation in the LPP.
Infusion of physostigmine elicited a marked increase in slice levels of 2-AG but not other lipids; it also produced a reliable increase in SYN+ terminals with dense concentrations of pMunc18-1 in both the LPP and S–C fields.

As predicted, physostigmine effects on both projections were dramatically reduced in slices prepared from Munc18-1+/− mice relative to those from wild types, although the mutation had no effect on the input/output curve or paired-pulse facilitation in the LPP. A recent study showed that the locally synthesized neurosteroid pregnenolone reduces both CB1R signaling through ERK1/2 and the neurotransmitter release suppression normally mediated by the ECB receptor. We tested for this effect in hippocampus, beginning with the pronounced fEPSP depression produced by CB1R activation in the S–C system: treatment with 10 μM pregnenolone prevented the synaptic response depression elicited by WIN. Pregnenolone was similarly effective in the LPP, where it blocked the actions of physostigmine on presynaptic pMunc18-1 immunoreactivity and synaptic transmission. These findings point to the conclusion that the pregnenolone/CB1R/Munc18-1 system, as found in the S–C projections, is present in the LPP although it is not engaged by either the CB1R agonist WIN or repetitive afferent activity. There remains the possibility, however, that it is activated by the short high-frequency gamma trains used to induce lppLTP and participates in subsequent stabilization of the potentiated state of LPP terminals. We conducted multiple tests of this argument. Pregnenolone, at the concentration that eliminates physostigmine effects on transmission and pMunc18-1 immunoreactivity in CA1, had no detectable effect on lppLTP induced by near-threshold stimulation. Conventional stimulation trains failed to influence Munc18-1 phosphorylation in LPP terminals, and induction of lppLTP was fully intact in Munc18-1+/− mice. Considered together with evidence that lppLTP is both 2-AG and CB1R-dependent, the present results suggest that potentiation in the LPP involves a second CB1R signaling pathway that has not been evaluated in work using physiological activation of hippocampal synapses.
Finally, the results obtained with pregnenolone afforded a means for testing whether increases in 2-AG content produced by physostigmine promote lppLTP in the absence of the response suppression associated with Munc18-1 phosphorylation. We tested this intriguing point and found that physostigmine more than doubled the magnitude of lppLTP induced by threshold-level stimulation. This result is consistent with our earlier observation that reducing 2-AG breakdown, and thereby increasing hippocampal slice 2-AG levels, with the monoacylglycerol lipase inhibitor JZL184 similarly augments lppLTP. Prior results showed that lppLTP is blocked by presynaptic actions of latrunculin A, a toxin that selectively blocks the assembly of actin filaments. This raises the possibility that the CB1R promotes lppLTP via actions on actin regulatory signaling, an idea in alignment with evidence that CB1R initiates actin reorganization in dissociated cells and rapidly activates both FAK and the small GTPase RhoA in N18TG2 neuroblastoma cells. FAK is a non-receptor tyrosine kinase that mediates integrin effects on the actin cytoskeleton throughout the body.

Other experiments found that CB1R acting through FAK initiates actin remodeling in pancreatic cells, resulting in enhanced insulin release. Accordingly, we used FDT to test if WIN activates FAK, via Y397 phosphorylation, in LPP terminals. WIN produced a pronounced rightward skew in the immunofluorescence intensity-frequency distribution for pFAK Y397 co-localized with SYN. RhoA and its downstream effector, ROCK2, represent a primary route whereby FAK signals to actin. In the LPP terminal field, WIN increased levels of pROCK S1366 co-localized with SYN but not with the postsynaptic density marker PSD-95. In the same hippocampal slices, WIN increased presynaptic pROCK levels in CA1, but this effect was substantially smaller than that in the LPP. We quantified the regional difference by converting the data into cumulative probability curves and then subtracting the WIN treatment values at each density bin for each slice from the mean curve for the vehicle group. The rightward shift in pROCK immunolabeling produced by WIN was over 2-fold greater in the LPP than in CA1. In all, the CB1R agonist WIN had a much greater effect on Munc18-1 phosphorylation in CA1 than in the LPP and a much greater effect on markers of actin signaling in the LPP than in CA1. We conclude from this that the CB1R response to WIN is biased toward different signaling streams in the 2 projections. We further tested if CB1R signaling to ROCK is more prominent in the LPP than in CA1 using physostigmine to elevate 2-AG levels and signaling. Physostigmine produced a reliable increase in presynaptic pROCK in the LPP that was blocked by CB1R antagonism, but had no reliable effect on presynaptic pROCK levels in CA1 of the same hippocampal slices. A similar pattern of results was obtained in an analysis of the most densely pROCK-immunoreactive terminal boutons.
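The cumulative-probability quantification described above can be sketched with NumPy. This is an illustrative reconstruction under assumed array shapes, not the authors' analysis code: each slice's bin-wise counts are converted to a cumulative probability curve, and each treated slice's curve is subtracted from the mean vehicle curve, so a rightward shift toward denser labeling yields positive differences.

```python
import numpy as np

def cumulative_curves(counts: np.ndarray) -> np.ndarray:
    """counts: (n_slices, n_bins) labeled-element counts per density bin.
    Returns per-slice cumulative probability curves, each ending at 1.0."""
    c = np.cumsum(counts, axis=1).astype(float)
    return c / c[:, -1:]

def shift_from_vehicle(vehicle: np.ndarray, treated: np.ndarray) -> np.ndarray:
    """Subtract each treated slice's cumulative curve from the mean vehicle
    curve; positive values indicate a rightward shift in labeling density."""
    vehicle_mean = cumulative_curves(vehicle).mean(axis=0)   # (n_bins,)
    return vehicle_mean - cumulative_curves(treated)         # (n_slices, n_bins)
```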
Collectively, these results describe a CB1R-FAK-ROCK route by which 2-AG generated and released during high-frequency stimulation could facilitate presynaptic cytoskeletal changes required for production of stable lppLTP. In accord with this proposal, high-frequency bursts of LPP stimulation caused a rightward shift in the density frequency distribution of presynaptic pFAK in slices harvested 2 min after stimulation. Together, these results describe a second CB1R signaling pathway in LPP terminals that, unlike the ERK1/2-Munc18-1 route, is directly involved in the production of lppLTP. An important question raised by the above results is why activation of FAK and ROCK by pharmacological CB1R stimulation augmented lppLTP but did not by itself induce potentiation. One possibility is that electrophysiological stimulation of the LPP engages elements that are not downstream from CB1R activation but nonetheless are required for shifting LPP terminals into the enhanced release state. We tested if integrin-class adhesion proteins, which co-operate with CB1R in actin regulatory signaling in cultured cells, fill this critical role. Integrins are dimeric transmembrane adhesion receptors for extracellular matrix and cell surface proteins that are expressed throughout the brain by neurons and glia. In hippocampus, the majority of integrins contain the β1 subunit, and β1 integrins have been localized to both pre- and postsynaptic compartments. We previously demonstrated that, in hippocampal slices, infusion of β1-neutralizing antisera disrupts activity-induced actin polymerization and LTP in field CA1. Here, we tested if similar treatments influence potentiation in the LPP. Treatment with anti-β1 had no effect on baseline LPP responses but caused a near-complete suppression of lppLTP. In contrast, neutralizing anti-αV integrin left potentiation intact.

Methods for quantifying heavy drinking are also inconsistent across studies.

A murine model also suggested inhaled VEA may cause EVALI-like lung injury, but the underlying mechanism remains to be determined. The age range of cases and deaths is broad, and the e-cigarette use patterns are diverse, although 75% of EVALI patients were young Caucasian males and an overwhelming majority admitted to THC vaping. While VEA from THC vaping has been most commonly and consistently linked to EVALI cases, the spectrum of usage patterns and clinical manifestations suggests a possible role of multiple toxicants from unregulated products. Chemical analysis of counterfeit cartridges obtained from EVALI patients demonstrated the presence of several toxicants, including volatile organic compounds, semi-volatile hydrocarbons, silicon-conjugated compounds, terpenes, pesticides, and metals, which were not found in medical-grade THC cartridges. The typical symptoms of EVALI include dyspnea, chest pain, cough, fever, and fatigue. Additionally, many EVALI patients also presented with nausea, vomiting, and other gastrointestinal symptoms. Chest radiography of most cases was abnormal; images typically showed ground-glass opacities in both lungs. Four radiographic patterns were identified in EVALI patients: acute eosinophilic pneumonia, diffuse alveolar damage, organizing pneumonia, and lipoid pneumonia. Histological analysis of lung biopsies showed patterns of acute fibrinous pneumonitis, diffuse alveolar damage, or organizing pneumonia. EVALI patients may have slightly different phenotypes and have been diagnosed with acute respiratory distress syndrome, lipoid pneumonia, and pneumonitis. Patients have been treated with antibiotics and glucocorticoids, and steroidal treatment has been shown to improve symptoms and lung function.

Although this is a new field in which initial cross-sectional epidemiological studies have demonstrated several limitations, adolescents who vaped have been found to be more likely to try cigarettes than non-smoking, non-vaping youth. For example, a cross-sectional analysis of PATH study data indicated an association between e-cigarette use and self-reported wheeze, and an analysis of data from 402,822 never-smoking participants in the Behavioral Risk Factor Surveillance System indicated an association between self-reported asthma and e-cigarette use intensity. It is important to recognize that the above studies were observational in nature, and the chronology of e-cigarette use and disease development is often not clear, so more evidence is needed to further clarify the cause-effect relationship between e-cigarette use, cardiopulmonary disease, and cerebrovascular events. Regardless, these publications serve as an impetus for future research into the causative and mechanistic relationships between e-cigarette use and cardiopulmonary disease risk. E-cigarettes have been proposed as an effective strategy to quit conventional cigarette smoking, but they have not been approved for this purpose in the USA or elsewhere. To date, the clinical trials that have been carried out do not address the question of effectiveness in the “real world”, that is, whether the availability of e-cigarettes in the marketplace decreases smoking at the population level. Instead, clinical trials have compared the delivery of nicotine by an e-cigarette to other modalities of nicotine delivery. The most recent review on this topic concluded: “The evidence is inadequate to infer that e-cigarettes, in general, increase smoking cessation.
However, the evidence is suggestive but not sufficient to infer that the use of e-cigarettes containing nicotine is associated with increased smoking cessation compared to the use of e-cigarettes not containing nicotine, and the evidence is suggestive but not sufficient to infer that more frequent use of e-cigarettes is associated with increased smoking cessation compared with less frequent use of e-cigarettes”.

To predict the relative dangers of second- and third-hand e-cigarette exposures, an understanding of the degree to which e-cigarette use might lead to an increase in ambient nicotine and particulate matter, and the degree to which nicotine and other e-cigarette constituents deposit on surfaces, will be critical. Since there are no sidestream aerosols from e-cigarettes, unlike combustible cigarettes, secondhand e-cigarette exposure comes almost exclusively from user exhalation. Thus, it remains unclear and somewhat controversial what level of additional particulate matter, vapor phase, and nicotine emissions are released into the environment from e-cigarettes. Some of this uncertainty may relate to variability in device design and liquid composition. However, several studies have demonstrated that e-cigarette use by individuals can contribute to worse indoor air quality, including release of toxicants and particulate matter. For example, indoor e-cigarette use can generate fine particulate matter in high concentrations under natural use conditions in indoor environments, as well as an increase in particle numbers and in concentrations of 1,2-propanediol, glycerin, and nicotine. Increased levels of 1,2-propanediol, diacetin, and nicotine were also measured by gas chromatography from one exhaled e-cigarette puff. E-cigarettes containing nicotine-free solutions may have higher particulate levels than those containing nicotine. However, these particles dissipate much more quickly than cigarette smoke particles, and further studies will be needed to fully understand the risk of second- and third-hand exposures. Measurable nicotine levels have been detected in samples from hard surfaces and cotton surfaces exposed to e-cigarette emissions. Recent developments in detection strategies using autofluorescence have further elucidated e-liquid deposition topography.
One study found that for each 70-mL aerosol puff, 0.019% of the aerosolized e-liquid was deposited on hard surfaces.
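To put that deposition fraction in perspective, the following back-of-envelope sketch shows what it implies for cumulative surface deposition. Only the 0.019% figure comes from the study above; the per-puff e-liquid mass and daily puff count are assumed round numbers chosen purely for illustration.

```python
# Back-of-envelope sketch of what the 0.019%-per-puff deposition figure
# implies. Only the deposition fraction comes from the study cited above;
# the per-puff e-liquid mass and daily puff count are assumed values.

DEPOSITION_FRACTION = 0.019 / 100     # 0.019% of aerosolized e-liquid (study figure)
ELIQUID_PER_PUFF_MG = 5.0             # assumed e-liquid aerosolized per 70-mL puff, in mg
PUFFS_PER_DAY = 200                   # assumed heavy-use daily puff count

deposited_per_puff_mg = ELIQUID_PER_PUFF_MG * DEPOSITION_FRACTION
deposited_per_day_mg = deposited_per_puff_mg * PUFFS_PER_DAY

print(f"Deposited per puff: {deposited_per_puff_mg * 1000:.2f} ug")   # ~0.95 ug
print(f"Deposited per day:  {deposited_per_day_mg:.2f} mg")           # ~0.19 mg
```

Under these assumptions the implied daily deposition is a fraction of a milligram, which is why gradual accumulation over weeks or months, rather than any single puff, is the open question for third-hand exposure.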

These estimates may also overstate real-life exposure because aerosol puffs were administered directly to the observed surfaces rather than being inhaled and exhaled prior to surface deposition. However, in an attempt to provide a better model of surface deposition, one study measured deposition resulting from inhaled and exhaled e-cigarette aerosol. This study found no significant increase in surface nicotine levels following 80 puffs per participant. The authors noted these results may not indicate a lack of risk for third-hand exposure, since they did not account for gradual accumulation on surfaces over time. Together, these results indicate that potentially hazardous e-cigarette emissions, including PG/VG, nicotine, and heavy metals, may be deposited on household surfaces as a result of typical vaping behavior. Furthermore, they suggest a potential risk for third-hand exposure, which could serve as a public health concern. However, more studies are needed to better understand the risk of vaping for second- and third-hand exposures.

In assessing the public health impact of e-cigarette use, there is an implicit comparison to alternative or counterfactual scenarios; in the case of e-cigarettes, the comparison is to the hypothetical situation of a world lacking e-cigarettes. There are established methods for quantitative risk assessment that are widely used for public health decision-making, such as the four-element paradigm set out in the 1983 National Research Council report generally referred to as “The Red Book”. The elements include: hazard identification, that is, is there a risk?; exposure assessment, that is, what is the pattern of exposure?; dose-response assessment, that is, how does risk vary with dose?; and risk characterization, that is, what is the overall risk to the population? These four elements have general applicability to characterizing the impact of e-cigarettes in terms of the prevalence of nicotine addiction and its profile across groups in the population and the associated additional burden of disease.
Population impact is quantitatively assessed using conceptual models that capture an understanding of the relationships between independent and modifying factors and their outcomes. Models are implemented using statistical approaches and evidence-based estimates of the values of parameters at key steps in the model, for example, the rate of initiation of use of tobacco products with e-cigarettes present. This approach was used by the FDA’s Tobacco Products Scientific Advisory Committee to estimate the impact of menthol-containing tobacco products.
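The counterfactual modeling approach described above can be sketched as a toy annual-transition model comparing smoking prevalence with and without e-cigarettes on the market. Every rate below is a hypothetical placeholder, not an evidence-based estimate; a real model would be far richer (age cohorts, dual use, relapse, mortality).

```python
# Toy sketch of counterfactual population-impact modeling. All parameter
# values are hypothetical placeholders, NOT evidence-based estimates.

def smoking_prevalence(init_rate: float, quit_rate: float,
                       start_prev: float = 0.15, years: int = 10) -> float:
    """Iterate a simple annual transition model: non-smokers initiate at
    init_rate, current smokers quit at quit_rate."""
    prev = start_prev
    for _ in range(years):
        prev = prev * (1 - quit_rate) + (1 - prev) * init_rate
    return prev

# Scenario A: hypothetical world without e-cigarettes
without_ec = smoking_prevalence(init_rate=0.010, quit_rate=0.040)
# Scenario B: hypothetical world with e-cigarettes (assumed to raise both
# initiation and cessation rates)
with_ec = smoking_prevalence(init_rate=0.012, quit_rate=0.055)

print(f"10-year prevalence without e-cigarettes: {without_ec:.3f}")
print(f"10-year prevalence with e-cigarettes:    {with_ec:.3f}")
print(f"Net difference (with - without):         {with_ec - without_ec:+.3f}")
```

The sign of the net difference flips depending on the assumed initiation and cessation rates, which is precisely why the evidence gaps around those parameters dominate estimates of population impact.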

The overall approach was to formulate a conceptual framework, conduct systematic reviews around the framework, and implement an evidence-based statistical model for making estimates related to public health impact. The systematic reviews highlighted gaps in scientific evidence, pointing to the most critical research needs for strengthening the evidence foundation for potential regulation of menthol. For e-cigarettes, the research priorities identified in this article relate to key evidence gaps that need to be addressed to achieve a greater and more certain understanding of the population impact of e-cigarettes.

People with HIV (PWH) are twice as likely to engage in heavy alcohol use and two to three times more likely to meet criteria for an alcohol use disorder (AUD) in their lifetime than the general population. Heavy alcohol use not only promotes the transmission of HIV through sexual risk-taking behavior and non-adherence to antiretroviral therapy (ART), but also directly exacerbates HIV disease burden by compromising the efficacy of ART and increasing systemic inflammation. In addition to increased risk for physical illness, there is substantial evidence indicating that comorbid HIV and heavy alcohol use is more detrimental to brain structure and results in higher rates of neurocognitive impairment than either condition alone. The impact of comorbid HIV and heavy alcohol use on the central nervous system is especially important to consider in the context of aging. The population of older adults with HIV is rapidly growing; approximately 48% of PWH in the U.S. are aged 50 and older, and the prevalence of PWH over the age of 65 increased by 56% from 2012 to 2016. Trajectories of neurocognitive and brain aging appear to be steeper in PWH, possibly due to chronic inflammation and immune dysfunction, long-term use of ART, frailty, and cardiometabolic comorbidities. In addition to HIV, rates of alcohol use and misuse are also rising in older adults.
The neurocognitive and physical consequences of heavy alcohol use are more severe among older than younger adults, and several studies also report accelerated neurocognitive and brain aging in adults with AUD. While the mechanisms underlying these effects are poorly understood, older adults may be more vulnerable to alcohol-related neurotoxicity due to a reduced capacity to metabolize alcohol, lower total-fluid volume, and diminished physiologic reserve to withstand biological stressors. Altogether, these studies support a hypothesis that PWH may be particularly susceptible to the combined deleterious effects of aging and heavy alcohol use. For example, in a recent longitudinal report, Pfefferbaum et al. reported that PWH with comorbid alcohol dependence exhibited faster declines in brain volumes in the mid-posterior cingulate and pallidum above and beyond either condition alone. There is considerable heterogeneity, however, in profiles of neurocognitive functioning across individuals with HIV and AUD. Patterns of alcohol consumption rarely remain static throughout the course of an AUD, but rather are often characterized by discrete periods of heavy use. This episodic pattern of heavy consumption may similarly impact the stability of HIV disease, which may in part explain why some PWH with AUD exhibit substantial neurocognitive deficits while others remain neurocognitively intact. Self-report estimates of alcohol use, however, often fail to predict neurocognitive performance. For example, some studies classify individuals based on DSM criteria for AUD, whereas others define heavy drinking based on “high-risk” patterns of weekly consumption. These methods characterize the chronicity of drinking and psychosocial aspects of alcohol misuse, but they are suboptimal for quantifying discrete periods of heavy exposure and high-level intoxication that may confer higher risk for neurocognitive dysfunction.
Binge drinking, defined by the National Institute on Alcohol Abuse and Alcoholism as 4 or more drinks for women and 5 or more drinks for men within approximately 2 hours, may more precisely capture discrete episodes of heavy exposure. The relationship between binge drinking and neurocognitive functioning remains poorly understood across the lifespan and particularly in the context of HIV. Thus, the current study examined two primary aims to better understand the impacts of HIV, binge drinking, and age on neurocognitive functioning. The first study aim examined the independent and interactive effects of HIV and binge drinking on global and domain-specific neurocognitive functioning. We hypothesized that: 1) neurocognitive performance would be poorer with each additional risk factor such that the HIV-/Binge- group would exhibit the best neurocognition, followed by the single-risk groups, and finally the dual-risk group; and 2) these group differences would be explained by a detrimental synergistic effect of HIV and binge drinking on neurocognition. The second study aim examined whether the strength of the association between age and neurocognition differed by HIV/Binge group. We hypothesized that: 1) older age would relate to poorer neurocognition; and 2) that this negative relationship would be strongest in the HIV+/Binge+ group. A modified timeline follow-back interview was used to assess drinking behavior in the last 30 days.
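The NIAAA binge threshold above amounts to a simple sex-specific rule that can be applied episode by episode to timeline follow-back data. The sketch below is purely illustrative; the function name, interface, and sample record are hypothetical and not taken from the study's actual scoring procedure.

```python
# Illustrative sketch of the NIAAA binge-drinking threshold described
# above (4+ drinks for women, 5+ for men within ~2 hours). Names and
# sample data are hypothetical, not the study's actual scoring code.

def is_binge_episode(drinks: int, sex: str) -> bool:
    """Classify a single ~2-hour drinking episode against the NIAAA threshold."""
    threshold = 4 if sex == "female" else 5
    return drinks >= threshold

# Example: count binge days in a toy 30-day timeline follow-back record
record = [("female", 3), ("female", 4), ("male", 4), ("male", 6)]
binge_days = sum(is_binge_episode(drinks, sex) for sex, drinks in record)
print(f"Binge episodes in record: {binge_days}")  # 2 of the 4 days qualify
```

The point of episode-level classification, as the text argues, is that it captures discrete periods of heavy exposure that weekly-consumption averages can smooth away.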

The ability to make such comparisons is further limited by the wide time frame in which CBIs were developed.

For the CBIs listed that did not mention use of a broad theory, but mentioned using a specific construct or technique, all provided a description of how it was applied in the intervention; however, the amount and quality of information provided about the application of the constructs/techniques varied considerably across this group of CBIs. Of the 21 CBIs that mentioned use/application of theory, all but two included at least one measure of a construct associated with the theory. If a CBI mentioned use of a theory, it was more likely to include a measure of specific constructs associated with the theory compared to CBIs that did not mention use of a broad theory. Specifically, of the CBIs that did not explicitly mention use of a theory but did include a specific construct, only five included corresponding measures of the theoretical construct. Tables 1 and 2 list the classification of each CBI and provide the measures associated with the theory, construct, or intervention technique.

This study identified 100 unique articles covering 42 unique computer-based interventions aimed at preventing or reducing alcohol use among adolescents and young adults. Thus, this review includes a total of 21 new CBIs and 43 new articles. This review is the first to provide an in-depth examination of how CBIs integrate theories of behavior change to address alcohol use among adolescents and young adults. While theories of behavior change are a critical component of effective interventions that have been developed and evaluated over the past several decades, attention to the application of theory in CBIs has been limited. We utilized a simple classification system to examine whether theories were mentioned, applied, or measured in any of the publications that corresponded with the CBIs.

Only half of the CBIs reviewed mentioned use of an overarching, established theory of behavior change. The other half mentioned use of a single construct and/or intervention technique but did not state use of a broader theory. CBIs that were based on a broad theoretical framework were more likely to include measures of constructs associated with the theory than those that used a discrete construct or intervention technique. However, greater attention to what theory was used, how the theory informed the intervention, and whether measures of the theoretical constructs were included is critical to assess and understand the causal pathways between intervention components/mechanisms and behavioral outcomes. When mentioning the use of a theory or construct, almost all provided at least some description of how it informed the CBI; however, the amount and quality of information about how the theory was applied to the intervention varied considerably. Greater attention to what is inside the “black box” is critical in order to improve our understanding of not only what works, but why it works. While a few articles provided detailed information about the application of theory, the majority included limited information to examine the pathway between intervention approach and outcomes. Some researchers/intervention developers may not fully appreciate how theory can be used to inform intervention approaches. There is an emphasis on the outcomes/effectiveness of interventions, and less attention is placed on their development. In addition, to our knowledge, there are no publication guidelines/standards for describing the use of theoretical frameworks in intervention studies, and the inclusion of this information is often up to individual authors and reviewers. Given the importance of theory in guiding interventions, greater emphasis on the selection and application of theory is needed in publications.
Most of the CBIs reviewed provided some form of personalized normative feedback and applied it relatively consistently.

Personalized normative feedback is designed to correct misperceptions about the frequency and acceptability of alcohol use among peers. It typically involves an assessment of a youth’s perceptions of peer norms around alcohol attitudes and use, followed by tailored information about actual norms. In addition, some interventions have recently incorporated personal feedback to address individuals’ motivations to change by assessing and providing feedback on drinking motives or in decisional balance exercises. The widespread use of personalized normative feedback in CBIs may be because it has been widely documented as an effective strategy and because it lends itself readily to an interactive, personalized computer-based intervention. Motivational interviewing, an effective face-to-face counseling technique, was also used in several of the CBIs. This technique was applied to CBIs in a number of different ways, such as exercises designed to clarify goals and values, making the description of how it was applied even more essential to examining differential effectiveness across various CBIs. This study builds on the growing evidence supporting the use of CBIs as a promising intervention approach. We found most of the CBIs improved knowledge and attitudes and reduced alcohol use among adolescents and young adults. In addition, this study suggests CBIs that use overarching theories more frequently reported significant behavioral outcomes than those that use just one specific construct or intervention technique. This finding is consistent with prior studies examining the use of theory in face-to-face interventions targeting alcohol use in adolescents. However, it is important to acknowledge the wide variation across the CBIs not only in their use of theory, but in scope, targeted populations, duration/dosage, and measured outcomes.
It is encouraging that even brief/targeted CBIs demonstrated some effectiveness and thus can play an important role in improving knowledge and attitudes, which are important contributors to changes in behavior. There are limitations to this study.

As discussed previously, many articles did not explicitly describe how theory was applied in the CBI. It is therefore possible that the theoretical pathways for the intervention were further developed than we have noted, and possibly included in other documents, such as logic models and/or funding applications; however, such information is not readily accessible and was outside the scope of this review. Thus, lack of mention of the name of a theory or construct or its application does not mean that the intervention did not integrate the theory, only that the article did not provide information about its application. Due to variations in the described use of theory, along with the wide range of CBIs, it was not possible to draw comparisons about the relative effectiveness of CBIs according to the theory used. This review spanned articles published between 1995 and 2014. During this period, CBIs to address health issues were rapidly evolving due to major advancements in technological innovation. These advancements were coupled with greater interest and investment from federal agencies and philanthropic foundations.

Electronic cigarettes are battery-powered devices that aerosolize e-liquids, which typically contain propylene glycol and vegetable glycerin (PG/VG), nicotine, flavors, and stabilizers/humectants such as triacetin. Although it is well known that combustible cigarettes cause multiple cardiovascular and pulmonary diseases, the effects of e-cigarettes on health have only begun to be studied.
Alarmingly, there has been a rapid increase in e-cigarette use among adolescents and young adults, who could potentially be exposed to e-cigarette aerosols for decades if their use is lifelong. Indeed, the US Surgeon General concluded that the use of e-cigarettes among youth and young adults has become a major public health concern. A recent European Respiratory Society task force concluded that since the long-term effects of e-cigarettes are unknown, it is not clear whether they are in fact safer than tobacco, and based on current knowledge, their negative health effects cannot be ruled out. In the USA, the Family Smoking Prevention and Tobacco Control Act of 2009 gave the Food and Drug Administration (FDA) the power to regulate tobacco products. While e-cigarettes were not covered in the original act, the FDA has clarified its position with its “deeming” rule and, since 2016, has begun to exert its regulatory authority over e-cigarettes and other noncombustible products. In 2020, in response to growing popularity among youth, the FDA issued a policy to limit the sales of some flavored e-cigarette products. As the FDA adheres to a public health impact standard, evidence on the adverse health effects of e-cigarettes will be a consideration in future regulation of the sales of e-cigarettes and e-cigarette liquids. Such regulation will likely be contingent upon their observed health effects, as well as effects on nicotine addiction.

In addition, the recent emergence of acute and severe e-cigarette, or vaping, product use-associated lung injury (EVALI) across the US underscores the need, complexity, urgency, and importance of basic and clinical research on the health effects of e-cigarettes, particularly focused on the cardiopulmonary system. With regard to public health impact and cardiopulmonary health, the availability and use of e-cigarettes might benefit those who switch from combustible cigarettes to e-cigarettes, that is, harm reduction. However, the potential benefits from the use of these products are uncertain: they deliver a poorly defined, highly variable, and potentially toxic aerosol that may have adverse effects, which may depend on the user's age, reproductive status, health, and patterns of use. In addition, there are major public health concerns surrounding the availability of e-cigarettes for children, adolescents, and young adults. Nicotine addiction is of particular concern, and the use of e-cigarettes is positively associated with increased risk of use of combustible cigarettes. These issues complicate the question of how e-cigarettes might impact cardiopulmonary health and are explored further in later sections of this review. Recognizing the potential health impact of e-cigarettes when they first emerged, particularly on the heart and lung, the Division of Lung Diseases and the Division of Cardiovascular Sciences at the NIH’s National Heart, Lung, and Blood Institute (NHLBI) conducted a workshop in the summer of 2015 entitled “Cardiovascular and Pulmonary Disease and the Emergence of E-cigarettes” to identify key areas of needed research as well as opportunities and challenges of such research.
The workshop was organized around a framework recognizing that the public health impact of e-cigarettes would be influenced by a complex network of factors in addition to direct health and biological effects, including device characteristics, chemical constituents, aerosol characteristics, and use patterns. In response to the significant gaps and research areas highlighted at the workshop, NHLBI subsequently directed research funding to projects aimed at understanding the cardiopulmonary health effects of e-cigarettes and inhaled nicotine. Funded investigators met in 2018, 2019, and 2020 to discuss their results, findings in the larger field, and remaining scientific questions. The focus of this review is the result of discussions, at these NHLBI-supported workshops and investigator meetings, recognizing a need for further understanding of the cardiopulmonary health effects of e-cigarettes. This review takes a holistic view of e-cigarette use and cardiopulmonary health, with a major focus on the USA. A summary of the current understanding of the multitude of factors that ultimately affect health, including policies, behaviors, emissions, and biological effects associated with e-cigarette use, is provided herein, with the ultimate goal of identifying key research gaps that remain in the field. E-cigarettes inhabit a rapidly changing marketplace and an evolving pattern of use that typically precedes scientific exploration. Following a PubMed search for relevant literature using “e-cigarette” and/or “cardiopulmonary” and/or “pulmonary” and/or “cardiovascular disease” as search terms, we break down the pertinent fields to uncover critical research questions that will better enable an understanding of how e-cigarettes affect cardiopulmonary health at the individual and community level.

E-cigarettes are highly variable in design and comprise a battery, a reservoir for holding the e-liquid, a heating element, an atomizer, and a mouthpiece.
The first generation of e-cigarettes were similar in size and shape to combustible cigarettes. First-generation devices typically used a prefilled nicotine-solution cartridge that directly contacted the heating element. Many second-generation devices were pen-shaped; some included refillable cartridges, while others were closed systems that held only prefilled sealed cartridges. Third-generation devices were called “mods” since they were easily modified. They were more diverse, and featured customizable atomizers, resistance coils, and larger batteries capable of heating made-to-order e-liquids to higher temperatures to create more aerosol and potentially deliver more nicotine. Fourth-generation devices were smaller, and some resembled familiar items such as USB drives. Their sleek design and the ease with which they can be concealed from parents and teachers have contributed to their growing popularity among school-age children. These e-cigarettes operate at lower wattages than third-generation devices.

Quitting cigarettes improves respiratory symptoms and limits lung function deterioration.

It was important to “beef up cessation services in a comprehensive way so that it is relevant to the communities that are most affected”. The flip side of including menthol in policies banning sales of flavored tobacco products to address the health of African Americans and other targeted populations, such as the LGBT community, was fear within those communities about criminalizing smoking and smokers. Numerous participants emphasized that flavor policies were “not about the behavior but about sales of the product. We’re not about policing people’s behavior. We don’t want to see any more negative police/community interaction”. Most participants believed flavor bans were unlikely to result in over-policing: “I…don’t think we’re going to see…this law be misused to justify inappropriately criminalizing residents”. Another participant remarked that the argument was raised because “the tobacco industry has…paid some African American leaders to come out and say [bans on sales of menthol products] were criminalizing the Black community”. However, she also pointed out that she wasn’t sure what would happen “if an officer sees somebody selling some [contraband menthol cigarettes] Newports out of their trunk…I would like us to get a handle on [that] before we have an Eric Garner case in California”. Many participants showed awareness that their policies were precedents for other jurisdictions to follow. An advocate reported that a jurisdiction implementing a novel ordinance helps “a lot of people [in other cities] to understand that this is the next big step that can be taken” in their own community.

Even if a policy change did not seem to have a short-term impact, one participant said, “we really have to take the long view, that we’re creating a flavor-free tobacco region and state…so that youth [eventually] wouldn’t be able to go across the street into the next town and buy these products”. Participants also exhibited awareness that policy innovation in California generally had cities and counties taking the lead, not state government. Asked whether the state’s new endgame focus had influenced his work, one participant replied, “I think our local work has shifted the conversation of the state, to be honest”. Another noted that his organization “always had that vision…even before the state wanted that endgame”. This local, then state, adoption of policy change was normal, as another participant noted: “the idea [is] they grow from the local jurisdictions to make statewide implementation more likely”. That influence could spread not only through the state, but also to other states and from there, “ripple out into the rest of the world”. A couple of participants sounded a warning about this process. One commented that, once policy making started to move forward at the state level, “We need to be very vigilant of preemption”. Another was concerned that state-level action would skip over the community work necessary to make policies acceptable and successful, particularly among communities of color: “You ban flavored tobacco and menthol, and…where’s the community engagement?…There can’t be one without the other or there’s going to be imbalance” between policymakers and those most affected by regulations. Several participants also noted a greater readiness than they had seen in even the recent past for new, innovative policies. One noted this conceptual transition, saying, “To think about the endgame at first was kind of jarring…[but after Beverly Hills] you start to think, wow, maybe this is possible!”.
Another said of his local elected officials, “They wanted a bolder move…They wanted things like, ‘what is the way to end this?…How do we stop this?’” This was a big change, he noted. “There wasn’t a conversation even happening…And that was just the last couple of years”. An advocate from another area reported that, “I’ve heard elected officials say…‘We’re saying we won’t allow pharmacies to sell tobacco anymore. Well, can’t we just say that nobody sells tobacco anymore?’”. One participant saw her role as getting people to believe an “endgame” was possible, saying: “Let’s believe [in zero smoking prevalence], and then we can work towards how we’re going to get there”. Creating an endgame vision and overcoming skepticism seemed increasingly possible in California, as one advocate noted: “the entire United States is learning a lot from California, and I think putting those big goals in front of public health advocates is really making a huge difference, and believing it will happen is making a big difference.

The policies we now consider, and we would have considered impossible, even just two years ago, now people are taking as commonplace”.

Previous research [5] in 2018 found California legislators and advocates to be somewhat cautious about endgame-oriented policies, preferring more gradual approaches. This study found an overall sense from interviewees of momentum for policy innovation, with greater belief in the possibility of an endgame. Some of this may be a response to advocates’ having begun to receive funding from the state’s new tobacco tax, enacted in 2016, which enabled local tobacco control agencies and coalitions to hire staff and engage in more ambitious policy-oriented planning. Advocates likely understood the success of the tax as signaling public support for tobacco control efforts in the state. The influx of funding and resources from the tax increase may have also bolstered advocates’ enthusiasm. The greater caution about endgame policies found in previous research may also relate to the specific participants. The previous study included interviews with state legislators and leaders of statewide organizations; the current study prioritized local advocates. Statewide leaders, and particularly members of the state legislature, think in terms of what can be accomplished at the state level, taking into account that laws must get support from legislators who represent communities on a wide spectrum of readiness for policy change. The local advocates knew that their localities could take bolder steps, and judged that they would be willing to do so again. Participants understood that there were challenges ahead, for example, framing the endgame in such a way as to avoid or undercut arguments made in the past by the tobacco industry and its allies.
For example, participants did not feel that the history of alcohol prohibition in the US was an appropriate reference for the tobacco endgame, but understood that it would be important to make that distinction, notably by distinguishing sales bans from prohibitions on possession or use of tobacco products. Making that distinction was also important to establish that new tobacco control policies would not invite further over-policing of marginalized communities, such as occurred in the Eric Garner case. Another challenge that participants foresaw was California’s recent legalization of marijuana for recreational use.

Although there was some concern that the liberalization of marijuana regulation suggested that public opinion would not favor stricter tobacco policy, most participants had a more nuanced perspective: that one could simultaneously favor restricting tobacco sales, especially to youth, while permitting sales of marijuana, especially to adults. The combined use of tobacco products and marijuana meant that policies had to encompass both. Further, some participants proposed that the stricter rules relating to marijuana retail sales could provide a model for tobacco. Participants demonstrated awareness that policy innovations carried risks. Although they identified policies containing exemptions as less than ideal, requiring more complex and expensive enforcement or a difficult amendment process, participants sometimes considered exemptions to be a pragmatic way of advancing a policy; this was true even for a policy traditionally considered so far from “pragmatic” as to be almost unthinkable: a tobacco sales ban. In some cases, participants considered exemptions to be harmful. For example, a flavor ban that exempted menthol “solved” the problem by removing the products most obviously marketed to children and youth. However, it left African Americans still vulnerable, and without the allies concerned about youth-oriented “candy” flavors. There appeared to be broad understanding that the goal of ending the epidemic “for all population groups” meant increased engagement with communities with the highest levels of tobacco use. The trend of California tobacco control policy efforts, led by localities, then followed at the state level, was well known to participants. Some suggested that the state’s new focus on a tobacco endgame was the result of local innovation. Participants also recognized that communities took their cues from others, so that policy innovations even in small communities would “ripple outward” and engender wider effects over time.
Our study has limitations. We interviewed a small number of key informants selected because they worked in communities that had recently passed innovative tobacco control policies; thus, our sample cannot be considered representative of all California tobacco control advocates. Those working in more conservative communities may view the idea of an endgame more skeptically. However, other tobacco control policies were once regarded as radical and became more normative with their adoption. Indeed, during the course of the study, Beverly Hills began discussing the first-ever prohibition on sales. Some participants interviewed before these deliberations found such an idea out of reach, while others, interviewed afterward, remarked that the conversation alone made such policies seem possible. Discussions of the tobacco endgame have frequently focused on complex and drawn-out plans, sometimes involving sizable state investment, such as the proposal that the state should buy out the tobacco industry. Recent events in California suggest a different, and in many ways simpler, future, more in line with the history of tobacco control, in which localities have taken the lead. The first laws in California calling for non-smoking sections in restaurants were local and largely symbolic, but they demonstrated the possibility for clean indoor air, and more, and stronger, laws followed. A tobacco sales ban in a small municipality such as Beverly Hills will not substantially reduce tobacco use in California, but it serves as proof of concept. Municipalities and counties in the U.S. may increasingly recognize and exercise their ability to pass such laws, as the 2014 U.S. Surgeon General’s report suggested. This study, and the recent, rapid spread of policies to ban sales of flavored tobacco products in the state, suggest that tobacco control advocates in California are attentive to such possibilities and willing to act.
This study, and the history of California’s approach to tobacco control more generally, point to the importance of local policy advocacy. Local advocates understand the specific issues in their communities, and have a nuanced perspective on policy development, such as when exemptions or exceptions are and are not acceptable. Local advocates also may be able to implement policies that would not be possible at a state or national level; such policies may seem radical, but passage normalizes them. Not every community is ready, but this study suggests that we should encourage more attention to the local actors and new, small-scale policy changes happening around the world that have the potential to ultimately end the tobacco epidemic. Acknowledgement: Support for this paper was provided by the California Tobacco Related Disease Research Program, Grant 26IR-0003. Cigarette smoking causes and exacerbates chronic obstructive pulmonary disease and asthma, and is associated with wheezing and cough in populations without a respiratory diagnosis. While the relationship between cigarette smoking and respiratory symptoms is well-established, the relationship between use of other tobacco products besides cigarettes and respiratory symptoms in adults is less clear. Changes in the tobacco market, in part, reflect efforts to market products that may cause less harm than cigarettes. Electronic nicotine delivery devices may represent such a product. With respect to respiratory symptoms, however, findings have been mixed.
Numerous animal and in vitro studies raise theoretical concerns about e-cigarette use and lung disease. Short-term human experimental studies have linked adult e-cigarette use with wheezing, acute alterations in lung function, and lower forced expiratory flow. One longer-term 12-week prospective study of cigarette smokers switching to e-cigarettes found no effects on lung function, and two 1-year randomized controlled clinical trials found reduced cough and improved lung function in persons who used e-cigarettes to reduce or quit cigarettes.

Numerous factors limit the ability of clinicians to causally link acute pancreatitis with medications.

Elucidating the role of macro- and neighborhood-level exposures in adolescent psychotic experiences could be particularly informative for early intervention efforts, because the clinical relevance of psychotic phenomena increases later in adolescence. Cities have higher rates of violent crime and tend to be more threatening and less socially cohesive. Additionally, 16–24 year-olds in the United Kingdom are 3 times more likely than other age groups to be victimized by a violent crime. Therefore, many adolescents raised in cities are not only embedded in more socially adverse neighborhoods, but are also more likely to be personally victimized by crime compared to other age groups and peers living in rural neighborhoods. Given that cumulative trauma is implicated in risk for psychosis, we hypothesized that one of the reasons that young people in urban settings are at increased risk for psychotic phenomena is that they experience a greater accumulation of neighborhood-level social adversity and personal experiences of violence during upbringing. No study has yet explored the potential cumulative effects of adverse neighborhood social conditions and personal crime victimization on the emergence of psychotic experiences during adolescence. The present study addresses this topic with data from a nationally-representative cohort of over 2000 British adolescents, who have been interviewed repeatedly up to age 18, with comprehensive assessments of victimization and psychotic experiences and high-resolution measures of the built and social environment. We asked: Are psychotic experiences more common among adolescents raised in urban vs rural settings? And does this association hold after controlling for neighborhood-level deprivation, as well as individual- and family-level factors that might otherwise explain the relationship?
Can the association between urban upbringing and adolescent psychotic experiences be explained by urban neighborhoods having lower levels of social cohesion and higher levels of neighborhood disorder?

Are psychotic experiences more common among adolescents who have been personally victimized by a violent crime? And is there a cumulative effect of neighborhood social adversity and personal crime victimization on adolescent psychotic experiences? In addition, the present study conducted sensitivity analyses using adolescent psychotic symptoms as the outcome. We conducted analyses following 5 steps. First, logistic regression was used to test whether psychotic experiences were more common among adolescents raised in urban neighborhoods. We controlled for family- and individual-level factors and for neighborhood-level deprivation to check that the association was not explained by these characteristics, which could potentially differ between urban vs rural residents. We also examined the association between urbanicity and adolescent major depression to check for specificity of the previous findings. Second, because urban neighborhoods are characterized by lower levels of social cohesion and higher levels of neighborhood disorder, we tested whether levels of these neighborhood characteristics accounted for the association between urbanicity and adolescent psychotic experiences, and we also estimated the separate associations of social cohesion and neighborhood disorder with adolescent psychotic experiences. Third, using logistic regression we checked whether adolescents who had lived in the most socially adverse neighborhoods were more likely to be personally victimized by violent crime and, in turn, whether psychotic experiences were more common among adolescents who had been victimized. Fourth, using interaction contrast ratio analysis we investigated potential cumulative and interactive effects of adverse neighborhood social conditions and personal victimization by violent crime on adolescent psychotic experiences. Four exposure categories were created for this analysis by combining neighborhood social adversity with personal crime victimization.
Finally, sensitivity analyses were conducted using the clinically-verified adolescent psychotic symptoms as the outcome measure. All analyses were conducted in STATA 14.2, and accounted for the non-independence of twin observations using the “CLUSTER” command.

This procedure is derived from the Huber-White variance estimator, and provides robust standard errors adjusted for within-cluster correlated data. Note: ordinal logistic regression was used in analyses where adolescent psychotic experiences was the dependent variable, because it was measured on an ordinal scale. This study investigated the role of urbanicity, neighborhood social conditions, and personal crime victimization in adolescent psychotic experiences and revealed 3 initial findings. First, the association between growing up in an urban environment and adolescent psychotic experiences remained after considering a range of potential confounders including family SES, family psychiatric history, maternal psychosis, adolescent substance problems, and neighborhood-level deprivation. This association between urbanicity and psychotic experiences was explained, in part, by 2 features of the neighborhood social environment, namely lower levels of social cohesion and higher levels of neighborhood disorder. Second, personal victimization by violent crime was nearly twice as common among adolescents in the most socially adverse neighborhoods, and adolescents who had experienced such victimization had over 3 times greater odds of having psychotic experiences. Third, the cumulative effect of neighborhood social adversity and personal crime victimization on adolescent psychotic experiences was substantially greater than either exposure alone, highlighting a potential interaction between these exposures. That is, adolescents who had lived in the most adverse neighborhood conditions and been personally victimized were at the greatest risk for psychotic experiences during adolescence.
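The "substantially greater than either exposure alone" comparison is what the interaction contrast ratio (ICR, also known as the relative excess risk due to interaction, RERI) formalizes: ICR = OR(both exposures) − OR(adversity only) − OR(victimization only) + 1, where ICR > 0 indicates a super-additive (cumulative) effect. A minimal sketch with hypothetical odds ratios, not the study's actual estimates:

```python
def interaction_contrast_ratio(or_both: float, or_a_only: float, or_b_only: float) -> float:
    """ICR/RERI relative to the doubly unexposed group; a value above 0
    means the joint effect exceeds the sum of the two separate effects."""
    return or_both - or_a_only - or_b_only + 1.0

# Hypothetical odds ratios: neighborhood adversity alone 1.5,
# crime victimization alone 2.0, both exposures together 4.0.
icr = interaction_contrast_ratio(4.0, 1.5, 2.0)
print(icr)  # 1.5 > 0: positive additive interaction
```

If the joint odds ratio were merely additive (here 1.5 + 2.0 − 1 = 2.5), the ICR would be zero and no interaction would be indicated.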
The present findings extend previous evidence from this cohort implicating childhood urbanicity and neighborhood characteristics in the occurrence of childhood psychotic symptoms. Here we show that the effects of urban and socially adverse neighborhood conditions on psychotic experiences are not limited to childhood, but continue into adolescence, when psychotic phenomena become more clinically relevant. These findings support previous evidence demonstrating higher rates of psychosis-proneness and prodromal status among adolescents and young adults in urban, threatening, and socially fragmented neighborhoods.

Late adolescence heralds the peak age at which psychotic disorders are typically diagnosed. If a degree of aetiological continuity truly exists between adolescent psychotic experiences and adult psychotic disorder, ours and other recent findings tentatively support a mechanism linking adverse neighborhood conditions during upbringing with psychosis in adulthood. In our study, the combined effect of adverse neighborhood social conditions and personal victimization by violent crime was greater than the independent effects of each. This is consistent with cumulative stress models and previous studies showing that risk for psychosis phenotypes increases as the frequency and severity of stressful exposures increase. Several biological and psychological mechanisms could explain why adolescents who were exposed to neighborhood social adversity and violent crime during upbringing were more prone to psychotic experiences. Prolonged and acute early-life stress is purported to dysregulate the biological stress response and lead to dopaminergic sensitization, which is the leading hypothesized neurochemical pathway for the positive symptoms of psychosis. In addition, adolescents who grow up in threatening neighborhoods with weak or absent community networks could develop psychosis-like cognitive schemas such as paranoia, hypervigilance, and negative attributional styles. A cognitive pathway could explain why effects were apparent for psychotic experiences but not major depression. Our findings tentatively suggest a mechanism whereby childhood exposure to neighborhood social adversity sensitizes individuals to subsequent stressful experiences such as crime victimization.
This hypothesized mechanism is supported by recent evidence of neurological differences in social stress reactivity between adults with urban vs rural upbringing. Further research into the influence of neighborhood exposures on childhood neurocognitive development could shed light on this hypothesized mechanism. Several limitations should be considered. First, causality of findings from this observational study cannot be assumed. Noncausal mechanisms, such as the selection of genetically high-risk families into urban and adverse neighborhoods, remain possible, though our findings were not explained by proxy indicators of genetic and familial risk. Second, neighborhood conditions were measured approximately 5 years before adolescent psychotic experiences were assessed. However, the vast majority of adolescents reported that they did not move house between ages 12 and 18. Third, though crime victimization was more common in adverse neighborhoods, we do not know the extent to which these victimization experiences occurred outside the home. Perpetrators of physical violence are often family members, suggesting that our measure of violent crime captured victimization inside as well as outside the home. Fourth, psychotic experiences are associated with adult psychosis but also with other serious psychiatric conditions; while a degree of specificity was suggested in that the effect of urbanicity on psychotic experiences was not replicated for adolescent depression and was not explained by adolescent substance problems, it is probable that the mental health implications of growing up in an urban setting extend beyond psychosis.

In addition, associations arising for the clinically-verified psychotic symptoms were often non-significant. It is possible that the low prevalence of psychotic symptoms in this sample restricted our power to detect associations. However, it is also possible that the self-report measure of adolescent psychotic experiences captured genuine experiences as well as psychotic phenomena. This may have inflated the associations arising for adolescent psychotic experiences, though it is reassuring that point estimates were fairly similar to those produced for psychotic symptoms. Finally, our findings come from a sample of twins, which potentially differ from singletons. However, E-Risk families closely match the distribution of UK families across the spectrum of urbanicity and neighborhood-level deprivation. Furthermore, the prevalence of adolescent psychotic experiences among E-Risk participants is similar to non-twin samples of adolescents and young adults. Acute pancreatitis is an acute, inflammatory, potentially life-threatening condition of the pancreas. With over 100,000 hospital admissions per annum, acute pancreatitis is the leading gastrointestinal cause of hospitalization in the United States and the 10th most common non-malignant cause of death among all gastrointestinal, pancreatic, and liver diseases. It is a major cause of morbidity and healthcare expenditure not only in the United States, but worldwide. There are numerous established etiologies of acute pancreatitis, among which gallstones and alcohol are the most common. The remaining cases are primarily attributable to the following etiologic factors: hypertriglyceridemia, autoimmune disease, infection, hyper/hypocalcemia, malignancy, genetics, endoscopic retrograde cholangiopancreatography, and trauma.
Despite accounting for approximately only 1%-2% of cases overall, drug-induced pancreatitis (DIP) has become increasingly recognized as an additional and vitally important, albeit often inconspicuous, etiology of acute pancreatitis. The World Health Organization database lists 525 different medications associated with acute pancreatitis. Unfortunately, few population-based studies on DIP exist, limiting knowledge of its true incidence and prevalence. In this setting, we review the ever-increasing diversity of DIP, with emphasis on the wide range of drug classes reported and their respective pathophysiologic mechanisms, in an attempt to raise awareness of the true and underestimated prevalence of DIP. We hope this manuscript will aid in increasing secondary prevention of DIP, ultimately leading to a decrease in overall acute pancreatitis-related hospitalizations and economic burden on the health care system. As there is no standardized approach to stratifying patients to determine their risk of developing acute pancreatitis, primary prevention for the majority of etiologies cannot be fully implemented. Secondary prevention of acute pancreatitis, on the other hand, can more easily be executed. For example, abstinence from alcohol reduces the risk of alcoholic pancreatitis, cholecystectomy reduces the risk of gallstone pancreatitis, and tight control of triglycerides reduces the risk of recurrent episodes of pancreatitis secondary to hypertriglyceridemia. On this notion, unique to DIP is the fact that it can be prevented in both the primary and secondary fashion. Unfortunately, however, most of the available data in reference to DIP is derived from case reports, case series, or case-control studies. In this vein, the causality between specific medications and acute pancreatitis has been established in only a minority of cases.
In addition, oftentimes, lack of a known etiology for acute pancreatitis directly increases length of hospitalization due to delayed diagnosis and subsequent treatment. Moreover, patients unaware of an adverse drug reaction to a prior medication may continue taking that medication, leading to repeat hospitalizations. Finally, with the rapid expansion of pharmacologic agents, widespread legalization of cannabis, and the increase in recognized medications, supplements, and alternative medications reported to induce pancreatitis, the need to become familiar with this esoteric group remains imperative, and knowledge in the form of awareness regarding certain medications is warranted. First, the lack of mandatory adverse drug reporting systems allows many cases to go unreported. Second, bias exists, in the sense that clinicians tend to forgo linking unusual medication suspects to a rare adverse event. Third, it is often difficult to rule out other, more common, causes of acute pancreatitis, especially in patients who have multiple comorbidities and underlying risk factors. Fourth, many cases lack a re-challenge test or drug latency period to definitively link acute pancreatitis to a particular drug. Finally, evidence is lacking to support the use of any serial monitoring technique, namely imaging or pancreatic enzymes, to help detect cases of drug-induced pancreatitis.

The high variability between animals in delta power may have obscured an effect.

The MACH14 cohort is a dataset pooled from 16 studies conducted at 14 sites across 12 states. Each study in MACH14 used electronic data monitoring (EDM) pill caps to objectively measure participants’ adherence to antiretroviral medication. The focus of this study was on non-methadone substance abuse treatment, so studies conducted in methadone maintenance programs were not considered in this analysis. From the 1579 participants in the MACH14 dataset, we identified 215 from two studies based outside methadone clinics, because only these two studies’ participants had both EDM and substance abuse treatment status data. Written informed consent was obtained for participation in the parent studies, and the Yale Institutional Review Board approved the secondary analyses. Patients were asked about engagement in substance abuse treatment and use of specific substances for varying preceding time frames: one of the two studies asked about participation in substance abuse treatment during the past 90 days and use of specific substances over the past 30 days, while the other study asked about treatment over the past 30 days and substance use over the past 14 days. To aggregate substance use data across studies, variables representing use of specific substances were defined as the proportion of days within the asked-about time frame the person had used each of several substances. This analysis used data collected at the first time point at which participants had EDM data for the preceding four weeks, had also been asked about being recently enrolled in substance abuse treatment, and were not enrolled in a methadone-clinic-based study. To estimate the effect of substance abuse treatment on adherence, adherence was calculated for the four weeks up to and including the date recent substance abuse treatment enrollment was assessed, as well as for the four weeks after the substance abuse treatment determination.
Adherence in each week was calculated by dividing the weekly number of doses taken by the weekly number of prescribed doses for each medication, with adherence to each medication capped at 100%.
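The adherence computation just described can be sketched as follows; the numbers and function names are illustrative, not from the MACH14 codebase.

```python
def weekly_adherence(doses_taken: int, doses_prescribed: int) -> float:
    """Weekly adherence to one medication: taken / prescribed, capped at 100%."""
    return min(doses_taken / doses_prescribed, 1.0)

def patient_weekly_adherence(meds: list[tuple[int, int]]) -> float:
    """Average weekly adherence across a patient's prescribed medications.

    meds: one (doses_taken, doses_prescribed) pair per medication.
    """
    return sum(weekly_adherence(t, p) for t, p in meds) / len(meds)

# A hypothetical patient on two antiretrovirals: 12 of 14 doses of one taken,
# and 16 cap openings recorded for the other (capped at 14/14 = 100%).
print(round(patient_weekly_adherence([(12, 14), (16, 14)]), 3))  # 0.929
```

The cap at 100% prevents extra cap openings (e.g., pocket dosing or curiosity openings) from inflating a patient's score above full adherence.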

Adherence for a patient on multiple antiretrovirals was calculated by averaging across prescribed medications. The effects of substance abuse treatment on adherence were determined in multivariate analyses that included a grouping variable denoting whether the patient was enrolled in substance abuse treatment and a variable reflecting substance abuse treatment over time. The analyses were conducted controlling for sociodemographic characteristics that might differ between patients in, and not in, substance abuse treatment. To control for the anticipated finding that patients in substance abuse treatment would have more active drug use than a reference group including people who had never had significant substance use, analyses included a measure representing the largest proportion of days during which participants had used either cocaine, opiates, or stimulants. Cannabis use was not included in this measure of illicit drug use because, in a separate analysis of the MACH14 dataset and in an earlier study, recent cannabis use was not associated with worse adherence. Analyses were run with SAS 9.2. The model included random effects for intercept and slope, as this model had better fit to the data than models with fixed effects only. Although the analyses controlled for illicit drug use, it is possible that our self-report measures of substance use understated the impact of substance abuse treatment on substance abuse and that it is in fact abstinence that facilitates adherence. In one of the few randomized controlled studies of HIV-positive drug users in which abstinence was the target outcome, there was a trend towards a significant correlation between consecutive weeks of toxicology-tested abstinence during the intervention and reductions in viral load. There is also evidence from a naturalistic longitudinal cohort study that attendance at HIV treatment, a sine qua non for adherence, appears to improve with newly-achieved abstinence.
Substance abuse treatment might improve adherence by mechanisms other than facilitating abstinence from using drugs. Substance abuse treatment typically involves case management to address the unstable housing characteristic of drug users. Stable housing arrangements during substance abuse treatment would be expected to foster adherence, in that stable routines have been associated with better adherence.

Substance abuse treatment also focuses patients on future goals, an orientation that has been described as fostering adherence, and substance abuse treatment can involve re-arranging social networks in ways that also might foster better adherence. It is possible that enrollment in substance abuse treatment reflects a lurking un-measured variable associated with both being in substance abuse treatment and better adherence. The finding of better adherence among people in substance abuse treatment was not buttressed by finding better adherence over time among patients in treatment. However, it might have been difficult to detect the time course of benefit from substance abuse treatment because the data did not specify when patients were entering, continuing, or finishing substance abuse treatment. Substance abuse was measured by self-report, and it is possible that substance abuse was disproportionately under-reported by people out of substance abuse treatment, thus exaggerating the impact of substance abuse treatment on adherence. The type of substance abuse treatment was not specified, and the findings may not apply to all types of substance abuse treatment. Finally, the sample size was modest, and the number of participants in substance abuse treatment was small. It is noteworthy that although adherence decreased on average over time, the course of adherence varied significantly by person. Further analyses should test variables that may account for individual differences in adherence over time. These findings lend some support to the clinical practice of addressing substance use in an effort to improve adherence. The crucial next step is to develop and prospectively test substance abuse-focused interventions for patients with both substance abuse and adherence problems. Marijuana has been used for hundreds of years for mystical and religious ceremonies, for social interaction, and for therapeutic uses.
The primary active ingredient in marijuana is delta-9 tetrahydrocannabinol (∆9-THC), one of some 60 21-carbon terpenophenolic compounds known as cannabinoids, which exerts its actions via cannabinoid receptors referred to as CB1 and CB2 receptors. Endogenous cannabinoids have been isolated from peripheral and nervous tissue. Among these, N-arachidonoylethanolamine (anandamide, AEA) and 2-arachidonoylglycerol are the best-studied examples. Behaviorally, AEA increases food intake and induces hypomotility and analgesia.

Anandamide also induces sleep in rats. Cannabinoid stimulation stabilizes respiration by potently suppressing sleep apnea in all sleep stages. In humans, marijuana and ∆9-THC increase stage 4, or deep, sleep. The mechanism by which the cannabinoids induce sleep is not known, hampering the development of this drug for possible therapeutic use. The sleep-inducing effects of cannabinoids could be linked to endogenous sleep factors, such as adenosine (AD). There is substantial evidence that AD acts as an endogenous sleep factor. Extracellular levels of AD measured by microdialysis are higher in spontaneous waking than in sleep in the basal forebrain, but not in other brain areas such as the thalamus or cortex. Given this evidence, we hypothesized that the soporific effects of AEA could be associated with increased AD levels. In the present study, extracellular AD levels were assessed in the basal forebrain via the microdialysis method. The basal forebrain was sampled because this region is particularly sensitive to AD. The cholinergic neurons located in the basal forebrain are implicated in maintaining waking behavior, and it is hypothesized that sleep results from the accumulating AD, which then inhibits the activity of the wake-active cholinergic neurons. Our results show that systemic application of AEA leads to increased AD levels in the basal forebrain during the first 3 hours after injection, and total sleep time is increased in the third hour. These findings identify a possible mechanism by which the endocannabinoid system influences sleep. Previous reports have found that application of AEA directly to the brain increases total sleep time and SWS. We have now shown that systemic administration of AEA also has the same effect. More importantly, the increased sleep is associated with increased extracellular levels of AD in the basal forebrain.
The increase in sleep induced by AEA occurred during the third hour after injection of the compound and was associated with peak levels of AD. In each of the first 2 hours, AD levels were significantly higher compared to vehicle injections, with the peak in AD occurring in the third hour. Increased sleep was not evident in the first 2 hours, suggesting that a threshold accumulation of AD might be necessary to drive sleep. In the fourth hour, AD decreased dramatically relative to the third hour, and the levels were not different from those observed after vehicle administration. There was no significant difference in delta power between AEA and DMSO, even though the percentage of SWS was higher with AEA. Sleep is hypothesized to result from accumulating AD levels, and then there is a decline as a result of sleep.

This effect is present in the basal forebrain and not in other brain areas. Thus, this purine is hypothesized to act as a homeostatic regulator of sleep; its buildup increases the sleep drive, and as AD levels decline, sleep drive also diminishes. The present data are consistent with this hypothesis in that peak sleep levels occur with peak AD levels, and then, as a result of sleep, AD levels also decline. The CB1-receptor antagonist blocked the AEA-induced induction of AD as well as the sleep-inducing effect. The CB1-receptor antagonist SR141716A has been tested in diverse behavioral paradigms, and it blocks the effects induced by AEA. Santucci and coworkers demonstrated that administration of SR141716A increases W and decreases SWS. Here we replicated these effects but also demonstrated that the increase in AD levels after injection of AEA was blocked by the CB1-receptor antagonist. AEA exerts its effect via the CB1 receptor and hyperpolarizes the neuron. The CB1 receptors are coupled to the Gi/Go family of G protein heterotrimers. Activation of the CB1 receptor inhibits adenylate cyclase and decreases synthesis of cAMP. In rats, the CB1 receptor is localized in the cortex, cerebellum, hippocampus, striatum, thalamus, and brainstem. The CB1 receptor is also present on basal-forebrain cholinergic neurons, as determined by immunocytochemistry. The CB1-receptor mRNA is present in the basal forebrain. This receptor is also present in the brainstem, where the cholinergic pedunculopontine tegmental region is implicated in W. Microinjection of AEA into this region decreases W and increases REM sleep. The CB1 receptors are also localized in the thalamus, an area implicated in producing slow waves in the EEG. Activation of these receptors in the pedunculopontine tegmentum, basal forebrain, and thalamus may decrease the firing of wake-active neurons, resulting in sleep.
Additionally, accumulation of AD in the basal forebrain may inhibit the cholinergic neurons and also increase sleep. Direct injections of AEA into the basal forebrain were not possible, since AEA dissolves only in DMSO and alcohol. Moreover, delivery of AEA dissolved in DMSO clogs the microdialysis membrane. The mechanism by which AEA increases AD in the basal forebrain is not known, even though both AD and AEA could directly inhibit the wake-active neurons given the inhibitory action of these agents on their receptors. Nevertheless, there is evidence of an interaction between the adenosinergic and the endocannabinoid systems. For example, the motor impairment induced by the principal component of cannabis, ∆9-THC, is enhanced by adenosine A1-receptor agonists. We now show that stimulation of the endocannabinoid system via the CB1 receptor increases AD in the basal forebrain. The endocannabinoids and AD may regulate sleep homeostasis via second and third messengers, as we have hypothesized. Previously, investigators have shown that stage 4 sleep in humans is increased in response to administration of ∆9-THC or smoking of marijuana cigarettes. We now show that such a soporific effect is associated with an increase in AD levels in the basal forebrain. Cannabinoid stimulation suppresses sleep apnea in rats, and A1-receptor stimulation also has the same effect. The endocannabinoid system also influences other neurotransmitter systems, in particular, inhibiting the glutamatergic system. It would be important to determine whether endogenous levels of specific neurotransmitter systems are changed as a result of AEA-induced activation of the CB1 receptor. Irrespective of the mechanism involved, our studies underscore the importance of endocannabinoid-AD interactions in sleep induction and open new perspectives for the development of soporific medications.
The discovery of the cannabinoid receptors and endocannabinoid ligands has generated a great deal of interest in identifying opportunities for the development of novel cannabinergic therapeutic drugs. Such an effort was first undertaken three decades ago by a number of pharmaceutical companies, but was rewarded with only modest success.

The cilia form a network covered in receptor proteins.

The sinuses, a connected system of hollow cavities in the skull lined with mucosa tissue that has a thin layer of mucus, may help humidify air in the nasal cavity. In 2015, a $15-million grant from the National Science Foundation kicked off further research into how animals, including humans, locate the source of an odor, such as food. The research focuses on how odors move in the landscape and how animals use spatial and temporal cues to move toward a target. The research is just one part of the federal BRAIN Initiative that studies olfaction as a window into understanding the brain, because olfaction is considered the most primal pathway to understanding brain evolution. At present, such information is not available for e-nose development. The olfactory epithelium contains three types of cells: olfactory receptor neurons, their precursors, and supportive cells. The cilia are constantly exposed to the nasal environment and are continually replaced, even their basal cells, possibly indicating frequent damage. A layer of mucus 10 to 40 µm thick coats the mucosa epithelium, and odorants must pass into this layer to interact with the sensory neurons through a series of poorly understood “perireceptor” events. Each sensory neuron, covered in cilia, projects down from the olfactory epithelium into the mucosa. Receptor proteins thread back and forth across the outer membrane of the cilia and interact with odorants. Various theories have been put forward on how exactly odorants interact with the proteins, and this remains an area of research.

Receptor cells of the same type are randomly distributed in the nasal mucosa but converge on the same glomerulus. Each type of neuron frequently responds to more than one odorant, even from different chemical classes, so the overall odor signal must be integrated by the olfactory bulb. Integration includes both olfactory and trigeminal signals, and workers often report odor and irritation as a combined, singular perception. The olfactory bulb also receives information from other areas of the brain to filter out background odors and enhance perception. Fascinatingly, none of the physical stimuli themselves ever reach the brain. Instead, a host of proteins transduces captured molecules into a small change in voltage that can be deciphered by the brain. The unpleasant and pleasant aspects of mixtures are represented separately in the brain. Human sensitivity to odorants ranges across several orders of magnitude. Around 1 ppt appears to be a theoretical limit for sensitivity, and many odorants are not perceived until above 1,000 ppm. The major components of air are not sensed at all. Carbon dioxide is an interesting chemical because it is odorless at ambient concentrations yet selectively triggers only the trigeminal neurons, and not the olfactory neurons, when it reaches 200-fold above background levels. Describing multiple odor notes in mixtures is challenging. Fewer than 15% of the people tested could identify more than one of the odorants present in a mixture, and identification of 3 to 4 components was the limit for trained experts. Even 90% of wine judges were unable to reproduce their scores. General variability in odor perception is high. Factors include age, sex, lifestyle, prior exposures, culture, and health status. Approximately 3% of Americans have minimal or no sense of smell. Prolonged or repeated exposure to an odor can lead to a decreased response, which has the benefit of allowing a baseline reset in preparation for a new stimulus.
Habituation happens as quickly as 2.5 seconds and is accompanied by decreased transduction by the neurons after 4 seconds. A growing field of research throughout public health is the microbiome, the microflora that contribute to gut, mouth, and skin health. The nasal cavity, too, hosts microbes that contribute to normal functioning.

Some microbes themselves emit odorants and can decrease the host’s sensitivity. Attempts to reverse engineer an odor based on the molecular properties of the odorant have been successful. Algorithms were able to predict the odor note of a given odorant based on its chemoinformatic features for 8 descriptors out of 19 total. Researchers using systems biology and computational techniques mapped odors to specific proteins on olfactory receptor neurons, a mapping dubbed the “odorome”. Risk assessment for estimating the non-sensory health risks of airborne chemicals has a large body of guidance and case studies. The primary focus of this paper is on the sensory health effects of odors, which integrate both trigeminal and olfactory responses. In general, the olfactory pathway is capable of informing the organism about the presence of an odorant, while the trigeminal pathway helps inform the organism about the risk of health hazards and injury. Cognitive bias plays a role in odor responses. Odors trigger memories of previous experiences and are influenced by the power of suggestion. Participants given a prior warning that an odor was harmful reported increased irritation; fewer symptoms were reported if they were told the odor was healthful. Even when no odor was administered, the suggestion that there was a harmful odor led to symptoms. Prior experience with an odor introduces bias, too. Emotional baseline is also a factor. Sensitization to an odorant occurs when an acute exposure triggers subsequent, more-severe responses, often at lower concentrations. Desensitization can occur when chronic exposure to an odorant increases the concentration required to trigger a response. For example, workers who are habituated and desensitized to an odorant may be baffled by neighborhood complaints. Due to the availability of human data, other animal data were not considered in the hazard identification.
Further, humans have a smaller area of nasal olfactory epithelium than rodents, which makes humans more vulnerable to its loss, and respiration rates are quite different. Only one experimental study of a typical, complex environmental odor and health effects was found, which included both physical effects and mood. Out of the dozens of parameters evaluated, only headache, eye irritation, and nausea symptoms were elevated among those exposed for one hour as compared to controls. The epidemiology evidence, however, indicated the full range of adverse effects from odor exposure.

Such symptoms were self-reported, which means they may include bias. Distance from the facility, an objective measure, contrarily did not predict the frequency of symptoms. Interestingly, the relationship between odor exposure and health symptoms appeared to be greatly influenced by odor hedonic tone, perhaps more so than by odor intensity. The debate over whether the purely odor-related symptoms are psychological or have an actual underlying physical cause is ongoing. In the same issue of Archives of Environmental Health in 1992, two opposing perspectives were presented. Shusterman concluded that the evidence of health effects was lacking beyond odors’ ability to inflict annoyance. In the editorial immediately after his article, Ziem and Davidoff countered that odor, and chemical sensitivity in general, may well be based on underlying physiological responses, as was often found in the case of sick building syndrome. Both agreed that better ways to determine the impact of odors were needed, and that well-controlled prospective case-control studies would be especially welcome. The psychological symptoms of odor exposure include tension, nervousness, anger, frustration, embarrassment, depression, fatigue, confusion, annoyance, and general stress. Odor frequency, odor intensity, and the feeling that their concerns are not being heard all contribute to annoyance, which leads to stress. Health worries contribute as well. See Table 4.2. Changes in odor-induced frontal lobe activity have been linked to changes in mood, drowsiness, and alertness. Unfortunately, the studies of this connection were few, and additional research in this area is needed. Odor-induced brain activity is complex, involving more than 30 different regions. Exposure to malodor led to an inability to focus on a task. Other studies reviewed found, however, that odors have no effect on task performance, so they concluded that the impact of odors on task performance may be odorant-specific.
An increased prevalence of gastrointestinal symptoms was observed as a function of proximity to a wastewater treatment plant in Poland. The symptoms were correlated with both odors and microbiological pollutants and could not be disentangled to single out odors as the primary agent. Similarly, the negative effects of traffic noise and odor on residents in Windsor, Ontario, Canada, had a strong covariance between these two parameters and could not be differentiated. Some odorants and some co-pollutants within odors are considered hazardous air pollutants because they cause other adverse effects beyond smell and irritation.

Air that contains odorants is also known to contain odorless co-pollutants such as particulate matter and endotoxins. There was a positive correlation between the presence of odors and the prevalence of self-reported health symptoms, such as headache and nausea, when communities near hazardous waste sites were compared. However, more serious health outcomes – cancers, mortality, and birth defects – were not higher compared to the control sites. Dose-response relationships for odors aim to link the percentage of people experiencing adverse effects, such as odor annoyance and irritation, to the level of exposure. For toxic chemicals, adverse effects increase as exposure increases. Odors, however, can be more inconsistent. For example, hydrogen sulfide loses its characteristic “rotten egg” odor note as the concentration increases, leading to harmful levels going unnoticed. The major goal of both risk assessment and odor assessment is to verify that exposures are below the thresholds of concern. For conventional risk assessment, the thresholds are health-based, often extrapolated from animal studies, and typically incorporate large margins of safety due to crude extrapolations and uncertainties. For odor assessment, achieving odorless air is the goal, yet due to the perceived “lack of severity” of the effect, the acceptable limit is often set well above the odor-detection threshold. Given the wide variability in human response to odor, this approach is perilous, but a point of departure is needed nonetheless. The amount of dilution required to achieve odorless air delivers a crude point of departure, but the large error should be acknowledged by presenting final results with only one significant figure. Reported thresholds for odorants typically vary by several orders of magnitude, especially the ODTC50. Further, reliable methods may not have been used, and results for odor detection and odor recognition are sometimes conflated.
Within a controlled setting, two approaches to sensory testing of dilutions by panelists are used. One is the Odor Profile Method (OPM), which uses sugar solutions for calibration; the other uses odor disappearance upon sufficient dilution. Both rely on dilution equipment, typically a dynamic dilution olfactometer that delivers sampled air diluted with odorless air to a nose port where the dilution is smelled by the panelist. Concentrations are presented in ascending order to avoid desensitization and anticipation bias. Statistics are then applied to the panelists’ results to determine the ODTC50 for the odorant or mixture tested. For a single odorant, OPM is used to assign an odor note and intensity to each dilution. The Weber-Fechner law is applied: the logarithm of the concentration is taken, and the intensity results are fit to a line through linear regression. Extrapolation to intensity score 1 yields the ODTC50 value. Each odorant can have a vastly different slope. For example, a 200-fold change in the concentrations of 1-propanol and n-amyl butyrate caused a 15-fold and a 0.5-fold change in odor intensity, respectively. For mixtures, the ODTC50 can be determined by forced-choice, typically “triangle,” methods using the same dynamic dilution equipment. While inhaling at the nose port, the panelist rotates between three choices and then selects which is the diluted sample. A point estimate is generated rather than a curve, and no odor description is gathered. ASTM Method E679-04 has been used to determine the ODTC50 values for a range of odorants. Such methods have been used in the drinking water industry to set ODTC50 values for methyl tertiary butyl ether, and improvements to ASTM Method E679-04 have been suggested for drinking water. In Europe, under EN 13725, the final dataset typically includes only the data for the four or more panelists whose results are the most consistent with the overall panel’s geometric mean value.
Also, panelists may be presented with 2 samples instead of 3.
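As a rough illustration of the OPM calculation described above, the sketch below takes the logarithm of each tested concentration, fits the panel intensity scores to a line by least squares (the Weber-Fechner step), and extrapolates to intensity score 1 to estimate the ODTC50. The function name and the panel data are invented for illustration; a real study would use measured panel-average intensities.

```python
import math

def odtc50_from_opm(concentrations, intensities):
    """Fit intensity = slope*log10(C) + intercept by least squares,
    then solve for the concentration at which intensity equals 1
    (a sketch of the Weber-Fechner extrapolation used in OPM)."""
    x = [math.log10(c) for c in concentrations]
    y = intensities
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    # Concentration where the fitted line crosses intensity score 1
    return 10 ** ((1 - intercept) / slope)

# Hypothetical panel-average intensity scores at four dilutions (ppb)
conc = [10, 100, 1000, 10000]
intensity = [1.2, 2.1, 3.0, 3.9]
print(odtc50_from_opm(conc, intensity))
```

Consistent with the guidance above on the large error in such estimates, the result would be reported to only one significant figure.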

Untreated OSA in this patient population has been associated with an increased risk of death

The BART scores increased significantly with abstinence, whereas the IGT scores did not change during abstinence. Self-reported total and motor impulsivity decreased significantly with abstinence, and the non-planning score tended to decrease. The following changes were observed when restricting our longitudinal analysis to only those 17 PSU with baseline and follow-up data: general intelligence, executive function, working memory, visuospatial skills, global cognition, and processing speed. The 19 PSU not studied longitudinally differed from our abstinent, restudied PSU on lifetime years of cocaine use. PSU not restudied performed significantly worse at baseline than abstinent PSU on cognitive efficiency, processing speed, and visuospatial learning. Furthermore, they did not differ significantly on years of education, AMNART, tobacco use severity, proportions of smokers or of family members with problem drinking, or the proportion of individuals taking a prescribed psychoactive medication. In PSU, more lifetime years drinking correlated with worse performance on domains of cognitive efficiency, executive function, intelligence, processing speed, visuospatial skills, and global cognition. More cocaine consumed per month over lifetime correlated with worse performance on executive function and greater attentional impulsivity. More marijuana consumed per month over lifetime correlated with worse performance on fine motor skills and tended to correlate with higher BIS-11 motor impulsivity; in addition, more marijuana use in the year preceding the study correlated with higher non-planning and total impulsivity. Earlier onset age of marijuana use correlated with higher non-planning impulsivity and worse visuospatial learning. Interestingly, more lifetime years of amphetamine use correlated with better performance on fine motor skills, executive function, visuospatial skills, and global cognition.
Similar to the associations found in PSU, more lifetime years drinking in AUD correlated with worse performance on cognitive efficiency, visuospatial skills, and global cognition, and worse performance on visuospatial memory correlated with greater monthly alcohol consumption averaged over the year preceding assessment and over lifetime. In addition, longer duration of alcohol use in AUD was related to worse auditory-verbal learning and memory.

Our primary aim was to compare neurocognitive functioning and inhibitory control in one-month-abstinent PSU and AUD. Polysubstance users at one month of abstinence showed decrements on a wide range of neurocognitive and inhibitory control measures compared to normed measures. The decrements in neurocognition ranged in magnitude from 0.2 to 1.4 standard deviation units below a z-score of zero, with deficits >1 standard deviation below the mean observed for visuospatial memory and visuospatial learning. In comparison to AUD, PSU performed significantly worse on measures assessing auditory-verbal memory, and tended to perform worse on measures of auditory-verbal learning and general intelligence. Chronic cigarette smoking status did not significantly moderate cross-sectional neurocognitive group differences at baseline. In addition, PSU exhibited worse decision-making and higher self-reported impulsivity than AUD, signaling potentially greater risk of relapse for PSU than for AUD. Being on a prescribed psychoactive medication was related to higher self-reported impulsivity in PSU. For both PSU and AUD, more lifetime years drinking were associated with worse performance on global cognition, cognitive efficiency, general intelligence, and visuospatial skills. Within PSU only, greater substance use quantities related to worse performance on executive function and fine motor skills, as well as to higher self-reported impulsivity. Neurocognitive deficits in AUD have been described extensively. However, corresponding reports in PSU are rare, and very few studies have compared PSU to AUD during early abstinence on such a wide range of neurocognitive and inhibitory control measures as administered here. To our knowledge, no previous reports have specifically shown PSU to perform worse than AUD on domains of auditory-verbal learning and general intelligence at one month of abstinence.
Our studies confirmed previous findings of worse auditory-verbal memory and inhibitory control in individuals with a comorbid alcohol and stimulant use disorder compared to those with an AUD, and findings of no differences between the groups on measures of cognitive efficiency.

Some of the cross-sectional neurocognitive and inhibitory control deficits described in this PSU cohort are associated with previously described morphometric abnormalities in primarily prefrontal brain regions of a subsample of this PSU cohort with neuroimaging data. Our neurocognitive findings also further complement studies in subsamples of this PSU cohort that exhibit prefrontal cortical deficits measured by magnetic resonance spectroscopy and cortical blood flow. Our secondary aim was to explore whether PSU demonstrate improvements on neurocognitive functioning and inhibitory control measures between one and four months of abstinence from all substances except tobacco. Polysubstance users showed significant improvements on the majority of cognitive domains assessed here, particularly cognitive efficiency, executive function, working memory, and self-reported impulsivity, but an unexpected increase in risk-taking behavior. By contrast, no significant changes were observed for learning and memory domains, which were also worst at baseline, resulting in deficits in visuospatial learning and visuospatial memory at four months of abstinence of more than 0.9 standard deviation units below a z-score of zero. There were also indications of significant time-by-smoking-status interactions for visuospatial memory and fine motor skills; however, these analyses have to be interpreted with caution and considered very preliminary, given the small sample sizes of smoking and nonsmoking PSU at follow-up. Nevertheless, the demonstrations of cognitive recovery in abstinent PSU, and the potential effects of smoking status on such recovery, are consistent with our observations of corresponding recovery in abstinent AUD. The 19 PSU not studied at follow-up differed significantly from abstinent PSU at baseline on several important variables: they had more years of cocaine use over lifetime, and performed worse on cognitive efficiency, processing speed, and visuospatial learning.
As such, these differences should be tested as potential predictors of relapse in future larger studies. Several factors limit the generalizability of our findings. Our cross-sectional sample size was modest, and therefore our longitudinal sample of abstinent PSU was small; as is not uncommon in clinical samples, about half of our PSU cohort relapsed between baseline and follow-up, a rate comparable to what has been reported elsewhere. This made us focus our longitudinal results reporting on the main effects of time and de-emphasize the reporting of time-by-smoking-status interactions. Larger studies are needed to examine the potential effects of smoking status and gender on neurocognitive recovery during abstinence from substances. The study sample was drawn from treatment centers of the Veterans Affairs system in the San Francisco Bay Area and a community-based healthcare provider, and the ethnic breakdown of the study groups was different.

Therefore, our sample may not be entirely representative of community-based substance use populations in general. Although preliminary, the within-subject statistics are meaningful, as they are more informative for assessing change over time than larger cross-sectional studies at various durations of abstinence. In addition, premorbid biological factors and other behavioral factors not assessed in this study may have influenced cross-sectional and longitudinal outcome measures. Nonetheless, our study is important and of clinical relevance in that it describes deficits in neurocognition and inhibitory control of detoxified PSU that are different from those in AUD, and that appear to recover during abstinence from substances, potentially as a function of smoking status. Our cross-sectional and longitudinal findings are valuable for improving current substance use rehabilitation programs. The higher impulsivity and reduced cognitive abilities of PSU compared to AUD, likely the result of long-term comorbid substance use, and the lack of improvements in learning and memory during abstinence indicate a potentially reduced ability of PSU to acquire new cognitive skills necessary for remediating maladaptive behavioral patterns that impede successful recovery. As such, PSU may require a post-detox treatment approach that accounts for these specific deficits relative to AUD. Our results show that PSU able to maintain abstinence for 4 months had fewer total lifetime years of cocaine use and performed better on cognitive efficiency, processing speed, and visuospatial learning than those PSU not restudied; these variables may therefore be valuable for predicting future abstinence or relapse in PSU.
Additionally, and if confirmed in larger studies, our preliminary results on differential neurocognitive change in smoking and nonsmoking PSU may inform a treatment design that addresses the specific needs of these subgroups within this largely understudied population of substance users. Potentially, concurrent treatment of cigarette smoking in treatment-seeking PSU may also help improve long-term substance use outcomes, just as recently proposed for treatment-seeking individuals with AUD.

Finally, our findings on neurocognitive improvement in PSU imply that cognitive deficits are to some extent a consequence of long-term substance use, and that they have the potential for remediation with abstinence. This information is of clinical relevance and of psychoeducational value for treatment providers and treatment-seeking PSU alike. Patients with obstructive sleep apnea experience apneic and hypopneic events that, when untreated, have detrimental cardiovascular and neurocognitive consequences. Under normal conditions, blood pressure and heart rate decrease during non–rapid eye movement sleep and increase commensurately upon waking. This is attributed to a decrease in sympathetic nervous system activation and a subsequent increase in cardiac vagal tone during sleep. The transient episodes of hypoxemia and hypercapnia caused by apneas or hypopneas, as well as arousals, result in an increase in cardiac output and heart rate that leads to sympathetically induced peripheral vasoconstriction, causing a marked increase in blood pressure. This chronic sympathetic excitation and inflammation do not resolve upon waking, and over time, together with the loss of the normal nocturnal blood pressure dip, they can lead to pathophysiologic changes such as impaired vascular function and stiffness. This impairment in the untreated patient with moderate to severe OSA has been found to increase the risk of both acute coronary syndrome and sudden cardiac death. The increased sympathetic nervous activity, inflammation, and oxidative stress seen in OSA can lead to hypertension. The prevalence of hypertension in moderate to severe OSA ranges between 13% and 60%, and OSA is considered the most common cause of secondary hypertension. Arrhythmias can be common in patients with OSA, and the prevalence of atrial fibrillation is higher in these patients than in patients without OSA.
In fact, severe sleep-disordered breathing is associated with twofold to fourfold higher odds of having complex arrhythmias. In addition, untreated OSA has been associated with higher rates of failure to maintain sinus rhythm after cardioversion or ablation therapy. Inflammation, atrial fibrillation, and atherosclerosis are all associated with OSA and overlap with risk factors for cerebrovascular disease. OSA may be frequently diagnosed after stroke, and it can be difficult to determine whether the condition is causal or resultant. Evidence suggests that OSA is associated with an increased risk of stroke in elderly patients, and untreated OSA after stroke increases mortality risk during 10-year follow-up. Another disease state affected by sleep apnea is heart failure. Both OSA and central sleep apnea are common in patients with acute and chronic systolic and diastolic heart failure. However, screening for sleep-disordered breathing can be difficult because patients with OSA and heart failure often do not report excessive daytime sleepiness. This absent symptom raises challenges in diagnosis and treatment adherence for OSA. Untreated OSA can affect many cognitive domains, including learning, memory, attention, and executive functioning. Data suggest that OSA is linked with cognitive impairment and may advance cognitive decline or dementia. In addition, intermittent hypoxemia and sleep fragmentation have been linked to structural changes in the brain that may be responsible for cognitive impairment. Given the increased prevalence of obesity and the common nature of diagnoses such as hypertension, coronary artery disease, atrial fibrillation, heart failure, and neurocognitive impairment, healthcare providers should be cognizant of the hazards of untreated OSA. Substance use, misuse, and dependence contribute immensely to the global burden of disease.
Their harms extend far beyond their corrosive effects on health, safety, and well-being, and additionally include those associated with healthcare expenditures, productivity losses, criminal justice involvement, and other negative effects on social welfare. The incidence and harms of substance use, misuse, and dependence involve multilevel explanatory factors.

Earlier age of onset of heavy drinking in AUD was associated with worse decision-making

DSI can be blocked by postsynaptic Ca2+ buffers or initiated by activity restricted to the postsynaptic side, and likely involves the opening of voltage-gated Ca2+ channels, or release from intracellular stores. Changes in postsynaptic GABAA receptor sensitivity have been excluded, since the response to iontophoretically applied GABA did not change, and DSI had no effect on the amplitude of miniature IPSCs. Despite the clearly postsynaptic site of initiation, numerous experiments demonstrated that DSI is expressed presynaptically, i.e., as a reduction in GABA release. With the use of minimal stimulation, DSI was found to increase failure rate, multi-quantal components were also eliminated, and components of IPSCs were differentially influenced. In the cerebellum, axonal branch point conduction failure was shown to play a role. Furthermore, DSI was reduced by 4-aminopyridine and veratridine, both acting on the presynaptic terminal. Direct evidence for an inhibitory G protein-mediated presynaptic action has been provided by Pitler and Alger, who showed that DSI was pertussis toxin sensitive. Both laboratories hypothesized from the very beginning that they were dealing with a phenomenon that involves retrograde messengers. Llano et al. stated that “Ca2+ rise in the Purkinje cell leads to the production of a lipid-soluble second messenger.” This was a remarkable prediction 10 years before the discovery that, indeed, the lipid-soluble endocannabinoids are these messengers, although the earlier claim of a retrograde action of arachidonic acid in the presynaptic control of LTP made this assumption rather plausible at that time.

The quest for identifying the chemical nature of this retrograde messenger began with the discovery of DSI. The slow onset, the requirement of a lasting Ca2+ rise, and the Ca2+ buffer effects were all consistent with a hormone or peptide rather than a classical vesicular neurotransmitter. Yet the first substance suggested by direct experimental evidence was glutamate. In the cerebellum, metabotropic glutamate receptor agonists, acting on presynaptic group II mGluRs, were shown to mimic and occlude DSI, whereas antagonists reduced it. Activation of adenylate cyclase by forskolin reduced DSI, which is consistent with the proposed reduction of cAMP levels by mGluR2/3 activation that is known to lead to a reduction of GABA release. In contrast, in the hippocampus, forskolin and group II or III mGluR ligands were without effect on DSI; however, group I agonists occluded, and antagonists reduced, it. Pharmacology and the anatomical distribution of the receptors suggested that mGluR5 is likely to be involved in the reduction of GABA release, but it appeared to be confined to the somatodendritic compartment of the neurons, perisynaptically around glutamatergic contacts, which was difficult to reconcile with the hypothesis of glutamate being the retrograde signal molecule. The long duration of DSI is not due to the dynamics of the Ca2+ transient, as it was the same in EGTA and BAPTA, but probably to the slow disappearance of the retrograde messenger molecule from the site of action around the presynaptic terminal. This again is inconsistent with glutamate being the messenger, since this transmitter is known to be rapidly taken up. The fast buffer BAPTA and the slow buffer EGTA reduced DSI to a similar degree, suggesting that the site of Ca2+ entry and the site of calcium’s action in DSI induction are relatively far from each other.
One possibility is that the target of incoming Ca2+ may be an intracellular Ca2+ store that is able to produce the large Ca2+ transients required for the release of the signal molecule. On the other hand, the selective N-type Ca2+ channel blocker ω-conotoxin was able to block DSI, which, according to recent evidence, turned out to be an action on the presynaptic terminals that are sensitive to DSI and selectively express the N-type Ca2+ channel.

These data suggest that Ca2+ plays a dual role: it is involved in the initiation phase via Ca2+-induced Ca2+ release from intracellular stores on the postsynaptic side, as well as in the effector phase via N-type Ca2+ channels on presynaptic terminals. Obviously, DSI-like phenomena can have a functional role in neuronal signaling only if they can be induced by physiologically occurring activity patterns. In cerebellar Purkinje cells, 100-ms depolarization was required for a detectable reduction in IPSCs, which, under physiological conditions, may correspond to a few climbing fiber-induced complex spikes. Thus a short train of climbing fiber-induced spikes is expected to lead to an increased excitability of the innervated Purkinje cell for tens of seconds. Initiation by very few spikes, occasionally even two if closely spaced, has been reported in the hippocampus. With 100 µM BAPTA in the pipette, detectable DSI could be evoked by depolarization as short as 25 ms, and a half-maximal effect was produced by 187 ms, or by 109-ms depolarization in the absence of BAPTA. This suggests a lower threshold, but also a smaller magnitude and shorter time course of DSI compared with the cerebellum. The behavior-dependent electrical activity patterns in the hippocampus that may lead to DSI are discussed in section VD. Recent studies by Kreitzer and Regehr provided evidence that, at least in the cerebellum, excitatory synaptic transmission is also under the control of retrogradely acting signal molecules. Both parallel fiber- and climbing fiber-evoked EPSCs were suppressed for tens of seconds by a 50- to 1,000-ms depolarization of the postsynaptic Purkinje cells from −60 to 0 mV. Due to the obvious similarity to DSI, this phenomenon has been termed depolarization-induced suppression of excitation.
Paired-pulse experiments, showing that short-term plasticity is affected by the depolarization paradigm for both parallel and climbing fiber responses, demonstrated that the site of expression of DSE is presynaptic and involves a reduction in the probability of transmitter release. BAPTA in the recording pipette completely abolishes DSE, providing evidence for the requirement of a postsynaptic Ca2+ rise to trigger the event. Earlier reports are consistent with the lack of DSE in the hippocampus, but a recent study using excessive depolarization for 5–10 s argues for its existence in this brain region as well.

Whether the mechanisms of DSE are similar in the hippocampus and cerebellum is discussed in the following section. The discovery by Wilson and Nicoll, Ohno-Shosaku et al., and Kreitzer and Regehr that DSI/DSE are mediated by endocannabinoids revealed that investigations in both the cannabinoid and DSI/DSE fields had been dealing, accidentally, with the same subject, i.e., the mechanism of retrograde synaptic signaling via endocannabinoids. Both receptor localization data and identification of the physiological actions of cannabinoids on synaptic transmission confirmed that cannabinoids act on presynaptic axons, reducing transmitter release, whereas endocannabinoids are most likely released from the postsynaptic neuron upon strong stimuli that give rise to large Ca2+ transients. Thus the signal molecules, which turned out to be endocannabinoids, travel from the post- to the presynaptic site and thereby enable neurons to influence the strength of their own synaptic inputs in an activity-dependent manner. This may be considered a short definition of retrograde synaptic signaling and perhaps, at the same time, summarizes the function of the endocannabinoid system. However, before trying to correlate the findings of cannabinoid and DSI studies, one should be aware of the major limitations. There are numerous examples of mismatch in receptor/transmitter distribution in the brain; receptors can be found in locations where they hardly ever see their endogenous ligand. Nevertheless, these receptors readily participate in mediating the effects of their exogenous ligands, e.g., during pharmacotherapy. We are facing the same problems with the relative distribution of cannabinoid receptors versus endocannabinoid release sites at both the cellular and subcellular levels. In addition, the distance to which anandamide and 2-AG are able to diffuse is also an important question from the point of view of identifying the degree of mismatch.
Thus correlation of the sites of action of cannabinoid drugs and the sites of expression of DSI should reveal the regional, cellular, and subcellular domains where receptor and endogenous ligand distributions match, i.e., where endocannabinoids are likely to have a functional role in synaptic signaling. Several lines of evidence have been provided that endocannabinoids represent the retrograde signal molecules that mediate DSI both in the hippocampus and cerebellum, as well as DSE in the cerebellum. Antagonists of CB1 receptors fully block and agonists occlude DSI and DSE, whereas DSI is absent in CB1 receptor knock-out animals. In these experiments either single-cell or paired recording has been used, and retrograde synaptic signaling has been evoked by the same procedures as described in the original work of Alger's and Marty's groups. In addition, Wilson and Nicoll demonstrated that uncaging of Ca2+ from a photolabile chelator induces DSI that was indistinguishable from that evoked by depolarization. Thus a large intracellular Ca2+ rise is a necessary and sufficient element in the induction of the release of endocannabinoids. As expected for the membrane-permeant endocannabinoids, their release does not require vesicle fusion, since botulinum toxin delivered via the intracellular recording pipette did not affect DSI.

A further crucial question concerns the range over which the released endocannabinoids are able to diffuse. Recordings at room temperature from pyramidal cells at various distances from the depolarized neuron releasing the signal molecules revealed that it is only the adjacent cells, within a maximum distance of 20 μm, that endocannabinoids reach at a concentration sufficient to evoke detectable DSI. However, considerably greater endocannabinoid uptake and metabolism should be expected at physiological temperatures, which likely results in a decreased spread and a more focused action. Earlier data indicating the involvement of glutamate and mGluRs in DSI also needed clarification. Varma et al. demonstrated that enhancement of DSI by mGluR agonists could be blocked by antagonists of both group I mGluRs and CB1 receptors, whereas the same mGluR agonists were without effect in CB1 receptor knock-out animals. This provides direct evidence that the mGluR effects on DSI published earlier were mediated by endocannabinoid signaling, and that glutamate served here as a trigger for the release of endocannabinoids rather than as a retrograde signal molecule, as thought earlier. These data were subsequently confirmed by paired recordings from cultured hippocampal neurons. In a recent paper, Maejima et al. demonstrated that mGluR1 activation induces DSE in Purkinje cells even without changing the intracellular Ca2+ concentration. This suggests that, at least in the case of cerebellar Purkinje cells, two independent mechanisms may trigger endocannabinoid synthesis: one involves a transient elevation of intracellular [Ca2+], and the other is independent of intracellular [Ca2+] and involves mGluR1 signaling. This may imply that, under normal physiological conditions, different induction mechanisms may evoke the release of different endocannabinoids.
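The roughly 20 μm spatial range discussed above, and its expected shrinkage when uptake and metabolism speed up at physiological temperature, can be made concrete with a back-of-the-envelope model. This is a toy estimate, not from the source: it assumes steady-state diffusion with first-order removal (uptake/degradation at rate k), for which concentration falls off with a length constant λ = √(D/k); the diffusion coefficient and removal rates used below are assumed values for illustration only.

```python
import math

# Toy estimate (assumed parameters): steady-state diffusion with
# first-order removal gives a characteristic spread lambda = sqrt(D / k).

def length_constant_um(d_um2_per_s: float, k_per_s: float) -> float:
    """Characteristic spread (micrometers) of a diffusing messenger."""
    return math.sqrt(d_um2_per_s / k_per_s)

# Assumed D ~ 300 um^2/s; slower vs. faster first-order removal:
lam_room = length_constant_um(300.0, 1.0)  # ~17 um, near the ~20 um range
lam_warm = length_constant_um(300.0, 4.0)  # faster removal -> ~9 um spread
print(round(lam_room, 1), round(lam_warm, 1))
```

The numbers are not meant to be quantitative; the sketch only shows why a removal rate that rises with temperature is expected to shorten the length constant and focus endocannabinoid action onto the nearest synapses.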
With the growing number of potential endocannabinoids, the question arises whether they are involved in distinct functions, i.e., by acting at different receptors and/or at specific types of synapses. This question represents one of the hot spots of current endocannabinoid research, and direct measurements of the different endocannabinoid compounds during retrograde signaling should provide an answer. As for the presynaptic mechanisms by which cannabinoids reduce transmitter release, they may induce branch-point failure, decrease action potential invasion of axon terminals, reduce Ca2+ influx into the synaptic varicosities via N- or P/Q-type channels, or block the release machinery somewhere downstream from the Ca2+ signal. Ca2+ imaging of single climbing fibers provided evidence that DSE involves a reduction of presynaptic Ca2+ influx, which has the same time course as the reduction of the EPSC. Branch-point failure was shown not to contribute to DSE, at least in the case of climbing fibers, as stimulation of the examined single axon evoked a uniform rise of Ca2+ throughout its entire arbor. These findings are supported by the fact that cannabinoids are known to block N-type Ca2+ channels in neuroblastoma cells and reduce synaptic transmission by inhibiting both N- and P/Q-type channels in neurons. Inhibition of the release machinery is unlikely to play a role, particularly in GABAergic transmission, since CB1 receptor activation has little if any effect on mIPSC frequency in the presence of tetrodotoxin and cadmium. Furthermore, CB1 receptors tend to be localized away from the release sites, having a high density even on preterminal axon segments, which also argues against this possibility. In the hippocampus, evidence has been provided that DSI likely involves a direct action of G proteins on voltage-dependent calcium channels.