Article links were classified as inaccurate if the content was not supported by scientific literature

Systemic administration of AM404 normalizes the behavior of juvenile SHR without affecting that of control rats. These findings suggest that inhibitors of endocannabinoid inactivation may be used to alleviate certain symptoms of dopamine dysfunction. Clinical data showing that Δ9-tetrahydrocannabinol ameliorates tics in Tourette’s syndrome patients lend further support to this possibility. Future Challenges. In conclusion, three major challenges lie before the pharmacologist interested in the mechanisms of endocannabinoid inactivation from the perspective of drug discovery. The first is the need for a deeper molecular understanding of these mechanisms. Considerable insight has been gained in the last few years into the structure and catalytic properties of AAH, but many questions remain unanswered, including the identity of the putative endocannabinoid transporter and the existence of additional hydrolytic enzymes for anandamide and 2-AG. The second challenge lies in the development of potent and selective inhibitors of endocannabinoid inactivation. Future AAH inhibitors should combine the potency of those currently available with greater pharmacological selectivity and biological availability. A second generation of endocannabinoid transport blockers that overcome the limitations of AM404 and its congeners is also needed. The third challenge is the validation of endocannabinoid mechanisms as targets for therapeutic drugs. This task is intertwined, of course, with that of understanding the endocannabinoids’ roles in normal physiology, one on which much research is currently focused.

The internet has profoundly altered how individuals obtain information regarding their health, and men’s health is no exception. Although men are less likely than women to pursue preventative health care and more likely to develop chronic cardiovascular and metabolic disease, the rise of online social media platforms may play a role in narrowing this disparity. Men contending with infertility increasingly turn to social media platforms for information, guidance, and discussion with peers. A male factor contributes to nearly 60% of all cases of infertility, yet cultural and societal constructs of masculinity create psychosocial barriers to consultation with a physician. Social media platforms enfranchise men to take an active role in understanding causes and treatments for infertility by providing anonymity absent from face-to-face encounters. Although health information online is becoming more readily accessible, it escapes the scrutiny of scientific publication guidelines, allowing for the propagation of non-evidence-based material. Literature assessing the quality of male infertility content available online remains scarce, although a recent review has shown that urological conditions as a whole suffer from a spread of misinformation on social media. Social media analytics tools have emerged that provide detailed, quantitative metrics, but these tools have not yet been applied to content in the male infertility space. Given the proliferation of sensationalism on social media, we hypothesized that content about male infertility shared on social media platforms may be largely inaccurate or misleading. Using a combination of an analytics tool and a quality rating system, we performed a quantitative and qualitative analysis of male infertility content shared on social media.
These data may inform how men’s health specialists should approach patient education about male infertility, as well as ways in which they engage with social media in the future. We used the social media analytics tool BuzzSumo to identify the most shared male infertility content from September 2018 through August 2019. BuzzSumo gathers data across the social media platforms Facebook, Pinterest, Reddit, and Twitter to generate a list of article links with the highest online engagement.
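At its core, the engagement metric described here is a simple sum of user interactions across platforms. A minimal sketch of that aggregation, using hypothetical field names rather than BuzzSumo’s actual API schema:

```python
# Hypothetical per-platform interaction counts for one article link;
# the field names are illustrative, not BuzzSumo's real schema.
article = {
    "facebook": {"likes": 900, "comments": 150, "shares": 300},
    "reddit":   {"upvotes": 420, "comments": 80},
    "twitter":  {"retweets": 25, "likes": 40},
}

def total_engagement(article):
    """Total engagement = sum of all user interactions across all platforms."""
    return sum(n for platform in article.values() for n in platform.values())

print(total_engagement(article))  # 1915
```

A link with a total below the study’s 100-engagement floor would be excluded at this step.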

Engagement is defined as the total number of interactions that users have with a particular article link, including actions such as “liking,” “commenting,” and “sharing” on social media. Two urologists with advanced fellowship training in male reproductive medicine initially screened a total of 20 search terms related to male infertility using BuzzSumo. Ten terms were then selected based on having the highest total engagements for BuzzSumo interrogation: fertility in men, low sperm treatment, male fertility, male fertility testosterone, male infertility, semen analysis, sperm count, sperm motility, sperm quality, and sperm testosterone. Terms were further excluded if no associated article links generated sufficient engagement. Fig. 1 summarizes the workflow for selecting article links from final search term results. The most popular article links for each search term were identified, followed by analysis of social media engagement. Links were excluded if they were not written in the English language, were not related to male infertility, were audio podcasts, were broken/expired, or had fewer than 100 total engagements, a metric that has been used in prior investigations to exclude low-impact content. Two medical student reviewers with curricular training in critical evaluation of the literature independently graded content as accurate, misleading, or inaccurate by assessing references cited, as well as comparing the content to existing peer-reviewed literature. Content was classified as misleading if it contained a combination of accurate and inaccurate information, or if it extrapolated animal data to make inappropriate conclusions about human fertility. The senior authors, two urologists with advanced fellowship training in male reproductive medicine, were blinded to the two reviewer decisions and adjudicated all discordances. Cohen’s κ was used to calculate inter-rater reliability between the two independent reviewers.
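Cohen’s κ corrects the raters’ raw agreement rate for the agreement expected by chance given each rater’s marginal label frequencies. A minimal sketch of the calculation, using hypothetical grades for six article links:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items given identical grades.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades for six article links (not the study's data).
a = ["accurate", "misleading", "accurate", "inaccurate", "accurate", "misleading"]
b = ["accurate", "misleading", "misleading", "inaccurate", "accurate", "misleading"]
print(round(cohens_kappa(a, b), 3))  # 0.739
```

Values near 1 indicate near-perfect agreement beyond chance; values near 0 indicate agreement no better than chance.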

Binary logistic regression was used to compare user engagement with accurate versus inaccurate/misleading article links; a separate regression was run for total engagement to limit collinearity among variables. Statistical significance was set at p<0.05. Statistical analysis was performed using IBM SPSS ver. 25. After applying exclusion criteria, eight search terms and 52 article links remained. Of the original ten search terms, the two exclusions due to insufficient engagement were “male fertility testosterone” and “low sperm treatment.” The 52 article links were stratified into four categories: scientific peer-reviewed journal, medical center or hospital affiliated website, news organization website, and alternative media. Overall, the majority of links came from websites in the alternative media category, followed by news organizations. Scientific peer-reviewed journal websites and medical center websites comprised the fewest article links, at 8% and 2%, respectively. Among high-engagement articles hosted on alternative media websites, accurate content drew greater engagement than misleading/inaccurate content. Articles from each remaining category tended to have similar user engagement between accurate and misleading/inaccurate content. Table 2 outlines engagement data stratified by search term and social media platform. “Sperm count” emerged as the most popular search term, capturing 50% of the total engagement across all platforms. Facebook was the most popular platform for sharing male infertility content, followed by Reddit, Twitter, and Pinterest. All search terms experienced highest engagement on Facebook, with the exception of “fertility in men,” which was most popular on Reddit. Overall, 56% of articles were graded as accurate and 44% as misleading or inaccurate. No significant difference was found in engagements between accurate versus inaccurate/misleading links.
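The study fit its model in SPSS; as a rough illustration of what a binary logistic regression of accuracy on engagement estimates, here is a minimal pure-Python sketch using made-up engagement figures (in thousands), not the study’s data:

```python
import math

def logistic_fit(x, y, lr=0.1, steps=5000):
    """Fit P(y=1 | x) = 1/(1+exp(-(b0 + b1*x))) by gradient ascent
    on the log-likelihood (no library dependencies)."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical links: engagement in thousands; 1 = accurate, 0 = misleading/inaccurate.
engagement = [0.5, 0.8, 1.0, 2.2, 0.6, 2.0, 2.5, 3.0]
accurate   = [0,   0,   0,   0,   1,   1,   1,   1]
b0, b1 = logistic_fit(engagement, accurate)
print(b1 > 0)  # a positive slope would mean higher engagement predicts accuracy
```

In the study itself the association was non-significant, i.e. the confidence interval on the slope would span zero; a full analysis would also report odds ratios and p-values, which SPSS provides directly.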
Fifteen peer-reviewed research studies comprised the primary citations used by 34 of the 52 total article links. The remaining 18 links did not cite original peer-reviewed scientific studies, or purported study data were not accessible. Of the 34 links with scientific evidence, 17 referenced the same two original research studies and captured twice as many engagements as the remaining 13 studies combined. Studies relying upon animal or insect models drew substantial engagement despite limited applicability to humans.

Though male infertility content proliferates on the internet, little is known about the sources and quality of information encountered by users who may make health care decisions based on what they read online. Prior work has shown that misinformation about urological conditions thrives on social media, but an in-depth analysis of male infertility content has been lacking. To this end, we quantified internet user engagement with male infertility content across a variety of social media platforms, then evaluated the accuracy of information shared on these platforms. We found that social media users encounter misleading or inaccurate information about male infertility at similar rates to accurate content. Nearly a quarter of user engagement focused on article links reporting on experimental results in non-human models, and 90% of these articles were determined to be misleading – the original studies were routinely misinterpreted as having immediate implications for human fertility, then amplified on social media.

For example, in a study by Sales et al., male insects exposed to increasing heatwave conditions had significantly lower sperm function and offspring production, and this effect showed transgenerational impact. The study draws no conclusions about human fertility, yet an article in USA Today reported, “…[T]he scientists used beetles to test their theory. But researchers say the insects can be used as a proxy for people,” and concluded that human fertility is directly impacted by climate change. Despite the inappropriate extrapolation, the USA Today link alone received a total of 21,812 social media engagements. We found that over 90% of male infertility content online comes from non-peer-reviewed sources, such as news organizations and alternative media. These data suggest that the scientific and medical establishment have limited traction with an increasingly sensationalized consumer culture. Scientific journal and medical center websites typically target their male infertility content toward scientifically literate health care professionals, rather than the general public. A potential way to mitigate the misinterpretation of scientific research may be for academic institutions to take a greater role in creating press releases or making researchers available for news media comment. Misleading conjecture about scientifically sound research is then subject to the “illusory truth effect,” the tendency to believe false information to be accurate due to repeated exposure. Sensationalism on social media perpetuates unrealistic expectations of fertility treatments and can have significant economic implications for infertile couples, whose out-of-pocket expenditures related to male factor infertility may reach $15,000 in their quest to conceive a pregnancy. Indeed, a thriving market for male fertility vitamins and nutritional supplements has emerged online despite a paucity of evidence for a positive effect on semen analysis parameters.
Although 15 research studies were cited in article links over the one-year study period, just two of these studies drew 50% of citations. The most popular study focused on the effect of marijuana smoking on sperm parameters and reproductive hormones, and the authors of this study cautioned that their data may not be generalizable to men from the general population. The study with the second highest engagements examined the effect of chemical exposures on sperm function using a limited sample size of only nine humans and a comparison group of eleven canines, yet was featured in an article link that inappropriately generalized the data to imply wide translatability of the results. Our findings suggest that a few studies are tokenized and amplified to guide discussion about male infertility on social media, despite having crucial limitations that are overlooked. We also found that the most popular social media platform to share male infertility content was Facebook, with 3× more engagements than Reddit and 56× more engagements than Twitter. Facebook’s higher level of engagement may be attributed to its in-depth engagement dimensions and its primary purpose of connecting with friends and family. For example, Facebook has a larger audience across age ranges, and users have the ability to share videos and engage with posts on a longer-term basis. This is in stark contrast to Twitter, whose primary purpose is to share ideas in 140 characters or less at a fast pace, making it difficult for posts to gain traction for long before they are buried by new Tweets. Reddit is another platform that allows for longer-term engagement with conversations within subforums, which may account for it having the second highest engagements. Our findings suggest that platforms with the ability to keep posts visible to users for longer periods of time play a role in overall engagements.
Overall, our study findings highlight the need to facilitate online health interventions designed to offer users men’s health information that is accurate, engaging, and tailored to the general public. Discussions about male infertility are no longer occurring within the confines of medical offices, and urologists should consider adding social media to their armamentarium to stave off misinformation and engage proactively with patients. The present study is not without limitations. It focuses only on the accuracy and engagement of male infertility content; little is known about how that engagement directly influences an individual’s behaviors beyond the act of “sharing,” “liking,” or “commenting” on social media.

Social media tends to amplify the most sensational content and headlines

Although there has been much work developing and testing CBT interventions for promoting ART adherence, there is still room for improvement because traditional multi-component CBT interventions for ART adherence result in small to medium effect sizes. Overall, the current study extends the literature on distress intolerance as a psychological vulnerability factor among people living with HIV. However, there are some limitations that provide opportunities for future research. First, the present study was cross-sectional, limiting inferences that can be made about directionality. Indeed, it is just as likely that low levels of distress tolerance lead to poor adherence as it is that poor adherence is prospectively associated with low levels of distress tolerance. This may be particularly relevant among immunocompromised individuals living with HIV. For instance, if poor adherence leads to an increasing viral load, then one’s immune system is mobilized to contend with the growing viral load. This is a physiological stressor, and stress increases one’s drive to escape from unpleasant situations. Thus, it is feasible that stress due to immunological reactivity from an increasing viral load further limits one’s capacity to exercise tolerance of distress. It is plausible that poor adherence resulting in an increasing viral load may subsequently increase one’s vulnerability to distress intolerance. Second, as adherence was measured using pill count at only one time point, we were unable to establish a baseline level of adherence; MEMS caps would have provided a more precise indicator of adherence. Third, as mentioned earlier, though a strength of the study was the multi-method measurement of distress tolerance, future work would benefit from employing additional objective and newly refined subjective measures to better understand differential relations between multiple facets of distress tolerance and HIV adherence.

Future work would also benefit from assessing tolerance of HIV symptom-related distress specifically, and the impact of distress tolerance on other clinically relevant HIV outcomes. Finally, though the present study was quite ethnically diverse, a majority of the sample was male. Future work would benefit from recruiting a more gender-diverse sample from different geographic areas. Promoting tolerance of affective distress and of distressing tasks associated with the high adherence demands of ART for HIV management is worthwhile to consider in future research. Future investigations are needed to examine relations prospectively to identify the role of distress intolerance in the development and maintenance of poor HIV management, and then to verify clinical implications through intervention process and outcome studies.

A consortium of 67 scientific institutions from 24 European countries and beyond, covering over thirty scientific disciplines ranging from anthropology to toxicology, responded to an invitation by the European Commission to study the place of addictions in contemporary European society. The resulting five-year endeavour, the Addictions and Lifestyles in Contemporary Europe – Reframing Addictions Project, went beyond this. It reframed our understanding of addictions and formulated a blueprint for re-designing the governance of addictions. This paper summarizes the project’s conclusions, pointing to new understandings of the science and policy of nicotine, illegal drugs and alcohol, hereafter collectively referred to as ‘drugs’. Although this paper does not cover process addictions, much of what is said applies to addictions beyond drugs. It contrasts two powerful pieces of evidence: the harm done by drugs, versus the poorly structured existing governance approaches designed to manage such harm.

The paper continues by considering three bases for re-thinking the addiction concept in ways that could lead to improved strategies across different jurisdictions: recognition that there is a biological predisposition for people to seek out and ingest drugs; that heavy use over time becomes a replacement concept and descriptor for the term substance use disorder; and that quantitative risk assessment can be used to standardize harm across different drugs, based on drug potency and exposure. The paper finishes by proposing two approaches that could strengthen addictions governance: embedding governance within a well-being frame, and adopting an accountability system—a health footprint that apportions responsibility for who and what causes drug-related harm.

Governance is defined as the processes and structures of public policy decision making and management that engage people across the boundaries of public agencies, levels of government, and public, private and civic spheres to carry out a public purpose that cannot be accomplished by any one sector alone. The exclusive use of top-down bureaucratic approaches cannot maximize societal benefits when dealing with ‘wicked problems’ that are highly resistant to resolution. An analysis of 28 European countries found that no single country had a comprehensive policy for all drugs within a broad societal well-being approach. For more detail, see ‘Governance of Addictions: European Public Policies’, by Albareda A et al. There are at least three reasons for ineffective and poorly integrated governance. Firstly, the same harm done by drugs is defined and understood in different ways in different countries and state systems. Seen from a trans-national comparative perspective, there is a lack of a common understanding of appropriate policies, and responses are often constrained by approaches that are tied to assumptions that are not evidence-based.

Ways of thinking about the harm done by drugs vary enormously, with considerable heterogeneity between different drugs, and between international, national and local levels of governance. For detail, see ‘Concepts of Addictive Substances and Behaviours across Time and Place’, by Hellman et al. Secondly, a multitude of commercial, political and public stakeholders are active in addictions governance at national and international levels. In any given society, stakeholders that have power, means and influence are likely to achieve an advantageous influential position. The concepts of addiction are also shaped by popular constructs promulgated by the mass media and customs in the general population. Stakeholder positions and perceptions of drug problems also vary over time and by area, implying that sustainable approaches must be interwoven into societal and governance structures. Thirdly, corporate power, through multiple channels of influence, can hinder evidence-based policy decisions. Corporate strategies often include attempts to influence civil society, science and the media, as part of a wider aim to manage and, if possible, capture institutions that set policy. Transparency is insufficient given that the multiplicity of channels of corporate power is poorly acknowledged and understood by policy makers. Therefore, the rules in place to ensure level playing fields for discussions and equitable decision-making across all actors are inadequate.

The idea that human exposure to drugs did not occur until late in human evolution—thus leaving our species inexperienced—is often posited as one of the reasons that these substances cause so much harm. However, multidisciplinary scientific evidence suggests otherwise. Many substances consumed today are not evolutionary novelties. In the story of terrestrial life over the last 400 million years or so, one ongoing theme has been the “battle” between plants and the animals that eat them.
Of the many defence mechanisms in existence, plants produce numerous chemicals, including tetrahydrocannabinol, cocaine, nicotine, and opiates, all of which are potent neurotoxins that deter consumption of plant tissue by animals. From an evolutionary perspective, we thus find natural selection for compounds that discourage consumption of the plant via punishment of potential consumers. By contrast, there has been no natural selection for expression of psychoactive compounds that encourage consumption, a pattern also predicted by neurobiological and behavioural psychology theories of reward and reinforcement for contemporary drugs. Counterbalancing the development of plant neurotoxins, plant-eating animals have evolved to counter-exploit plants’ production of drugs, for instance by exploiting the anti-parasitic properties of some of them.

Many species of invertebrates and vertebrates engage in pharmacophagy, the deliberate consumption and sequestration of plant toxins, to deter parasites and predators. In a human context, present-day examples of pharmacophagy may be seen among Congo basin hunter-gatherers, amongst whom the quantity of cannabis and nicotine consumed is titrated against intestinal worm burden – the higher the intake, the lower the worm burden. In individuals treated with the anti-worm drug albendazole, the number of nicotine-containing cigarettes smoked is reduced. Although parasite-host co-evolution is recognized as a potent selective force in nature, other, subtler evolutionary dynamics may affect human and animal interactions with plant-based drugs, including that they may buffer against nutritional and energetic constraints on signalling in the central nervous system. Ethnographic research reveals that many indigenous groups classify “drugs” as food, rather than psychoactive entities, and that they are perceived as having food-like effects, most notably increasing tolerance for fatigue, hunger and thermal stress in nutritionally-constrained environments. The causes of these perceived effects have not been a research question, but there are clues that the “food” classification may be literal rather than allegorical. Common plant toxins not only mimic mammalian neurotransmitters, they are also synthesized from the same nutritionally-constrained amino acid precursors, such as tyrosine and tryptophan. In harsh environments, toxic plants could function as a “famine food” providing essential dietary building blocks, or may function as a direct substitute for nutritionally-constrained endogenous neurotransmitters. There is some evidence to support this hypothesis in animal research; for example, wood rats in cold environments reduce thermoregulatory costs by modulating body temperature with plant toxins consumed from the juniper plant.
In the case of ethanol, its presence within ripe fruit suggests low-level but chronic dietary exposure for all fruit-eating animals, with volatilized alcohols potentially serving in olfactory localization of nutritional resources. Molecular evolutionary studies indicate that an oral digestive enzyme capable of rapidly metabolizing ethanol was modified in human ancestors near the time that they began extensively using the forest floor, about 10 million years ago; humans have retained the fast-acting enzyme to this day. By contrast, the same alcohol dehydrogenase in our more ancient and mostly tree-dwelling ancestors did not oxidize ethanol as efficiently. This evolutionary switch suggests that exposure to dietary sources of ethanol became more common as hominids adapted to bipedal life on the ground. Ripe fruits accumulating on the forest floor could provide substantially more ethanol cues and result in greater caloric gain relative to fruits ripening within the forest canopy, and our contemporary patterns of alcohol consumption and excessive use may accordingly reflect millions of years of natural dietary exposure. This evolutionary evidence does not imply that humans also evolved to specifically consume nicotine, for example, or that nicotine use is beneficial in the modern world. What is novel in the modern world is the high degree of availability, and high concentration of psychoactive agents and routes of consumption that promote intoxication. What is different with alcohol in the modern world is novel availability through fermentative technology, enabling humans to consume it as a beverage, devoid of food bulk, with higher ethanol content, and artificially higher salience than that which characterizes fruit fermenting in the wild. 
The evolutionary evidence has two implications: firstly, policies that prohibit the use of drugs are likely to fail because people have a biological predisposition to seek out chemicals with varying nutritional and pharmacological properties; and secondly, in present-day society, drug delivery systems have been developed that go beyond what is reflected in the natural environment, particularly with respect to levels of potency, availability and taste, which could be argued to be the more central drivers of harm. Potency is largely determined by producer organisations operating in markets, which, from the perspective of overall societal well-being, are inadequately managed. Better regulation of potency could become a major opportunity for additional policy interventions – for example with alcohol, see ‘Evidence of reducing ethanol content in beverages to reduce harmful use of alcohol’ by Rehm et al.

To better understand the interference of drugs with human biology and functioning, the consensus reached in ALICE RAP was that the concept and term ‘heavy use over time’ should be proposed as the replacement for ‘substance use disorder’. In medical settings, and indeed often in academic and lay settings, heavy users of drugs are commonly dichotomized into those with a ‘substance use disorder’ or not. ‘Substance use disorder’ is a clinical construct that is often used as shorthand to identify individuals who might benefit from advice or treatment. But as a condition in itself, it is a medical artefact which occurs in all grades of severity, with no natural distinction between ‘health’ and ‘disease’. This is illustrated with alcohol.

Many Ghanaian mental health professionals go overseas seeking better pay and better conditions

At the Accra Psychiatric Hospital there are 3 infirmaries and 23 wards in total: 16 male, 6 female, and one children’s ward. One of the female wards and one of the male wards are reserved for geriatric cases. The largest special ward is reserved for forensic court cases and more aggressive males, 234 of them in total, though the official occupancy is only 60 for that particular ward. The largest female equivalent ward has 110 patients, but the average number of patients in a ward is close to 50. Because there are only 500 beds and currently 1,000 inpatients at the Accra Psychiatric Hospital, the congestion leaves 500 patients to sleep on the concrete or on thin mats, either inside or outside. There are no fans or air conditioning either. The ones forced outside without insecticides or mosquito nets are subject to the rigors of the weather during the day and the disease-carrying mosquitoes at night. These unlucky ones also share their space with ants, cockroaches, and rats. Though there is tap water available, drinking it is not encouraged, so patients have to pay a small fee for filtered drinking water. Patients eat three low-quality meals a day that usually consist of rice, adding up to 3.60 cedis a day, a recent increase from the 1.20 cedis spent before 2011. Uniforms are not provided, so the patients are free to wear their own or donated clothes; however, it is a common and disturbing sight to see people running around stark naked or half naked with tattered clothes hanging loosely off their bodies. The congestion of patients and the conditions of their living situation are human rights violations in and of themselves.

To add to the situation, Dr. Osei admitted that behind the scenes, patients are sometimes physically or medically punished by nurses who are trying to control more patients than is feasibly possible. An undercover journalist also witnessed this injustice, as well as pervasive drug trafficking between patients and employees. Although these acts are strictly discouraged, it is hard to prevent these human rights violations from occurring due to a lack of staff and security. Records are kept analog, in a room full of bulging, tattered folders; though the hospital is trying to digitize the system, it is difficult with only 15 of the necessary 100 computers. There is also an intercom that works 80% of the time. The building, initially built as a prison and not as a hospital, is 100 years old, which makes it gruelling to clean and maintain. There is asbestos in the roofing, sewage system pipes have broken, and the buildings look like a rundown dog pound instead of a pristine, sanitary hospital. In fact, people in the West would be appalled by the conditions even if it actually were a place reserved for rogue dogs. If the Mental Health Bill passes, then remodelling of the building might start in seven years’ time. Dr. Osei proposes that if the buildings and wards are brought up to humane conditions, then the morale of both the workers and patients will improve, and people will not want to leave the second they arrive. Pantang Hospital’s accretion of debt from insufficient funding over the past six years has led to unfinished structures, outdated equipment, a shortage of prescribers, inadequate treatment programs, poor food quality, deficient road networks, old vehicles, under-supplied water and electricity, and encroachment on land and security. During my interview with Dr.
Dzadey, the electricity went out in true Ghanaian fashion, and was followed by many scolding and worried phone calls about the number of samples the laboratory was losing every minute the generator refused to work.

Water enters the pipes only twice a month, so there is not enough water or disinfectant to properly clean the estate. In addition, the hospital is constantly buying water to fill tanks and filtered water to give to patients. The regular wards used to feed each patient on a mere 60 pesewas a day, though in 2008 this was rightfully increased to 2.5 cedis. Though the walls are covered in perma-dirt, and dust and the smell of sanitation chemicals linger in the air, the facilities are much nicer and newer than at the Accra Psychiatric Hospital. The outpatient psychiatric department is located in a three-story building with a television in the lobby, and there is air conditioning in the consultant rooms. Possibly due to the workload and training, Dr. Dzadey also commented on how the nurses do not have a proper understanding of how to take care of patients. They complete their tasks, such as administering medicine, but there seems to be a lack of compassion in regard to keeping the patients’ best interests at heart. Ghana has only 11 practicing psychiatrists, four of them at the higher, board-certified consultant level, and 6 retired psychiatrists, four of whom continue to work at private psychiatric hospitals. In order to have an effective mental health care system in Ghana, Dr. Osei believes that there should be at least 80 working psychiatrists, with half of them at the consultant level. To become a consultant psychiatrist in Ghana, one has to complete six or seven years of medical school, then five years of post-doctoral work in psychiatry. “Brain drain” is a phrase all too common in the mental health care system in Ghana. Shockingly, there are currently twenty Ghanaian psychiatrists practicing in the U.K. when there are only seventeen psychiatrists in all of Ghana. While there is one retired occupational therapist in Ghana, Dr. Osei conservatively requests twenty.
In actuality, every mental health unit should have an occupational therapist, so the ideal number would be around 200. Hence, in Table 7 I averaged 20 and 200 to arrive at a more realistic estimate of 110. Furthermore, there are only 600 psychiatric nurses presently working when there should be at least 3,000 in order to care for most of Ghana’s mentally ill.

Psychiatric nurses train at either Pantang or Ankaful, completing one year of general nursing and two years of specialized psychiatric nursing. Clinical psychologists are regrettably not even recognized by Ghana’s Ministry of Health, and any clinical psychologists working at a psychiatric hospital have to be listed under another title on the payroll. Seven psychiatrists are working at the Accra Psychiatric Hospital when there should be no fewer than 30; Dr. Osei referred to this number as his “dream figure.” Table 8 presents other current and proposed staff numbers. Although there are no trained psychiatric social workers, there are two generic social workers employed by the hospital. The hospital also has two volunteers who help feed and bathe the children in the children’s ward. There should be two security workers in every ward and more patrolling the hospital, which led to the suggested fifty. Because of the lack of security, many patients escape by jumping over a wall, exiting through a ceiling, or simply walking out of the front entrance. There is also typically one incident per year of a worker being injured or killed by a patient. While some nurses declared that a patient killed another nurse early in 2011, Dr. Osei said that the most recent incident was someone being blinded in one eye after being hit by a patient. Many of the staff are forced to work at the hospital through either a nursing program or the national service requirement. The staff is terribly limited due to a combination of factors revolving around money and stigma. Working conditions are poor, and the low pay offers little incentive. For a 600-person workforce there are only 28 accommodation units, so most employees are dissuaded because they have to find their own housing closer to the hospital or pay for transportation from their home to the workplace.
As a result, many nurses have confirmed interest in moving to a different country in order to work in a more congenial and rewarding environment, which would further diminish the number of psychiatric nurses the Health Ministry has managed to train. In 2010, the staff strength of Pantang Hospital numbered 524, with two psychiatrists, one clinical psychologist, three medical assistants, three pharmacists, 260 psychiatric nurses, two welfare officers, 34 ward assistants, one bio-statistician, one biomedical scientist, eight occupational therapy assistants, and zero occupational therapists. Dr. Dzadey suggests that the minimum number of psychiatrists the hospital should have is five, around one psychiatrist per two wards and one in OPD, but ideally the number should be ten so that each ward has its own psychiatrist. In order to gradually reach that ten, the hospital can aim for five permanent psychiatrists and five training or rotating psychiatrists. The one present clinical psychologist single-handedly offers counselling, family therapy, individual therapy, and behavioural therapy to all of the hospital’s outpatients and inpatients.

Due to this unfathomable ratio of one psychologist to around 17,265 patients, the quality of therapy and frequency of contact are low, and Dr. Dzadey proposed that there should be a minimum of three clinical psychologists. Preferably, there should be one in every ward and one in the outpatient department, which would add up to an ideal number of 10 clinical psychologists. Dr. Dzadey commended the use of physical therapy and recommended that the hospital also hire at least four physical therapists; currently, there are none. Ghana’s only occupational therapist retired two years ago, but luckily a temporary occupational therapist came from VSO (Voluntary Service Overseas) and helped train some future occupational therapist assistants to continue her year-long work. However, there should be at least three permanent occupational therapists. The two welfare officers in charge of tracing and contacting the families of patients and organizing repatriation with appropriate CPNs are also severely overworked, which leads to patients staying longer than expected. Dr. Dzadey advises a minimum of one nurse to five patients. This would require an average of 275 psychiatric nurses working solely in the wards, whereas there are only 260 currently working in the wards, psych OPD, and/or general OPD. Table 9 shows a clearer representation of the gaps in human resources at the hospital. A person with an acute case is expected to stay at the Accra Psychiatric Hospital for two weeks but usually ends up staying for one to three months, while a person with a major case usually stays for about two years, though some stay for twenty or thirty years. Inpatients stayed at the Pantang hospital for an average of 63.2 days, while vagrants, geriatric patients, and paupers lived in the wards for an average of 173.9 days.
These hospitals have a problem with patients overstaying because there is not enough manpower to frequently evaluate each patient’s progress, the courts do not come for them, or the patients live very far from the hospital. In some cases, the families forget to inquire about their relatives, refuse to pick them up due to the hospital’s distance from their village, or can no longer be reached. Moreover, the associated stigma results in a great number of patients being abandoned by their families upon admittance to the psychiatric hospital. A heartbreaking pattern common to the children’s ward is parents giving spurious phone numbers and addresses to the hospital so that they can no longer be contacted in reference to their child. On average, only 20% of patients at the Accra Psychiatric Hospital have relatives who care enough to occasionally visit and see how they are doing; this mainly occurs for those suffering from minor cases. For many years and still to this day, countless Ghanaians believe that supernatural, evil forces or spirits, bewitchment, or planted juju cause mental illness. The executives of Mind Freedom find that most Ghanaians believe that the mentally ill cannot be productive in society and that they are mentally ill because they are cursed or have offended a deity. The three noted that this perception cuts across all levels of age, education, and location. The assumption varies little with age and region but sometimes varies with education level, with less-schooled individuals often placing more blame on supernatural forces while the more educated tend to hold more positive attitudes toward the mentally ill.

Campaigns in local buses and print publications will also be implemented.

In order to appropriately test and validate this model for AUD, we will use an established, FDA-approved medication. NTX is an opioid antagonist with high affinity for mu-opioid and kappa-opioid receptors. Preclinical studies have shown that opioid antagonists at the mu-opioid receptor reduce ethanol consumption. In humans, alcohol consumption increases the release of endogenous opioids in the mesolimbic dopamine reward system, which contributes to the subjective pleasurable effects of alcohol. Therefore, NTX’s therapeutic benefit as an opioid antagonist is proposed to derive from blocking these rewarding effects and thereby reducing alcohol consumption. Previous studies of NTX have shown that it reduces drinks per drinking day, alcohol craving, rates of relapse, and the subjective pleasurable effects of alcohol. The effects of NTX appear to be moderated by craving, such that higher levels of craving are associated with greater reductions in alcohol consumption. As an established medication for AUD, NTX is an ideal candidate to test the novel practice quit attempt model. To further validate this novel early efficacy model, we will also test a promising medication to treat AUD. Varenicline, a partial agonist at α4β2 and a full agonist at α7 nicotinic acetylcholine receptors, is FDA-approved for smoking cessation. In preclinical studies, activation of nicotinic acetylcholine receptors reduced ethanol consumption. In human laboratory studies, VAR reduced alcohol self-administration and craving compared to placebo. In smoking cessation trials, it also reduced alcohol consumption and craving.

Additionally, a multi-site randomized controlled trial of VAR in individuals with AUD found that it reduced drinks per drinking day, alcohol craving, and percentage of heavy drinking days. Together, these studies suggest that VAR is a promising pharmacotherapy for the treatment of AUD. Therefore, including varenicline, a widely studied and promising AUD pharmacotherapy, as a third arm in this study will enable us to further validate this novel alcohol quit paradigm. In designing the current study as a 3-arm trial, we benefit not only from establishing the efficacy of NTX and VAR against placebo, but also from a head-to-head comparison of NTX and VAR in a cost-effective manner. The 3-arm trial design was selected to overcome weaknesses present in non-inferiority trials, where a novel drug is compared to an active control that is the current standard treatment. In active control trials, the efficacy of the novel drug is determined by demonstrating non-inferiority to the active control, which rests on the critical assumption that the active control has an actual drug effect. However, as there is no placebo control, this assumption cannot be proven; therefore, non-inferiority/equivalence trials lack assay sensitivity, i.e., the ability to distinguish between effective and ineffective treatments. The 3-arm design essentially combines the advantages of placebo- and active-controlled trials. The placebo arm will allow us to determine whether VAR is an effective or ineffective medication in the context of a good internal standard. Additionally, if neither NTX nor VAR is shown to be superior to placebo, then we can conclude that the practice quit paradigm is not a valid method for screening medications for AUD. The purpose of the current study is to develop and validate this novel model to screen novel compounds and advance medications development.
Naltrexone was chosen to evaluate the novel practice quit attempt model as it is one of the few FDA-approved medications for AUD.

RCT studies with oral NTX have shown that it reduced drinks per drinking day, alcohol craving, rates of relapse, and the subjective pleasurable effects of alcohol. As such, NTX represents a well-known, well-studied medication that is ideal for testing a novel paradigm. Varenicline is a promising pharmacotherapy for the treatment of AUD. VAR has been shown to reduce alcohol self-administration, consumption, and craving. A recent RCT of VAR in individuals with AUD found that it reduced drinks per drinking day, alcohol craving, and percentage of heavy drinking days. These studies suggest VAR as a potential AUD pharmacotherapy. The addition of VAR as a third arm in the current study will allow us to further validate this novel practice quit attempt model. Additionally, the inclusion of a promising pharmacotherapy allows us to compare the efficacy of two medications head-to-head in a cost-effective manner. The 3-arm design of a novel medication, standard treatment, and placebo allows us to establish not only the efficacy of each medication against placebo, but also that of the novel medication against the standard treatment. This study design essentially combines the advantages of placebo and active control studies. Participants who are eligible after the physical exam will be randomized to one of three treatment conditions. Urn randomization will be stratified by gender, smoking status, and drinking status. The UCLA Research Pharmacy will manage the blind. The three treatment conditions will not differ in appearance or method of administration. All participants will undergo a week-long medication titration period prior to the onset of the practice quit attempt as follows: for the naltrexone condition, 12.5 mg will be taken for the first 3 days, followed by a 25 mg dosage on days 4–7.

The target dosage of 50 mg will be ingested on days 8–14. For the varenicline condition, a dosage of 0.5 mg will be taken for the first 3 days, followed by an increase to 1 mg on days 4–7. The intended dosage of 2 mg will be taken on days 8–14. Participants in each condition will be instructed to take the prescribed medication twice per day, as detailed in Table 1. On study day 1, participants will report to the laboratory to complete the alcohol CR paradigm and receive their first medication dose under direct observation of study staff. They will receive a 7-day supply of study medication in blister packs with AM and PM dosing clearly distinguished. After reaching the target medication dose at the end of 1 week, participants will come to the laboratory on study day 8 to receive their second 7-day supply of study medication and to begin the 7-day practice quit attempt. Participants will be asked to take the AM dose of study medication on study day 8 in the lab under direct observation of study staff. During the practice quit attempt, participants will complete daily online and phone visits to report on their drinking, mood, and craving for alcohol during the previous day in a daily diary assessment. For each virtual visit, participants will be contacted over the phone by research staff. Participants will first be asked about adverse events and about use of concomitant medications. Research staff will then administer the CIWA-Ar to measure alcohol withdrawal. Next, they will ask participants to report on their past-day drinking as well as cigarette and marijuana use. Finally, while participants are still on the phone, research staff will send a link to the DDA. All participants will meet briefly with a trained study counselor after the second cue exposure session on day 14. This brief intervention draws from motivational interviewing and Screening, Brief Intervention, and Referral to Treatment models.
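As a rough illustration (not part of the protocol itself), the two titration schedules described above can be encoded as a simple lookup. The function name and the day boundaries for varenicline (days 1–3, 4–7, 8–14, mirroring the naltrexone schedule) are assumptions for the sketch; dose values follow the text.

```python
# Illustrative sketch of the 14-day titration schedules described above.
# Each schedule maps a study-day range to a total daily dose in mg.

def daily_dose(condition, day):
    """Return the total daily dose (mg) for a given study day (1-14)."""
    schedules = {
        "naltrexone":  [(3, 12.5), (7, 25.0), (14, 50.0)],
        "varenicline": [(3, 0.5),  (7, 1.0),  (14, 2.0)],
        "placebo":     [(14, 0.0)],
    }
    if not 1 <= day <= 14:
        raise ValueError("study day must be 1-14")
    for last_day, dose in schedules[condition]:
        if day <= last_day:
            return dose

print(daily_dose("naltrexone", 5))    # 25.0
print(daily_dose("varenicline", 10))  # 2.0
```

Encoding the schedule this way makes the parallel structure of the two titrations explicit: both step up at the same day boundaries, differing only in dose.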
It uses the therapeutic stance of motivational interviewing, which is collaborative and client-centered. Consistent with the literature on brief interventions, the therapist will seek opportunities to engage in and amplify change talk.

Together, the combination of evidence-based practices and principles applied to AUD, coupled with the experience of change in the context of study participation, is expected to create an opportunity for health behavior change. Criteria for discontinuing or modifying allocated interventions are at the discretion of the study physicians or principal investigator. One week after beginning the medication, physicians will speak with the participant via phone call to check for any adverse events. If any are reported, the participant may either undergo a dose reduction or terminate the medication. Participants will also have the option to voluntarily discontinue all medication at any point. All severe adverse events will be reported to relevant reporting entities immediately. Adherence to the interventions is facilitated by dividing the medication into separate blister packs for two distributions, by the daily virtual visits during the practice quit attempt period, and by a completion bonus. The separation of the study medication into two blister packs, each a 7-day supply, will motivate participants to come back to the laboratory for the second supply and reduce the chance of them misplacing the medication at the start of the study. During the practice quit attempt period, participants will be asked to send pictures of their blister packs to the study staff after completion of the daily phone visits. This will allow the study staff to count the medication for compliance. Additionally, a completion bonus will be given to participants on the last day of the study if they have completed at least 7 of the 8 in-person and virtual visits.
This is to motivate participants to complete all daily phone visits and online assessments. Participants will be recruited from the community through online and newspaper advertisements, as well as campaigns on multiple social media platforms. Targeted recruitment will also take place through a lab database of previous study participants who agreed to be contacted for future studies. Data are collected at the behavioral eligibility screening visit, the randomization visit, each of the daily phone visits during the practice quit attempt period, and the in-person study visits. All staff will be trained on the relevant assessment procedures, and inter-rater reliability will be monitored continuously by the primary investigator. For the drinking outcomes, data will be collected via participant self-report through the Timeline Followback. The Alcohol Urge Questionnaire (AUQ) will be used in the CR paradigm to measure craving. The AUQ is an 8-item scale on which participants rate their present experience of alcohol craving on a 7-point Likert scale. The AUQ has demonstrated high test-retest reliability, high internal consistency, and construct validity in human laboratory studies. Self-report measures will be completed directly through an electronic data capture and case report form system, Qualtrics. Timeline Followback data will be entered by research staff into Excel in order to generate daily drink averages based on standard drink calculations. All other data will be entered by research staff into SPSS. Data will be held on a secure server at the University of California, Los Angeles. Appropriately qualified personnel designated by the PI will monitor data entry and ensure that missing data are addressed as soon as possible after detection. All Timeline Followback data will be double-checked by research staff to ensure validity.
Excel will also be programmed to detect and flag any abnormal values. Participants will be given a 24-h telephone number to reach the study physician to discuss side effects, and physician office hours will be available as needed. Adverse events, including signs of sickness, will be collected in an open-ended format and coded using a systematic assessment for treatment-emergent events format at each study visit. Vital signs will be monitored at the beginning of each in-person study visit. Alcohol withdrawal will be monitored at each visit through administration of the CIWA-Ar, and any significant withdrawal, as indicated by a score of 10 or more on the CIWA-Ar, will be reported to the study physician immediately. In the event that significant medical problems are encountered, the study blind will be broken and appropriate medical treatment will be provided. The PI will designate appropriately qualified personnel to periodically perform quality assurance checks at mutually convenient times during and after the study. These monitoring visits provide the opportunity to evaluate the progress of the study and to obtain information about potential problems. The monitor will ensure that data are accurate and in agreement with any paper source documentation used, verify that subjects’ consent for study participation has been properly obtained and documented, confirm that research subjects entered into the study meet inclusion and exclusion criteria, verify that study procedures are being conducted according to the protocol guidelines, review AEs and SAEs, and ensure that all essential documentation required by GCP guidelines is appropriately filed. At the end of the study, they will confirm that the site has the appropriate essential documents on file and advise on storage of study records. Alcohol use disorder is a chronic condition with both high relapse rates and low treatment rates.
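For context, the "standard drink calculations" used to convert Timeline Followback entries into daily drink averages typically follow the NIAAA convention of 14 g of pure ethanol per US standard drink. The sketch below is illustrative only; the function name and the conversion are an assumption, not the study's actual spreadsheet logic.

```python
# Convert a reported beverage to US standard drinks (NIAAA convention).
ETHANOL_DENSITY_G_PER_ML = 0.789   # density of pure ethanol, g/mL
GRAMS_PER_STANDARD_DRINK = 14.0    # one US standard drink = 14 g ethanol

def standard_drinks(volume_ml, abv_percent):
    """Return the number of US standard drinks in a beverage of the
    given volume (mL) and alcohol-by-volume percentage."""
    grams_ethanol = volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML
    return grams_ethanol / GRAMS_PER_STANDARD_DRINK

# A 355 mL (12 oz) beer at 5% ABV works out to about one standard drink:
print(round(standard_drinks(355, 5.0), 2))
```

A 148 mL (5 oz) glass of 12% wine or a 44 mL (1.5 oz) shot of 40% spirits comes out to roughly one standard drink by the same formula, which is why those serving sizes are the usual TLFB anchors.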

Similar CB1R-dependent effects of HFS were obtained for pROCK.

Secondary antisera from ThermoFisher Scientific included Alexa Fluor 594 anti-rabbit IgG, Alexa Fluor 488 anti-mouse IgG, and Alexa Fluor 488 anti-guinea pig IgG, all used at 1:1000 dilutions. An epifluorescence microscope with a 63× PlanApo objective and ORCA-ER camera was used to capture image z-stacks, through a depth of 2 μm in 0.2 μm z-steps, from the DG outer molecular layer and CA1 stratum radiatum. For slice experiments, 1 z-stack was captured from each of 6 sections per slice. For behavioral/brain studies, 3 z-stacks were captured per section from 3 to 4 spaced sections within a given septo-temporal span of hippocampus. Immunolabeling for the synaptic vesicle protein SYN and for the excitatory synapse postsynaptic scaffold protein PSD-95 served as markers for the presynaptic and postsynaptic compartments, respectively. The incidence and density of immunolabeling for the phosphoprotein co-localized with these compartment markers were then evaluated using wide-field epifluorescence microscopy and fluorescence deconvolution tomography as described elsewhere. Briefly, images within each z-stack were processed for iterative deconvolution and then used to construct a 3-dimensional montage of the sample field. Automated systems were then used to normalize background density, identify immunolabeled elements within the size and eccentricity constraints of synapses, and quantify those double-labeled. Elements were considered double-labeled if there was any overlap in the fields labeled with the 2 fluorophores as assessed in 3D.

Male Long–Evans rats were handled for 6 sessions, 2 sessions per day, prior to odor discrimination training. Procedures for animal handling, training, and testing were adapted from Martens et al. as described in detail elsewhere. Sessions of ten 30 s trials on a given odor pair were repeated up to twice daily until rats reached a success rate of 80% correct, at which point they were considered to have acquired the odor discrimination task.

On the following day, trained rats were either given 10 training trials on a novel odor pair or transported to but not placed in the test apparatus, and killed immediately thereafter for tissue harvest. The CB1R is found on axon terminals throughout the brain, including the field of LPP termination in the outer molecular layer of the DG. We confirmed CB1R localization to glutamatergic terminals in the rat LPP field and then compared the effects of the cannabinoid receptor agonist WIN 55,212–2 on synaptic physiology in the S–C and LPP projections. In accord with prior work, WIN caused a rapid and pronounced depression of S–C fEPSPs in CA1 that was accompanied by an increase in paired-pulse facilitation and the expected severe impairment of LTP. Very different results were obtained in the LPP: WIN had no effect on baseline fEPSPs or on paired-pulse facilitation. Voltage-clamp recordings also detected no effect on EPSCs in the LPP. In contrast to these results for glutamatergic responses in the LPP, WIN produced the canonical depression of IPSCs elicited by single-pulse LPP stimulation. We next asked if, despite the lack of effect on baseline responses, WIN influences the machinery that produces the ECB-dependent potentiation of the LPP, using stimulation that is near threshold for induction. WIN more than doubled the magnitude of lppLTP under these conditions. These results suggest that activation of CB1Rs in the LPP preferentially engages signaling mechanisms leading to potentiated transmission as opposed to the more commonly observed depression of release. CB1R signaling through ERK1/2 effects phosphorylation and degradation of the vesicular protein Munc18-1, leading to reductions in transmitter release.
In accord with this, using dual immunofluorescence and FDT, we found that treatment with WIN increased levels of phosphorylated Munc18-1 S241 co-localized with the presynaptic marker SYN in the S–C terminal field: WIN caused both a rightward shift in the pMunc18-1 immunolabeling intensity frequency distribution and increased numbers of terminals with dense pMunc18-1 immunoreactivity. In the same slices, WIN had no effect on presynaptic pMunc18-1 immunolabeling in the LPP terminal field.

The above results indicate that WIN-initiated CB1R signaling at LPP terminals is biased “away from” the ERK1/2-to-Munc18-1 cascade through which ECBs suppress neurotransmitter release, and toward a route that promotes plasticity. They also raise the question of whether signaling to Munc18-1 and release suppression in CA1 is engaged by normally occurring patterns of physiological activity. Blocking the CB1R with the inverse agonist AM251 had no effect on S–C fEPSPs elicited by single pulses. Thus, we tested for an effect using short trains of low-frequency gamma stimulation. This pattern occurs routinely in hippocampal and entorhinal fields in behaving animals and is thought to be associated with the processing of complex information. Within-slice comparisons were made between responses collected before and after 40 min perfusion of vehicle or 5 μM AM251. In CA1, S–C responses to low gamma stimulation showed the rapid, within-train frequency facilitation described in prior work. AM251 did not affect baseline responses but clearly enhanced S–C response facilitation during the gamma train. Effects of AM251 were greatest in later portions of the train, as anticipated for contributions of “on-demand” ECB production. Very different results were obtained for the LPP. Within-train facilitation was less pronounced in the LPP than in the S–C system, and it was not altered by AM251. These findings are consistent with the hypothesis that CB1R signaling leading to a depression of transmitter release is more readily engaged in the S–C projections than in the LPP. The above results were unexpected because prior studies showed that physostigmine causes a suppression of excitatory transmission in the LPP and other hippocampal pathways that is blocked by AM251. Therefore, we tested if physostigmine increases hippocampal 2-AG levels, as anticipated, and then used the FDT technique employed above to determine if it also triggers Munc18-1 S241 phosphorylation in the LPP.
Infusion of physostigmine elicited a marked increase in slice levels of 2-AG but not other lipids; it also produced a reliable increase in SYN+ terminals with dense concentrations of pMunc18-1 in both LPP and S–C fields.

As predicted, physostigmine effects on both projections were dramatically reduced in slices prepared from Munc18-1+/− mice relative to those from wild types, although the mutation had no effect on the input/output curve or paired-pulse facilitation in the LPP. A recent study showed that the locally synthesized neurosteroid pregnenolone reduces both CB1R signaling through ERK1/2 and the neurotransmitter release suppression normally mediated by the ECB receptor. We tested for this effect in hippocampus, beginning with the pronounced fEPSP depression produced by CB1R activation in the S–C system: treatment with 10 μM pregnenolone prevented the synaptic response depression elicited by WIN. Pregnenolone was similarly effective in the LPP, where it blocked actions of physostigmine on presynaptic pMunc18-1 immunoreactivity and synaptic transmission. These findings point to the conclusion that the pregnenolone/CB1R/Munc18-1 system, as found in the S–C projections, is present in the LPP although it is not engaged by either the CB1R agonist WIN or repetitive afferent activity. There remains the possibility, however, that it is activated by the short high-frequency gamma trains used to induce lppLTP and participates in subsequent stabilization of the potentiated state of LPP terminals. We conducted multiple tests of this argument. Pregnenolone at the concentration that eliminates physostigmine effects on transmission and pMunc18-1 immunoreactivity in CA1 had no detectable effect on lppLTP induced by near-threshold stimulation. Conventional stimulation trains failed to influence Munc18-1 phosphorylation in LPP terminals, and induction of lppLTP was fully intact in Munc18-1+/− mice. Considered together with evidence that lppLTP is both 2-AG- and CB1R-dependent, the present results suggest that potentiation in the LPP involves a second CB1R signaling pathway that has not been evaluated in work using physiological activation of hippocampal synapses.
Finally, the results obtained with pregnenolone afforded a means for testing if increases in 2-AG content produced by physostigmine promote lppLTP in the absence of the response suppression associated with Munc18-1 phosphorylation. We tested this intriguing point and found that physostigmine more than doubled the magnitude of lppLTP induced by threshold-level stimulation. This result is consistent with our earlier observation that reducing 2-AG breakdown, and thereby increasing hippocampal slice 2-AG levels, with the monoacylglycerol lipase inhibitor JZL184 similarly augments lppLTP. Prior results showed that lppLTP is blocked by presynaptic actions of latrunculin A, a toxin that selectively blocks the assembly of actin filaments. This raises the possibility that the CB1R promotes lppLTP via actions on actin regulatory signaling, an idea in alignment with evidence that CB1R initiates actin reorganization in dissociated cells and rapidly activates both FAK and the small GTPase RhoA in N18TG2 neuroblastoma cells. FAK is a non-receptor tyrosine kinase that mediates integrin effects on the actin cytoskeleton throughout the body.

Other experiments found that CB1R acting through FAK initiates actin remodeling in pancreatic cells, resulting in enhanced insulin release. Accordingly, we used FDT to test if WIN activates FAK, via Y397 phosphorylation, in LPP terminals. WIN produced a pronounced rightward skew in the immunofluorescence intensity-frequency distribution for pFAK Y397 co-localized with SYN. RhoA and its downstream effector, ROCK2, represent a primary route whereby FAK signals to actin. In the LPP terminal field, WIN increased levels of pROCK S1366 co-localized with SYN but not with the postsynaptic density marker PSD-95. In the same hippocampal slices, WIN increased presynaptic pROCK levels in CA1, but this effect was substantially smaller than that in the LPP. We quantified the regional difference by converting the data into cumulative probability curves and then subtracting the WIN treatment values at each density bin for each slice from the mean curve for the vehicle group. The rightward shift in pROCK immunolabeling produced by WIN was over 2-fold greater in the LPP than in CA1. In all, the CB1R agonist WIN had a much greater effect on Munc18-1 phosphorylation in CA1 than in the LPP, and a much greater effect on markers of actin signaling in the LPP than in CA1. We conclude from this that the CB1R response to WIN is biased toward different signaling streams in the 2 projections. We further tested if CB1R signaling to ROCK is more prominent in the LPP than in CA1, using physostigmine to elevate 2-AG levels and signaling. Physostigmine produced a reliable increase in presynaptic pROCK in the LPP that was blocked by CB1R antagonism, but had no reliable effect on presynaptic pROCK levels in CA1 of the same hippocampal slices. A similar pattern of results was obtained in an analysis of the most densely pROCK-immunoreactive terminal boutons.
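The quantification step described above (converting per-slice labeling-density histograms to cumulative probability curves, then subtracting each treated slice's values bin by bin from the mean vehicle curve) can be sketched with NumPy. The function name, array shapes, and example counts below are illustrative assumptions, not the study's actual analysis code.

```python
import numpy as np

def cumulative_shift(treated_counts, vehicle_counts):
    """Per-slice histograms (rows = slices, columns = density bins) are
    converted to cumulative probability curves; each treated slice's curve
    is then subtracted from the mean vehicle curve, bin by bin.
    Positive values indicate a rightward (denser-labeling) shift."""
    def cum_prob(counts):
        counts = np.asarray(counts, dtype=float)
        cdf = np.cumsum(counts, axis=-1)
        return cdf / cdf[..., -1:]  # normalize so each curve ends at 1

    vehicle_mean = cum_prob(vehicle_counts).mean(axis=0)
    return vehicle_mean - cum_prob(treated_counts)

# Toy data: treated slices skewed toward denser bins relative to vehicle.
vehicle = [[8, 6, 4, 2], [7, 6, 4, 3]]
treated = [[4, 4, 6, 6], [3, 5, 6, 6]]
shift = cumulative_shift(treated, vehicle)
print(shift.shape)  # one difference curve per treated slice
```

Comparing the resulting per-slice difference curves between regions is what supports the "over 2-fold greater in the LPP than in CA1" statement in the text.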
Collectively, these results describe a CB1R-FAK-ROCK route by which 2-AG generated and released during high-frequency stimulation could facilitate presynaptic cytoskeletal changes required for production of stable lppLTP. In accord with this proposal, high-frequency bursts of LPP stimulation caused a rightward shift in the density frequency distribution of presynaptic pFAK in slices harvested 2 min after stimulation. Together, these results describe a second CB1R signaling pathway in LPP terminals that, unlike the ERK1/2-Munc18-1 route, is directly involved in the production of lppLTP. An important question raised by the above results is why activation of FAK and ROCK by pharmacological CB1R stimulation augmented lppLTP but did not by itself induce potentiation. One possibility is that electrophysiological stimulation of the LPP engages elements that are not downstream from CB1R activation but nonetheless are required for shifting LPP terminals into the enhanced release state. We tested if integrin-class adhesion proteins, which co-operate with CB1R in actin regulatory signaling in cultured cells, fill this critical role. Integrins are dimeric transmembrane adhesion receptors for extracellular matrix and cell surface proteins that are expressed throughout the brain by neurons and glia. In hippocampus, the majority of integrins contain the β1 subunit, and β1 integrins have been localized to both pre- and postsynaptic compartments. We previously demonstrated that, in hippocampal slices, infusion of β1-neutralizing antisera disrupts activity-induced actin polymerization and LTP in field CA1. Here, we tested if similar treatments influence potentiation in the LPP. Treatment with anti-β1 had no effect on baseline LPP responses but caused a near-complete suppression of lppLTP. In contrast, neutralizing anti-αV integrin left potentiation intact.

Methods for quantifying heavy drinking are also inconsistent across studies

A murine model also suggested inhaled VEA may cause EVALI-like lung injury, but the underlying mechanism remains to be determined. The age range of cases and deaths is broad, and the e-cigarette use patterns are diverse, although 75% of EVALI patients were young Caucasian males and an overwhelming majority admitted to THC vaping. While VEA from THC vaping has been most commonly and consistently linked to EVALI cases, the spectrum of usage patterns and clinical manifestations suggests a possible role of multiple toxicants from unregulated products. Chemical analysis of counterfeit cartridges obtained from EVALI patients demonstrated the presence of several toxicants, including volatile organic compounds, semi-volatile hydrocarbons, silicon-conjugated compounds, terpenes, pesticides, and metals, which were not found in medical-grade THC cartridges. The typical symptoms of EVALI include dyspnea, chest pain, cough, fever, and fatigue. Additionally, many EVALI patients also presented with nausea, vomiting, and other gastrointestinal symptoms. Chest radiography of most cases was abnormal; images typically showed ground-glass opacities in both lungs. Four radiographic patterns were identified in EVALI patients: acute eosinophilic pneumonia, diffuse alveolar damage, organizing pneumonia, and lipoid pneumonia. Histological analysis of lung biopsies showed patterns of acute fibrinous pneumonitis, diffuse alveolar damage, or organizing pneumonia. EVALI patients may have slightly different phenotypes and have been diagnosed with acute respiratory distress syndrome, lipoid pneumonia, and pneumonitis. Patients have been treated with antibiotics and glucocorticoids, and steroidal treatment has been shown to improve symptoms and lung function.

Although this is a new field, in which initial cross-sectional epidemiological studies have demonstrated several limitations, adolescents who vaped have been found to be more likely to try cigarettes than non-smoking, non-vaping youth. For example, a cross-sectional analysis of PATH study data indicated an association between e-cigarette use and self-reported wheeze, and an analysis of data from 402 822 never-smoking participants in the Behavioral Risk Factor Surveillance System indicated an association between self-reported asthma and e-cigarette use intensity. It is important to recognize that the above studies were observational in nature, and the chronology of e-cigarette use and disease development is often not clear, so more evidence is needed to further clarify the cause-effect relationship between e-cigarette use, cardiopulmonary disease, and cerebrovascular events. Regardless, these publications serve as an impetus for future research into the causative and mechanistic relationships between e-cigarette use and cardiopulmonary disease risk.

E-cigarettes have been proposed as an effective strategy to quit conventional cigarette smoking, but they have not been approved for this purpose in the USA or elsewhere. To date, the clinical trials that have been carried out do not address the question of effectiveness in the “real world”, that is, does the availability of e-cigarettes in the marketplace decrease smoking at the population level? Instead, clinical trials have compared the delivery of nicotine by an e-cigarette to other modalities of nicotine delivery. The most recent review on this topic concluded: “The evidence is inadequate to infer that e-cigarettes, in general, increase smoking cessation.
However, the evidence is suggestive but not sufficient to infer that the use of e-cigarettes containing nicotine is associated with increased smoking cessation compared to the use of e-cigarettes not containing nicotine, and the evidence is suggestive but not sufficient to infer that more frequent use of e-cigarettes is associated with increased smoking cessation compared with less frequent use of e-cigarettes”.

To predict the relative dangers of second- and third-hand e-cigarette exposures, an understanding of the degree to which e-cigarette use might lead to an increase in ambient nicotine and particulate matter, and the degree to which nicotine and other e-cigarette constituents deposit on surfaces, will be critical. Since there are no sidestream aerosols from e-cigarettes, unlike combustible cigarettes, secondhand e-cigarette exposure is almost exclusively from user exhalation. Thus, it remains unclear and somewhat controversial what level of additional particulate matter, vapor-phase, and nicotine emissions is released into the environment from e-cigarettes. Some of this uncertainty may relate to variability in device design and liquid composition. However, several studies have demonstrated that e-cigarette use by individuals can contribute to worse indoor air quality, including release of toxicants and particulate matter. For example, indoor e-cigarette use can generate fine particulate matter in high concentrations under natural use conditions in indoor environments, as well as an increase in particle numbers and concentrations of 1,2-propanediol, glycerin, and nicotine. Increased levels of 1,2-propanediol, diacetin, and nicotine were also measured by gas chromatography from one exhaled e-cigarette puff. E-cigarettes containing nicotine-free solutions may have higher particulate levels than those containing nicotine. However, these particles dissipate much more quickly than cigarette smoke particles, and further studies will be needed to fully understand the risk of second- and third-hand exposures. Measurable nicotine levels have been detected in samples from hard surfaces and cotton surfaces exposed to e-cigarette emissions. Recent developments in detection strategies using autofluorescence have further elucidated e-liquid deposition topography.
One study found that for each 70-mL aerosol puff, 0.019% of the aerosolized e-liquid was deposited on hard surfaces.

These studies may also be an overestimate when compared to real-life scenarios because aerosol puffs were directly administered to the observed surfaces and were not inhaled and exhaled prior to surface deposition. However, in an attempt to provide a better model of surface deposition, a study measured deposition resulting from inhaled and exhaled e-cigarette aerosol. This study found no significant increase in surface nicotine levels following 80 puffs per participant. The authors noted these results may not indicate a lack of risk for third-hand exposure, since they did not account for gradual accumulation on surfaces over time. Together, these results indicate that potentially hazardous e-cigarette emissions, including PG/VG, nicotine, and heavy metals, may be deposited on household surfaces as a result of typical vaping behavior. Furthermore, they suggest a potential risk for third-hand exposure, which could serve as a public health concern. However, more studies are needed to better understand the risk of vaping for second- and third-hand exposures.

In assessing the public health impact of e-cigarette use, there is an implicit comparison to alternative or counterfactual scenarios; in the case of e-cigarettes, the comparison is to the hypothetical situation of a world lacking e-cigarettes. There are established methods for quantitative risk assessment that are widely used for public health decision-making, such as the four-element paradigm set out in the 1983 National Research Council report generally referred to as “The Red Book”. The elements include: hazard identification, that is, is there a risk?; exposure assessment, that is, what is the pattern of exposure?; dose-response assessment, that is, how does risk vary with dose?; and risk characterization, that is, what is the overall risk to the exposed population? These four elements have general applicability to characterizing the impact of e-cigarettes in terms of the prevalence of nicotine addiction and its profile across groups in the population and the associated additional burden of disease.
Population impact is quantitatively assessed using conceptual models that capture an understanding of the relationships between independent and modifying factors and their outcomes. Models are implemented using statistical approaches and evidence-based estimates of the values of parameters at key steps in the model, for example, the rate of initiation of use of tobacco products with e-cigarettes present. This approach was used by the FDA’s Tobacco Products Scientific Advisory Committee to estimate the impact of menthol-containing tobacco products.
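As an illustration of this style of modeling, the sketch below iterates a toy two-parameter prevalence model under two counterfactual scenarios (with vs. without e-cigarettes on the market). All parameter values and names here are invented for illustration and carry no empirical weight; a real assessment would draw each parameter from systematic evidence reviews and propagate uncertainty through the model:

```python
def projected_prevalence(init_rate, quit_rate, years, start_prev):
    """Toy compartment model: each year a fraction of non-users initiates
    (init_rate) and a fraction of current users quits (quit_rate)."""
    prev = start_prev
    for _ in range(years):
        prev += init_rate * (1 - prev) - quit_rate * prev
    return prev

# Hypothetical parameter sets for the two counterfactual scenarios:
# e-cigarettes might raise both the initiation rate and the quit rate.
without_ecigs = projected_prevalence(0.010, 0.050, years=20, start_prev=0.14)
with_ecigs = projected_prevalence(0.012, 0.065, years=20, start_prev=0.14)
```

Comparing the two projected prevalences (and the disease burden each implies) is the kind of estimate such advisory-committee models produce, with the critical work lying in justifying the parameter values.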

The overall approach was to formulate a conceptual framework, conduct systematic reviews around the framework, and implement an evidence-based statistical model for making estimates related to public health impact. The systematic reviews highlighted gaps in scientific evidence, pointing to the most critical research needs for strengthening the evidence foundation for potential regulation of menthol. For e-cigarettes, the research priorities identified in this article relate to key evidence gaps that need to be addressed to achieve a greater and more certain understanding of the population impact of e-cigarettes.

People with HIV are twice as likely to engage in heavy alcohol use and two to three times more likely to meet criteria for an alcohol use disorder in their lifetime than the general population. Heavy alcohol use not only promotes the transmission of HIV through sexual risk-taking behavior and non-adherence to antiretroviral therapy, but also directly exacerbates HIV disease burden by compromising the efficacy of ART and increasing systemic inflammation. In addition to increased risk for physical illness, there is substantial evidence indicating that comorbid HIV and heavy alcohol use is more detrimental to brain structure and results in higher rates of neurocognitive impairment than either condition alone. The impact of comorbid HIV and heavy alcohol use on the central nervous system is especially important to consider in the context of aging. The population of older adults with HIV is rapidly growing; approximately 48% of PWH in the U.S. are aged 50 and older, and the prevalence of PWH over the age of 65 increased by 56% from 2012 to 2016. Trajectories of neurocognitive and brain aging appear to be steeper in PWH, possibly due to chronic inflammation and immune dysfunction, long-term use of ART, frailty, and cardiometabolic comorbidities. In addition to HIV, rates of alcohol use and misuse are also rising in older adults.
The neurocognitive and physical consequences of heavy alcohol use are more severe among older than younger adults, and several studies also report accelerated neurocognitive and brain aging in adults with AUD. While the mechanisms underlying these effects are poorly understood, older adults may be more vulnerable to alcohol-related neurotoxicity due to a reduced capacity to metabolize alcohol, lower total-fluid volume, and diminished physiologic reserve to withstand biological stressors. Altogether, these studies support a hypothesis that PWH may be particularly susceptible to the combined deleterious effects of aging and heavy alcohol use. For example, in a recent longitudinal report, Pfefferbaum et al. reported that PWH with comorbid alcohol dependence exhibited faster declines in brain volumes in the midposterior cingulate and pallidum above and beyond either condition alone. There is considerable heterogeneity, however, in profiles of neurocognitive functioning across individuals with HIV and AUD. Patterns of alcohol consumption rarely remain static throughout the course of an AUD, but rather are often characterized by discrete periods of heavy use. This episodic pattern of heavy consumption may similarly impact the stability of HIV disease, which may in part explain why some PWH with AUD exhibit substantial neurocognitive deficits while others remain neurocognitively intact. Self-report estimates of alcohol use, however, often fail to predict neurocognitive performance. For example, some studies classify individuals based on DSM criteria for AUD, whereas others define heavy drinking based on “high-risk” patterns of weekly consumption. These methods characterize the chronicity of drinking and psychosocial aspects of alcohol misuse, but they are suboptimal for quantifying discrete periods of heavy exposure and high-level intoxication that may confer higher risk for neurocognitive dysfunction.
Binge drinking, defined by the National Institute on Alcohol Abuse and Alcoholism as 4 or more drinks for women and 5 or more drinks for men within approximately 2 hours, may more precisely capture discrete episodes of heavy exposure. The relationship between binge drinking and neurocognitive functioning remains poorly understood across the lifespan, and particularly in the context of HIV. Thus, the current study examined two primary aims to better understand the impacts of HIV, binge drinking, and age on neurocognitive functioning. The first study aim examined the independent and interactive effects of HIV and binge drinking on global and domain-specific neurocognitive functioning. We hypothesized that: 1) neurocognitive performance would be poorer with each additional risk factor, such that the HIV-/Binge- group would exhibit the best neurocognition, followed by the single-risk groups, and finally the dual-risk group; and 2) these group differences would be explained by a detrimental synergistic effect of HIV and binge drinking on neurocognition. The second study aim examined whether the strength of the association between age and neurocognition differed by HIV/Binge group. We hypothesized that: 1) older age would relate to poorer neurocognition; and 2) this negative relationship would be strongest in the HIV+/Binge+ group. A modified timeline follow-back interview was used to assess drinking behavior in the last 30 days.
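The NIAAA sex-specific threshold described above can be expressed as a simple classification rule. The sketch below applies it to per-day drink counts such as those collected by a 30-day timeline follow-back; this is a rough approximation, since daily totals do not capture true 2-hour drinking windows, and the function and group labels are illustrative, not the study's actual scoring code:

```python
def classify_binge_group(daily_drinks, sex):
    """Label a participant Binge+ if any day in the recall window meets the
    NIAAA binge threshold: 4+ drinks for women, 5+ for men.

    daily_drinks: iterable of standard drinks per day (e.g., 30 entries).
    sex: "F" or "M".
    Returns (group_label, number_of_binge_days).
    """
    threshold = 4 if sex == "F" else 5
    binge_days = sum(1 for d in daily_drinks if d >= threshold)
    return ("Binge+" if binge_days > 0 else "Binge-"), binge_days
```

Note that the same daily record can classify differently by sex: four drinks in a day meets the threshold for women but not for men.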

The ability to make such comparisons is further limited by the wide time frame in which CBIs were developed

For the CBIs that did not mention use of a broad theory but mentioned using a specific construct or technique, all provided a description of how it was applied in the intervention; however, the amount and quality of information provided about the application of the constructs/techniques varied considerably across this group of CBIs. Of the 21 CBIs that mentioned use/application of theory, all but two included at least one measure of a construct associated with the theory. If a CBI mentioned use of a theory, it was more likely to include a measure of specific constructs associated with the theory compared to CBIs that did not mention use of a broad theory. Specifically, of the CBIs that did not explicitly mention use of a theory but did include a specific construct, only five included corresponding measures of the theoretical construct. Tables 1 and 2 list the classification of each CBI and provide a list of the measures associated with the theory, construct, or intervention technique.

This study identified 100 unique articles covering 42 unique computer-based interventions aimed at preventing or reducing alcohol use among adolescents and young adults. Thus, this review includes a total of 21 new CBIs and 43 new articles. This review is the first to provide an in-depth examination of how CBIs integrate theories of behavior change to address alcohol use among adolescents and young adults. While theories of behavior change are a critical component of effective interventions that have been developed and evaluated over the past several decades, attention to the application of theory in CBIs has been limited. We utilized a simple classification system to examine if theories were mentioned, applied, or measured in any of the publications that corresponded with the CBIs.

Only half of the CBIs reviewed mentioned use of an overarching, established theory of behavior change. The other half mentioned use of a single construct and/or intervention technique but did not state use of a broader theory. CBIs that were based on a broad theoretical framework were more likely to include measures of constructs associated with the theory than those that used a discrete construct or intervention technique. However, greater attention to what theory was used, articulating how theory informed the intervention, and including measures of the theoretical constructs is critical to assess and understand the causal pathways between intervention components/mechanisms and behavioral outcomes. When mentioning the use of a theory or construct, almost all provided at least some description of how it informed the CBI; however, the amount and quality of information about how the theory was applied to the intervention varied considerably. Greater attention to what is inside the “black box” is critical in order to improve our understanding of not only what works, but why it works. While a few articles provided detailed information about the application of theory, the majority included limited information with which to examine the pathway between intervention approach and outcomes. Some researchers/intervention developers may not fully appreciate how theory can be used to inform intervention approaches. There is an emphasis on the outcomes/effectiveness of interventions, and less attention is placed on their development. In addition, to our knowledge, there are no publication guidelines/standards for describing the use of theoretical frameworks in intervention studies, and the inclusion of this information is often up to individual authors and reviewers. Given the importance of theory in guiding interventions, greater emphasis on the selection and application of theory is needed in publications.
Most of the CBIs reviewed provided some form of personalized normative feedback and applied it relatively consistently.

Personalized normative feedback is designed to correct misperceptions about the frequency and acceptability of alcohol use among peers. It typically involves an assessment of a youth’s perceptions of peer norms around alcohol attitudes and use, followed by tailored information about actual norms. In addition, some interventions have recently incorporated personal feedback to address individuals’ motivations to change through assessing and providing feedback on drinking motives or in decisional balance exercises. The widespread use of personalized normative feedback in CBIs may be because it has been widely documented as an effective strategy and because it lends itself readily to an interactive, personalized computer-based intervention. Motivational interviewing was also used in several of the CBIs and is an effective face-to-face counseling technique. In contrast, this technique was applied to CBIs in a number of different ways, such as exercises designed to clarify goals and values, making the description of how it was applied even more essential to examining differential effectiveness across the various CBIs. This study builds on the growing evidence supporting the use of CBIs as a promising intervention approach. We found most of the CBIs improved knowledge and attitudes and reduced alcohol use among adolescents and young adults. In addition, this study suggests CBIs that use overarching theories more frequently reported significant behavioral outcomes than those that use just one specific construct or intervention technique. This finding is consistent with prior studies examining the use of theory in face-to-face interventions targeting alcohol use in adolescents. However, it is important to acknowledge the wide variation across the CBIs not only in their use of theory, but in scope, targeted populations, duration/dosage, and measured outcomes.
It is encouraging that even brief/targeted CBIs demonstrated some effectiveness and thus can play an important role in improving knowledge and attitudes, which are important contributors to changes in behavior. There are limitations to this study.

As discussed previously, many articles did not explicitly describe how theory was applied in the CBI. It is therefore possible that the theoretical pathways for the intervention were further developed than we have noted, and possibly included in other documents, such as logic models and/or funding applications; however, such information is not readily accessible and was outside the scope of this review. Thus, lack of mention of the name of a theory or construct or its application does not mean that the intervention did not integrate the theory, only that the article did not provide information about its application. Thus, due to variations in the described use of theory, along with the wide range of CBIs, it was not possible to draw comparisons about the relative effectiveness of CBIs according to the theory used. This review spanned articles published between 1995 and 2014. During this period, CBIs to address health issues have been rapidly evolving due to major advancements in technological innovations. These advancements have been coupled with greater interest and investment from federal agencies and philanthropic foundations.

Electronic cigarettes are battery-powered devices that aerosolize e-liquids, which typically contain propylene glycol and vegetable glycerin, nicotine, flavors, and stabilizers/humectants such as triacetin. Although it is well known that combustible cigarettes cause multiple cardiovascular and pulmonary diseases, the effects of e-cigarettes on health have only begun to be studied.
Alarmingly, there has been a rapid increase in e-cigarette use among adolescents and young adults, who could potentially be exposed to e-cigarette aerosols for decades if their use is lifelong. Indeed, the US Surgeon General concluded that the use of e-cigarettes among youth and young adults has become a major public health concern. A recent European Respiratory Society task force concluded that since the long-term effects of e-cigarettes are unknown, it is not clear whether they are in fact safer than tobacco, and based on current knowledge, their negative health effects cannot be ruled out. In the USA, the Family Smoking Prevention and Tobacco Control Act of 2009 gave the Food and Drug Administration the power to regulate tobacco products. While e-cigarettes were not covered in the original act, the FDA has clarified its position with its “deeming” rule and, since 2016, has begun to exert its regulatory authority over e-cigarettes and other noncombustible products. In 2020, in response to their growing popularity among youth, the FDA issued a policy to limit the sales of some flavored e-cigarette products. As the FDA adheres to a public health impact standard, evidence on adverse health effects of e-cigarettes will be a consideration impacting future sales of e-cigarettes and e-cigarette liquids. Such regulation will likely be contingent upon their observed health effects, as well as effects on nicotine addiction.

In addition, the recent emergence of acute and severe e-cigarette, or vaping, product use-associated lung injury across the US underscores the need, complexity, urgency, and importance of basic and clinical research on the health effects of e-cigarettes, particularly focused on the cardiopulmonary systems. With regard to public health impact and cardiopulmonary health, availability and use of e-cigarettes might benefit those who switch from combustible cigarettes to e-cigarettes, that is, harm reduction. However, the potential benefits from the use of these products are uncertain: they deliver a poorly defined, highly variable, and potentially toxic aerosol that may have adverse effects, which may depend on age, reproductive status, health of the user, and patterns of use. In addition, there are major public health concerns surrounding the availability of e-cigarettes to children, adolescents, and young adults. Nicotine addiction is of particular concern, and the use of e-cigarettes is positively associated with increased risk of use of combustible cigarettes. These issues complicate the question of how e-cigarettes might impact cardiopulmonary health and are explored further in the later sections of this review. Recognizing the potential health impact of e-cigarettes when they first emerged, particularly on the heart and lung, the Division of Lung Diseases and the Division of Cardiovascular Sciences at the NIH’s National Heart, Lung, and Blood Institute conducted a workshop in the summer of 2015 entitled “Cardiovascular and Pulmonary Disease and the Emergence of E-cigarettes” to identify key areas of needed research, as well as opportunities and challenges of such research.
The workshop was organized around a framework recognizing that the public health impact of e-cigarettes would be influenced by a complex network of factors in addition to direct health and biological effects, including device characteristics, chemical constituents, aerosol characteristics, and use patterns. In response to the significant gaps and research areas highlighted at the workshop, NHLBI subsequently directed research funding to projects aimed at understanding the cardiopulmonary health effects of e-cigarettes and inhaled nicotine. Funded investigators met in 2018, 2019, and 2020 to discuss their results, findings in the larger field, and remaining scientific questions. The focus of this review is the result of discussions, recognizing a need for further understanding of the cardiopulmonary health effects of e-cigarettes, that occurred at these NHLBI-supported workshops and investigator meetings. This review takes a holistic view of e-cigarette use and cardiopulmonary health, with a major focus on the USA. A summary of the current understanding of the multitude of factors that ultimately affect health, including policies, behaviors, emissions, and biological effects associated with e-cigarette use, is provided herein, with the ultimate goal of identifying key research gaps that remain in the field. E-cigarettes inhabit a rapidly changing marketplace and an evolving pattern of use that typically precedes scientific exploration. Following a PubMed search for relevant literature using “e-cigarette” and/or “cardiopulmonary” and/or “pulmonary” and/or “cardiovascular disease” as search terms, we break down the pertinent fields to uncover critical research questions that will better enable an understanding of how e-cigarettes affect cardiopulmonary health at an individual and community level. E-cigarettes are highly variable in design and comprise a battery, a reservoir for holding the e-liquid, a heating element, an atomizer, and a mouthpiece.
The first generation of e-cigarettes were similar in size and shape to combustible cigarettes. First-generation devices typically used a prefilled nicotine solution cartridge that directly contacted the heating element. Many second-generation devices were pen-shaped; some included refillable cartridges, while others were closed systems that held only prefilled sealed cartridges. Third-generation devices were called “mods” since they are easily modified. They were more diverse, and featured customizable atomizers, resistance coils, and larger batteries capable of heating made-to-order e-liquids to higher temperatures to create more aerosol and potentially deliver more nicotine. Fourth-generation devices were smaller, and some resembled familiar items such as USB drives. Their sleek design and the ease with which they can be concealed from parents and teachers have contributed to their growing popularity among school-age children. These e-cigarettes operate at lower wattages than third-generation devices.

Quitting cigarettes improves respiratory symptoms and limits lung function deterioration

It was important to “beef up cessation services in a comprehensive way so that it is relevant to the communities that are most affected”. The flip side of including menthol in policies banning sales of flavored tobacco products to address the health of African Americans and other targeted populations, such as the LGBT community, was fear within those communities about criminalizing smoking and smokers. Numerous participants emphasized that flavor policies were “not about the behavior but about sales of the product. We’re not about policing people’s behavior. We don’t want to see any more negative police/community interaction”. Most participants believed flavor bans were unlikely to result in over-policing: “I…don’t think we’re going to see…this law be misused to justify inappropriately criminalizing residents”. Another participant remarked that the argument was raised because “the tobacco industry has…paid some African American leaders to come out and say [bans on sales of menthol products] were criminalizing the Black community”. However, she also pointed out that she wasn’t sure what would happen “if an officer sees somebody selling some [contraband menthol cigarettes] Newports out of their trunk…I would like us to get a handle on [that] before we have an Eric Garner case in California”. Many participants showed awareness that their policies were precedents for other jurisdictions to follow. An advocate reported that a jurisdiction implementing a novel ordinance helps “a lot of people [in other cities] to understand that this is the next big step that can be taken” in their own community.

Even if a policy change did not seem to have a short-term impact, one participant said, “we really have to take the long view, that we’re creating a flavor-free tobacco region and state…so that youth [eventually] wouldn’t be able to go across the street into the next town and buy these products”. Participants also exhibited awareness that policy innovation in California generally had cities and counties taking the lead, not state government. Asked whether the state’s new endgame focus had influenced his work, one participant replied, “I think our local work has shifted the conversation of the state, to be honest”. Another noted that his organization “always had that vision…even before the state wanted that endgame”. This local, then state, adoption of policy change was normal, as another participant noted: “the idea [is] they grow from the local jurisdictions to make statewide implementation more likely”. That influence could spread not only through the state, but also to other states and from there, “ripple out into the rest of the world”. A couple of participants sounded a warning about this process. One commented that, once policy making started to move forward at the state level, “We need to be very vigilant of preemption”. Another was concerned that state-level action would skip over the community work necessary to make policies acceptable and successful, particularly among communities of color: “You ban flavored tobacco and menthol, and…where’s the community engagement?…There can’t be one without the other or there’s going to be imbalance” between policymakers and those most affected by regulations. Several participants also noted a greater readiness than they had seen in even the recent past for new, innovative policies. One noted this conceptual transition, saying, “To think about the endgame at first was kind of jarring…[but after Beverly Hills] you start to think, wow, maybe this is possible!”.
Another said of his local elected officials, “They wanted a bolder move…They wanted things like, ‘what is the way to end this?…How do we stop this?’” This was a big change, he noted. “There wasn’t a conversation even happening…And that was just the last couple of years” . An advocate from another area reported that, “I’ve heard elected officials say…‘We’re saying we won’t allow pharmacies to sell tobacco anymore. Well, can’t we just say that nobody sells tobacco anymore?’” . One participant saw her role as getting people to believe an “endgame” was possible, saying: “Let’s believe [in zero smoking prevalence], and then we can work towards how we’re going to get there” . Creating an endgame vision and overcoming skepticism seemed increasingly possible in California, as one advocate noted: “the entire United States is learning a lot from California, and I think putting those big goals in front of public health advocates is really making a huge difference, and believing it will happen is making a big difference.

The policies we now consider, and we would have considered impossible, even just two years ago, now people are taking as commonplace”. Previous research [5] in 2018 found California legislators and advocates to be somewhat cautious about endgame-oriented policies, preferring more gradual approaches. This study found an overall sense from interviewees of momentum for policy innovation, with greater belief in the possibility of an endgame. Some of this may be a response to advocates’ having begun to receive funding from the state’s new tobacco tax, enacted in 2016, which enabled local tobacco control agencies and coalitions to hire staff and engage in more ambitious policy-oriented planning. Advocates likely understood the success of the tax as signaling public support for tobacco control efforts in the state. The influx of funding and resources from the tax increase may have also bolstered advocates’ enthusiasm. The greater caution about endgame policies found in previous research may also relate to the specific participants. The previous study included interviews with state legislators and leaders of statewide organizations; the current study prioritized local advocates. Statewide leaders, and particularly members of the state legislature, think in terms of what can be accomplished at the state level, taking into account that a law must get support from legislators who represent communities on a wide spectrum of readiness for policy change. The local advocates knew that their localities could take bolder steps, and judged that they would be willing to do so again. Participants understood that there were challenges ahead, for example, framing the endgame in such a way as to avoid or undercut arguments made in the past by the tobacco industry and its allies.
For example, participants did not feel that the history of alcohol prohibition in the US was an appropriate reference for the tobacco endgame, but understood that it would be important to make that distinction, notably by distinguishing sales bans from prohibitions on possession or use of tobacco products. Making that distinction was also important to establish that new tobacco control policies would not invite further over-policing of marginalized communities, such as occurred in the Eric Garner case. Another challenge that participants foresaw was California’s recent legalization of marijuana for recreational use.

Although there was some concern that the liberalization of marijuana regulation suggested that public opinion would not favor stricter tobacco policy, most participants had a more nuanced perspective: that one could simultaneously favor restricting tobacco sales, especially to youth, while permitting sales of marijuana, especially to adults. The combined use of tobacco products and marijuana meant that policies had to encompass both. Further, some participants proposed that the stricter rules relating to marijuana retail sales could provide a model for tobacco. Participants demonstrated awareness that policy innovations carried risks. Although they identified policies containing exemptions as less than ideal, requiring more complex and expensive enforcement or a difficult amendment process, participants sometimes considered exemptions to be a pragmatic way of advancing a policy; this was true even for a policy traditionally considered so far from “pragmatic” as to be almost unthinkable: a tobacco sales ban. In some cases, participants considered exemptions to be harmful. For example, a flavor ban that exempted menthol “solved” the problem by removing the products most obviously marketed to children and youth. However, it left African Americans still vulnerable, and without the allies concerned about youth-oriented “candy” flavors. There appeared to be broad understanding that the goal of ending the epidemic “for all population groups” meant increased engagement with communities with the highest levels of tobacco use. The trend of California tobacco control policy efforts, led by localities, then followed at state level, was well known to participants. Some suggested that the state’s new focus on a tobacco endgame was the result of local innovation. Participants also recognized that communities took their cues from others, so that policy innovations even in small communities would “ripple outward” and engender wider effects over time.
Our study has limitations. We interviewed a small number of key informants selected because they worked in communities that had recently passed innovative tobacco control policies; thus, our sample cannot be considered representative of all California tobacco control advocates. Those working in more conservative communities may view the idea of an endgame more skeptically. However, other tobacco control policies were once regarded as radical and became more normative with their adoption. Indeed, during the course of the study, Beverly Hills began discussing the first-ever prohibition on tobacco sales. Some participants interviewed before these deliberations found such an idea out of reach, while others, interviewed afterward, remarked that the conversation alone made such policies seem possible. Discussions of the tobacco endgame have frequently focused on complex and drawn-out plans, sometimes involving sizable state investment, such as the proposal that the state should buy out the tobacco industry. Recent events in California suggest a different, and in many ways simpler, future, more in line with the history of tobacco control, in which localities have taken the lead. The first laws in California calling for non-smoking sections in restaurants were local and largely symbolic, but they demonstrated the possibility for clean indoor air, and more, and stronger, laws followed. A tobacco sales ban in a small municipality such as Beverly Hills will not substantially reduce tobacco use in California, but it serves as proof of concept. Municipalities and counties in the U.S. may increasingly recognize and exercise their ability to pass such laws, as the 2014 U.S. Surgeon General’s report suggested. This study, and the recent, rapid spread of policies to ban sales of flavored tobacco products in the state, suggest that tobacco control advocates in California are attentive to such possibilities and willing to act.
This study, and the history of California’s approach to tobacco control more generally, point to the importance of local policy advocacy. Local advocates understand the specific issues in their communities, and have a nuanced perspective on policy development, such as when exemptions or exceptions are and are not acceptable. Local advocates also may be able to implement policies that would not be possible at a state or national level; such policies may seem radical, but passage normalizes them. Not every community is ready, but this study suggests that we should encourage more attention to the local actors and new, small-scale policy changes happening around the world that have the potential to ultimately end the tobacco epidemic. Acknowledgement: Support for this paper was provided by the California Tobacco Related Disease Research Program, Grant 26IR-0003. Cigarette smoking causes and exacerbates chronic obstructive pulmonary disease and asthma, and is associated with wheezing and cough in populations without a respiratory diagnosis. While the relationship between cigarette smoking and respiratory symptoms is well-established, the relationship between use of tobacco products other than cigarettes and respiratory symptoms in adults is less clear. Changes in the tobacco market, in part, reflect efforts to market products that may cause less harm than cigarettes. Electronic nicotine delivery devices may represent such a product. With respect to respiratory symptoms, however, findings have been mixed.
Numerous animal and in vitro studies raise theoretical concerns about e-cigarette use and lung disease. Short-term human experimental studies have linked adult e-cigarette use with wheezing, acute alterations in lung function, and lower forced expiratory flow. One longer-term, 12-week prospective study of cigarette smokers switching to e-cigarettes found no effects on lung function, and two 1-year randomized controlled clinical trials found reduced cough and improved lung function in persons who used e-cigarettes to reduce or quit cigarettes.


Elucidating the role of macro- and neighborhood-level exposures in adolescent psychotic experiences could be particularly informative for early intervention efforts, because the clinical relevance of psychotic phenomena increases later in adolescence. Cities have higher rates of violent crime and tend to be more threatening and less socially cohesive. Additionally, 16–24 year-olds in the United Kingdom are 3 times more likely than other age groups to be victimized by a violent crime. Therefore, many adolescents raised in cities are not only embedded in more socially adverse neighborhoods, but are also more likely to be personally victimized by crime compared to other age groups and peers living in rural neighborhoods. Given that cumulative trauma is implicated in risk for psychosis, we hypothesized that one of the reasons that young people in urban settings are at increased risk for psychotic phenomena is that they experience a greater accumulation of neighborhood-level social adversity and personal experiences of violence during upbringing. No study has yet explored the potential cumulative effects of adverse neighborhood social conditions and personal crime victimization on the emergence of psychotic experiences during adolescence. The present study addresses this topic with data from a nationally-representative cohort of over 2000 British adolescents, who have been interviewed repeatedly up to age 18, with comprehensive assessments of victimization and psychotic experiences and high-resolution measures of the built and social environment. We asked: Are psychotic experiences more common among adolescents raised in urban vs rural settings? And does this association hold after controlling for neighborhood-level deprivation, as well as individual- and family-level factors, that might otherwise explain the relationship?
Can the association between urban upbringing and adolescent psychotic experiences be explained by urban neighborhoods having lower levels of social cohesion and higher levels of neighborhood disorder?

Are psychotic experiences more common among adolescents who have been personally victimized by a violent crime? And is there a cumulative effect of neighborhood social adversity and personal crime victimization on adolescent psychotic experiences? In addition, the present study conducted sensitivity analyses using adolescent psychotic symptoms as the outcome. We conducted analyses following 5 steps. First, logistic regression was used to test whether psychotic experiences were more common among adolescents raised in urban neighborhoods. We controlled for family- and individual-level factors and for neighborhood-level deprivation to check that the association was not explained by these characteristics, which could potentially differ between urban vs rural residents. We also examined the association between urbanicity and adolescent major depression to check for specificity of the previous findings. Second, because urban neighborhoods are characterized by lower levels of social cohesion and higher levels of neighborhood disorder, we tested whether levels of these neighborhood characteristics accounted for the association between urbanicity and adolescent psychotic experiences, and we also estimated the separate associations of social cohesion and neighborhood disorder with adolescent psychotic experiences. Third, using logistic regression we checked whether adolescents who had lived in the most socially adverse neighborhoods were more likely to be personally victimized by violent crime and, in turn, whether psychotic experiences were more common among adolescents who had been victimized. Fourth, using interaction contrast ratio analysis we investigated potential cumulative and interactive effects of adverse neighborhood social conditions and personal victimization by violent crime on adolescent psychotic experiences. Four exposure categories were created for this analysis by combining neighborhood social adversity with personal crime victimization.
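The interaction contrast ratio used in the fourth step measures departure from additivity of the two exposures' effects on the odds-ratio scale. A minimal sketch, using hypothetical odds ratios for illustration (not the study's estimates), each taken relative to the doubly-unexposed reference group:

```python
def interaction_contrast_ratio(or_both, or_adversity_only, or_victimization_only):
    """Interaction contrast ratio (ICR, also called RERI).

    Each argument is an odds ratio relative to the group with neither
    exposure. ICR > 0 suggests a super-additive (cumulative) effect of
    the two exposures; ICR = 0 means the joint effect is exactly the
    sum of the separate effects.
    """
    return or_both - or_adversity_only - or_victimization_only + 1

# Hypothetical odds ratios for neighborhood adversity and crime victimization:
icr = interaction_contrast_ratio(or_both=5.0,
                                 or_adversity_only=1.5,
                                 or_victimization_only=3.0)
print(icr)  # 1.5: the joint exposure exceeds the sum of the separate effects
```

A confidence interval for the ICR would normally be obtained from the covariance of the fitted logistic coefficients (e.g., by the delta method), which is omitted here.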
Finally, sensitivity analyses were conducted using the clinically-verified adolescent psychotic symptoms as the outcome measure. All analyses were conducted in Stata 14.2, and accounted for the non-independence of twin observations using the “CLUSTER” command.
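The cluster adjustment referenced here is the standard maximum-likelihood sandwich estimator. A sketch of its form, assuming independence across families $g = 1, \dots, G$ but arbitrary correlation within a family:

$$\widehat{V}(\hat\beta) \;=\; \hat{A}^{-1}\left(\sum_{g=1}^{G} s_g(\hat\beta)\, s_g(\hat\beta)^{\top}\right)\hat{A}^{-1},
\qquad
s_g(\hat\beta) \;=\; \sum_{i \in g} \left.\frac{\partial \log L_i}{\partial \beta}\right|_{\hat\beta},$$

where $\hat{A}$ is the observed information (the negative Hessian of the log-likelihood) and $s_g$ sums the score contributions of both twins in family $g$, so that within-pair correlation inflates the middle term rather than being ignored.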

This procedure is derived from the Huber-White variance estimator, and provides robust standard errors adjusted for within-cluster correlated data. Note: ordinal logistic regression was used in analyses where adolescent psychotic experiences were the dependent variable, because this measure was on an ordinal scale. This study investigated the role of urbanicity, neighborhood social conditions, and personal crime victimization in adolescent psychotic experiences and revealed 3 initial findings. First, the association between growing up in an urban environment and adolescent psychotic experiences remained after considering a range of potential confounders including family SES, family psychiatric history, maternal psychosis, adolescent substance problems, and neighborhood-level deprivation. This association between urbanicity and psychotic experiences was explained, in part, by 2 features of the neighborhood social environment, namely lower levels of social cohesion and higher levels of neighborhood disorder. Second, personal victimization by violent crime was nearly twice as common among adolescents in the most socially adverse neighborhoods, and adolescents who had experienced such victimization had over 3 times greater odds of having psychotic experiences. Third, the cumulative effect of neighborhood social adversity and personal crime victimization on adolescent psychotic experiences was substantially greater than either exposure alone, highlighting a potential interaction between these exposures. That is, adolescents who had lived in the most adverse neighborhood conditions and been personally victimized were at the greatest risk for psychotic experiences during adolescence.
The present findings extend previous evidence from this cohort implicating childhood urbanicity and neighborhood characteristics in the occurrence of childhood psychotic symptoms. Here we show that the effects of urban and socially adverse neighborhood conditions on psychotic experiences are not limited to childhood, but continue into adolescence, when psychotic phenomena become more clinically relevant. These findings support previous evidence demonstrating higher rates of psychosis-proneness and prodromal status among adolescents and young adults in urban, threatening, and socially fragmented neighborhoods.

Late adolescence heralds the peak age at which psychotic disorders are typically diagnosed. If a degree of aetiological continuity truly exists between adolescent psychotic experiences and adult psychotic disorder, ours and other recent findings tentatively support a mechanism linking adverse neighborhood conditions during upbringing with psychosis in adulthood. In our study, the combined effect of adverse neighborhood social conditions and personal victimization by violent crime was greater than the independent effects of each. This is consistent with cumulative stress models and previous studies showing that risk for psychosis phenotypes increases as the frequency and severity of stressful exposures increase. Several biological and psychological mechanisms could explain why adolescents who were exposed to neighborhood social adversity and violent crime during upbringing were more prone to psychotic experiences. Prolonged and acute early-life stress is purported to dysregulate the biological stress response and lead to dopaminergic sensitization, which is the leading hypothesized neurochemical pathway for the positive symptoms of psychosis. In addition, adolescents who grow up in threatening neighborhoods with weak or absent community networks could develop psychosis-like cognitive schemas such as paranoia, hypervigilance, and negative attributional styles. A cognitive pathway could explain why effects were apparent for psychotic experiences but not major depression. Our findings tentatively suggest a mechanism whereby childhood exposure to neighborhood social adversity sensitizes individuals to subsequent stressful experiences such as crime victimization.
This hypothesized mechanism is supported by recent evidence of neurological differences in social stress reactivity between adults with urban vs rural upbringing. Further research into the influence of neighborhood exposures on childhood neurocognitive development could shed light on this hypothesized mechanism. Several limitations should be considered. First, causality cannot be assumed from this observational study. Noncausal mechanisms, such as the selection of genetically high-risk families into urban and adverse neighborhoods, remain possible, though our findings were not explained by proxy indicators of genetic and familial risk. Second, neighborhood conditions were measured approximately 5 years before adolescent psychotic experiences were assessed. However, the vast majority of adolescents reported that they did not move house between ages 12 and 18. Third, though crime victimization was more common in adverse neighborhoods, we do not know the extent to which these victimization experiences occurred outside the home. Perpetrators of physical violence are often family members, suggesting that our measure of violent crime captured victimization inside as well as outside the home. Fourth, psychotic experiences are associated with adult psychosis but also with other serious psychiatric conditions; while a degree of specificity was suggested in that the effect of urbanicity on psychotic experiences was not replicated for adolescent depression and was not explained by adolescent substance problems, it is probable that the mental health implications of growing up in an urban setting extend beyond psychosis.

In addition, associations arising for the clinically-verified psychotic symptoms were often non-significant. It is possible that the low prevalence of psychotic symptoms in this sample restricted our power to detect associations. However, it is also possible that the self-report measure of adolescent psychotic experiences captured genuine experiences as well as psychotic phenomena. This may have inflated the associations arising for adolescent psychotic experiences, though it is reassuring that point estimates were fairly similar to those produced for psychotic symptoms. Finally, our findings come from a sample of twins, which potentially differs from singletons. However, E-Risk families closely match the distribution of UK families across the spectrum of urbanicity and neighborhood-level deprivation. Furthermore, the prevalence of adolescent psychotic experiences among E-Risk participants is similar to that in non-twin samples of adolescents and young adults. Acute pancreatitis is an acute, inflammatory, potentially life-threatening condition of the pancreas. With over 100,000 hospital admissions per annum, acute pancreatitis is the leading gastrointestinal cause of hospitalization in the United States and the 10th most common non-malignant cause of death among all gastrointestinal, pancreatic, and liver diseases. It is a major cause of morbidity and healthcare expenditure not only in the United States, but worldwide. There are numerous established etiologies of acute pancreatitis, among which gallstones and alcohol are the most common. The remaining cases are primarily attributable to the following etiologic factors: hypertriglyceridemia, autoimmune disease, infection, hyper-/hypocalcemia, malignancy, genetics, endoscopic retrograde cholangiopancreatography, and trauma.
Despite accounting for only approximately 1%-2% of cases overall, drug-induced pancreatitis (DIP) has become increasingly recognized as an additional and vitally important, albeit often inconspicuous, etiology of acute pancreatitis. The World Health Organization database lists 525 different medications associated with acute pancreatitis. Unfortunately, few population-based studies of DIP exist, limiting knowledge of its true incidence and prevalence. In this setting, we review the ever-increasing diversity of DIP, with emphasis on the wide range of drug classes reported and their respective pathophysiologic mechanisms, in an attempt to raise awareness of the true and underestimated prevalence of DIP. We hope this manuscript will aid in increasing secondary prevention of DIP, ultimately leading to a decrease in overall acute pancreatitis-related hospitalizations and economic burden on the health care system. As there is no standardized approach to stratifying patients to determine their risk of developing acute pancreatitis, primary prevention for the majority of etiologies cannot be fully implemented. Secondary prevention of acute pancreatitis, on the other hand, can more easily be executed. For example, abstinence from alcohol reduces the risk of alcoholic pancreatitis, cholecystectomy reduces the risk of gallstone pancreatitis, and tight control of triglycerides reduces the risk of recurrent episodes of pancreatitis secondary to hypertriglyceridemia. On this note, unique to DIP is the fact that it can be prevented in both a primary and secondary fashion. Unfortunately, however, most of the available data on DIP are derived from case reports, case series, or case-control studies. In this vein, causality between specific medications and acute pancreatitis has been established in only a minority of cases.
In addition, lack of a known etiology for acute pancreatitis often directly increases length of hospitalization due to delayed diagnosis and subsequent treatment. Moreover, patients unaware of an adverse drug reaction to a prior medication may continue taking that medication, leading to repeat hospitalizations. Finally, with the rapid expansion of pharmacologic agents, widespread legalization of cannabis, and the growing number of recognized medications, supplements, and alternative medications reported to induce pancreatitis, the need to become familiar with this esoteric group remains imperative, and awareness regarding certain medications is warranted. Several factors limit the ability of clinicians to causally link acute pancreatitis with medications. First, the lack of mandatory adverse drug reporting systems allows many cases to go unreported. Second, bias exists, in the sense that clinicians tend to forgo linking unusual medication suspects to a rare adverse event. Third, it is often difficult to rule out other, more common, causes of acute pancreatitis, especially in patients who have multiple comorbidities and underlying risk factors. Fourth, many cases lack a re-challenge test or drug latency period to definitively link acute pancreatitis to a particular drug. Finally, evidence is lacking to support the use of any serial monitoring technique, namely imaging or pancreatic enzymes, to help detect cases of drug-induced pancreatitis.


The MACH14 cohort is a dataset pooled from 16 studies conducted at 14 sites across 12 states. Each study in MACH14 used electronic data monitoring (EDM) pillcaps to objectively measure participants’ adherence to antiretroviral medication. The focus of this study was on non-methadone substance abuse treatment, so studies conducted in methadone maintenance programs were not considered in this analysis. From the 1579 participants in the MACH14 dataset, we identified 215 from two studies based outside methadone clinics, because only these two studies’ participants had both EDM and substance abuse treatment status data. Written informed consent was obtained for participation in the parent studies, and the Yale Institutional Review Board approved the secondary analyses. Patients were asked about engagement in substance abuse treatment and use of specific substances for varying preceding time frames: one of the two studies asked about participation in substance abuse treatment during the past 90 days and use of specific substances over the past 30 days, while the other study asked about treatment over the past 30 days and substance use over the past 14 days. To aggregate substance use data across studies, variables representing use of specific substances were defined as the proportion of days within the asked-about time frame the person had used each of several substances. This analysis used data collected at the first time point at which participants had EDM data for the preceding four weeks, had also been asked about being recently enrolled in substance abuse treatment, and were not enrolled in a methadone-clinic-based study. To estimate the effect of substance abuse treatment on adherence, adherence was calculated for the four weeks up to and including the date recent substance abuse treatment enrollment was assessed, as well as for the four weeks after the substance abuse treatment determination.
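Harmonizing the two studies' different recall windows into comparable proportions of days can be sketched as follows (function and argument names are hypothetical, for illustration only):

```python
def proportion_of_days_used(days_used, recall_window_days):
    """Normalize a substance-use report to the fraction of days used
    within the study-specific recall window (e.g., 30 or 14 days),
    so reports from studies with different windows are comparable."""
    if not 0 <= days_used <= recall_window_days:
        raise ValueError("days_used must lie within the recall window")
    return days_used / recall_window_days

# A 30-day-window report and a 14-day-window report become the same proportion:
print(proportion_of_days_used(15, 30))  # 0.5
print(proportion_of_days_used(7, 14))   # 0.5
```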
Adherence in each week was calculated by dividing the weekly number of doses taken by the weekly number of prescribed doses for each medication, with adherence to each medication capped at 100%.
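This weekly calculation, together with the averaging across a patient's medications, can be sketched as follows (names are hypothetical; the cap prevents extra doses from inflating adherence above 100%):

```python
def weekly_adherence(doses_taken, doses_prescribed):
    """Weekly adherence for one medication: taken / prescribed, capped at 100%."""
    if doses_prescribed <= 0:
        raise ValueError("doses_prescribed must be positive")
    return min(doses_taken / doses_prescribed, 1.0)

def patient_adherence(per_med_doses):
    """Average capped adherence across a patient's prescribed medications.

    per_med_doses: list of (doses_taken, doses_prescribed) pairs,
    one pair per antiretroviral medication for the week."""
    return sum(weekly_adherence(t, p) for t, p in per_med_doses) / len(per_med_doses)

# A patient on two antiretrovirals: 12 of 14 doses, and 16 of 14 (capped at 100%)
print(patient_adherence([(12, 14), (16, 14)]))  # ≈ 0.929
```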

Adherence for a patient on multiple antiretrovirals was calculated by averaging across prescribed medications. The effects of substance abuse treatment on adherence were determined in multivariate analyses that included a grouping variable denoting whether the patient was enrolled in substance abuse treatment and a variable reflecting substance abuse treatment over time. The analyses were conducted controlling for sociodemographic characteristics that might differ between patients in, and not in, substance abuse treatment. To control for the anticipated finding that patients in substance abuse treatment would have more active drug use than a reference group including people who had never had significant substance use, analyses included a measure representing the largest proportion of days during which participants had used either cocaine, opiates, or stimulants. Cannabis use was not included in this measure of illicit drug use because, in a separate analysis of the MACH14 dataset and in an earlier study, recent cannabis use was not associated with worse adherence. Analyses were run with SAS 9.2. The model included random effects for intercept and slope, as this model had better fit to the data than models with fixed effects only. Although the analyses controlled for illicit drug use, it is possible that our self-report measures of substance use understated the impact of substance abuse treatment on substance abuse and that it is in fact abstinence that facilitates adherence. In one of the few randomized controlled studies of HIV-positive drug users in which abstinence was the target outcome, there was a trend towards a significant correlation between consecutive weeks of toxicology-tested abstinence during the intervention and reductions in viral load. There is also evidence from a naturalistic longitudinal cohort study that attendance at HIV treatment, a sine qua non for adherence, appears to improve with newly-achieved abstinence.
Substance abuse treatment might improve adherence by mechanisms other than facilitating abstinence from using drugs. Substance abuse treatment typically involves case management to address the unstable housing characteristic of drug users. Stable housing arrangements during substance abuse treatment would be expected to foster adherence, in that stable routines have been associated with better adherence.

Substance abuse treatment also focuses patients on future goals, an orientation that has been described as fostering adherence, and substance abuse treatment can involve re-arranging social networks in ways that also might foster better adherence. It is possible that enrollment in substance abuse treatment reflects a lurking, un-measured variable associated with both being in substance abuse treatment and better adherence. The finding of better adherence among people in substance abuse treatment was not buttressed by finding better adherence over time among patients in treatment. However, it might have been difficult to detect the time course of benefit from substance abuse treatment because the data did not specify when patients were entering, continuing, or finishing substance abuse treatment. Substance abuse was measured by self-report, and it is possible that substance abuse was disproportionately under-reported by people out of substance abuse treatment, thus exaggerating the impact of substance abuse treatment on adherence. The type of substance abuse treatment was not specified, and the findings may not apply to all types of substance abuse treatment. Finally, the sample size was modest, and the number of participants in substance abuse treatment was small. It is noteworthy that although adherence decreased on average over time, the course of adherence varied significantly by person. Further analyses should test variables that may account for individual differences in adherence over time. These findings lend some support to the clinical practice of addressing substance use in an effort to improve adherence. The crucial next step is to develop and prospectively test substance abuse-focused interventions for patients with both substance abuse and adherence problems. Marijuana has been used for hundreds of years for mystical and religious ceremonies, for social interaction, and for therapeutic uses.
The primary active ingredient in marijuana is delta-9-tetrahydrocannabinol (∆9-THC), one of some 60 21-carbon terpenophenolic compounds known as cannabinoids, which exerts its actions via the cannabinoid receptors referred to as CB1 and CB2. Endogenous cannabinoids have been isolated from peripheral and nervous tissue. Among these, N-arachidonoylethanolamine (anandamide, AEA) and 2-arachidonoylglycerol (2-AG) are the best-studied examples. Behaviorally, AEA increases food intake and induces hypomotility and analgesia.

Anandamide also induces sleep in rats. Cannabinoid stimulation stabilizes respiration by potently suppressing sleep apnea in all sleep stages. In humans, marijuana and ∆9-THC increase deep sleep. The mechanism by which cannabinoids induce sleep is not known, hampering the development of this drug for possible therapeutic use. The sleep-inducing effects of cannabinoids could be linked to endogenous sleep factors, such as adenosine (AD). There is substantial evidence that AD acts as an endogenous sleep factor. Extracellular levels of AD measured by microdialysis are higher in spontaneous waking than in sleep in the basal forebrain, but not in other brain areas such as the thalamus or cortex. Given this evidence, we hypothesized that the soporific effects of AEA could be associated with increased AD levels. In the present study, extracellular AD levels were assessed in the basal forebrain via microdialysis. The basal forebrain was sampled because this region is particularly sensitive to AD. The cholinergic neurons located in the basal forebrain are implicated in maintaining waking behavior, and it is hypothesized that sleep results from accumulating AD, which then inhibits the activity of the wake-active cholinergic neurons. Our results show that systemic application of AEA leads to increased AD levels in the basal forebrain during the first 3 hours after injection, and total sleep time is increased in the third hour. These findings identify a possible mechanism by which the endocannabinoid system influences sleep. Previous reports have found that application of AEA directly to the brain increases total sleep time and slow-wave sleep (SWS). We have now shown that systemic administration of AEA has the same effect. More importantly, the increased sleep is associated with increased extracellular levels of AD in the basal forebrain.
The increase in sleep induced by AEA occurred during the third hour after injection of the compound and coincided with peak AD levels. In each of the first 2 hours, AD levels were significantly higher than after vehicle injection, with the peak in AD occurring in the third hour. Increased sleep was not evident in the first 2 hours, suggesting that a threshold accumulation of AD might be necessary to drive sleep. In the fourth hour, AD decreased dramatically relative to the third hour, to levels no different from those observed after vehicle administration. There was no significant difference in delta power between AEA and DMSO, even though the percentage of SWS was higher with AEA. Sleep is hypothesized to result from accumulating AD, which then declines as a consequence of sleep.

This effect is present in the basal forebrain and not in other brain areas. Thus, this purine is hypothesized to act as a homeostatic regulator of sleep: its buildup increases sleep drive, and as AD levels decline, sleep drive also diminishes. The present data are consistent with this hypothesis in that peak sleep occurred with peak AD levels, after which, as a result of sleep, AD levels declined. The CB1-receptor antagonist blocked the AEA-induced increase in AD as well as the sleep-inducing effect. The CB1-receptor antagonist SR141716A has been tested in diverse behavioral paradigms, and it blocks the effects induced by AEA. Santucci and coworkers demonstrated that administration of SR141716A increases waking (W) and decreases SWS. Here we replicated these effects and also demonstrated that the increase in AD levels after injection of AEA was blocked by the CB1-receptor antagonist. AEA exerts its effect via the CB1 receptor and hyperpolarizes the neuron. CB1 receptors are coupled to the Gi/Go family of G-protein heterotrimers; activation of the CB1 receptor inhibits adenylate cyclase and decreases synthesis of cAMP. In rats, the CB1 receptor is localized in the cortex, cerebellum, hippocampus, striatum, thalamus, and brainstem. The CB1 receptor is also present on basal-forebrain cholinergic neurons, as determined by immunocytochemistry, and CB1-receptor mRNA is present in the basal forebrain. This receptor is also found in the brainstem, where the cholinergic pedunculopontine tegmental region is implicated in W. Microinjection of AEA into this region decreases W and increases REM sleep. CB1 receptors are also localized in the thalamus, an area implicated in producing slow waves in the EEG. Activation of these receptors in the pedunculopontine tegmentum, basal forebrain, and thalamus may decrease the firing of wake-active neurons, resulting in sleep.
Additionally, accumulation of AD in the basal forebrain may inhibit the cholinergic neurons and thereby increase sleep. Direct injections of AEA into the basal forebrain were not possible, since AEA dissolves only in DMSO and alcohol; moreover, delivery of AEA dissolved in DMSO clogs the microdialysis membrane. The mechanism by which AEA increases AD in the basal forebrain is not known, although both AD and AEA could directly inhibit wake-active neurons, given the inhibitory action of these agents at their receptors. Nevertheless, there is evidence of an interaction between the adenosinergic and endocannabinoid systems. For example, the motor impairment induced by the principal component of cannabis, ∆9-THC, is enhanced by adenosine A1-receptor agonists. We now show that stimulation of the endocannabinoid system via the CB1 receptor increases AD in the basal forebrain. The endocannabinoids and AD may regulate sleep homeostasis via second and third messengers, as we have hypothesized. Previously, investigators have shown that stage 4 sleep in humans is increased in response to administration of ∆9-THC or smoking of marijuana cigarettes. We now show that this soporific effect is associated with an increase in AD levels in the basal forebrain. Cannabinoid stimulation suppresses sleep apnea in rats, and A1-receptor stimulation has the same effect. The endocannabinoid system also influences other neurotransmitter systems, in particular inhibiting the glutamatergic system. It would be important to determine whether endogenous levels of specific neurotransmitters change as a result of AEA-induced activation of the CB1 receptor. Irrespective of the mechanism involved, our studies underscore the importance of endocannabinoid-AD interactions in sleep induction and open new perspectives for the development of soporific medications.
The discovery of the cannabinoid receptors and their endocannabinoid ligands has generated a great deal of interest in identifying opportunities for the development of novel cannabinergic therapeutic drugs. Such an effort was first undertaken three decades ago by several pharmaceutical companies, but it was rewarded with only modest success.