This pattern of expression in the three subfields of the cornu ammonis was very similar

To reveal the site of synthesis of the endocannabinoid 2-AG in the human hippocampus by determining the localization of its predominant synthesizing enzyme, DGL-α, we first sought to identify an antibody with unequivocal specificity for this transmembrane serine hydrolase. Therefore, DGL-α-immunostaining was performed and compared in hippocampal sections derived from wild-type or DGL-α knockout mice. Using an affinity-purified antibody raised against a large intracellular loop on the C-terminus of DGL-α, immunoperoxidase reaction revealed at low magnification that the generally dense distribution of DGL-α-immunostaining followed the topographic arrangement of glutamatergic pathways in the wild-type hippocampus. In contrast, the immunoreactive material was almost entirely absent in the DGL-α knockout hippocampus, confirming the specificity of the “DGL-α INT” antibody. At higher magnification, the differences in staining intensity between the somatic and dendritic layers were even more pronounced. While nuclei and cell bodies in the principal cell layers were largely devoid of DGL-α-immunoreactivity, an intense punctate staining pattern was observed throughout the neuropil in those layers, which contain a high density of excitatory synapses in the hippocampus. This was in accordance with the observations we have reported earlier using this antibody in the hippocampus and in other regions. On the other hand, this punctate labeling was largely missing in DGL-α knockout hippocampi. Therefore, in the next set of experiments, we incubated hippocampal sections derived from human subjects together with hippocampal sections derived from wild-type C57BL/6 mice using the “DGL-α INT” antibody. At low magnification, immunofluorescence staining for DGL-α was unevenly distributed throughout the human hippocampal formation.

This pattern followed the laminar organization of the hippocampus and was found to be largely similar in mice. At higher magnification, the highest density of DGL-α-immunoperoxidase reactivity was observed in the strata oriens and radiatum of the cornu ammonis subfields, and in the inner molecular layer of the dentate gyrus, whereas a somewhat weaker, but still significant, density of DGL-α-immunoreactivity was found in the strata pyramidale and lacunosum-moleculare of the cornu ammonis and in the outer two-thirds of the stratum moleculare. Somata of pyramidal cells and dentate gyrus granule cells contained only very low amounts of DGL-α-immunolabeling. At even higher magnification, the punctate staining pattern also showed striking similarities with the pattern observed in wild-type mice. This widespread granular pattern of DGL-α-immunoreactivity was visible throughout the hippocampal formation, but its distribution varied with regard to given subcellular profiles. For example, in the stratum radiatum of the CA1 subfield, DGL-α-positive granules were distributed along the main trunk of the apical dendrites of pyramidal cells, whereas the trunk itself was devoid of immunostaining. Similarly, apical and possibly oblique dendrites of granule cells also appeared to be outlined on their surface by dense DGL-α-immunolabeling. To reveal the precise subcellular position of DGL-α in principal cells of the human hippocampus, we first tested the specificity of the “DGL-α INT” antibody at the ultrastructural level. Hippocampal sections from mice with different genotypes were processed together within the same incubation wells to ensure identical treatment throughout the immunostaining procedure.
Further high-resolution electron microscopic analysis of samples taken from the stratum radiatum of the CA1 subfield of the wild-type hippocampus revealed that DGL-α-immunoreactivity was predominantly concentrated in dendritic spine heads receiving asymmetric, putative excitatory synapses, in accordance with previous findings. Altogether, at least ~24% of dendritic spine heads were unequivocally positive for DGL-α-immunoreactivity in our wild-type random samples; this ratio should be treated as a conservative estimate, limited by epitope accessibility.

In contrast, under identical staining conditions, only two out of 201 spine heads contained weak immunoperoxidase reaction end product in sections taken from the DGL-α knockout mouse, indicating the low level of background in this immunostaining experiment. To determine whether in the human hippocampus the same subcellular domain, namely the postsynaptic spine head, corresponds to the punctate staining pattern observed at the light microscopic level, hippocampal sections from human subjects with DGL-α-immunostaining were also processed for further electron microscopic analysis. Two regions were selected for detailed investigation: the stratum radiatum of the CA1 region and the inner third of the stratum moleculare of the dentate gyrus. In both regions, the DAB end product of the immunoperoxidase staining procedure, representing the subcellular position of DGL-α, was concentrated in dendritic spine heads protruding from DGL-α-immunonegative dendritic shafts. Because the majority of hippocampal GABAergic interneurons, including, for example, basket cells, are aspiny, the widespread occurrence of DGL-α in this characteristic subcellular compartment also reveals that principal cells express this enzyme in the human hippocampus. Notably, the DAB precipitate was consistently present within the spine heads through consecutive ultrathin sections. In contrast to this high concentration of DGL-α in dendritic spines, the intensity of DGL-α-immunoreactivity did not reach the detection threshold in other subcellular profiles, such as excitatory and inhibitory axon terminals or glial processes, in the human hippocampus.
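The knockout control above (2 of 201 spine heads weakly labeled) implies a background rate of about 1%. As a worked check on how much uncertainty such a small count carries, a Wilson score interval can be computed with a few lines of stdlib Python; this helper is purely illustrative and was not part of the original analysis.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.
    More reliable than the normal approximation when counts are small."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Background labeling in the DGL-alpha knockout: 2 of 201 spine heads
lo, hi = wilson_interval(2, 201)
# point estimate ~1%; the interval stays well below the ~24% seen in wild-type
```

Even the upper bound of this background interval sits far below the ~24% of labeled spine heads observed in wild-type tissue, which is what makes the specificity argument convincing.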
Taken together, these data confirm previous findings that DGL-α accumulates postsynaptically in dendritic spines of principal cells in the mouse hippocampus and suggest that this 2-AG-synthesizing enzyme has a conserved function in the regulation of retrograde endocannabinoid signaling, based on its entirely similar postsynaptic localization at excitatory synapses in the mouse and human hippocampus. If the enzyme responsible for 2-AG biogenesis is postsynaptically located, whereas its receptor is presynaptically positioned, then the next important question is where the 2-AG signal is terminated at excitatory synapses in the human hippocampus. Because MGL knockout mice have not yet become available to use as specificity controls, we employed two independent antibodies recognizing different epitopes of the MGL protein to characterize the regional distribution and subcellular localization of 2-AG’s principal hydrolyzing enzyme, MGL, in the human hippocampal formation.

Immunofluorescence staining for MGL using two different antibodies recognizing independent epitopes of the MGL protein resulted in a comparable distribution pattern, although the general density of staining was stronger for the “MGL-mid” antibody in human hippocampal sections. Notably, as with the DGL-α-immunostaining, the distribution pattern of MGL mirrored the laminar structure of the hippocampal formation and was found to be similar in mouse and human hippocampi. At higher magnification, the stratum oriens showed the strongest density of MGL-immunoperoxidase reactivity in the cornu ammonis, but profound staining was also observed in the strata pyramidale and radiatum. Immunoperoxidase labeling for MGL was also found in the hilus and in the stratum moleculare of the dentate gyrus, with somewhat stronger MGL-immunoreactivity visible in the outer two-thirds of the dentate molecular layer. Interestingly, this latter intensity pattern was in contrast with the distribution of DGL-α, which was more abundant in the inner third of the dentate molecular layer. At even higher magnification, cell bodies of pyramidal cells and granule cells were only weakly or not at all MGL-positive. Moreover, apical dendrites of pyramidal and granule cells were also largely devoid of immunolabeling for MGL. On the other hand, the neuropil among these dendrites and throughout the dendritic layers contained dense, punctate MGL-positive staining.
These varicosities were small, distributed with different densities in distinct layers, and were often arranged in an array-like manner, reminiscent of the DGL-α-immunoreactivity pattern at the light microscopic level. To test the prediction that the comparable dotted immunostaining pattern for DGL-α and MGL is due to the similar subcellular compartmentalization of these two enzymes with opposing functions in the metabolism of 2-AG, we performed a high-resolution electron microscopic analysis of MGL-immunostaining in the human hippocampal formation. The same regions were selected for detailed investigation as for DGL-α: the stratum radiatum of the CA1 region and the inner third of the stratum moleculare of the dentate gyrus. Importantly, both antibodies revealed an identical staining pattern at the ultrastructural level. In addition, no differences in MGL-immunostaining were observed between the strata radiatum and moleculare. At asymmetric, presumably glutamatergic synapses, MGL-immunoreactivity was restricted to presynaptic axon terminals, in contrast to the postsynaptic localization of DGL-α. These MGL-positive boutons terminated most often on dendritic spine heads, but occasionally dendritic shafts were also present among their postsynaptic targets. The DAB end product indicating the presence of the MGL protein was predominantly found in the central part of the axon terminals, often close to synaptic vesicles and to active zone release sites, and could be consistently followed through consecutive ultrathin sections of the same terminals. Besides the immunolabeling in axon terminals, MGL-immunoreactivity also appeared in thin axonal segments that could often be identified as preterminal axons through serial sections. In contrast to axonal profiles, consistent MGL-immunoreactivity, confirmed with both antibodies, remained under the detection threshold at postsynaptic sites, dendritic shafts, cell bodies and glial processes.

Taken together, the abundance of MGL in axon terminals indicates that the majority of postsynaptically released 2-AG is inactivated presynaptically, close to its target, the CB1 cannabinoid receptor. Moreover, together with similar findings in the rodent hippocampus, these data also suggest that the entire molecular architecture of retrograde 2-AG signaling at excitatory synapses is evolutionarily conserved across species. Despite the compelling association of impaired endocannabinoid signaling with several neurological and psychiatric disorders, our knowledge regarding the molecular architecture of the endocannabinoid system in the human brain is still limited. In the present study, we provide evidence that the enzymatic machinery responsible for the metabolism of the endocannabinoid 2-AG is also present in the human brain; that its distribution follows the topographic layout of excitatory, glutamatergic pathways in the human hippocampal formation; and, finally, that its enzymes are restricted to complementary subcellular compartments at excitatory synapses. DGL-α, the key serine hydrolase in the biosynthesis of 2-AG, is found postsynaptically. In contrast, MGL, the primary serine hydrolase responsible for hydrolyzing 2-AG, is localized presynaptically. Together with the presynaptic position of CB1 cannabinoid receptors on glutamatergic axon terminals in the human hippocampus, these data suggest that the molecular architecture of 2-AG signaling underlies 2-AG’s postulated function as a retrograde synaptic messenger. Moreover, these findings also indicate that retrograde 2-AG signaling is an evolutionarily conserved feature of hippocampal excitatory synapses, and its similar organization in rodents and humans may help to offer plausible strategies for human medical research based on experimental findings obtained in rodents.
An important implication of the present findings is the central role of DGL-α and 2-AG in the regulation of excitatory synaptic communication in the human hippocampus. Immunostaining for DGL-α at the light microscopic level resulted in abundant punctate staining throughout the neuropil, which delineated the layered structure of the human hippocampal formation. On the other hand, characteristic profiles, such as cell bodies and major dendritic trunks, were weakly or not at all labeled. The granular pattern and its uneven, layered distribution suggest that DGL-α has a compartmentalized distribution at the subcellular level. The intense staining and its overlap with glutamatergic afferent pathways indicate that this compartment may be the glutamatergic synapse. Indeed, further electron microscopic examination revealed that DGL-α is exclusively found in postsynaptic spine heads receiving asymmetric, presumably excitatory glutamatergic synapses. This characteristic postsynaptic position was found both in the stratum radiatum of the CA1 subfield and in the stratum moleculare of the dentate gyrus. On the other hand, the dendritic shafts from which these DGL-α-containing spines protrude, axon terminals and glial profiles were not consistently labeled, suggesting that even if these subcellular domains hold low, at present undetectable, levels of the DGL-α enzyme, the majority of 2-AG biosynthesis occurs postsynaptically at glutamatergic synapses in the human hippocampal formation. This peculiar subcellular position of DGL-α highlights its key function in the initiation of synaptic endocannabinoid signaling, whose human occurrence has been postulated based on numerous animal studies, but has never before been demonstrated in human nervous tissue.
Using electron microscopy, a series of recent neuroanatomical studies reported a very similar postsynaptically compartmentalized distribution of DGL-α in several rodent brain areas, for example in the prefrontal cortex, the hippocampus, the striatum, the ventral tegmental area, the cerebellum, the auditory brainstem and even the dorsal horn of the spinal cord. Thus, we propose that the matching postsynaptic localization of DGL-α in the human hippocampus and in many rodent brain areas indicates that DGL-α is an evolutionarily conserved component of excitatory synapses, and thereby its synaptic functions established in animal experiments can be extrapolated to the human brain as well.


While renewed interest in psychedelic medicine is challenged by various funding, methodological and legal impediments, the emerging evidence indicating improved outcomes for some individuals suffering from mental health and addiction issues has generated new scientific inquiry and an imposing obligation to advance this research. Recent observational studies in the USA demonstrate significant associations between lifetime psychedelic use and reduced recidivism and intimate partner violence among populations of prison inmates, and reduced psychological distress and suicidality among the general adult population. Despite the multifaceted structural and social inequities that shape the poor mental health burden among marginalised and street-involved sex workers, there remains a paucity of data on suicide rates and of research that systematically examines factors that potentiate or mitigate suicidality among sex workers, particularly in the global north. Some evidence suggests that psychedelic drug use may be protective with regard to suicidality and is associated with significant improvements in psychological well-being and reductions in depression and anxiety in clinical settings, yet existing research is characterised by large gaps. Given the urgency of addressing and preventing suicide and calls for prioritising innovative interventions, this study aimed to longitudinally investigate whether lifetime psychedelic drug use is associated with a reduced incidence of suicidality among a community-based prospective cohort of marginalised women. We postulated that psychedelic drug use would have an independent protective effect on suicidality over the study period. Data for this study were drawn from a large, community-based, prospective cohort of women sex workers initiated in 2010, known as An Evaluation of Sex Workers Health Access (AESHA).

Eligibility criteria for study participants included cisgender or transgender women, 14 years of age or older, who had exchanged sex for money within the last 30 days. AESHA participants completed interviewer-administered questionnaires and HIV/sexually transmitted infection/hepatitis C virus serology testing at enrolment and biannually. Experiential staff are represented across the interview, outreach and nursing teams, including coordinators with substantial community experience. Participants were recruited across Metro Vancouver using time–location sampling and community mapping strategies, with day and late-night outreach to outdoor sex work locations, indoor sex work venues and online. Weekly outreach by experiential staff is conducted to over 100 sex work venues by outreach/nursing teams operating a mobile van, with regular contact as well as encouragement to drop in to women-only spaces at the research office, contributing to an annual retention rate of >90% for AESHA participants. The main interview questionnaire elicits responses related to sociodemographics, the work environment, client characteristics, intimate partners, trauma and violence, and comprehensive injection and non-injection drug use patterns. The clinical questionnaire relates to overall physical, mental and emotional health, and to HIV testing and treatment experiences, to support education, referral and linkage with care. The research team works in close partnership with the affected community and a diversity of stakeholders and regularly engages in knowledge exchange efforts. AESHA is monitored by a Community Advisory Board of over 15 sex work, women’s health and HIV service agencies, as well as representatives from the health authority and policy experts, and holds ethical approval through the Providence Health Care/University of British Columbia Research Ethics Board. All participants receive an honorarium of $C40 at each biannual visit for their time, expertise and travel.
To capture initial episodes of suicidality, analyses for this study were restricted to AESHA participants who had never thought about or attempted suicide at baseline and completed at least one follow-up visit between January 2010 and August 2014. Those with missing observations for suicidality at baseline were excluded from analysis, and one additional participant was excluded because reported suicidality was missing at follow-up.

Using extended Cox regression, unadjusted and adjusted hazard ratios and 95% CIs were calculated to identify predictors of suicidality. Psychedelic drug use, hypothesised a priori to be a predictor of suicidality, and variables that were significantly correlated with the outcome at the p<0.10 level in bivariate analyses were subsequently fitted into a multi-variable Cox model. Backward model selection was used to determine the final multi-variable model with the best overall fit, as indicated by the lowest Akaike information criterion (AIC) value. A complete case analysis was used, where observations with missing data were excluded from analyses, and participants who were lost to follow-up were right-censored at their most recent study visit. All statistical analyses were performed using SAS software V.9.4. Two-sided p values are reported. This study demonstrated that among marginalised women, many of whom are street-involved and experience a disproportionate burden of violence, trauma, psychological distress and suicide, naturalistic psychedelic drug use predicted a significantly reduced hazard of suicidality. Crystal methamphetamine use and childhood abuse predisposed women to suicidality, corresponding to more than a threefold increased hazard. Suicidality was highly prevalent, with almost half of the women reporting lifetime suicidality at baseline, and 11% reporting a first episode of suicidality in the last 6 months during follow-up. Few studies have longitudinally examined predictors of suicidality among marginalised sex workers, and of the available data, most are cross-sectional and/or conducted in lower-income and middle-income settings. The present study, based on a community-based, prospective cohort of marginalised women, adds to a growing body of literature documenting the protective and therapeutic potential of psychedelic substances. Data were self-reported, and questions pertaining to events that occurred in the past may be subject to recall bias.
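The backward selection step described above can be sketched generically: repeatedly try dropping each remaining variable, keep the drop that lowers the AIC (2k − 2·log-likelihood), and stop when no single removal helps. The variable names and the `toy_fit` log-likelihood below are invented for illustration only; the actual study fitted extended Cox models in SAS.

```python
def aic(loglik, k):
    # Akaike information criterion: lower is better
    return 2 * k - 2 * loglik

def backward_select(variables, fit):
    """Greedy backward elimination minimizing AIC.
    `fit(vars)` must return the model log-likelihood for that variable set."""
    current = list(variables)
    best = aic(fit(current), len(current))
    improved = True
    while improved and current:
        improved = False
        for v in list(current):
            candidate = [x for x in current if x != v]
            score = aic(fit(candidate), len(candidate))
            if score < best:  # dropping v improves fit per parameter
                best, current, improved = score, candidate, True
    return current, best

# Toy log-likelihood: only the first two (hypothetical) covariates carry signal
signal = {"psychedelic_use": 4.0, "meth_use": 3.0, "indoor_work": 0.2}
def toy_fit(vars_):
    return sum(signal.get(v, 0.0) for v in vars_)

kept, final_aic = backward_select(list(signal), toy_fit)
# the weak covariate is eliminated; the informative ones survive
```

In the toy run, `indoor_work` is dropped because its tiny likelihood contribution does not justify an extra parameter, mirroring how AIC-based selection trades fit against model size.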
Variables examined included sensitive and highly stigmatised topics such as childhood trauma, violence and illicit drug use, which introduce the potential for social desirability and reporting bias. However, the likelihood of these biases is reduced by the community-based nature of the study. While lifetime psychedelic drug use was found to reduce the hazard of suicidality, the associations uncovered in this analysis cannot be determined to be causal.

However, the use of Cox regression analysis in this study made it possible to establish a temporal relationship between psychedelic use and suicidality. The sample was restricted to participants who had not experienced suicidal ideation or attempts at baseline, ensuring that psychedelic use preceded suicidality and thus providing evidence consistent with a protective effect of psychedelics. Due to a lack of statistical power, analyses evaluating the effects of more nuanced indicators of psychedelic use, as well as separate analyses for ideation and attempt outcomes, were not feasible. Further examination of these variables would certainly be interesting and important in future analyses with additional data from follow-up questionnaires. Suicidality is influenced by complex individual, interpersonal and structural variables, and not all potential confounding variables could be controlled for in this study. For example, women who use psychedelics may also possess some characteristic associated with a reduced likelihood of being suicidal, which was not examined in this study. Despite the relative safety of psychedelic drug use as evidenced in the clinical and non-clinical literature, it should be noted that the use of psychedelics, particularly at unknown doses sourced from unregulated street markets, is not without risk, highlighting the importance of set and setting; the doses and contexts of psychedelic use among women in the present study could not be determined. The SE for the association between psychedelic use and suicidality was somewhat high, resulting in a wider CI. However, a large and significant protective effect was demonstrated in multi-variable analysis, despite the relatively small number of events for suicidality over follow-up. With a larger sample size, we would expect a narrower CI for this association.
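The link between the SE and the CI width noted above follows directly from the log scale on which hazard ratios are estimated: the 95% CI is exp(ln HR ± 1.96·SE), so a larger SE widens the interval multiplicatively. The numbers below are hypothetical, chosen only to illustrate the shape of the calculation, not taken from the study's tables.

```python
import math

def hr_ci(hr, se_log_hr, z=1.96):
    """95% CI for a hazard ratio, given the standard error of log(HR)."""
    log_hr = math.log(hr)
    return math.exp(log_hr - z * se_log_hr), math.exp(log_hr + z * se_log_hr)

# Hypothetical values: a protective HR of 0.40 with SE(log HR) = 0.45
lo, hi = hr_ci(0.40, 0.45)
# wide interval, but the upper bound still sits below 1 (protective)
```

With few events, SE(log HR) is large, and the interval balloons on the ratio scale even though the point estimate is unchanged; this is why a larger sample would be expected to tighten the CI.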
The study population included women from a wide-ranging representation of sex work environments, yet findings may not be fully generalisable to sex workers in other settings. The mapping of working areas and time–location sampling helped to ensure a representative sample and to minimise selection bias. To the best of our knowledge, this study is the first to longitudinally investigate associations with suicidality among marginalised and street-involved sex workers in North America, and it builds on prior cross-sectional research highlighting significantly elevated rates of suicidality and unmet mental health needs in this population. For example, a study conducted in Sydney, Australia demonstrated significant links between depression, trauma and suicidality, where an estimated 42% of street-based female sex workers reported attempting suicide and 74% reported lifetime suicidal ideation. While estimates of mental illness vary significantly across sex work settings, up to three-quarters of street-involved and drug-involved sex workers in a US study reported severe depression, anxiety or PTSD.

Notably, our study demonstrated a lower risk of suicidality among women working indoors in bivariate analysis, lending support to the critical role of safer workplace environments in mitigating risk. In studies conducted in Asia, recent suicide attempts ranged from 19% among sex workers in Goa, India to 38% among sex workers in China, many of whom work in marginalised settings with few workplace protections. Transgender women involved in sex work, a sub-population experiencing significant psychosocial vulnerability and discrimination, report even further elevated rates of suicidality: three-quarters of participants in San Francisco reported suicidal ideation, of whom 64% had attempted suicide. The global evidence is unequivocal that in settings where sex work is criminalised, sex workers are unable to access essential social, health and legal protections, highlighting the need for structural and community-led interventions to improve health and human rights. A structural approach to mitigating suicidality risk requires a reform of laws and policies that perpetuate stigma, discrimination, violence and unequal access to health and social supports among sex workers. Increased support for community-driven interventions that are gender- and culturally appropriate is urgently needed, and any clinical treatment utilising psychedelics must be developed alongside sex worker-led interventions and community empowerment. Our findings extend research on associations between lifetime use of illicit drugs and increased risk of suicidality: in bivariate analysis, all classes of illicit drugs were demonstrated to increase the hazard of suicidality, with the exception of psychedelics. In multi-variable analysis, psychedelics were independently associated with a 60% reduced hazard of suicidality, contributing to emergent evidence on the potential of psychedelics to mitigate risks for suicide.
Among the various scientific studies examining the potential benefits of psychedelic drug use, a recent and large population study conducted among adult respondents in the USA demonstrated that psychedelics are associated with reduced psychological distress and suicidality. A recent open-label trial conducted in the UK demonstrated the safety and efficacy of psilocybin for treating major depression, and another open-label trial in Brazil found rapid and sustained antidepressant effects of the Amazonian psychedelic brew ayahuasca administered in a clinical setting. The ways in which psychedelics may alleviate suffering associated with some mental illnesses are undoubtedly complex. It has been hypothesised that psychedelics modify neurobiological processes that may be involved in suicidality by downregulating 5-HT2A serotonin receptors, as increased binding of this receptor has been implicated in major depression and suicide. Furthermore, there is evidence that psychedelics alter neural network connectivity and enhance recall of autobiographical memories, which may facilitate positive reprocessing of trauma. Recent randomised, placebo-controlled, crossover studies found that psilocybin and LSD were associated with increased positive mood and psychological well-being, supporting other work demonstrating the antidepressive/anxiolytic effects of psychedelics. The potential of psychedelics to elicit ‘mystical-type’ experiences, with profound and sustained positive changes in attitudes and mood, may play a key role in addiction treatment interventions. For example, psilocybin-assisted psychotherapy demonstrated high success in smoking cessation outcomes at 6 months follow-up, and mystical experiences generated in the psilocybin sessions were significantly correlated with elevated ratings of personal meaningfulness, well-being and life satisfaction. Randomised controlled trials in the USA and Switzerland have demonstrated significant long-term improvements among
patients with treatment-resistant PTSD following MDMA-assisted psychotherapy, and further research is continuing in an international multisite phase 3 clinical trial. Marginalised and street-based sex workers experience complex and synergistic effects between trauma, lack of workplace safety and mental health/substance use comorbidities that elevate the risk of suicidality. Marginalised women and sex workers who use drugs report high rates of childhood abuse, which is associated with an increased likelihood of experiencing subsequent physical or sexual violence, as well as of initiating injection drug use. For those suffering from emotional trauma stemming from violence, including indirect violence, there may be a proclivity to use drugs for self-medication. Violence and sexual coercion have been found to be significantly associated with suicidality among sex worker populations in China and India. As demonstrated in this study, having an early traumatic life event is a key risk factor for suicide among sex workers, a high proportion of whom are Indigenous, and experiencing historical trauma can have harmful inter-generational impacts.

Both THC and the endogenous cannabinoid anandamide promote overeating in partially satiated rats

In particular, examination of biomarkers of stress and trauma in PLWH may help to understand mechanisms underlying the associations between TES and neurocognitive and everyday function observed in this study. Efforts to reduce trauma, poverty, and other stressful contexts and developing resources to help people manage and cope with past and current adverse circumstances could be relevant to decreasing neurocognitive impairment, particularly the high rates of mild neurocognitive disorder, in PLWH. Historical descriptions of the stimulatory effects of Cannabis sativa on feeding are now explained by the ability of its psychoactive constituent Δ9-tetrahydrocannabinol (THC) to interact with CB1 cannabinoid receptors. Moreover, THC increases fat intake in laboratory animals and stimulates appetite in humans. The selective CB1 receptor antagonist SR141716A counteracts these effects and, when administered alone, decreases standard chow intake and caloric consumption, presumably by antagonizing the actions of endogenously released endocannabinoids such as anandamide and 2-arachidonoylglycerol. These results suggest that endocannabinoid substances may play a role in the promotion of food intake, possibly by delaying satiety. It is generally thought that the hyperphagic actions of cannabinoids are mediated by CB1 receptors located in brain circuits involved in the regulation of motivated behaviors. Thus, infusions of anandamide into the ventromedial hypothalamus were shown to promote hyperphagia, whereas the anorectic effects of leptin were found to be associated with a decrease in hypothalamic anandamide levels. Nevertheless, evidence suggests that cannabinoids also may promote feeding by acting at peripheral sites. Indeed, CB1 receptors are found on nerve terminals innervating the gastrointestinal tract, which are known to be involved in mediating satiety signals originating in the gut.

To test this hypothesis, in the present study we examined the impact of feeding on intestinal anandamide accumulation, the effects of central versus peripheral systemic administration of cannabinoid receptor agonists on feeding behavior, and the effects of sensory deafferentation on cannabinoid-induced hyperphagia. The present results suggest, first, that systemically administered cannabinoid agents affect food intake predominantly by engaging peripheral CB1 receptors localized to capsaicin-sensitive sensory terminals and, second, that intestinal anandamide is a relevant signal for the regulation of feeding. Two observations support the idea that cannabinoid agents modulate feeding through a peripheral mechanism: first, the lack of effect of central administration of cannabinoid antagonists such as SR141716A and 6-iodo-2-methyl-1-[2-ethyl]-[1H]-indol-3-yl methanone on food intake in food-deprived animals and, second, the ability of capsaicin-induced deafferentation to prevent changes in feeding elicited by the peripheral administration of cannabinoid drugs. Moreover, the similar pattern of expression of the early gene c-fos in hypothalamic and brainstem areas regulating food intake after both the peripheral administration of either CB1 agonists or antagonists and the acute administration of peripherally acting satiety modulators such as gastrointestinal hormones or feeding inhibitors such as OEA further supports the peripheral actions of cannabinoids on food intake. Finally, the fact that the CB1 receptor antagonist SR141716A was active only after intraperitoneal or oral administration but not after subcutaneous injection further supports this hypothesis. These results do not exclude the possibility that peripheral anandamide also modulates feeding by acting on specific hypothalamic areas involved in caloric homeostasis.
However, they do suggest that the predominant effects of systemically administered SR141716A are mediated by peripheral CB1 receptors, which may thus represent a potential target for anorexic agents. The concentration of anandamide in intestinal tissue increases during food deprivation, reaching levels that are threefold greater than those needed to half-maximally activate CB1 receptors. This surge in anandamide levels, the mechanism of which is unknown, may serve as a short-range hunger signal to promote feeding.
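The "threefold greater than half-maximal" comparison above can be made concrete with a simple one-site occupancy model, which is standard receptor pharmacology; the concentrations below are illustrative units rather than measured values from this study.

```python
# Fractional receptor activation under a simple one-site binding model:
# occupancy = C / (C + EC50), where EC50 is the half-maximal concentration.
# This is a generic pharmacological sketch, not data from the study.

def fractional_activation(concentration, ec50):
    """Fraction of maximal activation at a given agonist concentration."""
    return concentration / (concentration + ec50)

# At the EC50 itself, activation is 50% by definition.
half = fractional_activation(1.0, 1.0)     # -> 0.5

# At threefold the EC50 (the fasting intestinal anandamide level described
# above), this model predicts 75% of maximal CB1 activation.
fasting = fractional_activation(3.0, 1.0)  # -> 0.75
```

The point of the arithmetic is that a threefold excess over the EC50 places the receptor well up on the saturating part of the concentration-response curve, consistent with a physiologically meaningful signal.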

This idea is supported by the ability of SR141716A to reduce food intake after systemic but not central administration. Locally produced anandamide also may be involved in the regulation of gastric emptying and intestinal peristalsis, two processes that are inhibited by this endocannabinoid. Thus, intestinal anandamide appears to serve as an integrative signal that concomitantly regulates food intake and gastrointestinal motility. The predominant peripheral component of feeding suppression induced by SR141716A led us to analyze whether the modulation of food intake derived from CB1 receptor stimulation/blockade may interact with that produced by the noncannabinoid anandamide analog OEA. Our results indicate that the hyperphagic effects elicited by CB1 receptor stimulation were counteracted by the administration of OEA, whereas CB1 receptor blockade potentiated the suppression of feeding evoked by OEA. Because the intestinal levels of anandamide and OEA are inversely correlated, it is tempting to speculate that both compounds act in a coordinated manner to control feeding responses through their opposing actions on sensory nerve terminals within the gut. The Human Immunodeficiency Virus (HIV) enters the central nervous system within days of initial infection, in many cases leading to neurological, cognitive, and behavioral complications. Cognitive deficits are a common feature of HIV/AIDS. While the incidence of HIV-associated dementia has considerably decreased in the era of modern antiretroviral therapy (ART) suppressing viral replication, mild cognitive deficits with no change in everyday function persist in 24% [95% confidence interval = 20.3–26.8] of people with HIV (PWH), and mild cognitive deficits with mildly decreased everyday function persist in about 13.3% of PWH. Although executive function and memory deficits are most common in PWH in the post-ART era, the characterization of cognitive impairment in HIV is highly variable, with deficits observed in a range of cognitive domains. 
Previous studies using statistical clustering techniques have identified differing profiles of cognitive function among PWH, with some profiles resembling global impairment across domains while other profiles resemble more domain-specific impairment, particularly in the domains of episodic memory and executive function. Similarly, there is also substantial variability in the risk factors associated with cognitive deficits among PWH, which range from biological and demographic to psychosocial factors.

The persistence of cognitive impairment in the era of modern ART among PWH, and the variability in the profiles and risk factors associated with cognitive impairment, suggest that non-HIV factors associated with aging, comorbid conditions, and psychosocial risk factors likely contribute to cognitive impairment, given the high prevalence of these factors among PWH. With this in mind, we propose looking beyond the construct of HIV-associated neurocognitive disorders (HAND) to identify the underlying pathophysiology linked to cognitive impairment, as HAND requires other comorbidities to be ruled out as primary contributing factors. Biological sex is an important determinant of cognitive impairment among PWH. In a recent literature review of sex differences in cognitive impairment among PWH, seven cross-sectional and one longitudinal analysis identified sex differences on global measures of cognitive impairment among PWH. Additionally, six cross-sectional and one longitudinal analysis also reported sex differences in domain-specific cognitive performance. The strongest available evidence from adequately powered studies indicates that women with HIV (WWH) show greater deficits than men with HIV (MWH) in the domains of learning and memory, followed by speed of information processing and motor functioning, with inconsistent findings in executive functioning. The greater vulnerability of WWH to cognitive impairment may reflect sociodemographic differences between men and women with HIV. WWH tend to have a higher prevalence of psychosocial risk factors, including poverty, low literacy levels, low educational attainment, substance abuse, poor mental health, and barriers to health care services, as compared to MWH. These psychosocial risk factors may have biological effects on the brain that lead to reduced cognitive reserve among WWH, as evidenced by findings of greater susceptibility of cognitive function to the effects of mental health factors among WWH vs. MWH. 
Additionally, biological factors such as sex steroid hormones and female-specific hormonal milieus may contribute to sex differences in cognitive test performance in PWH. However, it remains unclear how MWH and WWH may differ in the patterns of cognitive impairment and in the risk factors associated with those patterns. Previous reports of impairment profiles among PWH have identified them in combined samples of men and women, masking possible sex-specific patterns of cognitive impairment among PWH. Furthermore, although a number of studies reported sex differences in the presence and pattern of cognitive impairment, and greater cognitive decline among WWH compared to MWH, only one study was adequately powered to address meaningful sex differences in global cognitive function. A well-powered examination of the patterns and determinants of cognitive impairment by sex, one that also controls for other demographic differences between WWH and MWH, can help to clarify the contribution of sex to heterogeneity in cognitive impairment among PWH. Such an examination could also clarify the related psychosocial vs. biological factors and, thereby, optimize risk assessments and intervention strategies in both sexes.

Leveraging comprehensive neuropsychological (NP) data from the large-scale cohort of the HIV Neurobehavioral Research Program at the University of California-San Diego (UCSD), we used novel machine learning methods to identify differing profiles of cognitive function in PWH and to evaluate how these profiles differ between women and men in sex-stratified analyses. Rather than using traditional cognitive domain scores, we used each of the NP test outcomes, given that prior studies indicate that the correlation of NP test scores does not map to traditional domain scores in PWH. Furthermore, we determined how sociodemographic, clinical, and biological factors related to cognitive profiles within women and men. Based on previous studies among PWH, we hypothesized that the machine learning approach would identify distinct subgroups of individuals with normal cognitive function, global cognitive impairment, and domain-specific cognitive impairment. We further hypothesized that groups with domain-specific cognitive impairment would differ by sex, with WWH showing more consistent memory and processing speed impairment than MWH. Finally, we expected that similar sociodemographic/clinical/biological determinants would distinguish cognitive profiles among WWH and MWH; however, in line with previous research, we expected that depressive symptoms would be more strongly associated with cognitive impairment profiles among WWH than MWH. Study assessment details have been published elsewhere. The UCSD Institutional Review Board approved the studies. Participants provided written informed consent and were compensated for their participation. Exclusion criteria for the parent studies included a history of non-HIV-related neurological, medical, or psychiatric disorders that affect brain functioning, learning disabilities, and a first language that was not English. Inclusion in the current study required completion of neuropsychological and neuromedical evaluations at the baseline study visit. 
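As a rough illustration of the profile-discovery idea described above (not the study's actual pipeline, which is not specified here), the sketch below applies a plain k-means clustering to synthetic per-test T-scores. The test names, group structure, and cluster count are all invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic participants: an "unimpaired" group, a "globally impaired" group,
# and a "memory-specific" group (low scores only on the memory-type tests).
# T-scores have mean 50 and SD 10 in a normative sample.
tests = ["HVLT_recall", "BVMT_recall", "TMT_B", "DigitSymbol"]
unimpaired = rng.normal(52, 4, size=(40, 4))
global_imp = rng.normal(38, 4, size=(40, 4))
memory_imp = np.column_stack([rng.normal(37, 4, size=(40, 2)),
                              rng.normal(51, 4, size=(40, 2))])
X = np.vstack([unimpaired, global_imp, memory_imp])

def kmeans(X, k, iters=50, seed=1):
    """Plain k-means: assign points to nearest centroid, recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = []
        for j in range(k):
            pts = X[labels == j]
            # Keep the old centroid if a cluster empties out.
            new.append(pts.mean(axis=0) if len(pts) else centroids[j])
        centroids = np.array(new)
    return labels, centroids

labels, centroids = kmeans(X, k=3)
```

Inspecting the centroids (one 4-vector of mean T-scores per cluster) is what distinguishes a "global impairment" profile (uniformly low) from a "domain-specific" one (low only on some tests).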
Exclusion criteria for the current study included a positive urine toxicology test for illicit drugs or a positive Breathalyzer test for alcohol on the day of the study visit. NP test performance was assessed through a comprehensive, standardized battery of tests measuring seven domains of cognition: complex motor skills, executive function, attention/working memory, episodic learning, episodic memory, verbal fluency, and information processing speed. Motor skills were assessed by the Grooved Pegboard Dominant and Non-dominant Hand tests. Executive functioning was assessed by the Trail Making Test (TMT)-Part B and the Stroop Color and Word Test interference score. Attention/working memory was assessed by the Paced Auditory Serial Addition Task. Episodic learning was assessed by the Total Learning scores of the Hopkins Verbal Learning Test-Revised (HVLT-R) and the Brief Visuospatial Memory Test-Revised (BVMT-R). Episodic memory was assessed by the Delayed Recall and Recognition scores of the HVLT-R and BVMT-R. Verbal fluency was assessed by the “FAS” Letter Fluency test. Information processing speed was assessed by the WAIS-III Digit Symbol Test, the TMT-Part A, and the Stroop Color and Word Test color naming score. Raw test scores were transformed into age-, education-, sex-, and race/ethnicity-adjusted T-scores based on normative samples of HIV-uninfected persons. The use of demographically adjusted T-scores is intended to control for these demographic effects as they occur in the general population. We examined sociodemographic, clinical, and biological factors associated with cognitive impairment in the literature and available for enough participants to be adequately powered in analyses. Sociodemographic factors included age, years of education, and race/ethnicity. Although these factors were used to create the T-scores, there can still be remaining demographic associations with cognition within clinical populations such as PWH. 
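The raw-to-T-score transformation described above follows the standard convention that T-scores have mean 50 and SD 10 in the normative stratum. The normative mean and SD below are invented placeholders, not published norms; real norms are stratified by age, education, sex, and race/ethnicity.

```python
# Sketch of a demographically adjusted T-score conversion. The stratum's
# normative mean/SD here are hypothetical placeholders.

def to_t_score(raw, norm_mean, norm_sd):
    """T-score: mean 50, SD 10 within the normative stratum."""
    z = (raw - norm_mean) / norm_sd
    return 50.0 + 10.0 * z

# Hypothetical example: a raw Total Learning score of 42 in a stratum whose
# normative mean is 38 (SD 8) sits half an SD above the norm.
t = to_t_score(42, norm_mean=38, norm_sd=8)  # -> 55.0
```

Because the norms already absorb expected demographic effects, a T-score of 40 (one SD below the stratum mean) carries the same interpretation regardless of a participant's age, education, sex, or race/ethnicity.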
For example, there is considerable interest in the possibility of abnormal cognitive aging in PWH; also, in general, older PWH tend to have had their infections longer, may have had longer periods without the benefit of suppressive ART, and may have a history of worse immunosuppression.

This Article examines the chemical testing of psychoactive drugs.

The most significant proportional shift in General Fund spending in 2021-2023 was the increase in Human Services, which was offset by reductions in the percentages going to Public Safety/Judicial spending, K-12 Education, and various minor programs. The increase in Human Services is largely driven by the legislature’s commitment to continue Medicaid coverage at at least the same level without any fee increases, despite inflation in medical care, and to expand access to the system for undocumented immigrants. Mental health programs were also enhanced. When combined with Other Funds, K-12 Education reached a record total of $9.3 billion, and Democratic legislative leaders resisted pressures to go further. The relative decline in Public Safety/Judicial spending reflects Governor Brown’s and various legislators’ interest in sentencing reform and prison consolidation, as well as the temporary use of early releases and less incarceration in response to COVID. Although its proportionate share did not increase, Higher Education in Oregon benefitted strongly in the 2021-2023 final budget, partially because it could harvest diverse federal monies. According to the Higher Education Coordinating Commission, the state’s overseeing body, “In general, postsecondary education and workforce experienced promising growth in key program areas in the 2021-23 budget. Support for community college operations increased 10.5 percent from 2019-21 LAB, and state support for public university operations increased 8.1 percent. These are the funding levels the institutions requested to accommodate actual cost growth and are expected to be sufficient to mitigate tuition increases during the biennium.” The rule of law is often seen as a formal, governmental alternative to informal, social mechanisms for regulating conduct.

In this Article, I examine a more indirect manifestation of the rule of law: the indirect effect that the criminal law can have on private efforts at risk management by individuals and corporations. Formal law can encourage private risk regulation, but it can also distort it. Trained technicians in commercial laboratories routinely employ a common technology—gas chromatography/mass spectrometry (GC/MS)—to test samples for the presence of illicit psychoactive substances as well as dangerous or benign adulterants. One of these laboratories, LabCorp, provides occupational testing services for corporate clients.2 Another, Drug Detection Laboratories (DDL), conducts GC/MS screening of samples provided by DanceSafe, EcstasyData.org, and the Multidisciplinary Association for Psychedelic Studies.3 LabCorp’s samples are obtained from corporate clients’ random or systematic urine testing of their prospective and existing employees. DDL’s samples come from anonymous Ecstasy consumers who seek information on the potential presence of adulterants in samples they have purchased illicitly. This Article explores the remarkably different normative and behavioral consequences that follow from the use of the same basic laboratory protocol to test for illicit drug use4 and for illicit drug safety. My primary interest is in testing practices conducted by private citizens rather than agents of the legal system. At first glance, one might think that safety testing and use testing have little shared relevance. I do not contend that they are mutually exclusive alternatives. Both use testing and safety testing are intended to reduce harms, and each presumes to do so indirectly, by influencing the decision to ingest a drug. But these practices exemplify two distinctly different strategies for thinking about the management of risky behaviors—prevalence reduction and harm reduction. 
Prevalence reduction seeks to reduce the number of people engaging in a given behavior, while harm reduction seeks to reduce the harmful consequences of engaging in that behavior. Practices and concepts most readily identified with prevalence reduction include abstinence, prevention, deterrence, and incapacitation. Practices and concepts most readily identified with harm reduction include safe-use and safe-sex educational materials, needle exchanges, and the free distribution of condoms to students. Prevalence reduction may be employed in the hope of reducing drug-related harms, but because it directly targets use, any influence on harm is indirect. Harm reduction directly targets harms; any influence on use is indirect. This Article focuses on the private use of these methodologies. These private uses occur in the shadow of the law; thus criminal law influences—and, to some extent, distorts—their consequences.

Criminal law facilitates the intrusive exercise of use testing in workplaces and schools that might otherwise have difficulty implementing it; this is illustrated by the greater prevalence of drug testing than of alcohol testing. Criminal law also hinders the effective implementation of safety testing, making it easier for sellers to distribute adulterated and often dangerous products. More subtly, criminal law frames the issue of drug use as one of criminal deviance, which encourages some solutions but obscures others. For example, the focus on drug testing overlooks the potentially more harm-reducing use of psychomotor testing.8 Thus, both practices are constrained by the criminal laws prohibiting these drugs. This is not an argument for ending drug prohibition, nor do I argue for the superiority of safety testing over use testing, or of harm reduction over prevalence reduction. But this Article suggests a less moralistic, more pragmatic approach to drug policy—an approach that is less speculative than legalization because it has been pursued for decades in the Netherlands, and increasingly in the United Kingdom, Australia, and elsewhere. Not surprisingly, positive drug test rates are dramatically higher among criminal justice arrestees. The National Institute of Justice began collecting systematic drug testing data from arrestees with its Drug Use Forecasting program in 1988. An improved methodology, the ADAM program, was implemented in 2000. The most recent data available are from 2000. In that year, more than half of thirty-five sites reported that 64% or more of their male arrestees tested positive for either cocaine, opiates, marijuana, methamphetamine, or PCP. The most common drugs present were marijuana and cocaine. Any consideration of drug test results should be qualified by the serious limitations of existing testing methods. 
Blood testing is the most accurate method for identifying drug influences at the moment of testing, but it is intrusive, expensive, and rare. Urine testing, which is also intrusive, is far more common. But it is a poor indicator of immediate drug status because drugs cannot be detected in urine until they have been metabolized, often many hours after consumption. Urine testing is particularly sensitive to cannabis use, and can detect use dating back several months for a heavy user, but it is far less likely to detect other “hard” drugs.

Saliva and hair testing are less intrusive and are becoming more common. In fact, hair testing can detect use dating back two to three months, and can even date the use with some accuracy. Use testing is vulnerable to false positives due to contaminants, as well as false negatives due to temporary abstention, “water loading,” and even a haircut. Detailed advice on defeating a drug test is available on various web sites. For example, false positives for marijuana can be triggered by many different prescription and over-the-counter medications. Another reason to be wary of the accuracy of use testing results involves sampling. “Random testing” may sound a lot like “random sampling,” but there is selection into and out of the sample, because users and others who object to testing may avoid the testing organization altogether—whether it be the military, a workplace, or a school sports program. From a deterrence perspective, use testing should be an effective way to reduce drug use. Aggregate econometric analyses and individual-level “perceptual deterrence” studies suggest four generalizations about drug offenses, drunk driving, and various income-generating crimes: the certainty of punishment has a modest but reliable causal impact on offending rates, even for offenses with very low detection probabilities; the severity of punishment has no reliable impact, either in isolation or in interaction with certainty; the celerity, or speed, of punishment is important, but post-arrest criminal sanctioning is probably too slow to be effective; and an arrest can trigger informal social sanctions, even in the absence of incarceration. Use testing increases the certainty of sanctioning, and even when it does not lead to arrest, the consequences of a positive test are effectively punitive, because a positive result damages one’s reputation with family, friends, and colleagues. Nevertheless, support for a general deterrent effect of drug testing is mixed. 
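One way to see why the false-positive caveat above matters so much in low-prevalence settings such as workplace screening is Bayes' rule: even an accurate assay produces many false positives when the base rate of use is low. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not published assay statistics.

```python
# Positive predictive value (PPV): the probability that a person who tests
# positive actually used the drug. Inputs here are illustrative assumptions.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(user | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# With 95% sensitivity and 95% specificity but only 5% of a workforce using,
# half of all positive results are false positives.
ppv = positive_predictive_value(0.95, 0.95, 0.05)  # -> 0.5
```

This is why the sampling concerns in the text compound the accuracy concerns: self-selection changes the base rate, and the base rate drives what a positive result actually means.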
The available studies are correlational and hence subject to a variety of inferential problems. It is astonishing that such an intrusive intervention is being implemented so widely in the absence of a carefully controlled experiment, with random assignment to testing condition at the individual, site, or organizational level. In 1981, the United States military implemented a tough “zero-tolerance” drug policy, which imposed mandatory drug testing and threatened job termination for violations. Two studies have examined the effects of the policy. Professor Jerald Bachman and his colleagues used the Monitoring the Future cohort data from young adults who graduated from high school in 1976 through 1995. They found declining rates of drug use among active duty military personnel and nonmilitary cohort members in the two years after graduation, but beginning in 1981, the rate of decline was steeper for the military group, at least for illicit drugs. This is a pattern “strongly suggestive of causal relationships.” In a separate study, economists Stephen Mehay and Rosalie Pacula compared NHSDA and Department of Defense health survey data collected before and after the military adopted the zero-tolerance policy. They estimated a 16% drop in the prevalence of past-year drug use in the military, with a lower bound estimate of 4%. Dr. W. Robert Lange and his colleagues examined the effects of a decision at Johns Hopkins hospital to shift from “for cause” employee testing in 1989 to universal pre-employment testing in 1991. In 1989, 10.8% of 593 specimens were positive—55% of them for marijuana—and there were seven “walkouts” who refused to be tested. In 1991, 5.8% of 365 specimens tested positive—28% for marijuana—with no walkouts. The authors interpreted these results as evidence of the deterrent effect of drug testing. But Professors M.R. Levine and W.P. 
Rennie offer a variety of alternative explanations, including the fact that in 1991 users had advance warning of the test and could abstain, water load, or ingest legal substances that would confound the test. The most comprehensive study of the effects of school testing on student use comes from analyses of data from the Monitoring the Future survey. This analysis found no measurable association between either random or “for cause” drug testing and students’ self-reported drug use. The study is cross-sectional, rather than prospective, and is somewhat limited by the relative rarity of exposure to testing. A more focused test was provided by the “pilot test” of the Student Athlete Testing Using Random Notification project. During the 1999–2000 academic year, the authors compared two Oregon schools using mandatory drug testing with another school that did not.60 Neither students nor schools were randomly assigned to drug testing versus nontesting. The authors reported a significant treatment effect; though statistical details were not presented, the conclusion is apparently based on a difference-in-difference estimate of changes from pre- to post-test in the control versus treatment schools. But caution is warranted for several reasons. First, although there was a slight decrease in drug use at the treatment schools, the effect is largely attributable to an increase in drug use at the control schools. Because assignment to condition was not random, there is little reason to believe that a similar increase would have occurred at the treatment schools absent testing. Second, most drug use risk factors, including drug use norms, belief in lower consequences of drug use, and negative attitudes toward school, actually increased among the target group: athletes at the treatment school. These puzzling results may explain why the study was labeled a pilot test, and why a more ambitious and rigorous follow-up study was launched. 
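The difference-in-difference logic mentioned above has a simple arithmetic form. The prevalence figures below are hypothetical stand-ins, chosen to mirror the caveat that the apparent treatment effect was driven mostly by a rise at the control schools.

```python
# Difference-in-difference estimator: the change in the treated group net of
# the change in the control group. All numbers below are hypothetical.

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Treated-group change minus control-group change."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical: treated schools go from 20% to 19% past-month use (a slight
# decrease), while control schools go from 20% to 26% (an increase).
effect = diff_in_diff(0.20, 0.19, 0.20, 0.26)  # about -0.07
```

The estimated "effect" of -7 percentage points here comes almost entirely from the control-group increase, which is exactly why non-random assignment undermines the causal reading: nothing guarantees the treated schools would have risen similarly absent testing.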
Unfortunately, the study was terminated by the federal Office for Human Research Protections due to human subjects protection concerns. At present, the evidence suggests that the military’s testing program had a deterrent effect, but no such effect was found in the workplace or in schools. Still, the absence of evidence is not evidence of absence. There are very few rigorous studies; low statistical power, noisy measurement, and other factors may hide genuine effects. Alternatively, it may be that the military program is more effective as a deterrent due to differences in its implementation, its target population, its consequences for users, or the institutional setting.

Intimate partner violence is also a correlate of drug use and harmful alcohol consumption.

We then fit a partially adjusted model controlling only for the demographic covariates specified above. Lastly, we fit the final, fully adjusted model controlling for demographic covariates, marital/partner status, and respondents’ reports of ART use in the past 6 months, as recorded at the 12- and 24-month follow-up interviews. A post hoc sensitivity analysis was performed excluding observations from participants who used ART. All analyses were done using the statistical package SAS 9.3. Most participants reported no form of transactional sex in the past 12 months. The most common transaction, reported by men, was having given money, drugs, or alcohol in exchange for sex. We did not measure the type of partner involved in this exchange. No women reported giving something for sex, and all other forms of transaction were reported by less than 5% of the sample (see Table 1). Table 2 shows the proportion of participants who reported the primary and secondary outcomes of interest at baseline and at the 12- and 24-month follow-ups. High-risk sex behaviors were more commonly reported than drug risk behaviors. Although Table 2 only descriptively presents the longitudinal frequencies of each outcome, it is noteworthy that, relative to male participants, a higher estimated proportion of female participants reported engaging in every risk behavior at every time point, with the exception of past-month alcohol use before sharing equipment as reported at the 12-month follow-up. In general, it appears that there were not substantial changes over time across the various outcomes presented in Table 2. ART use appeared to increase from 12 to 24 months, particularly among men.

All participants were ART-naïve at baseline, but 17% and 35% reported having taken ART in the past 6 months at the 12- and 24-month follow-up visits, respectively. Relative to male participants, female participants had significantly higher odds of reporting both primary outcomes, sharing injecting equipment in the past 30 days and condomless sex in the past 90 days, in the unadjusted models. After controlling for demographic covariates, partner status, and ART use, the association between female gender and sharing injecting equipment was no longer significant. Female gender remained significantly associated with condomless sex in the past 90 days, even after controlling for demographics and, additionally, both partner status and ART use (Table 3). The conclusions from post hoc sensitivity analyses excluding observations from participants who used ART were consistent with the main analyses for all 5 outcomes. The unadjusted odds of one of the secondary outcomes, reporting both drug equipment sharing and condomless sex, were higher for female participants than male participants. After controlling for demographic covariates, female gender remained statistically significant for the outcome of reporting both injection equipment sharing and condomless sex. In the final fully adjusted model, where we controlled for demographics as well as the 3-level partner status covariate and ART use, the association between female gender and reporting both injection equipment sharing and condomless sex was no longer significant. 
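For readers unfamiliar with unadjusted odds comparisons of the kind reported above, the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2x2 table. The counts are invented for illustration; the paper's actual estimates come from fitted logistic regression models, not hand tabulation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 30 of 40 women vs. 50 of 100 men reporting
# condomless sex in the past 90 days.
or_, lo, hi = odds_ratio_ci(30, 10, 50, 50)
# OR = 3.0; because the interval excludes 1, the unadjusted association
# would be called statistically significant.
```

Adjusted models ask whether such an association survives after covariates (here, demographics, partner status, and ART use) are added; an OR that shrinks toward 1 with adjustment, as for equipment sharing in this study, suggests the covariates explain part of the crude association.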
No significant association was found in any of the models between female gender and alcohol use prior to sharing equipment in the past 30 days, or prior to or during sex in the past 90 days (see Table 3). Among a cohort of PLHIV in Russia who have ever injected drugs, we detected a statistically significant association between female gender and condomless sex in the past 90 days, even after controlling for the potentially confounding effects of demographics, partner status, and ART use.

Although we observed notable associations between gender and other outcomes, including sharing drug equipment, alcohol use prior to sharing, and both drug equipment sharing and condomless sex, the results were not statistically significant, possibly due to limited power given the relatively small number of women in the study. It is also notable that nearly all risk behaviors, other than alcohol use prior to sharing, appeared to be more commonly reported among women compared to men. The increased odds of substance-using women having condomless sex, compared to men, has been previously documented in multiple settings, including St. Petersburg, Russia. Prior research from St. Petersburg also found partnership status to be a major factor in PWID’s decision-making process about whether to engage in condomless sex with their partner. In our study, more participants reported being in HIV-concordant partnerships, which could explain why such a high proportion of respondents engaged in condomless sex. Regardless, female participants had higher odds of reporting condomless sex, irrespective of their partner’s HIV status, posing risk for HIV transmission in this population. Further, the preventive health benefits of HIV-positive persons using a condom or other protective barrier during vaginal or anal sex are indisputable, regardless of their partner’s HIV serostatus. These results are particularly concerning in light of recent research suggesting heterosexual transmission of HIV is increasing in St. Petersburg and may overtake injection drug use as the primary mode of transmission, and they suggest a need for a comprehensive, multi-pronged response that should include “treatment as prevention” and pre-exposure prophylaxis (PrEP) for HIV-negative partners. Interventions promoting condom usage are also warranted. 
However, our finding that women were less likely than men to use condoms under all circumstances implies that such approaches must be designed to account for the social, micro, and macro contexts of women’s lives. At the relationship level, alcohol use prior to sex was common and may have interfered with condom decision making around the time of the sexual event. Connecting women to alcohol harm reduction programming could help to lessen their collective risk for HIV infection and transmission. 

Our findings support the value of implementing multi-level interventions and also imply that TasP is a high-yield approach with the potential to reduce the risk of transmission through condomless sex, as well as to provide a multitude of other health benefits for the HIV-positive individual. Addressing the social and structural factors that contribute to gender differences in condom usage, and providing HIV-negative women with access to PrEP, are additional strategies that should also be pursued. As has been seen in other settings, women in our study were more likely to report drug equipment sharing than men. However, it seems the relationship between female gender and equipment sharing is at least partially explained by demographics, most notably employment and income. Female participants in this study were significantly less likely to be employed than male participants and significantly more likely to earn a monthly income below the sample median of 20,000 Rubles. When there is limited access to clean needles and syringes and/or limited funds to pay for new/unused equipment, women may be more likely to share. These patterns have been observed in other populations, including among PWID in South Africa, where more women than men reported always sharing injecting equipment. Low economic status, coupled with limited work opportunities for women, has also been associated with increased sexual risk taking among female substance users, including having multiple sex partners and relying on sex trade/transactional sex to support drug use. Findings from the 2009 National HIV Behavioral Surveillance System, conducted in 20 U.S. cities, suggest more female than male PWID have sex in exchange for money or drugs. A study from Russia found that, compared to their male counterparts, female injectors who reported high drug use frequency were more likely to also report multiple sex partners. 
Our findings highlight the need for free access to clean needles/syringes among women who inject drugs, as well as access to opiate agonist therapy, to prevent HIV. Our study has limitations. The sample size was relatively modest and participants were predominantly male, which limited statistical power, particularly for less common outcomes.

These findings from Russia might not be representative of the relationship between female gender and HIV transmission risk among people who inject drugs or have a history of injection drug use, who are living with HIV in other, non-Russian settings, or even within Russia but outside of the Russia ARCH study population. Additionally, our research was done with a mixed sample of current and former injection drug users. Another limitation of the current study is that knowledge and perceptions surrounding risk of HIV transmission were not assessed, nor did we specifically explore several key mechanisms known to contribute to sex and drug use behaviors associated with increased risk for HIV transmission. For instance, participants were not asked about their experiences of intimate partner violence, even though it has been associated with women’s reduced ability to negotiate condom use and talk about HIV prevention with their partner. More research is needed to understand the challenges and preferences of HIV-positive women who inject drugs, which may be contributing to their condom nonuse and harmful drug and alcohol consumption. A better understanding of the factors underlying women’s condom choices, or to what extent they have any choice in the matter, will inform the design of more meaningful and effective prevention strategies. Furthermore, assessing awareness and willingness to use PrEP among HIV-negative women and men who have ever injected drugs or have a known HIV-positive partner is needed to inform future efforts for HIV prevention. We also did not assess participants’ sexual orientation or gender identity, or these characteristics of their sexual partners. Further, we did not assess differences in drug use and sexual behaviors according to whether the partner under consideration was a long-term or casual partner. Nor did we measure partner-specific information on sexual or drug-related behaviors of interest.
Instead, we only measured behaviors of interest at the individual level. These details should be collected in future research, as understanding the partner dynamics contextualizing the most at-risk situations will help to establish what is needed for prevention efforts. Additionally, different time frames were used for the outcomes, which may have differentially impacted participants’ ability to accurately remember their true behaviors. However, the Russia ARCH cohort study team is skilled at interviewing and has extensive experience with this population, which likely serves to mitigate this latter bias.

The homeless population is aging. People born in the second half of the “baby‐boom” have an elevated risk of homelessness. Homeless adults develop aging‐related conditions, including functional impairment, earlier than individuals in the general population. For this reason, homeless adults aged 50 and older are considered “older” despite their relatively young age. The homeless population has a higher prevalence of mental health and substance use problems than the general population. Individuals experiencing homelessness report barriers to mental health services, due to lack of insurance coverage, high cost of care, and inability to identify sources of care. These barriers can prevent them from using services to treat mental health and substance use problems, such as outpatient counseling, prescription medication, and community‐based substance use treatment. Without these, homeless populations may experience more severe behavioral health problems and rely on acute care to address these chronic conditions. Homeless individuals have higher rates of Emergency Department use for mental health and substance use concerns, and are more likely to use psychiatric inpatient or ED services and less likely to use outpatient treatment than those who are housed. Homeless adults with substance use disorders face multiple barriers to engaging in substance use treatment.
Competing needs, financial concerns, lack of knowledge about or connection to available services, and lack of insurance are barriers to substance use treatment among homeless adults. Older adults face additional barriers to mental health or substance use treatment due to cognitive and functional impairment, such as difficulty navigating and traveling to healthcare systems. However, little is known about older adults experiencing homelessness. According to Gelberg and Anderson’s Behavioral Model for Vulnerable Populations, predisposing factors, enabling factors, and need shape health care utilization. Although prior research has used this model for homeless populations, this work has not included older homeless adults. Little is known about the prevalence of mental health or substance use problems in older homeless adults, the level of unmet need for services, or the factors associated with that need. To understand the factors associated with unmet need for mental health and substance use treatment in older homeless adults, in a population‐based sample of homeless adults age 50 and older, we identified those with a need for mental health and substance use services. Then, we applied the Gelberg and Anderson model to examine predisposing and enabling factors associated with unmet need, which we defined as not receiving mental health or substance use treatment among participants with mental health or substance use problems.

A series of studies show that college students perceive their peers as less critical of heavy drinking than they actually are

The fundamental attribution error could cause a person who witnessed wrongdoing to conclude that the actor usually does wrong, whereas the correct conclusion in most cases is that the actor occasionally lapses. Moral pessimism could also result from a tendency to believe that the behavior of others is instrumentally driven. The overestimation of unethical behavior could follow from a common belief that one’s self-interest is the most important factor in explaining the behavior of individuals in society. A wrongdoer may protect his self-esteem by exaggerating how frequently others commit the same wrong. Relevant concepts invoked by psychologists include social validation, self-enhancing biases, and constructive social comparison.

Now we turn from moral pessimism to social projection. An individual who projects his own behavior onto society overestimates how many others behave like he does. This bias is closely related to what the psychology literature calls the false consensus effect (FCE), which refers to a situation where people mistakenly think that others agree with them. According to the FCE, people tend to overestimate the social support for their own views and underestimate it for opposing views. Evidence from four studies in the original research by Ross demonstrates that social observers tend to form a false consensus with respect to the relative commonness of their own behavior. These results were obtained in questionnaires that presented subjects with hypothetical situations and also in actual conflicts that presented subjects with choices. Several psychological mechanisms could cause social projection. One such mechanism is cognitive: a person may attend to positions with which he agrees and dismiss positions with which he disagrees. Selective attention allows his preferred position to dominate his consciousness.

The sorting of people reinforces selective attention. People tend to associate with others who share their general beliefs, attitudes, and values. The association could be voluntary, as when people select their friends, or involuntary, as when people are involuntarily segregated. If likes associate with likes, then recalling instances of behavior like your own will be easier than recalling behavior unlike your own. Instead of cognition, emotion could cause social projection. Perhaps people need to see their own acts, beliefs, and feelings as morally appropriate. Finding similarity between oneself and others may validate the appropriateness of behavior, sustain self-esteem, restore cognitive balance, enhance perceived social support, or reduce anticipated social tensions. Later we discuss the possibility that emotional bias resists correction by fresh information that ameliorates cognitive bias.

We will construct an economic model of conformity to a social norm, solve for the equilibrium, introduce perceptual bias into the model, and see how the equilibrium changes. We follow the economic tradition of distinguishing between benefits and costs. A person who breaks a norm often enjoys various benefits, such as the financial gain from disclosing trade secrets, the reduction in taxes from evading them, the pleasure of listening to music after downloading it illegally, victory from winning a contest by cheating, time saved by not complying with law, etc. Assume that each actor’s benefit from breaking the norm can be measured. The metric may be utility, pleasure, income, time, prestige, power, comfort, etc. In Figure 1, the vertical axis measures the amount a person benefits from breaking the norm. Each person i has a type i, reflecting the benefits he obtains from breaking the norm. The horizontal axis depicts the cumulative proportion of people who enjoy a benefit of a given amount.
According to the curve in Figure 1, a small number of people enjoy a high benefit, and a large number of people enjoy at least a small benefit. We connect the benefit from breaking a social norm to standard economic concepts. A person’s benefit in economics is described as his “willingness to pay”. The curve in Figure 1 thus depicts willingness to pay for wrongdoing in a population of people. The number of people who are willing to pay a certain amount also measures demand.

The curve in Figure 1 thus depicts the “demand” for wrongdoing. The demand curve slopes down because more people are willing to pay the price of wrongdoing as it decreases. Now we turn from the individual’s benefits to his costs. Breaking a social norm often provokes social sanctions that can take various forms. First, people who break the norm could lose the social approval of their peers. Second, they could face social resentment. Third, they might have trouble finding business partners. Fourth, if the social norm is also a law, the wrongdoer might suffer civil liability or criminal punishment. Fifth, they might suffer in some or all of these ways because they acquire a bad reputation. The vertical axis in Figure 2 indicates the individual actor’s cost of breaking the social norm, and the horizontal axis indicates the proportion of people who break the norm. As depicted in Figure 2, the individual actor’s cost of breaking the norm decreases as the proportion of wrongdoers increases. Various reasons could explain why breaking a norm costs less when many others do it. A simple reason that is central to the enforcement of norms concerns the expected sanction. The expected sanction equals its probability multiplied by its severity. As discussed above, the probability that a particular wrongdoer will suffer a social or legal sanction often decreases as more people commit the sanctioned act. For example, when people see many others smoking in airports, they feel more confident that they will not be confronted if they smoke. Because safety lies in numbers, the cost curve slopes down in Figure 2. The curve in Figure 2 also describes the “supply of sanctions for wrongdoing” as a function of the proportion of wrongdoers.

One of this article’s authors gave questionnaires to engineers in Silicon Valley concerning trade secrets.
The questionnaires asked each person whether or not he would violate trade secrets law, and then asked the frequency with which he thought that other people violated trade secrets law. 44.8% of the participants in the study said that they were more likely than not to violate trade secrets law, but they estimated on average that 57% of the employees in their company would violate trade secrets law. When asked about the proportion of employees in Silicon Valley in general who would violate trade secrets law, the average answer was 68%. Pessimism bias would produce such a gap in results. Our model predicts that moral pessimism bias would lower the perceived cost of disclosing trade secrets. In terms of Figure 5, the perceived cost curve lies below the actual cost curve.

Consequently, moral pessimism bias causes more disclosure of trade secrets. Equivalently, fewer people would disclose trade secrets if they knew the true level of illegal disclosure. In these circumstances, accurate reporting of the frequency of norm violations should cause fewer of them. The effect of accurate reporting is presumably stronger when cognitive processes cause bias, and the effect is presumably weaker when motivational processes cause bias. The survey also found that the longer a worker spends in Silicon Valley, the more he feels justified in disclosing trade secrets. Perhaps people change their beliefs to align with their misperception of the facts — they accept the morality of the actual as they misperceive it. The gap between self-reported and perceived disclosures of trade secrets differed systematically across types of people. Those who reported that they were more likely to disclose secrets estimated that a relatively high percentage of other people disclose secrets, and those who reported that they were less likely to disclose secrets estimated that a relatively low percentage of other people disclose secrets. Social projection would produce these results. Our model predicts that social projection would not cause more people to disclose trade secrets. Consequently, providing information to correct the bias will not change the number of people who disclose trade secrets. Social projection, however, might cause those people who disclose trade secrets to do so more often. In addition, social projection increases the resolve of people who disclose trade secrets; so increasing the severity of the punishment will be less effective in deterring them.

Psychologists have investigated the connection between the willingness of people to pay taxes and their perception of tax evasion by others. A study of Australian taxpayers found a discrepancy between what the individual does and what he thinks others are doing.
Moral pessimism bias would produce the observed discrepancy. According to our model, moral pessimism will cause fewer taxpayers to comply with the law. A longitudinal study of Australian citizens that used a cross-lagged panel analysis found that taxpayers’ personal views of the morality of tax compliance affect their perception of the levels of tax compliance by others. Those with high personal standards of tax compliance perceived relatively more compliance by others, and those with low personal standards perceived relatively less tax compliance by others. These results are consistent with social projection bias. According to our model, social projection bias will not affect the number of taxpayers who comply with the law, but it may cause tax avoiders and evaders to comply less, and it should make all taxpayers more reluctant to change their behavior. A further study shows how people respond to information exposing their biases. Researchers were able to monitor people’s actual tax files. Some sub-groups were given information about the gap between their own behavior and the behavior of others. Receiving information on the behavior of others caused more tax compliance in some forms, such as the amount of deductions claimed. This fact is consistent with our prediction that disseminating accurate information will cause more right-doing when actors suffer from moral pessimism bias.

The actor’s perceived cost of heavy drinking, consequently, is less than its actual cost. Since the perceived cost curve lies below the actual cost curve, as in Figure 5, our model predicts more heavy drinking than would occur if students perceived costs accurately. One of the studies tracked how attitudes developed over the course of two months among college freshmen and discovered gender differences. Male students adjusted their personal attitudes over time to match more closely the perceived consensus. After the adjustment, actual attitudes among males were closer to perceived attitudes.
In terms of Figure 5, the actual cost curve shifted closer to the perceived cost curve. These facts suggest that providing information to correct misperception at the beginning of the semester would reduce heavy drinking more than if the information were provided at the end of the semester. With female students, attitudes remained stable, so the timing of the information should make less difference to their behavior.

Given multiple equilibria as in Figure 3, the initial proportion of wrongdoers can affect the equilibrium. In Figure 3, an initial proportion of wrongdoers below x2 will cause their numbers to fall approximately to zero, whereas an initial proportion of wrongdoers above x2 will cause their numbers to rise to x1. Perhaps more male students drink heavily when they arrive as freshmen, which causes the system to settle at a high level of drinking among males. Conversely, perhaps fewer female students drink heavily when they arrive as freshmen, which causes the system to settle at a low level of drinking among females. These hypotheses require testing.

Now we turn to studies on drug use. In a classical study, a sample of adolescents was divided into three groups: nonusers, cannabis users, and cannabis and amphetamine users. The perceptions of members of the three groups differed significantly from each other. Compared to nonusers, drug users gave relatively high estimates of the number of users. These results are consistent with social projection. The researchers proposed two psychological causes of projection. First, the number of arguments that we hear for or against something affects our attitudes towards it, and we hear more arguments from people inside our group than from outsiders. Accurate information should help to correct this cognitive bias. Second, the members of each group were motivated to see their own behavior in others. Accurate information is probably not enough to correct this emotional bias.
The authors concluded that projection bias would cause over-use of drugs, which contradicts our model. Our model does predict that projection bias will entrench existing behavior among the three groups of people and make it harder to change.
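The model discussed above lends itself to a short simulation. The sketch below is a toy implementation under assumed functional forms (exponentially distributed benefits and a logistic "safety in numbers" sanction probability; none of these specifics come from the article). It iterates the best-response map: given the current share of wrongdoers x, everyone whose benefit exceeds the perceived expected sanction breaks the norm. The toy reproduces the two qualitative features described: a tipping threshold separating a low-wrongdoing equilibrium from a high-wrongdoing one (as in Figure 3), and more wrongdoing when perceived cost lies below actual cost (as in Figure 5).

```python
import numpy as np

rng = np.random.default_rng(0)
# Individual benefits from breaking the norm: a few people gain a lot,
# many gain a little (the downward-sloping "demand" curve of Figure 1).
benefits = rng.exponential(scale=4.0, size=100_000)

def next_share(x, severity=12.0, k=12.0, x_mid=0.4, bias=1.0):
    """Best-response map: the share of people whose benefit exceeds the
    perceived expected sanction, given the current wrongdoer share x."""
    # The probability of being sanctioned collapses as wrongdoing spreads
    # ("safety in numbers"), so the cost curve of Figure 2 slopes down.
    p_sanction = 1.0 / (1.0 + np.exp(k * (x - x_mid)))
    perceived_cost = bias * severity * p_sanction  # bias < 1 = pessimism
    return float(np.mean(benefits > perceived_cost))

def equilibrium(x0, bias=1.0, steps=200):
    x = x0
    for _ in range(steps):
        x = next_share(x, bias=bias)
    return x

low = equilibrium(x0=0.1)    # starts below the threshold: wrongdoing stays rare
high = equilibrium(x0=0.7)   # starts above it: wrongdoing becomes widespread
# Moral pessimism (perceived cost below actual) can tip the low
# equilibrium past the threshold, as the Figure 5 discussion predicts.
low_biased = equilibrium(x0=0.1, bias=0.5)
```

With these assumed parameters the same population settles near zero wrongdoing or near universal wrongdoing depending only on the starting share, and halving the perceived cost moves the low-start trajectory upward.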

No single training approach is comprehensive enough to eliminate these disparities

Both techniques were completed in an average of six seconds, which allowed for rapid confirmation with minimal risk of desaturation. Additionally, this examination could be performed while capnography was being obtained, with both confirmatory methods used to support each other in equivocal cases. Operator confidence was high with both techniques, suggesting that both providers felt comfortable with their assessments, which is an important finding in ultrasound studies because, if the operator is not confident in their assessment, they will be unlikely to use the examination clinically. It is important to consider several limitations with respect to this study. First, it was performed in a cadaver model, which may not fully reflect the characteristics of a live patient. However, cadaver models have been used extensively for the evaluation of ultrasound for ETT confirmation and have demonstrated similar test characteristics to live patients for this modality. Additionally, we used only three cadavers in the study, and it is possible this may not have fully represented the wider population. However, we intentionally used cadavers with significant differences in anatomy to best represent the variation in a larger population. It is possible that the repeat intubations may have improved the accuracy of the sonographers due to increased practice. To avoid this, we alternated cadavers and techniques between each use to reduce the potential for improving each sonographer’s learning curve during the study. While it is not possible to completely exclude the potential for sonographers to have improved their accuracy throughout the study, this was not supported by the data, as equivalent numbers of misidentified ETT placements occurred in the early and later intubations. There is also no reason to suggest that this would differentially affect one technique over another.

Moreover, this study was designed to evaluate the test characteristics of dynamic vs. static sonography for ETT localization. Therefore, it was important to ensure similar rates of tracheal and esophageal intubation, which would not be possible in an ED setting due to low overall rates of esophageal intubation. Because this study was performed by two sonographers with prior experience using ultrasound for ETT confirmation, it is possible that the results may have differed if less experienced sonographers were used. However, the use of ultrasound for ETT confirmation has been suggested to have a rapid learning curve. Nonetheless, further studies are advised to determine whether the accuracy of static vs. dynamic techniques differs in less experienced providers.

Atrial fibrillation (AF), a supraventricular tachyarrhythmia, is the primary diagnosis for over 467,000 hospitalizations each year. The AFFIRM trial compared rate and rhythm control in 4,060 chronic AF patients. It found no difference in overall mortality, but there were fewer hospitalizations with rate control compared to rhythm control. The subsequent RACE II trial established that lenient heart rate control was as effective as strict control in preventing cardiovascular events and required fewer outpatient visits to achieve the goal HR. A number of medications are used for rate control, including beta blockers and non-dihydropyridine calcium channel blockers. Diltiazem, a non-dihydropyridine calcium channel blocker, is a common initial choice in the management of AF with rapid ventricular response (RVR) due to its ability to be given as an intravenous push, continuous infusion, and oral immediate-release or extended-release tablet. In the ED, a loading dose of IV diltiazem is usually administered, followed by a PO immediate-release tablet or IV continuous infusion. Both options allow for dose titration in the short term before converting to a longer-acting PO formulation for discharge.

The PO immediate-release diltiazem tablet has an onset of action of 30-60 minutes and is dosed every six hours. IV continuous-infusion diltiazem has a rapid onset of action and is titrated every 15-30 minutes. The route of diltiazem after the initial IV LD can influence the disposition of the patient from the ED, the level of care needed, and hospital length of stay. Patients who receive only the PO immediate-release diltiazem absorb a therapeutic dose quickly and can generally be discharged or admitted to a general medicine floor, but cannot be titrated more frequently than every six hours. Patients who receive the IV continuous infusion must have their dose frequently titrated by nursing and often require stepdown care. No studies exist comparing the efficacy of PO immediate-release and IV continuous-infusion diltiazem in the emergent management of AF with RVR. The objective of this study was to compare the incidence of treatment failure at four hours between PO immediate-release and IV continuous-infusion diltiazem after an IV LD.

We collected and managed study data using REDCap® electronic data capture tools. Baseline demographic information recorded included the patient’s age, sex, race, and weight. Diltiazem dosing characteristics at baseline and four hours and the use of adjunctive medication for HR or rhythm control at four hours were collected. Clinical outcomes recorded included HR and blood pressure at baseline and four hours, ED disposition, and hospital LOS. Two of the study’s investigators abstracted all available data independently. Both were involved in the study design and used a standardized data collection form in REDCap® that included study definitions to ensure consistency between the investigators. Investigators were not blinded to the study outcome. Any discrepancies between abstractors resulted in a collaborative review of the chart by both investigators until discrepancies were resolved.
As a result, interrater reliability was not determined.

The primary endpoint of the study was the percentage of patients with treatment failure at 4 ± 1 hour after initiation of PO immediate-release diltiazem or continuous IV diltiazem infusion. Treatment failure was defined as an HR of > 110 beats/min at 4 ± 1 hour, a switch in therapy from PO immediate-release diltiazem to IV continuous-infusion diltiazem, the requirement of an additional IV diltiazem bolus within four hours from the start of PO or IV continuous infusion, or addition/switch of therapy to another rate control or antiarrhythmic agent within four hours. A clinical endpoint of 4 ± 1 hour was selected to give time for both the PO and the IV diltiazem to have therapeutic effect. It was also concluded that this was a reasonable amount of time for the ED provider to determine disposition. We made the decision not to include time points extending beyond four hours due to the increased number of confounding factors, including the conversion to PO β-blockers or extended-release PO diltiazem. Patient characteristics collected included age, weight, race, sex, initial HR and BP, and initial diltiazem LD. We assessed the safety endpoint of clinically significant hypotension by recording the indication for diltiazem discontinuation and the need for vasopressor administration for hemodynamic support.

In the emergent setting, diltiazem has been shown to be superior to digoxin, metoprolol, and amiodarone in the initial management of AF and flutter. IV diltiazem has often been considered superior to PO in the management of AF due to its 100% bioavailability and titratability. However, PO immediate-release diltiazem confers many benefits over IV continuous infusion, including a fast onset of action, minimal titration requirement, decreased nursing resources, and the ability to disposition to a general floor or possibly discharge home. A comparison of PO immediate-release and IV continuous-infusion diltiazem in the emergent clinical setting had never been performed.
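The composite definition of treatment failure above is a disjunction of four criteria, any one of which suffices. A minimal encoding of that endpoint is sketched below; the field names are hypothetical illustrations, not the study's actual REDCap® variables.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    hr_at_4h: int             # HR in beats/min at 4 +/- 1 hour
    switched_to_iv: bool      # PO immediate-release switched to IV infusion
    extra_iv_bolus: bool      # additional IV diltiazem bolus within 4 hours
    other_agent_added: bool   # another rate-control/antiarrhythmic agent added

def treatment_failure(e: Encounter) -> bool:
    """Composite primary endpoint: any single criterion counts as failure."""
    return (e.hr_at_4h > 110
            or e.switched_to_iv
            or e.extra_iv_bolus
            or e.other_agent_added)
```

A patient with an HR of 100 at four hours and no escalation of therapy would be classified as a treatment success; crossing any one criterion flips the classification.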
In our study, we found that PO immediate-release diltiazem resulted in an odds ratio (OR) of 0.4 for treatment failure when compared to IV continuous infusion. In other words, the odds of heart rate control with PO immediate-release diltiazem were 2.6 times greater than with IV continuous infusion at four hours.
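As a check on the arithmetic: an odds ratio for treatment failure and the odds ratio for the complementary outcome (rate control) are exact reciprocals, so the 0.4 and 2.6 figures describe the same estimate once rounding is accounted for (1/0.4 is exactly 2.5; an unrounded OR closer to 0.38 would invert to about 2.6). A minimal sketch with hypothetical counts, not the study's data:

```python
def odds_ratio(events_a, n_a, events_b, n_b):
    """Ratio of the odds of an event in group A to the odds in group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Hypothetical counts chosen only to illustrate the algebra:
# 10 of 50 PO patients fail vs. 20 of 50 IV patients.
or_failure = odds_ratio(10, 50, 20, 50)   # OR of failure, PO vs. IV
or_success = odds_ratio(40, 50, 30, 50)   # OR of rate control, PO vs. IV

# The two framings are exact reciprocals: or_failure * or_success == 1.
```

Here the failure OR is 0.375 and the success OR is about 2.67, mirroring how a published 0.4 can invert to a reported 2.6 under rounding.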

This is a surprising result given the higher bioavailability of the IV route compared to the oral formulation. A possible reason for this difference in treatment failure may be that the IV continuous infusion was sub-optimally titrated. In our sample, the median hourly dose of the IV continuous infusion at four hours was only 10 mg/h, well below the maximum dose of 15 mg/h. Slow titration to sub-maximal doses may have resulted in sub-optimal diltiazem plasma concentrations in comparison with patients who were given immediate-release PO diltiazem. In theory, PO dosing may have achieved a higher plasma concentration as a result of the entire diltiazem dose being given at once. Therefore, our results may not reflect the comparison of two treatment regimens at optimal dosing capacity, but rather the real-world practice in which medication titration is not always optimized. PO diltiazem was associated with statistically significantly higher odds of being admitted to the general floor and lower odds of being admitted to stepdown or the ICU. Patients who received PO also had a two-day shorter median LOS compared to IV. While the differences in these two parameters cannot be ascertained in a definitive manner due to the retrospective nature of the study, it is possible that the extended time needed to transition patients from IV to PO diltiazem before discharge may have been a contributing factor. Patient disposition and decreased LOS represent a possible area of healthcare cost savings that should be investigated in future prospective studies. Providers may choose IV continuous-infusion diltiazem if they want to titrate to lower doses in patients with borderline hemodynamic stability. In our study, however, clinically significant hypotension did not occur in either the PO or IV group. Overall, our findings call into question the primacy of IV continuous-infusion diltiazem for AF.
PO diltiazem was associated with a lower rate of treatment failure and a higher rate of heart rate control than IV continuous infusion, with similar safety. Importantly, these findings are the result of a retrospective study with a limited sample size and therefore must be confirmed in a larger, prospective, randomized controlled trial.

Each year 395,000 people suffer an out-of-hospital cardiac arrest (OHCA) in the United States. Multiple studies have demonstrated that layperson CPR increases the chance of survival by 2-3 fold. The importance of immediate response by the public has been highlighted by the Institute of Medicine report “Strategies to Improve Cardiac Arrest Survival: A Time to Act”. One of the key recommendations of the IOM report was a call to “foster a culture of action through public awareness and training” to reduce the risk of irreversible neurologic injury and functional disability. Wide disparities in bystander CPR rates and OHCA outcomes persist, with some communities reporting a five-fold difference in survival. Residents who live in neighborhoods that are primarily Black, Hispanic, or low income are more likely to have an OHCA, less likely to receive bystander CPR, and less likely to survive. The implementation of creative new strategies to increase layperson CPR and defibrillation may improve resuscitation in priority populations. Most communities will only improve survival through a multifaceted, community-wide approach that may include teaching hands-only CPR for bystanders, emphasis on brief educational videos and video self-instruction, mandatory school-based training, and dispatcher-assisted CPR. One particularly high-yield approach for high-risk communities is the implementation of mandatory CPR training in high schools. The American Heart Association, the World Health Organization, and the IOM, along with multiple other national and international advocacy groups, have endorsed CPR training in high school as a key foundation to improve OHCA survival
outcomes. The 2015 IOM report calls for state and local education departments to partner with training organizations and public advocacy groups to promote and facilitate CPR and automated external defibrillator training as a high school graduation requirement. Today communities across the U.S. have recognized the value of CPR training in high schools, and 36 states have enacted laws calling for mandatory training prior to graduation. The benefit of CPR training in high schools is understood as a long-term investment to ensure that multiple generations are trained and ready to act. However, a more immediate consequence of school-centered training may be the amplification of community CPR training and literacy as students become trainers for their household and circle of friends. Students can be asked to “pay it forward” by sending them home with CPR training materials and assigning them the task of training friends and family members. This pilot program sought to investigate the feasibility, knowledge acquisition, and dissemination of a high school-centered CPR video self-instruction program with a “pay-it-forward” component in a low-income, urban, predominantly Black neighborhood with historically low bystander CPR rates. Schools provide large-scale, centrally organized community settings accessible to both children and adult family members of all socioeconomic backgrounds. A student-mediated CPR educational intervention may be an effective conduit to relay OHCA knowledge and preparedness in high-risk neighborhoods.

Twenty-nine states and the District of Columbia have legalized medicinal use of cannabis

Although the absolute improvement in quality scores was modest, the intervention resulted in the communication of approximately 100 pieces of additional information, any of which had the potential to improve the handoff process. A recent survey of EM residency programs in the U.S. found poor adherence to standardized ED-to-inpatient handoff practices, and our study was no exception. In the post-intervention period, the SBAR-DR format was used for only 30% of verbal handoffs and the written template was used for 50%. The reason for this was likely multifactorial and related to both methodological and cultural barriers. Although the pilot study involved the institution's largest admitting service, EPs performed admission handoffs with other admitting teams not included in the study. Having to shift between different handoff strategies may have limited EPs' ability to acclimatize and integrate SBAR-DR into their daily practice. The adoption of the written handoff note also may have been hindered by the additional charting time required. Additionally, having fewer senior EM residents in the post-intervention cohort may have negatively impacted our post-intervention scores, as we found this group showed significant improvement in both handoff quality score and global rating scale. This supports prior research, which has found that residents' ability to integrate handoff information may improve with experience. Additionally, handoff practices are an ingrained part of a specialty's culture. Although our study group included faculty and resident physician champions from IM and EM, we may not have fostered adequate buy-in from practicing providers to change practice routines.

As institutions implement changes to inter-unit handoffs and care transitions, they need to address cultural complacency and build coalitions among affected members of the healthcare team. Possible solutions include interdisciplinary communication training, which could give physicians an opportunity to practice standardized handoffs with one another while also mitigating future conflicts via improved interpersonal engagement. Endorsement from senior physician leadership could also facilitate provider buy-in and adherence. Finally, the Joint Commission Center for Transforming Healthcare's Targeted Solutions Tool® has shown promise in improving handoff communication by facilitating targeted needs assessment of local handoff practices, data collection, and quality improvement intervention. Cannabis is the most widely used illicit substance in the United States. In 2014, 22.2 million Americans 12 years of age and older reported current cannabis use. The rapidly changing political landscape surrounding cannabis use has the potential to increase these numbers dramatically. In addition, as of 2017, California, seven other states, and the District of Columbia have legalized recreational use of marijuana. The incidence of cannabinoid hyperemesis syndrome (CHS) and other marijuana-related emergency department (ED) visits has increased significantly in states where marijuana has been legalized. A study published in 2016 evaluating the effects of cannabis legalization on EDs in the state of Colorado found that visits for cyclic vomiting (which included CHS in this study) have doubled since legalization. Despite the syndrome's increasing prevalence, many physicians are unfamiliar with its diagnosis and treatment.
CHS is marked by symptoms that can be refractory to standard antiemetics and analgesics. Notwithstanding increasing public health concerns about a national opioid epidemic and emerging guidelines advocating non-opioid alternatives for the management of painful conditions, these patients are frequently treated with opioids.

In light of the public health implications of a need to reduce opioid use when better alternatives exist, this paper describes the current state of knowledge about CHS and presents a novel model treatment guideline that may be useful to frontline emergency physicians and other medical providers who interface with these patients. The expert consensus process used to develop the model guideline is also described. CHS is a condition defined by symptoms including significant nausea, vomiting, and abdominal pain in the setting of chronic cannabis use. Cardinal diagnostic characteristics associated with CHS include regular cannabis use, cyclic nausea and vomiting, and compulsive hot baths or showers, with resolution of symptoms after cessation of cannabis use. Cyclical vomiting syndrome (CVS) is a related condition consisting of symptoms of relentless vomiting and retching. While CHS patients present with symptoms similar to those of CVS, associated cannabis use is required to make the diagnosis. CHS patients present to the ED with non-specific symptoms that are similar to other intra-abdominal conditions. These patients command substantial ED and hospital resources. In a small multi-center ambispective cohort study by Perrotta et al., the mean number of ED visits and hospital admissions for 20 suspected CHS patients identified over a two-year period was 17.3 ± 13.6 and 6.8 ± 9.4, respectively. These patients frequently undergo expensive and non-diagnostic abdominal imaging studies. In the Perrotta study, the mean number of abdominal computed tomography scans and abdominal/pelvic ultrasounds per patient was 5.3 ± 4.1 and 3.8 ± 3.6, respectively. In addition to a contribution to ED crowding by unnecessary prolonged stays to perform diagnostic testing, patients are exposed to potential side effects of medications, peripheral intravenous lines, and procedures such as endoscopies and abdominal surgeries.

While treating physicians often administer opioid analgesics and antiemetics, symptom relief is rarely achieved with this strategy. In fact, there is evidence to suggest opioids may exacerbate symptoms. The pathophysiology of CHS is unclear. Paradoxically, cannabis has long-recognized antiemetic effects, which led to its approved use for treatment of nausea and vomiting associated with chemotherapy and for appetite stimulation in HIV/AIDS patients. The factors leading to the development of CHS among only a portion of chronic marijuana users are not well understood. Basic science research has identified two main cannabinoid receptors, CB1 and CB2, with CB1 receptors primarily in the central nervous system and CB2 receptors primarily in peripheral tissues. This categorization has recently been challenged, and researchers have identified CB1 receptors in the gastrointestinal tract. Activity at the CB1 receptor is believed to be responsible for many of the clinical effects of cannabis use, including those related to cognition, memory, and nausea/vomiting. Scientists hypothesize that CHS may be secondary to dysregulation of the endogenous cannabinoid system through desensitization or downregulation of cannabinoid receptors. Some investigators have postulated that disruption of peripheral cannabinoid receptors in the enteric nerves may slow gastric motility, precipitating hyperemesis. Relief of CHS symptoms with very hot water has highlighted a peripheral tissue receptor called TRPV1, a G-protein coupled receptor that has been shown to interact with the endocannabinoid system but is also the only known capsaicin receptor. This has led some to advocate for the use of topical capsaicin cream in the management of acute CHS. Sorensen et al. identified seven diagnostic frameworks, with significant overlap among the characteristics listed by the various authors; however, there was no specific mention of how many of the above features are required for diagnosis.
Those with the highest sensitivity include at least weekly cannabis use for greater than one year, severe nausea and vomiting that recurs in cyclic patterns over months (usually accompanied by abdominal pain), resolution of symptoms after cannabis cessation, and compulsive hot baths/showers with symptom relief. Clinicians should consider other causes of abdominal pain, nausea, and vomiting to avoid misdiagnosis. Abdominal pain is classically generalized and diffuse in nature. CHS is primarily associated with inhalation of cannabis, though it is independent of formulation and can be seen with incineration of plant matter, vaporized formulations, waxes or oils, and synthetic cannabinoids. At the time of this writing, there have been no reported cases associated with edible marijuana. Episodes generally last 24-48 hours but may last up to 7-10 days. Patients who endorse relief with very hot water will sometimes report spending hours in the shower. Many of these patients will have had multiple presentations to the ED with previously negative workups, including laboratory examinations and advanced imaging. Clinicians should assess for the presence of CHS in otherwise healthy, young, non-diabetic patients presenting with a previous diagnosis of gastroparesis. Laboratory test results are frequently non-specific. If patients present after a protracted course of nausea and vomiting, there may be electrolyte derangements, ketonuria, or other signs of dehydration. Mild leukocytosis is common.

If patients deny cannabis use but suspicion remains high, a urine drug screen should be considered. Imaging should be avoided, especially in the setting of a benign abdominal examination, as there are no specific radiological findings suggestive of the diagnosis. Per the expert consensus guideline, once the diagnosis of CHS has been made and there is low suspicion for other acute diagnoses, treatment should focus on symptom relief and education on the need for cannabis cessation. Capsaicin is a readily available topical preparation that is reasonable to employ as first-line treatment. While this recommendation is based on very limited data, including a few small case series, capsaicin is inexpensive, has a low-risk side-effect profile, makes mechanistic sense, and is well tolerated. Conversely, there are no data demonstrating efficacy of opioids for CHS. Capsaicin 0.075% can be applied to the abdomen or the backs of the arms. If patients can identify regions of their bodies where hot water provides symptom relief, those areas should be prioritized for capsaicin application. Patients should be advised that capsaicin may be uncomfortable initially but then should rapidly mimic the relief that they receive with hot showers. Additionally, patients must be counseled to avoid application near the face, eyes, genitourinary region, and other areas of sensitive skin; not to apply capsaicin to broken skin; and not to use occlusive dressings over the applied capsaicin. Patients can be discharged home with capsaicin, with advice to apply it three or four times a day as needed. If capsaicin is not readily available but there is a shower available in the ED, patients can be advised to shower with hot water to provide relief.
Educate patients to use caution to avoid thermal injury, as there are reports of patients spending as long as four hours at a time in hot showers. Other possible therapeutic interventions include administration of antipsychotics such as haloperidol 5 mg IV/IM or olanzapine 5 mg IV/IM or ODT, which have been described to provide complete symptom relief in case reports. Conventional antiemetics, including antihistamines, serotonin antagonists, dopamine antagonists, and benzodiazepines, can be used, though reports of effectiveness are mixed. Provide intravenous fluids and electrolyte replacement as indicated. Avoid opioids if the diagnosis of CHS is certain. Clinicians should inform patients that their symptoms are directly related to continued use of cannabis. They should further advise patients that immediate cessation of cannabis use is the only method that has been shown to completely resolve symptoms. Reassure patients that symptoms resolve with cessation of cannabinoid use and that full resolution can take anywhere from 7-10 days of abstinence. Educate patients that symptoms may return with re-exposure to cannabis. Provide clear documentation in the medical record to assist colleagues with confirming the diagnosis, as these patients will frequently re-present to the ED. Due to the growing opioid epidemic in the U.S., there is widespread interest in using prescription drug monitoring programs (PDMPs) to curb prescription drug abuse. PDMPs are statewide databases used by physicians, pharmacists, and law enforcement to obtain data about controlled-drug prescriptions, with the goals of detecting substance-use disorders and drug-seeking behaviors and reducing patients' risk of adverse drug events. While almost all U.S. states have PDMPs, they vary in design and implementation. In this paper, we review the history, evidence, and adoption of best practice guidelines in state PDMPs, with a focus on how to best deploy PDMPs in emergency departments.
Specifically, we analyze the current PDMP model and provide recommendations for PDMP developers and EDs to help meet the informational needs of ED providers, with the goal of better detection and prevention of prescription drug abuse. The U.S. accounts for roughly 80% of opioid use worldwide, and misuse, such as the recreational use of opioids, is a significant problem. Every 19 minutes in the U.S., someone dies from an unintentional drug overdose, the majority from opioids. From 1997 to 2007, the average per-person annual use of prescription opioids in the U.S. increased 402%, from 74 mg to 369 mg. Meanwhile, an estimated seven million people above the age of 12 use opioids and other prescription medications for non-therapeutic purposes annually. These non-medical uses of opioids are linked to 700,000 ED visits yearly. Along with treating the consequences of opioid-related illness and overdose, EDs are often used by some patients as a source of opioid prescriptions.

This study was a sub-analysis of a prospective observational study

Such encounters may also serve as a sentinel event for those at high risk for stroke, facilitating important changes in their health behavior. Physicians can seize on such teachable moments to educate high-risk AF/FL patients on stroke risk and prevention and, when appropriate, to recommend or prescribe anticoagulation. Initiating anticoagulation at the time of ED discharge for stroke-prone patients does not increase bleeding rates and contributes to decreased mortality. Some patients, however, might prefer to have this shared decision-making conversation with a provider aware of their values and preferences, e.g., a primary care provider or cardiologist. Nevertheless, emergency physicians are an important link in the chain of multi-specialty care coordination for the stroke-prone AF/FL population, whether they initiate the discussion of thromboprophylaxis or actually prescribe anticoagulation. The initiation of thromboprophylaxis for ED patients with AF/FL at high risk for stroke has not been extensively studied. The literature that exists, however, demonstrates under-prescribing in countries around the world. The prescribing practices in U.S. community EDs are not well understood. We undertook a multi-center, prospective, observational study to evaluate the anticoagulation practice patterns of community EPs and short-term, post-ED care providers in the management of patients with non-valvular AF/FL considered at high risk for ischemic stroke. We also sought to identify factors influencing initiation of oral anticoagulation. We hypothesized that increasing age, lack of cardiology involvement in the patient's ED care, and restoration of sinus rhythm before ED discharge would decrease the likelihood of receiving an oral anticoagulant prescription.
Lastly, we reviewed the electronic health records of the patients discharged without anticoagulation to evaluate documented reasons for withholding anticoagulation and provision of educational material on AF/FL stroke risk and prevention.

The source population was based within KPNC, a large integrated healthcare delivery system that provides comprehensive medical care for four million members across 21 medical centers. KPNC members represent approximately 33% of the population in the areas served and are highly representative of the local surrounding and statewide population. Emergency care was provided by emergency medicine residency-trained and board-certified EPs. During the study period, the annual census of each of the seven EDs ranged from 25,000 to 78,000. No departmental policies were in place at the participating EDs to govern the short-term anticoagulation management of patients with AF/FL. Patient care was left to the discretion of the treating EPs. All facilities had pharmacy services available around the clock for discharge medications and supplemental patient education. The oral anticoagulation medications in use within KPNC during the study period were warfarin and dabigatran, warfarin being the drug of choice at the time. Furthermore, each facility had its own pharmacy-managed, phone-based Outpatient Anticoagulation Service that managed outpatient warfarin use and provided close follow-up and monitoring of these patients, akin to similar programs in other KP regions in the U.S. The percent time in therapeutic range for the international normalized ratio (INR) during the study period varied by facility and ranged from 70% to 74%, calculated with a six-month look-back period using the Rosendaal linear interpolation method. In the TAFFY study, adult KPNC health plan members in the ED with electrocardiographically confirmed non-valvular AF/FL were eligible for prospective enrollment if their atrial dysrhythmia fell into any one of these three categories: symptomatic AF/FL; AF/FL requiring ED treatment for rate or rhythm control; or the first known electrocardiographically documented episode of AF/FL.
Patients were ineligible if they were transferred in from another ED, were receiving only palliative comfort care, had an implanted cardiac pacemaker/ defibrillator, or had been resuscitated from a cardiac arrest in the ED or just prior to arrival. The treating EPs enrolled patients via convenience sampling and were provided a small token of appreciation for their bedside data collection. No research assistants facilitated enrollment.
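The percent time in therapeutic range cited above is conventionally computed with the Rosendaal linear interpolation method, which assumes the INR changes linearly between successive measurements and classifies each inter-test day as in or out of the target range. A minimal sketch, with a hypothetical INR series and the standard 2.0-3.0 target range (the function name and data are illustrative, not drawn from the study):

```python
from datetime import date

def rosendaal_ttr(measurements, low=2.0, high=3.0):
    """Percent time in therapeutic range (TTR) by Rosendaal
    linear interpolation: INR is assumed to change linearly
    between successive measurements, and each inter-test day
    is counted as in or out of the target range."""
    days_total = 0
    days_in_range = 0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        for step in range(span):
            inr = inr0 + (inr1 - inr0) * step / span  # interpolated INR on this day
            days_total += 1
            if low <= inr <= high:
                days_in_range += 1
    return 100.0 * days_in_range / days_total

# Hypothetical INR series over a 30-day window
series = [(date(2024, 1, 1), 1.5),
          (date(2024, 1, 11), 2.5),
          (date(2024, 1, 31), 3.5)]
print(round(rosendaal_ttr(series), 1))  # → 53.3
```

With this series, 16 of the 30 interpolated days fall within range, so the sketch reports a TTR of about 53%; facility-level figures such as the 70-74% above would be aggregated across many patients.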

This anticoagulation study included TAFFY patients who were not taking oral anticoagulants at the time of ED presentation; were at high risk for thromboembolic complications based on a validated thromboembolism risk score; and were discharged home directly from the ED. Only a patient's first enrollment was included in this analysis. We used the validated Anticoagulation and Risk Factors in Atrial Fibrillation (ATRIA) stroke risk score to identify our AF/FL population at high risk for thromboembolism, as it has been shown to be more accurate than the CHADS2 or CHA2DS2-VASc stroke risk scores. TAFFY variables collected prospectively at the time of patient care included presenting symptoms; characterization of the atrial dysrhythmia; comorbid diagnoses; ED management; cardiology consultation; discharge rhythm; and discharge pharmacotherapy. To minimize the effect that structured data collection might have on stroke prevention and to improve the odds of describing real-world behavior, the physician education material and data collection tool mentioned none of the following: hemorrhage risk, thromboembolic risk, risk scoring, indications for anticoagulation, post-ED follow-up care, or this study's objectives and hypotheses. We undertook monthly manual chart review audits at each medical center to identify cases that were TAFFY-eligible but had not been enrolled, to assess potential selection bias between the enrolled and missed-eligible populations. After completion of the enrollment period, we extracted additional demographic and clinical variables from the health system's comprehensive integrated electronic health record. These included additional patient characteristics and oral anticoagulation prescription, prescriber, and outpatient follow-up within 30 days of ED discharge. Among 2,849 identified eligible patients, 1,980 were enrolled by the treating physicians in the parent TAFFY study.
Enrolled and non-enrolled patients were comparable in terms of age, gender, comorbidity, and stroke risk scores, except that enrolled patients were more likely to have had a history of previously diagnosed AF/FL.

For the present analysis, we excluded 906 enrolled patients who were not discharged home directly from the ED or were not KP health plan members at enrollment, 252 patients who were already taking anticoagulation therapy, and 510 patients who were not at high risk for thromboembolism. The remaining 312 AF/FL patients constituted our study cohort. While selected for the study based on their ATRIA score, all study patients were also found to be high risk using the CHA2DS2-VASc score. Overall, the median age was 80 years, and 201 cohort members were women. Oral anticoagulants were prescribed to 128 patients within 30 days of the index ED visit, with 85 patients receiving a new anticoagulant prescription at the time of ED discharge and the remaining 43 patients in the following 30 days. In this sample, warfarin was the only oral anticoagulant prescribed. During the post-ED-discharge period, the specialties of the physicians prescribing anticoagulation included outpatient internal medicine, cardiology, hospital medicine, and emergency medicine. Among the 227 patients who left the ED without an oral anticoagulant prescription, 195 had an in-person or telephone encounter with a primary care provider or cardiologist within 30 days. Forty-three patients were discharged home only on antiplatelet medications: seven were advised to continue their daily aspirin and 36 were prescribed new daily antiplatelet agents at the time of discharge. Characteristics of the cohort stratified by anticoagulation initiation are described in Table 2. Variables independently associated with increased odds of anticoagulation initiation included younger age, new diagnosis of AF/FL, symptom onset >48 hours prior to evaluation, EP assessment of the rhythm pattern as intermittent, receipt of cardiology consultation in the ED, and failure of sinus restoration by the time of ED discharge.
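For reference, the CHA2DS2-VASc score used above as a secondary risk check is a simple additive tally of clinical factors. A minimal sketch (the function name and example patient are illustrative; the ATRIA score used for primary selection weighs age more finely and is not shown):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """CHA2DS2-VASc stroke risk score for non-valvular AF:
    1 point each for CHF, hypertension, diabetes, vascular
    disease, female sex, and age 65-74; 2 points each for
    prior stroke/TIA and age >= 75."""
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    return score

# A hypothetical 80-year-old woman with hypertension and no other factors
print(cha2ds2_vasc(80, True, False, True, False, False, False))  # → 4
```

Scores of 2 or more are generally treated as an indication to consider oral anticoagulation, consistent with the entire cohort here being classified as high risk.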
Among the 227 patients discharged home from the ED without anticoagulation, 139 patients had one or more reasons documented for withholding anticoagulation. These were categorized as physician concerns and patient concerns. The leading physician reasons for withholding anticoagulation were concerns about elevated bleeding risk, deferring the decision to an outpatient provider, and the perception that restoration of sinus rhythm had significantly reduced or eliminated stroke risk. The leading patient reasons for declining anticoagulation were a preference to continue the discussion of anticoagulation with their outpatient provider and simple refusal, not otherwise specified. Deferring the shared decision-making process to the patient's outpatient provider was the leading reason for withholding anticoagulation when combining physician and patient concerns. One hundred thirty-seven patients were given patient education material on AF/FL in their discharge instructions. The three versions of the material used by the EPs each included one sentence on the general association between AF/FL and thromboembolic events. The material was not personalized, however, and did not quantify the patient's specific risk, mention broader thromboembolic risk categories, or discuss the benefits and risks of stroke prevention therapy. Using an online random number generator, we identified 23 cases for review by a second abstractor. Percent agreement was the same for the presence of both a documented reason for non-prescribing and provision of patient education material.
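Agreement between two chart abstractors of this kind is conventionally summarized with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch with hypothetical yes/no abstraction calls (the data are illustrative, not the study's):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for the
    agreement expected by chance from the raters' marginals."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical calls on 23 charts: the raters disagree on one chart
a = ["y"] * 12 + ["n"] * 11
b = ["y"] * 11 + ["n"] * 12
print(round(cohens_kappa(a, b), 2))  # → 0.91
```

Because chance agreement is subtracted out, kappa is lower than raw percent agreement (22/23 ≈ 0.96 here); values above 0.8 are usually read as near-perfect agreement.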

The kappa statistic was 0.91 for each variable. In this multi-center, prospective cohort of non-anticoagulated AF/FL patients at high thromboembolic risk discharged home from the ED, we found that approximately 40% were prescribed oral anticoagulation within 30 days. Furthermore, we observed that younger age, selected rhythm-related characteristics in the ED, and receipt of cardiology consultation were strongly associated with receiving anticoagulation. About 60% of patients discharged home from the ED without anticoagulation had a reason documented in their electronic health record, a relatively high percentage of documentation compared with a recent, large, inpatient registry. The principal reason for non-prescribing in our study was deferring the shared decision-making process to the patient's outpatient provider. Such reasoning is sensible in a setting like ours, where patients have ready access to their outpatient physicians and 30-day follow-up is common. Our percentage of deferral was higher than in a similar study of ED anticoagulation prescribing for high-risk AF/FL in Spain, though, like our study population, all of their patients also had health coverage. Other leading documented reasons included a perception of increased bleeding risk and a perception of reduced stroke risk. The incidence of oral anticoagulation initiation for AF/FL patients at high risk for ischemic stroke who are discharged home from the ED has not been well described. Reports range widely, from approximately 10% to 50%. The calculation also varies depending on whether stroke-prone AF/FL patients deemed ineligible for anticoagulation are included in the denominator. A large, 124-center study from Spain by Coll-Vinent et al.
in 2011 demonstrated that anticoagulation was initiated at the time of home discharge for 193 of 453 high-risk AF patients, higher than our 27%. The case mix in this study was similar to ours in that patients with all categories of AF were included, but different in that they excluded patients thought ineligible for anticoagulation, something our study design did not allow. This difference might explain in part why their incidence of initiation was higher than what we observed. A more recent, 62-center Spanish study from the same investigators reported a similarly high incidence of de novo anticoagulation prescribing on ED discharge. Two hospitals affiliated with the University of British Columbia, Canada, have reported a high baseline incidence of appropriate anticoagulation initiation at ED discharge for high-risk AF/FL patients. As with the Spanish study above, these investigators had excluded ineligible patients. Other studies have reported lower incidences of anticoagulation initiation. A retrospective cohort study undertaken in 2008 in eight Canadian EDs observed thromboprophylaxis initiation in 21 of 210 patients with recent-onset AF/FL who were discharged home. A more recent prospective study by Stiell et al.
described the treatment of patients with recent-onset AF at six academic Canadian EDs from 2010 to 2012 and found slightly lower rates of untreated high-risk patients leaving the ED with a new anticoagulation prescription. In a retrospective study of two academic Canadian EDs, Scheuermeyer et al. reported that 27% of high-risk AF/FL patients were begun on appropriate stroke prevention medications at discharge, and documentation of reasons for withholding thromboprophylaxis was noted in an additional 21 patients. Our finding that older patients with high-risk AF/FL were less likely to receive an oral anticoagulant prescription than their younger counterparts is consistent with studies demonstrating under-treatment both in the ED and in other settings. Thromboprophylaxis is less commonly prescribed to patients over 75 years of age, even though this population likely benefits the most, given their higher absolute risk of ischemic stroke compared with intracranial hemorrhage or life-threatening extracranial hemorrhage. Physicians often acknowledge their hesitancy to initiate anticoagulation in the elderly and very elderly, given that these patients often have a high comorbidity burden, associated cognitive disorders, and polypharmacy-related challenges.

We also report the use of an EDOU to help decrease hospital admission

To the best of our knowledge, this is the first report of healthcare utilization in patients with SCD that includes day hospital (DH) and ED observation visits in addition to the routinely reported ED visits and hospitalizations. We intentionally "counted" each encounter, and the numbers are significant. We attempted to dissect the "locations" in an effort to more fully understand all healthcare use for treatment of VOC and to begin to understand the potential of all locations as alternatives for treatment of VOC. During the project, several changes at both sites affected healthcare utilization options. Immediately prior to the onset of the study, Site 1 opened a new day hospital, enabling the management of mild episodes of VOC with a DH stay; in contrast, at Site 2, the main provider who admitted patients to the DH took an 18-month medical leave, temporarily limiting the use of the DH at Site 2. Therefore, it is not surprising that the DH was used more by Site 1 patients needing acute pain management of VOC. It should be noted that the percentage of patients with one or more encounters at the DH at each site was not significantly different. The difference in usage reflected the frequency of DH use per patient during the study period rather than the percentage of patients with at least one DH encounter at each site. Another difference in management style is reflected in the hospital admission rate following a DH encounter between sites. Site 1 had a low post-DH-encounter hospital admission rate, 8%, compared to 24% at Site 2. Dedicated DH management of patients with SCD has been shown to reduce full hospital admissions and total costs. It is clear there was a lower threshold for admission from the DH at Site 2 when compared with Site 1. This reflects practice pattern differences.

Emergency physicians should work with area hematologists to explore expanded use of DH treatment of uncomplicated VOC to reduce hospital admissions in cases where admission is not otherwise warranted. Site 1 placed 67 patients in the observation unit rather than admitting them to the hospital when discharge after the ED stay was not possible; the admission rate was less than the SCD-VOC ED admission rate. A Brazilian hospital center successfully implemented an EDOU protocol and reduced hospital admissions; however, generalization of the findings is limited by the small sample size, as there were fewer than 30 hospital admissions for sickle cell crisis each year. Two studies proclaiming a 50-55% reduction in hospital admission rates following implementation of a dedicated SCD-VOC observation protocol have been published in abstract form, but the detailed reports have yet to be published. Additional details are required before conclusions can be generalized to other settings. However, emergency physicians with access to an EDOU should consider establishing an SCD observation protocol to reduce hospital admissions for uncomplicated VOC. Few prior studies have assessed sickle cell patients' use of hospital facilities outside of their specialists' home institutions. Our finding that 34.5% of SCD patients visited outside institutions is slightly less than that of Woods et al. in 1997, who found that 39% of SCD patients in the Illinois statewide database used more than one hospital for care. However, our finding of 34.5% outside hospital use is considerably less than that of the Panepinto et al. study using a database from eight states, which found that 48.7% of adult patients with SCD used more than one hospital. The fact that our patients had access to a hematologist for regular care may have reduced their need to seek care outside of the home institutions, while the other two cited studies reflected a more general SCD patient population, likely with less hematology follow-up care.
Furthermore, patients seeking care elsewhere may represent needs unmet by the home institution. Our findings highlight the importance of measuring the cost of outside hospital utilization when studying the financial impact of new treatments or programs initiated at the investigator’s institution. While the majority of patients with sickle cell disease at each study site presented for acute care during the study period, a significant number had no acute care encounters, over a period longer than previously reported.

Approximately 40% of clinic patients at Site 1 and 33% of clinic patients at Site 2 had no acute care encounters at their hematologist’s institution, or at hospitals within 20 miles of the hematologist, during the 2.5-year monitoring period. Our findings should be compared to an eight-state study of statewide inpatient and ED databases, which found that only 29% of patients had no acute care encounters related to their sickle cell anemia over a 12-month period. Darbari et al. reported a percentage similar to ours, 40%, but their assessment period was only one year. Our findings document that 33-40% of two populations of patients with sickle cell disease managed by hematologists had no acute care encounters over a period of 30 months. We believe this is an important finding that further refutes the commonly held myth that all patients with SCD are high utilizers. Another important finding is that a greater proportion of patients at Site 2 had one or more hospital admissions and one or more ED visits. Furthermore, while the number of patients with acute care encounters at Site 1 was more than twice the number at Site 2, Site 2 patients had more total ED encounters than Site 1 patients. This again speaks to differences in practice patterns between sites that can be guided with strong input from the patients’ hematologists. It has been documented previously that a minority of patients with SCD account for a disproportionately greater number of encounters; however, the variation in acute care usage between sickle cell populations has not previously been demonstrated within a single study. Clearly, the patients at Site 2 had more acute care encounters per patient.
Our study did not assess differences in methods of sickle cell disease management in the outpatient clinics; future studies should investigate differences in all management methods, as well as differences in patient characteristics, to determine the cause of this difference in acute care utilization. Our study was a prospective observational study, and we did not randomize patients to any specific treatment plan or setting. Although it was our intention to provide optimal and uniform care at both sites, providers at Site 2 were unable to initiate patient-controlled analgesia in the ED.

However, use of patient-controlled analgesia at Site 1 had its own problems, including delays in initiating pain treatment, as the device takes more time to set up than a simple single intravenous injection of pain medication. Patient satisfaction with pain medication was not significantly different between sites. We did not assess outside hospital use beyond a 20-mile radius of each study site. We learned from discussions with patients that a few had received acute medical care at facilities outside the 20-mile radius surrounding the home institutions, but we are unable to quantify or comment further on this care because patients consented to record review only for hospitals within the 20-mile radius. We observed differences in management styles, but we were unable to determine from these data to what extent the differences we observed were due to physician practice, patient disease severity, or other factors. Each site experienced a deficit in hematologist specialty coverage that reduced use of the DH until a replacement could be found. Our patient population had access to a hematology specialty clinic during the entire study period; our findings may not be applicable to settings without readily available hematology follow-up and hematologist-directed day hospital management for patients with sickle cell disease.
Headache is a common complaint in the emergency department. The use of HCT by emergency physicians for the evaluation of headache varies widely, and 97% of EPs surveyed felt that at least some of the imaging studies ordered in EDs were medically unnecessary. The American College of Emergency Physicians released its Choosing Wisely campaign in 2013, which included avoiding HCTs in patients with minor head injury who are at low risk based on validated decision rules. During the 2015 Academic Emergency Medicine Consensus Conference on diagnostic imaging in the emergency department, participants suggested that allowing providers to influence metrics could produce better quality metrics; they also suggested that knowledge translation for the optimization of diagnostic imaging use is a core area warranting further research. The Centers for Medicare and Medicaid Services proposed OP-15, “Use of Brain Computed Tomography in the Emergency Department for Atraumatic Headache,” to measure the proportion of HCTs performed on ED patients presenting with a primary complaint of headache that were supported by diagnosis codes; however, its methods were soon questioned. In 2012, while OP-15 was still under consideration, we implemented a quality improvement effort intended to improve the documentation of appropriate diagnoses in support of HCT ordering. As part of this QI effort, we addressed some of the criticisms of OP-15 by expanding the indications for HCT and soliciting input from practicing EPs.

Reviewing this QI effort in 2014, we observed that the proportion of HCT use decreased after EPs had reviewed their individual practice data. The primary objective of the present study was to determine whether the observed decrease in HCT use was associated with changes in the proportions of death or missed intracranial diagnosis. Secondarily, we sought to determine whether proportions of subsequent cranial imaging or reevaluation of headache differed between those who did and those who did not undergo HCT in the ED. Our QI effort was structured to fulfill the practice improvement component of the American Board of Emergency Medicine’s Maintenance of Certification requirement, which required collecting data on 10 visits per EP before and after an intervention. We performed two interventions in succession, so our QI effort yielded three epochs: pre-intervention, post-education, and post-data review. At the end of each epoch, we sampled 10 visits for headache seen by each EP by searching the EMR for chief complaints of headache and identifying the 10 most recent ED visits seen by each faculty EP. For our educational intervention, we began by soliciting feedback from EPs on OP-15. Using this feedback, we expanded the list of appropriate diagnoses supporting HCT. We followed this with a series of emails and lectures explaining CMS OP-15. We also conducted group discussions during educational conferences and faculty meetings to educate EPs on selecting appropriate diagnoses to support HCT ordering, explain the measurement process, and highlight common pitfalls. During group discussions we invited and answered questions. The explicit goal of the education was to improve diagnosis documentation rather than to decrease HCT ordering. This phase began in late 2012 and continued through 2013.
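The per-EP sampling step described above (the 10 most recent headache visits for each physician at the end of an epoch) can be sketched as follows; the record structure, field names, and dates are illustrative assumptions, not drawn from the actual EMR:

```python
from collections import defaultdict
from datetime import date

# Hypothetical visit records: (physician, visit_date, chief_complaint).
visits = [("dr_a", date(2013, 1, d), "headache") for d in range(1, 16)]
visits += [
    ("dr_a", date(2013, 1, 20), "chest pain"),  # excluded: not a headache visit
    ("dr_b", date(2013, 1, 5), "headache"),
]

def sample_recent_headache_visits(visits, per_physician=10):
    """Return up to `per_physician` most recent headache visit dates per EP."""
    by_doc = defaultdict(list)
    for doc, when, complaint in visits:
        if complaint == "headache":
            by_doc[doc].append(when)
    # Sort each physician's headache visits newest-first and keep the top N.
    return {doc: sorted(dates, reverse=True)[:per_physician]
            for doc, dates in by_doc.items()}

sample = sample_recent_headache_visits(visits)
print(len(sample["dr_a"]))  # 10 most recent of dr_a's 15 headache visits
print(len(sample["dr_b"]))  # dr_b contributes only 1 headache visit
```

A physician with fewer than 10 headache visits in an epoch simply contributes all of them, which mirrors how a fixed-size chart sample behaves for low-volume providers.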
The data-review phase took place between January and March of 2014, when individual EPs reviewed their own HCT ordering practices based on data collected for the QI effort during the pre-intervention phase. These reviews occurred during each faculty member’s annual review with the department chair. In these meetings, we presented each EP with his or her individual proportion of HCT ordering and proportion of appropriate diagnosis assignment. In cases where an HCT was ordered without the assignment of an appropriate supporting diagnosis, we reviewed the ED chart. In keeping with Schuur et al.’s findings, we found that in the majority of cases a more specific diagnosis than “headache” was clearly supported by information documented in the ED chart but had not been assigned at the end of the ED visit. During each annual review, we informed the EP of the specific cases where HCT was not supported by a diagnosis code and suggested an alternate, more specific diagnosis, or the addition of a secondary diagnosis, that would have made the HCT appropriate according to CMS OP-15. This was followed by the post-data-review phase, when we sampled another 10 headache visits per EP. After our QI effort was completed, we were surprised to note that while there was no decrease in HCT use after the educational intervention, we observed a 9.6% reduction in HCT use after data review. In 2016, we decided to use the dataset generated during the QI effort to investigate our study hypothesis: was the decrease in HCT use followed by an increase in death or missed intracranial diagnosis?