Both THC and the endogenous cannabinoid anandamide promote overeating in partially satiated rats

In particular, examination of biomarkers of stress and trauma in PLWH may help to clarify the mechanisms underlying the associations between TES and neurocognitive and everyday function observed in this study. Efforts to reduce trauma, poverty, and other stressful contexts, and to develop resources that help people manage and cope with past and current adverse circumstances, could be relevant to decreasing neurocognitive impairment, particularly the high rates of mild neurocognitive disorder, in PLWH. Historical descriptions of the stimulatory effects of Cannabis sativa on feeding are now explained by the ability of its psychoactive constituent Δ9-tetrahydrocannabinol (THC) to interact with CB1 cannabinoid receptors. Moreover, THC increases fat intake in laboratory animals and stimulates appetite in humans. The selective CB1 receptor antagonist SR141716A counteracts these effects and, when administered alone, decreases standard chow intake and caloric consumption, presumably by antagonizing the actions of endogenously released endocannabinoids such as anandamide and 2-arachidonoylglycerol. These results suggest that endocannabinoid substances may play a role in the promotion of food intake, possibly by delaying satiety. It is generally thought that the hyperphagic actions of cannabinoids are mediated by CB1 receptors located in brain circuits involved in the regulation of motivated behaviors. Thus, infusions of anandamide into the ventromedial hypothalamus were shown to promote hyperphagia, whereas the anorectic effects of leptin were found to be associated with a decrease in hypothalamic anandamide levels. Nevertheless, evidence suggests that cannabinoids may also promote feeding by acting at peripheral sites. Indeed, CB1 receptors are found on nerve terminals innervating the gastrointestinal tract, which are known to be involved in mediating satiety signals that originate in the gut.

To test this hypothesis, in the present study we examined the impact of feeding on intestinal anandamide accumulation, the effects of central versus peripheral (systemic) administration of cannabinoid receptor agonists on feeding behavior, and the effects of sensory deafferentation on cannabinoid-induced hyperphagia. The present results suggest, first, that systemically administered cannabinoid agents affect food intake predominantly by engaging peripheral CB1 receptors localized to capsaicin-sensitive sensory terminals and, second, that intestinal anandamide is a relevant signal for the regulation of feeding. Two observations support the idea that cannabinoid agents modulate feeding through a peripheral mechanism: first, the lack of effect of centrally administered cannabinoid antagonists such as SR141716A and 6-iodo-2-methyl-1-[2-ethyl]-[1H]-indol-3-yl methanone on food intake in food-deprived animals and, second, the ability of capsaicin-induced deafferentation to prevent changes in feeding elicited by the peripheral administration of cannabinoid drugs. Moreover, the similar pattern of expression of the early gene c-fos in hypothalamic and brainstem areas regulating food intake after the peripheral administration of either CB1 agonists or antagonists and after the acute administration of peripherally acting satiety modulators, such as gastrointestinal hormones, or feeding inhibitors, such as OEA, further supports the peripheral actions of cannabinoids on food intake. Finally, the fact that the CB1 receptor antagonist SR141716A was active only after intraperitoneal or oral administration, but not after subcutaneous injection, further supports this hypothesis. These results do not exclude the possibility that peripheral anandamide also modulates feeding by acting on specific hypothalamic areas involved in caloric homeostasis. However, they do suggest that the predominant effects of systemically administered SR141716A are mediated by peripheral CB1 receptors, which may thus represent a potential target for anorexic agents. The concentration of anandamide in intestinal tissue increases during food deprivation, reaching levels that are threefold greater than those needed to half-maximally activate CB1 receptors. This surge in anandamide levels, the mechanism of which is unknown, may serve as a short-range hunger signal to promote feeding.

This idea is supported by the ability of SR141716A to reduce food intake after systemic but not central administration. Locally produced anandamide also may be involved in the regulation of gastric emptying and intestinal peristalsis, two processes that are inhibited by this endocannabinoid. Thus, intestinal anandamide appears to serve as an integrative signal that concomitantly regulates food intake and gastrointestinal motility. The predominant peripheral component of the feeding suppression induced by SR141716A led us to analyze whether the modulation of food intake derived from CB1 receptor stimulation or blockade interacts with that produced by the noncannabinoid anandamide analog OEA. Our results indicate that the hyperphagic effects elicited by CB1 receptor stimulation were counteracted by the administration of OEA, whereas CB1 receptor blockade potentiated the suppression of feeding evoked by OEA. Because the intestinal levels of anandamide and OEA are inversely correlated, it is tempting to speculate that both compounds act in a coordinated manner to control feeding responses through their opposing actions on sensory nerve terminals within the gut. The human immunodeficiency virus (HIV) enters the central nervous system within days of initial infection, in many cases leading to neurological, cognitive, and behavioral complications. Cognitive deficits are a common feature of HIV/AIDS. While the incidence of HIV-associated dementia has decreased considerably in the era of modern ART suppressing viral replication, mild cognitive deficits with no change in everyday function persist in 24% [95% confidence interval = 20.3–26.8] of people with HIV (PWH), and mild cognitive deficits with mildly decreased everyday function persist in about 13.3% of PWH. Although executive function and memory deficits are most common in PWH in the post-ART era, the characterization of cognitive impairment in HIV is highly variable, with deficits observed in a range of cognitive domains. Previous studies using statistical clustering techniques have identified differing profiles of cognitive function among PWH, with some profiles resembling global impairment across domains while other profiles resemble more domain-specific impairment, particularly in the domains of episodic memory and executive function. Similarly, there is also substantial variability in the risk factors associated with cognitive deficits among PWH, which range from biological and demographic to psychosocial factors.

The persistence of cognitive impairment in the era of modern ART among PWH, and the variability in the profiles and risk factors associated with cognitive impairment, suggest that non-HIV factors associated with aging, comorbid conditions, and psychosocial risk factors likely contribute to cognitive impairment, given the high prevalence of these factors among PWH. With this in mind, we propose looking beyond the construct of HIV-associated neurocognitive disorders to identify the underlying pathophysiology linked to cognitive impairment, as HAND requires other comorbidities to be ruled out as primary contributing factors. Biological sex is an important determinant of cognitive impairment among PWH. In a recent literature review of sex differences in cognitive impairment among PWH, seven cross-sectional and one longitudinal analysis identified sex differences on global measures of cognitive impairment among PWH. Additionally, six cross-sectional and one longitudinal analysis also reported sex differences in domain-specific cognitive performance. The strongest available evidence from adequately powered studies indicates that WWH show greater deficits than MWH in the domains of learning and memory, followed by speed of information processing and motor functioning, with inconsistent findings in executive functioning. The greater vulnerability of WWH to cognitive impairment may reflect sociodemographic differences between men and women with HIV. WWH tend to have a higher prevalence of psychosocial risk factors, including poverty, low literacy levels, low educational attainment, substance abuse, poor mental health, and barriers to health care services, as compared to MWH. These psychosocial risk factors may have biological effects on the brain that lead to reduced cognitive reserve among WWH, as evidenced by findings of greater susceptibility of cognitive function to the effects of mental health factors among WWH vs. MWH. Additionally, biological factors such as sex steroid hormones and female-specific hormonal milieus may contribute to sex differences in cognitive test performance in PWH. However, it remains unclear how MWH and WWH may differ in the patterns of cognitive impairment and in the risk factors associated with these patterns. Previous reports of impairment profiles among PWH have identified them in combined samples of men and women, masking possible sex-specific patterns of cognitive impairment among PWH. Furthermore, although a number of studies reported sex differences in the presence and pattern of cognitive impairment and greater cognitive decline among WWH compared to MWH, only one study was adequately powered to address meaningful sex differences in global cognitive function. A well-powered examination of the patterns and determinants of cognitive impairment by sex, one that also controls for other demographic differences between WWH and MWH, can help to clarify the contribution of sex to heterogeneity in cognitive impairment among PWH. Such an examination could also clarify the related psychosocial vs. biological factors and, thereby, optimize risk assessments and intervention strategies in both sexes.

Leveraging comprehensive neuropsychological data from the large-scale cohort of the HIV Neurobehavioral Research Program at the University of California, San Diego, we used novel machine learning methods to identify differing profiles of cognitive function in PWH and to evaluate how these profiles differ between women and men in sex-stratified analyses. Rather than using traditional cognitive domain scores, we used each of the NP test outcomes, given that prior studies indicate that the correlational structure of NP test scores does not map onto traditional domain scores in PWH. Furthermore, we determined how sociodemographic, clinical, and biological factors related to cognitive profiles within women and men. Based on previous studies among PWH, we hypothesized that the machine learning approach would identify distinct subgroups of individuals with normal cognitive function, global cognitive impairment, and domain-specific cognitive impairment. We further hypothesized that groups with domain-specific cognitive impairment would differ by sex, with WWH showing more consistent memory and processing speed impairment than MWH. Finally, we expected that similar sociodemographic, clinical, and biological determinants would distinguish cognitive profiles among WWH and MWH; however, in line with previous research, we expected that depressive symptoms would be more strongly associated with cognitive impairment profiles among WWH than MWH. Study assessment details have been published elsewhere. The UCSD Institutional Review Board approved the studies. Participants provided written informed consent and were compensated for their participation. Exclusion criteria for the parent studies included a history of non-HIV-related neurological, medical, or psychiatric disorders that affect brain functioning, learning disabilities, and a first language that was not English. Inclusion in the current study required completion of neuropsychological and neuromedical evaluations at the baseline study visit. Exclusion criteria for the current study included a positive urine toxicology test for illicit drugs or a positive Breathalyzer test for alcohol on the day of the study visit. NP test performance was assessed through a comprehensive, standardized battery of tests that measure seven domains of cognition: complex motor skills, executive function, attention/working memory, episodic learning, episodic memory, verbal fluency, and information processing speed. Motor skills were assessed by the Grooved Pegboard Dominant and Non-dominant Hand tests. Executive functioning was assessed by the Trail Making Test (TMT)-Part B and the Stroop Color and Word Test interference score. Attention/working memory was assessed by the Paced Auditory Serial Addition Task. Episodic learning was assessed by the Total Learning scores of the Hopkins Verbal Learning Test-Revised (HVLT-R) and the Brief Visuospatial Memory Test-Revised (BVMT-R). Episodic memory was assessed by the Delayed Recall and Recognition scores of the HVLT-R and BVMT-R. Verbal fluency was assessed by the "FAS" Letter Fluency test. Information processing speed was assessed by the WAIS-III Digit Symbol Test, the TMT-Part A, and the Stroop Color and Word Test color naming score. Raw test scores were transformed into age-, education-, sex-, and race/ethnicity-adjusted T-scores based on normative samples of HIV-uninfected persons. The use of demographically adjusted T-scores is intended to control for these demographic effects as they occur in the general population.
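As an illustration of this kind of profile analysis, the sketch below converts scores to the T-score metric (mean 50, SD 10) and groups participants by their T-score vectors. The study's specific machine learning method is not described here, so k-means is used purely as a hypothetical stand-in, and all test names and data are invented.

```python
# Hypothetical stand-in for the profile analysis; not the study's actual method or data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def to_t_score(raw, norm_mean, norm_sd):
    """T-score (mean 50, SD 10) of a raw score relative to a normative sample."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

# Synthetic example: 200 participants, 13 NP test scores already demographically adjusted.
rng = np.random.default_rng(0)
tests = [f"test_{i}" for i in range(13)]                 # placeholder test names
t_scores = pd.DataFrame(rng.normal(50, 10, (200, 13)), columns=tests)

# Group participants into k cognitive profiles from their T-score vectors.
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(t_scores)
print(pd.Series(profiles).value_counts())
```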
We examined sociodemographic, clinical, and biological factors associated with cognitive impairment in the literature and available with enough participants to be adequately powered in analyses. Sociodemographic factors included age, years of education, and race/ethnicity. Although these factors were used to create the T-scores, there can still be remaining demographic associations with cognition within clinical populations such as PWH. For example, there is considerable interest in the possibility of abnormal cognitive aging in PWH; also, in general, older PWH tend to have had their infections longer, may have had longer periods without the benefit of suppressive ART, and may have a history of more severe immunosuppression.

This Article examines the chemical testing of psychoactive drugs

The most significant proportional shift in General Fund spending in 2021-2023 was the increase in Human Services, which was offset by reductions in the percentages going to Public Safety/Judicial spending, K-12 Education, and various minor programs. The increase in Human Services is largely driven by the legislature's commitment to continue Medicaid coverage at at least the same level without any fee increases despite inflation in medical care, and to expand access to the system for undocumented immigrants. Mental health programs were also enhanced. When combined with Other Funds, K-12 Education reached a record total of $9.3 billion, and Democratic legislative leaders resisted pressures to go further. The relative decline in Public Safety/Judicial spending reflects Governor Brown's and various legislators' interest in sentencing reform and prison consolidation, as well as the temporary use of early releases and less incarceration in response to COVID. Although its proportionate share did not increase, Higher Education in Oregon benefitted strongly in the 2021-2023 final budget, partially because it could harvest diverse federal monies. According to the Higher Education Coordinating Commission, the state's overseeing body, "In general, postsecondary education and workforce experienced promising growth in key program areas in the 2021-23 budget. Support for community college operations increased 10.5 percent from 2019-21 LAB, and state support for public university operations increased 8.1 percent. These are the funding levels the institutions requested to accommodate actual cost growth and are expected to be sufficient to mitigate tuition increases during the biennium." The rule of law is often seen as a formal, governmental alternative to informal, social mechanisms for regulating conduct.

In this Article, I examine a more indirect manifestation of the rule of law: the effect that the criminal law can have on private efforts at risk management by individuals and corporations. Formal law can encourage private risk regulation, but it can also distort it. Trained technicians in commercial laboratories routinely employ a common technology—gas chromatography/mass spectrometry (GC/MS)—to test samples for the presence of illicit psychoactive substances as well as dangerous or benign adulterants. One of these laboratories, LabCorp, provides occupational testing services for corporate clients. Another, Drug Detection Laboratories (DDL), conducts GC/MS screening of samples provided by DanceSafe, EcstasyData.org, and the Multidisciplinary Association for Psychedelic Studies. LabCorp's samples are obtained from corporate clients' random or systematic urine testing of their prospective and existing employees. DDL's samples come from anonymous Ecstasy consumers who seek information on the potential presence of adulterants in samples they have purchased illicitly. This Article explores the remarkably different normative and behavioral consequences that follow from the use of the same basic laboratory protocol to test for illicit drug use and for illicit drug safety. My primary interest is in testing practices conducted by private citizens rather than agents of the legal system. At first glance, one might think that safety testing and use testing have little shared relevance. I do not contend that they are mutually exclusive alternatives. Both use testing and safety testing are intended to reduce harms, and each presumes to do so indirectly, by influencing the decision to ingest a drug. But these practices exemplify two distinctly different strategies for thinking about the management of risky behaviors—prevalence reduction and harm reduction. Prevalence reduction seeks to reduce the number of people engaging in a given behavior, while harm reduction seeks to reduce the harmful consequences of engaging in that behavior. Practices and concepts most readily identified with prevalence reduction include abstinence, prevention, deterrence, and incapacitation. Practices and concepts most readily identified with harm reduction include safe-use and safe-sex educational materials, needle exchanges, and the free distribution of condoms to students. Prevalence reduction may be employed in the hope of reducing drug-related harms, but because it directly targets use, any influence on harm is indirect. Harm reduction directly targets harms; any influence on use is indirect. This Article focuses on the private use of these methodologies. These private uses occur in the shadow of the law; thus the criminal law influences—and, to some extent, distorts—their consequences.

Criminal law facilitates the intrusive exercise of use testing in workplaces and schools that might otherwise have difficulty implementing it; this is illustrated by the greater prevalence of drug testing than of alcohol testing. Criminal law also hinders the effective implementation of safety testing, making it easier for sellers to distribute adulterated and often dangerous products. More subtly, criminal law frames the issue of drug use as one of criminal deviance, which encourages some solutions but obscures others. For example, the focus on drug testing overlooks the potentially more harm-reducing use of psychomotor testing. Thus, both practices are constrained by the criminal laws prohibiting these drugs. This is not an argument for ending drug prohibition, nor do I argue for the superiority of safety testing over use testing, or of harm reduction over prevalence reduction. But this Article suggests a less moralistic, more pragmatic approach to drug policy—an approach that is less speculative than legalization because it has been pursued for decades in the Netherlands, and increasingly in the United Kingdom, Australia, and elsewhere. Not surprisingly, positive drug test rates are dramatically higher among criminal justice arrestees. The National Institute of Justice began collecting systematic drug testing data from arrestees with its Drug Use Forecasting program in 1988. An improved methodology, the ADAM program, was implemented in 2000. The most recent data available are from 2000. In that year, more than half of thirty-five sites reported that 64% or more of their male arrestees tested positive for cocaine, opiates, marijuana, methamphetamine, or PCP. The most common drugs present were marijuana and cocaine. Any consideration of drug test results should be qualified by the serious limitations of existing testing methods. Blood testing is the most accurate method for identifying drug influences at the moment of testing, but it is intrusive, expensive, and rare. Urine testing, which is also intrusive, is far more common. But it is a poor indicator of immediate drug status because drugs cannot be detected in urine until they have been metabolized, often many hours after consumption. Urine testing is particularly sensitive to cannabis use, and can detect use dating back several months for a heavy user, but it is far less likely to detect other, "harder" drugs.

Saliva and hair testing are less intrusive and are becoming more common. In fact, hair testing can detect use dating back two to three months, and can even date the use with some accuracy. Use testing is vulnerable to false positives due to contaminants, as well as false negatives due to temporary abstention, "water loading," and even a haircut. Detailed advice on defeating a drug test is available on various web sites. For example, false positives for marijuana can be triggered by many different prescription and over-the-counter medications. Another reason to be wary of the accuracy of use testing results involves sampling. "Random testing" may sound a lot like "random sampling," but there is selection into and out of the sample, because users and others who object to testing may avoid the testing organization altogether—whether it be the military, a workplace, or a school sports program. From a deterrence perspective, use testing should be an effective way to reduce drug use. Aggregate econometric analyses and individual-level "perceptual deterrence" studies suggest four generalizations about drug offenses, drunk driving, and various income-generating crimes: the certainty of punishment has a modest but reliable causal impact on offending rates, even for offenses with very low detection probabilities; the severity of punishment has no reliable impact, either in isolation or in interaction with certainty; the celerity, or speed, of punishment is important, but post-arrest criminal sanctioning is probably too slow to be effective; and an arrest can trigger informal social sanctions, even in the absence of incarceration. Use testing increases the certainty of sanctioning, and even when it does not lead to arrest, the consequences of a positive test are effectively punitive, because a positive result damages one's reputation with family, friends, and colleagues. Nevertheless, support for a general deterrent effect of drug testing is mixed. The available studies are correlational and hence subject to a variety of inferential problems. It is astonishing that such an intrusive intervention is being implemented so widely in the absence of a carefully controlled experiment, with random assignment to testing condition at the individual, site, or organizational level. In 1981, the United States military implemented a tough "zero-tolerance" drug policy, which imposed mandatory drug testing and threatened job termination for violations. Two studies have examined the effects of the policy. Professor Jerald Bachman and his colleagues used the Monitoring the Future cohort data from young adults who graduated from high school in 1976 through 1995. They found declining rates of drug use among active-duty military personnel and nonmilitary cohort members in the two years after graduation, but beginning in 1981, the rate of decline was steeper for the military group, at least for illicit drugs. This is a pattern "strongly suggestive of causal relationships." In a separate study, economists Stephen Mehay and Rosalie Pacula compared NHSDA and Department of Defense health survey data collected before and after the military adopted the zero-tolerance policy. They estimated a 16% drop in the prevalence of past-year drug use in the military, with a lower-bound estimate of 4%.
Dr. W. Robert Lange and his colleagues examined the effects of a decision at Johns Hopkins Hospital to shift from "for cause" employee testing in 1989 to universal pre-employment testing in 1991. In 1989, 10.8% of 593 specimens were positive—55% of them for marijuana—and there were seven "walkouts" who refused to be tested. In 1991, 5.8% of 365 specimens tested positive—28% for marijuana—with no walkouts. The authors interpreted these results as evidence of the deterrent effect of drug testing. But Professors M.R. Levine and W.P. Rennie offer a variety of alternative explanations, including the fact that in 1991 users had advance warning of the test and could abstain, water load, or ingest legal substances that would confound the test. The most comprehensive study of the effects of school testing on student use comes from analyses of data from the Monitoring the Future survey. This analysis found no measurable association between either random or "for cause" drug testing and students' self-reported drug use. The study is cross-sectional, rather than prospective, and is somewhat limited by the relative rarity of exposure to testing. A more focused test was provided by the "pilot test" of the Student Athlete Testing Using Random Notification project. During the 1999–2000 academic year, the authors compared two Oregon schools using mandatory drug testing with another school that did not. Neither students nor schools were randomly assigned to drug testing versus nontesting. The authors reported a significant treatment effect; though statistical details were not presented, the conclusion is apparently based on a difference-in-differences estimate of changes from pre- to post-test in the control versus treatment schools. But caution is warranted for several reasons. First, although there was a slight decrease in drug use at the treatment schools, the effect is largely attributable to an increase in drug use at the control schools. Because assignment to condition was not random, there is little reason to believe that a similar increase would have occurred at the treatment schools absent testing. Second, most drug use risk factors, including drug use norms, belief in lower consequences of drug use, and negative attitudes toward school, actually increased among the target group—athletes at the treatment school. These puzzling results may explain why the study was labeled a pilot test, and why a more ambitious and rigorous follow-up study was launched. Unfortunately, that study was terminated by the federal Office for Human Research Protections due to human subjects protection concerns. At present, the evidence suggests that the military's testing program had a deterrent effect, but no such effect was found in the workplace or in schools. Still, the absence of evidence is not evidence of absence. There are very few rigorous studies; low statistical power, noisy measurement, and other factors may hide genuine effects. Alternatively, it may be that the military program is more effective as a deterrent due to differences in its implementation, its target population, its consequences for users, or the institutional setting.
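To make the difference-in-differences logic mentioned above concrete, the sketch below computes the estimate from pre/post prevalence in treatment and control schools. The figures are hypothetical placeholders, not the study's actual data.

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: change in the treated group minus change in controls."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical prevalence of past-month drug use (not the study's numbers):
# treatment schools drift from 18% to 17%, control schools rise from 18% to 24%.
effect = diff_in_diff(0.18, 0.17, 0.18, 0.24)
print(f"estimated effect: {effect:+.2f}")   # -0.07, i.e. a 7-point relative reduction
```

As the text notes, most of such an estimate can come from movement in the control group rather than an actual decline under treatment, which is why nonrandom assignment undermines the inference.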

Intimate partner violence is also a correlate of drug use and harmful alcohol consumption

We then fit a partially adjusted model controlling only for the demographic covariates specified above. Lastly, we fit the final, fully adjusted model controlling for the demographic covariates, marital/partner status, and respondents' reports of ART use in the past 6 months, as recorded at the 12- and 24-month follow-up interviews. A post hoc sensitivity analysis was performed excluding observations from participants who used ART. All analyses were done using the statistical package SAS 9.3. Most participants reported no form of transactional sex in the past 12 months. The most common transaction, reported by men, was having given money, drugs, or alcohol in exchange for sex. We did not measure the type of partner involved in this exchange. No women reported giving something for sex, and all other forms of transaction were reported by less than 5% of the sample (see Table 1). Table 2 shows the proportion of participants who reported the primary and secondary outcomes of interest at baseline and at the 12- and 24-month follow-ups. High-risk sex behaviors were more commonly reported than drug risk behaviors. Although Table 2 only descriptively presents the longitudinal frequencies of each outcome, it is noteworthy that – relative to male participants – a higher estimated proportion of female participants reported engaging in every risk behavior at every time point, with the exception of past-month alcohol use before sharing equipment as reported at the 12-month follow-up. In general, it appears that there were not substantial changes over time across the various outcomes presented in Table 2. ART use appeared to increase from 12 to 24 months, particularly among men.
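For illustration, the staged models described above (unadjusted, partially adjusted, fully adjusted) could be expressed as follows. The original analysis was run in SAS 9.3, so this Python version is only an analogue: the variable names are hypothetical, the data are synthetic so the sketch runs on its own, and a repeated-measures estimator (e.g., GEE) would be needed to properly account for the 12- and 24-month visits.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data; in the real analysis each row would be a follow-up visit.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.normal(35, 8, n),
    "education": rng.integers(8, 17, n),
    "partner_status": rng.integers(0, 3, n),   # 3-level partner status covariate
    "art_past6mo": rng.integers(0, 2, n),
    "condomless_sex": rng.integers(0, 2, n),   # placeholder primary outcome
})

unadjusted = smf.logit("condomless_sex ~ female", data=df).fit(disp=0)
partial = smf.logit("condomless_sex ~ female + age + education", data=df).fit(disp=0)
full = smf.logit("condomless_sex ~ female + age + education"
                 " + C(partner_status) + art_past6mo", data=df).fit(disp=0)

print(np.exp(full.params["female"]))   # fully adjusted odds ratio for female gender
```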

All participants were ART-naïve at baseline, but 17% and 35% reported having taken ART in the past 6 months at the 12- and 24-month follow-up visits, respectively. Relative to male participants, female participants had significantly higher odds of reporting both primary outcomes (sharing injecting equipment in the past 30 days and condomless sex in the past 90 days) in the unadjusted models. After controlling for demographic covariates, partner status, and ART use, the association between female gender and sharing injecting equipment was no longer significant. Female gender remained significantly associated with condomless sex in the past 90 days, even after controlling for demographics and, additionally, both partner status and ART use (Table 3). The conclusions from post hoc sensitivity analyses excluding observations from participants who used ART were consistent with the main analyses for all five outcomes. The unadjusted odds of one of the secondary outcomes, reporting both drug equipment sharing and condomless sex, were higher for female participants than for male participants. After controlling for demographic covariates, female gender remained statistically significant for the outcome of reporting both injection equipment sharing and condomless sex. In the final fully adjusted model, where we controlled for demographics as well as the three-level partner status covariate and ART use, the association between female gender and reporting both injection equipment sharing and condomless sex was no longer significant. No significant association was found in any of the models between female gender and alcohol use prior to sharing equipment in the past 30 days, or prior to or during sex in the past 90 days (see Table 3). Among a cohort of PLHIV in Russia who have ever injected drugs, we detected a statistically significant association between female gender and condomless sex in the past 90 days, even after controlling for the potentially confounding effects of demographics, partner status, and ART use.

Although we observed notable associations between gender and other outcomes, including sharing drug equipment, alcohol use prior to sharing, and both drug equipment sharing and condomless sex, the results were not statistically significant, possibly due to limited power given the relatively small number of women in the study. It is also notable that nearly all risk behaviors, other than alcohol use prior to sharing, appeared to be more commonly reported among women compared to men. The increased odds of condomless sex among substance-using women, compared to men, have been previously documented in multiple settings, including St. Petersburg, Russia. Prior research from St. Petersburg also found partnership status to be a major factor in PWID's decision-making process about whether to engage in condomless sex with their partner. In our study, more participants reported being in HIV-concordant partnerships, which could explain why such a high proportion of respondents engaged in condomless sex. Regardless, female participants had higher odds of reporting condomless sex, irrespective of their partner's HIV status, posing risk for HIV transmission in this population. Further, the preventive health benefits of HIV-positive persons using a condom or other protective barrier during vaginal or anal sex are indisputable, regardless of their partner's HIV serostatus. These results are particularly concerning in light of recent research suggesting that heterosexual transmission of HIV is increasing in St. Petersburg and may overtake injection drug use as the primary mode of transmission, and they suggest a need for a comprehensive, multi-pronged response, which should include treatment as prevention (TasP) and pre-exposure prophylaxis (PrEP) for HIV-negative partners. Interventions promoting condom usage are also warranted. However, our finding that women were less likely than men to use condoms under all circumstances implies that such approaches must be designed to account for the social, micro, and macro contexts of women's lives. At the relationship level, alcohol use prior to sex was common and may have interfered with condom decision-making around the time of the sexual event. Connecting women to alcohol harm reduction programming could help to lessen their collective risk for HIV infection and transmission.

Our findings support the value of implementing multi-level interventions and also imply that TasP is a high-yield approach with potential to reduce the risk of transmission with condomless sex, as well as provide a multitude of other health benefits for the HIV-positive individual. Addressing the social and structural factors that contribute to gender differences in condom usage, and providing HIV-negative women with access to PrEP, are additional strategies which should also be pursued. As has also been seen in other settings, women in our study were more likely to report drug equipment sharing than men. However, it seems the relationship between female gender and equipment sharing is at least partially explained by demographics, most notably employment and income. Female participants in this study were significantly less likely to be employed than male participants and significantly more likely to earn a monthly income below the sample median of 20,000 Rubles. When there is limited access to clean needles and syringes and/or limited funds to pay for new/unused equipment, women may be more likely to share. These patterns have been observed in other populations, including among PWID in South Africa, where more women than men reported always sharing injecting equipment. Low economic status, coupled with limited work opportunities for women, has also been associated with increased sexual risk taking among female substance users, including having multiple sex partners and relying on sex trade/transactional sex to support drug use. Findings from the 2009 National HIV Behavioral Surveillance System, conducted in 20 U.S. cities, suggest that more female than male PWID have sex in exchange for money or drugs. Research from Russia found that, compared to their male counterparts, female injectors who reported high drug use frequency were more likely to also report multiple sex partners. Our findings highlight the need for free access to clean needles/syringes among women who inject drugs, as well as access to opiate agonist therapy, to prevent HIV. Our study has limitations. The sample size was relatively modest and participants were predominantly male, which limited study power, particularly for outcomes that were less common.

These findings from Russia might not be representative of the relationship between female gender and HIV transmission risk among people living with HIV who inject drugs, or who have a history of injection drug use, in other, non-Russian settings, or even within Russia but outside of the Russia ARCH study population. Additionally, our research was done with a mixed sample of current and former injection drug users. Another limitation of the current study is that knowledge and perceptions surrounding risk of HIV transmission were not assessed, nor did we specifically explore several key mechanisms known to contribute to sex and drug use behaviors associated with increased risk for HIV transmission. For instance, participants were not asked about their experiences of intimate partner violence, even though it has been associated with women's reduced ability to negotiate condom use and talk about HIV prevention with their partner. More research is needed to understand the challenges and preferences of HIV-positive women who inject drugs, which may be contributing to their condom nonuse and harmful drug and alcohol consumption. A better understanding of the factors underlying women's condom choices, or to what extent they have any choice in the matter, will inform the design of more meaningful and effective prevention strategies. Furthermore, assessing awareness of and willingness to use PrEP among HIV-negative women and men who have ever injected drugs or have a known HIV-positive partner is needed to inform future efforts for HIV prevention. We also did not assess participants' sexual orientation or gender identity, or these characteristics of their sexual partners. Further, we did not assess differences in drug use and sexual behaviors according to whether the partner under consideration was a long-term or casual partner. Nor did we measure partner-specific information on sexual or drug-related behaviors of interest. Instead, we only measured behaviors of interest at the individual level. These details should be collected in future research, as understanding the partner dynamics contextualizing the most at-risk situations will help to establish what is needed for prevention efforts. Additionally, different time frames were used for the outcomes, which may have differentially impacted participants' ability to accurately remember their true behaviors. However, the Russia ARCH cohort study team is skilled at interviewing and has extensive experience with this population, which likely serves to mitigate this latter bias. The homeless population is aging. People born in the second half of the "baby-boom" have an elevated risk of homelessness. Homeless adults develop aging-related conditions, including functional impairment, earlier than individuals in the general population. For this reason, homeless adults aged 50 and older are considered "older" despite their relatively young age. The homeless population has a higher prevalence of mental health and substance use problems than the general population. Individuals experiencing homelessness report barriers to mental health services, due to lack of insurance coverage, high cost of care, and inability to identify sources of care. These barriers can prevent their using services to treat mental health and substance use problems, such as outpatient counseling, prescription medication, and community-based substance use treatment.
Without these, homeless populations may experience more severe behavioral health problems and rely on acute care to address these chronic conditions. Homeless individuals have higher rates of emergency department (ED) use for mental health and substance use concerns, and are more likely to use psychiatric inpatient or ED services and less likely to use outpatient treatment than those who are housed. Homeless adults with substance use disorders face multiple barriers to engaging in substance use treatment. Competing needs, financial concerns, lack of knowledge about or connection to available services, and lack of insurance are barriers to substance use treatment among homeless adults. Older adults face additional barriers to mental health or substance use treatment due to cognitive and functional impairment, such as difficulty navigating and traveling to healthcare systems. However, little is known about older adults experiencing homelessness. According to Gelberg and Anderson's Behavioral Model for Vulnerable Populations, predisposing factors, enabling factors, and need shape health care utilization. Although prior research has used this model for homeless populations, this work has not included older homeless adults. Little is known about the prevalence of mental health or substance use problems in older homeless adults, the level of unmet need for services, or the factors associated with that need. To understand the factors associated with unmet need for mental health and substance use treatment in older homeless adults, we identified those with a need for mental health and substance use services in a population-based sample of homeless adults aged 50 and older. Then, we applied the Gelberg and Anderson model to examine predisposing and enabling factors associated with unmet need, which we defined as not receiving mental health or substance use treatment among participants with mental health or substance use problems.

A series of studies show that college students perceive their peers as less critical of heavy drinking than they actually are

The fundamental attribution error could cause a person who witnessed wrongdoing to conclude that the actor usually does wrong, whereas the correct conclusion in most cases is that the actor occasionally lapses. Moral pessimism could also result from a tendency to believe that the behavior of others is instrumentally driven. The overestimation of unethical behavior could follow from a common belief that one's self-interest is the most important factor in explaining the behavior of individuals in society. A wrongdoer may protect his self-esteem by exaggerating how frequently others commit the same wrong. Relevant concepts invoked by psychologists include social validation, self-enhancing biases, and constructive social comparison. Now we turn from moral pessimism to social projection. An individual who projects his own behavior onto society overestimates how many others behave like he does. This bias is closely related to what the psychology literature calls the false consensus effect (FCE), which refers to a situation where people mistakenly think that others agree with them. According to the FCE, people tend to overestimate the social support for their own views and underestimate the social support for people who hold opposing views. Evidence from four studies in the original research by Ross demonstrates that social observers tend to form a false consensus with respect to the relative commonness of their own behavior. These results were obtained in questionnaires that presented subjects with hypothetical situations and also in actual conflicts that presented subjects with choices. Several psychological mechanisms could cause social projection. One such mechanism is cognitive: a person may attend to positions with which he agrees and dismiss positions with which he disagrees. Selective attention allows his preferred position to dominate his consciousness.

The sorting of people reinforces selective attention. People tend to associate with others who share their general beliefs, attitudes, and values. The association could be voluntary, as when people select their friends, or involuntary, as when people are segregated. If likes associate with likes, then recalling instances of behavior like your own will be easier than recalling behavior unlike your own. Instead of cognition, emotion could cause social projection. Perhaps people need to see their own acts, beliefs, and feelings as morally appropriate. Finding similarity between oneself and others may validate the appropriateness of behavior, sustain self-esteem, restore cognitive balance, enhance perceived social support, or reduce anticipated social tensions. Later we discuss the possibility that emotional bias resists correction by fresh information that ameliorates cognitive bias. We will construct an economic model of conformity to a social norm, solve for the equilibrium, introduce perceptual bias into the model, and see how the equilibrium changes. We follow the economic tradition of distinguishing between benefits and costs. A person who breaks a norm often enjoys various benefits, such as the financial gain from disclosing trade secrets, the reduction in taxes from evading them, the pleasure of listening to music after downloading it illegally, victory from winning a contest by cheating, time saved by not complying with the law, and so on. Assume that each actor's benefit from breaking the norm can be measured. The metric may be utility, pleasure, income, time, prestige, power, comfort, etc. In Figure 1, the vertical axis measures the amount a person benefits from breaking the norm. Each person i has a type reflecting the benefit he obtains from breaking the norm. The horizontal axis depicts the cumulative proportion of people who enjoy a benefit of a given amount. According to the curve in Figure 1, a small number of people enjoy a high benefit, and a large number of people enjoy at least a small benefit. We connect the benefit from breaking a social norm to standard economic concepts. A person's benefit in economics is described as his "willingness to pay." The curve in Figure 1 thus depicts willingness to pay for wrongdoing in a population of people. The number of people who are willing to pay a certain amount also measures demand.

The curve in Figure 1 thus depicts the "demand" for wrongdoing. The demand curve slopes down because more people are willing to pay the price of wrongdoing as that price decreases. Now we turn from the individual's benefits to his costs. Breaking a social norm often provokes social sanctions that can take various forms. First, people who break the norm could lose the social approval of their peers. Second, they could face social resentment. Third, they might have trouble finding business partners. Fourth, if the social norm is also a law, the wrongdoer might suffer civil liability or criminal punishment. Fifth, they might suffer in some or all of these ways because they acquire a bad reputation. The vertical axis in Figure 2 indicates the individual actor's cost of breaking the social norm, and the horizontal axis indicates the proportion of people who break the norm. As depicted in Figure 2, the individual actor's cost of breaking the norm decreases as the proportion of wrongdoers increases. Various reasons could explain why breaking a norm costs less when many others do it. A simple reason that is central to the enforcement of norms concerns the expected sanction. The expected sanction equals its probability multiplied by its severity. As discussed above, the probability that a particular wrongdoer will suffer a social or legal sanction often decreases as more people commit the sanctioned act. For example, when people see many others smoking in airports, they feel more confident that they will not be confronted if they smoke. Because safety lies in numbers, the cost curve slopes down in Figure 2. The curve in Figure 2 also describes the "supply of sanctions for wrongdoing" as a function of the proportion of wrongdoers. One of this article's authors gave questionnaires to engineers in Silicon Valley concerning trade secrets. The questionnaires asked each person whether or not he would violate trade secrets law, and then asked the frequency with which he thought that other people violated trade secrets laws. 44.8% of the participants in the study said that they were more likely than not to violate trade secrets law, but they estimated on average that 57% of the employees in their company would violate trade secrets law. When asked about the proportion of employees in Silicon Valley in general who would violate the trade secrets law, the average answer was 68%. Pessimism bias would produce such a gap in results. Our model predicts that moral pessimism bias would lower the perceived cost of disclosing trade secrets. In terms of Figure 5, the perceived cost curve lies below the actual cost curve.
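The following sketch makes the model numerical under assumed functional forms (the article's Figures 1, 2, and 5 describe the curves only qualitatively): benefits are taken as Uniform(0,1), the cost of breaking the norm falls linearly in the proportion of wrongdoers, and pessimism bias is modeled as a uniform downward scaling of the perceived cost curve. The equilibrium is the fixed point at which the share of people whose benefit exceeds the cost equals the proportion of wrongdoers.

```python
def cost(x, scale=1.0):
    """Sanction cost of breaking the norm; decreasing in the proportion of wrongdoers x."""
    return scale * (0.9 - 0.5 * x)   # assumed linear form, not taken from the article

def equilibrium(scale=1.0, x0=0.0, iters=200):
    """Iterate the best response: share of people whose Uniform(0,1) benefit exceeds cost."""
    x = x0
    for _ in range(iters):
        c = min(1.0, max(0.0, cost(x, scale)))
        x = 1.0 - c          # with Uniform(0,1) benefits, P(benefit > c) = 1 - c
    return x

print(equilibrium(scale=1.0))   # actual cost curve      -> about 0.20 break the norm
print(equilibrium(scale=0.8))   # perceived cost is lower -> about 0.47 break the norm
```

Scaling the cost curve down, as pessimism bias does to the perceived curve, moves the fixed point to a higher proportion of wrongdoers, which is the model's prediction that pessimism bias produces more norm breaking.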

Consequently, moral pessimism bias causes more disclosure of trade secrets. Equivalently, fewer people would disclose trade secrets if they knew the true level of illegal disclosure. In these circumstances, accurate reporting of the frequency of norm violations should cause fewer of them. The effect of accurate reporting is presumably stronger when cognitive processes cause the bias, and presumably weaker when motivational processes cause it. The survey also found that the longer a worker spends in Silicon Valley, the more he feels justified in disclosing trade secrets. Perhaps people change their beliefs to align with their misperception of the facts — they accept the morality of the actual as they misperceive it. The gap between self-reported and perceived disclosures of trade secrets differed systematically across types of people. Those who reported that they were more likely to disclose secrets estimated that a relatively high percentage of other people disclose secrets, and those who reported that they were less likely to disclose secrets estimated that a relatively low percentage of other people disclose secrets. Social projection would produce these results. Our model predicts that social projection would not cause more people to disclose trade secrets. Consequently, providing information to correct the bias will not change the number of people who disclose trade secrets. Social projection, however, might cause those people who disclose trade secrets to do so more often. In addition, social projection increases the resolve of people who disclose trade secrets, so increasing the severity of the punishment will be less effective in deterring them. Psychologists have investigated the connection between the willingness of people to pay taxes and their perception of tax evasion by others. A study of Australian taxpayers found a discrepancy between what the individual does and what he thinks others are doing. Moral pessimism bias would produce the observed discrepancy. According to our model, moral pessimism will cause fewer taxpayers to comply with the law. A longitudinal study of Australian citizens that used a cross-lagged panel analysis found that taxpayers' personal views of the morality of tax compliance affect their perception of the levels of tax compliance by others. Those with high personal standards of tax compliance perceived relatively more compliance by others, and those with low personal standards perceived relatively less tax compliance by others. These results are consistent with social projection bias. According to our model, social projection bias will not affect the number of taxpayers who comply with the law, but it may cause tax avoiders and evaders to comply less, and it should make all taxpayers more reluctant to change their behavior. A further study shows how people respond to information exposing their biases. Researchers were able to monitor people's actual tax files. Some sub-groups were given information about the gap between their own behavior and the behavior of others. Receiving information on the behavior of others caused more tax compliance on some measures, such as the amount of deductions claimed. This fact is consistent with our prediction that disseminating accurate information will cause more right-doing when actors suffer from moral pessimism bias. The actor's perceived cost of heavy drinking, consequently, is less than its actual cost.
Since the perceived cost curve lies below the actual cost curve, as in Figure 5, our model predicts more heavy drinking than would occur if students perceived costs accurately. One of the studies tracked how attitudes developed over the course of two months among college freshmen and discovered gender differences. Male students adjusted their personal attitudes over time to match the perceived consensus more closely. After the adjustment, actual attitudes among males were closer to perceived attitudes. In terms of Figure 5, the actual cost curve shifted closer to the perceived cost curve. These facts suggest that providing information to correct misperception at the beginning of the semester would reduce heavy drinking more than providing the same information at the end of the semester. With female students, attitudes remained stable, so the timing of the information should make less difference to their behavior. Given multiple equilibria as in Figure 3, the initial proportion of wrongdoers can affect the equilibrium. In Figure 3, an initial proportion of wrongdoers below x2 will cause their numbers to fall approximately to zero, whereas an initial proportion above x2 will cause their numbers to rise to x1. Perhaps more male students drink heavily when they arrive as freshmen, which causes the system to settle at a high level of drinking among males. Conversely, perhaps fewer female students drink heavily when they arrive as freshmen, which causes the system to settle at a low level of drinking among females. These hypotheses require testing. Now we turn to studies on drug use. In a classic study, a sample of adolescents was divided into three groups: nonusers, cannabis users, and cannabis and amphetamine users. The perceptions of members of the three groups differed significantly from each other. Compared to nonusers, drug users gave relatively high estimates of the number of users. These results are consistent with social projection. The researchers proposed two psychological causes of projection. First, the number of arguments that we hear for or against something affects our attitudes toward it, and we hear more arguments from people inside our group than from outsiders. Accurate information should help to correct this cognitive bias. Second, the members of each group were motivated to see their own behavior in others. Accurate information is probably not enough to correct this emotional bias. The authors concluded that projection bias would cause over-use of drugs, which contradicts our model. Our model does predict that projection bias will entrench existing behavior among the three groups of people and make it harder to change.
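The tipping-point dynamic described above (an unstable threshold x2 between a collapse toward zero and a high stable equilibrium x1) can be reproduced by giving the best-response map an S shape. The logistic form below is an assumption chosen only to produce two stable equilibria; it is not taken from the article.

```python
import math

def best_response(x, k=10.0, mid=0.5):
    """Share who break the norm when a proportion x already does (assumed logistic shape)."""
    return 1.0 / (1.0 + math.exp(-k * (x - mid)))

def settle(x0, iters=200):
    """Iterate the best response from an initial proportion of wrongdoers x0."""
    x = x0
    for _ in range(iters):
        x = best_response(x)
    return x

print(round(settle(0.40), 3))   # starts below the tipping point (x2 = 0.5) -> falls toward 0
print(round(settle(0.60), 3))   # starts above the tipping point -> rises toward the high equilibrium x1
```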

No single training approach is comprehensive enough to eliminate these disparities

Both techniques were completed in an average of six seconds, which allowed for rapid confirmation with minimal risk of desaturation. Additionally, this examination could be performed while capnography was being obtained, with the two confirmatory methods used to support each other in equivocal cases. Operator confidence was high with both techniques, suggesting that both providers felt comfortable with their assessments, which is an important finding in ultrasound studies because, if the operator is not confident in their assessment, they will be unlikely to use the examination clinically. It is important to consider several limitations with respect to this study. First, it was performed in a cadaver model, which may not fully reflect the characteristics of a live patient. However, cadaver models have been used extensively for the evaluation of ultrasound for ETT confirmation and have demonstrated test characteristics similar to those in live patients for this modality. Additionally, we used only three cadavers in the study, and it is possible this may not have fully represented the wider population. However, we intentionally used cadavers with significant differences in anatomy to best represent the variation in a larger population. It is possible that the repeat intubations may have improved the accuracy of the sonographers due to increased practice. To avoid this, we alternated cadavers and techniques between each use to reduce the potential for learning effects over the course of the study. While it is not possible to completely exclude the potential for sonographers to have improved their accuracy throughout the study, this was not supported by the data, as equivalent numbers of misidentified ETT placements occurred in the early and later intubations. There is also no reason to suggest that this would differentially affect one technique over another.
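For reference, test characteristics of the kind reported in such studies are computed from the counts of correctly and incorrectly identified tracheal and esophageal placements; the sketch below uses made-up counts, not this study's data.

```python
def test_characteristics(tp, fp, tn, fn):
    """Sensitivity and specificity for identifying tracheal (vs. esophageal) ETT placement."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: tp/fn = tracheal placements called correctly/missed,
# tn/fp = esophageal placements called correctly/incorrectly called tracheal.
sens, spec = test_characteristics(tp=58, fp=1, tn=59, fn=2)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```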

Moreover, this study was designed to evaluate the test characteristics of dynamic vs. static sonography for ETT localization. Therefore, it was important to ensure similar rates of tracheal and esophageal intubation, which would not be possible in an ED setting due to low overall rates of esophageal intubation. Because this study was performed by two sonographers with prior experience using ultrasound for ETT confirmation, it is possible that the results may have differed if less experienced sonographers were used. However, the use of ultrasound for ETT confirmation has been suggested to have a rapid learning curve. Nonetheless, further studies are advised to determine whether the accuracy of static vs. dynamic techniques differs in less experienced providers.

Atrial fibrillation (AF), a supraventricular tachyarrhythmia, is the primary diagnosis for over 467,000 hospitalizations each year. The AFFIRM trial compared rate and rhythm control in 4,060 chronic AF patients. It found no difference in overall mortality, but there were fewer hospitalizations with rate control compared to rhythm control. The subsequent RACE II trial established that lenient heart rate control was as effective as strict control in preventing cardiovascular events and required fewer outpatient visits to achieve the goal heart rate (HR). A number of medications are used for rate control, including beta blockers and non-dihydropyridine calcium channel blockers. Diltiazem, a non-dihydropyridine calcium channel blocker, is a common initial choice in the management of AF with rapid ventricular response (RVR) due to its ability to be given as an intravenous push, continuous infusion, and oral immediate-release or extended-release tablet. In the ED, a loading dose of IV diltiazem is usually administered, followed by a PO immediate-release tablet or an IV continuous infusion. Both options allow for dose titration in the short term before converting to a longer-acting PO formulation for discharge.

The PO immediate-release diltiazem tablet has an onset of action of 30-60 minutes and is dosed every six hours. IV continuous-infusion diltiazem has a rapid onset of action and is titrated every 15-30 minutes. The route of diltiazem after the initial IV loading dose (LD) can influence the disposition of the patient from the ED, the level of care needed, and hospital length of stay (LOS). Patients who receive only the PO immediate-release diltiazem absorb a therapeutic dose quickly and can generally be discharged or admitted to a general medicine floor, but their dose cannot be titrated more frequently than every six hours. Patients who receive the IV continuous infusion must have their dose frequently titrated by nursing and often require stepdown care. No studies exist comparing the efficacy of PO immediate-release and IV continuous-infusion diltiazem in the emergent management of AF with RVR. The objective of this study was to compare the incidence of treatment failure at four hours between PO immediate-release and IV continuous-infusion diltiazem after an IV LD.

We collected and managed study data using REDCap® electronic data capture tools. Baseline demographic information recorded included the patient's age, sex, race, and weight. Diltiazem dosing characteristics at baseline and four hours and the use of adjunctive medication for HR or rhythm control at four hours were collected. Clinical outcomes recorded included HR and blood pressure at baseline and four hours, ED disposition, and hospital LOS. Two of the study's investigators abstracted all available data independently. Both were involved in the study design and used a standardized data collection form in REDCap® that included study definitions to ensure consistency between the investigators. Investigators were not blinded to the study outcome. Any discrepancies between abstractors resulted in a collaborative review of the chart by both investigators until the discrepancies were resolved. As a result, interrater reliability was not determined.

The primary endpoint of the study was the percentage of patients with treatment failure at 4 ± 1 hours after initiation of PO immediate-release diltiazem or continuous IV diltiazem infusion. Treatment failure was defined as an HR of > 110 beats/min at 4 ± 1 hours, a switch in therapy from PO immediate-release diltiazem to IV continuous-infusion diltiazem, the requirement of an additional IV diltiazem bolus within four hours of the start of the PO or IV continuous infusion, or the addition of, or switch to, another rate-control or antiarrhythmic agent within four hours. A clinical endpoint of 4 ± 1 hours was selected to give both the PO and the IV diltiazem time to take therapeutic effect. It was also judged to be a reasonable amount of time for the ED provider to determine disposition. We decided not to include time points extending beyond four hours due to the increased number of confounding factors, including conversion to PO β-blockers or extended-release PO diltiazem. Patient characteristics collected included age, weight, race, sex, initial HR and BP, and initial diltiazem LD. We assessed the safety endpoint of clinically significant hypotension by recording the indication for diltiazem discontinuation and the need for vasopressor administration for hemodynamic support.

In the emergent setting, diltiazem has been shown to be superior to digoxin, metoprolol, and amiodarone in the initial management of AF and flutter. IV diltiazem has often been considered superior to PO in the management of AF due to its 100% bioavailability and titratability. However, PO immediate-release diltiazem confers many benefits over IV continuous infusion, including a fast onset of action, minimal titration requirements, decreased nursing resources, and the ability to disposition the patient to a general floor or possibly discharge home. A comparison of PO immediate-release and IV continuous-infusion diltiazem in the emergent clinical setting had not previously been performed. In our study, we found that PO immediate-release diltiazem resulted in an odds ratio of 0.4 for treatment failure when compared with IV continuous infusion. In other words, PO immediate-release diltiazem resulted in odds of heart rate control 2.6 times greater than IV continuous infusion at four hours.
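A minimal sketch of the composite endpoint as defined above, with hypothetical field names; it also notes how the reported odds ratio reads in both directions.

```python
# Minimal sketch (field names hypothetical): the study's composite definition
# of treatment failure at 4 +/- 1 hours expressed as a simple predicate.

def treatment_failure(hr_at_4h: int,
                      switched_po_to_iv_infusion: bool,
                      extra_iv_bolus_within_4h: bool,
                      other_rate_or_rhythm_agent_added: bool) -> bool:
    """True if any component of the composite endpoint is met."""
    return (hr_at_4h > 110
            or switched_po_to_iv_infusion
            or extra_iv_bolus_within_4h
            or other_rate_or_rhythm_agent_added)

# Reading the effect size both ways: an odds ratio of 0.4 for failure with PO
# dosing corresponds to roughly 1 / 0.4 = 2.5-fold higher odds of rate control
# (the 2.6 reported above presumably reflects the unrounded estimate).
print(treatment_failure(hr_at_4h=98, switched_po_to_iv_infusion=False,
                        extra_iv_bolus_within_4h=False,
                        other_rate_or_rhythm_agent_added=False))  # False
```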

This is a surprising result given the higher bioavailability of the IV route compared with the oral formulation. A possible reason for this difference in treatment failure may be that the IV continuous infusion was sub-optimally titrated. In our sample, the median hourly dose of the IV continuous infusion at four hours was only 10 mg/h, well below the maximum dose of 15 mg/h. Slow titration to sub-maximal doses may have resulted in sub-optimal diltiazem plasma concentrations in comparison with patients who were given immediate-release PO diltiazem. In theory, PO dosing may have achieved a higher plasma concentration as a result of the entire diltiazem dose being given at once. Therefore, our results may not reflect a comparison of the two treatment regimens at optimal dosing capacity, but rather real-world practice in which medication titration is not always optimized. PO diltiazem was associated with statistically significantly higher odds of being admitted to the general floor and lower odds of being admitted to stepdown or the ICU. Patients who received PO also had a two-day shorter median LOS compared with IV. While the differences in these two parameters cannot be ascertained in a definitive manner due to the retrospective nature of the study, it is possible that the extended time needed to transition patients from IV to PO diltiazem before discharge may have been a contributing factor. Patient disposition and decreased LOS represent a possible area of healthcare cost savings that should be investigated in future prospective studies. Providers may choose IV continuous-infusion diltiazem if they want to titrate to lower doses in patients with borderline hemodynamic stability. In our study, however, clinically significant hypotension did not occur in either the PO or the IV group. Overall, our findings call into question the primacy of IV continuous-infusion diltiazem for AF. PO diltiazem was associated with a lower rate of treatment failure and a higher rate of heart rate control than IV continuous infusion, with similar safety.
Importantly, these findings are the result of a retrospective study with a limited sample size and therefore must be confirmed in a larger, prospective, randomized controlled trial.

Each year 395,000 people suffer an out-of-hospital cardiac arrest (OHCA) in the United States. Multiple studies have demonstrated that layperson CPR increases the chance of survival by two- to three-fold. The importance of immediate response by the public has been highlighted by the Institute of Medicine (IOM) report "Strategies to Improve Cardiac Arrest Survival: A Time to Act." One of the key recommendations of the IOM report was a call to "foster a culture of action through public awareness and training" to reduce the risk of irreversible neurologic injury and functional disability. Wide disparities in bystander CPR rates and OHCA outcomes persist, with some communities reporting a five-fold difference in survival. Residents who live in neighborhoods that are primarily Black, Hispanic, or low income are more likely to have an OHCA, less likely to receive bystander CPR, and less likely to survive. The implementation of creative new strategies to increase layperson CPR and defibrillation may improve resuscitation in priority populations. Most communities will only improve survival through a multifaceted, community-wide approach that may include teaching hands-only CPR to bystanders, emphasis on brief educational videos and video self-instruction, mandatory school-based training, and dispatcher-assisted CPR. One particularly high-yield approach for high-risk communities is the implementation of mandatory CPR training in high schools. The American Heart Association, the World Health Organization, and the IOM, along with multiple other national and international advocacy groups, have endorsed CPR training in high school as a key foundation for improving OHCA survival outcomes. The 2015 IOM report calls for state and local education departments to partner with training organizations and public advocacy groups to promote and facilitate CPR and automated external defibrillator training as a high school graduation requirement. Today communities across the U.S. have recognized the value of CPR training in high schools, and 36 states have enacted laws calling for mandatory training prior to graduation. The benefit of CPR training in high schools is understood as a long-term investment to ensure that multiple generations are trained and ready to act. However, a more immediate consequence of school-centered training may be the amplification of community CPR training and literacy as students become trainers for their household and circle of friends. Students can be asked to "pay it forward" by being sent home with CPR training materials and assigned the task of training friends and family members. This pilot program sought to investigate the feasibility, knowledge acquisition, and dissemination of a high school-centered, CPR video self-instruction program with a "pay-it-forward" component in a low-income, urban, predominantly Black neighborhood with historically low bystander-CPR rates. Schools provide large-scale, centrally organized community settings accessible to both children and adult family members of all socioeconomic backgrounds. A student-mediated CPR educational intervention may be an effective conduit to relay OHCA knowledge and preparedness in high-risk neighborhoods.

Twenty-nine states and the District of Columbia have legalized medicinal use of cannabis

Although the absolute improvement in quality scores was modest, the intervention resulted in the communication of approximately 100 pieces of additional information, any of which had the potential to improve the handoff process. A recent survey of EM residency programs in the U.S. found poor adherence to standardized ED-to-inpatient handoff practices, and our study was no exception. In the post-intervention period, the SBAR-DR format was used for only 30% of verbal handoffs, and the written template was used for 50%. The reason for this was likely multifactorial and related to both methodological and cultural barriers. Although the pilot study involved the institution's largest admitting service, EPs performed admission handoffs with other admitting teams not included in the study. Having to shift between different handoff strategies may have limited EPs' ability to acclimatize to and integrate SBAR-DR into their daily practice. The adoption of the written handoff note also may have been hindered by the additional charting time required. Additionally, having fewer senior EM residents in the post-intervention cohort may have negatively affected our post-intervention scores, as we found this group showed significant improvement in both handoff quality score and global rating scale. This supports prior research, which has found that residents' ability to integrate handoff information may improve with experience. Additionally, handoff practices are an engrained part of a specialty's culture. Although our study group included faculty and resident physician champions from IM and EM, we may not have fostered adequate buy-in from practicing providers to change practice routines.

As institutions implement changes to inter-unit handoffs and care transitions, they need to address cultural complacency and build coalitions among affected members of the healthcare team. Possible solutions include interdisciplinary communication training, which could give physicians an opportunity to practice standardized handoffs with one another while also mitigating future conflicts via improved interpersonal engagement. Endorsement from senior physician leadership could also facilitate provider buy-in and adherence. Finally, the Joint Commission Center for Transforming Healthcare's Targeted Solutions Tool® has shown promise in improving handoff communication by facilitating targeted needs assessment of local handoff practices, data collection, and quality improvement intervention.

Cannabis is the most widely used illicit substance in the United States. In 2014, 22.2 million Americans 12 years of age and older reported current cannabis use. The rapidly changing political landscape surrounding cannabis use has the potential to increase these numbers dramatically. In addition, as of 2017, California, seven other states, and the District of Columbia have legalized recreational use of marijuana. The incidence of CHS and other marijuana-related emergency department visits has increased significantly in states where marijuana has been legalized. A study published in 2016 evaluating the effects of cannabis legalization on EDs in the state of Colorado found that visits for cyclic vomiting (which included CHS in that study) have doubled since legalization. Despite the syndrome's increasing prevalence, many physicians are unfamiliar with its diagnosis and treatment. CHS is marked by symptoms that can be refractory to standard antiemetics and analgesics. Notwithstanding increasing public health concerns about a national opioid epidemic and emerging guidelines advocating non-opioid alternatives for the management of painful conditions, these patients are frequently treated with opioids.

In light of the public health implications and the need to reduce opioid use when better alternatives exist, this paper describes the current state of knowledge about CHS and presents a novel model treatment guideline that may be useful to frontline emergency physicians and other medical providers who interface with these patients. The expert consensus process used to develop the model guideline is also described. CHS is a condition defined by symptoms including significant nausea, vomiting, and abdominal pain in the setting of chronic cannabis use. Cardinal diagnostic characteristics associated with CHS include regular cannabis use, cyclic nausea and vomiting, and compulsive hot baths or showers, with resolution of symptoms after cessation of cannabis use. Cyclical vomiting syndrome (CVS) is a related condition consisting of symptoms of relentless vomiting and retching. While CHS patients present with symptoms similar to those of CVS, associated cannabis use is required to make the diagnosis. CHS patients present to the ED with non-specific symptoms that are similar to other intra-abdominal conditions. These patients command substantial ED and hospital resources. In a small multi-center ambispective cohort study by Perrotta et al., the mean numbers of ED visits and hospital admissions for 20 suspected CHS patients identified over a two-year period were 17.3 ± 13.6 and 6.8 ± 9.4, respectively. These patients frequently undergo expensive and non-diagnostic abdominal imaging studies. In the Perrotta study, the mean numbers of abdominal computed tomography scans and abdominal/pelvic ultrasounds per patient were 5.3 ± 4.1 and 3.8 ± 3.6, respectively. In addition to contributing to ED crowding through unnecessarily prolonged stays for diagnostic testing, these patients are exposed to the potential side effects of medications, peripheral intravenous lines, and procedures such as endoscopies and abdominal surgeries.

While treating physicians often administer opioid analgesics and antiemetics, symptom relief is rarely achieved with this strategy. In fact, there is evidence to suggest opioids may exacerbate symptoms. The pathophysiology of CHS is unclear. Paradoxically, cannabis has long-recognized antiemetic effects, which led to its approved use for the treatment of nausea and vomiting associated with chemotherapy and for appetite stimulation in HIV/AIDS patients. The factors leading to the development of CHS among only a portion of chronic marijuana users are not well understood. Basic science research has identified two main cannabinoid receptors, CB1 and CB2, with CB1 receptors located primarily in the central nervous system and CB2 receptors primarily in peripheral tissues. This categorization has recently been challenged, and researchers have identified CB1 receptors in the gastrointestinal tract. Activity at the CB1 receptor is believed to be responsible for many of the clinical effects of cannabis use, including those related to cognition, memory, and nausea/vomiting. Scientists hypothesize that CHS may be secondary to dysregulation of the endogenous cannabinoid system through desensitization or downregulation of cannabinoid receptors. Some investigators have postulated that disruption of peripheral cannabinoid receptors in the enteric nerves may slow gastric motility, precipitating hyperemesis. Relief of CHS symptoms with very hot water has highlighted a peripheral tissue receptor called TRPV1, a G-protein coupled receptor that has been shown to interact with the endocannabinoid system but is also the only known capsaicin receptor. This has led some to advocate for the use of topical capsaicin cream in the management of acute CHS. Sorensen et al. identified seven diagnostic frameworks, with significant overlap among the characteristics listed by the various authors; however, there was no specific mention of how many of the above features are required for diagnosis. Those with the highest sensitivity include at least weekly cannabis use for greater than one year; severe nausea and vomiting that recurs in cyclic patterns over months, usually accompanied by abdominal pain; resolution of symptoms after cannabis cessation; and compulsive hot baths/showers with symptom relief. Clinicians should consider other causes of abdominal pain, nausea, and vomiting to avoid misdiagnosis. Abdominal pain is classically generalized and diffuse in nature. CHS is primarily associated with inhalation of cannabis, though it is independent of formulation and can be seen with incineration of plant matter, vaporized formulations, waxes or oils, and synthetic cannabinoids. At the time of this writing, there have been no reported cases associated with edible marijuana. Episodes generally last 24-48 hours but may last up to 7-10 days. Patients who endorse relief with very hot water will sometimes report spending hours in the shower. Many of these patients will have had multiple presentations to the ED with previously negative workups, including laboratory examinations and advanced imaging. Clinicians should assess for the presence of CHS in otherwise healthy, young, non-diabetic patients presenting with a previous diagnosis of gastroparesis. Laboratory test results are frequently non-specific. If patients present after a protracted course of nausea and vomiting, there may be electrolyte derangements, ketonuria, or other signs of dehydration. Mild leukocytosis is common.

If patients deny cannabis use but suspicion remains high, a urine drug screen should be considered. Imaging should be avoided, especially in the setting of a benign abdominal examination, as there are no specific radiological findings suggestive of the diagnosis. Per the expert consensus guideline, once the diagnosis of CHS has been made and there is a low suspicion for other acute diagnoses, treatment should focus on symptom relief and education on the need for cannabis cessation. Capsaicin is a readily available topical preparation that is reasonable to employ as first-line treatment. While this recommendation is based on very limited data, including a few small case series, capsaicin is inexpensive, has a low-risk side-effect profile, makes mechanistic sense, and is well tolerated. Conversely, there are no data demonstrating the efficacy of opioids for CHS. Capsaicin 0.075% can be applied to the abdomen or the backs of the arms. If patients can identify regions of their bodies where hot water provides symptom relief, those areas should be prioritized for capsaicin application. Patients should be advised that capsaicin may be uncomfortable initially but should then rapidly mimic the relief that they receive with hot showers. Additionally, patients must be counseled to avoid application near the face, eyes, genitourinary region, and other areas of sensitive skin, not to apply capsaicin to broken skin, and not to use occlusive dressings over the applied capsaicin. Patients can be discharged home with capsaicin, with advice to apply it three or four times a day as needed. If capsaicin is not readily available but there is a shower available in the ED, patients can be advised to shower with hot water to provide relief. Educate patients to use caution to avoid thermal injury, as there are reports of patients spending as long as four hours at a time in hot showers. Other possible therapeutic interventions include administration of antipsychotics such as haloperidol 5 mg IV/IM or olanzapine 5 mg IV/IM or ODT, which have been described to provide complete symptom relief in case reports. Conventional antiemetics, including antihistamines, serotonin antagonists, dopamine antagonists, and benzodiazepines, can be used, though reports of effectiveness are mixed. Provide intravenous fluids and electrolyte replacement as indicated. Avoid opioids if the diagnosis of CHS is certain. Clinicians should inform patients that their symptoms are directly related to continued use of cannabis. They should further advise patients that immediate cessation of cannabis use is the only method that has been shown to completely resolve symptoms. Reassure patients that symptoms resolve with cessation of cannabinoid use and that full resolution can take anywhere from 7-10 days of abstinence. Educate patients that symptoms may return with re-exposure to cannabis. Provide clear documentation in the medical record to assist colleagues with confirming the diagnosis, as these patients will frequently re-present to the ED.

Due to the growing opioid epidemic in the U.S., there is widespread interest in using prescription drug monitoring programs (PDMPs) to curb prescription drug abuse. PDMPs are statewide databases used by physicians, pharmacists, and law enforcement to obtain data about controlled-drug prescriptions, with the goal of detecting substance-use disorders and drug-seeking behaviors and reducing patients' risks of adverse drug events. While almost all U.S.
states have PDMPs, they vary in design and implementation. In this paper, we review the history, evidence, and adoption of best practice guidelines in state PDMPs, with a focus on how best to deploy PDMPs in emergency departments. Specifically, we analyze the current PDMP model and provide recommendations for PDMP developers and EDs to help meet the informational needs of ED providers with the goal of better detection and prevention of prescription drug abuse. The U.S. accounts for roughly 80% of opioid use worldwide, and misuse – such as the recreational use of opioids – is a significant problem. Every 19 minutes in the U.S., someone dies from an unintentional drug overdose, the majority from opioids. From 1997 to 2007, the average per-person use of prescription opioids in the U.S. increased 402%, from 74 mg to 369 mg per year. Meanwhile, an estimated seven million people above the age of 12 use opioids and other prescription medications for nontherapeutic purposes annually. These non-medical uses of opioids are linked to 700,000 ED visits yearly. Along with treating the consequences of opioid-related illness and overdose, EDs are often used by some patients as a source of opioid prescriptions.

This study was a sub-analysis of a prospective observational study

Such encounters may also serve as a sentinel event for those at high risk for stroke, facilitating important changes in their health behavior. Physicians can seize on such teachable moments to educate high-risk AF/FL patients on stroke risk and prevention and, when appropriate, to recommend or prescribe anticoagulation. Initiating anticoagulation at the time of ED discharge for stroke-prone patients does not increase bleeding rates and contributes to decreased mortality. Some patients, however, might prefer to have this shared decision-making conversation with a provider aware of their values and preferences, e.g., a primary care provider or cardiologist. Nevertheless, emergency physicians are an important link in the chain of multi-specialty care coordination for the stroke-prone AF/FL population, whether they initiate the discussion of thromboprophylaxis or actually prescribe anticoagulation. The initiation of thromboprophylaxis for ED patients with AF/FL at high risk for stroke has not been extensively studied. The literature that exists, however, demonstrates under-prescribing in countries around the world. Prescribing practices in U.S. community EDs are not well understood. We undertook a multi-center, prospective, observational study to evaluate the anticoagulation practice patterns of community EPs and short-term, post-ED care providers in the management of patients with non-valvular AF/FL considered at high risk for ischemic stroke. We also sought to identify factors influencing the initiation of oral anticoagulation. We hypothesized that increasing age, lack of cardiology involvement in the patient's ED care, and restoration of sinus rhythm before ED discharge would decrease the likelihood of receiving an oral anticoagulant prescription. Lastly, we reviewed the electronic health records of the patients discharged without anticoagulation to evaluate documented reasons for withholding anticoagulation and the provision of educational material on AF/FL stroke risk and prevention.

The source population was based within KPNC, a large integrated healthcare delivery system that provides comprehensive medical care for four million members across 21 medical centers. KPNC members represent approximately 33% of the population in the areas served and are highly representative of the local surrounding and statewide population. Emergency care was provided by emergency medicine residency-trained and board-certified EPs. During the study period, the annual census of each of the seven EDs ranged from 25,000 to 78,000. No departmental policies were in place at the participating EDs to govern the short-term anticoagulation management of patients with AF/FL. Patient care was left to the discretion of the treating EPs. All facilities had pharmacy services available around the clock for discharge medications and supplemental patient education. The oral anticoagulation medications in use within KPNC during the study period were warfarin and dabigatran, warfarin being the drug of choice at the time. Furthermore, each facility had its own pharmacy-managed, phone-based Outpatient Anticoagulation Service that managed outpatient warfarin use and provided close follow-up and monitoring of these patients, akin to similar programs in other KP regions in the U.S. The percent time in therapeutic range for the international normalized ratio (INR) during the study period varied by facility and ranged from 70% to 74%, calculated with a six-month look-back period using the Rosendaal linear interpolation method. In the TAFFY study, adult KPNC health plan members in the ED with electrocardiographically confirmed non-valvular AF/FL were eligible for prospective enrollment if their atrial dysrhythmia fell into any one of three categories: symptomatic AF/FL; AF/FL requiring ED treatment for rate or rhythm control; or a first known electrocardiographically documented episode of AF/FL. Patients were ineligible if they were transferred in from another ED, were receiving only palliative comfort care, had an implanted cardiac pacemaker/defibrillator, or had been resuscitated from a cardiac arrest in the ED or just prior to arrival. The treating EPs enrolled patients via convenience sampling and were provided a small token of appreciation for their bedside data collection. No research assistants facilitated enrollment.
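The Rosendaal method named above has a simple computational core: interpolate the INR linearly between successive measurements and count the share of days falling inside the therapeutic range. The sketch below is an assumed minimal implementation; the 2.0-3.0 warfarin range and the example dates are illustrative, not taken from the study.

```python
# Minimal sketch of the Rosendaal linear-interpolation method: daily INR values
# are interpolated between measurement dates, and percent time in therapeutic
# range (TTR) is the share of interpolated days inside the target range.

from datetime import date

def ttr_rosendaal(measurements, low=2.0, high=3.0):
    """measurements: list of (date, INR) tuples sorted by date."""
    in_range_days = 0.0
    total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        if span <= 0:
            continue
        for day in range(span):
            inr = inr0 + (day / span) * (inr1 - inr0)   # interpolated daily INR
            if low <= inr <= high:
                in_range_days += 1
            total_days += 1
    return 100.0 * in_range_days / total_days if total_days else 0.0

print(ttr_rosendaal([(date(2011, 1, 1), 1.8),
                     (date(2011, 1, 15), 2.6),
                     (date(2011, 2, 1), 3.4)]))   # about 61% in range
```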

This anticoagulation study included TAFFY patients who were not taking oral anticoagulants at the time of ED presentation, were at high risk for thromboembolic complications based on a validated thromboembolism risk score, and were discharged home directly from the ED. Only a patient's first enrollment was included in this analysis. We used the validated Anticoagulation and Risk Factors in Atrial Fibrillation (ATRIA) stroke risk score to identify our AF/FL population at high risk for thromboembolism, as it has been shown to be more accurate than the CHADS2 or CHA2DS2-VASc stroke risk scores. TAFFY variables collected prospectively at the time of patient care included presenting symptoms; characterization of the atrial dysrhythmia; comorbid diagnoses; ED management; cardiology consultation; discharge rhythm; and discharge pharmacotherapy. To minimize the effect that structured data collection might have on stroke prevention and to improve the odds of describing real-world behavior, the physician education material and data collection tool mentioned none of the following: hemorrhage risk, thromboembolic risk, risk scoring, indications for anticoagulation, post-ED follow-up care, or this study's objectives and hypotheses. We undertook monthly manual chart review audits at each medical center to identify cases that were TAFFY-eligible but had not been enrolled, to assess potential selection bias between the enrolled and missed-eligible populations. After completion of the enrollment period, we extracted additional demographic and clinical variables from the health system's comprehensive integrated electronic health record. These included additional patient characteristics as well as oral anticoagulation prescription, prescriber, and outpatient follow-up within 30 days of ED discharge. Among 2,849 identified eligible patients, 1,980 were enrolled by the treating physicians in the parent TAFFY study. Enrolled and non-enrolled patients were comparable in terms of age, gender, comorbidity, and stroke risk scores, except that enrolled patients were more likely to have had a history of previously diagnosed AF/FL.

For the present analysis, we excluded 906 enrolled patients who were not discharged home directly from the ED or were not KP health plan members at enrollment, 252 patients who were already taking anticoagulation therapy, and 510 patients who were not at high risk for thromboembolism. The remaining 312 AF/FL patients constituted our study cohort. While selected for the study based on their ATRIA score, all study patients were also found to be high risk using the CHA2DS2-VASc score. Overall, the median age was 80 years, and 201 cohort members were women. Oral anticoagulants were prescribed to 128 patients within 30 days of the index ED visit, with 85 patients receiving a new anticoagulant prescription at the time of ED discharge and the remaining 43 patients in the following 30 days. In this sample, warfarin was the only oral anticoagulant prescribed. During the post-ED-discharge period, the specialties of the physicians prescribing anticoagulation included outpatient internal medicine, cardiology, hospital medicine, and emergency medicine. Among the 227 patients who left the ED without an oral anticoagulant prescription, 195 had an in-person or telephone encounter with a primary care provider or cardiologist within 30 days. Forty-three patients were discharged home only on antiplatelet medications: seven were advised to continue their daily aspirin and 36 were prescribed new daily antiplatelet agents at the time of discharge. Characteristics of the cohort stratified by anticoagulation initiation are described in Table 2. Variables independently associated with increased odds of anticoagulation initiation included younger age, new diagnosis of AF/FL, symptom onset more than 48 hours prior to evaluation, EP assessment of the rhythm pattern as intermittent, receipt of cardiology consultation in the ED, and failure of sinus restoration by the time of ED discharge. Among the 227 patients discharged home from the ED without anticoagulation, 139 had one or more documented reasons for withholding anticoagulation. These were categorized as physician concerns and patient concerns. The leading physician reasons for withholding anticoagulation were concerns about elevated bleeding risk, deferring the decision to an outpatient provider, and the perception that the restoration of sinus rhythm had significantly reduced or eliminated stroke risk. The leading patient reasons for declining anticoagulation were a preference to continue the discussion of anticoagulation with their outpatient provider and simple refusal, not otherwise specified. Deferring the shared decision-making process to the patient's outpatient provider was the leading reason for withholding anticoagulation when physician and patient concerns were combined. One hundred thirty-seven patients were given patient education material on AF/FL in their discharge instructions. The three versions of the material used by the EPs each included one sentence about the general association between AF/FL and thromboembolic events. The material was not personalized, however: it did not quantify the patient's specific risk, mention broader thromboembolic risk categories, or discuss the benefits and risks of stroke prevention therapy. Using an online random number generator, we identified 23 cases for review by a second abstractor. Percent agreement was the same for the presence of a documented reason for non-prescribing and for the provision of patient education material.
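For reference, the sketch below shows how the two interrater reliability measures reported at the end of this paragraph and the start of the next (percent agreement and Cohen's kappa) can be computed for a yes/no abstraction variable; the counts are hypothetical, chosen only to illustrate the calculation.

```python
# Minimal sketch with hypothetical counts: percent agreement and Cohen's kappa
# for two abstractors classifying the same cases on a yes/no variable
# (e.g., "documented reason for non-prescribing present").

def agreement_and_kappa(both_yes, both_no, only_a_yes, only_b_yes):
    n = both_yes + both_no + only_a_yes + only_b_yes
    observed = (both_yes + both_no) / n
    # chance agreement from each rater's marginal "yes"/"no" rates
    a_yes = (both_yes + only_a_yes) / n
    b_yes = (both_yes + only_b_yes) / n
    expected = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# hypothetical: 14 cases rated "yes" by both, 8 "no" by both, 1 disagreement
print(agreement_and_kappa(both_yes=14, both_no=8, only_a_yes=1, only_b_yes=0))
```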

The kappa statistic was 0.91 for each variable. In this multi-center, prospective cohort of non-anticoagulated AF/FL patients at high thromboembolic risk discharged home from the ED, we found that approximately 40% were prescribed oral anticoagulation within 30 days. Furthermore, we observed that younger age, selected rhythm-related characteristics in the ED, and receipt of cardiology consultation were strongly associated with receiving anticoagulation. About 60% of patients discharged home from the ED without anticoagulation had a reason documented in their electronic health record, a relatively high percentage of documentation compared with a recent, large, inpatient registry. The principal reason for non-prescribing in our study was deferral of the shared decision-making process to the patient's outpatient provider. Such reasoning is sensible in a setting like ours where patients have ready access to their outpatient physicians and 30-day follow-up is common. Our percentage of deferral was higher than in a similar study of ED anticoagulation prescribing for high-risk AF/FL in Spain, although, like our study population, all of their patients also had health coverage. Other leading documented reasons included a perception of increased bleeding risk and a perception of reduced stroke risk. The incidence of oral anticoagulation initiation for AF/FL patients at high risk for ischemic stroke who are discharged home from the ED has not been well described. Reports range widely, from approximately 10% to 50%. The calculation also varies depending on whether stroke-prone AF/FL patients deemed ineligible for anticoagulation are included in the denominator. A large, 124-center study from Spain by Coll-Vinent et al. in 2011 demonstrated that anticoagulation was initiated at the time of home discharge in 193 of 453 high-risk AF patients, higher than our 27%. The case mix in this study was similar to ours in that patients with all categories of AF were included, but differed in that they excluded patients thought ineligible for anticoagulation, something our study design did not allow. This difference might explain in part why their incidence of initiation was higher than what we observed. A more recent, 62-center Spanish study from the same investigators reported a similarly high incidence of de novo anticoagulation prescribing on ED discharge. Two hospitals affiliated with the University of British Columbia, Canada, have reported a high baseline incidence of appropriate anticoagulation initiation at ED discharge for high-risk AF/FL patients. As with the Spanish study above, these investigators had excluded ineligible patients. Other studies have reported lower incidences of anticoagulation initiation. A retrospective cohort study undertaken in 2008 in eight Canadian EDs observed thromboprophylaxis initiation in 21 of 210 patients with recent-onset AF/FL who were discharged home. A more recent prospective study by Stiell et al. described the treatment of patients with recent-onset AF at six academic Canadian EDs from 2010 to 2012 and found slightly lower rates of untreated high-risk patients leaving the ED with a new anticoagulation prescription. In a retrospective study of two academic Canadian EDs, Scheuermeyer et al. reported that 27% of high-risk AF/FL patients were begun on appropriate stroke prevention medications at discharge, and documentation of reasons for withholding thromboprophylaxis was noted in an additional 21 patients. Our finding that older patients with high-risk AF/FL were less likely to receive an oral anticoagulant prescription than their younger counterparts is consistent with studies demonstrating under-treatment both in the ED and in other settings. Thromboprophylaxis is less commonly prescribed to patients over 75 years of age, even though this population likely benefits the most, given their higher absolute risk of ischemic stroke compared with intracranial hemorrhage or life-threatening extracranial hemorrhage. Physicians often acknowledge their hesitancy to initiate anticoagulation in the elderly and very elderly, given that these patients often have a high comorbidity burden, associated cognitive disorders, and polypharmacy-related challenges.

We also report the use of an EDOU to help decrease hospital admission

To the best of our knowledge, this is the first report of healthcare utilization in patients with SCD that includes day hospital (DH) and ED observation visits in addition to the routinely reported ED visits and hospitalizations. We intentionally "counted" each encounter, and the numbers are significant. We attempted to dissect the "locations" in an effort to more fully understand all healthcare use for treatment of VOC and to begin to understand the potential of all locations as alternatives for treatment of VOC. During the project, several changes at both sites affected healthcare utilization options. Immediately prior to the onset of the study, Site 1 opened a new day hospital, enabling the management of mild episodes of VOC with a DH stay; in contrast, at Site 2, the main provider who admitted patients to the DH took an 18-month medical leave, temporarily limiting the use of the DH at Site 2. It is therefore not surprising that the DH was used more by Site 1 patients needing acute pain management of VOC. It should be noted that the percentage of patients with one or more DH encounters at each site was not significantly different. The difference in usage reflected the frequency of DH use per patient during the study period rather than the percentage of patients with at least one DH encounter at each site. Another difference in management style is reflected in the hospital admission rate following a DH encounter at each site. Site 1 had a low post-DH-encounter hospital admission rate, 8%, compared with 24% at Site 2. Dedicated DH management of patients with SCD has been shown to reduce full hospital admissions and total costs. It is clear there was a lower threshold for admission from the DH at Site 2 when compared with Site 1. This reflects practice pattern differences.

Emergency physicians should work with area hematologists to explore expanded use of DH treatment of uncomplicated VOC to reduce hospital admission in cases where admission is not otherwise warranted. Site 1 placed 67 patients in the observation unit rather than admitting them to the hospital when discharge after the ED stay was not possible; the admission rate was lower than the SCD-VOC ED admission rate. A Brazilian hospital center successfully implemented an EDOU protocol and reduced hospital admissions; however, generalization of its findings is limited by the small sample size, as there were fewer than 30 hospital admissions for sickle cell crisis each year. Two studies reporting a 50-55% reduction in hospital admission rates following implementation of a dedicated SCD-VOC observation protocol have been published in abstract form, but the detailed reports have yet to be published. Additional details are required before conclusions can be generalized to other settings. However, emergency physicians with access to an EDOU should consider establishing an SCD observation protocol to reduce hospital admissions for uncomplicated VOC. Few prior studies have assessed sickle cell patients' use of hospital facilities outside of their specialists' home institutions. Our finding of 34.5% of SCD patients visiting outside institutions is slightly less than that found by Woods et al. in 1997, who reported that 39% of SCD patients in the Illinois statewide database used more than one hospital for care. However, our finding of 34.5% outside hospital use is considerably less than in the Panepinto et al. study using a database from eight states, which found that 48.7% of adult patients with SCD used more than one hospital. The fact that our patients had access to a hematologist for regular care may have reduced their need to seek care outside of the home institutions, while the other two cited studies reflected a more general SCD patient population, likely with less hematology follow-up care. Furthermore, patients seeking care elsewhere may represent needs unmet by the home institution. Our findings highlight the importance of measuring the cost of outside hospital utilization when studying the financial impact of new treatments or programs initiated at the investigator's institution. While the majority of patients with sickle cell disease at each study site presented for acute care during the study period, a significant number had no acute care encounters, for a period longer than previously reported.

Approximately 40% of clinic patients at Site 1 and 33% of clinic patients at Site 2 had no acute care encounters at their hematologist's institution or at hospitals within 20 miles of the hematologist during the 2.5-year monitoring period. Our findings should be compared to an eight-state study of statewide inpatient and ED databases that found only 29% of patients had no acute care encounters related to their sickle cell anemia over a 12-month period. Darbari et al. reported percentages similar to our study, 40%, but the assessment period was only one year. Our findings document that 33-40% of two populations of patients with sickle cell disease were managed by hematologists without the need for acute care encounters over a period of 30 months. We believe this is an important finding that further refutes the commonly held myth that all patients with SCD are high utilizers. Another important finding is that a greater proportion of patients at Site 2 had one or more hospital admissions and one or more ED visits. Furthermore, while the number of patients with acute care encounters at Site 1 was more than twice the number at Site 2, Site 2 patients had more total ED encounters than Site 1 patients. This again speaks to differences in practice patterns between sites that can be guided with strong input from the patients' hematologists. It has been documented previously that a minority of patients with SCD account for a disproportionately greater number of encounters; however, the variation in acute care usage between sickle cell populations has not been demonstrated previously within a single study. Clearly, the patients at Site 2 had more acute care encounters per patient. Our study did not assess the differences in methods of sickle cell disease management in the outpatient clinics; future study should investigate differences in all management methods, as well as differences in patient characteristics, to determine the cause of this difference in acute care utilization. Our study was a prospective observational study, and we did not randomize patients to any specific treatment plan or setting. Although it was our intention to provide optimal and uniform care at both sites, providers at Site 2 were unable to initiate patient-controlled analgesia in the ED.

However, use of patient-controlled analgesia at Site 1 had unique problems, including delays to the initiation of pain treatment, as the device takes more time to set up than a simple, single intravenous injection of pain medicine. Patient satisfaction with pain medication was not significantly different between sites. We did not assess outside hospital use beyond a 20-mile radius of each study site. We learned from discussions with patients that a few had received acute medical care at facilities outside of the 20-mile radius surrounding the home institutions, but we are not able to quantify or comment further on this care because patients were consented only for hospitals within the 20-mile radius. We observed differences in management styles, but we were unable to determine from these data to what extent the differences we observed were due to physician practice, patient disease severity, or other factors. Each site experienced a deficit in hematologist specialty coverage that reduced the use of the DH until a replacement could be found. Our patient population had access to a hematology specialty clinic during the entire study period; our findings may not be applicable to settings without readily available hematology follow-up and hematologist-directed day hospital management for patients with sickle cell disease.

Headache is a common complaint in the emergency department. The use of head computed tomography (HCT) by emergency physicians for the evaluation of headache varies widely, and 97% of EPs surveyed felt that at least some of the imaging studies ordered in EDs were medically unnecessary. The American College of Emergency Physicians released its Choosing Wisely Campaign in 2013, which included avoiding HCTs in patients with minor head injury who are at low risk based on validated decision rules. During the 2015 Academy of Emergency Medicine Consensus Conference on diagnostic imaging in the emergency department, participants suggested that allowing providers to influence metrics could produce better quality metrics; they also suggested that knowledge translation for the optimization of diagnostic imaging use should be a core area warranting further research. The Centers for Medicare and Medicaid Services (CMS) proposed OP-15, "Use of Brain Computed Tomography in the Emergency Department for Atraumatic Headache," to measure the proportion of HCTs performed on ED patients presenting with a primary complaint of headache that were supported by diagnosis codes; however, its methods were soon questioned. In 2012, while OP-15 was still under consideration, we implemented a quality improvement (QI) effort intended to improve the documentation of appropriate diagnoses in support of HCT ordering. As part of this QI effort, we addressed some of the criticisms of OP-15 by expanding the indications for HCT and obtaining input from practicing EPs.

Reviewing this QI effort in 2014, we observed that the proportion of HCT use decreased after EPs had reviewed their individual practice data. The primary objective of our present study was to determine whether the observed decrease in HCT use was associated with changes in the proportions of death or missed intracranial diagnosis. Secondarily, we sought to determine whether proportions of subsequent cranial imaging or reevaluation of headache differed between those who did and those who did not undergo HCT in the ED. Our QI effort was structured to fulfill the practice improvement component of the American Board of Emergency Medicine's Maintenance of Certification requirement. This required collecting data on 10 visits per EP before and after an intervention. We performed two interventions in succession, so our QI effort yielded three epochs: pre-intervention, post-education, and post-data review. At the end of each epoch, we sampled 10 visits for headache seen by each EP by searching the EMR for chief complaints of headache and identifying the 10 most recent ED visits seen by each faculty EP. For our educational intervention, we began by soliciting feedback from EPs on OP-15. Using this feedback, we expanded the list of appropriate diagnoses supporting HCT. We followed this with a series of emails and lectures explaining CMS OP-15. We also conducted group discussions during educational conferences and faculty meetings to educate EPs on selecting appropriate diagnoses to support HCT ordering and to explain the measurement process, highlighting common pitfalls. During group discussions we invited and answered questions. The explicit goal of the education was to improve diagnosis documentation rather than to decrease HCT ordering. This began in late 2012 and continued through 2013. The data-review phase took place between January and March of 2014, when individual EPs reviewed their own HCT ordering practices based on data collected for the QI effort during the pre-intervention phase. These reviews occurred during individual faculty members' annual reviews with the department chair. In these meetings we presented each EP with his or her individual proportion of HCT ordering and proportion of appropriate diagnosis assignment. In cases where an HCT was ordered without the assignment of an appropriate supporting diagnosis, we reviewed the ED chart. In keeping with Schuur et al.'s findings, we found that in the majority of cases a more specific diagnosis than "headache" was clearly supported by information documented in the ED chart but had not been assigned at the end of the ED visit. During each annual review we informed the EP of the specific cases where HCT was not supported by a diagnosis code and suggested an alternate, more specific diagnosis or the addition of a secondary diagnosis that would have made the HCT appropriate according to CMS OP-15. This was followed by the post-data-review phase, when we sampled another 10 headache visits per EP. After our QI effort was completed, we were surprised to note that while there was no decrease in HCT use after the educational intervention, we observed a 9.6% reduction in HCT use after data review. In 2016 we decided to use the dataset generated during the QI effort to investigate our study hypothesis: was the decrease in HCT use followed by an increase in death or missed intracranial diagnosis?
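For illustration, the sketch below shows how the epoch-level proportions of HCT use could be compared, here with a two-proportion z-test from statsmodels; the visit counts are hypothetical, and the choice of test is an assumption for illustration, not the study's stated analysis.

```python
# Minimal sketch (hypothetical counts): comparing the proportion of headache
# visits with an HCT ordered across the three QI epochs described above.

from statsmodels.stats.proportion import proportions_ztest

# hypothetical: (number of HCTs ordered, number of sampled headache visits)
epochs = {
    "pre-intervention": (230, 400),
    "post-education":   (228, 400),
    "post-data-review": (192, 400),
}

for name, (ct, n) in epochs.items():
    print(f"{name}: {ct / n:.1%} HCT use")

# two-proportion z-test, pre-intervention vs. post-data-review
stat, p = proportions_ztest(
    count=[epochs["pre-intervention"][0], epochs["post-data-review"][0]],
    nobs=[epochs["pre-intervention"][1], epochs["post-data-review"][1]],
)
print(f"z = {stat:.2f}, p = {p:.3f}")
```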

The registry should collect the prehospital management or link with the Thai EMS database

Data on treatment, outcome, and post-crash transportation were collected from medical records. If a patient was admitted to the hospital, the outcome of treatment was reassessed at day 30 after the crash to be consistent with the World Health Organization definition of road traffic death. Alcohol-use information was obtained in various ways: from patients who verbally indicated that they had consumed alcohol prior to the injury; from relatives or from those who transported patients to the hospital; or by physical examination by a health provider or laboratory testing. These data were then entered into the NIEM electronic database. This study was reviewed and approved by the Faculty of Medicine Siriraj Hospital Institutional Review Board. We conducted a retrospective review study using data collected by the NIEM registry during the 2008–2015 New Year holidays and the 2008–2014 Songkran holidays. We excluded patients who died at the scene and those who were not transported to hospitals, because no transportation method for these patients had been recorded in the registry. Severe RTI patients were defined as patients who were admitted to the hospital, were referred, or died in the emergency department. We then categorized the data into two cohorts: a control group, and an EMS utilization group, which included patients who were transported to hospitals by FR, BLS, ILS, or ALS ambulances. The registry also recorded the vehicle type. We further classified the data according to whether the victims were vulnerable or non-vulnerable road users. We also categorized the time of day at which patients visited hospitals, dividing the day into 06:00–17:59 and 18:00–05:59.
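A minimal sketch of the variable coding described above; the field names, and the assumption that vulnerable road users comprise pedestrians, cyclists, and motorcyclists, are illustrative rather than taken from the registry's data dictionary.

```python
# Minimal sketch (hypothetical field names) of the cohort and covariate coding
# described above: EMS use, severe RTI, vulnerable road user, and time of day.

def code_record(rec: dict) -> dict:
    """rec is one registry row; returns derived study variables."""
    return {
        # transported by any EMS level; otherwise private-vehicle control group
        "ems_use": rec["transport"] in {"FR", "BLS", "ILS", "ALS"},
        "severe_rti": rec["admitted"] or rec["referred"] or rec["died_in_ed"],
        # assumed definition: pedestrians, cyclists, motorcyclists
        "vulnerable_user": rec["vehicle_type"] in {"pedestrian", "bicycle", "motorcycle"},
        "night": not ("06:00" <= rec["hospital_arrival_time"] <= "17:59"),
    }

print(code_record({"transport": "ALS", "admitted": True, "referred": False,
                   "died_in_ed": False, "vehicle_type": "motorcycle",
                   "hospital_arrival_time": "22:30"}))
```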

Helmet and seat belt use were combined into a single variable. Mortality included death in the ED or during referral, death in the initial 24 hours after admission, and death 1–30 days after admission. Survival was defined as patients who either survived beyond 30 days of admission or were discharged from the hospital. Logistic regression was used to analyze the primary outcome, the association between EMS utilization and mortality of RTI patients; we then adjusted the outcome for factors that affect injury severity, such as age, sex, being a vulnerable road user, road characteristics, alcohol consumption, and helmet or seat belt use. We conducted subgroup analyses to identify factors related to the mortality of patients who were transported by EMS. Univariate analysis was conducted using the chi-square test and Fisher's exact test. We included factors that have been shown to be associated with RTI severity, as mentioned above, and the level of EMS in a multiple logistic regression model. P values <0.05 were considered significant. We calculated statistics using R version 3.2.1.

This study describes outcomes of severe RTI patients transported by EMS compared with patients transported by private vehicles. We conducted the analysis using a nationwide registry in Thailand, which has the highest traffic accident mortality rate in the world. Moreover, the registry collected data during holidays with a high incidence of RTIs. In this cohort, severe RTI patients transported by ambulance had a higher mortality rate than patients transported to hospitals by private vehicles, and this finding is in line with those of other studies. The higher mortality rate might be attributed to the fact that patients who were transported by EMS were more severely injured than those in the control group.
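The study's adjusted analysis was performed in R 3.2.1; the sketch below is an illustrative Python analogue of a multiple logistic regression of mortality on EMS use with the covariates listed above, run on synthetic data with hypothetical variable names.

```python
# Illustrative sketch only (the study used R 3.2.1): an adjusted logistic
# regression of 30-day mortality on EMS utilization, controlling for the
# covariates named above. The synthetic data stand in for the registry.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ems": rng.integers(0, 2, n),                # 1 = transported by ambulance
    "age": rng.integers(15, 80, n),
    "male": rng.integers(0, 2, n),
    "vulnerable_user": rng.integers(0, 2, n),    # pedestrian/cyclist/motorcyclist
    "highway": rng.integers(0, 2, n),
    "alcohol": rng.integers(0, 2, n),
    "protective_device": rng.integers(0, 2, n),  # helmet or seat belt used
})
# synthetic outcome: severity proxies raise the odds of death (arbitrary coefficients)
logit_p = (-3 + 0.8 * df.ems + 0.02 * df.age + 0.5 * df.highway
           + 0.6 * df.alcohol - 0.7 * df.protective_device)
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "died ~ ems + age + male + vulnerable_user + highway + alcohol + protective_device",
    data=df,
).fit(disp=False)
print(np.exp(model.params))  # adjusted odds ratios
```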

Our results demonstrated that approximately 40% of severe RTI patients were transported to hospitals by ambulance, which is lower than rates reported in other countries. Recently, Huang et al. reported that 73.4% of RTI patients in Taiwan were transported by EMS. One possible reason for not using EMS may be that patients did not know, or had forgotten, the four-digit number for ambulance services. This contact number is different from those of other public service agencies such as the police and fire departments. Continuous advertisement of the emergency number should be undertaken to increase EMS use in Thailand. The demographic data revealed that the patients who used EMS had more factors that increase injury severity than the control group; for example, our analysis demonstrated that severe RTI patients transported by ambulance reported recent alcohol consumption more often than those who were not. Recent studies have reported that alcohol intoxication is associated with greater injury severity and higher mortality rates among RTI patients. We also found that severe RTI patients transported by EMS were more often injured on highways than patients who were not. This suggests that the patients in the EMS use group were more severely injured than those in the control group, since accidents on highways were more likely to have occurred at high speed, which is associated with more severe injuries. Although we used multiple logistic regression to adjust for confounding factors, certain factors related to injury severity were not included in the registry, such as vehicle speed, comorbidity, prehospital care time, or injury severity scores. To determine the true effect of EMS use on the clinical outcomes of RTI patients, the registry should collect information about these other factors related to severity of injuries. The subgroup analysis identified factors associated with mortality among severe RTI patients transported by EMS. It demonstrated that patients who were transported by ALS teams had significantly increased mortality. This might be due to the fact that patients transported by ALS teams tended to be more severely injured than patients transported by other EMS levels.

Another possibility is that ALS team use increases on-scene time because of prehospital interventions such as administering intravenous fluids or performing endotracheal intubation, as opposed to the “scoop and run” approach. This concurs with previous studies reporting that the use of ALS teams did not improve clinical outcomes. The Ontario Prehospital Advanced Life Support Major Trauma Study demonstrated that implementation of ALS teams did not improve the survival of major trauma patients compared with patients treated by BLS teams. That study also showed that among patients with a Glasgow Coma Score of less than 9 who needed endotracheal intubation, those transported by ALS teams had a significantly lower survival rate. However, our registry did not record prehospital care time or prehospital interventions, so further studies that account for all of these confounders are needed to determine the true effect of ALS teams on the clinical outcomes of RTI patients. Helmet or seat belt use was a factor that reduced mortality in severe RTI patients transported by ambulance, in agreement with previous studies showing that these protective devices reduce injury severity. Abu-Zidan et al. reported that restrained RTI patients had significantly less severe injuries and required fewer operations than unrestrained patients. Nash et al. reported that seat belt use was associated with significant reductions in injury severity, mortality, and length of stay among RTI patients. Liu et al. reviewed 61 observational studies and found that helmet use significantly reduced mortality and head injury in motorcycle crashes. Only 13.89% of patients in this cohort wore a helmet or seat belt, even though Thai law requires their use; further studies should explore the barriers to helmet and seat belt use. Although we analyzed data from the largest RTI registry in the country, revealing high mortality rates among RTI patients, our study has certain limitations. First, because it was a retrospective observational study, data were missing on accident time, helmet and seat belt use, alcohol consumption status, and road characteristics. Second, the registry’s collection method carries a potential for recall bias, since data collectors interviewed patients or their relatives at the hospital. Third, the registry collects data only during long holiday periods; without data for non-holiday periods, it cannot describe the effect of EMS utilization at other times of year. We also excluded patients who died at the scene and were not transported to hospitals, which may have altered the injury severity profile of the study population. Furthermore, the analysis combined helmet use and seat belt use into one protective-factor variable for all RTI patients, even though the two involve different injury mechanisms; further work should include subgroup analyses comparing motorcycle users with occupants of four-wheel vehicles. Finally, as mentioned earlier, the registry did not collect data on confounding variables that could affect clinical outcomes, such as ISS, prehospital care time, vehicle speed, or patient comorbidities, and the lack of data on prehospital interventions is another limitation.

Alcohol consumption was defined from patient history and physical examination by the physician, which is not the gold standard; blood alcohol levels should perhaps be added to the registry. Improving the registry would enhance our understanding of these characteristics as well as of the effects of EMS utilization on clinical outcomes. Beyond our suggestions for improving the registry, the results of this study carry further implications. Because RTI patients transported by EMS during the holidays had increased mortality rates, we recommend that this group of patients be evaluated in a trauma resuscitation room earlier, especially patients transported by ALS teams, and that the ambulance team give prehospital notification to the receiving hospital before arrival to reduce time spent in the ED.

The incidence of spinal epidural abscess (SEA), a highly morbid and potentially lethal deep tissue infection of the central nervous system, has risen significantly over the past decade.1,2 Our tertiary care institution has experienced an increase of more than 200%, from 2.5 to 8 cases per 10,000 hospital admissions, since 2005. Although the reasons are not clearly defined, various factors, such as an expanded, comorbidly ill, aging population and procedures or behaviors predisposing to bacteremia, have been posited to contribute to the increased incidence of SEA. Because SEA may rapidly and unpredictably evolve to irreversible neurologic injury, and because diagnostic delays remain common, our goal was to use these data to inform a discrimination model that could be employed at the time of initial clinical presentation to prioritize potential cases for expeditious advanced imaging and thereby optimize patient outcomes. The design and selection criteria for cases and controls have been previously described. To ensure clinical relevance, the case and control groups were drawn from patients who either presented with findings raising concern for SEA or underwent a “rule-out” evaluation; magnetic resonance imaging or computed tomography and microbiologic data were used to assign patients to the appropriate group. Baystate Medical Center, a 720-bed tertiary-care, regional, academic medical center currently serving a population of approximately 850,000 people in western Massachusetts, has more than 33,000 adult discharges annually and a case-mix index of 1.72, indicating high severity and complexity of its inpatients relative to their diagnosis related group. Encounters were coded as “confirmed” SEA if there was a radiologist-confirmed epidural lesion on advanced imaging together with a positive culture from the lesion or blood; “probable” if there was a radiologist-confirmed epidural lesion in the absence of positive cultures from lesion or blood; and “control” if no lesion was identified by the radiologist on the imaging study. This study was approved by the institutional review board. We preliminarily evaluated baseline comparisons between cases and controls using univariable analyses and direct visualization methods. Because our goal was to develop a discrimination model, we used the Integrated Discrimination Improvement (IDI) index to identify candidate discriminators. The IDI represents the degree to which a candidate variable increases the predicted event probability in cases while decreasing it in controls, thereby discriminating cases from non-cases.
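As an illustration of how such a univariable IDI screen can be computed, the sketch below (in R, one of the packages used for these analyses) compares a logistic model containing a single candidate variable against an intercept-only model; the function and variable names are hypothetical and this is not the authors' code.

    # Hypothetical univariable IDI: mean rise in predicted risk among cases plus
    # mean fall in predicted risk among controls when the candidate is added.
    idi_univariable <- function(dat, outcome, candidate) {
      null_fit <- glm(reformulate("1", outcome), data = dat, family = binomial)
      cand_fit <- glm(reformulate(candidate, outcome), data = dat, family = binomial)
      p0 <- predict(null_fit, type = "response")
      p1 <- predict(cand_fit, type = "response")
      y  <- dat[[outcome]]
      (mean(p1[y == 1]) - mean(p0[y == 1])) - (mean(p1[y == 0]) - mean(p0[y == 0]))
    }
    # A candidate would be retained if, for example, idi_univariable(d, "sea", "fever") > 0.02.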
We selected candidate predictive factors if they were immediately discernible upon clinical presentation and if their univariable IDI was >0.02, suggesting meaningful discrimination properties. To reduce bias in the prediction model, candidate variables had to have at least 20 events to be considered. The multivariable logistic regression model initially included all candidate variables and was then refined using a backwards selection process with a p-value for removal of 0.05. We used Youden’s J7 to identify the cut point that maximized sensitivity and specificity. Areas under the receiver operating characteristic curves of the full vs. restricted models were compared using previously described methods. We assessed model fit using the Hosmer-Lemeshow goodness-of-fit and Stukel tests. Because measures of model validation may be overly optimistic when derived from the same sample used for coefficient estimation, we generated bootstrapped validation measures. We used Stata 14.2 and R for the analyses.
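For the cut-point step, the R sketch below illustrates one way to locate the probability threshold that maximizes Youden's J (sensitivity + specificity − 1); it assumes a vector of predicted probabilities p from the final model and the observed 0/1 outcome y, and is offered only as an illustration, not as the authors' implementation.

    # Hypothetical Youden's J search over candidate probability cut points.
    youden_cut <- function(p, y) {
      cuts <- sort(unique(p))
      j <- sapply(cuts, function(cut) {
        sens <- mean(p[y == 1] >= cut)   # proportion of cases called positive
        spec <- mean(p[y == 0] <  cut)   # proportion of controls called negative
        sens + spec - 1
      })
      cuts[which.max(j)]
    }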

Body temperature and ECG were monitored throughout anaesthesia.

The duration of each injection was 0.2 s and the injection volume was 0.2 ml. Each injection was followed by a 60-s timeout period, during which the chamber was dark and lever presses had no programmed consequences. One-hour sessions were conducted five days per week. All monkeys had learned to respond under the FR10 schedule for their particular training drug before beginning this study. After completing the previous experiments, monkeys self-administered the training dose of each drug for at least five sessions, until responding was stable. Self-administration behavior was then extinguished by substituting vehicle for THC, anandamide, or cocaine while maintaining the presentation of the brief stimulus associated with each injection. We then tested reinstatement of extinguished drug-taking behavior by priming injections of THC or URB597 in all three groups of monkeys. Reinstatement effects of each pretreatment were studied for three consecutive sessions, starting after at least three days of stable vehicle extinction. Food was withheld for 12 h before this procedure. Anaesthesia was induced and maintained with isoflurane. Monkeys were weighed, prepared with a venous line, placed on a surgery table, and kept warm by heat lamps. After the animals were stabilized, URB597 or its vehicle was injected intravenously 1 h before euthanasia and the venous line was flushed with 0.5 ml of saline. Body temperature, SpO2, pulse, and respiratory rate were recorded every 10 min. After 1 h, monkeys were euthanized with Euthasol. Death was confirmed by the absence of respiration and of a heartbeat on the ECG. Brains were quickly removed, the cerebella were separated, and the forebrains were dissected. Each hemisphere was cut into three parts by two coronal sections made at approximately AP +12.5 and −5.
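Returning to the self-administration schedule described at the start of this passage, the toy R function below sketches the FR10 contingency (every tenth lever press delivers an injection, followed by a 60-s timeout in which presses are ignored); it is a simplified illustration under assumed names, not the operant-control software used in the experiments.

    # Toy model of the FR10 schedule; press_times are lever-press times in seconds.
    fr10_session <- function(press_times, session_length = 3600, timeout = 60) {
      presses <- 0; timeout_until <- -Inf; injections <- 0
      for (t in sort(press_times[press_times <= session_length])) {
        if (t < timeout_until) next          # presses during the timeout are ignored
        presses <- presses + 1
        if (presses == 10) {                 # fixed-ratio requirement met
          injections <- injections + 1       # one 0.2-s, 0.2-ml injection delivered
          presses <- 0
          timeout_until <- t + timeout       # start the 60-s timeout
        }
      }
      injections
    }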

Brain fragments were snap-frozen in isopentane, wrapped in aluminium foil, and placed on dry ice. Samples were stored in the freezer for 2 days and then shipped on dry ice to the University of California, Irvine, for analysis. In one set of experiments, rats were sacrificed by rapid decapitation under light anaesthesia 2 h after injection of URB597 or vehicle. In other experiments, rats were food deprived for 12 h and sacrificed by decapitation 1 h after injection of URB597 or vehicle, having been maintained under isoflurane anaesthesia for the duration of the experiment. In either case, brains were rapidly removed, and the hippocampus and prefrontal cortex were dissected from the fresh tissue over ice. Brain regions were frozen on dry ice and stored at −80°C until lipid and enzymatic analyses. Cumulative-response records were obtained during all sessions to assess within-session patterns of responding. Rates of responding during self-administration sessions are expressed as responses per second averaged over the one-hour session, with responding during timeouts excluded from the calculations. Injections per session represent the total number of injections delivered per session. Data for dose-effect curves are expressed as mean response rates and numbers of injections per session ± SEM over the last three sessions. In addition, total intake of anandamide, THC, or cocaine for each session was calculated. Reinstatement data and effects of pretreatment with URB597 on drug self-administration are expressed as mean ± SEM of the total number of injections per session over three sessions. Statistical analysis used single-factor repeated-measures ANOVA to assess differences between vehicle and test-drug pretreatment conditions or between different doses of anandamide, THC, cocaine, or URB597 and vehicle. Significant main effects were analyzed further by paired comparisons to control values using Dunnett’s test; the Bonferroni t-test was used when the number of observations did not allow Dunnett’s test. Differences between the effects of vehicle and URB597 pretreatment on lipid levels and FAAH activity were analyzed using single-factor ANOVA.
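As one concrete reading of these summary measures, the R sketch below computes a session's response rate and total drug intake and sets up the repeated-measures comparison; the helper names, data layout, and the handling of timeout time are assumptions, and the authors' exact calculations may differ.

    # Response rate: responses per second over the 1-h session, excluding the
    # 60-s timeout that follows each injection (one plausible convention).
    response_rate <- function(n_responses, n_injections,
                              session_s = 3600, timeout_s = 60) {
      n_responses / (session_s - n_injections * timeout_s)
    }

    # Total intake per session = injections delivered x unit dose (e.g. mg/kg per injection).
    session_intake <- function(n_injections, unit_dose) n_injections * unit_dose

    # Single-factor repeated-measures ANOVA (long-format data frame `dat` with
    # columns monkey, pretreatment, injections; names are hypothetical).
    summary(aov(injections ~ pretreatment + Error(monkey / pretreatment), data = dat))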

Differences were considered statistically significant when p < 0.05. We found that the selective FAAH inhibitor URB597 suppresses FAAH activity and increases anandamide levels in regions of the squirrel monkey brain that participate in motivational, cognitive, and emotional functions. This effect is accompanied by a marked decrease in the levels of 2-AG, a major endocannabinoid substance in the brain, even though URB597 does not affect the activities of 2-AG-metabolizing enzymes such as DGL and MGL. We further observed that URB597 does not display overt reinforcing properties in monkeys over a broad range of experimental conditions. Indeed, the drug did not reinforce self-administration behavior even when its cumulative intake exceeded, by severalfold, a fully effective dose for FAAH inhibition. Furthermore, neither previous cocaine nor previous THC exposure predisposed monkeys to self-administer URB597: even monkeys that had previously self-administered anandamide at very high rates failed to respond to the FAAH inhibitor. Lastly, URB597 did not affect the reinforcing effects of THC or cocaine and did not reinstate extinguished drug-seeking behavior in monkeys that had previously self-administered THC or cocaine. We interpret these results to indicate that URB597, by enhancing anandamide signaling, causes a compensatory down-regulation of 2-AG mobilization, and that the potentiation of anandamide-mediated transmission produced by URB597 is insufficient per se to produce reinforcing effects. Our findings further imply that FAAH inhibitors such as URB597 – which have demonstrated analgesic, anxiolytic, antidepressant and antihypertensive properties in rodents – may be used in humans without an anticipated risk of inducing abuse or of provoking relapse to drug use in abstinent individuals. The pharmacological profile of URB597 is strikingly different from that of THC and other direct-acting CB1 receptor agonists. Studies in rodents have shown that URB597 does not produce THC-like effects such as catalepsy, hypothermia, or hyperphagia. Further, URB597 does not mimic the discriminative-stimulus actions of THC. Nor does URB597 increase dopamine levels in the nucleus accumbens shell of rats, a defining neurochemical feature of reinforcing drugs. Finally, URB597 does not elicit conditioned place preferences indicative of rewarding properties in rats.

However, experiments in rodents, such as those outlined above, are insufficient to model human reward-based behaviors and to predict the addictive potential of drugs. Thus, the present results provide the first unequivocal demonstration that URB597 lacks THC-like reinforcing properties and suggest that this FAAH inhibitor might be used in therapy without an anticipated risk of abuse or of triggering relapse to drug use. Exogenous anandamide exerts potent reinforcing effects in monkeys. It may therefore seem surprising that the ability of URB597 to potentiate brain anandamide signaling does not translate into overt rewarding properties. There are, however, two plausible reasons why URB597 does not support self-administration responding. First, exogenous and endogenous anandamide might each access distinct sub-populations of CB1 receptors in the brain. In particular, systemic administration could allow anandamide to reach a receptor pool that is normally engaged by 2-AG. In this context, the observation that treatment with URB597 decreases 2-AG levels in the monkey brain suggests the existence of a compensatory mechanism aimed at reducing 2-AG signaling in the face of enhanced anandamide signaling. Such a mechanism might account, at least in part, for the inability of URB597 to serve as a reinforcer. Consistent with this idea, a recent report suggests that pharmacological or genetic disruption of FAAH activity causes a down-regulation of 2-AG production in acutely dissected rodent striatal slices, reportedly through vanilloid TRPV1 receptor activation. However, we were unable to replicate this observation in live animals, even when using doses of URB597 that completely suppressed FAAH activity and significantly increased anandamide levels. Another possibility is that the kinetics of CB1 receptor activation differ between anandamide and URB597 administration, as the former is likely to produce a more rapid recruitment of CB1 receptors than the latter. It is well established that the effectiveness of drug reinforcement in monkeys depends on rapid drug distribution throughout the brain. Irrespective of the mechanism involved, the impact of 2-AG down-regulation on the broader pharmacological properties of URB597 in primates remains to be determined. In conclusion, our findings with URB597 unmask a previously unsuspected functional heterogeneity within the brain's endocannabinoid signaling system and suggest that FAAH inhibitors such as URB597 might be used therapeutically without risk of abuse or of triggering relapse to drug use.

Feeding America is a network of food banks, food pantries, and meal programs providing food and services in the U.S. Second Harvest Heartland, a midwestern member of Feeding America, is one of the nation’s largest food banks and supports over 1,000 food shelves and other partner programs that distribute food to over 532,000 individuals in Minnesota and western Wisconsin annually. Our institution and Second Harvest Heartland have partnered to provide food assistance to our patients since 2010. As a result of this relationship, healthy food is available in our clinics in the form of bagged groceries and on site through an institutional food shelf. Patients can receive food through the clinics, at discharge from an inpatient stay, or delivered to their homes through a community paramedic program or visiting nurses. To determine eligibility for food service referrals at our institution, staff or providers screen patients for food insecurity with two questions.

This two-question screening tool is a validated, abbreviated version of the 18-item U.S. Household Food Security Scale (HFSS), which is used to monitor national food security. For the purposes of our referral program, we converted the screening questions from the validated response options (often, sometimes, or never) to dichotomous yes/no responses. The questions were “Within the past 12 months we worried whether our food would run out before we got money to buy more” and “Within the past 12 months the food we bought just didn’t last and we didn’t have money to get more.” In its validation cohort, an affirmative response to either question yielded a sensitivity of 97% and a specificity of 83% compared with the gold-standard, full 18-item HFSS. If a patient is deemed food insecure by this screening process, providers order a “referral for food” in the EMR to connect the patient with Second Harvest Heartland for support. The patient must consent to the referral and state what specific contact information they are comfortable sharing with the partner organization. The referral provides the patient’s contact information to Second Harvest Heartland through an automated fax. Food bank staff then assist the patient in enrolling in federal nutrition programs, locating neighborhood food shelves and meal programs, and arranging free produce distribution that they can access monthly. Though initially intended for clinic and inpatient use, the order became available for use in the ED in 2015. To advertise the referral program, focused information sessions were added to the emergency medicine resident educational conferences in late 2015 and to the new resident orientation starting in 2016. In addition, semi-annual updates are integrated into the resident conferences, and details of the referral patterns are distributed to faculty. All ED personnel were encouraged to use the referral order, including ED faculty physicians, residents, physician assistants, nursing staff, social workers, ED registration, and financial support staff. Institutional financial counselors were unforeseen allies of the program, as their workflow typically incorporated several questions touching on financial and food security issues. From January through December 2015, a total of 1,003 referrals were made to Second Harvest Heartland; only five came from the ED. From January through December 2016, there were 1,519 referrals hospital-wide, of which 55 came from the ED. Table 2 outlines the frequency of EMR order use across all clinical sites. Of the 1,519 referrals, 1,129 households were successfully contacted by Second Harvest Heartland, and 954 accepted and received assistance. Of the referred and successfully contacted households, 92% were connected with at least one new form of food assistance, including new information about geographically individualized food shelves, meal sites, and produce distribution. Of the households eligible for the Supplemental Nutrition Assistance Program, 76% completed applications to the federal entitlement program. This study sought to determine whether ED referrals for food resources for patients with food insecurity would increase after implementation of an EMR referral order together with provider education about the referral program. To our knowledge, this is the first reported experience with such an institutional EMR order for food resources in the ED.
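To make the referral trigger explicit, the short R sketch below encodes the dichotomized two-question screen described above, flagging a patient as food insecure when either item is answered “yes”; the function is hypothetical and does not represent the institution's EMR logic.

    # Hypothetical encoding of the two-question screen: "yes" to either item
    # flags food insecurity and would prompt the "referral for food" EMR order.
    food_insecure <- function(worried_food_would_run_out, food_did_not_last) {
      isTRUE(worried_food_would_run_out) || isTRUE(food_did_not_last)
    }

    food_insecure(worried_food_would_run_out = FALSE, food_did_not_last = TRUE)  # TRUE -> refer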