Anxiety disorders and substance use disorders are highly prevalent and frequently co-occur

A “pharmacy data only” model included utilization variables related to medications and opioid use data, similar to what might be available to a pharmacy benefit manager researcher. The CHAID procedure identified a number of significant interactions that met the criteria outlined in the Methods section. These interactions were added to the core variables for each model, but only if the variables involved in the interaction were also included in the model individually. The model selected as the best fit for the data comprised mental health variables; specifically, this model included the diagnostic status for ICD-9 mental disorders and the health service utilization variables that focused on mental health. This model provided the best fit for the data, as defined by AIC values and by overall parsimony. The log-likelihood ratio (LR) statistic for the selected model was 12,695. The remaining models had LR values ranging from 5,785 to 13,095; all of these ratios were also statistically significant. Likewise, the results for the Wald and Score tests were significant across all models. In comparison to the selected model, the only other model with a lower AIC value contained all of the variables presented in the bivariate comparisons above, as well as all significant interactions involving these variables, while providing only a modest decrease in AIC. The resulting model is presented in Table 5. This predictive model was 79.5% concordant with actual OUDs in the validation data set, meaning that almost four-fifths of the OUDs were correctly identified when the model was applied to a different sample of participants. As is noted in Table 5, the demographic variables significantly differentiate OUDs from non-OUDs, though the effect sizes for these variables are quite small.
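To make the model-comparison step concrete, the following is a minimal, hypothetical sketch (in Python, with invented column and file names rather than the study's actual variables) of how candidate logistic regression models can be ranked by AIC and how concordance with a validation sample can be computed:

```python
# Hypothetical sketch: compare candidate logistic models by AIC and check
# concordance (c-statistic) on a held-out validation set. Column names are
# illustrative, not the study's actual variables.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

train = pd.read_csv("train_claims.csv")   # assumed development sample
valid = pd.read_csv("valid_claims.csv")   # assumed validation sample

candidate_models = {
    "demographics_only": ["age", "male", "relationship_dependent"],
    "pharmacy_only": ["short_acting_meq", "long_acting_meq", "opioid_copay"],
    "mental_health": ["mh_diagnosis", "mh_outpatient_visits", "mh_inpatient_days"],
}

fits = {}
for name, cols in candidate_models.items():
    X = sm.add_constant(train[cols])
    fits[name] = sm.Logit(train["oud"], X).fit(disp=False)
    print(name, "AIC =", round(fits[name].aic, 1))

# Lower AIC indicates better fit after penalizing model size.
best_name = min(fits, key=lambda k: fits[k].aic)
best, cols = fits[best_name], candidate_models[best_name]

# Concordance: probability that a randomly chosen OUD case receives a higher
# predicted risk than a randomly chosen non-case (equivalent to the AUC).
pred = best.predict(sm.add_constant(valid[cols]))
print("validation concordance =", round(roc_auc_score(valid["oud"], pred), 3))
```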

Diagnostic data, particularly the variables for barbiturate abuse/dependence, unspecified drug abuse/dependence, and polysubstance drug dependence, had strong effect sizes in differentiating OUDs from non-OUDs. The amount of short-acting opioid dispensed, measured in morphine equivalent units, was a better predictor than the amount of long-acting opioid. It should be noted that the magnitude and directionality of the odds ratios in Table 5 differ from the bivariate comparisons in Table 3; when multiple variables are modeled simultaneously, bivariate relationships are subject to change. Finally, ten interactions remained in this model, primarily involving the aforementioned variables of short-acting opioids dispensed, unspecified drug dependence, polysubstance dependence, and barbiturate dependence. Participant age, inpatient mental health admissions, and mental health inpatient days were also present in the significant interaction terms.

The detection of opioid misuse is an important step in addressing the public health problems of prescription drug abuse, dependence, diversion, and overdose. Although previous studies have identified some of the factors that place individuals at greater risk for misuse of opioids, this investigation benefits from a comprehensive database that has illuminated more differences between those who develop opioid use disorders and those who receive an initial prescription but do not develop a diagnosis of opioid dependence or abuse. Additionally, this study may be useful in providing health plans with a method for monitoring claims data that may assist in detecting members who are at risk for substance misuse, potentially providing relevant feedback to medical providers. The current study replicates the findings of previous studies that being male and younger is associated with increased risk of becoming an OUD; an additional significant difference captured in this dataset is that OUDs were less likely to be the primary insured individual and more likely to be a dependent or spouse/partner of the primary insured. OUDs significantly differed from non-OUDs in a number of other areas as well. The prescription patterns for opioids were quite different between these groups, with OUDs receiving a larger supply of opioids, paying a significantly higher copayment for opioids, and receiving more short-acting opioids than non-OUDs.

The directionality of this relationship is unclear from this study; it is possible that particular prescribing patterns place individuals at greater risk for developing a problem with opioids, but it is also possible that OUDs are more likely to request short-acting medications, and a greater number of medications, from a health care provider. Health service utilization was also significantly greater among OUDs than among non-OUDs. This finding was present across inpatient and outpatient clinic, emergency department, general medical care, and mental health specialty care visits. As with the relationship between opioid prescribing and misuse, the directionality of this relationship is also unclear. OUDs are likely to be at risk for other health problems that may co-occur with their opioid misuse; depression, anxiety, infections, metabolic difficulties, and injuries are all possible correlates of opioid misuse. Conversely, individuals who have other health problems, such as chronic pain or mental health difficulties, may start to use opioids, and to misuse them, as a means of coping with those difficulties. The patterns of medication usage help to clarify, to some extent, the differences between OUDs and non-OUDs. OUDs are more likely than non-OUDs to be receiving treatment for anxiety, depression, chronic pain, and many other conditions. The mathematical modeling of opioid misuse, and the resultant predictors of misuse identified in the final model, underscore the relationship between mental health, other substance misuse, and opioid abuse/dependence. It is noteworthy that, of the different models tested to identify OUDs, diagnostic and mental health care variables rose to be among the most robust predictors. This finding has implications for future research and practice. In settings that serve individuals at high risk for opioid misuse, collecting data on co-occurring mental health conditions, mental health treatment history, and psychotropic medication usage is imperative in identifying those who may be at risk for developing an opioid use disorder. Those identified as at-risk may benefit from indicated prevention programs that educate individuals about signs of prescription drug misuse and the relationship between opioid use and mental health conditions. Treating co-occurring mental health difficulties is an important part of addressing the health of individuals who are prescribed opioids. Variables that significantly predicted OUDs must, in some cases, be interpreted within the context of significant interactions that were identified through CHAID analysis.

Due to the atheoretical nature of CHAID analysis, the significant interactions were not anticipated prior to the analytic process; however, several variables frequently appeared in the significant interaction terms. Implications of these interactions include, for example, the finding that the impact of receiving short-acting opioids depends on co-occurring substance use diagnoses when predicting OUDs. These interactions may be of clinical utility in identifying individuals, through data readily available to health plans, who are at risk for OUDs and may benefit from prevention efforts. The model developed in this study was designed for use in the entire population of patients in the database, regardless of where they live. Given the significant regional differences in the distribution of diagnosed OUDs, future studies should test the model at the regional level to determine whether location impacts model performance. This investigation has a number of limitations that prevent broader conclusions from being drawn about opioid abuse and dependence. The key limitations are the use of an existing data set and the reliance on a physician’s diagnosis of abuse and dependence. Many individuals may develop an opioid use disorder that does not come to the attention of their physician. Those who have a diagnosis of abuse or dependence may represent an unusual opioid-using population, in that they may have either talked with their physician directly about a potential problem or have such florid difficulties with misuse that it is evident to their health care provider or providers. The operationalization of co-occurring mental health and other substance use disorders as any lifetime diagnosis is also a limitation of this study, as important temporal relationships between opioid misuse and other mental health problems cannot be established. Given the possible bidirectional development of such difficulties, the research team did not specify a priori any time frame for co-occurring disorders, though such analysis could be an important line of future research in this area. The primary strengths of this study are the large sample size, the comprehensive number of variables regarding study participants, and the use of claims data, the likes of which may be generally available to health plans for use in their own risk stratification and intervention. Those interested in the prediction of opioid misuse may not have all of the significant variables present in their data sets, and thus may not be able to directly apply the particular mathematical model created here. To summarize, the detection of opioid misuse has important implications for public health; better identification of individuals at risk may help to reduce the morbidity and mortality that are often associated with opioid use disorders. The current study made use of a large, comprehensive data set that may aid researchers and clinicians in their attempts to address this important issue.

Anxiety disorders are associated with significant quality-of-life impairments, carry an economic burden of billions of dollars in the United States, and are one of the top ten leading causes of disability globally. When anxiety disorders co-occur with substance use disorders, such co-occurrence is associated with markedly worse outcomes, such as increased rates of drug-related problems, unemployment, and poorer treatment outcomes. The onset of both anxiety and substance use disorders often occurs during childhood and adolescence, and understanding the potential shared etiological mechanisms underlying both anxiety and substance use disorders may help improve assessment and treatment of these disorders and their co-occurrence. Gray’s reinforcement sensitivity theory (RST) offers explanations of risk for both disorders and possibly also for their co-occurrence. RST describes two major neurophysiological motivational systems that differ in their responsivity to reward and punishment. The behavioral inhibition system (BIS) is activated when punishment or non-reward conflicts with a goal or reward and results in worry, risk assessment, and increased attention to threat. In contrast, the behavioral approach/activation system (BAS) is activated in response to rewarding stimuli and results in impulsivity, goal pursuit, and increased attention to reward. The biologically based temperamental style of behavioral inhibition (BI) is similar to the BIS in that it is characterized by anxious, fearful, and vigilant reactions to novel stimuli. BI is identifiable as early as infancy and has demonstrated trait-like stability. Several studies have shown BI to be stable across early and middle childhood and possibly through late adolescence and early adulthood. However, the stability and predictive validity of BI vary across individuals. For example, BI is more stable for girls and for those who are initially highest in BI as compared to their peers. Further, many of the infants and toddlers who are initially high in behaviorally assessed BI no longer display such sensitivity to novel stimuli as they age. Degnan et al. showed that only 15% of toddlers displayed initially high levels of BI and continued to display such high levels at age 5. Identification of children with high and stable levels of BI is particularly important, as they are at increased risk for experiencing symptoms of anxiety and the development of anxiety disorders in adolescence. In fact, nearly half of children high in BI develop social anxiety disorder in adolescence. The BIS may be related to and influenced by BI. While the BIS is assessed by self-report, and at later stages of childhood through adulthood, as compared to the behavioral assessment of BI, scores on the BIS/BAS scale may yield improved reliability and generalizability in the assessment of BI. The relationship between BI and substance use is much more mixed than that of BI and anxiety. BI may be a protective factor against substance use, as the conflicting rewarding and punishing outcomes associated with substance use may activate the BIS and lead to worry and increased attention to the potential negative consequences of use. However, it is also possible that BI may increase the likelihood of substance use through coping motives for use. Individuals with alcohol use disorder, as well as individuals with co-occurring anxiety and alcohol dependence, evidence greater levels of sensitivity to uncertain threat, a trait associated with, and predicted by, high levels of BI.

No other aspects of pediatric cardiac arrest management changed during the study period

The guideline emphasized step-wise escalation in airway management from BVM to EGA to ETI, advancing only if the less invasive method was not effective.

HFD is a two-tiered 9-1-1 EMS system with Basic Life Support (BLS) and Advanced Life Support (ALS) units. HFD serves a geographic area totaling 2.3 million persons and 667 square miles in the greater Houston region. The agency receives 300,000 EMS calls annually. No other EMS agencies provide emergency 9-1-1 response within Houston city limits. HFD has 3,500 prehospital providers, all of whom are trained as firefighters and have at least BLS emergency medical technician training. HFD also has 700 paramedics providing ALS care. Dispatch of the initial unit is determined based on the 9-1-1 call type and severity. The local EMS protocol for management of respiratory failure in pediatric patients changed to include the use of an EGA for pediatric patients – the i-gel – in addition to algorithmic progression from one device to a more advanced device. Prior to the protocol change, no EGA device was available for pediatric airway management due to the size restrictions of the then-used King LT-D airway. Prior to the protocol change, pediatric patients with respiratory failure or cardiac arrest were managed first with BVM followed by intubation. Post-protocol change, both ALS and BLS providers were equipped with the i-gel EGA for both adult and pediatric patients. The King LT-D was not available post-protocol change. The airway management protocol directed members to use BVM first and then advance to an EGA for all patients requiring transport and continued assisted ventilation. If the EGA provided inadequate oxygenation or ventilation, it could be removed, with intubation attempted by a paramedic. The new protocol inclusive of EGAs was implemented in conjunction with an in-person lecture and skills training described in a prior publication.
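The stepwise logic of the revised protocol can be summarized in a short sketch; this is only an illustration of the escalation order described above (BVM first, then EGA, then paramedic ETI on EGA failure), not the agency's actual protocol language or clinical decision criteria:

```python
from typing import Optional

# Illustrative only: encodes the escalation order described above, not HFD's
# actual protocol text or clinical decision criteria.
def next_airway_step(bvm_adequate: bool,
                     needs_transport_ventilation: bool,
                     ega_result: Optional[bool],
                     paramedic_present: bool) -> str:
    """Return the next airway action under a BVM -> EGA -> ETI escalation."""
    if bvm_adequate and not needs_transport_ventilation:
        return "continue BVM"
    if ega_result is None:
        return "place EGA (i-gel)"            # advance when continued ventilation is needed
    if ega_result:
        return "continue EGA"
    # EGA failed to oxygenate/ventilate: remove it and attempt intubation
    return "attempt ETI" if paramedic_present else "continue BVM and await ALS"

print(next_airway_step(True, True, None, True))    # -> place EGA (i-gel)
print(next_airway_step(True, True, False, True))   # -> attempt ETI
```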

All study patients received ALS care. We retrospectively reviewed electronic patient data to establish the baseline characteristics, incidence of airway procedures, and outcomes for patients meeting this study’s inclusion criteria. Prospective patients were electronically identified on a weekly basis via the patient care record and cardiac arrest quality-assurance databases. Records were reviewed by trained abstractors who were aware of the study design and outcomes in question. Hospital and outcome data were abstracted from the EMS agency’s cardiac arrest database and hospital inpatient medical records. Our primary outcome was a difference in the frequency of prehospital attempted intubations between the pre- and post-intervention groups. We estimated a 20% reduction in intubation rate from implementation of the new protocol with a sample size of 266. For skewed continuous data we used non-parametric testing. Incomplete data or negative timed operational metrics were coded as missing. We analyzed categorical variables using the Pearson chi-square test or Fisher’s exact test. A p-value less than 0.05 was considered statistically significant. Categorical variables were reported using frequencies and percentages; continuous variables were reported using medians and interquartile ranges. We conducted all analyses using the Statistical Package for the Social Sciences, version 24.

In this observational study, we found that the establishment of an airway management algorithm paired with an EGA suitable for all ages of pediatric patients decreased the rate of ETI in an urban EMS system. No differences in survival to hospital admission or discharge were observed among all patients with cardiac arrest or respiratory failure. For cardiac arrest patients specifically, we observed no difference in rates of ROSC. These observations suggest that deployment of a pediatric EGA can successfully decrease the need for prehospital intubation. Although prior research suggests no improvement in neurologic outcome with ETI, the skill is taught as part of the EMT-Paramedic National Standard Curriculum and is still widely practiced in EMS agencies across the U.S.
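A minimal sketch of the analysis plan described above, using hypothetical column names and SciPy rather than SPSS, might look like this:

```python
import pandas as pd
from scipy import stats

# Assumed file and column names; the study's abstraction database differs.
df = pd.read_csv("airway_cases.csv")        # columns: period, eti_attempted, scene_minutes, ...
pre, post = df[df["period"] == "pre"], df[df["period"] == "post"]

# Skewed continuous variables: Mann-Whitney U, reported as median (IQR)
u_stat, p_u = stats.mannwhitneyu(pre["scene_minutes"].dropna(),
                                 post["scene_minutes"].dropna())
print("pre median (IQR):", pre["scene_minutes"].median(),
      pre["scene_minutes"].quantile([0.25, 0.75]).tolist(), "p =", round(p_u, 3))

# Primary outcome: frequency of attempted intubation pre vs post
table = pd.crosstab(df["period"], df["eti_attempted"])
chi2, p_cat, _, expected = stats.chi2_contingency(table)
if (expected < 5).any():                    # small expected counts: fall back to Fisher's exact
    _, p_cat = stats.fisher_exact(table)
print("ETI attempted, p =", round(p_cat, 3))  # p < 0.05 treated as significant
```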

As many EMS agencies progress toward widespread EGA deployment given evidence against significant benefits from intubation during initial cardiac arrest care, intubation skill retention remains largely unknown. For pediatric patients especially, implementing an EGA-first strategy decreases a paramedic’s exposure to the already rare intubation. Prior research has demonstrated a low number of clinical opportunities for paramedics to maintain procedural competency with intubation, let alone the exceedingly rare pediatric intubation. In our cohort, we observed a decline in the success rate for pediatric intubations when attempted after introducing an EGA. Implementing the EGA in this system, while continuing to allow ETI, resulted in a further dilution of procedural experience. The potential difficulty with maintaining paramedic intubation skills for pediatric and adult patients is well documented by prior studies, and may be augmented in systems such as this where ETI exists concurrently with EGA prioritization. The potential training solutions and their effectiveness have not been described. High-performance EMS agencies with intensive training, continuing education, and quality assurance report intubation success rates as great as 97%, but with low first-pass success. Systems with infrequent airway management training and skill maintenance, when coupled with the addition and widespread use of EGAs, may experience declines in success such as those observed in our system. However, of the intubations that occurred post-protocol change, 96.4% were due to protocol non-adherence. Despite our reported 95% success rate with EGA placement, which is consistent with previous publications, many patients during the study period still underwent ETI attempts. Of the 36.4% in whom ETI was attempted prior to any EGA attempt, 85% were successful. Similarly, only 54.4% of intubations were successful when attempted after an already successful EGA. Although prior commentary has suggested that EGAs, specifically the i-gel, perform well in the prehospital environment, success rates may be lower than previously demonstrated in hospital-based studies.

In non-paralyzed adults, for example, ventilation with the adult size 4 i-gel may exceed the 24 millimeters of mercury laryngeal seal, causing significant air leak. For children, the degree of leak if the device is sized incorrectly is unknown. For our cohort, the rationales behind the protocol deviations were not consistently documented. It is possible that many of the ETIs after EGA placement were in fact warranted but appeared as protocol violations due to inadequate documentation of EGA failure. Providers’ perception of inadequate ventilation or incorrect device sizing may have contributed to the intubation attempts occurring after initial EGA placement. Our study was not powered to detect a prehospital ROSC or survival benefit in cardiac arrest patients. In this small cohort we did not observe any measurable effects on cardiac arrest care, although metrics such as compression fraction, CPR rate, and exact timing of EGA or ETI were not available. Also, given our small sample size and low frequency of shockable rhythms in the pediatric population, further research is required to address the initial airway management device by rhythm and likelihood of a primary respiratory arrest.

Opioid use disorder is associated with excess mortality, morbidities, and other adverse health and social conditions. OUD is common among individuals with chronic pain conditions, and chronic pain is common among individuals with OUD. The relationship between chronic pain and OUD and the time course of the two is complex; chronic pain may precede prescription opioid use and addiction or may develop after OUD, either as an expected health condition or as a consequence of OUD. Substance use is a risk factor for car accidents and violent crime, which often lead to injuries that are associated with chronic pain. On the other hand, individuals with chronic pain often take opioids to relieve pain, and opioid therapy is the most commonly prescribed treatment for severe chronic pain, even though long-term opioid therapy remains controversial for chronic non-cancer pain due to its questionable efficacy and association with opioid misuse and use disorders in some individuals. Complicating the issue is that other physical health and mental health problems often co-occur with OUD and chronic pain. For example, OUD patients are up to 11 times more likely than the general population to have a mood disorder, and they have up to 8 times greater rates of anxiety disorders. Individuals with depression, schizophrenia, and bipolar disorder are significantly more likely to have chronic pain relative to those without these psychiatric conditions. The overlapping risk factors and causes of these diseases and chronic conditions are difficult to disentangle. Optimizing treatment for patients with these complex medical conditions has become a nationally recognized target for healthcare improvement. There is limited knowledge about chronic pain conditions, other co-morbid health and mental health conditions, and healthcare utilization among OUD patients treated in general medical or healthcare systems.

Most knowledge of OUD is based on self-reports from individuals treated in publicly funded addiction specialty programs. The availability of electronic health records provides the opportunity to efficiently examine the health status and service use among large and diverse samples in general healthcare systems. This is particularly important because the non-medical use of prescription opioids is now recognized to be a national epidemic in the United States, making prescription opioid misuse and overdose deaths a critical public health problem. EHR systems represent a new but underutilized research resource that allows OUD and related physical and mental health conditions to be studied in general healthcare systems. The goal of this study was to examine chronic pain among patients with OUD, as well as to examine other substance use disorders, health, mental health, and treatment for health and mental health among patients in medical settings using EHRs. We divided our sample into four clinically relevant groups related to the intersection and time course of OUD and chronic pain diagnoses: those with no chronic pain (No Pain), those with chronic pain after OUD (OUD First), those having both at the same time or clinic visit (Same Time), and those with chronic pain before OUD (Pain First). By comparing the four groups, we can examine the association between presence of chronic pain conditions and other psychiatric and medical conditions, as well as the prevalence of these conditions in association with the order of first pain diagnosis versus first OUD diagnosis. Based on previous research, we hypothesized that chronic pain conditions in OUD patients would be associated with greater prevalence of mental health disorders and physical health conditions than that among OUD patients without chronic pain diagnoses; the OUD-only group, however, would have greater prevalence of other substance use disorders than OUD patients with chronic pain conditions. Further, we hypothesized that people in the OUD First group would have higher rates of other substance use disorders, mental health problems, and physical health problems than individuals in the Pain First group, but similar prevalence rates of other substance use disorders as those in the Same Time group. Findings are important to extend knowledge of relative differences in psychiatric and medical comorbidities among patients with OUD and chronic pain in order to inform clinical practice related to screening patients with these commonly related conditions and targeting treatment interventions for them.

In terms of other substance use disorders, the four groups differed significantly in all substances examined except for sedative/hypnotic/anxiolytic use disorder. The OUD First group had the highest prevalence rates of alcohol, cocaine, and other drug use. This group also had the highest rates of alcohol- or drug-induced disorders. The No Pain group had the highest rate of amphetamine use and the lowest rate of tobacco use. The highest rate of sedative/hypnotic/anxiolytic use but lowest rates of alcohol, cannabis, amphetamine, cocaine, and hallucinogen use disorders were found in the Pain First group and the Same Time group. Approximately 70% of the sample had a co-morbid mental health disorder, and the four groups also significantly differed in prevalence of any mental disorder or each specific type of mental disorder with only one exception, psychotic disorder. In general, the three groups with chronic pain conditions had higher rates of mental disorders in comparison to the No Pain group.
Even among the No Pain group, more than 50% had co-morbid psychiatric diagnoses. Approximately half of each of the groups with chronic pain suffered from a depressive disorder, 40% had an anxiety disorder, and 40% had mental disorders other than psychosis, depression, anxiety, or bipolar disorders, with the highest rates consistently among the Pain First group.

This study adds to a rapidly growing knowledge base concerning the intersection of chronic pain and opioid use disorder occurring in a large healthcare organization.
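For readers interested in how the four-group classification described above might be reproduced from EHR data, here is a hedged sketch; the field names and the tie-breaking rule for "Same Time" are assumptions, not the study's actual algorithm:

```python
# Hedged sketch of deriving the four study groups from first-diagnosis dates.
import pandas as pd

def classify_pain_oud(first_oud_date, first_pain_date):
    """Assign No Pain / OUD First / Same Time / Pain First from first-diagnosis dates."""
    if pd.isna(first_pain_date):
        return "No Pain"
    if first_pain_date > first_oud_date:
        return "OUD First"          # chronic pain documented after OUD
    if first_pain_date == first_oud_date:
        return "Same Time"          # both recorded at the same visit
    return "Pain First"             # chronic pain documented before OUD

patients = pd.DataFrame({
    "first_oud_date": pd.to_datetime(["2013-02-01", "2013-02-01", "2014-06-10"]),
    "first_pain_date": pd.to_datetime(["2012-11-15", None, "2014-06-10"]),
})
patients["group"] = patients.apply(
    lambda r: classify_pain_oud(r.first_oud_date, r.first_pain_date), axis=1)
print(patients["group"].value_counts())
```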

Numerous risk factors have been identified in the literature

Over 70% of elderly patients are not questioned on their ability to care for themselves prior to discharge; 20% disclose that they do not understand their discharge instructions. This subset of the older adult population may have difficulty comprehending and following their discharge instructions. This may lead some patients to return when their initial complaints do not improve, due to uncertainty and lack of comprehension regarding their discharge diagnoses, treatment, and follow-up plans. Several studies indicate that poor cognitive health is also an important driver of ED returns. Older-adult patients with cognitive and memory impairment were at an increased risk for 30-day returns, and several studies demonstrated it to be an independent predictor for these returns. However, Ostir et al. found that poor cognitive health and odds of 30-day revisits did not have a significant association. Ostir et al. did find, though, that higher cognitive health scores were linked to lower risk for unplanned ED revisits at 60 and 90 days post-index visit. The authors found that every one-point increase in cognitive score was associated with 24% and 21% decreased odds of 60-day and 90-day revisits to the ED, respectively. The lack of significant association between poor cognitive health and increased 30-day returns by Ostir et al. may be explained by several differences in the study population, which was mostly female, African American, and cognitively impaired. The average cognitive score of these patients was 4.5 points below standardized norms for persons 65 years and older, whereas 76.8% of the study population in the McCusker et al. study had no impairment or only mild cognitive impairment. Only 18.7% of patients in the de Gelder et al. study were found to have cognitive impairment. Since nearly all patients in the Ostir et al. study had cognitive impairment, their findings may be due to the lack of an adequate comparison group.
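As a quick arithmetic check of the effect sizes reported above, a per-point reduction in odds maps back to an odds ratio as follows (the odds ratios here are implied by the reported percentages, not taken directly from Ostir et al.):

```python
# (1 - OR) * 100 gives the percent reduction in odds per one-point increase in
# cognitive score; implied ORs of roughly 0.76 and 0.79 reproduce the figures above.
for days, implied_or in [(60, 0.76), (90, 0.79)]:
    pct_lower = round((1 - implied_or) * 100)
    print(f"{days}-day revisit: OR {implied_or} -> {pct_lower}% lower odds per point")
# -> 24% and 21%, matching the decreases described above.
```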

There are several possible explanations why patients with poor cognitive health may be at increased risk for recidivism, including suffering from more complex comorbidities necessitating more frequent healthcare, decreased comprehension of ED discharge diagnoses and instructions, and decreased accuracy in reporting of the presenting illness. Patients with delirium superimposed on dementia were found to have lower concordance with their surrogates regarding the reason for ED presentation reported to ED staff. This discordance between presenting complaints may lead to insufficient evaluation, missed diagnosis, and/or inappropriate discharge, particularly when the surrogate is not available during the ED evaluation. In addition to cognitive health, poor physical function and poor general health also increase the odds of returning within 30 days and may be independent predictors of ED recidivism. As physical functioning is a well-established predictor of outcomes among elderly patients, these findings likely reflect the characteristics of a sicker aging population. Several studies have shown that patients, despite access to care, prefer to seek care in the ED compared to the outpatient setting. Reasons include the following: accessibility/convenience; perceived urgency of complaints; inability to wait for scheduled primary care follow-up due to worsening or persistence of symptoms; expedited diagnostic testing; perceived availability of specialists; lack of transportation to the primary care office; and wanting a second opinion, among other reasons. In a study of the general ED population, uninsured patients were not found to use the ED more than insured patients, but they used other types of care less. Interestingly, both the insured and uninsured visit the ED at similarly high rates for non-emergent complaints or complaints that can be treated in non-ED settings.

As discussed previously, patient fear or uncertainty likely plays an important role in understanding why patients come to the ED. This sense of uncertainty regarding the cause of their symptoms is best illustrated by Castillo et al.’s findings of a rather high rate of older adults returning to the ED for the same primary diagnosis, with many seeking care at a different facility, perhaps in hopes of finding a different conclusion from their index ED visit. In a qualitative study of 40 adult patients with chronic cardiovascular disease or diabetes, patient-reported driving factors for ED returns included feeling a sense of fear or uncertainty with negative test results and expecting a diagnosis for their symptoms. Many patients who did not receive a clear diagnosis for their symptoms reported needing to return until a diagnosis was found. In two studies of older adults, patients were less likely to consider that their complaint had been completely resolved and believed they would be less independent after discharge from the ED. A survey of 15 older adults also linked patient perception of ED care with ED recidivism, including believing that the ED was their “only option” and that their symptoms required specialized care only provided in the ED. Several patients also reported that they believed their primary care physician would have advised them to seek care in the ED for their symptoms. Others reported receiving ineffective treatments or instructions at the time of ED discharge. In some cases, this perception may stem from inadequate patient counseling regarding expectations and the reasonable goals of care that can be achieved during the ED visit. The older adult population is a key and significant contributor to ED recidivism and is responsible for a disproportionate amount of healthcare costs. For this reason, older adults have received much attention and study to create interventions aimed at reducing ED recidivism.

The unique characteristics of this patient group should be considered when developing strategies to minimize ED returns. The generation of a profile for elderly patients at increased risk for ED returns could identify potential targets for individualized education, counseling, and other interventions to reduce ED over-utilization. Many of the studies discussed in this review were performed outside the U.S. and thus may not be fully generalizable to older adults residing in the U.S. due to different social and cultural influences and healthcare systems. However, when data were available for comparison, studies performed in the U.S. identified many of the same risk factors for return visits in older adults as the non-U.S. studies. These similarities suggest that the underlying reasons for ED utilization by older adults may be influenced more by themes related to aging than by the cultures or healthcare models of individual countries. However, it is important to note that these studies were all performed in highly developed countries with stable economies and well-established healthcare systems. Therefore, whether the identified risk factors would remain true in developing countries with fewer healthcare resources is unknown and deserves further study. Further study is needed to understand how each of these areas influences return visits, how they influence each other, and to resolve discrepancies in previously reported findings.

Academic medicine faces a challenge in how to balance the objectives of revenue production with compensation of scholarly achievement. Historically, “relative value units” have been used to incentivize physicians to improve clinical productivity, but these systems have neglected to recognize non-clinical achievements, such as those related to teaching, academic leadership roles, or other scholarly activity. Many non-clinical activities do not earn a reduction in clinical hours or a financial incentive, which may result in decreased motivation to contribute academically as well as frustration and burnout. As faculty members work to advance in their professional careers, diminished scholarly output may create a barrier to promotion at traditional academic institutions. All of this may result in less time devoted to teaching and diminished opportunities for mentorship and role modeling for learners. To foster academic productivity and the retention of talented physicians, academic medicine must recognize and reward the effort that is necessary to thrive within it. Models have been introduced over the past decade that focus on incentivizing non-clinical activities. Some of these models have focused solely on education and teaching commitments, using a teaching or educational value unit system to weigh activities. Others have cast a broader net encompassing all academic activities, including education, teaching, committee and administrative roles, and research, using a clinical or academic relative value unit model. Problems were identified in our department with regard to education and scholarly activity. The residency group and a small group of core faculty have traditionally carried much of the teaching effort, resulting in an unequal distribution of educational commitments across the department. In addition to education, many in the department participate in other scholarly work such as research projects earning grant funding, peer-reviewed publications, lecturing engagements, and leadership or committee positions.
Similar to other academic institutions, our department has experienced difficulty tracking faculty activities outside of clinical work.

Faculty frustration has resulted from many of these activities not being compensated financially or rewarded with reduced clinical hours. Furthermore, junior faculty lacked an understanding of the importance of tracking academic activities as a way to monitor their progress and to focus on areas that required more attention in preparation for the promotion process. We brainstormed ideas regarding how to expand faculty commitment to better align with our academic mission, to prepare faculty for promotion, and to create an improved infrastructure fostering resident and student mentoring. Our project had several objectives: 1) realign and redistribute the responsibility for meeting education needs equitably across the department; 2) create a system of accountability and transparency based on faculty consensus; 3) recognize and reward academic activities that go above minimum expectations; 4) align faculty academic productivity with institutional promotion procedures; 5) build a system that houses academic activities in a format consistent with institutional teaching portfolio expectations; 6) incentivize and increase departmental scholarly output; and 7) build a system capable of supporting an academic mentoring infrastructure for our learners. In 2017 we initiated a two-stage project to redesign education expectations and to identify and recognize the full spectrum of academic activities among all faculty. Stage one involved the creation of a mandatory baseline educational participation process; stage two, implemented later, involved the creation of an ARVU points system with identified voluntary academic participation. Both stages of the project were tied to an academic financial incentive awarded at year-end. Our goal was to determine the effects of this project on faculty baseline participation in educational activities as well as monitor academic productivity and advancement within the department.

Stage one, initiated in July 2017, created minimum education expectations and accountability procedures, incorporating two related requirements. The first included attending a minimum number of resident conferences per year, inversely proportional to a faculty member’s clinical load. The second element required participation in a module system, created by the residency, where each month represented one module and focused on a particular topic. Each faculty member was required to sign up for and commit to specific dates during a module where they were responsible for taking part in teaching activities assigned by the residency or undergraduate medical education group. These activities included such things as giving a lecture, moderating a journal club, running a small group session, or teaching a procedural skills lab, among others. The sign-up process afforded some flexibility and choice, as faculty could pick dates that worked for them and topics they were most interested in. Conference attendance required only the presence of faculty in the audience, but module participation required the active participation of faculty in specified activities. Conference attendance and module participation were chosen as minimum expectations for two reasons: firstly, all faculty historically have been expected to participate in residency and student teaching as part of their academic appointment to the medical school; and secondly, these activities were considered to require the heaviest lift and were inequitably distributed among the faculty.
These new expectations were required of faculty across the department and were tied to a newly created academic incentive awarded at fiscal year-end. The faculty who did not meet these new education expectations were not eligible for this financial incentive. After soliciting feedback on these new expectations through faculty meeting discussions and offline conversations, most agreed that the new expectations were not overly burdensome. However, two main concerns surfaced. One was that the academic incentive was not reflective of other non-clinical activities valuable to the department’s mission. A second concern brought forth by the residency leadership was that the expectations did not include resident assessments, which historically had a low response rate. Based on this feedback, the baseline education expectations were revised to include completion of a percentage of resident post-shift assessments over the academic year, inversely proportional to a faculty member’s clinical load.

Medical educators aim to identify the best methods to prepare students for clinical practice

We compared the homogeneity of the CG and of the TG at pre-test with the χ2 test and Fisher’s exact test for qualitative variables and with the Mann-Whitney U test for quantitative parameters. A generalized linear mixed model (GLMM) measured changes before and after the BBNSBT in self-efficacy, the SPIKES competence form, and the mBAS. We adjusted the effects of time, group, and group-by-time by the study year as a confounding factor. GLMMs were performed with a covariance matrix of the compound symmetry type. We performed McNemar’s test to compare the proportion of students who passed the SPIKES competence form and the mBAS cut-offs between pre-test and post-test within the groups. Furthermore, two further analyses were considered. First, we calculated the relative gains between pre- and post-test within the two groups by means of the following formula: [(post-test − pre-test) / pre-test]. A Mann-Whitney U test was used to compare relative gains. Second, we tested whether the BBNSBT could help fill the performance gap between participants with limited clinical experience in the TG and participants with clinical experience in the CG by means of a Mann-Whitney U test. Results were considered statistically significant at the 5% critical level.

Traditional training is the common pedagogical method for learning clinical skills. Trainees rarely learn BBN in real clinical practice due to the paucity of opportunities and the fact that clinical preceptors are rarely available to give feedback. At pre-test, our study shows a low level of participant experience and a lack of BBN skills, especially in the TG.
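As an illustration, the relative-gain calculation and its between-group comparison could be implemented along these lines (column names are assumptions; the published analysis was not necessarily run this way):

```python
import pandas as pd
from scipy import stats

# Assumed layout: one row per participant with group ("TG"/"CG"), pre_test, post_test
scores = pd.read_csv("spikes_scores.csv")
scores["relative_gain"] = (scores["post_test"] - scores["pre_test"]) / scores["pre_test"]

tg = scores.loc[scores["group"] == "TG", "relative_gain"]
cg = scores.loc[scores["group"] == "CG", "relative_gain"]
u_stat, p_value = stats.mannwhitneyu(tg, cg, alternative="two-sided")
print(f"median relative gain: TG {tg.median():.2f}, CG {cg.median():.2f}, p = {p_value:.3f}")
```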

Chiniara et al. define the “simulation zone” as areas in which simulation education may be better suited than other methods. BBN is an example of the HALO quadrant: high impact on the patient and low opportunity to practice. This feasibility study assessed the impact of a four-hour ED BBNSBT compared to clinical internship. It was hypothesized that BBNSBT would have the potential to increase participant self-efficacy in BBN communication and management and adherence to BBN stages and processes, and to improve communication skills during BBN. Our results revealed that this training increased self-efficacy perception. Participants had a low level of self-efficacy at pre-test. After the BBNSBT, the TG reported being more confident about their knowledge and application of BBN and about their ability to perform BBN compared to the CG. This confirms the results of another, smaller study, which showed an improvement in confidence and self-efficacy. These findings may be explained by Bandura’s social cognitive theory, which suggests four ways to enhance self-efficacy that we identify in the BBNSBT: 1) enactive attainment; 2) vicarious experience; 3) verbal persuasion; and 4) psychological safety during the simulations. Moreover, the perceived self-efficacy of students in the CG with more clinical experience decreased. This result could have different potential explanations, notably that the pre-test may have led to introspection and reflection about their BBN and communication skills. Communication with patients and their families is one of the Accreditation Council for Graduate Medical Education Milestones for EM residents, specifically the fourth level of BBN. Our research used two validated assessment tools that allow for standardization of the evaluation and training. The results demonstrate that BBNSBT using role-playing and debriefing enhances participant BBN learning and performance compared with the traditional learning paradigm and direct immersion in acute clinical situations.

BBNSBT offers the opportunity to teach BBN and communication skills to students and young residents in a psychologically safe environment, preventing harm to patients and family members. It allows each participant to announce bad news and observe several BBN simulations with debriefings. By contrast, in the traditional curriculum, role modeling at the bedside could have a negative impact on patients and relatives when medical students or residents engage in inappropriate communication behaviors, such as not keeping patients or family members adequately informed or using medical words they do not understand. More students in the TG reached the cut-off scores: 73% for SPIKES and 62.2% for the mBAS, versus 45.2% and 35.5% in the CG. These results demonstrate the relevance of BBNSBT in communicating bad news in the ED. However, the difference between the groups for the mBAS cut-off score is not significant. BBNSBT probably focuses more on SPIKES than on communication behavior. It may be necessary to create an advanced course centered on communication skills rather than on SPIKES. Despite this, BBNSBT offers experiential learning for participants. From the simulation experience, the debriefing process leads students to explore their frames, incorporate new frames such as SPIKES skills, and re-practice these new skills. This process allows knowledge to be acquired through experience. Moreover, participants had access to ED BBN experts for four hours, which, unfortunately, is unlikely to happen in real clinical practice. Additional data analyses allowed us to address a new question: Is BBNSBT more useful for students with less than one year of clinical experience? We found a statistically significant difference in the pre-test. Students with limited clinical experience reached the same level of BBN skills as students with more clinical experience after the BBNSBT.

The gap between these groups could be filled by simulation training, without the pitfalls of stress and discomfort of direct clinical exposure. No study has previously focused on this question. In fact, BBNSBT used a step-by-step process involving novice participants to bring them to a higher level. The first step involved theoretical explanations given via video, discussions, and lectures. Each simulation, and especially each debriefing, further enhanced the participants’ skills. One strength of the study is that we paid special attention to the theoretical background upon which the training and evaluation were based, using the widespread SPIKES theoretical model and the INACSL Standards of Best Practice for Simulation. Moreover, the simulations were well designed, the debriefings were standardized, and the facilitators were trained and experienced. We believe that it is mandatory to meet the INACSL Standards of Best Practice, as well as to work with simulation experts, to obtain positive results with simulation training. The next steps for research and pedagogical method improvement can be identified based on these results. Further research is needed to investigate the role of an advanced course in BBN. As BBN is not a required skill for EPs, it would be interesting to investigate whether BBNSBT is feasible and effective in other areas such as obstetrics, intensive care units, etc. Finally, we think that e-learning preparation before BBNSBT, as described for a training on managing low urine output, could replace some of the in-person time.

Myanmar, formerly Burma, and now administratively designated the Republic of the Union of Myanmar, is a sovereign state in Southeast Asia. Myanmar has a diverse population of 53 million, comprising 135 different ethnic groups, according to the United Nations Population Division. Recently, the military regime that long hampered the country’s development was replaced by a civilian government. Socioeconomic development in Myanmar lags far behind nearby countries, as does its healthcare system. There are shortcomings in maternal care, pediatric healthcare, and infectious disease treatment, as well as in medical accessibility and quality. Strengthening medical systems by improving the standard of emergency care has been known to reduce the mortality and morbidity from both communicable and non-communicable diseases. A large proportion of the global mortality and morbidity from various diseases is found in low- and middle-income countries (LMICs). Unfortunately, the emergency care systems required to address these shortcomings are not well established in most LMICs, including Myanmar. Formal emergency care in Myanmar is only available in hospitals located in urban areas. Rural hospitals can provide only limited emergency care to patients. While preparing for an international sporting event, the Myanmar government started to formalize efforts to develop a formal emergency medicine training program.

Apart from the formal EM training program in the capital city, Nay Pyi Taw, frontline healthcare facilities across the country are not capable of providing life-saving emergency care. In most rural hospitals, the outpatient department usually covers emergencies; there is no separate area or facility for emergency treatment. Rural hospitals offer access to few medical specialties with minimal, if any, laboratory services. Public prehospital ambulance transportation service is virtually unavailable in rural areas. Several tools have been used to evaluate emergency care capability. Most focused primarily on the availability of hardware or infrastructure rather than functional aspects of emergency care. Some researchers have tried to measure performance of EM practice in resource-limited settings, which has resulted in a demand for a comprehensive EM assessment tool for LMICs. Recently, a novel approach based on work in the field of obstetrics, called sentinel condition and signal function, was adapted for EM by the African Federation for Emergency Medicine (AFEM). Based on this concept, the AFEM developed a standard preliminary tool called the Emergency Care Assessment Tool (ECAT), which has been suggested to be more useful than previous evaluation tools in assessing EM systems. Our study incorporated the concept of ECAT as a tool to analyze Myanmar’s emergency care systems. We investigated the capability to deliver emergency care in different levels of hospitals located in several regions of Myanmar.

This facility-based survey was conducted between February 7, 2018 and April 3, 2018. With the help of two Myanmar doctors and three nurses who were invited to Korea for training, survey sheets were distributed to the doctors in charge of emergency medical care at nine hospitals. Our primary criterion for selecting hospitals was access to e-mail and online messaging, at the time of the survey, to allow for our interactions with them. The nine hospitals, including five at which our initial contacts were employed, were scattered across five states in Myanmar and were believed to partially represent both urban and rural regions. The nine hospitals were grouped into three levels according to the bed capacity of the hospital and the number of physicians. Survey sheets were prepared in English using ECAT and delivered to responsible officers by e-mail. ECAT encompasses six sentinel conditions that threaten life, and the related signal functions that alleviate them. The researchers explained the meaning of each question in the survey to the original five Myanmar contacts, and they, in turn, conveyed this information to the Myanmar doctors who took part in the study. In the case of any questions that were initially omitted on the completed surveys, clarification was provided, and the questions were then revisited and answered by the respondents. The survey included questions about the general status of each hospital, such as the number of staff members, the number of hospital beds, and the annual patient load. The remaining questions addressed the performance of emergency signal functions, the products for signal functions, and the availability of emergency facility infrastructures. We coded data using standard descriptive analyses with Microsoft Excel 2015. Qualitative research methods involved thematic analysis of answers. In performing signal functions for each of the sentinel conditions, basic-level hospitals were revealed to be weak in trauma care.
Among the 12 signal functions related to trauma care that are deemed essential in basic-level hospitals, more than two functions were unavailable at all four hospitals. One hospital (Matupi Hospital) could not provide half of the trauma-related essential signal functions: trauma protocol implementation, pelvic wrapping, cervical spine immobilization, basic fracture immobilization, immediate cooling care for burns, and fracture reduction. None of the four basic-level hospitals had the resources to treat burn patients or provide pelvic wrapping. The survey questions regarding infrastructure revealed that none had a specialized resuscitation area for critical patients, and three of the hospitals did not have a triage area. There was neither a trauma protocol nor a cervical immobilization device at any of the hospitals. Most signal functions for the other five sentinel conditions were generally available in these basic-level hospitals, with the exception of treatment for common toxidromes, which only half could provide. Two of the four intermediate-level hospitals indicated that they could provide all emergency signal functions. The other two hospitals, however, were found to provide a limited set of signal functions. They did not have a trauma protocol, nor could they provide reduction for patients with bone fractures. Cervical immobilization, pelvic wrapping, burn care, and treatment of compartment syndrome were also unavailable. Moreover, one hospital could not perform defibrillation or mechanical ventilation support, nor administer intramuscular adrenaline, which is important for cardiopulmonary resuscitation.
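The kind of tabulation underlying these counts could be produced from the survey responses roughly as follows (hypothetical field names; the actual coding was done with descriptive analyses in Excel):

```python
import pandas as pd

# Assumed long format: one row per hospital x signal function, availability coded 0/1
# columns: hospital, level, sentinel_condition, signal_function, available
ecat = pd.read_csv("ecat_survey.csv")
trauma = ecat[ecat["sentinel_condition"] == "trauma"]

summary = (trauma.groupby(["level", "hospital"])["available"]
                 .agg(provided="sum", assessed="count"))
summary["unavailable"] = summary["assessed"] - summary["provided"]
print(summary.sort_values("unavailable", ascending=False))
```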

They were more likely to have a return visit at all times as compared to non-frequent visitors

We did not collect post-discharge outcomes, such as subsequent emergency visits, hospitalizations, or post-discharge death. General MSI epidemiological findings in our 691 patients are outlined in the supplements. A total of 17 patients were excluded for incomplete documentation. Of the remaining records, 279 occurred before the start of the EMTP on November 1, 2013, while 395 occurred on or after the start of the program. Thus, patients were divided into pre-EMTP and post-EMTP groups, resulting in 674 available patient records. Patient demographics demonstrate that a majority of MSI cases were male and younger than 35 years of age. Major mechanisms of trauma included RTAs, falls, and assault. Of those involved in RTAs, a substantial proportion involved motorcycles, while over one-quarter of accidents involved a pedestrian being struck. The majority of patients were transported from another health facility, while other patients were transported from the street or from home. In the population of patients seeking emergency care for MSI, this study found significant improvements in mortality and complication rates, length of stay, and an array of secondary outcomes in association with the implementation of the EMTP. The training curriculum taught by EM faculty is thought to have played a key role in the improvement of these outcomes. This curriculum included specific longitudinal educational trainings on the diagnosis and treatment of MSI provided through lectures and workshops that all residents completed. These findings help to demonstrate the potential importance of investing in the training of formal EM specialists to address the large burden of morbidity and mortality associated with MSI in LMICs.

It has been previously proposed that relatively simple interventions in areas such as emergency triage, communication, and education and supervision could lead to reductions in LMIC mortality in the ED, where up to 10-15% of all deaths occur. The study demonstrates a temporal association between MSI outcomes in the ED and the inception of an EMTP, underlining the importance of developing such programs. While many LMIC governments do not list EM in their medical education priorities, they could consider doing so to tackle the treatment of such a high volume of patients with acute health problems. The epidemiological results provide the first available data on MSI from a Rwandan hospital. Understanding the patient population, anatomical distribution of fractures, and mechanisms of injury could allow for more practical incorporation into the EMTP’s future MSI curriculum. This understanding may also aid in proper diagnosis and treatment of the growing burden of MSI cases, a critical step for improving patient outcomes. Moreover, these epidemiological results, to an extent, confirm those of another research team that studied traumatic injuries in Rwanda’s pre-hospital service, an epidemiological profile that showed nearly one-fourth of injured patients suffered from a fracture. Most importantly, the epidemiological patterns and EMTP results suggest the need for reducing MSI morbidity and mortality through expanding emergency care training programs. Although this evidence suggests an association with improved outcomes among patients with MSI with Rwanda’s first EM residency program, further prospective evaluation of cases with MSI is needed to demonstrate the reliability of these improvements over time. Moreover, similar epidemiological and training evaluation studies are needed in other African countries to effectively understand and scale MSI treatment. Although we used formalized protocols, the design resulted in an inability to identify a proportion of cases due to incomplete medical records and some missing data among included cases, which could have biased the results.

Overall, it appears that data on some interventions were prioritized and thus better collected than data on other interventions. For example, the fact that oxygen supplementation was recorded as being used less often than intubation demonstrates an inherent bias in recording interventions that are now more commonplace in the EM setting. In another example, although the GCS and vital signs in the pre-EMTP group are slightly different, it is worth noting that preliminary results show both GCS and vital signs were better recorded in the post-EMTP group than in the pre-EMTP group. As better documentation practices were emphasized during EMTP implementation, this improvement demonstrates the inherent differences between provider training in each group, which may have led to more accurate GCS scores and vital signs in the post-EMTP group. The present study was performed at a single tertiary-care hospital, which may limit the generalizability of the findings to health delivery venues with less resource availability. Furthermore, due to lack of detailed information on prehospital and interfacility care provided for patients transported from various origins, controlling for prehospital interventions was not possible. Future studies should attempt to account for such variables, especially given that a majority of patients presented from other facilities. Future studies should also attempt to differentiate patients based on varying levels of acuity, as this study’s inclusion of transfer patients likely led to a higher-acuity patient population. Additionally, general medical, technological, and other secular advances over the course of the study cannot be ignored, as healthcare does not occur in a vacuum. Many advances in Rwanda’s healthcare system have occurred in the last several years as previously noted, and the EMTP’s impact cannot be isolated due to the observational nature of this study. However, it is worth noting from the results that changes to patient outcomes in the ED setting outperformed those same outcomes in the in-hospital setting over the same course of years, minimizing the role that technological advances played in improving outcomes.

Lastly, the inclusion of patients with life-threatening injuries who also have fractures had the potential to confound results. Future research might exclude patients who require operative intervention for indications external to musculoskeletal trauma.

Short-term outcomes – including return emergency department visits – after discharge from the ED are used as internal quality metrics, as short-term revisits might represent medical errors or failures in care. Although interventions to reduce return visits have largely been unsuccessful, it is possible that these efforts did not adequately target high-risk patients. Related literature is focused on patients who have a pattern of repeat ED use; however, surprisingly, the degree to which these frequent users contribute to short-term revisits remains unknown. The ability to accurately identify which patients are more likely to revisit the ED could improve treatment and disposition decisions, and also allow EDs and health systems to develop more focused interventions. Previous work has identified some predictors of return visits, although these studies are limited by investigating only a subset of patients, restriction to one or few sites, focus on non-U.S. hospitals, reliance on complicated instruments, focus on medical errors, focus on admissions, or use of an overly broad definition of discharge failure. We used a unique dataset with encounter-level data to evaluate the predictors of return visits. Our goal was to identify which patient demographics and medical conditions were most associated with short-term revisits. In addition, we hypothesized that frequency of recent previous visits – specifically, number of visits within the previous six months – would have a stronger association with return visits than other patient characteristics, and that this pattern would be observed even after controlling for hospital and community characteristics. Data were recorded in the medical record at each hospital. Vituity collects these data through monthly electronic data feeds from its medical billing company, MedAmerica Billing Systems, Inc, which stores records in Application System/400 and PostgreSQL. Patient visits were linked through Medical Person Identification number – a unique patient identifier derived by an algorithm taking into consideration patient name, date of birth, Social Security number, and address.
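The linkage algorithm itself is not described beyond the fields it uses. A minimal sketch of one way a deterministic linkage key could be derived from those fields is shown below; the normalization rules, hashing scheme, and function name are hypothetical illustrations, not the billing company's actual method.

```python
import hashlib

# Illustrative sketch only: derive a stable patient key from name, date of
# birth, Social Security number, and address. The real Medical Person
# Identification algorithm is not described in the text; this normalization
# and hashing scheme is an assumption for demonstration purposes.
def person_key(name: str, dob: str, ssn: str, address: str) -> str:
    normalized = "|".join(
        part.strip().lower().replace(" ", "")
        for part in (name, dob, ssn, address)
    )
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]

# Visits sharing the same key would be treated as belonging to one patient.
print(person_key("Jane Doe", "1980-01-31", "123-45-6789", "1 Main St"))
```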

This methodology allowed for linkage across sites, although visits at non-Vituity sites were not observable. Any visit had the potential to be defined as an index visit. Patient characteristics included age, sex, insurance type, and the number of ED visits the patient had in the six months prior to the index visit. We reduced previous ED visits to an indicator variable for two or more previous visits in order to identify a characteristic that was easily observed and easy to apply to patients in real time. Visit characteristics included acuity level, primary diagnosis, and Charlson comorbidity index. Primary diagnoses were categorized using International Classification of Diseases, 9th and 10th revisions, Clinical Classifications Software categories. These categories were developed and defined by the Healthcare Cost and Utilization Project, under the AHRQ, and this scheme has been used in a number of studies. Because of the large number of categories, we further restricted diagnoses to those that had at least 10,000 observations and were associated with 14-day revisits in bivariate analysis; among these, we included the five most common diagnoses for index visits and for revisits. Charlson comorbidity index was calculated for all visits based on up to 12 separate ICD codes per visit. Hospital characteristics included size and turnaround time to discharge (TAT-D) for 2015. TAT-D is a quality metric measuring the median time between patient arrival and discharge at the hospital level for a given year. We excluded from the study providers working for the firm for fewer than 60 days within the study period or accounting for fewer than 60 encounters. To test whether the likelihood of a return visit differed according to acuity level, we included interaction terms between MD/DO and acuity level; given the difference in scope of practice for APPs, interactions between APP and acuity level were not modeled. Over the study period, there were 8,334,885 index encounters. After excluding visits resulting in a disposition other than discharge and excluding visits with missing data, the total sample size was 6,699,717. Table 1 shows the patient, visit, hospital, and physician characteristics at index visit for all encounters, and stratified by discharge vs admission. These descriptive statistics are also shown for encounters resulting in a 14-day return and for those who returned and were admitted to the hospital. In the multivariate model including patient, hospital, and community characteristics, the strongest predictor of a return visit within 14 days was whether or not the patient had two or more visits in the previous six months (OR = 3.06). Men and patients with Medicare or Medicaid insurance were more likely to have 14-day revisits, as were patients with a primary diagnosis of alcohol-related disorder; complication of device, implant or graft; congestive heart failure; and schizophrenia and other psychotic disorders. As a sensitivity analysis, we estimated the same model among adult patients only and found the results did not show any meaningful differences. Further, we repeated the analysis for each frequent-visitor definition and each time horizon, and for each combination of frequent-visitor definition and time horizon. Skin and subcutaneous tissue infections were the strongest predictor of three-day revisits for each of the definitions of frequent visitor, followed by frequent visitor as the next largest association.
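As an illustration of the modeling step just described, the sketch below shows how the two-or-more-prior-visits indicator could be constructed and entered into a logistic model for 14-day revisit alongside the other characteristics. It is written in Python/statsmodels rather than whatever software the authors used, and all column names are hypothetical stand-ins for the encounter-level fields in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch, not the authors' code. Column names (prior_6mo_visits,
# revisit_14d, acuity, ccs_category, charlson, hospital_size, tat_d) are
# hypothetical stand-ins for the fields described above.
encounters = pd.read_csv("encounters.csv")
encounters["frequent_visitor"] = (encounters["prior_6mo_visits"] >= 2).astype(int)

fit = smf.logit(
    "revisit_14d ~ frequent_visitor + age + C(sex) + C(insurance) "
    "+ C(acuity) + C(ccs_category) + charlson + C(hospital_size) + tat_d",
    data=encounters,
).fit()

# Exponentiated coefficients give odds ratios comparable to those reported
print(np.exp(fit.params).round(2))
```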
In all other specifications, frequent visitor was the factor with the strongest association with revisits. There were 476,665 frequent visitors, who had a total of 1,251,082 visits, of which 340,381 were 14-day revisits. While frequent visitors represent 10.7% of all patients, they accounted for 18.7% of all encounters and 40.2% of all 14-day revisits. Figure 2 demonstrates the percentage of patients revisiting the ED according to day after the index visit. The blue line represents all patients and shows that revisits peak on days one and two and steadily decline thereafter, with slight peaks at days 7 and 14. The red line shows the revisit rate for patients with no or one visit in the six months prior to the index visit; as with all patients, the revisit rate peaks on days 1-2 and declines thereafter, dropping to below 0.3% by day 14. Patients defined as frequent visitors have revisits peaking on day 1 and decreasing thereafter. The daily revisit rate for frequent visitors declines to a value of about 1.0% at 14 days, after which the revisit percentage decreases by less than 0.1% for each subsequent day. Encounters showing 0 days to first revisit reflect patients who returned to the ED on the same day as their index visit. Same-day revisits represented 3.7% of the total encounters with an associated revisit. Frequent visitors had a significantly higher risk of a 14-day return visit resulting in admission than non-frequent visitors. Table 3 shows the unadjusted proportion of encounters resulting in return at 3 and 14 days according to different thresholds defining frequent visitor.
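A Figure 2-style curve of this kind can be computed directly from the encounter data; the sketch below illustrates one way to do so, with hypothetical column names and no claim to reproduce the authors' exact processing.

```python
import pandas as pd

# Illustrative sketch: percentage of index visits with a first revisit on each
# day after the index visit, overall and stratified by frequent-visitor status.
# Columns (days_to_first_revisit, frequent_visitor) are hypothetical; rows with
# no revisit are assumed to carry a missing days_to_first_revisit value.
df = pd.read_csv("index_visits.csv")

def daily_revisit_rate(group: pd.DataFrame) -> pd.Series:
    n = len(group)
    counts = group["days_to_first_revisit"].value_counts().sort_index()
    return counts / n * 100  # percent of index visits revisiting on each day

overall = daily_revisit_rate(df)
by_group = df.groupby("frequent_visitor").apply(daily_revisit_rate)
print(overall.loc[0:14])  # days 0-14; same-day revisits are counted as day 0
```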

The ASI is a standardized data collection tool that has excellent psychometric properties.

The most common symptoms reported in our study were cough, shortness of breath, and vomiting, each occurring separately in five patients. Three patients presented with chest pain. Two patients presented with altered mental status in the form of unresponsiveness, with one patient requiring intubation. The other unresponsive patient, a 16-year-old male, returned to normal mentation with bag-valve-mask ventilation and naloxone but required high-flow nasal cannula for shortness of breath. On physical examination, accessory muscle use was the most common finding, reported in four patients. Rales were appreciated in two patients, while no patients were found to have wheezing. In our study, six patients presented with respiratory failure. Four required HFNC. One patient was intubated; one patient required simple nasal cannula oxygen at two liters per minute; and one patient maintained normal oxygen saturations in room air during his ED visit and was discharged home. A brief clinical presentation, summary of findings on imaging, and type of respiratory support needed are summarized in Table 2. Five patients were admitted to the pediatric intensive care unit, and one patient was admitted to the normal pediatric unit. The median hospital length of stay was six days. All patients were discharged, with no comorbidities or deaths reported. Six patients were treated with steroids. The median duration of treatment with steroids during admission and after discharge was nine days.

Our patients had a variety of laboratory tests ordered. Most common were complete blood count, respiratory virus panel, respiratory cultures, and urine drug screen. All patients had a complete blood count, and the median white cell count was 16 thousand cells per cubic millimeter. A respiratory virus panel was collected from five patients and was negative in all of them. Respiratory cultures were collected from two patients, and both resulted negative. A urine drug screen was performed for six patients and was positive for cannabinoids in all six. Three patients followed up at different intervals in the pulmonology clinic. Spirometry showed normal results in all three patients at that time. Case 1 followed up one week after discharge, at which time spirometry showed evidence of obstructive lung disease, which returned to normal at the three-month follow-up visit. No repeat imaging was performed for that patient. Case 2 followed up six weeks after discharge with near-complete resolution of the ground-glass appearance on repeat CT and normal spirometry. Case 4 followed up two weeks after discharge with improvement in lung opacities on repeat radiograph and normal spirometry. All three patients had received steroids for 10 days when they were originally diagnosed with EVALI. No follow-up data were available for the remaining four patients. EVALI was an emerging disease entity in 2019. In our case series, we describe adolescents diagnosed with EVALI and their clinical course in the ED and the hospital. In our study, the most common symptoms of cough, shortness of breath, and vomiting presented with an equal frequency of 71%. In a study by Layden et al, shortness of breath and cough were noted in 85% of patients and vomiting in 61%; whereas, according to Belgaev et al, 90% of patients in their study presented with gastrointestinal and respiratory symptoms. In a report by the CDC, 85% of the EVALI population had respiratory symptoms and 57% had GI symptoms.

The results of our study are similar to previous literature in suggesting that respiratory and GI symptoms are common in patients with EVALI. According to Balgaev et al, 67% of patients had clinical and radiological improvement with residual findings on radiological and pulmonary function tests at the time of follow-up. In our study, the three patients who had documented follow-up visits had normal spirometry without residual deficits. Only two of those patients had repeat imaging, and both showed improvement without residual abnormalities. E-cigarette liquids and aerosols have been shown to contain a variety of chemical constituents, including flavors that can be cytotoxic to human pulmonary fibroblasts and stem cells. Exposure to heavy metals such as chromium, nickel, and lead has also been reported. None of our patients were tested for heavy metal exposure. Most of the delivery systems have nicotine in them, with one cartridge providing the nicotine equivalent of a pack of cigarettes. In addition to nicotine, e-cigarette devices can be used to deliver THC-based oils. According to Trivers et al, one-third of the adolescents who used e-cigarettes had used cannabinoids in their e-cigarettes. In our patients with EVALI, the urine drug screen was positive for cannabinoids in all patients tested. One caveat is that we do not know whether our patients used only THC-containing products or a combination of nicotine and THC-containing products. In our case series, the majority of patients presented with pulmonary disease requiring respiratory support and intensive care unit admission. None of these patients developed acute respiratory distress syndrome. We likely did not see this disease process due to our small sample size, as Layden et al reported ARDS development in several of their examined cases. In our series, we did not evaluate the pathologic pulmonary changes in different patients. In other case reports, different pathophysiologic patterns of pulmonary involvement, in the form of diffuse alveolar hemorrhage, exogenous lipoid pneumonia, acute eosinophilic pneumonia, or hypersensitivity pneumonitis, have been identified. Although the mechanism of EVALI is not clearly understood, the CDC suggests the use of steroids for treatment.

According to a series of patients in Illinois, 51% of those patients had improvement in symptoms after the administration of steroids. In another study, patients showed clinical and radiological improvement following the use of antibiotics and steroids. In our study, six patients received steroids and six patients received antibiotics; three of those patients followed up in clinics with normal spirometry. However, this evidence is not sufficient to establish that the use of steroids or antibiotics is beneficial in EVALI. There are several limitations to our study. First, because it was a retrospective chart review, we could not establish causation. Second, all data may not have been recorded on all patients. We might have missed some cases if the ICD-10 codes on the chart were incorrect. Only three patients had documented follow-up, so we don’t know whether the other four had any comorbidities after their hospitalization. Third, we had a small number of patients. Fourth, this was a single-center study, so results may not be generalizable to other hospitals with different patient demographics.

At baseline, intervention patients received a face-to-face brief intervention during their clinician visit. Clinicians followed a scripted paper protocol, “Summary to Clinician,” provided by research staff based on the patients’ HSD; the majority of clinicians reported on our post-visit “Intervention Plan” that their intervention lasted 3–4 minutes, and all clinicians reported that they had counseled the patient on their HSD. The message covered drug addiction as a chronic brain disease, the need to quit or reduce using drugs to prevent this disease, the physical and mental consequences of drug use, and the potential accelerated progression towards addiction caused by poly-substance use. Clinicians also told patients that they would receive telephone calls 2 and 6 weeks later from a health educator. Patients subsequently received a Drug Health Education Booklet with a Report Card for their HSD, and viewed a video doctor reinforcing the clinician message. Patients were enrolled on their HSD, and it was that drug that the clinician focused on; clinicians would also briefly mention the benefits of reducing risky use of alcohol or tobacco if the patient screened positive on the ASSIST for risky use of these substances. The 2- and 6-week telephone drug-use coaching sessions reinforced the clinicians’ message and followed a patient-centered protocol, focusing on HSD use reduction.

As previously described, lay health educators were trained in motivational interviewing and cognitive behavioral techniques. Weekly meetings with the PIs and project director fostered a HE “Learning Community,” where every case was discussed to maintain fidelity to the protocol. All 32 intervention patients received clinician brief advice; 22 had at least one telephone session, and 15 had both sessions. Control patients completed the ASSIST but did not receive the clinician brief intervention or coaching sessions; they did receive a video doctor and information booklet on cancer screening. At study exit, control patients received the intervention components of the video doctor and informational booklet. Urine drug testing was conducted at baseline and follow-up to validate self-reported drug use. The Confirm BioSciences (San Diego) Integrated QuickScreen™ CLIA cup was used since it reliably tests for the drugs of interest to this study. At baseline, 58/65 patients provided urine specimens, and 47/51 did so at follow-up. Thirty-two patients tested positive for marijuana at baseline, and all 32 disclosed past-month marijuana use. Similarly, 2 patients tested positive for cocaine and both self-reported its use. At follow-up, 18 patients tested positive for marijuana; all of these patients reported recent marijuana use. Three control patients tested positive for cocaine and/or amphetamines – 1 for cocaine, 1 for amphetamines, and 1 for both; all 3 disclosed their use of these drugs. Thus, for all intervention and control patients with urine tests, self-reports of drug use were confirmed by the tests at both baseline and follow-up. Finally, to complement the assessment of a group difference in degree of self-reported reduction in HSD use over the study period, chi-square and logistic regression analyses were conducted to determine whether there was a group difference with respect to the objective measure of testing positive for HSD use via urine analysis at follow-up. The outcome measure was reduction in the number of days of use of the patients’ HSD in the past 30 days between baseline and 3-month follow-up. For this study, we employed self-reported use of substances for the past 30 days, which provides results similar to the timeline follow-back method. Patients self-administered the questionnaires and recorded their responses on the tablet computers at baseline and follow-up. The Behavioral Model for Vulnerable Populations guided selection of variables used as potential covariates in analyses. Key characteristics are shown in Table 1. Perceived general health status was assessed by a five-point Likert scale item from the SF-12; for analysis, responses were dichotomized to fair/poor health versus good, very good, or excellent health. Physical health was measured by self-reported history of 8 chronic medical conditions. Readiness to change drug use was also assessed. Baseline and follow-up questionnaires were identical. Reduction in past-month HSD use between baseline and follow-up was approximately normally distributed and was assessed with linear regression analysis. Baseline variables in Table 1 associated with reduction in HSD use at the 0.05 level were candidate covariates. A parsimonious final model was obtained by manually removing covariates one at a time in descending order of p values until only those associated with reduction in HSD use at the 0.10 level remained and multicollinearity was not a problem. A priori power testing for efficacy was not conducted for this pilot study.
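The manual covariate-pruning procedure described above amounts to a simple backward elimination loop; the sketch below illustrates it under stated assumptions (Python/statsmodels rather than the authors' software, numeric covariates, and hypothetical variable names such as hsd_reduction and group).

```python
import statsmodels.formula.api as smf

# Hedged sketch of backward elimination as described in the text: drop the
# covariate with the largest p value until all remaining covariates have
# p < 0.10. Assumes numeric covariates and hypothetical column names.
def backward_eliminate(df, outcome="hsd_reduction", covariates=(), threshold=0.10):
    covariates = list(covariates)
    while covariates:
        formula = f"{outcome} ~ group + " + " + ".join(covariates)
        fit = smf.ols(formula, data=df).fit()
        pvals = fit.pvalues.drop(["Intercept", "group"], errors="ignore")
        worst = pvals.idxmax()
        if pvals[worst] <= threshold:
            return fit                 # all remaining covariates meet p < 0.10
        covariates.remove(worst)       # drop the least significant covariate
    return smf.ols(f"{outcome} ~ group", data=df).fit()
```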
Since 14 of the total sample of 65 patients were lost to follow-up, intention-to-treat analysis was performed using multiple imputation to impute their missing outcome values, rather than carrying forward the last observation, to accommodate the very real possibility of change over time. Baseline variables in Table 1 related to loss to follow-up were included in the imputation model, along with the analytic variables. Twenty sets of imputed values were produced. Two separate regression analyses were compared to check the sensitivity of our estimates of the effects of QUIT on drug use reduction. One was the intent-to-treat analysis including all 65 cases. The other used the 51 complete cases with both baseline and follow-up data. Additionally, to investigate whether patients might have compensated for reducing their HSD use by increasing their use of alcohol and tobacco, we assessed changes in use of these substances among patients who reduced their HSD use by 1 day or more. Baseline characteristics show that 94% of patients were Latino; on average they had used their HSD for 12.9 years; their mean HSD ASSIST score at baseline was 14.4; and their most common HSD was cannabis, followed by stimulants. Intervention and control groups did not differ on baseline characteristics. For the 51 patients with follow-up data, the mean number of days of HSD use in the past 30 days was balanced at baseline.
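A minimal sketch of this intention-to-treat approach is shown below: twenty imputed data sets, a regression on each, and pooling of the treatment-effect estimates with Rubin's rules. It uses Python/statsmodels rather than the authors' software, and the column names (hsd_reduction, group, and the baseline covariates) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICEData

# Hedged sketch, not the authors' code. df holds the baseline covariates,
# treatment indicator, and the 3-month outcome with missing values for the
# 14 patients lost to follow-up; all column names are hypothetical.
df = pd.read_csv("quit_pilot.csv")
imp = MICEData(df)

betas, variances = [], []
for _ in range(20):                      # 20 imputed data sets, as in the text
    imp.update_all()                     # one round of chained-equation updates
    d = imp.data
    X = sm.add_constant(d[["group", "age", "fair_poor_health"]])
    fit = sm.OLS(d["hsd_reduction"], X).fit()
    betas.append(fit.params["group"])
    variances.append(fit.bse["group"] ** 2)

# Rubin's rules: total variance = within-imputation + (1 + 1/m) * between
m = len(betas)
pooled_beta = np.mean(betas)
total_var = np.mean(variances) + (1 + 1 / m) * np.var(betas, ddof=1)
print(pooled_beta, np.sqrt(total_var))
```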

Alcohol and water were delivered through Teflon tubing using a computer-controlled delivery system.

There was a minimum wash-out period between medication conditions of 7 days, with a range of 7-10 days. Regarding medication adherence, naltrexone and placebo capsules were packaged with 50 mg of riboflavin. Visual inspection of riboflavin content under ultraviolet light indicated that all urine samples tested positive for riboflavin. Participants completed a modified version of the Alcohol Taste Cues Task in the scanner. Within each task trial, participants initially viewed a visual cue for 2 seconds, followed by a fixation cross. The word “Taste” then appeared, corresponding to oral delivery of the liquid indicated by the cue at the start of the trial. Participants were also instructed to press a button on a button box to indicate the point at which the bolus of liquid was swallowed, and this information was used to model motion associated with swallowing. There were two runs of this task, with 50 trials per run. Red or white wine, based on participant preference, was used as the alcohol stimulus; previous work from our group has demonstrated that this paradigm effectively elicits alcohol-related neural activation. Carbonated alcoholic beverages, such as beer, could not be systematically administered with the paradigm apparatus and were not offered as a drink option to participants. Visual stimuli and response collection were programmed using MATLAB and Psychtoolbox, and visual stimuli were presented using MRI-compatible goggles.
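To make the trial structure concrete, the sketch below lays out one trial of the task (cue, fixation, "Taste" with liquid delivery, swallow button press) in plain Python. Only the 2-second cue duration comes from the text; the fixation duration, trial order, and the display/delivery/response functions are hypothetical placeholders standing in for the MATLAB/Psychtoolbox and pump-control code actually used.

```python
import time

def show(text):                    # stand-in for on-screen stimulus presentation
    print(text)

def deliver_liquid(liquid):        # stand-in for computer-controlled delivery
    print(f"delivering {liquid}")

def wait_for_swallow():            # stand-in for the button-box response
    return time.time()             # timestamp later used to model swallow motion

def run_trial(liquid, fixation_s=1.0):
    show(liquid)                   # visual cue indicating wine or water, 2 s
    time.sleep(2.0)
    show("+")                      # fixation cross (duration assumed, not stated)
    time.sleep(fixation_s)
    show("Taste")                  # "Taste" cue accompanies oral delivery
    deliver_liquid(liquid)
    return wait_for_swallow()

run_trial("wine")                  # the full task had 50 trials per run, 2 runs
```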

Participants completed an oral alcohol self-administration paradigm on day 5 of medication titration. At the start of this session, participants were required to test negative for substance use and to have a BrAC of 0.00 g/dl. Female participants were also required to test negative on a pregnancy test. Participants fasted for two hours prior to the session and were given a standardized meal before the alcohol administration. Participants initially completed an intravenous alcohol administration discussed in the primary manuscript. After completing the alcohol infusion paradigm and reaching a target BrAC of 0.06 g/dl, the IV was removed and, after a standardized period of five minutes, participants began an oral self-administration session at the testing center. Notably, the alcohol dose of 0.06 g/dl prior to the self-administration period was higher than the typical 0.03 g/dl priming dose implemented in self-administration tasks. During the self-administration period, participants were provided 4 mini-drinks of their preferred alcoholic beverage and allowed to watch a movie over a 1-hour period. The 4 mini-drinks allowed participants to consume up to 0.04 g/dl alcohol in total and were individualized by participant gender, weight, height, and alcohol content. Participants were also told that they would receive 1 dollar for each drink remaining at the end of the session. At the end of the session, participants were provided a meal and required to stay at the testing center until their BrAC dropped below 0.02 g/dl, or to 0.00 g/dl if driving. For the taste cues paradigm, information regarding image acquisition parameters and preprocessing steps is available in the Supplementary Materials and is derived from the primary manuscript. The main contrast of interest was the difference in activation corresponding to alcohol taste delivery and water delivery across the two task runs, for each within-subject medication condition. Consistent with previous studies examining relationships among ventral striatum activity, subjective response to alcohol, and drinking behavior, an anatomical bilateral ventral striatum region of interest was defined using the Harvard-Oxford atlas in standard MNI space and was transformed into participants’ respective native space using FSL’s FLIRT.
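The ROI extraction step described above reduces, for each participant and medication condition, to averaging the alcohol-versus-water contrast image within the native-space ventral striatum mask. A minimal sketch is shown below; file names are hypothetical, and the mask is assumed to have already been transformed from MNI to native space (e.g., with FSL FLIRT, as in the text).

```python
import nibabel as nib

# Hedged sketch of extracting the mean (alcohol taste > water) contrast value
# within a binary ventral striatum ROI for one participant. File names are
# hypothetical placeholders.
contrast_img = nib.load("sub01_alc_gt_water_cope.nii.gz")
roi_img = nib.load("sub01_vs_mask_native.nii.gz")

contrast = contrast_img.get_fdata()
roi = roi_img.get_fdata() > 0            # binarize the ventral striatum mask

vs_mean = contrast[roi].mean()           # value carried into the mixed models
print(vs_mean)
```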

This ROI was selected because the ventral striatum is the region most consistently elicited in alcohol cue and taste reactivity paradigms, as well as the one most frequently associated with behavioral measures and treatment response. ROI selection was limited to one region due to insufficient power to detect incremental model improvement with multiple ROIs. The mean contrast estimate values were extracted from this region for each subject and used in mixed models for group-level analysis. The self-administration paradigm yielded two outcome measures: latency to first drink, and total number of drinks consumed during the session. To examine the relationship between alcohol taste-induced neural activation and self-administration, multilevel mixed Poisson and Cox proportional hazards models were the primary analyses for total number of drinks and latency to first drink, respectively. Frailty models were fitted using a penalized partial likelihood approach available in SAS 9.4. Primary analyses examined effects of variables of interest, including medication condition, alcohol consumption, and OPRM1. Due to concerns of overparameterization given the limited sample size, additional covariates of interest were individually included in separate models to determine whether main effects of ventral striatum would be altered. Alpha corrections were not utilized in this exploratory study due to the limited sample size and constrained power. Tests of proportional hazards are included in the Supplementary Materials and Figures S1a-S1d. Survival plots for latency to first drink, controlling for covariates within the final model, were generated to further explore ventral striatum activation in predicting latency to first drink. Of note, a dichotomous median-split ventral striatum variable was created for ease of visualization of these relationships, but ventral striatum activation was included as a continuous variable in all models. The distribution of latencies to first drink was non-normal.
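For orientation, the sketch below shows the latency-to-first-drink analysis in Python with lifelines; it is a simplified stand-in for the SAS 9.4 penalized-partial-likelihood frailty models (here an ordinary Cox model with cluster-robust errors by subject), and all column names and numeric codings are hypothetical. The median-split variable is used only for the Kaplan-Meier visualization, mirroring the text.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hedged sketch, not the authors' SAS analysis. One row per subject x
# medication session; columns are hypothetical and assumed numerically coded
# (e.g., medication 0 = placebo, 1 = naltrexone; oprm1_g 0/1).
df = pd.read_csv("self_admin_sessions.csv")

cph = CoxPHFitter()
cph.fit(
    df[["latency_min", "drank", "vs_activation", "medication", "oprm1_g", "subject_id"]],
    duration_col="latency_min",
    event_col="drank",                 # 1 = consumed a drink during the session
    cluster_col="subject_id",          # repeated sessions within subject
)
cph.print_summary()

# Median-split ventral striatum activation, for visualization only
df["vs_high"] = (df["vs_activation"] > df["vs_activation"].median()).astype(int)
kmf = KaplanMeierFitter()
ax = None
for label, grp in df.groupby("vs_high"):
    kmf.fit(grp["latency_min"], grp["drank"], label=f"VS high={label}")
    ax = kmf.plot_survival_function(ax=ax)
```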

Across medication conditions, 52% of individuals refrained from drinking throughout the paradigm, 29% consumed a drink within the first three minutes of the paradigm, and 19% of individuals consumed their first drink at some point during the remainder of the session. Cox regressions for latency to first drink indicated a significant effect of ventral striatum activation, Wald χ2 = 2.88, p = 0.05, such that those with lower ventral striatum activation exhibited longer latencies to first drink. Significant covariates included medication condition, Wald χ2 = 5.99, p = 0.01, such that naltrexone was associated with longer latency to first drink. OPRM1 was also significant, Wald χ2 = 3.31, p = 0.03, such that Asn40Asn homozygotes exhibited shorter latency to first drink. Other covariates of interest were not associated with latency to first drink. There were also no medication × gender interactions on self-administration outcomes. This study examined the relationship between alcohol cue-induced ventral striatum activation and alcohol self-administration in the laboratory. Results from this heavy-drinking sample of East Asians indicated that higher ventral striatum activation was associated with a shorter latency to first self-administered drink. Similarly, ventral striatum activation was positively associated with the total number of drinks consumed during the self-administration paradigm in this sample. These results remained significant after controlling for severity of drinking patterns, OPRM1, and medication condition. Overall, this is the first study to examine whether neuroimaging outcomes of interest can predict responses within laboratory paradigms commonly used in the alcohol literature. This foundational work adds important validity to the hypothesized interplay between neural bases of alcohol craving and behavioral measures of alcohol seeking, namely alcohol self-administration in the human laboratory. These associations contribute to a growing literature on the translational value of neuroimaging paradigms in alcohol treatment, particularly in elucidating potential mechanisms through which self-administration paradigms in AUD research are related to real-world alcohol consumption. Such work is aligned with current efforts in behavioral treatments utilizing neuroimaging to study mechanisms of behavior change for substance use disorders; identifying those individuals with severe orbitofrontal cortex deficits, for instance, may be useful in guiding them away from treatments focused on increasing the salience of future negative consequences of substance use. In a similar fashion, adjunctive fMRI has been used to train individuals with substance use disorders, through resonance-based breathing, to reduce visual processing of drug cues and increase activation in areas implicated in internally directed cognition. Elucidating the translational value of these various experimental paradigms is strongly indicated, as AUD medications can exhibit differential results based on the utilized paradigm, and such variability may in turn inform precision medicine efforts. Expanding the study of inter-experimental paradigms may also shed light on aspects of alcohol consumption unique to individual paradigms. For instance, a greater understanding of individuals’ experiences in the transition between the first and subsequent drinks may be an important point of clinical intervention when discussing naltrexone use.
While the primary aim of this study was not focused on genetic determinants of self-administration, it is notable that genotypes encoding the binding potential of mu-opioid receptors were associated with self-administration outcomes.

While it is theorized that individuals with at least one copy of the G-allele for OPRM1 exhibit greater vulnerability to developing AUD, meta-analyses have been mixed, with findings that such an association may not be reliable, may be population specific, or that the G-allele confers a modest protective effect on general substance dependence in European-ancestry cohorts. In this study, G-allele carriers of OPRM1 exhibited lower total consumption relative to A-allele carriers at a statistical trend level, as well as slower latency to first drink. This finding is consistent with the primary analyses for these data, which indicated that G-allele carriers of OPRM1 also reported a less severe drinking history and lower AUDIT scores compared to Asn40 homozygotes, and may, in turn, help to explain these findings. In sum, we accounted for genetic factors in these analyses given their theoretical and practical salience, particularly in this population. And while the genetic findings are notable and largely consistent with the literature, the primary focus of the study is on the fMRI-to-human-laboratory association. This is the area in which the present analyses make a substantive contribution to the literature by supporting a long hypothesized, yet rarely tested, association between brain and behavior. Finally, this study identified significant effects of naltrexone in increasing latency to first drink and decreasing total alcohol consumption. Notably, while these findings contrast with the primary study results from which the data are derived, the current study is a secondary analysis of a sub-sample of participants who had completed both neuroimaging sessions. While inclusion of VS activation may have helped to improve model fit, the primary study had greater power to test pharmacogenetic effects. For these reasons, while it is possible that consideration of neuroimaging outcomes helps elucidate AUD pharmacotherapy effects, replication using larger samples is warranted. On balance, this study should be interpreted in light of its strengths and limitations. Strengths included assessment of multiple experimental procedures used in the medication development literature and consideration of multiple psychiatric and genetic predictors of self-administration in the statistical analyses. Another strength is the test of the hypothesis at the within-subjects level of analysis. As argued by Curran and Bauer, several psychological processes that are inherently within-person processes, such as the relationship between how one’s brain processes alcohol cues and how much s/he wants to drink in the future, are presumed to be explained in between-subjects models, when in fact within-subject analyses provide a more representative test of the process at hand. Thus, a within-subjects approach represents a more robust, and methodologically adequate, test of the association between brain and behavior. One of the most important limitations of the current study is a constrained sample and power; given the exploratory nature of this study, alpha corrections were not implemented. A limitation of the taste cues fMRI paradigm used in this study is that it was modified to reduce trial duration in order to increase the number of trials for analysis; in contrast to the original task, a whole-brain analysis of the task did not elicit significant clusters of mesocorticolimbic, including ventral striatum, activation.
Therefore, replication using other tasks that more strongly elicit ventral striatum activation is needed, both to induce enough variability to test medication effects and to translate such effects into another subsequent experimental modality. Variations of the Monetary Incentive Delay task that administer beer may be particularly useful in disentangling whether anticipation of alcohol taste, relative to its receipt, is differentially predictive of self-administration. Relatedly, the taste cues paradigm was limited to the choice of red or white wine, which did not always correspond with participants’ drink of choice; while this correspondence was not a significant covariate in self-administration outcomes, administering the drink of choice may increase the external validity of the imaging task. Another potential weakness is that medication effects from the primary manuscripts were null; future studies are needed to corroborate that medication effects are consistent across paradigms, particularly when such effects are significant.

Cells were fixed once again with 1% paraformaldehyde prior to storage and analysis.

Carayon and associates generated and purified a polyclonal rabbit anti-CB2 antibody directed against the C-terminal of human CB2. As fixation and permeabilization were required for antigen detection, their approach precluded a comparison between extracellular and intracellular staining. A fluorescent signal was detected from stained B cells and was inhibited by excess peptide, but the findings were much less convincing with respect to the staining of other cell types. More recently, Graham and coworkers evaluated polyclonal antibodies from several commercial manufacturers and reported that human B cells, T cells, monocytes, NK cells, and polymorphonuclear cells all express high levels of extracellular CB2. However, the staining patterns in their report were highly variable from manufacturer to manufacturer and from batch to batch. Furthermore, in the absence of appropriate control antibodies or the inclusion of known positive and negative controls, one cannot really draw conclusions about sensitivity and specificity. Based on these concerns, we focused on a defined mAb with the ability to detect extracellular CB2 expression. In order to optimize and validate staining patterns, we constructed cell lines expressing defined levels of human CB2 and compared staining patterns to those observed with parental cells. During the optimization process, it was obvious that non-specific background staining could easily be mistaken for receptor expression if antibodies were not carefully titrated and appropriate isotype controls employed. By including the expression of a linked GFP reporter gene in our vector construct, we also possessed a mechanism for independently assessing expected CB2 staining patterns.

Perhaps the most important technical advancement was the inclusion of both intracellular and extracellular staining protocols. In this respect, our studies were also aided by the use of an ImageStreamX® cytometer. Due to the impact of fixation and permeabilization on antibody staining, we could not use MFI to directly compare extracellular and intracellular protein levels by conventional flow cytometry. However, visual inspection of captured images readily identified the cytoplasmic compartment as the primary source of our CB2 signal. Imaging also allowed us to independently confirm the process of receptor internalization in response to ligand exposure. Given the controls and approaches employed, there should be little doubt regarding the performance characteristics of this flow cytometry approach. In summary, we describe a rapid and flexible approach for detecting and localizing human CB2 protein expression in cell lines and primary human cells. This approach uses commercially available reagents and should have wide applicability. In addition, for the first time, we report that the CB2 receptor is primarily located at intracellular sites in PBL and that expression is not limited to the cell membrane as previously thought. Even in B cells, which express both extracellular and intracellular CB2, the majority of receptor protein is located within the cell. Our findings and related investigations carried out with CB2 suggest that there is trafficking between receptor locations and that intracellular receptors are likely to be biologically active. Future studies focused on understanding the role of differential CB2 receptor location on cannabinoid function are warranted.

The expression of cannabinoid receptors by human leukocytes suggests that both endogenous ligands and inhaled marijuana smoke might exert immunoregulatory properties that are distinct from their effects on the brain.

Furthermore, while brain cells exclusively express cannabinoid receptor type 1 (CB1), leukocytes express both CB1 and CB2, with CB2 reported as the predominant subtype. Both CB1 and CB2 are transmembrane G-protein coupled receptors that inhibit the generation of cyclic adenosine monophosphate and can signal through a variety of pathways including PI3-kinase, MAP kinase, NF-κB, AP-1, and NF-AT. The resulting effects on host immunity have primarily been studied in animal models and suggest a coordinated down-regulation of cellular responses that can occur through altered trafficking, selective apoptosis, or functional skewing of antigen presenting cells and T cells away from T helper type 1 or Th17 response patterns and toward type 2 responses. Cannabinoids signal through an endogenous human cannabinoid system to produce their biologic effects [Aizpurua-Olaizola 2016, Cabral 2015, Maccarrone 2015, Pacher 2006]. Expression of CB2 predominates in cells from the immune system [Castaneda 2013, Schmöle 2015], and cannabinoids have been described to exert potent immunosuppressive effects on antigen presenting cells [Klein 2006, Roth 2015], B cells and antibody production [Agudelo 2008, Carayon 1998], T cell responsiveness and cytokine production [Eisenstein 2015, Yuan 2002], and monocyte/macrophage function [Hegde 2010, Roth 2004]. However, the majority of these findings stem from studies employing agonists and antagonists with defined CB2 binding specificities, and only limited insight has been available regarding the actual expression patterns and dynamic regulation of CB2 protein. CB2 has traditionally been described as a seven-transmembrane G protein-coupled receptor expressed on the cell surface and responsive to extracellular ligand binding.

Ligand binding has been shown to initiate both receptor internalization [Atwood 2012] and a diverse number of intracellular signaling cascades, including adenylyl cyclase, cAMP, mitogen-activated protein kinase, and intracellular calcium [Howlett 2005, Jean-Alphonse 2011, Maccarrone 2015]. However, after using a highly sensitive and specific monoclonal anti-CB2 antibody and fluorescent imaging, we were surprised to find that CB2 was expressed exclusively in the intracellular compartment of human monocytes, dendritic cells, and T cells without detectable cell surface staining [Castaneda 2013, Roth 2015]. Only human B cells expressed CB2 on the cell surface, which internalized in response to ligand exposure, as well as within the intracellular compartment [Castaneda 2013]. These findings challenge our understanding of the CB2 receptor and identify the need for additional insight. It is not yet clear whether cannabinoids routinely bind and activate intracellular CB2, but there is at least one report providing direct experimental evidence for this [Brailoiu 2014]. It is also not clear why B cells exhibit a receptor expression pattern that is distinct from other leukocytes, or whether this is a unique feature in cells obtained from peripheral blood or related to the specific stage of cell activation or differentiation. B cell activation has been suggested to play a role in the pattern of CB2 expression in a prior report [Carayon 1998]. In order to better understand CB2 expression patterns exhibited by human B cells, this report examines cells obtained from three different tissue sources, evaluates the relationship between defined B cell subsets and CB2 expression patterns, and uses an in vitro model for activating B cells in order to examine changes in CB2 expression as they correlate with the life cycle of functional B cell responses. Following informed consent, peripheral blood leukocytes were isolated by Ficoll-gradient centrifugation from the blood of healthy human donors. Human umbilical vein cord blood leukocytes were obtained from anonymous donors through the UCLA Virology Core and isolated in the same manner. Fresh human tonsillar tissue was also obtained in an anonymous manner through the UCLA Translational Pathology Core from patients undergoing routine elective tonsillectomies. Tonsillar tissue was handled in a sterile manner, minced, and then extruded through a sterile 100 µm filter to produce single cells. Filtered cells were then rinsed with PBS and processed in the same manner as PBL. Cell subsets were identified by flow cytometry using fluorescent-labeled monoclonal antibodies directed against T cells, B cells, and B cell subsets. The human B cell non-Hodgkin’s lymphoma cell line SUDHL-4 was cryopreserved and, when needed, cultivated in suspension in complete medium composed of RPMI-1640 supplemented with 10% fetal bovine serum, 50 µM 2-mercaptoethanol, and 1% antibiotic-antimycotic solution. CB2 on the extracellular membrane was detected as previously described [Castaneda 2013].

Briefly, cells were pre-treated with human AB serum followed by a 30-min incubation with unlabeled primary mouse IgG2 mAb directed against either human CB2 or an isotype-matched mAb against an irrelevant antigen, mouse NK1.1. After washing, cells were incubated with an APC-labeled goat anti-mouse F(ab')2 mAb for 30 min. To identify different leukocyte subsets, cells were incubated with lineage-specific fluorescent-labeled mAb for 20 min and washed. All cells were then fixed with 1% paraformaldehyde and washed. Samples were protected from light and stored at 4 °C until analyzed. In order to detect total cellular CB2 expression, cell suspensions were fixed, permeabilized, and blocked with human AB serum. Staining with primary unlabeled mAb and secondary APC-labeled GAM was carried out as already detailed, except for the use of a 60-minute incubation time and the presence of permeabilizing solution. After washing, leukocytes were further stained with fluorescent-labeled antibodies as indicated for individual experiments, fixed, and stored for analysis. In order to identify total cellular CB2 expression in specific B cell subsets, cells were prestained with B cell subset markers prior to fixation, permeabilization, and staining for CB2. This step prevented the detection of intracellular subset markers, which can otherwise result in misclassification. After staining, cells were fixed with 1% paraformaldehyde, washed, and cryopreserved in PBS with 2% human AB serum and 10% dimethyl sulfoxide. On the day of CB2 analysis, cells were rapidly thawed at 37 °C, treated with permeabilizing solution, and stained for 30 min with Alexa Fluor® 647-labeled mouse IgG2a mAb directed against either human CB2 or an isotype-matched mAb against an irrelevant antigen, mouse NK1.1, and with fluorescent-labeled antibodies directed against CD20 and CD3.

The concept of CB2 as a simple GPCR expressed on the surface of human leukocytes [Graham 2010, Klein 2003] is being challenged by a number of recent findings, including our imaging studies that employ a mAb against the N-terminal domain of CB2 to detect protein expression [Castaneda 2013, Roth 2015]. Using a combination of multi-parameter flow cytometry and flow-based imaging, we observed that CB2 can be expressed on the cell surface, as expected, but is also present within the cytoplasm. Furthermore, the expression pattern for CB2 was not uniform across cell types. The intracellular expression, rather than the extracellular expression, was the predominant form [Castaneda 2013]. While peripheral blood B cells expressed both cell surface and intracellular CB2, T cells, monocytes, and dendritic cells exhibited only the intracellular form of CB2. Even though cell surface CB2 can rapidly internalize when exposed to a ligand, the distribution of this internalized CB2 did not appear to account for the pre-existing distribution of intracellular CB2. The biologic basis underlying these different CB2 expression patterns has not yet been fully delineated, but there is growing evidence that the presence of GPCRs at different cellular locations is an important feature of these receptors that promotes functional heterogeneity with respect to downstream signaling and biologic responses [Flordellis 2012, Gaudet 2015]. Along these lines, there is growing evidence that intracellular forms of both CB1 and CB2 are common and exert distinct biologic effects [Brailoiu 2011, Bernard 2012, Gómez-Cañas 2016].
In this setting, understanding the distribution, regulation, and dynamic balance between cell surface and intracellular CB2 receptors is likely to provide important insight regarding cannabinoid receptor biology. The unique expression of CB2 on the surface of peripheral blood B cells led us to question whether this represented an intrinsic and stable feature of B cells in general or was more characteristic of those in peripheral blood. B cells were therefore obtained from three sources for comparison: umbilical vein cord blood, adult peripheral blood, and tonsils. B cell subsets from these different sources were characterized as naïve mature, activated, or memory B cells based on their expression of IgD, IgM, CD27, and CD38 [Ettinger 2005]. When analyzed in this manner, it became clear that all naïve and memory B cells, regardless of source, expressed both cell surface and intracellular CB2. On the other hand, B cells with an activated phenotype expressed only the intracellular form of CB2, and in most cases the level of intracellular CB2 was higher than that observed in naïve or memory B cells obtained from the same sample. Prior studies had noted that IgD−/CD38+ germinal center B cells, consistent with the activated tonsillar B cells studied here, express a different pattern of CB2 protein staining than other B cells. However, they were using a polyclonal rabbit antibody that targeted a C-terminal CB2 peptide sequence and concluded that their findings represented the transition of CB2 from an inactive to an “activated/phosphorylated” state [Carayon 1998, Rayman 2004]. It is plausible that their findings actually mirrored ours, but features related to receptor localization were not appreciated due to technical limitations. Given the unique CB2 signature of the activated B cell population, we entertained two possible hypotheses based on the existing literature.

It makes most sense to focus on limiting youth access rather than banning e-cigarettes completely.

Substance use screening for women of reproductive age prior to conception may be useful for identifying frequent users who are most at risk for continued use during pregnancy. Screening and documentation of patients’ use of tobacco and delivery of brief cessation counseling are now routine in many US healthcare systems, and some healthcare systems now also screen all patients for unhealthy alcohol use as part of standard primary care. Our results indicate that screening, along with brief interventions and referrals to treatment, may be particularly important for identifying and providing early intervention for women of reproductive age prior to conception who may be at greater risk for prenatal alcohol and nicotine use. It is important that these conversations are supportive and non-punitive, focused on providing education and support to help women make informed decisions about substance use to increase the likelihood of future substance-free pregnancies. Women’s health clinicians should also discuss risks associated with prenatal substance use at prenatal intake appointments to ensure that all patients receive the recommendation for complete abstinence throughout the pregnancy. Further, education about tracking one’s menstrual cycle for earlier recognition of pregnancy could potentially help women stop alcohol or nicotine use earlier, particularly in cases where women are not actively trying to conceive. Future studies that examine pregnancy intentions may be useful to understand whether trends in prenatal alcohol and nicotine use vary among women whose pregnancies are intended versus unintended. In contrast to the declines in the prevalence and frequency of alcohol and nicotine use among pregnant women seen in the current study, recent studies have found increases in the frequency and prevalence of cannabis use during pregnancy.

Cannabis use during pregnancy commonly co-occurs with alcohol and nicotine use, and additional studies are needed to better understand patterns of co-use of alcohol, nicotine, and cannabis among pregnant women over time. This study has a number of strengths, including a large sample of diverse pregnant women universally screened for alcohol and nicotine use as part of standard prenatal care, data on self-reported frequency of use both in the year before pregnancy and during pregnancy, and repeated cross-sectional data spanning nine years. There are also several study limitations. Our sample was limited to pregnant women who completed the self-reported substance use screening questionnaire as part of standard prenatal care. Findings may not be generalizable to pregnant women who did not complete the self-reported substance use screening questionnaire or to those who do not receive prenatal care, who may be more likely to use substances during pregnancy. Data on self-reported alcohol and nicotine use came from the initial prenatal visit and do not reflect continued use throughout pregnancy. We were unable to differentiate alcohol and nicotine use in pregnancy that occurred before versus after women realized they were pregnant, and many women in our sample who used alcohol or nicotine while pregnant may have stopped as soon as they became aware of their pregnancy. Finally, our study may underestimate both the prevalence and frequency of alcohol and nicotine use before and during pregnancy, as women may choose not to disclose their use to their healthcare provider.

Electronic nicotine delivery devices were developed as a less harmful source of nicotine to help cigarette smokers quit smoking.

A recent large controlled clinical trial and epidemiological data support the idea that the use of e-cigarettes in smokers who are motivated to quit can promote quitting, and that e-cigarettes are more acceptable than conventional nicotine replacement medications. However, the U.S. has experienced a recent surge of e-cigarette use among middle and high school students and young adults who are not using them to quit smoking. While most youth vaping is experimental, as evidenced by use on 10 days or fewer per month, a sizable fraction are vaping more frequently and some are becoming highly dependent on nicotine. Clearly, there is no health benefit for non-smokers who are using e-cigarettes. The question remains as to the nature and magnitude of adverse health consequences of nicotine vaping among non-smoking youth. To date, the main adverse effects that are documented in youth who vape nicotine are respiratory – cough and worsening of asthma. However, Faulcon and colleagues from the Center for Tobacco Products of the FDA raise another worrisome health concern. These authors describe a series of 122 cases of seizures and other neurological symptoms associated with e-cigarette use that were spontaneously reported to the FDA or the American Association of Poison Control Centers between December 2010 and January 2019. The authors suggest that because of its known pro-convulsant effects, nicotine might be responsible for these events. The authors and the FDA appropriately request that healthcare providers assess the use of e-cigarettes in patients who present with seizures and submit reports to the FDA when e-cigarettes are involved. However, examination of the spontaneous reports, details of which are provided in the supplemental table, raises questions about a causal link, which needs to be considered in assessing the actual health threat of nicotine vaping for youth. A big question is why nicotine inhaled from e-cigarettes should cause seizures, while nicotine from conventional cigarettes does not. Nicotine intake from e-cigarettes is typically similar to or lower than that from tobacco cigarettes.

One needs to take in a very large dose of nicotine to cause seizures, and such an exposure would be expected to produce other signs of nicotine intoxication. Also, toxicity after inhaling nicotine would be expected to occur quickly, as brain levels peak within minutes of inhalation, and would be expected to resolve within a few hours. As a first step in interpreting the association between vaping and neurological events, one needs to consider what the events actually were. Descriptions of the seizure events were provided by self-report and are not always clear. In some cases, tonic-clonic seizures are reported; in other cases, shaking or seizure-like activity is reported. In most cases, records of a medical evaluation are not available. Nicotine can cause anxiety attacks and involuntary muscle contractions, but these are not seizures. Of note, in 54 cases the reporters continued to vape nicotine after the first event and had recurrent seizure events. Continued use would be surprising if the user had previously experienced a real seizure. The probability of causation can be analyzed by considering three elements: 1) biological plausibility; 2) timing of the event in relation to the dosing of the product; and 3) the presence of alternative explanations. Biological plausibility. With respect to biological plausibility, as mentioned above, a nicotine overdose can cause seizures, and nicotine can cause seizures in some animal models of epilepsy. Seizures have been observed in adults who were poisoned with nicotine and in young children who have consumed liquid nicotine, including nicotine-containing e-liquids. Severe nicotine poisoning is expected to cause nausea, vomiting, pallor, sweating, abdominal pain, salivation, lacrimation, muscle weakness, confusion, and lethargy before one experiences seizures. These symptoms have been reported after oral or dermal exposure; it is possible that inhalation of a high dose of nicotine may produce a different syndrome, but it seems unlikely that seizures would appear without other manifestations of systemic toxicity. While nicotine overdose can cause seizures, lower doses of nicotine may have anticonvulsant activity in people. An anticonvulsant effect of transdermal nicotine has been reported in people with autosomal dominant nocturnal frontal lobe epilepsy and in focal epilepsy. Cigarette smoking per se is associated with an increased risk of seizures, but this appears to be due to medical complications of smoking rather than acute effects of nicotine. Another biological plausibility issue relates to several spontaneous reports of recurrent seizures in the absence of vaping. As mentioned before, the effects of nicotine are relatively brief and cannot explain recurrent seizures at a later time. Most likely, these individuals have a seizure disorder. Whether nicotine can trigger a seizure in a person with an underlying seizure disorder is unclear. Timing of events. The timing of seizures in relation to vaping in the series was quite variable. Sixty-two percent had a seizure within 30 minutes, and a few had seizures immediately after vaping. If inhaled nicotine caused seizures, seizures would be expected to occur within minutes of vaping, when brain levels are highest; however, many of the seizure cases reported a much longer time lag. In eight cases, a seizure was reported after first use, and in some cases after a single puff.

Generally, a novice vaper inhales the e-cigarette aerosol inefficiently, is exposed to less nicotine than an experienced vaper, and takes in less nicotine than a person gets from smoking a cigarette. It is hard to imagine that the dose of nicotine in a single puff would be sufficient to cause a seizure. Alternative explanations. Many of the case reports suggested alternative explanations for seizures. Some individuals had known seizure disorders, and some used cannabis and other drugs in addition to nicotine. Some vaping liquids have been reported to be adulterated with synthetic cannabinoids, cocaine, and/or caffeine, all of which can produce seizures. Could a chemical other than nicotine be the cause of seizures with e-cigarette use? The typical e-liquid contains nicotine, propylene glycol, glycerin, and flavoring chemicals. Propylene glycol and glycerin are not known to cause seizures, although at high heating temperatures they may be degraded to form toxic chemicals such as formaldehyde, acetaldehyde, and acrolein. The amounts of these chemicals are, however, usually lower than those found in cigarette smoke, and cigarette smoking does not acutely cause seizures. Some flavoring chemicals are cytotoxic, but there are no reports of these chemicals causing seizures. Analysis of the case reports in the paper by Faulcon and colleagues raises many questions about the nature of the seizures and other events, and about whether there is a causal link to nicotine vaping. A formal causation analysis, which has not yet been done, would likely indicate possible causation at most. At this point in time, I would not consider seizures to be a potential adverse effect that should influence the decision of an adult smoker to use e-cigarettes to try to stop smoking conventional cigarettes. However, the possibility of neurological events reported by young vapers should not be ignored. The U.S. is currently experiencing an outbreak of cases of acute lung injury in young people from vaping illicit cannabis products and possibly some adulterated nicotine liquids. Similar to the guidance given to healthcare providers regarding acute lung injury, providers should be aware of a possible link between nicotine vaping and neurological events, should carefully evaluate medical causes of such events, including a detailed neurological evaluation and biochemical screens for illicit drug use, should collect vaping devices and liquids for later analysis if possible, and should report such cases to the FDA and state health departments. Of course, the best solution to address the concern about seizures from e-cigarette use is to eliminate vaping by non-smoking youth. Some public health authorities and politicians have urged banning the sale of e-cigarettes completely to reduce e-cigarette use in youth. The public health cost of such a policy would be to deny adult smokers the availability of a cessation aid that may be life-saving. Policies that ban sales of e-cigarettes in gas stations and convenience stores but allow sales in specialty tobacco and vape shops, and over the internet, where age verification of purchases can be enforced, are reasonable. In this way, youth can hopefully be protected from harm while the potential benefits of ENDS in reducing the devastating harms from smoking in adults are preserved.

According to the Monitoring the Future Survey [Johnston 2016], sponsored by the National Institute on Drug Abuse, marijuana is the most commonly abused illicit drug in the United States.
As of 2015, ~23% of young adults responding to this national survey reported consumption of marijuana in the past 30 days. By comparison, during this same time interval, the national prevalence of tobacco smoking among young adults was reported at 16%, followed by amphetamine consumption at 3% and cocaine consumption at 2%. These striking differences in use are fueled in part by the growing perception that marijuana use is not harmful [Okaneku 2015] and by the growing number of states that have legalized marijuana for medicinal and/or recreational use [Wilkinson 2016]. This setting, together with the growing interest in medicinal applications of cannabis products, makes it essential that we understand the human biology and toxicology of marijuana.

The reciprocity index is the proportion of ties that were reciprocal.

We also control for how ego’s use of one substance was influenced by alters’ use of the two other substances. In the network equation, we include endogenous network effects and homophily selection effects for each substance use behavior, as well as additional covariates such as race, gender, grade, and parental education, as suggested by score-type tests. A total of 501 students in Sunshine High and 166 students in Jefferson High were 12th-graders at t1 and t2 and graduated at t3. These 667 students were treated as structural zeros in the networks during the last wave (see the sketch below). Due to a survey implementation error in Add Health, some adolescents could only nominate one female and one male friend at t2 and t3. We account for this with a limited nomination variable in the network equation. Method of Moments estimation is used to estimate the behavior and network parameters in each model so that the target statistics for behaviors and networks are reproduced as closely as possible. We assess model convergence using the t statistics for deviations from target values and the overall maximum convergence ratio. A post hoc time heterogeneity test found no evidence that the co-evolution of substance use behaviors and friendship networks differed significantly across the two time periods, providing no indication of estimation or specification problems. We also perform goodness-of-fit testing for key network statistics in both schools and display the results in the S1 File. Besides the main SAB model for each school sample, we estimate ancillary models that test whether the interdependent effects are symmetric in increasing and decreasing substance use. This is accomplished by differentiating the “creation” function and the “endowment” function in RSiena.
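As a concrete illustration of the structural-zero coding step mentioned above, the minimal sketch below marks graduated students as structurally absent in the wave-3 friendship adjacency matrix. The array sizes and variable names are hypothetical, and the use of the value 10 for structural zeros follows RSiena's documented convention; this is an assumption about how such data could be prepared, not a reproduction of the authors' scripts.

```python
import numpy as np

# Hypothetical inputs: a 0/1 friendship adjacency matrix at t3 and the row
# indices of students who graduated after t2 and left the school network.
n_students = 1200                                   # placeholder network size
adj_t3 = np.zeros((n_students, n_students), int)    # placeholder wave-3 ties
graduated = np.array([3, 17, 42])                   # placeholder indices

# RSiena conventionally codes structurally impossible ties as 10
# ("structural zeros"): graduates can neither send nor receive ties at t3.
STRUCTURAL_ZERO = 10
adj_t3[graduated, :] = STRUCTURAL_ZERO
adj_t3[:, graduated] = STRUCTURAL_ZERO
np.fill_diagonal(adj_t3, 0)                         # self-ties are ignored
```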

This technique has been applied to explore the asymmetric peer influence effect on adolescent smoking initiation and cessation. A methodological challenge we face is that whereas the questions about smoking and drinking behavior were asked at all three waves, questions about marijuana use were only asked at t2 and t3. One approach would be to discard all the information at t1, but this strategy would reduce the efficiency of the analysis, increase standard errors, and decrease statistical power. Instead, we reconstruct adolescent marijuana use at t1 based on four questions. Fig 2 provides a flow chart of the logic and shows that we in fact have a considerable amount of information that can help us reconstruct probable values for the vast majority of the adolescents. First, if an adolescent had never tried marijuana at t2, s/he could not have used it at t1, so we can safely code them as zero at t1. Next, if an adolescent had tried marijuana at t2 but the age at which he or she first tried it was above his or her age at t1, s/he could not have used it at t1, so we can again safely code them as zero at t1. Finally, if an adolescent had tried marijuana at t2 and the age of first use was below his or her age at t1, we utilize information from two questions asked at t2: “During your life, how many times have you used marijuana?” and “During the past 30 days, how many times did you use marijuana?” In a few instances the difference between these two variables is zero, which appears to be a reporting error, as these respondents reported all of their usage in the last 30 days yet indicated that they started at a younger age. We code them as zero at t1 under the presumption that this earlier usage was very limited, and perhaps experimental. However, if the difference is non-zero, since the In-School Survey was conducted at least six months before the wave-1 In-Home Survey, we divide this difference by 5 to average over five months [i.e., (lifetime uses − past-30-day uses)/5]. Those with values less than 1 were categorized as non-users at t1, those with values between 1 and 10 were categorized as light users, and those with values above 10 were categorized as heavy users.
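The branching rules above can be summarized in a short sketch. The function and argument names are hypothetical, the 0/1/2 coding of the reconstructed categories is an assumption, and the sketch is meant only to make the decision logic explicit, not to reproduce the authors' code.

```python
def reconstruct_t1_marijuana(ever_tried_t2, age_first_use, age_at_t1,
                             lifetime_uses_t2, past30_uses_t2):
    """Reconstruct t1 marijuana use (0 = none, 1 = light, 2 = heavy)
    from the wave-2 questions, following the decision rules in the text."""
    # Never tried marijuana by t2: could not have used it at t1.
    if not ever_tried_t2:
        return 0
    # First tried at an age above the respondent's age at t1: none at t1.
    if age_first_use > age_at_t1:
        return 0
    # Otherwise, earlier use is the lifetime count minus the past-30-day count;
    # a zero difference is presumed to reflect limited, experimental use.
    earlier_uses = lifetime_uses_t2 - past30_uses_t2
    if earlier_uses == 0:
        return 0
    # Average over the ~5 months between the In-School Survey and wave 1.
    monthly = earlier_uses / 5
    if monthly < 1:
        return 0      # non-user at t1
    elif monthly <= 10:
        return 1      # light user at t1
    else:
        return 2      # heavy user at t1
```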

Light users comprised about 16% of adolescents in Sunshine High and 17% of adolescents in Jefferson High. Likewise, heavy users comprised about 5% of the adolescents in Sunshine High and 8% of the adolescents in Jefferson High. Overall, this reconstruction strategy enabled us to estimate a three-wave SAB model for each of the two samples without discarding any data. The last step of the reconstruction procedure for the heavy marijuana users is not perfectly accurate and might mistakenly categorize a few light users as heavy users, since they could have used marijuana outside of the last five months. The proportion of cases that might have been misclassified is less than 10%. Furthermore, sensitivity tests in which the level of marijuana use for these uncertain cases was randomly assigned to “light” or “heavy” use exhibited similar results over a large number of samples.

Regarding missing data, for students in Sunshine High the response rates were 76% at t1, 82% at t2, and 75% at t3. In Jefferson High the response rates were 79%, 81%, and 74% across the three waves. We imputed missing network data using the technique described in Wang et al., given the evidence that failing to do so can result in biased estimates. Other actor attributes at t1 were imputed using multiple imputation by chained equations as implemented in Stata. For the later waves, missing data are handled within the Stochastic Actor-Based models in the RSiena software, as suggested by Huisman and Steglich and Ripley et al. The 501 and 166 students who graduated at t3 and were no longer in the network are treated as structural zeros in the Stochastic Actor-Based models at the last wave.

Network statistics are measured at all three waves. As shown in Table 1, in both school samples the number of out-going ties decreased over time due to limited nomination restrictions, graduation, moving, dropping out, and sample attrition/non-response/missing network data. The proportion of reciprocal ties among all out-going ties was 4% to 10% higher in Jefferson High than in Sunshine High at each wave. The transitivity index, the proportion of 2-paths that were transitive, is similar in the two schools. The Jaccard index measures network stability between consecutive waves.
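For readers less familiar with these descriptive measures, the sketch below shows one way the reciprocity, transitivity, and Jaccard stability indices could be computed from 0/1 adjacency matrices (ties coded 1; missing or structural codes anything else). The function names are illustrative, and this is not the code used to produce Table 1.

```python
import numpy as np

def reciprocity_index(adj):
    """Proportion of ties i->j for which the reciprocal tie j->i also exists."""
    ties = adj == 1
    return (ties & ties.T).sum() / ties.sum()

def transitivity_index(adj):
    """Proportion of 2-paths i->j->k (i != k) that are closed by a tie i->k."""
    a = (adj == 1).astype(int)
    two_paths = a @ a
    np.fill_diagonal(two_paths, 0)      # drop paths of the form i -> j -> i
    closed = (two_paths * a).sum()      # 2-paths where the closing tie is present
    return closed / two_paths.sum()

def jaccard_index(adj_prev, adj_next):
    """Ties present at both waves divided by ties present at either wave."""
    prev, nxt = adj_prev == 1, adj_next == 1
    return (prev & nxt).sum() / (prev | nxt).sum()
```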

There were substantial changes in friendship ties across waves, with the Jaccard index staying at .16 in Sunshine High and ranging from .21 to .22 in Jefferson High. As noted above, due to a survey implementation error in Add Health, some adolescents could only nominate one female and one male friend at t2 and t3; most of these limited nominations occurred at wave 2 and involved less than 5% of adolescents in the two schools. With respect to smoking behavior, there were between 69% and 78% non-smokers in Sunshine High over the three waves, and between 7% and 10% heavy smokers. In Jefferson High, there were between 42% and 53% non-smokers and between 26% and 32% heavy smokers. Sunshine High also had more non-drinkers than Jefferson High, and more non-users of marijuana. The descriptive statistics for the covariates are reported in the lower part of Table 1.

As shown in Table 2, our estimated SAB model includes a smoking behavior equation, a drinking behavior equation, a marijuana use equation, and a network equation. Based on the smoking behavior equation, those who were one point higher on the marijuana scale were 25% [exp(b) = 1.25] and 15% [exp(b) = 1.15] more likely to increase their own smoking behavior at the next time point in Sunshine High and Jefferson High, respectively. Those who drank alcohol did not smoke more over time. There is no evidence of cross-substance influence, as having more friends who drank or used marijuana did not impact a respondent’s own smoking over time. In ancillary models, we measured the average level of drinking or marijuana use among friends, and these effects were also statistically nonsignificant. These results are shown in S1 Table. Regarding the other measures in the smoking behavior equation, we detect a negative smoking behavior linear shape parameter in both school samples along with a positive smoking behavior quadratic shape parameter. This suggests that adolescents were inclined to adopt lower levels of smoking behavior over time, but they also tended to stay as or become non-smokers or to escalate to heavy smokers due to a pull towards the extreme values of this scale. Turning to the peer influence effect, we find that adolescents’ own smoking levels were affected by those of their best friends in both schools. There is no evidence that parental support or monitoring reduced levels of smoking over time in either sample. African Americans and Latinos smoked less than Whites in Sunshine High. Depressive symptoms were found to increase smoking behavior in Jefferson High.

In the drinking behavior equation, we find that an adolescent who was one point higher on the marijuana use measure was 22% and 16% more likely to increase their own alcohol use at the next time point in Sunshine High and Jefferson High, respectively. However, respondents’ own cigarette use was not associated with greater drinking over time. There is no evidence that friends’ smoking behavior or marijuana use affected respondents’ drinking behavior; this was the case whether measured as the number of friends who smoked or used marijuana or as the average level of such behaviors. A negative linear shape effect and a positive quadratic shape effect are also confirmed for drinking behavior.
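To make the percentage interpretation explicit: the reported percentages are obtained by exponentiating the estimated coefficient and subtracting one. The coefficient value below is assumed purely for illustration, since Table 2 itself is not reproduced here.

```python
import math

# Illustrative only: an assumed coefficient from a behavior objective function.
beta = 0.223                                        # hypothetical estimate
odds_multiplier = math.exp(beta)                    # ~1.25, i.e., "exp(b) = 1.25"
percent_more_likely = (odds_multiplier - 1) * 100   # ~25% more likely
print(round(odds_multiplier, 2), round(percent_more_likely, 1))
```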

An adolescent’s drinking level was positively predicted by that of his or her best friends. Whereas there is no evidence in these two networks that high levels of parental support affected adolescents’ drinking levels, we do see that higher levels of parental monitoring were associated with lower levels of drinking behavior over time in Jefferson High. In Sunshine High, African Americans were found to drink less than Whites, and depressive symptoms were found to increase drinking levels. The marijuana use equation provides no evidence that increasing use of the other two substances leads to increasing marijuana use. We once again see no evidence of cross-substance influence, as neither the number of friends who smoked or drank nor the average smoking or drinking level of friends is related to ego’s marijuana use levels over time. A negative linear shape effect and a positive quadratic shape effect are also detected for marijuana use behavior. Across both samples there is very strong evidence of a peer influence effect from an adolescent’s best friends’ marijuana use to the individual’s own marijuana use. Higher levels of parental support or monitoring were not found to reduce levels of marijuana use over time. For all three substance use behaviors, there was no evidence that adolescents who were more “popular” were any more likely to increase their substance use over time.

In the network equation, the expected patterns of endogenous network structural effects are detected across samples. At the dyadic level, adolescents did not randomly nominate peers as friends, since friendship ties inherently require the investment of time and energy, as indicated by the negative out-degree parameters; instead, adolescents tended to nominate peers who had already nominated them as friends, as indicated by the positive reciprocity parameters. At the triadic level, adolescents tended to nominate a friend’s friend as a friend but avoided forming 3-person cyclic relationships. The negative out-degree/in-degree popularity parameters and out-out degree assortativity parameters suggest that adolescents were less likely to befriend peers who had already made or received many friendship nominations or who had similar out-degrees. Instead, they were more likely to befriend peers with similar in-degrees, as indicated by the positive in-in degree assortativity parameters. We also find that adolescents were more likely to nominate peers as friends if they were of the same gender, race, and grade. Grade is a particularly strong effect, as adolescents were 86% and 77% more likely to nominate a friend if they were in the same grade than if they were in a different grade in Sunshine High and Jefferson High, respectively. Lastly, the limited nomination parameter shows that, for adolescents who encountered the administrative error of being limited to nominating only one male or one female friend, the odds of nominating friends are adjusted by the SAB models to be 132% larger in Sunshine High and 297% larger in Jefferson High than the odds for those without this restriction.