The most common formats for these tests are the ELISA and the lateral flow assay.

The design and quality of the binding reagents, along with other test conditions such as sample quality, play a key role in establishing the sensitivity and specificity of a test, which determine the proportions of false negative and false positive results, respectively. Although the recombinant protein mass needed for diagnostic testing is relatively small, the number of tests needed for the global population is massive, given that many individuals will need multiple and/or frequent tests. For example, 8 billion tests would require a total of ~2.5 kg of purified recombinant protein, which is not an insurmountable target. However, although the production of soluble trimeric full-length S protein by transient transfection in HEK293 cells has been improved by process optimization, current titers are only ~5 mg L−1 after 92 h. Given a theoretical recovery of 50% during purification, a fermentation volume of 1,000 m3 would be required to meet the demand for 2.5 kg of this product. Furthermore, to our knowledge, the transient transfection of mammalian cells has only been scaled up to ~0.1 m3. The transient expression of such protein-based diagnostic reagents in plants could increase productivity while offering lower costs and more flexibility to meet fluctuating demands or the need for variant products. Furthermore, diagnostic reagents can include purification tags with no safety restrictions, and quality criteria are less stringent compared to an injectable vaccine or therapeutic. Several companies have risen to the challenge of producing such reagents in plants, including Diamante, Leaf Expression Systems, and a collaborative venture between PlantForm, Cape Bio Pharms, Inno-3B, and Microbix.
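
The supply figures quoted above follow from a short back-of-the-envelope calculation. The sketch below uses only the numbers stated in the text (8 billion tests, ~2.5 kg total protein, ~5 mg L−1 titer, 50% recovery); the variable names are illustrative only.

```python
# Back-of-the-envelope check of the diagnostic reagent supply figures.
# All inputs come from the text above.

tests = 8e9                  # tests for the global population
protein_total_kg = 2.5       # total purified recombinant protein required
titer_g_per_l = 5e-3         # ~5 mg/L soluble trimeric S protein (HEK293, 92 h)
recovery = 0.5               # theoretical purification recovery

protein_per_test_ug = protein_total_kg * 1e9 / tests           # micrograms per test
volume_l = (protein_total_kg * 1e3) / (titer_g_per_l * recovery)
volume_m3 = volume_l / 1e3

print(f"protein per test: ~{protein_per_test_ug:.2f} ug")      # ~0.31 ug
print(f"required culture volume: ~{volume_m3:,.0f} m^3")       # ~1,000 m^3
```

At ~0.3 µg of protein per test, the limiting factor is clearly the volumetric productivity of the expression system rather than the per-test protein requirement.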

Resilience is the state of preparedness of a system, defining its ability to withstand unexpected, disastrous events and to preserve critical functionality while responding quickly so that normal functionality can be restored. The concept was popularized by the 2011 Fukushima nuclear accident but received little attention in the pharmaceutical sector until COVID-19. Of the 277 publications retrieved from the National Library of Medicine on July 9th 2020 using the search terms “resilience” and “pandemic,” 82 were evenly distributed between 2002 and 2019 and 195 were published between January and July 2020. Resilience can be analyzed by defining up to five stages of a resilient system under stress, namely prevent, prepare, protect, respond, and recover. Here, prevent includes all measures to avoid the problem altogether. In the context of COVID-19, this may have involved the banning of bush meat from markets in densely populated areas. The prepare stage summarizes activities that build capacities to protect a system and pre-empt a disruptive event. In a pandemic scenario, this can include stockpiling personal protective equipment but also ensuring the availability of rapid-response bio-pharmaceutical manufacturing capacity. The protect and respond stages involve measures that limit the loss of system functionality and minimize the time until it starts to recover, respectively. In terms of a disease outbreak, the former can consist of quarantining infected persons, especially in the healthcare sector, to avoid super-spreaders and maintain healthcare system operability. The response measures may include passive strategies such as the adjustment of legislation, including social distancing and public testing regimes, or active steps such as the development of vaccines and therapeutics.
Finally, the recover phase is characterized by regained functionality, for example by relaxing the protect and respond measures that limit system functionality, such as production lockdowns.

Ultimately, this can result in an increased overall system functionality at the end of a resilience cycle and before the start of the next “iteration”. For example, a system such as society can be better prepared for a pandemic situation due to increased pharmaceutical production capacity or platforms such as plants. From our perspective, the production of recombinant proteins in plants could support the engineering of increased resilience primarily during the prepare and respond stages and, to a lesser extent, during the prevent and recover stages. During the prepare stage, it is important to build sufficient global production capacity for recombinant proteins to mount a rapid and scalable response to a pandemic. These capacities can then be used during the respond stage to produce appropriate quantities of recombinant protein for diagnostic, prophylactic, or therapeutic purposes as discussed above. The speed of the plant system will reduce the time taken to launch the respond and recover stages, and the higher the production capacity, the more system functionality can be maintained. The same capacities can also be used for the large-scale production of vaccines in transgenic plants if the corresponding pathogen has conserved antigens. This would support the prevent stage by ensuring a large portion of the global population can be supplied with safe and low-cost vaccines, for example, to avoid recurrent outbreaks of the disease. Similarly, existing agricultural capacities may be re-directed to pharmaceutical production as recently discussed. There will be indirect benefits during the recover phase because the speed of plant-based production systems will allow the earlier implementation of measures that bring system functionality back to normal, or at least to a “new or next normal.”

Therefore, we conclude that plant-based production systems can contribute substantially to the resilience of public healthcare systems in the context of an emergency pandemic. The cost of pharmaceuticals is increasing in the United States at the global rate of inflation, and a large part of the world’s population cannot afford the cost of medicines produced in developed nations. Technical advances that reduce the costs of production and help to ensure that medicines remain accessible, especially to developing nations, are therefore welcome. Healthcare in the developing world is tied directly to social and political will, or the extent of government engagement in the execution of healthcare agendas and policies. Specifically, community-based bodies are the primary enforcers of government programs and policies to improve the health of the local population. Planning for the expansion of a bio-pharmaceutical manufacturing program, to ensure that sufficient product will be available to satisfy the projected market demand, should ideally begin during the early stages of product development. Efficient planning facilitates reductions in the cost and time of the overall development process to shorten the time to market, enabling faster recouping of the R&D investment and subsequent profitability. In addition to the cost of the active pharmaceutical ingredient (API), the final product form, the length and complexity of the clinical program for any given indication, and the course of therapy have a major impact on cost. The cost of a pharmaceutical product therefore depends on multiple economic factors that ultimately shape how a product’s sales price is determined. Product-dependent costs and pricing are common to all products regardless of platform. Plant-based systems offer several options in terms of equipment and the scheduling of upstream production and downstream processing (DSP), including their integration and synchronization.
Early process analysis is necessary to translate R&D methods into manufacturing processes. The efficiency of this translation has a substantial impact on costs, particularly if processes are frozen during early clinical development and must be changed at a subsequent stage. Process-dependent costs begin with production of the API. The manufacturing costs for plant-made pharmaceuticals (PMPs) are determined by upstream production and downstream recovery and purification costs. The cost of bio-pharmaceutical manufacturing depends mostly on protein accumulation levels, the overall process yield, and the production scale. Techno-economic assessment (TEA) models for the manufacture of bio-pharmaceuticals are rarely presented in detail, but analysis of the small number of available PMP studies has shown that the production of bio-pharmaceuticals in plants can be economically more attractive than in other platforms. A simplified TEA model was recently proposed for the manufacture of mAbs using different systems, and this can be applied to any production platform, at least in principle, by focusing on the universal factors that determine the cost and efficiency of bulk drug manufacturing. Minimal processing may be sufficient for oral vaccines and some environmental detection applications and can thus help to limit process development time and production costs. However, most APIs produced in plants are subject to the same stringent regulation as other biologics, even in an emergency pandemic scenario. It is, therefore, important to balance production costs with potential delays in approval that can result from the use of certain process steps or techniques.

For example, flocculants can reduce consumables costs during clarification by 50%, but the flocculants that have been tested are not yet approved for use in pharmaceutical manufacturing. Similarly, elastin-like peptides and other fusion tags can reduce the number of unit operations in a purification process, streamlining development and production, but only a few are approved for clinical applications. At an early pandemic response stage, speed is likely to be more important than cost, and production will therefore rely on well-characterized unit operations that avoid the need for process additives such as flocculants. Single-use equipment is also likely to be favored under these circumstances because, although more expensive than permanent stainless-steel equipment, it is more flexible and there is no need for cleaning or cleaning validation between batches or campaigns, allowing rapid switching to new product variants if required. As the situation matures, a shift toward cost-saving operations and multi-use equipment would be more beneficial. An important question is whether current countermeasure production capacity is sufficient to meet the needs for COVID-19 therapeutics, vaccines, and diagnostics. For example, a recent report from the Duke Margolis Center for Health Policy estimated that ~22 million doses of therapeutic mAbs would be required to meet demand in the United States alone, assuming one dose per patient and using rates of infection estimated in June 2020. The current demand for non-COVID-19 mAbs in the United States is >50 million doses per year, so COVID-19 has triggered a 44% increase in demand in terms of doses. Although the mAb doses required for pre-exposure and post-exposure COVID-19 treatment will not be known until the completion of clinical trials, the dose is likely to be 1–10 g per patient based on the dose ranges being tested and experience from other disease outbreaks such as Ebola.
Accordingly, 22–222 tons of mAb would be needed per year in the United States alone. The population of the United States represents ~4.25% of the world’s population, suggesting that 500–5,200 tons of mAb would be needed to meet global demand. The combined capacity of mammalian cell bioreactors is ~6 million liters, and even assuming mAb titers of 2.2 g L−1 (the mean titer for well-optimized, large-scale commercial bioreactors), a 13-day fed-batch culture cycle, and a 30% loss in downstream recovery, the entirety of global mammalian cell bioreactor capacity could only provide ~259 tons of mAb per year. In other words, if all the mammalian cell bioreactors in the world were repurposed for COVID-19 mAb production, they could meet only ~50% of the projected global demand if low doses were effective and only ~5% if high doses were required. This illustrates the importance of identifying mAbs that are effective at the lowest possible dose and production systems that achieve high titers and efficient downstream recovery, as well as the need for additional production platforms that can be mobilized quickly and that do not rely on bioreactor capacity. Furthermore, it is not clear how much of the existing bioreactor capacity could be repurposed quickly to satisfy pandemic needs, considering that ~78% of that capacity is dedicated to in-house products, many of which treat cancer and other life-threatening diseases. The demand-on-capacity for vaccines will fare better, given that the amount of protein per dose is 1 × 10⁴ to 1 × 10⁶ times lower than for a therapeutic mAb. Even so, most of the global population may need to be vaccinated against SARS-CoV-2 over the next 2–3 years to eradicate the disease, and it is unclear whether sufficient quantities of vaccine can be made available, even if adjuvants are used to reduce immunogen dose levels and/or the number of administrations required to induce protection.
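
The capacity-versus-demand estimate above follows directly from the stated assumptions (6 million liters of bioreactor capacity, 2.2 g L−1 titer, 13-day fed-batch cycles, 30% downstream loss), as this short sketch reproduces:

```python
# Reproducing the global mAb capacity-versus-demand estimate from the text.

bioreactor_capacity_l = 6e6    # combined mammalian cell bioreactor capacity
titer_g_per_l = 2.2            # mean titer of well-optimized commercial processes
batch_days = 13                # fed-batch culture cycle
dsp_recovery = 0.7             # 30% loss during downstream recovery

batches_per_year = 365 / batch_days                     # ~28 batches per year
tons_per_year = (bioreactor_capacity_l * titer_g_per_l
                 * dsp_recovery * batches_per_year) / 1e6

demand_low, demand_high = 500, 5200                     # tons/year at 1 g and 10 g doses
print(f"global output: ~{tons_per_year:.0f} t/year")    # ~259 t/year
print(f"demand coverage: {tons_per_year/demand_low:.0%} (low dose), "
      f"{tons_per_year/demand_high:.0%} (high dose)")
```

The ~50%/~5% coverage figures are simply the ratio of this ~259 t/year ceiling to the 500–5,200 t/year demand range.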
Even if an effective vaccine or therapeutic is identified, it may be challenging to manufacture and distribute this product at the scale required to immunize or treat most of the world’s population. In addition, booster immunizations, viral antigen drift necessitating immunogen revision/optimization, adjuvant availability, and standard losses during storage, transport, and deployment may still make it difficult to close the supply gap.

The revision will be implemented in steps and could facilitate the field-based production of PMPs.

The system can be integrated with the cloning of large candidate libraries, allowing a throughput of >1,000 samples per week, and protein is produced 3 days after infiltration. The translatability of cell pack data to intact plants was successfully demonstrated for three mAbs and several other proteins, including a toxin. Therefore, cell packs allow the rapid and automated screening of product candidates such as vaccines and diagnostic reagents. In addition to recombinant proteins, the technology can, in principle, also be used to produce virus-like particles based on plant viruses, which further broadens its applicability for screening and product evaluation, but to our knowledge the corresponding results had not been published as of September 2020. In the future, plant cell packs could be combined with a recently developed method for rapid gene transfer to plant cells using carbon nanotubes. Such a combination would not be dependent on bacteria for cloning or gene transfer to plant cells, thereby reducing the overall duration of the process by an additional 2–3 days. For the rapid screening of even larger numbers of candidates, cost-efficient cell-free lysates based on plant cells have been developed and are commercially available in a ready-to-use kit format. Proteins can be synthesized in ~24 h, potentially in 384-well plates, and the yields, expressed as recombinant protein mass per volume of cell lysate, can reach 3 mg ml−1. Given costs of ~€1,160 ml−1 according to the manufacturer LenioBio, this translates to ~€400 mg−1 of protein, an order of magnitude less expensive than the SP6 system, which achieves 0.1 mg ml−1 at a cost of ~€360 ml−1 based on the company’s claims.
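
The per-milligram cost comparison above works out as follows. The yields and prices are those quoted from the manufacturers in the text; the dictionary labels are ours.

```python
# Cost per milligram of protein for the two cell-free systems quoted above.
systems = {
    "plant cell lysate (LenioBio)": {"yield_mg_per_ml": 3.0, "cost_eur_per_ml": 1160},
    "SP6 system":                   {"yield_mg_per_ml": 0.1, "cost_eur_per_ml": 360},
}
for name, s in systems.items():
    cost_per_mg = s["cost_eur_per_ml"] / s["yield_mg_per_ml"]
    print(f"{name}: ~EUR {cost_per_mg:,.0f} per mg")
# plant cell lysate: ~EUR 387 per mg (~EUR 400 as stated in the text)
# SP6 system:        ~EUR 3,600 per mg
```

Despite the higher per-milliliter price of the plant lysate, its ~30-fold higher yield is what drives the order-of-magnitude cost advantage per milligram of protein.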

Protocol duration and the necessary labor are comparable between the two systems, as are the proteins used to demonstrate high expression, e.g., luciferase. However, the scalability of the plant cell lysates is currently limited to several hundred milliliters, and transferability to intact plants has yet to be demonstrated, i.e., there is no information about how well product accumulation in lysates correlates with that in plant tissues. Such correlations could then form the basis for scaling up lysate-based production to good manufacturing practice (GMP)-compliant manufacturing in plants using existing facilities. Therefore, cell packs are currently the most appealing screening system due to their favorable balance of speed, throughput, and translatability to whole plants for large-scale production. In any pandemic, the pathogen genome has to be sequenced, made publicly available, and freely disseminated in the global scientific community to accelerate therapeutic and vaccine development. Once sequence information is available, a high priority is the rapid development, synthesis, and distribution of DNA sequences coding for individual viral open reading frames. These reagents are not only important for screening subunit vaccine targets but also serve as enabling tools for research into the structure, function, stability, and detection of the virus. Because many viral pathogens mutate over time, the sequencing of clinical virus samples is equally important to enable the development of countermeasures that keep pace with virus evolution.

To ensure the broadest impact, the gene constructs must be codon-optimized for expression in a variety of hosts; cloned into plasmids with appropriate promoters, purification tags, and watermark sequences that identify them as synthetic and allow their origin to be verified; and made widely available at minimal cost to researchers around the world. Not-for-profit plasmid repositories, such as Addgene and DNASU, in cooperation with global academic and industry contributors, play an important role in providing and sharing these reagents. However, the availability of codon-optimized genes for plants and the corresponding expression systems is often limited. For example, there were 41,247 mammalian, 16,560 bacterial, and 4,721 yeast expression vectors in the Addgene collection as of August 2020, but only 1,821 for plants, none of which contained SARS-CoV-2 proteins. Sharing plant-optimized SARS-CoV-2 synthetic biology resources among the academic and industry research community working on PMPs would further accelerate the response to this pandemic disease. Screening and process development can also be expedited by using modeling tools to identify relevant parameter combinations for experimental testing. For example, initial attempts have been made to establish correlations between genetic elements or protein structures and product accumulation in plants. Similarly, heuristic and model-based predictions can be used to optimize downstream processing unit operations, including chromatography. Because protein accumulation often depends on multiple parameters, it is typically more challenging to model than chromatography and probably needs to rely on data-driven rather than mechanistic models. Based on results obtained for antibody production, a combination of descriptive and mechanistic models can reduce the number of experiments, and thus the development time, by 75%, which is a substantial gain when trying to counteract a global pandemic such as COVID-19.

These models are particularly useful if combined with the high-throughput experiments described above. Techno-economic assessment and computer-aided design tools, based on engineering process models, can be used to design and size process equipment, solve material and energy balances, generate process flow sheets, establish scheduling, and identify process bottlenecks. TEA models have been developed and are publicly available for a variety of plant-based bio-manufacturing facilities, including whole-plant and plant cell bioreactor processes for the production of mAbs, antiviral lectins, therapeutics, and antimicrobial peptides. These tools are particularly useful for the development of new processes because they can indicate which areas would benefit most from focused research and development efforts to increase throughput, reduce process mass intensity, and minimize overall production costs. The rapid production of protein-based countermeasures for SARS-CoV-2 will most likely, at least initially, require bio-manufacturing processes based on transient expression rather than stable transgenic lines. Options include the transient transfection of mammalian cells, baculovirus-infected insect cell expression systems, cell-free expression systems for in vitro transcription and translation, and transient expression in plants. The longer-term production of these countermeasures may rely on mammalian or plant cell lines and/or transgenic plants, in which the expression cassette has been stably integrated into the host genome, but these will take months or even years to develop, optimize, and scale up. Among the available transient expression systems, only plants can be scaled up to meet the demand for COVID-19 countermeasures without the need for extensive supply chains and/or complex and expensive infrastructure, thus ensuring low production costs.
These manufacturing processes typically use Nicotiana benthamiana as the production host, and each plant can be regarded as a biodegradable, single-use bioreactor. The plants are grown either in greenhouses or indoors, either hydroponically or in a growth substrate, often in multiple layers to minimize the facility footprint, and under artificial lighting such as LEDs. In North America, large-scale commercial PMP facilities have been built in Bryan, TX, Owensboro, KY, Durham, NC, and Quebec, Canada. The plants are grown from seed until they reach 4–6 weeks of age before transient expression, which is typically achieved by infiltration using recombinant Agrobacterium tumefaciens carrying the expression cassette or by the introduction of a viral expression vector such as tobacco mosaic virus, for example, the GENEWARE platform. For transient expression by infiltration with A. tumefaciens, the plants are turned upside down and the aerial portions are submerged in the bacterial suspension. A moderate vacuum is applied for a few minutes, and when it is released, the bacteria are drawn into the interstitial spaces within the leaves. The plants are removed from the suspension and moved to an incubation room/chamber for 5–7 days for recombinant protein production. A recent adaptation of this process replaces vacuum infiltration with the aerial application of the A. tumefaciens suspension mixed with a surfactant. The reduced surface tension of the carrier solution allows the bacteria to enter the stomata, achieving a similar effect to agroinfiltration. This agrospray strategy can be applied anywhere, thus removing the need for vacuum infiltrators and associated equipment.

For transient expression using viral vectors, the viral suspension is mixed with an abrasive for application to the leaves using a pressurized spray, and the plants are incubated for 6–12 days as the recombinant protein is produced. Large-scale production facilities have an inventory of plants at various stages of growth, and these are processed in batches. Depending on the batch size, the vacuum infiltration throughput, and the target protein production kinetics, the infiltration/incubation process time is 5–8 days. The inoculation/incubation process is slightly longer, at 6–13 days. The overall batch time from seeding to harvest is 33–55 days depending on the optimal plant age, transient expression method, and target protein production kinetics. Importantly, plant growth can be de-coupled from infiltration so that plants are kept at the ready for instant use, which reduces the effective first-reaction batch time from gene to product to ~10–15 days if a platform downstream process is available. The time between batches can be reduced even further to match the longest unit operation in the upstream or downstream process. The number of plants available under normal operational scenarios is limited to avoid unnecessary expenditure, but more plants can be seeded and made available in the event of a pandemic emergency. This would allow various urgent manufacturing scenarios to be realized, for example, the provision of a vaccine candidate or other prophylactic to first-line response staff. The speed of transient expression in plants allows the rapid adaptation of a product even when the process has already reached manufacturing scale. For example, decisions about the nature of the recombinant protein product can be made as little as 2 weeks before harvest because the cultivation of bacteria takes less than 7 days and the post-infiltration incubation of plants takes ~5–7 days. By using large-scale cryo-stocks of ready-to-use A. tumefaciens, the decision can be delayed until the day of infiltration and thus until 5–7 days before harvesting the biomass. This flexibility is desirable in an early pandemic scenario because the latest information on improved drug properties can be channeled directly into production, for example, to produce the gram quantities of protein required for safety assessment, pre-clinical and clinical testing, or even compassionate use if the fatality rate of a disease is high. Although infiltration is typically a discontinuous process requiring stainless-steel equipment, due to the vacuum that must be applied to plants submerged in the bacterial suspension, most other steps in the production of PMPs can be designed for continuous operation, incorporating single-use equipment and thus complying with the proposed concept for bio-facilities of the future. Accordingly, continuous harvesting and extraction can be carried out using appropriate equipment such as screw presses, whereas continuous filtration and chromatography can take advantage of the same equipment successfully used with microbial and mammalian cell cultures. Therefore, plant-based production platforms can benefit from the same >4-fold increase in space-time yield that can be achieved by continuous processing with conventional cell-based systems. As a consequence, a larger amount of product can be delivered earlier, which can help to prevent the disease from spreading once a vaccine becomes available. In addition to conventional chromatography, several generic purification strategies have been developed to rapidly isolate products from crude plant extracts in a cost-effective manner. Due to their generic nature, these strategies typically require little optimization and can be applied immediately to products meeting the necessary requirements, which reduces the time needed to respond to a new disease.
For example, purification by ultrafiltration/diafiltration is attractive for both small and large molecules because they can be separated from plant host cell proteins (HCPs), which are typically 100–450 kDa in size, under gentle conditions such as neutral pH to ensure efficient recovery. This technique can also be used for simultaneous volume reduction and optional buffer exchange, reducing the overall process time and ensuring compatibility with subsequent chromatography steps. HCP removal triggered by increasing the temperature and/or reducing the pH is mostly limited to stable proteins such as antibodies, and the former method in particular may require extended product characterization to ensure that the function of products, such as vaccine candidates, is not compromised. The fusion of purification tags to a protein product can be tempting to accelerate process development when time is pressing during an ongoing pandemic. These tags can stabilize target proteins in planta while also facilitating purification by affinity chromatography or non-chromatographic methods such as aqueous two-phase systems. On the downside, such tags may trigger unwanted aggregation or immune responses that can reduce product activity or even safety.

These were specifically indicated and later excluded from the analysis.

Because high rates of cocaine and methamphetamine use have been noted among younger heart failure patients, and heart failure due to stimulant use may have a reversible component, targeted preventive and treatment efforts for young patients with drug use disorder may reduce the burden of heart failure. There is a paucity of literature investigating tobacco and substance use disorders in heart failure patients, especially among racial/ethnic subgroups. While Native American race was associated with an increased risk of alcohol use disorder, these patients also had high rates of tobacco and drug use disorders. Recent data from the National Survey on Drug Use and Health (NSDUH) show that American Indians or Alaska Natives have a higher prevalence of tobacco use and cigarette smoking than all other racial/ethnic groups. Black race was associated with substance, alcohol, and drug use disorder. Cocaine use disorder was highest among black heart failure hospitalizations, while amphetamine use disorder was highest for Asian/Pacific Islander (Asian/PI) heart failure hospitalizations. A prior study of 11,258 heart failure patients from the ADHERE-EM database found that self-reported illicit drug use with cocaine or methamphetamines was associated with black race compared to Caucasian race. Black men and women present with heart failure at a younger age and have the highest age-standardized hospitalization rates compared to other races/ethnicities in the US. Addressing underlying substance use disorders in black patients may reduce the burden of heart failure attributed to substances and reduce hospitalizations.

Conversely, Asian/PI males and females have the lowest hospitalization rates for heart failure compared to other races in the US. However, the Asian/PI population in the US is rapidly growing and has high rates of amphetamine use, which may contribute to future heart failure hospitalizations. Geographically, the Pacific region stands out for high rates of substance use disorder, especially drug use disorder. Data from the NSDUH report a high prevalence of past-month illicit drug use by individuals 18 years or older within Pacific states. Patterns of use in heart failure patients may mirror those of the general population, so providers should be aware of the types of substance use prevalent in their region. Rates of tobacco and substance use disorders were higher for patients of lower socioeconomic status, as represented by payer status and median household income quartiles. Socioeconomic factors mediate differences in tobacco and substance use disorders based on race/ethnicity. While we cannot adjust for the complex community stressors predisposing to tobacco or substance use disorders, evaluating community risk factors, such as the density of tobacco stores, and identifying vulnerable groups may help develop preventive and treatment strategies, reducing the observed disparities. Tobacco and substance use disorders in heart failure patients have implications for the broader health system. Substance use leads to increased costs from decreased productivity, healthcare expenditure, and crime. Tobacco, alcohol, and cocaine use are associated with increased readmission risk in heart failure patients. Screening for tobacco and substance use disorders has historically been deficient in primary care, emergency room, and hospital settings; despite efforts to improve screening, rates are likely under-appreciated.

Heart failure patients who actively smoke but are attempting to quit may be coded with a different ICD-9-CM code than tobacco use disorder, further underestimating the numbers. Tobacco and substance use disorders may therefore have even larger negative effects on the healthcare system than currently reported. The NIS does not use unique patient identifiers; a hospitalization may represent a new patient or a patient already captured in the sample being readmitted, which may inflate rates. We are unable to account for geographic or provider variation in ICD-9-CM coding. Some conditions, notably tobacco use disorder, may be under-coded. Due to constraints within ICD-9-CM codes, we could not quantify the amount or duration of tobacco or substance use. Heavier or prolonged tobacco or substance use may have more detrimental cardiotoxic effects, but even substance use that does not qualify for a diagnosis may contribute to heart failure. Many hospitalized heart failure patients with drug use disorder used “other drugs,” illustrating the complexity of coding for specific drug use. Finally, unmeasured confounding related to other lifestyle or cardiovascular risk factors may influence some of these associations, especially those related to socioeconomic status or race/ethnicity. Substance use is associated with multiple adverse health outcomes, including increased rates of infectious disease, mental health disorders, and mortality. Methods: We performed a retrospective, cross-sectional study using the National Hospital Ambulatory Medical Care Survey data from 2013–2018. All ED visits in the United States for patients ≥18 years of age were included. The primary exposure was having substance use included as a chief complaint or diagnosis, which we identified using the International Classification of Diseases, 9th and 10th revisions, codes. The primary outcome was the use of diagnostic services or imaging studies in the ED.
Results: The study sample included 95,506 visits in the US, extrapolating to over 619 million ED visits nationwide.

The total number of ED visits remained stable during the study period, but substance use-related visits increased by 45%, making up 2.93% of total ED visits in 2013 and 4.25% in 2018. This increase was primarily driven by stimulant-, sedative-, and hallucinogen-related visits. Mental health-related visits rose in parallel by 66% during the same period. Compared to non-substance use-related visits, substance use-related visits were more likely to undergo any diagnostic study (95% CI 1.11-1.47; P = 0.001) and toxicology screening, but less likely to have imaging studies. In stratified analyses, substance use-related visits with concurrent mental health disorders were more likely to undergo imaging studies, while findings were the opposite for those without concurrent mental health disorders. Conclusion: Substance use- and mental health-related ED visits are rising, and they are associated with increased resource utilization. Further studies are needed to provide more guidance in the approach to acute services in this vulnerable population. [West J Emerg Med. 2022;22X–X.]
data showing that the age-standardized mortality rate due to substance use disorders increased by 618.3% between 1980–2014 in the United States. The most common causes of death associated with substance use were injuries and poisoning, along with other external causes. Among people ages 15-49 in the US, SUDs and intentional injuries make up close to one third of all deaths. The poor outcomes associated with substance use, along with its rising prevalence and low treatment rates, create a significant public health issue. From 2004–2013 the proportion of US adults receiving treatment for SUDs stayed at 1.2-1.3%, representing less than 20% of the population affected. In light of the low treatment rates, it is not surprising that emergency department visits related to substance use have risen rapidly. This increase has created predictable challenges for emergency clinicians and the healthcare system overall, as substance use-related ED visits have been linked to increased length of stay, higher service delivery costs, and higher rates of hospital admissions. In addition, increasing ED utilization has outpaced similar increases in hospital inpatient care, meaning the burden of these increased visits has fallen disproportionately on EDs and emergency clinicians. While resource utilization is high in this population, it remains unclear which specific resources are used in the ED for these visits on a national scale. Identifying the resource utilization pattern for substance use-related visits could help inform resource allocation and potentially increase standardization of care. This could in turn reduce unnecessary testing or treatment, and eventually reduce the strain on emergency physicians and the healthcare system overall.
With this rationale in mind, we aimed to describe the trends of substance use-related ED visits among US adults nationwide over a five-year period, beginning in 2013, and to evaluate the relationship between substance use and ED resource utilization. This was a retrospective, cross-sectional study using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), which is conducted by the National Center for Health Statistics. We included data from January 1, 2013–December 31, 2018. The NHAMCS is an annual, national probability sample of ambulatory care visits throughout the US and collects data on visits to hospital-based EDs. The survey employs a four-stage probability design with samples of area primary sampling units (PSUs), hospitals within PSUs, and emergency service areas (ESAs) within EDs. Within each ESA, patient visits were systematically selected over a randomly assigned four-week reporting period. There were approximately 2000 PSUs that covered 50 states and the District of Columbia, and approximately 600 hospitals. Data collection was overseen by the US Bureau of the Census, which provided field training on data abstraction for participating hospital staff. Ethics approval was obtained from the research ethics board at our home institution. The primary exposure was defined as having substance use listed as a chief complaint or diagnosis in the visit, as identified by International Classification of Diseases, 9th and 10th revision, codes.

The ICD codes were taken from previously published briefs by the Healthcare Cost and Utilization Project. Substances of interest included alcohol, opioids, cannabis, cocaine, amphetamines, hallucinogens, and other recreational substances of abuse that affect the central nervous system. Substances were further broken down into five categories as defined by previous literature: 1) alcohol; 2) opioid, sedative/hypnotic, or anxiolytic; 3) cocaine, amphetamine, psychostimulant, or sympathomimetic; 4) cannabis or hallucinogen; and 5) other/unspecified or combined. The reference group consisted of ED visits without substance use as a diagnosis or chief complaint. Covariates of interest were defined a priori and identified from a literature review. They included age, gender, ethnicity, homelessness, burden of comorbidities, presence of a mental health disorder, geographical region, metropolitan statistical area, payment source, day of visit, and arrival time. Mental health disorder was treated as a separate diagnosis from SUD to specifically examine the trend of substance use-related visits and to emulate previous studies in this area. The primary outcomes of interest consisted of the use of any diagnostic services, toxicology screens, or imaging studies in the ED. Diagnostic services included laboratory investigations, toxicology screens, imaging studies, electrocardiograms, and cardiac monitoring. Imaging studies included all imaging carried out in the ED, such as radiographs, ultrasounds, computed tomography, and magnetic resonance imaging. Secondary outcomes consisted of the number of procedures performed, number of medications administered, disposition, and use of mental health consultation services in the ED. These variables were identified using pre-existing matching labels in the NHAMCS database.11 The NHAMCS used a multistage estimation procedure to produce essentially unbiased estimates.
The first step included inflation by reciprocals of selection probabilities, which was the product of the probability at each sampling stage. The second step adjusted for survey non-response, which included inflating the weights of visits to hospitals or EDs similar to non-respondent units, depending on the pattern of missingness. During data analysis, survey procedures were used and patient visit weights were applied to obtain the total estimated ED visits from sampled visits. As per the NCHS, sampled visits with a relative standard error of 30% or more, and estimates based on fewer than 30 sampling records, may be unstable. We performed univariate analysis using the chi-squared test to assess the association between substance use and each of the categorical covariates. To test for a linear trend in substance use-related visits over time, we applied a logistic regression model with substance use as the dependent variable and time as the independent variable. Univariate and multivariable logistic regression were used to assess the unadjusted and adjusted associations between substance use and each of the outcomes, respectively. All listed covariates, with the exception of mental health disorder, were included in the multivariable model. We reported odds ratios for all logistic regression analyses, along with 95% confidence intervals. For the primary and secondary outcomes of interest, the P-value threshold for significance was set at 0.005 after applying a Bonferroni correction to minimize the family-wise error rate in the setting of multiple comparisons. To evaluate mental health disorder as a potential effect modifier, we assessed the relationship between substance use and the primary outcomes using a stratified analysis. The P-value for interaction was obtained from a multivariable logistic regression model.
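The Bonferroni correction above simply divides the family-wise alpha evenly across the comparisons. A minimal sketch follows; the figure of ten comparisons is an assumption used only to reproduce the 0.005 threshold, not a count stated in the text:

```python
def bonferroni_threshold(alpha: float, m: int) -> float:
    """Per-comparison significance threshold under Bonferroni correction."""
    return alpha / m

# A family-wise alpha of 0.05 spread over ten outcome comparisons
# (a hypothetical count) gives the 0.005 threshold reported above.
print(bonferroni_threshold(0.05, 10))
```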
Missing data were handled using complete case analysis, given that the percentage of missingness was small and complete data were available for both the exposures and outcomes. All data analyses were carried out using Stata version 15. From 2013–2018, substance use-related ED visits increased from 2.926 to 4.132 million visits, or from 2.93% to 4.25% of total ED visits, which translates to a 45% relative increase. Non-substance use-related ED visits remained stable during the same period, with 93.17 million visits in 2018 compared to 96.98 million visits in 2013.
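The 45% relative increase can be reproduced from the visit shares quoted above:

```python
share_2013 = 2.93  # substance use-related share of all ED visits in 2013 (%)
share_2018 = 4.25  # substance use-related share of all ED visits in 2018 (%)

relative_increase = (share_2018 - share_2013) / share_2013
print(f"{relative_increase:.0%}")  # 45%
```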

Greater impairments among HIV+ females are also evident in the context of substance dependence.

There may be several explanations for this discrepancy between expected and observed results. While in vitro studies predict potent immune suppression, they examine isolated cells under artificial culture conditions that may not reflect the more complex interactions that occur during antigen presentation in vivo. The concentrations of THC and exposure periods studied in vitro and in animal models may similarly differ from those occurring at the cellular level in the peripheral blood, tissues, and lymphoid organs of MJ smokers. While our subjects are characterized as moderate to heavy habitual MJ users, it is difficult to directly compare the transient exposures that occur from smoking with the continuous high-level exposure that occurs during in vitro culture. Peak THC concentrations in the blood of MS are reported in the range of 30–300 ng/ml, but quickly fall to much lower levels within 15–30 minutes after smoking. In contrast, in vitro studies often examine the effects of chronic THC exposure that lasts for days at concentrations in the range of 1–10 μM. In order to ensure uniform and significant exposure conditions, we admitted patients and had them smoke three MJ cigarettes on the day of vaccination and another the following morning. However, even in this setting, the inhalation of mg quantities of THC may not recapitulate the systemic administration of the 4–10 mg/kg dosing that is often used in mouse models. Furthermore, while models examine the effects of purified THC, MJ smoke exposes the user to scores of cannabinoids and hundreds of other inhaled substances with potentially disruptive or even counteracting effects that have not been studied and are hard to recapitulate with controlled exposure models.

As our study examined habitual MJ users, it is also possible that long-term exposure to MJ may allow compensatory mechanisms or tolerance to develop and therefore mask the potentially deleterious effects observed following acute exposure to THC. Redundancy of regulatory pathways and maintenance of immune homeostasis are key features of human immunity and receptor signaling. Similar to our findings in habitual MJ smokers, the chronic exposure of male rhesus macaques to THC for up to 12 months was not associated with altered immune cell subsets or function in control animals, or with evidence of increased morbidity or mortality in animals exposed to simian immunodeficiency virus infection. In conclusion, this prospective analysis of immune responses to HBV vaccination in healthy naive NS and MS failed to identify a significant difference with respect to either the frequency or nature of the vaccine response. However, the small sample size employed here was based on the assumption that habitual MJ use would have a rather profound effect on the generation of adaptive immunity. That underlying assumption appears to be incorrect. Further, our goal of enrolling up to 20 subjects per group in order to detect more subtle differences was impaired by a number of factors, including the changing frequency of routine HBV vaccination in the general population, the stringent nature of our inclusion and exclusion criteria, which excluded many subjects that appeared to be eligible at preliminary screening, and the difficulty of retaining subjects in a rather demanding and protracted protocol. As such, while our findings argue that the potent immunosuppression demonstrated by in vitro models and mouse studies does not likely represent the biologic impact of habitual MJ use, it remains possible that more subtle differences between our two study groups exist but could not be detected.

Ongoing clinical research carried out in active MS is needed in order to better interpret the reason for this discrepancy and to assess the presence or absence of clinically important health effects associated with MJ smoking.

HIV-infected women may be more vulnerable to developing neurocognitive impairment (NCI) than HIV+ men. Among HIV studies showing male/female NCI differences, some demonstrate greater impairments in females than males overall, whereas others demonstrate male/female differences in the pattern of NCI1. For example, in a large study combining the two longest-running U.S. multisite, longitudinal studies of HIV progression, the Women's Interagency HIV Study (WIHS) and the Multicenter AIDS Cohort Study (MACS), HIV was associated with alterations in the pattern of sex differences in executive function, attention, psychomotor speed, and motor function1. Performance was consistently worse among HIV+ women versus HIV+ men even after adjusting for HIV-related characteristics. This female-specific vulnerability may be due to biological influences, sociodemographics, or mental health factors or disorders. Here we examine whether the association between MDD and NCI differs between men and women, as MDD is the most common neuropsychiatric complication among people with HIV (PWH) and is more prevalent than in the general U.S. population. The only nationally representative study among PWH reported an 18.5% 12-month prevalence of MDD, which is two to three times higher than in the general U.S. population. Prevalence estimates in U.S. cohort studies are similarly high. In the WIHS, current MDD via diagnostic interview was 20% and lifetime MDD was 32%, versus 10% and 23% nationally. In cognitive studies, about 30% of WIHS women report elevated depressive symptoms, an estimate about 10% higher than in other large-scale cohort studies of healthy midlife women. Additionally, HIV+ women often have higher rates of depression and more depressive symptoms than HIV+ men.

It cannot be assumed, however, that the magnitude of the male/female depression difference will be similar among PWH, due to greater depression severity in sexual minority men versus heterosexual men. The presence and/or severity of depression may contribute to the greater cognitive vulnerability of HIV+ women versus HIV+ men. Among HIV-uninfected individuals, depression severity is commonly associated with poorer episodic memory, executive function, psychomotor speed, and attention27. Across cross-sectional studies in PWH, the domains most reliably associated with depression are psychomotor speed, executive function, learning and memory, and motor function, followed by attention and working memory. Longitudinal WIHS studies demonstrate associations between depression and psychomotor speed, executive function, memory, motor function, and fluency. In the MACS, depression is also associated with psychomotor speed and executive function; however, other domains have not been examined. It remains unknown whether these associations differ by sex. We examine combined and independent associations between three factors known to influence NCI: HIV, biological sex, and depression. Associations were analyzed in the same large sample of MACS men and WIHS women in which we examined HIV-by-sex interactions on NCI1. Biological mechanisms that may contribute to sex differences in the depression-NCI relationship include, among others, sex differences in neuroinflammation, dopamine transmission, genetics, and the HPA axis. Given our prior work demonstrating an HIV-by-sex interaction on NCI1, and given that depression influences many of those domains, we hypothesized that depression would predict greater HIV-related NCI in women versus men, particularly in attention, executive function, and motor function. Longitudinal data collected through September 2016 were extracted from the WIHS and MACS in August 2017.
In brief, the WIHS was established in August 1994 at 6 clinical sites. The MACS was established in 1984 at 4 clinical sites. MACS data were limited to participants recruited during the most recent enrollment period, as those participants were more similar to WIHS participants in race and socioeconomic status38. For the present analysis, there were 3,766 WIHS participants who enrolled in the study during 1994–1996 or 2001–2002. Based on these numbers, we drew identical numbers of HIV+ and HIV- men from the 1,735 individuals enrolled in the MACS1. To be included, all participants had to have completed four tests administered by both cohorts: TMT-A and TMT-B, SDMT, the Comalli Stroop color-word test39, and GP. We analyzed data collected by the WIHS from May 2009–September 2016 and by the MACS from October 2001–May 2014, of which 97% of all selected visits had complete data. We also limited the tests to the first five years of testing because men had more tests administered than women, and the restricted time span limited the differences between cohorts in neuropsychological test exposure. Participants were excluded based on a history of toxoplasmosis, CNS lymphoma, cryptococcal meningitis or cryptococcal infection, progressive multifocal leukoencephalopathy, AIDS-defining or other dementia, transient ischemic attack/stroke, use of antiseizure/antipsychotic drugs, head injury with loss of consciousness, and preference for Spanish as a first language.

Neuropsychological tests included measures of psychomotor speed/attention, executive function, and motor function. The outcome for all tests was time to completion, except the SDMT, for which it was the total correct. All timed outcomes were log-transformed to normalize distributions and reverse-scored so that higher values represented better performance. Demographically adjusted T-scores were derived for each outcome based on the entire HIV- population, with impairment defined as a T-score < 40. A number of covariates were included based on prior WIHS and MACS studies1,29,30, including age, race/ethnicity, income, education, sexual orientation, heavy alcohol use classified using NIAAA standards, recreational drug use since the previous visit, cigarette smoking, HIV RNA and CD4 T-cell counts, antiretroviral therapy, nadir pre-HAART CD4+ T-cell count, ever having a clinical AIDS diagnosis, and count of neuropsychological test exposures. Generalized linear mixed models (GLMMs) were conducted to assess the combined and separate associations of depression, biological sex, and HIV serostatus with NCI. The GLMMs include a random subject effect, which accounts for within-person correlation of the repeated assessments. Primary initial predictors included depression, HIV serostatus, sex, all two-way interactions, and the three-way interaction. Of particular interest was the three-way interaction: a significant three-way interaction would indicate that depression exacerbates the interactive associations between HIV serostatus and sex on NCI. Nonsignificant three-way interactions were removed from the models so that we could assess whether depression exacerbates a general 1) female versus male difference and/or 2) HIV serostatus difference on NCI. If none of the two-way interactions were significant, they were removed from the models so that we could examine whether depression, irrespective of HIV serostatus and sex, predicts NCI.
All models adjusted for race, ethnicity, and education, and for time-varying factors including age, heavy alcohol, marijuana, and cocaine/crack use, smoking, income, time from enrollment, and number of prior neuropsychological test administrations. In HIV+ only analyses, models also adjusted for the following time-varying factors: antiretroviral use, log10-transformed HIV RNA, current CD4 count, CD4 nadir <200, and prior AIDS diagnosis. Odds ratios and 95% confidence intervals are presented, and predicted probabilities from models including all two- and three-way interactions are plotted for visual interpretation. Participants included 858 HIV+ and 562 HIV- individuals, ranging in age from 20–66 years, with 67% non-Hispanic African American and 20% Hispanic per group. Table 1 provides socio-demographic, behavioral, and clinical characteristics stratified by sex, HIV serostatus, and depression status. Groups differed on numerous factors during visits categorized as depressed versus not depressed. Depressed individuals had lower income levels and were more likely to have increased alcohol, marijuana, and cocaine use, and more likely to be current/former smokers. Across the study duration, depression was reported by HIV+ and HIV- women on fewer visits than by HIV+ and HIV- men. Similarly, men were more likely than women to ever be depressed regardless of HIV serostatus. In the MACS, 50% of HIV+ men reported ever being depressed versus 47% of HIV- men, a 3% difference. In the WIHS, 34% of HIV+ women reported ever being depressed versus 25% of HIV- women, a 9% difference. Among PWH, the difference between men and women ever being depressed was 16%. Similarly, among HIV- individuals, the difference between men and women ever being depressed was 22%. To ensure the reversal of conventional rates of depression in men and women was not due to the higher rates of crack/cocaine use in men, we re-ran the frequency of depression among those who never used crack/cocaine.
In this subgroup, depression remained higher in men than women. Among PWH, depressed individuals were more likely to have higher plasma HIV RNA, but the higher rates of depression among HIV+ men versus HIV+ women remained after controlling for HIV RNA. These data add to the growing body of evidence that in the era of effective antiretrovirals, factors other than HIV are at least as important, and often more important, determinants of NCI than HIV serostatus among HIV+ men and women. In 1,420 HIV+ and HIV- adults, we demonstrated that HIV+ women with elevated depressive symptoms have five times the odds of impaired performance on the Stroop color-word [interference] test versus depressed HIV+ and HIV- men and HIV- women.
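The three-way interaction test described in the modeling approach above can be sketched as follows. This is a minimal illustration on simulated data with hypothetical variable names (dep, hiv, female, impaired), using a plain logistic regression from statsmodels rather than the full GLMM with a random subject effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; none of these values come from the WIHS/MACS analysis itself.
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "dep": rng.integers(0, 2, n),     # depressed at the visit (1 = yes)
    "hiv": rng.integers(0, 2, n),     # HIV+ serostatus (1 = yes)
    "female": rng.integers(0, 2, n),  # biological sex (1 = female)
})
# Add extra impairment risk only for depressed HIV+ women, mirroring the
# hypothesized three-way interaction.
logit_p = -2.0 + 1.5 * df["dep"] * df["hiv"] * df["female"]
df["impaired"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# "a * b * c" expands to all main effects plus every two- and three-way term.
fit = smf.logit("impaired ~ dep * hiv * female", data=df).fit(disp=0)
print(fit.params["dep:hiv:female"])  # log-odds for the three-way interaction
```

In the actual analysis a GLMM with a random subject effect would replace the plain logit; the model-trimming strategy (dropping a nonsignificant three-way term, then nonsignificant two-way terms) corresponds to refitting with the reduced formula.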

Removing the two women from our PSU analyses did not significantly change any of our results.

We accounted for the multiplicity of measures by correcting alpha levels via a modified Bonferroni procedure. This approach considers the mean correlation between variables and the number of tests in the adjustment of alpha levels. All alpha levels were adjusted for both the traditional neurocognitive assessment and the BIS-11 using their average inter-correlation coefficients, in primary and secondary models and in tertiary models. The corresponding adjusted alpha levels for primary and secondary models were p ≤ 0.013 for neurocognitive domains and p ≤ 0.027 for self-reported impulsivity. The corresponding adjusted alpha levels for tertiary models, which included PSU only, were p ≤ 0.011 for neurocognitive domains and p ≤ 0.017 for the BIS-11. Alpha levels for risk-taking and decision-making were not adjusted, as these are individual tasks measuring separate domains of executive function. Effect sizes for mean differences between groups were calculated with Cohen's d. We correlated cognitive functioning, risk-taking, decision-making, and self-reported impulsivity measures with alcohol use in PSU and AUD, and with cocaine and marijuana use in PSU only, at baseline. Since these were exploratory correlations, we chose a less restrictive alpha level of 0.05. For all other domains except fine motor skills, PSU showed numerically lower scores than AUD, with effect sizes up to 0.76, but no statistically significant group differences after covariate correction. When smoking status was included as a factor in the cross-sectional group analyses of neurocognitive domains, neither significant group-by-smoking interactions nor main effects of smoking were observed.
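A modified Bonferroni procedure of this kind relaxes the correction as the mean inter-correlation among the m outcomes increases; one published variant is attributed to Sankoh and colleagues. The sketch below uses that variant with illustrative parameters only; the exact variant and inputs behind the reported thresholds (0.013, 0.027, 0.011, 0.017) are not specified in the text:

```python
def adjusted_alpha(alpha: float, m: int, r_mean: float) -> float:
    """Correlation-adjusted per-test alpha.

    Reduces to a Sidak-style correction when r_mean = 0 and to the
    unadjusted alpha when the m outcomes are perfectly correlated
    (r_mean = 1), so it always lies between alpha/m and alpha.
    """
    return 1 - (1 - alpha) ** (1 / m ** (1 - r_mean))

# Illustrative values only: 10 correlated outcomes with mean r = 0.5.
a = adjusted_alpha(alpha=0.05, m=10, r_mean=0.5)
print(round(a, 4))
```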

In addition, gender was not a significant predictor of neurocognitive performance at one month of abstinence, except for fine motor skills, which were worse in female than male substance users. Polysubstance users exhibited a trend toward worse decision-making than AUD [χ2 = 3.64, p = 0.056, ES = 0.33]; the groups were not significantly different on risk-taking. No significant group-by-smoking interactions or main effects of smoking were observed on either the IGT or the BART. Polysubstance users self-reported significantly higher BIS-11 total and non-planning impulsivity, a measure of cognitive control, than AUD, and being on a prescribed psychoactive medication significantly predicted higher total and non-planning impulsivity. With smoking status included in the analyses, no significant group-by-smoking interactions were observed for any of the BIS-11 measures. However, self-reported motor impulsivity showed a trend for a group-by-smoking interaction [χ2 = 3.259, p = 0.071], a significant main effect of group [χ2 = 2.005, p = 0.006], and a trend for a smoking effect [χ2 = 1.499, p = 0.066]. Follow-up pairwise comparisons showed significantly higher motor impulsivity in smoking PSU compared to both smoking and nonsmoking AUD. Between baseline and follow-up, neurocognitive functions in abstinent PSU improved markedly in the following domains: general intelligence, cognitive efficiency, executive function, working memory, and visuospatial skills, and weaker improvements were observed for global cognition and processing speed. Abstinent PSU did not change significantly in the domains of learning and memory or fine motor skills. Preliminary analyses indicate that the lack of significant changes in the domains of visuospatial memory and fine motor skills was related to significant time-by-smoking status interactions, where only nonsmokers improved on fine motor skills and only smokers improved on visuospatial memory.

The BART scores increased significantly with abstinence, whereas the IGT scores did not change during abstinence. Self-reported total and motor impulsivity decreased significantly with abstinence, and the non-planning score tended to decrease. The following changes were observed when restricting our longitudinal analysis to only those 17 PSU with baseline and follow-up data: general intelligence, executive function, working memory, visuospatial skills, global cognition, and processing speed. The 19 PSU not studied longitudinally differed from our abstinent PSU who were restudied on lifetime years of cocaine use. PSU not restudied performed significantly worse at baseline than abstinent PSU on cognitive efficiency, processing speed, and visuospatial learning. However, they did not differ significantly on years of education, AMNART, tobacco use severity, proportions of smokers or of family members with problem drinking, or the proportion of individuals taking a prescribed psychoactive medication. In PSU, more lifetime years of drinking correlated with worse performance in the domains of cognitive efficiency, executive function, intelligence, processing speed, visuospatial skills, and global cognition. More cocaine consumed per month over the lifetime correlated with worse performance on executive function and greater attentional impulsivity. More marijuana consumed per month over the lifetime correlated with worse performance on fine motor skills and tended to correlate with higher BIS-11 motor impulsivity; in addition, more marijuana use in the year preceding the study correlated with higher non-planning and total impulsivity. An earlier onset age of marijuana use correlated with higher non-planning impulsivity and worse visuospatial learning. Interestingly, more lifetime years of amphetamine use correlated with better performance on fine motor skills, executive function, visuospatial skills, and global cognition.
Similar to the associations found in PSU, more lifetime years of drinking in AUD correlated with worse performance on cognitive efficiency, visuospatial skills, and global cognition, and worse performance on visuospatial memory correlated with greater monthly alcohol consumption averaged over the year preceding assessment and over the lifetime.

In addition, a longer duration of alcohol use in AUD was related to worse auditory-verbal learning and memory. An earlier age of onset of heavy drinking in AUD was associated with worse decision-making. Our primary aim was to compare neurocognitive functioning and inhibitory control in one-month-abstinent PSU and AUD. Polysubstance users at one month of abstinence showed decrements on a wide range of neurocognitive and inhibitory control measures compared to normed measures. The decrements in neurocognition ranged in magnitude from 0.2 to 1.4 standard deviation units below a z-score of zero, with deficits >1 standard deviation below the mean observed for visuospatial memory and visuospatial learning. In comparisons with AUD, PSU performed significantly worse on measures assessing auditory-verbal memory, and tended to perform worse on measures of auditory-verbal learning and general intelligence. Chronic cigarette smoking status did not significantly moderate cross-sectional neurocognitive group differences at baseline. In addition, PSU exhibited worse decision-making and higher self-reported impulsivity than AUD, signaling potentially greater risk of relapse for PSU than AUD. Being on a prescribed psychoactive medication was related to higher self-reported impulsivity in PSU. For both PSU and AUD, more lifetime years of drinking were associated with worse performance on global cognition, cognitive efficiency, general intelligence, and visuospatial skills. Within PSU only, greater substance use quantities related to worse performance on executive function and fine motor skills, as well as to higher self-reported impulsivity. Neurocognitive deficits in AUD have been described extensively. However, corresponding reports in PSU are rare, and very few studies have compared PSU to AUD during early abstinence on as wide a range of neurocognitive and inhibitory control measures as administered here.
To our knowledge, no previous reports have specifically shown PSU to perform worse than AUD on the domains of auditory-verbal learning and general intelligence at one month of abstinence. Our studies confirmed previous findings of worse auditory-verbal memory and inhibitory control in individuals with comorbid alcohol and stimulant use disorders compared to those with an AUD, and findings of no differences between the groups on measures of cognitive efficiency. Some of the cross-sectional neurocognitive and inhibitory control deficits described in this PSU cohort are associated with previously described morphometric abnormalities in primarily prefrontal brain regions of a subsample of this PSU cohort with neuroimaging data. Our neurocognitive findings also further complement studies in subsamples of this PSU cohort that exhibit prefrontal cortical deficits measured by magnetic resonance spectroscopy and cortical blood flow. Our secondary aim was to explore whether PSU demonstrate improvements on neurocognitive functioning and inhibitory control measures between one and four months of abstinence from all substances except tobacco. Polysubstance users showed significant improvements on the majority of the cognitive domains assessed here, particularly cognitive efficiency, executive function, working memory, and self-reported impulsivity, but an unexpected increase in risk-taking behavior. By contrast, no significant changes were observed for learning and memory domains, which were also worst at baseline, resulting in deficits in visuospatial learning and visuospatial memory at four months of abstinence of more than 0.9 standard deviation units below a z-score of zero.

There were also indications of significant time-by-smoking-status interactions for visuospatial memory and fine motor skills; however, these analyses have to be interpreted with caution and considered very preliminary, given the small samples of smoking and nonsmoking PSU at follow-up. Nevertheless, the demonstrations of cognitive recovery in abstinent PSU, and the potential effects of smoking status on such recovery, are consistent with our observations of corresponding recovery in abstinent AUD. The 19 PSU not studied at follow-up differed significantly from abstinent PSU at baseline on several important variables: they had more years of cocaine use over their lifetime, and performed worse on cognitive efficiency, processing speed, and visuospatial learning. As such, these differences should be tested as potential predictors of relapse in future larger studies. Several factors limit the generalizability of our findings. Our cross-sectional sample size was modest and therefore our longitudinal sample of abstinent PSU was small; as is not uncommon in clinical samples, about half of our PSU cohort relapsed between baseline and follow-up, a rate comparable to what has been reported elsewhere. This led us to focus our longitudinal results on the main effects of time and to de-emphasize the reporting of time-by-smoking-status interactions. Larger studies are needed to examine the potential effects of smoking status and gender on neurocognitive recovery during abstinence from substances. The study sample was drawn from treatment centers of the Veterans Affairs system in the San Francisco Bay Area and a community-based healthcare provider, and the ethnic breakdown of the study groups differed. Therefore, our sample may not be entirely representative of community-based substance use populations in general.
Although preliminary, the within-subject statistics are meaningful, as they are more informative for assessing change over time than larger cross-sectional studies at various durations of abstinence. In addition, premorbid biological factors and other behavioral factors not assessed in this study may have influenced cross-sectional and longitudinal outcome measures. Nonetheless, our study is important and of clinical relevance in that it describes deficits in neurocognition and inhibitory control of detoxified PSU that are different from those in AUD, and that appear to recover during abstinence from substances, potentially as a function of smoking status. Our cross-sectional and longitudinal findings are valuable for improving current substance use rehabilitation programs. The higher impulsivity and reduced cognitive abilities of PSU compared to AUD, likely the result of long-term comorbid substance use, and the lack of improvements in learning and memory during abstinence indicate a potentially reduced ability of PSU to acquire the new cognitive skills necessary for remediating maladaptive behavioral patterns that impede successful recovery. As such, PSU may require a post-detox treatment approach that accounts for these specific deficits relative to AUD. Our results show that PSU able to maintain abstinence for 4 months had fewer total lifetime years of cocaine use and performed better on cognitive efficiency, processing speed, and visuospatial learning than those PSU not restudied; these variables may therefore be valuable for predicting future abstinence or relapse in PSU. Additionally, and if confirmed in larger studies, our preliminary results on differential neurocognitive change in smoking and nonsmoking PSU may inform a treatment design that addresses the specific needs of these subgroups within this largely understudied population of substance users.
Potentially, concurrent treatment of cigarette smoking in treatment-seeking PSU may also help improve long-term substance use outcomes, just as recently proposed for treatment-seeking individuals with AUD. Finally, our findings on neurocognitive improvement in PSU imply that cognitive deficits are to some extent a consequence of long-term substance use and thus have the potential for remediation with abstinence. This information is of clinical relevance and of psychoeducational value for treatment providers and treatment-seeking PSU alike. People who inject drugs are disproportionately impacted by harms associated with injecting, including overdose and infection with HIV and hepatitis C virus. Relatedly, people who experience homelessness are at increased risk of initiating injection drug use. Between 40% and 61% of PWID in North America are estimated to have experienced homelessness in the prior year.

Raycasting from the eye position was initially used to enable object selection in the direction of gaze.

The participants were free to move around and interact with various objects within the VR environment using 2 hand-held Vive controllers. Surveys assessing depressed mood and anxiety were presented at the start of the paradigm and additional surveys assessing subjective craving and scene relevance were presented between scenes within the headset. A VAS survey was chosen as the in-task measurement of subjective craving owing to its high face-validity, ability to capture the dynamic fluctuations in craving, and low burden on participants, especially over frequently repeated assessment. Survey responses were made by adjusting a slide bar using one of the controllers. Participants were instructed to “Just explore everything around you until the scene changes” and “During the task, we will be measuring what you pay attention to, and we will be asking you to rate your craving level between each scene.” Three Active scenes and three Neutral scenes were developed and included in the final paradigm. The Active scenes include NTP-related cues, while in the Neutral scenes, all cues are neutral. Active cues include ashtrays, lighters, JUUL devices, cigarettes, Puffbars, and hookahs, as well as the presence of human models engaged in smoking or vaping behaviors. Neutral cues vary depending on the scene context. All cues are interactable such that the participants are able to pick up, throw, and collide the items with other items in the scene. All scenes include the presence of at least one animated human model. Smoke and vapor effects are incorporated with the animated human models in the Active scenes to increase the immersiveness of the experience.

All scenes include background music and audio effects consistent with the scene and the participants’ interaction. The NTP Cue VR paradigm begins with 3 “test scenes,” which are approximately 3 minutes in duration, depending on participant comfort and abilities with the VR hardware. The first scene is the Practice Room. This is a square room with cubes systematically placed around the corners of the room. The participants are asked to gaze at each of the boxes to confirm that the eye-tracking is functioning as intended. Then, the participants are asked to practice using the controllers to teleport to 4 different locations in the room. The second scene is the Practice Slider room, which instructs the participants how to answer the survey questions and provides the opportunity to practice adjusting the slider to answer the scales. The third test scene is the Blink Calibration room. In this scene, the participants are asked to blink 5 times after being prompted by an audio signal. The purpose of this room is to collect pupil diameter data when the participants actively blink to assist with increasing the accuracy of blink detection algorithms. Following the completion of the initial test scenes, the 2 mood surveys are presented, and the 6 scenes are pseudorandomized within scene type such that the general scene order is maintained. The participants are then placed in each scene for 5 minutes. The entire paradigm is approximately 30 minutes in duration. There are 2 types of data recorded within each scene: regular time series and event-based data that is recorded at event onset. Regular time series data are collected at every 10-millisecond interval, independent of the frame time. The following data are recorded periodically: timestamp, raw gaze intersection point, position and forward direction of the participants’ headset, and pupil diameter and eye openness.
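The fixed-interval recording scheme described here (one row every 10 ms, regardless of frame timing) can be sketched as follows; the function and field names are illustrative, not the paradigm's actual implementation:

```python
SAMPLE_INTERVAL_MS = 10.0  # regular time-series cadence, independent of frame time

def collect_samples(frame_times_ms, read_state):
    """Emit one record per elapsed 10 ms tick, even when render frames arrive irregularly.

    frame_times_ms: timestamps (ms) at which the engine gives us a chance to log.
    read_state: callable returning the current headset/gaze/pupil readings as a dict.
    """
    rows = []
    next_tick = 0.0
    for t in frame_times_ms:
        # catch up on every tick that elapsed since the previous frame
        while next_tick <= t:
            rows.append({"timestamp_ms": next_tick, **read_state()})
            next_tick += SAMPLE_INTERVAL_MS
    return rows
```

Frames rendered at roughly 30 fps (every ~33 ms) would thus still yield rows at the 10 ms cadence, with three or four rows emitted per frame.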

The following events and corresponding timestamps are recorded when they occur: blinks, including the number of blinks and the object of gaze at the time of the blink; button presses on the controller, including time, button pressed, and object of interaction; and the object of gaze when eye gaze switches to a new object. However, this raycasting method did not perform well in our experiments, especially for very small objects, owing to the limited precision and accuracy of the eye tracker, microsaccades, etc. Therefore, for small objects of interest, we utilized the G2OM algorithm provided by the Tobii XR SDK, which is a machine learning–based object selection algorithm that aims to improve small-object and fast-moving-object tracking. Based on our testing, this algorithm improved object selection over the naïve method but still lacked selection quality. Thus, to further improve object selection, we introduced an additional mechanism to “lock” the object selection when an object is manipulated, such that whenever a participant actively picks up a virtual object, the object selection algorithm will always select the picked object until the participant releases it. If the participant is not interacting with an object, the G2OM algorithm is employed, or if no small objects are within the field, naïve raycasting is employed. To calculate eye-gaze statistics toward active and neutral cue objects, 4 dictionaries corresponding to 4 different types of objects are initialized prior to the start of participant involvement in the paradigm. These dictionaries are then used to store the cumulative gaze fixation (dwell time) durations as values for the individual objects of each type. When a participant gazes at an object, the object is looked up in the dictionary on the basis of its name and type. If the object was encountered before, the current fixation time is added to its cumulative fixation time.
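The dictionary bookkeeping described above can be sketched as follows; this is a simplified illustration, and the four type labels are placeholders, since the paradigm's exact type names are not specified here:

```python
from collections import defaultdict

# One dictionary per object type (the paradigm initializes 4, one per type;
# these type labels are illustrative, not the paradigm's actual names).
dwell_ms = {t: defaultdict(float) for t in ("active", "neutral", "human", "other")}

def record_fixation(obj_name, obj_type, fixation_ms):
    """Accumulate the current fixation duration onto the object's running total.
    An object gazed at for the first time implicitly gets a new entry."""
    dwell_ms[obj_type][obj_name] += fixation_ms

def total_fixation_index(obj_type):
    """Sum of cumulative dwell times over all objects of one type."""
    return sum(dwell_ms[obj_type].values())

def mean_fixation_index(obj_type):
    """Total fixation time divided by the number of distinct objects gazed at."""
    objects = dwell_ms[obj_type]
    return total_fixation_index(obj_type) / len(objects) if objects else 0.0
```

Gazing at the same object across multiple fixations simply grows its cumulative total, so the post-paradigm indices fall out of a single sum per dictionary.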

If the object had not been encountered before, a new entry is created for the object. The fixation time is then calculated as the difference between the timestamp of the current entry and that of the next entry. Following the completion of the paradigm, total fixation time indices are produced, which reflect the sum of values within each dictionary. Mean fixation time indices are also created, which reflect the total fixation time divided by the number of objects gazed at by the participant. Initially, we tested a measurement of eye openness, as calculated by the HTC SRanipal SDK, as an indicator for blink detection. However, given the lack of established thresholds of eye openness for blink detection, we instead chose to rely on estimates of pupil diameter. Consistent with previous studies, an eye blink is herein defined as complete eyelid closure with the pupil covered for 50-500 milliseconds. For any given time point, we consider a missing pupil diameter reading as a possible complete eyelid closure in which the pupil is completely covered by the eyelid. These eye closure durations are blink candidates. If either pupil is covered for less than 50 milliseconds, the candidate is discarded, as it is more likely owing to noise or an eye tracker limitation. If either pupil is covered for more than 500 milliseconds, the candidate is also discarded, as this is more consistent with a microsleep. Using this blink detection definition, the blink count for the majority of the current participants fell within 12-40 blinks per minute, which aligns with the consensus of spontaneous blink rates in the literature. This report describes our approach to the development of a novel NTP cue VR paradigm designed to simultaneously induce and assess potential eye-based objective correlates of nicotine craving in naturalistic and translatable virtual settings.
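The 50-500 ms blink rule can be sketched as a single pass over the pupil-diameter stream; this is a simplified illustration of the stated definition, not the study's actual code, and the sample format is assumed:

```python
def detect_blinks(samples, min_ms=50, max_ms=500):
    """Classify gaps in the pupil-diameter stream as blinks.

    samples: (timestamp_ms, left_pupil_mm, right_pupil_mm) tuples in time order,
    with None for a missing reading. A run of samples in which either pupil
    reading is missing is a blink candidate; it counts as a blink only if it
    lasts 50-500 ms (shorter gaps are treated as tracker noise, longer ones
    as more consistent with a microsleep).
    """
    blinks = []
    gap_start = None
    for t, left, right in samples:
        covered = left is None or right is None
        if covered:
            if gap_start is None:
                gap_start = t  # pupil just disappeared: open a candidate
        elif gap_start is not None:
            duration = t - gap_start  # gap ends at the first visible sample
            if min_ms <= duration <= max_ms:
                blinks.append((gap_start, duration))
            gap_start = None
    return blinks
```

With 10 ms sampling, a gap covering the 100-180 ms samples closes at the 190 ms reading and is kept as a 90 ms blink, while a 20 ms gap is dropped as noise and a 560 ms gap as a likely microsleep.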
The preliminary statistical analyses support the potential of this paradigm in its ability to induce subjective craving while instilling a moderate sense of presence in the virtual world and only low levels of VR-related sickness. The preliminary results outline a potential context-specific effect of NTP-related attentional bias and pupil dilation in this pilot sample. Consistent with the literature on attentional bias and pupil dilation, we observed greater Active NTP versus Neutral control cue-related effects in 2 of the 3 Active scenes. The similarity observed in the pattern of effects between attentional bias and pupil dilation provides early evidence of a potential cross-validation of these metrics. No effects were observed for the EBR metric; however, the size of this effect, if present at all, may be smaller than we are currently able to detect with the limited sample. The observed reversal of attentional bias and pupil dilation toward neutral cues in the Driving scene warrants further investigation, given the large effect size. Potential explanations include the presence of especially engaging neutral cues in the Driving scene, as a 360° video of a busy city street is presented in the background, which participants report as entertaining to watch. Despite the overall bias toward neutral cues reflected in the global attentional bias metric, and within the Driving scene alone, participants with greater attentional bias toward NTP cues were found to endorse greater NTP use in the previous 90 days.

This effect appears to be driven by the higher-frequency NTP users in our sample and is consistent with the literature supporting the validity of attentional bias as a clinically important indicator of nicotine addiction. Additional analyses are planned to assess direct and indirect relationships between scene eye-related outcomes and relevance to the individual, scene-specific craving level, randomization of scenes, engagement with specific cues, and NTP use groups once more data are collected. This pilot study has several strengths and limitations. Strengths include the development of a cutting-edge VR cue-reactivity task that incorporates the latest technological advances in graphic design to increase translatability to the real world, and the simultaneous assessment of multiple potential eye-related indices of cue-reactivity in a 3D virtual environment. Limitations include the absence of biological verification to confirm self-reported NTP use and the inability to investigate NTP use profiles in the analyses owing to limited power. Importantly, given the limited sample size, we caution against overinterpretation of our results. It remains unknown whether the absence of significant results, particularly with respect to the correlations between objective eye-related indices and subjective craving ratings, is the result of limited power to detect these relationships or of true independence of these indices. However, we believe that the general pattern of scene-related effects on attentional bias and pupil dilation is encouraging and warrants further study. The identification of reliable objective correlates of craving would allow for greater examination of the underlying neurobiological processes involved, and would inform new avenues for the development of psychological and pharmacological treatments.
Despite consistent declines in rates of cigarette use among adolescents in the last five years, rates of marijuana use have remained constant, with marijuana being the most widely used illicit drug among adolescents. Nationally representative data from Monitoring the Future show rates of conventional cigarette use among 10th graders declining significantly from 9.1% in 2013 to 7.2% in 2014, and among 12th graders from 16.3% in 2013 to 13.6% in 2014. Rates of marijuana use have remained stable, with 16.6% of 10th graders and 21.2% of 12th graders reporting past 30-day use in 2014. Blunts have become a common form of marijuana among adolescents, with more than half of 30-day marijuana users also reporting blunt use. Adolescents’ perceptions related to marijuana use have also changed, with the number of youth who perceive significant risk from using marijuana once or twice a week decreasing from 54.6% in 2007 to 39.5% in 2013. Moreover, 73.3% of 10th graders reported disapproval of occasional marijuana use in 2007, yet 62.9% reported disapproval in 2014. Social media is a key venue for sharing marijuana-related information and attitudes, particularly among adolescents. For example, between 2012 and 2013, more adolescents than adults tweeted about marijuana, with the majority of these tweets reflecting positive attitudes about marijuana. Social acceptability and perceptions of risks and benefits, including the active sharing of these beliefs on social media, are important predictors of health behavior decision-making. Perceptions of risks generally vary by sex and age, with females and minorities tending to rate perceived risks higher than white males. Additionally, perceptions of risk related to marijuana use are known to be higher among females, non-whites, older adults, and individuals who have a family income between $20,000 and $49,999.
However, few studies have examined adolescents’ beliefs about specific risks and benefits related to marijuana and blunts, and studies have not examined relationships among adolescents’ perceptions, social acceptability, awareness of social media, and actual marijuana use. Understanding these relationships is critical, especially since smoking marijuana places one at risk for a number of the same negative health outcomes and secondhand smoke effects as smoking conventional tobacco cigarettes. Long-term use of marijuana can lead to addiction, with initiation in adolescence associated with higher rates of addiction, negative impacts on brain development, and lower levels of school and lifetime achievement.

These pieces of evidence suggest that ABCB1 may be a gene of interest for further study.

Historically, molecular genetic research on AAB has been limited to the examination of a small number of candidate genes with purported biological relevance; only recently have researchers begun to conduct atheoretical genome-wide scans for this phenotype.13,15,16 In our genome-wide investigation, we found that autosomal SNPs accounted for ~25% of the variation in a dimensional measure of AAB. Although this estimate was not statistically significant, which is likely attributable to our modest sample size, it maps nicely to meta-analytic findings that additive genetic influences account for 32% of the variation in antisocial behavior. Our finding also maps to recent GCTA analyses in a community-based sample, where it was found that common genetic variation accounted for 26% of the variation in a behavioral disinhibition phenotype.16 No SNP reached genome-wide significance in our GWAS of AAB. Our most associated SNP, rs4728702, was located in ABCB1 on chromosome 7. In our gene-based tests, ABCB1 was significant at the genome-wide level; however, we did not find an association for this gene in our replication sample. In expression analyses, we also found that ABCB1 is robustly expressed in human brain. This provides some biologically plausible evidence that ABCB1 variation could be associated with behavioral outcomes. ABCB1 codes for a member of the adenosine triphosphate-binding cassette transporters, ABCB1 or P-glycoprotein, which transport molecules across cellular membranes and also across the blood–brain barrier. ABCB1 is considered a pharmacogenetic candidate gene in view of ABCB1 transporters’ ability to change drug pharmacokinetics. Variation in ABCB1 has been previously associated with a number of psychiatric phenotypes, including opioid and cannabis dependence, as well as with treatment outcomes for depression and addiction.

The related rodent gene, Abcb1a, is differentially expressed in three brain regions of alcohol-preferring animals compared with non-preferring animals. Furthermore, ethanol exposure changes ABCB1 expression. An in vitro study of human intestinal cells found that ethanol exposure increased ABCB1 messenger RNA expression, and that these increases were maintained even after a week of ethanol withdrawal. Similarly, ABCB1 expression was increased in lymphoblastoid cell lines following ethanol exposure, and in rodents, Abcb1a expression was increased in the nucleus accumbens of alcohol-preferring rats following alcohol exposure. Taken as a whole, this pattern suggests that ABCB1 has pleiotropic effects across a number of externalizing spectrum behaviors/disorders, and that its expression is affected by ethanol exposure. The former is consistent with findings from the twin and molecular genetics literature demonstrating that common externalizing disorders and behaviors share genetic influences, and that this shared genetic factor is highly heritable. Supplementary analyses in our own sample were consistent with this hypothesis, and we found evidence that ABCB1 variation was associated with alcohol and cocaine dependence criterion counts. However, we did not find associations between ABCB1 and marijuana or opioid dependence criterion counts. We also found evidence for enrichment across multiple canonical pathways and gene ontologies, including cytokine activity, the Jak-STAT signaling pathway, the toll-like receptor signaling pathway, antigen processing and presentation, cytokine receptor binding, and natural killer cell-mediated cytotoxicity. Although the immediate biological relevance of these categories to AAB is not clear, these enrichment findings include many immune-related pathways and may be best interpreted in light of the associations among AAB and alcohol, cannabis, cocaine, and opioid dependence criterion counts in the sample.

Immune and inflammatory pathways have been hypothesized to be associated with psychiatric disorders across the internalizing and externalizing spectra. For example, it is known that alcohol alters cytokine activity and induces changes in neuroimmune signaling in the brain,48 and that alcohol dependence is associated with low-grade systemic inflammation.49 Likewise, the monocytes of individuals who are cocaine dependent show decreased expression of the tumor necrosis factor-α and interleukin-6 proinflammatory cytokines in response to a bacterial ligand relative to controls. Four of the top genes to emerge in our analysis are genes for type I interferon, which reside in a cluster on chromosome 9p. Previous studies demonstrate that interferon-α treatment of hepatitis C patients can induce multiple psychiatric symptoms, including depression and impulsivity. Although we did not find significant enrichment for these pathways in our replication sample, these results add preliminary evidence to a growing literature that variation in genes in immune-relevant pathways may predispose individuals to AAB and closely related behaviors. The present study expands upon the initial AAB GWAS by Tielbeek et al., as well as the more recently published GWAS of a behavioral disinhibition phenotype, in two important ways. First, we used a case–control sample in which the cases met criteria for alcohol dependence. By virtue of the association between alcohol dependence and AAB, and the relatively high rates of individuals meeting clinical cutoffs for criterion A for ASPD in the present sample compared with American population-based prevalence estimates, it is likely that the sample was enriched for genetic variants predisposing individuals toward externalizing spectrum behaviors such as AAB.

Previous work indicates that the genetic influences on AAB completely overlap with the genetic influences on alcohol dependence, other drug abuse/dependence, and conduct disorder; that is, AAB does not have unique genetic influences above and beyond those shared with these other externalizing disorders. In view of this, gene identification efforts for AAB are likely to be more successful in more severely affected samples, or in samples where participants high in AAB also tend to have comorbid alcohol or substance-use disorders, such as the COGA sample. In contrast, for example, only 6% of the participants in the Tielbeek et al. community-based sample met their nondiagnostic AAB case criteria. This sample may also have had low rates of comorbid alcohol and other drug diagnoses, limiting the ability to find genome-wide significant effects. Second, we used a dimensional measure of AAB, which is more powerful than a binary diagnostic variable and better represents the underlying dimensional structure of AAB. These differences may explain, in part, why we were able to detect a significant genetic association in the present sample. Our study should be interpreted in the context of several limitations. First, our sample size was relatively small. Second, because the COGA case–control alcohol dependence sample is highly affected by AAB, the findings emerging from our study may not generalize to lower-risk populations or other types of high-risk populations. Our null replication attempt may be attributable, in part, to the replication sample being relatively less affected than the discovery sample. There are other instances where genetic associations for externalizing behaviors have replicated within highly affected samples, but not in less-affected samples.
For example, GABRA2 is associated with alcohol dependence in samples where alcohol-dependent cases came from clinically recruited samples and families densely affected by alcoholism, but not in community-based samples. A sample recruited for this purpose is likely to be enriched for genetic variation that predisposes individuals to a range of externalizing behavior problems, including AAB; however, whether our findings generalize to other populations at high risk for AAB is unknown. Third, because we limited the current analyses to European-Americans, our results may not generalize to other racial and ethnic groups. Fourth, similar to all psychiatric outcomes, antisocial behavior has a developmental component, and evidence from the twin literature suggests that there are genetic influences on adolescent and adult antisocial behavior that are distinct from genetic influences on child antisocial behavior. The degree to which the genetic associations documented here for AAB are also associated with child or adolescent antisocial behavior is not clear. The results from this study provide an empirical starting point for subsequent developmental analyses to examine these questions. Fifth, there are likely to be aspects of the environment that moderate genetic influences on AAB that we did not explicitly examine here but that may be valuable to pursue in subsequent studies. Finally, our genome-wide association approach examined only common genetic variation. There is suggestive evidence that rare nonsynonymous exonic SNPs account for 14% of the variance in a behavioral disinhibition phenotype.

As rare-variant genotyping arrays and whole-genome sequencing become more widely available and cost-effective, our understanding of the genetics of AAB will improve. In summary, our goal in this study was to take an atheoretical approach to investigate the molecular genetic basis of AAB in a high-risk sample. The heritability of AAB was 25%, although this estimate did not differ significantly from zero. No SNP reached strict genome-wide significance, but gene-based tests identified an association between ABCB1 and AAB. Expression analyses further indicated that ABCB1 is robustly expressed in the brain, providing some evidence that variation in this gene could be related to a behavioral outcome. Previously documented associations between variants in ABCB1 and other drugs of abuse suggest that ABCB1 may confer general risk across a range of externalizing behaviors, rather than risk that is unique to AAB. This was consistent with post hoc analyses in our sample, where we found that variation in ABCB1 was associated with DSM-IV alcohol and cocaine dependence criteria. We also found enrichment of several immune-related canonical pathways and gene ontologies, which is consistent with previous suggestions that immune and inflammatory pathways are associated with externalizing spectrum behaviors. As a whole, our study goes beyond the candidate gene approach typically taken in studies of AAB, and implicates a gene and gene sets for which there is convergent evidence from other lines of research.
These findings, although novel and promising, would benefit from direct replication. More than 500,000 individuals in the United States are homeless at any time, and approximately 3 million experience homelessness over the course of a year. The median age of adults experiencing homelessness has risen and is now approximately 50 years. Homelessness is a risk factor for many adverse health conditions, including aging-related conditions. Individuals experiencing homelessness have an earlier onset of age-related problems than the general population. In their 50s and 60s, they have a similar prevalence of geriatric conditions as housed adults in their 70s and 80s. Due to the high prevalence of functional and cognitive impairments, researchers and practitioners consider homeless adults “older” at age 50. Pain is a common and challenging symptom, and the risk factors, severity, and duration of pain are well studied in the general population. Chronic non-cancer pain, defined as pain lasting for longer than 3 months not attributable to malignancy, is common. While studies have examined chronic pain among older adults or homeless individuals, little is known about the risk factors for chronic pain in older adults experiencing homelessness. Compared with the general population, people who experience homelessness are more likely to report conditions associated with an increased prevalence and severity of chronic pain, including chronic physical and mental health disorders, substance use disorders, tobacco dependence, and histories of childhood physical or sexual abuse. In addition, people who are homeless experience challenging physical environments which may worsen pain. Improving our understanding of chronic pain in older adults experiencing homelessness may aid efforts to identify effective ways to treat and relieve their suffering.
In this study, we describe the severity and duration of pain and its association with demographic and clinical characteristics in a community-recruited sample of homeless adults aged 50 and older. We hypothesized a high prevalence of chronic moderate to severe pain. We explored factors associated with chronic pain, including gender, race, age, physical environment, history of abuse, substance use, mental health problems, and physical health. During July 2013 to June 2014, we enrolled a population-based sample of 350 homeless adults from overnight shelters, homeless encampments, meal programs, and a recycling center in Oakland, California. Based on estimates of the number of unique individuals who used each site annually, we approached potential participants in a random order and assessed for interest and preliminary eligibility. Following an eligibility interview, study staff recruited individuals meeting the following criteria: homeless as defined by the Homeless Emergency Assistance and Rapid Transition to Housing (HEARTH) Act, English-speaking, aged 50 or over, and able to provide informed consent as verified by using a teach-back mechanism. The HEARTH Act includes both individuals who lack a fixed residence or reside in a place not typically used for sleeping and individuals who are at imminent risk of losing housing within fourteen days. It acknowledges that people who are homeless reside in a variety of environments. We conducted study interviews at a community-based center that provided social services for low-income older adults. Participants did not have to be eligible for, or receive services at, the Center. Trained study staff members administered the questionnaires. Participants received gift cards valued at $5 and $20 for the eligibility and baseline interviews, respectively. The Institutional Review Board of the University of California, San Francisco approved the study.

Primary trial findings and full study procedures were previously published.

These findings occur in the context of extreme poverty and account for simultaneous influences of previous recent violent victimization and amphetamine use. Given the consistency of the relationship between dissociation and physical violence over time, identifying individuals who have high levels of dissociation within a broader context of trauma-informed care for homeless and unstably housed women and linking them with specialized mental healthcare and other services may be an additional targeted means to modify risk and reduce physical violence in this particularly vulnerable population.

Harmful use of alcohol is the leading risk factor for premature disability and mortality globally among individuals aged 15 to 49 years. Excessive drinking, along with biological and environmental risk factors, can progress to alcohol use disorder (AUD), which is characterized by repeated alcohol use despite negative consequences. Notwithstanding the wide range of health and psychological consequences associated with AUD, a large treatment gap remains, with less than 8% of persons with past-year AUD receiving alcohol care and even fewer receiving evidence-based care. Multi-system strategies are needed to advance treatments and increase utilization rates among the diverse set of people with AUD. Development of novel, effective pharmacotherapies is one approach likely to help. To support people in recovery, medications must target factors sustaining drinking. Identifying mechanisms of action, such as through randomized controlled trials (RCTs), human laboratory paradigms, and collection of real-world data, represents a vital step in medications development.

Behavioral pharmacology has established subjective response to alcohol as a reliable, multi-faceted phenotype serving as a central bio-behavioral marker of positive and negative reinforcement from alcohol. Individual variability in alcohol’s acute subjective effects, specifically greater stimulation and reward and lower sedation, predicts liability for AUD, including escalation of alcohol use and AUD symptomatology. Positive mood, negative mood, and craving are acutely modulated by alcohol use, such that individuals typically experience an increase in positive mood and craving and a decrease in negative mood along rising breath alcohol concentrations, serving as reinforcers of alcohol intake. Thus, researchers routinely assess whether pharmacotherapies can effectively modulate subjective response to alcohol through experimental human laboratory paradigms. Importantly, a recent meta-analysis has shown that medication effects on subjective responses to alcohol in the laboratory predict their efficacy in clinical trials for AUD. In an initial safety and efficacy trial, our laboratory used an intravenous alcohol administration paradigm to test whether the novel pharmacotherapy, ibudilast, modulated subjective response in a clinical sample with AUD. While ibudilast did not significantly alter subjective response, subjective effects on mood were dependent on participants’ degree of depression symptomatology. Novel designs testing alcohol’s subjective effects are emerging, such as daily diary methods and ecological momentary assessment (EMA), in which participants report on their drinking experiences in real-world settings. For instance, using EMA in an RCT that enrolled adolescents with problematic drinking, Miranda Jr. et al. found that naltrexone attenuated alcohol-induced increases in stimulation and enhanced alcohol-induced sedation, as compared to placebo.

These naturalistic reports are consistent with findings on naltrexone’s subjective effects in human laboratory settings. Although less temporally precise than EMA, daily diary methods, which typically include data collection once daily, have lower participant burden and can enhance compliance. While assessment of medication effects on acute subjective response to alcohol via daily diary assessments (DDAs) is limited, past work has utilized these designs to assess daily relationships among urge, mood, and drinking. In a trial of naltrexone for heavy drinking among young adults, morning DDAs revealed that higher daily urge was associated with a greater likelihood of taking the medication, which, in turn, predicted a lower likelihood of same-day intoxication among the treatment group. The current study consists of a secondary analysis of a two-week experimental medication RCT of ibudilast, which demonstrated treatment-related reductions in rates of heavy drinking, as reported through daily diary methods, and reduced neural alcohol cue-reactivity. This study seeks to further test ibudilast’s effects on subjective response to alcohol in the natural environment via DDA. When comparing participant reports of drink quantity between these two methods, estimates of alcohol consumption are largely consistent, such that 75% of reports fell within 1 standard drink. Similarly, research from affective science suggests that DDA versus EMA does not meaningfully alter estimates of emotion variability in the real world nor their associations with health outcomes. In sum, micro-longitudinal, naturalistic daily reporting is a valuable and highly complementary method to clinical trials, as it can increase power and ecological validity, reduce recall error, and result in more cost-effective RCTs.
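For intuition about the method-comparison statistic above (75% of paired reports within 1 standard drink), agreement can be computed as the proportion of paired drink counts that differ by at most one. This is an illustrative sketch with hypothetical numbers, not the study’s data:

```python
# Hypothetical paired drink counts for the same drinking days, as reported
# via daily diary assessment (DDA) and ecological momentary assessment (EMA).
dda = [4, 2, 6, 0, 3, 5, 1, 7]
ema = [4, 3, 6, 0, 2, 5, 2, 9]

# Proportion of report pairs agreeing within 1 standard drink.
within_one = sum(abs(a - b) <= 1 for a, b in zip(dda, ema)) / len(dda)
print(within_one)  # 0.875 for these hypothetical pairs
```

A real analysis would pair each DDA report with the EMA reports covering the same drinking episode, but the agreement measure itself is this simple.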
Despite a mounting body of work connecting the immune system with the development and maintenance of AUD and the important implications for the development of these novel therapeutics, few RCTs have tested immunotherapies in the context of AUD to date.

Thus, our understanding of how these medications influence complex AUD profiles, including subjective response, is limited. Alcohol is believed to alter immune signaling and contribute to neuroinflammation indirectly through systemic inflammation and directly via events in the brain that stimulate release of inflammatory molecules, induce neural damage, and alter neural signaling. In preclinical models, an inflammatory state alters ethanol intake, preference, and behavioral responses to ethanol. In human AUD samples, peripheral proinflammatory markers are consistently elevated and correlate with alcohol use. As such, considerable interest exists for novel treatments that can restore healthy levels of inflammation and immune signaling to promote recovery from AUD. Phosphodiesterase (PDE) inhibitors are a class of immune therapies tested extensively in preclinical models of AUD. PDEs are enzymes that play a central role in the regulation of intracellular levels of cyclic adenosine monophosphate (cAMP), along with its downstream signal transduction. Acute alcohol exposure activates cAMP signal transduction, while chronic exposure to alcohol attenuates cAMP signaling pathways in specific brain regions. PDE4 isoforms are expressed in neuronal and glial cells in brain regions implicated in the rewarding and reinforcing effects of alcohol, such as the nucleus accumbens and amygdala. Ibudilast is a selective PDE inhibitor and macrophage migration inhibitory factor inhibitor that crosses the blood-brain barrier, attenuates astrocyte and microglial activation, and increases anti-inflammatory cytokine expression. Notably, preclinical work demonstrated that ibudilast reduced voluntary ethanol intake in three different rodent models of AUD. Thus, ibudilast represents a promising pharmacotherapy for AUD, but its mechanisms of action remain largely unknown in clinical samples. To date, our laboratory has tested ibudilast in two clinical samples with AUD.
In an initial safety and efficacy trial, ibudilast improved mood resilience following stress exposure and reduced tonic levels of craving. Mood resilience was defined as a faster recovery of positive mood to baseline levels in the treatment condition following exposure to a stressful personal narrative. However, as noted above, ibudilast did not robustly alter subjective response during an alcohol administration paradigm. Yet, this study had a relatively small sample size, and findings could not be extended to subjective effects of alcohol in real-world settings, as participants were required to maintain abstinence during the trial for safety reasons. Extending medications development to naturalistic settings, particularly for novel pharmacotherapies like ibudilast, is needed, as it enables researchers to assess medication effects with far greater ecological validity and to examine dynamic within-person processes through repeated assessments. Electronic real-world data capture is a cost-effective way to collect numerous occasions of alcohol self-administration and related subjective effects in participants’ natural environment. As such, work testing ibudilast’s ability to modulate subjective response in naturalistic drinking settings has the potential to further our understanding of its bio-behavioral mechanisms, particularly in the context of powerful natural reinforcers and cues. For this reason, the present study extends findings published from a two-week clinical trial of ibudilast in our laboratory, which utilized daily diary methods. DDAs of subjective alcohol response were collected during this trial to identify bio-behavioral mechanisms of ibudilast but had not yet been analyzed.

The present study sought to test the effect of neuroimmune modulation by ibudilast on subjective response to alcohol in the naturalistic environment. This secondary analysis leveraged DDAs from a two-week experimental medication RCT of ibudilast, stratified on sex and withdrawal-related dysphoria, that enrolled non-treatment-seeking participants with AUD. The DDAs included reports of alcohol use and subjective response measures of stimulation, sedation, mood, and craving. Each morning, participants retrospectively reported on their mood and craving levels both before and during the previous day’s drinking episodes, as well as stimulation and sedation levels during the previous day’s drinking episodes. We hypothesized that ibudilast would significantly reduce average levels of alcohol-related stimulation and increase average levels of alcohol-related sedation compared with placebo during participants’ naturalistic drinking episodes. Second, we hypothesized that ibudilast would significantly attenuate daily alcohol-induced changes in craving and mood compared with placebo. Two sets of exploratory analyses were also undertaken, in which we tested whether ibudilast moderated the effect of alcohol-related stimulation and sedation on same-day number of drinks consumed and whether the presence of withdrawal-related dysphoria moderated ibudilast’s effects on daily alcohol-induced changes in mood and craving.

The current study is a secondary analysis of data collected during a two-week clinical trial of ibudilast for heavy drinking reduction and negative mood improvement in a sample of non-treatment-seeking individuals with AUD. Fifty-two eligible participants were randomized to either ibudilast or matched placebo. Randomized participants were asked to attend in-person study visits on Day 1, Day 8, and Day 15, and to complete electronic DDAs to report on previous-day craving, mood, and alcohol and cigarette use.
When participants endorsed previous-day alcohol consumption, they also reported on levels of stimulation and sedation. Participants completed a neuroimaging scan at the study midpoint. The clinical trial was approved by the University of California, Los Angeles Institutional Review Board [UCLA IRB#17–001741]. Prior to completing study procedures, all participants provided written informed consent after receiving a full study explanation. A community-based sample of individuals with current DSM-5 AUD was recruited for the trial through social media and mass transit advertisements in the greater Los Angeles area. Study inclusion criteria were: between 21 and 50 years of age; meeting current DSM-5 diagnostic criteria for mild-to-severe AUD; and reporting heavy drinking levels in the 30 days prior to the screening visit, defined by the National Institute on Alcohol Abuse and Alcoholism as >14 drinks per week for men and >7 drinks per week for women. Exclusion criteria were: currently receiving or seeking treatment for AUD; current DSM-5 diagnosis of another substance use disorder; lifetime DSM-5 diagnosis of bipolar disorder or any psychotic disorder; current use of psychoactive drugs, other than cannabis, as verified by a urine toxicology screen; if female: pregnancy, nursing, or decision not to use a reliable method of birth control; presence of nonremovable ferromagnetic objects, claustrophobia, serious head injury, or prolonged period of unconsciousness; medical condition that could interfere with safe study participation; and recent use of medications contraindicated with ibudilast treatment. Participants were also required to have reliable internet access to complete electronic DDAs. A total of 190 individuals consented to participate in the initial in-person screening visit. Of those, 81 individuals were deemed clinically eligible and were invited to complete a physical screening to determine medical eligibility.
A total of 52 participants were randomized to study medication or placebo. Included in the present analyses are 50 participants who completed at least one daily diary report after randomization. Participants were compensated up to $250 for their participation in the study and received an additional $100 bonus if all study visits and ≥80% of DDAs were completed. The clinical trial was conducted at an outpatient research clinic in an academic medical center. Interested individuals completed an initial telephone-screening interview, and eligible callers were then invited to the laboratory for an in-person behavioral screening visit. At the start of all in-person visits, participants were required to have a BrAC of 0.00 g/dl and a urine toxicology test negative for all drugs excluding cannabis. Eligible participants were asked to complete an in-person physical screening visit consisting of laboratory tests and a physical exam by a study physician. Participants meeting all study eligibility criteria who attended the in-person randomization visit were randomly assigned to receive either 50 mg BID of ibudilast or matched placebo.
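The daily alcohol-induced change scores hypothesized about above can be thought of simply as the difference between “during-drinking” and “before-drinking” ratings from each morning report. A minimal sketch with hypothetical ratings (the field names are illustrative, not the trial’s actual variable names):

```python
# Each morning report covers the previous day's drinking episode, with
# retrospective craving ratings from before and during drinking.
reports = [
    {"craving_before": 3, "craving_during": 7},
    {"craving_before": 5, "craving_during": 6},
    {"craving_before": 2, "craving_during": 8},
]

# Alcohol-induced change in craving for each day (during minus before).
changes = [r["craving_during"] - r["craving_before"] for r in reports]
mean_change = sum(changes) / len(changes)
print(changes, mean_change)  # [4, 1, 6] and their mean
```

In the trial itself, such within-person change scores would be modeled across days and compared between the ibudilast and placebo arms (e.g., in a multilevel model), rather than simply averaged.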

A final note about interpreting the study’s findings is warranted.

A reverse pattern appeared for transgender adolescents of color, however, such that transgender Latinx, American Indian or Alaskan Native, Black, Native Hawaiian or Pacific Islander, and multiracial adolescents evidenced greater adjusted odds of more frequent vaping relative to transgender white adolescents. For adolescents unsure of their gender identity, patterns were less consistent. Whereas Latinx, Black, and Native Hawaiian or Pacific Islander adolescents unsure of their gender identity evidenced greater odds of more days vaping relative to white adolescents unsure of their gender identity, Asian adolescents unsure of their gender identity evidenced lower odds of more days vaping than white adolescents unsure of their gender identity. Consistent with our hypothesis, we found that gender identity and race/ethnicity significantly interacted in their association with vaping frequency, such that transgender adolescents of color were generally more likely to report a higher frequency of vaping compared to cisgender white adolescents. Although less consistent, some groups of adolescents of color who were unsure of their gender identity were also disproportionately more likely to report a higher frequency of vaping compared to cisgender white adolescents. In stratified models, we observed disparities in vaping frequency between transgender and cisgender adolescents within each race/ethnicity stratum, as well as in vaping frequency among transgender Latinx, American Indian and Alaskan Native, Black, Native Hawaiian or Pacific Islander, and multiracial adolescents relative to their transgender white peers. The largest differences in both stratified models were among transgender Black adolescents, who evidenced 6 times the odds of more frequent vaping relative to their cisgender Black peers and nearly 3 times the odds of more frequent vaping relative to their transgender white peers.
In the model stratified by gender identity, we observed reversed patterns among cisgender adolescents, with white adolescents evidencing greater odds of more frequent vaping than their cisgender peers of color.

Taken together, our findings extend past research documenting vaping and other tobacco use disparities among transgender relative to cisgender youth to highlight pronounced disparities in vaping frequency among transgender adolescents of color. Our finding of gender identity disparities in vaping frequency among Black adolescents in particular aligns with a recent analysis of data from the 2018-19 Behavioral Risk Factor Surveillance System finding that transgender Black adults were more likely to be current smokers relative to cisgender Black adults. Additionally, our finding that cisgender adolescents of color tended to vape less frequently than their cisgender white peers is in keeping with prior research documenting greater prevalence of vaping among white adolescents compared to their Black and Latinx peers. Our study does not explain the reasons for the observed disparities in vaping frequency; however, structural injustice has been identified as a fundamental cause of health disparities. Structural injustice is enforced via inequitable socio-political and economic systems and norms which differentially influence access to resources and opportunities for groups based on relative societal power, and in turn, health behaviors and outcomes. Interpreting our findings through this understanding of structural injustice, gender minority stress, and intersectionality suggests multilevel discrimination and stressors may drive the observed disparities in vaping frequency among transgender adolescents of color. Transgender youth of color face pronounced housing instability, employment precarity, lack of access to healthcare, and violence and victimization, which may lead to vaping as a coping strategy.
Qualitative research with racially/ethnically diverse LGBTQ youth smokers has found that participants describe smoking as a way to deal with stress and take back control from or rebel against oppressive systems. Limited supportive resources in schools may also underlie disparities in vaping among transgender adolescents of color. For example, participation in LGBTQ empowerment groups, i.e., Gender and Sexuality Alliances (GSAs), is associated with lower levels of school-based victimization and greater receptivity to school-based substance use prevention efforts among LGBTQ adolescents. However, there are several limitations to effective engagement of transgender adolescents and adolescents of color within GSAs, including limited consideration of or discussion regarding diverse gender identities and intersections of LGBTQ identities with race, ethnicity, and socioeconomic position among members. If GSAs or other LGBTQ-specific resources in schools are not inclusive of or welcoming to youth with diverse gender identities or race/ethnicities, the potential for these resources to buffer against stress and/or prevent vaping may be inequitably distributed. Additionally, the enduring history of predatory marketing of tobacco and vape products to youth may influence vaping disparities among transgender adolescents of color. A recent study found LGBTQ adolescents and Black and Latinx adolescents reported higher engagement with online tobacco and e-cigarette marketing compared to their non-LGBTQ and white peers, respectively. One might conclude that gender identity, as opposed to race/ethnicity, contributes more to disparities in vaping among transgender/unsure adolescents of color because the magnitude of these disparities is larger within race/ethnic groups than across race/ethnic groups. We caution against such an interpretation, as this logic contradicts the notion that systems of power are intersecting and interlocking; thus, identities or social positions cannot be neatly disentangled. Instead, we call attention to the increased vulnerability for higher vaping frequency among transgender adolescents of color with the framework of intersectionality in mind, and the need for future research to examine and intervene on the interlocking systems shaping these disparities. Our study should be considered within the context of its limitations.
Our sample consists of adolescents in secondary schools in one U.S. state; the extent to which findings generalize to adolescents across California and more broadly is uncertain. Additionally, there is variability in the terms used by transgender and gender diverse people to describe their gender identity. Thus, our categories may not reflect the diversity of participants’ gender identities or be culturally sensitive to gender identities among adolescents within particular racial/ethnic groups, such as American Indian or Alaskan Native adolescents who may identify as two-spirit or other gender identities not assessed in the CHKS. A similar concern relates to our measurement of race and ethnicity, which we combined as race/ethnicity, leading to categorization of more than half the sample as Latinx. Although this approach to measurement is common, our failure to disentangle ethnicity from race may have masked nuanced disparities among Latinx adolescents who also identify with a specific race, for example, Afro-Latinx adolescents.

We were also unable to determine precisely which substances participants vaped, as the survey did not measure substances vaped; however, the CHKS item is preceded by questions about past 30-day smoking and use of smokeless tobacco; other types of substance use are asked about separately. Finally, we did not test causal mechanisms of the observed vaping disparities. At best, our independent variables of race/ethnicity and gender identity are proxies for the inequitable systems of power that shape health determinants and outcomes. A key strength is our use of a large, diverse, methodologically strong, population-based sample of adolescents in schools. Our study is strengthened by examining vaping disparities with three categories of gender identity and seven categories of race/ethnicity, yielding detailed information for multiple racial/ethnic groups of transgender adolescents and adolescents unsure of their gender identity. Although some of our analytic categories were relatively small, these findings offer insights into vaping disparities for subgroups often left out or obscured in research and highlight their unique health-related needs. Finally, our use of an ordinal model to assess disparities in vaping frequency is a strength, as more frequent vaping may be more harmful than infrequent vaping. Our findings have implications for future research, including the need to examine the multilevel causal mechanisms of adolescent vaping disparities at the intersection of gender identity and race/ethnicity. Explicit examinations of how systems of power intersect to shape disparities are necessary to mitigate inequitable population-level differences in health behaviors and outcomes.50 Thus, future research on vaping disparities among transgender and other marginalized communities of young people should employ novel and community-engaged approaches that identify and interrogate these systems.
Mixed methods community-based participatory research (MM-CBPR) is one such approach. In MM-CBPR, researchers collaborate directly with communities to gather and synthesize both qualitative and quantitative data to generate locally valid results and catalyze action for social change and sustainable health improvements. In the context of adolescent health disparities prevention, this approach may be especially useful for identifying and/or implementing asset-based and youth-led interventions. For example, researchers could directly partner with teachers, service providers, parents, and transgender adolescents of color to gather insights, based on survey data and in-depth interviews or focus groups, into the individual, interpersonal, and contextual factors that influence adolescent vaping. Indeed, research has found that supportive school, community-based, and family contexts may buffer against substance use and support well-being among transgender adolescents; MM-CBPR is well-suited to examine these influences and identify multiple levers for intervention. There is also a need to examine gender identity disparities in adolescent vaping and co-use of tobacco products, such as combustible cigarettes. While explorations of vaping alone are important given recent increases in vaping prevalence, examinations of co-use and the health effects of co-use relative to vaping alone should be prioritized for prevention planning.

Drug addiction is characterized by persistent drug use despite adverse consequences, perhaps in part because the instant pleasure garnered by using drugs is perceived to outweigh the long-term benefits of sobriety.

Consistent with this idea, laboratory studies routinely find that individuals with substance use disorders display greater preference for smaller, more immediately available rewards over larger, delayed alternatives than healthy controls. Moreover, research indicates that those who most strongly favor the immediate options on such laboratory-based choice tasks are also most likely to relapse during attempted abstinence. Nonetheless, few studies have attempted to elucidate the neural mechanisms underlying addicts’ inordinate preference for immediate rewards. Dopamine is heavily implicated in intertemporal choice, and indirect evidence suggests that deficient dopamine D2/D3-type receptor-mediated dopaminergic neurotransmission in the striatum may be an important contributing factor to this immediacy bias. Like steep temporal discounting, low striatal D2/D3 receptor availability is observed among individuals with substance use disorders, and has been linked with an increased likelihood of relapse. Chronic exposure to methamphetamine or cocaine induces persistent reductions in striatal D2/D3 receptor availability in rats and monkeys, and rats treated chronically with either of these drugs exhibit greater temporal discounting than controls. Humans with attention-deficit hyperactivity disorder or obesity, two other disorders associated with low striatal D2/D3 receptor availability, also display greater temporal discounting than healthy controls. Greater temporal discounting has also been observed among carriers of the A1 allele of the ANKK1 Taq1A polymorphism, a genetic variant associated with low striatal D2 receptor density/binding in humans relative to A2 homozygotes. Although an association between low striatal D2/D3 receptor availability and steep temporal discounting has been implied, this link has not been directly evaluated.
We therefore examined striatal D2/D3 receptor availability in relation to temporal discounting in research participants who met DSM-IV criteria for methamphetamine (MA) dependence and a group of healthy controls. MA-dependent individuals were selected as a group for study because case-control studies find that they display deficits in striatal D2/D3 receptor availability and exaggerated temporal discounting. We hypothesized that striatal D2/D3 receptor availability would be negatively correlated with discount rates among MA users, and possibly also among controls. Because tobacco use has also been linked with low striatal D2/D3 receptor availability and steep temporal discounting, the association was explored as well in the control-group smokers. Because chronic MA abusers also display lower D2/D3 receptor availability than non-users in extrastriatal brain areas, including several that have been implicated in intertemporal choice, exploratory analyses were performed to investigate whether extrastriatal D2/D3 receptor availability might also be negatively correlated with discount rate.

Procedures were approved by the University of California Los Angeles Office for Protection of Research Subjects. Participants were recruited using the Internet and local newspaper advertisements. All provided written informed consent and underwent eligibility screening using questionnaires, the Structured Clinical Interview for DSM-IV, and a physical examination. Twenty-seven individuals who met criteria for current MA dependence but were not seeking treatment for their addiction and 27 controls completed the study. D2/D3 receptor availability data from approximately half of the MA users and controls have been reported previously, and smaller subsets were included in other studies from our laboratory regarding D2/D3 receptor availability.
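For context, temporal discounting in such studies is usually summarized by a discount rate k fitted to each participant’s intertemporal choices. A common choice of model (an assumption here, since this excerpt does not name one) is Mazur’s hyperbolic function, under which the subjective present value of a delayed reward is V = A / (1 + kD); a larger k means delayed rewards lose value faster, i.e., a stronger bias toward immediate rewards:

```python
def hyperbolic_value(amount, delay, k):
    """Subjective present value of `amount` delivered after `delay`,
    under the hyperbolic discounting model V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

# A steep discounter (large k) devalues a delayed reward far more sharply
# than a shallow discounter (small k). Example: $100 delayed by 30 days.
shallow = hyperbolic_value(100, 30, k=0.01)  # 100 / 1.3, close to face value
steep = hyperbolic_value(100, 30, k=0.10)    # 100 / 4.0, strongly discounted
print(shallow, steep)
```

In a choice task, k is estimated from the indifference points at which a participant switches between the immediate and delayed options.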

Over time, one would expect these factors to further contribute to the effectiveness of CBIs.

Use of a theoretical framework helps to explain the mechanisms of change by informing the causal pathways between specific intervention components and behavioral outcomes. Understanding these mechanisms improves our understanding of how and why a particular intervention works. There has been little attention to how theoretical frameworks have informed the development of CBIs focused on alcohol use among adolescents and young adults. Only two of the five aforementioned literature reviews covering CBIs for alcohol use in youth examined the underlying theoretical basis of the CBIs. In both of these reviews, the names of the theory and/or specific theoretical constructs were mentioned; however, there was little examination of how the theories were applied to the CBIs. In addition to the reviews focused specifically on adolescent and young adult substance use, an additional systematic review examined the relationship between the use of theory and the effect sizes of internet-based interventions. This study found that extensive use of theory was associated with greater increases in the effect size of behavioral outcomes. It also found that interventions utilizing multiple behavior change techniques tended to have larger effect sizes compared to those using fewer techniques. This review builds on prior work demonstrating that health interventions grounded in established theory are more effective than those with no theoretical basis. However, it did not exclusively focus on alcohol use or on adolescents specifically.

It is therefore important to build upon this knowledge base and focus on the application of theory in CBIs to address adolescent and young adult alcohol use. The primary goal of this study is to conduct a review of how theory is integrated into CBIs that target alcohol use among adolescents and young adults. Specifically, this study examines which CBIs are guided by a theoretical framework, the extent to which theory is applied in the CBIs, and what, if any, measures associated with the theoretical framework are included in the CBI’s evaluation. A secondary goal is to provide an update of CBIs addressing alcohol use among youth in order to expand our understanding of their effectiveness.

To be included in this review, the main component of the intervention was required to be delivered via computer, tablet, or smartphone. Interventions could include a video game, computer program, or online module. In addition, the intervention needed to target alcohol use among adolescents and young adults between the ages of 12 to 21 years. While adolescence covers a wide range, we chose this age range because there is general consensus that adolescence has begun by age 12, and we included youth up to age 21 since that is the legal drinking age in the U.S. Studies whose participants had a mean age between 12 and 21 years were included even when an individual study’s participants’ ages extended outside this range. Interventions intended to treat a substance use disorder were excluded. Non-English language articles, research protocols, and intervention studies that did not report outcomes were also excluded from analyses. Once eligible studies were identified, the characteristics of the intervention, the context of the intervention, the population targeted, intervention dosage, study author, year, and outcomes were entered into a spreadsheet for analyses.

Duplicate articles were deleted, and journal articles that discussed the same intervention were grouped together. When there was more than one unique article for any given CBI, the CBI was counted only once. In some cases, a given CBI existed in several editions, was modified, or was applied to a different study population; these variations of the CBI were grouped together. Painter et al.'s classification system was used to categorize the use of theory in each of the CBIs. Consistent with this system, a CBI was first examined to see whether an established, broad theory was mentioned in any of its corresponding articles. If so, the CBI was classified as "mentioned." Second, articles were reviewed to see whether they provided any information about how the CBI used theory to inform the intervention. If any of the articles associated with a given CBI provided such information, the CBI was classified as "applied." For our third category, we used "measured" to classify CBIs if any associated article included at least one specific measure of a construct within the theoretical framework. This third category is a slight departure from Painter's typology, which classifies interventions as "tested" if over half of the constructs in the theory are measured in the evaluation of the intervention. We opted for "measured" because testing theories is a complex process and not a common practice of CBIs. We did not use Painter's fourth category, "building or creating theory," because it was not applicable to any of these interventions. For all articles reporting on the effects of an included CBI on alcohol use, attitudes, or knowledge, the effectiveness of the CBI on these outcomes was also examined. Two senior health research scientists with advanced training in theories of behavior change oversaw the classification system and addressed questions about the application of a theory or theoretical constructs.
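The coding rules above form a simple ordered hierarchy, which can be sketched as follows. This is an illustrative restatement only, assuming each CBI's articles have already been coded for three yes/no questions; the names `CBIRecord` and `classify_theory_use` are hypothetical, not from the review.

```python
from dataclasses import dataclass

@dataclass
class CBIRecord:
    """Coded answers aggregated across all articles for one CBI (hypothetical structure)."""
    theory_named: bool          # any article names an established, broad theory
    application_described: bool # any article describes how theory informed the CBI
    construct_measured: bool    # any article measures at least one theoretical construct

def classify_theory_use(cbi: CBIRecord) -> str:
    """Return the highest classification a CBI qualifies for, per the ordered rules."""
    if cbi.construct_measured:
        return "measured"
    if cbi.application_described:
        return "applied"
    if cbi.theory_named:
        return "mentioned"
    return "none"
```

A CBI that names a theory and describes its application, but measures no constructs, would be classified as "applied" under these rules.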

The review was conducted by a trained research associate with a master's degree in public health. A spreadsheet was created that included each classification, a description of how the theory was applied, and a list of relevant constructs that were measured.

This study identified 100 unique articles covering 42 unique computer-based interventions aimed at preventing or reducing alcohol use among adolescents and young adults. Half of these CBIs have not been included in previous reviews; thus, this review includes a total of 21 new CBIs and 43 new articles. This review is the first to provide an in-depth examination of how CBIs integrate theories of behavior change to address alcohol use among adolescents and young adults. While theories of behavior change are a critical component of effective interventions and have been developed and evaluated over the past several decades, attention to the application of theory in CBIs has been limited. We utilized a simple classification system to examine whether theories were mentioned, applied, or measured in any of the publications that corresponded with the CBIs. Only half of the CBIs reviewed mentioned use of an overarching, established theory of behavior change; the other half mentioned use of a single construct and/or intervention technique but did not state use of a broader theory. CBIs that were based on a broad theoretical framework were more likely to include measures of constructs associated with the theory than those that used a discrete construct or intervention technique. However, greater attention to stating which theory was used, articulating how the theory informed the intervention, and including measures of the theoretical constructs is critical to assess and understand the causal pathways between intervention components/mechanisms and behavioral outcomes.
When mentioning the use of a theory or construct, almost all CBIs provided at least some description of how it informed the intervention; however, the amount and quality of information about how the theory was applied varied considerably. Greater attention to what is inside the "black box" is critical in order to improve our understanding of not only what works, but why it works. While a few articles provided detailed information about the application of theory, the majority included too little information to examine the pathway between intervention approach and outcomes. There are a number of reasons why information on the use of theory in CBIs may be limited. Some researchers and intervention developers may not fully appreciate how theory can be used to inform intervention approaches. There is an emphasis on the outcomes/effectiveness of interventions, and less attention is placed on their development. In addition, to our knowledge, there are no publication guidelines or standards for describing the use of theoretical frameworks in intervention studies, so the inclusion of this information is often left up to individual authors and reviewers. Given the importance of theory in guiding interventions, greater emphasis on the selection and application of theory is needed in publications. Most of the CBIs in this review provided some form of personalized normative feedback and applied it relatively consistently. Personalized normative feedback is designed to correct misperceptions about the frequency and acceptability of alcohol use among peers. It typically involves an assessment of a youth's perceptions of peer norms around alcohol attitudes and use, followed by tailored information about actual norms.
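The assess-then-correct loop of personalized normative feedback can be illustrated with a minimal sketch: compare a youth's perceived peer drinking norm against a surveyed actual norm and return a tailored message. All names and messages here are hypothetical, invented for illustration; they do not come from any specific CBI in the review.

```python
def normative_feedback(perceived_drinks_per_week: float,
                       actual_drinks_per_week: float) -> str:
    """Return a corrective message when peer drinking is overestimated (illustrative only)."""
    if perceived_drinks_per_week > actual_drinks_per_week:
        # Overestimation is the misperception PNF targets: show the actual norm.
        return (f"You estimated that peers have {perceived_drinks_per_week:.0f} "
                f"drinks per week; surveys show the actual average is "
                f"{actual_drinks_per_week:.0f}.")
    # No overestimation: no corrective content is needed.
    return "Your estimate matches or is below the surveyed average."
```

In a real CBI, the assessment and feedback would be embedded in an interactive module and tailored to the respondent's age group and reference peer population.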

In addition, some interventions have recently incorporated personal feedback to address individuals' motivations to change, by assessing and providing feedback on drinking motives or through decisional balance exercises. The widespread use of personalized normative feedback in CBIs may reflect both its well-documented effectiveness and the ease with which it lends itself to an interactive, personalized computer-based intervention. Motivational interviewing, an effective face-to-face counseling technique, was also used in several of the CBIs. In contrast to personalized normative feedback, this technique was applied to CBIs in a number of different ways, such as exercises designed to clarify goals and values, making the description of how it was applied even more essential for examining differential effectiveness across CBIs. This study builds on the growing evidence supporting the use of CBIs as a promising intervention approach. We found that most of the CBIs improved knowledge and attitudes and reduced alcohol use among adolescents and young adults. In addition, this study suggests that CBIs using overarching theories more frequently reported significant behavioral outcomes than those using just one specific construct or intervention technique. This finding is consistent with prior studies examining the use of theory in face-to-face interventions targeting alcohol use in adolescents. However, it is important to acknowledge the wide variation across the CBIs not only in their use of theory, but also in scope, targeted populations, duration/dosage, and measured outcomes. It is encouraging that even brief/targeted CBIs demonstrated some effectiveness and thus can play an important role in improving knowledge and attitudes, which are important contributors to changes in behavior. There are limitations to this study. As discussed previously, many articles did not explicitly describe how theory was applied in the CBI.
It is therefore possible that the theoretical pathways for the intervention were further developed than we have noted, and were perhaps documented elsewhere, such as in logic models and/or funding applications; however, such information is not readily accessible and was outside the scope of this review. Thus, lack of mention of the name of a theory or construct, or of its application, does not mean that the intervention did not integrate theory, only that the article did not provide information about its application. Due to these variations in the described use of theory, along with the wide range of CBIs, it was not possible to draw comparisons about the relative effectiveness of CBIs according to the theory used. The ability to make such comparisons is further limited by the wide time frame in which the CBIs were developed. This review spanned articles published between 1995 and 2014. During this period, CBIs to address health issues evolved rapidly owing to major technological advancements, coupled with greater interest and investments from federal agencies and philanthropic foundations.

Opioid use disorder (OUD) is a public health concern in the United States, with an estimated 2.0 million Americans age 12 or older having this disorder in 2018. Emergency department visits for suspected opioid overdose increased 30% from July 2016 to September 2017, and almost 50,000 people died from an opioid-involved overdose in 2019. Effective OUD pharmacotherapies include the opioid agonist methadone, the partial opioid agonist buprenorphine, and the opioid antagonist naltrexone, all of which may be delivered with adjunctive behavioral treatments. The literature reports a higher rate of comorbid psychiatric disorders among adults with OUD than in the general population. National reports on treatment-seeking patients with OUD indicate that 37.9% have a current comorbid psychiatric diagnosis.
The most common psychiatric disorders among patients with OUD include major depression, anxiety, and bipolar disorder. Research findings are mixed regarding the impacts of psychiatric disorders on treatment outcomes and psychosocial functioning in populations with OUD. Some studies have reported that psychiatric comorbidity in patients with OUD is associated with worse treatment outcomes, such as a higher risk of return to opioid use and non-adherence to pharmacotherapy, poorer psychosocial or physical health status, and lower quality of life.