It is now considered a distinct clinical entity despite a large variety of aetiologies.

Considered from another perspective, the interaction effects suggest that certain temperamental traits are risk factors for substance use when parental monitoring is low, but not when it is high. Either interpretation is consistent with the findings and points to a similar conclusion about how temperament and parenting work together to increase risk for early substance use. Being raised in a home with a perception of minimal monitoring by parents may be a more salient risk factor for substance use for those adolescents with dispositional proclivities toward substance use, and possessing a disposition toward substance use may be a stronger risk factor when youth do not believe they are closely monitored by their parents. The broader developmental consideration is that temperamental factors and family variables should be considered jointly in models that attempt to understand early risk for substance use. Although the current study was notable for its multi-informant longitudinal design, and for the size and ethnic composition of the sample, there are limitations that merit consideration. For instance, our ability to detect effects for surgency was hampered by the low reliability of the scale in the 5th grade; thus, results involving surgency should be interpreted with caution. Also, we relied exclusively on youth reports of their substance use, intentions, and expectancies. However, intentions and expectancies are inherently subjective variables and are thus best assessed via self-report. Likewise, focal youth might be in the best position to report on their actual use given understandable motivations to hide substance use from parents, teachers, and other potential informants.
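For readers less familiar with moderation analyses, the buffering pattern described above can be illustrated with a small simulation. The sketch below is purely illustrative (simulated data and invented coefficients, not the study's model): it fits a regression with a temperament × monitoring interaction and probes simple slopes at ±1 SD of monitoring.

```python
# Illustrative sketch (not the study's model): a temperament x monitoring
# interaction, with simple slopes probed at +/-1 SD of monitoring.
import numpy as np

rng = np.random.default_rng(0)
n = 500
temperament = rng.normal(size=n)   # hypothetical dispositional risk score
monitoring = rng.normal(size=n)    # hypothetical perceived parental monitoring
# Simulated "truth": temperament raises risk mainly when monitoring is low.
substance_use = (0.4 * temperament - 0.3 * monitoring
                 - 0.35 * temperament * monitoring + rng.normal(size=n))

X = np.column_stack([np.ones(n), temperament, monitoring,
                     temperament * monitoring])
beta, *_ = np.linalg.lstsq(X, substance_use, rcond=None)
b0, b_t, b_m, b_tm = beta

for level in (-1.0, 1.0):          # -1 SD (low) and +1 SD (high) monitoring
    slope = b_t + b_tm * level
    print(f"slope of temperament at monitoring = {level:+.0f} SD: {slope:.2f}")
```

A negative interaction coefficient yields a steep temperament slope at low monitoring and a near-zero slope at high monitoring, which is the buffering pattern described above.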

In closing, we found evidence from a longitudinal study of Mexican-origin youth that temperament and parental monitoring assessed in 5th grade are prospectively related to substance use outcomes in 9th grade. These findings are important because they suggest that theoretical models concerning the influence of temperament on substance use can be applied to adolescents of Mexican origin. Indeed, we suspect that factors like temperament and parental monitoring have transcontextual validity to the extent that they are risk factors for early substance use for a diverse range of youth. Of particular importance, we also found that relatively high levels of perceived monitoring might attenuate some of the risks associated with dispositional tendencies toward substance use. Although the current results should be replicated, we suggest that future intervention and prevention efforts could be enhanced by attending to individual differences in temperament. Such attention might be especially important when considering efforts to increase parental monitoring.

Neuropathic pain, caused by a lesion or disease affecting the somatosensory nervous system, has a considerable impact on patients’ quality of life and is associated with a high economic burden on the individual and society. Epidemiological surveys have shown that many patients with neuropathic pain do not receive appropriate treatment for their pain. This may be due to lack of diagnostic accuracy and relatively ineffective drugs, but also to insufficient knowledge about effective drugs and their appropriate use in clinical practice. Evidence-based recommendations for the pharmacotherapy of neuropathic pain are therefore essential. Over the past 10 years, a few recommendations have been proposed for the pharmacotherapy of neuropathic pain or of specific neuropathic pain conditions, such as painful diabetic neuropathies and postherpetic neuralgia. In the interim, new pharmacological therapies and high-quality clinical trials have appeared.

Previously hidden and unpublished large trials can now be identified on the web, which, together with analysis of publication bias, may limit the risk of bias in reporting data. Furthermore, prior recommendations sometimes came to discrepant conclusions because of inconsistencies in the methods used to assess the quality of evidence. In order to address these inconsistencies, the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system was introduced in 2000 and has received widespread international acceptance. All these reasons justify an update of evidence-based recommendations for the pharmacotherapy of neuropathic pain. The present work aimed to update the recommendations of the Special Interest Group on Neuropathic Pain (NeuPSIG) of the International Association for the Study of Pain (IASP) on the systemic and topical pharmacological treatments of neuropathic pain. Non-pharmacological management, such as neurostimulation techniques, was beyond the scope of this work.

We conducted a systematic review and meta-analysis of randomised controlled trials of all drug treatments for neuropathic pain published since 1966 and of unpublished trials with available results, and assessed publication bias. We used GRADE to rate the quality of evidence and the strength of recommendations. The systematic review of the literature complied with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. We used a standardized review and data extraction protocol. The full reports of randomised, controlled, double-blind studies published in peer-reviewed journals between 1966 and April 2013 were identified using searches of PubMed/Medline, the Cochrane Central Register of Controlled Trials, and Embase. Additional papers were identified from published reviews and the reference lists of selected papers. The target population was patients of any age with neuropathic pain according to the IASP definition; this included postherpetic neuralgia, diabetic and non-diabetic painful polyneuropathy, postamputation pain, post-traumatic/postsurgical neuropathic pain including plexus avulsion and complex regional pain syndrome type II, central post-stroke pain, spinal cord injury pain, and multiple sclerosis-associated pain. Neuropathic pain pertaining to multiple aetiologies was also considered. Neuropathic pain associated with nociceptive components was included provided that the primary outcome was neuropathic pain. Conditions such as complex regional pain syndrome type I, low back pain without radicular pain, fibromyalgia, and atypical facial pain were not included because they do not fulfill the current definition of neuropathic pain.

Trigeminal neuralgia was considered separately because of its generally distinct response to drug treatment. The interventions were systemic or topical treatments with at least 3 weeks’ duration of treatment. Single-administration treatments with long-term efficacy were included if there was a minimum follow-up of 3 weeks. Studies using intramuscular, intravenous, or neuraxial routes of administration and preemptive analgesia studies were excluded. Randomised, double-blind, placebo-controlled studies with parallel group or crossover study designs that had at least 10 patients per arm were included. Enriched-enrolment, randomised withdrawal trials were summarised separately. Studies published only as abstracts were excluded. Double-blind active comparator trials of drugs generally proposed as first- or second-line treatments were included. The study outcome was based on the effect on the primary outcome measure, e.g. neuropathic pain intensity. Studies in which the primary outcome included a composite score of pain and paraesthesia, or paraesthesia only, were not included. Studies were assessed for methodological quality by two independent authors using the five-point Oxford Quality Scale; a minimum score of 2 out of 5 was required for inclusion. We also assessed serious risk of bias relating to lack of allocation concealment, incomplete accounting of outcome events, selective outcome reporting, stopping early for benefit, use of non-validated outcome measures, and carryover effects in crossover trials.

The results of the database and registry search are shown in figure 1. In total, 191 published articles and 21 unpublished studies were included in the quantitative synthesis. Study characteristics are summarised in appendices 4 and 5. In addition, five published and 12 unpublished studies were retrieved between April 2013 and January 2014. Thus, a total of 229 articles/studies were included. References are presented in appendix 7. Eligible studies investigated tricyclic antidepressants (TCAs), serotonin-noradrenaline reuptake inhibitor (SNRI) antidepressants, other antidepressants, pregabalin, gabapentin/gabapentin extended release (ER) and enacarbil, other antiepileptics, tramadol, opioids, cannabinoids, lidocaine 5% patch, capsaicin 8% patch and cream, subcutaneous BTX-A, NMDA antagonists, mexiletine, miscellaneous topical agents, newer systemic drugs, and combination therapies. Fifty-five percent of the trials were conducted in diabetic painful polyneuropathy or postherpetic neuralgia. NNT (number needed to treat) and NNH (number needed to harm) could be calculated in 77% of published placebo-controlled trials. There was generally no evidence for efficacy of particular drugs in specific conditions; therefore, these recommendations apply to neuropathic pain in general. However, they may not be applicable to trigeminal neuralgia, for which we could extract only one study complying with our inclusion criteria. We therefore recommend referring to previous specific guidelines regarding this condition. Few studies included cancer-related neuropathic pain; the recommendations for the use of opioids may be different in certain cancer populations. Similarly, these recommendations do not apply to acute pain or acute pain exacerbation. Treatment of neuropathic pain in children is a neglected area. However, none of the studies assessed pediatric neuropathic pain, and the present guidelines therefore only apply to adults. Details regarding GRADE recommendations and practical use are provided in tables 2, 3 and appendix 10.
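Since NNT and NNH carry much of the quantitative weight in what follows, a minimal sketch of their computation may be useful: NNT for 50% pain relief is the reciprocal of the absolute risk difference between arms, and NNH is the same quantity computed on a harm outcome. All counts below are hypothetical.

```python
# Minimal sketch: NNT (number needed to treat) and NNH (number needed
# to harm) from trial arm counts. All counts below are hypothetical.

def nnt(events_treat, n_treat, events_ctrl, n_ctrl):
    """Reciprocal of the absolute risk difference between arms."""
    arr = events_treat / n_treat - events_ctrl / n_ctrl
    return float("inf") if arr == 0 else 1.0 / arr

# Hypothetical trial: 45/100 on active drug vs 25/100 on placebo achieve
# >=50% pain relief -> absolute risk difference 0.20 -> NNT = 5.
print(round(nnt(45, 100, 25, 100), 1))   # 5.0

# NNH uses the same formula with a harm outcome (e.g. dropout due to
# adverse events): 20/100 active vs 10/100 placebo -> NNH = 10.
print(round(nnt(20, 100, 10, 100), 1))   # 10.0
```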
Few relevant trials have appeared since our meta-analysis, and none affected the recommendations. TCAs, the SNRI antidepressants duloxetine and venlafaxine, pregabalin, and gabapentin/gabapentin ER and enacarbil have strong GRADE recommendations for use in neuropathic pain and are proposed as first-line, with caution regarding most TCAs.

Tramadol, lidocaine patches and high-concentration capsaicin patches have weak GRADE recommendations for use and are proposed as generally second-line. Topical treatments are recommended for peripheral neuropathic pain with a presumed local pain generator. In select circumstances, e.g. when there are concerns about the side effects or safety of first-line treatments, particularly in frail and elderly patients, lidocaine patches may be considered as first-line. Strong opioids and BTX-A have weak GRADE recommendations for use and are recommended as third-line. Prescription of opioids should be strictly monitored, particularly for patients requiring high dosages. Tapentadol, other antiepileptics, capsaicin cream, topical clonidine, SSRI antidepressants, NMDA antagonists and combination therapy have inconclusive GRADE recommendations. Combination of pregabalin/gabapentin and duloxetine/TCAs may be considered as an alternative to increasing dosages in monotherapy for patients unresponsive to monotherapy at moderate dosages. Cannabinoids and valproate have weak recommendations against their use in neuropathic pain, and levetiracetam and mexiletine have strong recommendations against their use.

The present manuscript presents the revised NeuPSIG recommendations for the pharmacotherapy of neuropathic pain based on an updated systematic review and meta-analysis of systemic or topical drug treatments. We used the GRADE system to assess the quality of evidence for all treatments, and the recommendations comply with the AGREE II guidelines. The present recommendations are driven by drug treatments rather than by the aetiology of pain, akin to prior NeuPSIG recommendations. Neuropathic pain is increasingly recognised as a specific multi-aetiology entity across neuropathic syndromes. In accordance with previous reports, results of our meta-analysis show that the efficacy of systemic drug treatments is generally not dependent on the aetiology of the underlying disorder. Side effects may, however, to some degree depend on the aetiology; e.g., drugs with CNS-related side effects may be less tolerated in patients with CNS lesions. Pain due to HIV-related painful polyneuropathy and radiculopathy seems more refractory than other pain conditions in our meta-analysis. This may be due to large placebo responses in HIV-related neuropathy trials, a distinct clinical phenotype in subgroups of patients with radiculopathy, or psychological/psychosocial comorbidities, often neglected in large trials. Topical agents have no known relevance for use in central neuropathic pain, and this is clearly stated in our recommendations. The strengths of this systematic review and meta-analysis are the analysis of publication bias and the inclusion of unpublished trials. Publication bias may be present if studies with positive results are published while those with no data or negative results are not. It may lead to major overestimation of efficacy in therapeutic studies. Our results showed that the effect sizes estimated from studies published in peer-reviewed journals were higher than those estimated from studies available in open databases. This finding emphasises the need for searching these databases in systematic reviews.
Analysis of further publication bias suggested a limited overstatement of the overall efficacy of drug treatments, although available methods to assess publication bias have limitations. Here, we found that high-concentration capsaicin patches were the most susceptible to publication bias, i.e., a new study with fewer than 400 participants and no effect could increase the NNT to an unacceptable level. This supports the robustness of a meta-analysis taking into account unpublished trials, and suggests that effect sizes were overestimated in previous meta-analyses of pharmacotherapy for neuropathic pain. The quantitative data for individual drugs, showing NNTs for 50% pain relief ranging from around 4 to 10 across most positive trials, emphasise the overall modest study outcomes in neuropathic pain. Inadequate response of neuropathic pain to drug therapy constitutes a highly unmet need and may have substantial consequences in terms of psychological or social adjustment. However, these results may also reflect insufficient assay sensitivity of clinical trials of neuropathic pain.
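The capsaicin observation can be made concrete with a toy calculation. The sketch below uses invented trial counts and a crude counts-based pooled NNT rather than the meta-analytic model actually employed; it merely illustrates how a single unpublished null study of a few hundred participants can push a pooled NNT sharply upward.

```python
# Toy sensitivity check (hypothetical counts, crude pooling by raw counts):
# how one unpublished null trial can shift a pooled NNT.

def pooled_nnt(trials):
    """trials: list of (events_treat, n_treat, events_ctrl, n_ctrl)."""
    et = sum(t[0] for t in trials)
    nt = sum(t[1] for t in trials)
    ec = sum(t[2] for t in trials)
    nc = sum(t[3] for t in trials)
    return 1.0 / (et / nt - ec / nc)

published = [(60, 150, 30, 150), (55, 140, 28, 140)]   # invented positive trials
print(round(pooled_nnt(published), 1))                 # ~5.1

# Add one hypothetical null study of ~400 participants (equal response rates):
with_null = published + [(50, 200, 50, 200)]
print(round(pooled_nnt(with_null), 1))                 # ~8.6: NNT worsens markedly
```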

Serving sizes have also been updated to reflect what people currently eat and drink.

Liquid solutions of the target analytes were prepared in methanol, with 0.6 μL injected directly into the GC-MS. Pure standards were of ACS Reagent grade and obtained from Sigma-Aldrich. Components and photographs of the wearable sampler are presented. Examples of GPS, temperature and humidity data are shown. Although the GPS is programmed to acquire a signal every 10 s, this time occasionally varies. This has a minor effect on the timing of sample collection and increases the power usage slightly. Furthermore, while the sampler is designed for use in indoor and outdoor environments, the GPS signal is too poor for indoor location updates. However, the system stores the last updated location and will record it until the GPS updates again. The system also records whether or not the sampler is receiving a GPS signal. We found that the sampler could be machined and constructed in less than 16 h with a material cost of around $400 USD. Almost half of the material cost came from the pump. Per the manufacturer, the pump produces less than 40 dB of noise, meaning it could be heard in a quiet room but would not be a loud disruption. In the sampler, the pump is encased in the aluminum fixture, muffling any noise. We found that the pump was very quiet and could barely be heard in typical use. The device weighed just under 400 g and could fit comfortably attached to the belt of a user. Ways to further reduce the weight include custom PCBs in place of the commercial microcontroller and GPS, and a custom plastic outer case. Benchmarking of the micro-preconcentrator chips has been previously described. The work herein focuses on the performance of the wearable environmental sampler. We first present evidence that the sampler successfully and reproducibly enables the μPC chip to collect a VOC sample.
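The "hold the last fix" behavior described above is simple to express in code. The sketch below is a generic illustration of that logging logic, not the sampler's actual firmware; all names are hypothetical.

```python
# Illustrative logging logic (not the actual firmware): keep the last valid
# GPS fix when the signal drops indoors, and record whether the fix is live.
import time

last_fix = None  # (lat, lon) of the most recent valid fix

def log_sample(read_gps):
    """read_gps() returns (lat, lon), or None when no signal (e.g. indoors)."""
    global last_fix
    fix = read_gps()
    has_signal = fix is not None
    if has_signal:
        last_fix = fix
    # Record timestamp, last known location, and signal status, so indoor
    # samples are still geolocated to the last outdoor fix.
    return {"t": time.time(), "location": last_fix, "gps_live": has_signal}
```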

The sampler was set to a sample time of 10 min and a calibration curve was produced for four chemicals ranging from 300 ppb to 5 ppm. As expected, the sampler showed a linear increase of signal with an increase in chemical concentration. R² values ranged from 0.958 to 0.999. Two samples were collected per concentration and the relative standard deviation ranged from 7% to 11%. We continued to collect samples of increasing concentration to profile the limit of linearity for this particular sampling protocol and sorbent. This is not a limitation of the environmental sampler but instead represents saturation of the sorbent within the μPC, where an increase in sample concentration would no longer lead to an increase in signal response. We sampled up to 100 ppm. Above 10 ppm, these four chemicals showed evidence of sorbent saturation for 10 min of sample time: signal increases no longer maintained a linear relationship to increases in sampling concentration. Based on these data, 10 min of sampling time might be best used in environments where VOCs are present at less than 10 ppm. This is largely contingent on the VOCs of interest, which is further discussed below. In addition to varying concentration, we also varied sampling time. There was a steady increase of signal response with longer sampling times from 10 to 60 min for hexane, 2-pentanone and 2-hexanone, and a steady increase from 10 to 90 min for heptane. Beyond these ranges, signals decreased in intensity, suggesting saturation of the preconcentrator sorbent trap inside the sampler. Based on this experiment, a sampling time of less than 60 min might be appropriate in an environment with VOCs around a concentration of 100 ppb. We intended to build a sampler that would be worn by a person for hours at a time and provide a snapshot that was as complete as possible of their integrated VOC exposure. A variety of factors influence such detection goals.
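For reference, a calibration curve of this kind is typically an ordinary least-squares fit of signal against concentration, with R² and the relative standard deviation of replicates computed as below. The numbers are invented placeholders, not the study's data.

```python
# Illustrative linear calibration fit (hypothetical data, not the study's).
import numpy as np

conc = np.array([0.3, 1.0, 2.5, 5.0])            # ppm
signal = np.array([1.1e5, 3.4e5, 8.6e5, 1.7e6])  # GC-MS peak area (arbitrary)

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)
print(f"slope={slope:.3g}, intercept={intercept:.3g}, R^2={r2:.3f}")

# Relative standard deviation of duplicate samples at one concentration:
dupes = np.array([3.3e5, 3.6e5])
print(f"RSD = {dupes.std(ddof=1) / dupes.mean() * 100:.0f}%")
```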

Within the sampler, VOC extraction is influenced by sampling time, flow rate and the preconcentrator chip. Furthermore, without advance knowledge of environmental conditions, it is even more challenging to define an optimized sampling protocol for general use. For instance, the sampler could be optimized to detect VOCs in the parts-per-billion range. If it is then used in situations with high concentrations, the preconcentrator chip would quickly saturate, reducing the quality of any quantitative assessment. The reverse could also occur. Additionally, the μPC chip could be exchanged at a greater frequency, which would improve the temporal and spatial resolution of exposure; however, this would put greater requirements on the user and could result in relatively clean samples in unpolluted environments. Finally, it is impossible to create a single method that is optimized to detect every known VOC, as parameter changes have varying effects on different classes of compounds. The above tests helped us establish initial sampling guidelines for untargeted analyses in unpredictable environments. Furthermore, we designed the sampler so that parameters can be easily changed or tailored for specific applications. The USB port on the sampler allows users to change these settings via the Adafruit Feather microcontroller. Should a researcher seek targeted analysis, such as a user’s exposure to benzene or toluene, sampler parameters such as sampling time, sampling flow rate and sorbent type can be tested, optimized and applied. The environmental sampler was tested outside the lab environment as a handheld device, initially by researchers in our group. Different sampling durations, sampling locations, and preconcentrator chips were used for qualitative assessments of the performance of the sampler.
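The saturation tradeoff described above can be sketched with a toy mass-loading model. The flow rate and sorbent capacity below are assumed values for illustration, not measured properties of the μPC chip; the point is simply that collected mass grows with concentration × flow × time until capacity is reached, after which extra sampling adds no signal.

```python
# Toy mass-loading model for choosing a sampling time (all values assumed).

def collected_mass_ng(conc_ng_per_ml, flow_ml_per_min, minutes, capacity_ng):
    """Mass grows linearly until the sorbent saturates at capacity_ng."""
    return min(conc_ng_per_ml * flow_ml_per_min * minutes, capacity_ng)

FLOW = 2.0         # mL/min, assumed pump flow
CAPACITY = 5000.0  # ng, assumed sorbent capacity

for conc, label in [(40.0, "high-concentration air"), (0.4, "low-ppb air")]:
    for t in (10, 60, 120):
        m = collected_mass_ng(conc, FLOW, t, CAPACITY)
        sat = " (saturated)" if m >= CAPACITY else ""
        print(f"{label}: {t:3d} min -> {m:7.1f} ng{sat}")
```

Under these assumptions, short sample times suffice in high-concentration environments (longer runs only saturate the trap), while dilute environments reward longer sampling, mirroring the 10 min vs 60 min guidance above.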

Figure 5 shows three deployments of the samplers for a continuous sampling duration of one hour for each run. Samples were collected in a kitchen while the user was cooking, in an institutional hallway while the floors were being stripped and waxed, and in a room where a cat litter box was kept. Table 1 shows putative peak identities of sixteen example compounds, although more were detected. Many of the putatively identified compounds are unsurprising given the context of the samples. The sample taken while the user was cooking included detection of limonene, ocimene and cuminal. A sample taken in a hallway during floor stripping/waxing yielded high abundances of benzyl alcohol and ethylene glycol monohexyl ether. The cat litter box emitted common fragrance VOCs, such as limonene, eucalyptol and nerol, which can help mask odor. To further test our sampler, we let a representative member of the general public use the device for a week. Under IRB approval, a 17-year-old high school student volunteered to test our sampler. The student carried the sampler for 12 h during the day, and the sampler automatically collected a 10 min sample every 1 h. The student repeated this for 5 d. This test helped us monitor the experimental performance of the sampler and gather feedback on its user-friendliness during a lengthy deployment. The sampler did not interrupt any daily activities of the user, noise due to the sampling pump was negligible, and the student was able to carry the sampler and exchange the preconcentrator chips easily. Raw GC-MS chromatograms of samples collected by the volunteer are shown. Table 2 summarizes the number of VOCs deconvoluted from each chromatogram and also the number of unique VOCs detected in each sample. The user did vary their location during the five days of sampler use, with some overlap in location between days, and varied the time spent in each location. It is thus expected that the number of captured VOCs reflected the similarities and differences of the user’s environments. Putative identifications of compounds were performed by comparison of the obtained mass spectra to the NIST ’14 database, providing a list of potential VOCs that the user was exposed to during their day-to-day activities, as collected by the environmental sampler. A number of naturally occurring VOCs were detected, such as benzeneacetaldehyde, β-myrcene and camphor. Other compounds were potentially artificial in origin, such as lilial and galaxolide, two synthetic fragrances that smell floral and musky, respectively, and likely originated from a scented cosmetic product. As captured by the sampler, the user was potentially exposed to hazardous VOCs, such as ethylbenzene, toluene, phenol and benzoyl chloride. To demonstrate the quantitative capabilities of the sampler, we chose four VOCs that were present in all five of the participant’s samples.
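Library matching of the kind described above typically reduces to a spectral similarity score between the measured spectrum and a database entry. The sketch below shows the standard cosine-similarity form on a toy spectrum; it illustrates the general technique, not the deconvolution/NIST '14 pipeline actually used, and the intensity values are invented.

```python
# Standard cosine similarity between a measured mass spectrum and a library
# entry (toy spectra; the real workflow used NIST '14 with deconvolution).
import numpy as np

def cosine_match(spec_a, spec_b):
    """Spectra as {m/z: intensity} dicts; returns similarity in [0, 1]."""
    mzs = sorted(set(spec_a) | set(spec_b))
    a = np.array([spec_a.get(mz, 0.0) for mz in mzs])
    b = np.array([spec_b.get(mz, 0.0) for mz in mzs])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy measured spectrum vs a limonene-like library entry
# (m/z 93 base peak; 68 and 136 are characteristic limonene ions).
measured = {68: 310.0, 93: 999.0, 121: 180.0, 136: 220.0}
library_limonene = {68: 340.0, 93: 999.0, 121: 150.0, 136: 250.0}
print(f"match score: {cosine_match(measured, library_limonene):.3f}")
```

A score near 1 marks a strong candidate; in practice a match is still only "putative" until confirmed against an authentic standard.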

Limonene, menthol and decanal are all common fragrance compounds, while 2-butyl-1-octanol is a known humectant. As these compounds appeared in each of this volunteer’s samples, we suspect they may have derived from a personal product that was applied daily. We did not speculate further on the origins of these VOCs, as the purpose here was only to demonstrate quantification of chemicals. A calibration curve was constructed to quantify the amount of each chemical retained on the μPC chip during deployment. Limonene was the most variable, with values ranging from 3.4 to 71.5 ng. Decanal was the most stable, with a relative standard deviation of 34% across all five samples. Menthol was found in the lowest abundance. In future work, we hope to deploy these samplers in environments that potentially contain hazardous levels of certain VOCs. Locations would be areas such as California’s Central Valley, which contains multiple sources of air pollution from industry, agriculture and benzene treatment plants. Areas like Paradise, California could also benefit from VOC samplers, since the area is currently recovering from a massive wildfire that has the potential to expose residents to unsafe compounds as they rebuild their community. At these sites, samplers could be used to target dangerous compounds, such as benzene or toluene, and quantify exposure concentrations.

The overconsumption of nutritive sugars continues to be a major dietary problem in different parts of the world. A recent report indicates that the average American consumes about 17 teaspoons of added sugar daily, nearly twice the 6 and 9 teaspoons recommended for women and men, respectively. This dietary behavior is linked to various adverse health effects such as increased risk of diabetes, obesity, high blood pressure and cardiovascular diseases. Hence, there are worldwide efforts to reduce sugar consumption. For instance, the World Health Organization made a conditional recommendation to reduce sugar consumption to less than 5% of total caloric intake, along with a strong recommendation to keep sugar consumption to less than 10% of total caloric intake for both adults and children. Currently, added sugar consumption accounts for approximately 11–13% of the total energy intake of Canadian adults, is greater than 13% in the US population, and is as high as 17% in US children and adolescents, the latter principally from sugar-sweetened beverages (SSBs). Consequently, taxes on SSBs have been proposed as an incentive to change individuals’ behavior to reduce obesity and improve health. Notably, the city of Berkeley, CA, USA successfully accomplished a 21% decrease in SSB consumption within a year of implementation. Therefore, it is expected that more states and cities will adopt this policy. On the regulatory level, the U.S. Food and Drug Administration updated the Nutrition Facts label requirements for packaged foods and beverages, starting 1 January 2020, to declare the amount of added sugars in grams and show a percent daily value for added sugar per serving. The expansion of these efforts to spread awareness of sugar consumption habits and the resulting health issues has generated demand for safe, non-nutritive sugar substitutes. There are many sweeteners on the market to help consumers satisfy their desire for sweetness; however, each of the sweeteners available to consumers has specific applications and certain limitations.
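For concreteness, the WHO thresholds quoted above can be converted to grams and teaspoons with simple arithmetic; the sketch assumes a 2,000 kcal reference diet, 4 kcal per gram of sugar, and roughly 4.2 g of sugar per teaspoon.

```python
# WHO sugar limits as grams/teaspoons, assuming a 2,000 kcal reference diet,
# 4 kcal per gram of sugar, and ~4.2 g of sugar per teaspoon.
KCAL_PER_DAY = 2000
KCAL_PER_G = 4
G_PER_TSP = 4.2

for pct in (10, 5):  # strong (<10%) and conditional (<5%) recommendations
    grams = KCAL_PER_DAY * pct / 100 / KCAL_PER_G
    print(f"<{pct}% of energy -> <{grams:.0f} g/day (~{grams / G_PER_TSP:.0f} tsp)")
# <10% of energy -> <50 g/day (~12 tsp); <5% -> <25 g/day (~6 tsp)
```

On these assumptions, the reported 17-teaspoon U.S. average (roughly 71 g) exceeds both thresholds.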
Artificial sweeteners (ATS) have been used as sugar substitutes in numerous applications; however, their long-term effects on human health and safety aspects remain controversial. For example, ATS appear to change the host microbiome, lead to decreased satiety and alter glucose homeostasis, and they are associated with increased caloric consumption and weight gain. Moreover, some health effects such as dizziness, headaches, gastrointestinal issues, and mood changes are associated with the consumption of a commonly used ATS, aspartame. Additionally, Kokotou et al. have demonstrated the impact of ATS as environmental pollutants, concluding that when artificial sweeteners are applied in food products or eventually enter the environment, their transformation and/or degradation may lead to the formation of toxic substances.

The tobacco industry argues that cigarette smuggling leads to increased levels of organized crime.

Lawmakers also were talking about the desire of the Legislature to have a greater say in how the settlement money would be spent. In particular, concern was expressed regarding the fact that such a large amount of money was to be disbursed by only nine people. “Some of my colleagues definitely feel that with the magnitude of dollars there, the Legislature should have more control,” said Senator Jensen, the chairperson of the Health and Human Services Committee. For the next few months, the University of Nebraska Medical Center and Creighton University continued to lead efforts calling on the Legislature to dispense the tobacco settlement instead of relying on the grant process. In September, Governor Johanns weighed in on the issue. He said that he supported splitting the settlement three ways: one-third to remain for Nebraska Health Care Cash Fund grants, one-third to be distributed by the Legislature and one-third to go toward medical research at Creighton University and the University of Nebraska. Regarding the research opportunities, Johanns said, “I think it is a great opportunity to literally lift Nebraska and put it on the map,” but he added that he thought one-third of the money should remain in the grant process because, “These grants have great potential to improve access and delivery of health-care services to medically underserved areas, improve the quality of health-care services and strengthen public health in Nebraska.” With all this prior discussion on how best to spend the tobacco settlement money, it was not surprising that when the 2001 legislative session began in January, numerous bills dealing with using settlement money for health improvements were introduced.

By combining the interest generated from the investment of the tobacco settlement dollars and money received from federal Medicaid transfers, the Health Care Fund was estimated at $50 million per year. This $50 million was separate from the $7 million that Tobacco Free Nebraska was receiving directly from the settlement payments for three years. The Speaker of the Legislature, Doug Kristensen of Minden, introduced a bill on behalf of Governor Johanns that called for one-third of the $50 million to go to biomedical research. Senators Jensen, Jennie Robak of Columbus and Thompson all separately introduced bills that focused largely on improvements to Nebraska mental health services, Senator Lowen Kruse of Omaha introduced a bill that would have raised state Medicaid payment rates for mental health services, and Senator Byars introduced three bills that sought to increase the amount spent on respite care and developmental disabilities. The focus on public health, and specifically on mental health, was not surprising. Historically, Nebraska’s per capita funding for public health had ranked either 49th or 50th among all states, and as Susan Boust, a community mental health service provider from Omaha, testified at the Appropriations Committee and Health and Human Services Committee’s public hearing for all these bills, Nebraska was spending $8.36 per person on community-based mental health efforts while the national average was $32. Unfortunately for the state, the biennium budget numbers continued to worsen. When new revenue figures came in early during the 2002 legislative session, the anticipated $50 million shortfall rose to $186 million and then to $222 million in March, another 5%. Once again, Governor Johanns reiterated his position that the budget deficit should be remedied through spending cuts and not tax increases. Having already cut $171 million from the budget during the 2001 special session, legislators viewed the governor’s position toward tax increases as unreasonable. Over the governor’s veto, the Legislature passed temporary increases in the state income tax, sales tax and cigarette excise tax, as well as making spending cuts, including to the state’s K-12 school system. Previously, Governor Johanns had said that the only tax increase he would support was on the cigarette excise tax.

Governor Johanns was willing to support an excise tax increase because an excise tax increase was less likely to have political repercussions for him with Nebraska voters than a sales or income tax increase. Despite the fact that during the 2001 special session and the 2002 legislative session the Legislature had made $295 million in spending cuts and transfers and raised taxes by $142 million – $25 million of it from the cigarette excise tax increase – a shortfall of $130 million still remained by the end of the 2002 legislative session. One major consequence of the budget crisis facing the Legislature in 2002 was that Tobacco Free Nebraska lost $5 million from its $21 million total. In 2000, Tobacco Free Nebraska had been allocated $7 million for FY2000, but it was not possible for the program to scale up fast enough to spend this money during the first year. Because of the need to follow standard government procurement rules, only about $2 million of the $7 million appropriated was spent. With this money not being spent by the end of the biennium in FY2001, the Legislature considered this $5 million “unobligated.” Rather than treating these funds as encumbered for contracts in the process of being awarded, during the 2002 session the Legislature decided, as part of LB 1310, to reallocate this money to cover a shortfall for the Children’s Health Insurance Program. Tobacco Free Nebraska was not alone in this respect. Dozens of programs lost money that had been allocated to them during the previous biennium budget but had not been spent and thus had become unobligated.

While Tobacco Free Nebraska lost $5 million from its FY2000 allocation, the Legislature did not touch the program’s budget for FY2001 and FY2002; therefore, Tobacco Free Nebraska was able to obligate $7 million a year for both of these years. Because of the reallocation that occurred during the 2002 legislative session, Tobacco Free Nebraska received only $16 million over three years, not the $21 million over three years that had been originally budgeted for the program as part of LB 1436 in 2000. With a shortfall of $130 million remaining after the 2002 legislative session, the governor once again called a special session in July 2002 and froze spending for all state agencies. With tax receipts for the preceding eleven months falling short of projections, the estimated budget deficit that state senators were faced with rose from $130 million to $233 million. Later it was determined that the actual deficit for the biennium was much higher than the $233 million figure estimated by the Governor’s budget office.

The $233 million figure was calculated by assuming that many expenditures would not increase, but when projected increases to Medicare, state aid to schools, employee salaries and health insurance were factored in, the deficit figure rose to $778 million. Similar to the special session in 2001, the special session of 2002 cut over $15 million from the University of Nebraska system, half of which was taken from the largest campus, the University of Nebraska-Lincoln. In addition, approximately 19,000 people were removed from Medicaid enrollment by changing eligibility requirements, which was supposed to save approximately $20 million. While tobacco control was unaffected by the special session of 2002, the Legislature was unable to solve the budgetary crisis facing the state. The massive deficit that remained for the 2003 legislative session came at a very unfortunate time for Tobacco Free Nebraska.

Believing that the Master Settlement Agreement earlier that year did not do enough to prevent underage smoking, the members of SmokeLess Nebraska decided to create a new coalition in November 1998, with the specific goal of reducing smoking by minors by making cigarettes more costly through an excise tax. Named Citizens for a Healthy Nebraska, the new coalition comprised the Nebraska divisions of the American Cancer Society, the American Heart Association, the American Lung Association, Health Education Inc., the Nebraska Association of Hospitals and Health Systems, the Nebraska Dental Association and the Nebraska Medical Association. While no specific amount for a tax increase was announced, Rich Lombardi, the coalition’s lobbyist, stated that $1.00 would be a good goal. Only Alaska and Hawaii had excise taxes that high at the time. Citizens for a Healthy Nebraska’s announcement that it would pursue an excise tax increase came right on the heels of a nationwide announcement the day before by Philip Morris and R.J. Reynolds that they had raised the wholesale price of a pack of cigarettes by $0.45 to recoup the money they had to pay as part of the Master Settlement Agreement. The coalition’s reason for pushing for a large excise tax increase was that it viewed it as the most effective means of getting smokers, especially teenage smokers, to quit. “Our group understands that the real solution is to keep kids out of the market by increasing the cost of a pack of cigarettes so that kids won’t smoke,” said Sherry Miller at the coalition’s first press conference. Miller was the former President of the Nebraska Parent Teachers Association. It was Miller and Dr. Chris Caudill, a Lincoln cardiologist, who announced the formation of Citizens for a Healthy Nebraska. Speaking about the plans to increase the excise tax, Caudill said, “It is the number one way we can get kids to stop smoking.” Following standard tobacco industry rhetoric, the day after Citizens for a Healthy Nebraska’s announcement, cigarette smuggling was already being proposed as a reason to forgo an excise tax increase.
State Senator Bob Wickersham of Harrison, who later voted against LB 505 in committee, stated, “You do have to examine whether it will work or, in fact, whether it would be an incentive for bootlegging and other activities.” Wickersham was repeating one of the tobacco industry’s primary arguments against excise taxes: that large excise tax increases lead to increased levels of cigarette smuggling. Cigarette smuggling typically means purchasing cigarettes in a state with a lower excise tax and then selling them for a profit in a neighboring state that has a higher excise tax. The industry also claims that excise tax increases lead to economic hardship for retailers due to revenue lost to neighboring states.

Two other characteristics of the Health Department’s first ordinance proposal were important.

The objections of these two organizations seemed to focus on the revised definitions and the removal of exemptions, not on the enforcement section of LB 45. They complained that LB 45 would prevent convenience stores from creating smoking areas for their customers and that it would prevent smoking in private offices. As Keigher stated, “This is a total ban on smoking in convenience stores.” Senator Thompson was aware of all the changes to the implementing regulations that had recently been developed by Health and Human Services. Except for the enforcement clause, LB 45 was a clean-up bill that sought to harmonize the language of the Nebraska Clean Indoor Air Act with the rules and regulations that would enforce it. After the hearing she told reporters, “They can’t now, with or without LB 45. LB 45 simply puts in statute what is already in the new rules and regulations, so it is easier for people to read the law and know exactly what it does.” To reporters, Keigher and Siefken both seemed surprised to find out that the Health and Human Services changes had very similar effects to LB 45. “That’s news to me,” claimed Keigher. “It sounds to me like this is a total smoking ban. And it is banned by the rules. Interesting.” Siefken responded by saying, “That is amazing. They don’t have the authority to do that. They have to follow what the Legislature intended them to do.” According to Goedeker, Keigher and Siefken’s surprise was feigned. She said that Health and Human Services had conducted meetings and exchanged correspondence with both the Nebraska Petroleum Marketers and Convenience Store Association and the Nebraska Grocery Industry Association to explain the effect that the new rules and regulations would have for them, so both organizations were aware that LB 45 simply sought to clarify regulations that were already in effect. “I believe it was a good press opportunity for them,” Goedeker said. “

They knew what was going on, they knew that it wasn’t affecting their businesses any differently than before, but it was a good opportunity to confuse the media and confuse the public.” The activities of the lobbyists seemed to have their desired effect on the language of LB 45. Two senators on the Health and Human Services Committee, Philip Erdman and Doug Cunningham, both stated publicly that they were willing to send LB 45 on to the full Legislature if everything but the enforcement measure was stripped from the bill, which is what happened. The Health and Human Services Committee drafted an amended version of LB 45 that removed twelve sections so that only the enforcement section remained, and the bill was then passed by a vote of 7-0. To Senator Thompson, this was an acceptable compromise because she felt the enforcement measure was the important new piece of the bill. The next hurdle for LB 45 was concern over the fact that the original language of the Nebraska Clean Indoor Air Act stated that “any affected party” could seek to bring an injunction against an establishment that was not in compliance with the Nebraska Clean Indoor Air Act. Some legislators were worried that this language would allow any individual to take a business to court over its smoking policy. During the second round of debate, an amendment was passed that removed the ability of an individual or a local board of health to seek an injunction requiring a business to comply with the law. Instead, the new language allowed local public health departments to seek an injunction in court. Another change was that, previously, proprietors were required to ask smokers to refrain from smoking “upon request of a client or employee suffering discomfort from the smoke.” Because LB 45 actually made such a provision enforceable against business owners, legislators decided to amend LB 45 so that proprietors of businesses were only required to ask smokers to refrain from smoking if the smoker was smoking in the nonsmoking section. After these changes were made, LB 45 was approved by the Legislature by a 43-0 vote on March 14, 2003. It was signed into law by the governor on March 20, 2003.

Even with the compromises that were made to LB 45, the passage of the bill meant that the State of Nebraska finally had the means to ensure that businesses, and not just smokers, were in compliance with the Nebraska Clean Indoor Air Act. In 2003, the Lincoln-Lancaster County Health Department headed an effort to pass a 100% smoke free workplace ordinance for the city of Lincoln, the capital of Nebraska and its second largest city. The other groups that fought for this ordinance were the Lincoln-Lancaster County Board of Health, Tobacco Free Lincoln, the Lancaster County Medical Association and the Nebraska chapters of the American Heart Association, American Lung Association and American Cancer Society. However, the Lincoln City Council caved in to pressure from the tobacco industry and its ally, the hospitality industry, and passed a confusing and weakened ordinance that allowed separately ventilated “smoking rooms” and exempted bars. In preparation for the attempt to pass a smoke free workplace ordinance in Lincoln, the Lincoln-Lancaster County Health Department requested a study of the harm caused by secondhand smoke to workers in the hospitality industry in Lincoln. James Repace, a health physicist who had conducted numerous similar studies, was contacted to measure the cotinine levels present in the blood of workers in Lincoln. Cotinine is a biomarker for nicotine exposure that is used to predict health risks. On May 5, 2003, Repace’s findings were released: the cotinine levels for nonsmoking bar employees in Lincoln were 18 times higher than the national median, and based on these findings he estimated that 17 hospitality industry workers in Lincoln die each year from the exposure to secondhand smoke they receive in their workplace. In an effort to protect the health of workers and the general public, the Lincoln-Lancaster County Health Department announced in August 2003 that it would put before the city council an ordinance that would require all indoor places of employment and all indoor public places in Lincoln, including bars and restaurants, to be 100% smoke free.

In their proposed draft of the ordinance, entitled the “Lincoln Smokefree Air Act,” which was released on August 19, places of employment were defined as “an indoor area under the control of an owner that employees access during the course of employment, including, but not limited to, work areas, employee lounges, restrooms, conference rooms, meeting rooms, classrooms, employee cafeterias, and hallways.” The ordinance defined public places as “an indoor area to which the public is invited or in which the public is permitted.” The only exceptions to this smoke free ordinance were that hotel and motel rooms could be designated as smoking rooms and that smoking was permitted if it was being conducted as part of medical research. By requiring that all places of employment and public places be completely smoke free, the Health Department sought to provide comprehensive protection from secondhand smoke to the general public but also, more specifically, to employees. The Health Department’s draft of the ordinance also sought to avoid some of the problems that had plagued the Nebraska Clean Indoor Air Act, such as enforcing the law against individual smokers. The effectiveness of the Nebraska Clean Indoor Air Act had been weakened by the fact that, until the passage of LB 45 in 2003, it could only be enforced against individual smokers, so the proposed ordinance addressed this area by providing for $100-$500 fines against the owners of businesses that failed to comply with the law as well as against individual smokers. The draft ordinance also stated that failure to comply with its provisions was sufficient cause to justify the revocation or suspension of any license granted to that establishment by the City of Lincoln. Another section of the ordinance, as originally proposed, was its signage provisions, which required the posting of at least one “no smoking” sign at all entrances used by the public or employees. In the Health Department’s first draft, the only acceptable “no smoking” signs were ones provided by the Health Department or the State of Nebraska, but this was changed to any sign using the international “no smoking” symbol of a cigarette with a red circle and slash. First, the law was to apply not only to establishments within Lincoln but also to establishments within three miles of the corporate limits of the city. The Lincoln City Council has zoning jurisdiction over this three mile area in order to deal with city growth, so by including this area the Health Department was attempting to expand the effect of the law as far as possible. Second, the first draft of the Lincoln Smokefree Air Act stated that if a business had a food establishment permit that included an outdoor area, then this outdoor area was also required to be smoke free. This outdoor area provision, which mainly affected outdoor cafes and beer gardens, was included at the recommendation of Tony Messineo, a restaurant owner and member of the Lincoln-Lancaster County Board of Health. Following the announcement by the Health Department that it would propose a smoke free workplace ordinance, it became clear that the debate in Lincoln would be little different from the debate over the passage of the Nebraska Clean Indoor Air Act in 1979.
At that time, the tobacco industry and its allies in the hospitality industry had argued that the Legislature should not establish separate smoking and nonsmoking areas because, they said, there was little evidence that secondhand smoke was harmful, it would have a harmful economic impact, and it was unreasonable government interference in private businesses.

The tobacco industry used these arguments in 1979 because, as Ray Oliverio, the Director of Public Affairs for the Tobacco Institute, said, “All of these arguments play well in Nebraska.” In 2003, over twenty years later, the tobacco industry and its ally, the hospitality industry, would use these same arguments to successfully oppose the passage of a Lincoln smoke free workplace ordinance. The subtitle of the Lincoln Journal Star’s article the day after the announcement of the Health Department’s proposed ordinance for Lincoln accurately captured the nature of the debate that would continue for months in Lincoln. It read, “The Health Department and supporters call it an issue of employee health; to opponents, it’s a matter of free choice.” In the article, Brian Kitten, co-owner of a Lincoln bar named Brewsky’s, echoed tobacco industry rhetoric, saying, “My belief: There will always be a market for an environment where a smoker can go in and have a meal and have a drink. Why make that illegal?” Kitten also said that he was skeptical about the harm caused by secondhand smoke. He asked, “Who do you believe?” and stated that different studies came to different conclusions. In the coming months, Kitten would serve as the primary spokesperson for the opponents of the local smoke free ordinance. Throughout the debate over the proposed 100% smoke free workplace ordinance, the two local papers maintained their positions, with the Journal Star supporting smoke free environments while the Daily Nebraskan sided with the tobacco industry’s position that smoke free workplace laws violated the rights of business owners. Following hearings with business owners that were sparsely attended, and after receiving comments from other city officials, the Health Department decided to make some changes to the draft of the ordinance. At the request of the Lancaster County Board of Commissioners, the three mile area outside the city limits was removed so that only businesses within Lincoln would be affected. According to County Commissioner Larry Hudkins, this request was made not because the board disagreed with the intent of the proposed law but because the board felt that extending the smoke free changes further into Lancaster County only complicated the issue. Another change was the elimination of outdoor areas at restaurants and bars from the smoke free requirements, which was taken out after health advocates were unable to find research that specifically addressed the effect of secondhand smoke in an outdoor setting. Businesses that were located in private residences, which included daycare centers, were exempted from the smoke free requirement to avoid legal concerns about inspecting private homes without a warrant.

There are benefits to Nebraska’s unicameral system over a two-chamber system.

They have weakened efforts to keep tobacco out of the hands of children, opposed any attempt to increase the excise tax on cigarettes and fought against laws to protect the public from exposure to secondhand smoke. The tobacco industry regularly spends large amounts of money to employ the most influential lobbyists in the state, as well as to make direct contributions to candidates and elected officials. They have also established close relationships, often by providing money, with other business groups such as the Nebraska Restaurant Association, the Nebraska Chamber of Commerce, the Nebraska Retail Grocers Association, the Nebraska Petroleum Marketers and Convenience Store Association, the Nebraska Retail Federation, the Nebraska Association of Tobacco and Candy Distributors and the Nebraska Licensed Beverage Association. It is a common strategy of the tobacco industry to mobilize third-party allies to mask its own involvement in opposing tobacco control progress. In the 1970s, Nebraska lacked a substantial tobacco control community and, therefore, lacked community-based political pressure to enact policy changes. Despite this hindrance, two state senators, Shirley Marsh of Lincoln and Larry Stoney of Omaha, were successful in establishing Nebraska as an early leader in passing clean indoor air legislation. The Nebraska Legislature passed its first clean indoor air law in 1974, one year after Arizona passed the first law in the nation that required separate smoking and nonsmoking sections in some public places. Nebraska followed in the footsteps of Minnesota, which enacted the nation’s first comprehensive clean indoor air law in 1975, when the Legislature passed the Nebraska Clean Indoor Air Act in 1979, which required separate nonsmoking and smoking sections in almost all public places.

This law was stronger than similar legislation proposed in New York, Connecticut and Massachusetts at the same time. Despite the early success seen in Nebraska, a lack of commitment from the Legislature resulted in a lull in tobacco control policy making in Nebraska during the 1980s. Legislators, led by Shirley Marsh, tried numerous times to strengthen the Nebraska Clean Indoor Air Act, especially its enforcement provisions, but they were unsuccessful due to opposition from the tobacco industry. Since that time, the tobacco control community in Nebraska has grown extensively, both in numbers and in organization. Since 1989, the SmokeLess Nebraska coalition and its offshoot, Citizens for a Healthy Nebraska, were formed, as was the much smaller grassroots group, Group to Alleviate Smoking Pollution of Nebraska. During that same time period, local coalitions were formed in Nebraska’s larger cities and the Nebraska Health and Human Services Department formed a statewide tobacco control division, Tobacco Free Nebraska. While the emergence of these groups resulted in much more activity around tobacco control during the 1990s, including youth access legislation at both the local and state level, a petition-based initiative to increase the state’s cigarette excise tax, and several different efforts to protect the public from secondhand smoke, the decade was mostly a series of victories for the tobacco industry and its allies because they were successful in defeating or weakening almost all of the proposed tobacco policies. The major exception was the Legislature’s decision in 1999 to make almost all state buildings and vehicles smoke free, which represented the first change to the Nebraska Clean Indoor Air Act in its almost 20 years of existence. The reason for the success of this measure was largely the determination of Senator Don Preister of Omaha, who pushed for over five years to get state buildings smoke free. Between 2000 and 2003, there was a series of highs and lows for tobacco control, due largely to a budget crisis that afflicted the state. In 2000, health advocates were successful in getting the Legislature to fund tobacco control at $21 million over three years, but this amount was cut by 94% in 2003, leaving only $405,000 per year for tobacco control. In 2000 and 2001, attempts by Senator Nancy Thompson of Papillion to broaden the Nebraska Clean Indoor Air Act to make restaurants smoke free were defeated, but she was successful in 2003 in getting the Act’s enforcement provisions strengthened so that the Act could be enforced against business owners.

In 2000 and 2001, health advocates were unable to convince the Legislature to pass a large cigarette excise tax increase, but in 2002, a $0.30 increase was approved. In 2003, the Lincoln-Lancaster County Health Department attempted to pass Nebraska's first local 100% smoke free workplace ordinance, but the City Council gave in to pressure from the tobacco industry and bar owners and passed a confusing ordinance, weakened to allow smoking in bars and in separately ventilated break rooms and "smoking rooms". The beginning of the new millennium was a series of three steps forward, two steps back for tobacco control in Nebraska. Nebraska is the only state in the United States to have a single-house legislature, also known as a unicameral system. Nebraska's legislature is also unique in that it is nominally nonpartisan. While state senators may be affiliated with a political party, this information is not listed on the election ballot, and leadership positions are not determined by political party as in the common majority-minority system. In this report, the political affiliations of legislators in office as of 2003 are listed. The Nebraska Blue Book is published biennially by the Clerk of the Legislature and contains copious amounts of information about Nebraska. Much of the information about the Nebraska legislative process and its history is discussed in the Nebraska Blue Book; therefore, unless otherwise indicated, the information provided in this section is from the Nebraska Blue Book. Both of Nebraska's unique legislative features came about on November 6, 1934, when voters approved a constitutional amendment that eliminated the previous bicameral system in favor of a unicameral one. Thus, the form of the Nebraska Legislature has more in common with local governing bodies than with the federal system. First, the relative simplicity of the unicameral system makes the legislative process more transparent. For example, there are no conference committees in the unicameral system. Conference committees are usually formed in an ad hoc manner to resolve differences between similar bills passed by both houses. This role often makes conference committees extremely powerful in bicameral legislatures, because their members determine the final language of a bill and because there is little oversight of their activities. While the Nebraska Legislature does utilize committees to conduct public hearings and to decide whether a bill will be sent for debate on the floor of the full Legislature, it is the responsibility of the full Legislature to finalize the language of a bill, not that of a particular ad hoc committee. Second, a unicameral system is smaller and less expensive than a bicameral system. When the unicameral system was implemented in 1937, the number of Nebraska legislators dropped from 133 to 43, committees were reduced from 61 to 18, half as many bills were introduced but more bills were passed, and the session was shorter by 12 days. The cost of the last bicameral session in 1935 was $202,593. The first unicameral session almost halved this amount, with a final cost of $103,445.

Over sixty years later, the number of legislators has risen by only six, to 49, and there are still only 18 committees: 14 standing committees that hold public hearings on bills and 4 select committees that conduct administrative tasks for the Legislature. The most common criticism leveled against unicameralism is that one house is not capable of maintaining checks and balances. However, it should be remembered that legislative bodies at the local level do not usually consist of two separate bodies. Also, the governor's veto and the ability of the judicial branch to deem laws unconstitutional are in place to check the power of the Nebraska Legislature. The Legislature meets every year beginning on the first Wednesday after the first Monday in January. During odd-numbered years, it meets for a 90-day session; in even-numbered years, the session lasts 60 days. During 90-day sessions, bills that are not advanced but are not killed are carried over to the following 60-day session. Most bills are introduced during the first 10 days of the session. All legislators in Nebraska are referred to as state senators. A bill is introduced when it is filed with the Clerk of the Legislature, who reads the title of the bill into the record and then assigns it a number. Bills are usually abbreviated LB followed by this number. After a bill has been introduced, the nine-member Reference Committee assigns it to one of the 14 standing committees. Committees are required to conduct a public hearing for almost all bills. It is at the public hearing that individuals or groups may testify for or against a particular bill. After listening to public comments on a bill, the committee may advance an amendment to the bill by a majority vote. It is also the committee's role to decide whether or not to advance the bill to General File, where it is debated on the floor of the full Legislature. On the floor, it takes a majority of the full Legislature to advance a bill or adopt an amendment. It is usually after a bill has been placed on General File that a senator or committee decides to designate it as a priority bill. Priority bills are debated by the full Legislature before other bills, so designating a bill as a priority helps ensure that it will be debated during the current legislative session. Each senator is allowed one priority bill, each committee may select two, and the Speaker of the Legislature, who introduces bills at the request of the governor, may designate 25 priority bills. Senators wait until a bill has been placed on General File before designating it a priority bill because the designation does not ensure that the bill will advance out of committee; if the bill died in committee, the senator would lose his or her priority bill for that legislative session. If a bill is advanced at the General File stage, it goes on Select File. Select File is the second time a bill is read, debated and voted on by the full Legislature. Once again, 25 votes are required to adopt a new amendment or to advance the bill to Final Reading. A bill may not be debated or amended while on Final Reading, but it may be returned to Select File. A bill may not be passed until at least five days after it was introduced and at least one day after it was advanced to Final Reading. Final passage of a bill requires more than the normal 25 votes if it contains an emergency clause.
An emergency clause allows the bill to take effect immediately after it is signed by the governor or the governor's veto is overridden by the Legislature. Bills that contain an emergency clause require 33 votes to be sent to the governor. Once the governor receives the bill, he or she has five days to sign it, veto it, or decline to act on it. If the governor signs the bill or declines to act on it, it becomes state law, but if the governor vetoes it, 30 votes are required for the Legislature to override the veto. Bills that do not contain an emergency clause usually go into effect three calendar months after the Legislature adjourns. The tobacco industry recognized that the unique structure of the Nebraska Legislature created special opportunities and difficulties for the industry. One of these features is that legislation can be brought up for debate and amended at any time while on General File or Select File if 25 senators vote in favor of such an action. Because of this aspect of Nebraska's system, the tobacco industry and its lobbyists must remain vigilant in monitoring the status of bills in which they take an interest.
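The vote thresholds scattered through the preceding description can be summarized compactly. The following minimal sketch is our own illustration of the requirements in the 49-member body; the function and names are hypothetical, not drawn from the Blue Book.

```python
# Illustrative summary of the vote thresholds described above for the
# 49-member Nebraska Legislature. Names and structure are our own.

THRESHOLDS = {
    "advance_or_amend": 25,   # majority on General File or Select File
    "final_passage": 25,      # ordinary bill on Final Reading
    "emergency_passage": 33,  # final passage of a bill with an emergency clause
    "veto_override": 30,      # override a gubernatorial veto
}

def votes_needed(action: str, emergency_clause: bool = False) -> int:
    """Return the votes required (out of 49 senators) for a given action."""
    if action == "final_passage" and emergency_clause:
        return THRESHOLDS["emergency_passage"]
    return THRESHOLDS[action]

assert votes_needed("advance_or_amend") == 25
assert votes_needed("final_passage", emergency_clause=True) == 33
assert votes_needed("veto_override") == 30
```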

We then present findings from a survey of 35 diverse urban farm operations in the East Bay.

Defined in these ways, the radical, transformative potential of urban food production spaces and their preservation often gets lost or pushed to the side in city planning decisions in metropolitan regions such as the San Francisco Bay Area, where the threat of displacement is ubiquitous given high levels of economic inequality and an extreme lack of affordable land. In order to facilitate what scholars such as Anderson et al. 2018a refer to as the "agroecological transition," already underway in many urban food ecosystems around the globe, we argue that applying an agroecological approach to inquiry and research into the diversity of sites, goals, and ways in which food is produced in cities can help enumerate the synergistic effects of urban food producers. This in turn encourages the realization of the transformative potential of urban farming, and an articulation of its value meriting protected space in urban regions. Urban agroecology (UAE) is an evolving concept that includes the social-ecological and political dimensions as well as the science of ecologically sustainable food production. UAE provides a more holistic framework than urban agriculture to assess how well urban food initiatives produce food and promote environmental literacy, community engagement, and ecosystem services. This paper presents a case study of 35 urban farms in San Francisco's East Bay in which we investigated key questions related to mission, production, labor, financing, land tenure, and educational programming. Our results reveal a rich and diverse East Bay agroecosystem engaged in varying capacities to fundamentally transform the use of urban space and the regional food system by engaging the public in efforts to stabilize, improve, and sustainably scale urban food production and distribution. Yet, as in other cities across the country, urban farms face numerous threats to their existence, including insecure land tenure, labor costs, development pressure, and other factors that threaten wider adoption of agroecological principles.

We begin by comparing the concepts of UA and UAE in scholarship and practice, bringing in relevant literature and intellectual histories of each term and clarifying how we apply the term "agroecology" to our analysis. We pay particular attention to the important nonecological factors that the literature has identified as vital to agroecology but seldom documents. We discuss the results, showing how an agroecological method of inquiry amplifies important aspects of urban food production spaces and identifies gaps in national urban agriculture policy circles. We conclude by positing unique characteristics of urban agroecology in need of further study and action to create equitable, resilient and protected urban food systems. Agricultural policy in the United States is primarily concerned with yield, markets, monetary exchange, and rural development. The United States Department of Agriculture defines agricultural activities as those taking place on farms. Farms are defined as "any place from which $1,000 or more of agricultural products were produced and sold, or normally would have been sold, during the year". Urban agriculture has been proliferating across the country in the last decade on both public and private lands, by both for-profit and nonprofit entities, with diverse goals, missions and practices largely centered on food justice priorities and re-localizing the food system. Yet U.S. agriculture policy has been struggling to keep up. In 2016, the USDA published an Urban Agriculture Toolkit, which aims to provide aspiring farmers with the resources to start an urban farm, including an overview of startup costs, strategies for accessing land and capital, assessing soil quality and water availability, production and marketing, and safety and security. The 2018 U.S. Farm Bill provides a definition of urban agriculture that includes the practices of aquaponics, hydroponics, vertical farming, and other indoor or controlled-environment agriculture systems primarily geared towards commercial sales. In both the Toolkit and the Farm Bill, non-profit, subsistence, and educational urban farming enterprises are not well integrated or included in the conceptualization of UA.

While definitions of urban agriculture in the literature range from the simplest, "producing food in cities," to longer descriptions such as that of the American Planning Association, which incorporates school, rooftop and community gardens "with a purpose extending beyond home consumption and education," the focus of many UA definitions used in policy arenas continues to center on the production and sale of urban-produced foods. Accordingly, food systems scholars have recognized that "Urban agriculture, [as defined], is like agriculture in general", devoid of the many political, educational, and food justice dimensions that are prioritized by many U.S. urban farming efforts. Thus the social-political nature of farming, food production, and food sovereignty is not invoked by formal UA policy in the U.S. Many goals and activities common in urban food production, including education, nonmonetary forms of exchange, and gardening for subsistence, are obscured by the productivist definitions and can thus be neglected in policy discussions. Furthermore, UA policy in the U.S. remains largely agnostic about the sustainability of production practices and their impact on the environment. While U.S. agriculture policy narrowly focuses on the production, distribution and marketing potential of UA, broader discussion of its activities and goals proliferates among food systems scholars from a range of fields including geography, urban planning, sociology, nutrition, and environmental studies. These scholars are quick to point out that UA is much more than production and marketing of food in the city, and includes important justice elements. In the Bay Area context, we continue to see the result of this dichotomy: thriving urban farms lose their leases, struggle to maintain profitability or even viability, and encounter difficulties creating monetary value out of their social enterprises. In light of the ongoing challenge of securing the longevity of UA in the United States, there is a need for an alternative framework through which food and farming justice advocates can better understand and articulate what UA is, and why it matters in cities. Agroecology is defined as "the application of ecological principles to the study, design and management of agroecosystems that are both productive and natural resource conserving, culturally sensitive, socially just and economically viable", and presents itself as a viable alternative to productivist forms of agriculture. Agroecology in its most expansive form coalesces the social, ecological, and political elements of growing food in a manner that directly confronts the dominant industrial food system paradigm, and explicitly seeks to "transform food and agriculture systems, addressing the root causes of problems in an integrated way and providing holistic and long-term solutions". It is simultaneously a set of ecological farming practices and a method of inquiry, and, recently, a framework for urban policy making; "a practice, a science and a social movement". Agroecology has strong historical ties to the international peasant rights movement La Via Campesina's food sovereignty concept, and to a rural livelihoods approach to agriculture where knowledge is created through non-hegemonic forms of information exchange, i.e., farmer-to-farmer networks.

Mendez et al. describe the vast diversity of agroecological perspectives in the literature as "agroecologies" and encourage future work that is characterized by a transdisciplinary, participatory and action-oriented approach. In 2015, a global gathering of social movements convened at the International Forum of Agroecology in Sélingué, Mali to define a common, grassroots vision for the concept, building on earlier gatherings in 2006 and 2007 to define food sovereignty and agrarian reform. The declaration represents the views of small-scale food producers, landless rural workers, indigenous peoples and urban communities alike, affirming that "Agroecology is not a mere set of technologies or production practices" and that "Agroecology is political; it requires us to challenge and transform structures of power in society". The declaration goes on to outline the bottom-up strategies being employed to build, defend and strengthen agroecology, including policies such as democratized planning processes, knowledge sharing, recognizing the central role of women, building local economies and alliances, protecting biodiversity and genetic resources, tackling and adapting to climate change, and fighting corporate cooptation of agroecology. Recently, scholars have begun exploring agroecology in the urban context. In 2017, scholars from around the world collaborated on an issue of the Urban Agriculture magazine titled "Urban Agroecology," conceptualizing the field both in theory and through practical examples of city initiatives, urban policies, citizen activism, and social movements. In this compendium, Van Dyck et al. describe urban agroecology as "a stepping stone to collectively think and act upon food system knowledge production, access to healthy and culturally appropriate food, decent living conditions for food producers and the cultivation of living soils and biodiversity, all at once." Drawing from examples across Europe, Africa, Latin America, Asia and the United States, the editors observe that urban agroecology "is a practice which – while it could be similar to many 'urban agricultural' initiatives born out of the desire to re-build community ties and sustainable food systems, has gone a step further: it has clearly positioned itself in ecological, social and political terms." Urban agroecology takes into account urban governance as a transformative process and follows from the re-emergence of food on the urban policy agenda in the past 5-10 years. However, it requires further conceptual development. Some common approaches in rural agroecology do not necessarily align with urban settings, where regenerative soil processes may require attention to industrial contamination. In other cases, the urban context provides "specific knowledge, resources and capacities which may be lacking in rural settings such as shorter direct marketing channels, greater possibility for producer-consumer relations, participatory approaches in labour mobilisation and certification, and initiatives in the area of solidarity economy".

Focusing on the social and political dimensions of agroecology, Altieri and others have explicitly applied the term "agroecology" to the urban context, calling for the union of urban and rural agrarian food justice and sovereignty struggles. Dehaene et al. speak directly to the revolutionary potential of an agroecological urban food system, building towards an "emancipatory society" with strong community health and justice outcomes. Our research builds upon this emergent body of work that employs urban agroecology as an entry point into broader policy discussions that can enable transitions to more sustainable and equitable city and regional food systems in the U.S. This transition in UAE policy making is already well underway in many European cities. As noted, there are many dimensions of agroecology and ways in which it is conceptualized and applied. We employ the 10 elements of agroecology recently developed by the UN FAO in our discussion of urban agroecology. These 10 elements characterize the key constituents of agroecology, including the social, ecological, cultural, and political elements. Despite the emancipatory goals of agroecology, a recent review of the literature by Palomo-Campesino et al. found that few papers mention the non-ecological elements of agroecology, and that fewer than one-third of the papers directly considered more than 3 of the 10 FAO-defined elements. In an effort to help guide the transition to more just and sustainable food and agricultural systems in cities across the U.S., we propose that food system scholars and activists consider using the 10 elements as an analytical tool to both operationalize agroecology and systematically assess and communicate not only the ecological but also the social, cultural and political values of urban agroecology. "By identifying important properties of agroecological systems and approaches, as well as key considerations in developing an enabling environment for agroecology, the 10 Elements [can be] a guide for policymakers, practitioners and stakeholders in planning, managing and evaluating agroecological transitions." We employed a participatory and collaborative mixed-methods approach, involving diverse stakeholders from the East Bay agroecosystem. We held two stakeholder input sessions involving over 40 urban farmers and food advocates to co-create the research questions, advise on the data collection process, interpret the results, and prioritize workshop topics for the community. We administered an online Qualtrics survey to 120 urban farms in the East Bay that had been previously identified by the University of California Cooperative Extension Urban Agriculture working group and through additional outreach. The survey launched in Summer 2018, which is a particularly busy time for farmers, and in response to farmer feedback was kept open until November 2018. In total, 35 farmers responded, a response rate of approximately 30%.

The risks of exposing children to residual tobacco smoke contamination are not well understood.

Significant differences in concentrations of nicotine in residential dust were observed for all self-reported smoking categories. Pearson correlation coefficients for covariates of interest are shown in Table 15. The group of smoking variables was highly correlated, as was the group of parental demographic variables, whereas the two groups of variables were negatively correlated with each other. Other variables correlated with residential-dust nicotine were age of residence, breastfeeding duration, size of sampling area, and vacuum use frequency. Tables 16 and 17 show the results of the principal components analysis for the two groups of highly correlated variables, i.e., self-reported smoking and parental demographics. Three meaningful factors were chosen to represent the 15 self-reported smoking variables and 2 factors were chosen to represent the 5 parental demographic variables. A variable was said to load on a given component if the factor loading was 0.40 or greater. Using this criterion, 12 variables describing parental smoking were found to load on the first smoking component, which was subsequently labeled the parental smoking component. Similarly, the 4 father's smoking variables loaded on the second smoking component and 3 variables describing other household smoking loaded on the third component. Combined, the smoking-related principal components accounted for 65% of the total variance of all smoking variables. The demographic variable group, shown in Table 17, was described by a parental socioeconomic status component, which was loaded by parental education and income, and a parental age component, which was loaded by the mother's age and father's age. Combined, the summary demographic principal components accounted for 80% of the total variance explained by all demographic variables.
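As an illustration of the component-extraction procedure described above, the following sketch applies the 0.40 factor-loading criterion to simulated data; the variables, groupings, and counts are hypothetical stand-ins, not the NCCLS data.

```python
# Sketch of the principal components procedure described above, on
# simulated data: a variable loads on a component when the absolute
# factor loading is 0.40 or greater. Not the actual NCCLS data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
base = rng.normal(size=n)
# Five correlated "smoking" variables sharing a common factor.
X = np.column_stack([base + rng.normal(scale=0.5, size=n) for _ in range(5)])
X = StandardScaler().fit_transform(X)

pca = PCA(n_components=2).fit(X)
# Loadings: correlations between standardized variables and components.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

for j in range(loadings.shape[1]):
    members = np.flatnonzero(np.abs(loadings[:, j]) >= 0.40)
    share = pca.explained_variance_ratio_[j]
    print(f"component {j + 1}: variables {members.tolist()} load, "
          f"{share:.0%} of total variance")
```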

Several determinants of concentrations of nicotine in residential dust were identified. Notably, two principal components summarizing self-reported smoking variables were highly significant predictors of residential-dust nicotine in the final models. These principal components represented self-reported smoking for time periods of months and years before dust collection. Based on the regression model results, nicotine concentrations in residential dust seem to reflect the cumulative smoking habits of residents over periods of up to several years rather than simply the current smoking pattern in the home. To verify the hypothesis that levels of nicotine in residential-dust samples reflect past smoking habits, it was useful to examine NCCLS households that reported changes in their smoking status between the initial interview and dust collection. Of the households that reported no smoking in the month before dust collection, 90 households had previously reported some smoking at the initial interview. Nicotine concentrations in residential-dust samples from these 90 households did, indeed, remain elevated. This finding suggests that nicotine may contaminate homes long after cigarette smoking has ceased, a phenomenon referred to as "third hand smoke". In fact, investigators have reported that children living in apartments that were formerly occupied by smokers had elevated levels of residential-dust nicotine and urinary cotinine. Additionally, of the NCCLS households that reported some smoking at the time of dust collection, 5 reported no smoking at the initial interview. These 5 households had lower concentrations of nicotine in residential dust than households that consistently reported smoking. Both of these findings support the conjecture that current concentrations of nicotine in residential dust may be particularly good measures of cumulative household smoking habits. Furthermore, these findings suggest that, in studies that aim to estimate prenatal or postnatal cigarette smoking exposures retrospectively, the concentration of nicotine in residential dust could be a more useful surrogate than short-term exposure markers such as concentrations of nicotine in air or of cotinine in urine. After considering self-reported smoking, the age of the residence was a significant predictor of concentrations of nicotine in residential dust. Since concentrations of nicotine in residential dust increase with the age of the residence, nicotine evidently accumulates in household carpets. Thus, nicotine concentrations in residential dust likely reflect cumulative smoking habits in a household. Two measures of parental demographics, namely, the parental SES component and the parental age component, remained significant predictors of the concentrations of nicotine in residential dust after accounting for self-reported smoking.
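A hedged sketch of the kind of final model described above, again on simulated data: component scores and a housing covariate predict log-transformed dust nicotine. The variable names and coefficients here are hypothetical, chosen only to mirror the direction of the reported effects.

```python
# Simulated illustration of regressing log dust nicotine on principal
# component scores plus a housing covariate. Coefficients and names are
# hypothetical; they only mirror the direction of the reported effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 250
smoking_pc = rng.normal(size=n)        # parental smoking component score
ses_pc = rng.normal(size=n)            # parental SES component score
residence_age = rng.uniform(0, 80, n)  # years

log_nicotine = (0.8 * smoking_pc - 0.3 * ses_pc
                + 0.01 * residence_age + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([smoking_pc, ses_pc, residence_age]))
fit = sm.OLS(log_nicotine, X).fit()
print(fit.summary(xname=["const", "smoking_pc", "ses_pc", "residence_age"]))
```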

Table 19 illustrates that, in general, after adjusting for self-reported smoking, concentrations of nicotine in residential dust decreased with increasing parental SES and age. Interestingly, when considering the 211 households that reported no smoking at any time, the households with below-median income had significantly higher concentrations of nicotine in their residential dust than the households with above-median income. Thus, even when no smoking was reported, low-income households had elevated concentrations of nicotine in their residential dust compared to high-income households. There are several possible explanations for the discrepancy in levels of nicotine in residential dust from self-reported non-smoking households: low-SES residences may be physically different from high-SES residences, due to unmeasured differences in ventilation, carpet types, light, moisture or microbial action; low-SES parents may be more likely to be exposed to passive cigarette smoke, and may convey nicotine into their homes on their skin or clothing; low-SES households may be more likely to have residual nicotine in residential dust from previous residents; low-SES households may use more smokeless tobacco products; or low-SES households may have underreported their smoking habits. If differential self-reporting by SES or age is present, then an objective measure of exposure to household smoking, such as the concentration of nicotine in residential dust, would be advantageous. Three other variables were significant predictors of nicotine concentrations in residential dust after adjusting for self-reported smoking and parental demographics: residence is apartment, residence is townhouse, and size of sampling area. Since apartments and townhouses generally have less square footage than single-family homes, the positive regression coefficients for the variables residence is apartment and residence is townhouse are consistent with the observation of Hein et al., who found that residential-dust nicotine concentrations increased with decreasing square footage of the residence. The negative regression coefficient for the variable size of sampling area in the final model with HVS3-sampled homes indicates that, as the size of the carpet area sampled increased, the concentration of nicotine measured in residential dust decreased.

This relationship could be a limitation of the HVS3 sampling method, and it suggests that this variable should be measured and adjusted for in models of residential-dust nicotine concentrations using HVS3 sampling. Still, including size of sampling area in the regression model had little effect on the other parameters. Given that the ultimate purpose of the NCCLS is to compare leukemia cases and controls, the effect of case-control status on measured nicotine concentration was examined. Interestingly, case-control status was not a significant predictor of nicotine concentrations, and there was no indication that case parents were reporting their smoking differently than controls. This finding suggests that there was little differential misclassification of exposures in case and control households in the previous analysis of self-reported cigarette smoking in the NCCLS population. The concentrations of nicotine measured in dust from smoking and non-smoking NCCLS residences were lower than those previously reported. Specifically, the median concentration of nicotine for self-reported non-smoking NCCLS homes was 0.3 µg/g, substantially lower than the median levels reported for non-smoking homes in previous studies. As discussed in Chapter 1, lower levels of background nicotine contamination might be explained by the low prevalence of smoking in California. Alternatively, these differences may partly reflect differences in analytical methodology. Despite the lower levels of nicotine measured in the NCCLS, the nicotine concentrations in residential-dust samples were correlated with concurrently self-reported household cigarette consumption. Although concentrations of nicotine in residential dust are specific indicators of cigarette smoke contamination, the use of dust to assess children's exposure to secondhand cigarette smoke has limitations. First, it must be assumed that children are in the home when smoking occurs. This is a reasonable expectation given the young age of the children in the NCCLS. Second, it must be assumed that nicotine in residential dust originated from cigarettes smoked in the home. However, a previous study found that nicotine levels in residential dust were elevated in homes where parents reported only smoking outdoors compared to homes where parents reported no smoking. Thus, parents who are exposed to cigarette smoke may convey nicotine into carpets via their skin, clothing, or shoes without exposing their children to secondhand cigarette smoke. Future studies should consider using a long-term biomarker of exposure to cigarette smoke, such as hair nicotine, to investigate the relationship between concentrations of nicotine in residential dust and the corresponding biological dose of nicotine in children. Since parents may have tracked nicotine into their homes after smoking outside, the results of the residential-dust nicotine models may have been somewhat obscured. Specifically, the variable describing household cigarette consumption during the month before dust collection was specific to in-home smoking, and it was a relatively weak predictor of nicotine concentrations in dust.

In contrast, the highly significant parental and father smoking components were based on general smoking habits. It is possible that the variable describing household cigarette consumption during the month before dust collection was a relatively weak predictor of nicotine levels because outdoor smoking was excluded. In summary, results reported in this chapter confirmed previous findings that concentrations of nicotine in residential dust were significantly associated with self-reported household smoking. Chapter 3 also presents evidence that residual smoke contamination could persist in homes long after cigarette smoking ceased. Finally, these results suggest that concentrations of nicotine in residential dust can be used as long-term surrogates for exposures to cigarette smoke in the home. Polycyclic aromatic hydrocarbons (PAHs) are formed as products of incomplete combustion, and there are a variety of indoor PAH sources, including cigarette smoke, wood-burning fireplaces, gas appliances, and charred foods, as well as outdoor sources, including vehicle exhaust and coal-tar-based pavement sealants. Occupational exposures to PAHs have been associated with increased risks of lung, skin, and bladder cancers. Likewise, increased levels of PAH-DNA adducts have been associated with lung cancer in the general population. Moreover, in-utero PAH exposures, as measured by maternal personal air monitoring during pregnancy, have been associated with IQ deficits, cognitive developmental delays, decreased gestational size, and respiratory effects. Surrogates of PAH exposure have been measured in several environmental and biological media, including air, residential dust, urine, and blood. Because chemicals can accumulate in carpets, concentrations of PAHs in residential dust may be long-term predictors of indoor PAH exposures. Moreover, because inadvertent dust ingestion could be responsible for as much as 42% of non-dietary PAH exposure in young children, levels of PAHs in residential dust may be particularly relevant to the uptake of PAHs in children. Although measurements of chemicals in residential dust are specific measures of indoor exposures, such data have rarely been collected in epidemiologic investigations. Rather, epidemiologists have classified potential exposures to chemicals based on self-reported information and/or ambient levels of chemicals measured at outdoor monitoring sites. Since self-reports and estimated outdoor air levels may not be good surrogates for indoor exposures, it is important to know the extent to which these indirect measures predict residential levels of environmental agents. Chapter 4 evaluates the predictive value of self-reported and geographic data in estimating measured levels of 9 PAHs in residential-dust samples. A global-positioning-system device was used to determine the latitude and longitude coordinates for each residence. Subsequently, three surrogates for outdoor PAH concentrations were considered: traffic density, modeled predictions of outdoor PAH concentrations, and urban or rural location. Traffic density was estimated as described previously. Briefly, a 500-m radius was drawn around each residence, and traffic density was defined as the sum of the annual average daily traffic count from 2000 multiplied by the road length, for all roads within the buffer, divided by the buffer's area. The estimates of outdoor PAH concentrations were taken from the EPA's 2002 National-Scale Air Toxics Assessment.
The outdoor PAH concentrations were estimated at a census-tract resolution using an air dispersion model and National Emissions Inventory data, which include major stationary sources, area sources, and mobile sources.
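The traffic-density surrogate just described reduces to a simple computation: the AADT-weighted road length within the 500-m buffer divided by the buffer's area. A minimal sketch follows; the example road segments are hypothetical.

```python
# Sketch of the traffic-density surrogate described above: sum of
# (annual average daily traffic x segment length) for all roads within
# a 500 m buffer around a residence, divided by the buffer's area.
# The example segments are hypothetical.
import math

def traffic_density(segments, radius_m=500.0):
    """segments: iterable of (aadt, length_m) pairs for road segments
    inside the buffer; returns vehicle-meters per day per square meter."""
    buffer_area_m2 = math.pi * radius_m ** 2
    weighted_length = sum(aadt * length_m for aadt, length_m in segments)
    return weighted_length / buffer_area_m2

# Two hypothetical segments: a 350 m arterial and an 800 m local road.
print(traffic_density([(12_000, 350.0), (4_500, 800.0)]))
```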

Growers who completed the survey were also clearly knowledgeable about cannabis cultivation.

Growers were likely referring to hemp russet mite, two-spotted spider mite, broad mite and carmine spider mite, respectively, but this remains unclear because there are many species of mite commonly referred to as russet mite, spider mite and red mite. The same applies to aphids, thrips, larvae, mildew, rots and molds. Accurate species identification of these pests and diseases will remain uncertain until they can be more systematically collected and identified by UC academics or other scientists. The most common approach to pest and disease control was to apply some type of solution or chemical to the crop, followed by augmentation of natural enemies and various cultural practices. A majority of sprays were products that were biologically derived or approved for use in organic production. Products specifically used for control of arthropod pests included azadirachtin, soap solution, pyrethrins and Bacillus thuringiensis. Many respondents indicated that certain products were effective against both pests and diseases, for instance microbial pesticides, oils and compost tea. Sulfur was the most commonly applied product specifically used for disease control. In addition, 29% of respondents claimed to use certified organic products for pest and disease management but did not name any product specifically. Finally, 2% of respondents reported that they did not spray for pests and diseases at all. Augmentation of natural enemies involved the introduction of predatory mites, lady beetles, predatory nematodes and other unnamed beneficial insects. Cultural practices included removal of infested plant material, insect trapping, intercropping, use of diatomaceous earth and selection of resistant cultivars. Our survey, although of limited sample size, is the first known survey of California cannabis growers and provided insights into common forms of cultivation, pest and disease management, water use and labor practices.

Since completing this survey, we have discussed and/or presented the survey results with representatives from multiple cannabis grower organizations, and they confirmed that the data were generally in line with production trends. Evident in the survey results, however, was the need for more data on grower cultivation practices before best management practices or natural resource stewardship goals can be developed. All growers monitored crop health, and many reported using a preventative management strategy, but we have no information on the treatment thresholds used or the efficacy of particular sprays on cannabis crops. Likewise, the details of species-level pest and disease identification, natural enemy augmentation and sanitation efforts remain unclear. Growers did not report using synthetic pesticides, which contrasts with findings from previous studies that documented a wide range of synthetic pesticide residues on cannabis. Product selection for cannabis is very limited due to a mixed regulatory environment that currently does not allow for the registration of any insecticide or fungicide for use specifically on cannabis, although growers are allowed to use products that are exempt from residue tolerance requirements, exempt from registration requirements or registered for a use that is broad enough to include cannabis. As such, it may be that in the absence of legally available chemical controls growers were choosing allowable, biologically derived products or alternative strategies such as natural enemy augmentation and sanitation. Our survey population was perhaps biased toward non-chemical pest management: the organizations we contacted for participant recruitment included some that were formed to share and promote sustainability practices. Or, it may be that respondents were reluctant to report using synthetic chemicals or products not licensed for cannabis plants. The only other published data on water application rates for cannabis cultivation in California we are aware of is from Bauer et al., who used estimates for Humboldt County of 6 gallons per day per plant for outdoor cultivation over the growing season.

Grower-reported estimates of cannabis water use in this survey were similar to this rate in the peak growing season but were otherwise lower. Due to the small sample size, we cannot say that groundwater is the primary water source for most cannabis growers in California or that few use surface water diversions. However, Dillis et al. found similar results on groundwater being a major water source for cannabis growers, at least in northwest California. If the irrigation practices reported in our survey represent patterns in California cannabis cultivation, best management practices would be helpful in limiting impacts to freshwater organisms and ecosystems. For example, where groundwater pumping has timely and proximate impacts on surface waters, limiting dry-season groundwater extraction by storing groundwater or surface water in the wet season may be beneficial, though this will likely require increases in storage capacity. The recently adopted Cannabis Cultivation Policy requires a mandatory dry-season forbearance period for surface water diversions, though not for groundwater pumping. Our survey results indicate that the practical constraints on adding storage may be a significant barrier to compliance with mandatory forbearance periods for many growers. More in-depth research with growers and workers is needed to explore the characteristics of the cannabis labor force and the trajectory of the cannabis labor market, especially in light of legalization. Several growers commented on experiencing labor shortages, a notable finding given that recent market analyses of the cannabis industry suggest that labor compliance costs are the most significant of all of the direct regulatory costs for growers. Higher rates of licensing compliance among medium and large farms are not surprising given the likelihood that they are better able to pay permitting costs. Yet the fact that the majority of respondents indicated they had not applied for a license to grow cannabis, with over half noting some income from cannabis sales, indicates potentially significant effects if these growers remain excluded from the legalization process. More research is needed to understand the socioeconomic impacts of legalization, which likely extend beyond those accounted for in the state's economic impact analysis, which primarily focuses on the economic contributions that a legalized market will bring to the state.

Bodwitch et al. report that surveyed growers characterized legalization as a process that has excluded small farmers, altered local economies and given rise to illicit markets. The environmental impacts of cannabis production have received attention because of expansion into remote areas near sensitive natural habitats. The negative impacts are likely not because cannabis production is inherently detrimental to the environment, but rather due to siting decisions and cultivation practices. In the absence of regulation and best management practices based on research, it is no surprise that there have been instances of negative impacts on the environment. At the same time, many growers appear to have adopted an environmentally proactive approach to production and created networks to share and promote best management practices. Organizations that we approached to recruit survey participants had a fairly large base membership, on a par with other major commodity groups, like the Almond Board of California and the California Association of Winegrape Growers. Membership included cannabis growers, distributors and processors as well as interested members of the public, and some people were members of more than one organization, suggesting a large, engaged community. Most of the organizations we contacted enthusiastically agreed to help us recruit growers for our survey, and we received excellent feedback on our initial survey questions. Potential future research topics include the development of pest and disease monitoring programs; quantifying economic treatment thresholds; evaluating the efficacy of different biological, cultural and chemical controls; developing strategies to improve water use and irrigation efficiency; understanding grower motivations for regulatory compliance; understanding the impacts of regulation; and characterizing the competition for labor between cannabis and other agricultural crops, to name just a few. As cannabis research and extension programs are developed, it will be critical to ensure that future surveys capture a representative sample of cannabis growers operating inside and outside the legal market, to identify additional areas for research and develop best practices for the various cultivation settings in which California cannabis is grown. In the United States, Native Americans (NAs) experience a dramatically higher burden of diet-related chronic disease across the lifespan compared to the all-race population. Approximately 38% of NA adults are obese, and research from 2016 reports that preschool-aged NA and Alaska Native children had the highest obesity rates of all racial groups. During childhood, establishing healthy eating habits is vital for physical growth and cognitive development. Moreover, research has shown that a diet rich in vegetables during childhood can help protect against chronic diseases, such as obesity, heart disease, and diabetes, that develop during adulthood. Though data on the diet quality of NA populations are limited, prior studies that included NAs found that diet quality is insufficient and lower than in other populations. Preschool-aged children consume almost half of their daily calories at school, making it an important setting for food environment interventions. Childcare-based interventions are effective in improving nutrition behaviors among children and are recognized as a vital influence on learned eating behaviors.
However, most of these nutrition programs have been implemented in urban schools, and little is known about school-based interventions in rural NA communities. Tribally owned and operated Early Childhood Education (ECE) programs offer preschool-aged children around two snacks and two meals per day and represent a vital organizational influence on childhood obesity disparities. Therefore, ECEs can serve as an essential location for healthy eating interventions for NA children.

School gardens are a common strategy to increase fruit and vegetable intake in all grades, including ECE programs; however, few studies have used rigorous methodological designs to assess their impact on diet quality and health outcomes. A systematic review of garden-based interventions among preschoolers found that only four studies assessed fruit and vegetable intake, and only one had been conducted among NA youth. That study found significant increases in preferences for vegetables, but not in intake. To our knowledge, there are no studies that address vegetable intake and health outcomes among NA children in ECE programs using a multi-level method targeting the individual, family, and community. Using a community-based participatory research approach, we partnered with the Osage Nation to implement the Food Resource Equity and Sustainability for Health (FRESH) study, a culturally based farm-to-school intervention to increase vegetable intake among NA children and their families. The intervention was implemented within Osage Nation ECEs. The aim of this manuscript is to describe the FRESH intervention results, including changes in dietary intake, body mass index, systolic blood pressure, health status, and food insecurity among Osage Nation families. The six-month FRESH study employed a randomized wait-list controlled trial design with treatment condition assigned at the community level. The design and methods of the FRESH study have been published in detail elsewhere. In summary, our tribal-university partnership recruited NA families of children attending Osage Nation ECE programs in four communities to assess individual-level changes in children and adults. Two communities received the intervention and two communities served as wait-list controls. We randomized by community instead of by ECE program to avoid crossover due to geographical proximity to members of the other study group. The FRESH Leadership Committee included four university researchers and 13 Osage Nation employees from the health, education, language, agriculture, and government divisions and led all aspects of the study. University researchers set up tables in ECE programs during school orientation, back-to-school nights, and child drop-off/pickup to notify parents about the study and invite them to participate. ECE staff also contacted parents to notify them about the study. FRESH study flyers promoting the study were shared through children's backpacks and parent mailings. Flyers were also posted in classrooms around the schools. Adults at least 18 years old who met the following inclusion criteria were eligible to participate in the study: one or more family members in the household identified as NA; one or more children between the ages of three and six years were enrolled at an Osage Nation ECE program; the family planned to reside in Osage Nation for nine months or more; and one or more adult family members were willing to engage in monthly family nights at the school. Children were eligible if they were between the ages of three and six years old, enrolled in a participating ECE program, and were a household family member of an eligible adult.
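As a minimal illustration of the community-level randomization described above: four communities, two assigned to the intervention and two to the wait-list control arm. The community labels and seed below are placeholders, not details from the FRESH study.

```python
# Illustrative community-level randomization: four communities, two
# assigned to the intervention and two to the wait-list control arm.
# Labels are placeholders, not the actual study communities.
import random

communities = ["community_1", "community_2", "community_3", "community_4"]
rng = random.Random(42)   # fixed seed so the assignment is reproducible
rng.shuffle(communities)

intervention, waitlist = communities[:2], communities[2:]
print("intervention:", intervention)
print("wait-list control:", waitlist)
```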

Pastoral communities are those whose means of living depends entirely on raising livestock.

There are two contrary debates focused on whether pastoral lifestyles could serve as an adaptation strategy to climate change in the drylands regions of East Africa. The first takes a deeply pessimistic view of the pastoral mode of life, seeing pastoralism as an old living system through which pastoralists cannot meet their livelihood requirements. Pastoralists live in drylands areas characterized by recurring droughts, land degradation, and lack of marketing, governance and access to technology. And even these resource-limited regions are being further stressed by human population growth. In the Greater Horn of Africa, pressure on the ecological base of rangelands has threatened their carrying capacity to support large livestock herds, eventually leaving pastoralists in crisis. According to Sandford, introducing improved livestock management with permanent settlement should be prioritized, and this can be credible if it is integrated with irrigation, mixed livestock-cereal production and forage enhancement schemes. His argument emphasizes that settling pastoral communities in permanent locations leads to the provision of basic infrastructure, including schools, health services, road access, and veterinary services. The second strand of literature strongly advocates the importance of the pastoral living style for maintaining livelihoods through traditional systems. Pastoralists have a long history of involvement in various forms of adaptation based on their own indigenous knowledge. Research findings demonstrate that the pastoral system is a ready way to adapt to climatic effects, owing to its suitability to arid and semi-arid environments, through strategies of establishing strong social capital, economic cooperation among community members and clan lineage networks, herd diversification and restocking methods. In such a context, pastoral life allows the community to keep its cultural systems and knowledge while responding to the negative effects of climate change. Instead of changing the long-standing indigenous mode of living into a proposed new style of life, more attention is needed to enhance mobility strategies in a way that supports adaptive capacity by introducing modern extension services and veterinary facilities. Clearly, there is a divergence of opinions about the sustainability of the pastoral way of life and its corresponding contribution towards climatic adaptation in the drylands regions of Africa.

This is complicated by the multifaceted nature of adaptation possibilities, which are heavily dependent on a variety of factors such as market accessibility and institutions, resource availability, demand pressure of human and livestock populations on limited land, and the availability of livelihood options apart from livestock earnings. Considering the existence of pastoral, semi-pastoral, agro-pastoral and mixed-farming communities in the region, it is difficult to clearly point out exactly how these two debates fit into policy actions without sizeable evidence. This requires a thorough investigation of how multiple adaptation strategies influence the adaptive capacity of these communities. This study examines which major factors influence the adaptive capacity of rural communities in the Afar region of Ethiopia and how, including to what extent adaptation methods are applied and which adaptation methods contribute to household income. This is important because rural communities in the Afar region account for about 29% of the country's total population and 16% per annum of total GDP. While most of these communities meet their subsistence living by engaging in animal production, the natural resource base in the region is highly subject to overgrazing and deforestation, pressures that have accelerated with the increasing human and livestock populations. Such challenges, combined with unpredictable rainfall and changing temperatures, leave villagers vulnerable to economic disasters. Therefore, understanding how locally practised adaptation strategies uphold the livelihoods of rural communities is paramount to improving their lives. It is unclear which adaptation strategies lift livelihoods across the community groups. The large body of previous literature is focused on climate modelling techniques for identifying future threats of climate change and outlining adaptation approaches. Options for adaptation include diversifying income, building formal and informal institutions, adjustments in livestock holdings and species, labour mobility and engagement in small irrigation schemes. However, little empirical knowledge is available to help understand the effects of alternative adaptation strategies on household incomes. Hence, this study has three objectives: to determine how pastoral, semi-pastoral, agro-pastoral and mixed-farming communities perceive the effects of climatic change; to examine how they adapt to these changes; and to estimate how that affects their income. Results are based on a survey of over 300 pastoral, semi-pastoral, agro-pastoral and mixed-farming households. The Aba'ala district was chosen for two reasons. First, the district is characterized by its dryness and the common phenomenon of drought occurrences over about five decades.

Due to its geographical remoteness from the Awash River and other perennial rivers, Aba'ala is one of the districts in northern Afar currently suffering from lack of water and access to grazing areas during drought periods. Second, the existence of indigenous experience with adaptation methods practised by pastoral, semi-pastoral, agro-pastoral and mixed-farming communities in the district motivated this research, specifically to formulate a detailed analysis of relationships between various adaptation strategies and household income. The livelihood bases of the Afar communities depend on their involvement in livestock rearing, cropping and mixed crop-livestock farming systems. Household adaptation strategies vary across communities in Aba'ala district. Pastoral communities are widely known for managing their livestock through a nomadic strategy; they pursue livestock mobility in search of natural pasture and water sources. Semi-pastoral community members are those who were originally pure pastoralists but have moved into cropping over the last three decades. Although these communities are landowners, their involvement in cropping is only through renting out or sharecropping their land to other farmers. Their livelihood depends predominantly on livestock rearing, with a sedentary lifestyle in permanent houses. Their adaptation strategies to climate change and drought include livestock mobility, sharecropping, trading and participating in other off-farm activities such as wage and salaried work. The agro-pastoral community members mainly raise cattle, have their own land and directly produce cereals. They cope with adverse events of climate change by collecting animal feed. The mixed crop-livestock farming community members have their own land and rent in or share in cultivable land from others. The main source of their living is crop farming. They keep a small number of cattle for draught power and small ruminant animals to supplement their produce from cropping. Data from the four communities were collected in two stages of primary surveys. First, a reconnaissance appraisal was conducted to gain a broader understanding of the adaptive behaviours of farmers in the study area. During the exploratory survey, a series of discussions were held with various stakeholders, including clan leaders, farmers, pastoralists, agro-pastoralists, extension workers and agricultural experts.

Pertinent information obtained from the first stage was used to refine the study objectives, the sampling methods and the survey instrument. In the second stage, we stratified the community into mixed-farming, agro-pastoral, semi-pastoral and pastoral groups, and sample households were selected randomly from each stratum. Based on these four community classifications, sampling was carried out across 11 Kebeles in the Aba'ala district. Of the 11 Kebeles, five were pastoral, three semi-pastoral, one agro-pastoral, and the remaining two mixed-farming. To ensure appropriate representation of each stratum, a two-stage stratified sampling method was applied to minimize heterogeneity within groups. In total, there were about 2,236 households across the four groups: 763 pastoral, 287 semi-pastoral, 508 agro-pastoral and 678 mixed-farming. Proportionate to these stratum sizes, 325 sample households were randomly selected: 110 pastoral, 43 semi-pastoral, 74 agro-pastoral and 98 mixed-farming. Of the 325 household heads selected, we were unable to collect data from 12 households because they changed address during the five survey years. Hence, a balanced panel of 313 sample households was gathered in 2011, 2012, 2013, 2014 and 2015. To preclude seasonal variation, data collection was conducted every November. Four enumerators with good knowledge of the study area were hired and trained for the survey. After the structured questionnaire was developed, a pre-test survey was conducted on 12 households and the resulting feedback was incorporated into the full survey. Qualitative data were also gathered to supplement information that cannot be obtained via quantitative methods and to validate the quantitative results with accounts of local practices of adaptation to climate change. Before fieldwork began, clan leaders, religious leaders, agricultural experts, village administrators and elders were selected for group discussions; the criterion for their inclusion was their pertinence for substantiating the findings. During the discussions, ethnographic methods were used to explore the contributions of Afar communities and highland settlers to building livelihood assets.
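As a concrete illustration of the proportional allocation described above, the short sketch below recomputes the stratum sample sizes from the reported household counts. This is a minimal sketch: the largest-remainder rounding rule is our assumption, since the text does not state how fractional allocations were resolved.

```python
# Proportional allocation of the 325-household sample across the four
# strata, using the household counts reported for Aba'ala district.
# The largest-remainder rounding rule below is an assumption; the study
# does not say how it rounded fractional allocations.

strata = {
    "pastoral": 763,
    "semi-pastoral": 287,
    "agro-pastoral": 508,
    "mixed-farming": 678,
}
total_sample = 325
population = sum(strata.values())  # 2,236 households

raw = {name: total_sample * n / population for name, n in strata.items()}
alloc = {name: int(share) for name, share in raw.items()}

# Distribute the leftover units to the strata with the largest remainders.
shortfall = total_sample - sum(alloc.values())
for name in sorted(raw, key=lambda s: raw[s] - int(raw[s]), reverse=True)[:shortfall]:
    alloc[name] += 1

print(alloc)
# -> {'pastoral': 111, 'semi-pastoral': 42, 'agro-pastoral': 74,
#    'mixed-farming': 98}, close to the 110/43/74/98 split the study
#    reports, so its allocation was near-proportional rather than exact.
```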

As shown in Table 1, the mean age of household heads was 48.9 years. A household comprised, on average, six members aged between 15 and 64 years. According to the International Labor Office (ILO), this age category constitutes the economically active labour force. This suggests that the availability of an active labour force in rural areas is an opportunity to apply locally based adaptation strategies. For instance, a physically capable labour force can more readily accomplish environmental conservation work, enabling locals to cope with climate-related risks. The implication is that local development plans that incorporate the participation of an active labour force across rural villages may enhance sustainable income options and minimize climate-related risks. The study findings also indicated that the average number of family members aged below 15 and above 64 years was 3 and 0.09, respectively; the ILO terms these age categories the dependent population. In terms of gender distribution, 84% of the household heads were male and the remaining 16% were female. According to key informants and group discussants, females in the Afar region were generally burdened with indoor family-management tasks, which deterred them from accessing income-generating activities such as the potential benefits of livestock rearing and off-farm work. This result is consistent with studies by Chala et al. and the FAO: females in Ethiopia face cultural hindrances that obstruct their involvement in developmental activities outside the home. Women are heavily engaged in family management and indoor duties such as cooking, washing and child care. Because of these extra burdens, it is hard for them to access formal education or to work outside the home to supplement household finances. Among the household heads, 66.9% had no formal education, 19.6% could read and write, 13.5% had reached primary level, and none had attended secondary school. It was presumed that more educated people would be more aware of the effects of climate change and better able to apply adaptation measures. Household heads had, on average, almost 25 years of working experience in livestock farming. The major livestock holdings across households were cattle, goats, sheep and camels. Among livestock owners who had moved to other potential areas, 6% reported that they continued moving for more than one month until they found sufficient pasture and water sources. Once livestock owners moved to a certain district, they stopped moving if they found sufficient feed for their animals; indeed, a majority of livestock owners in the study did not make repeated movements after finding adequate feed resources. Hence, access to animal feed and water resources determines households' movement. Households were asked whether they were sensitive to the effects of climate change. The majority reported that successive droughts and the vulnerability of livestock farming in the Afar region had heightened their exposure to crop failure and livestock losses over the last five years. Table 2 presents the perceptions of household heads across the four community groups and the degree of climatic effects they perceived.
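Returning to the household-composition figures from Table 1, the sketch below computes the age-dependency ratio they imply for an average household. The formula is the standard ILO definition (dependents per 100 working-age members), applied here to the sample means as a rough illustration.

```python
# Implied age-dependency ratio for the average sample household,
# using the standard definition: members under 15 or over 64 per 100
# members of working age (15-64). Inputs are the Table 1 sample means.

working_age = 6.0   # mean household members aged 15-64
under_15 = 3.0      # mean household members aged below 15
over_64 = 0.09      # mean household members aged above 64

dependency_ratio = 100 * (under_15 + over_64) / working_age
print(f"{dependency_ratio:.1f} dependents per 100 working-age members")
# -> 51.5: roughly one dependent for every two active members, a fairly
#    favourable ratio for labour-based adaptation strategies.
```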

Environmental harms resulting from accelerated erosion are well documented.

Therefore, removing sick or dead birds from the pens likely did not prevent secondary exposure from contaminated litter or soil, which is the primary mode of transmission for MSD virus. The farm in Stanislaus County was the only game bird producer near commercial poultry producers. This farm was also the largest, raising approximately 40,000 pheasants and 60,000 chukar annually. Likely because of its size, this farm employed a greater level of bio-security than the other farms that participated in the survey. The Stanislaus farm had bio-security signage at the entrance to the property, as well as foot baths at the entrances to every brooder house. It employed a variety of wildlife control measures, including traps and rodent bait stations, to minimize the interaction of wildlife with pheasants or other game birds raised on the property. Although the farms did not have vehicle wash stations, they did not allow people onto the property without prior authorization. Only two of the five farms used a wash station of any kind that was separate from foot baths, and two farms required vehicles to remain outside the farm perimeter when clients or vendors visited the property. Although the game breeders interviewed during the study did not always adhere to the bio-security guidelines recommended by NPIP, in general they understood the importance of minimizing points of contact that could lead to pathogen transmission on the farm. They did not share equipment such as crates, trailers or other farming equipment with other breeders. They also stated that they used their own vehicles and personnel to transport birds to release sites or to clients purchasing birds across state lines. Game breeders sought to balance bio-security with the size of their flocks, and implementation of bio-security guidelines was not necessarily equivalent to the game breeders' understanding of bio-security.

Rather, farmers likely weighed the risk of not following certain bio-security principles against the cost of implementing them. Even so, adequate surveillance and preventive action are still likely the best means of minimizing the potential for disease to be released into wildlife environments or to spill over into backyard flocks or commercial poultry.

In the last 40 years, 30 percent of the world's arable land has become unproductive, and 10 million hectares are lost each year to erosion.1 Accelerated erosion also diminishes soil quality, reducing the productivity of natural, agricultural and forest ecosystems. Given that it takes about 500 years to form an inch of topsoil, this rate of erosion is cause for concern for the future of agriculture. This supplement explores the major causes of soil erosion and its social impacts on communities, underscoring the importance of agricultural practices that prevent or minimize erosion. Anthropogenic causes of accelerated soil erosion are numerous and vary globally. Industrial agriculture, along with overgrazing, has been the most significant contributor, with deforestation and urban development not far behind.2, 3, 4 Heavy tillage, fallow rotations, monocultures and marginal-land production are all hallmarks of conventional agriculture as it is variably practiced around the world, and all significantly encourage accelerated soil erosion. Repeated tillage with heavy machinery destroys soil structure, pulverizing soil particles into dust that is easily swept away by wind or water runoff. Fallow rotations, common with cash crops around the world and subsidized for bio-fuel production in the U.S., leave land exposed to the full force of wind gusts and raindrops. Monocultures tend to be planted in rows, exposing the soil between rows to erosion, and are commonly associated with fallow rotations. More and more marginal land, which is steep and particularly susceptible to water erosion, is being planted by farmers either attracted by higher crop prices or forced off flatter but already eroded lands by their loss of productivity.
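As a rough consistency check on the erosion figures cited at the start of this section, the arithmetic below compares the annual loss rate with total arable land. The global arable-land figure of roughly 1.4 billion hectares is our assumption; the loss figures come from the text.

```python
# Consistency check on the cited erosion figures. The ~1.4 billion ha
# of global arable land is an assumed round figure; the annual loss and
# the 40-year window come from the text above.

arable_ha = 1.4e9          # assumed global arable land, hectares
lost_per_year_ha = 10e6    # hectares lost to erosion annually (cited)
years = 40

fraction_lost = years * lost_per_year_ha / arable_ha
print(f"{fraction_lost:.0%} of arable land lost over {years} years")
# -> 29%, which squares with the ~30% figure cited for the last 40 years.
```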

In an increasingly complex global food web, seemingly separate causes of erosion begin to influence one another, magnifying their effects. For example, deforestation of tropical forests in Brazil clears the way for industrial soybean production and animal grazing to feed sprawling urban populations in the U.S. All the while, fertile topsoil is carried away by wind and water at alarming rates. Decreased soil fertility and quality, chemical-laden runoff and groundwater pollution, and increased flooding are just a few of erosion's detrimental effects. There are, in addition, disproportionate social harms resulting from high rates of erosion that are less obvious but no less directly linked. Hunger, debt and disease are serious problems in mostly poor, rural communities around the world, and all are exacerbated by accelerated erosion. As global agricultural development and trade have accelerated over the last half-century, mainly via the "green revolution" and the formation of the World Trade Organization (WTO), increasing trade pressures have raised export crop production in less developed countries. As a result, farmers, mainly in Asia, Latin America and sub-Saharan Africa, are increasingly abandoning traditional farming techniques and locally significant crops in favor of the industrial practices mentioned above that lead to high rates of erosion.5 While development institutions and governments proclaim concern for the rural environment, agricultural policy supporting high commodity prices and limited credit access continually pushes farmers to intensify land use. Coupled with the fact that the total area of arable land already in cultivation in these parts of the world is very high, land degradation by soil erosion threatens food security by removing from cultivation land sorely needed for domestic food production. The majority of the world's 868 million undernourished people live in Eastern and Southern Asia and sub-Saharan Africa. One international response to soil degradation in the developing world has been to promote soil-conserving tillage practices known as minimum- or no-till agriculture. No-till agriculture protects soil by leaving crop residue on the field to decompose instead of plowing it into the ground before planting the next crop. Weed management relies instead on heavy herbicide use to make up for the loss of weed control from tillage. The practice, extensively adopted in the U.S., has been popular in Brazil and Argentina, and much effort is being expended to extend no-till to Asia and Africa. There are, however, costs associated with no-till agriculture, both economic and social. First, no-till agriculture is expensive to adopt: herbicides, seed drills, fertilizers and other equipment require a high initial investment that poor farmers cannot make without incurring significant debt. Second, heavier herbicide use increases human exposure to chemicals and contributes to water and air pollution. Third, weed pressures can change in unexpected ways as reliance on a handful of herbicides breeds resistance. Weed resistance to the popular herbicide glyphosate is an increasing concern in conventional agriculture and is driving the development of more harmful herbicides to compensate for glyphosate's reduced effectiveness.

Lastly, no-till agriculture also promotes monoculture cropping systems that, as described above, have a deleterious effect on soil quality. The techniques illustrated in this manual emphasize long-term soil stewardship using an integrated approach to soil health and management. For example, cover crops hold soil aggregates together in the wet season, protecting soil from the erosive effects of rain. Properly timed tillage limits its destructive effects on soil particles and soil structure. Compost promotes a healthy soil ecosystem, improving the soil's structure and its ability to withstand wind and water erosion. In addition to these environmental benefits, agroecological systems are often based on traditional farming practices that promote soil-conserving techniques and varietal choices adapted to the particular region, stemming the tide of land consolidation and commodity crop production. Food security is enhanced and debt risk reduced through diverse cropping systems and labor-intensive, rather than input-intensive, production methods. And there are public health benefits from eliminating exposure to harmful pesticides and herbicides. In sum, the serious challenge presented by accelerated soil erosion, coupled with uncertainty about whether no-till agriculture's benefits outweigh its harms, underscores the importance of an agroecological approach to farming that prevents soil erosion on farms.

The Parisian market gardens for which the practice was originally named were small plots of land that were deeply and attentively cultivated by French gardeners, or "maraîchers." The "marais" system, as it is known in French, was formed in part as a response to the increasing urbanization of Paris, the attendant rise in the cost of urban land, and the ready availability of horse manure as a fertility source. English master gardener Alan Chadwick popularized both the term and the gardening method in the U.S. when he introduced them at UC Santa Cruz's Student Garden Project in 1967, and they have served as the theoretical foundation for the cultivation methods used at the UCSC Farm & Garden ever since. But as Chadwick was quick to point out, other societies were using similar practices far earlier than the Parisian market gardeners; he specifically acknowledged the influence of early Chinese, Greek, and Roman agriculture on the development of the French-intensive method.

The concept of small farms dedicated to intensive cultivation of the land, improved soil fertility, water conservation and closed-loop systems was common to many early civilizations and, in fact, characterizes the majority of agriculture today in developing countries, where these techniques have been passed down through successive generations. Of the world's 525 million farms, approximately 85% are smaller than 4 acres, tended mostly by poor farmers in China, India, and Africa,1 where methods often reflect the same philosophies of stewardship and cultivation that inform the French-intensive method we use today. Indeed, small-scale agriculture represents the global history of agriculture up until the Industrial Revolution in the 18th century, and in much of the developing world, locally adapted traditions continue to shape the way agriculture is practiced. This supplement examines some of the methods used by farmers around the world, past and present, that reflect the principles on which the French-intensive method is based.

As part of one of the oldest agriculture-based societies in the world, Chinese farmers have succeeded in maintaining fertile soils for thousands of years. Before synthetic fertilizers became available, one method Chinese farmers commonly used to maintain soil fertility was to apply human waste to their fields, returning to the soil large quantities of the potassium, phosphorus, and nitrogen lost through harvest. Applying this source of fertilizer, also called "night soil," achieved many of the goals we aspire to in a French-intensive system. Recycling waste minimized external inputs and helped "close the system" by relying on a renewable, readily available source of fertilizer. High in organic matter, night soil also provided the nutrients needed to grow successive crops on the same land without depleting the soil. Waste, both human and animal, served as the major fertility amendment and helped build soil ecology and microbial activity.

In Japan, compost production has been tied to small-scale farming for centuries. Farmers harvested herbaceous growth from nearby hillsides as a source of compost material. Compost houses were built and filled daily with this herbage, manure and soil until the piles reached five feet high, with water added constantly to keep them saturated. Once the designated height was reached, farmers let the piles sit for five weeks in summer and seven weeks in winter before turning them to the other side of the house. The compost was then applied to dryland cereal crops in spring. A study conducted in the early 20th century found that this composting system replenished nitrogen, phosphorus, and potassium at nearly the level lost through harvest.2

This study evaluated the efficiency of CDC light traps (CDC-LT) used with or without CO2 baits and placed inside or outside residential dwellings in northwestern Thailand. It is the first in-depth survey and analysis of its kind, and seeks to provide guidelines for CDC-LT-based mosquito trapping studies and surveillance programs in this region of Thailand. Overall, CO2 baits significantly increased the trapping efficiency for Anopheles spp. mosquitoes, especially when traps were placed outside residential dwellings. Stratification by season revealed that the effect was restricted to observations in the hot season. Generally, the most abundant Anopheles species, An. minimus s.l.,
was captured preferentially in indoor traps, which is likely related to its anthropophilic nature.
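The comparisons summarized here (CO2 bait vs. none, indoor vs. outdoor placement, stratified by season) are the kind of count-data contrasts typically estimated with a Poisson or negative binomial regression. The sketch below shows one plausible way to test a season-specific bait effect with statsmodels; the data file and column names are hypothetical, and the original study may well have used a different test.

```python
# A plausible re-analysis of trap-night counts: Poisson regression of
# Anopheles captures on CO2 bait, trap placement and season, with a
# bait-by-season interaction to test whether the bait effect is
# confined to the hot season. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

traps = pd.read_csv("trap_nights.csv")  # one row per trap-night (hypothetical)

model = smf.poisson(
    "count ~ co2_bait * season + placement",
    data=traps,
).fit()
print(model.summary())

# exp(coef) for co2_bait is the capture rate ratio with vs. without CO2,
# holding placement and season fixed; a significant co2_bait:season term
# would mirror the hot-season-only effect reported above.
```

If the catches are overdispersed, as mosquito counts often are, smf.negativebinomial with the same formula would be the safer choice.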