Weed control with many of the available alternatives is generally not as reliable as with methyl bromide

The initial size of seedlings was also significantly and positively correlated with subsequent survival. In previous research, tree shelters consistently promoted the height growth of artificially planted blue oak seedlings. This accelerated growth results from environmental changes within the tubes — including elevated CO2 levels, increased humidity and higher temperatures — as well as the protection that the tubes provide to seedlings from damage by animals. Shelters therefore offer the possibility of allowing seedlings to grow more rapidly to a height where they are relatively resistant to animal impacts. A study at the UC Sierra Foothill Research and Extension Center in Yuba County, near one of the field sites, found that shelters caused dramatic increases in seedling height growth. Shelters had been placed over seedlings that were planted 2 years earlier but languished with little growth. Almost immediately, the seedlings began to grow rapidly, and 2 years later average seedling height was nearly 4 feet. By comparison, the controls grew very little and remained less than 1 foot tall. In our current study, tree shelters also significantly increased height growth, although the increase was not as great as that measured for artificially planted seedlings. Each year, livestock rubbing caused some shelters to be displaced so that they no longer covered the seedlings when we came to measure them in the fall. This may have contributed to reduced growth, though it was impossible to determine when during the year this had occurred. But we did observe browsing damage to some of these seedlings before we repositioned the tree shelters over them. The effects of the tree shelter treatments were not uniform over all sites.

Consequently, there were significant interactions in all 3 years for height growth between the shelter treatment and sites. For instance, while 2008 height growth was larger for seedlings in tree shelters at all sites, the magnitude of this difference varied considerably. At the San Luis Obispo site, the shelter treatment resulted in an average height increase of over 2 inches in 2008, while at the other sites the enhancement from the shelters was far less dramatic. Furthermore, the effects of tree shelters seemed somewhat dependent on initial seedling size, with larger seedlings benefiting more from the shelters. For example, the regressions of initial seedling height with subsequent height growth each year indicated that these variables were positively, and significantly, correlated.

Weed control. California’s hardwood rangelands commonly have dense understories of introduced Mediterranean annual grasses, which compete with oak seedlings for moisture, nutrients and light and can make it difficult for the oak seedlings to grow into saplings. Removing this vegetation around the seedlings increases the resources, especially moisture, available to them. It may also reduce damage from voles and grasshoppers. Weed control around artificially planted blue oak seedlings has been shown to enhance their growth and survival. In our study, the weed control treatment apparently had little effect on height growth but, importantly, it significantly increased survival in 2 of the 3 years.

Seedling mortality. Altogether, 28.2% of the original seedlings died. The causes of seedling mortality were difficult to determine. At the Yolo and San Benito county sites, feral hog rooting disturbed the soil and eliminated over a dozen seedlings.
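The regression analysis described above can be sketched in a few lines of code. The measurements below are hypothetical placeholders, not data from the study; the point is only to show how initial height and subsequent growth would be regressed and the strength of the correlation summarized.

```python
# Ordinary least-squares fit of subsequent height growth on initial
# seedling height, with the correlation coefficient. All input values
# are hypothetical, chosen only to illustrate the calculation.

def linear_fit(x, y):
    """Return (slope, intercept, r) for an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    r = sxy / (sxx * syy) ** 0.5
    return slope, my - slope * mx, r

initial_height_in = [4, 6, 8, 10, 12, 14, 16, 18]      # hypothetical, inches
growth_in = [0.5, 1.0, 1.2, 1.8, 2.1, 2.6, 2.9, 3.5]   # hypothetical, inches

slope, intercept, r = linear_fit(initial_height_in, growth_in)
print(f"slope = {slope:.3f} inches of growth per inch of initial height, r = {r:.2f}")
```

A strongly positive slope and an r close to 1, as in this toy example, correspond to the significant positive correlations the study reports.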

At the Yuba and Santa Barbara county sites, livestock and deer browsing appeared to reduce seedling height and likely killed some seedlings not in shelters. At all of the sites, there was evidence of browsing of non-sheltered seedlings, and in many cases these seedlings were either killed or lost height during one or more years. At the San Luis Obispo and Yuba county sites, there was extensive gopher activity close to some seedlings, although only a couple of them appeared to be affected. The extremely high mortality at the Santa Barbara County site was most likely due to below-normal rainfall during 2008 and 2009, only 5.9 inches each year, compared with the long-term average of 8.3 inches. Even though blue oak is relatively drought resistant, it is not surprising that mortality was so high under these extremely dry conditions.

Shade. Whether seedlings were growing in shade influenced how they performed. Shaded seedlings grew less, and differences in total height and height growth between shaded and non-shaded treatments were significant in 2009 and 2010. There were significant interactions between shade and shelter for height growth in all years — seedlings in tree shelters did not grow as much in shade as did those in the open. This is not surprising, since tree shelters reduce light levels reaching the seedlings inside, often by 50% or more. In our study, light levels for seedlings in tree shelters in the shade were apparently too low to allow substantial growth.

Seedling size. The initial height of the seedlings at the start of the study was strongly and positively correlated with how much the seedlings subsequently grew. It was also significantly and positively correlated with survival. Taller seedlings have more biomass and photosynthetic tissue and would be expected to grow more; for regeneration, they are the best candidates for protection or weed control.

Rainfall.
This study took place during 3 consecutive relatively dry years, followed by one average or above-average rainfall year. We cannot say for certain that the large increase in 2010 seedling height growth compared to the previous 2 years’ growth was primarily due to increased rain, but it appeared that more soil moisture contributed to greater growth. For instance, we noticed more seedlings exhibiting second flushing — a second period of active shoot elongation — in 2010 than in previous years.

The positive effects of the shelter treatments were also greatest in 2010, suggesting that tree shelters are most beneficial when there is abundant moisture.

Improved regeneration

Our study has been under way for less than 4 years — a relatively short time in the life of blue oaks — but the data strongly suggest that tree shelters can enhance growth and that weed control can increase survival. Both techniques improved the chances for blue oak seedlings to grow into saplings. These trends were especially evident in the last year of the study, when annual precipitation was above average at most sites, and seedlings growing away from tree canopies and in full or near-full sunlight benefited the most. In our experience, blue oak seedlings in the open covered with tree shelters generally grow into saplings in less than a decade. Compared with artificial regeneration techniques, this natural regeneration strategy is more cost efficient and therefore more likely to be widely adopted by California landowners. We estimate that this approach would cost less than half of what it costs to plant seedlings. We feel that using tree shelters and weed control to enhance early growth and survival of naturally occurring blue oak seedlings could significantly improve the regeneration of this important woodland species and promote its long-term conservation.

Pest- and pathogen-free planting stock is essential for successful establishment and future productivity of new orchards and vineyards. Clean stock is also a requirement for intrastate, interstate and international commerce of tree, vine and garden rose planting stock. To ensure the quality of commercially produced nursery stock in the state, the California Department of Food and Agriculture enforces laws and regulations related to the production of certified nursery stock as outlined in the Nursery Inspection Procedures Manual.
Because of the potentially large and long-term impacts on the nursery crop as well as the subsequently planted orchards, vineyards and ornamental landscapes, control of plant-parasitic nematodes in nursery fields is a major focus of the nursery stock certification program. Producers of perennial crop nursery stock in California can meet nematode certification requirements by fumigating the field at the beginning of the nursery cycle using an approved treatment or by conducting a detailed inspection of soil and planting stock at the end of the production cycle. If growers elect to use inspection procedures instead of approved treatments and soil or plant samples are found to contain prohibited nematodes, further sampling is conducted to delineate the extent of the problem, and nursery stock from the affected area usually is destroyed. Preplant soil fumigation thus reduces the economic risk of a nonsalable nursery crop and is used in most tree and garden rose nurseries in California. Grapevine nursery stock also must meet phytosanitary requirements to be certified in California, but in contrast to tree and rose growers, many grape nursery producers elect to use the inspection procedures rather than fumigation. In practice, the risk of nematode occurrence in production of grapevine nursery stock without fumigants is reduced by spring planting, a relatively shorter nursery production cycle and market preference for smaller nursery stock. However, grape nursery operations with sandy soils or sites where grapes have been grown previously often use preplant fumigation practices comparable to those of tree and rose nurseries to reduce the economic and market risks of not meeting phytosanitary regulations.
Most field-grown perennial nursery operations have used methyl bromide for preplant pest control because it effectively diffuses through the soil profile, penetrates roots and dependably provides effective pest control across a range of soil types and moisture conditions. Under the provisions of the U.S. Clean Air Act and the Montreal Protocol, the import and manufacture of methyl bromide are being phased out because of its deleterious effects on stratospheric ozone.

Perennial nursery producers have largely continued using methyl bromide under the critical use exemptions (CUE) and quarantine/preshipment (QPS) criteria. However, increasing production costs and international political pressure on CUE and QPS regulations have spurred efforts to identify economically viable alternatives to methyl bromide for the perennial nursery industry. Several factors limit the adoption of alternative fumigants in California nursery systems. First, there are very few fumigant or non-fumigant nematicides available. In the United States only a handful of fumigants are registered, including methyl bromide, 1,3-dichloropropene (1,3-D), chloropicrin, dimethyl disulfide (DMDS) and methyl isothiocyanate-generating compounds. Of these, DMDS is not currently registered in California and has had only limited testing in nurseries. Methyl iodide was registered in California in late 2010, but the federal registration was withdrawn by the manufacturer in early 2012. The nursery certification program and other regulations further limit the available alternatives. Of the fumigants registered in the state, only 1,3-D is an approved treatment in nurseries with medium- to coarse-textured soils. However, it is not approved for nurseries with fine-textured soils because the registered rates are not sufficient to provide acceptable pest control. Most of the alternative fumigants are heavily regulated due to concerns about human safety and environmental quality related to emissions of fumigants and associated volatile organic compounds. These concerns have led to a constantly changing regulatory environment, encompassing buffer zones, field preparation requirements, available compounds and rate limitations at the field and air basin level. Uncertainty within the nursery industry about current and pending fumigant regulations presents a continuing challenge to the adoption of methyl bromide alternatives in California.
Although fumigation in the perennial crop nursery industry is driven by nematode certification, there are serious concerns that the level of secondary pest control provided by methyl bromide will not be matched by the alternatives. Although weeds can be addressed to a large extent with tillage, hand-weeding and herbicides, greater reliance on these techniques is likely to have environmental and economic impacts. More importantly, many nursery producers are very concerned about the consequences of soilborne diseases that are currently controlled with methyl bromide or methyl bromide and chloropicrin combinations. Reliance on alternatives with narrower pest control spectra may result in problems with new diseases or the resurgence of old ones. Research has been conducted in recent years to address the issues limiting adoption of methyl bromide alternatives in California’s perennial crop nursery industry. As part of the USDA-ARS Pacific Area-wide Pest Management Program for Integrated Methyl Bromide Alternatives, two additional research and demonstration projects were implemented from 2007 to 2010. First, because current and pending regulations greatly affect how and when fumigants can be used, a research station field trial was conducted to simultaneously determine the effects of emission reduction techniques on pest control and fumigant emissions. Second, two trials were conducted in commercial nurseries to test and demonstrate pest control and nursery stock productivity with 1,3-D treatments in an effort to increase grower experience and comfort with available alternatives.

Aboveground biomass contained most of the nitrogen from legume green manure

The primary study objective was to quantify the effect of cover crops and amendments on soil fertility, potato yield and potato pests. Two cover crop studies were conducted at IREC — a study begun in 2014 that evaluated mid-summer cover crops and a study begun in 2016 that evaluated cover crops planted in spring, mid-summer and fall. Cover crop planting times and species were selected to fit local cropping systems and to maximize biomass production under local growing conditions. For example, planting cover crops in mid-summer is desirable for producers growing a grain hay crop because the mid-summer planting occurs shortly after hay harvest, which allows producers to generate crop income. A mid-summer planting also allows cover crop growth during the warm temperatures of summer and early fall. Planting cover crops in the spring is a good fit for producers with limited water availability because it takes advantage of stored winter soil moisture and cool, wet weather conditions during establishment. Planting in the fall is a good fit for producers who want to grow a full-season cash crop, such as hard red wheat, because fall planting allows them to plant after cash crop harvest. Fall planting is also desirable because the cover crop can prevent soil erosion during winter and early spring. In both studies, potatoes were planted the year after cover crops were grown. Cover crop species included cool-season and warm-season species, seeded alone and in mixes. Cover crop species were selected based on their previous success in the local area or on previous research documenting success under similar growing conditions. A list of species evaluated is shown in table 1.

Cover crops were drill-seeded into a disked, packed seedbed using a drill cone planter with drill rows spaced 6 inches apart. Cover crop plant density was estimated using visual plant counts within a central rectangle in each plot, measuring 5 feet by 10 feet, when plants were 3 to 5 inches tall. Cover crops were grown under sprinkler irrigation, without synthetic fertilizer or pesticides. They were managed as a green manure by flail-mowing and disk-incorporating aboveground biomass at early flowering. Cover crop biomass in each plot was estimated from a quadrat of 5 feet by 10 feet. An aboveground biomass subsample was sent to a laboratory to estimate total nitrogen content in cover crop biomass. An untreated fallow treatment and a urea treatment were included in all trials for comparison purposes. The fallow treatment for spring cover crops was fallowed for 12 months before potato planting; the fallow treatment for mid-summer cover crops was fallowed, after harvest of the barley hay crop, for 8.5 months before potato planting; and the fallow treatment for fall cover crops and several amendments was fallowed, after harvest of the barley grain crop, for 6.5 months before potato planting. All fallow treatments, after weed suppression ratings were taken, were hand-weeded to prevent excessive weed growth and weed seed production. Planting of the spring cover crop occurred in mid-April. Mid-summer plantings occurred in late July, after a spring barley hay crop was grown. The fall cover crop planting occurred in mid-September, also after a spring barley grain crop was grown. Cover crops were incorporated into the soil at 50% flowering — 71 to 77 days after planting for the spring planting, 70 to 76 days after planting for mid-summer plantings and 230 days after planting for the fall planting.
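The arithmetic behind scaling a quadrat biomass clipping to a per-acre nitrogen contribution can be sketched as follows. The quadrat dimensions match those described above; the biomass and nitrogen-concentration inputs are illustrative assumptions, not measurements from the trials.

```python
# Scale dry biomass clipped from a 5 ft x 10 ft quadrat to a per-acre
# basis, then estimate the nitrogen contribution from the lab-measured
# N concentration. Input values in the example call are illustrative.

QUADRAT_FT2 = 5 * 10   # quadrat area used in the trials, square feet
ACRE_FT2 = 43_560      # square feet per acre

def n_contribution(dry_biomass_lb_per_quadrat, n_fraction):
    """Return (biomass lb/acre, nitrogen lb/acre)."""
    biomass_per_acre = dry_biomass_lb_per_quadrat * ACRE_FT2 / QUADRAT_FT2
    return biomass_per_acre, biomass_per_acre * n_fraction

# e.g., 8 lb of dry matter per quadrat at an assumed 3% N
biomass, nitrogen = n_contribution(8.0, 0.03)
print(f"{biomass:.0f} lb biomass/acre, {nitrogen:.0f} lb N/acre")
```

With these assumed inputs the result is on the order of 200 lb N/acre, the same scale as the legume green manure contributions reported later in the article.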

Fall-planted cover crops did not reach the flowering stage before incorporation. They were terminated early to allow 4 weeks between cover crop incorporation and potato planting, enabling cover crop decomposition and preventing a green bridge. Total applied water for irrigated cover crop trials was 12 inches for the spring planting, 6 to 8 inches for mid-summer plantings and 3.5 inches for the fall planting. Cover crop vigor was determined by visually evaluating plant canopy cover and height in the plot area, with a vigor score of 10 equal to the most vigorous growth and 1 equal to bare ground. Weed suppression ratings were determined by visually evaluating the density and height of weeds in each plot. A weed suppression rating of 10 represented zero weeds in the plot and a rating of 1 was equal to weed density and height similar to the unplanted bare-ground control. Weed suppression ratings were taken when weeds and cover crops were 6 to 10 inches tall. Weed biomass was measured in each plot at the time of cover crop harvest by hand-separating cover crop and weed plant material derived from the quadrat sample.

Two amendment studies were conducted at IREC. One study evaluated fall-applied amendments in 2014 and another study evaluated amendments applied in fall 2016 and spring 2017. Amendments were applied by hand and disk-incorporated into the soil — in mid-September for fall applications and in late April for spring applications. The tested organic amendments included chicken manure, steer manure, composted chicken manure and a compost mix using green waste and cow manure.

Blood meal and soy meal were broadcast-applied and incorporated using a Lilliston cultivator after bed preparation and before planting. These two amendments were included to represent organic alternatives to quick-release synthetic nitrogen fertilizers such as urea. Amendment application rates were based on the products’ moisture and nitrogen content, with the goal of applying 150 pounds of nitrogen per acre. Amendment application rates ranged from 1,100 pounds per acre for blood meal with 13% nitrogen to 10,000 pounds per acre for compost with 1.5% nitrogen. The nitrogen mineralization rates for the amendments varied and were not controlled in the experiment.

Potatoes were planted over areas treated with cover crops, amendments and combinations of cover crops and amendments. Potatoes were also planted over areas treated with urea fertilizer and over untreated fallow areas. Planting occurred in the spring, without the use of synthetic fertilizers or pesticides. Preplant soil samples were taken at potato planting to confirm that supplies of phosphorus, potassium, sulfur and calcium were adequate to avoid deficiencies; all soil tests showed adequate nutrient levels according to University of California guidelines. Potato row spacing was 36 inches and seed spacing was 10 inches. The Russet Norkotah potato variety was evaluated in 2015 and the Yukon Gold variety was evaluated in 2017. Soil samples were collected from each plot shortly before planting to determine nitrate available at preplanting, as well as available ammonium and total nitrogen. Plot size was 12 feet by 40 feet; all sampling occurred in a middle area, measuring 6 feet by 30 feet, to avoid edge effects. The soil type at IREC is a Tulebasin mucky silty clay loam with 4.5% organic matter. To meet crop evapotranspiration needs, potatoes were irrigated with solid-set irrigation, scheduled using soil moisture monitors and an on-site CIMIS weather station.
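The application-rate arithmetic described above, supplying a fixed nitrogen target from products with very different nitrogen concentrations, can be sketched as follows. The 150 lb/acre target and the example N contents come from the text; the optional moisture correction is an illustrative assumption.

```python
# Compute a per-acre amendment application rate from a nitrogen target
# and the product's N concentration, optionally correcting for moisture.
# Target and N contents follow the text; the moisture adjustment is an
# illustrative assumption, since actual product moisture is not reported.

def application_rate(n_target_lb, n_fraction_dry, moisture_fraction=0.0):
    """Pounds of product per acre needed to supply n_target_lb of N."""
    n_per_lb_product = n_fraction_dry * (1 - moisture_fraction)
    return n_target_lb / n_per_lb_product

print(round(application_rate(150, 0.13)))   # blood meal, 13% N
print(round(application_rate(150, 0.015)))  # compost, 1.5% N
```

The computed rates, roughly 1,150 and 10,000 pounds per acre, match the range of rates reported in the study.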
Crop vigor was monitored multiple times during the growing season by visually evaluating plant canopy cover, height and color over the plot area, with a vigor score of 10 equal to plants in the plot with the highest canopy cover and a dark-green color and 0 equal to short, senesced, yellow plants. Petiole nitrogen was measured at early tuber bulking and at crop maturity. Potatoes from each plot were mechanically harvested and graded to determine fresh-market tuber yield and tuber quality. Potatoes were graded by counting all potatoes in each plot and mechanically sorting them by weight into five size classes based on U.S. grade and carton classes. Tuber quality was determined by counting and weighing all cull tubers that displayed rot, greening, knobs, growth cracks, irregular shape and irregular skin appearance. A 10-tuber subsample from each plot was evaluated for internal defects, including hollow heart, brown spot bruise and vascular discoloration, and for specific gravity.

Cover crop establishment in all trials was successful. Plant densities were measured at or above 80% of the seeding rate, with two exceptions — a crop of cowpeas seeded in mid-summer and a crop of spring-seeded arugula. Low plant density for spring arugula was probably due to planting too deep. Arugula requires a shallow seeding depth of less than 0.5 inch. Subsequent seedings of arugula at the correct depth produced plant density higher than 80%.

Spring wheat, fall triticale, woollypod vetch, field peas, spring mustard and oilseed radish displayed rapid growth, high vigor and high weed suppression. Mixes of mustards and field peas or vetch, in 50/50 proportions, also had high vigor and high weed suppression. Spring-seeded arugula exhibited lower vigor and weed suppression than the other spring cover crops, likely due to the stand problems associated with excessively deep seeding. Oilseed radish, mustards and grasses planted in mid-summer, after a spring barley crop, exhibited lower vigor and biomass than spring plantings. This effect was caused by a deficiency of plant-available nitrogen at planting; the mustards, radish and grasses had low nitrate in plant tissue during the early season and a low percentage of nitrogen in biomass at harvest compared to spring plantings. Nitrate nitrogen in the top 10 inches of fallow plots averaged 17 parts per million (ppm) at the spring planting and below 5 ppm at the mid-summer and fall plantings. These nitrate concentrations correspond to approximately 28 and 8 pounds of nitrogen per acre, respectively, in the top 10 inches of the profile. Many growers express interest in growing a spring barley or wheat crop for revenue before planting cover crops, but these results clearly show that adequate mineralized soil nitrogen is needed for non-legume cover crops to flourish. The idea that legumes might contribute nitrogen to non-legume cover crops in a mixed planting was not supported, as mustard, radish and grass grown in a mix with field peas and vetches had vigor and biomass similar to the single-species plantings; the mix was instead dominated by field peas, which fixed their own nitrogen but did not share it with other species.

Field pea and vetch green manures contributed substantial nitrogen to the system, adding over 150 pounds — and in many cases over 200 pounds — of nitrogen per acre from aboveground biomass.
The highest nitrogen contributor was spring-planted “flex” field peas, which added 306 pounds of nitrogen per acre. Berseem clover and cowpeas contributed less than 70 pounds of nitrogen per acre because Tulelake’s short growing season was too cold for these species to reach maturity before frost. Several grass and mustard cover crops produced significant biomass, but their nitrogen content was less than half of that produced by most legume species. More than 150 pounds of nitrogen per acre were contributed by 50/50 mixes of legumes and either grass or mustard. Mineralized nitrogen at the time of potato planting was correlated with added nitrogen from cover crops, suggesting that little nitrogen was lost to leaching or denitrification over the winter. Mineralized nitrogen in the top 10 inches of soil for most field peas and vetches was more than double that for non-legume cover crops. Mineralized nitrogen at potato planting in treatments that involved haying field peas’ aboveground biomass and removing it from the field was no different from that in fallow treatments. This is consistent with other studies demonstrating that aboveground biomass contains most of the nitrogen in legume cover crops. Mineralized nitrogen at potato planting in fallow treatments averaged 55 pounds of nitrogen per acre for spring fallow, 48 pounds per acre for mid-summer fallow and 43 pounds for fall fallow. Mustard, radish and sorghum-sudangrass resulted in mineralized nitrogen similar to that of fallow treatments, suggesting these cover crops had a neutral effect on soil nitrogen. Spring wheat and fall triticale resulted in lower mineralized nitrogen at potato planting than was measured in fallow treatments, likely because decomposition of grass residue tied up available nitrogen. Delayed release of nitrogen is problematic in potatoes because the crop requires adequate nitrogen in the early season for vegetative growth and tuber initiation.
Potato petiole nitrate at early bulking was used to evaluate in-season nitrogen availability. Legume cover crops resulted in much higher potato petiole nitrate at early bulking than did grasses; petiole nitrate for treatments with field peas and vetches was similar to petiole nitrate in conventional fertilizer controls. When comparing potato petiole nitrate in cover crop treatments to that in fallow treatments, legumes were higher, mustards were similar and grasses were lower. One year after growing potatoes, flag leaf nitrogen in winter wheat was higher in plots that had received spring vetch and field pea treatments than in fertilizer controls and fallow treatments.
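The conversion used earlier in these results, from a soil nitrate concentration in ppm to pounds of nitrogen per acre for a 10-inch sampling depth, depends on soil bulk density, which the article does not report. The sketch below assumes a bulk density of about 0.73 g/cm³, a plausible value for a mucky, high-organic-matter soil and one that reproduces the article’s 17 ppm ≈ 28 lb and 5 ppm ≈ 8 lb figures.

```python
# Convert a soil nitrate-N concentration (ppm) to lb N per acre for a
# given sampling depth. Soil mass per acre = 43,560 ft^2 x depth (ft)
# x bulk density; 1 ppm = 1 lb of N per million lb of soil.
# The 0.73 g/cm^3 bulk density is an assumption, not a reported value.

LB_PER_FT3_PER_G_CM3 = 62.428   # converts g/cm^3 to lb/ft^3
ACRE_FT2 = 43_560

def ppm_to_lb_n_per_acre(ppm, depth_in, bulk_density_g_cm3):
    soil_lb_per_acre = (ACRE_FT2 * (depth_in / 12)
                        * bulk_density_g_cm3 * LB_PER_FT3_PER_G_CM3)
    return ppm * soil_lb_per_acre / 1e6

print(f"{ppm_to_lb_n_per_acre(17, 10, 0.73):.1f} lb N/acre")  # spring fallow
print(f"{ppm_to_lb_n_per_acre(5, 10, 0.73):.1f} lb N/acre")   # later plantings
```

A heavier mineral soil (bulk density 1.3 to 1.5 g/cm³) would roughly double these per-ppm figures, which is why the conversion factor must be matched to the soil being sampled.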

This study suggests that switch grass is able to tolerate drought by mining deep soil moisture

A preintroduction evaluation can be performed in tandem with typical agronomic yield trials. This approach can both quantify ecological risk and assess the suitability and economic performance of the species within a particular region. To assess the invasive potential of any proposed bio-fuel species, including switch grass in California, we propose a seven-step evaluation protocol. These evaluations should be performed for all candidate genotypes and cultivars as well as for transformed genotypes of native species because ecological interactions can vary widely within a species. Science-based information generated from risk assessments, bio-fuel crop ecological studies, niche modeling and other evaluations can guide risk mitigation decisions at appropriate points within bio-fuel research and development, crop selection and production, harvest and transportation, storage site selection and conversion/refinery practices.

Risk assessment tools have been used in Australia and New Zealand as an aid in decision making for the proposed introduction of novel species for horticultural, agronomic and other purposes. For potential bio-fuel species, risk assessment should serve as a basic first step in evaluating their invasive potential, whether the species are exotic, native, novel constructs or genetically modified. We performed a risk assessment for switch grass in California using the Australian model.

Our analysis produced an inconclusive result, with an “evaluate further” classification. The first question in the risk assessment asks whether the species is domesticated. A “yes” response favors acceptance, under the assumption that domestication generally reduces the inherent weediness of wild types, which are wild plants not selected for production traits. As previously discussed, this is true for most agronomic and horticultural species, but the opposite is true for bio-fuel crops because selection for favorable bio-fuel crop characteristics generally enhances “weedy” characteristics. When we answered the first question differently, the outcome changed from “evaluate further” to “reject.” We further evaluated switch grass as a hypothetically sterile cultivar. In this case, the weed risk assessment yielded an acceptably low risk that it would become invasive. This suggests that seed production may be key to the potential invasiveness of switch grass. However, a lack of seed production does not guarantee a low risk of invasion, considering that the giant reed, which is sterile, is highly invasive in California. The invasiveness of giant reed is due to its ability to regenerate from stem nodes after the stem is detached from the rhizome as a result of flooding or control efforts. Despite the invasive potential that our analysis shows for switch grass in California, there are no documented cases of the species escaping in agricultural or natural systems. This, however, may be a function of the limited number of opportunities for introduction of switch grass propagules outside of intentional planting areas. It is also important, therefore, to conduct studies that quantify switch grass performance in various ecological settings in order to mitigate the risk of propagule escape and establishment.

The natural distribution of a species is largely controlled by climate factors, with precipitation and temperature playing the dominant roles.
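The decision logic of a score-based weed risk assessment like the Australian model described above can be sketched as follows. The question scores here are hypothetical, but the outcome thresholds, accept below 1, reject above 6 and “evaluate further” in between, are the commonly cited ones for that model, and the example shows how a single question such as domestication can flip the outcome.

```python
# Map a weed risk assessment total score to an outcome and show how one
# question can flip the result. Question scores are hypothetical;
# thresholds follow the commonly cited Australian WRA convention:
# < 1 accept, 1-6 evaluate further, > 6 reject.

def wra_outcome(total_score):
    if total_score < 1:
        return "accept"
    if total_score > 6:
        return "reject"
    return "evaluate further"

base_score = 6             # hypothetical sum over the remaining questions
domesticated_adjust = -3   # hypothetical credit for a "yes" on domestication

print(wra_outcome(base_score + domesticated_adjust))  # treated as domesticated
print(wra_outcome(base_score + 1))                    # domestication answered "no"
```

With these illustrative numbers the species lands in “evaluate further” when scored as domesticated and in “reject” when it is not, mirroring the sensitivity to the first question described above.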

Bioclimatic envelopes, or climate matches, can provide both an estimate of range suitability for a bio-fuel crop species outside cultivation and the agronomic potential of the bio-fuel crop in the target region. There are numerous methods for estimating the bioclimatic envelope, including CLIMEX, Maxent, GARP, BIOCLIM, classification and regression trees, and simple logistic regression. CLIMEX has been used to model the distribution of bio-control agents, poikilothermic animals and many invasive plant species. The strength of CLIMEX for invasive species applications is that the model can be based on the historical range and supplemented with empirically derived biological and physiological data. We performed a CLIMEX analysis of switch grass using the plant’s native range as a basis in building the model and then supplementing it with environmental tolerance data from greenhouse studies. In a global model of potential suitability, the potential cultivatable range of switch grass was very broad, both with and without irrigation inputs. Subsequent analysis of potential suitable habitat in the western United States indicated that much of the region is unsuitable for switch grass, likely because of the very dry summers of arid and Mediterranean climatic regions. However, when adequate yearlong soil moisture was available, the suitable range of switch grass increased dramatically throughout much of the western United States. This could indicate that the successful cultivation of switch grass would depend upon summer irrigation, while any escape from cultivation and invasion into natural areas would likely be confined to riparian or wetland areas with a permanent water source. Riparian systems are the most heavily invaded habitats in the Central Valley of California, as they possess the primary limiting resource of soil moisture.
Furthermore, riparian areas often border production fields, and traversing them would be unavoidable during bio-fuel biomass transport.
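As a toy illustration of the climate-matching idea (not CLIMEX itself, whose stress indices are fitted to the species' historical range), a minimal suitability score might combine a temperature term with a moisture term in which irrigation substitutes for summer rainfall. Every threshold below is a hypothetical placeholder, not a fitted value.

```python
def suitability(mean_temp_c, summer_rain_mm, irrigated=False):
    """Toy climate-match score in [0, 1]: the product of a temperature
    term and a moisture term, in the spirit of envelope models such as
    CLIMEX. All thresholds here are illustrative, not fitted values."""
    # Temperature term: peaks near a hypothetical 20 C optimum.
    temp_score = max(0.0, 1.0 - abs(mean_temp_c - 20.0) / 15.0)
    # Moisture term: irrigation substitutes for summer rainfall.
    rain = 400.0 if irrigated else summer_rain_mm
    moist_score = min(1.0, rain / 400.0)
    return temp_score * moist_score

# A dry-summer Mediterranean site scores low unless irrigated, mirroring
# the finding that arid western sites are unsuitable without a yearlong
# moisture source.
dry_site = suitability(18.0, 50.0)                    # rainfed
irrigated_site = suitability(18.0, 50.0, irrigated=True)
```

In this sketch, adding irrigation lifts the moisture term to its maximum, so a site with mild temperatures but dry summers moves from unsuitable to suitable, which is the qualitative pattern the CLIMEX analysis reported for the western United States.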

The CLIMEX model does not forecast yield potential, but it does demonstrate that some regions of California are suitable for establishment and persistence of switch grass. In support of our suitability prediction in California, Pedroso et al. evaluated the agronomic potential of switch grass in four regions of the state. This evaluation showed high productivity in both study locations in the Central Valley, which was considered a highly suitable region based on our bio-climatic index. Switch grass grown in the Imperial Valley, near the margin of highly suitable climatic conditions in our analysis, also produced high yields. In contrast, yields and survival of switch grass were lowest in the most northern, cooler, mountainous region of Tulelake, near the Oregon border in northeastern California. Our model also predicted that this region of the state would be poorly suited to switch grass establishment, even with irrigation.

Each bio-fuel species should be evaluated for various physiological and environmental tolerances. This information can be used to identify the ecosystems that are most susceptible to invasion and can also be integrated into risk analysis and bio-climatic and agronomic models to estimate, and subsequently mitigate, the likelihood of invasion. Based on results from our CLIMEX analysis of switch grass in the western United States, water availability should be the major limiting factor for switch grass naturalization. To test this, we conducted a greenhouse study to evaluate switch grass’s tolerance of soil moisture stress at various levels of water availability, ranging from moisture deficit to flooded, and we also assessed the germination, establishment, performance and reproductive potential of four common ecotypes, both upland and lowland. Our results showed that cultivars of switch grass performed well in both well-watered control and flooded conditions.
Although switch grass survived extended periods without water, individual plants in drought treatments were shorter, with lower leaf area and specific leaf area, and they produced fewer tillers and less biomass. As expected, lowland types outperformed upland types in the flood treatment and also displayed higher fitness under most conditions, which likely explains why they are the target of germplasm improvement for bio-fuel cultivation. We concluded that switch grass, particularly lowland ecotypes, has the ability to germinate, establish and flower in low moisture and even more so in flooded conditions. The evidence further supports the climate-matching data and indicates that soil moisture is the limiting factor in the establishment and growth of switch grass in regions of the western United States. While tolerance to a range of soil moisture conditions may increase the cultivatable range of switch grass, it also suggests that the species is not likely to be very competitive in natural areas exposed to prolonged drought, as is common in much of California. In another study, we grew switch grass in outdoor mesocosms under irrigated and rainfed conditions and assessed the spatial distribution and abundance of roots using minirhizotron images and whole root-system sampling. Although plants survived extended periods of drought, their shoot and root biomass, root length density, numbers of culms and culm height were greatly reduced under dry conditions. These data support the results of the greenhouse study.

The rainfed treatment reduced switch grass whole-plant biomass by 83%, culm production by 67% and root length density by 67% from the levels of irrigated plants. However, switch grass grew roots continuously into regions of available soil moisture as surface soil layers grew increasingly dry. A deep-rooting habit and continuous root growth from regions of water depletion to moister regions are strategies used for drought avoidance by plants exposed to periodic water stress. It is important to note that while switch grass survived dry, rainfed conditions, its performance was significantly reduced. This level of performance would be unacceptable for agronomic production and would also reduce the ability of switch grass to establish and compete with resident vegetation in drier natural areas.

The results of our climate-matching analyses as well as the biological and physiological studies allowed us to identify habitats that were most susceptible to invasion by switch grass. From our previous work, we knew that riparian corridors and perhaps even rice production fields are the regions most likely to be susceptible to switch grass invasion in California. In subsequent work, we confirmed these findings by introducing switch grass propagules into a riparian habitat under controlled conditions and evaluating their colonization, survival and establishment potential under varying levels of soil moisture availability and competition. The results supported our greenhouse and mesocosm studies, again demonstrating that while switch grass can survive under drought conditions, its performance on upland sites away from streams was very poor compared to that of switch grass plants adjacent to the stream. This confirms our conclusion that riparian regions of the state are the areas most potentially susceptible to switch grass invasion, while dryland regions of California have very low susceptibility to invasion.
Of equal importance, switch grass grown without competition in the first year in the wet habitat produced about six times more tillers than switch grass growing in an intact resident plant community with competition, and the tillers were twice as tall and yielded eight times the aboveground biomass. This further indicates that, even in a suitable habitat, switch grass is not highly competitive with other vegetation.

The probability of establishment of an invasive population is directly proportional to the propagule pressure from outside sources. In the case of switch grass, outside sources will be production fields, harvest and transportation equipment and biomass storage sites. Our initial risk assessment for the invasive potential of switch grass in California determined that seed production and dispersal posed the greatest risk that it would become invasive. To aid in their efficient conversion into energy, cellulosic bio-fuel species are typically harvested after senescence in the field, usually in late fall. In our seed biology experiments, we showed that switch grass germinates and survives under conditions that range from 10% soil moisture to submersion in water. From these experiments, we estimate that an average switch grass field would produce between 300 and 900 million seeds per hectare. Using a conservative estimate of 300 million seeds per hectare and 60% dormancy, approximately 120 million seeds per hectare would be capable of germinating, given adequate soil moisture conditions. This tells us that mitigation practices will be needed to reduce the risk of seeds spreading to sensitive ecosystems.
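The propagule-pressure arithmetic behind that estimate can be written out directly; the figures are the text's own conservative numbers (300 million seeds per hectare, 60% dormancy).

```python
def germinable_seeds_per_ha(seed_yield_per_ha, dormancy_fraction):
    """Seeds capable of germinating, given adequate soil moisture:
    total seed yield minus the dormant fraction. Figures elsewhere in
    this sketch come from the text's conservative estimate."""
    return seed_yield_per_ha * (1.0 - dormancy_fraction)

# 300 million seeds/ha at 60% dormancy leaves 120 million germinable
# seeds per hectare.
germinable = germinable_seeds_per_ha(300_000_000, 0.60)
```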
Mitigation practices could include the planting of sterile cultivars, cleaning equipment before moving it to other areas and using closed transport systems and storage facilities.

As with genetically modified food and feed crops, screening for possible cross hybridization with related and desirable species should be obligatory to reduce the chance of genetic contamination or creation of novel hybrids. In California, there are five native species and five introduced species within the genus Panicum. To date, there is no evidence of hybridization between switch grass and any other Panicum species, regardless of its native origin. Thus, the likelihood that switch grass would either contaminate the gene pool of native Panicum species or enhance the weediness of nonnative Panicum species through hybrid vigor seems very small.

Mitigation recommendations

In August 2009, the U.S. Invasive Species Advisory Committee, a group of non-federal experts and stakeholders chartered under the Federal Advisory Committee Act of 1972, adopted nine recommendations for the federal government’s bio-fuel programs.


In addition to relative performance incentives and the use of fieldmen as means to address moral hazard problems without shifting additional, exogenous risk to producers, pricing mechanisms can reallocate risk and minimize cost, while also facilitating access to traditional risk management strategies. The choice of pricing models offered in a biomass production contract can therefore have important implications for each of the three theoretical frameworks. The simplest provision, a set price per unit of biomass throughout the duration of the contract, eliminates all down-side price risk for producers, but it also forgoes the potential for higher gains should the value of biomass increase. An acreage contract that compensates the producer only by acres of production eliminates producer yield risk, but has analogous price risk consequences. Cost-plus pricing similarly eliminates all down-side price risk to producers by setting a fixed profit margin above the seasonably fluctuating cost of required inputs and shifts the long-range risk of rising input costs to the end-user. On the other hand, indexed pricing provisions, where the price of the biomass is tied to commodity prices or other benchmarks that fluctuate over time, account for the opportunity cost of biomass production and enable use of traditional agricultural risk management tools, such as the commodity market strategies discussed previously. The theory behind index pricing is to identify a correlation in pricing between biomass and established commodities. For example, the price of biomass may fluctuate proportionately to the price of corn, crude oil or natural gas. Parties may develop creative indices to try to better match the price fluctuations of biomass, such as basing price on a theoretical “biomass index,” which could consist of various percentages of commodity contracts.
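One way to make such a "biomass index" concrete is a weighted basket of commodity prices that scales a negotiated base price. All weights, prices and the $60/ton base price below are invented for illustration.

```python
def indexed_biomass_price(base_price, base_index, current_prices, weights):
    """Scale a negotiated base biomass price by the proportional
    movement of a weighted commodity basket. The index level need not
    equal the biomass price; only its fluctuations matter."""
    current_index = sum(weights[c] * current_prices[c] for c in weights)
    return base_price * (current_index / base_index)

# Hypothetical basket: 50% corn, 30% crude oil, 20% natural gas.
weights = {"corn": 0.5, "crude_oil": 0.3, "natural_gas": 0.2}
base_prices = {"corn": 5.00, "crude_oil": 80.0, "natural_gas": 3.0}
base_index = sum(weights[c] * base_prices[c] for c in weights)

# If every commodity in the basket rises 10%, the indexed biomass
# price rises 10% as well.
up10 = {c: p * 1.10 for c, p in base_prices.items()}
new_price = indexed_biomass_price(60.0, base_index, up10, weights)
```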

The actual index price need not match the biomass price, but merely have proportionate price fluctuations. If this can be achieved, producers could employ market strategies in the respective commodities that compose the “biomass index” to protect their primary investments in biomass production. Of course, producers will have heterogeneous preferences for pricing provisions based on their individual risk tolerances and marketing skills. Because of these differences, no single compensation provision will be optimal for every producer. Producers with low risk tolerance will likely prefer fixed pricing or profit margins, or guaranteed minimum revenue provisions. Producers with high risk tolerances may prefer indexed pricing arrangements to allow them the opportunity to gain from higher prices while employing market strategies to minimize downside risks. As illustrated in the above discussion of pricing mechanisms, the potential contractual provisions embedded in a biomass contract are varied and fraught with complex tradeoffs unique to the agricultural context and further heightened due to the novelty of the bio-energy industry. Accordingly, the following section outlines many of the particular considerations of a biomass contract.

In Table 2, below, we propose a list of specialized contract provisions in relation to the identified contract attributes. The result is a matrix framework for biomass contracting that incorporates the essential elements of the social compatibility, risk minimization, and cost-minimization contract models. Traditionally, biomass contracts have originated from end users, and this model is likely to continue. The extent to which individual producers have the ability to negotiate provisions identified in Table 2 is questionable at this stage in the industry’s development, and will likely vary by end-user.

Notwithstanding the current state of the market and its “take-it-or-leave-it” biomass supply contracts, consideration of the issues and solutions discussed below can enhance participation and promote a more sustainable, stable biomass supply. And a stable, long-term biomass supply, at a low cost, is the single most important end-user objective. The more secure the biomass production agreements, the more assured the end-users and their financiers are that the processing plant will be able to operate at a profitable rate and duration. From an external information perspective, a transparent, vertically coordinated system allows for the end-user to offer contracts to a large number of producers. Overly restrictive confidentiality clauses, however, may foreclose the ability of producers to make this decision in consultation with community-based peers and role models. Although end-users may have legitimate business reasons to prohibit disclosure of some contract terms, care should be taken to balance those needs with the underlying consideration that the beliefs and values of producers and their rural communities are important factors in the decision-making process. Toward this end, conversion facilities targeting “community leaders” and more innovative farmers can take advantage of the reputation of traditional first movers in the community to encourage other participation. It is important to note that the current trialability of most energy crops is often inherently poor, adding to the information uncertainty dynamics of contract negotiation. Offering a preliminary, short-term contract with smaller quantity requirements, while providing equal access to quality information regarding research trials and production practices, will increase trialability and reduce information uncertainty.
As further incentive to engage producers in a step toward large-scale bio-energy crop production, these initial trial contracts could include, subject to performance measures, guaranteed renewability and quantity expansion terms. In sum, many of the risk-minimizing approaches to information asymmetry can complement non-economic goals and social interaction factors to make producers more comfortable in the decision to enter into a biomass supply contract.

The principles from sociology are simple, but powerful. The stronger the relationship between the two parties, and the more value a party perceives in a favorable reputation, the less a party will be willing to hold up a contracting partner or otherwise act opportunistically. Acting opportunistically, especially in relatively tight-knit rural communities, damages a Principal’s reputation and may hinder the ability to contract with other potential Agents. In general, biomass supply contracts should attempt to be cooperative rather than secretive, and account for the interaction and input of community engagement in both negotiation and contract performance.

As discussed above, contracts can minimize both exogenous and endogenous risks for both parties. Transferring risk to the other party, however, usually results in a risk-transfer premium, while attempting to minimize total risk through complete contract design is difficult to achieve and incurs its own set of costs. Accordingly, assigning price risk between the parties is one of the most important provisions in biomass production contracts. Several common pricing provisions have been considered in the literature, the simplest of which offers a set price per unit of biomass throughout the duration of the contract. While this assignment of price risk eliminates the producer’s exposure to all down-side price risk, it also eliminates the potential for higher gains, should the value of biomass or crop substitutes increase. An acreage contract that compensates the producer only by acres of production has similar price risk consequences, while also introducing yield risk. Cost-plus pricing eliminates all producer down-side price risk by setting a fixed profit margin, and also addresses input price risk.
Similarly, escalators based on input costs are another technique to minimize producer price risk and may be especially important in perennial cropping systems in which producers are locked into a crop choice for extended periods. On the other hand, indexed pricing provisions may grow in popularity, where the price of the biomass is tied to commodity prices or other benchmarks that fluctuate over time. Different producers, however, may prefer different pricing provisions, based on their individual risk tolerances and marketing skills. Producers with low risk tolerance will likely prefer fixed pricing schemes, or guaranteed minimum revenue provisions. Producers with high risk tolerances and marketing ability may prefer indexed pricing arrangements to allow opportunities for windfall profits. Opportunity cost pricing, in which the contract ties the price of biomass to the substitute ventures of the producer, provides yet another option. Information asymmetry in the producer’s favor regarding pricing, however, allows an extraction of information rents from the end-user in the form of higher compensation levels. It is in this context that all the adverse selection tools become relevant: rationing, signaling, screening, and auctions. A rationing strategy of a fixed price per ton excludes producers that cannot turn a profit at the pre-determined level, and allows more efficient producers to gain information rents. This strategy, however, limits supply by excluding potential higher cost producers—a potentially costly strategy when a stable, low-cost supply is the most important end-user objective. Screening strategies to decrease information rents may increase supplies slightly, but developing optimal contracts to satisfy the incentive compatibility and participation constraints of all producer types is difficult and requires extensive information.
What seems more feasible is for end-users to offer multiple compensation provisions to enable producers to choose based on their risk tolerances. While this method does not address producers’ opportunity cost information, it is a simple way to address risk tolerance information, and avoids premiums for risk-averse producer acceptance of high-risk compensation provisions. A more complete analysis and discussion of screening to determine appropriate pricing provisions and contracts is beyond the scope of this paper, but merits further research. Signaling strategies may benefit both end-users and producers. End-users can establish eligibility requirements and collect observable information on local producers, thereby facilitating discriminatory pricing based on producer characteristics. For example, producers who are closer to the end-user, have a large amount of marginal land, or already possess biomass-compatible equipment are presumed to have lower opportunity costs, and may accept lower prices. Producers without these characteristics are presumed to have higher opportunity costs, and thus warrant higher compensation. In negotiating for compensation, these high opportunity cost producers can signal characteristics that are difficult to fake to gain higher compensation relative to others. Finally, creative end-users may choose to set prices by reverse auctioning.

This method may be feasible only after end-users have secured sufficient interest from producers to ensure competitive pricing, which in turn may require a more developed industry. In this method, the end-user would auction off standard allotments of “biomass production rights.” To illustrate: the end-user would determine the amount of biomass needed to keep the plant at full capacity for a year, say 1 million tons. The end-user would then break this total capacity into standard contracts—perhaps 5,000 contracts of 200 tons. The end-user would then begin reverse auctioning the production rights, starting with a high bid and quoting lower prices until a single producer is left willing to produce at that price. That producer can then state how many set contracts of production he is willing to produce at that price. The auction continues until all 5,000 contracts are purchased by producers. Within the contracts, producers would prefer the ability to transfer or assign production rights. This allows for producers to transfer the production rights to subsequent lower-cost producers over time, thus making the production rights a fungible asset, similar in form to a commodity. Generally, producers might prefer to transfer all yield risk to the end-user through provisions, such as the acreage contract. This is consistent with the Risk-Minimizing perspective implication to loosen incentives to decrease producer risk. The moral hazard that this creates can be addressed through management strategies, such as monitoring. When end-users are unwilling to accept all yield risk, or are unable to adequately deal with moral hazard through monitoring and increased control, yield incentive contracts may be necessary. In fact, where the specific risks that affect yield are adequately addressed, incentive contracts may be equally acceptable to producers.
In these contracts producer incentives are based on performance relative to other similar producers, rather than absolute measures of performance that are subject to common risks that affect all producers equally. By creating relative performance incentives, end-users can address moral hazard problems without shifting the incidence of common risk to producers. Contracts can further reduce common risk by grouping producers according to characteristics, such as by geography for weather risk and planting dates for other production risk. This reduction of common risk, however, may not eliminate idiosyncratic risks of producers, such as disease outbreaks, equipment failures, etc., and end-users retain some yield risk as they are not guaranteed a fixed amount of biomass for the conversion facility. Contracts should also address the consequences of a production surplus. Under an incentive contract, producers would prefer no maximum delivery amount. End-users, however, may desire a delivery ceiling to limit end-user waste when biomass production outstrips conversion facility capacity. Due to the extreme asset specificity of the surplus biomass in a nascent market, the end-user may retain all bargaining power for spot market purchases of surplus production.
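A relative performance incentive of this kind can be sketched as a base payment plus a bonus on the gap between a producer's yield and the peer-group mean; because the mean absorbs risks common to the group, a uniform shock such as regional drought leaves payments unchanged. All payment figures and yields below are hypothetical.

```python
def relative_incentive(yields, base_payment, bonus_rate):
    """Pay each producer a base amount plus a bonus proportional to
    how far their yield exceeds the peer-group mean. The group mean
    absorbs risks common to all producers (e.g., regional weather),
    so only idiosyncratic performance moves the bonus."""
    mean_yield = sum(yields.values()) / len(yields)
    return {name: base_payment + bonus_rate * (y - mean_yield)
            for name, y in yields.items()}

# A drought that cuts every producer's yield by the same amount
# leaves relative payments unchanged.
normal = {"P1": 10.0, "P2": 12.0, "P3": 8.0}        # tons/acre
drought = {k: v - 4.0 for k, v in normal.items()}
pay_normal = relative_incentive(normal, 500.0, 50.0)
pay_drought = relative_incentive(drought, 500.0, 50.0)
```

Under both scenarios the above-average producer earns the same bonus, which is the sense in which relative incentives address moral hazard without shifting common risk onto producers.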


As the industry evolves, a hybrid structure is likely to emerge, in which end-users closely cooperate with producers through long-term contracting, rather than as direct owners or operators of biomass farms. We term this a “vertically coordinated” industry model. A vertically coordinated model presents several benefits over a vertically integrated system used in other industries dependent on vast quantities of raw materials, such as steel or petroleum products. For example, a vertically coordinated system does not disturb traditional agricultural practices or rural social structures, as would occur if land and resource control were transferred to large energy companies. To the contrary, a vertically coordinated system that employs a variety of production contracts comports with recent trends in other agricultural sectors. This model also permits a greater number of producers to participate by increasing contracting opportunities, and allows greater management flexibility for producers. Vertical coordination also facilitates biomass production on more marginal lands, which increases economic feasibility in areas with relatively high farmland values such as the Midwest. Finally, a vertically coordinated model is compatible with existing cooperative business structures, thereby easing the long-term assimilation of producer cooperatives into the biomass supply chain. Myriad contract theories can inform the transition to a vertically coordinated supply chain model, ranging from the social compatibility between actors to risk- and cost-minimization behaviors.

This article examines for the first time in scholarship the interactions and differences between these various theories in the context of building effective contractual relationships to facilitate the novel, emerging bio-economy. We first explore the influence of producers’ social networks and trialability on contract design. We then turn to the importance of risk management tools already available in the traditional agricultural commodity space to combat the uncertainty that can plague achievement of complete contracts, and highlight the importance of the parties’ learning and experience as a risk management tool. Risk-sharing affects costs, and thus risk management theories overlap with the large body of economics literature on the role of cost in contract design. We thus incorporate economists’ identification of adverse selection problems that stem from information asymmetry and moral hazards into potential contract-based solutions, such as rationing, screening, signaling, and auctioning, as well as measurement and monitoring strategies. But these theories assume that parties are able and willing to write complete contracts, or contracts that specify each party’s obligations for possible contingencies. The section concludes by explaining why this is not always the case.

While variations of the risk- and cost-minimizing perspectives are traditionally recognized in contract theory literature, scholars rarely apply sociological perspectives directly to contract theory. Scholarship should not underestimate, however, the influence of rural community norms and the learning styles of farmers who potentially will produce biomass. The legal profession should therefore explore the ability of contracts to ameliorate the range of societal pressures that inhibit contract formation and execution.

Sociological research has identified several factors that determine farmers’ willingness to adopt new technologies, as well as techniques to encourage innovation adoption. This framework draws largely from the work of Professor Pannell, which summarizes decades of innovation adoption research through an interdisciplinary perspective. According to Pannell, technology adoption research holds that producers’ willingness to adopt depends on their “subjective perceptions or expectations rather than objective truth” that the technology will help them to better achieve their goals. Pannell further divides producer perceptions into three sets of issues: characteristics of producers within their social environment; technology attributes; and the process of learning and experience. The more complex and serious the consequences of the decision, the more producers seek information and social interaction. Producers will look to those they perceive as trustworthy, credible, and possessing expertise, such as other farmers, researchers, and university extension agents. Farmers process information according to their numerous and varied individual goals, as well as their familial and social network. We address in subsequent sections the purely economic goals of wealth and financial security. Non-economic goals, however, greatly impact technology adoption. Pannell lists several categories of non-economic factors, such as environmental protection and enhancement; social approval and acceptance; personal integrity and ethical standards; and balance of work and lifestyle. As farmers increasingly rely on social networks for information, technology adoption will more likely impact these variables. As the adoption process progresses, “social commitment and support will help maintain confidence in the uncertain stages of field testing and early adoption.

Peer expectations of continued commitment or personal support and encouragement will reinforce commitment and provide a buffer against setbacks.” In sum, these non-economic social constructs can increase the likelihood of contract formation and performance. And, because the process of technology adoption is dependent on the producers’ social environment, these social considerations should be taken into account in contract design. Specific factors that aid in technology adoption in the rural context include: relative strength of social networks and local organization; proximity to other adopters and sources of information; history of respectful relationships between adopters and innovation advocates; education; and promotion and marketing programs by the government and the private sector. A national-level biomass production trade organization, along with local chapters, could increase social networking opportunities and identify potential peers for farmers seeking and processing information on conversion to biomass production. For example, the Illinois Biomass Working Group provides a collaborative network and educational opportunities for farmers considering biomass production in Central Illinois, while the Council on Sustainable Biomass Production, a private standards development initiative, links farmers with industry experts to explore sustainable production methods for biomass. As rural communities gain greater access to and familiarity with web-based sources of information and social networks, the importance of geographic proximity may decline in favor of general ease of information access—with Facebook and email replacing the coffee shop as the primary location for community information sharing. Social networks, while significant, are not determinative, and, as may be expected, specific characteristics of the actual innovation also heavily influence the adoption of technology.
Relative advantage, defined as “the degree to which an innovation is perceived as being better than the idea [or practice] it supersedes,” is one key characteristic. An innovation’s cost, risk, and profitability relative to current practices are major contributors to its relative advantage. But in addition to economic factors, the literature identifies several non-economic attributes with particular relevance to technology adoption in the sociological context. These include non-economic adjustment costs; compatibility with a landholder’s existing set of technologies, practices, and resources; government policies affecting the innovation, such as mandates or incentives to adopt or otherwise alter practices; compatibility of a practice with existing beliefs, values, and family lifestyle; self-image and brand loyalty; and the perceived environmental credibility of the practice. How much these factors influence technology adoption will again depend on the goals of the producer and the social environment discussed previously. In this respect, the practical application of the concept of relative advantage is rather elementary: the more the contracting parties can align the innovation adoption process with the non-economic goals of the producer, the greater the innovation’s relative advantage. Where the goals cannot be aligned, additional incentives may be required as compensation. As a step toward aligning these goals, recent efforts to develop sustainability certification schemes for biomass production seek to incorporate many of these social considerations into certification metrics, thereby creating a level playing field across biomass production markets. A second innovation characteristic—trialability—refers to how easily an innovation can be sampled in a small quantity or with low initial cost. Relative trialability includes not only the ease of establishing a trial, but also the ability to learn from the endeavor.
Risk and uncertainty are decreased through trialability in two ways: providing the producer the opportunity to gain skills in relation to the innovation, and allowing small-scale adoption to avoid risks of large-scale loss due to inexperience or failure of the innovation. Several factors improve an innovation’s trialability, including divisibility and observability, as well as trials that are indicative of long-term performance. On the other hand, innovation complexity, trials with long time-lags, high up-front capital costs, and potential hazards provide significant barriers to trialability.

As with knowledge and learning, a trial experience minimizes uncertainty and increases the probability that the potential adopter will make correct decisions regarding whether and how to accept and implement the novel technology. A corollary to trialability may be the presence of a certification regime, such as sustainability certification. The certification process may replace some aspects of trialability, as the communication mechanism between the sustainability standard certifier and the producer provides a similar opportunity to gain skills related to the innovation and embark on steps toward adoption without requiring an irrevocable commitment. The following section more thoroughly discusses the risk-minimization aspects of trialability, as well as the role that learning and experience from the sociological compatibility perspective play within the risk-minimization theory of contract design.

Risk is inherent in all farming operations, and successful producers expend considerable effort to manage negative risk exposure. Because risk is one of the largest factors hindering producer participation in the biomass industry, farmers must have adequate means to address and minimize risk prior to market entry. The main categories of producer risk traditionally include yield/production, price, institutional, human/personal, and financial. Weather and technology are the primary components of yield risk. Price risk refers to uncertainty in input and output prices, and institutional risk arises from changes in agricultural policies and regulations. Farmers also face asset risk, the chance of loss of equipment, and contracting risk, which includes the threat of opportunistic behavior by contracting parties. Financial risk includes the business risks of obtaining and financing capital. Contracting is a commonly accepted tool for mitigating and sharing risk, and is a frequent topic in economic scholarship.
Before discussing risk-sharing in the context of formal economic contract theory, however, this article explores two other risk management tools: learning and experience, and traditional agricultural risk management tools. High risk is not a new phenomenon for agricultural producers, but the difference for the producer in the biomass industry is that the traditional agricultural risk management tools are either unavailable or significantly diminished in this novel production milieu. Recall that an important principle from the sociological compatibility perspective is that producers feel comfortable and familiar using existing agricultural structures and practices. Therefore, the authors’ critique of the risk-minimization perspective begins by describing traditional agricultural management tools and their limits in the biomass context, with the goal of identifying opportunities to resurrect these traditional tools through contracting strategies.

Farmers rely on a variety of risk management tools in traditional commodity agricultural production. Commonly used options include crop insurance, commodity market strategies, diversification, financial management, leasing, and adjusting cultural practices. Unfortunately, however, all these tools have limited availability in the current biomass production environment. For example, the Federal Crop Insurance Act provides for the development of policies for dedicated energy crops, but no policies are currently available for Miscanthus or switchgrass, two promising bio-energy crops. Insurance products exist for corn grain, but current policies do not take into account the production and harvest of corn stover for bio-energy purposes. Similarly, commodity market strategies provide key risk management tools for producers to manage price risk, one of the larger risk exposures in agricultural production.
Farmers can use existing commodity and futures markets to practice certain risk management strategies, such as hedging, futures, and options contracts, and forward pricing. As commodity markets do not exist for Miscanthus, switchgrass, or corn stover, biomass producers cannot take advantage of this important price risk management tool. As a second strategy, producers often diversify operations to manage production and price risk. Two types of diversification are common: enterprise diversification and geographic diversification. Enterprise diversification involves participating in more than one activity, such as growing multiple types of crops or using multiple cultural practices. Geographic diversification refers to spreading crop production over several non-contiguous locations to reduce catastrophic weather risk. Biomass contracts, as discussed below, however, may limit producer enterprise options for a variety of reasons. More importantly, the potentially high cost of transporting large quantities of biomass to the bio-energy conversion facility may further limit geographic diversification options. Leasing decreases financial risk by allowing producers to gain control over capital inputs without long-term payment commitments, and by increasing asset flexibility. However, this relatively simple strategy may have limited application in the biomass industry, as specialized equipment may be unavailable to lease or custom hire due to the infancy of the industry.
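The risk-spreading logic behind enterprise and geographic diversification can be made concrete with a little portfolio arithmetic. The sketch below is illustrative only: the function name, weights, and numbers are my own assumptions, not figures from the source. It applies the standard two-asset portfolio variance formula to show that when two crop revenues are imperfectly correlated, a mix of the two is less volatile than either crop alone.

```python
import math

# Illustrative sketch (assumed numbers, not from the source): the standard
# deviation of revenue for a two-crop mix, via the two-asset portfolio
# variance formula. Lower correlation (rho) means more risk reduction
# from diversification.
def portfolio_std(sd_a: float, sd_b: float, rho: float, w: float = 0.5) -> float:
    """Revenue std. dev. for weight w on crop A and (1 - w) on crop B."""
    var = (w * sd_a) ** 2 + ((1 - w) * sd_b) ** 2 \
        + 2 * w * (1 - w) * rho * sd_a * sd_b
    return math.sqrt(var)

# Two equally risky crops (sd = 30) with modest correlation:
print(round(portfolio_std(30, 30, 0.2), 1))   # 23.2 -- vs. 30.0 for one crop
print(round(portfolio_std(30, 30, 1.0), 1))   # 30.0 -- no benefit if rho = 1
```

Geographic diversification works on the same principle: spreading production over non-contiguous locations lowers the weather correlation (rho) between plots, which is precisely the term the formula rewards.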

Adoption is often a social process as producers interact with others to obtain and evaluate information

Traceability to production source is strongly emphasized by EU food hygiene rules, articulated at the member state level by UK public health agencies. British produce packaging typically carries clearly visible supplier codes that link each product to its original field and farmer, sometimes including the name of the farm or even the farm owner on the back of the package. American packaging does not always contain supplier codes or any specific information on source beyond regional descriptors of geographic origin. Common leafy greens supply chain practices, such as the aggregation of many producers’ fresh harvests during washing and processing, mean that in some cases contaminated products cannot be tracked, and the feasibility of tracing backward from produce outbreaks differs enormously even from one leafy greens product to another. In another supply chain difference potentially related to traceability, UK fresh produce consumed within national borders does not typically travel as far geographically between harvest and final sale as CA agricultural products, many of which reach their final retail points of sale in the far corners of the United States. This can further complicate tracing efforts in the event of food safety crises, and long storage intervals may contribute to a higher potential for foodborne illnesses to develop and persist in shipped goods. Scholars in both the US and UK have used the transdisciplinary concept of ‘food miles’ as a way to think about the food system’s capacity to deliver desired social goods, including food safety and environmental protection.

Invisible structural processes such as long supply chains that distance consumers from the sources of food production also disempower them from considering potential sources of risk. Although measuring and representing food miles meaningfully presents many challenges, food miles as a concept can make visible “missing objects” such as the length of fresh produce supply chains, allowing them to be considered as contributors to observed outcomes and as part of social interventions proposed to solve market failures.

Consumers in the UK and the US also each have a different culturally constructed awareness of food safety risks, owing to different histories of food safety failures and regulatory responses in each location. Scholars have noted for many years that US consumers and regulators, in comparison with Europeans, seem to be more worried and anxious about a variety of food risks, including inadequate nutrition, foodborne pathogens, harmful macro-nutrient profiles, carcinogens, and spoilage. European consumers and regulators have, by contrast, tended to be more cautious in the face of technological threats such as genetically modified organisms. These and other background framings of safety and risk in turn shape how regulators, retailers, and farmers think about what should be controlled, and how consumers think about the risks they may face. One difference that emerged during my interviews was the differential weight placed by UK regulators and retailers on farm-to-fork supply chain management. Yet another place where the impact of the BSE crisis on the development of UK food safety responses can be felt, the UK’s longstanding official focus on farm-to-fork supply chain management at both the public and private level creates a system in which the expectation is that field-level practices are receiving special regulatory oversight and must be kept under control.

In the US, the focus on farm to fork food safety regulation for fresh produce has come somewhat later, and is fully articulated only by the recent FSMA language. Other differences in how food safety is thought about and framed by various supply chain actors in the UK and in the US most likely stem from the influence of EU consumer protection priorities on UK food safety response. Pesticide use and regulation are very high on the radar for European Union member countries, and farmers and regulators are very aware of the many complex limitations that the EU-coordinated pesticide regulatory system places on them. In my interviews, UK standards and UK farmers I spoke with gave equal or greater priority to chemical safety in comparison with foodborne pathogen safety. During my UK field research, the phrase “food safety risks in fresh vegetables” was most often interpreted by both laypeople and members of industry or government as being a question of pesticide residues, until I offered additional specifics and turned the conversation to pathogenic foodborne illness. By contrast, California farmers, standards, and regulators seemed readily accustomed to separating pathogens from chemical residues, and considered pathogens to be higher priority. In CA, the phrase “food safety risk in fresh vegetables” was most commonly interpreted as having to do with foodborne pathogens that make people sick, rather than as a question of chemical residues or other hazards. Both concerns exist and are subjects of regulation in both locations, but the differences in discourse between CA and UK stakeholders in the lettuce industry show that different locally constructed meanings are contributing to different regional framings of food safety. In another difference over framings of what constitutes appropriate priorities for food safety, my interviews revealed a curious transatlantic bifurcation over the issue of what constitutes clean produce, or dirty produce. 
In CA, pre-washed lettuce products are seen as cleaner and safer, and washing of leafy greens is a standard step in processing. In the UK, washing of leafy greens is also a common processing step, but a noticeable percentage of the UK consumer market views pre-washed lettuce as potentially more contaminated and less fresh than lettuce that has not been cleaned prior to sale. California lettuce farmers expressed to me that washing in anti-bacterially sanitized water was expected as a minimum level of safety assurance for the US market.

The microbiological safety of wash water is typically controlled through disinfectants, added according to prescriptive safety rules that effectively bring microbial risk in wash water below the threshold permitted even in municipal drinking water. One California food safety manager at a leafy greens farm noted to me that this requirement for the disinfection of leafy greens made little sense to her, because the toxic disinfectant has to then be rinsed off with pure water that is technically dirtier from a microbiological standpoint than the initial rinse. By contrast, in my UK interviews, farmers noted that some customers increasingly view unwashed produce as having a higher guarantee of food safety than produce that has been washed and bagged before sale. UK farmers aware of this alternate view of safety made comments like “if we washed it, they’d wonder what happened to it that we wanted to hide” and “unwashed rocket goes for twice the price of washed now because consumers think it’s safer if no one has interfered with it.” Although exploring and testing each of these attitudes in depth was beyond the scope of this dissertation, it was clear during my field work that differences in societal expectations of food safety could provide important context for my findings. The language in which food safety is couched and the priorities seen as important in each location have bearing on the overall discursive climate surrounding food safety, potentially shaping or constraining the actions of those who seek to solve produce safety problems.

Similarly, and not at all unexpectedly, I also noticed a systematic difference between the two nations in how my interviewees described the growing environment in our conversations. Differences in word choice painted a picture of the environment as helpful, or as dangerous and risky. Not every person I spoke with in each location exhibited the same linguistic choices, but a general trend did develop.
On the whole, regulators in each location seemed to share similar framings in which the value of the environment was central, while farmers I spoke with in CA showed a much greater focus on the environment as a source of risk. My expectation is that this difference stems from what my interviewees reported as the preferential framing of food safety risks as problems generated by wildlife, as articulated in CA private food safety standards and retail buyer communications.

CA regulators took a fairly balanced approach to the topic, starting from a position of protecting economic concerns and moving between environmental considerations and public health goals when approaching food safety as a topic. Interviewees in state-level agencies spoke to me many times of the tide of anti-environment sentiment they saw among farmers of leafy greens and other fresh produce in the wake of outbreaks and private requirements to protect crops from animal incursion. One individual who had been involved in transmitting food safety messaging at the state level, and who had some experience with assisting farmers to incorporate specific pro-environment land management practices into their operations, told me that he was seeing “a reversal of several decades of conservation work” because of new food safety fears. In his view, it had been a long slog getting farmers to abandon the worst herbicides and pesticides and become comfortable with allowing sustainability practices that helped them comply with, for example, official water filtration requirements. Food safety pressure from buyers had brought back those anxieties for farmers, bringing with them the desire to fight and control the wild environment. Both regulators and farmers in CA reported to me that the farming community was worried about nature threatening their operations. Another CA interviewee from the policy side put it succinctly when he summed up the change in discourse by saying “nowadays, you don’t want to be encouraging [presence of wildlife], you want to be discouraging.” UK regulators evinced a similar focus on the economics of agriculture and food production, followed by acknowledgement of public health alongside sustainability concerns. But instead of signaling that they had experience with farmers who were preoccupied with food safety to the detriment of the environment, they unanimously agreed that farmers in the UK would not think of safety and environmental matters as separate.
Rather, they indicated that farmers thought of all of it in hybrid terms, considering the environment in the same breath as economic or public health concerns. Conversations with regulators invariably shifted to discussions of the EU Common Agricultural Policy, which permits farmers to receive financial benefits for environmental protection practices and sustainable land stewardship efforts. As a result of this official focus and, for some, a solid revenue stream for portions of their land not exploited for intensive agricultural production, a broad-based environmental awareness was quicker on the lips for the UK farming community. Farmers I spoke with echoed this different starting point, often seeming surprised that I chose to separate my questions into two sections, one for food safety and one for environmental conservation. Many of them specifically indicated to me that “those are really the same, no?” and explained in various terms that food safety was, in their minds, a natural result of a healthy farming environment and local biodiversity. One interviewee from a trade association who had long experience with UK farmers and some exposure to views from American farming groups via conferences in the US said to me pithily, “over here we talk about how to maintain biodiversity and beneficial insects, whereas my American colleagues are much more of the opinion that they should kill everything.” A deep accounting of contemporary environmental thought as seen in leafy greens production was not my goal with this study, but this difference did emerge as commentary on a deeper divergence in environmental framings among farmers and supply chain actors that could contribute to broad differences in food safety perspectives surrounding leafy greens.

Owing to different histories of agricultural development, UK interviewees I spoke with placed a different value and importance on the visual appearance of farms than did my CA interviewees.
The function of farms as visually pleasing parts of the pastoral landscape in the UK seems to be an important background framing for food safety management decisions. Several UK farmers I spoke with indicated that some things they might otherwise do to manage food safety on their farms would not be feasible in their current locations simply because of the impacts they would have on the farm’s outward appearance for passersby. These farmers perceived conflicts between what risk management might indicate as the safest route, and what the broader local community would visually allow. Prohibited practices in this vein included fencing of field margins, removal of trees, clearing of bare ground buffer strips between fields and unmanaged lands or field divisions, and building of extensive glasshouses and poly tunnels to protect sensitive crops from environmental uncertainty and animal incursion.

The results of these comparisons are presented graphically in the sections that follow

The LGMA standard was not created through a government rule-making process or by state regulators, but it requires that compliance with the standard be verified by state and federal audits and inspections under the terms of a federal marketing agreement. In the United Kingdom, several pre-regulatory assessments are currently included by public regulators as part of risk-based regulatory frameworks for food safety. These include the Red Tractor Assured Produce Scheme and the BRC Global Food Standard. These pre-regulatory assessments are the products of an industry-level trade union and an industry association, respectively, but both are incorporated into the public regulatory process, reflecting the United Kingdom’s pragmatic risk-based regulatory approach and a desire on the part of UK retailers to demonstrate due diligence as required under the 1990 Food Safety Act.

Retailers of fresh leafy greens in both the United States and United Kingdom have responded to outbreaks and public concern with an assortment of non-state regulatory methods to protect the safety of retail foods while reassuring consumers and protecting brand reputations.

Examples include a variety of retailer-branded food safety standards targeting field-level risk reduction through specific agricultural practices related to hygiene and the use of safe farm inputs, including Tesco’s NURTURE standard and the parallel US and European standards managed by the private group GlobalG.A.P. These standards are typically similar to state regulatory controls for food safety, but attempt to exceed the terms of local law by adding additional levels of specificity and broadening the scope of food safety regulation to include additional values such as environmental improvement and labor protections.

In the United States, 99% of lettuce consumed is grown domestically, and California produces roughly 75% of the nation’s supply of leafy greens. California’s agricultural industry is one of the most valuable in the world, at US $8.85 billion in 2015, of which $1.96 billion is attributed to lettuce and other leafy greens. Leafy greens are typically grown along California’s central coast, where mild weather permits their cultivation during most of the year. During winter months, the neighboring state of Arizona produces much of the nation’s supply, often through the same management companies and contractual grower networks that source from California during the rest of the year. When landholdings are held by one corporation in both areas, upper-level staff and harvest crews move to Arizona for the winter articulation of the supply chain, in order to continue producing without weather-related interruptions. As with much of California agriculture, California’s leafy greens farmers for the most part tend to operate either relatively small farms or very large farms, with fewer farms operating at middle scales.

This is in part due to the legacy of California’s historical agricultural development, in which agriculture was from its inception focused on large-scale for-profit production. Many smaller farms exist today in part because of the alternative agriculture movement that has been active in California since the 1970s, but the central coast area is known as “the nation’s salad bowl” because of the very large corporately-owned farms that dominate the agricultural market of the state and operate on an industrial scale using hired labor. Lettuces are most often harvested by machine, and harvested products typically go from the farm to a processing center where they are washed, together with other similar products, and then enter the retail supply chain or the wholesale market for shipment out of state. The UK lettuce industry is considerably smaller than the California lettuce industry, affecting many aspects of the supply chain that will be considered in the remainder of this chapter and Chapter 4. Lettuce cultivation in the United Kingdom also works slightly differently, and some of these differences will be explored in greater detail in my Industry Level results later in this chapter and explained in Figures 3.4 and 3.5. The UK fresh vegetable industry was estimated at a value of $2.12 billion in 2016, of which $203 million comes from the cultivation of lettuces and other leafy greens. In the UK, the climate allows leafy greens to be produced across much of the area of England and Scotland throughout the warmer months of the year, while wintertime leafy greens supply most often comes from Spain, Italy and the Netherlands.

Compared to California, farm sizes are distributed more equally between size categories, with an emphasis on smaller farms. Some UK leafy greens farmers have responded to the economic pressures of market competition by banding together into grower associations, in which family-owned and -operated farms continue to produce goods but do so in mutual association in order to benefit from centralized marketing efforts and collective economies of scale.

Data for foodborne illness always contain some degree of uncertainty due to the post-hoc nature of outbreak data collection. Many incidents of intestinal disease go unreported. For those that are reported, it is not always possible for monitoring authorities to be entirely certain which pathogen was responsible, whether it came from food or another source, and which food vehicle among many was ultimately at fault. Reports are sometimes missing data, and personal information from infected individuals can contain omissions, mistakes, or recall bias. When food vectors can be identified and a pathogen confirmed, there is often still some degree of uncertainty over whether the pathogen originated on the food itself or was introduced from elsewhere via contact with other people or surfaces during handling and preparation. In some cases, more than one pathogen is implicated, or more than one food type is the likely source. Estimates of overall disease burden contain additional uncertainty because they must be extrapolated from observed reports of illness by adjusting for expected reporting rates. While individual point estimates contain these and other known and expected sources of uncertainty, it is possible to examine overall aggregate trends. In 2014, the UK’s Food Standards Agency released the results of an in-depth national study of the domestic burden of infectious intestinal disease.
From roughly 500,000 cases of food-related intestinal disease observed over a one-year period from mid-2011 to 2012, the researchers identified the share of disease burden belonging to 19 pathogens, across 12 food commodity vectors. This FSA-funded study represents the best effort so far to reduce uncertainties, using information from a meta-analysis of other disease source identification studies from nations broadly comparable to the UK in terms of general population health and public health monitoring systems, mathematical modelling, and expert consultation. According to these health statistics, the food class responsible for the biggest proportion of foodborne illness in the UK is poultry, at over 50% of illnesses. Red meats and produce are next most associated with illness, at much lower rates. The pathogens responsible for most hospitalizations and visits to physicians are Campylobacter, Salmonella, and E. coli. Previous large-scale analysis of public health data and research had placed the UK incidence of foodborne intestinal disease at nearly 2.5 million cases annually in 1992 and just over 1 million cases annually in 2000, showing an overall trend of improvement in public health outcomes which this latest study continues. In the US, it is estimated that risks from produce have been rising over the last several decades, increasing from 1% during the 1970s to as much as 12% during the 1990s. A large-scale study of foodborne pathogenic illness from 1998 to 2008 found that 46% of foodborne illness during this time period was attributable to produce, and especially to leafy vegetables, followed at 22% by red meat and poultry. These increases most likely represent a combination of factors including, but perhaps not limited to, increased produce consumption, more prevalent aggregation and processing steps, and changes in pathogen presence around livestock operations that may contaminate nearby produce.
The most recent national US public health records maintained by the Centers for Disease Control and Prevention still implicate plant foods in 9.4 million reported cases of intestinal illness yearly, from which extrapolative estimates place the total actual number of foodborne illnesses at 47.8 million cases per year. Norovirus is credited with causing the largest number of illnesses and deaths in the US each year, while Salmonella, Campylobacter, Toxoplasma and Listeria are responsible for the largest numbers of hospitalizations and deaths. The United States Food and Drug Administration undertook an analysis of CDC outbreak data from 1996 to 2010, yielding 131 outbreaks from fresh produce in which it appeared likely that contamination had specifically occurred during the growing, harvesting, manufacturing, processing, packing, holding, and transportation of foods.
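The jump from 9.4 million reported cases to an estimate near 48 million reflects the reporting-rate adjustment described earlier: surveillance captures only a fraction of true illnesses, so totals are extrapolated upward. The sketch below illustrates that single step; the function and the 20% reporting rate are hypothetical assumptions chosen for illustration, not the CDC's actual multipliers.

```python
# Hypothetical illustration of an under-reporting adjustment: divide the
# reported case count by an assumed reporting rate to estimate the total.
def estimate_total_cases(reported_cases: float, reporting_rate: float) -> float:
    """Scale a reported case count up by the assumed reporting rate."""
    if not 0.0 < reporting_rate <= 1.0:
        raise ValueError("reporting_rate must be in (0, 1]")
    return reported_cases / reporting_rate

# With an assumed reporting rate of ~20%, 9.4 million reported cases
# would imply roughly 47 million actual cases:
print(round(estimate_total_cases(9.4e6, 0.2) / 1e6, 1))   # 47.0
```

Real burden estimates layer many such adjustments (per pathogen, per outcome severity), each carrying its own uncertainty, which is why the aggregate figures quoted here come with wide error bounds.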

This research drove the establishment in 2011 of new national legislation instituting stronger preventive controls for food safety during primary production. Comparison suggests that population-adjusted foodborne illness rates as of 2000 were as much as 11 times higher in the US compared to the UK, although data interpretation uncertainties were noted from the difficulty of source identification, and national differences in reporting rates and disease severity may reduce the gap in overall foodborne illness burden. Notably, the food sources associated with illness in the UK are meats and poultry, while in the US plant foods are the highest source of risk, especially leafy vegetables.

Research has offered somewhat mixed results so far as to whether standards instituted by private commercial entities can in fact create real change in the behaviors of their agricultural product suppliers. A study of food safety standards in the UK, Canada, and Australia concluded that the initial reasons for adoption of food safety standards—whether crisis-driven in response to outbreaks of pathogenic disease as in the UK, or prevention-driven as tools for avoiding trade disputes as in Canada and Australia—significantly shaped the type of private and/or public solutions sought. A recent study of sustainable practices among produce and flower suppliers in South Africa concluded that one specific company-led eco-standard successfully created measurable improvements in environmental practice among suppliers. More such evaluations are needed, tracing the impacts of standards at the state and private level, and their potential to drive change in social and ecological outcomes. Additionally, the challenge presented by overlapping and sometimes conflicting sets of requirements at the state and private level has been noted by many scholars, yet efforts to harmonize this landscape of standards have not made significant progress.
The presence of multiple private standards in one marketplace may act to fragment regulation of markets and further undermine harmonization efforts by creating additional parallel sets of prescriptive expectations. In the early 2000s, the Global Food Safety Initiative benchmarking scheme sought to advance harmonization efforts by determining equivalency between standards and allowing existing diverse standards to be accepted as common currency in global marketplaces. Few studies have yet addressed the cumulative impact on farmers of following many overlapping food safety requirements from both the state and non-state level, or the impact of benchmarking efforts. Research is needed to explore relationships between private standards, the perceived pressures and constraints they generate for farmers, and resulting changes in land management decisions. I sought to illuminate this area of research and international environmental governance scholarship through my comparative case. At the conclusion of the previous chapter, I explored the network of state, non-state, and hybrid food safety controls currently active for fresh produce grown in California and the United Kingdom. Table 2.2 provided a complete overview of food safety standards active in US and UK lettuce production. For my comparative study I will examine a subset of these standards, as listed in Table 3.2 below. Some standards imposed by retail or food service buyers of leafy greens products take the form of internal risk assessments that are used by the retailer or food service outlet as a barrier to entry for riskier suppliers, but which do not constitute full standalone produce production standards that seek to shape farmers’ practices. Such standards often require that prospective suppliers obtain produce safety certification from one or more external private standards operating at the national or international level before seeking to become a supplier.
In order to contribute to scholarly efforts to analyze private standards separately from these sorts of risk assessments, instruments such as these were not considered in my analysis. My analysis rests instead on an examination of the full standalone standards most often referenced by these risk assessment tools as pre-requisites for initiating a supplier relationship.

My analysis will differ from the majority of comparative politics literature in one additional way.

In Chapter Two, I will explain the basis of my US-UK comparison, presenting a detailed account of the existing regulatory framework governing leafy greens production in the United States and United Kingdom. I will ground my case study comparison in a comparative historical-political examination of food safety concerns and present the research framing that underpins my international comparison. In Chapter Three, I will present the results of my comparative research with US and UK lettuce farmers, retailers and regulators, in which I evaluate the structure of public regulatory controls, farmer opinions of food safety regulation, and the impact of public and private food safety standards on farmers’ land management decisions. In Chapter Four, I will explore underlying socio-cultural dimensions of difference within food safety discourse, including structural differences in how responsibility is assigned for food safety violations, regional differences in the discourse of food safety, divergent background framings of the growing environment, and different degrees of overlap between utilitarian agricultural use and preservationist conservation. I will end my comparative case by presenting a summary of my observations from this comparison, and recommendations for the future of transatlantic food safety governance. In response to the broad social and environmental changes brought by industrial activity in the 20th century, the industrialized world witnessed a change in public perception of human health, the environment, and the proper role of government in ensuring public safety. Beginning for the most part in the United States and spreading to Europe and the Global South, these changes have been described by legal scholars as a series of gradual shifts in the functioning and accountability of government over time, contributing to the emergence of the modern regulatory state.

The concept of the rise of the modern regulatory state describes the growth of administrative structures leading to increasing expression of government authority through formal and informal rule-making by governments and their institutions, accompanied by systems of monitoring and enforcement. The early founding structures and commitments of European and American nation states were transformed during the 20th century by progressive reforms and bureaucratization, war mobilization, the rights revolution of the 1960s and 1970s, deregulation efforts during the 1980s, and the rise of corporate power from the 1980s onward. The regulatory language, structures and norms of practice that first gained prominence during this period of social and environmental awakening formed the basis for modern regulation of the food system. Modern regulatory states achieve policy goals through government actions that rely primarily on top-down processes of rule-making and enforcement. Regulatory rule-making involves defining goals to be achieved, such as the reduction of harmful pollutants in air or water, and defining the criteria and standards by which progress toward goals will be measured. Compliance with regulation can then be assessed through the designation of enforcement mechanisms to ensure uptake of regulatory targets. Regulation of this sort can enable a central government to achieve certain outcomes, but the durable structures of precedent in rule-making may also constrain the use of executive power, limiting its usefulness as risk landscapes change over time. Additionally, some forms of regulation in a particular policy space may be more effective than others in terms of desired policy outcomes and broader attitudes toward regulation, engendering a wide variety of responses from those subject to regulation.

It is thus important to understand the relationships between regulatory approaches and the downstream economic and social effects that regulation creates within supply chains. The nature of regulatory requirements and how they are communicated can vary greatly from one government to another. Examining these differences allows scholars of regulation to discern differences in “regulatory style” which can be analyzed as indicators of national priorities, and as ways to explain the actions taken by various actors within both the public and private sectors. Ultimately the regulatory styles used to achieve policy outcomes will have impacts on the general norms of practice within industries subject to regulation, and the individual experiences of those complying with regulation. Regulatory style will also have consequences for the distribution of power among industry players, the willingness of stakeholder parties to continue to submit to regulation, and broader societal goods such as the environmental sustainability of agricultural production within a particular governance regime. For scholars of politics and policy, using a comparative approach that considers two or more different policy environments can be a powerful tool for seeing and analyzing policy mechanisms and their outcomes. Comparisons can reveal hidden details about the cases compared, and yield lessons that may be applicable in other contexts, identifiable through description, classification, hypothesis-testing and prediction. The core benefit of comparison is its ability to reveal ways in which aspects of policy that appear inevitable when viewed within their national context are in fact socially or culturally contingent when compared against other national contexts. By revealing that policy systems are culturally embedded, comparative policy work enables us to critically analyze the features of one policy system against another.
This lens in turn allows us to benchmark policy systems against one another to show where the tools and outcomes of policy are equivalent, and where they are divergent. Where they are divergent, this style of analysis makes it possible to question the status quo. Many comparative policy scholars have chosen to compare countries, or groups of countries that share specific characteristics, and a great many comparisons have analyzed the English-speaking industrial powers as a cohesive group. For some purposes, it can also be useful to compare subnational units, for example in cases where using a national unit would introduce so much variation from one region to another that it would be impossible to see and analyze differences at smaller scales. In my analysis, I will examine the United States and United Kingdom as two nations from the industrialized, English-speaking world, comparing them on the basis of their policy similarity to identify possible sources for differences observed in policy outcomes. However, for the United States, I will center my analysis around a subnational unit: the state of California. Although this is somewhat unusual in the comparative politics literature, and may appear to be an inherently apples-and-oranges comparison, there are several reasons why I have chosen to situate my analysis on this unusual footing. First, the state of California is equivalent to a nation by measures including its population, area, and economy. Second, I hoped with this comparison to explore and benefit from past scholarly work examining areas of comparative policy difference such as use of the precautionary principle in public policy, and differences between the adversarial direct regulation exemplified by certain periods of policy development in the United States and the more cooperative models common in some European countries.
Third, and most importantly, my comparative case study will be grounded in the production of fresh leafy greens, a sensitive agricultural crop grown in very specific climatic zones. In the United States, California supplies between 70% and 75% of the domestic leafy greens supply on an annual basis, making its output clearly equivalent to a national unit. While most comparative politics research centers only on comparative policy contexts and the forms of regulation they embody, my analysis will add an element that is seldom considered.

My study will combine analysis of policy instruments at state, hybrid and non-state levels, with primary social science research conducted with farmers of leafy greens, providing real-world information on the impacts that regulation in each national context is having on individual farmers and their environmental choices. In this way, I will deliver an interdisciplinary look at comparative policy in its human dimensions. In the remainder of this chapter, I will outline my comparative case by describing several forms of regulation currently active within the food system, as background for my investigation of the environmental consequences of public and private food safety regulation in fresh leafy greens. I will begin by examining three basic types of regulation that underlie current food safety governance regimes: State regulation, Co-regulation and Non-state regulation. I will explain the actors involved in each approach, how power is shared by actors, and the strengths and compliance challenges inherent in each approach. As a foundation for my international comparative case study of private regulation as it applies to leafy greens producers, I will trace the roots of current food safety regulation in the United Kingdom and in the United States and explain the role of food safety risk management in shaping farmers’ environmental practices. Lastly, I will present and explain the components of my comparative case study, and the guiding questions it aims to answer. Regulatory efforts in the modern food system exist along a spectrum from state-led regulation, through cooperative public-private regulation, to alternative forms of control in which standard setting and enforcement rely upon non-state entities. Table 2.1 compares and categorizes the most prominent regulatory forms that have been employed in the modern food system, arranging them according to the goals, methods, and structural characteristics of each approach.
Although real-world regulatory efforts often reflect a blend of more than one style of regulation, it is nevertheless useful to examine archetypal forms to understand the multifaceted background of blended approaches and how their strategies borrow from these distinct types. Traditional state-led regulation as seen over the majority of the last century places control over rule-making, standard-setting and enforcement in the hands of public regulators and their regulatory scientists. Government regulators define the regulatory targets with which industry actors must comply, what compliance looks like, how it shall be measured, and what consequences will accrue for noncompliance. This form of regulation is referred to as direct regulation, with the most extreme forms dubbed “Command-and-Control”. Firms operating under this form of regulation are ostensibly held separate from regulators, and do not participate directly in setting standards or crafting legislation. This style of regulation is often characterized as adversarial, punitive and legalistic; fear of sanctions forms a key motivating force. Outcomes are ensured through top-down administrative control, which typically stipulates both the desired outcome and how it must be achieved. Direct regulation came to prominence in the second half of the 20th century, paralleling the rise of modern regulatory states. In the United States, direct regulation evolved out of the Progressive Era’s focus on scientific rationality and direct provision of social and economic support through a bureaucratic central government. Direct regulations such as the Clean Air Act of 1963 and the Clean Water Act of 1972 applied a centrally controlled rule-making and enforcement process to the management of environmental problems linked to the activities of the manufacturing and chemical industries.
Around the time of its establishment in the United States, this style of regulation became the dominant form of regulation across the nations of the Global North. Since the 1980s, this approach has struggled to maintain primacy in light of politically motivated deregulation efforts and increasing recognition of the international, collective nature of many modern regulatory challenges. For example, many environmental and social problems— especially those in the modern, globalized food system—follow supply chains and human migration routes rather than national borders, outstripping the capacity and jurisdictional boundaries of traditional public regulation. Additional weaknesses of direct regulation include the lengthy process of gathering data, setting standards and crafting appropriate legislation, and the high financial and administrative costs of ensuring compliance once targets are set. Firms under this form of regulation may also circumvent their exclusion from the regulatory table through legally sanctioned lobbying and revolving-door hiring, or via extra-legal activities such as concealed financial and political influence, any of which may allow them to “capture” regulators and weaken the regulatory process. Under the best of circumstances, it is difficult for state regulators to know everything that can and must be known for effective top-down management of a large and diverse array of threats, resulting in the potential for regulation of this sort to be incomplete or inadequate. Finally, putting all the regulatory eggs in one basket by concentrating power in the hands of state regulators leaves command-and-control approaches subject to political regime changes, weak institutions, bureaucratic sluggishness, regulatory capture, and budget fluctuations, any of which may hamper regulatory outcomes.

The spectral absorption coefficients of gas molecules are retrieved from the HITRAN database.

Aerosols, solar long wave radiation and scattering by clouds are not included in the CIRC calculations. The temperature, pressure, gas concentration profiles and cloud properties used in the proposed radiative model are obtained from the input files provided on the CIRC website for validation purposes. The summarized model parameters and results of surface downwelling flux for Case 6 and Case 7 are presented in Table 3.5. Scenario 1 recovers the CIRC model parameters except that six out of ten gases are included to be consistent with Ref.. Compared with Scenario 1, Scenario 2 further includes the scattering by cloud droplets. Scenario 3 adds aerosols, where the aerosol profile is adapted from Ref.. Scenario 4 adds the ∼13 W m−2 extraterrestrial long wave irradiance. Scenario 5 uses the Gamma cloud droplet size distribution presented before, while keeping the liquid water path unchanged. The results of the proposed radiative model are within 3% of the CIRC measurements for all scenarios. Since the measurements have uncertainties of 3%, the proposed model produces reliable results that are within the uncertainties of the measurements. The comparisons of different scenarios are presented in Fig. 3.9 and Fig. 3.10 for Case 6 and Case 7, respectively. The difference between S2 and S1 indicates the contribution of cloud scattering, which reduces the downwelling flux of the cloud layers and the layers below the clouds because part of the long wave radiation is scattered to outer space. The contribution of aerosols is quantified by the difference between S3 and S2, which increases the downwelling flux above the cloud layers while the surface downwelling flux is nearly unchanged. In the cloud layers and the layers below the clouds, the contribution of aerosols is diminished by the presence of clouds.

By comparing Figs. 3.10 and 3.9, the aerosol contribution is more distinct when optically thin clouds are present. The contribution of long wave irradiance from the Sun increases the downwelling flux above the cloud layers, as shown by the difference between S4 and S3. The downwelling flux remains nearly unchanged in or below the cloud layers since the clouds ‘shield’ the long wave radiation from the layers above, so that layers below the clouds ‘see’ only the clouds. The difference between S5 and S4 shows the contribution of the cloud droplet size distribution. The proposed size distribution has ∼30% lower COD when compared with the one used in CIRC, which increases the downwelling flux in and below the cloud layers. The difference is more distinct for optically thick clouds when comparing Figs. 3.9 and 3.10. The spectral differences between scenarios show up only in the atmospheric window bands. The surface downwelling fluxes for the five scenarios are presented in Table 3.5, where the differences between scenarios are smaller than 5 W m−2, indicating that the surface downwelling flux is insensitive to cloud scattering, aerosols, extraterrestrial long wave radiation and cloud droplet size distributions under cloudy skies. The primary goal of this Chapter is to develop an effective minimal model that incorporates the main physical mechanisms needed for calculation of the atmospheric downwelling long wave radiation at the ground level for widely different geographical sites. The operative word effective here means a complete model that is capable of discerning the effects of the main contributors to DLW while allowing for fast computations that can be performed by mini computers within time frames compatible with both the time scale of variations in the atmosphere and the time scales of engineering systems. All main features of the model and its implementation are described within the body of this work.
A secondary goal is to examine the effects of water vapor, carbon dioxide and aerosol content on the surface DLW at high spectral resolutions. A spectrally resolved, multi-layer radiative model is developed to calculate surface downwelling long wave irradiance under clear-sky conditions.

The wave number spectral resolution of the model is 0.01 cm−1 and the atmosphere is represented by 18 non-uniform plane-parallel layers, with the pressure of each layer determined by a constant σ coordinate system. Standard AFGL profiles for temperature and atmospheric gas concentrations have been adopted, with a correction for current surface atmospheric gas concentrations. The model incorporates the most up-to-date HITRAN molecular spectral data for 7 atmospheric gases: H2O, CO2, O3, CH4, N2O, O2 and N2. The MT CKD model is used to calculate water vapor and CO2 continuum absorption coefficients. For a scattering atmosphere, the aerosol size distribution is assumed to follow a bimodal distribution. The size and refractive index of aerosols change as they absorb water, therefore the size distribution and refractive index are corrected for different values of local water vapor concentrations. The absorption coefficients, scattering coefficients and asymmetry factors for aerosols are calculated from the refractive indices for different size distributions by Mie theory. The radiosity and irradiance of each layer are calculated by energy balance equations using transfer factors with the assumption of isotropic aerosol scattering. The monochromatic downwelling and upwelling fluxes with scattering for each layer are further calculated using a recursive plating algorithm. Broadband fluxes are integrated over the spectrum for both non-scattering and scattering atmospheres. A model with 18 vertical layers is found to achieve grid independence for DLW. For a non-scattering atmosphere, the calculated surface DLW irradiance agrees within 2.91% with the mean values from the InterComparison of Radiation Codes in Climate Models program, and the spectral density difference is smaller than 0.035 W cm m−2. For a scattering atmosphere, the modeled DLW irradiance agrees within 3.08% relative error when compared to measured values from 7 climatologically diverse SURFRAD stations.
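The constant-σ layering described above can be sketched in a few lines. This is an illustrative sketch only: the chapter states that layer pressures follow a constant σ coordinate (σ = p/p_surface), but the equal σ spacing and the 1013.25 hPa surface pressure used here are assumptions for demonstration.

```python
def sigma_layer_pressures(p_surface_hpa=1013.25, n_layers=18):
    """Mid-layer pressures (hPa) for n_layers equally spaced in sigma = p / p_surface.

    sigma runs from 1 at the ground to 0 at the top of atmosphere; each layer
    pressure is taken at the midpoint of its two bounding sigma levels.
    """
    edges = [1.0 - k / n_layers for k in range(n_layers + 1)]
    mids = [0.5 * (edges[k] + edges[k + 1]) for k in range(n_layers)]
    return [p_surface_hpa * s for s in mids]

# 18 layers, ordered from the surface upward (decreasing pressure).
pressures = sigma_layer_pressures()
```

Because σ is normalized by surface pressure, the same grid adapts automatically to sites at different elevations, which is one common motivation for σ coordinates in plane-parallel models.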
This relative error is smaller than the error from a calibrated empirical model regressed from aggregate data for those same 7 stations, i.e., the proposed model captures the climatological differences between stations. We also note that these deviation values are within the uncertainty range of pyrgeometers. In summary, the proposed model is capable of capturing climatological and meteorological differences between locations when compared to extensive surface telemetry, which justifies its use for calculating DLW at other locations across the contiguous United States where measurements are not readily available. The proposed model also serves as a powerful and robust tool to study high spectral resolution interactions between atmospheric constituents within the critical long wave region of the electromagnetic spectrum. The LBL model proposed in Chapter 3 employs the two-flux approximation, reusable transfer factors and a recursive plating algorithm for aerosol scattering with the objective of improving overall computational performance for calculation of atmospheric DLW radiation using high-resolution spectral data. The complete model is easily coded in Python within a few hundred lines of code. Wave numbers are vectorized so that CPU time is only weakly dependent on spectral resolution when adapting the plating algorithm. As a comparison with a radiation model that can also be easily coded in Python, the speed of computation of a standard Monte Carlo simulation is linearly proportional to spectral resolution. A single run of the complete model described in this work requires 100 s of Intel Xeon E5-2640 CPU time, where each run corresponds to one data point in Fig. 4.1. The use of the recursive plating algorithm alone reduces the total computational time by 30% when compared to direct matrix reduction.
By contrast, an efficient Monte Carlo simulation for the same single case using 50,000 representative photon bundles emitted from each layer requires 90 minutes on the same CPU at a spectral resolution 100 times coarser.

In other words, the proposed model is 3000 to 5400 times faster than an equivalent Monte Carlo simulation. Although other radiative models used in commercial codes also far outperform Monte Carlo simulations in terms of CPU time consumption, there are fewer options for doing so while retaining the level of accuracy and model robustness presented here, and not requiring either thousands of lines of FORTRAN/C coding, and/or expensive yearly fees for the use of optimized commercial products. The model proposed in this work is readily and efficiently implementable in high-level, open-source interpreted computer languages like Python, can easily accommodate different pressure-temperature and aerosol profiles, is only weakly dependent on spectral resolution, and is fast enough to be computed in real-time using low-cost mini-computers. For long wave radiative transfer in the atmosphere, a two-flux spectral multilayer model was developed by the authors to calculate the downwelling and upwelling flux densities in the atmosphere at a spectral resolution of 0.01 cm−1. The two-flux model is sufficiently accurate for long wave radiative transfer because the radiation sources from the system are diffuse. However, for shortwave transfer, the radiation source is highly directional, so a two-flux model would be inappropriate. Therefore, a Monte Carlo radiative model is developed to calculate the shortwave radiative transfer in the atmosphere-Earth system. The atmosphere here is again modeled as N plane parallel layers extending from the ground to the top of atmosphere. The layers are divided using a pressure coordinate as detailed in Ref.. The temperature and atmospheric gas profiles are assumed to follow Air Force Geophysics Lab midlatitude summer profiles, with gas profiles corrected for current surface concentrations. The continuum absorption coefficients of water vapor and carbon dioxide are calculated using the MT CKD continuum model.
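The idea of recursively combining plane-parallel layers under the two-flux approximation can be illustrated with the classical adding relations for diffuse reflectance and transmittance. This is a hedged sketch, not the chapter's actual plating algorithm: it assumes symmetric layers (the same reflectance from above and below) and azimuthally averaged diffuse fluxes, and the function names are mine.

```python
def add_layers(r1, t1, r2, t2):
    """Combine two adjacent layers with diffuse reflectance r and transmittance t.

    The geometric-series denominator 1/(1 - r1*r2) accounts for the infinite
    sequence of multiple reflections trapped between the two layers.
    Symmetric layers are assumed (an assumption of this sketch).
    """
    denom = 1.0 - r1 * r2
    r12 = r1 + t1 * r2 * t1 / denom
    t12 = t1 * t2 / denom
    return r12, t12

def stack_properties(layers):
    """Recursively 'plate' a list of (r, t) layer pairs into one effective layer."""
    r, t = layers[0]
    for rn, tn in layers[1:]:
        r, t = add_layers(r, t, rn, tn)
    return r, t

# A transparent layer (r=0, t=1) leaves the stack unchanged;
# absorption shows up as r_eff + t_eff < 1.
r_eff, t_eff = stack_properties([(0.0, 1.0), (0.3, 0.5)])
```

Because each combination step is a handful of arithmetic operations per wave number, a recursion like this vectorizes naturally over the spectral grid, which is consistent with the weak dependence of CPU time on spectral resolution noted above.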
In the long wave spectrum, the gas molecules are treated as non-scattering because the particle size is much smaller than the wavelength. The absorption coefficients, scattering coefficients and asymmetry parameters of aerosols and clouds are calculated via Mie theory by assuming proper size distributions. In the shortwave spectrum, the scattering of gas molecules is modeled as Rayleigh scattering. Ozone and oxygen continuum absorption are added because they are more significant in the shortwave than in the long wave spectrum. The following sections present the methodologies for calculating scattering coefficients of molecules, continuum absorption coefficients of ozone and oxygen, and the Monte Carlo method. Large-scale deployment of renewable energy technologies as replacement for fossil fuel generation aims at mitigating global warming rates by reducing the emission of greenhouse gases. The scale of this offset is critical to validate societal investments in technologies that may be at the brink of achieving power grid parity. Two major solar technologies arose from market competition in the past decade for utility scale central power plants: photovoltaic and concentrated solar power. Of the several CSP technologies, those based on central towers with large heliostat fields appear to be the most efficient. These large scale solar farms also interact with the atmosphere and with the ground through surface albedo replacement in addition to the direct GHG emission offset. Solar PV farms are highly absorbing while CSP farms are highly reflective when compared to the ground that they cover. On one hand, PV generation is economically viable for distributed generation and can be scaled down to kilowatts. On the other hand, CSP plants offer a number of advantages for larger-scale power production, including higher efficiency, lower cost of thermal storage, higher capacity factors, etc. Photovoltaic systems supply usable solar power by means of direct photoelectric conversion.
A typical PV system consists of fixed or sun-tracking arrays of semiconductor solar panels that absorb and convert broad wavelength solar irradiance into DC power. Inverters transform electric current from direct to alternating current locally, and transformers elevate the voltage for transmission. Concentrated solar power systems generate solar power by using mirrored surfaces to concentrate the beam solar irradiance in order to elevate the temperature of pressurized steam. The energized steam then drives one or more turbines that are coupled to AC generators. Direct steam CSP tower plants use tens to hundreds of thousands of mirrors to concentrate radiative power on a boiler that transfers the heat to high-energy steam for operation of vapor turbines. Dry cooling fans that dispense with the use of additional cooling water close the low-temperature Rankine cycle and return the vapor to the liquid state for pumping. A modern tower CSP plant operates with very low consumption of water and is thus suitable for arid and semi-arid climates. Widely recognized advantages of CSP technologies for central power plant generation are: higher thermodynamic conversion efficiency; reliance on the direct normal component of the solar irradiance, which allows for a flatter generation profile throughout the day; and the lower cost of long-term thermal storage as compared to long-term electrical storage.

Most studies do not characterize the duration based on the host population.

As mentioned before, we assumed that the main route of MAP transmission in the calf population is through the general environment. Although there are some studies that consider MAP transmission from transient shedders to susceptible calves, we assumed that the number of calf-to-calf transmissions in pen 1 is negligible. Furthermore, since the calf population is separated from the heifer and adult populations, the adult-to-calf transmission rates are considered zero. As suggested in [15, 38, 43], we assumed that a portion between zero and 15% of newborn calves become infected in the fresh/hospital, maternity and calving pens. To calculate βG in the calf population we used the values given in columns 2 and 3 of Table 1 in [37]. βG in the heifer population was estimated based on the total number of heifers that tested positive for MAP by fecal culture. The upper range was calculated by dividing the number of test-positive heifers, 3 to 24 months of age, by the total number tested [~32/1266 = 0.0256]. The estimate was assumed to be the highest annual percentage of infected heifers since it spanned a range of 12 to 21 months of follow-up. For βG in the adult population we used the range considered in [8]. The infectious cattle transmission rate βI in the heifer population is adopted from [8]. However, βI in the adult population is calculated from the values provided in columns 3 and 4 of Table 2 in [30]. The value of βC in the calf and heifer populations is considered zero due to the fact that super-shedders are only considered in the adult population. The ranges of stage durations 1/σL, 1/σI and 1/σC are mainly adopted from previous studies.
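The upper-range estimate quoted above is a simple proportion; a one-line check using the approximate counts from the text (the "~" on the numerator means these counts are themselves rounded):

```python
# Share of test-positive heifers among all heifers tested (approximate counts
# from the text; the numerator is reported as approximate).
positives = 32    # heifers 3 to 24 months of age testing MAP-positive
tested = 1266     # total heifers tested over the 12- to 21-month follow-up
upper_annual_fraction = positives / tested  # roughly 0.025, i.e. ~2.5% per year
```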

The average shedding rate of infectious cattle in the heifer and adult populations is obtained from our clinical study and the work by Bolton et al. Duration of pathogen survival is considered the same for all pens. We therefore considered a range of 0.8 to 1.5 years, as suggested by Magonbedze et al. Pathogen transmission rates from pen environments to the general environment may vary based on the age group due to the amount of manure produced by each age group. We assumed larger values for adult populations compared to heifer and calf populations. The animal removal rates vary from farm to farm, but it is known that the rates are higher in the calf and heifer populations. We adopted the same range of values used in [36]. Historical computerized dairy farm records with information on cattle IDs in each of the farms’ pens were obtained with the herd veterinarians’ and farm owners’ permission, and no animals were enrolled for the purpose of the current study. Dairy herd movement records exported from each study herd included cow identification numbers, date of record and pen location; see Table A for a snapshot of the dairy farm data from 2011 to 2015. Some dairies had intervals of missing data for one or more pens; the latter could be due to the dairy management not utilizing such a pen or pens, or simply due to missing backups at regular intervals. The missing data were removed from the study. As summarized in Table 4, the cattle movement data of four dairy farms were used in this study. Farm 1 contributed records from 01/07/11 through 04/10/15, with data missing during the change of its breed make-up in the 02/01/12 through 12/31/13 interval, resulting in two herds of Holstein and Holstein Jersey mix breeds, respectively.

Specifically, we divided the data of Farm 1 into two parts and excluded the interval with missing data. Farm 2 had pen residence records from 01/07/11 through 05/26/15. At the pen level, both Farm 1 and Farm 2 had data on cows residing in pens 1 through 13, but not pen 14, due to the transitory nature of cows moved into the maternity pen at calving only. Farm 3 contributed data from 06/15/13 through 05/13/15 on cow movements between pens 2, 3, 4, 7, . . ., 13, with no data recorded from pens 1, 5, and 14. Farm 4 records were the most completely recorded data, for a herd of 4,604 cows. The data was complete for the period from 01/18/11 through 06/02/15 and it includes cow movement data for pens 1 through 14. Farms 1 and 2 have substantially higher numbers of cows than the other farms. Using the Matlab Optimization Toolbox, the CM model was fitted to the movement data of each farm. The model fitting resulted in estimates of the rates of moving cows between pens on the 4 dairy farms. These di,j rates are presented in Table B. Tables D and E summarize the number of observations and the number of between-pen movements for all farms, respectively. In summary, serum ELISA test and cull is the most effective single control measure in reducing MAP infection. By far the best outcome is obtained by combining the three control measures of test and cull, cleaning, and isolating calves and heifers from the herd. The risk of MAP occurrence was calculated by dividing the number of iterations with R0 greater than one by the total number of iterations in which R0 was calculated. When we compare the no-control option versus all combined control strategies, the risk of MAP occurrence in dairy cattle drops from 82% to 42% and the mean R0 value drops from 3.92 to 0.89. Although this demonstrates a very effective approach to JD management on a dairy farm, it reveals that the MAP occurrence risk is not eliminated even though all control measures are simultaneously applied.
Despite the 42% risk of MAP occurrence, simulations of a MAP-infected herd showed that employing all control measures reduced the mean prevalence of MAP below 0.02% in calves and heifers, with a mean prevalence in adult cows of 1.05% over ten years. Hence complete eradication of MAP was not possible, even though the prevalence and incidence of MAP were extremely low within the ten-year window. Table 5 summarizes results of the NC model simulations assuming that each control measure is applied to a dairy farm separately. S1 Fig depicts the distributions of R0 values. In each panel, the curve represents the fitted generalized extreme value distribution; see Table F for the estimated sigma and mean values. In the absence of any control measures, identified as Control 0 in Table 5, the mean R0 value was 3.92, with a long tail in the frequency plot extending beyond R0 = 20. The numerical simulations indicated that none of the controls was individually effective: each failed to reduce the mean R0 value below 1. In this regard, the top three measures were controls 4b, 3 and 4a, with mean R0 values of 1.31, 1.51, and 2.11, respectively. Table 6 summarizes statistics of R0 values for MAP transmission under all possible binary combinations of the control measures, and S2 and S3 Figs show the related R0 distributions. Although the combination of controls 1 and 2 reduced the mean R0 value from 3.92 to 3.85, it was not successful in reducing the risk of infection and hence MAP transmission. Control measure 4a, weekly test and cull of cows at dry-off, and 4b, annual test and cull of adult cows, made up the most effective binary combination, while a combination of control 3 with control 4a or 4b was the second most effective combination. Nevertheless, none of these binary combinations reduced the mean R0 value below one.
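The single-measure comparison above amounts to sorting scenarios by mean R0 and checking the invasion threshold. A minimal sketch, using only the mean values reported in the text (labels are shorthand, not the paper's notation):

```python
def rank_controls(mean_r0):
    """Rank control scenarios by mean R0 (lower is better) and report
    whether any of them pushes the mean below the threshold of 1."""
    ranked = sorted(mean_r0.items(), key=lambda kv: kv[1])
    return ranked, any(v < 1.0 for v in mean_r0.values())

# Mean R0 values for single measures as reported in the text
ranked, any_below_one = rank_controls(
    {"none": 3.92, "3": 1.51, "4a": 2.11, "4b": 1.31}
)
assert ranked[0][0] == "4b"   # best single measure
assert not any_below_one      # no single measure reaches mean R0 < 1
```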

However, as shown in Table 6, the risk dropped significantly in the cases in which test and cull was combined with a control measure other than controls 1 or 2. In the best-case scenario, the risk of MAP occurrence decreased from 81% to 47%. Even so, in all cases the mean R0 value was greater than 1, which indicates that JD will gradually spread in the herd. Although the risk of MAP infection decreased under triple and full combinations of control measures, the risk was not eliminated and remained non-zero. In particular, as shown in Table 7, the risk of MAP infection decreased from 81% to 42% and the mean R0 value decreased from 3.92 to 0.89. From the most effective to the least, the combined measures with mean R0 values less than 1 are the combination of all controls and three further triple combinations, with mean R0 values of 0.898, 0.94, 0.95, and 0.97, respectively. Hence, on average, a farm that employs all control measures, or one of these three-measure combinations, should remain or gradually become disease free. However, there is still more than 40% risk of MAP infection remaining in the herd. In a similar manner, prevalence and incidence of MAP infection were estimated from simulations applying single control measures. Fig 6 shows the asymptotic behavior of MAP infection prevalence and incidence in a dairy farm: the curves were obtained by taking the mean values of 50,000 NC model simulations over a long period of time. For each simulation, a super shedder and an infected cow are initially introduced to the herd. It can be seen that control 4b was the most successful method in the population of adult cows. Control 3 could effectively slow down the increase of incidence and prevalence in the calf and heifer populations. It should also be noted that control 5, which was designed to delay the exposure of calves and heifers to infected cows, was the most effective method in the calf and heifer population for keeping both prevalence and incidence low.
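The asymptotic curves in Fig 6 are averages over many stochastic runs. The averaging step can be sketched as below, where `simulate_run` stands in for one NC-model simulation returning a prevalence time series; the toy run used here is a hypothetical random walk, not the paper's model.

```python
import random

def mean_trajectory(simulate_run, n_runs, n_steps, rng):
    """Mean prevalence trajectory over repeated stochastic simulations,
    as used to produce the averaged asymptotic curves."""
    totals = [0.0] * n_steps
    for _ in range(n_runs):
        for t, prev in enumerate(simulate_run(rng, n_steps)):
            totals[t] += prev
    return [s / n_runs for s in totals]

def toy_run(rng, n_steps):
    """Hypothetical single run: prevalence drifting with small noise."""
    prev, out = 0.01, []
    for _ in range(n_steps):
        prev = max(0.0, prev + rng.gauss(0.0, 0.002))
        out.append(prev)
    return out

curve = mean_trajectory(toy_run, n_runs=200, n_steps=50, rng=random.Random(1))
assert len(curve) == 50 and all(p >= 0.0 for p in curve)
```

Averaging across runs smooths out the run-to-run stochasticity, which is why the reported prevalence and incidence values are means over 50,000 simulations rather than single trajectories.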
This is despite the fact that control 5 had poor efficacy, with an 81% risk of MAP occurrence. Details of the asymptotic values associated with the single measures can be found in Table F. Further simulations indicated that a combination of test and cull with control 1 did not reduce the incidence and prevalence in the calf and heifer populations, because test and cull is rarely applied to calf and heifer pens. We also calculated the range of the mean prevalence and incidence estimates over a practical time period using 50,000 NC model simulations. These values are presented in Table 8, where control 5 remains the best control measure in the population of calves and heifers. In the population of adult cows, no single control measure was superior to all others. Hence, we investigated the prevalence and incidence for cases in which more than one measure was employed. The results of the combined control measures are presented in Table 9, showing that the mean prevalence and incidence estimates were substantially lower in the calf and heifer populations. Moreover, the binary combinations of controls 3 and 4 are ineffective in the calf and heifer populations but effective in the adult population. NC model simulations for the calf and heifer populations with the double control measures 1 & 5, 2 & 5, 3 & 5, 5 & 4a, and 5 & 4b result in prevalence estimates below 0.01%, an important result showing that a disease-free herd can remain disease free under two assumptions: first, that only an extremely small number of infectious cows or super shedders is accidentally introduced to the herd; and second, that a combination of the above-mentioned control measures is strictly implemented. A common control measure among these effective combinations is control 5, under which calves are born and raised in uninfected herds, delaying exposure to infected cattle.
Although this represents a different scenario, such estimates may serve as a conservative bound: the resulting prevalence of 0.008% estimated in the heifer population makes it the most effective measure in this age group. The next most effective control measures in calves and heifers were combinations with control 2, resulting in a prevalence of 0.07%. All of these combinations included control 2, under which exposure of calves to MAP infection is avoided from birth by relocating them off-site before they are returned to their source herd as springers. Although the off-site nursery prevents contact between the calves and adult cattle, the environment in the off-site nursery pens may be contaminated by lagoon water when flush water is recycled; therefore, a mean prevalence of 0.07% is expected. Nevertheless, the simulations suggested that control 2 was the second-best measure for reducing JD prevalence in the calf and heifer populations.