The findings described in this dissertation make a small but significant step in this direction

The human immune system maintains a homeostatic relationship with commensals through numerous mechanisms, including stratification and compartmentalization of the intestine, production of a mucous layer and antimicrobial proteins, and limiting of epithelial exposure and immune response. Two studies in Arabidopsis thaliana demonstrate that disrupting components of the plant immune system, such as the signaling molecules salicylic and jasmonic acid, influences microbial community composition: the first shows evidence for altered root microbiome communities in plant hosts lacking genes controlling production of SA compared to control plants; the second shows altered microbial communities in plants with mutations in genes controlling ethylene response and cuticle formation. Recent work in wheat also demonstrated a role for jasmonic acid in shaping the composition of the microbiome; in this case as well, activation of JA signaling pathways altered the microbial diversity and composition of root endophytes. In mammals, microbiota are critical in the development and function of components of adaptive immunity, such as B and T cell diversity and differentiation. In plants, commensal bacteria influence host immunity by priming the plant for future exposure to pathogens through the induction of a systemic response, conferring broad-range basal levels of protection.

A primed plant may respond more rapidly and strongly to pathogen invasion through a variety of mechanisms, including quicker closing of stomata, reduced sensitivity to bacterial manipulation of defenses, upregulation of defense-related genes, and stronger salicylic acid-related immune responses. In some cases, the effects of priming can even be trans-generational through chromatin and histone modification, where the subsequent generation of primed plants exhibits enhanced resistance to bacterial and fungal pathogens as well as herbivores. Host-associated microbiota can also directly influence host resistance against invading pathogens. Common across most systems, the microbiome can serve a protective role that is independent of the host immune system through antagonism, competitive exclusion, or physical exclusion of pathogens, collectively referred to as defensive symbiosis. Recently, the phyllosphere microbiome, discussed in this work, has been found to protect its plant host against pathogens. In mammals, it is clear that early exposure to microbes is crucial to the development of both branches of the immune system, influencing not only immune development and response against pathogens but also tolerance to commensal microbiota. The role of early exposure to microbiota suggests it would be advantageous for a community of beneficial microbes to be transmitted vertically from parent to offspring, generation after generation. Transmission of microbiota in plants can occur vertically through the seeds, or horizontally from the soil and surrounding environment.

Plants ranging from trees to grasses are known to harbor bacteria in their seeds, many of which are reported to promote plant health. Despite this, there is no evidence that plants actively select for transmission of specific microbial communities, and there are no clear examples of adaptations to ensure seed-mediated transmission. My work in Chapter 2 explores this topic by uncovering the importance of some seed-transmitted microbes in early seedling health.

The plant phyllosphere is defined as the aerial surfaces of plants, that is, all plant tissues growing above ground. This work primarily focuses on microbial epiphytes of the phyllosphere: the bacteria, viruses, and fungi that are found on the surfaces of leaves. The phyllosphere is a massive habitat, estimated to exceed 10⁸ km² of plant surface area worldwide. It is, in general, a nutrient-poor environment that undergoes fluctuations in temperature, UV exposure, and moisture. The phyllosphere microbiome is known to harbor primarily four phyla of bacteria, which reach an abundance of ~10⁶ cells/cm². Microbes from surrounding plant species, dust, soil, and other sources are thought to be the primary colonization sources for the phyllosphere. In particular, neighboring plants have been shown to contribute to both the density and composition of local airborne microbes. However, as demonstrated in Chapter 3, the microbes frequently described as members of the phyllosphere microbiome may in fact be transient visitors rather than well-adapted colonizers of the environment. Although there is a trend in phyllosphere research to focus on the bacterial portion of the microbiome, there have been some studies describing the fungal community as well.

There is even less work on the viral community, although from culture-based work we know that bacteriophages do indeed inhabit the phyllosphere and prey upon the bacterial community. There are many technical limitations that impede the field’s ability to fully describe the phyllosphere phage community. Nevertheless, my work in Chapters 4 and 5 contributes to our understanding of the importance of bacteriophages in this system. Compared to the field’s understanding of the belowground microbial habitat, the rhizosphere, the phyllosphere has been relatively understudied. Despite this, there are many advantages to the system. Specifically, the phyllosphere has a naturally distinct spatial structure, it is relatively easy to sample, and its microbes are highly culturable. Through inoculation using a fairly simple spray technique, the environment can be evenly saturated with a diverse microbial inoculum, and it is possible to sample the successfully colonized community in its entirety. It is also easy to visualize, and spatial patterns of colonization and survival can be readily ascertained. Moreover, bacterial abundance and growth can be tracked using droplet digital PCR, and the bacterial and fungal communities can be described using next-generation sequencing. Overall, the phyllosphere is an ideal system in which to study topics such as the relative importance of transmission events, host characteristics, the environment, and microbe–microbe interactions in shaping the microbiome.

Symbiotic associations between plants and microbes span from pathogenic to beneficial, and these interactions have been studied from many angles of science, from evolution to agriculture. My dissertation research seeks to address fundamental questions about microbial community ecology and host-microbiome interactions.
It is motivated by the belief that rational design or manipulation of complex microbial communities has the potential to shape the future of medicine and agriculture, but that this success will largely depend on our basic understanding of the systems at hand. Plant-associated microbiomes are capable of enhancing host fitness through a number of mechanisms. They can promote growth through production of phytohormones and fixation of nutrients from the environment, confer both drought and stress tolerance, and even influence the flowering time of their hosts. Perhaps one of the most influential ways that microbial organisms affect host fitness is through their impact on host immunity and disease resistance. In plants, microbes can confer disease resistance through both direct and indirect mechanisms; indirect protection operates via the plant immune system. Plants are able to detect microbial-associated molecular patterns such as lipopolysaccharides in the environment, activating a generalized antimicrobial defense mechanism and effectively priming the plant to respond more strongly when subsequently exposed to a pathogen. Experimental studies using isolated strains of bacteria have demonstrated that many can protect plants against pathogen colonization through direct inhibition of the pathogen’s growth, either through competition for resources or production of antimicrobials. Furthermore, it has long been understood that plants can be ‘primed’ against pathogen colonization by colonization with non-pathogenic bacteria. It is now becoming clear that the microbiome as a whole might act collectively to confer disease resistance, although it is more difficult to pinpoint the mechanisms underlying the effects of whole consortia than those of individual strains studied using culture-dependent methods. Both the rhizosphere and phyllosphere microbiomes have recently been shown to provide protection against pathogens.
But even as we begin to understand microbiome-mediated protection against disease, it is unclear how a naturally protective community might assemble on a plant and, once assembled, whether it can be stably maintained over time. A broader consideration of how plant-associated microbiomes are acquired and transmitted among hosts is required to better understand how a generally beneficial community might persist across generations. The two dominant sources for assembly of the plant microbiome are horizontal transmission from the environment and unrelated plants, and vertical transmission from parental plants. Local plant populations are important contributors to the airborne microorganism community, and thus movement of microbes among neighboring plants can readily occur through aerial dispersal. Unlike horizontal transmission, however, vertical transmission holds the potential to connect, extend, and reinforce beneficial symbioses across temporal and spatial scales.

In plants, vertical transmission of microbial communities is observed in both vegetative and sexual reproduction. Parental microbiota can be transmitted through the foliar and vascular pathways onto seeds, though the most likely route across plant species remains unclear. Once on or within the seeds, they can act as the incipient members of a mature plant microbiome, critically shaping the growth, development, and pathogen susceptibility of newly emerging seedlings. Such transmission would allow plant lineages to maintain beneficial symbioses across multiple generations and pave the way for coevolution of the partners, as has been well characterized in other systems. Moreover, studies on seed-associated microbes have focused primarily on endophytes from surface-sterilized seeds, despite the fact that the seed surface is the most immediate interface between the embryo and parental tissues. As a result, endogenous seed epiphytes remain a relatively unexplored group, despite their potential importance in early colonization of plants. Here we present a study in which we examine whether endogenous seed epiphytic microbes, both as a community and in isolation, protect seedlings of various tomato types against a common plant pathogen, Pseudomonas syringae pv. tomato strain DC3000. By transferring naturally occurring seed-associated microbial communities back onto surface-sterilized seeds of either the original cultivar or different genotypes, and comparing pathogen colonization and disease susceptibility against un-inoculated control seedlings, we were able to test the impact of multiple seed-associated communities and bacterial isolates on disease progression and examine the dose-dependence of the protection conferred.

Tomato fruits were collected from the UC Davis Student Organic Farm in September 2017.
We collected mature, intact fruits from a total of four different tomato types based on distinct fruit morphologies and field locations: orange cherry tomatoes, red cherry tomatoes, medium-orange-sized tomatoes, and an unidentified heirloom variety. Fruit of the same tomato type/cultivar were collected from multiple plants planted in the same row, resulting in four tomato types. Tomato Types 1–3 were collected from non-neighboring lanes of one field, and the heirloom variety was collected from a neighboring field. Tomatoes were brought into the lab, pooled within tomato type, sterilized, and then fermented to collect seeds.

Intrigued as to what made TT4 seed microbiota protective not only on its own tomato genotype but also on others, we used 16S rRNA community profiling to sequence the bacterial communities of two-week-old seedlings whose seeds had been inoculated with TT4 microbiota. We found that these seedlings were strikingly dominated by OTUs in the genus Pantoea. Knowing that Pantoea is highly culturable, and that many species are already used as biocontrol strains, we next sought to culture isolates from TT4 seeds to determine the exact species of Pantoea that were endogenously found on these seeds. We were able to culture six bacterial isolates from TT4 seeds. We also tried to culture bacterial isolates from the other three tomato types, but were only able to culture one bacterial isolate, from TT2, which we identified as a Bacillus species. Using Sanger sequencing, we sequenced the 16S genes of our isolates and identified them as species of Pantoea. Because Pantoea spp. are notoriously difficult to differentiate using 16S sequences, we chose three isolates based on distinct colony morphology and different 16S sequences, and sequenced their gyrB and rpoB genes as well. We were able to further confirm their identities and place them within a phylogenetic tree of Pantoea spp.
To our knowledge, our isolates have not been previously identified or used as biocontrol strains, although some related strains have been developed. Interestingly, ZM3 and ZM2 appear to be similar based on DNA sequencing, with their partial 16S sequences aligning at 99% identity to one another and their partial gyrB sequences at 100%. However, when grown on nutrient agar, their colonies are distinctly different colors: yellow and white, respectively. Whole-genome sequences will further elucidate genetic differences between the isolates and are currently underway. Lastly, we aligned our ~420 bp of amplicon sequencing data to near-full-length reverse Sanger sequencing reads of the isolates and observed a 100% match of some of these OTUs with our isolate sequences.

Through a combination of culture-dependent and culture-independent methods, we were able to directly test the protective effects of naturally occurring seed-associated microbiota, both in consortia and as single isolates.

The plant immune system is also important in shaping the non-pathogenic microbiome

Similarly, according to Tena’s chronography, the summer solstice took place toward the end of the sixth month, called Etzalqualiztli. At that time, sunrise would have seemed to stand still at an azimuth of ca. 65°. Viewed from the top of the Templo Mayor, sunrise would have taken place behind Tepetlaoxtoc, in the western foothills of the Sierra de Patlachique, across the briny waters of Lake Texcoco where the Basin’s saltworks were. Coincidentally, the first day of the seventh month, called Tecuilhuitontli, was devoted to a celebration in honor of Huixtocihuatl, the goddess of salt. Close to the summer solstice bearing, further east from the salty lakeshores, there were fertile agricultural terraces with cultivated milpas, or cornfields. Sahagún noted that in the eighth month, called Huey Tecuilhuitl, the goddess of fresh corn, Xilonen, or Chicomecoatl, was also celebrated. It does not seem coincidental that the name of Chiconcuac, a settlement found along this summer sunrise view, is derived from the name of this goddess. The winter solstice occurred close to the beginning of the 16th month, Atemoztli, a time in which sunrise seems to stand still at its southernmost azimuth of ca. 116°, on the northern slope of the Iztaccíhuatl volcano, the “sleeping woman”. According to Sahagún, the beginning of the following month, called Tititl, was devoted to celebrating Ilama Tecuhtli, also known as Tona.

The correlation between sunrise close to the woman-like volcano and the celebration of womanhood in general is striking. In summary, there seems to be a noteworthy association between some elements of the horizon calendar and the feasts and celebrations of each season: the arid spring equinox, when the sun rises behind Mount Tlaloc, was associated with Tlaloc, the god of water and rain. The summer solstice, when sunrise occurs behind the distant salty shores of Lake Texcoco, was associated with salt and summer corn. Finally, the winter solstice, when the sun rises at the side of Iztaccíhuatl, the sleeping woman, was associated with womanhood and female gods.

The previous analysis suggests a correlation between the Mexica calendar and the topographic elements of the Basin’s eastern horizon but leaves an important question unanswered, namely that of the calendric role of Mount Tlaloc. It seems very clear that the horizon calendar, as viewed from Tenochtitlan’s Templo Mayor, should have relied strongly on the date of the sun rising behind Mount Tlaloc, as this mountain could have provided, better than any other, the accuracy needed for the precise estimation of the length of the solar year and for leap-year adjustments. However, none of the 16th-century codices and manuscripts consulted for this study describe this phenomenon in a direct and clear manner, other than a general mention in Sahagún that at the beginning of the third month, close to the alignment date of sunrise with Mount Tlaloc, a feast was made to Tlaloc, the god of rains. If the alignment of sunrise with Mount Tlaloc was indeed an important calendric landmark when viewed from the Templo Mayor, a clear mention could have been expected in the ancient codices, raising the question of why the Mexica did not use the Mount Tlaloc alignment to mark the beginning of the new year.

The answer to this paradox may lie in the ruins of the ceremonial center found at Mount Tlaloc’s peak. The summit of Mount Tlaloc is crowned by a rectangular walled enclosure about 40 m east–west by 50 m north–south. This courtyard, or tetzacualo, consists of stone walls that have been estimated to have been 2 to 3 m high when originally built, with a ca. 94° east–west azimuth. The eastern side of the precinct opens to a 150 m-long, ca. 6 m-wide, walled straight causeway that has an azimuthal bearing of 101°55′, offset more than 8° southward from the roughly east–west bearing of the enclosure. Because the causeway runs downslope on the western side of the peak, some researchers have wondered whether the causeway was intentionally misaligned with the axis of the enclosure in order to accommodate a particular orientation to the setting sun. If viewed upslope, the azimuthal bearing of Mount Tlaloc’s causeway and the angular elevation of 4°02′ above the celestial horizon define a point in the celestial sphere that aligns with the sun’s apparent position on February 23 to 24 each year. That is, an observer standing at the lower end of the causeway will see the rising sun appear in the center of the upper part of the stone ramp on February 23 or 24, after the last nemontemi day and in synchrony with the beginning of the Basin’s new year as defined by Tena’s first chronology. The causeway seems to have been constructed as a calendric solar marker with a celestial bearing that allows for leap-year adjustments and indicates the end of the year and the beginning of a new solar year. The idea that the structure was used for precise astronomical observations is further reinforced by the fact that it seems to have had specific sight markers to avoid parallax error. Wicke and Horcasitas described that the causeway had a stone circle at its upper end where, presumably, a monolith could have stood.
Correspondingly, it still has a stone square with an erect, 40-cm monolith in its lower end. Jointly, they could have been used as alignment markers to further improve alignment accuracy.
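The alignment date implied by the causeway's bearing can be checked with the standard spherical-astronomy relation linking azimuth, altitude, and declination. The sketch below is illustrative only: the observer latitude (~19.4° N for Mount Tlaloc) is an assumed value, and atmospheric refraction is ignored.

```python
import math

def solar_declination(azimuth_deg, altitude_deg, latitude_deg):
    """Declination (degrees) of a celestial body seen at the given azimuth
    (measured from true north, eastward) and angular altitude, using
    sin(dec) = sin(lat)*sin(alt) + cos(lat)*cos(alt)*cos(az)."""
    az, alt, lat = (math.radians(x) for x in (azimuth_deg, altitude_deg, latitude_deg))
    sin_dec = (math.sin(lat) * math.sin(alt)
               + math.cos(lat) * math.cos(alt) * math.cos(az))
    return math.degrees(math.asin(sin_dec))

# Causeway bearing 101 deg 55 min and elevation 4 deg 02 min, from the text;
# latitude ~19.4 N (Mount Tlaloc) is an assumption of this sketch.
dec = solar_declination(101 + 55 / 60, 4 + 2 / 60, 19.4)
print(round(dec, 1))  # about -9.8
```

The result, roughly −9.8°, matches the Sun's declination in late February (and again in mid-October), consistent with the February 23 to 24 sunrise alignment described above.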

Almost a century ago, Rickards described the presence of a monolith with the figure of Tlaloc in the center of the tetzacualo, aligned with the causeway, as had been described earlier by Durán. Although the figure has since been removed, it could have functioned as yet another element for precise solar alignments. The importance of Mount Tlaloc as a solar observatory is enhanced by the fact that the two largest peaks of the Mexican Transversal Volcanic Axis east of the Basin of Mexico are visible from its peak and almost perfectly aligned. Viewed from the center of the stone courtyard, the nearest peak, Matlalcuéyetl or Malinche, has an azimuth of 105°52.7′, while Citlaltépetl or Pico de Orizaba has an azimuth of 105°26.5′. Because the azimuthal difference between the two peaks is less than the angular width of the sun’s disk, viewed at dawn they will seem like a single mountain with two close crests, where sunrise would be seen on February 10. In short, the causeway on Mount Tlaloc marks very precisely the beginning of the Mexica solar year, but the summit courtyard could have been used to identify a precise alignment 15 d before the beginning of the year, during Izcalli, the last month of the Mexica calendar. Ceramic fragments are common in and around the enclosure; these fragments have been collected by archeologists and dated to the Mesoamerican Classical Period, early Toltec, and Mexica, suggesting that the site was used for ceremonies from the beginning of the Common Era to the collapse of the Mexica Empire in the 16th century. Although the constructions have not been dated with precision, early chroniclers reported that the sanctuary on Mount Tlaloc was used by the Toltecs before the 7th century CE and by the Chichimecs in the 12th century, before the arrival of the Aztecs in the Basin.
It seems likely, then, that the astronomical use and significance of the Mount Tlaloc causeway, and hence the beginning of the Mesoamerican calendar, preceded the founding of Tenochtitlan and the development of the Mexica civilization.

Broda noted that the causeway of Mount Tlaloc points toward Mount Tepeyac, a hill that emerges from the Basin’s sediments south of the Sierra de Guadalupe, a range of basaltic mountains in the center of the Basin of Mexico. Indeed, when viewed from Tepeyac, Mount Tlaloc has an azimuth of 100°54′, very close to the bearing of the causeway on Tlaloc’s peak, and an elevation of 2°38′. Mount Tepeyac is the southernmost hill of the Sierra de Guadalupe, only 4 km northeast and 7 km east of the pre-Hispanic settlements of Tlatelolco and Azcapotzalco. According to Sahagún, the hill had been a place of worship and pilgrimage for the inhabitants of the Basin long before the Spanish Conquest. Broda’s observation suggests that a visual alignment of calendric importance may have existed between the Tepeyac range and Mount Tlaloc. Indeed, sunrise alignment with Mount Tlaloc occurs on February 24 if viewed from Mount Tepeyac. The Mount Tepeyac solar alignment date thus corresponds with that of the summit causeway and likewise heralds the beginning of Tena’s new year. It can be hypothesized, then, that before the Mexica built the Templo Mayor, the inhabitants of the Basin of Mexico were using the alignment between Tepeyac and Mount Tlaloc as a fundamental landmark in their horizon calendar. They could have adjusted their agricultural calendar to the solar year with precision based on the sunrise alignment between Mount Tlaloc and Tepeyac.

Agriculture was already well established in the Basin of Mexico by the first millennium BCE, largely around the Pre-classic Cuicuilco culture in the southwest of the Basin.
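As a quick arithmetic check of the twin-peak alignment described earlier: the azimuthal separation between Malinche (105°52.7′) and Pico de Orizaba (105°26.5′) is indeed smaller than the Sun's apparent diameter, so at dawn the solar disk can span both crests. A minimal sketch, in which the ~32′ mean solar diameter is an assumed reference value:

```python
# Azimuths from the text, as viewed from Mount Tlaloc's summit courtyard.
malinche = 105 + 52.7 / 60         # 105 deg 52.7 min
pico_de_orizaba = 105 + 26.5 / 60  # 105 deg 26.5 min

separation_arcmin = (malinche - pico_de_orizaba) * 60
sun_diameter_arcmin = 32.0  # mean apparent solar diameter (~31.5-32.5'), assumed

print(round(separation_arcmin, 1))              # 26.2
print(separation_arcmin < sun_diameter_arcmin)  # True
```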

The Cuicuilco civilization collapsed in the 3rd century CE when the Xitle volcano became active and covered the whole south of the Basin under a mantle of lava. Broda has analyzed the horizon calendar as viewed from the main pyramid of Cuicuilco, built ca. 600 BCE, almost nine centuries before the apogee of the Mexica Empire. She concluded that the sunrise alignment with Mount Papayo on March 24, close to the equinox, “could have constituted a simple and effective mechanism to adjust for the true length of the solar year, which needed a correction of 1 d every 4 y.” Broda’s studies on Cuicuilco provide strong evidence suggesting that rigorous calendric calculations and leap-year adjustments were at the heart of the development of Mesoamerican agricultural civilizations from very early times and were certainly very important in pre-Classical settlements. In addition to the equinoctial alignment of sunrise with Mount Papayo, the Cuicuilco observatory would have provided good calendric alignments with Mount Telapon and with the “head” of the “sleeping woman” profile of the Iztaccíhuatl volcano. The latter date is very close to Tena’s estimate for the beginning of the Mexica calendric year and, because of Iztaccíhuatl’s majestic proportions when viewed from the south of the Basin, could also have constituted an important landmark for calendric adjustments.

Many early codices seem to validate the working hypothesis that Mount Tlaloc was instrumental in the establishment of the date of the Basin’s new year and in the adjustments necessary to keep the agricultural calendar in synchrony with the solar year. As discussed previously, Sahagún described how Atlcahualo, the first month of the year, was devoted to celebrating the Tlaloc gods of rain.
Similarly, in Durán’s description of the nemontemi days, he reported that the year ended when a sign of the first day of the new year became visible above a mountain peak, suggesting the use of a landmark alignment to indicate the beginning of the new year. Similar associations between Mount Tlaloc and the first day of the new year are shown in other ancient codices, such as the Codex Tovar, the Codex Borbonicus, and the Wheel of Boban. The narrow historical relationship between the first month, Atlcahualo, and Mount Tlaloc has recently been described in detail by Broda. From an ecological perspective, it seems clear that the rugged eastern horizon of the Basin provided precise landmarks that would have allowed observers to adjust the xiuhpohualli, the count of the years, to the true solar calendar. Sahagún’s description of the feasts and ceremonies associated with some of the Mexica “months,” or 20-d periods, coincides well with themes from landmarks visible on the sunrise horizon from the Templo Mayor. Because of its position near the equinox when viewed from the center of the Basin, Mount Tlaloc seems to have played a very important calendric role. The long causeway at the summit strongly suggests that the ceremonial structure was used as a solar landmark, aligning very precisely with the rising sun on February 23 to 24 and October 19 to 20. The same alignment is found if Mount Tlaloc is viewed from Mount Tepeyac, a holy site whose use as a sacred mount and solar observation post preceded the establishment of the Mexica civilization in the Basin.

African-American groups have sought to reclaim and remold their rich heritage through urban farming

By crops, we mean either annual or perennial crops, including tree crops. At the field scale, DFS may include polycultures, noncrop plantings such as insectary strips, integration of livestock or fish with crops, and/or rotation of crops or livestock over time, including cover cropping and rotational grazing. Around the field, DFS may incorporate noncrop plantings on field borders, such as living fences and hedgerows. At the landscape scale, DFS may include natural or semi-natural communities of plants and animals within the cropped landscape/region, such as fallow fields, riparian buffers, pastures, meadows, woodlots, ponds, marshes, streams, rivers, and lakes, or combinations thereof. The resulting heterogeneous landscapes support both desired components of biodiversity and “associated biodiversity”; together these two elements make up agrobiodiversity. Components of the agrobiodiversity within DFS interact with one another and/or the physical environment to supply critical ecosystem services to the farming process, such as soil building, nitrogen fixation, nutrient cycling, water infiltration, pest or disease suppression, and pollination, thereby achieving a more sustainable form of agriculture that relies primarily upon inputs generated and regenerated within the agroecosystem, rather than primarily on external, often nonrenewable, inputs.

Spatial considerations are important, since different components of the system must be in sufficient proximity, at each relevant scale, to create the needed interactions and synergies. For example, the utility of intercropping for reducing belowground soil disease depends on spacing the different crops such that their root systems interact. Similarly, wild bee communities can only provide complete crop pollination services when a sufficient proportion of their natural habitat occurs within a given distance of crop fields. A DFS is not only spatially heterogeneous, but is also variable across time, due both to human actions and to natural successional processes. Figure 1 presents the conceptual model of a DFS.

The term agroecology goes back more than 80 years and originally referred to the ecological study of agricultural systems. Much agroecological work seeks to bring Western scientific knowledge into respectful dialogue with the local and indigenous knowledge that farmers use in managing ecological processes in existing agroecosystems. More recently, this hybrid science has evolved to include the social and economic dimensions of food systems. Partly in response to the industrialized agriculture of the Green Revolution, agroecology also came to mean the adoption of sustainable agricultural practices, and it became an integral component of various social movements seeking alternatives to industrial agri-food systems.

Thus agroecology currently holds multiple meanings, and can refer to an inter- or transdisciplinary science, a set of sustainable farming practices, and/or a social movement. DFS is not an alternative to agroecology. Rather, DFS is a framework that draws from the agroecological, social, and conservation sciences to focus analytical and action-oriented attention on farming systems in which cross-scale ecological diversification is a major mechanism for generating and regenerating ecosystem services and supplying critical inputs to farming. Agroecological principles and methods can be used to evaluate DFS and to design or revive processes of diversification. In this essay and series of articles, we explore the ramifications of DFS for both ecological health and socioeconomic welfare, as well as examining the intersection of DFS with existing industrialized agricultural systems, supply chains, and national and international policies.

DFS are complex social-ecological systems that enable ecological diversification through the social institutions, practices, and governance processes that collectively manage food production and biodiversity. As many political ecology scholars emphasize, ecosystems are densely interconnected with social relationships. Ecological variables such as soil, water, and habitat help configure an array of farming practices, exchanges of food and resources, and landscape management decisions that, in turn, influence the structure and function of the ecosystem.

Further, as ecosystem services are generated and regenerated within a DFS, the resulting social benefits in turn support the maintenance of the DFS, enhancing its ability to provision these services sustainably. This interplay underlies numerous historically occurring and emerging DFS worldwide. Conversely, socio-political and economic processes, such as decreased access to and control over seeds or increased dependence on commodity markets, can intervene to disrupt such feedback cycles, thus weakening DFS. The industrialization of agriculture has led to growing homogeneity across food systems as farming techniques and markets become more standardized. As a consequence, the complex social relationships underlying agriculture and ecosystem service provision have become less visible. Focusing on DFS can help farming communities, researchers, policy makers, and industry recognize and restore these relationships. At their core, DFS depend on agroecological principles that are developed in and through the social relationships among working farmers, their communities and environments, and researchers, including ecologists, anthropologists, agronomists, and ethnobiologists. As seen in the Kremen et al. examples, these principles take varied forms depending on local conditions. To understand how DFS may develop, function, and evolve over time and space, the particular context of each DFS needs to be studied, paying particular attention to the politics and power relations that reciprocally shape its ecological conditions. Many DFS were developed through traditional and indigenous farming knowledge and agrobiodiversity accumulated over millennia. More recently, other DFS have been created through targeted agroecological studies designed by scientists to solve particular problems.
Historically, much knowledge about biologically diverse farming practices has been created and shared through peer-to-peer learning within traditional farming communities and, more recently, also through their collaboration with researchers interested in further developing agroecology. These relationships continue to be critical to the growth of DFS in new societal contexts and geographic locations. Since the 1980s, with the rise of the Campesino-a-Campesino and La Via Campesina movements, institutions such as government agencies, domestic and international NGOs, and universities have become increasingly active in promoting and diffusing agroecological principles through research networks and programs. These actors have added new institutional dimensions to the social relationships that help sustain DFS. One illustration of how social and ecological systems interpenetrate within DFS is found in the Andean highlands, where indigenous farmers have managed their lands agroecologically for 3,000 years. The ongoing interplay between human management and physical ecology has created a landscape of agroclimatic belts at different altitudes, each characterized by specific field rotation practices, terraces, and irrigation systems, and by the selection of specific animals, crops, and crop varieties. Within these belts, traditional knowledge has helped sustain tremendous genetic diversity by perpetuating adapted landraces and wild relatives of crops. Social cooperation is essential to managing the verticality and heterogeneity of the Andean ecosystem.
A barter economy based on reciprocity, for example, facilitated complementary exchanges of plants and animals between ecological zones along the steep elevation gradient.

In industrialized systems in both developed and developing countries, farmers must now negotiate with corporate food buyers, buy agrochemical and seed inputs from agents, seek loans from bank officials, and work with agricultural extension experts trained in pesticide use. Farmers rely on such relationships to compete effectively in supply chains and to manage changing ecological conditions, such as pest outbreaks. Nonetheless, these particular types of relationships often push individual farms toward increased dependence on banks, damaging livelihoods and undermining collaborative social learning groups as farmers specialize in a single crop and maximize short-term yields through the use of external inputs to meet loan repayments. The economic pressures in these tightly linked systems generally corrode ecosystem services, which are the very foundation of support for potential DFS. Farmers in industrialized systems may also engage in exploitative relations with immigrant or impoverished laborers, paying inadequate wages and enforcing long hours, helping perpetuate the apparent cheapness of food. Industrial production creates a number of “distances” between producers and consumers such that information flow diminishes across the supply chain. Thus, within the industrial agri-food system, consumers remain relatively ignorant about the conditions of production and are less able to choose between products based on sustainability criteria, if they value these, and to exercise their buying power in favor of DFS.

In turn, the risk perceptions of consumers and corporations may inhibit the growth of DFS. For example, during the recent food safety scare in fresh leafy vegetables in California, corporate buyers insisted that growers remove native vegetation bordering fields that might attract wildlife. This action was taken largely to assuage consumer concerns, despite the lack of scientific support. In alternative agricultural systems such as organic or low-input farming, farmers can build particular forms of relationships that help sustain ecosystem services and social infrastructure more effectively. We discuss many of these relationships, including direct marketing, fair trade certification, and food justice movements. In developing and studying these alternative systems, however, researchers, policy makers, and NGOs often neglect race, socioeconomic, and gender issues, or sublimate them into a broad social justice category. Finding ways to be far more inclusive of diverse racial, gender, and socioeconomic groups can help strengthen the social-ecological basis of agriculture. For instance, African-American growers once represented a sizable proportion of the U.S. farmer population, numbering about one million in 1910 but declining to 18,400 by 1997, due to race discrimination and violence, lack of land tenure, and multiple waves of economic migration from the South to urban centers. Many of these black farmers used DFS practices; their displacement helped create an opening for industrialized monocultures. Now, many new farmers in rural and urban areas are black, Latino, or Asian; there is evidence that these farmers are more likely than their established peers to embrace sustainable agriculture practices if adequately supported. Immigrants such as the Hmong may sometimes develop culturally relevant, more diversified food production enclaves within industrialized systems that preserve their traditions and provide livelihoods.
They are developing new linkages between cities and nearby rural areas, potentially helping recreate DFS. For example, Will Allen founded Growing Power, an urban farming NGO that serves disadvantaged neighborhoods in Milwaukee and Chicago, attempting to encourage youth of all races to take up diversified farming. In Chicago, black activists and physicians have formed the Healthy Food Hub, a food aggregation NGO which sources produce from a historically black farming community, Pembroke Township, about an hour from Chicago. These efforts show how people can demand greater political agency in building a democratic DFS. New quantitative and qualitative research is badly needed to evaluate and critique the social benefits that DFS may provide in contrast to industrialized systems. In general, further analysis is needed to understand how the social elements of DFS can help generate and regenerate ecosystem services, thus maintaining diversified farming systems. In turn, more research is required on the political and socioeconomic interventions that could help rebuild or sustain the social-ecological cycles that underlie DFS.

DFS are often embedded in social, political, and economic conditions that differ from those accompanying industrialized monocultures, particularly with respect to core stakeholders, markets, and distribution systems. Yet, DFS may not always be able to realize their potential social-ecological benefits due to the lack of enabling environments. We explore how alternative agri-food networks and social movements relate to DFS and assess their potential both to maximize social benefits and to promote DFS through their demands for food sovereignty and food justice. The agri-food systems approach reveals the interconnected systems of inputs, labor, land, capital, governance, and knowledge that maintain specific types of agricultural production, distribution, and consumption systems.
The governance and structure of the food system upstream from the farm, such as international agricultural trade liberalization policies that promote cheap food imports from industrialized into developing countries, government subsidies for fossil fuel-based agrochemicals and commodity crops, and irrigation projects that primarily benefit larger landholders, all help to maintain the industrialized agri-food system. This system then creates substantial obstacles for farmers seeking to use diversified farming methods, generate value from ecosystem services, and sell food products in viable markets. It also leaves consumers and communities disconnected from the origins, qualities, and the social and ecological consequences of the production of their food, fuel, and fiber. In the same way that industrialized monoculture production systems are sustained by industrialized agri-food systems, diversified farming systems are frequently interdependent with alternative agri-food networks.

Wells can be screened continuously along the bore or at specific depth intervals.

This research would be incomplete without a description of the presence of spousal-run dairy farms in the U.S. A spousal-run dairy refers to a dairy that is managed by two operators who are married to one another. There is a historic assumption that many dairy farms are run by spouses; however, this research finds that trends in spousal commercial dairy operations differ greatly by state. In some states, like Wisconsin, New York, and Idaho, a significantly large share of commercial dairy farms was run by spouses, with over 40% of commercial dairy farms in each state being spousal run. In California, 31% of commercial dairy farms are run by spouses, but New Mexico had relatively few commercial dairies run by spouses and a decrease from 15% to 13% from 2012 to 2017. A large share of female core operators of commercial dairies was married to a principal operator in 2012 and 2017. In 2017, Texas had the largest share, with 80% of female core operators married to a principal operator, while Idaho and Wisconsin both had more than 75%. New Mexico had the smallest share of female core operators married to a principal operator, at 48%, but that remains a significant share. Next, the age of commercial dairy operators has been a point of discussion because of the increasing age of dairy farm operators. Table 5.9 presents the share of operators by gender and age group for each Census year and state.

Across all states, the largest share of female operators was in the under-50 age group, with all states following a similar trend of a decreasing share of younger operators and an increasing share of older operators. For male operators, the under-50 age group also had the largest share. There was a significant share of male operators in the older age group categories across all states, with every state but Wisconsin having at least 10% of male operators over the age of 66. Finally, previous literature suggested that women may be more likely to adopt sustainability-minded practices. Regarding organic production, this seems to be true. In 2017, most organic commercial dairies had at least one female core operator, except in New Mexico, where only 17% of organic commercial dairies had at least one female core operator. The share of organic commercial dairies with at least one female operator is larger than the overall share of commercial dairies with at least one female operator. There was an increase in the share of female core operators who operated an organic commercial dairy from 2007 to 2017, though this coincided with the addition of the fourth operator. There has been a slight increase in the share of organic commercial dairies across all states, but in 2017 all states had less than 15% of commercial dairies with organic production.

Organic dairies do tend to have smaller herd sizes in general, and more milk sales revenue per cow. Organic commercial dairies have a larger share of female core operators than commercial dairies overall in all states except New Mexico. In 2017, organic commercial dairies reported a 30% or greater share of female core operators, except New Mexico, which had only an 8% share of female core operators. In every state except New York, there was an increase in the share of female core operators who manage an organic commercial dairy. The share of female core operators who manage an organic dairy decreased by 28% in New York but increased by 66% in Idaho.

Next, I turn to explore the relationship between farm size, the gender demographics of farm operators, and spousal-run operation. The COA is panel data, meaning that it is both time series and cross-sectional in nature. For my analysis, I utilize a log-linear model with fixed effects in order to account for cross-state and cross-time differences. The farm size variables of the individual farm i at time t are the logged dependent variables, including Cows_it (the number of milk cows), TMD_it (total sales revenue from dairy or milk), and TVP_it (total value of production). I utilize farm-level operator characteristic variables, including a binary variable for the presence of a female core operator, the share of female operators on the individual farm, and a binary variable that indicates a spousal-run farm.

Furthermore, I included a variable to control for a relationship between the age demographics of operators and farm size. MaxAge_it describes the maximum age listed by any given core operator on an individual commercial dairy. Table 5.13 shows the list of variables used in the regressions and their corresponding definitions. In addition, α_i and λ_t represent the state fixed effect and the time fixed effect, respectively, and u_it is an error term. X_it represents a vector of farm operator characteristics and farm management characteristics. log FarmSize_it represents a vector of the logged farm size variables listed above. Equation 1 is the regression equation used to show the relationship between the presence of a female operator and farm size, accounting for age, state, and year influences on farm size. Table 5.14 shows the coefficients and standard errors of each regression. Concerning the number of milk cows, the presence of at least one female core operator relates to a decrease in herd size of about 12.9%, holding constant age, state, and year influences on farm size. With herd size, when accounting for the presence of a female operator, a one-year increase in the maximum age corresponds to an increase in herd size of 0.5%. The presence of at least one female core operator also suggests a decrease in the total value of production of 31% and in all milk or dairy sales of about 13.4%. So, across all farm size measures, the results are relatively similar. A one-year increase in the maximum age of any core operator relates to an increase in the total value of production by about 0.7%.

A water well is a hole, shaft, or excavation used for the purpose of extracting ground water from the subsurface. Water may flow to the surface naturally after excavation of the hole or shaft; such a well is known as a flowing artesian well. More commonly, water must be pumped out of the well. Most wells are vertical shafts, but they may also be horizontal or at an inclined angle.
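As a rough illustration of the log-linear fixed-effects approach described above, the specification can be sketched on synthetic data. Everything below — coefficients, state and year counts, variable values — is invented for demonstration; this is not the dissertation's actual estimation code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic farm-level panel: 4 states x 3 census years x 50 farms.
states = np.repeat(np.arange(4), 3 * 50)
years = np.tile(np.repeat(np.arange(3), 50), 4)
female = rng.integers(0, 2, size=states.size)    # presence of a female core operator
max_age = rng.uniform(35, 75, size=states.size)  # max age among core operators

# Generate log herd size with made-up "true" effects (-0.13 female, +0.005 age).
state_fe = np.array([0.0, 0.3, -0.2, 0.5])[states]  # alpha_i
year_fe = np.array([0.0, 0.1, 0.2])[years]          # lambda_t
log_cows = (5.0 - 0.13 * female + 0.005 * max_age
            + state_fe + year_fe + rng.normal(0, 0.1, states.size))

# Log-linear fixed-effects OLS: regress log herd size on operator
# characteristics plus state and year dummies (one level dropped each).
X = np.column_stack([
    np.ones(states.size),
    female,
    max_age,
    *(np.eye(4)[states].T[1:]),  # state dummies (drop first)
    *(np.eye(3)[years].T[1:]),   # year dummies (drop first)
])
beta, *_ = np.linalg.lstsq(X, log_cows, rcond=None)
print(f"female coefficient: {beta[1]:.3f}")
```

A coefficient of about -0.13 on the female dummy corresponds to roughly a 13% smaller herd (exp(-0.13) - 1 ≈ -12%), matching the interpretation style used in the text.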
Horizontal wells are commonly used in bank filtration, where surface water is extracted via recharge through river bed sediments into horizontal wells located underneath or next to a stream. The oldest known wells, qanats, are hand-dug horizontal shafts extending into the mountains of the old Persian empire in present-day Iran. Some wells are used for purposes other than obtaining ground water; oil and gas wells are examples of this, as are monitoring wells for groundwater levels and groundwater quality. Still other purposes include the investigation of subsurface conditions, shallow drainage, artificial recharge, and waste disposal. In this publication we focus on vertical water-production wells commonly used to supply water for domestic, municipal, and agricultural uses in California. Our purpose is to provide readers with some basic information about water wells to help them understand principles of effective well construction when they work with a professional driller, consultant, or well servicing agency for well drilling and maintenance.

The location of a well is mainly determined by the well’s purpose. For drinking and irrigation water-production wells, groundwater quality and long-term groundwater supply are the most important considerations. The hydrogeological assessment to determine whether and where to locate a well should always be done by a knowledgeable driller or professional consultant. The water quality criteria to use for drinking water wells are the applicable local or state drinking water quality standards. For irrigation wells, the primary chemical parameters of concern are salinity, boron, and the sodium adsorption ratio. Enough ground water must be available to meet the pumping requirements of the wells.

For large municipal and agricultural production wells, pumping rate requirements range from about 500 to 4,000 gallons per minute (gpm). Small- and medium-sized community water systems may depend on water wells that produce from 100 to 500 gpm. Individual homes’ domestic wells may meet their needs with as few as 1 to 5 gpm, depending on local regulations. To determine whether the desired amount of ground water is available at a particular location and whether it is of appropriate quality, drillers and groundwater consultants rely on their prior knowledge of the local groundwater system, experience in similar areas, and a diverse array of information such as land surface topography, local vegetation, rock fracturing, local geology, groundwater chemistry, information on the thickness, depth, and permeability of local aquifers from existing wells, groundwater levels, satellite or aerial photographs, and geophysical measurements. In most cases, the well location is further limited by property ownership, the need to keep surface transportation of the pumped ground water to a minimum, and access restrictions for the drilling equipment. When locating a well, one should also consider the proximity of potential sources of contamination such as fuel or chemical storage areas, nearby streams, sewer lines, and leach fields or septic tanks. The presence of a significant barrier between such potential sources and the well itself is very important for the protection of the well.

Once the well location has been determined, a preliminary well design is completed. For many large production wells, a test hole will be drilled before well drilling to obtain more detailed information about the depth of water-producing zones, confining beds, well production capabilities, water levels, and groundwater quality. The final design is subject to site-specific observations made in the test hole or during the well drilling.
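To give a sense of the kind of back-of-the-envelope check a groundwater consultant might make when judging whether an aquifer can sustain a candidate pumping rate, the widely used Cooper-Jacob approximation to the Theis equation estimates drawdown after a period of pumping. This formula is standard well hydraulics, not from this publication, and the aquifer parameters below (transmissivity, storativity, well radius) are invented placeholder values, not site data:

```python
import math

# Candidate pumping rate: 500 gpm converted to m^3/s (1 gal = 3.785 L).
Q = 500 * 3.785e-3 / 60.0

# Hypothetical confined-aquifer properties (placeholders for illustration).
T = 0.01      # transmissivity, m^2/s
S = 2e-4      # storativity, dimensionless
r = 0.15      # well radius, m
t = 86_400.0  # one day of pumping, s

# Cooper-Jacob approximation: s = Q/(4*pi*T) * ln(2.25*T*t / (r^2*S)).
s = Q / (4.0 * math.pi * T) * math.log(2.25 * T * t / (r**2 * S))
print(f"predicted drawdown after one day: {s:.1f} m")  # about 5 m
```

If the predicted drawdown is large relative to the available water column above the pump intake, the desired rate is not sustainable at that location and a different site, depth, or rate must be considered.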
The overall objective of the design is to create a structurally stable, long-lasting, efficient well that has enough space to house pumps or other extraction devices, allows ground water to move effortlessly and sediment-free from the aquifer into the well at the desired volume and quality, and prevents bacterial growth and material decay in the well. A well consists of a bottom sump, well screen, and well casing surrounded by a gravel pack and appropriate surface and borehole seals. Water enters the well through perforations or openings in the well screen. Screening at specific depth intervals, rather than continuously, is necessary when a well taps multiple aquifer zones, to ensure that screened zones match the aquifer zones from which water will be drawn. In alluvial aquifers, which commonly contain alternating sequences of coarse and fine material, this construction method is much more likely to provide clean, sediment-free water and is more energy efficient than the installation of a continuous screen. Hardrock wells, on the other hand, are constructed very differently. Often, the borehole of a hardrock well will stand open and will not need to be screened or cased unless the hard rock crumbles easily.

The purpose of the screen is to keep sand and gravel from the gravel pack out of the well while providing ample water flow into the casing. The screen should also be designed to allow the well to be properly developed. Slotted, louvered, and bridge-slotted screens and continuous wire wrap screens are the most common types. Slotted screens provide little open area; they are not well suited for proper well development and maintenance and are therefore not recommended. Wire wrap screens or pipe-based wire wrap screens give the best performance. The additional cost of wire wrap screens can be offset by installing screen sections only in the most productive formations along the borehole.
The purposes of the blank well casing between and above the well screens are to prevent fine and very fine formation particles from entering the well, to provide an open pathway from the aquifer to the surface, to provide a proper housing for the pump, and to protect the pumped ground water from interaction with shallower ground water that may be of lower quality. The annular space between the well screen, well casing, and borehole wall is filled with gravel or coarse sand. The gravel pack prevents sand and finer particles from moving from the aquifer formation into the well. The gravel pack does not exclude fine silt and clay particles; where those occur in a formation, it is best to use blank casing sections.

There has also been significant analysis of farm structure changes in the dairy industry.

Although dairy farm size can be characterized for the U.S. overall, there are important distinctions by state, as dairy farm size distributions differ greatly by state, and it is important to distinguish growth patterns of dairy farms by state. MacDonald et al. detail that larger dairy farms are able to capture economies of scale more so than smaller dairies, resulting in a lower average milk production cost. However, the article goes on to specify that the distribution of dairy farm size differs greatly by state based on the specific financial and economic environment of the dairy industry in that state. Alternatively, some dairy farms lower average milk production costs by capturing economies of scope, i.e., diversification of sales. This could take the form of raising and selling replacement dairy heifers, or producing other agricultural products such as grain, to maintain economic viability. Finally, I consider the relationship that farm operator characteristics may have with farm size and the decision of a farm to exit. In Chapter Five, I detail a specific line of analysis related to the influence of female farm operators on farm size, but in this chapter, I will discuss the influence that the age of the farm operator may have on farm size.

How dairy farm size changes in response to these and other factors is important in considering future trends in farm size and their impact on milk production in the U.S. and the future structure of the dairy industry. This chapter aims to characterize the herd size distributions of the U.S. dairy industry, present evidence on the characteristics of the farm size distributions, and finally discuss the correlation between farm-level characteristics and farm size. This chapter is structured as follows: a brief overview of previous literature on firm and farm size, a discussion of farm size distribution estimation, and then the results and discussion.

Economic research and discussion have produced several theories of firm size and firm growth to characterize industries and the economy. This section will briefly review important studies related to firm size generally and then move on to research specific to the study of farm size and the economics of dairy farm size and size distributions. The study of firm size by economists can best be discussed chronologically, as much of the research builds off earlier work or finds results inconsistent with previously held theories. In 1931, Gibrat postulated what has come to be known as Gibrat’s Law: that a firm’s growth rate is independent of its size.

This would mean that the growth rate of an individual firm over a particular time period should not be influenced by its original size. Ijiri et al., using the foundation built by Gibrat’s Law, find that firms that grew by over 10% in one period are more likely to see above-industry-average growth in subsequent periods, due to the continued benefits of the innovation that drove the initial growth. Viner theorizes that the firm size distribution is based on the industry environment, and that individual firms have a U-shaped average cost curve and will operate at the minimum of this curve. He goes on to specify that firm entries and exits are determined by the quantity demanded by the market. Lucas used these previous works to build a new theory of the size distribution of firms in an industry, treating the size distribution as a solution to output maximization given a set of production factors and managers with varied human capital levels. This model predicts the size distribution of firms based on the managerial ability of laborers and the subsequent resource allocation. Jovanovic finds that smaller firms tend to have higher growth rates than larger firms, but that these smaller firms are more likely to exit the industry than larger firms.
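Gibrat's Law can be illustrated with a short simulation (the firm count, starting size, and shock magnitude here are arbitrary choices for the sketch): when every firm receives multiplicative growth shocks that are independent of its current size, log sizes become a sum of i.i.d. terms, so the size distribution drifts toward log-normal.

```python
import numpy as np

rng = np.random.default_rng(42)

# Start 10,000 identical firms and apply 100 periods of multiplicative
# growth shocks drawn independently of current size (Gibrat's Law).
n_firms, n_periods = 10_000, 100
size = np.full(n_firms, 100.0)
for _ in range(n_periods):
    size *= rng.lognormal(mean=0.0, sigma=0.05, size=n_firms)

# Log sizes are then a sum of i.i.d. shocks, so the size distribution
# converges to log-normal: log(size) should look normal (skewness ~ 0).
logs = np.log(size)
skew = np.mean(((logs - logs.mean()) / logs.std()) ** 3)
print(f"skewness of log sizes: {skew:.3f}")
```

This is the mechanism behind the log-normal benchmark discussed below; empirical departures from it (such as Pareto tails) are evidence that growth is not fully size-independent.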

Evans discusses growth relative to a firm’s age, finding that a firm’s growth can be tied to the age of the firm itself and that older firms have slower growth rates. This theory is hypothesized to remain true for dairy farms. Stemming from the foundation of Gibrat’s Law, which implies that the firm size distribution follows a log-normal distribution, there has been significant literature on the size distribution of firms that fits parametric distributions to actual firm size data. Kondo, Lewis, and Stella evaluate recent non-farm panel data from the U.S. Census Bureau and find that current U.S. firm size data best fits a log-normal distribution, though goodness of fit differs by industry. Akhundjanov and Toda use the data from Gibrat’s original paper and find that a Pareto distribution better characterizes the empirical size distributions. The distribution of firm size remains a fundamental part of research on firm growth patterns, and the literature on firm size has been directly applied to research on the growth rate of farms and farm size changes in different agricultural industries. Two common parametric distributions used in farm size distribution analysis are the log-normal and the exponential. Allanson evaluates farm size trends in England and Wales, finding that the log-normal distribution fits farm size measures relatively well across time, whereas Boxley uses an exponential distribution to evaluate farm size data from the Agricultural Census and finds that from 1935 to 1964 farm size shifted to the right, but that at the state level farm size does tend to follow the exponential distribution with some regularity. Before going any further in the analysis, it is important to outline the concept of farm size for this analysis. Farm size measures applied across the whole agricultural industry tend to leave out key details that would give better and more accurate accounts of the size of the farm for a given commodity or industry.
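A minimal sketch of this kind of distribution fitting, comparing log-normal and exponential candidates by maximum likelihood on synthetic herd-size data (the parameters and sample size are invented; the cited studies use richer methods and real census data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical herd sizes drawn from a log-normal (parameters invented).
herd = rng.lognormal(mean=4.5, sigma=0.9, size=2_000)

# Log-normal MLE: mu and sigma are the mean and std of log sizes.
mu, sigma = np.log(herd).mean(), np.log(herd).std()
ll_lognorm = np.sum(-np.log(herd * sigma * np.sqrt(2 * np.pi))
                    - (np.log(herd) - mu) ** 2 / (2 * sigma ** 2))

# Exponential MLE: the rate is 1 / sample mean.
lam = 1.0 / herd.mean()
ll_exp = np.sum(np.log(lam) - lam * herd)

print(f"log-likelihoods: lognormal={ll_lognorm:.0f}, exponential={ll_exp:.0f}")
```

On data generated from a log-normal, the log-normal fit attains the higher log-likelihood; on real farm size data the ranking is an empirical question, which is exactly what the studies above investigate.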
For example, when looking at the size of U.S. farms overall, measuring the size of the farm based on acreage will lead to inaccurate or confusing results: the acreage needed to generate the same revenue from corn versus dairy milk or strawberries is substantially different. Looking at the dairy industry specifically, many different characteristics shape a dairy’s economic footprint on the market, and therefore defining how to characterize dairy farm size is fundamental to discussing changes in the dairy market. One can characterize the size of a dairy by the number of milk cows, or herd size, as one measure of dairy firm size. However, other characteristics such as the quantity of milk produced, the value of production, and value added on the farm could also be considered farm size measures. Different farm size measures allow us to answer different agricultural economic questions. While analyzing the dairy industry it is relevant to consider herd size, the milk and/or dairy sales revenue of the firm, and the total value of production, as discussed in Chapter 2. Previous research on dairy farm size documents strong trends toward consolidation in the U.S., with a decrease of about 50% in all registered U.S. dairies from 2002 to 2019. These trends in consolidation have differed by location, with historically dairy-producing regions seeing a large share of exits; these states were historically made up of smaller and mid-size dairies. MacDonald et al. detail the cost differences between larger and smaller dairies, with cost advantages for larger dairies driving the investment decision to increase herd size. This research suggested that there would continue to be a steady decline in the number of smaller and mid-size dairies and that the trend of consolidation would likely continue. This trend has raised research questions about what factors influence the distribution of farm size and the decisions of some farms to exit the industry.
A common, albeit incorrect, assumption about the size distribution of the U.S. dairy industry is that it is bimodal.

This assumption comes from news reporting and political commentary that there is a “declining middle” of farms in the U.S. and a dichotomy between small, sometimes organic, farms and larger farms. However, Wolf and Sumner find no evidence of a bimodal dairy industry using Farm Cost and Return Surveys of dairy farms for the years 1989 and 1993. MacDonald et al. suggest that larger dairies tend to have lower costs per cow, which allows them to capture greater economies of scale. The cost-minimizing efforts of individual dairy farms will influence the specific farm management choices that the farm makes, as only the individual farm has a true sense of where it sits on its long-run average cost curve. Some of these management decisions include the dairy’s strategy to capture economies of scope through sales diversification, or to vertically integrate to minimize input and production costs. Sumner and Wolf find that vertical integration has little influence on farm size and that the tendency for farms in the Pacific and South to have larger herd sizes remains true even when accounting for levels of vertical integration. The farm’s choice to incorporate different management strategies reflects the incentives and constraints that the farm faces, i.e., the influences of geographic location and capital. Other influences on management choices by dairies are due in part to the different environmental regulations in each state that impact the average cost of production for dairy farms.

There has been a significant amount of agricultural economic research on dairy farm size with respect to risk management and technical efficiency. Tauer finds that smaller dairies in New York do have a higher average cost of production than dairies with larger herd sizes, but that these higher costs are due to inefficiencies, and efficient small dairies are competitive with the larger dairies.
Tauer and Mishra examine whether differences in technology or efficiency explain the higher costs that smaller dairy farms face and, using a frontier cost-of-production analysis, find that inefficiencies in smaller dairies account for the higher costs, not technological differences. Zimmermann and Heckelei utilize a Markov chain model of dairies in the European Union to characterize farm size change and find that regional characteristics such as off-farm opportunities and unemployment rates are significant in relation to dairy farm size change. They also find that high milk prices slow down farm size change, due to the correlation of high milk prices with uncertainty and price volatility, which leads to a decrease in investment. Wolf details how dairy farms in Michigan increased their use of risk management tools from 1999 to 2011 and finds that the use of such tools was positively correlated with measures of dairy farm size. This research also discusses how age related to risk management adoption, with younger dairy farmers being less likely to utilize the risk management tools. Beyond management decisions influencing or being correlated with farm size and farms’ decisions to exit, previous economic literature has hypothesized about the possible influences of operator characteristics, like human capital, the number of female operators, the age of operators, or other farm operator characteristics, on farm size. Sumner and Leiby find that human capital positively influences the size of the farm, and this is hypothesized to be due to increasing opportunity costs for dairy farmers with high levels of human capital. Dairy farmers who have the possibility of making more money elsewhere will do so; therefore, it seems likely that dairy farms with sufficient returns, which tend to be found among larger dairy farms, will attract high-human-capital management.
Another aspect of the previous research related to farm size and the dairy industry is farm exits. There have been several studies of individual farm movement across farm size groups and characterization of exits. Most of this literature, however, has been limited to regions or states. MacDonald et al. find that in 2016 about 40 percent of dairy farms with at least 2,000 milk cows did not have positive net returns and that the share of dairies without positive net returns increased as herd size decreased. However, they do note that negative returns in the dairy industry are seen as temporary lows by dairy operators, so they do not serve as a direct indication of an expected exit from the industry. Other reasons for exits from agriculture, or dairy specifically, include increased suburbanization of previously agricultural land, driving land prices up, and strong local economies, opening off-farm employment opportunities for farm operators.

We performed a detailed validation study on a full-scale Darrieus H-type VAWT

The Micon 65/13M wind turbine was used for the Long-Term Inflow and Structural Testing program at the USDA-ARS test facility in Bushland, Texas. This project was initiated by Sandia National Laboratories in 2001 to explore the use of carbon fiber in wind turbine blades. The wind turbine stands on a tubular steel tower with a base diameter of 1.9 m. The drive train generator operates at 1200 rpm, while the rotor spins at a nominal speed of 55 rpm. The wind turbine is equipped with CX-100 blades, whose structural model, used in the current FSI simulations, was validated in Section 3.4.1. FSI simulations of the full Micon 65/13M wind turbine are carried out at realistic operational conditions. A constant inflow wind speed of 10.5 m/s and a fixed rotor speed of 55 rpm are prescribed. These correspond to the operating conditions reported for the field tests in [84]. The air density and viscosity are 1.23 kg/m3 and 1.78×10−5 kg/(m·s), respectively. Zero-traction boundary conditions are prescribed at the outflow, and no-penetration boundary conditions are prescribed at the top, bottom, and side surfaces of the outer computational domain. No-slip boundary conditions are prescribed at the rotor, nacelle, and tower, and are imposed weakly. Figure 4.3 shows the computational domain and Figure 4.4 the mesh used in this study. The mesh consists of 5,134,916 linear elements, which are triangular prisms in the rotor boundary layers and tetrahedra everywhere else in the domain. The mesh is refined in the rotor and tower regions for better flow resolution near the wind turbine.

The size of the first element in the wall-normal direction is 0.002 m, and 15 layers of prismatic elements were generated with a growth ratio of 1.2. Figure 4.4 shows a 2D blade cross-section at the 70% spanwise station to illustrate the boundary-layer mesh used in the computations. The computations were carried out in a parallel computing environment. The mesh is partitioned into subdomains using METIS, and each subdomain is assigned to a compute core. The parallel implementation of the methodology may be found in [95]. The fluid and structural equations are integrated in time using the Generalized-α method with a time-step size of 3.0 × 10−5 s for all cases. In each time step, block-iterative FSI coupling is employed, which is efficient and stable for the application considered here.

In Figure 4.5 the time history of the aerodynamic torque is plotted. As can be seen from the plot, using FSI, we capture the high-frequency oscillations caused by the bending and torsional motions of the blades. In the case of the rigid blade, the only high-frequency oscillations in the torque curve are due to the trailing-edge turbulence. For the rigid-blade case the effect of the tower on the aerodynamic torque is more pronounced, while in the case of FSI it is not as visible due to the relatively high torque oscillations. The “dips” in the aerodynamic torque can be seen at 60◦, 180◦, and 300◦ azimuthal angle, which is precisely when one of the three blades is passing the tower. The computed values of the aerodynamic torque are plotted together with the field-test results. The upper and lower dashed lines indicate the aerodynamic torque bounds, while the middle dashed line gives its average value. Both the aerodynamic and FSI results compare very well with the experimental data.

We present a preliminary, ongoing FSI simulation of a 5 MW offshore wind turbine undergoing yawing motion. The wind turbine is equipped with 61 m blades designed by Sandia.
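For readers unfamiliar with the Generalized-α time integration used above, a minimal sketch for a single-degree-of-freedom linear oscillator is given below. This is an illustrative toy, not the actual FSI solver: the parameterization follows the standard Chung–Hulbert form, where a user-chosen spectral radius at infinity, rho_inf, controls the numerical dissipation of high-frequency modes.

```python
import math

def generalized_alpha(m, c, k, d0, v0, dt, n_steps, rho_inf=1.0, force=lambda t: 0.0):
    """Integrate m*d'' + c*d' + k*d = f(t) with the Generalized-alpha method."""
    # Chung-Hulbert parameters from the spectral radius at infinity
    am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
    af = rho_inf / (rho_inf + 1.0)
    beta = 0.25 * (1.0 - am + af) ** 2
    gamma = 0.5 - am + af
    d, v = d0, v0
    a = (force(0.0) - c * v - k * d) / m  # consistent initial acceleration
    hist = [(0.0, d)]
    for n in range(n_steps):
        t1 = (n + 1) * dt
        tf = t1 - af * dt  # intermediate time level t_{n+1-alpha_f}
        # Newmark predictors: parts of d_{n+1}, v_{n+1} not involving a_{n+1}
        dp = d + dt * v + dt * dt * (0.5 - beta) * a
        vp = v + dt * (1.0 - gamma) * a
        # solve the balance law at the intermediate level for a_{n+1}
        lhs = m * (1.0 - am) + c * (1.0 - af) * gamma * dt + k * (1.0 - af) * beta * dt * dt
        rhs = (force(tf) - m * am * a
               - c * ((1.0 - af) * vp + af * v)
               - k * ((1.0 - af) * dp + af * d))
        a1 = rhs / lhs
        d = dp + beta * dt * dt * a1
        v = vp + gamma * dt * a1
        a = a1
        hist.append((t1, d))
    return hist
```

With rho_inf = 1 the scheme introduces no numerical dissipation, so an undamped oscillator released from rest retains its amplitude over a full period; lowering rho_inf damps spurious high-frequency content, which is why values below 1 are typical in FSI.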

The structural model of a blade used in the current FSI simulations was validated in Section 3.4.2. The wind turbine rotor is positioned at 80 m above ground and is tilted by 5◦ to avoid the blades hitting the tower as the rotor spins. Furthermore, the wind turbine rotor plane is initially placed at 15◦ relative to the wind direction. A fixed yawing rotational speed of 0.03 rad/s is applied to the gearbox to slowly turn the rotor into the wind. The inflow wind speed is set to 11.4 m/s. The initial rotor speed is set to 12.1 rpm, and the rotor is allowed to spin freely during the prescribed yawing motion. The structural mechanics mesh of the full turbine has 13,273 quadratic NURBS shell elements and two quadratic NURBS beam elements. The aerodynamics mesh has a total of 5,458,185 linear elements. Triangular prisms are employed in the blade boundary layers, and tetrahedral elements are used elsewhere in the aerodynamics domain. The size of the first boundary-layer element in the wall-normal direction is 1 cm, and a time step of 0.0001 s is employed in the computation. Snapshots of the deformed configuration of the structure are shown in Figure 4.10, while isosurfaces of vorticity colored by flow speed are shown in Figure 4.11. Figures 4.12 and 4.13 show the time history of the axial component of the aerodynamic torque and the angular speed. Both slowly increase as the rotor turns into the wind, as expected. The level of the computed aerodynamic torque is consistent with earlier simulations for this wind turbine operating under similar wind- and rotor-speed conditions.

We present an FSI simulation of a 1.2 kW VAWT, which is a three-bladed, medium-solidity Darrieus turbine designed by Windspire Energy. The details of the wind turbine geometry, together with aerodynamic validation using field-test data, are presented in Section 2.3.2. The structural model is presented in Figure 4.14.

The rotor and struts are made of aluminum, and the tower is made of steel. Quadratic NURBS are employed for both the beam and shell discretizations. The total number of beam elements is 116, and the total number of shell elements is 7,029. As part of the FSI simulations, we perform a preliminary investigation of the startup issues in VAWTs using the FSI methodology described earlier and the structural model of the Windspire design. We fix the inflow wind speed at 11.4 m/s and consider three initial rotor speeds: 0 rad/s, 4 rad/s, and 12 rad/s. Of interest is the transient response of the system. In particular, we focus on how the rotor angular speed responds to the prescribed initial conditions, and on the range of the tower tip displacement during the VAWT operation. The VAWT is allowed to spin freely and accelerate under the action of the ambient wind. The time step in the computations is set to 2.0 × 10−5 s. The mesh moving technique described in Section 4.2 is applied to this case in a straightforward fashion. The radius and height of the inner cylindrical domain that encloses the rotor are 1.6 m and 7 m, respectively. That is, the cylindrical domain extends 0.5 m above and below the rotor blades. The rotor axis direction nrot is defined according to Eq. , where the points xori and xtip are located at the bottom and top intersections of the tower beam and shell, respectively. The instantaneous rotor angular velocity is computed from Eq. , the spinning component is removed as per Eq. , and the two angular velocities are used to update the sliding-interface mesh positions. The fluid mesh was adopted from the aerodynamics simulations presented in Section 2.3.2. The time history of the rotor speed is shown in Figures 4.15–4.17. For the 0 rad/s case the rotor speed begins to increase, suggesting this configuration is favorable for self-starting. For the 4 rad/s case, the rotor speed has a nearly linear acceleration region followed by a plateau region.
In [16] the plateau region is defined as the regime when the turbine operates at nearly constant rotational speed. From the angular position of the blades in Figure 4.16 it is evident that the plateau region occurs approximately every 120◦, when one of the blades is in a stalled position. It lasts until the blade clears the stalled region and the lift forces are sufficiently high for the rotational speed to start increasing again. As the rotational speed increases, the angular velocity starts to exhibit local unsteady behavior in the plateau region. While the overall growth of the angular velocity for the 4 rad/s case is promising for the VAWT to self-start, the situation is different for the 12 rad/s case. Here the rotor speed has little dependence on the angular position and stays nearly constant, close to its initial value. It is not likely that the rotor speed will reach operational levels in these conditions without an applied external torque or a sudden change in wind speed, which is consistent with the findings of [17]. Figure 4.18 shows, for the full turbine, a snapshot of vorticity colored by flow speed for the 4 rad/s case. Figure 4.19 zooms in on the rotor and shows several flow vorticity snapshots during the rotation cycle.
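The qualitative startup behavior described above, acceleration at low speed followed by entrapment below the operational speed, can be illustrated with a deliberately simple rigid-rotor model. The torque curve below is invented for illustration (it is not the computed aerodynamic torque): a cubic whose sign pattern creates a dead band between two stable equilibria, so a rotor released at low speed stalls at the band floor while one started above the band accelerates to the operating speed.

```python
def simulate_startup(omega0, inertia=1.0, dt=0.01, t_end=200.0):
    """Toy rigid-rotor spin-up: I * domega/dt = Q(omega), forward Euler.

    Q is a made-up cubic mimicking a dead band: positive below 4 rad/s,
    negative in the 4-8 rad/s band, and positive again up to a stable
    operating speed at 13 rad/s.
    """
    def torque(w):
        return -0.01 * (w - 4.0) * (w - 8.0) * (w - 13.0)

    omega = omega0
    for _ in range(int(t_end / dt)):
        omega += dt * torque(omega) / inertia
    return omega
```

Releasing the rotor from rest drives it only to the dead-band floor near 4 rad/s, while an initial speed above the band (e.g. 9 rad/s) lets it reach the stable operating point near 13 rad/s, which is the role that external forcing plays in the discussion above.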

The figures indicate the complexity of the underlying flow phenomena and the associated computational challenges. Note the presence of quasi-2D vortex tubes that are created due to massive flow separation, and that quickly disintegrate and turn into fine-grained 3D turbulence further downstream. Figure 4.20 shows the current configuration of the turbine at two time instances during the cycle for the 4 rad/s case. The displacement is mostly in the direction of the wind; however, lateral tower displacements are also observed as a result of the rotor spinning motion. The displacement amplitude is around 0.10–0.12 m, which we find reasonable given the tower height of 9 m and the VAWT design objective that the structure not be too flexible. This is also the case for the 0 rad/s and 12 rad/s cases.

In this dissertation, more advanced FSI simulations of wind turbines, such as rotor yawing for HAWTs and full-machine FSI of VAWTs, were targeted. A structural model of the wind turbine designs was constructed and discretized using the recently proposed isogeometric rotation-free shell and beam formulations. This approach presents a good combination of accuracy, due to the representation of the structural geometry using smooth, higher-order functions, and efficiency, due to the fact that only displacement degrees of freedom are employed in the formulation. By constructing a detailed material model of the wind turbine blade with a non-symmetric, multilayer layup, we were able to reproduce the experimentally measured eigenfrequencies of the CX-100 blade of the Micon 65/13M HAWT. To our knowledge, this is the first full-scale validation of the IGA-based thin-shell composite formulation. The ALE-VMS technique for aerodynamics modeling was augmented with an improved version of the sliding-interface formulation, which allows the interface to move in space as a rigid object and accommodate the global turbine deflections in addition to the rotor spinning motion.
The pure aerodynamics computation produced good agreement with reported wind-tunnel and field-test data. A simulation of two side-by-side wind turbines was also performed. Using novel mesh moving techniques, we were able to simulate a large-scale 5 MW HAWT undergoing yawing motion. We also presented FSI simulations of the full-scale Micon 65/13M wind turbine with the CX-100 blades mounted on its rotor. The results of the aerodynamic and FSI simulations show good agreement with field-test data for this wind turbine. The FSI simulation captures high-frequency oscillations in the aerodynamic torque, which are caused by the blade structural response. In future work we plan to explore methods and devices to mitigate such high-frequency rotor vibrations. Dynamic FSI modeling of VAWTs in 3D and at full scale was reported for the first time in this dissertation, with an investigation of turbine start-up issues. From the FSI computations we see that, for the given wind conditions, the rotor naturally accelerates at lower values of angular speed. However, as the angular speed grows, the rotor may encounter a dead-band region. That is, the turbine self-starts, but then it is trapped at a lower rotational speed than is required for optimal performance, and some additional input is required to get the rotor to accelerate further. There may be multiple dead-band regions that the turbine needs to overcome, with external forcing applied, before it reaches the target rotational speed. In the future, to address some of these issues, we plan to couple our FSI formulation with an appropriate control strategy to simulate more realistic VAWT operation scenarios. The numerical examples presented in this dissertation illustrate the successful application of the proposed techniques to the FSI simulation of wind turbines at full scale.

It has been reported that bacterial loads associated with the enormous amount of animal waste produced in the U.S. are the leading cause of impairment for rivers and streams.

The relative heights of the two peaks differ in different age groups

As there were multiple observations per individual, a random intercept was used for individuals. Utilization of places for each person was estimated using two different approaches. The first method was checking whether more than two temporally consecutive GPS points of a person fall within a polygon designated for the person's home, farms, or forests on each day. This is equivalent to checking if a person spent at least an hour within the same polygon. For each participant, the number of days spent in each category of place was divided by the total number of days of participation during the study period to obtain the proportion of being at the respective places. The second method estimated the utilization of places by a biased random bridge (BRB) technique. Unlike prior methods for estimation of utilization of places, such as location-based kernel density estimations, BRB takes the activity time between successive relocations into account and models space utilization as a time-ordered series of points to improve accuracy and biological relevance while adjusting for missing values. BRB estimates the probability of an individual being in a specific location during the study time period and can be used to estimate home range. To parameterize BRB models for each individual, we considered points collected more than three hours apart to be uncorrelated. However, two temporally consecutive points that are deemed uncorrelated by the prior cutoff may in fact be correlated. Without manually adding points between them, this method will underestimate the usage of homes. An individual is considered stationary when the distance between two consecutive points is less than 10 meters.
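The first utilization method above can be sketched as follows. The data and place definitions are hypothetical, and the place polygons are simplified to axis-aligned bounding boxes for brevity; a "stay" on a day requires at least three temporally consecutive fixes inside the same box, mirroring the "more than two consecutive GPS points" rule.

```python
from collections import defaultdict

def day_place_proportions(points, places, min_run=3):
    """Proportion of participation days with a qualifying stay in each place.

    `points` is a time-ordered list of (day, x, y) GPS fixes; `places` maps
    a place name to a bounding box (xmin, ymin, xmax, ymax).
    """
    def inside(box, x, y):
        xmin, ymin, xmax, ymax = box
        return xmin <= x <= xmax and ymin <= y <= ymax

    by_day = defaultdict(list)  # group fixes by calendar day
    for day, x, y in points:
        by_day[day].append((x, y))

    visited = defaultdict(set)  # place -> days with >= min_run consecutive fixes
    for day, fixes in by_day.items():
        for name, box in places.items():
            run = 0
            for x, y in fixes:
                run = run + 1 if inside(box, x, y) else 0
                if run >= min_run:
                    visited[name].add(day)
                    break
    n_days = len(by_day)
    return {name: len(visited[name]) / n_days for name in places}
```

Dividing the per-place day counts by the total participation days gives the proportions used in the study; the BRB approach would replace the box test with a probabilistic utilization distribution.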

The minimum standard deviation in relocation uncertainty is set at 30 meters. For each individual, estimation of the usage of different places was done for the whole study period and for each season as described below. In Central and Southern Myanmar, the monsoon rain starts in mid-May and ends in mid-October. Therefore, we split the data on 15th May 2017 and 15th October 2017, and the period between the two dates was regarded as the “rainy season”. Mid-October to mid-March is the “cool and dry season”, and mid-March to mid-May is the “hot and dry season”. The two dry seasons were combined simply as the “dry season” in some of the analyses.

The violin plot of the maximum daily Euclidean distances traveled in kilometers on a log10 scale shows that there is a bimodal distribution for all three age groups. The violin plot is a hybrid of a kernel density plot and a box-plot with the axes flipped that is particularly useful for describing data with multi-modal distributions. In the figure the vertical axis is the distance value in kilometers with the smallest value at the bottom, and the horizontal axis shows the density value. The heights and peaks in the following results refer to the width/broadness of the violins along the horizontal axis. The first peak was between 0.01 and 0.1 kilometers and the second peak was between 1 and 10 kilometers. For the under-20 group, the first peak is over 20% higher compared to the second peak. The difference between the two peaks in the other two age groups is less than 10%. The Wilcoxon rank-sum tests provided evidence that the 20–40 and over-40 age groups have greater maximum daily Euclidean distances away from home compared to the under-20 age group on average. Further disaggregation of this data by gender and age group can be found in the Extended data: Figure S4.

Participants may make trips that last several days, either because their destination could not be reached within a single day or because they stayed at their destination for several days. Using a buffer radius of 266 meters around their home GPS points as their home locations, we calculated the number of consecutive days they spent away from home. Aside from two participants, all other participants had at least one trip with more than two consecutive days away from home during their participation period. Trips of less than 10 consecutive days are the most frequent among the participants. There are male outliers over 20 years old who took shorter consecutive-day trips over 10 times. Making trips of over 10 consecutive days was relatively uncommon, but 21 participants still made at least one trip of over 20 consecutive days away from home. For each participant, we identified the number of days spent at farms, forests, or at one's home, and looked for an association between farm visits and forest visits. Here we assumed that having at least two GPS points in the polygon of a particular place constitutes using the respective place for that day, and that a person can be at various types of places in a single day. We found that if a person spent a higher proportion of days at the farms, she or he was likely to spend a lower proportion of days at the forests, and vice versa, even though being at the farms and being in the forests are both possible on the same day. Figure 2 shows the distribution of the proportion of the number of days spent at the farms, forests, or home for different age groups. All participants were found to be at their respective homes for the majority of days. Compared to other age groups, the 20–40 age group had a higher proportion of time spent in the forests. The under-20 group had the highest proportion of time spent in the farms on average, followed by the 20–40 age group.
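The consecutive-days-away metric can be sketched as below; the 266 m buffer is taken from the text, while the day-grouped fixes and home coordinates in the test are synthetic. A day counts as "away" if no fix that day falls within the buffer, and gaps with no data break a run.

```python
import math

def max_consecutive_days_away(daily_fixes, home, radius_m=266.0):
    """Longest run of consecutive days spent entirely outside the home buffer.

    `daily_fixes` maps an ordinal day number to a list of (lat, lon) GPS
    fixes; `home` is the (lat, lon) of the home location.
    """
    def haversine_m(p, q):
        # great-circle distance in metres on a spherical Earth
        r = 6_371_000.0
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    best = run = 0
    prev_day = None
    for day in sorted(daily_fixes):
        away = all(haversine_m(home, fix) > radius_m for fix in daily_fixes[day])
        if away and run > 0 and day == prev_day + 1:
            run += 1          # extends an ongoing away run
        elif away:
            run = 1           # starts a new away run (after home day or gap)
        else:
            run = 0           # a day at home breaks the run
        prev_day = day
        best = max(best, run)
    return best
```

Histogramming this value per participant would reproduce the multiday-trip summary discussed above.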

We also combined the geographic information of farms and forests with the place utilization estimated from the biased random bridge algorithm, and calculated the utilization of each specific place over the study period. An example of the place utilization of a person can be seen in Figure 3. On average, participants in the under-20 age group spent 20.0% and 2.2% of their time in farms and forests, respectively. For the participants from the 20–40 age group the percentages are 7.6% and 7.4%, and for those in the over-40 age group, the percentages are 7.2% and 3.8%, respectively.

Being in the farms and forests at night might impose increased risks of diseases such as malaria because of potential exposure to important mosquito vector species. As seen in Figure 4, we looked at the total number of nights participants spent in the farms or in the forests. Two female participants spent at least a night in the farm, compared to 22 male participants. As for spending at least a night in the forest, there were 21 males and only one female. Most participants in the 20–40 age group spent at least one night in the farm and in the forest, whereas fewer than 35% of participants from the under-20 and over-40 age groups spent a night in such places. The negative binomial regression provided strong evidence that males in this cohort were more likely to spend nights in farms and in forests compared to females, and that young adults were more likely to spend nights in the forest compared to the under-20 age group, after controlling for the remaining variables. Participants may spend consecutive nights in the farms or the forests without going back home. The number of consecutive nights spent in the farms or the forests is a subset of the multiday trips mentioned in the previous section. Figure 5 quantifies this metric for different age groups and genders. Persons of all age groups and genders spent varying numbers of consecutive nights in the farms.
An under-20 male spent the most consecutive nights in the farm. A female of the 20–40 age group and a male of the over-40 age group spent two episodes of 11–15 consecutive nights in the farm. In contrast, there was little demographic heterogeneity among those who spent consecutive nights in the forests. A few males of the 20–40 age group not only spent long periods of consecutive nights, but also frequently spent many short periods of consecutive nights in the forests.

Many detailed human movement studies have been done, mainly in regions of high socio-economic status. Our study presents an analysis of human movement in a remote rural area that has been under-studied with regard to human ecology. Compared to other studies where GPS loggers were used for a very short period of time, there is a relatively long duration of participation in our study. This makes it possible to examine potential seasonal variation. Our data suggest a bimodal pattern of movement away from participant homes, with one peak nearby and another one to three kilometers away from their homes. There were differences in these movement patterns by demography, with under-20s staying close to home on the majority of the days and both the 20–40 and over-40 age groups tending to move farther away each day. We hypothesize that the reason for this difference is that the over-20 age groups are more heavily involved in subsistence activities than the under-20 age group.

Multiday trips of less than 10 days are common among the participants. The metrics of multiday trips do not signify anything unless they are associated with the activities done during the trip, which vary from visits to friends/family, getting supplies at the nearby town, farming, and foraging, to other economic or subsistence activities. All age groups in this study visited farm areas and spent the night in the farms, with no statistically significant difference found between age groups. When they spent their nights in the farms, they did it consecutively and on several occasions during the study period. Farming is one of the major forms of subsistence for rural families, and it must be regarded as relatively safe compared to subsistence activities in the forests, given that all age groups partake in it. There was no seasonal variation in the number of nights spent at the farms in these data. Different types of crops are normally rotated over the year for cultivation in this region. In contrast, going to and sleeping in the forests, which may involve foraging, logging, mining, etc., is found to be the task of males of the 20–40 age group. The median number of nights slept in the forest among those who ever spent the night in the forest was 7.5. Only males of the 20–40 age group spent a higher number of nights in the forest than the median value. The same males were found to take frequent and successive overnight trips to the forests. We surmise that the males in the 20–40 age group, most likely being the breadwinners of the family, are subject to any possible subsistence activities and are regarded as the most suitable persons to venture into the forests overnight despite dangers from wildlife and harsh living conditions. No seasonal variation was found in the number of nights of sleeping in the forest. In comparison, a questionnaire-based movement survey conducted in a similar Thai–Myanmar border area found seasonal movement patterns.
Compared to home, sleeping places in the farms and forests may be more rudimentary, leaving people more vulnerable to medically important arthropods or other environmental risks. Spending several consecutive nights in the farms and forests may increase the chances of vector-borne diseases such as malaria, since major malaria vectors in the area, such as Anopheles dirus and Anopheles minimus, are found in the deep forests, forest edges, plantations, and even in the rice fields. Studies have found that the increased risk of malaria in forest-goers is contributed to by inconsistent bed net usage, the misconception that alcohol consumption or blankets provide protection against mosquito bites, and non-participation in the malaria prevention activities held at the villages. Results from this study, particularly the space utilization data, would be useful in spatially explicit individual-based infectious disease models, such as those modeling malaria elimination in the rural South East Asian region. Human mobility is a crucial part of many disease transmission dynamics, yet it has been ignored in many infectious disease models because of constraints on data and computational capacity. Compartmental models assume homogeneous mixing of individuals in their respective compartments. While they are quick to set up, they are not suitable for disease-elimination settings. Their homogeneous nature limits modelers from exploring the impact of multiple interventions tailored towards different risk groups, such as forest-goers in malaria intervention. Individual-based models can have individual-specific properties and their related movement patterns, thus achieving a heterogeneous population.

This result suggests that there is some degree of toxicity present in these hydrolysates

Our group has demonstrated the applicability of this process by generating hydrolysates with high concentrations of monomeric sugars and organic acids from several feedstocks, such as grasses, hardwoods, and softwoods, and converting them to terpene-based jet-fuel molecules using engineered strains of the yeast Rhodosporidium toruloides. Nevertheless, it is important to expand the range of lignocellulosic feedstocks used in this process to evaluate its versatility and to advance towards the goal of developing a truly lignocellulosic feedstock-agnostic bio-refinery. Hemp is an attractive crop due to its fast growth, bio-remediation potential, and diverse agricultural applications, including the production of natural fibers, grains, essential oils, and other commodities. This biomass is composed of an outer fiber that represents approximately 30% of the weight and an inner core known as hurd that accounts for the remaining 70%. The hemp fiber is utilized in the textile industry as insulation material and for the production of bio-plastics in the automotive industry, while hemp hurd is used for low-value applications such as animal bedding and concrete additives, or is disposed of by combustion and landfill accumulation. This indicates that approximately 70 wt% of hemp biomass has the potential to be valorized into higher-value products and applications, which would improve the economics of the hemp industry and increase its sustainability footprint to promote a green economy. Mycelium-based composites are emerging as cheap and environmentally sustainable materials generated by fungal growth on a scaffold made of agricultural waste materials.

The mycelium composite can replace foams, timber, and plastics for applications like insulation, packaging, flooring, and other furnishings. For example, the company Ecovative Design LLC produces a foam-like packaging material made of hemp hurd and fungal mycelia, which is fully compostable. Anticipating the possibility of an increased demand for eco-friendly packaging materials in the near future, we are interested in evaluating the feasibility of diverting this used packaging material away from landfills or composting facilities towards higher-value applications, such as feedstock for bio-fuels. It is known that fungal enzymes can reduce the recalcitrance of the biomass to deconstruction, likely through modification of polysaccharides and lignin in plant biomass. Therefore, we hypothesized that the mycelium composite material could be more easily deconstructed and converted into higher-value fuels and chemicals than the raw hemp hurd. In this study, hemp hurd and the mycelium-based packaging material were tested as biomass feedstocks for the production of the jet-fuel precursor bisabolene, using a one-pot ionic liquid technology and microbial conversion. First, we examined the deconstruction efficiency of the packaging material compared to hemp hurd when subjected to a one-pot ionic liquid pretreatment process. Second, the influence of the pretreatment process parameters on the sugar yields was investigated by using a Box–Behnken statistical design. Finally, the generated hydrolysates were fermented to evaluate the bio-conversion of the depolymerized components by a bisabolene-producing R. toruloides strain. The composition of the hemp hurd and packaging material was determined as shown in Table 1. The total extractives of the hemp hurd and packaging material comprised 8.3% and 14.7% of the biomass, respectively.

The higher extractive content of the packaging material may be a result of the fungal growth stage in the packaging construction process. For the polysaccharide content, hemp hurd had higher glucan and xylan contents than the packaging material. Combining glucan and xylan content, the total fermentable sugars of the hemp hurd and packaging material were 43.7% and 40.4% of the biomass, respectively. This indicates that a small fraction of the polysaccharides may have been consumed and converted into extractives during mycelial growth. However, both types of biomass contain a substantial amount of polymeric carbohydrates that can be depolymerized into simple sugars for fermentation. The lignin content for both materials was the same; however, it is possible that the mycelial growth in the packaging material could have altered the structure of lignin and made the polysaccharides more accessible to hydrolysis. We used the one-pot ionic liquid process on hemp hurd and packaging materials to test this hypothesis. One of the bottlenecks for the efficient conversion of lignocellulosic hydrolysates is the presence of compounds generated during the pretreatment and enzymatic hydrolysis stages that are toxic to bio-fuel-producing microbes. The degree of toxicity mainly depends on the type of biomass, the pretreatment conditions, and the identity of the microorganism that will be used for fermenting the depolymerized substrates. Therefore, we performed a bio-compatibility test with the hydrolysates prepared from hemp hurd and packaging materials, using an engineered strain of the yeast R. toruloides known to be tolerant to ILs and biomass-derived compounds, and able to convert glucose and xylose to the jet-fuel precursor bisabolene.

When the strain was inoculated directly in concentrated hydrolysates, negligible sugar consumption and very little growth were observed, as shown in Figure 2. Therefore, we prepared 50% diluted hydrolysates for further testing. Under these conditions, more than 90% glucose and xylose conversion was observed in both hydrolysates, and the cells were able to grow and produce bisabolene. The utilization of hydrolysates at higher concentrations is beneficial for economically feasible biorefinery development. Therefore, other strategies such as hydrolysate culture adaptation or detoxification may be required to improve bio-compatibility.

The optimum levels of parameters for glucose and xylose yields from packaging materials recommended by the model were: reaction temperatures of 126 and 128 °C, reaction times of 2.1 and 2.0 h, and ionic liquid loadings of 7.3% and 7.9%, corresponding to predicted glucose and xylose yields of 74.6% and 81.7%. However, this optimal condition did not significantly improve the yields compared to the center point, even though the reaction conditions required a 4% higher temperature than the center point, a rather small difference in temperature. This result suggests that other process parameters, such as agitation and biomass solid loading percentage, should be tested for further improvement in the yield. The model for hemp hurd found a saddle point instead of optimum levels, which means that the optimum process condition was not aligned within the current experimental conditions. Further investigation into a different range of reaction conditions, such as higher reaction temperatures, is required to optimize the reaction condition for hemp hurd. If operating with a limited budget and time, the reaction condition having the highest glucose and xylose yield can be chosen.
The highest glucose yield under the current reaction conditions was obtained from hemp hurd at 140 °C, a 1 h reaction time, and 7.5% ionic liquid loading, a more severe condition than the optimized condition for the packaging materials. This result indicates that the reaction parameters affect the sugar yield differently according to the biomass type, implying that the biomass properties are changed by mycelial growth. For the packaging materials, the combined effects of reaction temperature, reaction time, and ionic liquid loading on glucose and xylose yields are illustrated in Figure 4. The response surface plots show that the glucose yield increased with reaction temperature up to 133 °C, with a subsequent decrease in yield at higher temperatures. The xylose yields showed a similar trend. Additionally, the glucose and xylose yields increased with reaction time up to 2 h and with ionic liquid loading up to 7.5%.

Beyond those points, the glucose and xylose yields decreased, probably due to the loss of enzyme activity caused by the higher ionic liquid concentration. Additionally, the longer reaction time and higher ionic liquid concentration might facilitate the production of other compounds, such as furan derivatives or organic acids, which inhibit enzyme activity during the pretreatment. Moreover, the production of these other components probably reduced the carbohydrates accessible to the enzyme. Further tests may be necessary to improve the sugar yield. ANOVA results shown in Table S5 indicate that reaction temperature and reaction time had statistically significant effects on glucose yield, while ionic liquid loading did not. Additionally, statistically significant interaction effects of reaction temperature with reaction time and with ionic liquid loading were confirmed. The ANOVA results for xylose yield show that reaction temperature had a significant effect on the yield, while reaction time and ionic liquid loading had none. The interaction effects of reaction temperature with reaction time and with ionic liquid loading were not significant, while the interaction effect of reaction time with ionic liquid loading was significant.

This work demonstrates the feasibility of using hemp hurd, and packaging materials made of mycelium grown on hemp hurd, as feedstocks for bio-conversion to a jet-fuel precursor using a one-pot ionic liquid technology. During the initial test, the packaging materials produced higher sugar concentrations and yields than the hemp hurd. However, the Box–Behnken experimental design showed that the reaction conditions for the maximum sugar yields from each material were different, and that the significance of each process parameter's effect on the fermentable sugar yield depended on the biomass properties, suggesting that the mycelial growth affected the deconstructability of the hemp hurd.
Furthermore, the fermentation test converting fermentable sugar into bisabolene showed that hydrolysates from the packaging material resulted in a higher bisabolene titer than hydrolysates from the hemp hurd, probably due to the higher sugar concentrations generated from the packaging material. To take full advantage of these packaging materials for producing bio-fuels after they are used and discarded, a more detailed study correlating the fermentable sugar yield with the physicochemical properties of the biomass and packaging materials, or with the packaging process parameters, is required, testing different sources of hemp material. In addition, methods to overcome hydrolysate toxicity will need to be employed to enable the utilization of concentrated hydrolysate for increased product titers and reduced water consumption. Finally, further investigation into other process parameters, such as agitation and biomass loading, is merited to fully optimize the pretreatment conditions, as is pilot-scale testing to generate data that can help assess the economic feasibility of this new conceptual process. Overall, this study indicates that it is possible to build lignocellulosic supply chains for the production of bio-fuels and biochemicals that include both raw biomass and biomass that has first been processed and valorized as commercial products, such as packaging materials, enabling the carbon in these lignocellulosic products to generate value multiple times over their life cycle.

Understanding momentum, heat, and scalar mass exchanges between vegetation and the atmosphere is necessary for quantifying evaporation and sensible heat flux for hydrologic budgets, ozone deposition on urban forests, nonmethane hydrocarbon emissions from natural vegetation, carbon storage in ecosystems, and more. Such exchanges are governed by a turbulent mixing process that appears to exhibit a number of universal characteristics.
Early attempts to predict these universal characteristics made use of rough-wall boundary layer analogies, but only limited success was reported. A basic distinction between canopy and rough-wall boundary layer turbulence is that the ‘‘forest–atmosphere’’ system is a porous medium permitting finite velocity and velocity perturbations well within the canopy. Hence, the canopy–atmosphere interface cannot impose as severe a constraint on fluid continuity as an impervious boundary, as discussed in Raupach and Thom, Raupach, and Raupach et al. Raupach et al. and Raupach et al. proposed a mixing layer (ML) analogy to model the universal characteristics of turbulence close to the canopy–atmosphere interface in uniform and extensive canopies. Their analogy is based on solutions to the linearized, perturbed, two-dimensional inviscid momentum equations using hydrodynamic stability theory (HST). For such a system of equations, HST predicts the unstable-mode generation of two-dimensional transverse Kelvin–Helmholtz (KH) waves with a streamwise wavelength if the longitudinal velocity profile has an inflection point. Such instabilities are the origins of organized eddy motion in plane mixing layers; however, a KH eddy motion cannot be produced or sustained in boundary layers, owing to the absence of such an inflection point in the velocity profile. A plane mixing layer is a ‘‘boundary-free shear flow’’ formed in the region between two coflowing fluid streams of different velocity but the same density. Raupach et al. recently argued that a strong inflection point in the mean velocity profile at the canopy–atmosphere interface results in a flow regime resembling a mixing layer, rather than a boundary layer, neighboring this interface. Raupach et al.'s ML analogy is the first theoretical advance toward analyzing the structure of turbulence close to the canopy–atmosphere interface of a horizontally extensive, uniform forest.
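The HST argument can be made concrete. For a parallel base flow U(z) subjected to normal-mode perturbations, the linearized two-dimensional inviscid equations reduce to the Rayleigh equation; this is a standard stability-theory result, sketched here rather than quoted from the cited papers.

```latex
% Streamfunction perturbation about the base flow U(z):
%   \psi(x,z,t) = \phi(z)\, e^{ik(x - ct)}
% Substituting into the linearized 2D inviscid momentum equations gives
% the Rayleigh equation:
(U - c)\left( \frac{d^{2}\phi}{dz^{2}} - k^{2}\phi \right)
  - \frac{d^{2}U}{dz^{2}}\,\phi = 0 .
```

Rayleigh's inflection-point theorem states that a necessary condition for an unstable mode (Im c > 0) is that d²U/dz² vanish somewhere in the flow. This is precisely the condition satisfied by the inflected mean velocity profile at the canopy top, and the condition that is absent in rough-wall boundary layers.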


Study of cannabis as an agricultural crop has been notoriously inadequate, but data provided by the water quality control board's cannabis program offer critical new insights into the water use practices of cultivators entering the regulated industry. In this initial analysis, we found that subsurface water may be much more commonly used in cannabis cultivation than previously supposed. Further analyses of cannabis cultivation's water extraction demand, as well as of geospatial variation in water demand, may help elaborate the ramifications of this finding. Ultimately, a better understanding of cannabis cultivation's water demand will be useful for placing the cannabis industry in the greater context of all water allocation needs in the North Coast and throughout California.

U.S. state markets for cannabis are evolving rapidly. As of mid-2019, 32 of 50 states had some form of legal medicinal cannabis system in place, and since 2012, 11 of those states had legalized and regulated adult-use cannabis. California was the first U.S. state to decriminalize the sale of medicinal cannabis, with the 1996 passage of the Compassionate Use Act. In 2003, a California state legislative act, Senate Bill 420, set out more specific rules for the operation of medicinal cannabis collectives and cooperatives. For the following 15 years, regulations on the cultivation, manufacturing, and sale of cannabis in California were largely limited to a wide variety of local ordinances, with little intervention from the state government. In November 2016, California voters legalized adult-use cannabis by approving Proposition 64.

Subsequently, the Medicinal and Adult-Use Cannabis Regulation and Safety Act (MAUCRSA) of 2017 created a unified framework for the state licensing of cannabis businesses and the taxation and regulation of adult-use and medicinal cannabis. MAUCRSA regulations went into effect on January 1, 2018. Safety regulations generally add costs to production. One of the most costly components of California's new system of cannabis regulation is the mandatory testing of all legal cannabis for more than 100 contaminants, including pesticides and heavy metals. This paper is the first to comprehensively examine the economic challenges of cannabis testing and to estimate the cost of testing compliance per pound of cannabis marketed in a legal and licensed cannabis market. In a previous article, we provided a brief introduction to testing costs, to which this paper supplies needed rigor. We review and compare the allowable tolerance levels for contaminants in cannabis with the allowable levels in other crops from California, and review rejection rates in California since mandatory testing began in 2018. We compare these with rejection rates in other U.S. states where medical and recreational use of cannabis are permitted. We use primary data from California's major cannabis testing laboratories, several cannabis testing equipment manufacturers, Bureau of Cannabis Control (BCC) license data including geographical location information, and data from Cannabis Benchmarks on average wholesale batch sizes to estimate the testing cost per pound of cannabis legally marketed in California.

At the U.S. federal level, cannabis is still classified as a Schedule I illegal narcotic, and its possession, sale, and even testing are serious criminal offenses under federal law. Even cannabis businesses that are fully compliant with state regulations thus face legal risks, uncertainties, and obstacles to doing business, such as a lack of access to mainstream banks.
In recent years, however, the conflict between state and federal laws has generally been mediated via a series of informal, non-binding agreements, letters, and memos of understanding between the U.S. Department of Justice and the states. These understandings have enabled cannabis businesses to focus more on complying with state and local laws than on hiding from federal prosecutors. All of the U.S. states that have legalized, taxed, and regulated recreational cannabis, and most states that have legalized and regulated medicinal cannabis, require testing for some contaminants and testing and labeling of potency.

Colorado and Washington were the first states to vote to legalize and regulate adult-use cannabis, both in 2012. Colorado first introduced the enforcement of potency and homogeneity tests for retail cannabis products in 2014. Residual solvents and microbial contaminants were added to the testing requirements in 2015, and heavy metals and pesticide residues as of mid-2018. Washington State mandates that licensed testing laboratories also perform potency tests, moisture analysis, and foreign matter, microbial, and mycotoxin screenings, as well as screenings for residual solvents. Some states, including California and Colorado but not Washington, also require more sophisticated and costly wet-lab tests for pesticides and heavy metals. Per MAUCRSA, the California Department of Pesticide Regulation established maximum allowable thresholds for 66 different pesticides, including zero tolerance for trace amounts of 21 pesticides and low allowable trace amounts of 45 other pesticides. MAUCRSA also established thresholds for 22 residual solvents plus a variety of heavy metals and other contaminants. The Bureau of Cannabis Control was put in charge of licensing and regulating testing labs and enforcing the testing standards. In the 2016 marketplace, prior to the passage of Proposition 64 (a market unregulated at the state level and only partially regulated at the local level), total California cannabis production was estimated at approximately 13.5 million pounds of raw flower, with roughly 80% of this production illegally shipped out of the state. These out-of-state shipments may explain why California accounted for 70% of nationwide cannabis confiscations in 2016. Rough estimates suggest that only about one-quarter of California's in-state cannabis consumption, or less than 5% of total cannabis production, went to the legal medicinal market in 2016.
Until 2018, there were no rules in place at the state or local level in California for testing contaminants, even for products legally marketed as medicinal cannabis. A minority of medicinal cannabis retailers in the pre-2018, state-unregulated market routinely tested and labeled cannabis for THC potency, but few voluntarily tested for contaminants. Informal evidence suggests that pesticide residues were common in cannabis products in the pre-regulated market. For example, in 2017 an investigation reported that 93% of 44 samples collected from 15 cannabis retailers in California had pesticide residues.

The mandatory testing framework introduced under MAUCRSA is summarized in Table 1, where we briefly describe the tests for specific types of batches and the standards for passing each test.

Dried cannabis flower and cannabis products must be tested for concentrations of cannabinoids and various contaminants in order to enter the legal market. Some tests apply to all batches, while others apply only to some forms of cannabis. Heavy metals tests were not mandatory until December 2018. Table 2 shows the list of contaminants with the maximum tolerance levels allowed in California. Tolerance levels are generally lower for products that are inhaled than for products that are eaten or applied topically. For 21 pesticides, the maximum residual level is zero, meaning that no trace of those residues may legally be detected in a sample of cannabis. MAUCRSA requires that all batches of cannabis flowers and products be sampled and tested by licensed laboratories before being delivered to retailers. Distributors are responsible for testing. Fig 1 shows the flow of cannabis testing in California. The weight of a harvest batch cannot exceed 50 pounds; larger batches must be broken down into 50-pound sub-batches for testing. The sample must weigh at least 0.35% of the batch weight. A processed batch cannot surpass 150,000 units. After testing each batch, laboratories must file a certificate of analysis indicating the results to distributors and to the BCC. If a sample fails any test, the batch that it represents cannot be delivered to dispensaries for marketing. Instead, it can be remediated or reprocessed and fully re-tested. If a batch fails testing after a second remediation, or if a failed batch is not remediated, then the entire batch must be destroyed.

Analyzing the cannabis market, compared with other agricultural markets, presents a unique challenge to researchers because of the rapidly changing legal environment, the lack of historical data or scientific studies, the lack of government tax data, and the cash nature of the business.
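The batching and sampling arithmetic above is simple enough to encode directly. The sketch below applies the 50-pound sub-batch cap and the 0.35% minimum sample weight; the function names and the example lot are illustrative, not regulatory text.

```python
import math

# Sampling arithmetic implied by the MAUCRSA rules: harvest batches are
# capped at 50 lb (larger lots are split into sub-batches), and a sample
# must weigh at least 0.35% of the batch it represents.
MAX_BATCH_LB = 50.0
MIN_SAMPLE_FRACTION = 0.0035

def split_into_batches(lot_lb):
    """Number of <=50 lb sub-batches needed for a harvest lot."""
    return math.ceil(lot_lb / MAX_BATCH_LB)

def min_sample_lb(batch_lb):
    """Minimum sample weight (lb) for one batch."""
    return batch_lb * MIN_SAMPLE_FRACTION

n_batches = split_into_batches(120.0)   # a 120 lb lot needs 3 sub-batches
sample_lb = min_sample_lb(50.0)         # 0.175 lb sample for a full batch
```

Because every sub-batch is tested separately, larger harvest lots multiply the per-batch testing fee, which is one reason testing cost per pound depends on average batch size.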
Quotes are known to vary depending on the number of samples, the frequency of testing, the type of contract between the distributor and the laboratory, and other factors. Bulk pricing is common and is negotiated on a case-by-case basis. We approximate the costs of testing by collecting detailed data on the testing process and constructing in-depth estimates of the capital, fixed, and variable costs of running a licensed testing laboratory in California. We use these results in a set of simulations that estimate the costs per pound generated by cannabis testing under the California regulations in place as of mid-2019. We make some market assumptions based on the most reliable industry data available as of this writing in order to estimate the current cost per pound of testing compliance.

We construct a simulation model using R software to assess the cost structure of cannabis testing in California under the current regulatory framework. We base our simulations on the number of testing labs and distributors that had been granted temporary licenses by the BCC as of April 2019. The number of labs and distributors in California will fluctuate as the industry continues to develop. To estimate costs incurred by labs, we first construct estimates of fixed and variable costs for labs based on their testing capacities. We then calculate the cost of testing a sample of dried cannabis flower, considering the lab scale and the distances between labs and distributors. Based on meetings with representatives of California testing labs, we assume that 70%, 20%, and 10% of the labs fall into small, medium, and large size categories, respectively. We assume that the testing industry is like many others in that many small firms supply relatively little of the output. We run 1,000 simulations to estimate the cost of sampling and testing a sample from a typical batch of dried flowers for each of the 49 labs, allowing costs, working hours, testing capacities, and other characteristics to vary from lab to lab.
Next, we use the weighted average of testing cost per sample to estimate the cost per pound. We express total testing cost in dollars per pound of legal cannabis that reaches the market, after incorporating the costs of remediating and re-testing failed batches and the losses from batches of cannabis that cannot be remediated and must be destroyed.

We used the list published by the BCC to identify actively licensed testing labs and requested a personal or phone interview with managers or representatives, based on a set of questions that we used as a guideline. We interviewed one-fourth of the operating or prospective licensed testing labs listed by the BCC. We gathered data on market prices for testing equipment, supplies and chemical reagents consumed by equipment, equipment running capacities, and other cannabis testing inputs needed to build a compliant testing laboratory in California. Likewise, we collected financial, managerial, and logistics data. To complement the licensed testing lab data, we also drew on personal interviews, phone calls, and email exchanges with sales representatives of three large equipment suppliers. Table 3 summarizes the capital costs, other one-time expenses, and annual operational and maintenance costs used in our calculations. We report the average cost and standard deviation for each estimate. We assume that medium-sized and large labs receive discounted prices on equipment, given the larger scale of their purchases. Based on information provided by equipment suppliers, we expect these discounts to be between 1.5% and 2.5%. Different-sized labs have different capacities based on their scale. We assume that larger labs have made larger capital investments and are better able to optimize processes when supplying a larger volume of testing. On the other hand, small testing labs require less equipment and less capital investment, and operate with lower annual costs, but their testing capacities are also low.
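A stripped-down analogue of this simulation can be sketched in a few lines. The study's model was built in R; Python is used here for illustration, and every numeric input (the per-sample cost ranges by lab size, the failure and remediation rates, and the 40 lb average batch) is an assumption standing in for the survey-based estimates, not a result from the paper.

```python
import random

# Monte Carlo sketch: average testing cost per pound of cannabis that
# actually reaches the market. All parameters are illustrative.
random.seed(0)

SIZE_SHARES = {"small": 0.70, "medium": 0.20, "large": 0.10}
PER_SAMPLE_COST = {"small": (600, 900), "medium": (450, 700), "large": (350, 550)}

AVG_BATCH_LB = 40.0   # assumed average wholesale batch size
FAIL_RATE = 0.06      # assumed share of batches failing the first test
REMEDIATED = 0.5      # assumed share of failed batches remediated and re-tested

def simulate_cost_per_lb(n=1000):
    """Weighted-average testing cost per marketed pound over n draws."""
    # Remediated batches pay for a second full test panel; failed batches
    # that are not remediated are destroyed and never reach retail.
    tests_per_batch = 1.0 + FAIL_RATE * REMEDIATED
    marketed_lb = AVG_BATCH_LB * (1.0 - FAIL_RATE * (1.0 - REMEDIATED))
    total = 0.0
    for _ in range(n):
        size = random.choices(list(SIZE_SHARES), weights=SIZE_SHARES.values())[0]
        per_sample = random.uniform(*PER_SAMPLE_COST[size])
        total += per_sample * tests_per_batch / marketed_lb
    return total / n

cost_per_lb = simulate_cost_per_lb()
```

The structure makes the two cost channels explicit: failed batches raise the numerator (extra test panels) and shrink the denominator (destroyed pounds), so the per-pound figure is more sensitive to rejection rates than a naive fee-divided-by-batch-weight calculation would suggest.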
Table 4 summarizes our estimates of the running time for tests, the main consumables used by the testing machines, and the expected cost of running a specific test per sample. In addition, we include a range of $80 to $120 per sample to cover general materials and labor apparel used while preparing and processing samples.

Next, we must estimate the sampling cost, which includes transportation, labor, equipment, and material costs. We use the zip codes of active licensed testing labs and distributors published by the BCC to estimate the distances from labs to distributors.
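The distance leg of that sampling-cost estimate can be approximated from coordinates with the haversine great-circle formula. In the sketch below, the coordinates and the per-mile rate are hypothetical; in practice, zip-code centroids would stand in for exact addresses.

```python
import math

# Great-circle (haversine) distance between a lab and a distributor,
# converted to a round-trip transport-cost proxy for the sampling visit.
EARTH_RADIUS_MI = 3958.8

def haversine_mi(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def sampling_transport_cost(lab, distributor, usd_per_mile=1.5):
    """Round-trip driving-cost proxy between two (lat, lon) points."""
    return 2 * haversine_mi(*lab, *distributor) * usd_per_mile

# e.g. a Sacramento-area lab visiting an Oakland-area distributor:
trip_cost = sampling_transport_cost((38.58, -121.49), (37.80, -122.27))
```

Great-circle distance understates road distance, so a fixed multiplier (or a routing API) would typically be layered on top; the formula is enough to rank lab-distributor pairings by proximity.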


Figure 1 shows the annual TEU throughput at the POLA and POLB for the period 1997-2016. Although the explosive growth of the first ten years slowed after the recession of 2008, throughput has made quite a healthy recovery in the last five years, reaching or surpassing pre-recession levels. The numbers in Figure 1 include both loaded and empty units, destined for import or export. Figure 2 shows the change in total annual TEU throughput for the combined ports. The yearly change over the last five years is positive. The total container throughput through the POLA and POLB is expected to grow in the future, correlated with population increase, domestic demand for inexpensive manufactured goods, global demand for US products, and the improving competitiveness of US industry. Handling such a large number of container transactions requires intensive management of operations, changes in transportation policy, and modernized equipment. In the POLB and POLA, there are approximately 100,000 chassis available for leasing and for transporting containers to and from warehouses, stores, factories, rail yards, and container terminals. Among these 100,000 chassis available to the trucking companies are chassis supplied by various third-party chassis leasing companies. However, terminals within the ports do not always have chassis available from each company. At times, chassis required by the trucks are either not available anywhere in the terminal or are dislocated and need to be repositioned. Prior to 2014, chassis companies did not work together or have a neutral chassis pool, and shortages and dislocations of chassis occurred frequently.

Trucks would often be required to travel between terminals and perform additional trips to pick up or drop off chassis at specific locations, in addition to picking up and dropping off the containers for export and import. This was a lengthy and cumbersome process and generated additional queues at each terminal. A shortage of chassis can significantly lengthen truck turn times, raise costs for trucking companies, and increase emissions at the port. Furthermore, a lack of chassis can mean that a container is kept on the carrier ship for a prolonged time while storage fees continuously accumulate. The shipper must then pay an additional charge, known as a demurrage charge, for the failure to discharge a container from the carrier ship within the agreed time frame. Also, when containers are not discharged in a timely manner, shippers face congestion in their own areas of operation, leaving them no choice but to rent additional storage area, which leads to higher carrying costs and delayed delivery times. According to POLA/POLB terminal operators and PierPass officials, one of the core reasons for port congestion is the lack of chassis. Trucks come to the POLB and POLA from many locations to drop off or pick up containers and chassis, and the freeways that truck drivers must use to access the ports are also used heavily by commuters traveling through the densely populated area surrounding Los Angeles. The most heavily used freeway to get to and from the POLB and POLA is Interstate 710 (I-710).

I-710 has, for the most part, four lanes, which are heavily packed with trucks and commuter vehicles during rush hours, causing major congestion problems in the vicinity of the ports. As the American economy expands, there is more demand for commercial operations, increased freight, and increased numbers of foreign commercial partners. These growing factors give rise to recurring congestion at freight bottlenecks, creating a conflict between freight and passenger service. Moreover, as trade with foreign partners increases, more freight ships will dock at the ports. Handling more transactions also means that the ports will have to increase their processing capacity. This increase will undoubtedly cause the entrance to the port, and the areas within the port itself, to be heavily congested as well. Congestion in and outside of the port is detrimental to the economy of Southern California, as well as to that of the US as a whole. When there is additional congestion, port operators take much longer to unload cargo ships. Supply chains carrying goods through the POLB and POLA can then become slowed to the point where some retailers find it necessary to redirect their goods. The goods are then redirected by sea or air to other ports on the East Coast, where they can be further distributed, resulting in reduced income for the surrounding area as well as additional costs for the retailers themselves. The Pool of Pools (POP) is a neutral, interoperable chassis pool that was launched in February 2015 by DCLI, TRAC Intermodal, and Flexi-Van, in cooperation with the POLA, POLB, and SSA Marine. Their chassis are pooled together to provide a more efficient way for trucking companies to obtain chassis, since the companies are able to use chassis from any of the chassis providers interchangeably. Thus, a trucker can pick up any chassis from the POP and drop it off at any designated POP storage area, without having to worry about returning the chassis to the same exact location.
Since truckers have access to any chassis, it allows for a smoother operation at the port and fewer inefficiencies in chassis-related operations.

However, the pools still remain commercially independent and are in competition with one another. A third-party service provider manages the billing and other proprietary information among these pools. Nonetheless, even with the improved flexibility, interoperability, and efficiency that the POP has introduced, the port still suffers some repositioning issues, and the heavy traffic congestion problems remain.

The concept of Centralized Processing of Chassis was introduced as one method for improving the travel times associated with container retrieval. This concept was introduced in Europe as the Chassis Exchange Terminal (CET). In the CET concept, the centralized processing of chassis was defined as an off-dock terminal located close to the port, where trucks would go to retrieve imports or drop off exports instead of unloading and loading containers at the marine terminal. The first step in operation with the CET involves a container being loaded onto a chassis at the marine terminal. The second step is transporting the chassis to the CET during off-peak hours, for example at night. The last step occurs when a truck carrying a chassis with a container drives into the CET. At this point, the truck exchanges the chassis it brought into the CET for another chassis and container, which was already transported to the CET during the second step. The exchange operation involves unhooking one chassis and hooking up another at the CET. This is a much simpler, more efficient, and much faster operation than unloading and loading containers and performing chassis exchanges at a regular marine terminal.

The large volume of container trips results in traffic congestion in the areas around and within the ports, and this congestion is expected to grow even higher in the future.
It is clear that any system that helps reduce the total travel time for trucks between their points of origin and their destinations is worth investigating, since as a consequence it will reduce traffic congestion, noise, and emissions, in addition to saving time for both truckers and port operators. Such systems improve travel time reliability and help the local economy grow. With improved travel time reliability, local businesses require fewer operators and less equipment to deliver goods on time, and need fewer distribution centers and less inventory to compensate for unreliable deliveries.

Note that among the types of transactions described in Section 0, the Type 1 and Type 2 transactions are the only types that would be anticipated to contribute a noticeable reduction in total transaction time if a CPF were used. In the case of Type 1 transactions, the export container can be dropped off at the desired marine terminal, and then the chassis can be returned to the CPF for storage and later retrieval. In the case of a Type 2 transaction, the chassis for an import can be picked up at the CPF before the truck enters the marine terminal to load the import container. In both cases, if the chassis exchange transaction can be done more efficiently outside of the marine terminal, this could offer improvements in the total time for the transaction. In Type 3 and Type 4 transactions, no chassis exchange activities are necessary. In a Type 3 transaction, the wheeled import consists of a container already loaded on a chassis and can simply be picked up by the bobtail. In a Type 4 transaction, the chassis used for the export container is the same one onto which the import container is loaded afterwards.

Finally, Type 5 transactions, although they include a chassis exchange, would not be anticipated to show any reduction in transaction time from using an external CPF. This is because, after dropping off an export, the bobtail must drop off the chassis used so it can pick up a wheeled import at the same terminal, making it inefficient to travel to an external CPF to drop off the chassis only to return to the marine terminal to pick up the wheeled import.

A representative sample of seventy-one trucking companies (TCs) that service the POLB and POLA is used in this case study. In order to select this sample, an initial list of TCs was created from an internet drayage directory that includes all companies operating within Los Angeles County. Since the location of the TCs is a critical variable for the optimization problem, all companies whose address was not included in the drayage directory were eliminated from the list. The final list contains all companies with a known address that use chassis. In the analysis herein, the number of daily transactions between marine terminals and trucking companies was assumed to be a fixed value for each trucking company and marine terminal pair. In the initial analysis, the number of total daily import transactions was set at 50,000 FEU, based on forecasts of total daily port trips. The sensitivity analysis used 10,000 FEU import and 5,000 FEU export daily transactions, based on the average daily import and export container traffic provided in Table 1Table 4.

Potential CPF locations were identified by searching for vacant land within a 15-mile radius of the POLA and the POLB. The capacities of these locations were estimated by using the Google Earth polygon built-in feature to calculate an approximate square footage. Several CPF layout options and chassis stacking methodologies were evaluated, as described in 0.
Chassis can be stored vertically or horizontally, as shown in Appendix A, and each storage method has its advantages and disadvantages. Among the various possibilities considered, a horizontal storage layout with a maximum of 3 chassis stacked on top of each other was selected for the case study. Using the estimated square footage, the number of forty-foot chassis that could fit in each area was determined using this preferred layout methodology, which assumed allocations for access roads, blocks of stacked chassis, and blocks of unstacked chassis for ease of access, in order to minimize chassis retrieval times. An example of the layout for a 5000×5000 foot area is included in Figure 8 below. For this example, the maximum number of forty-foot chassis that could be stored in the area was estimated at 170,000. After verifying that the linear program behaved as expected for the two simplified models used in the reduced-node cases, the full model was analyzed using the same approach. In this case, all 16 potential CPF locations were included, each with its estimated chassis storage capacity provided in Table 5. All 71 TCs and 14 MTs were also included, with ~50,000 transactions distributed evenly between them. The results are summarized in Figure 12, where it can be seen that when P = 0 seconds, all of the transactions are routed directly from the TCs to the MTs. However, even with a 5-minute increase in efficiency at the CPFs in terms of average chassis retrieval time, approximately half of the transactions are routed through CPFs. The number of transactions routed directly from TCs to MTs decreases rapidly as the value of the parameter P increases. Figure 12 shows that when P = 1200 seconds, virtually no transactions are routed directly to marine terminals. Table 14 shows the percent utilization of the CPFs for P = 1200 seconds.
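The qualitative behavior in Figure 12 (all-direct routing at P = 0, near-total CPF routing at P = 1200 seconds) can be illustrated with a toy per-transaction version of the routing choice inside the optimization model. The travel and retrieval times below are invented for illustration and are not the case-study values, and the full model additionally enforces CPF capacity limits that this sketch ignores.

```python
# Toy routing choice: a transaction goes TC -> MT directly (paying the
# in-terminal chassis processing penalty P), or TC -> CPF -> MT (paying
# the detour travel plus CPF retrieval time). All times in seconds.

def route(t_direct, cpf_options, P):
    """
    t_direct: TC -> MT travel time, excluding chassis processing.
    cpf_options: dict of CPF name -> total detour time
                 (TC -> CPF travel + retrieval + CPF -> MT travel).
    P: extra chassis processing time incurred at the marine terminal.
    Returns (chosen route, total transaction time).
    """
    best_cpf = min(cpf_options, key=cpf_options.get)
    direct_total = t_direct + P
    cpf_total = cpf_options[best_cpf]
    if direct_total <= cpf_total:
        return "direct", direct_total
    return best_cpf, cpf_total

# With P = 0 the direct route wins; a large P pushes flow through CPFs,
# mirroring the shift seen in Figure 12 as P grows.
opts = {"CPF-A": 2400, "CPF-B": 2600}
choice_p0, _ = route(t_direct=1800, cpf_options=opts, P=0)
choice_p1200, _ = route(t_direct=1800, cpf_options=opts, P=1200)
```

In the full linear program this comparison is made jointly over all TC-MT pairs subject to CPF capacities, so individual transactions can be diverted to a second-best CPF when the nearest one fills up.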