One potential explanation for these results might be a dominance gradient

In more recent work, cows may have been motivated to access limited feed bunk space on commercial dairies or to obtain prime pasture. In this study, as all animals were locked following milking to facilitate feeding treatments and health checks, cows would have had ample access to silage regardless of queue position. Alternatively, Rathore suggested greater intramammary pressure might motivate high yielding animals to be milked earlier. As this herd was milked three times daily, however, this biological driver may also have been attenuated. Indeed, among modern studies with herds milked thrice daily, Polikarpus et al. found no significant correlation and Grasso et al. also found that high yielding cows frequented the rear of the queue. Ultimately, as yield is influenced by a wide range of health and management factors, any number of confounding variables might be implicated in explaining this somewhat unexpected result. In this study a significant linear association between age and entry position was not found. Recent work by Berry and McCarthy, which identified a nonlinear trend across parity, and by Grasso et al., which highlighted significant interactions of parity with other biological drivers of queue position, suggests that a linear effect may not adequately capture the underlying biological relationship. A larger and more structured sample may be necessary to bring more complex age dynamics into clearer resolution.

Visual inspection of means plots produced from mixed model analysis of sensor records recovered only a handful of statistically significant differences between queue quartiles when hour and day effects were assessed individually, but several global trends were still readily visible.

With respect to minutes recorded as active, the 1st-3rd queue quartiles were visually indistinguishable in their cyclical behavioral patterns, but cows in the fourth queue quartile were consistently more active, particularly during the night and morning lounging periods. With respect to longitudinal trends across days, fourth queue quartile animals were again more active across the observation window, whereas cows in the first queue quartile were consistently the least active. These patterns were somewhat mirrored in the longitudinal and cyclical analysis of high activity minutes, but the pattern was both less distinct and less consistent. No clear qualitative insights could be drawn for cyclical or longitudinal patterns in non-activity. Cyclical patterns in minutes spent eating were not seen overnight or in the afternoon, but first queue quartile cows may have spent slightly more time eating after the morning milking. Longitudinal analysis of eating patterns suggested cows in the fourth queue quartile spent relatively less time eating, whereas the cows in the first and second queue quartiles consistently spent more time at the bunk. This contrasted with longitudinal results for minutes spent ruminating, where the cows in the second queue quartile were consistently low. No clear distinctions between groups were recovered in cyclical rumination patterns. Temperature patterns were, surprisingly, the most visually distinct of all the sensor parameters. Cows in the first queue quartile were consistently lower in body temperature in both the longitudinal and cyclical time dimensions as compared with the remainder of the herd.

While the preceding analyses revealed few statistically significant differences at individual time points, collective analysis of days and subsets of the 24-hour management cycle would undoubtedly return statistically significant differences for the broader qualitative trends visually identified via mean plots. Within a linear modeling framework, however, this constitutes no small task. Group-by-date interaction effects were also significant for the activity, high activity, and temperature models. This suggests that these models should not be simplified to a single cyclical or longitudinal trend, which would allow overall differences between groups to be tested via a single group intercept term. Targeted hypotheses comparing comprehensive trends between groups would instead require formulation of linear contrasts, a daunting task with so many fixed effects terms used to accommodate the high sampling frequency and extended observation period of this dataset. Further, as with the linear models with cow attributes, behavioral synchronization due to social cohesion or compensatory use of physical resources in the pen could again create non-independence between animals in such sensor records. Any such issues in estimation of model degrees of freedom, compounded with the inability to fit behaviorally and empirically compelling correlation and variance models, would only serve to further confound the estimation of appropriate p-values from these models. Fortunately, the qualitative trends identified via the preceding means plots largely aligned with the significant bivariate associations identified by mutual conditional entropy tests summarized in Table 2. Activity again proved to be the most distinctive behavioral axis. Significant associations were identified for all three lounging periods when analyzed both independently and in aggregate, with the afternoon lounging period being the most distinct. High activity also showed a significant relation to queue records, but this association may have been driven predominantly by the overnight lounging period. Whereas no clear qualitative patterns were identified for non-activity data via the means plots, a significant association with queue records was identified during the afternoon lounging period.

A highly significant relationship was identified for time spent eating for the full sensor record, but given that time budgets recorded by this platform were segmented somewhat arbitrarily at the start of each hour, this result may simply reflect a lag in the arrival of cows to the feed bunk after exiting the parlor. Significant associations were not found during the lounging periods at the standard 0.05 cutoff, though records from the afternoon lounging period approached significance. These results were mirrored in rumination patterns, where again no significant association was recovered, but the afternoon lounging period approached significance. Finally, as with the linear modeling results, temperature proved highly distinct between queue subgroups for all subperiods.

Visual inspection of tube plots produced with median queue subgroup values again yielded insights comparable to the linear modeling results. Based on the results of the MCE tests, cows were clustered into two subgroups based on queueing records for all behavioral axes, with Group 1 consisting of the 80 animals at the front of the queue and Group 2 consisting of the 34 animals in the rear. Tube plots of minutes spent active revealed Group 2 cows to be more active across all three lounging periods. This pattern was the most consistent in the morning and overnight lounging periods, though this difference was ultimately quite subtle and seldom constituted more than a few minutes. In the afternoon subperiod there was evidence of several periods with anomalously high activity levels, most of which occurred post-pasture access. The significant association recovered for minutes spent highly active in the overnight subperiod appeared to be largely driven by increased activity immediately following the evening milking, which could reflect divergent home pen behaviors, but might also have been driven by delays in milking. To complement these results for active and highly active minutes, the significant association for afternoon non-activity records appears to have been driven by increased non-activity among the Group 1 cows during the three hours immediately preceding the night milking. As anticipated, differences in time spent eating were largely restricted to the 2-3 hours immediately following milking. Cows only lingered at the feed bunk during the morning lounging period, where median eating times for Group 1 cows were perhaps slightly higher. Similarly, differences in rumination also appeared restricted to time periods immediately following milking, with no clear differences seen during the lounging period with this coarse stratification of animals.

Finally, as with the mean plots, body temperature values again produced surprisingly distinctive results. More finely segmented into five queue groups by the mutual conditional entropy test, the tube plots proved a slightly cumbersome means of comparing temperature records, but a clear visual distinction could still be made between the Group 2 animals and the remainder of the herd. For all three lounging periods, this relatively small cluster of 17 cows that constituted the very front of the milking queue demonstrated lower median body temperature values, a distinction seen most clearly at night.

The strong agreement between the results of these two analytical pipelines suggests that UML and conventional linear modeling approaches could be used interchangeably or in concert to glean preliminary insights from exploratory analyses of large sensor-based datasets that may inform future hypothesis-driven studies. Perhaps the most surprising result of these analyses, that cows frequenting the back of the queue are consistently more active between milkings, may indeed warrant further exploration. In much of the prior literature, health challenges that impede movement have been identified as the main driver of delayed entry into the parlor. In fact, this mechanism is so well-established that it has even been proposed that milk order records might be incorporated into genetic evaluations to improve estimates of health traits. As these analyses were run on the subset of animals with no recorded health events, however, it is possible that this dataset has brought other behavioral mechanisms into focus. Previous studies have found that animals of low social status frequent the rear of the herd in voluntary movements, and social dominance is known to impact resource access in spatially constricted conditions such as those found at the entrance to the milking parlor. If low dominance animals are in turn also forced to wait longer or walk farther to access resources in the home pen, this could potentially explain the increased activity levels of animals found in the rear of the queue. While the early literature has found the relationship between dominance value and milking order to be tenuous at best, it is possible that such social mechanisms may have been confounded by health status, with linear analyses of limited sample size failing to disentangle these mechanisms in non-disaggregated data. On this farm, where resources are not severely restricted and animals are frequently remixed, energetic investments in a dominance hierarchy may offer few returns. Such a behavioral strategy might also explain why it is high-yielding multiparous cows and not heifers that occupy the end of the queue. Both these hypotheses are ultimately purely speculative interpretations of these exploratory results; however, if proposals to incorporate milk order records into genetic indices are progressed, any correlations between queueing position and consistent individual differences in home pen behaviors likely warrant closer inspection to mitigate the risk of unintended and potentially deleterious selection pressures. As with previous studies of milk order records, these analyses perhaps raise more questions than answers.
As dairy record management systems grow to accommodate an ever wider range of data streams, perhaps future work considering more herds from a wider range of management strategies will succeed in further untangling the complex web of explanatory variables at the individual, herd, and farm levels that drive variation in queueing patterns. This dataset has, nonetheless, demonstrated the utility of unsupervised machine learning tools in ethological studies using sensor platforms to study larger groups of animals over extended periods of time. While these analyses recovered no evidence of social cohesion amongst large or temporally consistent subgroups, information theoretic approaches succeeded in clarifying the underlying pattern of heterogeneity in error variance between animals and also demonstrated an advantage over basic EDA tools in recovering evidence of non-uniform patterns in temporal nonstationarity. After incorporating these insights into the structure of subsequent linear models, these model-free tools then showed some capacity to confirm inferential results where probabilistic assumptions were not strictly met, as well as an aptitude for recovering significant associations not captured by a simple linear effect. This flexible clustering-based approach to identifying significant bivariate associations was then easily extended to accommodate two high dimensional behavioral axes, providing equivalent insights to more computationally taxing mixed effect models. While UML approaches are by no means infallible, as seen here with artifacts produced by the spectral embeddings, these analyses have demonstrated that such tools can add value at every stage of the standard hypothesis-driven linear analysis pipeline, and may even offer an advantage over model-based approaches in early-stage exploratory projects. While many new methodological developments are doubtless on the ethological horizon, we hope this algorithmic toolset will provide a meaningful step forward to meet the challenges of a future defined by ever larger and more complex data.

Precision Livestock Farming technologies produce prodigious amounts of data. While the behaviors encoded by such sensors are often much simpler than those that can be quantified by a human observer, the measurement granularity and perseverance provided by these technologies create new opportunities to study complex behavioral patterns across time and in a wider range of contexts.

The latter limitation can obscure important dynamic features of the behavioral patterns under consideration

This model-free framework can subsequently be extended to a multivariate estimator with two or more discrete variables. In the bivariate case, the distribution of one variable is compared across each encoded level of the other in order to decompose the total entropy in the joint encoding into three terms: the conditional entropy unique to the first variable, the conditional entropy unique to the second variable, and the mutual information that is redundant between the two encodings. This mutual information estimate in turn reflects how much information we learn about one encoded variable if we know the value of the other, and subsequently can be used to reflect the strength of a bivariate association between two sets of encoded data regardless of the underlying dynamic (linear, quadratic, exponential, etc.).

Suppose our farmer with the overstocked cows, now fully aware of the welfare issues this management choice has created, reduces their stocking rate to a 1:1 ratio and continues to monitor the lying time of the animals to provide proof to their milk buyer that the issue has been resolved. After several months at this lower stocking rate, they review the data and are dismayed to find that there are still days where animals have inadequate lying times. To solve this new problem, they hire yet another consultant to help them track down the source of the problem. Suppose that this new dataset were collected in the summer, and so naturally this consultant includes the Temperature-Humidity Index as one of many candidate variables to consider as a potential source of the continued welfare concerns for this herd.
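Before continuing this example, a minimal sketch of the entropy decomposition described above may help make the terms concrete. The object names `x_enc` and `y_enc` below are placeholders for any two discretized variables; this is an illustration only, not code from the analyses reported here.

```r
# Illustrative sketch: entropy decomposition for two discrete encodings.
# 'x_enc' and 'y_enc' are placeholder factors (e.g., binned sensor variables).
H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }   # Shannon entropy in bits

p_joint <- table(x_enc, y_enc) / length(x_enc)           # joint distribution
p_x <- rowSums(p_joint)                                  # marginal of the first encoding
p_y <- colSums(p_joint)                                  # marginal of the second encoding

H_joint     <- H(p_joint)                 # total entropy of the joint encoding
mutual_info <- H(p_x) + H(p_y) - H_joint  # information redundant between the encodings
H_x_unique  <- H_joint - H(p_y)           # conditional entropy unique to the first, H(X|Y)
H_y_unique  <- H_joint - H(p_x)           # conditional entropy unique to the second, H(Y|X)
```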

Using stochastic sampling techniques, the full details of which are provided in Supplemental Materials, we have simulated a fairly straightforward but nonlinear dynamic between these two variables. On days where the observed THI values are low, animals are not heat stressed and so spend the majority of their day out on pasture grazing. As the THI rises, cows become heat stressed for progressively larger proportions of the day, resulting in a gradual increase in the proportion of each day that cows spend lying down in the shade of their free stall barn. Above a certain high THI threshold, however, cows struggle to thermoregulate while lying down, causing them to stand for extended periods of time. When lying time is plotted against THI, as in Figure 7A, we can see that there is a clear nonrandom pattern in this data that is perhaps best characterized by a threshold model, a dynamic that is commonly found when a single behavioral response is subject to the influence of competing underlying behavioral response mechanisms. If a simple linear effect were utilized to probe for a significant bivariate association between these two variables, however, a near-zero slope would be returned, as shown by the red line overlaid in Figure 7A. In this case, not only would a Pearson correlation test fail to identify this nonrandom but nonlinear pattern, but because this pattern is also not monotonically increasing, even a nonparametric Spearman rank correlation test would fail to identify THI as a significant influence on lying times within this herd. Suppose both the THI and lying time measurements are discretized using simple equal-sized binning rules. If we compare the mutual information estimate from the two observed encodings against estimates generated from a simple permutation, the resulting p-value for this test of bivariate association would be highly significant.
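As a rough sketch of this binning-and-permutation procedure, the test can be assembled in a few lines. The simulated values below are placeholders constructed to mimic the threshold dynamic described above; they are not the simulation from the Supplemental Materials.

```r
# Hedged illustration only: discretize both variables with equal-width bins and
# compare the observed mutual information against a permutation null.
set.seed(42)
thi   <- runif(300, 55, 90)                                    # placeholder THI values
lying <- ifelse(thi < 68, 9,                                   # cool days: baseline lying
         ifelse(thi < 80, 9 + 0.4 * (thi - 68), 7)) +          # warm: more lying; hot: standing
         rnorm(300, 0, 0.7)

bin <- function(v, k = 6) cut(v, breaks = k)                   # equal-width binning rule

mutual_info <- function(x, y) {                                # mutual information in bits
  p  <- table(x, y) / length(x)
  px <- rowSums(p); py <- colSums(p)
  sum(p[p > 0] * log2(p[p > 0] / outer(px, py)[p > 0]))
}

thi_bin   <- bin(thi)
lying_bin <- bin(lying)
obs  <- mutual_info(thi_bin, lying_bin)
null <- replicate(5000, mutual_info(thi_bin, sample(lying_bin)))
p_value <- mean(c(null, obs) >= obs)                           # permutation p-value
```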

To further characterize this dynamic, a simple contingency table, wherein each cell represents the total number of observations for each joint encoding, can be easily visualized as in Figure 7B. The mutual information estimate for the overall bivariate association can subsequently be decomposed into pointwise mutual information estimates to reflect how much each cell in the observed table differs from the counts that would be expected by multiplying the marginal probabilities, which would be the distribution of joint observations anticipated if no association existed between the two encodings. Here blue cells indicate that there are fewer observations with the corresponding joint encoding than would be expected if no association between these variables were present, whereas orange cells are over-represented relative to the null. From this visualization we can clearly see that the probability of observing a given lying pattern is being shifted in different directions based on the level of the THI encoding. Thus, absent any prior intuition or assumptions about the relationship between these two variables, an information theoretic approach has successfully identified a significant bivariate association and provided insights into the underlying dynamic to inform further interpretation of the underlying behavioral mechanisms at play and subsequently the correct management interventions needed to remediate this welfare concern.

For much of its history, ethological research in livestock has relied on human observers to encode behaviors of interest. While developing a detailed ethogram and observer training protocols constitute no simple task, there are several inherent advantages to this approach for subsequent statistical analyses. Continuous involvement of a human in the incoming data stream allows many erroneous data points to be identified and excluded from downstream analyses that they might otherwise destabilize. Extensive involvement of research personnel in the data collection phase also nurtures a deeper familiarity with the system under study. This not only aids in the specification of an appropriate statistical model and interpretation of results, but is often critical in identifying unexpected behavioral patterns that can inspire novel hypotheses. Unfortunately, the inherent quality of such data imposes practical limitations on the quantity that can be produced. This can restrict both the number of animals utilized in a study and the period of time over which they are observed. Restrictions on the number of animals that can be studied, on the other hand, can fundamentally alter the behavioral mechanisms at play in a herd. For example, the linearity of dominance hierarchies is known to change with group size. As commercial herds and flocks become ever larger, this only serves to broaden the gap between experimental findings and the welfare challenges they are meant to inform. Subsampling of animals or observation windows may be employed to reduce the number of observations collected without restricting the size of the study system. If the pre-existing base of scientific literature does not provide clear guidance on the selection of target animals or focal periods, however, such strategies may risk overlooking finer-grain behavioral patterns and skewing inferences about the collective behavior of the group. In recent years, livestock sensor technologies have become a popular alternative to visual observation.
While the behaviors recorded are neither as complex nor as detailed as those quantified via an observational ethogram, such devices have the capacity to continuously monitor hundreds or even thousands of animals for extended periods of time. Such a substantial expansion in the bandwidth capacity of ethological studies creates many new opportunities to better understand the behavior of livestock, particularly in large-scale commercial settings, but also raises new methodological challenges.

Replacing nuanced human intuition with basic computer logic may increase the risk of erroneous data points, an issue that is only further compounded by the scale of data produced by such technologies, which renders many conventional visualization techniques ineffective in identifying outliers. Observations recorded over extended time periods with high sampling frequency from large heterogeneous social groups may also contain a range of complex stochastic features (autocorrelation, temporal nonstationarity, heterogeneous variance structures, non-independence between experimental units, etc.) that can lead to spurious inferences when not appropriately specified in a conventional linear model. The hands-off and somewhat black-boxed nature of many sensor platforms, however, does not nurture the intuition needed to identify many of these model structures a priori. Such insights must instead be drawn directly from the data itself, but here again, standard visualization tools may not scale to such large datasets. Unsupervised machine learning tools offer a distinct empirical approach to knowledge discovery that is purpose-built for large and complex datasets. Whereas conventional linear models excel at providing answers to targeted experimental hypotheses, UML algorithms strive to identify and characterize the nonrandom patterns hiding beneath the stochastic surface of a dataset using model-free iterative techniques that impose few structural assumptions. This open-ended and highly flexible approach to data exploration may offer an empirical means by which to recover much of the familiarity with a study system that is lost with the shift from direct observation to sensor platforms. The purpose of this research was to contrast the behavioral insights gleaned from UML algorithms with those recovered using conventional exploratory data analysis techniques, and to then explore how such information could be best integrated into standard linear analysis pipelines. Milking order, or the sequence in which cows enter the parlor to be milked, is recorded in all RFID equipped milking systems, making such records one of the most universal automated data streams to be found on modern dairies. Despite their ubiquity, such records are seldom used to inform individual or herd-level management strategies. This lack of utility, however, has not been for lack of study. The modest base of scientific literature that has been compiled on this topic has struggled to recover repeatable evidence of such associations. While such inconsistency may simply reflect non-uniformity in the behavioral strategies driving queueing patterns across different herds and farm environments, misspecification of the linear models used to describe this system could also contribute to volatility in these statistical inferences. The objective of this methodological case study will be to visualize the various stochastic aspects of such records using UML tools in an effort to identify erroneous data points and heterogeneous variance structures that may not be recovered using conventional EDA techniques.

Data for this case study were repurposed from a feed trial assessing the effect of an organic fat supplement on cow health and productivity through the first 150 days of lactation. All animal handling and experimental protocols were approved by the Colorado State University Institutional Animal Care and Use Committee. The study ran from January to July of 2017 on a certified organic dairy in Northern Colorado.
A total of 200 mixed-parity Holstein cows were enrolled over a 1.5 month period as study-eligible animals calved. Cows were maintained in a closed herd for the duration of the study, with sick animals temporarily removed to a hospital pen when necessary. The study pen was an open-sided free stall barn, stocked at just above half capacity with respect to bunk space and beds, with free access to an adjacent outdoor dry lot. At roughly the midpoint of the trial, cows were moved overnight to a grass pasture that conformed with organic grazing requirements. Cows had access to a total mixed ration following each milking. Animals were temporarily split into two subsections of the pen following the morning milking to facilitate administration of control and treatment diets. Cows remained locked for roughly 45 minutes following this division so that farm and research staff could collect health and fertility data. Additionally, all animals were fitted with CowManager® ear tag accelerometers. This commercial sensor platform, while designed and optimized for disease and heat detection, also provided hourly time budget estimates for total time engaged in a range of behaviors – eating, rumination, non-activity, activity, and high activity – as well as average skin temperature.

Raw milk logs were exported from the rotary parlor following each morning milking, and were processed using data wrangling tools available in R version 3.5.1. To account for missing records due to illnesses and RFID reader errors, ordinal entry positions were normalized by the total number of cows recorded in a given milking. For example, if a cow were always the last animal to enter the parlor, her ordinal entry position might vary widely with herd size, but her entry quantile would always be 1. The first 55 days of records were excluded from analyses to allow all animals to enter the herd over the rolling enrollment period and become established in their parlor entry position. To avoid irregularities in cow movements, several observation days surrounding management changes were also dropped, including: the two days preceding transition to pasture, the four days following pasture access, and the final seven days on trial.
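A minimal sketch of this normalization step is shown below; the column names `milking_date` and `entry_position` are assumptions for illustration, not those of the original logs.

```r
# Sketch only: convert ordinal parlor entry position into an entry quantile by
# dividing by the number of cows recorded at that milking.
library(dplyr)

milk_logs <- milk_logs %>%
  group_by(milking_date) %>%
  mutate(entry_quantile = entry_position / n()) %>%   # last cow through always scores 1
  ungroup()
```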

Each theme was mentioned in at least half of the interviews

Sordariomycetes enrichment may indicate other community shifts that are ultimately the cause of enhanced fruit quality. Endophytes in the order Hypocreales, which was enriched in dry farm fields, are known to increase drought resistance and decrease pest pressure in their hosts, though none of the specific species known to exhibit this behavior were enriched in dry farm soils. On the other hand, Nectriaceae, the family that contains the Fusarium genus, was found to be enriched, though similarly no known pathogenic species were enriched in dry farm soils.

Our study explored dry farm management practices and their influence on soil nutrient and fungal community dynamics in 7 fields throughout the Central Coast region of California, allowing us to explore patterns across a wide range of management styles, soil types, and climatic conditions. Though we were able to sample from a large swath of contexts in which tomatoes are dry farmed, we are also aware that conditions will vary year to year, especially as climates change and farmers can no longer rely on “typical” weather conditions in the region. While we are confident in the patterns we observed and the recommendations below, we also encourage further study across multiple years to better understand the full scope of the decision space in which dry farm growers are acting.

Given the scope of our current findings, we outline several management and policy implications for dry farmers and dry farming. Though we aim these implications towards the context of dry farm tomatoes in coastal California, we expect that they are likely to generalize to other dry farm crops grown in other regions with Mediterranean climates. First, given the expense and possibility that it is detrimental to fruit quality, we do not advise AMF inoculation for dry farm tomato growers. Second, we note the importance of nutrients below 60cm and the complexities of subsurface fertility management, and we recommend experimentation with organic amendments and deeply rooted cover crops that may be able to deliver nutrient sources that persist at depth, as well as planning several seasons in advance to build nutrients deeper in the soil profile. Finally, given our finding that dry farm soils develop a fungal signature that increases over time and its association with improved fruit quality, we encourage farmers to experiment with rotations that include only dry farm crops and even consider setting aside a field to be dry farmed in perpetuity. However, fully dry farmed rotations currently do not exist, likely due to a lack of commercially viable options for crops to include in a dry farm rotation. In order to experiment with potential dry farm rotations, as well as cover crops that can best scavenge excess nitrates and soil management regimes that can increase soil fertility at depth, farmers must be given both research support and a safety net for their own on-farm experimentation. Funding to mitigate the inherent risk in farmers’ management explorations will be key in further developing high-functioning dry farm management systems. Expanding land access to farmers who are committed to exploring dry farm management can additionally benefit these explorations.

Dry farm tomato systems on the Central Coast point to key management principles that can both help current growers flourish and provide guidance for how irrigation can be dramatically decreased in a variety of contexts without harming farmer livelihoods. In these systems, managing nutrients at depth–at least below 30cm and ideally below 60cm–is necessary to influence outcomes in fields where surface soils dry down quickly after transplant. Fostering locally-adapted soil microbial communities that are primed for water scarcity can improve fruit quality. Farmers can otherwise manage nutrients to maximize either yields or quality, giving latitude to match local field conditions to desired markets. As water scarcity intensifies in California agriculture and around the globe, dry farm management systems are positioned to play an important role in water conservation. Understanding and implementing dry farm best management practices will not only benefit fields under strict dry farm management, but will provide an increasingly robust and adaptable example for how farms can continue to function and thrive while drastically reducing water inputs.

Unlike other forms of dryland farming, in this region dry farm tomatoes are grown over a summer season where there is a near guarantee of no rainfall. Farmers plant tomatoes into moisture from winter rains, counting on soils to hold on to enough water to support the crops over the course of the entire dry summer and fall. While some farmers irrigate 1-3 times in the first month after transplant, severe water restriction is what gives the fruits their intense flavor, and farmers trade water cuts that lower yields for price premiums that consumers are more than willing to pay for higher quality fruits. Beyond Bay Area consumers’ enthusiasm for high-quality local produce, dry farm tomatoes also trace their origins to a richer food culture of justice-oriented and farmer-centric food distribution in the region.

From the Black Panther Party’s Free Breakfast Program to strong community support for worker-owned and consumer food cooperatives, the Bay Area has become a hub of alternative values-based supply chains in a country largely dominated by an industrialized food system. Following this tradition, dry farm tomatoes originally found their footing in the United States in the Central Coast region 30 miles south of the Bay. In the 1970s and 1980s, innovative growers in small-scale cooperatives and teaching farms adapted an Italian and Spanish legacy of vegetable dry farming to the region’s Mediterranean climate, maritime influence, and high-clay soils. While these environmental features were necessary to grow tomatoes under dry farm management, the movement that sparked the reemergence of local farmers’ markets in the 1980s also provided the access to direct-to-consumer marketing that small farms needed to win consumer attention and loyalty, allowing farmers to both grow and sell this niche product. With their origins in local food distribution networks and local adaptations to a unique climate, dry farm tomatoes are now a signature of small, diversified, organic farms on the Central Coast and are a feature of many such operations’ business models. To this point, dry farming has largely followed its initial course and is only practiced at a small scale in the region, both in terms of geographic scope and farm size. Dry farming may therefore be playing a role in an agroecological transition in the region, buoying small-scale, thought-intensive management styles with access to a steady income source and consumer base. However, with recent droughts and water shortages in California, dry farming has recently begun to take a more prominent role in social and policy visions for the future of the state’s agricultural system. From the Sustainable Groundwater Management Act to emergency orders in drought years, farmers, researchers, policymakers, and the general public have become acutely aware of California’s currently unsustainable agricultural water use and the economic ramifications of water shortages. As an option that holds promise for maintaining farmer livelihoods while dramatically cutting water use, dry farming has been touted by journalists and policy groups as an important system to target for significant expansion. Farmers have been considering how to use dry farming to adapt to drier futures for decades, lighting the way for researchers’ and policymakers’ more recent interest. However, up to this point, farmers’ thoughts and knowledge about dry farming have not been clearly elicited or formally incorporated into conversations about the future of the practice. Grounding conversations about future expansion of the practice in the knowledge of those who are most intimately familiar with its implementation is essential. At this moment of enthusiasm for dry farming, we look to practitioners to better understand the current state of dry farming on the Central Coast and its potential for expansion across California, along with the benefits and harms that expansion may carry.

We interviewed ten dry farmers, representing over half of the commercial dry farm tomato operations on the Central Coast, in order to collaboratively answer two central research questions. First, what business and land stewardship practices characterize successful tomato dry farming on California’s Central Coast? And second, what is the potential for dry farming to expand beyond its current adoption while maintaining its identity as a diversified practice that benefits small-scale operations? The majority of these farmers were part of an ongoing participatory research project in which field data were collected to better understand soil fungal communities and nutrient management in dry farm systems. These interviews were extensions of conversations and relationships fostered with farmers throughout the research process. We synthesized farmer insights into nine key themes that broadly describe how dry farming is currently practiced on the Central Coast, its potential to expand in scope, and the opportunities that farmers see as particularly provident for the practice. We also used the constraints identified by farmers to map areas most likely to be suitable for future dry farming. At this juncture of a high-functioning, low-water management system and urgent political interest in decreasing agricultural water use–in California and across the globe–we conclude by asking how dry farming can be a model for developing systems that decrease water use, and also how dry farming itself may be scaled out to other small-scale, thought-intensive operations without jeopardizing these same farms’ ability to continue profitably growing dry farm produce.

Interviews were done with farmers who have commercial operations in California’s northern Central Coast region, as well as one farm with operations in Marin and Sonoma counties. Ranges of coastal mountains govern both climate and land use, trapping cool, moist air, and concentrating farming operations in valleys with fertile, alluvial soils. The Central Coast is known for its agricultural production–particularly berries, lettuce, and artichokes–that thrive in its fertile soils and mild climates that allow for year-round cultivation. Agricultural revenue in the region totals over $8 billion annually, making it a larger agricultural producer than most countries. This intensive production has led to both high land values and environmental degradation–largely in the form of water contamination–that shape both farmer decision-making and policy interventions. Within this landscape, farms often operate at industrial scales, though many small farms persist. Though cropland is consolidated into fewer, large operations, many smaller farms have found niches selling to local markets.

After building relationships over the course of a year-long participatory field research process with eight tomato dry farmers, we conducted semistructured interviews with all farmers involved in that study. We interviewed two additional dry farmers who were not involved in the field project–one whose farm is in Sonoma County, and one whose farm could not participate in the field study due to extensive fire damage–for a total of ten farmers representing eight operations. Interviews were done in person, over the phone, and on Zoom in winter and fall 2022.
Because there is no official record of tomato dry farmers in the Central Coast region, we used a snowball approach to identify farms that might be candidates for inclusion, asking each interviewee what other dry farm operations they knew of in the area. We can identify two dry farm tomato growers in the region who were not interviewed in this study, and we estimate that our interview subjects represented 50-75% of commercial dry farm tomato operations on California’s Central Coast. Interviews lasted 1-2 hours and focused on dry farm management practices, environmental constraints, support, water/land access, and economics. Interviews were recorded and transcribed, then analyzed through an iterative process of open, axial, and selective coding. Data were grouped into three overarching categories, with key themes in each category. In order to identify areas that might be suitable for future tomato dry farm management, we used farmer-described constraints to make a suitability map using publicly available datasets. We first compiled the environmental constraints on tomato dry farming described in each interview, which fell into three main categories: precipitation, temperature, and soil texture. We limited our analysis to California as the region these farmers are most familiar with to avoid extrapolating constraints beyond the context in which they were given. We used PRISM 30-year climate normals to characterize California’s temperature and precipitation. We used the average constraint named by the farmers; however, because these normals are a 30-year average and individual years will stray significantly from them, particularly in the case of precipitation, we expect that we overestimate the extent of suitable areas.
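A minimal sketch of how such a suitability layer might be assembled is shown below. The file names and threshold values are hypothetical placeholders for illustration only; they are not the farmers’ stated constraints or the datasets used in this study.

```r
# Illustrative sketch only: combine climate normals and a soil texture layer into
# a binary suitability map. All thresholds below are hypothetical placeholders.
library(terra)

ppt   <- rast("prism_ppt_30yr_normal.tif")     # mean annual precipitation (mm)
tmean <- rast("prism_tmean_30yr_normal.tif")   # mean growing-season temperature (deg C)
clay  <- rast("soil_percent_clay.tif")         # surface clay content (%)
# Layers are assumed to share extent and resolution; use resample() otherwise.

suitable <- (ppt >= 500) & (tmean <= 20) & (clay >= 25)   # placeholder constraint values
writeRaster(suitable, "dry_farm_suitability.tif", overwrite = TRUE)
```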

Rotational complexity decreased with average rainfall during the growing season

They show that Kenyan exporters selectively allocated limited deliveries across contracts to maintain their reputation for reliability as sellers. In a study of relational contracts in the Rwandan coffee industry, the same authors instrument the placement of mills to show that more potential competition from other mills reduces the use of relational contracts with farmers, making farmers worse off and reducing the quantity of coffee supplied to mills. Second-best competition is thus not necessarily welfare reducing.

Biological simplification has accompanied agricultural intensification across the world, resulting in vast agricultural landscapes dominated by just one or two crop species. The Midwestern US is a prime example, where corn currently dominates at unprecedented spatial and temporal scales. An area the size of Norway is planted in corn in the Midwest in any given year with little variation in crop sequence; over half of Midwestern cropland is dedicated to corn-soy rotations and corn monoculture. Directly and indirectly, this agricultural homogeneity causes environmental degradation that harms ecosystem health while also contributing to climate change and increasing vulnerability to climate shocks. Agricultural diversification in space and time reverses this trend towards homogeneity with practices like crop rotations that vary which harvested crops are grown in a field from year to year. Crop rotations are a traditional agricultural practice with ample evidence that complex rotations—ones that include more species that turn over frequently—benefit farmers, crops, and ecosystems.

As one of the principles underlying agricultural soil management, diverse crop rotations promote soil properties that provide multiple ecosystem services, including boosting soil microbial diversity, enhancing soil fertility, improving soil structure, and reducing pest pressure. These soil benefits combine to increase crop yields and stabilize them in times of environmental stress. Crop rotations’ environmental and economic benefits typically increase with the complexity of the rotation, while conversely, biophysical aspects like soil structure and microbial populations are degraded as rotations are simplified. Despite its benefits, crop rotational complexity continues its century-long decline in the Midwestern US. Corn-soy rotations increasingly dominate over historical crop sequences that included small grains and perennials, with corn monocultures also on the rise. This increasing simplification is in part the result of a set of interlocking, long-standing federal policies aimed at maximizing production of a handful of commodity crops that distort farmers’ economic incentives. Regional rotation simplification is clear from analyses of crop frequency, county-level data, and farmer interviews. However, fine-grained patterns that more completely reflect farmers’ rotational choices across the region, and how those choices relate to influences from policy and biophysical factors that play out across agricultural landscapes, remain largely unstudied. This knowledge is essential for understanding how national agricultural policy manifests locally and interacts with biophysical phenomena to erode—or bolster—soil and environmental health, agricultural resilience, and farmers’ livelihoods. Bio-fuel mandates and concerted efforts to craft industrial livestock systems as end-users of these corn production systems make corn lucrative above other commodities, while federal crop insurance programs push farmers to limit the number of crops grown on their farms.

These policies, along with the current corporate food regime, drive pervasive economic incentives to grow corn, and farmers must increasingly choose between growing corn as often as possible to provide a source of government guaranteed income, and maximizing soil benefits and annual yields through diversified rotations. These policies both alter agricultural economics at a national level by boosting corn prices and manifest locally in grain elevators and bio-fuel plants that create pockets of high corn prices with rising demand closer to each facility. Biophysical factors like precipitation and land capability that are highly localized and spatially heterogeneous can catalyze or impede this simplification trend. For example, increasing rotational complexity is one strategy that farmers may employ to manage marginal soils or greater probability of drought, while ideal soil and climate conditions allow for rotation simplification to be profitable, at least in the short run. As these top-down and bottom-up forces combine, we ask: how do farmers optimize crop rotational diversity in complex social-ecological landscapes, with top-down policy pressures to simplify intertwined with bottom-up biophysical incentives to diversify? Because biophysical factors and even policy influences vary greatly at the field scale at which management decisions occur, an approach is needed to assess patterns of crop rotation that can capture simplification and diversification at this scale. Though remotely sensed data on crop types can now show fine-scale crop sequences, previous approaches to quantifying rotational complexity have relied on classifying rotations based on how often a certain crop appears in a region over a given time period, aggregating over large areas, or examining short sequences. To date, methods to capture rotational complexity have therefore been unable to address management decisions at the field scale, and/or lose valuable information about the number of crops present in a sequence and the complexity of their order.

At the other end of the spectrum, farmer surveys have impressively detailed the economic and biophysical considerations that go into farmers’ rotation decisions, yet are limited by the number of farmers they can reach and who chooses to respond. Here, we explore how aspects of farm landscapes influence field-scale patterns of crop rotational complexity across the Midwestern US. We developed the first field-scale dataset of rotational complexity in corn-based rotations, covering 1.5 million fields in eight states across the Midwest and ranking crop sequences based on their capacity to benefit soils. We examined rotations from 2012-2017 to coincide with the introduction of the Renewable Fuel Standard, or “bio-fuel mandate,” which took full effect in 2012. We then correlated fields’ rotational complexity with biophysical and policy factors, using bootstrapped linear mixed models to account for spatial autocorrelation in the data. By identifying spatially explicit predictors of rotational complexity, we illuminate how top-down policy pressures combine with biophysical conditions to create fine-scale simplification patterns that threaten the quality and long-term productivity of the United States’ most fertile soils.

We focused our analysis on the eight Midwestern states with the highest corn acreage. We considered the six-year period from 2012 to 2017, which coincides with the introduction of the Renewable Fuel Standard in 2012. After deriving a novel field-scale rotational complexity index (RCI), we used spatially blocked bootstrapped regression to assess how key landscape factors were associated with this indicator. These statistical methods account for overly confident parameter estimates that arise in naive models due to spatial autocorrelation in the data. All analyses were conducted in R.

To test for a relationship between RCI and predictive factors, all variables were centered and RCI was regressed against a set of covariate data in a linear mixed model including US state as a random effect to account for regional differences. We included interactions for which we had a priori hypotheses. The model was estimated using the R package `lme4`. Two model assumptions are violated in the above model, requiring updated estimates of the parameters’ standard errors. First, because RCI is a derived statistic with an unusual domain, the index is not distributed according to a known distribution family and violates the assumption of normality in the residuals. Second, residuals showed high spatial autocorrelation at multiple scales and with an unknown structure, necessitating a nonparametric approach. Both violations are likely to shrink standard errors of the estimated parameters, leading to overconfident estimates; to illustrate, in the case of spatial autocorrelation, if the explanatory variables are randomly located in relation to crop rotation, spatial autocorrelation in crop rotation would falsely inflate significance. We used nonparametric spatial block bootstrapping to correct for this overconfidence. An algorithm for sparsely distributed spatial data, derived by Lahiri (2018), was implemented in R. Spatial block bootstrapping involves iteratively resampling data in spatial blocks to mimic the generation of autocorrelated data. Choice of block size is nontrivial, and choosing the optimal block is an open question, but blocks should be larger than the scale at which autocorrelation operates.
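A minimal sketch of this workflow, under assumed column and object names rather than the study's actual code, is given below; the variogram step described next is used to choose a block size larger than the residual autocorrelation range.

```r
# Hedged sketch: naive mixed model, residual variogram, and spatial block bootstrap.
# 'fields' and its columns (rci, nccpi, rain_mean, rain_var, dist_biofuel,
# dist_elevator, field_size, x, y, state) are assumed names for illustration.
library(lme4)
library(gstat)
library(sp)

naive <- lmer(rci ~ nccpi * field_size + nccpi * rain_var + rain_mean +
                dist_biofuel + dist_elevator + (1 | state),
              data = fields)

# Variogram of the naive model's residuals; its range guides the block size.
res_sp <- SpatialPointsDataFrame(coords = fields[, c("x", "y")],
                                 data   = data.frame(res = resid(naive)))
plot(variogram(res ~ 1, res_sp))

# Spatial block bootstrap: resample whole blocks of fields with replacement,
# refit the model, and use the spread of fixed effects as corrected standard errors.
block_m  <- 400000                                  # block edge length (m), > variogram range
block_id <- interaction(floor(fields$x / block_m),
                        floor(fields$y / block_m), drop = TRUE)
boot_coefs <- replicate(500, {
  sampled <- sample(levels(block_id), nlevels(block_id), replace = TRUE)
  dat     <- do.call(rbind, lapply(sampled, function(b) fields[block_id == b, ]))
  fixef(lmer(formula(naive), data = dat))
})
apply(boot_coefs, 1, sd)   # bootstrapped standard errors for each fixed effect
```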
Using the R package `gstat` to compute a variogram of the residuals generated by the naive LMM, we determined that the range was 400,815 m. We used this as the dimension of each spatial block. We repeated this bootstrap with a range of possible spatial block sizes and found that this inference on parameters was robust to the choice of block size.

Complexity of Corn-based Rotations in the Midwestern US: RCI values calculated for corn-based rotations create the first map, to our knowledge, that quantifies field-scale rotational complexity across the Midwestern US. RCI values from 2012-2017 range from 0-5.2, and are positively skewed. Corn monoculture accounts for 4.5% of the study area and 3.3% of fields, suggesting that larger fields are more likely to be managed as monocultures.

The mode RCI score corresponds to a corn-soy rotation and dominates the region, covering over half of the study area. Two thirds of the area with this score was a CSCSCS or SCSCSC sequence, while the remaining third corresponds to other rotations that yield the same RCI.

RCI scores have statistically clear correlations with land capability, mean rainfall, distance to the nearest bio-fuel plant, and field size, as well as with several interactions between these variables. Standard errors from the spatially blocked bootstrap were much larger than uncorrected naive confidence intervals, reflecting that accounting for spatial non-independence is necessary to estimate uncertainty of parameter estimates. Rotational complexity decreased with NCCPI, a proxy for land capability. We find that land of higher inherent capability is more likely to be used for lower complexity rotations. Fields with ample precipitation during the growing season are more likely to have simplified rotations. Though the relationship between the proximity of the nearest grain elevator and a field’s rotational complexity is not statistically clear, RCI showed a clear increase with distance to the nearest bio-fuel plant. Fields that are closer to bio-fuel plants are therefore more likely to have simplified rotations. Rotational complexity decreased with field size, with larger fields being more likely to have simplified rotations. Two of the interactions included in the model show statistically clear relationships. There is a positive interaction between land capability and field size, with higher quality land associated with decreasing RCI on small fields and slightly increasing RCI on large fields. The interaction between land capability and rainfall variance shows a negative effect on RCI, with highly variable rainfall accentuating land capability’s impact on RCI. Interpretations of the relationship that each variable has with rotational complexity are shown in Table 4. Though each change is associated with a small shift in average RCI across the region, these can represent massive shifts in regional land management.

As crop rotations continue to simplify in the Midwestern US despite robust evidence demonstrating yield and soil benefits from diversified rotations, our ability to explain and understand these trends will come in part from observing the biophysical and policy influences on farmers’ crop choices at one key scale of management: the field. By developing a novel metric, RCI, that can classify rotational complexity over large areas at the field scale, we open the door to regional analyses that can address the unique landscape conditions that impact farmers’ field-level management choices and their subsequent influence on rotational simplification. We find that as farmers are pushed towards simplification by broad federal policies, physical manifestations of these policies like bio-fuel plants are correlated with intensified simplification pressures. Similarly, we see that the pressure to build soils and boost crop yields through diversified rotations intensifies in fields with lower land capability, while conversely the negative effects of cropping system simplifications are accentuated on the region’s best soils.

RCI uses the sequence of cash crops on a given field as a proxy for crop rotation, and sorts these sequences into scores based on the sequence’s complexity and potential for agro-ecosystem health.
Because this metric has not been used in previous analyses, we verified RCI’s validity through comparisons to previous estimates of rotational prevalence in the region. For example, two separate surveys of farmers in the Midwestern US showed that between 24% and 46% report growing “diversified rotations,” which we consider to be an RCI greater than 2.24. In the present study, 34% of fields had an RCI greater than 2.24. This and further comparisons of RCI to previous work show that RCI is capable of capturing previously-noted trends in the region.

Public-private partnerships can be found among both supply- and demand-side initiatives

Rural transformation is accompanied by land concentration in medium farms in countries like Kenya, Ghana, Tanzania, and Zambia. These farms are typically mechanized and owned by well-educated urban-based professionals who can be effective agents for technology adoption. These various success stories show that using agriculture for development can be done, but has not yet been sufficient to overcome aggregate rising gaps in yields between SSA and the rest of the world.

While there has been limited success with raising public expenditures on agriculture, there has been considerable progress with data collection and with rigorous experimentation on how to promote the modernization of agriculture. We consequently know a lot more today about how to use agriculture for development than we did ten years ago, even though this knowledge has most often not been put into practice in the desirable form and to the desirable degree. It is consequently important to start by reviewing what we have learned. The main argument that has been used in support of the need for a structural transformation as the mechanism to grow and reduce poverty is that there is a large labor productivity gap between agriculture and non-agriculture. An important observation, however, based on the LSMS-ISA data for SSA is that while the gap in labor productivity per person per year between non-agriculture and agriculture is indeed large, the gap in labor productivity per hour worked is relatively small. In other words, when agricultural workers do work, their labor productivity is not very different from that of non-agricultural workers.

What this suggests is that there is a deficit in work opportunities for agricultural vs. non-agricultural workers that creates an income gap between the two categories of workers. Because households engage in a multiplicity of sectoral activities, the relevant contrast in labor productivity is not between agriculture and non-agriculture, but between rural and urban households, with rural households typically principally engaged in agriculture. Looking at labor calendars for rural and urban households in Malawi in Figure 1, we see that weekly household hours worked are not different for rural and urban households at peak labor time, which corresponds to the planting season in December and January. During the rest of the year, there are far fewer employment opportunities for rural than for urban households, with the former working about half the time worked by urban households during the low season. Lack of labor smoothing across months can thus be a major cause of income differentials between rural and urban households. Measuring annual labor productivity as median household real consumption per capita, rural households are at 57% of the level of individuals in urban households. When this is measured not per year but per hour worked, rural households are at 81% of the level of individuals in urban households. With high urban unemployment in Malawi limiting the option of reducing rural poverty through permanent or seasonal rural-urban migration, this suggests that a key instrument for rural poverty reduction is to have less idle time for land and labor throughout the monthly calendar. For Bangladesh, Lagakos et al. proposed filling labor calendars for rural households through migration to cities during the lean season. When this option is not available due to high urban unemployment, filling and smoothing labor calendars in rural areas becomes a key dimension of poverty reduction. This can involve employment both in agriculture with more diversified farming systems and in the local rural non-farm economy. This is the purpose of the agricultural and rural transformations that are important in redefining how to use agriculture for development.

Based on work done for the IFAD Rural Development Report led by Binswanger, for China by Huang, by BRAC on graduating the ultra-poor out of poverty, for the Gates Foundation by Boettiger et al., and for the ATAI project, a strategy of using agriculture for development would involve the following five steps: Asset building, Green Revolution, Agricultural Transformation, Rural Transformation, and ultimately Structural Transformation, as described in Table 1. We refer to this strategy as the Agriculture for Development sequence. Minimum asset endowments for SHF under the form of land, capital, health, knowledge and skills, and social capital are needed to initiate production for the market and participation in a value chain. This corresponds to minimum capital endowments to get started in production in farm household models such as Eswaran and Kotwal’s, and to asset thresholds to escape poverty traps in Barrett and Carter. The BRAC graduation model for the rural ultra-poor thus importantly starts with achieving minimum asset thresholds for households to engage in self-employment in agriculture, with rigorous impact evaluations demonstrating success in raising household consumption in five of six case countries. Evaluation with a randomized experiment of a BRAC credit program for landless workers and SHF in Bangladesh shows that loans can be used to achieve minimum asset endowments by renting land and selecting more favorable fixed-rent over sharecropping contracts. The Green Revolution, whereby productivity growth is achieved in staple crops through the adoption and diffusion of high yielding variety seeds and fertilizers, is the initial step in agricultural modernization. It has been actively pursued to achieve food security and is a learning ground for the subsequent transformations of agriculture and rural areas. It has been a major success of the Consultative Group on International Agricultural Research and is still an ongoing effort in Sub-Saharan Africa and Eastern India. A key objective of the Agricultural Transformation is to fill in rural households’ labor calendars over as much of the year as possible through multiple cropping, which typically requires water control to cultivate land in the dry season, the development of value chains for new crops, and contracting among agents in these value chains. An example is the introduction of short duration rice varieties in Bangladesh that frees the land for an additional crop, typically high value products such as potatoes and onions, between rainy season and dry season rice crops. This makes an important contribution to filling land and labor calendars and to reducing the length of the hungry season. Because the Agricultural Transformation implies diversification of farming systems, it is a key element of national food security strategies where diverse diets, including perishable goods such as fruits and vegetables, dairy products, and meats that are less traded than staple foods, are an important element of healthy diets. SHFs are engaged in value chains that define the way they relate to markets. Value chains for agricultural products link farmers backward to their input and technology suppliers and forward to intermediaries, processors, and ultimately consumers. Relations within value chains often take the form of contractual arrangements.
Induced by income gains for consumers, urbanization, and globalization, there has been in recent years a rapid development of value chains not only for low-value staple food crops, but also for medium-value traditional domestic consumption and export crops, and high-value non-traditional export crops.

Their structure can take a wide variety of forms in linking SHF to consumers, ranging from traditional spot markets to elaborate contract farming, productive alliances, and out-grower schemes. Contracts can be “resource-providing”, thus contributing to solving market and institutional failures for participating SHFs. A key objective of the Rural Transformation is to give smallholder households access to sources of income beyond agriculture. In Ghana, income derived from the rural non-farm economy for rural households is about 40% of total income, a share that increases as land endowments fall. It is indeed the case that, with land limitations, smallholder households rarely exit poverty with agriculture alone. A rural transformation requires the development of land markets and of labor markets. This process will typically happen first in the more favorable areas where a rural non-farm economy linked to agriculture can develop through forward, backward, and final demand linkages. It corresponds to the Agriculture Demand-Led Industrialization strategy advocated by Adelman and Mellor that is actively pursued in countries such as Ethiopia and Rwanda, and through CAADP in much of Sub-Saharan Africa. In vast regions of Sub-Saharan Africa and South Asia, the unfolding of an Agriculture for Development sequence has been held back by multiple obstacles that originate in asset deficits, market failures, and institutional deficiencies. This results in constraints to adoption of new technologies and lack of development of inclusive value chains. These failures may result in lack of profitability of innovations for particular SHFs given their specific circumstances, lack of local availability of the innovations in spite of potential profitability, and constraints to adoption in spite of potential profitability and availability. These constraints concern most particularly lack of access to sources of liquidity such as credit and savings, risk and lack of access to risk-reducing instruments such as insurance and emergency credit lines, lack of access to information about the existence of new technology and how to use it, and lack of access to input and output markets due to high transaction costs such as poor infrastructure and collusion of traders in local markets. The Agriculture for Development sequence is thus particularly multidimensional and difficult to implement. There are basically two contrasting approaches to potentially overcoming the problems that obstruct an Agriculture for Development sequence. The first consists in focusing on particular groups of farmers and addressing each of the problems, in whatever shape and form they affect those farmers as they modernize. We can label this a “supply-side” approach to modernization and transformations. It consists in securing the existence and profitability of innovations, ensuring their local availability, and then overcoming each of the four major constraints to demand and adoption through either better technology or through institutional innovations. The agents for this approach are principally public and social, such as governments, development agencies, NGOs, and donors. The second consists in creating incentives for SHF to modernize by building value chains for the particular product, and managing vertical and horizontal coordination within the value chains to overcome the profitability-availability-constraints obstacles as they apply to inclusion and competitiveness of SHF in the value chain.
This is a “demand-side” approach to modernization and transformations. It consists in creating the demand for innovations in order to establish SHF competitiveness within a value chain, and then securing the existence, availability, and conditions for adoption of innovations. The approach thus requires both value chain development and value chain inclusion of SHFs. In this case, the agents are principally private, such as enterprises and producer organizations for contracting, and lead firms, multi-stakeholder platforms, and benevolent agents for coordination. The theory of change we use in this review paper is represented in Figure 2. Circumstances for unleashing an Agriculture for Development sequence include the national and international context and policies, deficits in access to assets, and market and government failures that affect SHF. Approaches to modernization can follow a supply-side or a demand-side approach, in each case with specific agents engaging in the corresponding activities. Desired outputs are productivity growth in staple foods and Agricultural and Rural Transformations; desired outcomes are growth and poverty reduction. In what follows, we review each of these approaches in turn. Both have been extensively used and analyzed, yet belong to somewhat separate traditions in spite of obvious complementarity. Technological innovations are first analyzed in experimental plots, usually for yield and resilience to specific shocks. But this does not tell us whether the innovation is likely to be adopted by SHF. Analysis of the adoption problem should start with verification that the innovation is indeed profitable for the intended SHF under their own circumstances, objectives, and capacities. Measuring profitability in farmers’ plots is, however, very difficult. There are data problems in observing family labor time and definitional problems in establishing the opportunity cost for family labor and self-provided inputs. Conditions also vary year-to-year due to weather, with only short time series to observe how climate affects outcomes, made even more difficult to interpret with climate change. And there are many unobservable conditions and complementary factors that affect profitability and compromise the external validity of any measurement made at a particular time and place. An alternative approach is to verify profitability without measuring it. Some of the best endowed and best located farmers must be able to make sustained use of the innovation for it to have adoption potential for others under current market, policy, and complementary input conditions. This can be established by observation, experimentation, or simulation.

There is a continual gain and loss of soil C that establishes a dynamic equilibrium

It presents a detailed account of management practices to enhance soil C storage and GHG mitigation, and a meta-analysis of published appraisals in the cropping systems of South Asia. Agriculture in South Asia is predominantly cereal-based, i.e. the cultivation of about 40 million hectares with multiple cereal crops or a single cereal crop, followed by a non-cereal crop such as legumes, vegetables, or potatoes, in an annual rotation. Rapid population growth and climate unpredictability in South Asia will increase the demand for food by at least 40% by 2050. Meeting this projected need is doubly challenging, considering that 94% of the land suitable for farming is already under production and that 58% of agricultural areas face multiple hazards such as water shortage and extreme heat stress. It is anticipated that the current situation will worsen with climate change, which includes rising temperatures. The region is undergoing rapid economic growth, resulting in an increase in the emission of GHGs into the atmosphere. As of 2017, South Asia accounted for 7.5% of the world’s total CO2 emissions from burning fossil fuels, of which India’s share was 6.6% and the remaining less than 1% was shared by seven other countries in the region. A large proportion of the total GHG emissions from agriculture in South Asia comes from CH4 and N2O, representing 17% of the world’s total in 2017 with a 179% increase since 1990. India accounted for 11.8% and the other seven countries for the remaining 5.2% of total global CH4 and N2O emissions. Among the major sources of GHG emissions, rice cultivation is responsible for both CH4 and N2O emissions.

In South Asia, on a CO2-equivalent basis, rice cultivation and N fertilization are responsible for the largest emissions. Other sources of CH4 emissions include crop residue burning, and other sources of N2O emissions include the application of manure and crop residues to soils. Meeting the increased demand for food during the Green Revolution was associated with intensive cropping, soil management, and the use of agrochemicals, and hence resulted in the gradual loss of SOM. Although crop productivity has doubled or tripled during the last decades, negative impacts on the environment, biodiversity, soil, and air quality are common consequences. Conventional cultivation practices with exhaustive tillage and removal of crop residues by burning or for other uses in South Asia have not only resulted in nutrient and C losses but have also created a severe air pollution problem. About 2 million farmers in northwest India burn an estimated 23 million tons of rice residues every year. In some of the cities of northwest India, particulate air pollution in 2017 exceeded the safe daily threshold limit by more than five times, causing severe health problems in both rural and urban areas. Continuous tillage with the removal or burning of crop residues has also brought about the loss of SOM, resulting in a lower SOM threshold and adversely affecting soil functioning. The term “C sequestration” has been defined in many ways, but broadly it is used to describe both natural and deliberate processes by which CO2 is either removed from the atmosphere or diverted from emission sources and stored in the terrestrial environment, oceans, and geological formations. It is the process of capture and long-term storage of CO2 in a stable state. This process can be direct or indirect, and can be biological, chemical, geological, or physical in nature. When inorganic CO2 is sequestered directly by plants through photosynthesis or through chemical reactions in the soil, this process is often called “C fixation”. Biological processes that occur in soils, wetlands, forests, oceans, and other ecosystems can store CO2, and these stores are referred to as “C sinks”. Bernoux et al. argued that since soils are associated with CH4 and N2O as well as with CO2 fluxes, the concept of “soil C sequestration” should not be limited to considerations of C storage or CO2 balance. All GHG fluxes must be computed at the plot level, or preferably at the level of the entire soil-plant pools of agroecosystems, in C–CO2 or CO2-e, incorporating as many emission sources and sinks as possible for the entire soil-plant system.

These fluxes may originate from different ecosystem pools: solid or dissolved, organic or mineral. Bernoux et al. proposed that “soil C sequestration”, or better, “soil-plant C sequestration”, should be considered as the result of the net balance of all GHGs, expressed in C–CO2 or CO2-e, computing all emission sources and sinks of a given agroecosystem in comparison to a reference agroecosystem, for a given period. Beyond its role in climate-change mitigation, SOM is not only a key component in nutrient cycling, but also influences a wide range of ecosystem services, including water availability and quality and soil erodibility, and is a source of energy for the soil biota that act as biological control agents for the pests and diseases of plants, livestock, and even humans. SOM is most beneficial when it decays and releases energy and nutrients, and therefore its turnover is more important than the accrual of non-productive organic matter deposits. We propose that a definition of C sequestration should encompass not only the components of SOM in C storage and GHG mitigation, but also the characteristic dynamic turnover that results in labile pools essential for maintaining soil health. Therefore, there are two highly related aspects of C sequestration that aim to attain food security under a changing climate: reducing GHG emissions for mitigating climate change, and increasing soil C storage and linked C recycling for improving the efficient use of resources. Soils act both as a C sink and a C source. Ultimately, the ability of a soil system to sequester C lies in the balance between net gains and net losses. Before the dramatic increase in C emissions during the industrial revolution, the global C cycle, or “C flux”, was maintained at a near balance between uptake of CO2 and its release back into the atmosphere. Therefore, soil organic carbon can be characterized as a dynamic equilibrium between gains and losses. Practices that either increase gains or reduce losses can promote soil C sequestration. The soil C gain occurs largely from photosynthetically captured C and from the recycling of a part of the NPP as crop residues, including root biomass, rhizodepositions, or manure/organic waste. The loss of soil C occurs largely from respiration by plants and the microbial decomposition and mineralization of organic residues to CO2 and CH4. In addition, soil erosion and photodegradation of surface litter are other important forms of C loss. Natural ecosystems are undisturbed and strike a balance of C gains over C losses, hence maintaining greater C storage, or C sinks. But the conversion of stable natural ecosystems to disturbed agricultural systems promotes soil C loss, converting soil from a net sink to a source of GHGs. It is notable that, globally, about 50% of the vegetated land surface has been converted to agriculture. A recent estimate indicated that since the beginning of agriculture about 10–12 millennia ago, 456 Gt of C has been lost from the terrestrial biosphere. There are two components: from the prehistoric era to about 1750, the loss is estimated as 320 Gt; and from 1750 to the present era, there has been a further loss of 136 Gt. Another estimate reported the reduction of soil C by 128 Gt during the 10,000 years of cultivation.
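
The net-balance definition attributed to Bernoux et al. above can be made concrete with a small worked example. The following is a minimal sketch, assuming illustrative flux values and the 100-year GWPs of 28 for CH4 and 265 for N2O quoted elsewhere in this document; it is not a parameterization taken from the studies cited.

```python
# Minimal sketch of the net GHG balance idea described above: "soil-plant C
# sequestration" as the net balance of all GHG fluxes, in CO2-e, relative to a
# reference agroecosystem. All flux values below are illustrative only.

GWP_CH4 = 28    # 100-year global warming potential of CH4 (quoted elsewhere in this document)
GWP_N2O = 265   # 100-year global warming potential of N2O (quoted elsewhere in this document)

def net_co2e(delta_soc_t_c_ha, ch4_kg_ha, n2o_kg_ha):
    """Net balance in t CO2-e/ha/yr; negative values indicate net sequestration.

    delta_soc_t_c_ha: annual change in soil organic C stock (t C/ha/yr, gain positive)
    ch4_kg_ha, n2o_kg_ha: annual CH4 and N2O emissions (kg/ha/yr)
    """
    co2_from_soc = -delta_soc_t_c_ha * 44.0 / 12.0   # a C gain counts as CO2 removal
    ch4_co2e = ch4_kg_ha / 1000.0 * GWP_CH4
    n2o_co2e = n2o_kg_ha / 1000.0 * GWP_N2O
    return co2_from_soc + ch4_co2e + n2o_co2e

# Compare a management system against a reference system (illustrative numbers):
reference = net_co2e(delta_soc_t_c_ha=0.0, ch4_kg_ha=30.0, n2o_kg_ha=2.0)
improved = net_co2e(delta_soc_t_c_ha=0.4, ch4_kg_ha=20.0, n2o_kg_ha=2.0)
print(f"Net balance relative to reference: {improved - reference:+.2f} t CO2-e/ha/yr")
```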

On the other hand, Paustian et al. reported a soil C loss of 0.5 to >2 Mg C per hectare per year following the conversion of a natural ecosystem to cropland. This would result in the loss of 30–50% of the total C stock in the top 30 cm of topsoil until a new equilibrium was established. The large historic losses over a long time frame, and the fact that soil possesses two to three times more C storage capacity than the atmosphere, have led to a belief that soil has the potential to mitigate GHG emissions and climate change via sequestering soil C. During the last few decades, several researchers have published a range of estimates of soil C sequestration/C storage potential in agriculture. Based on 22 published studies, Fuss et al. reported global estimates of technical potential annual C sequestration rates ranging from 0.51 to 11.37 Gt of CO2. The large range of reported estimates represented diverse agroecologies/systems and management practices. The discrepancies in the areas assumed for extrapolation were reported to be the main reason for the large variation in the reported rates of SOC sequestration. In addition, variations in the soil depths and SOC equilibrium durations used for extrapolation cannot be ruled out. Nevertheless, based on the median values of the minimum/maximum ranges, the best estimate of technical potential was 3.8 Gt CO2 yr-1, or 1.03 Gt C yr-1. It is encouraging that strong interest in this area is not limited to the scientific community. Recently, soils have become part of the global C agenda for climate change mitigation and adaptation through the initiation of three high-level programmes. Firstly, in 2015, the French government launched the “4 per 1000” initiative at the 21st Conference of Parties of the United Nations Framework Convention on Climate Change as part of the Lima Paris Climate Agreement. The agreement recommended a voluntary plan of 4p1000 to sequester C in world soils at the rate of 0.4%, or 4‰, annually. Secondly, at COP23 in 2018, the Koronivia workshops on agriculture were launched, placing emphasis on soils and SOC for climate-change mitigation. And finally, in 2019, the FAO launched a program for the recarbonization of soils, called RECSOIL. In the 4p1000 initiative, the value of 0.43% is based on the ratio of global anthropogenic C emissions to total SOC stock. Annual GHG emissions from fossil C are estimated at 3.7 Gt C per year, against a global estimate of soil C stock of 860 Gt at 40 cm of soil depth. The value of 3.7 Gt C of emissions per year comes from the range of 2–5 Gt C estimated by Fuss et al. For agricultural soils, Smith estimated a value of 0.45%, which is based on 1.3 Gt C of emissions per year and an agricultural SOC stock of 286 Gt C at 0–40 cm depth. For a 0–30 cm depth, the same annual sequestration potential would be equal to 0.53% of emissions and 0.56% of global and agricultural soil stocks. Considering the land area of the world as 149 million km2, the average amount of C is calculated to be 161 tonnes of SOC per hectare, and 0.4% of this would be 0.6 tonnes of C per hectare per year. It has been argued that the initiative’s target of 4p1000 is highly ambitious, and important questions have been raised as to whether it is feasible to increase SOC stocks by 0.4% per year on average around the world. Soussana et al. and Rumpel et al.
mentioned that the 4p1000 initiative is indeed an aspirational goal, with much uncertainty about what is achievable, but aimed to promote concerted research and development programs on good soil management that could help mitigate climate change. They discussed various specific criticisms of the initiative in relation to biophysical, agronomic, and socioeconomic issues, and provided a more realistic scenario of what was and was not possible. Subsequently, Amundson and Biardeau further elaborated on the challenges and complexities involved in achieving this goal, and opined that adaptation may be more relevant than mitigation. They proposed the concept of “weather-proofing soils”, which would involve the development and promotion of improved soil C management approaches that are more adaptable. Recently, Amelung et al. suggested a soil-specific perspective on feasible C sequestration and some of its trade-offs. They also highlighted that cropland soils with large yield gaps and/or large historic SOC losses have major potential for carbon sequestration. A greater need for local, reusable, and diversified knowledge on preservation and restoration of higher SOC stocks has been suggested. A few promising sustainable management options with higher SOC sequestration potential were identified for farmers in America. South Asia accounts for less than 5% of the world’s total land area and supports around 25% of the world’s population.
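
To make the arithmetic behind the 4p1000 figures discussed above explicit, the following minimal sketch reproduces the quoted ratios; it assumes only the stock and emission values cited in the text.

```python
# Minimal sketch of the arithmetic behind the "4 per 1000" figures quoted above.
# All stock and emission values are those cited in the text; the per-hectare
# conversion uses the 149 million km2 land area mentioned there.

fossil_emissions_gt_c = 3.7    # Gt C per year, global fossil emissions
global_soc_stock_gt_c = 860.0  # Gt C, global soil C stock to 40 cm depth
ag_emissions_gt_c = 1.3        # Gt C per year attributed to agriculture (Smith)
ag_soc_stock_gt_c = 286.0      # Gt C, agricultural soils, 0-40 cm depth

print(f"Global ratio: {fossil_emissions_gt_c / global_soc_stock_gt_c:.2%}")  # ~0.43%
print(f"Agricultural ratio: {ag_emissions_gt_c / ag_soc_stock_gt_c:.2%}")    # ~0.45%

# Per-hectare illustration: a mean stock of 161 t SOC/ha implies a 4p1000 target of
mean_soc_t_per_ha = 161.0
print(f"0.4% of mean stock: {0.004 * mean_soc_t_per_ha:.1f} t C/ha/yr")      # ~0.6
```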

Nitrous oxide is even more effective at absorbing heat with a GWP 265 times that of CO2

These source signatures were comparable across seasons, particularly from manure lagoons, and were always different from one another by at least ~8‰. Additionally, isotopic signatures from CH4 hotspots observed from remote mobile surveys were consistent with on-farm isotopic signatures and captured CH4 source areas. Our downwind observations revealed that enteric fermentation-derived CH4 contributed from 0 to 93% of CH4 in plumes, varying with the amount of animal housing and lagoon area in the emission footprint. Measurements of δ13C of CH4 downwind of dairy farms may therefore be a useful tool to monitor and quantify enteric:manure ratios as mitigation measures change. As shown in this study, isotopic signatures of CH4 downwind of dairy farms can be used to estimate the fraction of contributing sources, such as manure lagoons and enteric fermentation source areas. We measured that the fraction of enteric CH4 to total CH4 from a mixed cluster of dairy farms ranged from 0.33 to 0.53, similar to model predictions of 0.5 for this region. Most CH4 mitigation strategies separately address CH4 emitted from enteric fermentation, such as through feed additives, or manure emissions, by changing management techniques. As governing bodies undertake mitigation strategies to reduce CH4 emissions from enteric fermentation or dairy manure management, it is essential to verify mitigation effectiveness. In California, for example, numerous dairy farms have recently adopted or plan to install digesters in the near future to capture and convert CH4 from manure lagoons into fuel. Although digesters are designed to capture most CH4 emissions, studies have detected notable CH4 leaks from biogas plants. An important area of future research is to quantify the effect of mitigation strategies by comparing δ13C-CH4 downwind of dairy farms before and after installation of digesters.
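
The source-apportionment step described above is, in essence, a two-endmember isotope mixing calculation. The sketch below illustrates the idea; the endmember values are assumptions chosen to be consistent with the ~14‰ enteric/manure separation reported in this study, not the study's fitted values, and the function name is hypothetical.

```python
# Minimal sketch of a two-endmember isotope mixing calculation of the kind used
# to estimate the enteric fraction of CH4 in a downwind plume. Endmember values
# are illustrative assumptions, roughly consistent with the ~14 per mil
# enteric/manure separation reported above; they are not this study's values.

def enteric_fraction(delta_plume, delta_enteric=-66.0, delta_manure=-52.0):
    """Fraction of plume CH4 from enteric fermentation under two-source mixing.

    delta_plume: observed d13C-CH4 of the plume enhancement (per mil)
    delta_enteric, delta_manure: assumed endmember signatures (per mil)
    """
    f = (delta_plume - delta_manure) / (delta_enteric - delta_manure)
    return min(max(f, 0.0), 1.0)  # clamp to the physically meaningful range

# Example: a plume signature midway between the two endmembers
print(f"Enteric fraction: {enteric_fraction(-59.0):.2f}")  # ~0.50
```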

Isotopic signatures in this study agree with previous research showing that manure CH4 is more enriched in 13C than enteric CH4. Our on-farm measurements, however, show that manure lagoon CH4 is relatively more enriched in 13C than previously reported in Southern California. Townsend-Small et al. reported a δ13C-CH4 range of -52.4‰ to -50.3‰ from manure bio-fuel at a manure digester facility, and Viatte et al. reported δ13C of CH4 of about -57‰ near manure lagoons. This may be explained by differences in CH4 generation processes and in manure management between Southern California and the San Joaquin Valley. Dairies in the San Joaquin Valley predominantly use flush systems and store manure in lagoons, while Southern California dairies typically operate dry lots that forgo flushing manure from the feed lanes, such that less manure is stored in anaerobic lagoons. Nevertheless, all California farms produce liquid manure from flushing solids in the milking parlor. Although Viatte et al. reported a more depleted δ13C of CH4 of about -57‰ near manure lagoons compared to this study, they also observed an ~8‰ fractionation between enteric CH4 and manure CH4, consistent with our findings of isotopic fractionation between manure lagoons and enteric CH4 from free stall barns. There may also be differences in the stable carbon isotope composition of feed and differences in the biogeochemical factors that play a key role in determining which microbial communities and pathways promote or inhibit CH4 generation from dairy manure management, and in turn affect the isotopic signature of CH4 emissions. These include pH, dissolved oxygen level, temperature, volatile fatty acids, chemical composition of the substrate, total nitrogen, and nutrient composition. Substrate depletion may also explain this variation, but additional measurements of δ13C of volatile solids or CO2 concentrations would be needed to confirm isotopically fractionated substrates.

During acetate fermentation, CH4 and CO2 are commonly formed simultaneously. Reduction of CO2 may further transform the generated CO2 into CH4. In the influential study conducted by Whiticar et al., CH4 generated from pure acetate fermentation resulted in δ13C-CH4 ranging from -60 to -33‰, whereas CH4 from pure CO2 reduction had δ13C-CH4 values ranging from -110 to -60‰. However, bacterial oxidation in the substrate may affect these pathways before CH4 is emitted to the atmosphere, and consequently enrich 13C values of CH4. Measurements of δ2H-CH4 can provide information about partial oxidation, since this process enriches both δ13C-CH4 and δ2H-CH4 values. The subtle differences in manure isotopic signatures between seasons at the reference site may be influenced by changes in the diet composition of the milking cows, substrate depletion, perturbations in the lagoon, or a combination of these factors. A future study examining δ13C and δ2H of methane and δ13C-CO2 from dairy manure lagoon waste is necessary to confirm the dominant processes contributing to the enriched δ13C-CH4 signatures from California dairy manure lagoons. Isotopic signatures of CH4 from enteric fermentation depend on the C isotopic ratio of feeds, specifically the proportion of plants with C3 and C4 photosynthetic pathways in cattle diets. A diet consisting mostly of C3 plants has been shown to generate more depleted δ13C-CH4 than a diet of C4 plants. A database of studies found that ruminants fed a diet of more than 60% C4 plants emit CH4 with δ13C-CH4 signatures of -54.6 ± 3.1‰, whereas ruminants fed a C3 diet emit CH4 with δ13C-CH4 signatures of -69.4 ± 3.1‰. This ~15‰ difference is about the same as the difference between δ13C of C3 and C4 feeds. Furthermore, there is a ~41‰ difference between feed and CH4 regardless of ruminant species and diet. Future studies could explore the relationship between diet and CH4 isotope composition across seasons for different cattle production groups. To improve source apportionment of regional CH4 emissions in top-down studies, it is important to consider direct measurements of δ13C-CH4 of enteric methane, given that it varies with diet composition. We have shown that δ13C measurements of atmospheric CH4 using a mobile platform can be used for source attribution of enteric and manure methane. Our findings show that CH4 from manure lagoons is more enriched in δ13C than CH4 from enteric fermentation across seasons, on average by 14 ± 2‰. This has implications for tracking the effectiveness of mitigation strategies by measuring δ13C-CH4 to quantify enteric:manure ratios over time. In addition, this study contributes to a body of knowledge dedicated to investigating the sources and processes responsible for the increasing global mole fraction of atmospheric methane.

Future work could explore whether δ13C-CH4 signatures change with mitigation efforts. Additional measurements using δ13C and δ2H of CH4 and δ13C-CO2 could elucidate which methane generation processes drive manure lagoon emissions. Major differences in δ13C-CH4 from dairy farms among regions underscore the importance of δ13C-CH4 measurements at local scales for global analyses. Livestock agriculture is a major source of ammonia and greenhouse gas emissions, such as methane and nitrous oxide. In the United States, livestock contributes an estimated 66% of total agricultural GHG emissions. Methane is more efficient at trapping infrared radiation than carbon dioxide, with a lifetime of about 10 years in the troposphere and a global warming potential about 28 times that of CO2 on a 100-year scale. Ammonia is a gas-phase precursor to fine particulate matter, impacting human health and posing a threat to terrestrial and aquatic systems. As such, accurate observations of GHG and NH3 emissions from the agricultural sector are imperative to address poor air quality and climate change. The San Joaquin Valley of California is a region with significant CH4, N2O, and NH3 emissions. Currently, there is disagreement about whether state inventories accurately represent these gases across spatial and temporal scales. For example, atmospheric studies often report dairy CH4 emissions in California up to two times higher than bottom-up inventories. Meanwhile, other studies have reported that CH4 observations were comparable to inventories during the summer but not winter seasons, or using ground observations but not airborne measurements. A similar case is observed for NH3 in the SJV, where chemical transport models substantially underestimate gas-phase NH3 observations compared to airborne and satellite measurements. These results suggest that inventories likely underestimate and misrepresent agricultural NH3 emissions across spatial and temporal scales. There are limited N2O observations in the SJV of California, where most N2O emissions are expected from the agricultural sector. These studies show that top-down observations of N2O are at least two times higher than bottom-up inventories. In addition, these studies use either short-term airborne or tower observations, which provide limited seasonal and spatial information on N2O emission trends. The dairy sector is an important source of GHG and NH3 emissions in the SJV. Methane from dairy farms is emitted primarily by enteric fermentation from ruminant gut microbes and by anaerobic decomposition of dairy manure in storage ponds. Dairy manure management contributes a substantial fraction of CH4, N2O, and NH3 emissions, and the relative magnitudes depend on manure management practices. Solid manure management includes storing manure in piles, deep pits, and open lots, and daily spreading of dairy waste. In contrast, in a liquid manure management system, waste from barns and other dairy infrastructure, such as milking parlors, is washed and collected in slurry ponds or anaerobic lagoons. Anaerobic conditions, such as those found in anaerobic manure lagoons, promote the production of CH4, and to a lesser extent N2O and NH3 emissions. Solid manure storage systems have reportedly higher N2O emissions than CH4 and NH3 emissions relative to manure lagoons. Nitrous oxide is generated from denitrification and nitrification reactions in manure-amended soils, manure storage, and direct N deposition by animals.
In general, denitrification accounts for most N2O emissions under anaerobic conditions. Nitrous oxide, along with NH3 and NO, is indirectly emitted through volatilization of manure N from nitrification and denitrification in soil after redeposition. Ammonia emissions, on the other hand, are primarily a byproduct of urea hydrolysis during the decomposition of urine and feces, which is mostly found in animal housing. Ammonia volatilization at the liquid-surface interface occurs under high pH conditions, since the pKa of NH4+/NH3 is 9.25. Storage of animal feed, such as silage piles, also emits NH3 and N2O. As California moves toward meeting GHG and air pollution reduction goals, it is critical to gain a better understanding of the magnitude, temporal patterns, and sources of emissions from dairy farms in the SJV region. Ground-based mobile lab measurements were collected in autumn of 2018, spring, summer, and autumn of 2019, and winter of 2020. Table 3.2 shows a summary of these measurements and associated environmental conditions. Atmospheric measurements were performed with a mobile platform outfitted with multiple trace gas analyzers based on cavity ring-down spectroscopy and an isotopic N2O analyzer based on off-axis integrated cavity output spectroscopy. In addition, a global positioning unit recorded geolocation and vehicle speed, and a weather station measured wind direction, wind speed, air temperature, and relative humidity. A stationary 3 m meteorological tower with a mounted 3-D sonic anemometer was used to collect ambient temperature, wind speed, and wind direction. Atmospheric measurements of CH4, NH3, and N2O were collected from an inlet height of 2.87 m above ground level. Greenhouse gas measurements were corrected using high and low gas mixtures before and after each measurement period. The gas mixtures were tied to the NOAA Global Monitoring Division scale. The highest ΔNH3:ΔCH4 ratios were observed during the summer and autumn seasons, when air temperatures were high, for free stall barns, corrals, manure lagoons, and silage. In animal housing, NH3 emissions are a byproduct of urea hydrolysis from the decomposition of urine and feces. In general, NH3 volatilization increases with higher concentrations of NH4+/NH3, substrate temperature, pH, wind speed, and turbulence. When temperatures are high, this dairy farm increases the ventilation and moisture of free stall barns with ceiling fans and cools milking cows with a periodic cooling water mist. Increased wind speed and ventilation rates tend to decrease CH4 emissions in animal housing. Increased turbulence and moisture conditions during the summer months potentially promoted more NH3 emissions in the free stall barns and decreased CH4 emissions. Methane emissions from animal housing are impacted by weather conditions and management practices. The quantity and quality of manure deposited onto the housing floor affects whether methanogenesis is promoted.
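
The ΔNH3:ΔCH4 ratios referred to above are background-subtracted enhancement ratios. A minimal sketch of how such a ratio can be computed from a drive-by concentration trace is shown below; the mole fractions and the percentile-based background are illustrative assumptions, not data or methods taken from this study.

```python
# Minimal sketch of a background-subtracted enhancement ratio (dNH3:dCH4).
# The concentration values are illustrative, not measurements from this study;
# the background is taken here as the 5th percentile of each trace.

import numpy as np

# Hypothetical 1 Hz mole fractions recorded while driving past a source area (ppb)
ch4 = np.array([1950, 1980, 2400, 3100, 2800, 2100, 1960], dtype=float)
nh3 = np.array([12, 15, 160, 340, 270, 60, 14], dtype=float)

ch4_bg = np.percentile(ch4, 5)
nh3_bg = np.percentile(nh3, 5)

# Enhancement ratio from the slope of dNH3 vs. dCH4 above background
d_ch4 = ch4 - ch4_bg
d_nh3 = nh3 - nh3_bg
slope = np.polyfit(d_ch4, d_nh3, 1)[0]
print(f"dNH3:dCH4 enhancement ratio: {slope:.3f} ppb/ppb")
```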

Food insecurity is one major effect of such disparity in wages

A 2013 University of California, Berkeley study, for example, found that across the United States, Blacks were 52% more likely, Asian Americans 32% more likely, and Latinos/as 21% more likely to live in conditions with increased heat-related risk as compared to whites. Furthermore, low-income people and people of color are also less likely to have air conditioning. In the Los Angeles-Long Beach Metropolitan Area, for example, approximately twice as many Blacks lack access to air conditioning compared to the general population. The cumulative impact of such circumstances is that Blacks in Los Angeles are twice as likely to die from a heat wave as other residents. Significantly, Blacks and other communities of color are also less likely to own cars with which to escape extreme weather events: nationally, 19% of Blacks reside in households without a single car, compared to 13.7% of Latinos/as and 4.6% of whites. Furthermore, climate change will lead to higher prices for energy, food, and water, exacerbating the fact that low-income communities and communities of color already spend a greater portion of their income on basic necessities. Households in the lowest income bracket spend more than twice the proportion of their total expenditures on electricity, and twice the proportion on food, compared with households in the highest income bracket. Finally, due to climate change, low-income communities and communities of color will have fewer or shifting job opportunities. Low-income people of color hold the majority of jobs in sectors that will be significantly affected by climate change, such as agriculture and tourism.

In California, as of 2014, for example, there were 739,000 agricultural laborers, 49.2% of whom were Latinos/as. Workers in these industries, particularly agricultural laborers, would be the first to lose their jobs in the event of an economic downturn due to climatic troubles. Additionally, people of color already own the most marginal farmland and benefit the least from support programs, thus leaving certain producers themselves at greater risk due to climate change. Corporations, furthermore, stand to benefit by way of the impacts of climate change and a Farm Bill that serves corporate interests. In the 2014 Farm Bill, the crop insurance program expanded to cover specialty crops and account for the higher value of organics. Due to extreme weather, however, the program’s costs have grown even without changes to the Farm Bill. After the 2012 drought, for example, the Federal Crop Insurance Program paid out $17.3 billion in losses, the highest ever, breaking the earlier record set in 2011, yet taxpayers covered nearly 75% of the payouts, minimizing any cost to crop insurance corporations. The public thus subsidizes not only the destructive type of agriculture but also the insurance payouts themselves caused in part by such destructive methods—a resilient arrangement that leaves corporations benefitting the most. Corporate Consolidation and Control: Corporate consolidation and control have become central features of the US food system, and the Farm Bill in particular. As of 2014, large-scale family-owned and non-family-owned operations account for 49.7% of the total value of production despite making up only 4.7% of all US farms. As of 2013, only 12 companies account for almost 53% of ethanol production capacity and own 38% of all ethanol production plants. As of 2007, four corporations own 85% of the soybean processing industry, 82% of the beef packing industry, 63% of the pork packing industry, and manufacture about 50% of the milk. Only four corporations control 53% of US grocery retail, and roughly 500 companies control 70% of food choice globally.

Food System Worker Disparity: Racial and economic inequity is a central feature of the industrial and corporate-controlled food system. At every level of the food chain, for example, from food production to food service, workers of color typically make less than white workers. On average, white food workers earn $25,024 a year while workers of color make $19,349 a year. Significantly, women of color in particular suffer the most, earning almost half of what white male workers earn. In some contexts, a majority of farm workers receive “piece-rate” earnings and frequently earn far less than minimum wage—an exploitative practice deeply tied to immigration policy. For example, as of 2014, twice as many restaurant workers were food insecure compared to the overall US population; as of 2011, in Fresno County, California, 45% of farmworkers were food insecure, and in the state of Georgia, 63% of migrant farmworkers were food insecure. Beyond wages, few people of color hold management positions in the food system, with white people holding almost three out of every four managerial positions. As of 2012, 11.8% of executive and senior-level officials and managers, and 21.0% of all first- and mid-level officials and managers, were people of color. One result of this disparity is that non-white food system workers experience greater food insecurity. Food Equity and Nutrition: Food insecurity in the US continues unabated, affecting low-income communities and communities of color in particular. As of 2013, 14.3% of US households—17.5 million households, roughly 50 million persons—were food insecure. The report also found that the rates of food insecurity were substantially higher than the national average for Black and Latino/a households, households with incomes near or below the federal poverty line, and households with children headed by single women or single men. Within this social, political, and economic climate, recent cuts to the Supplemental Nutrition Assistance Program and other meal support programs continue to disproportionately hurt communities of color, as they are frequently overrepresented in the lowest-paying sectors of the labor market. Land Access: In 1920, 14% of all US farmers were Black. By 1997, fewer than 20,000 US farmers were Black, and they owned only about 2 million acres. While white farmers were losing their farms during these decades as well, the rate at which Black farmers lost their land has been estimated at two and a half to five times the rate of white-owned farm loss. Furthermore, between 1920 and 1997, the number of US farms operated by Blacks dropped 98%, while the number of US farms operated by whites dropped 65.8%. In 1982, the US Commission on Civil Rights concluded that the USDA was the primary reason Black farmers continued to lose their land at such astonishing rates. Nevertheless, in 1983 President Reagan eliminated the division of the USDA that handled civil rights complaints. The USDA Office of Civil Rights would not re-open until 1996, during the Clinton Administration. The increasing influence of corporations inside and outside the food system since the early 1980s exacerbated such trends for communities of color, and marked the complex ties between the federal government and corporate interests. Farm Labor and Immigration Policy: The Farm Bill itself does not deal directly with immigration.
However, the combination of an immigration system easily exploited by employers, and workers’ low income, limited formal education, limited command of the English language, and undocumented status, gives such farm laborers little opportunity for recourse within—or options outside of—the unjust working conditions that the Farm Bill has helped make possible. For example, as of 2009, 78% of all farmworkers were foreign born; 70% said they could not speak English “at all,” or could only speak “a little”; the median level of completed education was sixth grade; and 42% of farmworkers surveyed were migrants, a third of whom had traveled between the United States and another country, primarily Mexico.

Significantly, many agricultural workers fear that challenging the illegal and unfair practices of their employers will result in further abuses, loss of their job, and, ultimately, deportation. Worse yet, few attorneys are available to help poor agricultural workers, and federal legal aid programs are prohibited from representing undocumented immigrants. Ultimately, corporate control of the food system secures and exacerbates the unjust treatment of the predominantly non-white and migrant agricultural workforce of the United States. Climate Change: In the United States, the relationship between disparity in exposure to environmental hazards and socio-economic status has been widely documented. As a major contributor to global climate change and the racialized distribution of its impacts, conventional agricultural production practices, in particular, have been instrumental toward this end. In 2013, for example, the US Environmental Protection Agency reported that greenhouse gas emissions from agriculture accounted for approximately 9% of total US greenhouse gas emissions—an increase of approximately 17% since 1990. Low-income communities and communities of color in the United States experience the effects of climate change more acutely than other Americans: they breathe more polluted air, suffer more during extreme weather events, and have fewer means to escape such extreme weather events. Rising energy, food, and water costs also disproportionately affect low-income communities and communities of color, as such communities already spend a greater portion of their income on basic necessities than white communities. Finally, low-income communities and communities of color hold the majority of jobs in sectors that will be significantly affected by climate change, such as agriculture and tourism. Workers in these industries would be the first to lose their jobs in the event of an economic downturn due to climatic troubles. Significantly, this report found a number of structural barriers to addressing these racial/ethnic, gender, and economic inequities. Part I found that the Farm Bill—from its inception in 1933 to the Farm Bills of the 1980s onward—is defined by the long-term shift from the subsidization of production and consumption to the subsidization of agribusiness itself. In this light, low-income communities and communities of color have been structurally positioned on the losing side of such shifts, and of US food and agriculture policy more broadly. They have also been given few options for recourse, given the ways in which the Farm Bill has been designed and re-designed to be insulated from democratic influence, particularly by way of countless layers of committees. Part II found that, despite the benefits that joint SNAP and Unemployment Insurance receipt provided to low-income communities and communities of color during the recession precipitated by the 2007–2008 financial crisis, supporting public nutrition assistance programs and fighting poverty and racial/ethnic inequality are, in important respects, antithetical. Specifically, while such public assistance programs do indeed support, in some ways, the most marginalized communities, they ultimately maintain structural inequity by way of the major profits that corporations such as Walmart and other large retailers reap by distributing such benefits.
These corporations are the same ones that funnel profits back to their corporate headquarters, outside their respective retail sites, and that force low wages and poor working conditions onto workers at all levels of the food system. Finally, Part III and Part IV found that supporting the inclusion of producers of color in current payment schemes and fighting poverty and racial/ethnic inequity are also antithetical, despite recent gains in terms of USDA Civil Rights settlements and slowly increasing participation in such programs by such producers. Specifically, while such disparities may be addressed, in part, by way of more representative Farm Service Agency committees—or by better outreach and assistance for such payment programs and their successor, crop insurance programs—ultimately they maintain structural inequity. They do so, for example, by re-entrenching existing property regimes that consistently push producers, be they of any racial/ethnic background, to cut costs where possible. Furthermore, such property regimes set the stage for corporations to fare best, and to grow in size, profit, and influence by way of the multiple mechanisms outlined in both Part III and Part IV. These short-term policy interventions must be aligned with the long-term strategy of challenging the structural and racialized barriers to a fair and sustainable food system, and thus the existing social, political, and economic frameworks that make such barriers possible. That is because structural change must arguably begin with the tools that are available at the moment, in this case the US Farm Bill, in order to address the most immediate needs for some. Yet, history has shown that such tools can only address the needs of some.

Methane emissions from dairy operations are thought to depend on the type of manure management used

The number of people with incomes below 130% of the federal poverty line—the income limit for SNAP eligibility—increased substantially during the period of the Great Recession, from 54 million in 2007 to 60 million in 2009, and 64 million in 2011. During this period, the rate of SNAP participation rose among eligible households from 65% in 2007 to 75% in 2010, up to 83% in 2012, with the program expanding at a record pace of 20,000 people per day. By the end of 2014, more than 46 million people, over 14% of all Americans, were using SNAP. SNAP eligibility and use, however, vary significantly by race/ethnicity, with communities of color experiencing the highest rates of eligibility for, and use of, SNAP, particularly during economic downturns. For example, by the end of 2009, SNAP was used by 12% of the US population, while 28% of all Blacks and 15% of Latinos/as nationwide were using SNAP. On the other hand, only 8% of whites were using SNAP, substantially below the national average. Such trends follow racial/ethnic and economic geographies as well, with SNAP use greatest where poverty and racial/ethnic stratification run deep. Across the ten core counties of the Mississippi Delta, for example, 45% of Black residents receive SNAP support, while in larger cities such as St. Louis, with a population of 353,064, the percentage of Black residents receiving SNAP support rises to 60%. Even in the largest cities, those with over 500,000 people, such trends remain: white SNAP use peaks at 16% in the Bronx, New York, for example, while Black SNAP use peaks at 54% in Kent, Michigan. Significantly, there are 20 counties across the United States where Blacks are at least 10 times as likely as whites to be SNAP beneficiaries, and 26 counties in the United States where over 80% of Blacks were SNAP recipients.

Conversely, there are only 5 counties with more than 39% of whites receiving SNAP benefits. The growth of SNAP use amidst the Great Recession has been especially rapid in locations worst hit by the housing bubble burst, and particularly in suburbs across the United States, where SNAP use has grown by half or more in dozens of counties. Furthermore, this is the first recession in which a majority of low-income communities and communities of color in metropolitan areas live in the suburbs, giving SNAP and other federal aid new prominence there. The increase in SNAP eligibility and use thus mirrors the impacts of the crisis in housing and employment, and the racialized distribution of the impacts of such crises. Specifically, SNAP use was found to have increased by the greatest amount in places characterized by increased poverty, increased unemployment, more home foreclosures, and increased Latino/a populations. A 2012 Congressional Budget Office report confirmed such findings and estimated that although 20% of the growth in SNAP spending was caused by policy changes, including the temporarily higher benefit amounts enacted in the American Recovery and Reinvestment Act of 2009, the housing crisis and weak economy were responsible for about 65% of the growth in spending on benefits between 2007 and 2011, with the remainder caused by other factors, including higher food prices and lower incomes among beneficiaries. Such has been the case historically: when unemployment rose, SNAP use always did too, signaling how SNAP has long played a role in alleviating periods of economic distress. As such, SNAP is heavily focused on the poor. According to a 2015 Center on Budget and Policy Priorities report, about 92% of SNAP benefits go to households with incomes below the poverty line, and 57% go to households below half of the poverty line. Because families with the greatest need receive the largest benefits, and because households in the lowest income bracket spend twice the proportion of their total expenditures on food compared with households in the highest income bracket, SNAP is a powerful anti-poverty tool.

SNAP, when measured as income, kept 4.8 million people out of poverty in 2013, including 2.1 million children, and lifted 1.3 million children above half of the poverty line in 2013. Furthermore, SNAP is also effective in reducing extreme poverty. A 2011 National Poverty Center study found that SNAP, when measured as income, reduced the number of extremely poor families with children in 2011 by 48% and cut the number of children in extreme poverty by more than half. That the increase in SNAP eligibility and use during the start of the Great Recession mirrored larger trends in the economy—and was patterned after long-standing racial and economic inequality—signals the need to again assert that the experience of food insecurity is one part of a larger structure that continues to affect the most historically marginalized populations. A 2010 Census Bureau report found that the recession not only grew the wealth gap between rich and poor; it also exacerbated the gap between different racial/ethnic groups. Between 2007 and 2009, the wealth gap between whites and Blacks nearly doubled, with whites having 22 times as much household wealth as Blacks and 15 times as much as Latinos/as. By 2010, the median household net worth for whites was $110,729, while for Blacks it was $4,995 and for Latinos/as it was $7,424. Between 2005 and 2010, furthermore, median household net worth for Blacks, Latinos/as, and Asian Americans fell by roughly 60%, while the median net worth for white households fell by only 23%. Many people of color were pushed into bad mortgages by the nation’s biggest banks, while the loss of 600,000 public sector jobs during the recession also had a significant impact on communities of color, as Black and Latino/a workers are more likely to hold government jobs than their white counterparts. Although the current slow economic recovery is not unusual, the cumulative and sustained impacts of unemployment, income loss, and housing loss disproportionately experienced by low-income communities and communities of color signal the value of a safety net that protects such marginalized communities from sustained poverty and food insecurity. Two major parts of the recessionary safety net are the USDA’s Supplemental Nutrition Assistance Program and the Unemployment Insurance program of the US Department of Labor, which provides financial support to workers who become unemployed through no fault of their own. As with SNAP, expenditures for UI generally expand during economic downturns and shrink during times of economic growth, primarily because economic downturns result in wider eligibility and participation. Significantly, households that participate jointly in both SNAP and UI can improve their ability to sustain food expenditures, nutrition, and overall standard of living during times of economic challenge; such joint participation is also an indicator of the strength of the recessionary safety net itself.

Toward this end, a 2010 USDA study found that the recession not only increased the number of SNAP households but also increased the extent of joint SNAP and UI households: an estimated 14.4% of SNAP households also received UI at some point in 2009—nearly double the 7.8% of 2005. Moreover, an estimated 13.4% of UI households also received SNAP at some point in 2009, an increase of about one-fifth over the estimate of 11.1% from 2005. Significantly, people of color, hardest hit during the economic downturn, benefitted the most from the safety net. In 2009, the estimated joint SNAP and UI use for Blacks and for Latinos/as exceeded joint use by whites by about 16.6 and 9.8%, respectively. Together, SNAP and UI help sustain aggregate household spending and national production in economic downturns, making the impact of such downturns less severe than they would be in the absence of the programs. Such benefits are particularly pronounced for communities of color, who not only experience relatively greater degrees of poverty, but also are hardest hit during economic downturns. In April 2012, the Congressional Budget Office estimated that temporarily higher benefit amounts enacted in the American Recovery and Reinvestment Act of 2009 accounted for about 20% of the growth in SNAP spending during the Great Recession. New legislation can thus affect safety net programs such as SNAP or UI and provide additional support for household spending and national production. Historically, there has been some form of federally financed SNAP and UI benefit extension during recessions that builds upon the benefits the programs already provide. In 2008, for example, national legislation provided a temporary increase in SNAP benefits for all SNAP participants and expanded eligibility for jobless adults without children. Similarly, UI benefits were extended by the Emergency Unemployment Compensation 2008 program. Together, such efforts highlight the potential benefit of strategic program extensions, particularly during pronounced times of need for communities that are already marginalized. Along with the federally financed temporary benefit extensions, these programs have the potential to have a substantial impact in cushioning the negative effects of recessions on the US population and economy. Ultimately, however, such program expansions are neither a long-term nor a structural solution. While SNAP and other federal safety net programs are useful during times of economic hardship and pronounced food insecurity, or as potential anti-poverty tools, such programs only superficially act as efficient and effective forms of local economic stimulus. According to the USDA, for example, SNAP spending yields a substantial local multiplier effect, with every $1 of SNAP benefits spent in a community generating an additional $1.80 in local spending. Yet because many larger grocery retailers have non-local corporate headquarters, sales revenue is transferred outside the community, a phenomenon called “leakage.” For example, in 2008, the City of Oakland, CA estimated that approximately $230 million in grocery store spending was leaving the city.
Thus, although SNAP has the potential to help millions of Americans feed their families during economic crises and to keep many out of extreme poverty, investing in it is a questionable long term economic stimulus policy and social and economic equity tool because of the benefits that accrue to corporations and the injustices such corporations perpetuate through the exploitation of their employees. Despite these limitations, however, both SNAP and UI have indeed had positive effects on both Gross Domestic Product and job growth, as well as long term effects on beneficiaries. Research has shown, for example, that access to SNAP in childhood leads to a significant reduction in the incidence of obesity, high blood pressure, and diabetes and, for women, to an increase in economic self-sufficiency. Such costs and benefits ultimately raise the question of whether SNAP, and the Farm Bill more broadly, are the best long term approach to challenging structural poverty, particularly as it is perpetuated by corporate control itself.

THE STRUCTURE OF US AGRICULTURE determines and reflects the challenges faced by US farmers and rural communities. This includes farm size, type, cropping patterns, and ownership. Moreover, federal food and agricultural policies, including the Farm Bill, affect the structure of US farmland through multiple forces and drivers, including taxes, lending programs, environmental and safety regulation, rural development programs, research and development funding, and commodity programs. In this light, Part III examines how such programs have shaped the structure of US farmland and, in turn, how they have affected the socio-economic well-being of low-income farmers and communities, as well as farmers and communities of color. It does so, first, by providing a snapshot of the structure of US farmland, including the outcomes of structural racialization with regard to farmland ownership and government payments. It then outlines the historical significance of change in the structure of US agriculture over the 20th century and examines three federal rural and agricultural support programs in particular: Farm Service Agency lending programs, Farm Bill commodity programs, and Farm Bill Rural Development programs. Ultimately, Part III argues that such programs have historically undergirded white farmland ownership at the expense of farmland ownership by people of color. Significantly, these programs also highlight how white agricultural land ownership was held up amidst, and by way of, increasing consolidation and specialization, with farmers of color on the losing side of such shifts in the structure of US farmland. In the push for the dismantlement of corporate control and structural racialization, such trends thus require greater attention with regard to their role in intensifying the marginality that low-income communities and communities of color face in terms of wealth, access to program benefits, and land access.


Subsequently, under the 1938 Farm Bill, the federal government, and not a processor’s tax, would finance such subsidies, thus relieving corporations of any responsibility to maintain high commodity prices or profitable farms. Significantly, this funding structure was held in place during the shift in agricultural policy from the support of production to the support of prices by way of the doctrine of parity. The ongoing erosion of the doctrine of parity from 1952 onward, which included the lowering of price floors and the reduction of supply management practices, sent farm prices crashing and ushered in a period of agricultural policy driven by agribusiness. Specifically, corporations such as Archer Daniels Midland and Cargill were instrumental in helping replace New Deal-era loan programs and land-idling arrangements with direct subsidies that supported low prices for corporate purchasers themselves. Anticipating the 1973 Farm Bill, for example, Cargill and the Farm Bureau, alongside Secretary of Agriculture Earl Butz, argued that crashing farm prices would be a plus. They argued not only that greater exports and new uses such as ethanol and sweeteners would remedy the drop in price, but also that farms would remain profitable with the support of government subsidies. The winners and losers were clear under such policies: corporate buyers could acquire commodity crops for record low prices that were subsidized by the federal government, while farmers continued to lose their lands and their income. Such policies, furthermore, constituted part of a larger trend in corporate growth, not limited solely to agribusiness.

For example, according to 2013 Bureau of Economic Analysis data, corporate profit as a percentage of GDP more than doubled between 1980 and 2013, rising from less than 5% to over 10%; before-tax corporate profit as a percentage of GDP rose from less than 8% to over 12.5% over the same period. Both periods, from the Great Depression and New Deal farm programs to their erosion over the following decades, were characterized by structural racialization. Although New Deal-era legislation was geared toward pulling Americans out of poverty, it was itself a project of racial exclusion, with Black communities and other communities of color systematically barred from such supports. Southern committee members in Congress, for example, blocked efforts to include agricultural workers and domestic workers in the Social Security Act—the New Deal’s centerpiece legislation—largely because of the high concentration of Black workers within those lines of work. In the 1930s, 60% of Black workers held domestic or agricultural jobs nationally, while in the southern United States, domestic and agricultural occupations employed almost 75% of Black workers and 85% of Black women. Furthermore, although the National Recovery Administration set wages within the cotton industry at $12 a week, many Black workers had jobs that were not covered by the law and thus had their wages reduced by employers so that white workers could be paid more. Finally, Black agricultural workers were also left out of New Deal-era union protections—namely the National Labor Relations Act, enacted and signed into law on July 5, 1935—while Black landowners in particular were excluded from federal farm support under the Agricultural Adjustment Administration. Significantly, the distribution of federal support during this period resulted in a dramatic decrease in the number of Black farms, from about 900,000 in 1930 to 682,000 in 1939.
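To make the scale of these shifts concrete, the short Python sketch below is our own arithmetic on the figures cited above (the variable names and the rounded profit-share bounds are illustrative, not values reported by the source).

    # Illustrative arithmetic on figures cited above; not an analysis from the source.
    def pct_change(start, end):
        """Percentage change from start to end."""
        return (end - start) / start * 100

    # Black-operated farms, 1930 vs. 1939 (figures cited above).
    farms_1930, farms_1939 = 900_000, 682_000
    print(f"Change in Black farms, 1930-1939: {pct_change(farms_1930, farms_1939):.1f}%")
    # -> roughly a 24% decline

    # Corporate profit as a share of GDP, 1980 vs. 2013 (approximate bounds cited above).
    share_1980, share_2013 = 0.05, 0.10
    print(f"Corporate profit share of GDP grew roughly {share_2013 / share_1980:.1f}x")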

Although these programs were slowly eroded over the next few decades, farmers of color continued to face great hardship relative to white farmers. The period of agricultural mechanization and industrialization after World War II, marked by the widespread adoption of scientific and technological innovations, is usually credited with weeding out supposedly “non-productive, inefficient” farmers. Yet farmers of color, and particularly Black farmers, were at a great disadvantage during this period: in the context of the uneven application of New Deal-era supports and years of discriminatory practices, they were prevented from attaining the requisite access to capital, and thus the economic stability, for such a transition.

The Emergence of the Neoliberal Corporate Food System

From the late 1970s and early 1980s until today, corporations have taken on a new and more deeply entrenched set of relationships within the food system. In short, this period is defined by the neoliberal capitalist expansion and corporate control that began with the global economic shocks of the 1970s and 1980s. During the 1980s, and working in the interests of multinational corporations seeking markets abroad for agricultural commodities produced domestically, Structural Adjustment Programs broke down foreign tariffs, dismantled national marketing boards, and eliminated price guarantees in the Global South. Alongside this destructive guarantee of foreign markets, the 1950s-onward trend of dismantling domestic safety net programs for farmers, guaranteeing low prices for commodity purchasers, and making up the potential loss for farmers with government direct payments continued. Such trends culminated in the 1996 Farm Bill—the “Freedom to Farm” bill. This Farm Bill eliminated the structural safety nets that had long protected producers during lean years. Corporate buyers and groups such as the National Grain and Feed Association, composed of firms in the grain and feed industry, pushed for the 1996 Farm Bill to completely eliminate price floors, the requirement to keep some land idle, and the grain reserves that were meant to stabilize supplies and therefore prices, while simultaneously encouraging farmers to plant as much as possible. The 1996 Farm Bill thus marked the culmination of the shift from the federal government subsidizing production and consumption to diminishing price supports and subsidizing agribusiness itself.

The dismantling of such price controls drove prices down and allowed corporate buyers to profit off heavily subsidized commodities while securing their power over producers. Specifically, deregulation left farmers increasingly vulnerable to market fluctuations caused by speculation, price volatility, and the profit motives of corporate buyers. The shifts under the 1996 Farm Bill were deemed a failure by both farmers and legislators, and by 1997, rapidly falling farm prices resulted in direct government emergency payments to farmers, despite the fact that the legislation was designed to completely phase out farm program payments. Between 1996 and 1998, expenditures for farm programs rose dramatically, from $7.3 billion to $12.4 billion. They then soared to $21.5 billion in 1999 and to over $22 billion in 2001. From 1996 to 2001, US net farm income dropped by 16.5% despite these payments. Rather than address the underlying cause of the price drop—overproduction—Congress voted to make these “emergency” payments permanent in the 2002 Farm Bill. As outlined below, neoliberal corporate influence remains particularly salient within two domains: the first is food production, processing, distribution, and service, and the second is education, research, and development.

Commodity Supports: One major way corporations continue to profit and exert their influence on food production, distribution, and consumption is through commodity support programs. Once the safety nets of the New Deal farm programs were cut back during the 1980s and 1990s, and completely eliminated in the 1996 Farm Bill, farmers began to produce much more corn, soybeans, wheat, and other commodity crops. Specifically, the 1996 Farm Bill eliminated the requirement to keep some land idle, which encouraged farmers to plant far more than they had before. As a result, the higher supplies of these crops brought down their prices, which drastically hurt farmer incomes and greatly increased the profits corporate purchasers reaped from buying even cheaper commodities. These low prices undermined the economic viability of most crop farms in the late 1990s, and Congress subsequently provided a series of emergency payments to farmers. Furthermore, because continued oversupply kept prices from recovering, Congress eventually made such payments permanent in the 2002 Farm Bill. The dismantling of direct payment support for farmers thus ushered in another form of federally subsidized cheap commodities for corporate buyers that still leaves farmers themselves relatively vulnerable: disaster assistance programs and other emergency aid. The 2014 Farm Bill in particular cut funding allocated to direct payments by about $19 billion over 10 years—the most drastic policy change in this Farm Bill—with much of this money going into other types of farm aid, including disaster assistance for livestock producers, subsidized loans for farmers, and, most significantly, the crop insurance program.

Crop Insurance: As fundamental as direct payments and emergency payments have been for subsidizing agribusiness profits, under neoliberal political and economic restructuring, crop insurance has surpassed them as the most egregious and expensive subsidy for agribusiness. For decades, farmers have been able to buy federally subsidized crop insurance in order to protect against crop failure or a decline in commodity prices.
However, private insurance corporations and banks that administer the program, such as Wells Fargo, benefit the most from crop insurance subsidies. In 2011, these corporations received $1.3 billion for administrative expenses, on top of $10 billion in profits over the preceding decade. In order to help cushion the blow from the reduction of direct payments, the 2014 Farm Bill directs $90 billion over 10 years toward crop insurance, $7 billion more than under the previous farm bill. However, much of this money will go to private insurance corporations and banks instead of farmers. On the production side, the increase in government support will be directed toward the deductibles that farmers have to pay before insurance benefits begin. Moreover, unlike non-farm insurance policies, crop insurance insures not only the crops but also the expected revenue from selling those crops. Thus, Agricultural Risk Coverage and Price Loss Coverage pay out only when prices drop below a certain threshold. As of early 2015, corn prices had already reached this threshold. There is a risk that this insurance program could cost far more than expected, depending on how crop prices continue to shift; this is therefore one of the more contentious aspects of the 2014 Farm Bill. Another contentious part is the uneven distribution of benefits. A 2014 report by the Environmental Working Group estimates that 10,000 policyholders receive over $100,000 a year in subsidies, with some receiving over $1 million, while the bottom 80% of farmers collect only about $5,000 annually. In short, under the guise of cutting subsidies by repealing unpopular direct payments to farmers, the 2014 Farm Bill instead increases more costly crop insurance subsidies.

Food Chain Workers: The pressure for corporate profit and the history of corporate consolidation within the food system, both vertical and horizontal, have driven corporations to continue to lower wages for millions of food system workers and to accumulate more wealth. A 2011 national survey of over 630 food system workers conducted by the Food Chain Workers Alliance found that the median hourly wage was $9.65. More than 86% of food system workers were paid poverty wages, while 23% were paid less than the minimum wage. Despite their significant role in every part of the food system—from production to processing to distribution and service—food system workers experience a greater degree of food insecurity than the rest of the US workforce. For example, according to the Food Chain Workers Alliance report, food system workers use SNAP at more than one and a half times the rate of the remainder of the US workforce. Additionally, as of 2014, twice as many restaurant workers were food insecure compared to the overall US population, and as of 2011, in Fresno County, the country’s most productive agricultural county, 45% of farmworkers were food insecure. The situation is even worse in other parts of the country: in 2011, 63% of migrant farmworkers in Georgia were food insecure. Women and people of color disproportionately feel the economic pressure experienced by food system workers as a result of corporate consolidation. A comprehensive 2011 study of food workers and economic disparity found that people of color typically make less than whites working in the food chain: the median annual earnings for white food workers were $25,024, compared with $19,349 for workers of color.
The study found that women of color in particular suffer the most, earning almost half of what white male workers earn. Furthermore, workers of color experience wage theft more frequently than white workers. More than 20% of all workers of color reported experiencing wage theft, while only 13.2% of all white workers reported having their wages misappropriated. Significantly, the study found that such discrepancies exist in all four sectors of the food system: production, processing, distribution, and service. Furthermore, such trends hold across the overall workforce.
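The scale of these disparities can likewise be made concrete with a short Python sketch; this is our own arithmetic on the figures reported in the 2011 study cited above, with no additional data introduced.

    # Illustrative arithmetic on the wage and wage-theft figures cited above.
    white_earnings = 25_024   # median annual earnings, white food workers
    poc_earnings = 19_349     # median annual earnings, food workers of color

    gap = white_earnings - poc_earnings
    print(f"earnings gap: ${gap:,} ({gap / white_earnings:.0%} less than white workers)")

    wage_theft_poc = 0.20     # more than 20% of workers of color reported wage theft
    wage_theft_white = 0.132  # 13.2% of white workers reported wage theft
    print(f"relative reported wage-theft rate: {wage_theft_poc / wage_theft_white:.1f}x")

On these figures, workers of color earn roughly 23% less than white food workers and report wage theft at about 1.5 times the rate of their white counterparts.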