The relative heights of the two peaks differ in different age groups

As there were multiple observations per individual, a random intercept was used for individuals. Utilization of places for each person was estimated using two different approaches. The first method checked whether more than two temporally consecutive GPS points of a person fell within a polygon designated as the person's home, farms, or forests on each day. This is equivalent to checking whether a person spent at least an hour within the same polygon. For each participant, the number of days spent in each category of place was divided by the total number of days of participation during the study period to obtain the proportion of days spent at the respective places. The second method estimated the utilization of places with a biased random bridge (BRB) technique. Unlike prior methods for estimating place utilization, such as location-based kernel density estimation, BRB takes the activity time between successive relocations into account and models space utilization as a time-ordered series of points, improving accuracy and biological relevance while adjusting for missing values. BRB estimates the probability of an individual being in a specific location during the study period and can be used to estimate home range. To parameterize the BRB models for each individual, we considered points collected more than three hours apart to be uncorrelated. However, two temporally consecutive points deemed uncorrelated by this cutoff may in fact be correlated. Without manually adding points between them, this method would underestimate the usage of homes. An individual was considered stationary when the distance between two consecutive points was less than 10 meters.
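The first, polygon-based method amounts to run-length counting of consecutive fixes inside each place polygon. The sketch below illustrates that logic under stated assumptions: a roughly 30-minute fix interval (so more than two consecutive fixes span about an hour) and placeholder column names; it is not the study's actual code.

import pandas as pd
from shapely.geometry import Point, Polygon

def days_at_places(fixes: pd.DataFrame, polygons: dict) -> pd.Series:
    """fixes: one participant's GPS data with columns ['timestamp', 'lon', 'lat'].
    polygons: category name ('home', 'farm', 'forest') -> shapely Polygon."""
    fixes = fixes.sort_values("timestamp").copy()
    fixes["date"] = fixes["timestamp"].dt.date
    used = {cat: set() for cat in polygons}            # dates on which each category was used
    for date, day in fixes.groupby("date"):
        pts = [Point(xy) for xy in zip(day["lon"], day["lat"])]
        for cat, poly in polygons.items():
            run = 0
            for p in pts:                              # count consecutive fixes inside the polygon
                run = run + 1 if poly.contains(p) else 0
                if run > 2:                            # more than two consecutive fixes ~ one hour
                    used[cat].add(date)
                    break
    n_days = fixes["date"].nunique()                   # days of participation with data
    return pd.Series({cat: len(d) / n_days for cat, d in used.items()})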

The minimum standard deviation in relocation uncertainty was set at 30 meters. For each individual, usage of the different places was estimated for the whole study period and for each season as described below. In Central and Southern Myanmar, the monsoon rain starts in mid-May and ends in mid-October. Therefore, we split the data on 15 May 2017 and 15 October 2017, and regarded the period between the two dates as the "rainy season". Mid-October to mid-March is the "cool and dry season", and mid-March to mid-May is the "hot and dry season". The two dry seasons were combined simply as the "dry season" in some of the analyses.

The violin plot of the maximum daily Euclidean distances traveled, in kilometers on a log10 scale, shows a bimodal distribution for all three age groups. The violin plot is a hybrid of a kernel density plot and a box plot with the axes flipped, and is particularly useful for describing data with multimodal distributions. In the figure, the vertical axis is the distance in kilometers with the smallest value at the bottom, and the horizontal axis shows the density. The heights and peaks in the following results refer to the width of the violins along the horizontal axis. The first peak was between 0.01 and 0.1 kilometers and the second peak was between 1 and 10 kilometers. For the under-20 group, the first peak was over 20% higher than the second peak. The difference between the two peaks in the other two age groups was less than 10%. Wilcoxon rank-sum tests provided evidence that the 20–40 and over-40 age groups had greater maximum daily Euclidean distances away from home, on average, than the under-20 age group. Further disaggregation of these data by gender and age group can be found in the Extended data: Figure S4.
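A minimal sketch of the seasonal split and the age-group comparison is given below; the 15 May and 15 October 2017 cut-offs come from the text, while the data-frame columns, group labels, and the per-day unit of analysis are assumptions of this sketch.

import numpy as np
import pandas as pd
from scipy.stats import ranksums

def season(ts: pd.Timestamp) -> str:
    if pd.Timestamp("2017-05-15") <= ts < pd.Timestamp("2017-10-15"):
        return "rainy"
    return "dry"                                       # cool-and-dry plus hot-and-dry combined

def compare_age_groups(daily: pd.DataFrame) -> dict:
    """daily: one row per participant-day with columns
    ['participant', 'date' (datetime64), 'age_group', 'max_dist_km']."""
    daily = daily.assign(season=daily["date"].map(season),
                         log_dist=np.log10(daily["max_dist_km"]))  # log10 scale, as in the violin plots
    ref = daily.loc[daily["age_group"] == "<20", "max_dist_km"]
    results = {}
    for grp in ["20-40", ">40"]:
        other = daily.loc[daily["age_group"] == grp, "max_dist_km"]
        results[grp] = ranksums(other, ref)            # Wilcoxon rank-sum test vs. under-20
    return results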

Participants may make trips that last several days, either because their destination could not be reached within a single day or because they stayed at their destination for several days. Using a buffer radius of 266 meters around their home GPS points as their home locations, we calculated the number of consecutive days they spent away from home. Aside from two participants, all participants had at least one trip of more than two consecutive days away from home during their participation period. Trips of less than 10 consecutive days were the most frequent among the participants. There were male outliers over 20 years old who took such shorter multiday trips more than 10 times. Making trips of over 10 consecutive days was relatively uncommon, but 21 participants still made at least one trip of over 20 consecutive days away from home. For each participant, we identified the number of days spent at farms, forests, or at home, and looked for an association between farm visits and forest visits. Here we assumed that having at least two GPS points in the polygon of a particular place constitutes using that place for that day, and that a person can be at various types of places in a single day. We found that if a person spent a higher proportion of days at the farms, he or she was likely to spend a lower proportion of days at the forests, and vice versa, even though being at the farms and being in the forests are both possible on the same day. Figure 2 shows the distribution of the proportion of days spent at the farms, forests, or home for different age groups. All participants were found to be at their respective homes for the majority of days. Compared to the other age groups, the 20–40 age group had a higher proportion of time spent in the forests. The under-20 group had the highest proportion of time spent in the farms on average, followed by the 20–40 age group.
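The consecutive-days-away metric could be computed along the following lines; the 266 m radius comes from the text, while the use of projected (metre) coordinates and the column names are assumptions of this sketch, and days with no recorded fixes are simply absent from the series.

import pandas as pd
from shapely.geometry import Point

def away_trip_lengths(fixes: pd.DataFrame, home_xy: tuple, radius_m: float = 266.0) -> list:
    """fixes: columns ['timestamp', 'x', 'y'] in a metric projection, one participant."""
    home = Point(home_xy).buffer(radius_m)             # 266 m circle around the home point
    fixes = fixes.sort_values("timestamp").copy()
    fixes["date"] = fixes["timestamp"].dt.date
    at_home = [home.contains(Point(x, y)) for x, y in zip(fixes["x"], fixes["y"])]
    at_home_by_day = fixes.assign(at_home=at_home).groupby("date")["at_home"].any().sort_index()
    trips, run = [], 0
    for home_today in at_home_by_day:
        if not home_today:
            run += 1                                   # another consecutive day away from home
        elif run:
            trips.append(run)
            run = 0
    if run:
        trips.append(run)
    return trips                                       # lengths of runs of consecutive days away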

We also combined the geographic information on farms and forests with the place utilization estimated from the biased random bridge algorithm, and calculated the utilization of each specific place over the study period. An example of the place utilization of a person can be seen in Figure 3. On average, participants in the under-20 age group spent 20.0% and 2.2% of their time in farms and forests, respectively. For participants in the 20–40 age group the percentages were 7.6% and 7.4%, and for those in the over-40 age group they were 7.2% and 3.8%, respectively.

Being in the farms and forests at night might impose increased risks of diseases such as malaria because of potential exposure to important mosquito vector species. As seen in Figure 4, we looked at the total number of nights participants spent in the farms or in the forests. Two female participants spent at least a night in the farm, compared to 22 male participants. As for spending at least a night in the forest, there were 21 males and only one female. Most participants in the 20–40 age group spent at least one night in the farm and in the forest, whereas fewer than 35% of participants from the under-20 and over-40 age groups spent a night in such places. The negative binomial regression provided strong evidence that males in this cohort were more likely to spend nights in farms and in forests compared to females, and that young adults were more likely to spend nights in the forest compared to the under-20 age group, after controlling for the remaining variables (see the model sketch below). Participants may spend consecutive nights in the farms or the forests without going back home. The number of consecutive nights spent in the farms or the forests is a subset of the multiday trips mentioned in the previous section. Figure 5 quantifies this metric for different age groups and genders. Persons of all age groups and genders spent varying numbers of consecutive nights in the farms. An under-20 male spent the most consecutive nights in the farm. A female of the 20–40 age group and a male of the over-40 age group spent two episodes of 11–15 consecutive nights in the farm. In contrast, there was little demographic heterogeneity among those who spent consecutive nights in the forests. A few males of the 20–40 age group not only spent long runs of consecutive nights, but also frequently spent many short runs of consecutive nights in the forests.

Many detailed human movement studies have been done, mainly in regions of high socio-economic status. Our study presents an analysis of human movement in a remote rural area that has been under-studied with regard to human ecology. Compared to other studies where GPS loggers were used for a very short period of time, our study had a relatively long duration of participation, which makes it possible to examine potential seasonal variation. Our data suggest a bimodal pattern of movement away from participant homes, with one peak nearby and another one to three kilometers away from their homes. There were differences in these movement patterns by demography, with under-20s staying close to home on the majority of days and both the 20–40 and over-40 age groups tending to move farther away each day. We hypothesize that the reason for this difference is that the over-20 age groups are more heavily involved in subsistence activities than the under-20 age group.
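The negative binomial model referred to above could be specified along the following lines, assuming one row per participant with a count of forest nights, gender, age group, and days of participation as an exposure term; the variable names are placeholders rather than the study's actual coding.

import statsmodels.formula.api as smf

def fit_forest_nights(df):
    """df: one row per participant with columns
    forest_nights (count), gender, age_group, days_observed."""
    model = smf.negativebinomial(
        "forest_nights ~ C(gender) + C(age_group)",
        data=df,
        exposure=df["days_observed"],   # accounts for unequal follow-up between participants
    )
    return model.fit(disp=0)            # rate ratios are exp(params)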

Multiday trips of less than 10 days were common among the participants. The metrics of multiday trips do not signify much unless they are linked to the activities undertaken during the trip, which range from visits to friends and family, getting supplies in the nearby town, farming, and foraging, to other economic or subsistence activities. All age groups in this study visited farm areas and spent nights in the farms, with no statistically significant difference found between age groups. When they spent their nights in the farms, they did so consecutively and on several occasions during the study period. Farming is one of the major forms of subsistence for rural families, and it appears to be regarded as relatively safe compared to subsistence activities in the forests, since all age groups partake in it. There was no seasonal variation in the number of nights spent at the farms in these data. Different types of crops are normally rotated over the year for cultivation in this region. In contrast, going to and sleeping in the forests, which may involve foraging, logging, mining, and similar activities, was found to be mainly the task of males in the 20–40 age group. The median number of nights slept in the forest among those who ever spent a night in the forest was 7.5. Only males of the 20–40 age group spent a higher number of nights in the forest than the median value. The same males were found to take frequent and successive overnight trips to the forests. We surmise that the males in the 20–40 age group, most likely being the breadwinners of the family, take on whatever subsistence activities arise and are regarded as the most suitable persons to venture into the forests overnight despite dangers from wildlife and harsh living conditions. No seasonal variation was found in the number of nights spent sleeping in the forest. In comparison, a questionnaire-based movement survey conducted in a similar Thai–Myanmar border area found seasonal movement patterns. Compared to home, sleeping places in the farms and forests may be more rudimentary, leaving people more vulnerable to medically important arthropods or other environmental risks. Spending several consecutive nights in the farms and forests may increase the chances of vector-borne diseases such as malaria, since major malaria vectors in the area such as Anopheles dirus and Anopheles minimus are found in the deep forests, forest edges, plantations and even in the rice fields. Studies have found that the increased risk of malaria in forest-goers is driven by inconsistent bed net usage, the misconception that alcohol consumption or blankets provide protection against mosquito bites, and non-participation in the malaria prevention activities held at the villages. Results from this study, particularly the space utilization data, would be useful in spatially explicit individual-based infectious disease models, such as those modeling malaria elimination in rural Southeast Asia. Human mobility is a crucial part of many disease transmission dynamics, yet it has been ignored in many infectious disease models because of constraints on data and computational capacity. Compartmental models assume homogeneous mixing of individuals in their respective compartments. While they are quick to set up, they are not suitable for disease-elimination settings. Their homogeneous nature prevents modelers from exploring the impact of multiple interventions tailored to different risk groups, such as forest-goers in malaria interventions.
Individual-based models can assign individual-specific properties and associated movement patterns, thereby achieving a heterogeneous population.

This result suggests that there is some degree of toxicity present in these hydrolysates

Our group has demonstrated the applicability of this process by generating hydrolysates with high concentrations of monomeric sugars and organic acids from several feedstocks, including grasses, hardwoods, and softwoods, and converting them to terpene-based jet-fuel molecules using engineered strains of the yeast Rhodosporidium toruloides. Nevertheless, it is important to expand the range of lignocellulosic feedstocks used in this process to evaluate its versatility and advance towards the goal of developing a truly feedstock-agnostic lignocellulosic bio-refinery. Hemp is an attractive crop due to its fast growth, bio-remediation potential, and diverse agricultural applications, including the production of natural fibers, grains, essential oils, and other commodities. This biomass is composed of an outer fiber that represents approximately 30% of the weight and an inner core known as hurd that accounts for the remaining 70%. The hemp fiber is utilized in the textile industry, as insulation material, and for the production of bio-plastics in the automotive industry, while hemp hurd is used for low-value applications such as animal bedding and concrete additives, or is disposed of by combustion and landfill accumulation. This indicates that approximately 70 wt% of hemp biomass has the potential to be valorized into higher-value products and applications, which would improve the economics of the hemp industry and increase its sustainability to promote a green economy. Mycelium-based composites are emerging as cheap and environmentally sustainable materials generated by fungal growth on a scaffold made of agricultural waste materials.

The mycelium composite can replace foams, timber, and plastics for applications like insulation, packaging, flooring, and other furnishings. For example, the company Ecovative Design LLC produces a foam-like packaging material made of hemp hurd and fungal mycelia, which is fully compostable. Anticipating an increased demand for eco-friendly packaging materials in the near future, we are interested in evaluating the feasibility of diverting this used packaging material away from landfills or composting facilities towards higher-value applications, such as feedstock for bio-fuels. It is known that fungal enzymes can reduce the recalcitrance of biomass to deconstruction, likely through modification of the polysaccharides and lignin in plant biomass. Therefore, we hypothesized that the mycelium composite material could be more easily deconstructed and converted into higher-value fuels and chemicals than the raw hemp hurd. In this study, hemp hurd and the mycelium-based packaging material were tested as biomass feedstocks for the production of the jet-fuel precursor bisabolene, using a one-pot ionic liquid technology and microbial conversion. First, we examined the deconstruction efficiency of the packaging material compared to hemp hurd when subjected to a one-pot ionic liquid pretreatment process. Second, the influence of the pretreatment process parameters on the sugar yields was investigated using a Box–Behnken statistical design. Finally, the generated hydrolysates were fermented to evaluate the bio-conversion of the depolymerized components by a bisabolene-producing R. toruloides strain. The composition of the hemp hurd and packaging material was determined as shown in Table 1. The total extractives of the hemp hurd and packaging material comprised 8.3% and 14.7% of the biomass, respectively.

The higher extractive content of the packaging material may be a result of the fungal growth stage in the packaging construction process. For the polysaccharide content, hemp hurd had higher glucan and xylan contents than the packaging material. Combining glucan and xylan content, the total fermentable sugars of the hemp hurd and packaging material were 43.7% and 40.4% of the respective biomass. This indicates that a small fraction of the polysaccharides may have been consumed and converted into extractives during mycelial growth. However, both types of biomass contain a substantial amount of polymeric carbohydrates that can be depolymerized into simple sugars for fermentation. The lignin content for both materials was the same; however, it is possible that the mycelial growth in the packaging material altered the structure of the lignin and made the polysaccharides more accessible to hydrolysis. We used the one-pot ionic liquid process on hemp hurd and packaging materials to test this hypothesis. One of the bottlenecks for the efficient conversion of lignocellulosic hydrolysates is the presence of compounds, generated during the pretreatment and enzymatic hydrolysis stages, that are toxic to bio-fuel-producing microbes. The degree of toxicity depends mainly on the type of biomass, the pretreatment conditions, and the identity of the microorganism used to ferment the depolymerized substrates. Therefore, we performed a bio-compatibility test with the hydrolysates prepared from hemp hurd and packaging materials, using an engineered strain of the yeast R. toruloides known to be tolerant to ILs and biomass-derived compounds and to convert glucose and xylose to the jet-fuel precursor bisabolene.

When the strain was inoculated directly in concentrated hydrolysates, negligible sugar consumption and very little growth were observed, as shown in Figure 2. Therefore, we prepared 50% diluted hydrolysates for further testing. Under these conditions, more than 90% of glucose and xylose conversion was observed in both hydrolysates, and the cells were able to grow and produce bisabolene. Utilization of hydrolysates at higher concentrations is beneficial for the development of an economically feasible bio-refinery. Therefore, other strategies such as hydrolysate culture adaptation or detoxification may be required to improve bio-compatibility.

The optimum levels of the parameters for glucose and xylose yields from packaging materials recommended by the model were: reaction temperatures of 126 and 128 °C, reaction times of 2.1 and 2.0 h, and ionic liquid loadings of 7.3% and 7.9%, corresponding to predicted glucose and xylose yields of 74.6% and 81.7%. However, this optimal condition did not significantly improve the yields compared to the center point, even though the reaction conditions required a 4% higher temperature than the center point, a rather small difference in temperature. This result suggests that other process parameters, such as agitation and biomass solid loading percentage, should be tested for further improvement in yield. The model for hemp hurd found a saddle point instead of optimum levels, which means that the optimum process condition did not lie within the current experimental conditions. Further investigation into a different range of reaction conditions, such as higher reaction temperatures, is required to optimize the reaction condition for hemp hurd. If operating with a limited budget and time, the reaction condition with the highest glucose and xylose yield can be chosen. The highest glucose yield under the current reaction conditions was obtained from hemp hurd at 140 °C, 1 h reaction time, and 7.5% ionic liquid loading, a more severe reaction condition than the optimized condition for the packaging materials. This result indicates that the reaction parameters affect the sugar yield differently according to the biomass type, implying that the biomass properties were changed by mycelial growth. Regarding the packaging materials, the combined effects of reaction temperature, reaction time, and ionic liquid loading on glucose and xylose yields are illustrated in Figure 4. The response surface plots show that the glucose yield increased with reaction temperature up to 133 °C, with a subsequent decrease in yield at higher temperatures. The xylose yields showed a similar trend. Additionally, the glucose and xylose yields increased with reaction time up to 2 h and with ionic liquid loading up to 7.5%.
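The response-surface step can be illustrated with a short sketch: fit a full quadratic model to the Box–Behnken runs and locate the stationary point, whose nature (a maximum, or a saddle as found for hemp hurd) follows from the eigenvalues of the quadratic-term matrix. The inputs and column order below are assumptions of this sketch, not the study's actual data layout.

import numpy as np

def fit_quadratic_surface(X, y):
    """X: (n, 3) array of factor settings [temperature, time, IL loading]; y: sugar yield (%)."""
    t, h, il = X[:, 0], X[:, 1], X[:, 2]
    design = np.column_stack([
        np.ones(len(y)), t, h, il,            # intercept and linear terms
        t**2, h**2, il**2,                    # pure quadratic terms
        t*h, t*il, h*il,                      # two-factor interactions
    ])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    g = beta[1:4]                             # linear coefficients
    B = np.array([[beta[4],     beta[7] / 2, beta[8] / 2],
                  [beta[7] / 2, beta[5],     beta[9] / 2],
                  [beta[8] / 2, beta[9] / 2, beta[6]    ]])
    x_stat = np.linalg.solve(-2 * B, g)       # stationary point: gradient g + 2 B x = 0
    eig = np.linalg.eigvals(B)                # all negative -> maximum; mixed signs -> saddle point
    return beta, x_stat, eig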

After those points, the glucose and xylose yields decreased, probably due to the loss of enzyme activity caused by the higher ionic liquid concentration. Additionally, the longer reaction time and the higher ionic liquid concentration might facilitate the production of other compounds, such as furan derivatives or organic acids, which inhibit enzyme activity during the pretreatment. Moreover, the production of other components probably led to a decrease in the carbohydrates accessible to the enzymes. Further tests may be necessary to improve the sugar yield. The ANOVA results shown in Table S5 indicate that reaction temperature and reaction time had statistically significant effects on glucose yield, while ionic liquid loading did not. Additionally, statistically significant interaction effects of reaction temperature with reaction time and with ionic liquid loading were confirmed. The ANOVA results for xylose yield show that reaction temperature had a significant effect on the yield, while reaction time and ionic liquid loading did not. Additionally, the interaction effects of reaction temperature with reaction time and with ionic liquid loading were not significant, while the interaction effect of reaction time with ionic liquid loading was significant.

This work demonstrates the feasibility of using hemp hurd, and packaging materials made of mycelium grown on hemp hurd, as feedstocks for bio-conversion to a jet-fuel precursor using a one-pot ionic liquid technology. During the initial test, the packaging materials produced higher sugar concentrations and yields than the hemp hurd. However, the Box–Behnken experimental design showed that the reaction conditions for the maximum sugar yields from each material were different and that the significance of the process parameter effects on the fermentable sugar yield depended on the biomass properties, suggesting that the mycelial growth affected the deconstructability of the hemp hurd. Furthermore, the fermentation test to convert fermentable sugars into bisabolene showed that hydrolysates from the packaging material resulted in a higher bisabolene titer than hydrolysates from the hemp hurd, probably due to the higher sugar concentrations generated from the packaging material. To fully take advantage of these packaging materials for producing bio-fuels after they are used and discarded, a more detailed study correlating fermentable sugar yield with the physicochemical properties of the biomass and packaging materials, or with packaging process parameters, is required, testing different hemp material sources. In addition, methods to overcome hydrolysate toxicity will need to be employed to enable utilization of concentrated hydrolysates for increased product titers and a reduction in water consumption. Finally, further investigation into other process parameters, such as agitation and biomass loadings, is merited to fully optimize the pretreatment conditions, as is performing pilot-scale tests to generate data that can help assess the economic feasibility of this new conceptual process. Overall, this study indicates that it is possible to build lignocellulosic supply chains for the production of bio-fuels and biochemicals that include both raw biomass and biomass that has first been processed and valorized as commercial products, such as packaging materials, enabling the carbon in these lignocellulosic products to generate value multiple times over their life cycle.
Understanding momentum, heat, and scalar mass exchanges between vegetation and the atmosphere is necessary for quantifying evaporation and sensible heat flux for hydrologic budgets, ozone deposition on urban forests, non-methane hydrocarbon emissions from natural vegetation, carbon storage in ecosystems, and related problems. Such exchanges are governed by a turbulent mixing process that appears to exhibit a number of universal characteristics. Early attempts to predict these universal characteristics made use of rough-wall boundary layer analogies, but limited success was reported. A basic distinction between canopy and rough-wall boundary layer turbulence is that the "forest–atmosphere" system is a porous medium permitting finite velocity and velocity perturbations well within the canopy. Hence, the canopy–atmosphere interface cannot impose as severe a constraint on fluid continuity as an impervious boundary, as discussed in Raupach and Thom, Raupach, and Raupach et al. Raupach et al. proposed a mixing layer (ML) analogy to model the universal characteristics of turbulence close to the canopy–atmosphere interface in uniform and extensive canopies. Their analogy is based on solutions to the linearized, perturbed, two-dimensional inviscid momentum equations using hydrodynamic stability theory (HST). For such a system of equations, HST predicts the unstable-mode generation of two-dimensional transverse Kelvin–Helmholtz (KH) waves with a streamwise wavelength if the longitudinal velocity profile has an inflection point. Such instabilities are the origin of organized eddy motion in plane mixing layers; however, a KH eddy motion cannot be produced or sustained in boundary layers due to the absence of such an inflection point in the velocity profile. A plane mixing layer is a "boundary-free shear flow" formed in a region between two coflowing fluid streams of different velocity but the same density. Raupach et al. recently argued that a strong inflection point in the mean velocity profile at the canopy–atmosphere interface results in a flow regime resembling a mixing layer rather than a boundary layer in the neighborhood of this interface. Raupach et al.'s ML analogy is the first theoretical advancement toward analyzing the structure of turbulence close to the canopy–atmosphere interface of a horizontally extensive, uniform forest.
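For context, the linearized inviscid stability problem invoked here is usually written in the form of Rayleigh's equation; the notation below is a standard textbook form rather than a reproduction of the cited papers, with U(z) the mean longitudinal velocity, \hat{w}(z) the vertical-velocity eigenfunction, k the streamwise wavenumber, and c the complex wave speed.

\[
  (U - c)\left( \frac{d^{2}\hat{w}}{dz^{2}} - k^{2}\hat{w} \right) - \frac{d^{2}U}{dz^{2}}\,\hat{w} = 0 .
\]

An unstable (growing) KH mode requires Im(c) > 0, and Rayleigh's criterion states that a necessary condition for such a mode is d^{2}U/dz^{2} = 0 somewhere in the flow, i.e. an inflection point in U(z), which is exactly the feature present at the canopy top but absent in a rough-wall boundary layer.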

Testing prices are not publicly advertised by licensed laboratories

Study of cannabis as an agricultural crop has been notoriously inadequate, but data provided by the water quality control board's cannabis program offer critical new insights into the water use practices of cultivators entering the regulated industry. In this initial analysis, we found that subsurface water may be much more commonly used in cannabis cultivation than previously supposed. Further analyses of cannabis cultivation's water extraction demand, as well as of geospatial variation in water demand, may help elaborate the ramifications of this finding. Ultimately, a better understanding of cannabis cultivation's water demand will be useful for placing the cannabis industry in the greater context of all water allocation needs in the North Coast and throughout California.

U.S. state markets for cannabis are evolving rapidly. As of mid-2019, 32 of 50 states had some form of legal medicinal cannabis system in place, and since 2012, 11 of those states had legalized and regulated adult-use cannabis. California was the first U.S. state to decriminalize the sale of medicinal cannabis, with the 1996 passage of the Compassionate Use Act. In 2003, a California state legislative act, Senate Bill 420, set out more specific rules for the operation of medicinal cannabis collectives and cooperatives. For the following 15 years, regulations on the cultivation, manufacturing, and sale of cannabis in California were largely limited to a wide variety of local ordinances, with little intervention from the state government. In November 2016, California voters legalized adult-use cannabis by approving Proposition 64.

Subsequently, the Medicinal and Adult-Use Cannabis Regulation and Safety Act (MAUCRSA) of 2017 created a unified framework for the state licensing of cannabis businesses and the taxation and regulation of adult-use and medicinal cannabis. MAUCRSA regulations went into effect on January 1, 2018. Safety regulations generally add costs to production. One of the most costly components of California's new system of cannabis regulation is the mandatory testing of all legal cannabis for more than 100 contaminants, including pesticides and heavy metals. This paper is the first to comprehensively examine the economic challenges of cannabis testing and to estimate the cost of testing compliance per pound of cannabis marketed in a legal and licensed cannabis market. In a previous article, we provided a brief introduction to testing costs, to which this paper adds needed rigor. We review and compare the allowable tolerance levels for contaminants in cannabis with allowable levels in other crops from California, and review rejection rates in California since mandatory testing began in 2018. We compare these with rejection rates in other U.S. states where medical and recreational use of cannabis is permitted. We use primary data from California's major cannabis testing laboratories, several cannabis testing equipment manufacturers, Bureau of Cannabis Control license data including geographical location information, and data from Cannabis Benchmarks on average wholesale batch sizes to estimate the testing cost per pound of cannabis legally marketed in California.

At the U.S. federal level, cannabis is still classified as a Schedule I illegal narcotic, and its possession, sale, and even testing are serious criminal offenses under federal law. Even cannabis businesses that are fully compliant with state regulations thus face legal risks, uncertainties, and obstacles to doing business, such as a lack of access to mainstream banks. In recent years, however, the conflict between state and federal laws has generally been mediated via a series of informal, non-binding agreements, letters, and memos of understanding between the U.S. Department of Justice and the states. These understandings have enabled cannabis businesses to focus more on complying with state and local laws than on hiding from federal prosecutors. All of the U.S. states that have legalized, taxed, and regulated recreational cannabis, and most states that have legalized and regulated medicinal cannabis, require testing for some contaminants and testing and labeling of potency.

Colorado and Washington were the first states to vote to legalize and regulate adult-use cannabis, both in 2012. Colorado first introduced the enforcement of potency and homogeneity tests for retail cannabis products in 2014. Residual solvents and microbial contaminants were added to the testing requirements in 2015, and heavy metals and pesticide residues as of mid-2018. Washington State mandates that licensed testing laboratories also perform potency tests, moisture analysis, foreign-matter inspection, microbial and mycotoxin screenings, and screenings for residual solvents. Some states, including California and Colorado but not Washington, also require more sophisticated and costly wet-lab tests for pesticides and heavy metals. Per MAUCRSA, the California Department of Pesticide Regulation established maximum allowable thresholds for 66 different pesticides, including zero tolerance for trace amounts of 21 pesticides and low allowable trace amounts for 45 other pesticides. MAUCRSA also established thresholds for 22 residual solvents plus a variety of heavy metals and other contaminants. The Bureau of Cannabis Control (BCC) was put in charge of licensing and regulating testing labs and enforcing the testing standards.

In the 2016 marketplace, prior to the passage of Proposition 64, when cannabis was unregulated at the state level and only partially regulated at the local level, total California cannabis production was estimated at approximately 13.5 million pounds of raw flower, with roughly 80% of this production illegally shipped out of the state. These out-of-state shipments may explain why California accounted for 70% of nationwide cannabis confiscations in 2016. Rough estimates suggest that only about one-quarter of California's in-state cannabis consumption, or less than 5% of total cannabis production, went to the legal medicinal market in 2016. Until 2018, there were no rules in place at the state or local levels in California for testing contaminants, even for products legally marketed as medicinal cannabis. A minority of medicinal cannabis retailers in the pre-2018 state-unregulated market routinely tested and labeled cannabis for THC potency, but few voluntarily tested for contaminants. Informal evidence suggests that pesticide residues were common in cannabis products in the pre-regulated market. For example, in 2017 an investigation reported that 93% of 44 samples collected from 15 cannabis retailers in California had pesticide residues. The mandatory testing framework introduced under MAUCRSA is summarized in Table 1, where we briefly describe the tests for specific types of batches and the standards for passing each test.

Dried cannabis flower and cannabis products must be tested for concentrations of cannabinoids and various contaminants in order to enter the legal market. Some tests apply to all batches, while others apply only to certain forms of cannabis. Heavy metals tests were not mandatory until December 2018. Table 2 shows the list of contaminants with their maximum tolerance levels allowed in California. Tolerance levels are generally lower for products that are inhaled than for products that are eaten or applied topically. For 21 pesticides, the maximum residual level is zero, meaning that no trace of those residues may legally be detected in a sample of cannabis. MAUCRSA requires that all batches of cannabis flowers and products be sampled and tested by licensed laboratories before being delivered to retailers. Distributors are responsible for testing. Figure 1 shows the flow of cannabis testing in California. The weight of a harvest batch cannot exceed 50 pounds; larger batches must be broken down into 50-pound sub-batches for testing. The sample must weigh more than 0.35% of the batch weight (for a full 50-pound batch, at least 0.175 pounds). A processed batch cannot surpass 150,000 units. After testing each batch, laboratories must file a certificate of analysis indicating the results to distributors and to the BCC. If a sample fails any test, the batch that it represents cannot be delivered to dispensaries for marketing. Instead, it can be remediated or reprocessed and fully re-tested. If a batch fails a second re-testing after a second remediation, or if a failed batch is not remediated, then the entire batch must be destroyed.

Analyzing the cannabis market, compared with other agricultural markets, presents a unique challenge to researchers because of the rapidly changing legal environment, the lack of historical data or scientific studies, the lack of government tax data, and the cash nature of the business. Quotes are known to vary depending on the number of samples, the frequency of testing, and the type of contract between the distributor and the laboratory, among other factors. Bulk pricing is common and is negotiated on a case-by-case basis. We approximate the costs of testing by collecting detailed data on the testing process and constructing in-depth estimates of the capital, fixed, and variable costs of running a licensed testing laboratory in California. We use these results in a set of simulations that estimate the costs per pound generated by cannabis testing under the California regulations in place as of mid-2019. We make some market assumptions based on the most reliable industry data available as of this writing in order to estimate the current cost per pound of testing compliance.

We construct a simulation model using R software to assess the cost structure of cannabis testing in California under the current regulatory framework. We base our simulations on the number of testing labs and distributors that had been granted temporary licenses by the BCC as of April 2019. The number of labs and distributors in California will fluctuate as the industry continues to develop. To estimate costs incurred by labs, we first construct estimates of fixed and variable costs for labs based on their testing capacities. We calculate the cost of testing a sample of dried cannabis flower considering the lab scale and the distances between labs and distributors.
Based on meetings with representatives of California testing labs, we assume that 70%, 20%, and 10% of the labs fall into the small, medium, and large size categories, respectively. We assume that the testing industry is like many others in that many small firms supply relatively little of the output. We run 1,000 simulations to estimate the cost of sampling and testing a sample from a typical batch of dried flowers for each of the 49 labs, assuming that costs, working hours, testing capacities, and so on may vary from lab to lab. Next, we use the weighted average of testing cost per sample to estimate the cost per pound. We express total testing cost in dollars per pound of legal cannabis that reaches the market, after incorporating the costs of remediating and re-testing failed batches and the losses from batches of cannabis that cannot be remediated and must be destroyed.

We used the list published by the BCC to identify actively licensed testing labs and requested a personal or phone interview with managers or representatives, based on a set of questions that we used as a guideline. We interviewed one-fourth of the operating or prospective licensed testing labs listed by the BCC. We gathered data on market prices for testing equipment, supplies and chemical reagents consumed by equipment, equipment running capacities, and other cannabis testing inputs needed to build a compliant testing laboratory in California. Likewise, we collected financial, managerial, and logistics data. To complement the licensed testing lab data, we also drew on personal interviews, phone calls, and email exchanges with sales representatives of three large equipment suppliers. Table 3 summarizes the capital costs, other one-time expenses, and annual operational and maintenance costs used in our calculations. We report the average cost and standard deviation for each estimate. We assume that medium-sized and large labs receive discounted prices on equipment, given the larger scale of their purchases. Based on information provided by equipment suppliers, we expect these discounts to be between 1.5% and 2.5%. Different-sized labs have different capacities based on their scale. We assume that larger labs have made larger capital investments and are better able to optimize processes when supplying a larger volume of testing. On the other hand, small testing labs require less equipment and less capital investment, and operate with low annual costs, but their testing capacities are also low. Table 4 summarizes our estimates of the running time for tests, the main consumables used by testing machines, and the expected cost of running a specific test per sample. In addition, we include a range of $80 to $120 per sample to cover general materials, labor, and apparel used while preparing and processing samples. Next, we must estimate the sampling cost, which includes transportation, labor, equipment, and material costs. We use the zip codes of active licensed testing labs and distributors published by the BCC to estimate the distances from labs to distributors.
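A simplified sketch of this simulation is shown below: 49 labs are split 70/20/10 into small, medium, and large size classes, per-sample costs are drawn for each lab, and a volume-weighted average across 1,000 Monte Carlo runs is converted to a cost per pound. All cost parameters, volume weights, and the average batch size here are placeholders, not the estimates reported in the paper.

import numpy as np

rng = np.random.default_rng(0)
N_LABS, N_RUNS = 49, 1000
SIZE_SHARES = {"small": 0.70, "medium": 0.20, "large": 0.10}
COST_PARAMS = {"small": (600, 80), "medium": (450, 60), "large": (350, 50)}   # hypothetical (mean, sd) $/sample
VOLUME_WEIGHT = {"small": 1, "medium": 4, "large": 10}                        # larger labs test more of the volume

def simulate_cost_per_pound(avg_batch_lb: float = 12.0, samples_per_batch: int = 1) -> float:
    sizes = rng.choice(list(SIZE_SHARES), size=N_LABS, p=list(SIZE_SHARES.values()))
    run_means = []
    for _ in range(N_RUNS):
        per_lab_cost = np.array([rng.normal(*COST_PARAMS[s]) for s in sizes])
        weights = np.array([VOLUME_WEIGHT[s] for s in sizes])
        run_means.append(np.average(per_lab_cost, weights=weights))           # weighted average $/sample
    cost_per_sample = float(np.mean(run_means))
    return cost_per_sample * samples_per_batch / avg_batch_lb                 # $/lb of tested flower

print(f"illustrative testing cost: ${simulate_cost_per_pound():.0f} per pound")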

Traffic congestion around the port is also contributing to the slowdown of port operations

Figure 1 shows the annual TEU throughput at the POLA and POLB for the period 1997–2016. Although the explosive growth of the first ten years slowed after the recession of 2008, throughput has achieved quite a healthy recovery in the last five years, reaching or surpassing pre-recession levels. The numbers in Figure 1 include both loaded and empty units, destined for import or export. Figure 2 shows the change in total annual TEU throughput for the combined ports. The yearly change over the last five years is positive. The total container throughput through the POLA and POLB is expected to grow in the future, correlated with population increase, domestic demand for inexpensive manufactured goods, global demand for US products, and the improving competitiveness of US industry. Handling the large number of necessary container transactions requires intensive management of operations, changes in transportation policy, and modernized equipment. In the POLB and POLA, there are approximately 100,000 chassis available for leasing and for transporting containers to and from warehouses, stores, factories, rail yards, and container terminals. Among these 100,000 chassis available to the trucking companies are chassis supplied by various third-party chassis leasing companies. However, terminals within the ports do not always have chassis available from each company. At times, the chassis required by the trucks are either not available anywhere in the terminal or are dislocated and need to be repositioned. Prior to 2014, chassis companies did not work together or have a neutral chassis pool, and shortages and dislocations of chassis occurred frequently.

Trucks would often be required to travel between terminals and perform additional trips to pick up or drop off chassis at specific locations, in addition to picking up and dropping off the containers for export and import. This was a lengthy and cumbersome process and generated additional queues at each terminal. A shortage of chassis can significantly lengthen truck turn times, cause additional costs for trucking companies, and increase emissions at the port. Furthermore, a lack of chassis could mean that a container is kept on the carrier ship for a prolonged time while storage fees continuously accumulate. The shipper then has to pay an additional charge, known as a demurrage charge, for the failure to discharge a container from the carrier ship within the agreed time frame. Also, when containers are not discharged in a timely manner, shippers face congested space in their area of operation. Such an issue would leave the shippers no choice but to rent additional storage area, leading to higher carrying costs and delayed delivery times. According to POLA/POLB terminal operators and PierPass officials, one of the core reasons for port congestion is the lack of chassis. Trucks come to the POLB and POLA from many locations to drop off or pick up containers and chassis, and the freeways that truck drivers must use to access the ports are also used heavily by commuters traveling through the densely populated area surrounding Los Angeles. The most heavily used freeway to get to and from the POLB and POLA is California Interstate 710 (I-710).

I-710 has, for the most part, four lanes, heavily packed with trucks and commuter vehicles during rush hours, causing major congestion problems in the vicinity of the ports. As the American economy expands, there is more demand for commercial operations, increased freight, and an increased number of foreign commercial partners. These growing factors give rise to recurring congestion at freight bottlenecks, creating a conflict between freight and passenger service. Moreover, as trade with these partners increases, more freight ships will dock at the ports. Handling more transactions also means that the ports will have to increase their processing capacity. This increase will undoubtedly cause the entrance to the port and the areas within the port itself to become heavily congested as well. Congestion in and outside of the port is detrimental to the economy of Southern California, as well as to that of the US as a whole. When there is additional congestion, port operators take much longer to unload cargo ships. Supply chains carrying goods through the POLB and POLA can then become slowed to the point where some retailers find it necessary to redirect their goods. The goods are then redirected by sea or air to other ports on the East Coast where they can be further distributed, resulting in reduced income for the surrounding area as well as additional costs for the retailers themselves. The POP is a neutral, interoperable chassis pool that was launched in February 2015 by DCLI, TRAC Intermodal, and Flexi-Van, in cooperation with the POLA, POLB, and SSA Marine. Their chassis are pooled together to provide a more efficient way of obtaining chassis for trucking companies, which are able to use the chassis from any of the chassis companies interchangeably. Thus, a trucker can pick any chassis from the POP and drop it off at any designated POP storage area without having to worry about returning the chassis to the same exact location. Since truckers have access to any chassis, this allows for a smoother operation at the port and fewer inefficiencies in chassis-related operations.

However, the pools still remain commercially independent and are in competition with one another. A third-party service provider manages the billing and other proprietary information among these pools. Nonetheless, even with the improved flexibility, interoperability, and efficiency which the POP has introduced, the port still suffers from some repositioning issues, and the heavy traffic congestion problems remain.

The concept of Centralized Processing of Chassis was introduced as one method for improving travel times associated with container retrieval. This concept was introduced in Europe as the Chassis Exchange Terminal (CET). In the CET concept, the centralized processing of chassis was defined as an off-dock terminal located close to the port, where trucks would go to retrieve imports or drop off exports instead of unloading and loading containers at the marine terminal. The first step in the operation with the CET involves a container being loaded onto a chassis at the marine terminal. The second step includes transporting the chassis to the CET during off-peak hours, for example at night. The last step in the operation is when a truck carrying a chassis with a container drives into the CET. At this point, the truck exchanges the chassis it brought into the CET with another chassis and container, which were already transported to the CET during the second step. The exchange operation involves unhooking one chassis and hooking up another at the CET. This is a much simpler, more efficient, and much faster operation than unloading and loading containers and performing chassis exchanges at a regular marine terminal.

The large volume of container trips results in traffic congestion in the areas around and within the ports and is expected to grow even higher in the future. It is clear that any system which helps reduce the total travel time for trucks between their points of origin and their destinations is worth investigating, since as a consequence it will reduce traffic congestion, noise, and emissions, in addition to saving time for both truckers and port operators. Such systems improve travel time reliability and help the local economy grow. With improved travel time reliability, local businesses require fewer operators and less equipment to deliver goods on time, and need fewer distribution centers and less inventory to account for unreliable deliveries. Note that among the types of transactions described in Section 0, Type 1 and Type 2 transactions are the only types which would be anticipated to contribute to a noticeable reduction in total transaction time if a CPF were used. In the case of Type 1 transactions, the export container can be dropped off at the desired marine terminal, and then the chassis can be returned to the CPF for storage and later retrieval. In the case of a Type 2 transaction, the chassis for an import can be picked up at the CPF before entering the marine terminal to load the import container. In both cases, if the chassis exchange transaction can be done more efficiently when it is performed outside of the marine terminal, this could offer improvements in total transaction time. In Type 3 and Type 4 transactions, one can see that no chassis exchange activities are necessary. In a Type 3 transaction, the wheeled import includes a container already loaded on a chassis and can simply be picked up by the bobtail.
In a Type 4 transaction, the chassis used for the export container is the same one onto which the import container can be loaded afterwards.

Finally, Type 5 transactions, although they include a chassis exchange, would not be anticipated to have any reduced transaction times using an external CPF. This is because, after dropping off an export, the bobtail must drop off the chassis it used so it can pick up a wheeled import at the same terminal, making it inefficient to travel to an external CPF to drop off the chassis only to return to the marine terminal to pick up the wheeled import.

A representative sample of seventy-one trucking companies (TCs) which service the POLB and POLA is used in this case study. To select this sample, an initial list of TCs was created from an internet drayage directory which includes all companies operating within Los Angeles County. Since the location of the TCs is a critical variable for the optimization problem, all companies whose address was not included in the drayage directory were eliminated from the list. The final list contains all companies with a known address that use chassis. In the analysis herein, the number of daily transactions between each trucking company and each marine terminal was assumed to be fixed. In the initial analysis, the total number of daily import transactions was set at 50,000 FEU based on forecasts of total daily port trips. The sensitivity analysis used 10,000 FEU import and 5,000 FEU export daily transactions, based on the average daily import and export container traffic provided in Table 4.

Potential CPF locations were identified by searching for vacant land within a 15-mile radius of the POLA and the POLB. The capacities of these locations were estimated by using the Google Earth built-in polygon feature to calculate an approximate square footage. Several CPF layout options and chassis stacking methodologies were evaluated as described in Section 0. Chassis can be stored vertically or horizontally as shown in Appendix A, and each storage method has its advantages and disadvantages. Among the various possibilities that were considered, a horizontal storage layout with a maximum of 3 chassis stacked on top of each other was selected for the case study. Using the estimated square footage, the number of forty-foot chassis which could fit in that area was determined using this preferred chassis layout methodology, which assumed allocations for access roads, blocks of stacked chassis, and blocks of unstacked chassis for ease of access, in order to minimize chassis retrieval times. An example of the layout for a 5000 × 5000 foot area is included in Figure 8 below. For this example, the maximum number of forty-foot chassis which could be stored in this area was estimated at 170,000.

After verifying that the linear program behaved as expected for the two simplified models used in the reduced-node cases, the full model was analyzed using the same approach. In this case, all 16 potential CPF locations were included, each with its estimated chassis storage capacity as provided in Table 5. All 71 TCs and 14 marine terminals (MTs) were also included, with approximately 50,000 transactions distributed evenly between them. The results are summarized in Figure 12, where it can be seen that when P = 0 seconds, all of the transactions are routed directly from the TCs to the MTs. However, even with a 5-minute increase in efficiency at the CPFs in terms of average chassis retrieval time, approximately half of the transactions are routed through CPFs.
The number of transactions routed directly from TCs to MTs decreases rapidly as the value of the parameter P increases. Figure 12 shows that when P = 1200 seconds, virtually no transactions are routed directly to the marine terminals. Table 14 shows the percent utilization of the CPFs for P = 1200 seconds.
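The routing logic behind these results can be sketched as a small transportation-style linear program, shown here with the PuLP library and toy distances, demands, and capacities; each TC-to-MT transaction either travels directly or detours through a CPF, whose per-transaction time saving is the parameter P.

import pulp

TCS, MTS, CPFS = ["tc1", "tc2"], ["mt1", "mt2"], ["cpf1", "cpf2"]
demand = {(t, m): 100 for t in TCS for m in MTS}                    # transactions per day (toy values)
t_direct = {(t, m): 1800 for t in TCS for m in MTS}                 # seconds, TC -> MT
t_via = {(t, c, m): 2000 for t in TCS for c in CPFS for m in MTS}   # seconds, TC -> CPF -> MT
cap = {c: 300 for c in CPFS}                                        # CPF daily transaction capacity
P = 600                                                             # seconds saved per transaction at a CPF

prob = pulp.LpProblem("cpf_routing", pulp.LpMinimize)
x = {k: pulp.LpVariable(f"direct_{k[0]}_{k[1]}", lowBound=0) for k in demand}
y = {k: pulp.LpVariable(f"via_{k[0]}_{k[1]}_{k[2]}", lowBound=0) for k in t_via}

# Objective: total time, crediting the CPF saving P on every transaction routed through a CPF.
prob += (pulp.lpSum(t_direct[k] * x[k] for k in demand)
         + pulp.lpSum((t_via[k] - P) * y[k] for k in t_via))
for (t, m), d in demand.items():                                    # every transaction must be served
    prob += x[(t, m)] + pulp.lpSum(y[(t, c, m)] for c in CPFS) == d
for c in CPFS:                                                      # CPF capacity limits
    prob += pulp.lpSum(y[(t, c, m)] for t in TCS for m in MTS) <= cap[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
direct_share = sum(v.value() for v in x.values()) / sum(demand.values())
print(f"share routed directly to marine terminals: {direct_share:.0%}")

Varying P in this sketch reproduces the qualitative behavior described above: at P = 0 every transaction goes directly to the marine terminal, and as P grows the direct share falls toward zero, subject to the CPF capacities.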

The primary research method I used to conduct my research was participant observation

My in-depth study of drug policy reform at both the organizational and practical levels required me to branch beyond the traditional methods of quantitative sociology. I sought to get the inside story from insiders' perspectives, to construct new categories of analysis, and to use these categories to understand the ever-developing phenomenon of drug policy change. Since the data I sought were not amenable to quantitative analysis, I eschewed surveys, secondary data analysis, and structured interviews. To conduct this study I employed four main types of research methods: participant observation at cannabis dispensaries, drug policy reform conferences, organization meetings, and festivals; in-depth interviews with activists and organization leaders; archival research of movement websites and literature; and archival research of media coverage of drug reform modalities and movement outcomes. I also analyzed state responses to this movement as conveyed through official documents and news sources. As my project progressed, I used the Internet to explore how the movement uses social networking sites to connect activists to one another and to coordinate new forms of Internet-based action. My ultimate goal for this research project is to construct a coherent historical narrative of the drug policy reform and medical marijuana movements. Because I sought to create a narrative, qualitative methods were well suited to my task. At the beginning of the process, I needed to look at existing sources on my topic to discover where I needed to fill in the blanks. My use of theory and method was hybrid in form.

Because of my exploratory orientation, I intended to deviate from the deductive, theory-testing orientation that guides much quantitative work in sociology. Although I did not intend my study to be exclusively generative of entirely novel "grounded theory", I also did not completely eschew existing theoretical work in the sociology of social movements and the sociology of drugs. Instead I used a dialectic approach, employing existing theories from the study of social movements to guide my initial research, and a grounded theory orientation toward new data I found that augmented, stretched, and contradicted existing theory. This qualitative approach is well suited to my purposes of constructing a narrative of drug policy reform from the viewpoints of its participants, and presenting a study that is amenable to the goals of public sociology. To map the distribution of the wider drug policy reform movement, I initially examined movement documents, literature from conferences, and organization websites to discover and catalog the various organizations that comprise the movement. This aspect of my project gave me an understanding of the various concerns that motivate organizations in the movement, its organizational bases, and the number and size of organizations involved in the wider movement. Through cataloging the various organizations that comprise the movement, I was also able to see the geographical distribution of movement organizations. The websites of drug policy reform organizations also provided an understanding of the way that movement actors frame their concerns and goals, and which symbols and values they use to animate their activism. Recently, social networking websites including "Facebook" have afforded activists new venues for networking and engaging in lobbying activities. Internet-based activism has included organizing boycotts of corporations unsympathetic to drug use, petitioning government officials and Congressional representatives, and keeping members abreast of organizational campaigns.

In addition to linking participants to one another and keeping them informed about movement activities, social networking sites also offer activists a platform for lobbying politicians and publicizing their efforts. I will include an examination of these websites to assess the breadth of activity in this movement. As noted above, I have participated in drug policy reform for over ten years. Throughout this study I also directly participated in a particular modality of drug policy reform by working in a medical cannabis dispensary. By working as an employee in a medical cannabis dispensary, I was able to experience firsthand what became a central discovery of my research: the hybrid character of the medical marijuana movement. After doing a thorough review of the social movement literature, I was able to build a theoretical vocabulary to explain this transition as a shifting of fields, from the political field to the commercial field. By working in the hybrid field of medical cannabis, I experienced the quotidian shifts in discourse and practice that facilitate the transition between these two fields of practice. The unique perspective I gained as an employee in a medical cannabis dispensary also gave me a front-row seat to the framing strategies that people use at an active site, or modality, of drug policy reform. I was able to learn and practice the shift in diction that my fellow employees and I used to accomplish the discursive shift of changing a previously illicit substance into a legitimate or licit substance. On a practical level, working at a dispensary allowed me to meet other activists and medical cannabis patients and to attend numerous drug policy events as a volunteer. My status as an employee gave me entrée into the world of drug policy reform and also made my research feasible with minimal outside funding. I used participant observation to explore the sites where the drug policy movement constitutes itself. This element of the study looked at the two locations where participants in this movement most often interact with one another face-to-face: festivals and conferences.

As noted by social movement scholars, face-to-face interactions are necessary to supplement the technologically-based networking of participants through the Internet and other communication technologies. In addition to providing demographic data about attendees, the public speakers, panel discussions, and presentations at these events offered rich qualitative data about the movement. I used this data to analyze how drug policy reformers frame their actions and to discover the key concerns of movement actors. I also used these events as convenient places to gather literature from various organizations. In addition to attending hemp fests and conferences hosted by organizations, I attended several types of meetings during the course of my research project. I attended monthly and annual meetings of organizations, city council meetings, and city medical marijuana task force or commission meetings. These various meetings proved to be excellent sites for gathering qualitative data on how organizations and city governments work to regulate the emergent phenomenon of medical cannabis. To illuminate how organizations change drug policy, how various organizations work together, and the biographical dimensions of drug policy activism, I conducted in-depth qualitative interviews with the members of several different drug policy reform organizations. I employed a snowball sampling technique to reach the leaders and members of drug policy organizations. I sought out key figures in the medical cannabis movement to gain access to their unique knowledge of the movement’s history, policy outcomes, collaborations with other organizations and elite benefactors, and interactions with government officials. My interviews with key figures helped me to answer my research questions about the political opportunity structures that allow for novel drug policies. I also asked my interview subjects about their biographies, how they became involved in activism, and what led to changes in their political consciousness. Occasionally, participants in the drug policy reform movement engage in public protest and acts of civil disobedience to decry existing drug policy and institute new policy arrangements. I attended and participated in a medical cannabis protest in November 2011. The events that precipitated the protest, the number and types of people in attendance, and the slogans, speeches, and chants that the protesters used provided rich data for examining how medical marijuana is both a social movement and an industry. Episodes of civil disobedience also provide unique sites to analyze the interaction between the state and the drug policy reform movement. Under what circumstances do activists engage in civil disobedience? What metaphors, slogans, and symbols do protestors deploy? What unites the diverse organizations, funders, and participants of the drug policy reform movement is a belief that prohibition as an overarching approach to dealing with illicit drug use creates many problems for individuals and society.

Although not all organizations and individuals in the movement agree that prohibition should be rolled back in its entirety, all the organizations in the movement find at least some aspects of prohibition to create more problems than they solve. In the 1970s, organizations sought to decriminalize the adult use of cannabis because they viewed its prohibition as an affront to individual liberties, and because it relegated a whole class of otherwise law-abiding individuals to criminal status. In the 1980s, the harm reduction movement began as a public health-based response to the spread of HIV and Hepatitis C among injection drug users. Eventually harm reduction blossomed into a philosophy undergirding an alternative approach to drug problems. It was not until the mid-1980s that a wholly anti-prohibitionist branch of the movement coalesced around the issues of racial injustice and the prison boom, human rights and instability in drug producing countries, and a reintegration of earlier branches of the movement. All three branches of the movement actively challenge the discourse of drug prohibition, in addition to specific policies sustained by the “drug control industrial complex”. At an abstract level, the various organizations and participants of the drug policy reform movement are engaging in a collective argument with supporters of drug prohibition. Billig uses a discursive approach to the conduct of social movements. In the tradition of social psychology, he emphasizes the importance of language for movements: “Social movements can be seen as conducting arguments against prevailing common sense”. This makes the rhetorical tasks of social movements challenging because most attempts at persuasive discourse appeal to common sense. Essentially the movement argues that “prohibition creates more problems than it solves.” As seen with the Occupy movement that began in New York City’s Wall Street district in September 2011, one of the most powerful effects a movement can have is on changing the national discussion or debate. While sociologists and economists have decried income stratification, income inequality, and the ever-shrinking middle class in the U.S. for decades, the Occupy movement was able to shatter the commonly held and widely disseminated myth that the U.S. is overwhelmingly a middle class society typified by a high degree of mobility. Although politicians and journalists have decried the central tactic of the Occupy movement, by physically occupying public space the movement was able to change the public debate much more quickly than movements that rely primarily on social movement organizations to make things happen. What makes the argument particularly difficult for the movement to win is an imbalance in access to what I have termed the means of representation. Until the 1990s, supporters of prohibition had privileged access to the means of representation. As I show in chapter two, the drug policy reform movement is using the Internet to address this disparity with increasing success. In addition to challenging the discourse of prohibition on the Internet and increasingly in the mainstream news media, the drug policy reform movement converges at conferences and hemp rallies to vocalize, experience, and broadcast its challenge to the discourse of drug prohibition. The movement challenges both the policies enforced in the name of prohibition and, on a more abstract level, the representations of drug users and drug use that prohibitionist discourses seek to portray.
By challenging policies and the representations that are part and parcel of those policies, the movement collapses a conceptual division that New Social Movements theorists including Alberto Melucci and Manuel Castells seek to draw: the idea that movements are about cultural stakes and not legal or political stakes. I consider the question of whether the drug policy reform movement seeks political or cultural change throughout my research, and I will revisit this dichotomy in later chapters. At the outset, I wish to make it known that I am not only an academic observer of drug policy reform, but also an active participant. My position as both an advocate for and observer of drug policy reform presents a difficult balancing act. While I strive to objectively represent and analyze the drug policy reform movement, I wholeheartedly support the basic argument of drug policy reform: prohibition is an ineffective way to deal with drug use, and it creates more harmful consequences than it addresses.

Swimmers were removed from the filters under a dissecting scope

Globally, seafood consumption has been on the rise for over 50 years. Between 1961 and 2016, the average annual increase in worldwide seafood consumption was higher than the increases in consumption of beef, pork, and poultry combined. While seafood consumption has increased, global fishing catch – the tonnage of wild finfish, crustaceans, molluscs, and other seafood caught each year – has remained relatively static since the late 1980s. In that time, aquaculture production has grown to meet the demand that wild fisheries could not. Aquaculture is now the fastest-growing food sector and, as of 2016, provides more than half of all the seafood we eat globally. As the human population continues to grow, global demand for seafood will rise. A recent study by Hunter et al. concluded that by 2050, total food production will need to increase by as much as 70% in order to feed the projected population of 9.7 billion people. A significant portion of this increase will likely come from animal protein demanded by a growing middle class. With wild capture fisheries unlikely to meet increasing demand, aquaculture will play a critical role in feeding the world.

Finfish, shellfish, and seaweed are farmed around the world both on land and in the ocean. On land, farmers primarily utilize freshwater ponds, lakes, and streams, though in some parts of the world, fully indoor, tank-based recirculating aquaculture systems (RAS) are on the rise. Land-based aquaculture is often called “inland aquaculture.” In the ocean, the vast majority of seafood farming is done close to shore, in bays, estuaries, fjords, and coastal waters. Some marine aquaculture is done in the open ocean, sometimes miles from shore, where the water is deeper and farmers must contend with storms and higher wave energy. Inland aquaculture currently contributes the vast majority of global aquaculture production, and most of that is finfish.

This farming method, particularly when it is done in ponds, lakes, and streams, must contend with other land and water uses; these conflicts will only increase as the human population grows. Non-RAS inland aquaculture can have negative environmental effects, such as pollution of freshwater drinking sources, ecosystem eutrophication, deforestation, and alteration of natural landscapes, particularly when it is done in developing countries without adequate regulation and oversight. RAS farming seeks to minimize these environmental effects by farming in indoor, closed systems – and many RAS companies market themselves as a sustainable alternative to other farming methods – but it has its own environmental trade-offs, including high energy use. RAS farming typically utilizes less land and water than traditional inland farming and will likely play a key role in future aquaculture production, particularly as the industry embraces renewable energy and technological innovation. However, with population growth increasing constraints on space and freshwater availability, the greatest potential for expanding production is in the ocean. Most marine aquaculture takes place in nearshore, coastal waters. As with inland aquaculture, these farms often compete with human uses. These conflicts can include coastal fishing grounds, recreational boating areas, and resistance from coastal landowners. Nearshore aquaculture can also negatively impact coastal ecosystems. Most notably, if they aren’t sited in areas with enough water movement, waste and excess feed can build up on the seafloor and negatively affect surrounding habitats. In some areas, nearshore farming has also resulted in modification/destruction of estuaries, mangroves, and other important coastal habitats. 

Responsible, well-sited, nearshore aquaculture operations can minimize environmental impacts and can avoid use conflicts by farming in remote areas with sufficient water movement. Another option is to move operations out into the open ocean, into deeper, offshore waters where there is more space, fewer use conflicts, and strong currents to flush waste from the nets. This report will discuss the present status and future of offshore aquaculture in the United States, with a specific focus on offshore fin fish farming, which has been the subject of myriad news stories, lawsuits, industry reports, and government memoranda in recent years.Norway is the world’s second largest exporter of fish and seafood, ranking only behind China, and is the leading producer of Atlantic salmon, with 1.2 million metric tons of annual production. The Norwegian government has publicly announced its intention to increase salmon production from 1 million mt to 5 million mt by 2050 but most salmon is currently produced in nearshore coastal waters and fjords, where expansion is increasingly limited by coastal acreage and environmental concerns such as fish escapes and the prevalence of sea lice. In late 2015, the Norwegian Ministry of Fisheries and Coastal Affairs announced a program through which the government would grant free “development concessions,” i.e. experimental licenses, to projects working to develop technological solutions to the industry’s acreage and environmental challenges. The free concessions are available for up to 15 years and if the project meets a set of fixed criteria within that time, the experimental license can be converted into a commercial license for a NOK 10 million fee, significantly less than the typical NOK 50-60 million licensing fee.

Proposed projects must be large-scale and backed by teams with proven expertise in both aquaculture and offshore infrastructure, such as offshore oil and gas extraction. Each experimental license allows for up to 780 mt of production, so some larger projects require multiple licenses. To date, companies representing 104 individual projects have applied for 898 of these experimental licenses; 53 licenses have been granted.

The ‘biological pump’, a critical component of global biogeochemical cycles, is responsible for transporting the carbon and nitrogen fixed by phytoplankton in the euphotic zone to the deep ocean. Within the biological pump, the relative contributions of phytoplankton production, aggregation, mineral ballasting, and mesozooplankton grazing to vertical carbon flux are still hotly debated and likely to vary spatially and temporally. While solid arguments exist supporting the importance of each export mechanism, the difficulty of quantifying and comparing individual processes in situ has resulted in investigators using a variety of models, which may support one hypothesis but not exclude others. As such, experimental evidence is needed to assess the nature of sinking material and how it varies among and within ecosystems. Mesozooplankton can mediate biogeochemically relevant processes in many ways, and thus play crucial roles in global carbon and nitrogen cycles. By packaging organic matter into dense, rapidly sinking fecal pellets, mesozooplankton can efficiently transport carbon and associated nutrients out of the surface ocean on passively sinking particles. In the California Current Ecosystem (CCE), for example, Stukel et al. have suggested that fecal pellet production by mesozooplankton is sufficient to account for all of the observed variability in vertical carbon fluxes. Diel vertically migrating mesozooplankton may also actively transport carbon and nitrogen to depth when they feed at the surface at night but descend during the day to respire, excrete, and sometimes die. At times, mesozooplankton are also able to regulate carbon export rates by exerting top-down grazing pressure on phytoplankton or consuming sinking particles. In this study, we utilize sediment traps and 234Th:238U disequilibrium to determine total passive sinking flux, and paired day-night vertically-stratified net tows to quantify the contributions of mesozooplankton to active transport during 2 cruises of the CCE Long-Term Ecological Research (LTER) program in April 2007 and October 2008. Using microscopic enumeration of fecal pellets, we show that, across a wide range of environmental conditions, identifiable fecal pellets account for a mean of 35% of passive carbon export at 100 m depth, with pigment analyses suggesting that total sinking flux of fecal material may be even higher. On average, mesozooplankton active transport contributes an additional 19 mg C m−2 d−1 that is not assessed by typical carbon export measurements.

Data for this study come from 2 cruises of the CCE LTER program conducted during April 2007 and October 2008. During the study, water parcels with homogeneous characteristics were identified using satellite images of sea surface temperature and chlorophyll and site surveys with a Moving Vessel Profiler. Appropriate patches for process experiments were marked with a surface drifter with a holey-sock drogue at 15 m and tracked in real time using Globalstar telemetry.
Another similarly drogued drift array with attached sediment traps was also deployed in close proximity to collect sinking particulate matter over the 2 to 4 d duration of each experimental cycle. During each experiment, paired day-night depth-stratified samples of mesozooplankton were taken with a 1 m², 202 µm mesh Multiple Opening and Closing Net and Environmental Sensing System (MOCNESS) at 9 depths over the upper 450 m of the water column, with the midpoint of the tow corresponding approximately to the location of the surface drifter. These samples were later enumerated by ZooScan and grouped into broad taxonomic categories and size classes for calculation of mesozooplankton active transport.
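To make this calculation concrete, the following sketch (Python) illustrates the usual logic of an active-transport estimate: migrant biomass is taken as the night-minus-day difference in surface-layer carbon biomass, and the transported carbon is that biomass multiplied by an assumed carbon-specific loss rate at depth. All numbers, including the loss rate and residence time, are placeholder assumptions, not values from this study.

```python
import numpy as np

# Hypothetical day/night mesozooplankton carbon biomass (mg C m^-2) integrated
# over the surface layer, by size class; values are placeholders, not study data.
night_biomass = np.array([120.0, 260.0, 310.0])   # small, medium, large size classes
day_biomass   = np.array([ 90.0, 180.0, 150.0])

# Migrant biomass: animals present in the surface layer at night but absent by day.
migrant_c = np.clip(night_biomass - day_biomass, 0.0, None)   # mg C m^-2

# Assumed carbon-specific loss rate at depth (respiration + excretion + mortality),
# as the fraction of body carbon lost per day spent below the surface layer (placeholder).
loss_rate_per_day = 0.10
residence_fraction = 0.5   # assume roughly 12 h per day spent at depth

# Active transport: carbon carried below the surface layer and lost there each day.
active_transport = migrant_c.sum() * loss_rate_per_day * residence_fraction
print(f"Migrant biomass: {migrant_c.sum():.0f} mg C m^-2")
print(f"Active transport: {active_transport:.1f} mg C m^-2 d^-1")
```

With these placeholder numbers the estimate lands in the low tens of mg C m−2 d−1, the same order as the 19 mg C m−2 d−1 reported above; the actual study values depend on the measured size-fractionated biomass and the metabolic relationships applied.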

Oblique bongo tows to 210 m were also taken at mid-night and mid-day to collect organisms for determination of size-fractionated dry weights and grazing rates of the mesozooplankton community. Size-fractionated dry weights were converted to carbon biomass using the dry weight to carbon relationships of Landry et al.

VERTEX-style drifting sediment traps were deployed on the drifter at the beginning and recovered at the end of each experimental cycle. Trap arrays consisted of 4 to 12 particle interceptor traps (PITs) with an inner diameter of 70 mm and an aspect ratio of 8:1. To create a semistable boundary layer immediately above the trap and minimize resuspension during recovery, each PIT had a baffle on top consisting of 14 smaller tubes with an 8:1 aspect ratio. The baffle tubes were tapered at the top to ensure that all particles falling within the inner diameter of the PIT descended into the trap. On P0704, 8 PITs were deployed at a depth of 100 m during each cycle. On P0810, 8 to 12 PITs were deployed at 100 m, and 4 to 8 PITs were deployed near the base of the euphotic zone. Before deployment, each PIT was filled with a 2.2 l slurry composed of 0.1 µm filtered seawater with an additional 50 g l−1 NaCl to create a density interface within the tube that prevented mixing with in situ water. The traps were fixed with a final concentration of 4% formaldehyde before deployment to minimize decomposition as well as consumption by mesozooplankton grazers. Upon recovery, the depth of the salinity interface was determined, and the overlying water was gently removed with a peristaltic pump until only 5 cm of water remained above the interface. The water was then mixed to disrupt large clumps and screened through a 300 µm Nitex filter. The remaining >300 µm non-swimmer particles were then combined with the total <300 µm sample. Samples were then split with a Folsom splitter, and subsamples were taken for C, N, C:234Th, pigment analyses, and microscopy. Typically, subsamples of ¼ of the PIT tube contents were filtered through pre-combusted GF/F filters for organic carbon and nitrogen analyses. Filters were acidified prior to combustion in a Costech 4010 elemental combustion analyzer in the SIO Analytical Facility. Entire tubes were typically filtered through QMA filters for C:234Th analyses as described above. Triplicate subsamples were filtered, extracted in 90% acetone, and analyzed for chlorophyll a and phaeopigment concentrations using acidification with HCl and a Turner Designs Model 10 fluorometer. Samples for microscopic analysis were stored in dark bottles and analyzed on land as described below.

Watercress is a leafy-green crop in the Brassicaceae family, consumed widely across the world for its peppery taste and known to be the most nutrient-dense salad leaf. The peppery taste is the result of high concentrations of glucosinolates (GLS) – phytochemicals which can be hydrolyzed to isothiocyanates (ITCs) upon plant tissue damage, such as chewing – known for their potent anticancer, anti-inflammatory, and antioxidant effects that are beneficial to human health. Although ITCs are the main products of digestion, depending on pH, metal ions, and epithiospecifier proteins, nitriles can also be formed through GLS breakdown, and they too may have chemopreventive properties. Watercress is a high-value horticultural crop: a specialty leafy vegetable with a growing area of 282 ha in the US, of which 75 ha are in California, compared to 58 ha in the UK.
It is also a high-value horticultural crop in the UK, with a market value of £8.90 per kg compared to £4.97 per kg for mixed baby-leaf salad bags, and represents a total value of £15 million per year.

This indicates that for PMCV it is already too late for this type of action to be taken

These lessons would apply not only to PMCV, but also to infectious diseases whose spread is predominantly via fish movement. The decision to use a susceptible-infected (SI) model rather than a susceptible-infected-susceptible (SIS) model for within-farm spread was based on the fact that different experimental studies have found the viral genome present in tissues of challenged fish throughout the whole duration of the study, indicating that the salmon immune response may be unable to eliminate the virus. This, together with studies where PMCV has been consistently found in cohorts of fish sampled over long periods of time, indicating that PMCV can be present in fish for some months, provides further support for the modeling approach used here. Nevertheless, more research is required to further validate or refute this modeling choice, as it is possible that fish clear the infection beyond the time frames used in both experimental and observational studies. The model was sensitive to changes in the values of the indirect transmission rate, the rate of decay in environmental infectious pressure, and the rate of viral shedding from infected individuals, but not to changes in the level of spatial coupling. Model outputs were also not substantially influenced by different parameter assumptions regarding either distance or seasonality, noting that information about distance thresholds was derived from other viral infections such as infectious salmon anemia (ISA), where estimates have varied from 5 to 20 km or more. Collectively, these results suggest that local spread may play a secondary role in the spread of PMCV across the Atlantic salmon farms in the country. When local spread was removed completely from the model, it was even clearer that, under current model assumptions, this transmission pathway was not the most important one. On the basis of these results, we hypothesize that the widespread presence of PMCV in Ireland is most likely a product of the shipments of infected but subclinical fish through the network of live fish movements that occur in Ireland.
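As a minimal sketch of the within-farm structure described above, the following Python example couples a susceptible-infected model to an environmental infectious pressure compartment that is fed by shedding and depleted by decay. The parameter values and initial conditions are arbitrary placeholders, not the rates estimated for PMCV.

```python
import numpy as np
from scipy.integrate import solve_ivp

def si_environmental(t, y, beta, phi, delta):
    """SI model with indirect transmission through environmental infectious pressure E.

    S, I: susceptible and infected fish; E: environmental infectious pressure.
    Infection occurs indirectly at rate beta*S*E; infected fish shed at rate phi;
    infectious pressure decays at rate delta. No recovery (SI, not SIS).
    """
    S, I, E = y
    new_infections = beta * S * E
    return [-new_infections, new_infections, phi * I - delta * E]

# Placeholder parameters (per day) and initial conditions, illustrative only.
beta, phi, delta = 1e-6, 0.05, 0.2
y0 = [50_000.0, 10.0, 0.0]   # 50,000 susceptible fish, 10 infected, no initial pressure

sol = solve_ivp(si_environmental, (0, 365), y0, args=(beta, phi, delta), max_step=1.0)
S, I, E = sol.y[:, -1]
print(f"After one year: {I:.0f} infected of {S + I:.0f} fish")
```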

This hypothesis is consistent with fish being infected but subclinical for months prior to manifesting signs of disease, and with the structure of the network of live fish movements in the country. There is limited knowledge of agent survival of PMCV in the aquatic environment. Infection risk is higher on farms with a history of CMS outbreaks, which could suggest survival of the causal agent in the local environment. Further, infection pressure from farms within 100 km of seaway distance was found to be one of the most important risk factors for clinical CMS diagnosis, although this study did not evaluate spread via fish movement. It is noted that the distance over which infection can be transmitted via water is determined by an interaction between hydrodynamics, viral shedding, and decay rates. Further research on PMCV survival in the environment is needed to guide parameterization of future models. The most effective intervention strategies are based on outdegree and outcloseness, with the highest impact being observed when these intervention strategies are applied with a proactive approach. Note that all outgoing shipments from selected farms are assumed to include only susceptible fish, which can be equated with high levels of biosecurity. The outdegree- and outcloseness-based strategies are comparable, most likely because both strategies refer to outgoing shipments from a farm, the former counting the number of farms receiving fish from a given source, and the latter being inversely related to the number of intermediaries between the source and the rest of the farms in the network. Both centrality measures were moderately correlated with each other, with a Pearson correlation of 0.53 for the proactive approach when including all farms for each time window used. Based on a closer examination of the top eight farms of each list, for every year, one list always included at least the top three elements of the other. In other words, each list contained the top three farms in terms of outdegree and the top three farms in terms of outcloseness. Further iterations of this model could exploit the similarity between ranks of farms based on these two centrality measures, for example by evaluating the effect of targeting a lower number of farms based on a list created from the top elements of both rankings. For the case presented here, either centrality measure could be used.
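The two centrality measures used for targeting can be computed directly from the movement network. The sketch below uses networkx on a toy directed farm-to-farm graph (the edges are hypothetical, not the Irish movement records) to rank farms by outdegree and outcloseness and to compare the two measures.

```python
import networkx as nx
from scipy.stats import pearsonr

# Toy directed network of live fish movements: an edge (a, b) means farm a ships to farm b.
movements = [("F1", "F2"), ("F1", "F3"), ("F1", "F4"), ("F2", "F5"),
             ("F3", "F5"), ("F4", "F6"), ("F5", "F6"), ("F6", "F7")]
G = nx.DiGraph(movements)

# Outdegree: number of distinct farms receiving fish from each source farm.
outdegree = dict(G.out_degree())

# Outcloseness: closeness along outgoing paths (networkx's closeness_centrality follows
# incoming edges by default, so the graph is reversed to obtain the "out" version).
outcloseness = nx.closeness_centrality(G.reverse())

top_k = 3
rank_outdeg = sorted(outdegree, key=outdegree.get, reverse=True)[:top_k]
rank_outclo = sorted(outcloseness, key=outcloseness.get, reverse=True)[:top_k]
print("Top farms by outdegree:   ", rank_outdeg)
print("Top farms by outcloseness:", rank_outclo)

# Correlation between the two measures across all farms (cf. the 0.53 reported above).
farms = list(G.nodes)
r, _ = pearsonr([outdegree[f] for f in farms], [outcloseness[f] for f in farms])
print(f"Pearson correlation between measures: {r:.2f}")
```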

Given this comparability, we would advocate the use of outdegree over outcloseness because of its simplicity of estimation and interpretation. The proportion of farms connected via live fish movements varied in a cyclical manner, with spikes during the periods of January-April, July, and October-December, which is consistent with results from our previous descriptive study of the network of live fish movements in Ireland. Interventions could be considered that specifically apply at these times of higher connectivity between farms, to take account of this observed cyclicity. The remaining between-farm prevalence levels observed after the implementation of these targeted strategies are due to residual infectious pressure and local spread, where PMCV is not fully cleared from the environment between generations of fish, allowing its transmission to newly stocked fish and locally between neighboring farms. Similarly, the lower performance of the reactive approach, even if all transmission via fish movements is halted, suggests that eradication of PMCV is virtually impossible in Ireland, as it seems that after elimination of transmission via fish movements, the agent is consistently sustained by local spread. The lack of complete production records for all Irish Atlantic salmon farms was the main reason for using movement records to recreate fish population dynamics. Nevertheless, we consider that the rules as applied in this study were realistic. For example, if a farm ships fish in excess of the total fish population at the time of the shipment, it is reasonable to assume that these fish must have originated at a previous time. The options for this origin are either non-recorded incoming fish shipments or hatching of new fish. In the case of the latter, this is perfectly reasonable if the fish deficit at the farm is due to a shipment of eggs. However, if the deficit is due to a shipment of older fish, assigning an enter event for these age groups is not realistic. Nevertheless, in the absence of records accounting for the origin of fish sent in these age groups, this seemed like a better approach than arbitrarily imputing their origin to another farm, which in turn would have created fish deficits in other farms cascading to the rest of the network. Arguably, the availability of complete production records from all Irish salmon farms would minimize this issue, although making such records available for a 9-year time period would pose a hefty burden on fish farmers. Additionally, we assert that the impact of our imputation is marginal, considering that only 90 enter events were imputed during the study period, mostly at the beginning of the simulation, and involving mainly fertilized eggs in freshwater farms. This is further evident when evaluating the generated population dynamics, such as the number of fish in each age group and the timing of fish enter events, where the abundance of each age group and the enter events follow a seasonal pattern that would be expected given the life cycle of farmed Atlantic salmon. Assigning exit events the day before the last fish shipment of a fish cohort was a simplification necessary to prevent farms from overpopulating as the simulation proceeded.
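The population-reconstruction rule described above, imputing an enter event whenever recorded outgoing shipments exceed the tracked farm population, can be expressed compactly. The sketch below walks a chronologically sorted list of hypothetical movement records for one farm; the record fields and values are illustrative only, not the actual Irish movement data.

```python
# Hypothetical, chronologically sorted movement records for a single farm.
# Positive counts are incoming shipments (or hatching); negative counts are outgoing shipments.
records = [
    ("2015-02-01", +40_000),   # incoming smolts
    ("2015-09-15", -25_000),   # outgoing shipment
    ("2016-03-10", -30_000),   # outgoing shipment exceeding the tracked population
]

population = 0
imputed_enter_events = []
for date, count in records:
    population += count
    if population < 0:
        # Fish shipped out must have entered earlier without being recorded:
        # impute an enter event for the deficit and reset the balance to zero.
        imputed_enter_events.append((date, -population))
        population = 0

print("Imputed enter events:", imputed_enter_events)   # [('2016-03-10', 15000)]
```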

The impact of assuming all fish within a cohort were present until the day before shipping is hard to gauge, but we think it would be a small effect, especially considering the large fish populations involved in salmon farming. Future iterations of this model could include a mortality function fitted from the data or, even better, real mortality data from fish farm production records, if available. One of the assumptions of the intervention strategies used in this study is that they are 100% effective in eliminating transmission between farms via fish movements. Achieving a similar level of effectiveness in the field would require screening all fish shipments with a highly sensitive test before they exit the origin farm, and eliminating all positive batches. The sensitivity of currently used diagnostic methods is not reported in the literature, but one could arguably assume that the RT-PCR method for detection of the virus has a high sensitivity given its capacity to measure viral RNA, which may or may not be present within a virion that is able to replicate. Currently there are no confirmatory tests for PMCV, and diagnosis of the clinical disease is based on clinical observations, necropsy, and histopathological findings. As for diagnosing latently or subclinically infected fish, this would pose a great challenge today, as there are no cell cultures or other methods that could assist in such a task, which is particularly important for the correct diagnosis of the agent in the early stages of fish life, namely eggs, juvenile fish, and smolts. Further, even if accurate diagnostic tests were available, the feasibility of discarding all infected fish consignments is doubtful, as it would impose a heavy burden on fish farmers, especially considering the modeled current levels of PMCV prevalence. Nonetheless, this approach does suggest a clear path to prevent the spread of exotic infectious agents in Ireland, such as ISA virus, piscine reovirus, and others. For these agents, targeted surveillance strategies could be implemented based on the top-ranked farms in terms of outdegree as described above, which would allow for timely detection and prevention of further spread across the country. In conclusion, in this study we highlight the importance of human-assisted live fish movement for the dissemination of PMCV across the country, and demonstrate a means, using centrality-based targeted surveillance strategies, to prevent this type of spread in the future for other infectious disease agents. These strategies should be applied early on in the epidemic process, before country-wide dissemination of the agent has taken place. The Irish salmon farming industry would benefit from this approach, as it would help in the early detection and prevention of the spread of exotic viral agents which have the potential to severely impact local farms and the livelihoods of people that depend on them. This in turn would make Irish salmon farming a more robust and sustainable industry, capable of dealing with infectious agents in a timely and effective way, minimizing socioeconomic and environmental losses, and maximizing fish health and welfare.

The literature documents a high incidence of low back disorders (LBDs) in the agricultural industry. A national survey in the U.S. shows that, for males, farming is the occupation with the fifth highest risk of inducing low back pain.
It has been suggested that the preponderance of the morbidity is related to farm workers’ working conditions, such as stooped working postures and awkward postures during lifting, carrying, and moving loads. Such hazards, however, affect both adult and youth workers. Estimates show that each year in the United States, more than 2 million youths under the age of 20 are exposed to such agricultural hazards. These youths perform many farm-related activities involving significant manual handling of materials and are exposed to factors found to be related to the development of musculoskeletal disorders and LBDs. For instance, emptying a bag of swine feed into a feeder, spreading straw, and shoveling silage into a feed bunk are all reported as causes of serious back injuries. It might be useful to first define terms commonly used in reference to workers based on their age. The term “legal adult” or “age of majority” is the threshold of adulthood as declared by law. This age varies based on geographical region and may carry several age-based restrictions. In most circumstances, “adult” usually refers to the age of majority, or one of its exceptions, and not the biological adult age. According to the National Institutes of Health, the term “child” refers to an individual under the age of 21, where the definition spans the period from birth to the age at which most children are dependent on their parents.

A popular approach for surface reconstruction is the representation of surfaces by an implicit function

With the use of a k-d tree structure, the computational complexity of the k-nearest neighbor search scales better than linearly, O(log N) on average, but the structure is not suitable for gradient-based optimization because the derivatives are discontinuous when the set of k-nearest neighbors switches. Outside the domain of non-interference constraint formulations currently employed in optimization, we discovered a significant body of research conducted on a remarkably similar problem by the computer graphics community. Surface reconstruction in the field of computer graphics is the process of converting a set of points into a surface for graphical representation. Implicit surface reconstruction methods such as Poisson, Multi-level Partition of Unity, and Smooth Signed Distance, to name a few, construct an implicit function from a point cloud to represent a surface. We observed that some of these distance-based formulations can be applied to overcome prior limitations in enforcing geometric non-interference constraints in gradient-based optimization. The first objective of this thesis is to devise a general methodology based on an appropriate surface reconstruction method to generate a smooth and fast-to-evaluate geometric non-interference constraint function from an oriented point cloud. It is desired that the function locally approximates the signed distance to a geometric shape and that its evaluation time scales independently of the number of points sampled over the shape, N_Γ.
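For contrast with these desiderata, the sketch below shows the nearest-neighbor distance constraint that a k-d tree provides: queries are fast, but the gradient of the constraint is discontinuous wherever the identity of the nearest neighbor switches. The point cloud and clearance value are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

# Small illustrative point cloud; a real shape would be sampled much more densely.
cloud = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0], [2.0, 1.0]])
tree = cKDTree(cloud)   # build: O(N log N); single queries: roughly O(log N) on average

def min_distance_constraint(x, d_min=0.1):
    """Nearest-neighbor distance minus the required clearance (>= 0 means feasible)."""
    d, _ = tree.query(x, k=1)
    return d - d_min

# Evaluate on either side of the bisector between the first two cloud points (x = 0.5):
# the constraint value is continuous, but the nearest neighbor (and hence the gradient) switches.
for x in ([0.499, 0.3], [0.501, 0.3]):
    _, i = tree.query(x, k=1)
    print(f"x = {x}, nearest cloud point index = {i}, constraint = {min_distance_constraint(x):.4f}")
```

The two query points straddle the bisector between two cloud points, so the constraint value barely changes while its gradient flips direction, which is exactly the non-differentiability noted above.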

The desired constraint function must also be an accurate implicit representation of the surface implied by the given point cloud. The contribution of this paper is a new formulation for representing geometric non-interference constraints in gradient-based optimization. We investigate various properties of the proposed formulation, its efficiency compared to existing non-interference constraint formulations, and its accuracy compared to state-of-the-art surface reconstruction methods. Additionally, we demonstrate the computational speedup of our formulation in experiments with a path planning optimization problem and a shape optimization problem. This section, in full, is currently being prepared for submission for publication of the material. Anugrah J. Joshy, Jui-Te Lin, Cédric Girerd, Tania K. Morimoto, and John T. Hwang. The thesis author was the primary investigator and author of this material.

Wind energy is a sustainable method for electric power generation that mitigates greenhouse gas emissions from other power generation resources, such as fossil fuels. Predictions show that the climate change mitigation from wind energy development ranges from 0.3 °C to 0.8 °C by 2100. Offshore wind farms can also mitigate the impacts of hurricanes for coastal communities. As such an impactful energy resource, the field of wind farm optimization has gained recent attention to maximize the energy production and economic feasibility of developing wind farms. The increased adoption of multidisciplinary design optimization techniques by the wind energy community has produced many recent works, including the optimization of wind turbine designs, wind farm layouts, and active wind farm control. In general, turbine design, wind turbine layout, and active turbine control strategies are the three main methods to increase wind farm efficiency by reducing the wake interaction between turbines.

Although these methods individually may increase net efficiency, it has been shown that considering several or all three methods together can yield a more optimal design. Recent simultaneous optimization studies include control and layout optimization and turbine design and layout optimization. Numerical optimization, as an important design tool for solving these problems, has been widely used for wind farm optimization. Gradient-based and gradient-free algorithms are the two main classes of algorithms used to perform optimization. Historically, gradient-free algorithms have been used for wind farm optimization problems due to the high multi-modality in the design space of these problems. Gradient-free optimizers are robust to local minima, while gradient-based optimizers often converge to a local optimum. However, as these problems increase in scale and in the number of disciplines, the dimensionality of the design space may become impractical for gradient-free optimization. Gradient-free optimizers scale poorly in the number of function evaluations as the number of design variables increases in these complex wind farm problems. Gradient-based optimization, especially with analytic gradients, scales better in the number of function evaluations than gradient-free optimizers in these cases. In addition, recent developments have added methods for gradient-based optimizers to navigate the multi-modal design space of these problems. As a result, gradient-based optimization continues to play a key role in optimizing wind farms. When modeling wind farms for gradient-based optimization, it is important to consider the computational speed and differentiability of the models. High-fidelity models are often very computationally expensive to evaluate, and these models must be evaluated up to hundreds of times during optimization. Therefore, lower-fidelity models that are less computationally expensive are often considered for use in gradient-based optimization.
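As an example of such a low-fidelity, cheap-to-evaluate model, the sketch below implements the classic Jensen (top-hat) wake model for the velocity deficit behind a turbine. The rotor diameter, thrust coefficient, and wake decay constant are generic placeholder values, and this is not necessarily the wake model used in the studies cited here.

```python
import numpy as np

def jensen_deficit(x_down, r_cross, D=126.0, Ct=0.8, k=0.05):
    """Fractional velocity deficit at a point behind a turbine (Jensen/Park model).

    x_down : downstream distance from the rotor (m); must be > 0 to be in the wake
    r_cross: radial (crosswind) distance from the wake centerline (m)
    D      : rotor diameter (m); Ct: thrust coefficient; k: wake decay constant
    """
    x_down = np.asarray(x_down, dtype=float)
    wake_radius = D / 2.0 + k * x_down                      # linearly expanding wake
    in_wake = (x_down > 0.0) & (np.abs(r_cross) <= wake_radius)
    deficit = (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k * x_down / D) ** 2
    return np.where(in_wake, deficit, 0.0)

U_inf = 9.0   # free-stream wind speed, m/s
for spacing in (3, 5, 7, 10):   # downstream spacing in rotor diameters
    u = float(U_inf * (1.0 - jensen_deficit(spacing * 126.0, 0.0)))
    print(f"{spacing:>2}D downstream: wind speed in the wake = {u:.2f} m/s")
```

Note that the top-hat wake edge makes the model non-smooth in the crosswind direction, which is one reason local smoothing techniques come up when such models are paired with gradient-based optimizers.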

Additionally, the differentiability of the models is a requirement in order to perform gradient-based optimization. The ability to calculate derivatives within the model has not always been readily available. Oftentimes, significant effort must be made to hand-derive the derivatives or, in the worst case, to use the finite difference method, which requires a number of function evaluations on the same order as gradient-free optimization. Current state-of-the-art gradient-based optimizations are performed using automatic differentiation; however, it still requires a level of effort to implement into new models, especially when local smoothing techniques are required. A notable research problem in wind farm layout optimization is the representation of wind farm boundary constraints. Boundary constraints in wind farm layout optimization prevent the placement of a wind turbine in regions outside of the permitted zone. Examples of exclusion zones for offshore wind farms include unsuitable seabed gradients, shipwrecks, and shipping lanes. These zones are often disjoint, non-convex, and highly irregular shapes represented in 2D. The wind farm optimization community lacks a generic method for representing these boundaries. Additionally, the state-of-the-art methods suffer from the same problems noted in Section 1.1, where the computational complexity scales with the number of points representing the polygonal wind farm boundary. Conveniently, the first contribution of this thesis addresses this issue. The new geometric non-interference constraint formulation provides a smooth, differentiable, and fast-to-evaluate constraint function that represents the wind farm boundary, suitable for gradient-based optimization. Another tool that may prove to benefit gradient-based wind farm optimization is a new modeling language called the Computational System Design Language (CSDL). CSDL is an algebraic modeling language for defining numerical models that fully automates adjoint-based sensitivity analysis. Additionally, CSDL contains a three-stage compiler system that constructs an optimized computational graph representation of the models. As a new design language, it shows potential for improving the convenience and speed of developing the models used to perform gradient-based wind farm optimization. The second objective of this thesis is to implement the two aforementioned tools, the geometric non-interference constraint formulation and the Computational System Design Language, and to perform optimization studies on multiple wind farm optimization problems. We conduct optimization studies on turbine hub heights, turbine yaw misalignment, and wind farm layout, and investigate their properties as they pertain to gradient-based optimization. These three problems demonstrate the potential of gradient-based optimization in turbine design, wind farm control, and wind farm layout optimization problems. Using well-known analytical models, we conduct multiple optimization studies using CSDL as a modeling paradigm and verify its accuracy against other industry-leading optimization frameworks. Additionally, we perform a wind farm layout optimization with a real-world wind farm, highlighting the accuracy and efficiency of the geometric non-interference constraint formulation. This section, in full, is currently being prepared for submission for publication of the material. Anugrah J. Joshy and John T. Hwang. The thesis author was a contributor to this material.
We identify two preexisting methods for enforcing geometric non-interference constraints in gradient-based optimization that are both continuous and differentiable. Previous constraint formulations that utilize the nearest neighbor distance, e.g., Risco et al. and Bergeles et al., have been used in optimization, but we note that they are non-differentiable and may incur numerical difficulties in gradient-based optimization. Brelje et al. implement a general mesh-based constraint formulation for non-interference constraints between two triangulations of objects. Two nonlinear constraints define their formulation. The first constraint is that the minimum distance of the design shape to the geometric shape is greater than zero, and the second constraint is that the intersection length between the two bodies is zero, i.e., there is no intersection.

A binary check, e.g., ray tracing, must be used to reject optimization iterations where the design shape is entirely in the infeasible region, since the previous two constraints are satisfied in that case. As noted by Brelje et al., this formulation may be susceptible to problems when representing very thin objects, where the intersection length is very sensitive to the step size of the optimizer. Additionally, the constraint function has a computational complexity that scales with the number of elements in the surface meshes, which may be addressed by the use of graphics processing units (GPUs). Lin et al. implement a modified signed distance function, making it differentiable throughout. Using an oriented set of points to represent the bounds of the feasible region, the constraint function is a distance-based weighted sum of signed distances between the points and a set of points on the design shape. This representation is inexact and, in practice, is found to trade accuracy for smoothness in the constraint representation. Additionally, their formulation has a computational complexity of O(N_Γ), since the weighted sum is evaluated over all points in the point cloud.

Our first objective—to derive a smooth level set function from a set of oriented points—closely aligns with the problem of surface reconstruction in computer graphics. Surface reconstruction is done in many ways, and we refer the reader to for a full survey on surface reconstruction methods from point clouds. We, in particular, focus on surface reconstruction with implicit function representations from point clouds. Implicit surface reconstruction is done by constructing an indicator function between the interior and exterior of a surface, whose isocontour represents a smooth surface implied by the point cloud. The methodologies for surface reconstruction use implicit functions as a means to an end; however, the focus of our investigation is on the implicit function itself for enforcing non-interference constraints. We identify that the direct connection between non-interference constraints and implicit functions in surface reconstruction is that the reconstructed surface represents the boundary between the feasible and infeasible regions in a continuous and differentiable way. The surface reconstruction problem begins with a representation of a geometric shape. Geometric non-interference constraints may be represented by geometric shapes using scanned samples of the surface of an anatomy, outer mold line meshes, user-defined polygons, and a sampled set of points of seabed depths. Many geometric shape representations, including those mentioned, can be sampled and readily converted into an oriented point cloud and posed as a surface reconstruction problem. The construction of any point cloud comes with additional complexities. For example, the machine tolerances of scanners introduce error into scans, and meshing algorithms produce different point cloud representations for the same geometric shape. As a result, implicit surface reconstruction methods often take into consideration nonuniform sampling, noise, outliers, misalignment between scans, and missing data in point clouds. Implicit surface reconstruction methods have been shown to address these issues well, including hole-filling, reconstructing surfaces from noisy samples, reconstructing sharp corners and edges, and reconstructing surfaces without normal vectors in the point cloud. Basis functions are commonly used to define the space of implicit functions for implicit surface reconstruction.
Basis functions are constructed from a discrete set of points scattered throughout the domain, whose distribution and locations play an important role in defining the implicit function. Examples of these points include control points for B-splines, centers for radial basis functions, and shifts for wavelets. Implicit surface reconstruction methods distribute these points in various ways. One approach is to adaptively subdivide the implicit function’s domain using an octree structure. Octrees, as used by, recursively subdivide the domain into octants using various heuristics in order to form neighborhoods of control points near the surface. Heuristics include point density, error-controlled, and curvature-based subdivisions. Octrees are notable because the error of the surface reconstruction decays with the sampling width between control points, which decreases exponentially with respect to the octree depth. Additionally, the neighborhoods of control points from octrees can be solved for and evaluated in parallel using graphics processing units (GPUs), which allows for on-demand surface reconstruction as demonstrated in [43]. Another approach for distributing the points that control the implicit function is to locate them directly on the points in the point cloud. In the formulation by Carr et al., a chosen subset of points in the point cloud, together with points projected in the direction of the normal vectors, is used to place the radial basis function centers, resulting in fewer centers than octrees while still being distributed near the surface.
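To make the connection to non-interference constraints concrete, the sketch below builds a smooth implicit function from an oriented point cloud in the spirit of the RBF approach of Carr et al.: on-surface points are assigned the value zero, points offset along the normals are assigned signed off-surface values, and the resulting interpolant is queried like a signed distance. This is a simplified illustration using SciPy's generic RBF interpolator on a toy 2D circle, not the formulation developed in this thesis.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Oriented point cloud: points on a unit circle with outward normals (toy 2D "shape").
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta)])
normals = points.copy()   # for a unit circle, the outward normal equals the point itself

# Carr-style construction: zero values on the surface, +/- eps at points offset along the normals.
eps = 0.05
centers = np.vstack([points, points + eps * normals, points - eps * normals])
values = np.concatenate([np.zeros(len(points)),
                         +eps * np.ones(len(points)),
                         -eps * np.ones(len(points))])

# Thin-plate-spline RBF interpolant approximating a signed distance near the surface.
phi = RBFInterpolator(centers, values, kernel="thin_plate_spline", smoothing=0.0)

queries = np.array([[0.9, 0.0], [1.0, 0.0], [1.1, 0.0]])
print(phi(queries))   # negative just inside, approximately 0 on the surface, positive just outside
```

Used as a constraint, one would require the interpolant to be nonnegative (or nonpositive, depending on the chosen sign convention) at every design point, giving a smooth and differentiable representation of the boundary between the feasible and infeasible regions.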

Estimates for the coefficients on these variables provide the main results of the study

Interestingly, increasing the mean upstream experience of rivals by one unit raises a firm’s vertical integration probability by more than three times the amount caused by increasing the firm’s own upstream experience by one unit. This suggests that the magnitude of bandwagon effects in the generics industry is quite substantial. The number of potential upstream-only entrants, which was found to affect downstream payoffs positively, has a significantly negative coefficient in the vertical integration equation. The estimated marginal effects also indicate that increasing the number of potential upstream suppliers significantly lowers a firm’s probability of vertically integrating. This finding can be interpreted as follows: when the number of potential unintegrated upstream entrants is large so that a lower degree of vertical integration is expected to hold in equilibrium, each downstream entrant has a lower incentive to vertically integrate. This provides additional support to the view that firms’ vertical integration decisions are strategic complements. The main finding from the econometric analysis is that vertical integration decisions in the generics industry exhibit bandwagon effects: a firm’s incentive to vertically integrate is higher if it expects a greater prevalence of vertical integration among its rivals. What could be the cause of such strategic complementarity? One possible explanation is that the strategic complementarity of vertical integration is caused by foreclosure effects in the post-entry market.

Imagine a market where the foreclosure effects of vertical integration are severe relative to its efficiency effects. In such a market, an unintegrated downstream entrant earns a low profit when many of its rivals are vertically integrated, but it gains a high incremental profit by choosing to vertically integrate. On the other hand, when few of its rivals are vertically integrated, the firm’s incremental profit from integrating is likely to be small. By comparison, when foreclosure effects are weak relative to efficiency effects, the firm’s incremental profit from vertical integration is likely to be larger when fewer of its rivals are integrated. Another possibility is that firms in the industry learn from others about the benefits of vertical integration, as suggested by Rosengren and Meehan. The performance of a vertically integrated entrant in one market may inform others in the industry about the hitherto unknown benefits of vertical integration, and influence their actions in future markets. The existence of such learning spillovers would cause vertically integrated entry to become more prevalent over time; it would also create correlation between individual firms’ probability of vertical integration and their rivals’ upstream experience levels. However, while such inter-firm learning effects cannot be ruled out entirely, they are unlikely to be driving the estimated positive impact that rivals’ mean upstream experience has on the probability of vertical integration. This is because the year dummy variables in the vertical integration equation are expected to pick up any learning spillover effects that exist.

Turning to the marginal effects of the year dummies, we find that the probability of vertical integration was significantly higher in 2001 and 2002. The rising trend during the first half of the observation period is consistent with the existence of learning spillovers. Somewhat puzzling is the decreasing trend during the second half. One possible explanation is that some of the vertically integrated entries in the former period were caused by fad behavior, which declined in importance during the latter period. The US generic pharmaceutical industry has experienced a wave of vertical integration since the late 1990s. Industry reports suggest that this pattern may be associated with the increase in paragraph IV patent challenges that followed key court decisions in 1998. The 180-day market exclusivity given to the first generic entrant to file a patent challenge has turned the entry process in some generic drug markets into a race to be first; vertical integration may provide an advantage to the participants of the race by promoting investments aimed at the early development of active pharmaceutical ingredients . Another cause of the vertical merger wave suggested by industry reports is the existence of bandwagon effects: the rising degree of vertical integration in newly opening markets may have motivated firms to become vertically integrated themselves. This paper employs simple theoretical models to demonstrate the validity of these two explanations and to derive empirical tests. In the context of a simultaneous-move vertical integration game such as the one seen generally in the generics industry, the existence of bandwagon effects is equivalent to the strategic complementarity of vertical integration decisions.

The theoretical model in Section 2.3.1 shows that under strategic complementarity, a firm’s probability of vertical integration increases as its rivals’ cost of integration decreases. This result leads naturally to a simple test of bandwagon effects. The other model, presented in Section 2.3.2, shows that vertical integration enables firms to develop their APIs early during a patent challenge, increasing their chances of winning first-to-file status, when API supply contracts are incomplete and payment terms are determined through ex post bargaining. This prediction can be tested by seeing if markets characterized by paragraph IV certification are more likely to attract vertically integrated entrants. The two tests are applied to data on 85 generic drug markets that opened up during 1999-2005, using a trivariate probit model that accounts for selection and endogeneity (a simplified, single-equation illustration of this type of specification is sketched at the end of this section). The coefficient estimate for the paragraph IV indicator variable shows that vertical integration probabilities are higher in paragraph IV markets as the theory suggests, but the marginal effect evaluated at representative values of the covariates is not significantly different from zero. Thus, the hypothesis that vertical integration facilitates relationship-specific non-contractible investments is only partially supported by the data. The past upstream entry experience of a downstream entrant is found to have a significantly positive impact on its probability of vertical integration. This suggests that upstream experience lowers the cost of vertical integration. We also find that the mean upstream experience of rivals has a significantly positive effect on a firm’s vertical integration probability. These two results combined indicate that vertical integration decisions are strategic complements – in other words, bandwagon effects are likely to exist. There are several possible sources of bandwagon effects. One possibility is that vertical integration generates foreclosure effects in the post-entry market, which, according to Buehler and Schmutzler, give rise to the strategic complementarity of vertical integration decisions. There is some empirical evidence to support the existence of foreclosure effects: the number of potential unintegrated upstream entrants has a positive effect on downstream payoffs but its effect on the returns to vertical integration is negative, which suggests that unintegrated downstream entrants are better off if the market is less vertically integrated. Another candidate for the source of bandwagon effects is inter-firm learning about the benefits of vertical integration. The marginal effects of the year dummy variables provide some indication of inter-firm informational spillovers. However, learning effects are unlikely to be behind the estimated positive relationship between a firm’s probability of vertical integration and its rivals’ upstream experience levels. The effect of vertical integration on market outcomes such as prices and product quality in the final goods market can be either positive or negative. For instance, an increase in the level of vertical integration can lead to higher prices or lower prices in the downstream market, depending on the underlying demand and cost function parameters. This is because vertical integration has countervailing effects. One is to decrease the integrating firm’s costs – for instance, through the elimination of double marginalization or the facilitation of non-contractible investments.
Such efficiency effects tend to lead to lower final good prices or higher product quality.
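
As a concrete but deliberately simplified illustration of the empirical test described above, the sketch below fits an ordinary probit of a vertical-integration indicator on a firm's own upstream experience, the mean upstream experience of its rivals, and a paragraph IV dummy, and reports average marginal effects. It runs on simulated data with hypothetical variable names and ignores the selection and endogeneity corrections that the trivariate probit in the chapter addresses; it shows only the shape of such a regression, not the chapter's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated firm-market observations; the variable names are illustrative only.
df = pd.DataFrame({
    "own_upstream_exp": rng.poisson(2, n),          # firm's own past upstream entries
    "rival_upstream_exp": rng.gamma(2.0, 1.0, n),   # mean upstream experience of rivals
    "para_iv": rng.integers(0, 2, n),               # 1 if the market involves a paragraph IV challenge
})

# Simulated outcome so the script runs end to end (coefficients are arbitrary).
latent = (-1.0 + 0.30 * df["own_upstream_exp"]
          + 0.25 * df["rival_upstream_exp"]
          + 0.20 * df["para_iv"])
df["vi_entry"] = (latent + rng.normal(size=n) > 0).astype(int)

X = sm.add_constant(df[["own_upstream_exp", "rival_upstream_exp", "para_iv"]])
probit = sm.Probit(df["vi_entry"], X).fit(disp=False)
print(probit.summary())

# Average marginal effects, analogous in spirit to the marginal effects reported above.
print(probit.get_margeff().summary())
```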

The other countervailing effect is foreclosure: vertical integration can restrict unintegrated rivals’ access to upstream suppliers or downstream buyers. Such foreclosure practices often lead to higher prices or lower quality for the final good. Finally, vertical integration can deter or facilitate entry by unintegrated firms, or induce them to become vertically integrated themselves. In other words, vertical integration can affect market outcomes by influencing the market structure formation process. As this discussion suggests, the link between vertical integration and market outcomes is quite complicated. For this reason, modern analyses of the effects of vertical integration tend to be conducted on an industry-by-industry basis.

This paper presents a novel method for empirically examining vertical integration in an individual industry. It is based on a game-theoretic model of simultaneous entry into an oligopolistic market consisting of an upstream segment and a downstream segment. The players of the game are potential entrants who can enter one of the vertical segments or both. After they make entry and investment decisions, competition occurs within the post-entry market structure and profits are realized. Firms’ entry decisions are based on their expectations of post-entry profits, which in turn are affected by the entry decisions of others. Put another way, potential entrants form profit expectations according to the vertical market structure they expect in the entry equilibrium, as well as the position they foresee for themselves within that market structure. It is assumed that potential entrants are heterogeneous in observable ways and that the entry game is one of complete information.

The econometric model is designed for application to a dataset consisting of multiple markets in which vertical entry patterns are observed. The entry patterns are interpreted as outcomes of the vertical entry game. The object of estimation is the set of firm-level post-entry payoff equations corresponding to three different categories of entry: downstream-only, upstream-only, and vertically integrated. Potential entrants choose the entry category, or action, that yields the highest profit net of entry costs. Each payoff equation contains as arguments variables that describe the actions of other potential entrants. These represent rival effects – the effect of upstream, downstream, and vertically integrated rival entry on profits. While such estimates provide direct measures of inter-firm effects, they can also be used as indirect evidence on the effect of vertical integration on market outcomes.

As in Chapter 2, the dataset used in this chapter comes from the US generic pharmaceutical industry. It covers multiple markets, each defined by a distinct pharmaceutical product. The upstream segment of each market supplies the active pharmaceutical ingredient, while the downstream segment processes the API into finished formulations such as tablets and injectables. For each market, we observe multiple firms entering the two vertical segments – some of them entering both – when the patents and other exclusivities that protect the original product expire and generic entry becomes possible. From the estimated parameters of the vertical entry game, I find that vertical integration between a pair of firms has a significantly positive effect on the profits of independent downstream rivals. This suggests that vertical integration has a substantial efficiency effect that spills over to other firms in the downstream segment.
Another finding is that in markets containing two upstream units and one downstream unit, backward integration by the downstream monopolist significantly reduces the profit of the unintegrated upstream firm. This is consistent with the existence of efficiency effects due to vertical integration; the independent upstream firm’s profit falls if it must contend with a tougher rival.

The parameter estimates are used to simulate the effect of a hypothetical policy that bans vertically integrated entry. I find that while the ban tends to increase the number of upstream entrants, it tends to reduce the number of downstream entrants. Even though competition in the upstream segment increases as a result, the lower efficiency of unintegrated suppliers or the existence of double marginalization problems leads to less entry in the downstream segment. This suggests that vertical integration has an entry-promoting effect in the generic drug industry. We cannot observe the effect of the policy on other market outcomes such as prices. However, the finding that vertical integration has significant efficiency effects as well as entry-promoting effects leads us to conclude that banning vertically integrated entry has an adverse effect on market performance.

The remainder of the chapter is structured as follows. Section 3.2 explains how this study fits into the empirical industrial organization literature on vertical integration and the literature on market entry. To my knowledge, this is the first empirical paper to exploit an entry game structure in order to analyze the effects of vertical integration. In Section 3.3, I describe the process of vertical market structure formation in the generic pharmaceutical industry.
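
To fix ideas about the entry game and the counterfactual exercise, the following Python sketch enumerates the pure-strategy Nash equilibria of a three-firm, complete-information entry game in which each payoff equation contains reduced-form rival-effect terms, and then re-solves the game with the vertically integrated action removed. All payoff numbers are hypothetical and chosen purely for illustration; they are not the chapter's estimates, although with these particular values the ban happens to raise upstream entry and reduce downstream entry, in line with the qualitative pattern described above.

```python
from itertools import product

ACTIONS = ["none", "down", "up", "vi"]  # vi = vertically integrated entry
N_FIRMS = 3

def rival_counts(profile, i):
    """Count rival entrants by segment; a VI rival occupies both segments."""
    others = [a for j, a in enumerate(profile) if j != i]
    d = sum(a in ("down", "vi") for a in others)   # rival downstream units
    u = sum(a in ("up", "vi") for a in others)     # rival upstream units
    v = sum(a == "vi" for a in others)             # rival vertically integrated firms
    return d, u, v

def payoff(action, d, u, v):
    """Reduced-form post-entry payoffs net of entry costs (hypothetical numbers)."""
    if action == "none":
        return 0.0
    if action == "down":
        # hurt by downstream rivals, helped a little by a VI rival's efficiency spillover
        return 1.1 - 0.9 * d + 0.6 * v
    if action == "up":
        # hurt by upstream rivals, and especially by integrated ones
        return 1.4 - 0.6 * u - 0.9 * v
    # vertically integrated: earns in both segments, net of a higher entry cost
    return 2.6 - 0.8 * d - 0.5 * u - 0.6 * v

def pure_nash(allowed):
    """Enumerate pure-strategy Nash equilibria over the allowed action set."""
    eqs = []
    for profile in product(allowed, repeat=N_FIRMS):
        if all(
            payoff(profile[i], *rival_counts(profile, i))
            >= max(payoff(a, *rival_counts(profile, i)) for a in allowed)
            for i in range(N_FIRMS)
        ):
            eqs.append(profile)
    return eqs

for label, allowed in [("baseline (VI allowed)", ACTIONS),
                       ("counterfactual (VI entry banned)", ["none", "down", "up"])]:
    eqs = pure_nash(allowed)
    n_down = [sum(a in ("down", "vi") for a in p) for p in eqs]
    n_up = [sum(a in ("up", "vi") for a in p) for p in eqs]
    print(f"{label}: {len(eqs)} pure-strategy equilibria")
    print("  average downstream entrants:", sum(n_down) / len(eqs))
    print("  average upstream entrants:  ", sum(n_up) / len(eqs))
```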

The entry process for generic pharmaceuticals has evolved greatly over the last three decades

Partly to prevent such situations, the FDA requires originator firms to provide information on the patents covering new drugs as part of their NDA filings. Typically, originators provide information on all relevant patents except for those that only claim manufacturing processes. Once an NDA is approved, a list of patents associated with the new drug is published in an FDA publication called “Approved Drug Products with Therapeutic Equivalence Evaluations”, commonly known as the Orange Book.5 The Orange Book is used by generic companies to learn about the existence and duration of originator patents in every drug market that they contemplate for entry.

Prior to 1984, generic firms seeking marketing approval had to provide the FDA with the same type of information as originator firms, including data on clinical trials conducted on a large number of patients. As a result of the substantial entry costs this entailed, entry by generic companies was limited: in 1984, roughly 150 drug markets were estimated to lack generic entrants despite the expiration of patents.

The Drug Price Competition and Patent Term Restoration Act of 1984, also known as the Hatch-Waxman Amendments, drastically changed the process of generic entry. Most significantly, generic companies were exempted from submitting complete NDAs.6 Instead, a generic entrant could file an Abbreviated New Drug Application (ANDA), which replaces full-scale clinical trial results with data on bio-equivalence. Bio-equivalence tests, which compare how the active ingredient of the generic and the originator drug is absorbed into the bloodstream of healthy subjects, are much smaller in scale and far cheaper to conduct than conventional clinical trials.

When the FDA reviews an ANDA for a generic product, its decision is based on the bio-equivalence test results as well as the clinical trial results contained in the originator product’s NDA. The introduction of the ANDA system implied a huge reduction in product development costs, and generic entry surged after the mid-1980s: the volume-based share of generic drugs rose from 19 percent in 1984 to 51 percent in 2002, increasing further to 74 percent in 2009.

ANDAs are prepared by downstream finished formulation manufacturers and submitted to the FDA some time before they plan to enter the generic market. In the case of a drug containing a new chemical entity, the earliest possible date for filing an ANDA is four years after the approval of the originator’s NDA, but typical filing dates are later. If a generic firm plans to enter after all patents listed in the Orange Book have expired, it begins the ANDA filing process two to three years before the patent expiration date. This reflects the expected time it takes the FDA to review an ANDA: the median approval time was 16.3 months in 2005, increasing in recent years to reach 26.7 months in 2009.7

When unexpired patents are listed in the Orange Book at the time of ANDA filing, the generic firm must make a certification regarding each patent. The firm either indicates that it will wait until the patent expires to enter, or certifies that the patent is invalid or not infringed by its product. The first option is called a paragraph III certification and the second a paragraph IV certification, named after the corresponding passages in section 505 of the Federal Food, Drug, and Cosmetic Act. By filing an ANDA containing a paragraph IV certification, a generic firm preemptively counters any patent infringement claims that it expects from the originator. The FDA cannot give full approval to an ANDA until all patents listed in the Orange Book have expired or have been determined to be invalid or not infringed; a tentative approval, which does not permit the ANDA applicant to enter, can be issued in the meantime.

The filing of an ANDA by a generic firm is not publicized by the FDA until the latter announces a tentative or full approval. Therefore, generic firms generally do not observe their rivals preparing and filing ANDAs in real time. The preparation of an ANDA involves the development of the generic drug product by the applicant, who uses it to conduct bio-equivalence tests.8 A physical sample of the product is submitted to the FDA along with documents pertaining to bio-equivalence and quality.

An important part of generic product development is the sourcing of APIs. Here, the ANDA applicant faces a make-or-buy decision. If the firm has a plant equipped with specialized machinery such as chemical reactors, it can choose to produce its own API. If the ANDA applicant decides to buy its API from outside, it must find a supplier from among the many manufacturers located around the world. There is no centralized market for generic APIs, but international trade shows such as the Convention on Pharmaceutical Ingredients and Intermediates provide regular opportunities for buyers and suppliers to gather and transact.

Once the API is obtained, the downstream firm develops the finished formulation and prepares documentation for the ANDA. The ANDA documents, which are used by the FDA to evaluate the safety and efficacy of the generic product, must convey detailed information regarding the manufacture of the API to the agency. When the API is purchased from outside, the required information must be supplied by the upstream manufacturer. Basic information on the processes used for synthesizing the API is usually shared between the seller and the buyer, but there remain trade secrets – such as the optimal conditions for a chemical reaction – that the upstream firm may be unwilling to fully disclose to the downstream buyer. This is because the buyer might misuse the trade secrets by divulging them to other upstream firms willing to supply the API at a lower price. To address such concerns among API manufacturers, and to maximize the quantity and quality of API-related information that reaches the FDA, the agency uses a system of Drug Master Files (DMFs). DMFs are dossiers, prepared by individual manufacturers, that contain information on manufacturing processes and product quality for APIs.

By submitting the DMF directly to the FDA rather than to its downstream customer, the API manufacturer is able to convey all relevant information to the regulatory agency without risking the misuse of its trade secrets. Unlike ANDA filings, the identities of submitted DMFs are published by the FDA upon receipt.10 If an ANDA applicant buys APIs from outside, it notifies the FDA about the source of the ingredient by referring to the serial number of a specific DMF. At the same time, the applicant contacts the DMF holder, who in turn informs the FDA that the ANDA applicant is authorized to refer to its DMF. In this way, the FDA reviewer knows where to find the API-related information for each ANDA. It is possible for an ANDA applicant to reference multiple DMFs at the time of filing, and for a single DMF to be referenced by multiple ANDAs. On the other hand, adding new DMF reference numbers after filing the ANDA is time-consuming. According to the Federal Trade Commission, it takes around eighteen months for an ANDA applicant to switch its API supplier by adding a new DMF reference.

It would appear that a vertically integrated entrant has less of an incentive to use the DMF system than an unintegrated upstream firm. To the extent that the vertically integrated firm produces API exclusively for in-house use, concerns about the expropriation of trade secrets do not arise. In reality, however, many DMFs are filed by vertically integrated firms. One reason for this is that such firms often sell APIs to unintegrated downstream firms, even when they compete in the same market. For instance, Teva, a large Israeli generic drug company that is present in many US generic markets as a vertically integrated producer, sold 32 percent of its API output in 2008 to outside buyers. Another reason is that generic companies often file separate ANDAs for multiple formulations containing the same API. By submitting a DMF to the FDA, an integrated firm can avoid the burden of including the same API information in multiple ANDAs. While one cannot rule out the possibility that vertically integrated firms sometimes refrain from submitting DMFs, the above discussion suggests that a DMF submission is a good indicator of upstream entry by both vertically integrated and unintegrated entrants.

A final note regarding DMFs addresses the possibility that a DMF submission does not necessarily imply entry into the API market. As Stafford suggests, some API manufacturers may file a DMF to attract the attention of potential buyers, but may not begin actual product development for the US market until buyer interest is confirmed. Such cases do appear to exist, but the practice is counterproductive for two reasons. First, a spurious DMF that is not backed by an actual product creates little real business for the firm while potentially damaging the API manufacturer’s reputation. Second, changing the content of an already-submitted DMF is time-consuming and requires notification to downstream customers. Thus, it seems safe to assume that a DMF submission by a relatively established API manufacturer indicates upstream market entry.

In order to motivate the subsequent empirical analysis, I present a stylized description of the vertical market structure formation process in the generic industry.

The process varies depending on whether or not a patent challenge is involved. I first consider the situation without patent challenges, and then discuss the case involving patent challenges.

When all generic entrants decide to wait until the expiration of originator patents, the vertical market structure of a given generic drug market is formed through a simultaneous entry game. Potential entrants simultaneously choose their actions from the following four alternatives: unintegrated downstream entry, unintegrated upstream entry, vertically integrated entry, and no entry. A firm’s ANDA filing is not observed by the other players until the FDA announces its approval. This unobservability allows us to assume that firms make their downstream entry decisions simultaneously. On the other hand, an entrant’s submission of a DMF becomes observable when the FDA posts that information on its website. This creates the possibility that some firms choose their actions after observing the upstream entry decisions of other firms. However, since upstream manufacturers tend to submit DMFs later in the product development process, when they are already capable of producing the API on a commercial scale, it is reasonable to assume that upstream entry decisions are made simultaneously with downstream decisions.

Once the identities of the market entrants are fixed, we can envision a matching process in which downstream manufacturing units are matched with upstream units. The matching process is not observed, because data from the FDA do not tell us which ANDAs refer to which DMFs.14 After the matches are realized, firms invest in product development and document preparation. Upstream units develop their APIs and submit DMFs to the FDA, while downstream units develop finished formulations and file their ANDAs.15 Downstream generic manufacturers market their products to consumers after the FDA approves their ANDAs and all patents and data exclusivities belonging to the originator expire. The payoffs of individual firms are realized when each downstream firm’s revenue is split between itself and its upstream supplier, in the form of payment for APIs.

When entry into a generic drug market involves a paragraph IV patent challenge, the process of market structure formation can no longer be described as a simultaneous entry game. There are two reasons for this. First, there is no fixed date when generic firms begin to enter, owing to the uncertain nature of patent litigation outcomes. Second, there exist regulatory rules that reward the first generic firm to initiate a successful patent challenge against the originator. This causes potential entrants to compete to become the first patent challenger. The system of rewarding patent challenges was introduced in 1984 as part of the Hatch-Waxman Amendments. The rationale for providing such an incentive to generic firms is that the outcome of a successful patent challenge – the invalidation of a patent or a finding of non-infringement – is a public good. Suppose that one generic firm invests in research and spends time and money on litigation to invalidate an originator patent listed in the Orange Book.