The primary method I used to conduct this research was participant observation

My in-depth study of drug policy reform at both the organizational and practical levels required me to branch beyond the traditional methods of quantitative sociology. I sought to get the inside story from insiders' perspectives, to construct new categories of analysis, and to use these categories to understand the ever-developing phenomenon of drug policy change. Since the data I sought were not amenable to quantitative analysis, I eschewed surveys, secondary data analysis, and structured interviews. To conduct this study I employed four main research methods: participant observation at cannabis dispensaries, drug policy reform conferences, organization meetings, and festivals; in-depth interviews with activists and organization leaders; archival research of movement websites and literature; and archival research of media coverage of drug reform modalities and movement outcomes. I also analyzed state response to this movement as conveyed through official documents and news sources. As my project progressed, I used the Internet to explore how the movement uses social networking sites to connect activists to one another and to coordinate new forms of Internet-based action. My ultimate goal for this research project is to construct a coherent historical narrative of the drug policy reform and medical marijuana movements. Because I sought to create a narrative, qualitative methods were well suited to my task. At the beginning of the process, I needed to look at existing sources on my topic to discover where I needed to fill in the blanks. My use of theory and method was hybrid in form.

Because of my exploratory orientation, I intended to deviate from the deductive, theory-testing orientation that guides much quantitative work in sociology. Although I did not intend my study to be exclusively generative of entirely novel "grounded theory," I also did not completely eschew existing theoretical work in the sociology of social movements and the sociology of drugs. Instead I used a dialectic approach, employing existing theories from the study of social movements to guide my initial research, and a grounded theory orientation to new data I found that augmented, stretched, and contradicted existing theory. This qualitative approach is well suited to my purposes of constructing a narrative of drug policy reform from the viewpoints of its participants, and presenting a study that is amenable to the goals of public sociology. To map the distribution of the wider drug policy reform movement, I initially examined movement documents, literature from conferences, and organization websites to discover and catalog the various organizations that comprise the movement. This aspect of my project gave me an understanding of the various concerns that motivate organizations in the movement, its organizational bases, and the number and size of organizations involved in the wider movement. Through cataloging the various organizations that comprise the movement, I was also able to see the geographical distribution of movement organizations. The websites of drug policy reform organizations also provided an understanding of the way that movement actors frame their concerns and goals, and which symbols and values they use to animate their activism. Recently, social networking websites including Facebook have afforded activists new venues for networking and engaging in lobbying activities. Internet-based activism has included organizing boycotts of corporations unsympathetic to drug use, petitioning government officials and Congressional representatives, and keeping members abreast of organizational campaigns.

In addition to linking participants to one another and keeping them informed about movement activities, social networking sites also offer activists a platform for lobbying politicians and publicizing their efforts. I will include an examination of these websites to assess the breadth of activity in this movement. As noted above, I have participated in drug policy reform for over ten years. Throughout this study I also directly participated in a particular modality of drug policy reform, working in a medical cannabis dispensary. By working as an employee in a medical cannabis dispensary, I was able to experience firsthand what became a central discovery of my research: the hybrid character of the medical marijuana movement. After a thorough review of the social movement literature, I was able to build a theoretical vocabulary to explain this transition as a shifting of fields, from the political field to the commercial field. By working in the hybrid field of medical cannabis, I experienced the quotidian shifts in discourse and practice that facilitate the transition between these two fields of practice. The unique perspective I gained as an employee in a medical cannabis dispensary also gave me a front-row seat to the framing strategies that people use at an active site, or modality, of drug policy reform. I was able to learn and practice the shift in diction that my fellow employees and I used to accomplish the discursive shift of changing a previously illicit substance into a legitimate or licit substance. On a practical level, working at a dispensary allowed me to meet other activists and medical cannabis patients, and to attend numerous drug policy events as a volunteer. My status as an employee gave me entrée into the world of drug policy reform and also made my research feasible with minimal outside funding. I used participant observation to explore the sites where the drug policy movement constitutes itself. This element of the study looked at the two locations where participants in this movement most often interact with one another face-to-face: festivals and conferences.

As noted by social movement scholars, face-to-face interactions are necessary to supplement the technologically based networking of participants through the Internet and other communication technologies. In addition to providing demographic data about attendees, the public speakers, panel discussions, and presentations at these events offered rich qualitative data about the movement. I used this data to analyze how drug policy reformers frame their actions and to discover the key concerns of movement actors. I also used these events as convenient places to gather literature from various organizations. In addition to attending hemp fests and conferences hosted by organizations, I attended several types of meetings during the course of my research project. I attended monthly and annual meetings of organizations, city council meetings, and city medical marijuana task force or commission meetings. These various meetings proved to be excellent sites for gathering qualitative data on how organizations and city governments work to regulate the emergent phenomenon of medical cannabis. To illuminate how organizations change drug policy, how various organizations work together, and the biographical dimensions of drug policy activism, I conducted in-depth qualitative interviews with the members of several different drug policy reform organizations. I employed a snowball sampling technique to reach the leaders and members of drug policy organizations. I sought out key figures in the medical cannabis movement to gain access to their unique knowledge of the movement's history, policy outcomes, collaboration with other organizations and elite benefactors, and interactions with government officials. My interviews with key figures helped me to answer my research questions about the political opportunity structures that allow for novel drug policies. I also asked my interview subjects about their biographies, how they became involved in activism, and what led to changes in their political consciousness. Occasionally, participants in the drug policy reform movement engage in public protest and acts of civil disobedience to decry existing drug policy and institute new policy arrangements. I attended and participated in a medical cannabis protest in November 2011. The events that precipitated the protest, the number and types of people in attendance, and the slogans, speeches, and chants that the protesters used provided rich data for examining how medical marijuana is both a social movement and an industry. Episodes of civil disobedience also provide unique sites to analyze the interaction between the state and the drug policy reform movement. Under what circumstances do activists engage in civil disobedience? What metaphors, slogans, and symbols do protesters deploy? What unites the diverse organizations, funders, and participants of the drug policy reform movement is a belief that prohibition, as an overarching approach to dealing with illicit drug use, creates many problems for individuals and society.

Although not all organizations and individuals in the movement agree that prohibition should be rolled back in its entirety, all the organizations in the movement find that at least some aspects of prohibition create more problems than they solve. In the 1970s, organizations sought to decriminalize the adult use of cannabis because they viewed its prohibition as an affront to individual liberties, and because it relegated a whole class of otherwise law-abiding individuals to criminal status. In the 1980s, the harm reduction movement began as a public health-based response to the spread of HIV and Hepatitis C among injection drug users. Eventually harm reduction blossomed into a philosophy undergirding an alternative approach to drug problems. It was not until the mid-1980s that a wholly anti-prohibitionist branch of the movement coalesced around the issues of racial injustice and the prison boom, human rights and instability in drug-producing countries, and a reintegration of earlier branches of the movement. All three branches of the movement actively challenge the discourse of drug prohibition, in addition to specific policies sustained by the "drug control industrial complex". At an abstract level, the various organizations and participants of the drug policy reform movement are engaging in a collective argument with supporters of drug prohibition. Billig takes a discursive approach to the conduct of social movements. In the tradition of social psychology, he emphasizes the importance of language for movements: "Social movements can be seen as conducting arguments against prevailing common sense". This makes the rhetorical tasks of social movements challenging because most attempts at persuasive discourse appeal to common sense. Essentially the movement argues that "prohibition creates more problems than it solves." As seen with the Occupy movement that began in New York City's Wall Street district in September 2011, one of the most powerful effects a movement can have is changing the national discussion or debate. While sociologists and economists have decried income stratification, income inequality, and the ever-shrinking middle class in the U.S. for decades, the Occupy movement was able to shatter the commonly held and widely disseminated myth that the U.S. is overwhelmingly a middle-class society typified by a high degree of mobility. Although politicians and journalists have decried the central tactic of the Occupy movement, by physically occupying public space the movement was able to change the public debate much more quickly than movements that rely primarily on social movement organizations to make things happen. What makes the argument particularly difficult for the movement to win is an imbalance in access to what I have termed the means of representation. Until the 1990s, supporters of prohibition had privileged access to the means of representation. As I show in chapter two, the drug policy reform movement is using the Internet to address this disparity with increasing success. In addition to challenging the discourse of prohibition on the Internet and, increasingly, in the mainstream news media, the drug policy reform movement converges at conferences and hemp rallies to vocalize, experience, and broadcast its challenge to the discourse of drug prohibition. The movement challenges both the policies enforced in the name of prohibition and, on a more abstract level, the representations of drug users and drug use that prohibitionist discourses seek to portray.
By challenging policies and representations that are part and parcel of those policies, the movement collapses a conceptual division that New Social Movement theorists including Alberto Melucci and Manuel Castells seek to draw: the idea that movements are about cultural stakes rather than legal or political stakes. I consider the question of whether the drug policy reform movement seeks political or cultural change throughout my research, and will revisit this dichotomy in later chapters. At the outset, I wish to make it known that I am not only an academic observer of drug policy reform, but also an active participant. My position as both an advocate for and observer of drug policy reform presents a difficult balancing act. While I strive to objectively represent and analyze the drug policy reform movement, I wholeheartedly support the basic argument of drug policy reform: prohibition is an ineffective way to deal with drug use, and it creates more harmful consequences than it addresses.

Swimmers were removed from the filters under a dissecting scope

Globally, seafood consumption has been on the rise for over 50 years. Between 1961 and 2016, the average annual increase in worldwide seafood consumption was higher than the increases in consumption of beef, pork, and poultry combined. While seafood consumption has increased, global fishing catch – the tonnage of wild finfish, crustaceans, molluscs, and other seafood caught each year – has remained relatively static since the late 1980s. In that time, aquaculture production has grown to meet the demand that wild fisheries could not. Aquaculture is now the fastest-growing food sector and, as of 2016, provides more than half of all the seafood we eat globally. As the human population continues to grow, global demand for seafood will rise. A recent study by Hunter et al. concluded that by 2050, total food production will need to increase by as much as 70% in order to feed the projected population of 9.7 billion people. A significant portion of this increase will likely come from animal protein demanded by a growing middle class. With wild capture fisheries unlikely to meet increasing demand, aquaculture will play a critical role in feeding the world.

Finfish, shellfish, and seaweed are farmed around the world, both on land and in the ocean. On land, farmers primarily utilize freshwater ponds, lakes, and streams, though in some parts of the world, fully indoor, tank-based recirculating aquaculture systems (RAS) are on the rise. Land-based aquaculture is often called "inland aquaculture." In the ocean, the vast majority of seafood farming is done close to shore, in bays, estuaries, fjords, and coastal waters. Some marine aquaculture is done in the open ocean, sometimes miles from shore, where the water is deeper and farmers must contend with storms and higher wave energy. Inland aquaculture currently contributes the vast majority of global aquaculture production, and most of that is finfish.

This farming method, particularly when it is done in ponds, lakes, and streams, must contend with other land and water uses; these conflicts will only increase as the human population grows. Non-RAS inland aquaculture can have negative environmental effects, such as pollution of freshwater drinking sources, ecosystem eutrophication, deforestation, and alteration of natural landscapes, particularly when it is done in developing countries without adequate regulation and oversight. RAS farming seeks to minimize these environmental effects by farming in indoor, closed systems – and many RAS companies market themselves as a sustainable alternative to other farming methods – but it has its own environmental trade-offs, including high energy use. RAS farming typically utilizes less land and water than traditional inland farming and will likely play a key role in future aquaculture production, particularly as the industry embraces renewable energy and technological innovation. However, with population growth increasing constraints on space and freshwater availability, the greatest potential for expanding production is in the ocean. Most marine aquaculture takes place in nearshore, coastal waters. As with inland aquaculture, these farms often compete with other human uses; conflicts can arise over coastal fishing grounds and recreational boating areas, and from resistance by coastal landowners. Nearshore aquaculture can also negatively impact coastal ecosystems. Most notably, if farms are not sited in areas with enough water movement, waste and excess feed can build up on the seafloor and negatively affect surrounding habitats. In some areas, nearshore farming has also resulted in the modification or destruction of estuaries, mangroves, and other important coastal habitats.

Responsible, well-sited nearshore aquaculture operations can minimize environmental impacts and can avoid use conflicts by farming in remote areas with sufficient water movement. Another option is to move operations out into the open ocean, into deeper, offshore waters where there is more space, fewer use conflicts, and strong currents to flush waste from the nets. This report will discuss the present status and future of offshore aquaculture in the United States, with a specific focus on offshore finfish farming, which has been the subject of myriad news stories, lawsuits, industry reports, and government memoranda in recent years.

Norway is the world's second-largest exporter of fish and seafood, ranking only behind China, and is the leading producer of Atlantic salmon, with 1.2 million metric tons of annual production. The Norwegian government has publicly announced its intention to increase salmon production from 1 million mt to 5 million mt by 2050, but most salmon is currently produced in nearshore coastal waters and fjords, where expansion is increasingly limited by coastal acreage and environmental concerns such as fish escapes and the prevalence of sea lice. In late 2015, the Norwegian Ministry of Fisheries and Coastal Affairs announced a program through which the government would grant free "development concessions," i.e., experimental licenses, to projects working to develop technological solutions to the industry's acreage and environmental challenges. The free concessions are available for up to 15 years, and if the project meets a set of fixed criteria within that time, the experimental license can be converted into a commercial license for a NOK 10 million fee, significantly less than the typical NOK 50-60 million licensing fee.

Proposed projects must be large-scale and backed by teams with proven expertise in both aquaculture and offshore infrastructure, such as offshore oil and gas extraction. Each experimental license allows for up to 780 mt of production, so some larger projects require multiple licenses. To date, companies representing 104 individual projects have applied for 898 of these experimental licenses; 53 licenses have been granted.

The 'biological pump', a critical component of global biogeochemical cycles, is responsible for transporting the carbon and nitrogen fixed by phytoplankton in the euphotic zone to the deep ocean. Within the biological pump, the relative contributions of phytoplankton production, aggregation, mineral ballasting, and mesozooplankton grazing to vertical carbon flux are still hotly debated and likely to vary spatially and temporally. While solid arguments exist supporting the importance of each export mechanism, the difficulty of quantifying and comparing individual processes in situ has resulted in investigators using a variety of models, which may support one hypothesis but not exclude others. As such, experimental evidence is needed to assess the nature of sinking material and how it varies among and within ecosystems. Mesozooplankton can mediate biogeochemically relevant processes in many ways, and thus play crucial roles in global carbon and nitrogen cycles. By packaging organic matter into dense, rapidly sinking fecal pellets, mesozooplankton can efficiently transport carbon and associated nutrients out of the surface ocean on passively sinking particles. In the California Current Ecosystem (CCE), for example, Stukel et al. have suggested that fecal pellet production by mesozooplankton is sufficient to account for all of the observed variability in vertical carbon fluxes. Diel vertically migrating mesozooplankton may also actively transport carbon and nitrogen to depth when they feed at the surface at night but descend during the day to respire, excrete, and sometimes die. At times, mesozooplankton are also able to regulate carbon export rates by exerting top-down grazing pressure on phytoplankton or consuming sinking particles. In this study, we utilize sediment traps and 234Th:238U disequilibrium to determine total passive sinking flux, and paired day-night vertically stratified net tows to quantify the contributions of mesozooplankton to active transport during 2 cruises of the CCE Long-Term Ecological Research (LTER) program in April 2007 and October 2008. Using microscopic enumeration of fecal pellets, we show that, across a wide range of environmental conditions, identifiable fecal pellets account for a mean of 35% of passive carbon export at 100 m depth, with pigment analyses suggesting that total sinking flux of fecal material may be even higher. On average, mesozooplankton active transport contributes an additional 19 mg C m−2 d−1 that is not assessed by typical carbon export measurements.

Data for this study come from 2 cruises of the CCE LTER program conducted during April 2007 and October 2008. During the study, water parcels with homogeneous characteristics were identified using satellite images of sea surface temperature and chlorophyll and site surveys with a Moving Vessel Profiler. Appropriate patches for process experiments were marked with a surface drifter with a holey-sock drogue at 15 m and tracked in real time using Globalstar telemetry.
Another similarly drogued drift array with attached sediment traps was also deployed in close proximity to collect sinking particulate matter over the 2 to 4 d duration of each experimental cycle. During each experiment, paired day-night depth-stratified samples of mesozooplankton were taken with a 1 m2, 202 µm mesh Multiple Opening and Closing Net and Environmental Sensing System (MOCNESS) at 9 depths over the upper 450 m of the water column, with the midpoint of the tow corresponding approximately to the location of the surface drifter. These samples were later enumerated by ZooScan and grouped into broad taxonomic categories and size classes for calculation of mesozooplankton active transport.
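To make the logic of the active transport calculation concrete, the sketch below estimates the respiratory component of active transport from paired day-night surface biomass by size class: migrant biomass (the nighttime excess in the surface layer) is multiplied by a weight-specific carbon respiration rate and the fraction of the day spent at depth. This is a minimal illustration under assumed, illustrative biomass values and rates; it is not the study's data or full parameterization, which also accounts for excretion and mortality at depth.

```python
import numpy as np

# Surface-layer biomass by size class (mg C m^-2): night vs. day.
# Values are illustrative, not data from the cruises.
night_biomass = np.array([120.0, 80.0, 40.0])   # small, medium, large
day_biomass   = np.array([90.0, 50.0, 10.0])

# Migrant biomass: the nighttime excess in the surface layer.
migrants = np.clip(night_biomass - day_biomass, 0.0, None)

# Assumed weight-specific carbon respiration rates at depth (d^-1)
# and the fraction of each day spent at depth.
respiration_rate = np.array([0.12, 0.08, 0.05])
fraction_at_depth = 12.0 / 24.0

# Respiratory active transport (mg C m^-2 d^-1), summed over size classes.
active_transport = np.sum(migrants * respiration_rate * fraction_at_depth)
print(f"active transport: {active_transport:.2f} mg C m^-2 d^-1")
```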

Oblique bongo tows to 210 m were also taken at mid-night and mid-day to collect organisms for determination of size-fractionated dry weights and grazing rates of the mesozooplankton community. Size-fractionated dry weights were converted to carbon biomass using the dry weight to carbon relationships of Landry et al.

VERTEX-style drifting sediment traps were deployed on the drifter at the beginning and recovered at the end of each experimental cycle. Trap arrays consisted of 4 to 12 particle interceptor traps (PITs) with an inner diameter of 70 mm and an aspect ratio of 8:1. To create a semi-stable boundary layer immediately above the trap and minimize resuspension during recovery, each PIT had a baffle on top consisting of 14 smaller tubes with an 8:1 aspect ratio. The baffle tubes were tapered at the top to ensure that all particles falling within the inner diameter of the PIT descended into the trap. On P0704, 8 PITs were deployed at a depth of 100 m during each cycle. On P0810, 8 to 12 PITs were deployed at 100 m, and 4 to 8 PITs were deployed near the base of the euphotic zone. Before deployment, each PIT was filled with a 2.2 l slurry composed of 0.1 µm filtered seawater with an additional 50 g l−1 NaCl to create a density interface within the tube that prevented mixing with in situ water. The traps were fixed with a final concentration of 4% formaldehyde before deployment to minimize decomposition as well as consumption by mesozooplankton grazers. Upon recovery, the depth of the salinity interface was determined, and the overlying water was gently removed with a peristaltic pump until only 5 cm of water remained above the interface. The water was then mixed to disrupt large clumps and screened through a 300 µm Nitex filter. The remaining >300 µm non-swimmer particles were then combined with the total <300 µm sample. Samples were then split with a Folsom splitter, and subsamples were taken for C, N, C:234Th, pigment analyses, and microscopy. Typically, subsamples of ¼ of the PIT tube contents were filtered through pre-combusted GF/F filters for organic carbon and nitrogen analyses. Filters were acidified prior to combustion in a Costech 4010 elemental combustion analyzer in the SIO Analytical Facility. Entire tubes were typically filtered through QMA filters for C:234Th analyses as described above. Triplicate subsamples were filtered, extracted in 90% acetone, and analyzed for chlorophyll a and phaeopigment concentrations using acidification with HCl and a Turner Designs Model 10 fluorometer. Samples for microscopic analysis were stored in dark bottles and analyzed on land as described below.

Watercress is a leafy-green crop in the Brassicaceae family, consumed widely across the world for its peppery taste and known to be the most nutrient-dense salad leaf. The peppery taste is the result of high concentrations of glucosinolates (GLS) – phytochemicals which can be hydrolyzed to isothiocyanates (ITCs) upon plant tissue damage, such as chewing, known for their potent anticancer, anti-inflammatory, and antioxidant effects that are beneficial to human health. Although ITCs are the main products of digestion, depending on pH, metal ions, and other epithiospecifier proteins, nitriles can also be formed through GLS breakdown, and they too may have chemopreventive properties. Watercress is a high-value horticultural crop: a specialty leafy vegetable with a growing area of 282 ha in the US, 75 ha of which is in California, compared to 58 ha in the UK.
It is also a high-value horticultural crop in the UK, with a market value of £8.90 per kg compared to £4.97 per kg for mixed baby leaf salad bags, and represents a total value of £15 million per year.

This indicates that for PMCV it is already too late for this type of action to be taken

These lessons would apply not only to PMCV, but also to other infectious diseases whose spread is predominantly via fish movement. The decision to use a susceptible-infected (SI) rather than a susceptible-infected-susceptible (SIS) model for within-farm spread was based on the fact that different experimental studies have found the viral genome present in the tissues of challenged fish throughout the whole duration of the study, indicating that the salmon immune response may be unable to eliminate the virus. This, together with studies in which PMCV has been consistently found in cohorts of fish sampled over long periods of time, indicating that PMCV can be present in fish for some months, provides further support for the modeling approach used here. Nevertheless, more research is required to further validate or refute this modeling choice, as it is possible that fish clear the infection beyond the time frames used in both experimental and observational studies. The model was sensitive to changes in the values of the indirect transmission rate, the rate of decay in environmental infectious pressure, and the rate of viral shedding from infected individuals, but not to changes in the level of spatial coupling. Model outputs were also not substantially influenced by different parameter assumptions regarding either distance or seasonality, noting that information about distance thresholds was derived from other viral infections such as infectious salmon anemia (ISA), where estimates have varied from 5 to 20 km or more. Collectively, these results suggest that local spread may play a secondary role in the spread of PMCV across the Atlantic salmon farms in the country. When local spread was removed completely from the model, it became even clearer that, under current model assumptions, this transmission pathway is not the most important. On the basis of these results, we hypothesize that the widespread presence of PMCV in Ireland is most likely a product of the shipments of infected but subclinical fish through the network of live fish movements that occur in Ireland.
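A minimal sketch of the within-farm dynamics described above is given below: an SI model in which transmission is indirect, driven by an environmental infectious-pressure compartment that grows with viral shedding from infected fish and decays over time. All parameter values are illustrative placeholders, not the calibrated values from this study.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative placeholder parameters (not the calibrated values).
beta = 2e-3    # indirect transmission rate per unit infectious pressure
phi = 0.5      # viral shedding rate from infected fish (per day)
delta = 0.3    # decay rate of environmental infectious pressure (per day)

def si_env(y, t):
    S, I, E = y
    N = S + I
    new_infections = beta * E * S / N   # indirect, environment-driven
    dS = -new_infections                # susceptible fish
    dI = new_infections                 # SI model: no recovery compartment
    dE = phi * I - delta * E            # shedding builds pressure; it decays
    return [dS, dI, dE]

t = np.linspace(0.0, 365.0, 366)
S0, I0, E0 = 99_000.0, 1_000.0, 0.0
S, I, E = odeint(si_env, [S0, I0, E0], t).T
print(f"infected after one year: {I[-1]:.0f} of {S0 + I0:.0f} fish")
```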

This is consistent with fish being infected but subclinical for months prior to manifesting signs of disease, and with the structure of the network of live fish movements in the country. There is limited knowledge of the survival of PMCV in the aquatic environment. Infection risk is higher on farms with a history of CMS outbreaks, which could suggest survival of the causal agent in the local environment. Further, infection pressure from farms within 100 km of seaway distance was found to be one of the most important risk factors for clinical CMS diagnosis, although this study did not evaluate spread via fish movement. It is noted that the distance over which infection can be transmitted via water is determined by an interaction between hydrodynamics, viral shedding, and decay rates. Further research on PMCV survival in the environment is needed to guide the parameterization of future models. The most effective intervention strategies are based on outdegree and outcloseness, with the highest impact being observed when using these intervention strategies with a proactive approach. Note that all outgoing shipments from selected farms are assumed to include only susceptible fish, which can be equated with high levels of biosecurity. The outdegree- and outcloseness-based strategies are comparable, most likely because both measures refer to outgoing shipments from a farm: the former counts the number of farms receiving fish from a given source, while the latter is inversely related to the number of intermediaries between the source and the rest of the farms in the network. Both centrality measures were moderately correlated with each other, with a Pearson correlation of 0.53 for the proactive approach when including all farms for each time window used. Based on a closer examination of the top eight farms of each list, for every year, one list always included at least the top three elements of the other. In other words, each list contained the top three farms in terms of outdegree and the top three farms in terms of outcloseness. Further iterations of this model could exploit the similarity between ranks of farms based on these two centrality measures, for example by evaluating the effect of targeting a lower number of farms based on a list created from the top elements of both rankings. For the case presented here, either centrality measure could be used.
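The following sketch illustrates how such centrality-based rankings can be computed on a directed live fish movement network; the edge list is a hypothetical placeholder for the real shipment records, and the networkx and scipy functions shown are one way to obtain outdegree, outcloseness, and their Pearson correlation.

```python
import networkx as nx
from scipy.stats import pearsonr

# Hypothetical shipment records: (source farm, destination farm).
shipments = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
             ("C", "E"), ("D", "E"), ("E", "F")]
G = nx.DiGraph(shipments)

outdegree = nx.out_degree_centrality(G)
# networkx's closeness follows incoming paths by default; reversing the
# graph measures closeness along outgoing shipments ("outcloseness").
outcloseness = nx.closeness_centrality(G.reverse())

farms = sorted(G.nodes)
r, _ = pearsonr([outdegree[f] for f in farms],
                [outcloseness[f] for f in farms])

top_out = sorted(farms, key=outdegree.get, reverse=True)[:3]
top_close = sorted(farms, key=outcloseness.get, reverse=True)[:3]
print(f"top farms by outdegree: {top_out}")
print(f"top farms by outcloseness: {top_close}")
print(f"correlation between the two measures: r = {r:.2f}")
```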

This being the case, we would advocate for the use of outdegree over outcloseness, given its simplicity of estimation and interpretation. The proportion of farms connected via live fish movements varied in a cyclical manner, with spikes during the periods of January-April, July, and October-December, which is consistent with results from our previous descriptive study of the network of live fish movements in Ireland. Interventions could be considered that specifically apply at these times of higher connectivity between farms, to take account of this observed cyclicity. The between-farm prevalence levels remaining after the implementation of these targeted strategies are due to residual infectious pressure and local spread, where PMCV is not fully cleared from the environment between generations of fish, allowing its transmission to newly stocked fish and locally between neighboring farms. Similarly, the lower performance of the reactive approach, even when all transmission via fish movements is halted, suggests that eradication of PMCV is virtually impossible in Ireland, as it seems that after elimination of transmission via fish movements, the agent is consistently sustained by local spread. The lack of complete production records for all Irish Atlantic salmon farms was the main reason for using movement records to recreate fish population dynamics. Nevertheless, we consider that the rules as applied in this study were realistic. For example, if a farm ships fish in excess of the total fish population at the time of the shipment, it is reasonable to assume that these fish must have originated at a previous time. The options for this origin are either non-recorded incoming fish shipments or the hatching of new fish. In the case of the latter, this is perfectly reasonable if the fish deficit at the farm is due to a shipment of eggs. However, if the deficit is due to a shipment of older fish, assigning an enter event for these age groups is not realistic. Nevertheless, in the absence of records accounting for the origin of fish sent in these age groups, this seemed like a better approach than arbitrarily imputing their origin to another farm, which in turn would have created fish deficits in other farms cascading to the rest of the network. Arguably, the availability of complete production records from all Irish salmon farms would minimize this issue, although making such records available for a 9-year time period would pose a hefty burden on fish farmers. Additionally, we assert that the impact of our imputation is marginal, considering that only 90 enter events were imputed during the study period, mostly at the beginning of the simulation, and involving mainly fertilized eggs in freshwater farms. This is further evident when evaluating the generated population dynamics, such as the number of fish in each age group and the timing of fish enter events, where the abundance of each age group and the enter events follow a seasonal pattern that would be expected given the life cycle of farmed Atlantic salmon. Assigning exit events the day before the last fish shipment of a fish cohort was a simplification necessary to keep farms from overpopulating as the simulation proceeded.
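The enter-event imputation rule described above can be summarized in a few lines of code. The sketch below is a simplified illustration with hypothetical data structures; the actual model tracks age groups and dates, which are omitted here.

```python
# population: current fish count per farm; shipments: (day, farm, n_fish).
def impute_enter_events(population, shipments):
    events = []
    for day, farm, n_fish in sorted(shipments):
        deficit = n_fish - population.get(farm, 0)
        if deficit > 0:
            # The shipped fish must have entered the farm earlier
            # (e.g., hatched on site): impute an enter event.
            events.append((day, farm, deficit))
            population[farm] = population.get(farm, 0) + deficit
        population[farm] -= n_fish
    return events

population = {"hatchery_1": 5_000}
shipments = [(10, "hatchery_1", 8_000)]
print(impute_enter_events(population, shipments))  # [(10, 'hatchery_1', 3000)]
```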

The impact of assuming all fish within a cohort were present until the day before shipping is hard to gauge, but we think it would be a small effect, especially considering the large fish populations involved in salmon farming. Future iterations of this model could include a mortality function fitted from the data, or, even better, real mortality data from fish farm production records, if available. One of the assumptions of the intervention strategies used in this study is that they are 100% effective in eliminating transmission between farms via fish movements. Achieving a similar level of effectiveness in the field would require screening all fish shipments with a highly sensitive test before they exit the origin farm, and eliminating all positive batches. The sensitivity of currently used diagnostic methods is not reported in the literature, but one could arguably assume that the RT-PCR method for detection of the virus has a high sensitivity given its capacity to measure viral RNA, which may or may not be present within a virion that is able to replicate. Currently there are no confirmatory tests for PMCV, and diagnosis of the clinical disease is based on clinical observations, necropsy, and histopathological findings. As for diagnosing latently or subclinically infected fish, this would pose a great challenge today, as there are no cell cultures or other methods that could assist in such a task, which is particularly important for the correct diagnosis of the agent in the early stages of fish life, namely eggs, juvenile fish, and smolts. Further, even if accurate diagnostic tests were available, the feasibility of discarding all infected fish consignments is doubtful, as it would impose a heavy burden on fish farmers, especially considering the modeled current levels of PMCV prevalence. Nonetheless, this approach does suggest a clear path to preventing the spread of exotic infectious agents in Ireland, such as ISA virus, piscine reovirus, and others. For these agents, targeted surveillance strategies could be implemented based on the top-ranked farms in terms of outdegree as described above, which would allow for timely detection and prevention of further spread across the country. In conclusion, in this study we highlight the importance of human-assisted live fish movement for the dissemination of PMCV across the country, and demonstrate a means, using centrality-based targeted surveillance strategies, to prevent this type of spread in the future for other infectious disease agents. These strategies should be applied early in the epidemic process, before country-wide dissemination of the agent has taken place. The Irish salmon farming industry would benefit from this approach, as it would help in the early detection and prevention of the spread of exotic viral agents which have the potential to severely impact local farms and the livelihoods of the people that depend on them. This in turn would make Irish salmon farming a more robust and sustainable industry, capable of dealing with infectious agents in a timely and effective way, minimizing socioeconomic and environmental losses, and maximizing fish health and welfare.

The literature documents a high incidence of low back disorders (LBDs) in the agricultural industry. A national survey in the U.S. shows that, for males, farming is the occupation with the fifth-highest risk of inducing low back pain.
It has been suggested that the preponderance of the morbidity is related to farm workers' working conditions, such as stooped working postures and awkward postures during lifting, carrying, and moving loads. Such hazards, however, affect both adult and youth workers. Estimates show that each year in the United States, more than 2 million youths under the age of 20 are exposed to such agricultural hazards. These youths perform many farm-related activities involving significant manual handling of materials and are exposed to factors found to be related to the development of musculoskeletal disorders and LBDs. For instance, emptying a bag of swine feed into a feeder, spreading straw, and shoveling silage into a feed bunk are all reported as causes of serious back injuries. It might be useful to first define terms commonly used in reference to workers based on their age. The term "legal adult" or "age of majority" refers to the threshold of adulthood as declared by law. This age varies based on geographical region and may come with several age-based restrictions. In most circumstances, "adult" is used in reference to the age of majority, or one of its exceptions, and not the biological adult age. According to the National Institutes of Health, the term "child" refers to an individual under the age of 21, where the definition spans the period from birth through the age at which most children remain dependent on their parents.

A popular approach for surface reconstruction is the representation of surfaces by an implicit function

With the use of a k-d tree structure, the computational complexity of the k-nearest neighbor search scales better than linearly, O(log N) on average, but the structure is not suitable for gradient-based optimization because the derivatives are discontinuous when the set of k-nearest neighbors switches. Outside the domain of non-interference constraint formulations currently employed in optimization, we discovered a significant body of research conducted on a remarkably similar problem by the computer graphics community. Surface reconstruction, in the field of computer graphics, is the process of converting a set of points into a surface for graphical representation. Implicit surface reconstruction methods such as Poisson, Multi-level Partition of Unity, and Smooth Signed Distance, to name a few, construct an implicit function from a point cloud to represent a surface. We observed that some of these distance-based formulations can be applied to overcome prior limitations in enforcing geometric non-interference constraints in gradient-based optimization. The first objective of this thesis is to devise a general methodology, based on an appropriate surface reconstruction method, to generate a smooth and fast-to-evaluate geometric non-interference constraint function from an oriented point cloud. It is desired that the function locally approximates the signed distance to a geometric shape and that its evaluation time scales independently of the number of points sampled over the shape, NΓ.
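As a point of reference for the limitation noted above, the following sketch shows the standard k-d tree nearest-neighbor distance computation (here via scipy's cKDTree) that underlies such constraint formulations. The obstacle point cloud and query points are illustrative; the resulting distance function is continuous, but its gradient jumps whenever the nearest neighbor switches.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative obstacle point cloud sampled over some geometric shape.
rng = np.random.default_rng(1)
obstacle_points = rng.uniform(-1.0, 1.0, size=(10_000, 2))
tree = cKDTree(obstacle_points)   # build: O(N log N); query: O(log N) average

def min_distance(x):
    """Distance from x to the nearest obstacle point."""
    d, _ = tree.query(x)
    return d

# Continuous but non-differentiable wherever the nearest neighbor switches,
# which is what hampers gradient-based optimization.
for x in np.linspace(-2.0, 2.0, 5):
    print(f"d([{x:+.1f}, 1.5]) = {min_distance([x, 1.5]):.3f}")
```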

The function must also be an accurate implicit representation of the surface implied by the given point cloud. The contribution of this paper is a new formulation for representing geometric non-interference constraints in gradient-based optimization. We investigate various properties of the proposed formulation, its efficiency compared to existing non-interference constraint formulations, and its accuracy compared to state-of-the-art surface reconstruction methods. Additionally, we demonstrate the computational speedup of our formulation in experiments with a path planning optimization problem and a shape optimization problem. This section, in full, is currently being prepared for submission for publication of the material. Anugrah J. Joshy, Jui-Te Lin, Cédric Girerd, Tania K. Morimoto, and John T. Hwang. The thesis author was the primary investigator and author of this material.

Wind energy is a sustainable method for electric power generation that mitigates greenhouse gas emissions from other power generation resources, such as fossil fuels. Predictions show that the climate change mitigation from wind energy development ranges from 0.3°C to 0.8°C by 2100. Offshore wind farms can also mitigate the impacts of hurricanes for coastal communities. As such an impactful energy resource, the field of wind farm optimization has gained recent attention to maximize the energy production and economic feasibility of developing wind farms. The increased adoption of multidisciplinary design optimization techniques by the wind energy community has produced many recent works, including the optimization of wind turbine designs, wind farm layouts, and active wind farm control. In general, turbine design, wind turbine layout, and active turbine control strategies are the three main methods to increase wind farm efficiency by reducing the wake interaction between turbines.

Although these methods individually may increase net efficiency, it has been shown that considering multiple or all three methods together can produce a more optimal design. Recent simultaneous optimization studies include control and layout optimization, and turbine design and layout optimization. Numerical optimization, an important design tool for solving these problems, has been widely used for wind farm optimization. Gradient-based and gradient-free algorithms are the two main classes of optimization algorithms. Historically, gradient-free algorithms have been used for wind farm optimization problems due to the high multi-modality of the design space of these problems. Gradient-free optimizers are robust to local minima, while gradient-based optimizers often converge to a local optimum. However, as these problems increase in scale and in the number of disciplines, the dimensionality of the design space may become impractical for gradient-free optimization. Gradient-free optimizers scale poorly in the number of function evaluations as the number of design variables increases in these complex wind farm problems. Gradient-based optimization, especially with analytic gradients, scales better in the number of function evaluations than gradient-free optimizers in these cases. In addition, recent developments have added methods for gradient-based optimizers to navigate the multi-modal design space of these problems. As a result, gradient-based optimization continues to play a key role in optimizing wind farms. When modeling wind farms for gradient-based optimization, it is important to consider the computational speed and differentiability of the models. High-fidelity models are often very computationally expensive to evaluate, and these models must be evaluated up to hundreds of times during optimization. Therefore, lower-fidelity models that are less computationally expensive are often considered for use in gradient-based optimization.

Additionally, the differentiability of the models is a requirement for performing gradient-based optimization. The ability to calculate derivatives within the model has not always been readily available. Oftentimes, significant effort must be made to hand-derive the derivatives or, in the worst case, to use the finite-difference method, which requires on the same order of function evaluations as gradient-free optimization. Current state-of-the-art gradient-based optimizations are performed using automatic differentiation; however, it still requires a level of effort to implement in new models, especially when local smoothing techniques are required. A notable research problem in wind farm layout optimization is the representation of wind farm boundary constraints. Boundary constraints in wind farm layout optimization prevent the placement of a wind turbine in regions outside of the permitted zone. Examples of exclusion zones for offshore wind farms include unsuitable seabed gradients, shipwrecks, and shipping lanes. These zones are often disjoint, non-convex, and highly irregular shapes represented in 2D. The wind farm optimization community lacks a generic method to represent these boundaries. Additionally, the state-of-the-art methods suffer from the same problems noted in Section 1.1, where the computational complexity scales with the number of points representing the polygonal wind farm boundary. Conveniently, the first contribution of this thesis addresses this issue. The new geometric non-interference constraint formulation provides a smooth, differentiable, and fast-to-evaluate constraint function that represents the wind farm boundary, suitable for gradient-based optimization. Another tool that may benefit gradient-based wind farm optimization is a new modeling language called the Computational System Design Language (CSDL). CSDL is an algebraic modeling language for defining numerical models that fully automates adjoint-based sensitivity analysis. Additionally, CSDL contains a three-stage compiler system that constructs an optimized computational graph representation of the models. As a new design language, it shows potential for improving the convenience and speed of developing models for gradient-based wind farm optimization. The second objective of this thesis is to implement the two aforementioned tools, the geometric non-interference constraint formulation and CSDL, and perform optimization studies on multiple wind farm optimization problems. We conduct optimization studies on turbine hub heights, turbine yaw misalignment, and wind farm layout, and investigate their properties as they pertain to gradient-based optimization. These three problems demonstrate the potential of gradient-based optimization in turbine design, wind farm control, and wind farm layout optimization problems. Using well-known analytical models, we conduct multiple optimization studies using CSDL as a modeling paradigm and verify its accuracy against other industry-leading optimization frameworks. Additionally, we perform a wind farm layout optimization with a real-world wind farm, highlighting the accuracy and efficiency of the geometric non-interference constraint formulation. This section, in full, is currently being prepared for submission for publication of the material. Anugrah J. Joshy and John T. Hwang. The thesis author was a contributor to this material.
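For context, the sketch below shows the baseline polygonal boundary constraint that the smooth formulation replaces: a signed distance to the wind farm boundary polygon, positive inside the permitted zone and negative outside, computed here with shapely. The polygon and turbine coordinates are hypothetical. Note that this nearest-distance form is non-differentiable at polygon vertices, which is precisely the limitation the thesis's formulation addresses.

```python
import numpy as np
from shapely.geometry import Point, Polygon

# Hypothetical permitted zone for turbine placement (coordinates in meters).
boundary = Polygon([(0, 0), (2000, 0), (2000, 1500), (0, 1500)])

def signed_distance(xy):
    """Positive inside the permitted zone, negative outside."""
    p = Point(xy[0], xy[1])
    d = boundary.exterior.distance(p)
    return d if boundary.contains(p) else -d

# Constraint per turbine: signed_distance(turbine) >= 0.
turbines = np.array([[500.0, 700.0], [2100.0, 800.0]])
g = np.array([signed_distance(t) for t in turbines])
print(g)   # the second turbine violates the boundary constraint
```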
We identify two preexisting methods for enforcing geometric non-interference constraints in gradient-based optimization that are both continuous and differentiable. Previous constraint formulations that utilize the nearest-neighbor distance, e.g., Risco et al. and Bergeles et al., have been used in optimization, but we note that they are non-differentiable and may incur numerical difficulties in gradient-based optimization. Brelje et al. implement a general mesh-based constraint formulation for non-interference constraints between two triangulations of objects. Two nonlinear constraints define their formulation. The first constraint is that the minimum distance of the design shape to the geometric shape is greater than zero, and the second constraint is that the intersection length between the two bodies is zero, i.e., there is no intersection.

A binary check, e.g., ray tracing, must be used to reject optimization iterations where the design shape lies entirely in the infeasible region, since in that case the previous two constraints are still satisfied. As noted by Brelje et al., this formulation may have difficulty representing very thin objects, where the intersection length is very sensitive to the step size of the optimizer. Additionally, the constraint function has a computational complexity that scales with the sizes of the two meshes, which may be addressed by the use of graphics processing units (GPUs). Lin et al. implement a modified signed distance function, making it differentiable throughout. Using an oriented set of points to represent the bounds of the feasible region, the constraint function is a distance-based weighted sum of signed distances between the points and a set of points on the design shape. This representation is inexact and, in practice, is found to trade accuracy for smoothness in the constraint representation. Additionally, their formulation has a computational complexity that scales with the number of points.

Our first objective—to derive a smooth level set function from a set of oriented points—closely aligns with the problem of surface reconstruction in computer graphics. Surface reconstruction is done in many ways, and we refer the reader to the survey literature for a full overview of surface reconstruction methods from point clouds. We, in particular, focus on surface reconstruction with implicit function representations from point clouds. Implicit surface reconstruction is done by constructing an indicator function between the interior and exterior of a surface, whose isocontour represents a smooth surface implied by the point cloud. The methodologies for surface reconstruction use implicit functions as a means to an end; however, the focus of our investigation is on the implicit function itself for enforcing non-interference constraints. We identify that the direct connection between non-interference constraints and implicit functions in surface reconstruction is that the reconstructed surface represents the boundary between the feasible and infeasible regions in a continuous and differentiable way. The surface reconstruction problem begins with a representation of a geometric shape. Geometric non-interference constraints may be represented by geometric shapes using scanned samples of the surface of an anatomy, outer mold line meshes, user-defined polygons, or a sampled set of points of seabed depths. Many geometric shape representations, including those mentioned, can be sampled and readily converted into an oriented point cloud and posed as a surface reconstruction problem. The construction of any point cloud comes with additional complexities. For example, the machine tolerance of scanners introduces error into scans, and meshing algorithms produce different point cloud representations for the same geometric shape. As a result, implicit surface reconstruction methods often take into consideration nonuniform sampling, noise, outliers, misalignment between scans, and missing data in point clouds. Implicit surface reconstruction methods have been shown to address these issues well, including hole-filling, reconstructing surfaces from noisy samples, reconstructing sharp corners and edges, and reconstructing surfaces without normal vectors in the point cloud. Basis functions are commonly used to define the space of implicit functions for implicit surface reconstruction.
Basis functions are constructed from a discrete set of points scattered throughout the domain, whose distribution and locations play an important role in defining the implicit function. Examples of these points include control points for B-splines, centers for radial basis functions, and shifts for wavelets. Implicit surface reconstruction methods distribute these points in various ways. One approach is to adaptively subdivide the implicit function's domain using an octree structure. Octrees, as used by several of these methods, recursively subdivide the domain into octants using various heuristics in order to form neighborhoods of control points near the surface. Heuristics include point-density, error-controlled, and curvature-based subdivisions. Octrees are notable because the error of the surface reconstruction decays with the sampling width between control points, which decreases exponentially with respect to the octree depth. Additionally, the neighborhoods of control points from octrees can be solved for and evaluated in parallel using GPUs, which allows for on-demand surface reconstruction as demonstrated in [43]. Another approach for distributing the points that control the implicit function is to locate them directly on the points in the point cloud. In the formulation by Carr et al., a chosen subset of points in the point cloud, together with points projected in the direction of the normal vectors, is used to place the radial basis function centers, resulting in fewer centers than octrees while still distributing them near the surface.
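To illustrate the second approach, the following is a minimal two-dimensional sketch in the spirit of Carr et al.'s formulation: radial basis function centers are placed on the point cloud and at points offset along the normals, and the weights are solved so that the implicit function is zero on the surface and ±d at the offset points. The kernel, offset, and toy data are all illustrative choices, not the published method's exact configuration.

```python
import numpy as np

def rbf(r):
    # A simple cubic kernel; published methods use other kernels.
    return r**3

def fit_implicit(points, normals, d=0.05):
    # Centers: on-surface points (f = 0) plus points offset along the
    # normals (f = +d outside, f = -d inside).
    centers = np.vstack([points, points + d*normals, points - d*normals])
    values = np.concatenate([np.zeros(len(points)),
                             +d*np.ones(len(points)),
                             -d*np.ones(len(points))])
    diff = centers[:, None, :] - centers[None, :, :]
    A = rbf(np.linalg.norm(diff, axis=-1))            # interpolation matrix
    weights = np.linalg.solve(A + 1e-10*np.eye(len(A)), values)
    return centers, weights

def evaluate(x, centers, weights):
    r = np.linalg.norm(x[None, :] - centers, axis=-1)
    return weights @ rbf(r)

# Toy data: points on the unit circle, whose outward normals equal the
# point coordinates themselves.
theta = np.linspace(0.0, 2.0*np.pi, 40, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
centers, w = fit_implicit(pts, pts)
print(evaluate(np.array([0.0, 0.0]), centers, w))   # interior query point
print(evaluate(np.array([2.0, 0.0]), centers, w))   # exterior query point
```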

Estimates for the coefficients on these variables provide the main results of the study

Interestingly, increasing the mean upstream experience of rivals by one unit raises a firm's vertical integration probability by more than three times the amount caused by increasing the firm's own upstream experience by one unit. This suggests that the magnitude of bandwagon effects in the generics industry is quite substantial. The number of potential upstream-only entrants, which was found to affect downstream payoffs positively, has a significantly negative coefficient in the vertical integration equation. The estimated marginal effects also indicate that increasing the number of potential upstream suppliers significantly lowers a firm's probability of vertically integrating. This finding can be interpreted as follows: when the number of potential unintegrated upstream entrants is large, so that a lower degree of vertical integration is expected to hold in equilibrium, each downstream entrant has a lower incentive to vertically integrate. This provides additional support for the view that firms' vertical integration decisions are strategic complements. The main finding from the econometric analysis is that vertical integration decisions in the generics industry exhibit bandwagon effects: a firm's incentive to vertically integrate is higher if it expects a greater prevalence of vertical integration among its rivals. What could be the cause of such strategic complementarity? One possible explanation is that the strategic complementarity of vertical integration is caused by foreclosure effects in the post-entry market.

Imagine a market where the foreclosure effects of vertical integration are severe relative to its efficiency effects. In such a market, an unintegrated downstream entrant earns a low profit when many of its rivals are vertically integrated, but it gains a high incremental profit by choosing to vertically integrate. On the other hand, when few of its rivals are vertically integrated, the firm's incremental profit from integrating is likely to be small. By comparison, when foreclosure effects are weak relative to efficiency effects, the firm's incremental profit from vertical integration is likely to be larger when fewer of its rivals are integrated. Another possibility is that firms in the industry learn from others about the benefits of vertical integration, as suggested by Rosengren and Meehan. The performance of a vertically integrated entrant in one market may inform others in the industry about the hitherto unknown benefits of vertical integration, and influence their actions in future markets. The existence of such learning spillovers would cause vertically integrated entry to become more prevalent over time; it would also create correlation between individual firms' probability of vertical integration and their rivals' upstream experience levels. However, while such inter-firm learning effects cannot be ruled out entirely, they are unlikely to be driving the estimated positive impact that rivals' mean upstream experience has on the probability of vertical integration. This is because the year dummy variables in the vertical integration equation are expected to pick up any learning spillover effects that exist.

Turning to the marginal effects of the year dummies, we find that the probability of vertical integration was significantly higher in 2001 and 2002. The rising trend during the first half of the observation period is consistent with the existence of learning spillovers. Somewhat puzzling is the decreasing trend during the second half. One possible explanation is that some of the vertically integrated entries in the former period were caused by fad behavior, which declined in importance during the latter period. The US generic pharmaceutical industry has experienced a wave of vertical integration since the late 1990s. Industry reports suggest that this pattern may be associated with the increase in paragraph IV patent challenges that followed key court decisions in 1998. The 180-day market exclusivity given to the first generic entrant to file a patent challenge has turned the entry process in some generic drug markets into a race to be first; vertical integration may provide an advantage to the participants of the race by promoting investments aimed at the early development of active pharmaceutical ingredients (APIs). Another cause of the vertical merger wave suggested by industry reports is the existence of bandwagon effects: the rising degree of vertical integration in newly opening markets may have motivated firms to become vertically integrated themselves. This paper employs simple theoretical models to demonstrate the validity of these two explanations and to derive empirical tests. In the context of a simultaneous-move vertical integration game such as the one generally seen in the generics industry, the existence of bandwagon effects is equivalent to the strategic complementarity of vertical integration decisions.

The theoretical model in Section 2.3.1 shows that under strategic complementarity, a firm’s probability of vertical integration increases as its rivals’ cost of integration decreases. This result leads naturally to a simple test of bandwagon effects. The other model, presented in Section 2.3.2, shows that vertical integration enables firms to develop their APIs early during a patent challenge, increasing their chances of winning first-to-file status, when API supply contracts are incomplete and payment terms are determined through ex post bargaining. This prediction can be tested by seeing if markets characterized by paragraph IV certification are more likely to attract vertically integrated entrants. The two tests are applied to data on 85 generic drug markets that opened up during 1999-2005, using a trivariate probit model that accounts for selection and endogeneity. The coefficient estimate for the paragraph IV indicator variable shows that vertical integration probabilities are higher in paragraph IV markets as the theory suggests, but the marginal effect evaluated at representative values of the covariates is not significantly different from zero. Thus, the hypothesis that vertical integration facilitates relationship-specific non-contractible investments is only partially supported by the data. The past upstream entry experience of a downstream entrant is found to have a significantly positive impact on its probability of vertical integration. This suggests that upstream experience lowers the cost of vertical integration. We also find that the mean upstream experience of rivals has a significantly positive effect on a firm’s vertical integration probability. These two results combined indicate that vertical integration decisions are strategic complements – in other words, bandwagon effects are likely to exist. There are several possible sources of bandwagon effects. One possibility is that vertical integration generates foreclosure effects in the post-entry market, which, according to Buehler and Schmutzler, give rise to the strategic complementarity of vertical integration decisions. There is some empirical evidence to support the existence of foreclosure effects: the number of potential unintegrated upstream entrants has a positive effect on downstream payoffs but its effect on the returns to vertical integration is negative, which suggests that unintegrated downstream entrants are better off if the market is less vertically integrated. Another candidate for the source of bandwagon effects is inter-firm learning about the benefits of vertical integration. The marginal effects of the year dummy variables provide some indication of inter-firm informational spillovers. However, learning effects are unlikely to be behind the estimated positive relationship between a firm’s probability of vertical integration and its rivals’ upstream experience levels. The effect of vertical integration on market outcomes such as prices and product quality in the final goods market can be either positive or negative. For instance, an increase in the level of vertical integration can lead to higher prices or lower prices in the downstream market, depending on the underlying demand and cost function parameters. This is because vertical integration has countervailing effects. One is to decrease the integrating firm’s costs – for instance, through the elimination of double marginalization or the facilitation of non-contractible investments.
Such efficiency effects tend to lead to lower final good prices or higher product quality.

Another effect is to foreclose unintegrated rivals’ access to upstream suppliers or downstream buyers. Such foreclosure practices often lead to higher prices or lower quality for the final good. Finally, vertical integration can deter or facilitate entry by unintegrated firms, or induce them to become vertically integrated themselves. In other words, vertical integration can affect market outcomes by influencing the market structure formation process. As this discussion suggests, the link between vertical integration and market outcomes is quite complicated. For this reason, modern analyses of the effects of vertical integration tend to be conducted on an industry-by-industry basis. This paper presents a novel method for empirically examining vertical integration in an individual industry. It is based on a game theoretic model of simultaneous entry into an oligopolistic market consisting of an upstream segment and a downstream segment. The players of the game are potential entrants who can enter into one of the vertical segments or both. After they make entry and investment decisions, competition occurs within the post-entry market structure and profits are realized. Firms’ entry decisions are based on their expectations of post-entry profits, which in turn are affected by the entry decisions of others. Put another way, potential entrants form profit expectations according to the vertical market structure they expect in the entry equilibrium, as well as the position they foresee for themselves within that market structure. It is assumed that potential entrants are heterogeneous in observable ways and that the entry game is one of complete information. The econometric model is designed for application to a dataset consisting of multiple markets where vertical entry patterns are observed. The entry patterns are interpreted as outcomes of the vertical entry game. The object of estimation is the set of firm-level post-entry payoff equations corresponding to three different categories of entry: downstream-only, upstream-only, and vertically integrated. Potential entrants choose the entry category, or action, that yields the highest profit net of entry costs. Each payoff equation contains as arguments variables that describe the actions of other potential entrants. They represent rival effects – the effect of upstream, downstream, and vertically integrated rival entry on profits. While such estimates provide direct measures of inter-firm effects, they can also be used as indirect evidence on the effect of vertical integration on market outcomes. As in Chapter 2, the dataset used in this chapter comes from the US generic pharmaceutical industry. It covers multiple markets, each defined by a distinct pharmaceutical product. The upstream segment of each market supplies the active pharmaceutical ingredient (API) while the downstream segment processes the API into finished formulations such as tablets and injectables. For each market, we observe multiple firms entering the two vertical segments – some of them into both segments – when patents and other exclusivities that protect the original product expire and generic entry becomes possible. From the estimated parameters of the vertical entry game, I find that vertical integration between a pair of firms has a significantly positive effect on independent downstream rivals. This suggests that vertical integration has a substantial efficiency effect that spills over to other firms in the downstream segment.
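As an illustration of the entry-game logic, the sketch below enumerates pure-strategy Nash equilibria of a stylized simultaneous vertical entry game. The action set mirrors the three entry categories plus staying out, but the payoff coefficients are placeholders; in the chapter, the analogous payoff equations are estimated from observed entry patterns rather than assumed.

```python
from itertools import product

ACTIONS = ("out", "down", "up", "vi")  # no entry, downstream-only, upstream-only, integrated

def payoff(action, profile, i, params):
    """Stylized payoff: intercept plus rival effects (placeholder values)."""
    if action == "out":
        return 0.0
    rivals = [a for j, a in enumerate(profile) if j != i]
    n_down = sum(a in ("down", "vi") for a in rivals)  # downstream rivals
    n_up = sum(a in ("up", "vi") for a in rivals)      # upstream rivals
    n_vi = sum(a == "vi" for a in rivals)              # integrated rivals
    base, b_down, b_up, b_vi = params[action]
    return base + b_down * n_down + b_up * n_up + b_vi * n_vi

def pure_nash(n_firms, params):
    """All action profiles where no firm gains by a unilateral deviation."""
    eqs = []
    for profile in product(ACTIONS, repeat=n_firms):
        if all(payoff(a, profile, i, params)
               >= max(payoff(alt, profile[:i] + (alt,) + profile[i + 1:], i, params)
                      for alt in ACTIONS)
               for i, a in enumerate(profile)):
            eqs.append(profile)
    return eqs

# Placeholder parameters: (intercept, downstream-, upstream-, VI-rival effect).
params = {"down": (1.0, -0.5, 0.3, -0.7),
          "up":   (0.8, 0.4, -0.5, -0.2),
          "vi":   (1.2, -0.4, -0.1, -0.6)}
print(pure_nash(3, params))
```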
Another finding is that in markets containing two upstream units and one downstream unit, backward integration by the downstream monopolist significantly reduces the profit of the unintegrated upstream firm. This is consistent with the existence of efficiency effects due to vertical integration; the independent upstream firm’s profit falls if it must contend with a tougher rival. The parameter estimates are used to simulate the effect of a hypothetical policy that bans vertically integrated entry. I find that while the ban tends to increase the number of upstream entrants, it tends to reduce the number of downstream entrants. Even though competition in the upstream segment is increased as a result, the lower efficiency of unintegrated suppliers or the existence of double marginalization problems leads to less entry in the downstream segment. This suggests that vertical integration has an entry-promoting effect in the generic drug industry. We cannot observe the effect of the policy on other market outcomes such as prices. However, the finding that vertical integration has significant efficiency effects as well as entry-promoting effects leads us to conclude that banning vertically integrated entry has an adverse effect on market performance. The remainder of the chapter is structured as follows. Section 3.2 explains how this study fits into the empirical industrial organization literature on vertical integration and that on market entry. To my knowledge, this is the first empirical paper to exploit an entry game structure in order to analyze the effects of vertical integration. In Section 3.3, I describe the process of vertical market structure formation in the generic pharmaceutical industry.

The entry process for generic pharmaceuticals has evolved greatly over the last three decades

Partly to prevent such situations, the FDA requires originator firms to provide information on the patents covering new drugs as part of their NDA filings. Typically, originators provide information on all relevant patents except for those that only claim manufacturing processes. Once an NDA is approved, a list of patents that are associated with the new drug is published in an FDA publication called “Approved Drug Products with Therapeutic Equivalence Evaluations”, commonly known as the Orange Book. The Orange Book is used by generic companies to learn about the existence and duration of originator patents in every drug market that they contemplate for entry. Prior to 1984, generic firms seeking marketing approval had to provide the FDA with the same type of information as originator firms, including data on clinical trials conducted on a large number of patients. As a result of the substantial entry costs that this entailed, entry by generic companies was limited: in 1984, roughly 150 drug markets were estimated to have been lacking generic entrants despite the expiration of patents. The Drug Price Competition and Patent Term Restoration Act of 1984, also known as the Hatch-Waxman Amendments, drastically changed the process of generic entry. Most significantly, generic companies were exempted from submitting complete NDAs. Instead, a generic entrant could file an Abbreviated New Drug Application (ANDA), which replaces full-scale clinical trial results with data on bioequivalence. Bioequivalence tests, which compare generic and originator drugs in the way that the active ingredient is absorbed into the bloodstream of healthy subjects, are much smaller in scale and far cheaper to conduct than conventional clinical trials.

When the FDA reviews an ANDA for a generic product, its decision is based on the bioequivalence test results as well as the clinical trial results contained in the originator product’s NDA. The introduction of the ANDA system implied a huge reduction in product development costs, and generic entry surged after the mid-1980s; the volume-based share of generic drugs rose from 19 percent in 1984 to 51 percent in 2002, increasing further to 74 percent in 2009. ANDAs are prepared by downstream finished formulation manufacturers and submitted to the FDA some time before they plan to enter the generic market. In the case of a drug containing a new chemical entity, the earliest possible date for filing an ANDA is four years after the approval of the originator’s NDA, but typical filing dates are later. If a generic firm plans to enter after all patents listed in the Orange Book have expired, it begins the ANDA filing process two to three years before the patent expiration date. This reflects the expected time it takes the FDA to review an ANDA; the median approval time was 16.3 months in 2005, increasing in recent years to reach 26.7 months in 2009. When unexpired patents are listed in the Orange Book at the time of ANDA filing, the generic firm must make a certification regarding each patent. The firm either indicates that it will wait until the patent expires to enter, or certifies that the patent is invalid or not infringed by its product. The first option is called a paragraph III certification and the latter is called a paragraph IV certification, named after corresponding passages in section 505 of the Federal Food, Drug, and Cosmetic Act. By filing an ANDA containing a paragraph IV certification, a generic firm preemptively counters any patent infringement claims that it expects from the originator. The FDA cannot give full approval to an ANDA until all patents listed in the Orange Book have expired or have been determined to be invalid or not infringed; a tentative approval, which does not permit the ANDA applicant to enter, can be issued in the meantime.

The filing of an ANDA by a generic firm is not publicized by the FDA until the latter announces a tentative or full approval. Therefore, generic firms generally do not observe their rivals preparing and filing ANDAs in real time. The preparation of an ANDA involves the development of the generic drug product by the applicant, who uses it to conduct bioequivalence tests. A physical sample of the product is submitted to the FDA along with documents pertaining to bioequivalence and quality. An important part of generic product development is the sourcing of APIs. Here, the ANDA applicant faces a make-or-buy decision. If the firm has a plant equipped with specialized machinery such as chemical reactors, it can choose to produce its own API. If the ANDA applicant decides to buy its API from outside, it must find a supplier from among the many manufacturers located around the world. There is no centralized market for generic APIs, but international trade shows such as the Convention on Pharmaceutical Ingredients and Intermediates provide regular opportunities for buyers and suppliers to gather and transact. Once the API is obtained, the downstream firm develops the finished formulation and prepares documentation for the ANDA. The ANDA documents, which are used by the FDA to evaluate the safety and efficacy of the generic product, must convey detailed information regarding the manufacture of the API to the agency. When the API is purchased from outside, the required information must be supplied by the upstream manufacturer. Basic information on the processes used for synthesizing the API is usually shared between the seller and buyer, but there remain trade secrets – such as the optimal conditions for chemical reaction – that the upstream firm may be unwilling to fully disclose to the downstream buyer. This is because the buyer might misuse the trade secrets by divulging them to other upstream firms who are willing to supply the API at a lower price. To address such concerns among API manufacturers, and to maximize the quantity and quality of API-related information that reaches the FDA, the agency uses a system of Drug Master Files (DMFs). DMFs are dossiers, prepared by individual manufacturers, that contain information on manufacturing processes and product quality for APIs.

By submitting the DMF directly to the FDA rather than to its downstream customer, the API manufacturer is able to convey all relevant information to the regulatory agency without risking the misuse of its trade secrets. Unlike ANDAs, the identities of submitted DMFs are published upon receipt by the FDA. If an ANDA applicant buys APIs from outside, it notifies the FDA about the source of the ingredient by referring to the serial number of a specific DMF. At the same time, the applicant contacts the DMF holder, who in turn informs the FDA that the ANDA applicant is authorized to refer to its DMF. In this way, the FDA reviewer knows where to find the API-related information for each ANDA. It is possible for the ANDA applicant to reference multiple DMFs at the time of filing, and for a single DMF to be referenced by multiple ANDAs. On the other hand, adding new DMF reference numbers after filing the ANDA is time-consuming. According to the Federal Trade Commission, it takes around eighteen months for an ANDA applicant to switch its API supplier by adding a new DMF reference. It would appear that a vertically integrated entrant has less of an incentive to use the DMF system than an unintegrated upstream firm. To the extent that the vertically integrated firm produces API exclusively for in-house use, concerns about the expropriation of trade secrets do not arise. In reality, however, many DMFs are filed by vertically integrated firms. One reason for this is that such firms often sell APIs to unintegrated downstream firms even if they are competing in the same market. For instance, Teva, a large Israeli generic drug company that is present in many US generic markets as a vertically integrated producer, sold 32 percent of its API output in 2008 to outside buyers. Another reason is that generic companies often file separate ANDAs for multiple formulations containing the same API. By submitting a DMF to the FDA, an integrated firm can avoid the burden of including the same API information in multiple ANDAs. While one cannot rule out the possibility that vertically integrated firms sometimes refrain from submitting DMFs, the above discussion suggests that a DMF submission is a good indicator of upstream entry by both vertically integrated and unintegrated entrants. A final note regarding DMFs addresses the possibility that a DMF submission does not necessarily imply entry into the API market. As Stafford suggests, some API manufacturers may file a DMF to attract the attention of potential buyers, but may not begin actual product development for the US market until buyer interest is confirmed. Such cases do appear to exist, but the practice is counterproductive for two reasons. First, a spurious DMF that is not backed by an actual product, while creating little real business for the firm, can be potentially damaging for an API manufacturer’s reputation. Second, changing the content of an already-submitted DMF is time-consuming and requires notification to downstream customers. Thus, it seems safe to assume that a DMF submission by a relatively established API manufacturer indicates upstream market entry. In order to motivate the subsequent empirical analysis, I present a stylized description of the vertical market structure formation process in the generic industry.

The process varies depending on whether or not a patent challenge is involved. I first consider the situation without patent challenges, and discuss the case involving patent challenges next. When all generic entrants decide to wait until the expiration of originator patents, the vertical market structure of a given generic drug market is formed through a simultaneous entry game. Potential entrants simultaneously choose their actions from the following four alternatives: unintegrated downstream entry, unintegrated upstream entry, vertically integrated entry, and no entry. A firm’s ANDA filing is not observed by the other players until the FDA announces its approval. This unobservability allows us to assume that firms make their downstream entry decisions simultaneously. On the other hand, an entrant’s submission of a DMF becomes observable when the FDA posts that information on its website. This creates the possibility that some firms choose their actions after observing the upstream entry decisions of other firms. However, since upstream manufacturers tend to submit DMFs later in the product development process, when they are already capable of producing the API on a commercial scale, it is reasonable to assume that upstream entry decisions are made simultaneously with downstream decisions. Once the identities of the market entrants are fixed, we can envision a matching process where downstream manufacturing units are matched with upstream units. The matching process is not observed, because data from the FDA do not tell us which ANDAs refer to which DMFs. After the matches are realized, firms invest in product development and document preparation. Upstream units develop their APIs and submit DMFs to the FDA, while downstream units develop finished formulations and file their ANDAs. Downstream generic manufacturers market their products to consumers after the FDA approves their ANDAs and all patents and data exclusivities belonging to the originator expire. The payoffs of individual firms are realized when each downstream firm’s revenue is split between itself and its upstream supplier, in the form of payment for APIs. When entry into a generic drug market involves a paragraph IV patent challenge, the process of market structure formation can no longer be described as a simultaneous entry game. There are two reasons for this. First, there is no fixed date when generic firms begin to enter, due to the uncertain nature of patent litigation outcomes. Second, there exist regulatory rules that reward the first generic firm to initiate a successful patent challenge against the originator. This causes potential entrants to compete to become the first patent challenger. The system of rewarding patent challenges was introduced in 1984 as part of the Hatch-Waxman Amendments. The rationale for providing such an incentive to generic firms is that the outcome of a successful patent challenge – the invalidation of a patent or a finding of non-infringement – is a public good. Suppose that one generic firm invests in research and spends time and money on litigation to invalidate an originator patent listed in the Orange Book.

A positive SPT result was indicated by a wheal measurement of 3 mm or greater

The primary objective of this study is to examine the relationship between early biomass smoke exposure and atopy among a cohort of children enrolled in a HAP-reducing chimney stove intervention trial among a population living in the western highlands of Guatemala. In the study community, women often carry their youngest child on their back during cooking, until the child is approximately 18 months old, exposing the newborn children to high levels of HAP. We hypothesize that the availability of a vented chimney stove would reduce the children’s HAP exposure compared to those who use open fires for cooking and would be associated with lower risks of allergic sensitization. Participating households and children were recruited from the Randomized Exposure Study of Pollution Indoors and Respiratory Effects (RESPIRE) cohort and its follow-up study, the Chronic Respiratory Effects of Early Childhood Exposure to Respirable PM (CRECER) study. Details of the RESPIRE and CRECER cohorts have been published elsewhere. Briefly, 518 rural Guatemalan women with newborn children who cooked exclusively over an open fire were recruited for the RESPIRE study between October 2002 and December 2004. Households were randomized to either receive a chimney stove, which improves combustion and uses a chimney to vent emissions outdoors, or to continue to cook with their typical open fires until the end of the trial, when they also received the intervention plancha stove. CRECER, the follow-up study, took place from 2006 to 2009; it revisited RESPIRE households and recruited 169 new households that were from the same geographical region, exclusively used open fires, had one child in the same age range as the RESPIRE study children and one infant less than 6 months old.

For equity purposes, these new households received a chimney stove at the end of the CRECER study when all exposure and outcome information had already been collected. All households in the study thus received a chimney stove, but at different times: RESPIRE intervention households received the stove when the index children were less than 6 months old; RESPIRE control households received the stove when the index children were approximately 18 months old; new CRECER households received the stove when the index children were approximately 5 years old, and their proxy infant siblings were 18–24 months old. The grouping and study timeline are illustrated in Figure 1. The plancha stoves provided in this study reduced the children’s biomass smoke exposure by improving combustion and venting cooking smoke outdoors. Although this may result in an increase in outdoor exposure, we would expect participants in households with plancha stoves to have lower overall biomass smoke exposure, because the total amount of smoke produced would not increase, and outdoor biomass smoke would affect children for a shorter duration and at a lower intensity. We also hypothesized that group 1 index study children would have the lowest cumulative biomass smoke exposure because they were provided the plancha stoves earliest, followed by groups 2 and 3, respectively. To test these hypotheses, we measured carbon monoxide exposure for the study children. Personal CO exposure was used as a proxy for personal biomass smoke exposure: CO has been shown to correlate well with fine particulate matter exposure in this population, in homes using open fires or chimney stoves. Study participants wore small, passive CO diffusion tubes for 48 h every 3 months during RESPIRE and every 6 months during CRECER. Since group 3 index study children did not have CO measurements obtained when they were <18 months of age, the personal CO exposures of their younger infant siblings were used as a proxy for their early life exposures. Details on exposure assessment methodology, validation, and quality control and assurance have been extensively described elsewhere.

We combined data from different measurements, including the aforementioned 48-h samples, to estimate cumulative CO exposure. We used RESPIRE CO tube measurements to estimate cumulative exposure during the first 600 days of life for groups 1 and 2 and used CRECER infant sibling CO tube measurements to estimate cumulative exposure during the first 600 days of life for group 3. Cumulative CO exposure from 600 days old to the first allergy questionnaire was estimated based on CRECER CO tube measurements for the study children. We conducted a sensitivity analysis using three alternative calculations for cumulative CO exposure, two of which did not use the 600-day cut point. Details of these calculations can be found in Table A1. Five rounds of skin prick tests were performed on 539 participants to determine allergic sensitization to six common indoor and outdoor aeroallergens. During each round of SPT, a positive control and a negative control were also performed on each child. SPT results were considered invalid if the histamine control was negative or the saline control was positive. Results from children who reported taking antihistamine or cold medications prior to testing were also excluded. Atopic symptoms were assessed via quarterly respiratory questionnaires (QRQs) completed by the study children’s mothers. The QRQs were conducted three times during CRECER to ascertain the occurrence of symptoms associated with asthma, allergic rhinitis, and eczema. The questions were developed based on the International Study of Asthma and Allergies in Childhood questionnaire. All QRQs were conducted by field workers fluent in the participating mothers’ primary language. Details of the questionnaire development and translation processes have been published elsewhere. All questions were close-ended and began with “in the last three months”. A child’s final allergic outcome was recorded as positive if the mother reported him/her having positive symptoms in any of the three QRQ rounds. Logistic regression models were used to analyze the relationship between biomass smoke exposure and the risks of developing allergic outcomes.
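As a simple illustration of this kind of cumulative exposure calculation, the sketch below integrates repeated 48-h CO measurements over age with the trapezoidal rule to obtain ppm-years. The function name and example values are hypothetical, and the study's actual calculation, with the 600-day cut point and infant-sibling proxies, is more involved.

```python
import numpy as np

def cumulative_co_ppm_years(ages_days, co_ppm):
    """Approximate cumulative CO exposure (ppm-years) by trapezoidal
    integration of repeated personal CO measurements over age."""
    ages_years = np.asarray(ages_days, dtype=float) / 365.25
    return np.trapz(co_ppm, ages_years)

# Hypothetical child measured roughly quarterly (age in days, CO in ppm).
ages = [90, 180, 270, 360, 450, 540, 600]
co = [3.2, 2.8, 3.5, 2.9, 3.1, 2.7, 3.0]
print(round(cumulative_co_ppm_years(ages, co), 2), "ppm-years")  # ~4.23
```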

Primary statistical analysis used study group as a categorical exposure variable based on the length of having a chimney stove in the household. Odds ratios and 95% confidence intervals were reported. In this analysis, group 1 is the baseline level, and groups 2 and 3 represent intermediate and highest levels of biomass smoke exposure, respectively. Secondary statistical analysis used the estimated cumulative CO exposure as a linear continuous exposure variable. Age, sex, second-hand smoke exposure, the number of children in the family, kitchen structure, child’s average weekly temazcal use in minutes, number and species of pets and farm animals at home, maternal history of allergic outcomes, parental education, and socioeconomic status were collected from the CRECER baseline questionnaire, and were adjusted for in the logistic regression models. We did not adjust for race because the study population was homogenous, self-identifying as Mam indigenous. Among the 557 households participating in CRECER, 20 lacking valid SPT or QRQ results were excluded. For the two households with twins that both participated in the study, only the first child recorded was kept in the analysis to ensure independence among observations. Respiratory outcomes were available for 537 children, 188 from group 1, 192 from group 2 and 157 from group 3. Valid SPT results were available for 526 children, 184 from group 1, 187 from group 2 and 155 from group 3. The quality and precision of the five rounds of SPTs during CRECER improved with additional training and more experience of the staff, indicated by the significantly lower number of invalid tests in the later rounds. As such, the results of the last valid round of SPT for each child were recorded as the child’s final allergic sensitization outcome. Among the 526 children with valid SPT results, results for 496 participants were taken from the 5th round of SPT tests. The household-level and child-level demographic characteristics were similar among the three study groups, especially between groups 1 and 2, which were randomized in the RESPIRE study. Study children were on average 3.6 years old at the first QRQ. This prospective cohort study followed more than 500 children in rural Guatemala over 7 years and examined associations between cooking-related biomass smoke exposure and childhood atopy outcomes. Children from households that received chimney stoves when the children were approximately 5 years old had higher risks of maternal-reported allergic asthma and rhinitis symptoms compared to children from households that received a chimney stove intervention within the first 6 months of life. A 1 ppm-year higher cumulative CO exposure and its related cumulative biomass smoke exposure was associated with 6–12% higher odds of maternal-reported allergic rhinitis and conjunctivitis symptoms. No significant association was found between biomass smoke exposure and eczema or skin prick test outcomes. Notably, the overall prevalence of sensitization to cockroach in this population of rural Guatemalan children was high, which is similar to that reported for inner-city children in the U.S. A summary of main results from studies that looked at household biomass smoke exposure and allergic or respiratory outcomes is presented in Table A2.
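The sketch below illustrates, on simulated data, how adjusted odds ratios and 95% confidence intervals of this kind are obtained from a logistic regression with a categorical exposure group. The variable names and effect sizes are hypothetical, and only a subset of the covariates listed above is included.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 537
# Hypothetical analysis data set (names are illustrative).
df = pd.DataFrame({
    "group": rng.choice(["g1", "g2", "g3"], n),  # stove-timing group
    "age": rng.normal(3.6, 0.5, n),
    "sex": rng.choice(["m", "f"], n),
    "shs": rng.integers(0, 2, n),                # second-hand smoke
})
logit_p = -1.0 + 0.3 * (df["group"] == "g2") + 0.6 * (df["group"] == "g3")
df["rhinitis"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("rhinitis ~ C(group, Treatment('g1')) + age + sex + shs",
                data=df).fit(disp=False)
# Exponentiate coefficients to obtain adjusted ORs with 95% CIs.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```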

Our finding that higher biomass smoke exposure was associated with maternal-reported respiratory symptoms such as wheezing, sneezing, nasal congestion and rhinorrhea in their children is consistent with other studies that looked at exposure to biomass stove use and self-reported or clinically diagnosed respiratory diseases among children. Prior studies that looked at exposure to cooking- and heating-related HAP and atopy outcomes in children residing in high-income countries did not find significant associations after adjusting for lifestyle and socioeconomic factors. In this study, we did not find a significant association between biomass smoke exposure and maternal-reported eczema symptoms. Additionally, there was no significant association between biomass smoke exposure and allergic sensitization as measured by SPT. Part of the reason might be that important windows for atopic sensitization such as prenatal exposures were not captured in the study. The numbers of cases of allergic sensitization to dogs, cats, and ragweed were small among the study children, resulting in compromised statistical power. The low number of dog and cat allergies might have been due to the high dog ownership and medium cat ownership in the study population: living in proximity to animals is associated with lower sensitization to allergens among children. No significant differences in allergic sensitization or symptoms were found between children in group 1 and group 2, among whom chimney stoves were installed around birth and around 18 months old, respectively. This might have been due to the gradual deterioration of chimney stoves during the 2-year gap between the RESPIRE and CRECER studies, during which group 1 might have been exposed to higher HAP than group 2 because of the older stoves. Another reason might be insufficient exposure reduction, which was also found in a previous analysis of the RESPIRE study: a larger reduction in mean CO exposure was associated with reduction in pneumonia risks, but the moderate difference in group mean CO levels between groups 1 and 2 was not enough to yield a statistically significant difference in pneumonia risk between the groups. The high percentages of reported allergic symptoms and high prevalence of cockroach sensitization among the children in this study are contrary to the “hygiene hypothesis” or “microbial deprivation hypothesis” that early life exposure to microorganisms shapes the Th1, Th2, and regulatory T cell responses and alters immune response patterns. For instance, children exposed to enteric pathogens have higher resistance to allergic sensitization compared to those living in pathogen-free environments. While the study population was exposed to abundant microorganisms, it is possible that exposure to HAP prenatally, in early life, or even in reduced amounts after the stove upgrade intervention, could promote a shift toward Th2 responses and thus increase risk for atopy. Previous studies have demonstrated that exposure to PM2.5 may increase the risk of asthma via airway inflammation, increase in oxidative stress, changes in immune signaling, and subsequent disruptions of airway epithelial cells and mucosal barrier function. Studies on rhesus monkeys have also found that co-exposure to an allergen and ozone, a pollutant that causes oxidative stress, altered airway structural development and increased the risk of an asthma-like phenotype.
Another consideration is that the increased wheezing and rhinitis symptoms reported by mothers of children exposed to higher HAP could also be due to the direct irritating effects of biomass smoke on the upper and lower airway epithelium rather than an underlying allergic mechanism. During study design, we hypothesized that group 3 index study children would have the highest cumulative biomass smoke exposure because they were provided the upgraded chimney stoves the latest, and thus were exposed to higher levels of biomass smoke for the longest period of time.

The CB1 cannabinoid receptor antagonist AM251 prevented such an effect

While this phenotype does not exclude more nuanced forms of social anxiety, it does support the idea that the modulation of stress reactivity and social behavior can be dissociable. In line with this notion, CB1 overexpression in the medial prefrontal cortex alters social interactions without overtly changing the anxiety-related phenotype. In contrast to socially impaired mice, parallel experiments in normal mice indicated that increasing anandamide does not alter social approach. One explanation could be technical – namely, that the social approach test has a ceiling effect or is unable to capture more subtle qualities of social interaction. Indeed, the task is typically used as a screening tool rather than a continuous-scale measure of sociability. Another explanation could be biological – that signaling systems in a healthy brain are able to compensate for endocannabinoid enhancement in a way that socially impaired brains cannot. These explanations are in line with previous reports of different social situations in which faah-/- mice demonstrated increased direct reciprocal interactions, as well as URB597-treated juvenile rats engaging in more social play. Therefore, expanded investigation is warranted into how anandamide contributes to different social contexts and the qualities of social interactions. Nevertheless, our set of results suggests that the prosocial action of FAAH blockade is selective for social impairment in certain contexts, which may be therapeutically advantageous for the spectral nature of ASD. Based on our results and the available literature, we can reasonably speculate on two possible scenarios improved by anandamide signaling that may underlie social impairment in BTBR and fmr1-/- mice.

First, oxytocin-driven anandamide activity in the nucleus accumbens, which we previously demonstrated to be important for social reward, may be impaired in these mice. Consistent with this idea, BTBR mice were found to have abnormal oxytocin expression in the hypothalamus. BTBR mice were also reported to be deficient in conditioned place preference to social interactions. However, because social conditioned place preference is a relatively new construct, and the learning impairments in these mice make interpretation problematic, further support from the literature is lacking. A second possible scenario is that anandamide might correct an imbalance of excitatory and inhibitory neurotransmission in the cortex, which has been postulated to underlie ASD. Enhancing GABAergic activity in BTBR mice ameliorates their social impairment, and negative allosteric GABA modulation in C57Bl6J mice recapitulates social impairment. This suggests that a loss of balance between inhibitory and excitatory activity might contribute to social impairment. A simplified view of this result orients us to interpret our findings as indicating that anandamide could modulate such balance. This view is consistent with the presence of CB1 receptors on presynaptic terminals of both glutamatergic projection neurons and GABAergic interneurons. In conclusion, the present study provides new insights into the role of endocannabinoid signaling in social behavior and validates FAAH as a novel therapeutic target for the social impairment of ASD.

The use of marijuana, the most widely consumed illicit drug in the world, and of synthetic cannabinoid drugs typically starts in adolescence. Its usage pattern is striking. According to the 2014 NIDA survey, Monitoring the Future, past-month usage of marijuana was at 6.5% among 8th graders, 16.6% among 10th graders, and 21.2% among 12th graders. The percentage of high school seniors who think marijuana is harmful has declined from 52.4% five years ago to only 36.4% today, indicating a possible prelude to increasing usage in the coming years.

In addition to recreational use, the accelerating prevalence of cannabinoids as a medicine, e.g. in child epilepsy and autism, is exposing growing numbers of young people to the drug. These usage trends are especially concerning within a rapidly changing social sphere, consisting of technological advancements in marijuana cultivation and cannabinoid synthesis, widespread cultural acceptance, as well as novel politics and economies. Long-term effects are likely as a consequence of the roles of the endocannabinoid system in the developing brain. This signaling system has three main components: two lipid-derived local messengers – 2-arachidonoyl-sn-glycerol (2-AG) and anandamide – enzymes and transporters that mediate their formation and elimination, and cannabinoid receptors that are activated by endocannabinoids and regulate neuronal activity. Endocannabinoid activity regulates processes such as neuronal genesis, migration, and differentiation, as well as synaptic pruning and glial cell formation. With regard to the adolescent period, developmental fluctuations have been reported in the expression levels of CB1 cannabinoid receptors in corticostriatal areas and, to a lesser extent, in levels of endocannabinoids. For example, levels of CB1 receptors increase in the cortex and striatum during this period, while 2-AG is reduced in these regions and anandamide continuously increases in the prefrontal cortex. These lines of evidence each emphasize the vulnerability of the developing brain to consequences of early cannabinoid exposure. However, they also highlight the knowledge gap regarding the molecular mechanisms responsible for these consequences, especially with regard to brain reward systems. In fact, it remains unknown whether disrupted endocannabinoid signaling is responsible for impairment. We hypothesized that non-physiological activation of cannabinoid receptors during adolescence persistently impairs the endocannabinoid regulation of social reward in early adulthood. We first administered a synthetic cannabinoid receptor agonist, WIN55212-2, for 2 weeks to adolescent mice at postnatal day. We chose the WIN compound at this dose because it models the effects of marijuana on cannabinoid receptors over other bioactive constituents. After a two-week washout period, at PND58, we found that whereas vehicle-treated mice developed a social place preference, WIN-treated mice developed no such preference. This result suggests that improper cannabinoid receptor activation during adolescence impairs the later expression of social reward in adulthood. It is reasonable to expect that socially stimulated anandamide mobilization, which we previously found to be important in social reward, would also be disrupted by non-physiological activation of cannabinoid receptors in early life. Indeed, we found that after a two-week washout, at PND58, mice previously treated with WIN had increased levels of anandamide in the nucleus accumbens, medial prefrontal cortex, and ventral hippocampus. Thus, socially stimulated anandamide signaling in these regions may no longer occur properly following WIN treatment. These preliminary results require additional confirmation. Nevertheless, the results are a proof of principle that early, non-physiological activation of cannabinoid receptors can persistently impair social reward, and that this impairment is due to an underlying disruption in anandamide signaling.

Future investigation should be aimed at: the critical window that leads to persistent impairment; other persistent molecular changes, such as 2-AG levels and CB1 surface expression; and whether oxytocin-stimulated anandamide signaling is also impaired. These investigations would provide a more complete picture of the molecular changes responsible for persistent impairment. They would be relevant to the long-term effects of early cannabinoid exposure on mental illness and addiction.

Social interactions and social support protect against physical pain. Such protection exerts powerful effects in adaptive as well as pathological contexts, with profound consequences for pain and pain-related health outcomes. Basic neural mechanisms linking a social signal to an analgesic one, however, are poorly defined. We recently identified a mechanism through which social stimulation mobilizes the endocannabinoid neurotransmitter anandamide, which in turn enhances the saliency of the social interaction. As endocannabinoid signaling is known for its analgesic effects independently of social factors, we tested the idea that social stimulation might also recruit the endocannabinoid system as a mediator of analgesia. We targeted a key area for descending pain modulation, the periaqueductal grey (PAG). We used juvenile rats in order to analyze subdivisions of the periaqueductal grey and because of previous investigation of opioids in separation distress. These models, however, have not been extensively replicated. In preliminary experiments, we isolated juvenile rats for 24 h, then re-socialized half while keeping the other half isolated for 3 additional hours. This social stimulation protocol is consistent with the one that elicits an increase in anandamide in the mouse nucleus accumbens, ventral hippocampus, and medial prefrontal cortex. We found that acute social stimulation in this manner increases levels of anandamide in the ventral PAG, but not the dorsal PAG. In a survey of other regions, we found that the cingulate and insular cortices trended toward an increase, but these results were not significant, and the nucleus accumbens did not show a change in anandamide. Paralleling this phenomenon, we used the hot-plate test to assess the acute pain response. Animals were placed inside a large beaker that had been heated to 55 °C on a hot plate, and their latency to withdraw or lick a hindpaw was measured. We found that social stimulation increased the threshold for paw withdrawal. These results suggest that social stimulation induces anandamide-mediated endocannabinoid signaling in order to exert social protection against pain. These preliminary experiments illustrate the concept that socially mobilized endocannabinoid signaling is recruited to buffer against physical pain.
Further investigation should be directed toward: establishing the stimulation protocol, as the profiles of endocannabinoid changes in mice and rats clearly differ – it is possible that rats are more sensitive to the rewarding effects of play and thus require a shorter stimulation; this would be needed to understand whether endocannabinoids modulate an affective component of pain through reward signaling, or actually modulate descending pain at the level of the periaqueductal grey; whether isolation and social stimulation are qualitatively different or simply opposite effects; the confounding effects of stress on pain; and whether the endocannabinoid signaling elicited here is also oxytocin-stimulated and, if so, the responsible circuitry.

According to the United Nations, the population of the world is expected to grow in the next century, which in turn encourages the development of innovative techniques to ensure agricultural sustainability. Agriculture on productive land is threatened not only by high levels of urbanization, uneven water distribution, and inclement weather, but also by threats to biodiversity that have unfavorable environmental impacts. Due to the anticipated drastic population growth and constraints on resources in the upcoming decades, only 10% of the demand for food is estimated to be met by expansion of productive lands, with the remainder relying on new techniques that can achieve higher yields. Therefore, developing novel methods to augment the ratio of crop production to land used is a vital issue. In recent years, indoor vertical farming systems (IVFS) with artificial light have been found to be a viable solution to the increasing demand for agricultural products. IVFS are promising alternatives to open-field or greenhouse agriculture because they precisely monitor environmental parameters and are insensitive to outdoor climates, which can boost annual sales volume per unit area up to 100 times compared to that of open lands. Furthermore, light-emitting diodes employed as light sources can initiate and sustain photosynthesis, and their optical wavelength, light intensity, and radiation intervals can further enhance growth quality. Recently, many studies have been carried out to investigate how techniques such as closed-loop control, ultrasound, and electro-degradation affect hydroponic cultivation of leafy vegetables in these systems. One of the most influential factors affecting growth in IVFS is maintaining a uniform air flow at an optimal air current speed over plant canopy surfaces. Poor flow uniformity or variation in air velocity over culture beds destabilizes crop production rates. It has been found that inducing a horizontal air speed of 0.3–0.5 m s−1 boosts photosynthesis by more efficiently exchanging species between the stomatal cavities in plants and the flow of air. Lee et al. studied the effects of air temperature and flow rate on the occurrence of lettuce leaf tip burn in a closed plant factory system. Furthermore, it was observed that the relative humidity of the air flow can significantly influence calcium transportation in lisianthus cultivars. According to Vanhassel et al., higher levels of relative humidity can significantly decrease the occurrence of tip burn. Therefore, it is vital to maintain relative humidity in the desired range to ensure even distribution of calcium in lettuce leaves.
Over the past few years, researchers have been trying to develop techniques for improving uniformity over cultivation zones. Despite this recent progress, the control and automation systems of IVFS bring additional costs, which makes systematic experimental investigation and optimization a challenge. Computational fluid dynamics (CFD) has been utilized as a reliable tool to numerically simulate complex physical phenomena. Markatos et al. developed a CFD procedure to study velocity and temperature distributions in enclosures with buoyancy-induced flows.
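Whether the air speeds over a culture bed come from anemometer measurements or from a CFD solution, flow uniformity is usually summarized by a single index. The sketch below uses one common convention, 1 - mean(|v - v_mean|) / v_mean; the grid values and names are hypothetical.

```python
import numpy as np

def uniformity_index(speeds):
    """Air-speed uniformity over a culture bed: 1 is perfectly uniform,
    lower values indicate larger relative deviations from the mean."""
    v = np.asarray(speeds, dtype=float)
    return 1.0 - np.mean(np.abs(v - v.mean())) / v.mean()

# Hypothetical air speeds (m/s) sampled on a grid above the plant canopy.
grid = np.array([[0.32, 0.41, 0.38],
                 [0.29, 0.45, 0.36],
                 [0.31, 0.40, 0.35]])
print(f"mean = {grid.mean():.2f} m/s, uniformity = {uniformity_index(grid):.2f}")
```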

Sniffing time was scored by trained assistants who were unaware of treatment conditions

In humans, marijuana can either facilitate or impair social interactions and social saliency, possibly depending on dose and context. Analogously, in animal models, cannabinoid receptor activation with direct-acting agonist drugs disrupts social interactions, whereas FAAH inhibition enhances them, which is suggestive of a role for anandamide in socialization. While important, these data leave unanswered two key questions. The first is whether anandamide, whose functions in the modulation of stress-coping responses are well recognized, might influence social behavior by modulating stress reactivity. We addressed this question with two complementary sets of experiments. In one, we used a model of socially conditioned place preference that focuses specifically on the acquisition of incentive salience. Mice were conditioned to social contact with familiar cage-mates for 24 h, an intervention that does not cause stress. When this conditioning procedure was paired with FAAH inhibition, or faah-/- mice were used instead of wild-type mice, the animals displayed a markedly increased preference for the social context. In separate studies, we evaluated the impact of genetic FAAH deletion or pharmacological FAAH blockade in the social approach task, in which mice are given the option of interacting with a novel conspecific or a novel object for a relatively short period of time. Socially normal mice spend more time with their conspecific than with the object. We found that neither pharmacological nor genetic FAAH blockade had any effect in this model.

Collectively, our findings support the conclusion that anandamide signaling at CB1 receptors specifically regulates the incentive salience of social interactions, and that this effect is independent of anandamide’s ability to modulate anxiety. A second question addressed by the present work pertains to the neural circuits responsible for recruiting endocannabinoid neurotransmission during socialization. We used convergent experimental approaches to show that OTR activation by endogenously released oxytocin triggers anandamide mobilization in the NAc. This result is consistent with evidence indicating that oxytocin acts as a social reinforcement signal within this limbic region, where it elicits a presynaptic form of long-term depression in medium spiny neurons. Thus, a plausible interpretation of our findings is that oxytocin triggers an anandamide-mediated paracrine signal in the NAc, which influences synaptic plasticity through activation of local CB1 receptors. Activation of these receptors is known to induce presynaptic LTD at corticostriatal synapses. While providing an economical explanation for our results, the hypothesis formulated above also raises several new questions. Particularly important among them are questions about the roles that other modulatory neurotransmitters may play in regulating the interaction between oxytocin and anandamide. Previous studies point toward serotonin, which is needed for the expression of oxytocin-dependent plasticity in the NAc, and dopamine, which has been implicated in striatal anandamide signaling. Defining such roles will, however, require further investigation. In conclusion, our results illuminate a novel mechanism underlying the prosocial actions of oxytocin, and provide unexpected insights into possible neural substrates involved in the social facilitation caused by marijuana.

Pharmacological modulation of oxytocin-driven anandamide signaling – by utilizing, for example, FAAH inhibitors – might open new avenues to treat social impairment in autism spectrum disorders.

We dissolved URB597 and AM251 in a vehicle of saline/propylene glycol/Tween-80. L-368,899 was dissolved in saline for intraperitoneal injections or in DMSO for intracerebroventricular injections. WAY-267464 was dissolved in DMSO. Clozapine-N-oxide (CNO) and cocaine were dissolved in saline. For lipid analyses, L-368,899 was administered 0.5 h before the start of socialization, WAY-267464 was administered by i.c.v. injection to 24-h isolated animals 0.5 h prior to sacrifice, and CNO to 24-h isolated animals 1 h before sacrifice. For the socially conditioned place preference test, animals were habituated to injections for 3 days leading up to the experiment. URB597, L-368,899 and AM251 were administered twice a day during social conditioning. Balancing vehicle treatments were given during isolation conditioning. For the social approach task, URB597 was administered i.p. 3 h before starting the test. We anesthetized mice with ketamine-xylazine and stereotaxically implanted a 22-gauge guide cannula positioned 1 mm above the right lateral ventricle at coordinates from bregma: AP -0.2, ML -1.0 and DV -1.3 mm. Animals were allowed to recover for 10 days after surgery, during which they were maintained in social housing. Infusions were made in awake animals through a 33-gauge infusion cannula that extended 1 mm beyond the end of the guide cannula. The injector was connected to a 10-µL Hamilton syringe by PE-20 polyethylene tubing. The syringe was driven by an automated pump at a rate of 0.66 µL/min to provide a total infusion volume of 0.5 µL. Cannula placements were verified histologically. We used a serotype 2 AAV vector that was previously constructed and validated to express a modified muscarinic receptor and the fluorescent protein mCherry.

The hM3Dq receptor is exclusively activated by the otherwise inert compound clozapine-N-oxide. Expression of the virus is restricted by the oxytocin promoter, which directs oxytocin cell-specific expression of hM3Dq receptors and mCherry. mCherry expression in the paraventricular nucleus was verified using a Leica 6000B epifluorescence microscope. We bilaterally injected the AAV2-oxt-hM3Dq-mCherry construct into the PVN using the following coordinates: AP -0.70, ML ±0.30 and DV -5.20 mm. We used an adaptor with a 33-G needle connected via polyethylene tubing to a 10-µL Hamilton syringe driven by an automated pump. We waited 5 min prior to infusion in order for tissue to seal around the needle, infused a total volume of 0.5 µL over 5 min and waited 10 min after infusion before removing the needle. Experiments were conducted 3 weeks after viral injections to allow for recovery and adequate expression. During this period, animals were maintained in social housing. Whole brains were collected and flash-frozen in isopentane at -50 to -60 °C. Frozen brains were maintained in liquid nitrogen on the day of sacrifice until they were transferred to storage at -80 °C. To take micropunches of brain tissue, we first transferred frozen brains to -20 °C in a cryostat and waited 1 h for brains to attain local temperature. We then cut to the desired coronal depth and collected micropunches from bilateral regions of interest using a 1×1.5-mm puncher. The micropunches weighed approximately 1.75 mg. A reference micropunch was taken to normalize each punch to the brain’s weight. Bilateral punches were combined for lipid analyses. Procedures were described previously. Briefly, tissue samples were homogenized in methanol containing internal standards for [2H4]-anandamide, [2H4]-oleoylethanolamide and [2H8]-2-arachidonoyl-sn-glycerol. Lipids were separated by a modified Folch-Pi method using chloroform/methanol/water and open-bed silica column chromatography. For LC/MS analyses, we used an 1100 liquid chromatography system coupled to a 1946D mass spectrometer detector equipped with an electrospray ionization interface. The column was a ZORBAX Eclipse XDB-C18. We used a gradient elution method as follows: solvent A consisted of water with 0.1% formic acid, and solvent B consisted of acetonitrile with 0.1% formic acid. The separation method used a flow rate of 0.3 mL/min. The gradient was 65% B for 15 min, then increased to 100% B in 1 min and kept at 100% B for 14 min. The column temperature was 15 °C. Under these conditions, Na+ adducts of anandamide/[2H4]-anandamide had retention times of 6.9/6.8 min and m/z of 348/352, OEA/[2H4]-OEA had Rt 12.7/12.6 min and m/z 326/330, and 2-AG/[2H8]-2-AG had Rt 12.4/12.0 min and m/z 401/409. An isotope-dilution method was used for quantification. Ketamine-xylazine anesthetized mice were perfused through the left ventricle of the heart, first with ice-cold saline solution and then with a fixation solution containing 4% paraformaldehyde in 0.1 M phosphate-buffered saline (PBS). Brains were collected, post-fixed for 1.5 h and cryoprotected using 30% sucrose in PBS. Coronal sections were cut using a microtome and mounted on Superfrost Plus slides. For cFos immunostaining, sections were washed in 0.1 M glycine solution to quench excess PFA. Sections were incubated for 1 h in blocking solution. Washed sections were incubated for 48 h at 4 °C with anti-cFos antibody.

After washing with 0.1 M PBS to remove unbound primary antibody, sections were incubated for 1 h at room temperature with donkey anti-rabbit IgG conjugated to Alexa Fluor 594. Slides were cover-slipped with Vectashield plus DAPI. Images were captured using a 10x objective on a Leica 6000B epifluorescence microscope with a PCO Scientific CMOS camera and Metamorph acquisition software.

cFos quantification. Image montages were stitched together using FIJI. Variability in background fluorescence was standardized by subtracting a Gaussian-blurred version of each image from itself. Objects of cellular size and shape were then detected using Python 2.7 and FIJI (a sketch of these steps appears at the end of this methods description). Brain regions were traced by hand in FIJI using an atlas reference, and the resulting coordinates were used to restrict cell counts. Because immunostaining varies across animals and experiments, values were normalized as a ratio to the dorsal striatum (dStr) of the same animal. The dStr was selected as an internal control because it did not vary across compared groups.

Socially conditioned place preference. Following previously described procedures, mice were placed in an opaque acrylic box divided into two chambers by a clear acrylic wall with a small opening. In the box, a 30-min pre-conditioning test was used to establish baseline non-preference for two types of autoclaved, novel bedding, which differed in texture and shade. Individual mice with a strong preference for either type of bedding were excluded – typically, those that spent more than 1.5x time on one bedding over the other. The next day, animals were assigned to a social cage with cage-mates to be conditioned to one type of novel bedding for 24 h, then moved to an isolated cage with the other type of bedding for 24 h. Bedding assignments were counterbalanced for an unbiased design. Animals were then tested alone for 30 min in the two-chambered box to determine post-conditioning preference for either type of bedding. Fresh bedding was used at each step, and chambers were thoroughly cleaned between trials with SCOE 10X odor eliminator to avoid olfactory confounders. Volumes of bedding were measured to be consistent – 300 mL on each side of the two-chambered box and 550 mL in the home cage. Animals from the same cage were run concurrently in four adjacent, opaque CPP boxes. Scoring of chamber time and locomotion was automated using a validated image analysis script in ImageJ – the static background image was subtracted, moving objects of mouse shape and size were isolated by thresholding, and frames were counted in a position-restricted manner.

Three-chambered social approach task. Test mice were habituated to an empty three-chambered acrylic box, as previously described. Habituation consisted of a 10-min trial in the center chamber with the doors closed, and then a 10-min trial in all chambers with the doors open. During the subsequent 10-min testing phase, subjects were offered a choice between a novel object and a novel mouse in opposing side-chambers. The novel object was an empty inverted pencil cup, and the novel social stimulus mouse was a sex-, age- and weight-matched 129/SvImJ mouse. These mice were used because they are relatively inert, and they were trained to prevent aggressive or abnormal behaviors. Weighted cups were placed on top of the pencil cups to prevent climbing. Low lighting was used – all chambers were measured to be 5 lux. The apparatus was thoroughly cleaned with SCOE 10X odor eliminator between trials to preclude olfactory confounders.
Object/mouse side placement was counterbalanced between trials. Chamber time scoring was automated as in social CPP (sketched below). Subjects with outlying inactivity or side preference were excluded.

Cocaine and high-fat diet CPP. These paradigms were largely similar to social CPP, including the unbiased and counterbalanced design, cleaning and habituation, exclusion criteria and scoring, except for the following key differences, which followed established methods. Mice were conditioned and tested in a two-chambered opaque acrylic box with a small opening. Pre- and post-conditioning tests allowed free access to both chambers and lasted 15 min and 20 min, respectively. For conditioning, animals underwent 30-min sessions alternating each day between saline/cocaine or standard chow pellet/high-fat pellet. The two chambers offered conditioning environments that differed in floor texture and wall pattern – sparse metal bars on the floor and solid black walls vs. dense wire-mesh floors and striped walls.
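As an illustration of that scoring approach, the sketch below assumes grayscale video frames and a static background image supplied as NumPy arrays, with a vertical midline splitting the two chambers. The difference threshold and blob-size band are stand-ins for the validated ImageJ script's parameters, and the exclusion helper applies the 1.5x pre-test criterion mentioned earlier.

```python
# Hedged sketch of the automated chamber-time scoring: subtract a static
# background frame, threshold the moving mouse-sized blob, and tally frames by
# chamber. Thresholds and areas are illustrative placeholders.
import numpy as np

def score_chamber_time(frames, background, fps: float = 30.0,
                       diff_thresh: int = 30,
                       min_area: int = 500, max_area: int = 20000):
    """Return seconds spent in the left and right chambers."""
    left_frames = right_frames = 0
    midline = background.shape[1] // 2  # vertical split between chambers
    for frame in frames:
        moving = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
        area = moving.sum()
        if not (min_area <= area <= max_area):
            continue  # noise, or no detectable mouse in this frame
        ys, xs = np.nonzero(moving)
        if xs.mean() < midline:
            left_frames += 1
        else:
            right_frames += 1
    return left_frames / fps, right_frames / fps

def excluded(left_s: float, right_s: float, ratio: float = 1.5) -> bool:
    """Pre-test exclusion: more than 1.5x time on one side over the other."""
    return max(left_s, right_s) > ratio * min(left_s, right_s)
```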

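Returning to the cFos counting steps described above, here is a minimal re-implementation sketch, assuming single-channel images loaded as NumPy arrays and hand-traced boolean region masks. The blur sigma, Otsu threshold and object-size band are illustrative placeholders rather than the study's parameters (the original pipeline used FIJI together with Python 2.7 scripts).

```python
# Illustrative re-implementation of the cFos counting steps: flatten uneven
# background by subtracting a Gaussian-blurred copy, threshold bright objects,
# keep objects of cellular size inside a traced region, and normalize counts
# to the dorsal striatum internal control.
import numpy as np
from skimage import filters, measure

def count_cfos(img: np.ndarray, region_mask: np.ndarray,
               sigma: float = 20.0, min_px: int = 20, max_px: int = 400) -> int:
    """Count cell-sized objects inside a region after background flattening."""
    img = img.astype(float)
    # Standardize background fluorescence: subtract a Gaussian-blurred copy
    flat = img - filters.gaussian(img, sigma=sigma)
    # Threshold bright objects (Otsu as a stand-in for the original criterion)
    binary = flat > filters.threshold_otsu(flat)
    labels = measure.label(binary)
    count = 0
    for region in measure.regionprops(labels):
        r, c = map(int, region.centroid)
        # Keep objects of cellular size whose centroid lies in the traced region
        if min_px <= region.area <= max_px and region_mask[r, c]:
            count += 1
    return count

def normalized_count(img, roi_mask, dstr_mask) -> float:
    """Express a region's count as a ratio to the dorsal striatum control."""
    return count_cfos(img, roi_mask) / max(count_cfos(img, dstr_mask), 1)
```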
The fetotoxic effects of ARs on pregnant fishers and their fetuses are unknown.

Complete centroids were generated for 42 monitored fishers, 12 from the northwestern California population and 30 from the southern Sierra Nevada population. Of these fishers, 3-month MCP centroids were generated for 39, and 6-month centroids for 27. Spatial analysis of 6-month centroids from the KRFP could not be conducted because all fishers in that data set were AR-exposed. Sixteen fishers were excluded from the analysis due to a lack of monitoring data. No spatial clustering of AR exposure was detected for any of the temporal periods, specific AR compounds, AR generation classes, or distributions of the number of ARs per fisher in any of the study areas.

Cause-specific mortality factors for all 58 fishers sampled ranged widely and included predation, infectious and non-infectious disease processes, and vehicular strikes. The cause of death for four of these fishers was attributed to lethal toxicosis, indicated by AR exposure with simultaneous coagulopathy and bleeding into tissues or cavities, after ruling out any concurrent processes that might cause hemorrhaging. Two of the four fishers killed by ARs were from the southern Sierra Nevada population and two were from northern California; the case details are described below.

An adult male fisher was recovered on 15 April 2009 in the southern Sierra Nevada at the SNAMP project area. The fisher showed no signs of predation or scavenging. Gross necropsy determined that the fisher was in good nutritional and fair postmortem condition.

Frank blood was observed in both the thoracic and abdominal cavities, and in the pericardial sac. The stomach and lower gastrointestinal tract contained some blood but no prey or formed feces, and no mucosal changes were noted. There were no other findings on gross examination. Histopathologically, no significant changes were observed in any tissues. Brodifacoum (BRD) and BRM were detected and quantified in the liver sample at 0.38 ppm and 0.11 ppm, respectively, and CHL was detected at trace levels.

The second fisher mortality was a lactating adult female recovered on 2 May 2010 in the center of a paved rural highway in the SNAMP project area, approximately 3.7 km from Yosemite National Park. Vehicular strike was initially suspected as the cause of mortality due to the location of the carcass, but lacerations, abrasions and visual evidence of trauma were not seen on gross examination of the intact carcass. The postmortem state of the carcass was good and the nutritional state was poor. Shallow subcutaneous hemorrhage was noted over the hindquarters and spinal column with no associated fractures, punctures or abrasions. There was approximately 20 ml of frank blood within the thoracic cavity. There was no evidence of pneumothorax, vessel rupture or visceral tearing. No blood or visceral damage was seen in the abdominal cavity. The stomach contained various rodent parts, with formed feces in the descending colon. Histopathologically, no significant changes were observed in any tissues. Brodifacoum and BRM were detected and quantified at 0.60 ppm and 0.17 ppm, respectively, while one first-generation AR, DIP, was detected at a trace level within the liver tissue.

No evidence was present to suggest that this fisher died of vehicular trauma, despite its location on the highway.

A sub-adult male fisher was recovered on 4 May 2010 at the base of several riparian shrubs near a watercourse in northwestern California at the HVRFP. Severe ectoparasitism of the carcass was noted in the field, with ticks in both replete and non-replete stages. Predation was not suspected due to the absence of external wounds. Gross necropsy determined that this fisher was in poor nutritional condition with no subcutaneous or visceral fat. Frank blood was present in the right external ear canal, the nasal and oral cavities, the lumen of the trachea and the periorbital tissue, with no associated skull fractures or punctures. The stomach was devoid of prey, and the colon contained only semiformed feces. Ectoparasitism was severe, with approximately 48 female and 10 male American dog ticks and 8 female and 2 male western black-legged ticks removed from various regions of the fisher. The liver sample from this fisher had quantifiable levels of BRD at 0.04 ppm as well as a trace level of CHL.

The second northern California fisher AR death was an adult male recovered on 26 May 2010 at the HVRFP. Field observations included no evidence of predation or scavenging. The nutritional state and the postmortem condition were both poor. Gross necropsy determined that the fisher had no body fat present in any of the tissues. Frank blood was present in both the thoracic and abdominal cavities. The stomach contained red and black fluid but no prey. Ectoparasitism was severe, with 204 female and 27 male adult American dog ticks in both replete and non-replete stages on the muzzle, chest, tops of the fore- and hind-limbs and inguinal regions. Severe nematodiasis was seen in skeletal muscle throughout the body, and pulmonary nematodiasis was also noted in the marginal portions of the lungs. Histopathologically, no notable disease processes were seen but severe parasitism was noted. The liver sample from this fisher had quantifiable levels of BRD at 0.61 ppm and trace levels of BRM.

Necropsies and AR testing were performed on four kits, all of which were still dependent on their mothers' milk when they died following maternal abandonment caused by the mothers' deaths. One kit, a female fisher from the KRFP, tested positive for AR exposure.

This kit was approximately six weeks of age and was recovered within a monitored maternal den tree shortly after maternal abandonment. The cause of death was determined to be acute starvation and dehydration. The liver tissue contained a trace level of BRD, but there was no associated hemorrhaging in any tissues, body cavities or lumina, suggesting that this finding was not clinically significant.

Our findings demonstrate that anticoagulant rodenticides, which were not previously investigated in fishers or other remote forest carnivores, are a cause of mortality and may represent a conservation threat to these isolated California populations. This is the first documentation of exposure to ARs, and of direct mortality from ARs, in fishers anywhere in their geographic range. Earlier studies suggested that ARs posed little or no additive mortality effect on non-target populations. The shortfall of many of these studies was their use of common, cosmopolitan species, so they did not take into consideration that AR mortality may be additive in otherwise compromised populations. The spatially ubiquitous exposure observed within all post-weaning age classes and across the project areas in the species' contemporary California range is of significant concern, especially considering the recent work of Spencer et al., who demonstrated that even a small increase of 10–20% in human-caused mortality in the isolated southern Sierra Nevada fisher population would be enough to prevent population expansion if other restrictive habitat elements were removed.

The high rate of exposure to second-generation AR compounds in these populations is surprising and cause for concern. This generation of ARs is not only more acutely toxic but also has long retention in mammalian tissues through biphasic elimination. Second-generation ARs are more toxic because death can occur from a single primary ingestion by a rodent. However, rodents can receive a lethal dose of second-generation ARs in one feeding bout, and it can take up to 7 days before clinical signs manifest. Therefore, prey that have consumed a "super-lethal" dose of AR can pose a substantial risk to predators for several days prior to death. In one study, one group of Norway rats was given a choice between BRD bait and untreated food while another group had access only to the BRD bait; the two groups consumed 10 and 20 median lethal doses (LD50), respectively, on the first day, and 40 to 80 LD50 doses by day 6.5. If sources of these toxicants are maintained for even short periods, exposed rodents, the main prey source for fishers in these populations, can pose significant threats to their predators (the dose arithmetic is sketched after the raptor examples below). Many manufacturers use "flavorizers" because the AR compound itself may be bitter and unpalatable to rodent pests. Emulsions used to increase palatability include sucrose, bacon, cheese, peanut butter and apple flavors, and thus could be palatable to generalist carnivores like fishers. Although we did not visually detect AR bait in the stomachs or GI tracts of any fishers that died, primary poisoning cannot be completely ruled out.

In addition to the risk of lethal toxicosis, sub-lethal AR exposure may compromise fishers by reducing normal clotting function. The occurrence of AR-exposed wildlife dying from minor wounds that might otherwise have resolved easily had ARs not been present suggests contributory lethal effects.
Several cases describe raptors receiving minor defensive lacerations or trauma from prey that lead to the raptor’s death by exsanguination or hemorrhaging.
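To make the cited consumption figures concrete, the sketch below back-calculates an implied daily intake for the choice-fed rats and projects the cumulative body burden; the constant-daily-intake assumption is ours for illustration, not the original study's design.

```python
# Minimal sketch of the cumulative-dose arithmetic from the Norway rat study
# cited above: 10-20 LD50 multiples consumed on day 1, 40-80 by day 6.5.
# The constant daily intake assumed here is an illustrative simplification.

def cumulative_ld50(day1_multiples: float, daily_multiples: float,
                    days: float) -> float:
    """LD50 multiples ingested by a given day under constant daily intake."""
    return day1_multiples + daily_multiples * (days - 1)

# Back-calculate the implied steady intake for the choice-fed group:
# (40 - 10) LD50 over the 5.5 further days is ~5.5 LD50/day
implied_rate = (40 - 10) / 5.5

# A rat surviving the ~7 days until clinical signs appear carries a large body
# burden, which is why such prey pose a multi-day risk to predators like fishers
print(cumulative_ld50(10, implied_rate, 6.5))  # -> ~40 LD50 multiples
```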

Fishers actively pursue a wide array of terrestrial and arboreal prey. Hence, it is conceivable that a fisher could receive similar wounds or trauma from prey, or during the pursuit of prey. Consequently, if clotting mechanisms were compromised by ARs, benign injuries could lead to serious complications. The leading cause of mortality within the USFWS DPS is intraguild predation. It is possible that in some of these cases AR exposure had compromised clotting mechanisms at the time of the predation attempt; this deserves further study. High levels of tick infestation were noted in two of the AR mortalities compared with other sympatric species within the same project areas. In addition, these replete ticks were attached in body regions where they were infrequently seen in other captures, most likely due to a lack of regular grooming. Whether ARs played a role, allowing more ticks to obtain a blood meal because compromised clotting left the animal immobilized, is unknown. Furthermore, sublethal AR exposure may decrease an animal's resilience to environmental stressors. In one study, rabbits and rats subjected to stressors such as severe decreases in ambient temperature died at a rate of approximately 10%; however, when animals were exposed to low, non-lethal doses of anticoagulants and subjected to the same stressors, mortality rates increased to 40–70%. It is unknown whether stressors or injuries from environmental, physiological or even pathogenic factors could predispose fishers to elevated mortality rates when coupled with AR exposure.

The documentation of neonatal or lactational transfer of AR to a dependent fisher kit was unexpected, and the effects of AR exposure on a kit during fetal development or shortly after birth are unstudied. Outcomes of AR exposure in pregnant or whelping domestic canids have varied, causing no clinical signs in some cases but death due to coagulopathy immediately after delivery in others. The female fisher that gave birth to this kit did not exhibit clinical signs at pre- or postpartum captures, and monitoring of her maternal den site verified that one kit from that litter survived. Nevertheless, clinical signs including hemorrhaging, inappetence and lethargy have been seen in domestic canid puppies of AR-exposed mothers. Manifestations ranging from mild to severe, such as low birth weight, stillbirth or neonatal death, have been documented in several cases. In one human study in which pregnant women received low doses of warfarin due to severe risk of thromboembolic events, 33% had stillbirths, 28% had abortions and 11% of the neonates died shortly after birth. Reported rates of congenital anomalies and miscarriages in pregnant women receiving prescribed doses of warfarin range from 15 to 56%, and long-term neurological symptoms have been reported in children exposed in utero. In addition, because fishers exhibit delayed implantation of the blastocyst, whether ARs may cause pregnant females to abort or reabsorb the fetus merits further research. The transfer of first-generation ARs from mother to offspring in milk is not well understood, and there are no data on lactational transfer of second-generation ARs.

The quantity of AR we observed in fisher liver tissues varied and overlapped extensively between sublethal and lethal cases, with no clear indication of a numeric threshold that might indicate an amount leading to morbidity or mortality. This lack of predictive ability has been shown in numerous wildlife cases.
For example, brodifacoum, the most prominent AR compound detected in fishers in this study, ranged considerably in lethal cases among mustelid species: 0.32–1.72 ppm in stoats, 0.7 ppm in least weasels, 1.47–1.97 ppm in ferrets and 9.2 ppm in American mink.