California regulates cannabis with a strong hand and high taxes

When we found the same retailer or a branch of the same retail chain elsewhere in the same county, we kept the retailer in the data set. If a retailer disappeared and then reappeared in a later round of data collection, we kept it in the data set. If a retailer removed its online price list, or moved its only location outside the original seven counties, we removed it from the data set for that data collection round. Between January 2017 and August 2017, we observed significant attrition from the initial group of 542 retailers in the October 2016 seven-county sample. By August 2017, 389 of the original 542 retailers remained in the data set. As shown in tables 2 and 3, average prices for these retailers changed little during this 11-month period. We call this “attrition” because the data collection method was consistent over this time period. In our 2018 rounds of data collection, we imposed the additional condition that retailers must be licensed, thus changing the data collection method. Thus, for 2018 data collection rounds, the percentage of retailers dropping out of the data set from the original October 2016 sample of 542 retailers should not be thought of as “attrition.” Some retailers may have removed their online price lists from both Weedmaps and Leafly but continued to operate. Attrition from the initial 542 retailers thus should not be interpreted solely as a measure of how many cannabis retailers left the legal cannabis segment. In November 2017, while continuing to track the original group of retailers that had been listing prices on Weedmaps since October 2016, we also collected data from all other retailers listing prices on Weedmaps in all counties of California.

These included the 169 retailers that by that time remained from the original panel; 700 additional retailers that had newly listed retail prices in the seven original counties after October 2016; and 1,652 retailers in other counties, for a total of 2,521 retailers across California. In January 2018, mandatory licensing laws went into effect, thus rendering illegal under state law any cannabis retailer without a temporary license from the Bureau of Cannabis Control. We verified licensing status by cross-referencing all Weedmaps and Leafly listings in California with the publicly available lists of temporary licenses granted by the Bureau of Cannabis Control. If both a Weedmaps and a Leafly listing were found, we used the Weedmaps data and dropped the Leafly data. In computing averages for our last three data collection rounds, we calculated “legally marketed” minimum and maximum price averages at California cannabis retailers that listed prices on Weedmaps and that had obtained temporary licenses to sell cannabis in compliance with state regulations at the time of each data collection round. For comparative purposes, we also collected a sample of about 90 unlicensed retailers in 20 counties from Weedmaps or Leafly, distributed similarly to the licensed retailers. We chose these retailers from within a set of 20 representative counties, approximately in proportion to the relative populations of those counties. We selected retailers for this “20-county unlicensed sample” arbitrarily from the first page of search results on Weedmaps for retailers in each of the 20 counties, but we did not use mathematical randomization to select the counties or the listings we chose within counties. These data may not be fully representative of legal cannabis price ranges for several reasons. First, as discussed above, not all legal retailers use Weedmaps or Leafly, and the prices they list may not be representative of all prices in the market.
The price data we collected also may not fully represent the range of products in the market, which may have varied in different rounds of data collection.
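The listing deduplication and license screening described above can be sketched in a few lines. This is an illustrative sketch only: the retailer identifiers and data structures below are hypothetical, not our actual data or tooling.

```python
def legally_marketed(weedmaps, leafly, licensed):
    """Keep one price listing per retailer, preferring Weedmaps over
    Leafly when both exist, then drop retailers without a temporary
    license from the Bureau of Cannabis Control."""
    listings = {retailer: "leafly" for retailer in leafly}
    # Weedmaps wins whenever a retailer appears on both sites.
    listings.update({retailer: "weedmaps" for retailer in weedmaps})
    return {r: src for r, src in listings.items() if r in licensed}

# Hypothetical retailers: "B" is listed on both sites; "A" is unlicensed.
sample = legally_marketed(
    weedmaps={"A", "B"},
    leafly={"B", "C"},
    licensed={"B", "C"},
)
# sample == {"B": "weedmaps", "C": "leafly"}
```

The same preference rule (Weedmaps over Leafly, licensed retailers only) applies however the listings are stored; only the dictionary shapes here are invented.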

As is suggested by the changing prevalence of 1-ounce flower packages and 500-milligram oil cartridge packages, product assortments may have changed within each of these categories. This problem plagues price data in many different industries, but changes in product assortments and price listings may have been especially rapid in the emerging cannabis market. The differences in price ranges we report here should not be interpreted as measures of price dispersion, because we are not observing maximum and minimum prices for exactly the same products at different retailers and thus are not comparing “apples to apples,” as is traditionally required to measure price dispersion. Concrete differences in product attributes — such as potency or grow type for minimum-priced or maximum-priced cannabis — may also vary between retailers and may correlate with price differences, even if price differences between agricultural products do not necessarily correlate with sensory characteristics. For instance, the minimum price for one-eighth ounce of flower at one retailer might represent outdoor-grown cannabis with a THC concentration of 15%, whereas the minimum price for one-eighth ounce of flower at another retailer might represent indoor-grown cannabis with a THC concentration of 20%. By analogy, if one were to collect minimum and maximum prices for all wine at retailers around California, the minimum-maximum range could not be used to measure price dispersion in a traditional sense; to measure dispersion, one would have to compare, for instance, the price of the same Kendall-Jackson Chardonnay at different stores. For our research, comparing prices for identical products across retailers would not have been feasible, given the Weedmaps format and our data collection methods.
Our approach here, in reporting cannabis price ranges, is to make no assumptions about quality and to treat minimum and maximum prices simply as prices for different types of products. It would be interesting, in future work, to explore dispersion by collecting and comparing data on standard product types across retailers.
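The wine analogy can be made concrete with a toy example (all prices invented): a store's min-max range pools different products, whereas a dispersion measure must compare the same product across stores.

```python
# Invented prices for two hypothetical stores, each carrying two
# different one-eighth-ounce flower products.
store_a = {"outdoor_eighth": 25, "indoor_eighth": 45}
store_b = {"outdoor_eighth": 30, "indoor_eighth": 40}

# The within-store min-max range mixes different products:
range_a = max(store_a.values()) - min(store_a.values())  # 45 - 25 = 20

# Traditional price dispersion compares the SAME product across stores:
indoor_prices = [store_a["indoor_eighth"], store_b["indoor_eighth"]]
indoor_dispersion = max(indoor_prices) - min(indoor_prices)  # 45 - 40 = 5
```

The 20-unit within-store range reflects product mix, not disagreement between sellers; only the 5-unit gap on the identical product measures dispersion in the traditional sense.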

Beyond requiring product standardization, an analysis of cannabis price dispersion with respect to geographic areas would also likely require a larger data set than ours. Hollenbeck and Uetake comment that regulatory barriers to entry can facilitate the exercise of monopolistic behavior by retailers. Dispersion measures, as proxies for competition, might help illuminate regulatory impacts. As more tax and sales data are released by government agencies, it might soon become possible for researchers to collect data sets of sufficient size and precision for dispersion to be measured.

Table 2 shows average minimum and maximum prices over the course of the 21-month data collection period for the three product types that we studied, along with the number of observations in each period. In the last four rounds of data collection, we generally observe only relatively slight differences in both average prices and upward or downward movements among the three retailer groups. Both statewide and within the seven-county sample, average minimum and maximum prices for one-eighth ounce of flower and for 1 ounce of flower differed by 2.5% or less, but averages differed by up to 8.8% for 500-milligram cartridges. In table 3, we report prices over the 21-month period for the non-attrited sample of the original retail store locations whose prices we collected in October 2016. These retailers may not be representative of overall state averages, particularly after the substantial attrition from the original group of retailers that we observed beginning in November 2017. However, this set of observations avoids potentially confounding factors introduced by the changing sample composition over time. Table 3 shows substantial attrition from the original seven-county sample of 542 retailers that listed prices on Weedmaps in October 2016.
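As a quick check, the attrition figures reported in this section follow directly from the retailer counts in the text (542 at the outset in October 2016, 389 remaining in August 2017, 74 by July 2018):

```python
# Attrition relative to the original October 2016 sample of 542 retailers,
# using the counts reported in this article.
original = 542
remaining = {"Aug 2017": 389, "Jul 2018": 74}

attrition_pct = {
    period: round(100 * (original - count) / original)
    for period, count in remaining.items()
}
# attrition_pct == {"Aug 2017": 28, "Jul 2018": 86}
```

The roughly 28% attrition over the first 11 months and 86% over the full 21 months are simple ratios against the original sample, not estimates.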
By July 2018, 21 months after the first round of price collection, only 74 non-attrited retailers from the original sample remained active on Weedmaps or Leafly. Local police crackdowns and municipal bans in some counties surely contributed to this 86% attrition rate, which should not be interpreted as representative of statewide attrition from Weedmaps or evidence of the general rate of business closures. What is more interesting, perhaps, is the basic observation that only 270 licensed cannabis retailers were listed on Weedmaps in all of California in July 2018, whereas in November 2017, near the end of the unregulated market, about 2,500 California cannabis businesses operated without the need for a license. This observation suggests, at least, that many medicinal cannabis retailers that had been operating legally in 2017 had not yet obtained licenses and entered the new legal market as of mid-2018. Figures 1, 2 and 3 show average minimum and maximum prices for one-eighth ounce of flower, 1 ounce of flower and 500-milligram oil cartridges for each round of data collection, both for legally marketed cannabis and for the 20-county unlicensed sample. In the 2016 and 2017 price data, before mandatory licensing, regulation and taxation, we observe relative stability in California cannabis price ranges for all three product types. In 2018, after licensing, regulation and taxation, we observe three patterns.

First, we observe falling prices for all products between February and May 2018, which may be related to retailers’ need to liquidate untested inventory that would become illegal as of July 2018. Second, we observe generally rising prices between May and July 2018, which may be related to the introduction of mandatory testing rules. However, because of the limitations and uncertain representativeness of the Weedmaps sample, as well as changes to our sampling methods in different rounds, we do not have a basis for inferring a causal relationship between testing rules or other regulatory events and our minimum and maximum price averages. Third, we observe rising maximum prices for 500-milligram oil cartridges over our last four data collection rounds. At all retailers statewide that listed prices on Weedmaps or Leafly, we observed a 33% increase in maximum prices from November 2017 to July 2018. Table 2 shows that this last pattern can be observed, with some variation, in prices both in the original seven counties and in all of California. We do not know to what extent the maximum price increases for cartridges might be attributed to the introduction of new, higher-end products with differentiated sensory or functional attributes as the market has evolved; to differentiated packaging attributes; to price increases generated by increased high-end demand; to supply-side factors; or to other market effects. In general, the price patterns we observe demonstrate little evidence of seasonality, even though wholesale cannabis prices are known to vary seasonally because of the annual outdoor harvest and consequent increase in outdoor cannabis supply in the fall and winter months. We collected eight rounds of price data from the legal California retail cannabis market during a 21-month period of regulatory transition, as cannabis was being decriminalized, legalized and regulated in stages.
Given the differences between the data sets we collected and the unknowns about Weedmaps that we have discussed above, readers should be especially cautious in interpreting the movements we observe as “trends.” We instead describe them as “patterns.” In general, one surprising result from our price data sets over time may be the relative lack of overall price movement in California cannabis prices, with the exception of rising maximum prices for cannabis oil cartridges in 2018. The data we report in this paper provide one source of unique information on the retail prices of cannabis flower and oil during the state’s period of transition to a regulated market environment. We hope that our data may be useful to economists and other researchers who need to make basic assumptions about characteristics of the cannabis market. We did not collect price data for numerous products now available on the legal cannabis market in California, including edibles, waxes and topicals. The market has also changed in important ways since mid-2018. Many other basic reports on price data beyond ours are still needed to understand the economics of California’s rapidly changing cannabis market.

Proposition 64 was really focused on the criminal justice aspects of cannabis prohibition — on [addressing] the negative impact of criminalization, primarily on people of color. It also focused on what happens to consumer safety and protection in the absence of regulation. It didn’t really prescribe regulation for the commercial sale of cannabis. The Legislature had already come up with a framework for regulating medical cannabis prior to Proposition 64 passing, and we didn’t have any reason to think that [the Legislature’s framework] would change drastically just because the criminal code had changed. We were right. You have to interface with a lot of agencies to be compliant. Those agencies are often overburdened and understaffed.

A recent landmark paper describes the successful production of THCA and CBDA from sugar in yeast

The study employed a combinatorial assembly of yeast toolkit parts and iterative design-build-learn-test cycles, with strain selection guided by a mathematical model relating genetic design to monoterpene flux. To be functionally useful, the engineered strain needed to retain its ability to convert sugars to ethanol and to express the flavor-determining monoterpenes linalool and geraniol precisely and stably. This contrasts with most metabolic engineering efforts, which are commonly enlisted to maximize product titers. Multiple state-of-the-art engineering techniques and iterative improvement schemes were employed to tune production of multiple commercially important metabolites without major collateral metabolic changes. For cannabinoids from C. sativa, an aromatic prenyltransferase catalyzes the formation of cannabigerolic acid from olivetolic acid and geranyl pyrophosphate. The pathway then branches again toward different cyclized products, such as tetrahydrocannabinolic acid (THCA), cannabidiolic acid (CBDA), and cannabichromenic acid (CBCA). Unnatural cannabinoid variants with tailored alkyl chains could also be obtained by feeding the engineered strain hexanoic acid analogs, demonstrating the substrate promiscuity of the olivetolic acid pathway enzymes. Most notably, cannabinoid variants with an alkyne moiety were synthesized, paving the way for future click derivatization.

It has been shown that the cannabinoid alkyl side chain is a critical pharmacophore and may be a promising target for pharmaceutical discovery. Another study successfully reconstructed the entire β-bitter acid pathway by heterologous expression of two CoA ligases, a polyketide synthase, and a prenyltransferase complex in an optimized yeast system. A metabolon composed of two aromatic prenyltransferases was elucidated. Another key tool for increasing transgene expression and function in terpenoid biosynthesis is mutagenesis analysis, particularly for prenyltransferases, given the plasticity and promiscuity of their active sites. Prenylated flavonoids are another subclass of plant phenolics, which combine a flavonoid skeleton with a prenyl side chain. Unlike other flavonoids, they have a narrow distribution in plants, limited to only a few plant families, including Cannabaceae. Recent studies have demonstrated that hop terpenophenolics exhibit diverse bio-activities with a high potential for pharmaceutical applications. A prenylated flavonoid with very potent phytoestrogen activity is 8-prenylnaringenin, produced in Humulus lupulus. 8-Prenylnaringenin was recently produced de novo as a proof of concept for yeast as a platform for biosynthesis of prenylated flavonoids. Recently, the importance of non-catalytic foldases and chaperones for terpenoid production in trichomes has been elucidated. THCA and CBDA are unstable and are non-enzymatically converted to the decarboxylated forms, Δ9-tetrahydrocannabinol and cannabidiol, respectively. It is hypothesized that CsaCHIL, a chalcone isomerase-like protein lacking catalytic activity, binds THCA and/or CBDA for stabilization in hemp glandular trichomes and limits negative feedback to upstream enzymes.
It has also been shown that upregulation of multiple foldases and chaperones resulted in a 20-fold improvement of THCA synthase functionality in yeast, which points to a promising avenue for optimizing microbial production.

The progression of terpenoid biosynthesis in microorganisms is limited by the dearth of characterized terpene synthases, as well as of the CYPs and GTs that modify these terpenes. Computational biology has enabled the discovery of new enzymes, as demonstrated by the identification of 55 predicted terpene synthases from C. sativa. CYPs, in particular, are hypothesized to be a main driving force of terpenoid diversification in plants through hydroxylation and sequential oxidations of specific positions, as well as by catalyzing ring closure and rearrangement reactions that significantly increase terpenoid complexity. Most CYPs react with a distinct carbon on the terpene backbone, reactions that are challenging for synthetic chemistry, making biosynthesis of oxidized terpenoids a preferable option for production. These CYPs are generally localized to the ER of the native host, in close proximity to the terpene synthase producing the substrate for the reaction. Often included on the ER are GTs required for the glycosylation of the oxidized terpenoid, forming potential metabolons on the ER membrane. There are many inherent challenges in transferring into microorganisms CYPs that nature has optimized to work in plant systems. This is a major hurdle when working in prokaryotic cell factories, which lack an ER and the cytochrome P450 reductases (CPRs) responsible for transferring electrons between CYPs and electron carriers in eukaryotes. Groups have successfully engineered E. coli with functionally reconstructed plant-derived CYPs by generating fusion proteins with membrane anchors suitable for prokaryotic cells, along with co-expression of a CPR. A major advantage of working in yeast systems like S. cerevisiae and Yarrowia lipolytica for the production of decorated terpenoids is the endogenous ER system. This has been successfully demonstrated in S. cerevisiae engineered to produce oxidized casbenes, medically important diterpenoid derivatives; this work required the optimization of six CYPs, achieved titers of over 1 g/L, and built upon techniques first demonstrated in the landmark production of artemisinic acid, a plant-derived sesquiterpene, in yeast.

The terpenoid target space can be further expanded through the introduction of GTs from plants into microorganisms for the glycosylation of oxidized terpenoids. Beyond adding new functionality, plants natively glycosylate volatile or toxic terpenes for long-distance transport and for storage as “disarmed” molecules. Saponins, modified triterpenoids synthesized through varying oxidations and glycosylations of a β-amyrin backbone, have garnered recent interest in both the industrial and human health spaces. The biosynthesis of β-amyrin has been achieved in both E. coli and S. cerevisiae, but the production of its oxidized and glycosylated derivatives has been limited to yeast. Recently, Wang et al. achieved 2.25 g/L production of ginsenoside Rh2, an oxidized and glycosylated triterpene generally harvested from Panax spp., by directed evolution of UGTPg45. This was the highest titer reported to date for an in vivo production system. Advances in cell-free platforms have enabled the interrogation of GT function in vitro and were recently deployed for the production of novel cannabinoid glycosides. This method allows for the characterization of GTs that can then be introduced into a production host for large-scale biosynthesis. A challenge for future engineering will be the availability of the substrates for glycosylation reactions, nucleotide sugars, in heterologous hosts. Limited work has been done in microbes aimed at producing various nucleotide sugars, but the formation, interconversion, and salvage of these substrates have been extensively studied in plants, providing a framework for future microbial engineering efforts. A new paradigm of modifying the subcellular morphology of production cells, rather than optimizing metabolic flux, has successfully increased oxidized terpenoid production titers in yeast. Kim et al.
overexpressed INO2, an ER size regulation factor, which resulted in an increase in ER biogenesis, ER protein abundance, protein-folding capacity, and cell growth while limiting the ER stress response. This resulted in a 71-fold increase in squalene production and an 8-fold increase in the CYP-mediated production of protopanaxadiol compared to control strains. A similar goal was achieved by knocking out PAH1, which generates neutral triglycerides from phosphatidic acid. This strategy also enlarged the ER and boosted production of β-amyrin, medicagenic acid, and medicagenic-28-O-glucoside by eight-, six- and 16-fold, respectively, over the control strain. These strategies will prove to be pivotal advances in terpenoid engineering and may be applied to any yeast chassis engineered for maximizing the biosynthesis of terpenoid derivatives. A hindrance to terpenoid biosynthesis in microorganisms is the potential for product or intermediate toxicity to prevent the accumulation of high levels of a desired molecule. Achieving maximum accumulation will be essential when commercializing next-generation bio-fuel alternatives like the sesquiterpene bisabolene. Groups have engineered synthetic hydrophobic droplets within the cell that allow for the storage and accumulation of lipophilic compounds like terpenes while circumventing growth or toxicity issues. While this work was done in plants, there is potential to transfer these technologies to microorganisms. Lipid engineering in yeast was accomplished through the overproduction of triacylglycerol and a knockout of FLD1, which regulates lipid droplet size; the resulting oversized lipid droplets accumulate and store lycopene, an acyclic tetraterpene, at record titers of 2.37 g/L.
These challenges have brought recent attention to Yarrowia as a production host for plant-derived terpenes, owing to its capacity to accumulate lipophilic compounds and the potential to apply technology developed for S. cerevisiae in this new host. A recent pivotal study harnessed peroxisomes to produce squalene at an unprecedented titer through dual cytoplasmic-peroxisomal engineering. This study indicates that peroxisomes can function analogously to trichomes due to their pathway compartmentalization.

While there has been little exploration thus far of the capability of yeast peroxisomes to mimic the trichome metabolic environment specifically, they are a promising avenue for optimizing the heterologous production of terpenoids in yeast. Utilizing microbial biosynthesis to produce economically relevant terpenoids limits the need to grow, harvest, and extract plant material. This provides an environmentally friendly synthesis platform for specialized terpenoids and permits their production at high concentration and purity. Advances in technologies and strategies for the identification and heterologous expression of terpenoid biosynthesis pathways in microorganisms will provide numerous opportunities for future research. While there has been recent success in engineering prokaryotes for terpene production, yeast will prove to be the optimal production host for more complex terpenoid derivatives and should be a cornerstone of future efforts. The progression of metabolic engineering for terpenoid production is limited only by the identification and application of plant-derived terpene synthases, prenyltransferases, CYPs, and GTs for the biosynthesis and decoration of natural terpenoid scaffolds. By implementing the techniques described above, there is potential to expand the latent target space beyond the natural, known terpenome, enabling the biosynthesis of synthetic terpenoids. Achieving this goal will require new breakthroughs in host engineering, along with optimizing the expression and function of heterologous pathways.
Additionally, generating host strains that produce various or specialized nucleotide sugars for glycosylated terpenoids will provide a chassis for the production of terpenoid glycosides, allowing for the microbial biosynthesis of compounds with altered and enhanced bio-active properties.

The difficulty of sourcing medicinal plant terpenes is exemplified by the Taxol story: clinical development of Taxol was an agonizingly slow process due to supply shortages of the natural producer Taxus brevifolia in the 1980s and 1990s. The concentration of Taxol in the plant is very low, and harvesting of yew for extraction is not sustainable, since T. brevifolia is now endangered. As is the case for all complex plant terpenes, full chemical synthesis is also not currently a viable economic option, as it requires many steps, gives low yields, and is not scalable for production. Taxol is currently manufactured either by semisynthesis from 10-deacetylbaccatin III extracted from the needles of Taxus spp., or by extraction from plant cell suspension cultures grown with elicitors to improve production. Both methods still rely on a plant source, resulting in a low and unstable yield, high production costs, and unwanted byproducts. Many medicinally relevant plant diterpenes currently face similar sourcing issues, with Taxol and cyclopamine as leading examples. This is particularly regrettable because plant terpenes can have unique mechanisms of action not demonstrated by any other class of compounds. For example, Taxol stabilizes microtubules by binding at a unique and specific site, resulting in cell cycle arrest and making it an effective cancer treatment. Two major challenges have historically limited the production of complex plant terpenes in yeast: low yields for the first step in the pathway, and the optimization of complex pathways that elaborate the terpene scaffold through multiple tailoring enzymes.
Previous work with Taxol indicates that multiple products are produced in the early stages of the pathway, a major cause of the low yields observed in yeast. Additionally, enzymes such as P450s are a notorious challenge for heterologous expression in yeast, especially when required to act in series, resulting in diminishing product yields and thus limiting both pathway discovery efforts and the reconstitution of multistep pathways. Despite these challenges, the rational design of strains to tune coupling with redox partners can improve P450 activity in yeast. Along with improving redox dynamics, P450 performance could be enhanced via augmentation of the ER anchoring regions to improve the localization and expression of plant-derived P450s in yeast, or via the inclusion of non-enzymatic ER scaffold proteins engineered to bind the P450s for the formation of pseudo-metabolons. Taxol biosynthesis in the native host T. brevifolia is a complex pathway requiring nineteen enzymatic conversions, eight of which are yet to be identified or characterized. Eleven of these enzymes are ER-anchored, with the remainder predicted to be soluble cytosolic enzymes.

We propose some aspects of possible ideotypes for several biomass crops

Because of the mismatch between the volume of fuel needed and the volume of each individual therapeutic needed, it will be necessary to have a large number of crops, each producing the same bio-fuel precursor and a different high-value product, which will be agronomically challenging. Biofuel production from lignocellulosic biomass relies on the microbial bioconversion of cell wall sugars and components into fuels and products. A major hurdle to efficient bioconversion is the recalcitrance of the feedstock material and the inhibitory effect that lignin has on this process. Cell-wall engineering has shown promise for decreasing overall recalcitrance by increasing the ratio of C6/C5 sugars, reducing lignin content, and reducing the acetylation of cell-wall polymers that limits the conversion efficiency of the feedstock material. While lignin is a major contributor to feedstock recalcitrance, it is also a promising substrate for specialized microbes that convert these aromatic polymers into usable products. The introduction of specialized microbial hosts into various processing systems has the potential to optimize the conversion of all lignocellulosic feedstock components into products with economic value, limiting the waste streams of bio-fuel production and increasing the viability of their use on a global scale. The synergistic application of these various strategies has the potential to make lignocellulosic bio-fuels economically viable while shifting the current paradigm of what an effective bio-fuel/bio-product production system achieves.

Through a multidisciplinary approach across all sectors, we have the potential to revolutionize the manufacturing of bio-fuels/bio-products from lignocellulosic biomass, ushering in a new era of green technologies. While the first and second generations of bio-fuels use light and CO2 to produce biomass in crops that is later fed to microbes, third-generation or algal bio-fuels combine energy capture and fuel production within a single cell of photosynthetic cyanobacteria and algae. Having the entire fuel-production process take place in one organism makes the process more direct and efficient, with no energy invested in non-fermentable parts such as plant stems, roots, and leaves. The solar energy conversion of cyanobacteria and algae is higher than that of plants, reaching an efficiency of 3% in microalgae compared to less than 1% in most crops. Furthermore, many species can grow in wastewater or marine environments with simple nutritional requirements and therefore do not compete for land use with agriculture. It is estimated that microalgae can produce oil at a yield of 100,000 L/hectare/year, while palm and sunflower oil can only reach 1,000–6,000 L/hectare/year. Algal fermentation could also lead to 9,000 L/hectare/year of bioethanol production, compared to 600 L/hectare/year derived from corn. Despite these favorable comparisons, attempts at large-scale cultivation have struggled with high production costs. Unlike agriculture, which has been optimized over millennia by humans, the technology for mass-scale cultivation of photosynthetic microorganisms is still in its early developmental stages. Cultivation can be done either in an open system, like a raceway pond, or in a closed system, such as a photobioreactor.

Ideotypes are theoretical archetypes of crops which serve as a practical framework for plant breeders to critically evaluate which traits they should target for specific applications.

With advances in plant biotechnology and a growing urgency to adopt more sustainable practices across our economy, new uses for crops as bioenergy feedstocks may pivot our definition of an ideal crop toward one engineered for biomass and bioenergy production, in contrast to food production. Although there is a plethora of specific applications to which plant engineering efforts can contribute, here we highlight recent advances in two broad areas of research: increasing available plant biomass and engineering the production of higher-value co-products. Before our ability to genetically engineer plants, plant breeders were constrained to breeding and selecting from the morphological, physiological, and metabolic repertoire already preexisting in plant genomes. Initially, such efforts were focused on breeding out deleterious traits or on a narrow aim such as yield. Fifty years ago, the concept of an ideotype was proposed as an alternative regime. The ideotype is an idealized form of a particular crop, which could then be a target to breed towards, rather than merely breeding away from deleterious traits. This shift in mentality provided a much-needed framework to help set goals and target traits for plant breeding efforts. A useful ideotype must be ‘theoretically capable of greater production than the genotype it is to replace and of such design as to offer reasonable prospect that it can be bred from the material available’. The discovery and development of plant genetic engineering technologies such as Agrobacterium-mediated and biolistic transformation expanded the scope of possible ideotypes, as plant engineering efforts can now draw on a much larger effective pool of genetic material, expanding from interfertile germplasm to all sequenced and characterized genes from across the tree of life.

Feedstock crops are harvested primarily for biomass, which is then used as a substrate for downstream processes. Thus, it becomes useful to frame plant carbon partitioning in terms of biomass composition, and to consider which small molecules or polymers a feedstock ideotype would produce or deposit. Using new synthetic biology tools to redesign carbon flow in plants, one may alter and optimize the composition of biomass and bioproducts in ways that cannot be achieved through conventional breeding, ultimately improving the scalability and feasibility of renewable feedstock crops. The ideotype for each crop may vary depending on its economics, growing region, and intended application. Here, we focus on carbon allocation as a metabolic/physiological trait that may be modified to increase the utility and value of feedstock crops. Specifically, we focus on two aspects: 1) traits that may alter overall plant biomass and its usability and 2) traits that may enhance the value of feedstock crops through the production of higher-value co-products, paying special attention to advances within the last two years. The plant cell wall is a complex network of polymers and one of the most effective carbon-sequestering systems on the planet, with annual production by land plants estimated at 150–170 billion metric tons. Cell walls represent a massive and largely untapped supply of six-carbon sugars in the form of cellulose. However, cell walls are naturally recalcitrant to degradation and fermentation, limiting their use as chemical feedstocks rather than bulk materials. Lignin is a main inhibitor of saccharification in woody crops, and hemicellulose limits saccharification yields in monocot biomass crops. Many engineering efforts have therefore focused on decreasing lignin and improving fermentation characteristics.
We are only beginning to explore ways to modify the composition and deposition of plant cell wall components to improve their ability to serve as biomass feedstocks. One strategy for reducing lignin accumulation uses 3-dehydroshikimate dehydratase (QsuB) from Corynebacterium glutamicum, which converts a lignin precursor into protocatechuate. Transgenic expression of QsuB in Arabidopsis thaliana plastids reduced lignin accumulation and improved saccharification yield by 25–100%, depending on treatment method. Moreover, the six-carbon/five-carbon (C6/C5) sugar ratio of the biomass also affects saccharification yields, with higher ratios performing better. The most highly accumulated C5 sugar is xylose, but xylan synthesis mutants show dwarfism due to xylem vessel collapse. This phenotype has been rescued by restoring xylan synthesis specifically in vessel tissue, leading to a 42% increase in saccharification yield compared to wild type. Acetylated cell wall components are converted during fermentation to acetic acid, which inhibits fermentation. RNA interference has been used to decrease expression of the genes responsible for acetylation, nearly tripling saccharification yields. Gene stacking has been used to generate engineered lines that combine several of the aforementioned traits. This demonstrates how modern bioengineering strategies can be used in tandem to modify cell wall composition, a step towards engineering the optimal bioenergy crop ideotype. While ideotype specifics will vary by crop and intended application, in general an idealized biomass cell wall will have a high C6/C5 sugar ratio and low lignin concentration, and will provide a favorable substrate for fermentation. Beyond modifying the molecular composition of the cell wall, others have focused on engineering upstream metabolic processes to increase rates of photosynthesis, carbon fixation, and biomass production.
Plants often absorb more photons than they can use for photosynthesis, leading to non-photochemical quenching (NPQ), which dissipates excess energy as heat that does not contribute to biomass. Mutation of light-harvesting complex components resulted in a 25% biomass increase in Nicotiana tabacum under field conditions. It is also possible to modulate the NPQ process to shift more quickly from a heat-dissipating to a photosynthetic state, restoring energy capture via production of NADPH and ATP. Engineered N. tabacum overexpressing the genes coordinating NPQ relaxation showed increases of ~15% in plant height, leaf area, and total biomass accumulation under field conditions.

These are promising results, as most plants use similar mechanisms, making this technology applicable to bioenergy crops that depend on maximal accumulation of lignocellulosic biomass. Another key process that limits the theoretical maximum for biomass accumulation is photorespiration. The primary cost of photorespiration stems from the process plants use to ‘recycle’ the unintended product formed via the oxygenase activity of RuBisCO, leading to loss of both carbon and nitrogen. An alternative photorespiratory bypass based on the 3-hydroxypropionate bicycle was successfully engineered into cyanobacteria by expressing six heterologous genes from Chloroflexus aurantiacus. This bypass not only limits losses from photorespiration but also fixes additional carbon and can supplement the Calvin-Benson cycle. Other photorespiratory bypasses have been demonstrated to work in planta, yielding more than a 25% increase in biomass in field trials. Thus, modifying the rate of carbon fixation and modifying the fate of carbon deposited in the various cell wall polymers have been shown to be complementary approaches for increasing the accessible feedstock sugars from future feedstock crops. Lignocellulosic bioproduction offers a much larger potential supply of biomass than food-based fuels such as corn ethanol, and reduces the conflict between food and the fuels, materials, and other products that may be produced from biomass crops. Future biomass crop ideotypes should therefore be designed to ensure that the use of lignocellulosic material is cost-effective. Lignocellulosic bio-fuels have been slow to achieve commercial viability, in part due to low fuel prices and the chemical recalcitrance of lignocellulosic matter. A promising strategy to make lignocellulosic bio-fuels economically competitive is the co-production of higher-value products directly in feedstock crops, which can be separated from the bulk carbon fuel source during processing.
This can be achieved in two ways: either feedstocks for lignocellulosic bio-fuels can be modified to produce a higher-value side product, or lignocellulosic bio-fuel can be produced from the side products of other agricultural processes. The former is amenable to feedstock bioengineering efforts to optimize for bio-fuel purposes and will be discussed here. The ideotype of co-product crops will depend on the specific crop, but one essential requirement is that the co-product sell for more than the cost of its extraction. Co-product value and market size are often inversely correlated, as shown in Figure 4. The base use of most biomass crops is the production of ethanol, but plants have been engineered to produce co-products such as higher-value fuels, commodity chemicals, and high-value small molecules. Higher-value fuel products include lipids for bio-diesel and jet fuel. Biodiesel-grade lipids have recently been produced in engineered sorghum that accumulates 8% dry-weight oil in its leaves in the form of lipid droplets. These droplets can be extracted using simple, cheap techniques during the standard processing pipeline for lignocellulosic bio-fuels, minimizing additional purification costs. Jet fuel is also a high-volume product, with an annual market size of 290 billion liters in 2015 and prices usually around $1 per liter. There is no practical alternative available for liquid aviation fuels, which account for a small but rapidly growing fraction of total anthropogenic greenhouse gas emissions: currently 2.3%, growing at approximately 6% per year. Jet fuels have been produced from the oilseed crop camelina, and efforts are underway to increase jet fuel yield. Another promising high-volume side product is 1,5-pentanediol, a commodity chemical used in polyester and polyurethane production. Its present market value is around $6,000/ton, with a market size of 18 million USD.
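The market figures above can be sanity-checked with simple arithmetic; this sketch uses only the numbers quoted in the text (the 2015 jet-fuel volume and price, the ~6% growth rate, and the pentanediol price and market size):

```python
import math

# Jet fuel: 290 billion litres/year at roughly $1/litre (2015 figures from the text).
jet_fuel_litres = 290e9
jet_fuel_price_usd = 1.0
print(f"jet fuel market ~ ${jet_fuel_litres * jet_fuel_price_usd / 1e9:.0f}B/year")

# At ~6% annual growth, aviation's emissions share doubles in roughly:
print(f"doubling time ~ {math.log(2) / math.log(1.06):.1f} years")

# 1,5-pentanediol: an $18M market at ~$6,000/ton implies an annual volume of:
print(f"pentanediol volume ~ {18e6 / 6_000:.0f} tons/year")
# -> jet fuel market ~ $290B/year
# -> doubling time ~ 11.9 years
# -> pentanediol volume ~ 3000 tons/year
```

The contrast is the point of the section: jet fuel is an enormous low-margin market, while pentanediol is a tiny (~3,000 ton/year) high-margin one, illustrating the inverse relationship between co-product value and market size.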
Using plants as a production chassis for high-value, low-volume products has received substantial attention in recent years, with several analyses suggesting that plants may allow for cheaper production of edible vaccines, bulk enzymes, and monoclonal antibodies than alternative systems.

An online survey was also the most cost-effective means of reaching a large number of cannabis growers

We received 101 responses, with variations in response rates among questions. Within this group, 36 growers provided feedback about their participation in state and county licensing initiatives, and 35 on the income they received from cannabis cultivation. We received feedback from 30 participants about the ways in which the legalization system could be improved. Although this is a small number of cannabis growers compared to estimates of the grower population, preliminary conclusions regarding grower perceptions can be drawn from this sample for the purpose of guiding future research on California’s cannabis policy. Of the 36 growers who provided feedback on their participation in state or county licensing initiatives, over half reported that they had not participated in them. Of the 35 growers who reported both on participation in licensing initiatives and income sources, 31% reported income from cannabis and had not applied for cultivation licenses, indicating their noncompliance with state and county regulations. Among the growers who had not applied for cultivation licenses and who also reported on income sources, 39% indicated that they obtained no income from cannabis, 11% received less than a quarter of their income from cannabis, 11% received between a quarter and half, 22% received between half and three-quarters and 17% received more than three-quarters of their income from cannabis.

Among those who had applied for state or county licenses and reported income sources, 17% reported receiving no income from cannabis, 6% received a quarter or less, 6% received between a quarter and half, 12% received between half and three-quarters and 59% received all of their income from cannabis cultivation. Non-licensed growers who supported their livelihoods from cannabis cultivation and explained their noncompliance said they were unable to apply because of county cultivation bans or unformulated guidelines and cost constraints. Additionally, 20% indicated they planned to apply. A small grower from Siskiyou County explained, “I live in a ban county. I plan to apply in a nearby city once the city puts a cultivation ordinance on the books.” A small grower from Mendocino County specified that the plant “track and trace” provisions of the licensing system were cost prohibitive. Compliant and non-licensed growers also commented on the state’s licensing system and how it could be improved. All respondents except one identified specific limitations of the system related to at least one of three themes: costs, regulatory inconsistencies or alterations needed to production practices. Of the growers who commented, 70% identified costs as inhibiting compliance with state legalization initiatives.
A medium-sized grower from Mendocino County described the multi-agency licensing system as “Too many departments asking for too many fees.” A small, nonlicensed grower from Nevada County attributed increased costs to regulations around sales and transport: “I would be willing to pay my fair share of taxes on products sold if I could continue to be responsible to test and transport my own product, deal directly with dispensaries as I did for years.” Similarly, a small grower from Mendocino County, who had applied for a license, described lost profits from distributors controlling the pricing structure: “The distributor is controlling prices and gouging farmers because regulations prevent small farmers from taking their products to other licensees.”

Respondents identified possible inconsistency between county, regional and state production regulations as constraining their engagement with the legalization initiative. A large grower from Humboldt County said, “Often, one agency will approve a project, and the other agency involved doesn’t. Then, you are in violation with the approving agency if you don’t do the work, and in violation with the other agency if you do the work.” Respondents identified difficulties in altering their production practices to comply with the new regulatory system. A small grower from Mendocino County indicated that new regulations made previously standard practices illegal: “My situation is totally standard: well fenced-in area, no environmental impact. I grow tomatoes, etc., in hoop houses, and now, because I applied for a license, I suddenly must get a permit for hoop houses that have been here for 15 years.” Several survey participants suggested strategies for improving the regulatory system. A medium grower from Humboldt County, who had applied for two cultivation licenses, argued, “An opportunity to mitigate or a timeline to amortize costs will help small farmers who cannot afford the intense costs associated with regulations.” A small grower from Sonoma County, who was not licensed, suggested, “Keeping grows limited in acreage so that smaller growers can compete is crucial in my mind and will lead to a more diversified agricultural system.” Growers’ responses suggest high rates of noncompliance and characterize legalization as a system that legitimizes the cultivation activities of an exclusive set of growers: large growers with the financial resources to locate their farms in a legal jurisdiction, pay licensing fees, alter their practices and increase production to comply with new laws and remain competitive in legal markets.
It is likely that rates of noncompliance within the broader cannabis grower population are even higher than reported in our data: our survey reached only growers registered on industry listservs, and, even though it was anonymous, it covered illegal livelihood activities, creating potential disincentives to accurately report practices.

Respondents’ accounts of small growers’ exclusion from newly regulated cannabis market opportunities — due to the misalignment of the regulations with existing practices and the costs of compliance — echo the literature on governmental and nongovernmental regulation and certification of production practices in other sectors, in which codification of regulations or standards has led to formal and informal exclusion of some growers from commodity markets. In the United States, for example, structural exclusion has been documented in the voluntary, third-party certification of organic agriculture, because its particular standards and onerous costs have facilitated the dominance of agribusiness at the expense of small growers. Similar exclusionary tendencies are also a defining effect of the rise of the food safety regulatory regime, composed of both state regulations and market-driven audit requirements. Our research indicates similar patterns with the legalization of cannabis: the burden of compliance not only favors larger producers over smaller ones but also shifts profit-making opportunities from producers to non-producers. The illicit market continues in California, and the two markets, legal and illicit, likely influence one another. Disincentives for small growers to participate in legal markets can also be attributed, along with the factors already discussed, to the demand for cannabis in illicit market channels, both in and out of state. As of June 2019, 39 states had yet to legalize cannabis for recreational sales. In California, state and county taxes increase the legal cannabis price, and that higher price may also contribute to in-state illicit market demand. To meet industry analysts’ estimates of $1 billion in tax revenue, at least $7 billion of cannabis needs to be sold through legal markets.
In 2018, $2.5 billion was sold, and the state received $345 million in cannabis tax revenues. Accounts from non-compliant growers of the effects of legalization indicate a need to explore strategies that will incentivize growers’ participation in legal markets. Their accounts also raise questions for more research on the socioeconomic and environmental effects of the state’s licensing system. California’s new cannabis regulations put limits on transportation and distribution, and consolidate supply chains through a limited number of registered distributors. Further analysis of the effects of supply chain consolidation on compliance rates is needed to understand how non-environmental aspects of the licensing system influence cultivation practices. Further research is also warranted on small-producer cooperatives, which in other agricultural sectors have improved growers’ collective access to information, credit and markets, while also enhancing regulatory compliance, community development and innovation. Grower organizations in the cannabis industry include county and statewide policy and lobbying groups, as well as private marketing and environmental advocacy initiatives. Yet, given the historically clandestine nature of production, industry-led cooperatives in the cannabis sector likely do not exhibit the political and economic influence at the state level that cooperatives in other sectors do. At this point, producer organizing can receive only limited support from UC Cooperative Extension personnel because of the restrictions on the use of federal funds for cannabis research or development. Little is known about the ways in which non-compliant growers presently organize to access illicit markets. It is possible that a reliance on clandestine markets creates disincentives to collective production and market access strategies.
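The revenue figures above imply an effective tax rate on legal sales, which can be checked directly from the numbers in the text (the analysts' $1B-on-$7B estimate and the actual 2018 outcome):

```python
# Implied effective tax rate behind analysts' estimate:
# $1 billion of revenue on at least $7 billion of legal sales.
target_rate = 1e9 / 7e9

# Actual 2018 outcome: $345 million of revenue on $2.5 billion of sales.
actual_rate = 345e6 / 2.5e9

print(f"implied target rate ~ {target_rate:.1%}")   # ~ 14.3%
print(f"actual 2018 rate   ~ {actual_rate:.1%}")    # ~ 13.8%
```

The two rates are close, which suggests the shortfall in 2018 revenue reflects the volume of legal sales (about $2.5B versus the $7B needed) rather than a lower-than-expected effective tax burden.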
Illicit growers may be more likely to organize their resources to avoid detection, and, without access to crop insurance or crime reporting, to protect their operations.

Understanding forms of cooperation in clandestine markets may help identify the social as well as economic factors most likely to facilitate compliance. State legalization of cannabis production presents an opportunity for growers to better manage risks and enhance returns. To this end, there is a need for further research and policy exploration of potential participation incentive mechanisms, such as tax credits, crop insurance, small business development grants, extension and training. These mechanisms could promote environmental objectives, community development goals and regulatory compliance. More understanding of what incentivizes growers would help UCCE identify extension efforts most likely to enhance growers’ control over the distribution of economic benefits from legal cannabis cultivation. Analyses of relationships between land use zoning, farm licensing requirements and compliance costs would help inform outreach with state, county and municipal policymakers to promote regulations most likely to elicit compliance and reduce enforcement costs. The high rates of non-licensed production, coupled with growers’ accounts of the effects of legalization on communities, indicate a need for more systematic research on the socioeconomic contributions that non-licensed growers are making. Because cannabis has historically operated as a cash economy, it is likely that the majority of income from cultivation has been spent locally; cash from cannabis is difficult to transport and invest elsewhere. These contributions to local communities were largely unaccounted for in the state’s economic analysis of the medical cannabis cultivation regulations, on which the recreational cultivation licensing program was based.
The analysis identified “significant costs” of regulation for growers, including costs related to local and state licensing, cultivation plan preparation, water and pesticide use approval, farm record maintenance, business license applications, track and trace system operation, processing, legal labor, consultants and farm inputs. The analysis did not address regional effects — for example, the possibility of decreased spending in places with histories of cannabis cultivation as cultivation expands elsewhere and intensifies market competition. Interviews with leaders of cannabis organizations and distributors, growers, and representatives from county employment and benefits departments, among others, to document the socioeconomic changes they experience and witness in this transition to a regulated cannabis market will help build this knowledge base. The state’s economic analysis suggested that labor compliance costs would be the most significant direct regulatory cost for growers. In-depth analyses with growers and workers are needed to illuminate the characteristics of the cannabis labor force and its trajectory since legalization. To mitigate the negative consequences of legalization for growers and rural communities, the exclusionary and racialized effects of regulation also need to be better understood. Legalization of cannabis production in 2017 has generated demands for state regulatory, research and extension agencies, including UC, to address the ecological, social and agricultural aspects of this crop, which has an estimated retail value of over $10 billion. Despite its enormous value and importance to California’s agricultural economy, remarkably little is known about how the crop is cultivated.
While general information exists on cannabis cultivation, such as plant density, growing conditions, and nutrient, pest and disease management, only a few studies have attempted to measure or characterize more specific aspects of cannabis production, such as yield per plant and regional changes in total production area. These data represent only a very small fraction of domestic or global activity and are likely skewed, since they were largely derived not from field studies but indirectly from police seizure data or aerial imagery. In California, where approximately 66% of U.S. marijuana is grown, knowledge of the specific practices across the wide range of conditions under which it is produced is almost nonexistent. Currently, 30 U.S. states have legalized cannabis production, sales and/or use, but strict regulations remain in place at the federal level, where cannabis is classified as a Schedule I controlled substance.

An additional risk associated with working in the service industry involves the opportunity to earn tips

Not surprisingly, these are the same industries that historically have high rates of sexual harassment. Between 2000 and 2015, these industries combined accounted for 28% of all sexual harassment charges filed with the EEOC. Such industries put employees at greater risk of experiencing sexual harassment, especially by customers and clients who sexualize workers and feel entitled to their services. Particularly in service sector industries, there is a prevailing belief in the mantra “the customer is always right” that both allows customers to become sexually forward without fear of consequences and leads employees to respond only informally to such behavior so as not to upset the customer. A study by the Restaurant Opportunities Center found that women employed in restaurants who earn a sub-minimum wage of $2.13 per hour as tipped workers were twice as likely to experience harassment from supervisors, co-workers and customers, compared to women employed in restaurants who received a sub-minimum wage greater than $2.13 per hour. The heavy reliance on tips creates an environment where workers, particularly women, are undervalued and forced to endure injustices for the sake of their income. Additional risk factors for sexual harassment can be identified at the interpersonal and individual levels. At the interpersonal level, working in isolation is also associated with reports of harassment and general workplace violence. Environments in which workers become isolated from peers give harassers easy access to targets and leave workers with fewer chances to interact with others in their environment and to signal if they need assistance.

Additional interpersonal risk factors in the workplace include power differentials and the abuse of power, discussed in more detail below. Individual risk factors associated with a worker’s vulnerability include gender, sexual orientation and age. As previously mentioned, although anyone can experience sexual harassment, women are most often victimized and thus at greater risk of experiencing harassment than men. Likewise, studies repeatedly indicate that perpetrators are most likely to be men. Aside from women, individuals who identify as queer, either in their sexual orientation or gender expression, including lesbian, gay, bisexual, and transgender people, also face greater risks of experiencing general discrimination and sexual harassment. A meta-analysis of 386 studies on the victimization of LGBT individuals found that approximately 50% of individuals across all samples experienced sexual harassment. Although comparative studies examining rates of sexual harassment between heterosexual and LGBT samples have mixed findings on effect sizes, they lean towards sexual minorities experiencing greater victimization than heterosexual-identifying individuals. In addition to the risks posed by one’s gender and sexual orientation, young and unmarried female workers are most often targeted as victims of sexual harassment. Most service sector employees are relatively young adults between the ages of 15 and 25 who face greater risks of harm in the workplace. Because of their age, workers are often unaware of their rights, which include a safe work environment free of harassment as well as entitlement to fair pay. Consequently, they may not be equipped with the information or tools to formally handle an experience of sexual harassment.
Responses and coping mechanisms to sexual harassment are just as critical to understanding the context of harassment in the workplace as are the individual and organizational risk factors that predict harassment among vulnerable workers.

However, while the majority of studies focus on investigating the frequency and prevalence of harassing behaviors, many do not address how workers react to such behavior. According to the USMSP, individual responses to harassing behaviors can be categorized as active responses, avoidance and toleration. Among the three categories, the top three behaviors employed by federal workers in response to harassment were asking the harasser to stop, avoiding the harasser, and ignoring the behavior or simply doing nothing. The action, or lack thereof, that an employee takes to address sexual harassment is related to multiple levels of influence: the severity of the incident, the power the employee holds in their place of work, the social support provided by their workplace and their own cultural profile. Studies investigating coping mechanisms have found strong connections between both the severity and frequency of the harassment and response patterns. For example, engaging in detached behaviors was associated with a significantly lower frequency of unwanted sexual attention than engaging simultaneously in avoidance of the behavior and negotiation with the perpetrator; however, the direction of this relationship is ambiguous. Studies have also found non-assertive actions to address sexual harassment to be more common when the sexually harassing behavior was not considered severe. Workers also opt for non-assertive responses when the source was someone other than a supervisor. This is consistent with previous studies, which have found that workers do not take action against customers to avoid crossing an ambiguous boundary between providing “good customer service” and protecting themselves. Studies have found that workplaces with few policies in place regarding sexual harassment are associated with passive responses to sexual harassment. This is not surprising given the lack of formal venues for filing complaints.
Women whose workplaces employed only informal policies for addressing harassment were also less likely to engage in any form of direct response, for similar reasons. Finally, cultural and social factors can influence a worker’s reaction and coping in response to harassment. The study by Cortina and Wasti found that White women were more likely to practice detached behaviors compared to Latina women, who practiced avoidant-negotiating behaviors and whose culture is historically more patriarchal and communal.

Despite cultural differences, both styles of coping are ultimately non-confrontational. This general lack of combative action can also be explained by the shame women are socially taught to feel in response to harassment, as well as the responsibility they feel towards protecting the perpetrator. Understanding that sexual harassment is common in the service sector, the current study seeks to shed light on sexual harassment in cannabis dispensaries, a recently legalized industry, within the context of Los Angeles County. With the passage of Proposition 64 in November 2016, the possession, use and retail of recreational marijuana were decriminalized in California through the Medicinal and Adult-Use Cannabis Regulation and Safety Act. Beginning in January 2018, California began to issue licenses for the legal operation of medical and adult-use cannabis shops, and by the end of the year, the California Department of Tax and Fee Administration reported that cannabis shops produced $345 million in tax revenue for the state, with the highest concentration of shops located in Los Angeles. While there are many studies that focus on cannabis consumers — health outcomes and public safety issues related to the legalization and use of cannabis — little attention has been paid to workers in the industry. The small amount of occupational safety and health literature that does exist regarding the cannabis industry focuses on the biological, chemical, and physical hazards associated with cannabis flower and its production into various cannabis-derived products. However, the hazards that affect the safety of cannabis workers extend beyond the flower and stem from the industry’s long and complicated history with sexual abuse.
In a 2016 article by Reveal News, female trimmers from the Emerald Triangle of Northern California shared stories of sexual abuse, including being asked to trim topless and being forced to perform sexual acts in order to receive payment. Similarly, in 2019, Vice published an article documenting unfair work practices within the industry, including 10- to 15-hour shifts and the sexual harassment of female budtenders by shop owners. Until recently, cannabis flourished in the black market, where it was produced, cultivated and distributed with little to no formal monitoring or regulation. Given the risks associated with being involved in the cannabis industry prior to legalization, it was a very secretive industry to navigate. That secrecy helped to establish a “culture of silence” against reporting abuses in the workplace, particularly regarding sexual harassment and exploitation. Because of the industry’s history, there is a need to ensure that workers entering the business are protected and treated with respect, as with any other workforce. Institutionalized sexism permeates several aspects of the industry, and cannabis companies are not exempt from marketing strategies that use sex appeal to sell products. For example, the brand Ignite pairs images of half-naked women with animals and poorly formed cannabis puns, showcasing the misogyny and harassment that exist within the industry. The sexism which breeds harassment is not only evident in advertising; it is also apparent in hiring practices, as women have historically been hired not only to sell product but to simultaneously serve as attractive promotional models for a brand.

Although this is more common in illegal retail fronts known as “trap shops,” it is a distinguishing characteristic of the industry. This in turn has led to accounts from workers describing instances of overtly touchy customers and co-workers, as well as instances of their product knowledge being undermined because of their appearance and gender. A report documenting the prevalence of sexual abuse in the cannabis industry has been published by New Frontier Data, a company whose primary mission is to collect and analyze cannabis-related data to better inform businesses and investors; however, the report is not publicly available, so its methodology, study sample, and results cannot be examined directly. Despite these challenges with accessibility, the major findings of the report have been published through cannabis-related news outlets and suggest that, among the 1,741 cannabis-industry workers who participated, levels of workplace violence are high relative to other industries. Sexual harassment is also a widespread issue in the industry, with nearly 27% of participants reporting that they have witnessed it and 18% reporting that they have experienced it themselves. An additional one-third of participants reported that they knew someone who had been sexually harassed in the industry. When filtered to include only responses from female employees working in non-ownership or non-management positions, the percentage of workers who have experienced sexual harassment decreased slightly to 14%. Concurrently, the percentage of those who know someone who has experienced harassment increased to 49%, indicating the effect of power structures on the likelihood of experiencing harassment. As the cannabis industry continues to expand within California and across the United States, it is becoming closely intertwined with the labor movement through its growing union representation.
Among the many groups that have fought to legalize cannabis, the United Food and Commercial Workers (UFCW) union was the first union heavily involved, backing the 2010 campaign for Proposition 19, a previous attempt to legalize the recreational use of cannabis in California. Although the campaign failed, UFCW continued to support the market for cannabis in California, and especially in Los Angeles. In 2012, the city of Los Angeles attempted to ban all sales of cannabis. UFCW Local 770 fought the ban in order to protect the jobs of dispensary workers in the city, and its efforts resulted in Proposition D, passed in 2013, which protected 135 medical marijuana dispensaries that had earned their licenses before 2007. UFCW continued its efforts to protect jobs in cannabis with its support of the 2016 campaign for MAUCRSA. As part of MAUCRSA, UFCW negotiated labor peace agreement provisions into the medical and recreational-use laws; under these provisions, workers at any cannabis shop in California with 20 or more employees are afforded the opportunity to join a union. However, despite the labor movement’s progressive ideology of protecting workers’ rights, it is important to address the complicated history that labor unions also have with sexual harassment, as women’s rights and workers’ rights have not always been advocated for concurrently. For example, during the Jenson v. Eveleth Taconite Co. class-action lawsuit against workplace sexual harassment, the United Steelworkers Local 6860, responsible for representing the female workers and plaintiffs in the case, was found to be an inadequate source of protection for female members reporting incessant harassment and abuse from co-workers.


In addition, a spatially and temporally resolved video of the temperature gradient versus time and the power produced versus time can be played to give an even better illustration of how the cells in a SOFC stack experience sudden and potentially damaging temperature fluctuations. A snapshot of the spatially and temporally resolved video tool is displayed in Figure 54. The information provided by the video can help in understanding the severe thermal stresses that the cells undergo during dramatic power demand spikes and provide some preliminary insight into their potential degradation and lifetime characteristics.

The far-reaching benefits of the internet are not limited to developed countries. Studies have shown that developing countries can benefit significantly more than developed countries from access to the internet. With so much of the world’s population currently having little or no internet connectivity, the following question is raised: can the infrastructure that society now relies on to carry all this digital traffic keep up with the accelerating demand? Recent advances in technology have introduced an alternative to data storage by shifting it from localized on-site physical storage to large-scale out-of-sight centralized storage. This vast, dispersed network of centralized storage systems throughout the world has come to be referred to as the Cloud. For a traditional data center connected to the electric grid, less than 35% of the energy from the fuel source is delivered to the data center. The most significant inefficiencies result from power plant generation losses and transmission and distribution losses. In addition, within a data center, there are further losses associated with the infrastructure required for reliable daily operation.

The additional power consumed by these resources results in less than approximately 17.5% of the energy from the fuel source being delivered to the servers. It has been demonstrated that further reductions in greenhouse gas and criteria pollutant emissions can be achieved by implementing advanced alternative methods of power production that are more environmentally friendly and reliable than combustion-based power. Fuel cells are a strong alternative to combustion-based power and can convert fossil or renewable fuels into electricity more efficiently and with lower emissions. There are many reasons to consider fuel cells for data center power production, but a few stand out. First, high-temperature fuel cells are inherently fuel flexible and can operate directly on natural gas, so their reliability is governed by the reliability of the natural gas grid. Estimates show that the natural gas grid exhibits greater than five-nines reliability, much higher than the three-nines reliability exhibited by the electric grid. Second, the gas distribution network within a data center is much cheaper than the high-voltage switchgear, transformers, and copper cables required to connect to the electric grid. If fuel cells are placed closer to the power consumption units, then data centers can eliminate the power distribution and backup power generation systems entirely. This would be highly favorable because the electrical infrastructure accounts for over 25% of the capital cost of state-of-the-art data centers. Third, fuel cells are environmentally friendly, with near-zero pollutant and greenhouse gas emissions. Although initial phases of introducing fuel cells into the data center would require that they operate on natural gas, even with this fuel, fuel cell emissions are much cleaner and generation is far more efficient than combustion.
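The ~35% and ~17.5% endpoints quoted above, and the five-nines versus three-nines reliability comparison, can be checked with simple arithmetic. A minimal sketch follows; the individual plant efficiency, transmission loss, and PUE values are illustrative assumptions chosen to reproduce the endpoints in the text, not figures from the source.

```python
# Illustrative energy-chain arithmetic for a grid-connected data center.
# plant_eff, td_eff, and pue are assumed values, not from the source text.

plant_eff = 0.38   # assumed power plant generation efficiency
td_eff = 0.92      # assumed transmission & distribution efficiency
pue = 2.0          # assumed power usage effectiveness (facility / IT power)

delivered_to_datacenter = plant_eff * td_eff          # fuel -> facility
delivered_to_servers = delivered_to_datacenter / pue  # facility -> IT load

print(f"fuel -> data center: {delivered_to_datacenter:.1%}")  # ~35%
print(f"fuel -> servers:     {delivered_to_servers:.1%}")     # ~17.5%

# Availability: "five nines" gas grid vs "three nines" electric grid
HOURS_PER_YEAR = 8760
downtime_gas_min = (1 - 0.99999) * HOURS_PER_YEAR * 60  # minutes per year
downtime_elec_hr = (1 - 0.999) * HOURS_PER_YEAR         # hours per year
print(f"gas grid downtime:      ~{downtime_gas_min:.1f} min/year")
print(f"electric grid downtime: ~{downtime_elec_hr:.1f} h/year")
```

The two availability figures differ by roughly a factor of one hundred in annual downtime, which is the quantitative content behind the reliability argument in the text.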

Carbon dioxide emissions have the potential to be reduced by 49%, nitrogen oxides by 91%, carbon monoxide by 68%, and volatile organic compounds by 93%. Recognizing the significant energy sink at the power plant level, Microsoft made a commitment to make its operations carbon neutral. Microsoft has envisioned a new concept for its data centers, aptly labelled the ‘stark’ design. The company proposed a direct generation method that places fuel cells at the server rack level, inches from the servers. The close proximity allows for the direct use of DC power without the large capital cost, potential for failures, and efficiency penalties associated with AC-DC inversion equipment. As a result, power distribution units, backup power generation equipment, high-voltage transformers, expensive switchgear, and AC-DC power supplies in the servers can be removed from data centers entirely. Although a PEMFC system has demonstrated the necessary dynamic load-following capabilities, its demand for hydrogen was a major drawback for immediate implementation. Given the limitations of the burgeoning hydrogen infrastructure, Microsoft opted for a high-temperature SOFC system for its fuel-flexible capability, which would allow quick installation and implementation in any building with an existing natural gas network. An added benefit of operating on natural gas is the highly reliable nature of the natural gas network, which in turn contributes to the high reliability of the fuel cell system. A relatively inexpensive and commercially tested SOFC system was needed to begin the research work. SolidPower’s Engen-2500 system was selected because it was commercially available and touted as one of the most efficient SOFC systems to date. SolidPower is an experienced developer of SOFC-based systems and has showcased its newly developed Engen-2500 micro-combined heat and power SOFC appliance for home and industry.
This micro-cogeneration system, Engen-2500, is a great solution for projects ranging from 2.5 kW up to 20 kW of electrical power.
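Since projects range from 2.5 kW up to 20 kW, sizing an installation reduces to integer arithmetic on the per-unit rating. A minimal sketch, assuming each appliance contributes its full 2.5 kW rating; the helper function name is invented for illustration.

```python
import math

def units_required(target_kw: float, unit_kw: float = 2.5) -> int:
    """Number of 2.5 kW appliances needed to meet a net power target."""
    return math.ceil(target_kw / unit_kw)

# The text cites projects from 2.5 kW up to 20 kW of electrical power:
for target in (2.5, 10, 20):
    print(f"{target} kW -> {units_required(target)} unit(s)")
```

A 20 kW project, the top of the quoted range, therefore corresponds to eight appliances combined in this way.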

The technology offers the ability to combine multiple Engen-2500 appliances in series. The Engen-2500 system is a floor-standing unit generating a maximum of 2.5 kWe of net AC power and can run solely on natural gas fuel from the grid at normal supply pressure. In analyzing the Engen-2500 system with the SOFC system model developed for this thesis, spatial resolution is achieved by taking a single cell and dividing it into a grid of smaller elements, referred to as nodes. Each node is broken down further into the five distinct segments that comprise a single cell: the oxidant separator plate, cathode gas stream, electrolyte, anode gas stream, and fuel separator plate. Spatially resolving each segment of a single cell permits localized dynamic analysis of the conservation of mass, energy, and momentum equations while also locally evaluating the temperature, species mole fractions, pressure, and other required characteristics. The dynamic analysis of one cell is then scaled to the number of cells in the stack, ultimately representing the dynamic characteristics of the entire stack component. The same discretization method is applied to all components of the Engen-2500 SOFC system. The transport phenomena and electrochemical reactions, evaluated at each locally resolved temperature, species mole fraction, and pressure, determine the performance of each component in the SOFC system. It is important to note that the results presented in this thesis incorporated parameter values based on educated guesses. Comparing all the model results for the polarization, power, outlet temperature, and flow rate plots, it is evident that the simulations in round three provided the best all-around match to the experimental data. From the simulations, it is understood that, for high-temperature fuel cells, matching the anode outlet temperature matters much more for the electrochemistry than matching the cathode outlet temperature.
The reason for this is believed to be that the electrochemical kinetics occur primarily at the triple-phase boundary at the anode-electrolyte interface. An interesting trend was noticed when verifying the model outputs against the experimental results: the system model consistently predicts anode and cathode outlet temperatures that are very close to one another. This could be due in large part to the difficulty of accurately modeling the heat transfer, especially given the limited information provided by SolidPower. The most revealing piece of information provided by SolidPower showed the locations of the fuel inlet and outlet thermocouples in the Engen-2500 stack. However, the air inlet and outlet thermocouples were not labelled in the provided confidential diagram. This leads me to question the reliability of the measured cathode outlet temperature, since the location of the thermocouple could give additional insight into the heat transfer behavior. Furthermore, the model predicts the anode and cathode outlet temperatures at the immediate exit of the stack. Therefore, if an air or fuel thermocouple is placed some distance away from the immediate stack exit, the temperature it measures could indeed differ from the temperature at the stack exit because of convection and, given the high operating temperatures, potentially radiation heat transfer. A few recommendations can be made to improve the accuracy of the fuel cell model, particularly concerning the operation of the fuel cell stack. The following recommendations would help tremendously with model verification efforts, despite the potential for significant increases in computational time and power.
The first recommendation would be to divide the PEN further into the anode, electrolyte, and cathode components for additional accuracy in calculating the electrochemical reaction kinetics at the anode-electrolyte interface and cathode-electrolyte interface, specifically concerning species conservation.

Currently, the model divides the cell into five distinct components: the fuel separator plate, anode gas stream, PEN, cathode gas stream, and oxidant separator plate. Splitting the PEN into the anode, electrolyte, and cathode could therefore help provide more accurate results. Doing so would also improve the modeling of the vertical heat transfer that occurs in the direction crossing all five components. Then, whether the fuel cell stack under consideration is anode-supported or electrolyte-supported, heat transfer can be modeled more accurately, hopefully resolving the errors encountered in this thesis with either the anode outlet temperature or the cathode outlet temperature. The second recommendation would be to separate the activation and concentration overpotential calculations to achieve more accurate overpotentials when operating the model at very low or very high current densities. Through repeated simulations, and as is evident in the steady-state results section of this thesis, the lowest and highest current densities were the most inaccurate simulation points. The last recommendation would be to introduce a current-based controller that can replace the power controller for simpler and faster steady-state model verification assessments. A great deal of time and care was needed to assess the steady-state model results because the experimental results were based on different current-level inputs; significant effort was therefore required to repeatedly adjust the electrochemical parameters to achieve the appropriate current levels.

In 2016, when voters approved Proposition 64, they set the stage for radical change across California’s cannabis landscape. Licensed, regulated cannabis stores would soon throw open their doors. The state’s vast cannabis industry would begin to emerge from illegality, though unlicensed operations would surely persist.
UC researchers immediately understood that cannabis legalization would present California with pressing new questions, along numerous dimensions, that could only be answered through rigorous, broad-ranging research. How would legalized cannabis cultivation affect the state’s water, wildlife and forests? How might impaired driving, or interconnections between cannabis and tobacco, influence public health? How would tax and regulatory policy affect the rate at which cannabis cultivators abandoned the illegal market? These questions and many more are now the subject of research around the UC system, and multiple campuses are establishing centers dedicated to cannabis research. This article surveys UC’s emerging architecture for cannabis research in the legalization era and presents a sampling of notable research projects, both completed and ongoing. The Cannabis Research Center (CRC) at UC Berkeley is an interdisciplinary program that brings together social, physical and natural scientists to evaluate the environmental impacts of cannabis cultivation, investigate the policy-related and regulatory dimensions of cultivation, and directly engage cannabis farmers and cannabis-growing communities. The center, according to Ted Grantham — one of three CRC co-directors and a UC Cooperative Extension assistant specialist affiliated with UC Berkeley’s Department of Environmental Science, Policy, and Management — is “focused on cannabis as an agricultural crop, grown in particular places by particular communities with unique characteristics.” For Grantham and the center’s co-founders, establishing the program was “a chance to develop policy-relevant research at the time of legalization and a time of rapidly shifting cultivation practices.” The center’s co-directors, in addition to Grantham, are Van Butsic — a UCCE assistant specialist affiliated with the same department — and Eric Biber, a UC Berkeley professor of law.
Other CRC researchers are associated with entities such as the UC Berkeley Department of Integrative Biology, the UC Berkeley Geography Department, the UC Merced Environmental Engineering program and The Nature Conservancy. The center itself is affiliated with the UC Berkeley Social Science Matrix. The CRC formally launched with a public event in January.


The inquiry into the active chemical constituents of Cannabis turned out to be more time-consuming than expected. Many other plant-derived compounds, such as morphine and atropine, had long been identified when the Cannabis plant finally yielded its active principle, the terpenoid derivative ∆9-tetrahydrocannabinol (THC). The psychoactive properties of THC were recognized immediately, but the drug’s unique chemical structure offered no hints as to its mechanism of action. To complicate matters further, the hydrophobic nature of THC delayed experimentation and suggested that the compound might act by influencing membrane fluidity, rather than by combining with a specific receptor. This impasse was resolved by the development of new classes of potent and selective THC analogues, which led eventually to the pharmacological identification of cannabinoid-sensitive sites in the brain. The CB1 cannabinoid receptor was molecularly cloned from rat brain in 1990, and its immune-system counterpart, the CB2 receptor, was identified by sequence homology three years later. These discoveries not only established the mechanism of action of THC, thereby fuelling the development of subtype-selective agonists and antagonists, but also initiated a hunt for brain-derived cannabinoid ligands. Surprisingly, the first THC-like factor to be isolated was a lipid, rather than the peptide that had been expected on the basis of the precedent set by morphine and the enkephalins. It was identified as the amide of ARACHIDONIC ACID with ethanolamine, and named anandamide after the Sanskrit word for bliss, ananda. This small lipid molecule resembled no known neurotransmitter, but it did share structural features with the EICOSANOIDS, mediators of inflammation and pain with various functions in neural communication.

Though initially controversial, the signalling roles of anandamide were confirmed by the elucidation of the compound’s unique metabolic pathways and the demonstration of its release in the live brain. As the search for THC-like compounds continued, other bioactive lipids were extracted from animal tissues. These include 2-arachidonoylglycerol (2-AG), noladin ether, virodhamine and N-arachidonoyldopamine. In this article, I review the synthesis, release and deactivation of the endogenous cannabinoids. I then outline the properties and distribution of brain CB1 receptors. Last, I describe the function of the endocannabinoids as local modulators of synaptic activity and their contribution to memory, anxiety, movement and pain. The membranes of plant cells contain a family of unusual lipids that consist of a long-chain FATTY ACID tethered to the head group of PHOSPHATIDYLETHANOLAMINE (PE) through an amide bond. When attacked by a PHOSPHOLIPASE D (PLD) enzyme, these membrane constituents generate a set of FATTY ACID ETHANOLAMIDES, which are used by plants as intercellular signalling molecules. They are released from cells in response to stress or infection, and stimulate the expression of genes engaged in systemic plant immunity. This ancestral biochemical device is conserved in mammalian cells, which use the ethanolamide of arachidonic acid, anandamide, as a primary component of the endocannabinoid signalling system. Anandamide formation in neurons is a two-step process, which parallels fatty acid ethanolamide production in plants. The first step is the stimulus-dependent cleavage of the phospholipid precursor N-arachidonoyl-PE. This reaction is mediated by an uncharacterized PLD and produces anandamide and phosphatidic acid, a metabolic intermediate that is used by cells in the synthesis of other glycerol-derived phospholipids. Genes encoding two PLD isoforms have been cloned in mammals, but it is not known whether either of these enzymes is responsible for anandamide synthesis.
The brain contains tiny quantities of N-arachidonoyl-PE — probably too little to sustain anandamide release for an extended time.

The cellular stores of this precursor are replenished by the enzyme N-acyltransferase (NAT), which catalyses the intermolecular passage of an arachidonic acid group from the SN-1 position of PHOSPHATIDYLCHOLINE to the head group of PE. In cultures of rat cortical neurons, two intracellular second messengers control NAT activity: Ca2+ and cyclic AMP (cAMP). Ca2+ is required to engage NAT, which is inactive in its absence, whereas cAMP works through protein kinase A-dependent phosphorylation to enhance NAT activity. Although catalysed by separate enzymes, the syntheses of anandamide and its parent lipid are thought to proceed in parallel, because Ca2+-stimulated anandamide production is generally accompanied by de novo formation of N-arachidonoyl-PE. As expected of a Ca2+-activated process, anandamide formation can be elicited by Ca2+ ionophores, which carry Ca2+ ions across cell membranes. For example, in cultures of rat striatal neurons labelled by incubation with [3H]ethanolamine, the Ca2+ ionophore ionomycin stimulates accumulation of [3H]anandamide. A similar stimulation is produced by kainate, 4-aminopyridine or membrane-depolarizing concentrations of K+, and can be prevented by chelating extracellular Ca2+. The Ca2+ dependence of anandamide synthesis was also demonstrated using MICRODIALYSIS. Administration of a high-K+ pulse in the rat striatum caused a reversible increase in interstitial anandamide concentrations, which was prevented by removal of Ca2+ from the perfusing solution. Although neural activity induces anandamide release in a Ca2+-dependent manner, Ca2+ entry into neurons is not the only determinant of anandamide generation: there is evidence that G-protein-coupled receptors can also trigger this process. For example, application of the dopamine D2-receptor agonist quinpirole causes an eightfold increase in anandamide outflow in the rat striatum, which is prevented by the D2-receptor antagonist raclopride.
This response is accompanied by an elevation in tissue anandamide content, indicating that it might be due to a net increase in anandamide formation rather than to extracellular release of preformed anandamide. Muscarinic acetylcholine receptors and metabotropic glutamate receptors can also cause endocannabinoid release in hippocampal slices in a Ca2+-independent manner, but the substances involved have not been identified.

How does occupation of D2 receptors initiate anandamide synthesis? Inhibition of cAMP formation, a hallmark of D2-receptor signalling, is unlikely to be responsible for this effect, because cAMP positively regulates NAT activity. Alternatively, D2 receptors could interact with the Rho family of small G proteins to stimulate PLD activity, or they might engage β–γ subunits of G proteins to activate phospholipase Cβ (PLCβ). PLCβ catalyses the cleavage of phosphatidylinositol-4,5-bisphosphate to produce inositol-1,4,5-trisphosphate, which might then recruit the NAT/PLD pathway by mobilizing Ca2+ from internal stores. Like other MONOACYLGLYCEROLS, 2-AG is at the crossroads of multiple routes of lipid metabolism, where it can serve interchangeably as an end-product for one pathway and a precursor for another. These diverse metabolic roles can explain its high concentration in brain tissue, and imply that a significant fraction of brain 2-AG is engaged in housekeeping functions rather than in signalling. The place occupied by 2-AG at central intersections of lipid metabolism also complicates efforts to define the biochemical pathway responsible for its physiological synthesis. There is, however, enough information to indicate two possible routes. The first begins with the phospholipase-mediated formation of 1,2-diacylglycerol (DAG). This product regulates protein kinase C activity — an important second messenger function — and is a substrate for two enzymes: DAG kinase, which attenuates DAG signalling by catalysing its phosphorylation to phosphatidic acid; and DAG lipase (DGL), which hydrolyses DAG to monoacylglycerol. The fact that drug inhibitors of PLC and DGL block Ca2+-dependent 2-AG accumulation in rat cortical neurons indicates primary involvement of this pathway in 2-AG formation.
An alternative pathway of 2-AG synthesis begins with the production, mediated by phospholipase A1 (PLA1), of a 2-arachidonoyl-LYSOPHOSPHOLIPID, which might be hydrolysed to 2-AG by lyso-PLC activity. Although there is no direct evidence for this mechanism in 2-AG formation, the high level of PLA1 expression in brain tissue makes it an intriguing target for future investigation. In addition to the phospholipase-operated pathways outlined above, monoacylglycerols can be produced by hormone-sensitive lipase acting on triacylglycerols, or by lipid phosphatases acting on lysophosphatidic acid. In general, however, these enzymes preferentially target lipids that are enriched in saturated or monounsaturated fatty acids, rather than the polyunsaturated species that would give rise to 2-AG. Irrespective of its exact mechanism, neuronal 2-AG production can be initiated by an increase in the concentration of intracellular Ca2+. In cultures of rat cortical neurons, the Ca2+ ionophore ionomycin and the glutamate receptor agonist NMDA stimulate 2-AG synthesis in a Ca2+-dependent manner. Likewise, in freshly dissected hippocampal slices, high-frequency stimulation of the SCHAFFER COLLATERALS produces a Ca2+-dependent increase in tissue 2-AG content. Importantly, this treatment has no effect on the concentrations of non-cannabinoid monoacylglycerols, such as 1-palmitoylglycerol, which indicates that 2-AG formation is not due to a generalized increase in the rate of lipid turnover. Furthermore, high-frequency stimulation does not alter hippocampal anandamide concentrations, indicating that the syntheses of 2-AG and anandamide can be independently regulated. In further support of this idea, activation of D2 receptors — a potent stimulus for anandamide formation in the rat striatum — has no effect on striatal 2-AG concentrations. Noladin ether is an ether-linked analogue of 2-AG that binds to and activates CB1 receptors.
Its pathway of formation has not been characterized, and its occurrence in the normal brain has been questioned. Virodhamine, the ester of arachidonic acid and ethanolamine, might act as an endogenous CB1 antagonist. Its presence in brain tissue has been documented, but is intriguing because this chemically unstable molecule is rapidly converted to anandamide in aqueous environments.

The mechanism of its synthesis is unknown, and its deactivation might share anandamide’s pathways of uptake and intracellular hydrolysis. Finally, the endogenous vanilloid agonist N-arachidonoyldopamine also exhibits affinity for cannabinoid receptors in vitro. How are endocannabinoids released from cells, and how do they reach their targets? Classical transmitters and neuropeptides can diffuse through the water-filled space that surrounds neurons, but hydrophobic compounds such as anandamide and 2-AG tend to remain associated with lipid membranes. One possibility is that endocannabinoids might not leave the cell in which they are produced; rather, they could move sideways within the plasmalemma until they collide with membrane-embedded CB1 receptors. This hypothesis is supported by the role of an intramembranous amino-acid residue in the binding of anandamide to CB1, as well as by the finding that certain cannabinoid agonists can approach the receptor by lateral membrane diffusion. Nevertheless, it does not account for two pieces of evidence. First, anandamide is found in the incubation media of cells and in brain interstitial fluid, implying that it can overcome its tendency to partition into membranes. Perhaps more importantly, physiological experiments have shown that an endocannabinoid substance does leave postsynaptic cells to activate CB1 receptors on adjacent axon terminals. This unidentified compound might travel as far as 20 µm from its cell of origin before being eliminated. If endocannabinoids are released from neurons, what is the mechanism of their release? The fact that plasma membranes contain precursor molecules for both anandamide and 2-AG indicates that they could leave the cell as soon as they are formed. Extracellular lipid-binding proteins such as the lipocalins, which are expressed at high levels in the brain, might facilitate this step and help to deliver endocannabinoids to their cellular targets.
Although this scenario awaits confirmation, it does mirror what happens in the bloodstream, where anandamide’s movements are made possible by its reversible binding to serum albumin. Anandamide and 2-AG can diffuse passively through lipid membranes, but this process is accelerated by a rapid and selective carrier system that is present in both neurons and glial cells. Although it is superficially similar to other transmitter systems, endocannabinoid transport is not driven by transmembrane Na+ gradients, indicating that it might be mediated by a FACILITATED DIFFUSION mechanism. In this respect, neural cells seem to internalize anandamide and 2-AG in a manner similar to fatty acids, eicosanoids and other biologically relevant lipids, by using energy-independent carriers. Several lipid-carrier proteins have been molecularly cloned, inspiring optimism that, despite current controversy, the endocannabinoid transporter will eventually be characterized. Meanwhile, to gain insight into the role of transport in endocannabinoid inactivation, we can rely on an expanding series of pharmacological transport inhibitors. The prototype is AM404, which slows the elimination of both anandamide and 2-AG, magnifying their biological effects. This inhibitor has helped to unmask important roles of the endocannabinoid system in the regulation of neurotransmission and synaptic plasticity, but it suffers from various limitations, including an affinity for VANILLOID RECEPTORS and susceptibility to enzymatic attack by fatty acid amide hydrolase (FAAH). FAAH is an intracellular membrane-bound serine hydrolase that breaks down anandamide into arachidonic acid and ethanolamine. It has been molecularly cloned, and its catalytic mechanism, which allows it to recognize a broad spectrum of amide and ester substrates, has been elucidated in detail.

The technology offers the ability to combine multiple Engen-2500 appliances in series

Dr. Mueller’s work suggests that through simulation, transient understanding, and control development, integrated control strategies can be developed and implemented to enable rapid SOFC system transient load following and improve disturbance rejection capabilities. Dr. McLarty put tremendous effort into significantly improving the architecture and computational accuracy of the NFCRC dynamic fuel cell modeling tools. His work improved upon previous fuel cell dynamic modeling studies: a novel model was developed to spatially and temporally resolve the transient temperature, pressure, and species distributions for a simulated fuel cell stack in a computationally efficient manner. The model accounts for internal manifolding of fuel and oxidant streams and predicts two-dimensional fields associated with the dynamic operation of a single high temperature fuel cell. The readily calibrated novel model can accurately capture the dynamic performance of both planar solid oxide fuel cell and molten carbonate fuel cell systems of various sizes and flow configurations over a significant range of spatial resolutions. Higher spatial resolution in modeling efforts is likely to lead to more accurate simulations that identify the precise locations of the most severe thermal gradients for any particular flow geometry considered. Figure 15 illustrates the spatial resolution of the electrolyte and its temperature gradient across the cell. Simulation at any spatial resolution is readily accomplished by specifying the number of rows and columns in the model initialization file. In addition, various air and fuel flow directions can be specified, including co-flow, counter-flow, and cross-flow patterns. Previous dynamic SOFC and MCFC models have developed controls for basic load following operation, but have not captured the spatial resolution or internal heat transfer characteristics necessary for accurate spatial temperature gradients.

The spatial resolution methodology is based upon the quasi-dimensional discretization of a single fuel cell node into five distinct control volumes in the through-cell direction: the oxidant separator plate, the cathode gas stream, the electrodes and the appropriate electrolyte, the anode gas stream, and the fuel separator plate. Within each volume, the temperature, species concentrations, pressure, voltage, and current density are locally evaluated with dynamic conservation of mass, energy, and momentum equations. Models of additional fuel cell system components, such as heat exchangers, external reformers, and thermal oxidizers, are typically integrated with these spatially resolved fuel cell models to form complex models of integrated energy systems that incorporate the resolution of individual system component physics, chemistry, and electrochemistry. Although the model analyzes the operation of a single fuel cell, the results are representative of the fuel cell stack. In other words, the model takes the two-dimensional operation of a single fuel cell and applies it to the three-dimensional stack by multiplying the model results by the number of cells included in the stack. An illustration of this is shown in Figure 16. Dr. McLarty identified the positive electrode-electrolyte-negative electrode temperature gradient as an important control parameter that is heavily influenced by the power density, inlet temperature, and air flow rate. In addition, transient responses emphasize the drastically different time scales associated with electrochemical performance and cell thermal dynamics. Novel control strategies have the potential to exploit the intermediate time scales for rapid transient response of a fuel cell system with minimal cell thermal fluctuations.
Therefore, detailed physical models must be employed to study system level transient responses and determine the delicate balance between performance and longevity under dynamic operating conditions.
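To make the rows-by-columns discretization concrete, the sketch below sets up a toy finite-volume update for the electrolyte temperature field of a single cell in a co-flow pattern. All physical parameters (heat generation, conduction and convection coefficients, lumped heat capacity, gas heating profile) are illustrative placeholders, not values from the NFCRC model:

```python
import numpy as np

def simulate_plate_temperature(rows=5, cols=5, steps=2000, dt=0.05,
                               t_inlet=973.0, q_gen=1.2e4,
                               k_cond=2.0, h_conv=80.0):
    """Toy finite-volume model of the electrolyte temperature field of a
    single planar cell, co-flow pattern. All parameters are illustrative."""
    T = np.full((rows, cols), t_inlet)   # solid temperature field, K
    rho_cp_v = 500.0                     # lumped heat capacity per node, J/K (assumed)
    # Crude co-flow gas temperature profile: gas heats up along the columns
    t_gas = t_inlet + 5.0 * np.arange(1, cols + 1)
    for _ in range(steps):
        # Conduction between neighbouring solid nodes (explicit stencil)
        lap = np.zeros_like(T)
        lap[1:, :] += T[:-1, :] - T[1:, :]
        lap[:-1, :] += T[1:, :] - T[:-1, :]
        lap[:, 1:] += T[:, :-1] - T[:, 1:]
        lap[:, :-1] += T[:, 1:] - T[:, :-1]
        # Energy balance: generation + conduction + convection from gas stream
        dT_dt = (q_gen + k_cond * lap + h_conv * (t_gas - T)) / rho_cp_v
        T = T + dt * dT_dt
    return T

T = simulate_plate_temperature()
# Along-flow (column-wise) temperature difference: a proxy for the PEN
# temperature gradient identified as a key control parameter.
along_flow_rise = T[:, -1] - T[:, 0]
```

Sweeping `rows` and `cols` mirrors the model-initialization choice of spatial resolution described above; finer grids localize the steepest along-flow gradients more precisely.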

Dr. Zhao’s most recent research is particularly relevant to the research discussed in this thesis. Microsoft has chosen to work with the NFCRC to investigate the feasibility of a new data center concept – recall the aforementioned ‘stark’ design – that includes the detailed integration of fuel cell technology into server racks. The stark concept eliminates the need for backup power systems and can introduce significant emissions reductions and energy savings while enhancing data center availability and reliability. Figure 17 illustrates the potential energy savings at each step along the energy supply chain by utilizing in-rack fuel cell technology, beginning with the fuel resource and ending at the server level. Note the significant energy savings in the initial step: significant reductions in harmful criteria pollutant and greenhouse gas emissions associated with the conversion of fuel to useable energy are possible by implementing fuel cells in place of conventional power plants. The technical challenge in using fuel cells for in-rack server power generation is rapid load following, because fuel cells are typically designed for relatively steady loads. Figure 18 illustrates the most severe transient loads that are characteristic of AC- or DC-powered servers and that a fuel cell system must be able to handle without failure. The dramatic rise or fall of a server’s power demand is expected to be quite challenging for fuel cell systems to follow instantaneously. Although processes inside the fuel cell such as electrochemical reactions and charge transfer occur on time scales on the order of milliseconds, load following issues arise when the fuel cell system cannot meet both external system and balance of plant power demands. Limitations could result from conservative control techniques or the inherently slow response of subsystem components, such as flow or chemical reaction delays associated with fuel and/or air processing equipment. Dr. 
Zhao’s analysis and experiments verify that it is possible to achieve low cost, low greenhouse gas emissions, high reliability, and high efficiency by using mid-sized fuel cells at the rack level, directly supplying DC power to the servers.

Doing so effectively replaces the power distribution system in a data center with a gas distribution network and eliminates reliance on the electrical utility grid. Reducing components in the data center energy supply chain not only cuts costs but also reduces points of maintenance and failure, which improves availability of the data center. In addition, by utilizing the fuel cell DC output, 53% energy efficiency in a single server rack can be achieved. The data obtained from steady state and dynamic response simulations of the PEMFC system under server and system dynamics can be used to determine energy storage requirements and develop optimal control strategies to enhance dynamic load following capability. Although a PEMFC system has demonstrated its dynamic load-following capabilities, its demand for hydrogen was a major drawback for immediate implementation. Due to limitations in the nascent hydrogen infrastructure, Microsoft opted for a high temperature SOFC system for its fuel flexibility, which allows for quick installation and implementation in any building with an existing natural gas network. An added benefit of operating on natural gas is the high reliability of the natural gas network, which in turn contributes to the reliability of the fuel cell system. A relatively inexpensive and commercially tested SOFC system was needed to begin the research work. SolidPower’s Engen-2500 system was selected because it was commercially available and touted as one of the most efficient SOFC systems to date. SolidPower, formerly known as SOFCpower, began as a spin-off of another Italian company, the Eurocoating SpA Turbocoating Group. Turbocoating is a privately held company focused on developing and manufacturing coatings and special processes for gas turbine and aero engine component manufacturers. 
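As a rough illustration of how such simulation data could feed an energy-storage sizing calculation, the sketch below models a fuel cell whose output follows a first-order lag in response to a server load step, and integrates the power deficit that a battery or supercapacitor would have to cover. The step magnitude and time constant are hypothetical, not measured Engen-2500 or PEMFC values:

```python
# Hypothetical sketch: size the energy buffer needed to bridge a fuel
# cell's slow ramp during a server load step (all values illustrative).

def buffer_energy_for_step(p_initial_w, p_final_w, tau_s, dt=0.01, horizon_s=60.0):
    """Fuel cell output follows a first-order lag with time constant tau_s.
    Returns the energy (J) that storage must supply while the cell ramps."""
    p_fc = p_initial_w
    deficit_j = 0.0
    t = 0.0
    while t < horizon_s:
        p_fc += dt * (p_final_w - p_fc) / tau_s   # first-order ramp
        deficit_j += max(p_final_w - p_fc, 0.0) * dt
        t += dt
    return deficit_j

# Example: a rack steps from 1.0 kW to 2.5 kW; assume a 30 s fuel cell
# time constant (an assumption, not a manufacturer specification).
energy_j = buffer_energy_for_step(1000.0, 2500.0, tau_s=30.0)
```

Analytically the deficit approaches ΔP·τ, so faster stacks (smaller τ) or smaller load steps shrink the required buffer roughly linearly.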
As a fledgling Italian company, SolidPower aimed to become a leader in the development and commercialization of stacks and power generation units integrated into SOFC systems. In 2006, SolidPower acquired Swiss-based HTceramix for the industrial production and commercialization of the latter’s integrated solid oxide fuel cell system, HoTbox. HTceramix was a developer of SOFCs with a mission to manufacture and deliver fully integrated SOFC generators to system integrators at competitive prices. At the heart of its development is the SOFConnex-based stack, which used a unique approach for stacking ceramic fuel cells. During the period of the acquisition, HTceramix had expanded its facilities to cope with a large increase in orders from the Asia-Pacific region and added multiple test benches for stacks and HoTboxes. Meanwhile, SolidPower was setting up a pilot production line that would be operational by 2007 to begin producing SOFC systems. SolidPower is now an experienced developer of SOFC-based systems, having displayed its newly developed Engen-2500 micro-combined heat and power SOFC appliance for home and industry at the Hannover Messe trade fair in Germany in 2015. At the trade fair, Guido Gummert, CEO of SolidPower GmbH, explained, “In our development of energy cell technology, we have succeeded in bringing down the operating temperature to around 700°C, which means that we can work with less heat generation for the current we produce. Our objective has been to achieve the highest possible electrical efficiency, but without compromising the total efficiency of the system.

With an electrical efficiency of 50% and a total efficiency of 90%, we are right out in front.” The brand new micro-cogeneration system, the Engen-2500, is a good solution for projects ranging from 2.5 kW up to 20 kW of electrical power. It has been granted the coveted A++ classification under the European Energy-related Products directive, certifying a high level of electrical efficiency with maximum micro-CHP efficiency. This targets end-users with larger electricity and heat requirements, such as small and medium-sized businesses or groups of several office units within a building. The device is distinguished by a low percentage of dissipated thermal power and a long operating life, which result in substantial cost savings. The Engen-2500 system is a floor-standing unit generating a maximum of 2.5 kWe of net AC power and can run solely on natural gas fuel from the grid at normal supply pressure. A connection to a tap water supply is required to start up the system. The heat available is recovered with water, exchanged within the Engen-2500, and then transferred to an external water storage tank. When integrated for mCHP purposes, the system is controlled by the heat available at output, meaning the integrated system controller modulates power output following a heat demand command from an external energy manager. As an alternative, the system can also be operated in a load-following, heat-capped mode in which the power can be modulated between 30% and 100% of total stack power. To form a better understanding of the design structure of the Engen-2500 SOFC system, it is necessary to know what components are involved and how they interact with one another. Fortunately, SolidPower was willing to provide a design schematic of its Engen-2500 system in addition to some verbal details. Conveniently, the Engen-2500 system can be characterized by its two main segments, which house all the necessary components for operation – the HotBox and the ColdBox. 
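A minimal sketch of the heat-led modulation logic described above, assuming a hypothetical linear heat-to-power map; only the 2.5 kWe rating and the 30–100% modulation window come from the text, the thermal-to-electrical ratio is an assumption:

```python
def stack_power_setpoint(heat_demand_kw, heat_per_kw_e=1.6,
                         p_rated_kw=2.5, p_min_frac=0.30):
    """Map an external heat-demand command to an electrical power setpoint,
    clamped to the 30-100% modulation window stated for the Engen-2500.
    heat_per_kw_e (kW_th produced per kW_e) is an assumed ratio, not a
    SolidPower figure."""
    p_requested = heat_demand_kw / heat_per_kw_e
    p_min = p_min_frac * p_rated_kw
    # Clamp the request into the allowed modulation band
    return min(max(p_requested, p_min), p_rated_kw)
```

A small heat demand is held at the 30% floor and a large one is capped at rated power, which is how a heat-capped controller keeps the stack inside its allowed operating band.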
Although very few details were provided by SolidPower regarding the position of each component in the Engen-2500 system, it was reasonably speculated that the HotBox contains two SOFC stacks, an external reformer, an oxidizer, and a heat exchanger. These components are surrounded by insulation to mitigate heat losses to the environment. The remaining components necessary for SOFC operation are housed in the ColdBox section and include a desulfurizer, condenser, water drainage tank, pump, three valves, two air blowers, and the electronics associated with control and operation. All components with the exception of the SOFC stack are referred to as the balance of plant. The function of each component in the balance of plant is discussed in detail in the balance of plant section of this thesis. The design schematic provided by SolidPower for the Engen-2500 system is reproduced in Figure 22 on the next page. The schematic clearly defines the connections between the balance of plant components and the SOFC stack. It is important to note that SolidPower designed the system to recirculate the anode off-gas into the oxidizer to reduce the CO and H2 emissions of the system. In order to keep the oxidizer burning hot enough to supply heat to the endothermic external reformer via heat transfer, additional natural gas fuel and ambient air are mixed with the anode off-gas. Additionally, the cathode exhaust is designed to mix with the oxidizer exhaust, and the mixture is used to preheat the cold ambient air in a heat exchanger before the air reaches the cathode.

A high standard deviation indicates that the metric varies substantially throughout the job’s execution

Still, our data suggest that memory bandwidth is another resource that may be overprovisioned on average in HPC. However, we cannot relate our findings to the impact on application performance if we were to reduce available memory bandwidth, because applications may exhibit brief but important phases of high memory bandwidth usage that would be penalized by such a reduction.

Figure 4 shows a CDF of average idle time among all compute cores in nodes, expressed as a percentage. These statistics were generated using “proc” reports of idle kernel cycles for each hardware thread. To generate a sample for each node every 1 s, we average the idle percentage of all hardware threads in each node. As shown, about half of the time Haswell nodes have at most a 49.9% CPU idle percentage and KNL nodes 76.5%. For Haswell nodes, average system-wide CPU idle time in each 30 s sampling period never drops below 28%, and for KNL below 30%. These statistics are largely due to the two hardware threads per compute core in Haswell and four in KNL: in Cori, Haswell nodes use only one hardware thread 80% of the time, and KNL nodes 50% of the time. Similarly, many jobs reserve entire nodes but do not use all cores in those nodes. Datacenters have also reported 28%–55% CPU idle in the case of Google trace data and 20%–50% most of the time in Alibaba.

We measure per-node injection bandwidth at every NIC by using hardware counters in the Cray Aries interconnect. Those counters record how many payload bytes each node sent to and received from the Aries network. We report results for each node as a percentage utilization of the maximum per-node NIC bandwidth of 16 GB/s per direction.

We also verify against similar statistics generated by using NIC flit counters and multiplying by the flit size. In Cori, access to the global file system uses the Aries network, so our statistics include file system accesses. Figure 4 shows a CDF of node-wide NIC bandwidth utilization. As shown, 75% of the time Haswell nodes use at most 0.5% of available NIC bandwidth; for KNL nodes that percentage is 1.25%. In addition, NIC bandwidth consistently exhibits sustained bursty behavior. In particular, in a two-week period, sustained 30 s average NIC bandwidth increased by more than 3× compared to the overall average in about 60 separate occurrences.

In this section, we analyze how much metrics change across a job’s lifetime. Figure 5 shows a CDF of the standard deviation of all values throughout each job’s execution, calculated separately for each metric. To normalize for different absolute values across jobs, each job’s standard deviation is expressed as a percentage of that job’s average value for the metric: a value of 50% indicates that the job’s standard deviation is half of its average. As shown, occupied memory and CPU idle percentages do not vary much during job execution, but memory and NIC bandwidths do. The variability of memory and NIC bandwidths is intuitive, because many applications exhibit phases of low and high memory bandwidth. Network and memory bandwidth have been previously observed to be bursty for many applications. In contrast, once an application completes reserving memory capacity, the reservation’s size typically does not change significantly until the application terminates. These observations are important for provisioning resources for disaggregation.
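The normalization behind Figure 5 can be sketched directly; the sample values below are illustrative, not measured Cori data:

```python
import statistics

def stddev_pct_of_mean(samples):
    """Per-job standard deviation of one metric, expressed as a percentage
    of that job's mean -- the normalization used for the Figure 5 CDF."""
    mean = statistics.fmean(samples)
    if mean == 0:
        return 0.0
    return 100.0 * statistics.pstdev(samples) / mean

# Illustrative per-job samples: a bursty metric (NIC-bandwidth-like)
# versus a stable one (occupied-memory-like)
bursty = [0.1, 0.1, 8.0, 0.1, 7.5, 0.2]
stable = [62.0, 61.5, 62.3, 61.9, 62.1]
```

For the stable series the normalized deviation is well under 2% of the mean, while the bursty series exceeds 100%, matching the qualitative split between occupied memory and NIC bandwidth described above.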

For metrics that do not vary considerably throughout a job’s execution, average system-wide or per-job measurements are more representative of future behavior. Therefore, provisioning for average utilization, perhaps with an additional margin such as one standard deviation, will likely satisfy application requirements for those metrics the majority of the time. In contrast, metrics that vary considerably have an average that is less representative, so for those metrics resource disaggregation should provision for the maximum or a near-maximum value.

The workloads we study are part of the MLPerf benchmark suite version 0.7. MLPerf is maintained by a large consortium of companies that are pioneering AI in terms of neural network design and system architectures for both training and inference. The workloads in the suite were developed by world-leading research organizations and are representative of workloads that execute in actual production systems. We select a range of neural networks representing different applications. Transformer: Transformer’s novel multi-head attention mechanism allows for better parallel processing of the input sequence and therefore faster training times, and it also overcomes the vanishing gradient issue that typical RNNs suffer from. For these reasons, Transformers became state-of-the-art for natural language processing tasks, including machine translation, time series prediction, and text understanding and generation. Transformers are the fundamental building block for networks such as bidirectional encoder representations from transformers (BERT) and GPT. Transformers have also been demonstrated on vision tasks. BERT: The BERT network implements only the encoder and is designed to be a language model.

Training is often done in two phases: the first phase is unsupervised, to learn representations, and the second fine-tunes the network with labeled data. Language models are deployed in translation systems and human-to-machine interactions. We focus on supervised fine-tuning. ResNet50: Vision tasks, particularly image classification, were among the first to make DL popular. First developed by Microsoft, ResNet50 is often regarded as a standard benchmark for DL tasks and is one of the most used DL networks for image classification and segmentation, object detection, and other vision tasks. DLRM: The last benchmark in our study is the deep-learning recommendation model. Recommender systems differ from the other networks in that they deploy vast embedding tables. These tables are sparsely accessed before a dense representation is fed into a more classical neural network. Many companies deploy these systems to offer customers recommendations based on the items they bought or content they enjoyed. All workloads are run using the official MLPerf docker containers on the datasets that are also used for the official benchmark results. Given the large number of hyperparameters needed to run these networks, we refer the reader to the docker containers and scripts used to run the benchmarks. We only adapt batch sizes and denote them in our results.

Machine learning typically consists of two phases: training and inference. During training the network learns and optimizes parameters from a carefully curated dataset. Training is a throughput-critical task, and input samples are batched together to increase efficiency. Inference, however, is done on a trained and deployed model and is often sensitive to latency. Input batches are usually smaller, and there is less computation and a lower memory footprint, as no errors need to be backpropagated and parameters are not optimized. We measure various metrics for training and inference runs for BERT and ResNet50. 
Results are shown in Figure 12. GPU utilization is high for BERT during both training and inference, in terms of both compute and memory capacity utilization, whereas CPU compute and memory capacity utilization is low. ResNet50, however, shows large CPU compute utilization, which is even higher during inference. Inference requires significantly less computation, which means the CPU is more utilized relative to training in order to provide the data and launch the work on the GPU. Training consumes significantly more GPU resources, especially memory capacity. This is not surprising, as these workloads were designed for maximal performance on GPUs. Certain parts of the system, notably CPU resources, remain underutilized, which motivates disaggregation. Further, training and inference have different requirements, and disaggregation helps to provision resources accordingly. The need for disaggregation is also evident in NVIDIA’s introduction of the multi-instance GPU, which allows a GPU to be partitioned into seven smaller, independent GPUs. Our work takes this further and considers disaggregation at the rack scale.

Figure 12 shows resource utilization during the training of various MLPerf workloads.

These benchmarks are run on a single DGX1 system with 8 Volta V100 GPUs. While we can generally observe that CPU utilization is low, GPU utilization is consistently high across all workloads. We also depict bandwidth utilization of NVLink and PCIe. We note that bandwidth is shown as an average effective bandwidth across all GPUs for the entire measurement period. In addition, we can only sample in intervals of 1 s, which limits our ability to capture peaks in high-speed links like NVLink. Nonetheless, we can observe that overall effective bandwidth is low, which suggests that links are not highly utilized on average. All of the shown workloads are data-parallel, while DLRM also implements model parallelism. In data parallelism, parameters need to be reduced across all workers, resulting in an all-reduce operation for every optimization step. As a result, the network can be underutilized during computation of parameter gradients. The highest bandwidth utilization is from DLRM, which is attributed to its model-parallel phase and its all-to-all communication.

Another factor of utilization is inter-node scaling. We run BERT and ResNet50 on up to 16 DGX1 systems, connected via InfiniBand EDR. The results are depicted in Figure 13. For ResNet50, we also distinguish between weak scaling and strong scaling; BERT is shown for weak scaling only. Weak scaling is preferred, as it generally leads to higher efficiency and utilization, as shown by our results. In data parallelism, this means we keep the number of input samples per GPU, referred to as the sub-batch, constant, and as we scale out, the effective global batch size increases. At some point the global batch size reaches a critical limit after which the network stops converging, or converges so much more slowly that any performance benefit diminishes. At this point, strong scaling becomes the only option to further reduce training time. 
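The weak- versus strong-scaling distinction above reduces to a simple batch-size relation; the sketch below captures it, with the 4096 global-batch limit chosen purely for illustration:

```python
def global_batch(sub_batch, n_gpus, strong_scaling=False, fixed_global=4096):
    """Weak scaling keeps the per-GPU sub-batch fixed, so the global batch
    grows with GPU count; strong scaling fixes the global batch so each
    GPU's share shrinks. fixed_global=4096 is an illustrative convergence
    limit, not an MLPerf setting. Returns (global batch, per-GPU batch)."""
    if strong_scaling:
        return fixed_global, fixed_global // n_gpus
    return sub_batch * n_gpus, sub_batch

# Weak scaling across 16 DGX1 systems (128 GPUs) with sub-batch 32
weak = global_batch(32, 128)
# Strong scaling once the global batch has hit its limit, now on 256 GPUs
strong = global_batch(32, 256, strong_scaling=True)
```

Once the global batch hits its convergence limit, adding GPUs under strong scaling shrinks the per-GPU batch, which is exactly why per-GPU compute and memory utilization drop at large scale.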
As Figure 13 shows, strong scaling increases the bandwidth requirements, both intra- and inter-node, but reduces compute and memory utilization of individual GPUs. While some underutilization is still worthwhile in terms of total training time, it eventually becomes too inefficient. Many neural networks train for hours or even days on small-scale systems, rendering large-scale training necessary. This naturally leads to some underutilization of certain resources, which motivates disaggregation to allow resources to be used for other tasks.

To understand what our observations mean for resource disaggregation, we assume a Cori cabinet that contains Haswell nodes. That is, each cabinet has 384 Haswell CPUs, 768 memory modules of 17 GB/s and 16 GB each, and 192 NICs that connect at 16 GB/s per direction to nodes. We use memory bandwidth measurements from Haswell nodes, and memory capacity and NIC bandwidth from both Haswell and KNL nodes. Since a benefit of resource disaggregation is increasing the utilization factor, which can help to reduce overall resources, we define and sweep a resource reduction factor percentage for non-CPU resources. For instance, a resource reduction factor of 50% means that our cabinet has half the memory and NIC resources of a Cori Haswell cabinet. The resource reduction factor is a useful way to illustrate the tradeoff between the range of disaggregation and how aggressively we reduce available resources. We assume resource disaggregation hardware capable of allocating resources in a fine-grained manner, such as fractions of the memory capacity of the same memory module to different jobs or CPUs.

We perform two kinds of analyses. The first analysis is agnostic to job characteristics and quantifies the probability that a CPU will have to connect to resources in other racks. This more readily translates to inter-rack bandwidth requirements for system-wide disaggregation. 
To that end, we make the assumption that a CPU can be allocated to any application running in the system with uniform random probability. We use the CDFs of resource usage from Section 3 as probability distributions. As shown in Figure 14, with no resource reduction a CPU has to cross a rack 0.01% of the time to find additional memory bandwidth, 0.16% of the time for additional memory capacity, and 0.28% for additional NIC bandwidth. With a 50% resource reduction factor, these numbers become 20.2%, 11%, and 2%, respectively.
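The first analysis can be sketched as follows: treating an empirical usage distribution as the CDF, the rack-crossing probability is the tail mass above the reduced per-CPU share. The usage samples below are hypothetical stand-ins for the Section 3 measurements:

```python
import bisect

def cross_rack_probability(usage_samples, per_cpu_capacity, reduction_pct):
    """Probability that a uniformly chosen job's resource usage exceeds the
    reduced per-CPU share, forcing the CPU to reach into another rack.
    usage_samples stand in for the measured CDFs of Section 3."""
    capacity = per_cpu_capacity * (1.0 - reduction_pct / 100.0)
    ordered = sorted(usage_samples)
    within = bisect.bisect_right(ordered, capacity)  # samples <= capacity
    return 1.0 - within / len(ordered)

# Hypothetical per-job memory-bandwidth usage samples (GB/s)
usage = [1, 2, 2, 3, 4, 5, 6, 8, 12, 16]
p_full = cross_rack_probability(usage, per_cpu_capacity=17.0, reduction_pct=0)
p_half = cross_rack_probability(usage, per_cpu_capacity=17.0, reduction_pct=50)
```

With no reduction every sample fits locally, while a 50% reduction pushes the tail of the distribution off-rack, mirroring the jump from fractions of a percent to double-digit crossing probabilities reported above.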

Another area of improvement for fourth-generation systems is bicycle redistribution innovations

In addition, London partnered with BIXI and plans to launch its own bike sharing program with 6,000 bicycles and 400 stations by Summer 2010. The most widely known third-generation bike sharing system today is “Vélib’” in Paris, France. To date, Vélib’ operates with 20,600 bicycles and plans to expand to 23,900 bicycles by the end of 2009. Over two million Parisians have access to 1,451 bicycle stations, which are available every 300 meters, 24 hours a day, seven days a week. Vélib’ operates on a fee-based system that encourages users to employ bicycles for short trips by offering the first thirty minutes of cycling free; after thirty minutes, escalating charges apply. Users also have the option of purchasing a one-day pass for €1, a one-week pass for €5, or a one-year subscription for €29. Between 2007 and 2008, Vélib’ reported that 20 million trips were made through the program. Averaging 78,000 trips per day, Vélib’s usage rates require that the program operate as efficiently as possible to maintain and distribute bicycles.

The largest IT-based system in North America is BIXI in Montreal, Canada; BIXI stands for BIcycle-TaXI. Launched in May 2009, BIXI operates with 5,000 bicycles, 400 stations, and 11,000 program members. BIXI’s system has also been chosen as the provider for Boston’s planned bike sharing program, which aims to launch with 2,500 bicycles and 290 stations by Summer 2010. It is equally important to note that technological advances in the BIXI program mark a shift towards the fourth generation of bike sharing described below. While the implementation of bike sharing programs in North America is limited, bike sharing activity in South America only recently started, in 2008.

At present, Brazil and Chile are the only two nations with fully operating programs; Argentina and Colombia are in the process of planning their own bike sharing systems. In 2008, Brazil launched two bike sharing programs: “UseBike” in São Paulo and “Samba” in Rio de Janeiro. UseBike operates with 202 bicycles and 23 bike stations. The program offers users one free hour and costs two Brazilian reais for each additional hour. Samba was launched with 80 bicycles and eight bike stations. It is in the process of expanding to neighboring cities and is expected to reach 500 bicycles and 50 bike stations by the end of 2009. To access bicycles, Samba requires mobile phone activation: users subscribe online, then walk up to any of the eight bike stations, call the designated number from their mobile phone, enter a security code, and dial the station and spot number, and the bicycle is unlocked. Following the launch of Samba in Brazil, Chile launched its own bike sharing program with 50 bicycles and 10 bike stations.

Asia’s bike sharing history is limited to third-generation IT-based systems. Despite its more limited experience, Asia is the fastest growing market for bike sharing activity today. The first bike sharing program to launch in Asia was “TownBike” in Singapore in 1999; the program ended in 2007. The second bike sharing program in Asia was the “Taito Bicycle Sharing Experiment,” which operated in Taito, Japan from November 2002 to January 2003. It was the first bike sharing pilot in Japan and was funded by the national government’s Social Experiment grants. The program operated with 130 bicycles at 12 locations. Bicycles were accessed with magnetic-striped membership cards, which helped prevent theft. Due to Taito’s high population density, program users felt that more bicycle locations were necessary. At present, bike sharing programs are operating in South Korea, Taiwan, and Mainland China. 
South Korea’s first bike sharing program, “Nubija,” was launched by the city government of Changwon in 2008.

The program has 430 bicycles and 20 terminals located in the city center. Similar to other programs, Nubija does not charge users a fee for the first hour of use. “C-Bike” in Kaohsiung City launched in 2009 as the first bike sharing program in Taiwan. The entire system operates on a build-operate-transfer basis that cost NT$90 million. Following the launch of Kaohsiung’s program, the Taipei government partnered with Giant to launch its bike sharing system, “YouBike,” in 2009. This program is completely automated, with an electronic management system that allows bicycles to be rented from and returned to any location. There are 500 bicycles at 10 locations providing 718 YouBike parking spaces in Taipei. The largest and most famous bike sharing program in Asia is the “Public Bicycle” system in Hangzhou, China, which was launched by the Hangzhou Public Transport Corporation in 2008. This system was the first IT-based system in Mainland China. With a population of 3.73 million, Hangzhou has a high population density that makes it a promising bike sharing location. Hangzhou’s system operates with 40,000 bicycles and 1,600 stations and is expected to expand to 50,000 bicycles and 2,000 stations by the end of 2009. Increasing the number of bicycle stations to 2,000 means that tourists and residents will have access to a bicycle station every 200 meters. According to a survey by the Hangzhou Public Transport Corporation, bicycles are used six times per day on average, and no bicycles were lost during the first year of implementation. The Hangzhou Public Bicycle system has surpassed Vélib’ as the largest bike sharing program in the world. Not surprisingly, it has sparked great interest in bike sharing in Mainland China. Indeed, Beijing, Tianjin, Hainan, and Suzhou launched pilot programs in 2008 and 2009. In February 2010, the City of Melbourne, Australia also announced plans for its first bike sharing program. 
The city has selected BIXI as the provider and plans to launch with 1,000 bicycles and 52 stations by Summer 2010.

The success of third-generation programs has made this the most prominent bike sharing model worldwide. Furthermore, third-generation successes have increased the number of bike sharing vendors, providers, service models, and technologies. Bike sharing providers, for instance, range from local governments to transport agencies, advertising companies, and for-profit and non-profit groups. Bike sharing is funded through advertising, self-funding, user fees, municipalities, and public-private partnerships. Table 2 below provides an overview of bike sharing business models and providers. The most prominent funding sources for third-generation bike sharing are municipalities and advertising partnerships. According to Midgley, local governments operate 27% of existing bike sharing systems. In addition, JCDecaux and Clear Channel, the two biggest outdoor advertising companies, operate 23% and 16% of worldwide bike sharing programs, respectively. Public agencies also are becoming an increasingly important provider of bike sharing programs. In China, for instance, public transport agencies operate the Hangzhou bike sharing system under local government guidance. Furthermore, non-profit bike sharing programs, which typically require public support at the start-up stage, are likely to remain a prominent model for the foreseeable future. At present, major bike sharing vendors include Clear Channel Adshel, BIXI, Veolia Transportation, Cemusa, JCDecaux, and B-Cycle. Of these, the major bike sharing systems are: 1) SmartBike by Clear Channel Outdoor in the U.S., 2) Bicincittà by Comunicare in Italy, and 3) Cyclocity by JCDecaux in France. The increasing use of advanced technologies in third-generation bike sharing has led to a growing market for technology vendors. IT-based systems became popular after the largest outdoor advertising company, Clear Channel, launched its first SmartBike program in Rennes, France.
Other companies that provide automated IT-based systems include Biceberg; BIXI Public Bike System; Ebikeshare; LeisureTec Bike Station; Q I Systems CycleStation; Sekura-Byk; and Urban Racks.

At present, there is limited research on the environmental and social benefits of bike sharing, particularly before-and-after behavioral trends. However, many bike sharing programs have conducted user-based surveys that document program experience. One impact of bike sharing is its potential to provide emission-free transportation. SmartBike, for instance, estimates that over 50,000 SmartBike trips cover a total of 200,000 kilometers per day.

SmartBike calculates that a car covering this same distance would produce 37,000 kilograms of carbon dioxide emissions per day. With an average of 78,000 trips per day and approximately 20 minutes per trip, Vélib’ users cover an estimated 312,000 km per day. A car covering this same distance would produce approximately 57,720 kg of CO2 per day. As of August 2009, BIXI users had covered an estimated 3,612,799 km, which translates into 909,053 kg of avoided greenhouse gas emissions. As of October 2009, the Hangzhou Public Bicycle program generated 172,000 trips per day. With an average trip lasting approximately 30 minutes, Hangzhou program users covered an estimated 1,032,000 km per day. An automobile covering this same distance would produce 190,920 kg of emissions. Taken together, these data suggest that increased bike sharing activity has the potential to yield notable greenhouse gas emission reductions.

The potential of bike sharing programs to reduce vehicle emissions is promising when one considers current data on modal shifts. For instance, in a recent survey of SmartBike members, researchers found that bike sharing drew nearly 16% of individuals who would otherwise have used personal vehicles for trip making. Velo’v in Lyon, France, reports that bicycle use replaced 7% of trips that would otherwise have been made by private vehicles. In Paris, 20% of Vélib’ users also reported using personal vehicles less frequently. The growth and evolution of bike sharing programs worldwide has led to increased public awareness of bike sharing and its potential social, environmental, financial, and health-based benefits. Along with increased bike sharing awareness, public perception of bicycling as a transportation mode also has evolved. A 2008 Vélib’ survey, for instance, found that 89% of program users agreed that Vélib’ made it easier to travel through Paris.
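The per-kilometer figures quoted above are internally consistent: assuming an average cycling speed of about 12 km/h and a car emission factor of roughly 0.185 kg of CO2 per km (both values inferred here from the quoted numbers, not stated in the source; the BIXI figure appears to rest on a higher factor), the Vélib’ and Hangzhou estimates can be reproduced exactly. A minimal sketch under those assumptions:

```python
# Reproduces the avoided-emission estimates quoted above, under two
# assumptions inferred from the quoted figures (not stated in the source):
# an average cycling speed of ~12 km/h and a car emission factor of
# ~0.185 kg CO2 per km.

CAR_EMISSION_KG_PER_KM = 0.185  # inferred assumption
AVG_SPEED_KMH = 12.0            # inferred assumption

def daily_km(trips_per_day: int, minutes_per_trip: float) -> float:
    """Distance covered per day, from trip counts and average durations."""
    return trips_per_day * (minutes_per_trip / 60.0) * AVG_SPEED_KMH

def avoided_co2_kg(km: float) -> float:
    """CO2 a car would have emitted over the same distance."""
    return km * CAR_EMISSION_KG_PER_KM

# Velib': 78,000 trips/day at ~20 minutes each
velib_km = daily_km(78_000, 20)          # 312,000 km/day
print(avoided_co2_kg(velib_km))          # 57,720.0 kg/day

# Hangzhou: 172,000 trips/day at ~30 minutes each
hangzhou_km = daily_km(172_000, 30)      # 1,032,000 km/day
print(avoided_co2_kg(hangzhou_km))       # 190,920.0 kg/day
```

That both published estimates fall out of the same two inferred constants suggests the programs used a common back-of-the-envelope method rather than measured travel data.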
According to SmartBike, nearly 79% of respondents reported that bike sharing use in Washington, D.C. was faster or more convenient than other options. In Montreal, the initial public reaction to BIXI was skeptical. However, the heavy presence of BIXI bicycles has led Montreal residents to embrace the new system. In general, cities that have implemented successful bike sharing programs appear to have positively impacted the perception of bicycling as a viable transportation mode. While very few studies evaluate behavioral shifts, available data suggest notable changes. For example, during the first year of Velo’v, the City of Lyon documented a 44% increase in bicycle riding; 96% of these riders were new users who had not previously bicycled in the Lyon city center. In addition, bicycle riding in Paris increased by 70% with the launch of Vélib’. Given the relatively limited impact data, more research is needed on the social and environmental benefits of bike sharing.

The advances and shortcomings of previous and existing bike sharing models have contributed to a growing body of knowledge of this shared public transportation mode. Such experiences are paving the way for an emerging fourth-generation bike sharing model, or Demand-Responsive, Multi-Modal Systems. These systems build upon the third generation and emphasize: 1) flexible, clean docking stations; 2) bicycle redistribution innovations; 3) smart card integration with other transportation modes, such as public transit and carsharing; and 4) technological advances including GPS tracking, touchscreen kiosks, and electric bikes. See Figure 1 above for an overview of the four generations of bike sharing described in this paper. “BIXI,” which launched in Canada in May 2009 and operates with 5,000 bicycles and 11,000 members, marks the beginning of bike sharing’s fourth generation.
One of the major innovations of BIXI’s docking stations is that they are mobile, which allows stations to be removed and transferred to different locations. This enables bicycle stations to be relocated according to usage patterns and user demands. Another improvement that BIXI’s system might offer future bike sharing programs is the use of solar-powered stations, which would further reduce emissions and eliminate the need to secure access to an energy grid to support operations. Fourth-generation bike sharing may also omit docking stations altogether in favor of “flex stations,” where users employ mobile phone technology and street furniture for bicycle pick up and drop off, as do five cities in Germany. Bicycle redistribution remains a challenge: Vélib’s use of specially designed vehicles for bicycle relocation represents a first step towards addressing it. However, employing larger, designated vehicles for bicycle transport increases implementation costs and is not, at present, emission free. In the future, bike sharing services will continue to deploy more efficient redistribution methods.