Sample records for airshed model UAM

  1. SAI (Systems Applications, Incorporated) Urban Airshed Model. Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schere, K.L.

    1985-06-01

    This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air-quality simulation model that is well suited for predicting the spatial and temporal distribution of photochemical pollutant concentrations in an urban area. The model is based on the equations of conservation of mass for a set of reactive pollutants in a turbulent-flow field. To solve these equations, the UAM uses numerical techniques set in a 3-D finite-difference grid array of cells, each about 1 to 10 kilometers wide and 10 to several hundred meters deep. As output, the model provides the calculated pollutant concentrations in each cell as a function of time. The chemical species of prime interest included in the UAM simulations are O3, NO, NO2, and several organic compounds and classes of compounds. The UAM system contains at its core the Airshed Simulation Program, which accesses input data consisting of 10 to 14 files, depending on the program options chosen. Each file is created by a separate data-preparation program. There are 17 programs in the entire UAM system. The services of a qualified dispersion meteorologist, a chemist, and a computer programmer will be necessary to implement and apply the UAM and to interpret the results. Software Description: The program is written in the FORTRAN programming language for implementation on a UNIVAC 1110 computer under the UNIVAC 1100 operating system level 38R5A. Memory requirement is 80K.
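    The record above describes the UAM's core numerics: mass conservation for each pollutant solved with finite differences on a 3-D grid of cells, with concentrations reported per cell per time step. A minimal sketch of that idea follows; it is a toy single-species upwind advection step, not the UAM's actual code, and all grid sizes, wind speeds, and step sizes are invented for illustration.

```python
# Toy explicit upwind advection step on a 3-D grid of cells, illustrating
# the finite-difference approach described above. Grid shape, wind speed,
# and dt/dx are arbitrary illustrative choices, not UAM's actual values.

def advect_x(conc, u, dt, dx):
    """One upwind advection step along x for a 3-D field conc[i][j][k].

    Assumes a constant positive wind u (m/s); inflow boundary is zero.
    Returns a new concentration field (arbitrary units).
    """
    nx, ny, nz = len(conc), len(conc[0]), len(conc[0][0])
    cfl = u * dt / dx  # Courant number; explicit upwind needs cfl <= 1
    assert 0.0 <= cfl <= 1.0, "CFL condition violated"
    new = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                upwind = conc[i - 1][j][k] if i > 0 else 0.0
                new[i][j][k] = conc[i][j][k] - cfl * (conc[i][j][k] - upwind)
    return new

# A single puff of 100 units in cell 2 spreads downwind after one step.
field = [[[0.0]] for _ in range(5)]
field[2][0][0] = 100.0
field = advect_x(field, u=5.0, dt=100.0, dx=1000.0)  # cfl = 0.5
```

    A real airshed model adds vertical diffusion, emissions, deposition, and a chemical mechanism to each cell's update, but the per-cell bookkeeping is the same shape as this sketch.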

  2. STUDY USING A THREE-DIMENSIONAL PHOTOCHEMICAL SMOG FORMATION MODEL UNDER CONDITIONS OF COMPLEX FLOW: APPLICATION OF THE URBAN AIRSHED MODEL TO THE TOKYO METROPOLITAN AREA

    EPA Science Inventory

    The purpose of this study is to evaluate the Urban Airshed Model (UAM), a three-dimensional photochemical urban air quality simulation model, using field observations from the Tokyo Metropolitan Area. Emphasis was placed on the photochemical smog formation mechanism under stagnant...

  3. Evaluation of CALGRID using two different ozone episodes and comparison to UAM results

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Russell, Armistead G.; Tesche, Thomas W.; McNally, Dennis E.

    Air quality models serve as the foundation for policy decisions regarding programs designed to improve air quality. The California Air Resources Board Airshed Model (CALGRID) is one of the latest photochemical air quality models developed for assessing ozone control strategies. CALGRID was modified to include the latest CB-IV chemical mechanism in place of the original SAPRC mechanism. After modification, a detailed evaluation of CALGRID was carried out using two different ozone episodes. The first evaluation used data obtained during the Southern California Air Quality Study (SCAQS). The second evaluation used data obtained for the September 1984 SCCCAMP episodes in the South Central Coast Air Basin (SCCAB). Model results were compared against observations of O3, NO, NO2, and different organic compounds. For the SCCAB episode, the results were also compared with those obtained from the Urban Airshed Model (UAM). Similar to other studies, the ozone predictions from the SCAQS application were biased low, as were various ROG components. The reason for this can be linked to the under-representation of ROG and CO in the emissions inventory. For the SCCAB episode, both the UAM and CALGRID models significantly underestimated NO and NO2 concentrations. The two models slightly underestimated ozone concentrations above approximately 9 pphm on the third and last day of the simulation. Sensitivity experiments were performed for both studies. It was found that both CALGRID and UAM are strongly sensitive to the boundary conditions and moderately sensitive to the emissions for the episodes modeled.

  4. Georgia Basin-Puget Sound Airshed Characterization Report 2014

    EPA Science Inventory

    The Georgia Basin - Puget Sound Airshed Characterization Report, 2012 was undertaken to characterize the air quality within the Georgia Basin/Puget Sound region, a vibrant, rapidly growing, urbanized area of the Pacific Northwest. The Georgia Basin - Puget Sound Airshed Characteri...

  5. Uncertainties in Episodic Ozone Modeling Stemming from Uncertainties in the Meteorological Fields.

    NASA Astrophysics Data System (ADS)

    Biswas, Jhumoor; Trivikrama Rao, S.

    2001-02-01

    This paper examines the uncertainty associated with photochemical modeling using the Variable-Grid Urban Airshed Model (UAM-V) with two different prognostic meteorological models. The meteorological fields for ozone episodes that occurred during 17-20 June, 12-15 July, and 30 July-2 August in the summer of 1995 were derived from two meteorological models, the Regional Atmospheric Modeling System (RAMS) and the Fifth-Generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model (MM5). The simulated ozone concentrations from the two photochemical modeling systems, namely, RAMS/UAM-V and MM5/UAM-V, are compared with each other and with ozone observations from several monitoring sites in the eastern United States. The overall results indicate that neither modeling system performs significantly better than the other in reproducing the observed ozone concentrations. The results reveal that there is significant variability, about 20% at the 95% level of confidence, in the modeled 1-h ozone concentration maxima from one modeling system to the other for a given episode. The model-to-model variability in the simulated ozone levels is for the most part attributable to unsystematic errors. The directionality for emission controls (i.e., NOx versus VOC sensitivity) is also evaluated with UAM-V using hypothetical emission reductions. The results reveal that not only the improvement in ozone but also the VOC-sensitive and NOx-sensitive regimes are influenced by the differences in the meteorological fields. Both modeling systems indicate that a large portion of the eastern United States is NOx limited, but there are model-to-model and episode-to-episode differences at individual grid cells regarding the efficacy of emission reductions.
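    The model-to-model comparison described above amounts to pairing each site/day's 1-h ozone maximum from the two systems and examining the spread of their relative differences. A rough sketch of that bookkeeping, with invented values and a simple normal-approximation 95% interval (the paper's own statistical method may differ):

```python
# Illustrative paired comparison of 1-h ozone maxima (ppb) from two
# modeling systems at the same sites and days. All values are invented.
import statistics

rams_uamv = [112.0, 98.0, 121.0, 105.0, 90.0, 130.0]
mm5_uamv = [101.0, 109.0, 110.0, 118.0, 84.0, 119.0]

# Relative model-to-model difference at each paired observation.
rel_diff = [(a - b) / ((a + b) / 2) for a, b in zip(rams_uamv, mm5_uamv)]
mean = statistics.mean(rel_diff)
sd = statistics.stdev(rel_diff)
# Approximate 95% range for an individual difference, assuming normality.
low, high = mean - 1.96 * sd, mean + 1.96 * sd
print(f"mean {mean:+.1%}, 95% range [{low:+.1%}, {high:+.1%}]")
```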

  6. Development of a comprehensive air quality modeling framework for a coastal urban airshed in south Texas

    NASA Astrophysics Data System (ADS)

    Farooqui, Mohmmed Zuber

    Tropospheric ozone is one of the major air pollution problems affecting urban areas of the United States as well as other countries. Analysis of surface-observed ozone levels in south and central Texas revealed several days over the past decade exceeding the 8-hour average ozone National Ambient Air Quality Standards (NAAQS). Two major high-ozone episodes were identified, during September of 1999 and September of 2002. A photochemical modeling framework for these episodes was developed for the Corpus Christi urban airshed. The photochemical model was evaluated using the statistical methods recommended by the U.S. Environmental Protection Agency (EPA), and it performed within the limits set by EPA. An emission impact assessment of various sources within the urban airshed was conducted using the modeling framework. It was noted that nudging MM5 with surface-observed meteorological parameters and sea-surface temperature improved the coastal meteorological predictions. Consequently, the refined meteorology helped the photochemical model better predict peak ozone levels in urban airsheds along the coastal margins of Texas, including Corpus Christi. The emissions assessment revealed that the Austin and San Antonio areas were significantly affected by on-road mobile emissions from light-duty gasoline and heavy-duty diesel vehicles. The urban areas of San Antonio, Austin, and Victoria were estimated to be NOx sensitive. Victoria was heavily influenced by point sources in the region, while Corpus Christi was influenced by both point and non-road mobile sources and was identified as sensitive to VOC emissions. A rise in atmospheric temperature due to climate change could potentially increase ozone exceedances and peak ozone levels within the study region, which will be a major concern for air quality planners. This study noted that any future increase in ambient temperature would result in a significant increase in the urban and regional

  7. On the construction, comparison, and variability of airsheds for interpreting semivolatile organic compounds in passively sampled air.

    PubMed

    Westgate, John N; Wania, Frank

    2011-10-15

    Air mass origin as determined by back trajectories often aids in explaining some of the short-term variability in the atmospheric concentrations of semivolatile organic contaminants. Airsheds, constructed by amalgamating large numbers of back trajectories, capture average air mass origins over longer time periods and thus have found use in interpreting air concentrations obtained by passive air samplers. To explore some of their key characteristics, airsheds for 54 locations on Earth were constructed and compared for roundness, seasonality, and interannual variability. To avoid the so-called "pole problem" and to simplify the calculation of roundness, a "geodesic grid" was used to bin the back-trajectory end points. Departures from roundness were seen to occur at all latitudes and to correlate significantly with local slope, but no strong relationship between latitude and roundness was revealed. Seasonality and interannual variability vary widely enough to imply that static models of transport are not sufficient to describe the proximity of an area to potential sources of contaminants. For interpreting an air measurement, an airshed should be generated specifically for the deployment time of the sampler, especially when investigating long-term trends. Samples taken in a single season may not represent the average annual atmosphere, and samples taken in linear, as opposed to round, airsheds may not represent the average atmosphere in the area. Simple methods are proposed to ascertain the significance of an airshed or individual cell. It is recommended that when establishing potential contaminant source regions only end points with departure heights of less than ∼700 m be considered.
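    The abstract's "roundness" of an airshed can be pictured as how isotropic the cloud of back-trajectory end points is around its centroid. The paper bins end points on a geodesic grid; the sketch below is a simplified flat-plane stand-in that scores roundness as the minor/major principal-axis ratio of the end-point covariance (1.0 = perfectly round). The coordinates and the exact metric are illustrative assumptions, not the paper's method.

```python
# Toy roundness score for a set of back-trajectory end points on a flat
# plane, using the eigenvalues of the 2x2 coordinate covariance matrix.
import math

def roundness(points):
    """Return sqrt(lambda_min / lambda_max); 1.0 means perfectly round."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Closed-form eigenvalues of [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
    return math.sqrt(lam_min / lam_max) if lam_max > 0 else 1.0

ring = [(math.cos(t / 10.0), math.sin(t / 10.0)) for t in range(63)]
stretched = [(4.0 * x, y) for x, y in ring]  # a linear, elongated airshed
```

    An elongated ("linear") airshed scores well below a round one, matching the paper's caution that samples from linear airsheds may not represent the area's average atmosphere.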

  8. Atmospheric speciation of mercury in two contrasting Southeastern US airsheds

    NASA Astrophysics Data System (ADS)

    Gabriel, Mark C.; Williamson, Derek G.; Brooks, Steve; Lindberg, Steve

    Simultaneous measurements of gaseous elemental, reactive gaseous, and fine particulate mercury took place in Tuscaloosa, AL (urban airshed) and Cove Mountain, TN (non-urban airshed) during the summers of 2002 and 2003. The objectives of this research were to (1) summarize the temporal distribution of each mercury species at each site and compare it to speciation data sets developed by other researchers and (2) provide insight into urban and non-urban mercury speciation effects using various statistical methods. Average species concentrations were as follows: 4.05 ng m⁻³ (GEM), 13.6 pg m⁻³ (RGM), and 16.4 pg m⁻³ (Hg-p) for Tuscaloosa; 3.20 ng m⁻³ (GEM), 13.6 pg m⁻³ (RGM), and 9.73 pg m⁻³ (Hg-p) for Cove Mountain. As a result of urban airshed impacts, short periods of high concentration for all mercury species were common in Tuscaloosa. At Cove Mountain, a consistent mid-day rise and evening drop in mercury species was found. This pattern was primarily the result of unimpacted physical boundary layer movement, although other potential influences were ambient photochemistry and air-surface exchange of mercury. Meteorological parameters known to heavily impact mercury speciation were similar over the study period for Tuscaloosa and Cove Mountain, except for wind speed (m s⁻¹), which was higher at Cove Mountain. For both sites, statistically significant (p<0.0001) inverse relationships existed between wind speed and Hg0 concentration. A weaker wind speed-Hg0 correlation existed for Tuscaloosa. By analyzing the change in Hg concentration with wind speed at both sites, it was found that wind speed at Cove Mountain had a greater influence on Hg0 concentration variability than at Tuscaloosa, by a factor of 3. Using various statistical tests, we concluded that the nature of Tuscaloosa's atmospheric mercury speciation was the result of typical urban airshed impacts. Cove Mountain showed atmospheric mercury speciation characteristics indicative of a non-urban area along with

  9. SPATIAL ANALYSIS OF AIR POLLUTION AND DEVELOPMENT OF A LAND-USE REGRESSION ( LUR ) MODEL IN AN URBAN AIRSHED

    EPA Science Inventory

    The Detroit Children's Health Study is an epidemiologic study examining associations between chronic ambient environmental exposures to gaseous air pollutants and respiratory health outcomes among elementary school-age children in an urban airshed. The exposure component of this...

  10. Modeling ozone episodes in the Baltimore-Washington region

    NASA Technical Reports Server (NTRS)

    Ryan, William F.

    1994-01-01

    Surface ozone (O3) concentrations in excess of the National Ambient Air Quality Standard (NAAQS) continue to occur in metropolitan areas in the United States despite efforts to control emissions of O3 precursors. Future O3 control strategies will be based on results from modeling efforts that have just begun in many areas. Two initial questions that arise are model sensitivity to domain-specific conditions and the selection of episodes for model evaluation and control strategy development. For the Baltimore-Washington (B-W) region, the presence of the Chesapeake Bay introduces a number of issues relevant to model sensitivity. In this paper, the specific question of the determination of model volume (mixing height) for the Urban Airshed Model (UAM) is discussed and various alternative methods are compared. For the latter question, two analytic approaches, cluster analysis and Classification and Regression Tree (CART) analysis, are undertaken to determine the meteorological conditions associated with severe O3 events in the B-W domain.

  11. X-ray diffraction analysis and in vitro characterization of the UAM2 protein from Oryza sativa

    DOE PAGES

    Welner, Ditte Hededam; Tsai, Alex Yi-Lin; DeGiovanni, Andy M.; ...

    2017-03-29

    The role of seemingly non-enzymatic proteins in complexes interconverting UDP-arabinopyranose and UDP-arabinofuranose (UDP-arabinosemutases; UAMs) in the plant cytosol remains unknown. To shed light on their function, crystallographic and functional studies of the seemingly non-enzymatic UAM2 protein from Oryza sativa (OsUAM2) were undertaken. Here, X-ray diffraction data are reported, as well as analysis of the oligomeric state in the crystal and in solution. OsUAM2 crystallizes readily but forms highly radiation-sensitive crystals with limited diffraction power, requiring careful low-dose vector data acquisition. Using size-exclusion chromatography, it is shown that the protein is monomeric in solution. Finally, limited proteolysis was employed to demonstrate DTT-enhanced proteolytic digestion, indicating the existence of at least one intramolecular disulfide bridge or, alternatively, a requirement for a structural metal ion.

  12. Urban airshed modeling of air quality impacts of alternative transportation fuel use in Los Angeles and Atlanta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-01

    The main objective of NREL in supporting this study is to determine the relative air quality impact of the use of compressed natural gas (CNG) as an alternative transportation fuel when compared to low Reid vapor pressure (RVP) gasoline and reformulated gasoline (RFG). A table lists the criteria, air toxic, and greenhouse gas pollutants for which emissions were estimated for the alternative fuel scenarios. Air quality impacts were then estimated by performing photochemical modeling of the alternative fuel scenarios using the Urban Airshed Model Version 6.21 and the Carbon Bond Mechanism Version IV (CBM-IV) (Geary et al., 1988). Using this model, the authors examined the formation and transport of ozone under alternative fuel strategies for motor vehicle transportation sources for the year 2007. Photochemical modeling was performed for modeling domains in Los Angeles, California, and Atlanta, Georgia.

  13. Model simulation of meteorology and air quality during the summer PUMA intensive measurement campaign in the UK West Midlands conurbation.

    PubMed

    Baggott, Sarah; Cai, Xiaoming; McGregor, Glenn; Harrison, Roy M

    2006-05-01

    The Regional Atmospheric Modeling System (RAMS) and Urban Airshed Model (UAM IV) have been implemented for prediction of air pollutant concentrations within the West Midlands conurbation of the United Kingdom. The modelling results for wind speed, direction and temperature are in reasonable agreement with observations for two stations, one in a rural area and the other in an urban area. Predictions of surface temperature are generally good for both stations, but the results suggest that the quality of temperature prediction is sensitive to whether cloud cover is reproduced reliably by the model. Wind direction is captured very well by the model, while wind speed is generally overestimated. The air pollution climate of the UK West Midlands is very different to those for which the UAM model was primarily developed, and the methods used to overcome these limitations are described. The model shows a tendency towards under-prediction of primary pollutant (NOx and CO) concentrations, but with suitable attention to boundary conditions and vertical profiles gives fairly good predictions of ozone concentrations. Hourly updating of chemical concentration boundary conditions yields the best results, with input of vertical profiles desirable. The model seriously underpredicts NO2/NO ratios within the urban area and this appears to relate to inadequate production of peroxy radicals. Overall, the chemical reactivity predicted by the model appears to fall well below that occurring in the atmosphere.

  14. Using UAM Corpustool to Explore the Language of Evaluation in Interview Program

    ERIC Educational Resources Information Center

    Hu, Chunyu; Tan, Jinlin

    2017-01-01

    As an interactional encounter between a journalist and one or more newsworthy public figures, an interview program is a special type of discourse that is full of evaluative language. This paper sets out to explore evaluation in interview programs from the perspective of appraisal system. The corpus software used in this study is UAM CorpusTool…

  15. Draft Genome Sequence of Sphingobacterium sp. CZ-UAM, Isolated from a Methanotrophic Consortium

    PubMed Central

    Steffani-Vallejo, José Luis; Zuñiga, Cristal; Cruz-Morales, Pablo; Lozano, Luis; Morales, Marcia; Licona-Cassani, Cuauhtemoc; Revah, Sergio

    2017-01-01

    Sphingobacterium sp. CZ-UAM was isolated from a methanotrophic consortium in mineral medium using methane as the only carbon source. A draft genome of 5.84 Mb with a 40.77% G+C content is reported here. This genome sequence will allow the investigation of potential methanotrophy in this isolated strain. PMID:28818899

  16. Draft Genome Sequence of Sphingobacterium sp. CZ-UAM, Isolated from a Methanotrophic Consortium.

    PubMed

    Steffani-Vallejo, José Luis; Zuñiga, Cristal; Cruz-Morales, Pablo; Lozano, Luis; Morales, Marcia; Licona-Cassani, Cuauhtemoc; Revah, Sergio; Utrilla, José

    2017-08-17

    Sphingobacterium sp. CZ-UAM was isolated from a methanotrophic consortium in mineral medium using methane as the only carbon source. A draft genome of 5.84 Mb with a 40.77% G+C content is reported here. This genome sequence will allow the investigation of potential methanotrophy in this isolated strain. Copyright © 2017 Steffani-Vallejo et al.

  17. Air Quality in the Puebla-Tlaxcala Airshed in Mexico during April 2009

    NASA Astrophysics Data System (ADS)

    Ruiz Suarez, L. G.; Torres Jardón, R.; Torres Jaramillo, J. A.; Barrera, H.; Castro, T.; Mar Morales, B. E.; García Reynoso, J. A.; Molina, L. T.

    2012-04-01

    East of the Mexico City megacity lies the metropolitan area of Puebla-Tlaxcala, which is reproducing the same patterns of urban sprawl as the Mexico City Metropolitan Area. It is an area of high industrial density, and its fragmented urban sprawl boosts the use of private cars to the detriment of public transport. Emissions inventories reflect this fact; they also show considerable use of biomass energy in households and in small industries and service businesses. In April 2009 we carried out a preliminary field campaign in the basin, deploying three mobile units: one in the north, at a site connecting with the Valley of Mexico basin; one in the south, where the basin may connect with the Cuautla-Cuernavaca airshed; and one at a receptor site for the Puebla Metropolitan Area. These complemented the available data from the local air quality network within the City of Puebla. Analysis of the 2009 data shows a complex flow pattern induced by the Popocatépetl and Iztaccíhuatl volcanoes to the west and La Malinche volcano to the east. Excess NOx emissions in the urban and industrial core lead to very low ozone levels there, but high ozone concentrations are observed in the peri-urban and rural areas, exceeding the Mexican Air Quality Standards. In our presentation we will describe and explain these observations and will describe a field campaign to be carried out in March-April 2012 aiming to better document the air quality in the Puebla-Tlaxcala airshed. Hybrid observation-model maps for ozone critical levels show the population exposed to exceedances of the official standards. AOT40 maps also show that crops and forests in the region are exposed to unhealthy ozone levels. These results add to those from the MILAGRO and CARIEM field campaigns on the regional scale of the air quality issues in central Mexico. A point is made on the need to update the Mexico Air Quality Standard for ozone.

  18. Test and Evaluation of Ultrasonic Additive Manufacturing (UAM) for a Large Aircraft Maintenance Shelter (LAMS) Baseplate

    DTIC Science & Technology

    2015-03-26

    TEST AND EVALUATION OF ULTRASONIC ADDITIVE MANUFACTURING (UAM) FOR A LARGE AREA MAINTENANCE SHELTER (AFIT-ENV-MS-15-M-158). ...Civil Engineer (CE) operations. This research replicates a Large Area Maintenance Shelter (LAMS) baseplate design for ultrasonic additive

  19. Biomedical Engineering curriculum at UAM-I: a critical review.

    PubMed

    Martinez Licona, Fabiola; Azpiroz-Leehan, Joaquin; Urbina Medal, E Gerardo; Cadena Mendez, Miguel

    2014-01-01

    The Biomedical Engineering (BME) curriculum at Universidad Autónoma Metropolitana (UAM) has undergone at least four major transformations since the founding of the BME undergraduate program in 1974. This work is a critical assessment of the curriculum from the point of view of its results as derived from an analysis of, among other resources, institutional databases on students, graduates and their academic performance. The results of the evaluation can help us define admission policies as well as reasonable limits on the maximum duration of undergraduate studies. Other results linked to the faculty composition and the social environment can be used to define a methodology for the evaluation of teaching and the implementation of mentoring and tutoring programs. Changes resulting from this evaluation may be the only way to assure and maintain leadership and recognition from the BME community.

  20. Aryl hydrocarbon receptor-mediated activity of particulate organic matter from the Paso del Norte airshed along the U.S.-Mexico border.

    PubMed Central

    Arrieta, Daniel E; Ontiveros, Cynthia C; Li, Wen-Whai; Garcia, Jose H; Denison, Michael S; McDonald, Jacob D; Burchiel, Scott W; Washburn, Barbara Shayne

    2003-01-01

    In this study, we determined the biologic activity of dichloromethane-extracted particulate matter <10 µm in aerodynamic diameter (PM10) obtained from filters at three sites in the Paso del Norte airshed, which includes El Paso, Texas, USA; Juarez, Chihuahua, Mexico; and Sunland Park, New Mexico, USA. The extracts were rich in polycyclic aromatic hydrocarbons (PAHs) and had significant biologic activity, measured using two in vitro assay systems: ethoxyresorufin-O-deethylase (EROD) induction and the aryl hydrocarbon receptor-luciferase reporter system. In most cases, both EROD (5.25 pmol/min/mg protein) and luciferase activities (994 relative light units/mg) were highest in extracts from the Advance site, located in an industrial neighborhood in Juarez. These values represented 58% and 55%, respectively, of the induction associated with 1 µM β-naphthoflavone exposures. In contrast, little activity was observed at the Northeast Clinic site in El Paso, the reference site. In most cases, luciferase and EROD activities from extracts collected at the Tillman Health Center site, situated in downtown El Paso, fell between those observed at the other two sites. Overall, a statistically significant correlation existed between PM10 and EROD and luciferase activities. Chemical analysis of extracts collected from the Advance site demonstrated that concentrations of most PAHs were higher than those reported in most other metropolitan areas in the United States. Calculations made with these data suggest a cancer risk of 5-12 cases per 100,000 people. This risk estimate, as well as comparisons with the work of other investigators, raises concern regarding the potential for adverse health effects to the residents of this airshed. Further work is needed to understand the sources, exposure, and effects of PM10 and particulate organic material in the Paso del Norte airshed. PMID:12896850

  1. Stable bioemulsifiers are produced by Acinetobacter bouvetii UAM25 growing in different carbon sources.

    PubMed

    Ortega-de la Rosa, Nestor D; Vázquez-Vázquez, Jose L; Huerta-Ochoa, Sergio; Gimeno, Miquel; Gutiérrez-Rojas, Mariano

    2018-06-01

    Acinetobacter species are known to produce surface-active and emulsifying molecules called bioemulsifiers. The production, characterization, and stability of bioemulsifiers produced by Acinetobacter bouvetii UAM25 were studied. A. bouvetii UAM25 grew on three different carbon and energy sources, ethanol, a glycerol-hexadecane mixture, and waste cooking oil, in an airlift bioreactor, showing that bioemulsifier production was growth associated. The three purified bioemulsifiers were lipo-heteropolysaccharides of high molecular weight (4866 ± 533 and 462 ± 101 kDa). The best carbon and energy source for bioemulsifier production was waste cooking oil, with the highest emulsifying capacity (76.2 ± 3.5 EU mg⁻¹), compared with ethanol (46.6 ± 7.1 EU mg⁻¹) and the glycerol-hexadecane mixture (49.5 ± 4.2 EU mg⁻¹). The three bioemulsifiers displayed similar macromolecular structures, regardless of the nature (hydrophobic or hydrophilic) of the carbon and energy source. The bioemulsifiers did not decrease surface tension, but all of them retained their emulsifying capacity under extreme variations in salinity (0-50 g NaCl L⁻¹), pH (3-10), and temperature (25-121 °C), indicative of remarkable stability. These findings contribute to understanding the relationship between the production, physical properties, chemical composition, and stability of bioemulsifiers for potential applications in biotechnology, such as bioremediation of hydrocarbon-contaminated soil and water.

  2. Characterization of Scenedesmus obtusiusculus AT-UAM for high-energy molecules accumulation: deeper insight into biotechnological potential of strains of the same species.

    PubMed

    Toledo-Cervantes, Alma; Garduño Solórzano, Gloria; Campos, Jorge E; Martínez-García, Martha; Morales, Marcia

    2018-03-01

    Scenedesmus obtusiusculus AT-UAM, isolated from the Cuatro Ciénegas wetlands in Mexico, was compared taxonomically, molecularly, and biochemically to S. obtusiusculus CCAP 276/25 (Culture Collection of Algae and Protozoa, Scotland, UK). Analysis of Internal Transcribed Spacer 2 (ITS2) secondary structures confirmed that the Mexican strain belongs to S. obtusiusculus, with one change in the ITS2 nucleotide sequence. However, the two strains exhibited different biochemical and fatty acid profiles, and therefore different biotechnological potential, emphasizing the need for deeper studies among strains of the same species. Furthermore, the biochemical variations of S. obtusiusculus AT-UAM under nitrogen starvation and different levels of irradiance were evaluated. The maximum lipid production (1730 mg L⁻¹) was obtained at 613 μmol m⁻² s⁻¹, while the highest carbohydrate content (49%) was achieved at 896 μmol m⁻² s⁻¹. Additionally, this strain was capable of storing lipids (∼52%) and carbohydrates (∼40%) under outdoor conditions, depending on light availability in the cultivation broth.

  3. Hybrid Speaker Recognition Using Universal Acoustic Model

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as "Business Microscope", interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes, which degrade speaker recognition performance. By focusing on properties of the speaker's acoustic channel, the UAM provides robustness against these synchronization errors. The overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1-s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved at synchronization errors of less than 100 ms.

  4. Effect of cadmium on the bioelement composition of Nostoc UAM208: Interaction with calcium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez-Pinas, F.; Mateo, P.; Bonilla, I.

    1997-04-01

    Heavy metals may cause effects on the cyanobacterial cell, including possible damage to the membranes and leakage from cells, resulting in the loss or reduction of essential bioelements. There are many reports in the literature concerning morphological, biochemical, and physiological changes caused by cadmium in cyanobacteria, but data on the influence of cadmium on the ion balance of the cell, and on the interactive effect of cadmium and calcium, are limited. Calcium has been found to exert a protective role against heavy metal toxicity in a variety of organisms. We previously reported that calcium is able to counteract the toxic effect of cadmium on growth, photosynthesis, nitrogenase activity, and pigment content of the cyanobacterium Nostoc UAM208. In the present study, we analyzed the content of essential ions, as affected by cadmium treatment, to search for possible mechanisms of heavy metal damage and toxicity in Nostoc. We also studied whether calcium enrichment (1.1 mM final concentration) has any influence on the heavy metal effect on those ionic contents. 13 refs., 2 figs.

  5. Airsheds, Isotopes and Ecosystem Metabolism in Mountainous Terrain

    NASA Astrophysics Data System (ADS)

    Sulzman, E.; Barnard, H.; Bond, B. J.; Czarnomski, N. M.; Hauck, M.; Kayler, Z.; Mix, A. C.; Pypker, T.; Rugh, W.; Unsworth, M.

    2005-12-01

    At least 20% of the terrestrial surface of the earth is covered by mountains, which contain many of the world's most productive ecosystems. Interactions between vegetation and the physical environment are often very different in mountains than on flat land. However, few studies have addressed these unique interactions, and many of the tools used to measure and monitor ecosystem metabolism are difficult or impossible to use in complex, mountainous terrain. In a project we call the "Andrews Airshed study", located in the western Oregon Cascades, we aim to identify and explore sources of variation in the isotopic composition of ecosystem respiration (δ13CR-eco) and airflow patterns in cold-air drainage, with the eventual aim of "inverting" this understanding so that we can use δ13CR-eco to monitor intra- and inter-annual variations in ecosystem metabolism on a basin scale. We are measuring patterns of airflow, quantifying the CO2 concentration in the flow, and measuring the carbon isotope composition of ecosystem-respired CO2 as well as soil-respired CO2 (δ13CR-soil), which accounts for more than half of δ13CR-eco. We have designed an automated air sampling device programmed to sample air at 10 ppm intervals from 30 m above the stream in our 100 ha, deeply incised watershed. Samples are collected via Valco valves into stainless steel tubing that can be connected directly to an isotope ratio mass spectrometer. We also designed and installed soil gas sampling probes, located in five 10 m2 sampling plots spanning from one ridge top across the valley floor to the opposite ridge top. Weekly samples (May-Sept. 2005) of air from soil and the nocturnal air flow show seasonal variation in δ13CR-eco over a 2 per mil range, with more enriched values corresponding to lower soil moisture. Soil-respired CO2 values also reveal seasonality and are isotopically enriched compared with above-ground air. δ13CR-soil values from north- and south-facing slopes of the watershed differ by 1 per mil

  6. Source Apportionment Analysis of Measured Fine Particulate Matter in a Semi-Arid Urban Airshed in Corpus Christi, U.S.A

    NASA Astrophysics Data System (ADS)

    Karnae, Saritha; John, Kuruvilla

    2010-05-01

    Corpus Christi is an industrialized urban area of South Texas that is currently in compliance with the National Ambient Air Quality Standards (NAAQS) for PM2.5 set by the United States Environmental Protection Agency (U.S. EPA). However, a gradual increase in annual and 24-hour PM2.5 concentrations has been noted since 2001. In this study, principal component analysis/absolute principal component scores (PCA/APCS) was used as a source apportionment technique to identify the key source categories affecting the PM2.5 concentrations measured at continuous ambient monitoring station (CAMS) 04, maintained and operated by the Texas Commission on Environmental Quality (TCEQ), during 2000 through 2007. Cluster analysis using computed backward trajectories was performed for days with high PM2.5 concentrations. The elevated PM days were heavily influenced by transported PM during three types of episodic events: smoke plumes from biomass burning in Mexico and Central America during April and May, sub-Saharan dust transport from Africa during June and July, and regional haze transport from highly industrialized areas of Texas and surrounding Midwestern states during September. Pyrotechnic emissions from local firework events during New Year celebrations under stagnant meteorological conditions also resulted in elevated PM2.5 concentrations. PCA/APCS identified five key source categories that accounted for 78% of the variance in the PM2.5 concentrations measured within the urban airshed. Secondary sulphates were identified as the major contributor, accounting for 46% of the apportioned mass, followed by mobile sources at 26%. The other sources identified by PCA/APCS included crustal dust, a commingled source of biomass burning and sea salt, and secondary nitrates. An increase in secondary sulphates was observed during August and September, typically associated with the long-range transport of continental haze from
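
    As a rough illustration of the PCA/APCS procedure named above, the sketch below runs the three standard steps (PCA on standardized species concentrations, "absolute" scores obtained by subtracting the score of a fictitious zero-concentration day, regression of measured mass on those scores) on synthetic data. All numbers are invented; none of the source profiles correspond to the Corpus Christi results.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in data: two hypothetical sources mixing into 5 species.
n_days, n_species, n_src = 200, 5, 2
strength = rng.gamma(2.0, 1.0, size=(n_days, n_src))       # daily source strengths
profiles = rng.uniform(0.1, 1.0, size=(n_src, n_species))  # source profiles
species = strength @ profiles + rng.normal(0, 0.05, (n_days, n_species))
pm25 = strength.sum(axis=1) + rng.normal(0, 0.1, n_days)   # measured mass

# 1) PCA on standardized concentrations (via SVD of the z-score matrix).
mu, sd = species.mean(0), species.std(0)
z = (species - mu) / sd
_, _, vt = np.linalg.svd(z, full_matrices=False)
scores = z @ vt[:n_src].T                 # retained principal component scores

# 2) Absolute scores: subtract the score of a fictitious zero-concentration day.
z0 = (np.zeros(n_species) - mu) / sd
apcs = scores - z0 @ vt[:n_src].T

# 3) Regress daily PM2.5 mass on the APCS to apportion it among sources.
X = np.column_stack([np.ones(n_days), apcs])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((pm25 - pred) ** 2) / np.sum((pm25 - pm25.mean()) ** 2)
```

    The regression coefficients times the mean APCS give each factor's mean mass contribution, which is how percentages like the 46% secondary-sulphate share are derived in practice.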

  7. Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Dapino, Marcelo J.

    2016-04-01

    Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high-power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net-shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance and guide the development of improved quality monitoring and control strategies.
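
    A minimal lumped-parameter sketch of such an LTI two-port is given below, assuming a single mass-spring-damper resonator with idealized piezoelectric coupling (current and shear force in, velocity and voltage out). The parameter values are illustrative guesses, not identified UAM welder parameters.

```python
import numpy as np

# Illustrative lumped parameters (assumptions, not measured values).
m, c = 0.5, 200.0                       # moving mass (kg), damping (N s/m)
f0 = 20e3                               # nominal resonance (Hz), typical UAM band
k = m * (2 * np.pi * f0) ** 2           # stiffness chosen to resonate at f0 (N/m)
theta, C0 = 4.0, 2e-8                   # coupling (N/V), clamped capacitance (F)

def welder_response(freq_hz, I, F):
    """Return (velocity v, voltage V) for current input I (A) and shear force F (N)."""
    w = 2 * np.pi * freq_hz
    Zm = c + 1j * (w * m - k / w)       # mechanical impedance seen by the horn
    Ze = 1.0 / (1j * w * C0)            # electrical impedance of the clamped stack
    v = (theta * Ze * I - F) / (Zm + theta**2 * Ze)
    V = Ze * (I - theta * v)
    return v, V

# With no load (F = 0), velocity per ampere peaks near the mechanical resonance.
v_res, _ = welder_response(f0, 1.0, 0.0)
v_off, _ = welder_response(0.8 * f0, 1.0, 0.0)
```

    In the same spirit as the abstract, fitting the two-port's parameters to measured frequency responses would let in-situ current and voltage measurements be inverted to estimate the unmeasured shear force.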

  8. Current and estimated future atmospheric nitrogen loads to the Chesapeake Bay Watershed

    EPA Science Inventory

    Nitrogen deposition for CMAQ scenarios in 2011, 2017, 2023, 2028, and a 2048-2050 RCP 4.5 climate scenario will be presented for the watershed and tidal waters. Comparisons will be made with the 2017 Airshed Model to the previous 2010 Airshed Model estimates. In addition, atmosph...

  9. STUDY USING A THREE-DIMENSIONAL SMOG FORMATION MODEL UNDER CONDITIONS OF COMPLEX FLOW

    EPA Science Inventory

    To clarify the photochemical smog formation mechanisms under conditions of complex flow, the SAI Urban Airshed Model was evaluated using a 1981 field observed data base. In the Tokyo Metropolitan Area higher O3 concentrations are usually observed near the shore in the morning. As...

  10. Relative quantitative comparisons of the extracellular protein profiles of Staphylococcus aureus UAMS-1 and its sarA, agr, and sarA agr regulatory mutants using one-dimensional polyacrylamide gel electrophoresis and nanocapillary liquid chromatography coupled with tandem mass spectrometry.

    PubMed

    Jones, Richard C; Deck, Joanna; Edmondson, Ricky D; Hart, Mark E

    2008-08-01

    One-dimensional polyacrylamide gel electrophoresis followed by nanocapillary liquid chromatography coupled with mass spectrometry was used to analyze proteins isolated from Staphylococcus aureus UAMS-1 after 3, 6, 12, and 24 h of in vitro growth. Protein abundance was determined using a quantitative value termed normalized peptide number, and overall, proteins known to be associated with the cell wall were more abundant early on in growth, while proteins known to be secreted into the surrounding milieu were more abundant late in growth. In addition, proteins from spent media and cell lysates of strain UAMS-1 and its isogenic sarA, agr, and sarA agr regulatory mutant strains during exponential growth were identified, and their relative abundances were compared. Extracellular proteins known to be regulated by the global regulators sarA and agr displayed protein levels in accordance with what is known regarding the effects of these regulators. For example, cysteine protease (SspB), endopeptidase (SspA), staphopain (ScpA), and aureolysin (Aur) were higher in abundance in the sarA and sarA agr mutants than in strain UAMS-1. The immunoglobulin G (IgG)-binding protein (Sbi), immunodominant staphylococcal antigen A (IsaA), IgG-binding protein A (Spa), and the heme-iron-binding protein (IsdA) were most abundant in the agr mutant background. Proteins whose abundance was decreased in the sarA mutant included fibrinogen-binding protein (Fib [Efb]), IsaA, lipase 1 and 2, and two proteins identified as putative leukocidin F and S subunits of the two-component leukotoxin family. Collectively, this approach identified 1,263 proteins (matches of two peptides or more) and provided a convenient and reliable way of identifying proteins and comparing their relative abundances.

  11. Formaldehyde and its relation to CO, PAN, and SO2 in the Houston-Galveston airshed

    NASA Astrophysics Data System (ADS)

    Rappenglück, B.; Dasgupta, P. K.; Leuchner, M.; Li, Q.; Luke, W.

    2010-03-01

    The Houston-Galveston Airshed (HGA) is one of the major metropolitan areas in the US classified as a nonattainment area for federal ozone standards. Formaldehyde (HCHO) is a key species in understanding ozone-related air pollution; some of the highest HCHO concentrations in North America have been reported for the HGA. We report on HCHO measurements in the HGA from summer 2006. Among several sites, maximum HCHO mixing ratios were observed in the Houston Ship Channel (HSC), a region with a very high density of industrial/petrochemical operations. HCHO levels at the Moody Tower (MT) site close to downtown depended on wind direction: southerly maritime winds brought in background levels (0.5-1 ppbv), while trajectories originating in the HSC resulted in high HCHO (up to 31.5 ppbv). Based on the best multiparametric linear regression model fit, the HCHO levels at the MT site can be accounted for as follows: 38.5±12.3% from primary vehicular emissions (using CO as an index of vehicular emission), 24.1±17.7% formed photochemically (using peroxyacetic nitric anhydride (PAN) as an index of photochemical activity), and 8.9±11.2% from industrial emissions (using SO2 as an index of industrial emissions). The remaining 28.5±12.7% constitutes a residual that cannot be easily ascribed to the above categories and/or is transported into the HGA. The CO-related HCHO fraction is dominant during the morning rush hour (06:00-09:00 h; all times are given in CDT); on a carbon basis, HCHO emissions are up to 0.7% of the CO emissions. The SO2-related HCHO fraction is significant between 09:00 and 12:00 h. After 12:00 h, HCHO is largely formed through secondary processes. The HCHO/PAN ratios depend on the SO2 levels. The SO2-related HCHO fraction at the downtown site originates in the ship channel. Aside from traffic-related primary HCHO emissions, HCHO of industrial origin serves as an appreciable source of OH in the morning.
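
    The multiparametric regression apportionment described above can be illustrated as follows: fit HCHO against the CO, PAN, and SO2 tracers plus an intercept, then convert each fitted term into a mean fractional contribution. The data below are synthetic and the coefficients are not those of the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
# Synthetic hourly tracers (hypothetical levels, ppbv).
co  = rng.gamma(3.0, 80.0, n)     # vehicular index
pan = rng.gamma(2.0, 0.4, n)      # photochemical index
so2 = rng.gamma(2.0, 1.5, n)      # industrial index
# Synthetic "truth": HCHO built from the tracers plus a background and noise.
hcho = 0.004 * co + 1.2 * pan + 0.3 * so2 + 0.5 + rng.normal(0, 0.1, n)

# Fit HCHO = a*CO + b*PAN + c*SO2 + d by ordinary least squares.
X = np.column_stack([co, pan, so2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, hcho, rcond=None)

# Mean fraction of HCHO ascribed to each tracer-tagged term; the part not
# captured by the tracers plays the role of the "residual" in the abstract.
fractions = coef[:3] * X[:, :3].mean(axis=0) / hcho.mean()
```

    Splitting the fit by time of day would reproduce the kind of rush-hour versus afternoon attribution reported above.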

  12. Formaldehyde and its relation to CO, PAN, and SO2 in the Houston-Galveston airshed

    NASA Astrophysics Data System (ADS)

    Rappenglück, B.; Dasgupta, P. K.; Leuchner, M.; Li, Q.; Luke, W.

    2009-11-01

    The Houston-Galveston Airshed (HGA) is one of the major metropolitan areas in the US classified as a nonattainment area for federal ozone standards. Formaldehyde (HCHO) is a key species in understanding ozone-related air pollution; some of the highest HCHO concentrations in North America have been reported for the HGA. We report on HCHO measurements in the HGA from summer 2006. Among several sites, maximum HCHO mixing ratios were observed in the Houston Ship Channel (HSC), a region with a very high density of industrial/petrochemical operations. HCHO levels at the Moody Tower (MT) site close to downtown depended on wind direction: southerly maritime winds brought in background levels (0.5-1 ppbv), while trajectories originating in the HSC resulted in high HCHO (up to 31.5 ppbv). Based on the best multiparametric linear regression model fit, the HCHO levels at the MT site can be accounted for as follows: 38.5±12.3% from primary vehicular emissions (using CO as an index of vehicular emission), 24.1±17.7% formed photochemically (using peroxyacetic nitric anhydride (PAN) as an index of photochemical activity), and 8.9±11.2% from industrial emissions (using SO2 as an index of industrial emissions). The remaining 28.5±12.7% constitutes a residual that cannot be easily ascribed to the above categories and/or is transported into the HGA. The CO-related HCHO fraction is dominant during the morning rush hour (06:00-09:00 h; all times are given in CDT); on a carbon basis, HCHO emissions are up to 0.7% of the CO emissions. The SO2-related HCHO fraction is significant between 09:00 and 12:00 h. After 12:00 h, HCHO is largely formed through secondary processes. The HCHO/PAN ratios depend on the SO2 levels. The SO2-related HCHO fraction at the downtown site originates in the ship channel. Aside from traffic-related primary HCHO emissions, HCHO of industrial origin serves as an appreciable source of OH in the morning.

  13. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS: PART II--OZONE PREDICTIONS. (R825260)

    EPA Science Inventory

    In this paper, the concept of scale analysis is applied to evaluate ozone predictions from two regional-scale air quality models. To this end, seasonal time series of observations and predictions from the RAMS3b/UAM-V and MM5/MAQSIP (SMRAQ) modeling systems for ozone were spectra...

  14. Implications of different approaches for characterizing ambient air pollutant concentrations within the urban airshed for time-series studies and health benefits analyses.

    PubMed

    Strickland, Matthew J; Darrow, Lyndsey A; Mulholland, James A; Klein, Mitchel; Flanders, W Dana; Winquist, Andrea; Tolbert, Paige E

    2011-05-11

    In time-series studies of the health effects of urban air pollutants, decisions must be made about how to characterize pollutant levels within the airshed. Emergency department visits for pediatric asthma exacerbations were collected from Atlanta hospitals. Concentrations of carbon monoxide, nitrogen dioxide, ozone, sulfur dioxide, particulate matter less than 10 microns in diameter (PM10), particulate matter less than 2.5 microns in diameter (PM2.5), and the PM2.5 components elemental carbon, organic carbon, and sulfate were obtained from networks of ambient air quality monitors. For each pollutant we created three different daily metrics. For one metric we used the measurements from a centrally-located monitor; for the second we averaged measurements across the network of monitors; and for the third we estimated the population-weighted average concentration using an isotropic spatial model. Rate ratios for each of the metrics were estimated from time-series models. For pollutants with relatively homogeneous spatial distributions we observed only small differences in the rate ratio across the three metrics. Conversely, for spatially heterogeneous pollutants we observed larger differences in the rate ratios. For a given pollutant, the strength of evidence for an association (i.e., chi-square statistics) tended to be similar across metrics. Given that the chi-square statistics were similar across the metrics, the differences in the rate ratios for the spatially heterogeneous pollutants may seem like a relatively small issue. However, these differences are important for health benefits analyses, where results from epidemiological studies on the health effects of pollutants (per unit change in concentration) are used to predict the health impacts of a reduction in pollutant concentrations. We discuss the relative merits of the different metrics as they pertain to time-series studies and health benefits analyses.
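
    The three exposure metrics compared in this study reduce to simple arithmetic on a day's monitor readings; the readings and population weights below are hypothetical, and a real population-weighted metric would come from a fitted spatial model rather than fixed shares.

```python
import numpy as np

# Hypothetical daily readings (ppb) from four monitors and assumed
# population shares for each monitor's service area.
readings = np.array([38.0, 44.0, 51.0, 47.0])
pop_share = np.array([0.40, 0.25, 0.20, 0.15])

central = readings[0]                       # metric 1: centrally located monitor
network_avg = float(readings.mean())        # metric 2: unweighted network average
pop_weighted = float(readings @ pop_share)  # metric 3: population-weighted average
```

    For a spatially homogeneous pollutant the three numbers nearly coincide; the more the monitors disagree, the more the choice of metric matters, which is the paper's point about spatially heterogeneous pollutants.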

  15. Inclusion in the Workforce for Students with Intellectual Disabilities: A Case Study of a Spanish Postsecondary Education Program

    ERIC Educational Resources Information Center

    Judge, Sharon; Gasset, Dolores Izuzquiza

    2015-01-01

    The Autonomous University of Madrid (UAM) is the first Spanish university to provide training to young people with intellectual disabilities (ID) in the university environment, which qualifies them for inclusion in the workforce. In this practice brief we describe the UAM-Prodis Patronage Chair program, a successful model used at Spanish…

  16. Temperature and Relative Humidity Vertical Profiles within Planetary Boundary Layer in Winter Urban Airshed

    NASA Astrophysics Data System (ADS)

    Bendl, Jan; Hovorka, Jan

    2017-12-01

    The planetary boundary layer is a dynamic system with turbulent flow in which horizontal and vertical air mixing depends mainly on weather conditions and geomorphology. Normally, air temperature decreases with height above the Earth's surface, but inversions may occur, mainly during winter. Pollutant dispersion is poor during inversions, so air pollutant concentrations can rise quickly, especially in enclosed urban valleys. Air pollution has been evaluated by the WHO as carcinogenic to humans (mostly due to polycyclic aromatic hydrocarbons), and its health effects are well documented. Knowledge of the inversion layer height is important for estimating the impact of pollution and can also provide information about air pollution sources. Temperature and relative humidity vertical profiles complement ground measurements. Ground measurements were conducted to comprehensively characterize the urban airshed in Svermov, a residential district of the city of Kladno about 30 km NW of Prague, from 2 Feb. to 3 March 2016. Svermov has long been an air pollution hot-spot for benzo[a]pyrene (B[a]P) limit exceedances, recording the highest annual B[a]P concentration in Bohemia, the western part of the Czech Republic. Since Svermov sits in a shallow valley, frequent wintertime vertical temperature inversions and the low emission heights of pollution sources prevent pollutant dispersal out of the valley. Such orography is common to numerous small settlements in the Czech Republic. Ground measurements at the sports field in Svermov were complemented by temperature and humidity vertical profiles acquired by a Vaisala radiosonde on a tethered He-filled balloon. A total of 53 series of vertical profiles up to a height of 300 m was conducted. Meteorological parameters were acquired at a 4 Hz frequency. The measurements confirmed frequent early-morning and nighttime formation of temperature inversions within the boundary layer up to a height of 50 m. This rather shallow inversion had significant

  17. UDP-arabinopyranose mutase 3 is required for pollen wall morphogenesis in rice (Oryza sativa).

    PubMed

    Sumiyoshi, Minako; Inamura, Takuya; Nakamura, Atsuko; Aohara, Tsutomu; Ishii, Tadashi; Satoh, Shinobu; Iwai, Hiroaki

    2015-02-01

    l-Arabinose is one of the main constituents of cell wall polysaccharides such as pectic rhamnogalacturonan I (RG-I), glucuronoarabinoxylans and other glycoproteins. It is found predominantly in the furanose form rather than in the thermodynamically more stable pyranose form. UDP-L-arabinofuranose (UDP-Araf), rather than UDP-L-arabinopyranose (UDP-Arap), is a sugar donor for the biosynthesis of arabinofuranosyl (Araf) residues. UDP-arabinopyranose mutases (UAMs) have been shown to interconvert UDP-Araf and UDP-Arap and are involved in the biosynthesis of polysaccharides including Araf. The UAM gene family has three members in Oryza sativa. Co-expression network in silico analysis showed that OsUAM3 expression was independent from OsUAM1 and OsUAM2 co-expression networks. OsUAM1 and OsUAM2 were expressed ubiquitously throughout plant development, but OsUAM3 was expressed primarily in reproductive tissue, particularly at the pollen cell wall formation developmental stage. OsUAM3 co-expression networks include pectin catabolic enzymes. To determine the function of OsUAMs in reproductive tissues, we analyzed RNA interference (RNAi)-knockdown transformants (OsUAM3-KD) specific for OsUAM3. OsUAM3-KD plants grew normally and showed abnormal phenotypes in reproductive tissues, especially in terms of the pollen cell wall and exine. In addition, we examined modifications of cell wall polysaccharides at the cellular level using antibodies against polysaccharides including Araf. Immunolocalization of arabinan using the LM6 antibody showed low levels of arabinan in OsUAM3-KD pollen grains. Our results suggest that the function of OsUAM3 is important for synthesis of arabinan side chains of RG-I and is required for reproductive developmental processes, especially the formation of the cell wall in pollen.

  18. 76 FR 28181 - Approval and Promulgation of Air Quality Implementation Plans; New Mexico; Sunland Park Section...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-16

    ... Santa Teresa are located along the border region of New Mexico and are adjacent to El Paso, Texas, and Ciudad Juarez, Mexico, or what is commonly referred to as the Paso del Norte Airshed. New Mexico... due to airshed contributions from Mexico and Texas. Air quality within the Paso del Norte Airshed has...

  19. Dynamics of ultrasonic additive manufacturing.

    PubMed

    Hehr, Adam; Dapino, Marcelo J

    2017-01-01

    Ultrasonic additive manufacturing (UAM) is a solid-state technology for joining similar and dissimilar metal foils near room temperature by scrubbing them together with ultrasonic vibrations under pressure. Structural dynamics of the welding assembly and work piece influence how energy is transferred during the process and, ultimately, part quality. To understand the effect of structural dynamics during UAM, a linear time-invariant model is proposed to relate the inputs of shear force and electric current to the resultant welder velocity and voltage. The measured frequency response and operating performance of the welder under no load are used to identify model parameters. Using this model and in-situ measurements, shear force and welder efficiency are estimated to be near 2000 N and 80%, respectively, when welding Al 6061-H18 weld foil. Shear force and welder efficiency have never before been estimated in UAM. The influence of processing conditions, i.e., welder amplitude, normal force, and weld speed, on shear force and welder efficiency is investigated. Welder velocity was found to strongly influence the shear force magnitude and efficiency, while normal force and weld speed showed little to no influence. The proposed model is used to describe high-frequency harmonic content in the velocity response of the welder during welding operations and coupling of the UAM build with the welder.

  20. Modeling the Dispersion of Inert Particles Using the SAQM Model

    NASA Astrophysics Data System (ADS)

    Pearson, R.; Fitzgerald, R. M.

    2005-12-01

    Cities throughout the U.S. are subject to the emission of particulate matter (PM) into the atmosphere from a variety of sources. The impact of these emissions has been studied extensively for regulatory compliance in the areas of health effects, air quality and visibility. Little work has been done to study the fate and transport of inert particulate matter within the El Paso-Juarez Airshed. The Environmental Physics Group at The University of Texas at El Paso has recently applied the SARMAP Air Quality Model (SAQM) to model the dispersion of inert particulate matter in the El Paso-Juarez Airshed. The meteorological data for the SAQM were created with the Penn State/NCAR meteorological modeling system, version 5 (MM5). The SAQM was used to simulate two common occurrences of large particulate emission and concentration. The first was periods of heavy traffic volume at the international bridges, which cause large numbers of cars to sit, with engines running, for extended periods of time. The second was moderate to high wind events that cause large amounts of coarse particulate matter to become entrained in the atmosphere and transported into and around the region. Output from the MM5 was used as the meteorological driver for the SAQM. The MM5 was initialized with data from the NCAR reanalysis project. Meteorological data collected in the region by the Texas Commission on Environmental Quality (TCEQ) and by the EPA were used for Four Dimensional Data Assimilation. The MM5 was nudged with gridded, surface and observational data. Statistical analysis was performed on the MM5 variables wind speed, wind direction, temperature and mixing ratio. The statistics included RMSE, RMSEs, RMSEu and the index of agreement. The SAQM was applied to the domain with a grid cell size of 1.3 km per side. Temporal comparisons were made against the EPA's PM2.5 observations to identify similarities in the evolution of the SAQM with observation. The experience gained in this work will facilitate further

  1. The two Rasamsonia emersonii α-glucuronidases, ReGH67 and ReGH115, show a different mode-of-action towards glucuronoxylan and glucuronoxylo-oligosaccharides.

    PubMed

    Martínez, Patricia Murciano; Appeldoorn, Maaike M; Gruppen, Harry; Kabel, Mirjam A

    2016-01-01

    The production of biofuels and biochemicals from grass-type plant biomass requires a complete utilisation of the plant cellulose and hemicellulosic xylan via enzymatic degradation to their constituent monosaccharides. Generally, physical and/or thermochemical pretreatments are performed to enable access for the subsequently added carbohydrate-degrading enzymes. Nevertheless, partly substituted xylan structures remain after pretreatment, in particular those substituted with (4-O-methyl-)glucuronic acids (UAme). Hence, α-glucuronidases play an important role in the degradation of UAmexylan structures, facilitating the complete utilisation of plant biomass. The characterisation of α-glucuronidases is necessary to identify the right enzymes to improve degradation of recalcitrant UAmexylan structures. The mode-of-action of two α-glucuronidases was demonstrated, both obtained from the fungus Rasamsonia emersonii: one belonging to glycoside hydrolase (GH) family 67 (ReGH67) and the other to GH115 (ReGH115). Both enzymes functioned optimally at around pH 4 and 70 °C. ReGH67 was able to release UAme from UAme-substituted xylo-oligosaccharides (UAmeXOS), but only the UAme linked to the non-reducing end xylosyl residue was cleaved. In particular, in a mixture of oligosaccharides, UAmeXOS having a degree of polymerisation (DP) of two were hydrolysed to a further extent than longer UAmeXOS (DP 3-4). In contrast, ReGH115 was able to release UAme from both polymeric UAmexylan and UAmeXOS. ReGH115 cleaved UAme from both internal and non-reducing end xylosyl residues, with the exception of UAme attached to the non-reducing end of a xylotriose oligosaccharide. In this research, and for the first time, we define the mode-of-action of two α-glucuronidases from two different GH families, both from the ascomycete R. emersonii. To date, only four α-glucuronidases classified in GH115 have been characterised. ReGH67 showed limited substrate specificity towards only UAmeXOS, cleaving

  2. Using Passive Sampling to Assess Ozone Formation in Sparsely Monitored Areas

    NASA Astrophysics Data System (ADS)

    Crosby, C. M.; Mainord, J.; George, L. A.

    2016-12-01

    Tropospheric ozone (O3), a secondary pollutant, is detrimental to both human health and the environment. O3 is formed from nitrogen oxides (NOx) and volatile organic compounds (VOCs) in the presence of sunlight. Hermiston is a small rural city in Oregon (population 17,707) where O3 levels are expected to be minimal. However, Hermiston has recently experienced elevated O3 concentrations approaching EPA non-attainment levels. These levels were not predicted by airshed modeling of the region, suggesting that precursor emissions are not adequately represented in the model. Due to the limited monitoring in the area, there are no measurements of precursors in the region. In this study, passive Ogawa samplers were used to measure NOx and O3 levels at twenty sites in the area. The concentrations were then mapped in conjunction with wind trajectories derived from HYSPLIT and compared to NOx point sources obtained from the National Emissions Inventory (NEI). The measurement campaign revealed areas of elevated NOx concentrations that were not accounted for in the airshed model. Further exploration is needed to identify these sources. This study lays the groundwork for using passive sampling to ground-truth airshed models in the absence of monitoring networks.

  3. Using Sediment Records to Reconstruct Historical Inputs of Combustion-Derived Contaminants to Urban Airsheds/Watersheds: A Case Study From the Puget Sound

    NASA Astrophysics Data System (ADS)

    Louchouarn, P. P.; Kuo, L.; Brandenberger, J.; Marcantonio, F.; Wade, T. L.; Crecelius, E.; Gobeil, C.

    2008-12-01

    Urban centers are major sources of combustion-derived particulate matter (e.g. black carbon (BC), polycyclic aromatic hydrocarbons (PAH), anhydrosugars) and volatile organic compounds to the atmosphere. Evidence is mounting that atmospheric emissions from combustion sources remain major contributors to air pollution of urban systems. For example, recent historical reconstructions of depositional fluxes for pyrogenic PAHs close to urban systems have shown an unanticipated reversal in the trends of decreasing emissions initiated during the mid-20th Century. Here we compare a series of historical reconstructions of combustion emission in urban and rural airsheds over the last century using sedimentary records. A complex suite of combustion proxies (BC, PAHs, anhydrosugars, stable lead concentrations and isotope signatures) assisted in elucidating major changes in the type of atmospheric aerosols originating from specific processes (i.e. biomass burning vs. fossil fuel combustion) or fuel sources (wood vs. coal vs. oil). In all studied locations, coal continues to be a major source of combustion-derived aerosols since the early 20th Century. Recently, however, oil and biomass combustion have become substantial additional sources of atmospheric contamination. In the Puget Sound basin, along the Pacific Northwest region of the U.S., rural locations not impacted by direct point sources of contamination have helped assess the influence of catalytic converters on concentrations of oil-derived PAH and lead inputs since the early 1970s. Although atmospheric deposition of lead has continued to drop since the introduction of catalytic converters and ban on leaded gasoline, PAH inputs have "rebounded" in the last decade. A similar steady and recent rise in PAH accumulations in urban systems has been ascribed to continued urban sprawl and increasing vehicular traffic. In the U.S., automotive emissions, whether from gasoline or diesel combustion, are becoming a major source of

  4. The Effect of Ultrasonic Additive Manufacturing on Integrated Printed Electronic Conductors

    NASA Astrophysics Data System (ADS)

    Bournias-Varotsis, Alkaios; Wang, Shanda; Hutt, David; Engstrøm, Daniel S.

    2018-07-01

    Ultrasonic additive manufacturing (UAM) is a low-temperature manufacturing method capable of embedding printed electronics in metal components. The effect of UAM processing on the resistivity of conductive tracks printed with five different conductive pastes, based on silver, copper or carbon flakes/particles in either a thermoplastic or thermoset binder, is investigated. For all but the carbon-based paste, the resistivity changed linearly with the UAM energy input. After UAM processing, a resistivity increase of more than 150 times was recorded for the copper-based thermoset paste. The silver-based pastes showed a resistivity increase of between 1.1 and 50 times from their initial values. The carbon-based paste showed no change in resistivity after UAM processing. Focussed ion beam microstructure analysis of the printed conductive tracks before and after UAM processing showed that the silver particles and flakes in at least one of the pastes partly dislodged from their thermoset binder, creating voids and thereby increasing the resistivity, whereas the silver flakes in a thermoplastic binder did not dislodge due to material flow of the polymer binder. The lowest resistivity (8 × 10-5 Ω cm) after UAM processing was achieved for a thermoplastic paste with silver flakes at low UAM processing energy.

  6. Genetic Algorithms and Nucleation in HIV-AIDS Transition.

    NASA Astrophysics Data System (ADS)

    Barranon, Armando

    2003-03-01

    The HIV-to-AIDS transition has been modeled via a genetic algorithm that uses the boom-boom principle, with population evolution simulated by a cellular automaton based on the SIR model. The HIV-to-AIDS transition is marked by the nucleation of infected cells, and low infection probabilities are obtained for different mutation rates, in agreement with clinical results. A power law is obtained with a critical exponent close to the critical exponents of cubic and spherical percolation, colossal magnetoresistance, the Ising model, and the liquid-gas phase transition in heavy-ion collisions. Computations were carried out at the UAM-A Supercomputing Lab, and the author acknowledges financial support from the Division of CBI at UAM-A.

  7. Designing an in-situ ultrasonic nondestructive evaluation system for ultrasonic additive manufacturing

    NASA Astrophysics Data System (ADS)

    Nadimpalli, Venkata K.; Nagy, Peter B.

    2018-04-01

    Ultrasonic Additive Manufacturing (UAM) is a solid-state, layer-by-layer manufacturing process that utilizes vibration-induced plastic deformation to form a metallurgical bond between a thin layer and an existing base structure. Because of the vibration-based bonding mechanism, the quality of components at each layer depends on the geometry of the structure. In-situ monitoring during and between UAM manufacturing steps offers the potential for closed-loop control to optimize process parameters and to repair existing defects. The interface most prone to delamination is the base/build interface, and UAM component height and quality are often limited by failure there. Low manufacturing temperatures and the favorable orientation of typical interface defects in UAM make ultrasonic NDE an attractive candidate for online monitoring. Two approaches for in-situ NDE are discussed, and the design of the monitoring system is optimized so that the quality of UAM components is not affected by the addition of the NDE setup. Preliminary results from in-situ ultrasonic NDE indicate its potential for online qualification, closed-loop control, and offline certification of UAM components.

  8. Impact of ultrasound on solid-liquid extraction of phenolic compounds from maritime pine sawdust waste. Kinetics, optimization and large scale experiments.

    PubMed

    Meullemiestre, A; Petitcolas, E; Maache-Rezzoug, Z; Chemat, F; Rezzoug, S A

    2016-01-01

    Maritime pine sawdust, a by-product of the wood transformation industry, has been investigated as a potential source of polyphenols, which were extracted by ultrasound-assisted maceration (UAM). UAM was optimized to enhance polyphenol extraction efficiency and reduce processing time. First, a preliminary study was carried out by conventional maceration (CVM) to optimize the solid/liquid ratio (6 g of dry material per mL) and the particle size (0.26 cm²). Under these conditions, the optimum conditions for polyphenol extraction by UAM, obtained by response surface methodology, were 0.67 W/cm² ultrasonic intensity (UI), 40 °C processing temperature (T) and 43 min sonication time (t). UAM was compared with CVM; the results showed that the quantity of polyphenols was improved by 40% (342.4 and 233.5 mg of catechin equivalent per 100 g of dry basis for UAM and CVM, respectively). A multistage cross-current extraction procedure allowed evaluation of the real impact of UAM on solid-liquid extraction enhancement. The potential industrialization of this procedure was demonstrated through a transition from a lab-scale sonicated reactor (3 L) to a large-scale 30 L one. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Coastal recirculation potential affecting air pollutants in Portugal: The role of circulation weather types

    NASA Astrophysics Data System (ADS)

    Russo, Ana; Gouveia, Célia; Levy, Ilan; Dayan, Uri; Jerez, Sonia; Mendes, Manuel; Trigo, Ricardo

    2016-06-01

    Coastal zones are under increasing development and regularly experience air pollution episodes. These episodes are often related to peaks in local emissions from industry or transportation, but can also be associated with regional transport from neighbouring urban areas influenced by land-sea breeze recirculation. This study analyzes the relation between circulation weather patterns, air mass recirculation and pollution levels in three coastal airsheds of Portugal (Lisbon, Porto and Sines), based on the application of an objective quantitative measure of recirculation potential. Although ventilation events dominate throughout the studied 9-year period in all three airsheds, recirculation and stagnation conditions occur frequently. The association between NO2, SO2 and O3 levels and recirculation potential is evident during summer months. Under high average recirculation potential and high variability, NO2 and SO2 levels are higher in all three airsheds, whilst for O3 each airshed responds differently. This indicates high heterogeneity among the three airsheds in (1) the type of emission, traffic or industry, prevailing for each contaminant, and (2) the response to the various circulation weather patterns and recirculation situations. Irrespective of this, the proposed methodology, based on iterative K-means clustering, allows identification of which prevailing patterns are associated with high recirculation potential, and has the advantage of being applicable to any geographical location.
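    A common way to quantify such recirculation potential is the classic wind-run recirculation factor: the shortfall of net transport distance relative to the total wind run over a period. The sketch below is a minimal illustration of that idea; the function name, time step and hourly wind series are assumptions, not details from the study:

```python
import math

def recirculation_factor(u, v, dt_hours=1.0):
    """Wind-run recirculation factor R = 1 - L/S for one site, where S is
    the total wind run and L the resultant (net) transport distance.
    u, v: hourly wind components (m/s). R ~ 0 means ventilation,
    R ~ 1 means the air mass loops back on itself."""
    dt = dt_hours * 3600.0  # seconds per record
    # wind run: total distance travelled by the air parcel
    S = sum(math.hypot(ui, vi) * dt for ui, vi in zip(u, v))
    # resultant transport: net displacement of the parcel
    L = math.hypot(sum(ui * dt for ui in u), sum(vi * dt for vi in v))
    return 1.0 - L / S if S > 0.0 else 0.0

# steady westerly wind for 24 h: pure ventilation
print(recirculation_factor([5.0] * 24, [0.0] * 24))  # → 0.0
# sea breeze reversing after 12 h: complete recirculation
print(recirculation_factor([5.0] * 12 + [-5.0] * 12, [0.0] * 24))  # → 1.0
```

Values between the two extremes then lend themselves to the kind of clustering against circulation weather types the study describes.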

  10. Comparison of Homogeneous and Heterogeneous CFD Fuel Models for Phase I of the IAEA CRP on HTR Uncertainties Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Su-Jong Yoon

    2014-04-01

    A Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before these uncertainties are propagated in the benchmark's subsequent coupled neutronics/thermal-fluids phases. In many previous studies of high temperature gas cooled reactors, a volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact are not modeled directly; an effective thermal conductivity is employed for the thermo-physical properties of the fuel compact. In contrast, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled, and the fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both the homogeneous and heterogeneous models to compare their thermal characteristics. The nominal values of the input parameters were used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
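    The effective thermal conductivity used in such a homogeneous compact model is often obtained from a homogenization relation such as Maxwell-Eucken for spherical inclusions in a continuous matrix. A minimal sketch, with purely illustrative property values (not those of the benchmark):

```python
def maxwell_eucken_k_eff(k_matrix, k_particle, phi):
    """Effective conductivity (W/m-K) of spherical particles with volume
    fraction phi dispersed in a continuous matrix: the kind of relation a
    volume-averaged homogeneous fuel-compact model relies on."""
    num = 2.0 * k_matrix + k_particle + 2.0 * phi * (k_particle - k_matrix)
    den = 2.0 * k_matrix + k_particle - phi * (k_particle - k_matrix)
    return k_matrix * num / den

# illustrative values only: graphite matrix ~30 W/m-K, layer-averaged
# TRISO particle ~10 W/m-K, ~35% particle packing fraction
print(round(maxwell_eucken_k_eff(30.0, 10.0, 0.35), 2))  # → 21.82
```

The heterogeneous model dispenses with this averaging and resolves the UCO/IPyC/SiC/OPyC layers directly.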

  11. Regional photochemical air quality modeling in the Mexico-US border area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, A.; Russell, A.G.; Mejia, G.M.

    1998-12-31

    The Mexico-United States border area has become an increasingly important region due to its commercial, industrial and urban growth, and environmental concerns have grown as a result. Treaties like the North American Free Trade Agreement (NAFTA) have further motivated the development of environmental impact assessment in the area. Of particular concern is air quality, and how activities on both sides of the border contribute to its degradation. This paper presents results of applying a three-dimensional photochemical airshed model to study air pollution dynamics along the Mexico-United States border. In addition, studies were conducted to assess how grid-size resolution impacts model performance. The model performed within acceptable statistical limits using 12.5 × 12.5 km² grid cells, and the benefits of using finer grids were limited. The results were further used to assess the influence of grid-cell size on the modeling of control strategies, where coarser grids led to a significant loss of information.

  12. Feasibility study of using the RoboEarth cloud engine for rapid mapping and tracking with small unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2014-11-01

    This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data in secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE for mapping and tracking applications of small UAMS in outdoor environments. The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter flying over York University at up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3 metre accuracy, an AHRS (Attitude and Heading Reference System) estimating attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured by the onboard camera were processed using VisualSFM and SURE, which are being offered as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increased accuracy when compared to the sUAMS' autopilot solution derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.

  13. Effects of Relaxing and Arousing Music during Imagery Training on Dart-Throwing Performance, Physiological Arousal Indices, and Competitive State Anxiety

    PubMed Central

    Kuan, Garry; Morris, Tony; Kueh, Yee Cheng; Terry, Peter C.

    2018-01-01

    Music that is carefully selected to match the requirements of activities and the characteristics of individuals has been shown to produce significant impacts on performance enhancement (Priest et al., 2004). There is also evidence that music can enhance imagery (Grocke and Wigram, 2007), although few studies have investigated the effects of music on imagery in the context of sport skills. In the present study, the effects of relaxing and arousing music during imagery on dart-throwing performance, physiological arousal indices, and competitive state anxiety were investigated among 63 novice dart throwers. Participants had moderate-to-high imagery ability and were randomly assigned to unfamiliar relaxing music (URM), unfamiliar arousing music (UAM), or no music (NM) groups. Performance was assessed by 40 dart throws at a concentric-circles dartboard before and after 12 imagery sessions over 4 weeks. Measures of galvanic skin response (GSR), peripheral temperature (PT), and heart rate (HR) were taken during imagery sessions 1 and 12, and the Competitive State Anxiety Inventory-2 Revised (CSAI-2R) was administered prior to the pre- and post-intervention performance task. Dart-throwing gain scores were significantly higher for URM than for UAM and NM, with no significant difference between UAM and NM (URM = 37.24 ± 5.66, UAM = 17.57 ± 5.30, and NM = 13.19 ± 6.14; F(2,62) = 5.03, p = 0.01, η² = 0.14). GSR, PT, and HR reflected lower arousal for URM than for UAM or NM. Significant decreases in somatic anxiety were evident for URM and UAM but not NM. Significant decreases in cognitive anxiety were evident for URM and NM but not UAM. Significant increases in self-confidence were evident for URM but not UAM or NM. Performance improved in all three conditions, but URM was associated with the largest performance gain, the lowest physiological indices of arousal, and the most positive CSAI-2R profiles. Listening to relaxing music during imagery may have benefits for
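    The F(2,62) statistic reported above is a one-way ANOVA on the dart-throwing gain scores (post minus pre). A minimal sketch of the computation; the gain scores below are illustrative, not the study's data:

```python
def one_way_anova_F(*groups):
    """Between-group over within-group mean square: the one-way ANOVA
    F statistic, as in the F(2,62) comparison of URM/UAM/NM above."""
    all_vals = [x for g in groups for x in g]
    N, k = len(all_vals), len(groups)
    grand = sum(all_vals) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# illustrative gain scores (post minus pre dart-throwing totals);
# these are NOT the study's data, just a shape-compatible example
urm_gain = [40, 35, 37]
uam_gain = [18, 16, 19]
nm_gain = [12, 14, 13]
print(one_way_anova_F(urm_gain, uam_gain, nm_gain) > 1.0)  # → True
```

A large F indicates that the spread between condition means dominates the spread within conditions.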

  15. VII International Congress of Engineering Physics

    NASA Astrophysics Data System (ADS)

    2015-01-01

    As part of the fortieth anniversary celebration of the Universidad Autónoma Metropolitana and its Physics Engineering program, the Division of Basic Science and Engineering and its Departments organized the "VII International Congress of Physics Engineering". The Congress was held from 24 to 28 November 2014 in Mexico City, Mexico. This congress is the first of its type in Latin America, and because of its international character it gathers experts on physics engineering from Mexico and all over the globe. Since 1999, this event has showcased research articles, projects, technological developments and leading scientists. These activities aim to spread, promote, and share the knowledge of Physics Engineering. The topics of the Congress were: • Renewable energies engineering • Materials technology • Nanotechnology • Medical physics • Educational physics engineering • Nuclear engineering • High precision instrumentation • Atmospheric physics • Optical engineering • Physics history • Acoustics This event integrates lectures on top trending topics with pre-congress workshops, which are given by recognized scientists with outstanding academic records. The lectures and workshops allow the exchange of experiences, and create and strengthen research networks. The Congress also encourages professional mobility among all universities and research centres from all countries. CIIF2014 Organizing and Editorial Committee Dr. Ernesto Rodrigo Vázquez Cerón Universidad Autónoma Metropolitana - Azcapotzalco ervc@correo.azc.uam.mx Dr. Luis Enrique Noreña Franco Universidad Autónoma Metropolitana - Azcapotzalco lnf@correo.azc.uam.mx Dr. Alberto Rubio Ponce Universidad Autónoma Metropolitana - Azcapotzalco arp@correo.azc.uam.mx Dr. Óscar Olvera Neria Universidad Autónoma Metropolitana - Azcapotzalco oon@correo.azc.uam.mx Professor Jaime Granados Samaniego Universidad Autónoma Metropolitana - Azcapotzalco jgs@correo.azc.uam.mx Dr. Roberto Tito Hern

  16. Social Network Analysis of Biomedical Research Collaboration Networks in a CTSA Institution

    PubMed Central

    Bian, Jiang; Xie, Mengjun; Topaloglu, Umit; Hudson, Teresa; Eswaran, Hari; Hogan, William

    2014-01-01

    BACKGROUND The popularity of social networks has triggered a number of research efforts on network analyses of research collaborations in the Clinical and Translational Science Award (CTSA) community. Those studies mainly focus on a general understanding of collaboration networks by measuring common network metrics. More fundamental questions about collaborations remain unanswered, such as recognizing "influential" nodes and identifying potential new collaborations that are most rewarding. METHODS We analyzed biomedical research collaboration networks (RCNs) constructed from a dataset of research grants collected at a CTSA institution (i.e. the University of Arkansas for Medical Sciences (UAMS)) in a comprehensive and systematic manner. First, our analysis covers the full spectrum of an RCN study: from network modeling to network characteristics measurement, from key node recognition to potential link (collaboration) suggestion. Second, our analysis employs non-conventional models and techniques, including a weighted network model for representing collaboration strength, rank aggregation for detecting important nodes, and Random Walk with Restart (RWR) for suggesting new research collaborations. RESULTS By applying our models and techniques to RCNs at UAMS prior to and after the CTSA, we have gained valuable insights that not only reveal the temporal evolution of the network dynamics but also assess the effectiveness of the CTSA and its impact on a research institution. We find that collaboration networks at UAMS are not scale-free but small-world. Quantitative measures show that the RCNs at UAMS are moving towards favoring multidisciplinary research. Moreover, our link prediction model creates the basis for collaboration recommendations with impressive accuracy (AUC: 0.990, MAP@3: 1.48 and MAP@5: 1.522). Last but not least, an open-source visual analytical tool for RCNs is being developed and released through GitHub. CONCLUSIONS
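    The Random Walk with Restart used for link suggestion can be sketched in a few lines: a walker repeatedly teleports back to a seed researcher with some restart probability and otherwise follows weighted collaboration edges; high steady-state scores on current non-collaborators become link suggestions. The graph, weights and restart value below are illustrative, not UAMS data:

```python
def rwr_scores(adj, seed, restart=0.15, iters=200):
    """Random Walk with Restart on a weighted collaboration graph.

    adj: dict node -> {neighbor: edge weight}; seed: the researcher whose
    prospective collaborators we want to rank. Returns steady-state visit
    probabilities; high scores on current non-neighbors suggest links."""
    nodes = list(adj)
    p = {n: (1.0 if n == seed else 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: (restart if n == seed else 0.0) for n in nodes}
        for n in nodes:
            out = sum(adj[n].values())
            for m, w in adj[n].items():
                # walker at n follows edge n->m with probability w/out
                nxt[m] += (1.0 - restart) * p[n] * w / out
        p = nxt
    return p

# toy weighted graph: A-B is a strong tie, B-C and C-D weaker ones
graph = {
    "A": {"B": 3.0},
    "B": {"A": 3.0, "C": 1.0},
    "C": {"B": 1.0, "D": 1.0},
    "D": {"C": 1.0},
}
scores = rwr_scores(graph, "A")
# C sits two hops from A, D three: C is the stronger link suggestion
print(scores["C"] > scores["D"])  # → True
```

The restart term keeps the ranking localized around the seed, which is what makes RWR suitable for per-researcher recommendations.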

  17. Archive for the history of psychology in Spain: The Archivo Histórico, Bibliográfico y Documental de Psicología of the Universidad Autónoma de Madrid.

    PubMed

    Quintana, José; Sáiz, Milagros; Balltondre, Mónica; Sáiz, Dolors

    2012-11-01

    In this article, we describe the content, sources, and history of the Archivo Histórico de la Facultad de Psicología (the Historical Archive for the History of Psychology) at the Universidad Autónoma de Madrid (UAM, Spain). This archive is the result of work carried out by professors of the Faculty of Psychology at UAM to preserve and expand the sources for a history of psychology in Spain. Collections from the 19th and 20th centuries were recovered through the efforts of UAM and other contributors. Most of the sources for a history of psychology in the Spanish context were unknown and nearly lost before the Faculty of Psychology undertook this task. Among other projects, the UAM archive is acquiring classical texts of psychology by buying facsimiles from different publishing houses and, more importantly, it guarantees access to the sources for research purposes. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  18. EUCLID: automatic classification of proteins in functional classes by their database annotations.

    PubMed

    Tamames, J; Ouzounis, C; Casari, G; Sander, C; Valencia, A

    1998-01-01

    A tool is described for the automatic classification of sequences into functional classes using their database annotations. The EUCLID system is based on a simple procedure for learning from examples provided by human experts. EUCLID is freely available for academics at http://www.gredos.cnb.uam.es/EUCLID, with the corresponding dictionaries for the generation of three, eight and 14 functional classes. E-mail: valencia@cnb.uam.es The results of the EUCLID classification of different genomes are available at http://www.sander.ebi.ac.uk/genequiz/. A detailed description of the different applications mentioned in the text is available at http://www.gredos.cnb.uam.es/EUCLID/Full_Paper
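    Dictionary-based classification of the kind EUCLID performs can be sketched as keyword voting over the annotation text. The keywords and class names below are illustrative stand-ins, not EUCLID's actual expert-derived dictionaries:

```python
def classify(annotation, dictionary):
    """Vote for functional classes by counting dictionary keywords that
    occur in the free-text database annotation; return the top class."""
    scores = {}
    text = annotation.lower()
    for keyword, klass in dictionary.items():
        if keyword in text:
            scores[klass] = scores.get(klass, 0) + 1
    return max(scores, key=scores.get) if scores else "Unclassified"

# illustrative keyword -> class pairs; EUCLID's real dictionaries are
# learned from examples classified by human experts
DICTIONARY = {
    "kinase": "Signal transduction",
    "ribosomal": "Translation",
    "polymerase": "Transcription",
    "transporter": "Transport",
}
print(classify("ATP-dependent RNA polymerase subunit", DICTIONARY))  # → Transcription
```

Annotations matching no keyword fall through to an "Unclassified" bucket, mirroring the need for dictionaries tuned to three, eight or 14 classes.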

  19. Critical levels and loads and the regulation of industrial emissions in northwest British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Williston, P.; Aherne, J.; Watmough, S.; Marmorek, D.; Hall, A.; de la Cueva Bueno, P.; Murray, C.; Henolson, A.; Laurence, J. A.

    2016-12-01

    Northwest British Columbia, Canada, a sparsely populated and largely pristine region, is targeted for rapid industrial growth owing to the modernization of an aluminum smelter and multiple proposed liquefied natural gas (LNG) facilities. Consequently, air quality in this region is expected to undergo considerable changes within the next decade. At the same time, the increase in LNG capacity driven by gas production from shale resources across North America has prompted environmental concerns and highlighted the need for science-based management decisions regarding the permitting of air emissions. In this study, an effects-based approach widely used to support transboundary emissions policy negotiations was applied to assess industrial air emissions in the Kitimat and Prince Rupert airsheds under permitted and potential future industrial emissions. Critical levels of SO2 and NO2 for vegetation, and critical loads of acidity and nutrient nitrogen for terrestrial and aquatic ecosystems, were estimated for both regions and compared with modelled concentration and deposition estimates to identify the potential extent and magnitude of ecosystem impacts. The critical level for SO2 was predicted to be exceeded over an area ranging from 81 to 251 km² in the Kitimat airshed, owing to emissions from an existing smelter, compared with <1 km² in Prince Rupert under the lowest to highest emissions scenarios. In contrast, the NO2 critical level was not exceeded in Kitimat, and the exceeded area ranged from 4.5 to 6 km² in Prince Rupert owing to proposed LNG-related emissions. The predicted areal exceedance of the critical load of acidity for soil ranged from 1 to 28 km² in Kitimat and 4 to 10 km² in Prince Rupert, while the areal exceedance of the empirical critical load for nutrient N was predicted to be greater in the Prince Rupert airshed (20-94 km²) than in the Kitimat airshed (1-31 km²).
The number of lakes that exceeded the critical load of acidity did not vary greatly across emissions scenarios in the Kitimat (21
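    The areal exceedances reported above follow from a simple per-grid-cell comparison of modelled deposition against the critical load, summing the area of cells in exceedance. A minimal sketch with illustrative numbers, not the study's deposition fields:

```python
def areal_exceedance(cells):
    """Total area (km^2) of grid cells where modelled deposition exceeds
    the ecosystem's critical load: the per-airshed quantity reported
    above. cells: iterable of (deposition, critical_load, area_km2)."""
    return sum(area for deposition, critical_load, area in cells
               if deposition > critical_load)

# illustrative cells (deposition and critical load in the same units)
cells = [(12.0, 10.0, 4.0), (8.0, 10.0, 5.0), (15.0, 9.0, 2.5)]
print(areal_exceedance(cells))  # → 6.5
```

Running the same comparison under each emissions scenario yields the scenario-dependent ranges quoted for the two airsheds.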

  20. NOAA Atmospheric Sciences Modeling Division support to the US Environmental Protection Agency

    NASA Astrophysics Data System (ADS)

    Poole-Kober, Evelyn M.; Viebrock, Herbert J.

    1991-07-01

    During FY-1990, the Atmospheric Sciences Modeling Division provided meteorological research and operational support to the U.S. Environmental Protection Agency. Basic meteorological operational support consisted of applying dispersion models and conducting dispersion studies and model evaluations. The primary research effort was the development and evaluation of air quality simulation models using numerical and physical techniques supported by field studies. Modeling emphasis was on the dispersion of photochemical oxidants and particulate matter on urban and regional scales, dispersion in complex terrain, and the transport, transformation, and deposition of acidic materials. Highlights included expansion of the Regional Acid Deposition Model/Engineering Model family to consist of the Tagged Species Engineering Model, the Non-Depleting Model, and the Sulfate Tracking Model; completion of the Acid-MODES field study; completion of the RADM2.1 evaluation; completion of the atmospheric processes section of the National Acid Precipitation Assessment Program 1990 Integrated Assessment; conduct of the first field study to examine the transport and entrainment processes of convective clouds; development of a Regional Oxidant Model-Urban Airshed Model interface program; conduct of an international sodar intercomparison experiment; incorporation of building wake dispersion in numerical models; conduct of wind-tunnel simulations of stack-tip downwash; and initiation of the publication of SCRAM NEWS.

  1. Urban airshed modeling of air quality impacts of alternative transportation fuel use in Los Angeles and Atlanta

    DOT National Transportation Integrated Search

    1997-12-01

    This report documents a photochemical modeling study of the potential impacts on air quality of future emissions from alternative fuel vehicles (AFVs). The main objective of the National Renewable Energy Laboratory (NREL) in supporting this study is ...

  2. Comparative study on the physicochemical and functional properties of the mucilage in the carpel of Nymphaea odorata using ultrasonic and classical heating extractions.

    PubMed

    Wu, WeiZhi; Tu, ChinWei; Yang, WenJen; Wang, HengLong; Chang, ChaoLin; Chung, JengDer; Lu, MeiKuang; Liao, WeiTung

    2018-02-21

    The cooked carpel of Nymphaea odorata has a large amount of transparent mucilage; however, the basic characteristics of this mucilage have not yet been reported. This study compared the physicochemical and functional properties of this mucilage obtained using conventional hot water extraction (HWM) and ultrasonic-assisted extraction (UAM). Neither HWM nor UAM affected the viability of mouse skin fibroblasts (NIH/3T3) below 100 μg/mL. UAM gave a higher yield, phenol concentration, and in vitro antioxidant activity, but lower viscosity and water-holding capacity, than HWM. Fourier transform infrared spectra revealed that the dialyzed HWM and UAM, named HWMD and UAMD respectively, showed major spectral differences at 1730 cm⁻¹ and 1605 cm⁻¹, implying that the degree of methylation differed between HWMD and UAMD. Compared to HWMD, UAMD was enriched in low-molecular-weight polysaccharides. Evidently, the basic characteristics of the native mucilage in the carpel of N. odorata were greatly changed by the different extractions. Nevertheless, sugar analysis indicated that glucuronic acid was the main component of HWMD and UAMD. Copyright © 2017. Published by Elsevier B.V.

  3. Comparative Study of the Minichromosome Maintenance Proteins Complex (MCM 4/5/6) in Ameloblastoma and Unicystic Ameloblastoma.

    PubMed

    Apellániz, Delmira; Pereira-Prado, Vanesa; Sicco, Estefania; Vigil-Bastitta, Gabriela; González-González, Rogelio; Mosqueda-Taylor, Adalberto; Molina-Frechero, Nelly; Hernandez, Marcela; Sánchez-Romero, Celeste; Bologna-Molina, Ronell

    2018-05-01

    Solid/conventional ameloblastoma (AM) and unicystic ameloblastoma (UAM) are the most frequent benign epithelial odontogenic tumors of the maxillary region, and their treatment usually consists of extensive surgical resection. It is therefore relevant to study molecular markers to better understand the biological behavior of these tumors. The aim of this study was to describe and compare the expression of proteins related to cellular proliferation: Ki-67 and the MCM4-6 complex. Immunohistochemistry was performed with antibodies against Ki-67, MCM4, MCM5, and MCM6 in 10 AM and 10 UAM tumors. The results were quantified using a labeling index and analyzed statistically. AM and UAM had greater expression of MCM6, followed by MCM5, MCM4, and Ki-67 (P < .05). Immunoexpression of Ki-67 and MCM5 was exclusively nuclear, whereas the expression of MCM4 and MCM6 was nuclear and cytoplasmic. The results suggest that MCM5 is a reliable cell proliferation marker with higher sensitivity than Ki-67 and may be useful for predicting the biological behavior of AM and UAM. Nevertheless, further studies are necessary, including correlation with clinical parameters, to confirm these findings.

  4. Air quality modeling of selected aromatic and non-aromatic air toxics in the Houston urban and industrial airshed

    NASA Astrophysics Data System (ADS)

    Coarfa, Violeta Florentina

    2007-12-01

    Air toxics, also called hazardous air pollutants (HAPs), pose a serious threat to human health and the environment. Their study is important in the Houston area, where point sources, mostly located along the Ship Channel, together with mobile and area sources, contribute large emissions of such toxic pollutants. Previous studies carried out in this area found dangerous levels of different HAPs in the atmosphere. This thesis presents several studies of aromatic and non-aromatic air toxics in the Houston-Galveston area (HGA). For these studies we developed several tools: (1) a refined chemical mechanism, which explicitly represents 18 aromatic air toxics that were lumped under two model species in the previous version, grouping them by their reactivity with the hydroxyl radical; (2) an engineering version of an existing air toxics photochemical model that enables much faster long-term simulations than the original model, yielding an 8- to 9-fold improvement in running time across different computing platforms; (3) a combined emission inventory based on the available emission databases. Using the developed tools, we quantified the mobile-source impact on a few selected air toxics, and analyzed the temporal and spatial variation of selected aromatic and non-aromatic air toxics in a few regions within the Houston area; these regions were characterized by different emissions and environmental conditions.
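    The reactivity-based treatment in tool (1) amounts to partitioning the explicit aromatics by their OH rate constants. A minimal sketch; the threshold and the order-of-magnitude rate constants below are illustrative assumptions, not the thesis's actual mechanism parameters:

```python
def lump_by_oh_reactivity(k_oh, threshold=1.0e-11):
    """Partition explicit species into two lumped model species by their
    OH rate constant (cm^3 molecule^-1 s^-1), mirroring the
    reactivity-based split described for the refined mechanism."""
    fast = [s for s, k in k_oh.items() if k >= threshold]
    slow = [s for s, k in k_oh.items() if k < threshold]
    return {"ARO_FAST": fast, "ARO_SLOW": slow}

# order-of-magnitude OH rate constants; the cutoff is illustrative
k_oh = {"benzene": 1.2e-12, "toluene": 5.6e-12,
        "m-xylene": 2.3e-11, "1,2,4-TMB": 3.3e-11}
print(lump_by_oh_reactivity(k_oh)["ARO_FAST"])  # → ['m-xylene', '1,2,4-TMB']
```

The refined mechanism reverses this lumping, carrying each of the 18 aromatics explicitly instead of two surrogate species.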

  5. Assembling a biogenic hydrocarbon emissions inventory for the SCOS97-NARSTO modeling domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benjamin, M.T.; Winer, A.M.; Karlik, J.

    1998-12-31

    To assist in developing ozone control strategies for Southern California, the California Air Resources Board is developing a biogenic hydrocarbon (BHC) emissions inventory model for the SCOS97-NARSTO domain. The basis for this bottom-up model is SCOS97-NARSTO-specific landuse and landcover maps, leafmass constants, and BHC emission rates. In urban areas, landuse maps developed by the Southern California Association of Governments, San Diego Association of Governments, and other local governments are used, while in natural areas, landcover and plant community databases produced by the GAP Analysis Project (GAP) are employed. Plant identities and canopy volumes for species in each landuse and landcover category are based on the most recent botanical field survey data. Where possible, experimentally determined leafmass constant and BHC emission rate measurements reported in the literature are used or, for those species where experimental data are not available, values are assigned based on taxonomic methods. A geographic information system is being used to integrate these databases, as well as the most recent environmental correction algorithms and canopy shading factors, to produce a spatially- and temporally-resolved BHC emission inventory suitable for input into the Urban Airshed Model.
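    The bottom-up structure described above (area × leafmass constant × base emission rate × environmental correction) can be sketched as a small calculation. This is an illustrative simplification, not the CARB inventory model itself; all function names, species values, and the correction factor are hypothetical.

    ```python
    # Illustrative bottom-up biogenic hydrocarbon (BHC) emission estimate for one
    # landcover cell. Values and names are hypothetical; the actual inventory
    # combines GIS landcover maps with species-specific measured constants.

    def bhc_emission_g_per_hr(area_m2, leafmass_g_per_m2, emission_rate_ug_per_g_hr,
                              env_correction=1.0):
        """Emission (g/hr) = leaf mass * base emission rate * environmental correction."""
        leaf_mass_g = area_m2 * leafmass_g_per_m2          # total leaf mass in the cell
        emission_ug_per_hr = leaf_mass_g * emission_rate_ug_per_g_hr * env_correction
        return emission_ug_per_hr * 1e-6                   # micrograms -> grams

    # Example: a 1 km^2 oak woodland cell, 400 g leaf mass per m^2, an isoprene
    # base rate of 25 ug/g/hr, and a mid-day light/temperature correction of 1.2.
    print(bhc_emission_g_per_hr(1e6, 400.0, 25.0, 1.2))  # 12000.0 g/hr
    ```

    Gridding such cell totals in space and time is what makes the resulting inventory suitable as Urban Airshed Model input.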

  6. Downregulation of a UDP-Arabinomutase Gene in Switchgrass (Panicum virgatum L.) Results in Increased Cell Wall Lignin While Reducing Arabinose-Glycans

    DOE PAGES

    Willis, Jonathan D.; Smith, James A.; Mazarei, Mitra; ...

    2016-10-26

    Switchgrass (Panicum virgatum L.) is a C4 perennial prairie grass and a dedicated feedstock for lignocellulosic biofuels. Saccharification and biofuel yields are inhibited by the plant cell wall's natural recalcitrance against enzymatic degradation. Plant hemicellulose polysaccharides such as arabinoxylans structurally support and cross-link other cell wall polymers. Grasses predominantly have Type II cell walls, which are abundant in arabinoxylan; arabinoxylans comprise nearly 25% of aboveground biomass. A primary component of arabinoxylan synthesis is uridine diphosphate (UDP) linked to arabinofuranose (Araf). A family of UDP-arabinopyranose mutase (UAM)/reversible glycosylated polypeptides catalyzes the interconversion between UDP-arabinopyranose (UDP-Arap) and UDP-Araf. The expression of a switchgrass arabinoxylan biosynthesis pathway gene, PvUAM1, was decreased via RNAi to investigate its role in cell wall recalcitrance in the feedstock. PvUAM1 encodes a switchgrass homolog of UDP-arabinose mutase, which converts UDP-Arap to UDP-Araf. Southern blot analysis revealed that each transgenic line contained between one and at least seven T-DNA insertions, resulting, in some cases, in a 95% reduction of the native PvUAM1 transcript in stem internodes. Transgenic plants had increased pigmentation in vascular tissues at nodes, but were otherwise similar in morphology to the non-transgenic control. Cell wall-associated arabinose was decreased in leaves and stems by over 50%, but there was an increase in cellulose. In addition, there was a commensurate change in arabinose side chain extension. Cell wall lignin composition was altered, with a concurrent increase in lignin content and in transcript abundance of lignin biosynthetic genes in mature tillers. Enzymatic saccharification efficiency was unchanged in the transgenic plants relative to the control. Plants with attenuated PvUAM1 transcript had increased cellulose and lignin in cell walls.
A decrease in cell wall-associated arabinose was

  7. Downregulation of a UDP-Arabinomutase Gene in Switchgrass (Panicum virgatum L.) Results in Increased Cell Wall Lignin While Reducing Arabinose-Glycans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willis, Jonathan D.; Smith, James A.; Mazarei, Mitra

    Switchgrass (Panicum virgatum L.) is a C4 perennial prairie grass and a dedicated feedstock for lignocellulosic biofuels. Saccharification and biofuel yields are inhibited by the plant cell wall's natural recalcitrance against enzymatic degradation. Plant hemicellulose polysaccharides such as arabinoxylans structurally support and cross-link other cell wall polymers. Grasses predominantly have Type II cell walls, which are abundant in arabinoxylan; arabinoxylans comprise nearly 25% of aboveground biomass. A primary component of arabinoxylan synthesis is uridine diphosphate (UDP) linked to arabinofuranose (Araf). A family of UDP-arabinopyranose mutase (UAM)/reversible glycosylated polypeptides catalyzes the interconversion between UDP-arabinopyranose (UDP-Arap) and UDP-Araf. The expression of a switchgrass arabinoxylan biosynthesis pathway gene, PvUAM1, was decreased via RNAi to investigate its role in cell wall recalcitrance in the feedstock. PvUAM1 encodes a switchgrass homolog of UDP-arabinose mutase, which converts UDP-Arap to UDP-Araf. Southern blot analysis revealed that each transgenic line contained between one and at least seven T-DNA insertions, resulting, in some cases, in a 95% reduction of the native PvUAM1 transcript in stem internodes. Transgenic plants had increased pigmentation in vascular tissues at nodes, but were otherwise similar in morphology to the non-transgenic control. Cell wall-associated arabinose was decreased in leaves and stems by over 50%, but there was an increase in cellulose. In addition, there was a commensurate change in arabinose side chain extension. Cell wall lignin composition was altered, with a concurrent increase in lignin content and in transcript abundance of lignin biosynthetic genes in mature tillers. Enzymatic saccharification efficiency was unchanged in the transgenic plants relative to the control. Plants with attenuated PvUAM1 transcript had increased cellulose and lignin in cell walls.
A decrease in cell wall-associated arabinose was

  8. A summer pharmacy camp for high school students as a pharmacy student recruitment tool.

    PubMed

    Myers, Tristan L; DeHart, Renee M; Dunn, Eddie B; Gardner, Stephanie F

    2012-05-10

    To determine the effectiveness of a summer pharmacy camp on participants' pursuit of enrollment in doctor of pharmacy degree programs. All participants (n = 135) in a pharmacy camp at the University of Arkansas for Medical Sciences (UAMS) College of Pharmacy from 2007-2010 were invited to complete an anonymous online survey instrument. Seventy-three students completed the survey instrument (54% response rate). Ninety-six percent of pharmacy camp participants said that they would recommend pharmacy camp to a friend, and 76% planned to apply or had applied to a doctor of pharmacy degree program. Seven of the camp participants had enrolled in the UAMS College of Pharmacy. The pharmacy summer camp at UAMS is effective in maintaining high school students' interest in the profession of pharmacy. Continued use of the pharmacy camp program as a recruitment tool is warranted; however, additional research on this topic is needed.

  9. Mediterranean Diet and Health-Related Quality of Life in Two Cohorts of Community-Dwelling Older Adults.

    PubMed

    Pérez-Tasigchana, Raúl F; León-Muñoz, Luz M; López-García, Esther; Banegas, José R; Rodríguez-Artalejo, Fernando; Guallar-Castillón, Pilar

    2016-01-01

    In older adults, the Mediterranean diet is associated with lower risk of chronic diseases, but its association with health-related quality of life (HRQL) is still uncertain. This study assessed the association between the Mediterranean diet and HRQL in 2 prospective cohorts of individuals aged ≥60 years in Spain. The UAM-cohort (n = 2376) was selected in 2000/2001 and followed up through 2003. At baseline, dietary data were collected with a food frequency questionnaire, which was used to develop an 8-item index of Mediterranean diet (UAM-MDP). The Seniors-ENRICA cohort (n = 1911) was recruited in 2008/2010 and followed up through 2012. At baseline, a diet history was used to obtain food consumption. Mediterranean diet adherence was measured with the PREDIMED score and the Trichopoulou's Mediterranean Diet Score (MSD). HRQL was assessed, at baseline and at the end of follow-up, with the physical and mental component summaries (PCS and MCS) of the SF-36 questionnaire in the UAM-cohort, and the SF-12v.2 questionnaire in the Seniors-ENRICA cohort. Analyses were conducted with linear regression, adjusted for the main confounders including baseline HRQL. In the UAM-cohort, no significant associations between the UAM-MDP and the PCS or the MCS were found. In the Seniors-ENRICA cohort, a higher PREDIMED score was associated with a slightly better PCS; when compared with the lowest tertile of PREDIMED score, the beta coefficient (95% confidence interval) for PCS was 0.55 (-0.48 to 1.59) in the second tertile, and 1.34 (0.21 to 2.47) in the highest tertile. The association between the PREDIMED score and the MCS was not statistically significant. The MSD did not show an association with either the PCS or the MCS. No clinically relevant association was found between the Mediterranean diet and HRQL in older adults in Spain.

  10. Allometric scaling of UK urban emissions: interpretation and implications for air quality management

    NASA Astrophysics Data System (ADS)

    MacKenzie, Rob; Barnes, Matt; Whyatt, Duncan; Hewitt, Nick

    2016-04-01

    Allometry uncovers structures and patterns by relating the characteristics of complex systems to a measure of scale. We present an allometric analysis of air quality for UK urban settlements, beginning with emissions and moving on to consider air concentrations. We consider both airshed-average 'urban background' concentrations (cf. those derived from satellites for NO2) and local pollution 'hotspots'. We show that there is a strong and robust scaling (with respect to population) of the non-point-source emissions of the greenhouse gases carbon dioxide and methane, as well as the toxic pollutants nitrogen dioxide, PM2.5, and 1,3-butadiene. The scaling of traffic-related emissions is not simply a reflection of road length, but rather results from the socio-economic patterning of road-use. The recent controversy regarding diesel vehicle emissions is germane to our study but does not affect our overall conclusions. We next develop a hypothesis for the population-scaling of airshed-average air concentrations, with which we demonstrate that, although average air quality is expected to be worse in large urban centres compared to small urban centres, the overall effect is an economy of scale (i.e., large cities reduce the overall burden of emissions compared to the same population spread over many smaller urban settlements). Our hypothesis explains satellite-derived observations of airshed-average urban NO2 concentrations. The theory derived also explains which properties of nature-based solutions (urban greening) can make a significant contribution at city scale, and points to a hitherto unforeseen opportunity to make large cities cleaner than smaller cities in absolute terms with respect to their airshed-average pollutant concentration.
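    An allometric scaling of this kind is typically expressed as E = a·P^b, with b < 1 indicating an economy of scale. The sketch below shows how the exponent can be recovered from a log-log least-squares fit; the data are synthetic stand-ins, not the UK emissions used in the study.

    ```python
    import numpy as np

    # Hedged sketch: recover the allometric exponent b from E = a * P**b by
    # ordinary least squares in log-log space. Population/emission values below
    # are synthetic; the study fits real UK settlement emissions data.
    population = np.array([1e4, 5e4, 2e5, 1e6, 8e6])
    a_true, b_true = 2.0, 0.95                 # b < 1: sub-linear (economy of scale)
    emissions = a_true * population ** b_true

    # log E = b * log P + log a, so a straight-line fit gives (b, log a)
    b_fit, log_a_fit = np.polyfit(np.log(population), np.log(emissions), 1)
    print(round(b_fit, 2))                     # 0.95: larger cities emit less per capita
    ```

    A fitted b significantly below 1 is exactly the "large cities reduce the overall burden" effect the abstract describes.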

  11. Process Control and Development for Ultrasonic Additive Manufacturing with Embedded Fibers

    NASA Astrophysics Data System (ADS)

    Hehr, Adam J.

    Ultrasonic additive manufacturing (UAM) is a recent additive manufacturing technology which combines ultrasonic metal welding, CNC machining, and mechanized foil layering to create large gapless near net-shape metallic parts. The process has been attracting much attention lately due to its low formation temperature, the capability to join dissimilar metals, and the ability to create complex design features not possible with traditional subtractive processes alone. These process attributes enable light-weighting of structures and components in an unprecedented way. However, UAM is currently limited to niche areas due to the lack of quality tracking and inadequate scientific understanding of the process. As a result, this thesis work is focused on improving both component quality tracking and process understanding through the use of average electrical power input to the welder. Additionally, the understanding and application space of embedding fibers into metals using UAM is investigated, with particular focus on NiTi shape memory alloy fibers.

  12. Effect of preexercise ingestion of modified cornstarch on substrate oxidation during endurance exercise.

    PubMed

    Johannsen, Neil M; Sharp, Rick L

    2007-06-01

    The purpose of this study was to investigate differences in substrate oxidation between dextrose (DEX) and unmodified (UAMS) and acid/alcohol-modified (MAMS) cornstarches. Seven endurance-trained men (VO2peak = 59.1 ± 5.4 mL·kg⁻¹·min⁻¹) participated in 2 h of exercise (66.4% ± 3.3% VO2peak) 30 min after ingesting 1 g/kg body weight of the experimental carbohydrate or placebo (PLA). Plasma glucose and insulin were elevated after DEX (P < 0.05) compared with UAMS, MAMS, and PLA. Although MAMS and DEX raised carbohydrate oxidation rate through 90 min of exercise, only MAMS persisted throughout 120 min (P < 0.05 compared with all trials). Exogenous-carbohydrate oxidation rate was higher in DEX than in MAMS and UAMS until 90 min of exercise. Acid/alcohol modification resulted in augmented carbohydrate oxidation with a small, sustained increase in exogenous-carbohydrate oxidation rate. MAMS appears to be metabolizable and available for oxidation during exercise.

  13. Aerosol characterization in El Paso-Juarez airshed using optical methods

    NASA Astrophysics Data System (ADS)

    Esparza, Angel Eduardo

    2011-12-01

    The assessment and characterization of atmospheric aerosols and their optical properties are of great significance for several applications such as air pollution studies, atmospheric visibility, remote sensing of the atmosphere, and impacts on climate change. Decades ago, interest in atmospheric aerosols centered primarily on visibility impairment; more recently, interest has intensified with efforts to quantify the optical properties of aerosols, especially because of the uncertainties surrounding the role of aerosols in climate change. The main objective of the optical characterization of aerosols is to understand their properties, which are determined by the aerosols' chemical composition, size, shape, and concentration. The general purpose of this research was to contribute to a better characterization of the aerosols present in the Paso del Norte Basin. This study permits an alternative approach to understanding air pollution in this zone by analyzing the predominant components and their contributions to the local environment. This dissertation work had three primary objectives, all intertwined by the general purpose of aerosol characterization in the Paso del Norte region. The first objective was to retrieve the columnar aerosol size distribution for two different cases (clean and polluted scenarios) at each season (spring, summer, fall and winter) of the year 2009. In this project, instruments placed in buildings within the University of Texas at El Paso (UTEP), as well as a monitoring site (CAMS 12) from the Texas Commission on Environmental Quality (TCEQ), provided the measurements that delimited the aerosol size distribution calculated by our model, the Environmental Physics Inverse Reconstruction (EPIRM) model.
The purpose of this objective was to provide an alternate method of quantifying and size-allocating aerosols in situ, by using the optical properties of the aerosols and inversely reconstruct and

  14. Seasonal impact of regional outdoor biomass burning on air pollution in three Indian cities: Delhi, Bengaluru, and Pune

    NASA Astrophysics Data System (ADS)

    Liu, Tianjia; Marlier, Miriam E.; DeFries, Ruth S.; Westervelt, Daniel M.; Xia, Karen R.; Fiore, Arlene M.; Mickley, Loretta J.; Cusworth, Daniel H.; Milly, George

    2018-01-01

    Air pollution in many of India's cities exceeds national and international standards, and effective pollution control strategies require knowledge of the sources that contribute to air pollution and their spatiotemporal variability. In this study, we examine the influence of a single pollution source, outdoor biomass burning, on particulate matter (PM) concentrations, surface visibility, and aerosol optical depth (AOD) from 2007 to 2013 in three of the most populous Indian cities. We define the upwind regions, or "airsheds," for the cities by using atmospheric back trajectories from the HYSPLIT model. Using satellite fire radiative power (FRP) observations as a measure of fire activity, we target pre-monsoon and post-monsoon fires upwind of the Delhi National Capital Region and pre-monsoon fires surrounding Bengaluru and Pune. We find varying contributions of outdoor fires to different air quality metrics. For the post-monsoon burning season, we find that a subset of local meteorological variables (air temperature, humidity, sea level pressure, wind speed and direction) and FRP as the only pollution source explained 39% of variance in Delhi station PM10 anomalies, 77% in visibility, and 30% in satellite AOD; additionally, per unit increase in FRP within the daily airshed (1000 MW), PM10 increases by 16.34 μg m-3, visibility decreases by 0.155 km, and satellite AOD increases by 0.07. In contrast, for the pre-monsoon burning season, we find less significant contributions from FRP to air quality in all three cities. Further, we attribute 99% of FRP from post-monsoon outdoor fires within Delhi's average airshed to agricultural burning. Our work suggests that although outdoor fires are not the dominant air pollution source in India throughout the year, post-monsoon fires contribute substantially to regional air pollution and high levels of population exposure around Delhi. During 3-day blocks of extreme PM2.5 in the 2013 post-monsoon burning season, which coincided
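    The per-unit FRP sensitivities quoted in the abstract above imply a simple linear back-of-envelope calculation. The sketch below applies those coefficients directly; the linear extrapolation and the 2500 MW example are illustrative only (note that the companion NTRS record of this paper quotes a slightly different visibility coefficient).

    ```python
    # Linear response of Delhi post-monsoon air quality metrics to airshed fire
    # radiative power (FRP), using the coefficients reported in the abstract:
    # per 1000 MW of FRP: PM10 +16.34 ug/m3, visibility -0.155 km, AOD +0.07.

    def delhi_post_monsoon_response(frp_mw):
        scale = frp_mw / 1000.0                # coefficients are per 1000 MW of FRP
        return {
            "pm10_ug_m3": 16.34 * scale,       # added PM10 concentration
            "visibility_km": -0.155 * scale,   # change in surface visibility
            "aod": 0.07 * scale,               # change in satellite AOD
        }

    # e.g. 2500 MW of airshed fire activity -> PM10 roughly +40.85 ug/m3
    print(delhi_post_monsoon_response(2500.0))
    ```

    Because the regression explained only part of the variance (39% for PM10), such an extrapolation is a rough screening estimate, not a forecast.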

  15. Seasonal Impact of Regional Outdoor Biomass Burning on Air Pollution in Three Indian Cities: Delhi, Bengaluru, and Pune

    NASA Technical Reports Server (NTRS)

    Liu, Tianjia; Marlier, Miriam E.; DeFries, Ruth S.; Westervelt, Daniel M.; Xia, Karen R.; Fiore, Arlene M.; Mickley, Loretta J.; Cusworth, Daniel H.; Milly, George

    2017-01-01

    Air pollution in many of India's cities exceeds national and international standards, and effective pollution control strategies require knowledge of the sources that contribute to air pollution and their spatiotemporal variability. In this study, we examine the influence of a single pollution source, outdoor biomass burning, on particulate matter (PM) concentrations, surface visibility, and aerosol optical depth (AOD) from 2007 to 2013 in three of the most populous Indian cities. We define the upwind regions, or "airsheds," for the cities by using atmospheric back trajectories from the HYSPLIT model. Using satellite fire radiative power (FRP) observations as a measure of fire activity, we target pre-monsoon and post-monsoon fires upwind of the Delhi National Capital Region and pre-monsoon fires surrounding Bengaluru and Pune. We find varying contributions of outdoor fires to different air quality metrics. For the post-monsoon burning season, we find that a subset of local meteorological variables (air temperature, humidity, sea level pressure, wind speed and direction) and FRP as the only pollution source explained 39% of variance in Delhi station PM(sub 10) anomalies, 77% in visibility, and 30% in satellite AOD; additionally, per unit increase in FRP within the daily airshed (1000 MW), PM(sub 10) increases by 16.34 micrograms per cubic meter, visibility decreases by 0.097 km, and satellite AOD increases by 0.07. In contrast, for the pre-monsoon burning season, we find less significant contributions from FRP to air quality in all three cities. Further, we attribute 99% of FRP from post-monsoon outdoor fires within Delhi's average airshed to agricultural burning. Our work suggests that although outdoor fires are not the dominant air pollution source in India throughout the year, post-monsoon fires contribute substantially to regional air pollution and high levels of population exposure around Delhi. During 3-day blocks of extreme PM(sub 2.5) in the 2013 post

  16. Ozone modeling in an ethanol, gasoline and diesel fuels environment: The metropolitan area of Sao Paulo, Brazil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade, M.F.; Miguel, A.H.; Seinfeld, J.H.

    1995-12-01

    Over the past several years, in the Metropolitan Area of Sao Paulo (MASP), ambient ozone concentrations have reached over five times the concentration considered protective of public health by the World Health Organization, with routine occurrence of levels that exceed Brazil's 1-hour National Ambient Air Quality Standard (160 μg/m³). For the past 19 years, ethanol has been used both as a fuel (E95) and as a gasoline additive (E20G80) in light-duty vehicles. This talk discusses the results of applying the CIT photochemical airshed model to the February 16-17, 1989 meteorological experiment carried out in the MASP. Simulated hourly ozone concentrations for the 1989 vehicular fleet covered three cases: (1) the actual fleet (E95, E20G80, and diesels), (2) a light-duty fleet fueled with E95 only, and (3) a fleet fueled entirely with gasoline.

  17. Rationalization of anisotropic mechanical properties of Al-6061 fabricated using ultrasonic additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridharan, Niyanth; Gussev, Maxim; Seibert, Rachel

    Ultrasonic additive manufacturing (UAM) is a solid-state process that uses ultrasonic vibrations at 20 kHz, along with mechanized tape layering and intermittent milling operations, to build fully functional three-dimensional parts. In the literature, UAM builds made with low power (1.5 kW) exhibited poor tensile properties in the Z-direction, i.e., normal to the interfaces. This reduction in properties is often attributed to a lack of bonding at the faying interfaces. The generality of this conclusion is evaluated further in 6061 aluminum alloy builds made with very-high-power UAM (9 kW). Tensile deformation behavior along the X and Z directions was evaluated with small-scale in-situ mechanical testing equipped with high-resolution digital image correlation, as well as multi-scale characterization of the builds. Interestingly, even with complete metallurgical bonding across the interfaces and without any discernible voids, poor Z-direction properties were observed. This reduction is correlated to coalescence of pre-existing shear bands at the interfaces into micro-voids, leading to strain localization and spontaneous failure on tensile loading.

  18. Rationalization of anisotropic mechanical properties of Al-6061 fabricated using ultrasonic additive manufacturing

    DOE PAGES

    Sridharan, Niyanth; Gussev, Maxim; Seibert, Rachel; ...

    2016-09-01

    Ultrasonic additive manufacturing (UAM) is a solid-state process that uses ultrasonic vibrations at 20 kHz, along with mechanized tape layering and intermittent milling operations, to build fully functional three-dimensional parts. In the literature, UAM builds made with low power (1.5 kW) exhibited poor tensile properties in the Z-direction, i.e., normal to the interfaces. This reduction in properties is often attributed to a lack of bonding at the faying interfaces. The generality of this conclusion is evaluated further in 6061 aluminum alloy builds made with very-high-power UAM (9 kW). Tensile deformation behavior along the X and Z directions was evaluated with small-scale in-situ mechanical testing equipped with high-resolution digital image correlation, as well as multi-scale characterization of the builds. Interestingly, even with complete metallurgical bonding across the interfaces and without any discernible voids, poor Z-direction properties were observed. This reduction is correlated to coalescence of pre-existing shear bands at the interfaces into micro-voids, leading to strain localization and spontaneous failure on tensile loading.

  19. High-Fidelity Computational Aerodynamics of Multi-Rotor Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Ventura Diaz, Patricia; Yoon, Seokkwan

    2018-01-01

    High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for several multi-rotor Unmanned Aerial Vehicles (UAVs). Three vehicles have been studied: the classic quadcopter DJI Phantom 3, an unconventional quadcopter specialized for forward flight, the SUI Endurance, and an innovative concept for Urban Air Mobility (UAM), the Elytron 4S UAV. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, and a hybrid turbulence model. The DJI Phantom 3 is simulated with different rotors and with both a simplified airframe and the real airframe including landing gear and a camera. The effects of weather are studied for the DJI Phantom 3 quadcopter in hover. The original SUI Endurance design is compared in forward flight to a new configuration conceived by the authors, the hybrid configuration, which gives a large improvement in forward thrust. The Elytron 4S UAV is simulated in helicopter mode and in airplane mode. Understanding the complex flows in multi-rotor vehicles will help design quieter, safer, and more efficient future drones and UAM vehicles.

  20. Adaptation of the South-West Wing of Collegium Chemicum of Adam Mickiewicz University in Poznań for Storage Facilities/ Adaptacja Południowo-Zachodniego Skrzydła Budynku Collegium Chemicum Uam W Poznaniu Na Cele Magazynowe

    NASA Astrophysics Data System (ADS)

    Ścigałło, Jacek

    2015-06-01

    The article refers to the problems of adapting the Collegium Chemicum facilities belonging to Adam Mickiewicz University in Poznań to its storage needs. The subject building is situated in Grunwaldzka Street in Poznań. In the introduction, the building and its structural solutions are described. The results of the materials research and the measurements of the existing reinforcement are presented. Diagnostic analyses of the structure were performed based on these measurements and tests, allowing the permissible loads to be determined. The results of the analysis of the current state proved unsatisfactory, not only in terms of the planned storage load but also in terms of the current load state, as the structural analysis showed. [Polish abstract, translated:] The paper presents problems related to the adaptation of the Collegium Chemicum teaching building at Grunwaldzka Street in Poznań for the storage needs of the UAM Main Library. The building is first described and its structural solutions are characterized. The results of material tests and of the as-built survey of the existing reinforcement are presented. On this basis, a diagnostic analysis of the structure was carried out, which allowed the permissible floor loads to be determined. The results of the analysis of the existing state proved highly unsatisfactory, and not only with respect to the planned, considerable storage loads: the analysis showed that the structure is already significantly overloaded under its current loading.

  1. U.S.-Canada cooperation: the U.S.-Canada air quality agreement.

    PubMed

    McLean, Brian; Barton, Jane

    2008-01-01

    The impetus for the Canada-U.S. Air Quality Agreement was transboundary acid rain in eastern North America. This problem drove the parties to develop a bilateral agreement that not only addressed this issue, but also set up a broad and flexible framework to address other air quality problems. In 2000, the Ozone Annex to reduce smog and its precursor pollutants was negotiated. A transboundary particulate matter (PM) science assessment in 2004 led to the commencement of negotiation of a PM annex in late 2007. Over the course of 15 yr, Canada and the United States also developed innovative cooperative arrangements. Two transboundary airshed dialogues became important sources of practical on-the-ground cooperation in the Georgia Basin-Puget Sound and the Great Lakes Basin. In addition to providing the basis for ongoing international dialogue, these transboundary airshed projects resulted in changes to administrative practices as the parties exchange information and learn from each other in ways that benefit the airshed community. The nature of the Air Quality Agreement also enabled both Canada and the United States to address concerns each has had about specific pollutant sources in ways that avoided confrontation and resulted in air quality improvements for people living in the airsheds. Case studies of three of the "informal consultations" that have occurred under the agreement are described: discussions around a power plant in Michigan, a power plant in Saskatchewan, and a steel mill in Ontario. More than an agreement, this relationship has built a capacity to deal with common problems. Fostering such a relationship, with its implicit transfer of knowledge and experience, has opened doors for discussions on a new Clean Air framework in Canada and joint analyses of cross-border sulfur dioxide (SO2) and nitrogen oxides (NOx) emissions caps and trading. U.S. experience with cap and trade is highlighted for background and context.

  2. Canada-U.S. Border Air Quality Strategy Border Reports

    EPA Pesticide Factsheets

    View reports on two airshed pilot studies that explored the human health effects of air pollution in the United States and Canada, plus a report on the feasibility of transboundary emissions cap and trade program.

  3. Updating the biomedical engineering curriculum: Inclusion of Health Technology Assessment subjects.

    PubMed

    Martinez Licona, Fabiola; Urbina, Edmundo Gerardo; Azpiroz-Leehan, Joaquin

    2010-01-01

    This paper describes the work being carried out at Metropolitan Autonomous University (UAM) in Mexico City with regard to the continuous evaluation and updating of the Biomedical Engineering (BME) curriculum. In particular the courses regarded as part of the BME basic branch are reduced and new sets of elective subjects are proposed in order to bring closer the research work at UAM with the subjects in the BME curriculum. Special emphasis is placed on subjects dealing with Health Technology Assessment (HTA) and Health economics, as this branch of the BME discipline is quite promising in Mexico, but there are very few professionals in the field with adequate qualifications.

  4. Observations and global numerical modelling of the St. Patrick's Day 2015 geomagnetic storm event

    NASA Astrophysics Data System (ADS)

    Foerster, M.; Prokhorov, B. E.; Doornbos, E.; Astafieva, E.; Zakharenkova, I.

    2017-12-01

    The most severe geomagnetic storm of solar cycle 24 began with a sudden storm commencement (SSC) at 04:45 UT on St. Patrick's Day 2015. It appeared as a two-stage geomagnetic storm with a minimum SYM-H value of -233 nT. In response to the storm commencement, during the first activation a short-term positive effect in the ionospheric vertical total electron content (VTEC) occurred at low and mid-latitudes on the dayside. The second phase, commencing around 12:30 UT, lasted longer and caused significant and complex storm-time changes around the globe, with hemispherically different ionospheric storm reactions in different longitudinal ranges. Swarm-C observations of the neutral mass density variation along the orbital path, Langmuir probe plasma and magnetometer measurements from all three Swarm satellites, and global TEC records are used for physical interpretation and modelling of the positive/negative storm scenario. These observations pose a challenge for global numerical modelling of thermosphere-ionosphere storm processes: since the storm occurred around the spring equinox, impact factors other than seasonal dependence must exist for the hemispheric asymmetries to occur. Numerical simulation trials using the Potsdam version of the Upper Atmosphere Model (UAM-P) are presented to explain these peculiar M-I-T storm processes.

  5. Scanning Electron Microanalysis and Analytical Challenges of Mapping Elements in Urban Atmospheric Particles

    EPA Science Inventory

    Elemental mapping with energy-dispersive X-ray spectroscopy (EDX) associated with scanning electron microscopy is highly useful for studying internally mixed atmospheric particles. Presented is a study of individual particles from urban airsheds and the analytical challenges in q...

  6. Chemical Composition and Source Apportionment of Size Fractionated Particulate Matter in Cleveland, Ohio, USA

    EPA Science Inventory

    The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM 2.5 ) and are associated with increased adverse health outcomes in the exposed population. Specific PM sources responsible...

  7. Overview of the Mathematical and Empirical Receptor Models Workshop (Quail Roost II)

    NASA Astrophysics Data System (ADS)

    Stevens, Robert K.; Pace, Thompson G.

    On 14-17 March 1982, the U.S. Environmental Protection Agency sponsored the Mathematical and Empirical Receptor Models Workshop (Quail Roost II) at the Quail Roost Conference Center, Rougemont, NC. Thirty-five scientists were invited to participate. The objective of the workshop was to document and compare results of source apportionment analyses of simulated and real aerosol data sets. The simulated data set was developed by scientists from the National Bureau of Standards. It consisted of elemental mass data generated using a dispersion model that simulated transport of aerosols from a variety of sources to a receptor site. The real data set contained the mass, elemental, and ionic species concentrations of samples obtained in 18 consecutive 12-h sampling periods in Houston, TX. Some participants performed additional analyses of the Houston filters by X-ray powder diffraction, scanning electron microscopy, or light microscopy. Ten groups analyzed these data sets using a variety of modeling procedures. The results of the modeling exercises were evaluated and structured in a manner that permitted model intercomparisons. The major conclusions and recommendations derived from the intercomparisons were: (1) using aerosol elemental composition data, receptor models can resolve major emission sources, but additional analyses (including light microscopy and X-ray diffraction) significantly increase the number of sources that can be resolved; (2) simulated data sets that contain up to 6 dissimilar emission sources need to be generated so that different receptor models can be adequately compared; (3) source apportionment methods need to be modified to incorporate a means of apportioning aerosol species such as sulfate and nitrate, formed from SO2 and NO respectively, because current models tend to resolve particles into chemical species rather than to deduce their sources; and (4) a source signature library may need to be compiled for each airshed in order to

  8. ASSESSING THE ROLE OF PARTICULATE MATTER SIZE AND COMPOSITION ON GENEEXPRESSION IN PULMONARY CELLS

    EPA Science Inventory

    Identifying the mechanisms by which air pollution causes human health effects is a daunting task. Airsheds around the world are composed of pollution mixtures made up of hundreds of chemical and biological components with an extensive array of physico-chemical properties. Curre...

  9. High-Fidelity Computational Aerodynamics of the Elytron 4S UAV

    NASA Technical Reports Server (NTRS)

    Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.

    2018-01-01

    High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept: a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in both airplane mode and helicopter mode.

  10. Modeling the convective transport of pollutants from eastern Colorado, USA into Rocky Mountain National Park

    NASA Astrophysics Data System (ADS)

    Pina, A.; Schumacher, R. S.; Denning, S.

    2015-12-01

    Rocky Mountain National Park (RMNP) is a Class I Airshed designated under the Clean Air Act. Atmospheric nitrogen (N) deposition in the Park has been a known problem since weekly measurements of wet deposition of inorganic N began in the 1980s by the National Atmospheric Deposition Program (NADP). The addition of N from urban and agricultural emissions along the Colorado Front Range to montane ecosystems degrades air quality/visibility, water quality, and soil pH levels. Based on NADP data during summers 1994-2014, wet N deposition at Beaver Meadows in RMNP exhibited a bimodal gamma distribution. In this study, we identified meteorological transport mechanisms for 3 high wet N deposition events (all events were within the secondary peak of the gamma distribution) using the North American Regional Reanalysis (NARR) and the Weather Research and Forecasting (WRF) model. The NARR was used to identify synoptic-scale influences on the transport; the WRF model was used to analyze the convective transport of pollutants from a concentrated animal feeding operation near Greeley, Colorado, USA. The WRF simulation included a passive tracer from the feeding operation and a convection-permitting horizontal grid spacing of 4/3 km. The three cases suggest (a) synoptic-scale moisture and flow patterns are important for priming summer transport events and (b) convection plays a vital role in the transport of Front Range pollutants into RMNP.

  11. Spatial and Temporal Patterns of Mercury Accumulation in Lacustrine Sediments Across the Laurentian Great Lakes Region

    EPA Science Inventory

    Data from 103 sediment cores from the Great Lakes and inland lakes of the Great Lakes airshed were compiled to examine and provide a synthesis of patterns of historical and recent changes in mercury (Hg) deposition. Limited data from the lower Laurentian Great Lakes shows a lega...

  12. Atmospheric reactivity studies of aliphatic amines

    USDA-ARS?s Scientific Manuscript database

    Ambient studies of particulate matter have shown that alkyl amines are often present in particles in areas impacted by agricultural emissions. These locations include California’s Central Valley and Inland Empire and Utah’s Cache Valley. These compounds are not typically observed in airsheds that so...

  13. Monitoring IACP samples and construction of a centralized data base

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, D.B.; Ray, D.B.; Simonson, J.

    1991-01-01

    The Integrated Air Cancer Project (IACP) is a multiyear US EPA research program established to develop and evaluate methods required to identify the principal airborne carcinogens, determine their emission sources, and improve estimates of comparative human cancer risk. The first major field study, designed to examine a residential wood combustion airshed, was conducted in Boise, Idaho during the 1986-1987 winter heating season. The second major field study, conducted in Roanoke, Virginia during 1988-1989, examined residential oil heating and wood combustion. Motor vehicle emissions were considered a major combustion-product contributor in both airsheds. This paper describes two critical components of the project. The first is the custody and tracking of samples before analysis. The second is the management of the sample field data (e.g., sample site, time, date, flow rate) as well as the analytical data (e.g., mutagenicity, particle concentrations) for the environmental samples.

  14. Developing a Model to Estimate Freshwater Gross Primary Production Using MODIS Surface Temperature Observations

    NASA Astrophysics Data System (ADS)

    Saberi, S. J.; Weathers, K. C.; Norouzi, H.; Prakash, S.; Solomon, C.; Boucher, J. M.

    2016-12-01

    Lakes contribute to local and regional climate conditions, cycle nutrients, and are viable indicators of climate change due to their sensitivity to disturbances in their water and airsheds. Utilizing spaceborne remote sensing (RS) techniques has considerable potential for studying lake dynamics because it allows for coherent and consistent spatial and temporal observations as well as estimates of lake functions without in situ measurements. However, in order for RS products to be useful, algorithms that relate in situ measurements to RS data must be developed. Estimates of lake metabolic rates are of particular scientific interest since they are indicative of lakes' roles in carbon cycling and ecological function. Currently, there are few existing algorithms relating remote sensing products to in-lake estimates of metabolic rates, and more in-depth studies are still required. Here we use satellite surface temperature observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) product (MYD11A2) and published in-lake gross primary production (GPP) estimates for eleven globally distributed lakes during a one-year period to produce a univariate quadratic model. The general model was validated using other lakes during an equivalent one-year time period (R2=0.76). The statistical analyses reveal significant positive relationships between MODIS temperature data and the previously modeled in-lake GPP. Lake-specific models for Lake Mendota (USA), Rotorua (New Zealand), and Taihu (China) showed stronger relationships than the general combined model, pointing to local influences such as watershed characteristics on in-lake GPP in some cases. These validation data suggest that the developed algorithm has the potential to predict lake GPP on a global scale.
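    The univariate quadratic relationship described above can be sketched as a simple polynomial fit; the temperature-GPP pairs below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical paired observations for a set of lakes: MODIS-derived lake
# surface temperature (deg C) and modeled in-lake GPP (arbitrary units).
# These values are invented for illustration only.
temps = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0])
gpp = np.array([5.0, 12.0, 22.0, 35.0, 52.0, 71.0, 95.0])

# Univariate quadratic model: GPP = a*T^2 + b*T + c
# (np.polyfit returns coefficients from highest degree down)
a, b, c = np.polyfit(temps, gpp, deg=2)

def predict_gpp(temp_c):
    """Predict GPP from a MODIS surface temperature observation."""
    return a * temp_c**2 + b * temp_c + c

# Coefficient of determination against the fitting data (the study reports
# R^2 = 0.76 on independent validation lakes).
residuals = gpp - predict_gpp(temps)
r_squared = 1 - (residuals**2).sum() / ((gpp - gpp.mean())**2).sum()
```

    In practice the validation step would apply `predict_gpp` to MYD11A2 temperatures for lakes held out of the fit, as the study does with its independent one-year period.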

  15. 40 CFR 52.2308 - Area-wide nitrogen oxides (NOX) exemptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... attain the National Ambient Air Quality Standards (NAAQS) for ozone by the CAA mandated deadline without... consists of El Paso county, and shares a common airshed with Juarez, Mexico. The exemption request was... required under section 182(f), but for emissions emanating from Mexico. On November 21, 1994, the EPA...

  16. 40 CFR 52.2308 - Area-wide nitrogen oxides (NOX) exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... attain the National Ambient Air Quality Standards (NAAQS) for ozone by the CAA mandated deadline without... consists of El Paso county, and shares a common airshed with Juarez, Mexico. The exemption request was... required under section 182(f), but for emissions emanating from Mexico. On November 21, 1994, the EPA...

  17. Atmospheric organochlorine pesticides in the western Canadian Arctic: Evidence of transpacific transport

    NASA Astrophysics Data System (ADS)

    Bailey, R.; Barrie, L. A.; Halsall, C. J.; Fellin, P.; Muir, D. C. G.

    2000-05-01

    Concentrations of hexachlorocyclohexanes (HCHs), chlordane, and dichlorodiphenyltrichloroethane (DDT) were measured in ambient air samples on a weekly basis between December 1992 and January 1995 at Tagish Yukon, Canada. In winter, unusually high air concentrations of HCHs, DDT, and chlordanes at Tagish were predominantly influenced by transpacific long-range atmospheric transport from eastern Asia that generally occurred within 5 days. HCH and heptachlor epoxide concentrations were correlated with the time that air spent over eastern Asia prior to arrival at Tagish. Chlordane and DDT, which also increase with transpacific transport, do not show a correlation with the time the upwind airshed included Asia as the composition of these pesticides in the atmosphere is affected by differences in usage patterns, application methods, variable composition of parent pesticides and metabolites in the soil, and rates of volatilization. Air masses originating from North America had the highest concentrations of HCHs and chlordanes when the 5-day upwind airshed included the western United States. Concentrations of HCHs may also be influenced by lindane usage in Canada.

  18. ALCHEMIST (Anesthesia Log, Charge Entry, Medical Information, and Statistics)

    PubMed Central

    Covey, M. Carl

    1979-01-01

    This paper presents an automated system for the handling of charges and information processing within the Anesthesiology department of the University of Arkansas for the Medical Sciences (UAMS). The purpose of the system is to take the place of cumbersome, manual billing procedures and in the process of automated charge generation, to compile a data base of patient data for later use. ALCHEMIST has demonstrated its value by increasing both the speed and the accuracy of generation of patient charges as well as facilitating the compilation of valuable, informative reports containing statistical summaries of all aspects of the UAMS operating wing case load. ALCHEMIST allows for the entry of fifty different sets of information (multiple items in some sets) for a total of 107 separate data elements from the original anesthetic record. All this data is entered as part of the charge entry procedure.

  19. Telemedicine Interest for Routine Follow-Up Care Among Neurology Patients in Arkansas.

    PubMed

    Bashiri, Maryam; Greenfield, L John; Oliveto, Alison

    2016-06-01

    Teleneurology in Arkansas has been used primarily for management of acute stroke with a state-funded hub-and-spoke model allowing physicians at rural hospitals to access vascular neurologists in time to facilitate tissue plasminogen activator administration. Routine neurologic care has been provided only in small pilot studies. We wished to determine patient interest in participating in teleneurology for routine follow-up visits as well as demographic and medical factors associated with interest. New and established patients of the Neurology Outpatient Clinic at the University of Arkansas for Medical Sciences (UAMS) were surveyed between March 2011 and December 2012 to assess their interest in participating in teleneurology as well as potential factors associated with their interest. Of 1,441 respondents, 52.4% were interested in telemedicine. Of those interested versus uninterested in telemedicine, respectively, 68.9% versus 36.32% traveled more than 1 h to the clinic, 64.7% versus 35.3% had difficulty secondary to neurological conditions, 22.6% versus 6.8% had missed medical appointments due to travel problems, and 43.1% versus 9.4% had travel-imposed financial hardship. Telemedicine interest for routine follow-up visits was strong among patients at the UAMS Neurology Outpatient Clinic. Factors positively associated with interest included long travel distances, travel expenses, and transportation difficulties. These results suggest that implementing a telemedicine program for follow-up visits would be acceptable to neurology patients for routine ongoing care.

  20. It's Hard to Get from Here to There: Early Intervention for Rural Young Children in Arkansas

    ERIC Educational Resources Information Center

    Marsh, Carolyn; Casey, Patrick H.

    2006-01-01

    The Kids First program at the University of Arkansas Medical School (UAMS) is an outgrowth of the Infant Health and Development Program, a randomized trial of an early intervention approach for premature, low birth weight children, which showed that intensive intervention had significant initial benefits in the cognitive development and behavior…

  1. Bioavailable transition metals in particulate matter mediate cardiopulmonary injury in healthy and compromised animal models.

    PubMed Central

    Costa, D L; Dreher, K L

    1997-01-01

    Many epidemiologic reports associate ambient levels of particulate matter (PM) with human mortality and morbidity, particularly in people with preexisting cardiopulmonary disease (e.g., chronic obstructive pulmonary disease, infection, asthma). Because much ambient PM is derived from combustion sources, we tested the hypothesis that the health effects of PM arise from anthropogenic PM that contains bioavailable transition metals. The PM samples studied derived from three emission sources (two oil and one coal fly ash) and four ambient airsheds (St. Louis, MO; Washington; Dusseldorf, Germany; and Ottawa, Canada). PM was administered to rats by intratracheal instillation in equimass or equimetal doses to address directly the influence of PM mass versus metal content on acute lung injury and inflammation. Our results indicated that the lung dose of bioavailable transition metal, not instilled PM mass, was the primary determinant of the acute inflammatory response for both the combustion source and ambient PM samples. Residual oil fly ash, a combustion PM rich in bioavailable metal, was evaluated in a rat model of cardiopulmonary disease (pulmonary vasculitis/hypertension) to ascertain whether the disease state augmented sensitivity to that PM. Significant mortality and enhanced airway responsiveness were observed. Analysis of the lavaged lung fluids suggested that the milieu of the inflamed lung amplified metal-mediated oxidant chemistry to jeopardize the compromised cardiopulmonary system. We propose that soluble metals from PM mediate the array of PM-associated injuries to the cardiopulmonary system of the healthy and at-risk compromised host. PMID:9400700

  2. Forecasting PM10 in metropolitan areas: Efficacy of neural networks.

    PubMed

    Fernando, H J S; Mammarella, M C; Grandoni, G; Fedele, P; Di Marco, R; Dimitrova, R; Hyde, P

    2012-04-01

    Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative; these relegate decision making to artificial intelligence based on neural networks, which are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A neural network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool, and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that neural networks are easier, quicker, and more economical to implement without compromising the accuracy of predictions. Neural networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations.
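    As a minimal illustration of the approach (not the network architecture used in the study), a one-hidden-layer feedforward network can be trained on historic records to forecast next-day PM10. The predictor variables and the synthetic training data below are assumptions chosen for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historic data": yesterday's PM10 (ug/m3), wind speed (m/s), and
# temperature (deg C) -> today's PM10. A real system would use monitoring
# station records; the generating rule here is invented.
X = rng.uniform([10.0, 0.0, 0.0], [150.0, 10.0, 40.0], size=(500, 3))
y = 0.7 * X[:, 0] - 4.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 2.0, 500)

# Standardize inputs and target for stable gradient descent.
x_mu, x_sd = X.mean(0), X.std(0)
y_mu, y_sd = y.mean(), y.std()
Xs, ys = (X - x_mu) / x_sd, (y - y_mu) / y_sd

# One hidden layer of tanh "nodes" that learn through training.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, 8); b2 = 0.0

lr = 0.05
for _ in range(2000):
    H = np.tanh(Xs @ W1 + b1)              # hidden activations
    pred = H @ W2 + b2                     # network output (standardized)
    err = pred - ys
    # Backpropagated mean-squared-error gradients, full-batch.
    gW2 = H.T @ err / len(ys); gb2 = err.mean()
    dH = np.outer(err, W2) * (1.0 - H**2)
    gW1 = Xs.T @ dH / len(ys); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def forecast_pm10(yesterday_pm10, wind_speed, temperature):
    """Forecast today's PM10 from yesterday's value and simple meteorology."""
    x = (np.array([yesterday_pm10, wind_speed, temperature]) - x_mu) / x_sd
    return float((np.tanh(x @ W1 + b1) @ W2 + b2) * y_sd + y_mu)
```

    The appeal noted in the abstract is visible here: once trained, a forecast is a handful of matrix products, versus hours of computation for a deterministic photochemical model run.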

  3. Actas de la 4th mesa redonda sobre investigacion en "Lenguas Extranjeras" en la Universidad Autonoma Metropolitana 1996. (Proceedings from the 4th Roundtable on Investigation in Foreign Languages in the Autonomous Metropolitan University 1996).

    ERIC Educational Resources Information Center

    Zoreda, Margaret Lee, Comp.; Diop, Mamoudou Si, Comp.; Vivaldo Lima, Javier, Comp.

    The articles included in this volume were selected as exemplary papers from the conference in Mexico. The goal of the forum was to provide an opportunity for researchers from the three branches of the Universidad Autonoma Metropolitana (UAM) to meet and discuss research projects currently in progress at the university. The works presented here…

  4. Changes in ground layer vegetation following timber harvests on the Missouri Ozark Forest Ecosystem Project

    Treesearch

    Jennifer K. Grabner; Eric K. Zenner

    2002-01-01

    The Missouri Ozark Forest Ecosystem Project (MOFEP) is a landscape-scale experiment to test for effects of the following three common forest management practices on upland forests: 1) even-aged management (EAM), 2) uneven-aged management (UAM), and 3) no-harvest management (NHM). The first round of harvesting treatments was applied on the nine MOFEP sites in 1996. One...

  5. Harvest-associated disturbance in upland Ozark forests of the Missouri Ozark Forest Ecosystem Project

    Treesearch

    Johann N. Bruhn; James J. Wetteroff; Jeanne D. Mihail; Randy G. Jensen; James B. Pickens

    2002-01-01

    The Missouri Ozark Forest Ecosystem Project (MOFEP) is a long-term, multidisciplinary, landscape-based research program studying effects of even-aged (EAM), uneven-aged (UAM), and no-harvest (NHM) management on forest communities. The first MOFEP timber harvests occurred from May through November 1996. Harvest-related disturbance occurred on 69 of 180 permanent 0.2-ha...

  6. Does air pollution pose a public health problem for New Zealand?

    PubMed

    Scoggins, Amanda

    2004-02-01

    Air pollution is increasingly documented as a threat to public health and a major focus of regulatory activity in developed and developing countries. Air quality indicators suggest New Zealand has clean air relative to many other countries. However, media releases such as 'Christchurch wood fires pump out deadly smog' and 'Vehicle pollution major killer' have sparked public health concern regarding exposure to ambient air pollution, especially in anticipation of increasing emissions and population growth. Recent evidence is presented on the effects of air quality on health, which has been aided by the application of urban airshed models and Geographic Information Systems (GIS). Future directions for research into the effects of air quality on health in New Zealand are discussed, including a national ambient air quality management project: HAPINZ--Health and Air Pollution in New Zealand.

  7. Determining air quality and greenhouse gas impacts of hydrogen infrastructure and fuel cell vehicles.

    PubMed

    Stephens-Romero, Shane; Carreras-Sospedra, Marc; Brouwer, Jacob; Dabdub, Donald; Samuelsen, Scott

    2009-12-01

    Adoption of hydrogen infrastructure and hydrogen fuel cell vehicles (HFCVs) to replace gasoline internal combustion engine (ICE) vehicles has been proposed as a strategy to reduce criteria pollutant and greenhouse gas (GHG) emissions from the transportation sector and transition to fuel independence. However, it is uncertain (1) to what degree the reduction in criteria pollutants will impact urban air quality, and (2) how the reductions in pollutant emissions and concomitant urban air quality impacts compare to ultralow emission gasoline-powered vehicles projected for a future year (e.g., 2060). To address these questions, the present study introduces a "spatially and temporally resolved energy and environment tool" (STREET) to characterize the pollutant and GHG emissions associated with a comprehensive hydrogen supply infrastructure and HFCVs at a high level of geographic and temporal resolution. To demonstrate the utility of STREET, two spatially and temporally resolved scenarios for hydrogen infrastructure are evaluated in a prototypical urban airshed (the South Coast Air Basin of California) using geographic information systems (GIS) data. The well-to-wheels (WTW) GHG emissions are quantified and the air quality is established using a detailed atmospheric chemistry and transport model, followed by a comparison to a future gasoline scenario comprised of advanced ICE vehicles. One hydrogen scenario includes more renewable primary energy sources for hydrogen generation and the other includes more fossil fuel sources. The two scenarios encompass a variety of hydrogen generation, distribution, and fueling strategies. GHG emissions reductions range from 61 to 68% for both hydrogen scenarios, in parallel with substantial improvements in urban air quality (e.g., reductions of 10 ppb in peak 8-h-averaged ozone and 6 µg/m³ in 24-h-averaged particulate matter concentrations, particularly in regions of the airshed where concentrations are highest for the gasoline scenario).

  8. Three-dimensional finite element analysis of vertical and angular misfit in implant-supported fixed prostheses.

    PubMed

    Assunção, Wirley Gonçalves; Gomes, Erica Alves; Rocha, Eduardo Passos; Delben, Juliana Aparecida

    2011-01-01

    Three-dimensional finite element analysis was used to evaluate the effect of vertical and angular misfit in three-piece implant-supported screw-retained fixed prostheses on the biomechanical response in the peri-implant bone, implants, and prosthetic components. Four three-dimensional models were fabricated to represent a right posterior mandibular section with one implant in the region of the second premolar (2PM) and another in the region of the second molar (2M). The implants were splinted by a three-piece implant-supported metal-ceramic prosthesis and differed according to the type of misfit, as represented by four different models: Control = prosthesis with complete fit to the implants; UAM (unilateral angular misfit) = prosthesis presenting unilateral angular misfit of 100 μm in the mesial region of the 2M; UVM (unilateral vertical misfit) = prosthesis presenting unilateral vertical misfit of 100 μm in the mesial region of the 2M; and TVM (total vertical misfit) = prosthesis presenting total vertical misfit of 100 μm in the platform of the framework in the 2M. A vertical load of 400 N was distributed and applied over 12 centric points in the software Ansys: a vertical load of 150 N was applied to each molar in the prosthesis and a vertical load of 100 N was applied at the 2PM. The stress values and distribution in peri-implant bone tissue were similar for all groups. The models with misfit exhibited different distribution patterns and increased stress magnitude in comparison to the control. The highest stress values in group UAM were observed in the implant body and retention screw. The groups UVM and TVM exhibited high stress values in the platform of the framework and the implant hexagon, respectively. The three types of misfit influenced the magnitude and distribution of stresses. The influence of misfit on peri-implant bone tissue was modest. Each type of misfit increased the stress values in different regions of the system.

  9. 2016 Targeted AirShed Grant Program - Closed Announcement FY 2016

    EPA Pesticide Factsheets

    Targeted Air Shed Grant Program proposal for FY 2016. The overall goal of the program is to reduce air pollution in the Nation’s areas with the highest levels of ozone and PM2.5 ambient air concentrations.

  10. The use of an atmospheric dispersion model to determine influence regions in the Prince George, B.C. airshed from the burning of open wood waste piles.

    PubMed

    Ainslie, B; Jackson, P L

    2009-06-01

    A means of determining air emission source regions adversely influencing the city of Prince George, British Columbia, Canada from potential burning of isolated piles of mountain pine beetle-killed lodgepole pine is presented. The analysis uses the CALPUFF atmospheric dispersion model to identify safe burning regions based on atmospheric stability and wind direction. Model results show that the location and extent of influence regions are sensitive to wind speed, wind direction, atmospheric stability, and a threshold used to quantify excessive concentrations. A concentration threshold based on the Canada-wide PM2.5 Standard is used to delineate the influence regions, while Environment Canada's (EC) daily ventilation index (VI) is used to quantify local atmospheric stability. Results from the analysis, to be used by air quality meteorologists in assessing daily requests for burning permits, are presented as a series of maps delineating acceptable burning locations for sources placed at various distances from the city center and under different ventilation conditions. The results show that no burning should be allowed within 10 km of the city center; under poor ventilation conditions, no burning should be allowed within 20 km of the city center; under good ventilation conditions, burning can be allowed within 10-15 km of the city center; under good to fair ventilation conditions, burning can be allowed beyond 15 km of the city center; and if the wind direction can be reliably forecast, burning can be allowed between 5 and 10 km downwind of the city center under good ventilation conditions.
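    The permitting guidance summarized above reduces to a simple decision rule. The function below is an illustrative sketch of that rule only (the category names and the downwind flag are assumptions), not an operational permitting tool:

```python
def burning_allowed(distance_km, ventilation, downwind=False):
    """Decide whether an open burn at a given distance from the city center
    may be permitted, per the influence-region results summarized above.

    ventilation: 'poor', 'fair', or 'good' (daily ventilation index category)
    downwind:    True if the source is reliably forecast to be downwind
    """
    # Reported exception: with a reliable wind-direction forecast and good
    # ventilation, burning may be allowed 5-10 km downwind of the city.
    if ventilation == 'good' and downwind and 5 <= distance_km <= 10:
        return True
    if distance_km < 10:          # otherwise, never within 10 km
        return False
    if ventilation == 'poor':     # poor ventilation: nothing within 20 km
        return distance_km >= 20
    if ventilation == 'fair':     # good-to-fair: allowed beyond 15 km
        return distance_km >= 15
    if ventilation == 'good':     # good ventilation: allowed from 10 km out
        return True
    return False                  # unknown category: refuse by default
```

    A real implementation would replace the distance bands with the study's influence-region maps, which also depend on wind speed and the concentration threshold.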

  11. Five-Axis Ultrasonic Additive Manufacturing for Nuclear Component Manufacture

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Wenning, Justin; Terrani, Kurt; Babu, Sudarsanam Suresh; Norfolk, Mark

    2017-03-01

    Ultrasonic additive manufacturing (UAM) is a three-dimensional metal printing technology which uses high-frequency vibrations to scrub and weld together both similar and dissimilar metal foils. There is no melting in the process and no special atmosphere requirements are needed. Consequently, dissimilar metals can be joined with little to no intermetallic compound formation, and large components can be manufactured. These attributes have the potential to transform manufacturing of nuclear reactor core components such as control elements for the High Flux Isotope Reactor at Oak Ridge National Laboratory. These components are hybrid structures consisting of an outer cladding layer in contact with the coolant with neutron-absorbing materials inside, such as neutron poisons for reactor control purposes. UAM systems are built into a computer numerical control (CNC) framework to utilize intermittent subtractive processes. These subtractive processes are used to introduce internal features as the component is being built and for net shaping. The CNC framework is also used for controlling the motion of the welding operation. It is demonstrated here that curved components with embedded features can be produced using a five-axis code for the welder for the first time.

  12. Five-axis ultrasonic additive manufacturing for nuclear component manufacture

    DOE PAGES

    Hehr, Adam; Wenning, Justin; Terrani, Kurt A.; ...

    2016-01-01

    Ultrasonic additive manufacturing (UAM) is a three-dimensional metal printing technology which uses high-frequency vibrations to scrub and weld together both similar and dissimilar metal foils. There is no melting in the process and no special atmosphere requirements are needed. Consequently, dissimilar metals can be joined with little to no intermetallic compound formation, and large components can be manufactured. These attributes have the potential to transform manufacturing of nuclear reactor core components such as control elements for the High Flux Isotope Reactor at Oak Ridge National Laboratory. These components are hybrid structures consisting of an outer cladding layer in contact with the coolant with neutron-absorbing materials inside, such as neutron poisons for reactor control purposes. UAM systems are built into a computer numerical control (CNC) framework to utilize intermittent subtractive processes. These subtractive processes are used to introduce internal features as the component is being built and for net shaping. The CNC framework is also used for controlling the motion of the welding operation. Lastly, it is demonstrated here that curved components with embedded features can be produced using a five-axis code for the welder for the first time.

  13. Bropirimine inhibits osteoclast differentiation through production of interferon-β

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Hiroaki; Mochizuki, Ayako; Yoshimura, Kentaro

    Bropirimine is a synthetic agonist for toll-like receptor 7 (TLR7). In this study, we investigated the effects of bropirimine on differentiation and bone-resorbing activity of osteoclasts in vitro. Bropirimine inhibited osteoclast differentiation of mouse bone marrow-derived macrophages (BMMs) induced by receptor activator of nuclear factor κB ligand (RANKL) in a concentration-dependent manner. Furthermore, it suppressed the mRNA expression of nuclear factor of activated T-cells, cytoplasmic, calcineurin-dependent 1 (NFATc1), a master transcription factor for osteoclast differentiation, without affecting BMM viability. Bropirimine also inhibited osteoclast differentiation induced in co-cultures of mouse bone marrow cells (BMCs) and mouse osteoblastic UAMS-32 cells in the presence of activated vitamin D3. Bropirimine partially suppressed the expression of RANKL mRNA in UAMS-32 cells induced by activated vitamin D3. Finally, the anti-interferon-β (IFN-β) antibody restored RANKL-dependent differentiation of BMMs into osteoclasts suppressed by bropirimine. These results suggest that bropirimine inhibits differentiation of osteoclast precursor cells into osteoclasts via TLR7-mediated production of IFN-β.

  14. Rapid ultrasonic and microwave-assisted micellar extraction of zingerone, shogaol and gingerols from gingers using biosurfactants.

    PubMed

    Peng, Li-Qing; Cao, Jun; Du, Li-Jing; Zhang, Qi-Dong; Xu, Jing-Jing; Chen, Yu-Bo; Shi, Yu-Ting; Li, Rong-Rong

    2017-09-15

    Two extraction methods, ultrasonic-assisted micellar extraction (UAME) and microwave-assisted micellar extraction (MAME), coupled with ultra-high performance liquid chromatography with ultraviolet detection (UHPLC-UV), were developed and evaluated for the extraction and determination of zingerone, 6-gingerol, 8-gingerol, 6-shogaol and 10-gingerol in Rhizoma Zingiberis and Rhizoma Zingiberis Preparata. A biosurfactant, hyodeoxycholic acid sodium salt, was used in the micellar extraction. Several experimental parameters were studied separately by a univariate method. The results indicated that MAME was more efficient than UAME. The optimal conditions of MAME were as follows: 100 mM hyodeoxycholic acid sodium salt as surfactant, an irradiation time of 10 s, and an extraction temperature of 60 °C. The validation results indicated that the limits of detection were in the range of 3.80-8.11 ng/mL. The average recoveries were in the range of 87.32-103.12% for the two samples at two spiking levels. Compared with other reported methods, the proposed MAME-UHPLC-UV method was more effective, quicker (10 s) and more eco-friendly. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Canada-wide standards and innovative transboundary air quality initiatives.

    PubMed

    Barton, Jane

    2008-01-01

    Canada's approach to air quality management has brought with it opportunities to develop unique approaches to risk management. Even with Canada's relatively low levels of pollution, science has demonstrated clearly that air quality and ecosystem improvements are worthwhile. To achieve change and address air quality in Canada, Canadian governments work together since, under the constitution, they share responsibility for the environment. At the same time, because air pollution knows no boundaries, working with the governments of other nations is essential to get results. International cooperation at all levels provides opportunities with potential for real change, and cooperation within transboundary airsheds is proving a fruitful source of innovative opportunities to reduce cross-border barriers to air quality improvements. In relation to the NERAM Colloquium objective of establishing principles for air quality management based on the identification of international best practice in air quality policy development and implementation, Canada has developed, both at home and with the United States, interesting air management strategies and initiatives from which lessons may be taken that could be useful in other countries in similar situations. In particular, the Canada-wide strategies for smog and acid rain, developed by Canadian governments, improve and protect air quality at home, while Canada-U.S. transboundary airshed projects provide examples of international initiatives to improve air quality.

  16. Summary of Meteorological Observations, Surface (SMOS), Kingsville, Texas.

    DTIC Science & Technology

    1984-09-01

    of surface weather observations. The six parts are: Part A - Weather Conditions/Atmospheric Phenomena; Part B - Precipitation/Snowfall/Snow Depth. (The listing of the remaining parts and an accompanying National Weather Service surface-winds data table are illegible in the scanned original.)

  17. Identification of the Microtubule-Inhibitor Activated Bcl-xL Kinase: A Regulator of Breast Cancer Cell Chemosensitivity to Taxol

    DTIC Science & Technology

    2011-08-01

    not shown). Translational research study: we continued the clinical protocol entitled "Alterations in the Bcl-2 Family Proteins...Bcl-2 protein phosphorylation in hematopoietic cells may be more likely to have fever, neutropenia than others, for example. While effective...Alterations in the Bcl-2 Family Proteins in the Peripheral Blood Following Treatment with Taxanes in Patients with Breast Cancer." Accepted by the UAMS

  18. High Throughput Assay for Bacterial Adhesion on Acellular Dermal Matrices and Synthetic Surgical Materials

    PubMed Central

    Nyame, Theodore T.; Lemon, Katherine P.; Kolter, Roberto; Liao, Eric C.

    2013-01-01

    Background There has been increasing use of various synthetic and biologically derived materials in surgery. Biologic surgical materials are used in many plastic surgery procedures, ranging from breast reconstruction to hernia repairs. In particular, acellular dermal matrix (ADM) material has gained popularity in these applications. There is a paucity of data on how ADM compares to other surgical materials as a substrate for bacterial adhesion, the first step in biofilm formation, which occurs in prosthetic wound infections. We have designed a high throughput assay to evaluate Staphylococcus aureus adherence on various synthetic and biologically derived materials. Methods Clinical isolates of Staphylococcus aureus (strains SC-1 and UAMS-1) were cultured with different materials, and bacterial adherence was measured using a resazurin cell vitality reporter microtiter assay. Four materials that are commonly utilized in reconstructive procedures were evaluated: prolene mesh, vicryl mesh, and two different ADM preparations (AlloDerm®, FlexHD®). We were able to develop a high throughput and reliable assay for quantifying bacterial adhesion on synthetic and biologically derived materials. Results The resazurin vitality assay can be reliably used to quantify bacterial adherence to acellular dermal matrix material, as well as synthetic material. S. aureus strains SC-1 and UAMS-1 both adhered better to ADM materials (AlloDerm® and FlexHD®) than to the synthetic material prolene. S. aureus also adhered better to vicryl than to prolene. Strain UAMS-1 adhered better to vicryl and ADM materials than did strain SC-1. Conclusion Our results suggest that S. aureus adheres more readily to ADM material than to synthetic material. We have developed an assay to rapidly test bacterial formation on surgical materials, using two S. aureus bacterial strains. This provides a standard method to evaluate existing and new materials with regard to bacterial adherence and potential

  19. Associations between immune function and air pollution among postmenopausal women living in the Puget Sound airshed

    NASA Astrophysics Data System (ADS)

    Williams, Lori A.

    Air pollution is associated with adverse health outcomes, and changes in the immune system may be intermediate steps between exposure and a clinically relevant adverse health outcome. We analyzed the associations between three different types of measures of air pollution exposure and five biomarkers of immune function among 115 overweight and obese postmenopausal women whose immunity was assessed as part of a year-long moderate exercise intervention trial. For air pollution metrics, we assessed: (1) residential proximity to major roads (freeways, major arterials and truck routes); (2) fine particulate matter (PM2.5) at the monitor nearest the residence, averaged over three time windows (3 days, 30 days and 60 days); and (3) nitrogen dioxide (NO2) modeled from land use characteristics. Our immune biomarkers included three measures of inflammation (C-reactive protein, serum amyloid A and interleukin-6) and two measures of cellular immunity (natural killer cell cytotoxicity and T lymphocyte proliferation). We hypothesized that living near a major road, increased exposure to PM2.5, and increased exposure to NO2 would each be independently associated with increased inflammation and decreased immune function. We observed a 21% lower average natural killer cell cytotoxicity among women living within 150 meters of a major arterial road compared to other women. For PM2.5, we observed changes in 3 of 4 indicators of lymphocyte proliferation stimulated by anti-CD3 (an antibody to the T cell receptor) associated with increases in 3-day averaged PM2.5. For 30-day averaged PM2.5 and 60-day averaged PM2.5 we did not observe any statistically significant associations. We observed an increase in the lymphocyte proliferation index stimulated by the plant protein phytohemagglutinin (PHA) at 1 of 2 PHA concentrations in association with modeled NO2. For the three inflammatory markers, we observed no notable associations with any of our measures of air pollution. 
If confirmed, our
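The 3-, 30- and 60-day exposure metrics described above amount to trailing means over a daily PM2.5 series. A minimal sketch of that calculation, with hypothetical function and variable names (the study itself does not publish its code):

```python
def trailing_means(daily_pm25, windows=(3, 30, 60)):
    """Trailing-window averages of a daily PM2.5 series (most recent
    day last). Returns one average per window length, mirroring the
    3-, 30- and 60-day exposure metrics; names here are illustrative.
    """
    out = {}
    for w in windows:
        tail = daily_pm25[-w:]          # last w daily values
        out[w] = sum(tail) / len(tail)  # arithmetic mean over the window
    return out
```

For example, with 60 days of data, `trailing_means(series)[3]` is the mean of the three most recent values.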

  20. AIRSHED DOMAINS FOR MODELING ATMOSPHERIC DEPOSITION OF OXIDIZED AND REDUCED NITROGEN TO THE NEUSE/PAMLICO SYSTEM OF NORTH CAROLINA

    EPA Science Inventory

    Atmospheric deposition is important to nutrient loadings to coastal estuaries. Atmospheric emissions of nitrogen travel hundreds of kilometers as they are removed via atmospheric deposition. Long-range transport from outside the Neuse/Pamlico system in North Carolina is an impo...

  1. 2015 Targeted AirShed Grant Program Grant - Closed Announcement FY 2015

    EPA Pesticide Factsheets

    Targeted Air Shed Grant Program proposal for FY 2015. The overall goal of the program is to reduce air pollution in the Nation’s areas with the highest levels of ozone and PM2.5 ambient air concentrations.

  2. EXPOSURE TO VOLATILE ORGANIC COMPOUNDS MEASURED IN A SOURCE IMPACTED AIRSHED

    EPA Science Inventory

    A three-year exposure monitoring study is being conducted in a large city in the Midwestern U.S. The study is aimed at determining the factors influencing exposures to air pollutants of outdoor origin, including volatile organic compounds (VOCs) and particulate matter.

  3. Uncertainty characterization and quantification in air pollution models. Application to the ADMS-Urban model.

    NASA Astrophysics Data System (ADS)

    Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.

    2009-04-01

    uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and slow convergence. To speed up convergence we used the method of antithetic variables, which takes advantage of the symmetry of probability laws. The air quality model simulations were carried out by the Association for Study and Watching of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org) in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
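The antithetic-variable trick mentioned above can be sketched in a few lines. The `simulate` stand-in below is a hypothetical one-parameter surrogate for a dispersion-model run, not the CHIMERE or ADMS-Urban code; the point is only the pairing of each Gaussian draw with its mirror image, which exploits the symmetry of the input distribution to reduce estimator variance:

```python
import math
import random

def simulate(theta):
    # Hypothetical stand-in for one model run: maps a perturbed
    # input parameter to an output concentration.
    return math.exp(theta)

def mc_antithetic(n_pairs, mu=0.0, sigma=1.0, seed=42):
    """Monte Carlo estimate of the mean output using antithetic variates.

    Each draw theta = mu + sigma*z is paired with its mirror
    mu - sigma*z; averaging the pair cancels odd-order fluctuations,
    reducing variance for (near-)monotone model responses.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        y1 = simulate(mu + sigma * z)
        y2 = simulate(mu - sigma * z)  # antithetic partner
        total += 0.5 * (y1 + y2)
    return total / n_pairs
```

With the log-normal surrogate above, the estimate converges to E[exp(Z)] = e^0.5 ≈ 1.649 with noticeably fewer runs than plain Monte Carlo.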

  4. Prognostic Validation of SKY92 and Its Combination With ISS in an Independent Cohort of Patients With Multiple Myeloma.

    PubMed

    van Beers, Erik H; van Vliet, Martin H; Kuiper, Rowan; de Best, Leonie; Anderson, Kenneth C; Chari, Ajai; Jagannath, Sundar; Jakubowiak, Andrzej; Kumar, Shaji K; Levy, Joan B; Auclair, Daniel; Lonial, Sagar; Reece, Donna; Richardson, Paul; Siegel, David S; Stewart, A Keith; Trudel, Suzanne; Vij, Ravi; Zimmerman, Todd M; Fonseca, Rafael

    2017-09-01

    High-risk and low-risk multiple myeloma patients follow very different clinical courses, as reflected in their PFS and OS. To be clinically useful, methodologies used to identify high- and low-risk disease must be validated in representative independent clinical data and be available so that patients can be managed appropriately. A recent analysis indicated that SKY92 combined with the International Staging System (ISS) identifies patients with different risk disease with high sensitivity. Here we computed the performance of eight gene-expression-based classifiers (SKY92, UAMS70, UAMS80, IFM15, Proliferation Index, Centrosome Index, Cancer Testis Antigen and HM19), as well as the combination SKY92/ISS, in an independent cohort of 91 newly diagnosed MM patients. The classifiers identified between 9% and 21% of patients as high risk, with hazard ratios (HRs) between 1.9 and 8.2. Among the eight signatures, SKY92 identified the largest proportion of patients (21%), also with the highest HR (8.2). Our analysis also validated the combination SKY92/ISS for identification of three classes: low risk (42%), intermediate risk (37%) and high risk (21%). Between the low-risk and high-risk classes the HR is >10. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Failure mode and effects analysis of the universal anaesthesia machine in two tertiary care hospitals in Sierra Leone

    PubMed Central

    Rosen, M. A.; Sampson, J. B.; Jackson, E. V.; Koka, R.; Chima, A. M.; Ogbuagu, O. U.; Marx, M. K.; Koroma, M.; Lee, B. H.

    2014-01-01

    Background Anaesthesia care in developed countries involves sophisticated technology and experienced providers. However, advanced machines may be inoperable or fail frequently when placed into the austere medical environment of a developing country. Failure mode and effects analysis (FMEA) is a method for engaging local staff in identifying real or potential breakdowns in processes or work systems and to develop strategies to mitigate risks. Methods Nurse anaesthetists from the two tertiary care hospitals in Freetown, Sierra Leone, participated in three sessions moderated by a human factors specialist and an anaesthesiologist. Sessions were audio recorded, and group discussion graphically mapped by the session facilitator for analysis and commentary. These sessions sought to identify potential barriers to implementing an anaesthesia machine designed for austere medical environments—the universal anaesthesia machine (UAM)—and also engaging local nurse anaesthetists in identifying potential solutions to these barriers. Results Participating Sierra Leonean clinicians identified five main categories of failure modes (resource availability, environmental issues, staff knowledge and attitudes, and workload and staffing issues) and four categories of mitigation strategies (resource management plans, engaging and educating stakeholders, peer support for new machine use, and collectively advocating for needed resources). Conclusions We identified factors that may limit the impact of a UAM and devised likely effective strategies for mitigating those risks. PMID:24833727

  6. Source apportionment of fine particulate matter measured in an industrialized coastal urban area of South Texas

    NASA Astrophysics Data System (ADS)

    Karnae, Saritha; John, Kuruvilla

    2011-07-01

    Corpus Christi is a growing industrialized urban airshed in South Texas impacted by local emissions and regional transport of fine particulate matter (PM2.5). The positive matrix factorization (PMF2) technique was used to evaluate particulate matter pollution in the urban airshed by estimating the types of sources, and their corresponding mass contributions, affecting the measured ambient PM2.5 levels. Speciated fine particulate matter concentrations measured from July 2003 through December 2008 at a PM2.5 speciation site were used in this study. PMF2 identified eight source categories, of which secondary sulfates were the dominant one, accounting for 30.4% of the apportioned mass. The other sources identified included aged sea salt (18.5%), biomass burns (12.7%), crustal dust (10.1%), traffic (9.7%), fresh sea salt (8.1%), industrial sources (6%), and a co-mingled source of oil combustion and diesel emissions (4.6%). The apportioned PM mass showed distinct seasonal variability between source categories. The PM levels in Corpus Christi were affected by biomass burns in Mexico and Central America during April and May, sub-Saharan dust storms from Africa during the summer months, and a continental haze episode during August and September with significant transport from the highly industrialized areas of Texas and the neighboring states. Potential source contribution function (PSCF) analysis was performed, and it identified source regions and the influence of long-range transport of fine particulate matter affecting this urban area.
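The factorization at the heart of such source apportionment can be illustrated with a simplified, unweighted nonnegative factorization (Lee-Seung multiplicative updates). This is only a sketch of the idea: PMF2 proper minimizes an objective weighted by per-element measurement uncertainties and uses its own solver, which this stand-in omits. All names here are ours:

```python
import numpy as np

def nmf_sources(X, n_sources, n_iter=2000, seed=0):
    """Simplified nonnegative factorization X ~ G @ F via Lee-Seung
    multiplicative updates (an unweighted stand-in for PMF).

    X : (samples, species) nonnegative concentration matrix
    G : (samples, n_sources) source contributions per sample
    F : (n_sources, species) source chemical profiles
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_sources)) + 0.1
    F = rng.random((n_sources, m)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        # multiplicative updates preserve nonnegativity by construction
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F
```

Column sums of `G` then give each source's apportioned mass, the quantity reported as percentages in the abstract above.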

  7. Burdensome and Unnecessary Reporting Requirements of the Public Utility Regulatory Policies Act Need to be Changed.

    DTIC Science & Technology

    1981-09-14

    Commissioners. PURPA: Public Utility Regulatory Policies Act. GLOSSARY. Advertising standard: as defined by PURPA, no electric utility may recover from any per- son...systems in 40 States, Puerto Rico, Guam, and Virgin Islands. Automatic adjustment clause standard: as defined by PURPA, no electric utility may increase any...Interruptible rate standard: as defined by PURPA, a rate offered to each industrial and commercial electric consumer that shall reflect the cost of

  8. RECEPTOR MODEL COMPARISONS AND WIND DIRECTION ANALYSES OF VOLATILE ORGANIC COMPOUNDS AND SUBMICROMETER PARTICLES IN AN ARID, BINATIONAL, URBAN AIRSHED

    EPA Science Inventory

    The relationship between continuous measurements of volatile organic compounds sources and particle number was evaluated at a Photochemical Assessment Monitoring Station Network (PAMS) site located near the U.S.-Mexico Border in central El Paso, TX. Sources of volatile organic...

  9. CXCL2 synthesized by oral squamous cell carcinoma is involved in cancer-associated bone destruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oue, Erika; Section of Maxillofacial Surgery, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University; Global Center of Excellence

    Highlights: • Oral cancer cells synthesize CXCL2. • CXCL2 synthesized by oral cancer is involved in osteoclastogenesis. • CXCL2-neutralizing antibody inhibited osteoclastogenesis induced by oral cancer cells. • We first report the role of CXCL2 in cancer-associated bone destruction. -- Abstract: To explore the mechanism of bone destruction associated with oral cancer, we identified factors that stimulate osteoclastic bone resorption in oral squamous cell carcinoma. Two clonal cell lines, HSC3-C13 and HSC3-C17, were isolated from the maternal oral cancer cell line, HSC3. The conditioned medium from HSC3-C13 cells showed the highest induction of Rankl expression in the mouse stromal cell lines ST2 and UAMS-32 as compared to that in maternal HSC3 cells and HSC3-C17 cells, which showed similar activity. The conditioned medium from HSC3-C13 cells significantly increased the number of osteoclasts in a co-culture with mouse bone marrow cells and UAMS-32 cells. Xenograft tumors generated from these clonal cell lines into the periosteal region of the parietal bone in athymic mice showed that HSC3-C13 cells caused extensive bone destruction and a significant increase in osteoclast numbers as compared to HSC3-C17 cells. Gene expression was compared between HSC3-C13 and HSC3-C17 cells by using microarray analysis, which showed that the CXCL2 gene was highly expressed in HSC3-C13 cells as compared to HSC3-C17 cells. Immunohistochemical staining revealed the localization of CXCL2 in human oral squamous cell carcinomas. The increase in osteoclast numbers induced by the HSC3-C13-conditioned medium was dose-dependently inhibited by addition of anti-human CXCL2-neutralizing antibody in a co-culture system. Recombinant CXCL2 increased the expression of Rankl in UAMS-32 cells. These results indicate that CXCL2 is involved in bone destruction induced by oral cancer. This is

  10. The age estimation practice related to illegal unaccompanied minors immigration in Italy.

    PubMed

    Pradella, F; Pinchi, V; Focardi, M; Grifoni, R; Palandri, M; Norelli, G A

    2017-12-01

    The migrants who arrived on the Italian coasts in 2016 numbered 181,436, 18% more than the previous year and 6% more than the highest number ever recorded. An "unaccompanied minor" (UAM) is a third-country national or a stateless person under eighteen years of age who arrives on the territory of the Member State unaccompanied by an adult responsible for him/her, whether by law or by the practice of the Member State concerned, and for as long as he or she is not effectively taken into the care of such a person; the term includes a minor who is left unaccompanied after entering the territory of the Member States. As many as 95,985 UAMs applied for international protection in an EU member country in 2015 alone, almost four times the number registered in the previous year. The UAMs who arrived in Italy numbered 28,283 in 2016; 94% of them were male, 92% unaccompanied, and 8% under 15; 53.6% were 17 years old, and individuals aged 16 to 17 together accounted for 82%. Many of them (50%, or 6,561 in 2016) escaped from the shelters, thus avoiding being formally identified and registered in Italy in an attempt to reach northern European countries more easily, since the Dublin Regulations (2003) state that the asylum application should be processed in the EU country of entrance or where parents reside. The age assessment procedures can therefore be considered a relevant task that weighs on the shoulders of forensic experts, with all the related issues, and the coming of age is the important threshold. In the EU laws on asylum, minors are considered one of the groups of vulnerable persons towards whom Member States have specific obligations. A proper common EU formal regulation on age estimation procedures is still lacking. 
According to the Italian legal framework in the matter, a medical examination should always have been performed, but a new law completely changed the approach to the procedures of age estimation of migrants (excluding criminal cases) with a better adherence

  11. The Prussian and American General Staffs: An Analysis of Cross-Cultural Imitation, Innovation, and Adaptation

    DTIC Science & Technology

    1981-03-30

    Document is a thesis submitted in partial fulfillment of the requirements for the...this type of warship. The distinctive features of imitation are a similarity in organization, structure, design, names or processes beyond a similarity...of Europe, especially the military attachés and military educators, are studied. Additionally, the chapter analyzes the effect of Emory Upton and

  12. Enhanced ultrasonically assisted turning of a β-titanium alloy.

    PubMed

    Maurotto, Agostino; Muhammad, Riaz; Roy, Anish; Silberschmidt, Vadim V

    2013-09-01

    Although titanium alloys have outstanding mechanical properties such as high hot hardness, a good strength-to-weight ratio and high corrosion resistance, their low thermal conductivity and high chemical affinity to tool materials severely impair their machinability. Ultrasonically assisted machining (UAM) is an advanced machining technique that has been shown to improve the machinability of a β-titanium alloy, namely Ti-15-3-3-3, compared to conventional turning processes. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Opto-EM and Devices Investigation

    DTIC Science & Technology

    1989-06-01

    value of bandwidth is used, the signal-to-noise ratio of the UAM system becomes (SNR)U ≈ exp(...) (expression illegible in the scanned original). 3.3 Gaussian-shaped message spectra...Approved for public release. 1. Synthesis, Single Crystal Growth, Purification and Characterization of Indium Phosphide. 2. Deposition of Select Silicides Under High Vacuum Conditions. 3. Use

  14. [Design and activity verification of human parathyroid hormone (1-34) mutant protein].

    PubMed

    Qiu, Shuang; Jiang, Yue-Shui; Li, Zhi-Qin; Lei, Jian-Yong; Chen, Yun; Jin, Jian

    2012-07-01

    Through protein-protein BLAST of homologous sequences from different species in the NCBI database, and preliminary molecular docking and molecular dynamics simulations with the software Discovery Studio 3.1, the three amino acids R25-K26-K27 of natural human parathyroid hormone (1-34) were mutated to Q25-E26-L27, and the biological activity of the mutant peptide was evaluated. The results showed that the root-mean-square deviation (RMSD) between the main chains of PTH (1-34)-(RKK-QEL) and PTH (1-34) was 2.5093, indicating that the difference between the two main-chain conformations was relatively small; the interaction energy between PTH (1-34)-(RKK-QEL) and its receptor protein PTH1R was enhanced by 7.5% compared to native PTH (1-34), from -554.083 kcal x mol(-1) to -599.253 kcal x mol(-1); and the number of hydrogen bonds increased from 32 to 38. PTH (1-34)-(RKK-QEL) significantly stimulated RANKL gene expression (P < 0.01) while inhibiting OPG gene expression (P < 0.01) in UAMS-32P cells; in the co-culture system of UAMS-32P cells and mouse primary femur bone marrow cells, PTH (1-34)-(RKK-QEL) stimulated the formation of osteoclasts (P < 0.01) and had higher biological activity than the PTH (1-34) standard reagent.

  15. Results for Phase I of the IAEA Coordinated Research Program on HTGR Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard; Bostelmann, Friederike; Yoon, Su Jong

    2015-01-01

    The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted, but in many cases has become the preferred replacement for traditional conservative analysis in safety and licensing work. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and the robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High Temperature Gas-cooled Reactors (HTGRs) have their own peculiarities (coated particle design, large graphite quantities, different materials and high temperatures) that impose additional simulation requirements. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modeling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the HTR-PM (INET, China). This report summarizes the contributions of the HTGR Methods Simulation group at Idaho National Laboratory (INL) up to this point of the CRP. The activities at INL have so far focused on creating the problem specifications for the prismatic design, as well as providing reference solutions for the exercises defined for Phase I. An overview is provided of the HTGR UAM objectives and scope, and the detailed specifications for Exercises I-1, I-2, I-3 and I-4 are also included here for completeness. The main focus of the report is the compilation and discussion of reference results for Phase I (i.e., for input parameters at their nominal or best-estimate values), which is defined as the first step of the uncertainty quantification process. These reference results can be used by other CRP participants for comparison with other codes or their own

  16. Effects of 10% biofuel substitution on ground level ozone formation in Bangkok, Thailand

    NASA Astrophysics Data System (ADS)

    Milt, Austin; Milano, Aaron; Garivait, Savitri; Kamens, Richard

    2009-12-01

    The Thai Government's search for alternatives to imported petroleum led to the consideration of mandating 10% biofuel blends (biodiesel and gasohol) by 2012. Concerns over the effects of biofuel combustion on ground-level ozone formation, relative to the conventional fuels they replace, need addressing. Ozone formation in Bangkok is explored using a trajectory box model. The model is compared against O3, NO, and NO2 time-concentration data from air monitoring stations operated by the Thai Pollution Control Department. Four high-ozone days in 2006 were selected for modeling. Both the traditional trajectory approach and a citywide average approach were used. The model performs well with both approaches, but slightly better with the citywide average. Highly uncertain and missing data are derived within realistic bounds using a genetic algorithm optimization. It was found that 10% biofuel substitution will lead to as much as a 16 ppb peak O3 increase on these four days, compared to a 48 ppb increase due to the predicted vehicle fleet size increase between 2006 and 2012. The approach also suggests that when detailed meteorological data are not available to run three-dimensional airshed models, and the air is stagnant or predominantly remains over an urban area during the day, a simple low-cost trajectory analysis of O3 formation may be applicable.
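The core of a trajectory box model is the time integration of chemistry inside a single well-mixed air parcel. As a minimal illustration (not the study's actual mechanism), the inorganic NO/NO2/O3 null cycle can be stepped forward with explicit Euler; the rate constants below are round illustrative values, and a real model adds the VOC chemistry, emissions and dilution terms that make net ozone production possible:

```python
def integrate_box(no, no2, o3, j_no2=8.0e-3, k_no_o3=4.4e-4, dt=1.0, steps=3600):
    """Explicit-Euler integration of the NO/NO2/O3 null cycle in a
    well-mixed box (concentrations in ppb, time in seconds):
        NO2 + hv -> NO + O3    photolysis, rate j_no2 [1/s]
        NO + O3  -> NO2        titration, rate k_no_o3 [1/(ppb s)]
    Rate constants are illustrative round numbers only.
    """
    for _ in range(steps):
        prod = j_no2 * no2        # photolysis source of NO and O3
        loss = k_no_o3 * no * o3  # titration sink of NO and O3
        no += dt * (prod - loss)
        no2 += dt * (loss - prod)
        o3 += dt * (prod - loss)
    return no, no2, o3
```

After an hour of simulated time the box relaxes to the photostationary state, where O3·NO/NO2 equals the ratio j_no2/k_no_o3, while total NOx (NO + NO2) is conserved.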

  17. An intercomparison of biogenic emissions estimates from BEIS2 and BIOME: Reconciling the differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, J.G.; Emigh, R.A.; Pierce, T.E.

    1996-12-31

    Biogenic emissions play a critical role in urban and regional air quality. For instance, biogenic emissions contribute upwards of 76% of the daily hydrocarbon emissions in the Atlanta, Georgia airshed. The Biogenic Emissions Inventory System-Version 2.0 (BEIS2) and the Biogenic Model for Emissions (BIOME) are two models that compute biogenic emissions estimates. BEIS2 is a FORTRAN-based system, and BIOME is an ARC/INFO®- and SAS®-based system. Although the technical formulations of the models are similar, the models produce different biogenic emissions estimates for what appear to be essentially the same inputs. The goals of our study are the following: (1) determine why BIOME and BEIS2 produce different emissions estimates; (2) attempt to understand the impacts that the differences have on the emissions estimates; (3) reconcile the differences where possible; and (4) present a framework for the use of BEIS2 and BIOME. In this study, we used the Coastal Oxidant Assessment for Southeast Texas (COAST) biogenics data, supplied to us courtesy of the Texas Natural Resource Conservation Commission (TNRCC), and we extracted the BEIS2 data for the same domain. We compared the emissions estimates of the two models using their respective data sets: BIOME using TNRCC data and BEIS2 using BEIS2 data.

  18. Building a DNA barcode library of Alaska's non-marine arthropods.

    PubMed

    Sikes, Derek S; Bowser, Matthew; Morton, John M; Bickford, Casey; Meierotto, Sarah; Hildebrandt, Kyndall

    2017-03-01

    Climate change may result in ecological futures with novel species assemblages, trophic mismatch, and mass extinction. Alaska has a limited taxonomic workforce to address these changes. We are building a DNA barcode library to facilitate a metabarcoding approach to monitoring non-marine arthropods. Working with the Canadian Centre for DNA Barcoding, we obtained DNA barcodes from recently collected and authoritatively identified specimens in the University of Alaska Museum (UAM) Insect Collection and the Kenai National Wildlife Refuge collection. We submitted tissues from 4776 specimens, of which 81% yielded DNA barcodes representing 1662 species and 1788 Barcode Index Numbers (BINs), of primarily terrestrial, large-bodied arthropods. This represents 84% of the species available for DNA barcoding in the UAM Insect Collection. There are now 4020 Alaskan arthropod species represented by DNA barcodes, after including all records in Barcode of Life Data Systems (BOLD) of species that occur in Alaska - i.e., 48.5% of the 8277 Alaskan, non-marine-arthropod, named species have associated DNA barcodes. An assessment of the identification power of the library in its current state yielded fewer species-level identifications than expected, but the results were not discouraging. We believe we are the first to deliberately begin development of a DNA barcode library of the entire arthropod fauna for a North American state or province. Although far from complete, this library will become increasingly valuable as more species are added and costs to obtain DNA sequences fall.
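The identification step that the library enables (assigning a query sequence to a species when its barcode is close enough to a reference) can be sketched as a nearest-reference search over pairwise identity. This is a deliberately simplified stand-in with hypothetical names: real pipelines such as BOLD use sequence alignment and BIN clustering rather than the equal-length, pre-aligned comparison assumed here:

```python
def identify(query, library, min_identity=0.97):
    """Assign a query barcode to the library species with the highest
    pairwise identity, provided it clears a similarity threshold.

    query   : DNA sequence string
    library : dict mapping species name -> reference barcode sequence
              (assumed pre-aligned and equal in length to the query)
    Returns (species, identity), or (None, best identity) if no
    reference is similar enough.
    """
    best_species, best_score = None, 0.0
    for species, ref in library.items():
        matches = sum(a == b for a, b in zip(query, ref))
        score = matches / len(ref)  # fraction of identical positions
        if score > best_score:
            best_species, best_score = species, score
    if best_score >= min_identity:
        return best_species, best_score
    return None, best_score
```

A query with no close reference falls through to `None`, which is exactly the gap a growing barcode library shrinks as more Alaskan species are added.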

  19. Immunohistochemical evaluation of myofibroblast density in odontogenic cysts and tumors.

    PubMed

    Kouhsoltani, Maryam; Halimi, Monireh; Jabbari, Golchin

    2016-01-01

    Background. The aim of this study was to investigate myofibroblast (MF) density in a broad spectrum of odontogenic cysts and tumors and the relation between the density of MFs and the clinical behavior of these lesions. Methods. A total of 105 cases of odontogenic lesions, including unicystic ameloblastoma (UAM), solid ameloblastoma (SA), odontogenic keratocyst (OKC), dentigerous cyst (DC), radicular cyst (RC) (15 for each category), and odontogenic myxoma (OM), adenomatoid odontogenic tumor (AOT), calcifying odontogenic cyst (COC) (10 for each category), were immunohistochemically stained with anti-α-smooth muscle actin antibody. The mean percentage of positive cells in 10 high-power fields was considered as MF density for each case. Results. A statistically significant difference was observed in the mean scores between the study groups (P < 0.001). The intensity of MFs was significantly higher in odontogenic tumors compared to odontogenic cysts (P < 0.001). There was no statistically significant difference between odontogenic tumors, except between UAM and OM (P = 0.041). The difference between OKC and odontogenic tumors was not statistically significant (P > 0.05). The number of MFs was significantly higher in OKC and lower in COC compared to other odontogenic cysts (P = 0.007 and P = 0.045, respectively). Conclusion. The results of the present study suggest a role for MFs in the aggressive behavior of odontogenic lesions. MFs may represent an important target of therapy, especially for aggressive odontogenic lesions. Our findings support the classification of OKC in the category of odontogenic tumors.

  20. Design and Implementation of A DICOM PACS With Secure Access Via Internet

    DTIC Science & Technology

    2001-10-25

    which we have called SINFIM (Sistema de INFormación de Imágenes Médicas, a medical image information system). This system permits the automatic acquisition of DICOM images [2] [3...1] MA. Martínez, AJR. Jiménez, BV. Medina, LJ. Azpiroz. “Los Sistemas PACS” [PACS Systems]. [Last access 08/12/2000]. Available URL http://itzamna.uam.mx...June, approving the Regulation on security measures for automated files containing personal data (Boletín Oficial del Estado, number 151, of 25 June 1999)

  1. Applications of computer assisted surgery and medical robotics at the ISSSTE, México: preliminary results.

    PubMed

    Mosso, José Luis; Pohl, Mauricio; Jimenez, Juan Ramon; Valdes, Raquel; Yañez, Oscar; Medina, Veronica; Arambula, Fernando; Padilla, Miguel Angel; Marquez, Jorge; Gastelum, Alfonso; Mosso, Alejo; Frausto, Juan

    2007-01-01

    We present the first results of four projects of the second phase of the Mexican project Computer Assisted Surgery and Medical Robotics, supported by the Mexican Science and Technology National Council (Consejo Nacional de Ciencia y Tecnología) under grant SALUD-2002-C01-8181. The projects are being developed by three universities (UNAM, UAM, ITESM), and the goal of the project is to integrate a laboratory at a hospital of the ISSSTE to serve endoscopic surgeons, urologists, gastrointestinal endoscopists, and neurosurgeons.

  2. Engaging Underrepresented Minorities in Research: Our Vision for a "Research-Friendly Community".

    PubMed

    Olson, Mary; Cottoms, Naomi; Sullivan, Greer

    2015-01-01

    This article introduces our "Research-Friendly Community" vision, placing research in the arena of social justice by giving citizens a voice and an opportunity to actively determine research agendas in their community. The mission of the Tri-County Rural Health Network, a minority-owned, community-based nonprofit serving 16 counties in Arkansas' Mississippi River Delta region, is to increase access to health-related services and to opportunities both to participate in and to shape research. Tri-County has built trust with the community through the use of Deliberative Democracy Forums, a model devised by the Kettering Foundation, and through a community health worker program called Community Connectors. Over time, a partnership was formed with investigators at the University of Arkansas for Medical Sciences (UAMS). Tri-County serves as a boundary spanner linking community members, other community organizations, local politicians, policy makers, and researchers. We describe our experience for other nonprofits or universities that might want to develop a similar program.

  3. Uncertainty characterization and quantification in air pollution models. Application to the CHIMERE model

    NASA Astrophysics Data System (ADS)

    Debry, Edouard; Mallet, Vivien; Garaud, Damien; Malherbe, Laure; Bessagnet, Bertrand; Rouïl, Laurence

    2010-05-01

    A probability density function (PDF) is associated with each input parameter, according to its assumed uncertainty. The combined PDFs are then propagated through the model by means of many simulations with randomly perturbed input parameters. One may then obtain an approximation of the PDF of modeled concentrations, provided the Monte Carlo process has reasonably converged. The uncertainty analysis with CHIMERE was conducted with a Monte Carlo method over the French domain for two periods: 13 days during January 2009, with a focus on particles, and 28 days during August 2009, with a focus on ozone. The results show that for the summer period and 500 simulations, the time- and space-averaged standard deviation for ozone is 16 µg/m3, to be compared with an average concentration of 89 µg/m3. It is noteworthy that the space-averaged standard deviation for ozone is relatively constant over time (the standard deviation of the time series itself is 1.6 µg/m3). The spatial variation of the ozone standard deviation suggests that emissions have a significant impact, followed by western boundary conditions. Monte Carlo simulations are then post-processed by both ensemble [4] and Bayesian [5] methods in order to assess the quality of the uncertainty estimation. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the Paris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. 
(4) Mallet, V., and B
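    The Monte Carlo propagation described in this abstract (assign a PDF to each uncertain input, run the model many times with perturbed inputs, then compute statistics of the modeled concentrations) can be sketched as below. The toy model, the lognormal input PDFs, and all parameter values are illustrative assumptions only, not the actual CHIMERE configuration.

    ```python
    import random
    import statistics

    # Toy stand-in for a chemistry-transport model run: ozone concentration
    # as a simple function of two uncertain multiplicative inputs
    # (emissions and boundary conditions). Purely illustrative.
    def toy_model(emission_factor, boundary_factor):
        base_ozone = 89.0  # µg/m3, average concentration cited in the abstract
        return base_ozone * (0.6 * emission_factor + 0.4 * boundary_factor)

    def monte_carlo(n_runs=500, seed=42):
        rng = random.Random(seed)
        outputs = []
        for _ in range(n_runs):
            # Perturb each input according to its assumed PDF
            # (lognormal factors centered on 1; widths are assumptions).
            e = rng.lognormvariate(0.0, 0.15)
            b = rng.lognormvariate(0.0, 0.10)
            outputs.append(toy_model(e, b))
        return statistics.mean(outputs), statistics.stdev(outputs)

    mean, std = monte_carlo()
    print(f"mean = {mean:.1f} µg/m3, std = {std:.1f} µg/m3")
    ```

    With a few hundred runs the sample statistics stabilize, which mirrors the study's use of 500 simulations to approximate the output PDF.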

  4. Influence of atmospheric deposition on Okefenokee National Wildlife Refuge

    USGS Publications Warehouse

    Winger, P.V.; Lasier, P.J.; Jackson, B.P.

    1995-01-01

    Designation of Okefenokee National Wildlife Refuge (Georgia) as a Class I Air Quality Area affords mandatory protection of the airshed through permit-review processes for planned developments. Rainfall is the major source of water to the swamp, and potential impacts from developments in the airshed are high. To meet management needs for baseline information, chemical contributions from atmospheric deposition and partitioning of anions and cations in various matrices of the swamp, with emphasis on mercury and lead, were determined during this study. Chemistry of rainfall was measured on an event basis from one site and quarterly on surface water, pore water, floc, and sediment from four locations. A sediment core collected from the Refuge interior was sectioned, aged, and analyzed for mercury. Rainfall was acidic (pH 4.7-4.9), with average total and methyl mercury concentrations of 9 ng/L and 0.1 ng/L, respectively. Surface waters were acidic (pH 3.8-4.1), dilute (specific conductance 35-60 µS), and highly organic (dissolved organic carbon 35-50 mg/L). Total mercury was 1-3.5 ng/L in surface and pore water, and methyl mercury was 0.02-0.20 ng/L. Total mercury in sediments and floc was 100-200 ng/g dry weight, and methyl mercury was 4-16 ng/g. Lead was 0-1.7 µg/L in rainfall, not detectable in surface water, 3.4-5.4 µg/L in pore water, and 3.9-4.9 mg/kg in floc and sediment. Historical patterns of mercury deposition showed an increase in total mercury from pre-1800 concentrations of 250 ng/g to 500 ng/g in 1950, with concentrations declining thereafter to present.

  5. What does atmospheric nitrogen contribute to the Gulf of Mexico area of oxygen depletion?

    NASA Astrophysics Data System (ADS)

    Rabalais, N. N.

    2017-12-01

    The northern Gulf of Mexico, influenced by the freshwater discharge and nutrient loads of the Mississippi River watershed, is the location of the world's second largest human-caused area of coastal hypoxia. Over 500 more anthropogenic 'dead zones' exist in coastal waters. Point source inputs within the Mississippi River watershed account for about ten percent of the total nitrogen inputs to the Mississippi River, with the remainder coming from nonpoint sources. Atmospheric nitrogen makes up about sixteen percent of the nonpoint source input of nitrogen. Most of the NOx is generated within the Ohio River watershed from the burning of fossil fuels. Some remains to be deposited within the same watershed, but the airshed deposits much of the NOx along the U.S. eastern seaboard, including Chesapeake Bay, which also has a hypoxia problem. Most of the volatilized ammonia is produced from fertilizers or manure within the upper Mississippi River watershed, is deposited within a localized airshed, and is not carried long distances like the NOx. The atmospheric nitrogen input to the coastal waters affected by hypoxia is considered to be minimal. In the last half century, the nitrogen load from the Mississippi River to the Gulf of Mexico has increased 300 percent. During this period, low-oxygen bottom waters have developed in the coastal waters and worsened coincident with the increase in the nitrogen load. The 31-yr average size of the bottom-water hypoxia area in the Gulf of Mexico is 13,800 square kilometers, well over the 5,000 square kilometer goal of the Mississippi River Nutrient/Gulf of Mexico Hypoxia Task Force. Knowing the amounts and sources of excess nutrients in watersheds with adjacent coastal waters experiencing eutrophication and hypoxia is important for management strategies to reduce those nutrients and improve water quality.

  6. Immunohistochemical evaluation of myofibroblast density in odontogenic cysts and tumors

    PubMed Central

    Kouhsoltani, Maryam; Halimi, Monireh; Jabbari, Golchin

    2016-01-01

    Background. The aim of this study was to investigate myofibroblast (MF) density in a broad spectrum of odontogenic cysts and tumors and the relation between the density of MFs and the clinical behavior of these lesions. Methods. A total of 105 cases of odontogenic lesions, including unicystic ameloblastoma (UAM), solid ameloblastoma (SA), odontogenic keratocyst (OKC), dentigerous cyst (DC), radicular cyst (RC) (15 for each category), and odontogenic myxoma (OM), adenomatoid odontogenic tumor (AOT), calcifying odontogenic cyst (COC) (10 for each category), were immunohistochemically stained with anti-α-smooth muscle actin antibody. The mean percentage of positive cells in 10 high-power fields was considered as MF density for each case. Results. A statistically significant difference was observed in the mean scores between the study groups (P < 0.001). The intensity of MFs was significantly higher in odontogenic tumors compared to odontogenic cysts (P < 0.001). There was no statistically significant difference between odontogenic tumors, except between UAM and OM (P = 0.041). The difference between OKC and odontogenic tumors was not statistically significant (P > 0.05). The number of MFs was significantly higher in OKC and lower in COC compared to other odontogenic cysts (P = 0.007 and P = 0.045, respectively). Conclusion. The results of the present study suggest a role for MFs in the aggressive behavior of odontogenic lesions. MFs may represent an important target of therapy, especially for aggressive odontogenic lesions. Our findings support the classification of OKC in the category of odontogenic tumors. PMID:27092213

  7. Distributive Distillation Enabled by Microchannel Process Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arora, Ravi

    The application of microchannel technology to distributive distillation was studied to achieve the Grand Challenge goals of 25% energy savings and a 10% return on investment. In Task 1, a detailed study was conducted and two distillation systems were identified that would meet the Grand Challenge goals if microchannel distillation technology were used. Material and heat balance calculations were performed to develop process flowsheet designs for the two distillation systems in Task 2. The process designs focused on two methods of integrating the microchannel technology: (1) integrating microchannel distillation into an existing conventional column, and (2) microchannel distillation for new plants. A design concept for a modular microchannel distillation unit was developed in Task 3. In Task 4, Ultrasonic Additive Machining (UAM) was evaluated as a manufacturing method for microchannel distillation units. However, it was found that significant development work would be required to establish process parameters for using UAM in commercial distillation manufacturing. Two alternate manufacturing methods were explored, and both were experimentally tested to confirm their validity. The conceptual design of the microchannel distillation unit (Task 3) was combined with the manufacturing methods developed in Task 4 and the flowsheet designs of Task 2 to estimate the cost of the microchannel distillation unit, which was compared to that of a conventional distillation column. The best results were for a methanol-water separation unit for use in a biodiesel facility. For this application, microchannel distillation was found to be more cost effective than the conventional system and capable of meeting the DOE Grand Challenge performance requirements.

  8. Mesoscale simulations of atmospheric flow and tracer transport in Phoenix, Arizona

    NASA Astrophysics Data System (ADS)

    Wang, Ge; Ostoja-Starzewski, Martin

    2006-09-01

    Large urban centres located within confining rugged or complex terrain can frequently experience episodes of high concentrations of lower-atmospheric pollution. Metropolitan Phoenix, Arizona (United States), is a good example, as the general population is occasionally subjected to high levels of lower-atmospheric ozone, carbon monoxide, and suspended particulate matter. As a result of a dramatic and continuing increase in population, the accompanying environmental stresses, and the local atmospheric circulation that dominates the background flow, accurate simulation of mesoscale pollutant transport across Phoenix and similar urban areas is becoming increasingly important. This is particularly the case in an airshed, such as that of Phoenix, where the local atmospheric circulation is complicated by the complex terrain of the area.

  9. Receptor model source attributions for Utah’s Salt Lake City airshed and the impacts of wintertime secondary ammonium nitrate and ammonium chloride aerosol.

    EPA Science Inventory

    Communities along Utah’s Wasatch Front are currently developing strategies to reduce daily average PM2.5 levels to below National Ambient Air Quality Standards during wintertime, persistent, multi-day stable atmospheric conditions or cold-air pools. Speciated PM2.5 data from the ...

  10. Developing Oxidized Nitrogen Atmospheric Deposition Source Attribution from CMAQ for Air-Water Trading for Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Dennis, R. L.; Napelenok, S. L.; Linker, L. C.; Dudek, M.

    2012-12-01

    Estuaries are adversely impacted by excess reactive nitrogen, Nr, from many point and nonpoint sources, including atmospheric deposition to the watershed and the estuary itself as a nonpoint source. For effective mitigation, trading among sources of Nr is being considered. The Chesapeake Bay Program is working to bring air into its trading scheme, which requires some special air computations. Airsheds are much larger than watersheds; thus, widespread or national emissions controls are put in place to achieve major reductions in atmospheric Nr deposition. The tributary nitrogen load reductions allocated to the states to meet the TMDL target for Chesapeake Bay are large and not easy to attain via controls on water point and nonpoint sources. It would help the TMDL process to take advantage of air emissions reductions that would occur with State Implementation Plans that go beyond the national air rules put in place to help meet national ambient air quality standards. There are still incremental benefits from these local or state-level controls on atmospheric emissions. The additional air deposition reductions could then be used to offset water quality controls (air-water trading). What is needed is a source-to-receptor transfer function that connects air emissions from a state to deposition to a tributary. There is a special source attribution version of the Community Multiscale Air Quality model, CMAQ, (termed DDM-3D) that can estimate the fraction of deposition contributed by labeled emissions (labeled by source or region) to the total deposition across space. We use the CMAQ DDM-3D to estimate simplified state-level delta-emissions to delta-atmospheric-deposition transfer coefficients for each major emission source sector within a state, since local air regulations are promulgated at the state level. The CMAQ 4.7.1 calculations are performed at a 12 km grid size over the airshed domain covering Chesapeake Bay for 2020 CAIR emissions. 
For results, we first present
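    The transfer-coefficient idea in this abstract, a linearized mapping from a change in a state's sector emissions to a change in deposition at a tributary, can be sketched as below. The coefficient values, state and sector names, and the strict linearity are illustrative assumptions; actual coefficients would come from CMAQ DDM-3D sensitivity runs.

    ```python
    # Hypothetical transfer coefficients: kg N deposited to a tributary per
    # ton of NOx emissions reduced, keyed by (state, source sector).
    # The numbers are made up for illustration.
    TRANSFER = {
        ("PA", "EGU"): 0.8,
        ("PA", "mobile"): 0.5,
        ("VA", "EGU"): 1.2,
    }

    def deposition_reduction(emission_cuts):
        """Sum coefficient * delta-emissions over all (state, sector) pairs."""
        return sum(TRANSFER[key] * cut for key, cut in emission_cuts.items())

    # Example: cut 1000 tons of NOx from PA power plants and 500 from VA's.
    cuts = {("PA", "EGU"): 1000.0, ("VA", "EGU"): 500.0}
    print(deposition_reduction(cuts))  # 0.8*1000 + 1.2*500 = 1400.0 kg N
    ```

    The deposition credit computed this way is what an air-water trading scheme would offset against water-side nutrient controls, under the assumption that deposition responds linearly to small emissions changes.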

  11. Acute and Chronic Changes in the Subglottis Induced by Graded CO2 Laser Injury in the Rabbit Airway*

    PubMed Central

    Otteson, Todd D.; Sandulache, Vlad C.; Barsic, Mark; DiSilvio, Gregory M.; Hebda, Patricia A.; Dohar, Joseph E.

    2010-01-01

    Objective To investigate the repair process following CO2 laser injury to the upper airway mucosa (UAM) during the development of chronic subglottic stenosis (SGS). Design Animals were assigned to either sham control (cricothyroidotomy only) or injured (cricothyroidotomy and posterior subglottic laser) groups using various CO2 laser exposures (8 W, 12 W, 16 W) for 4 seconds. Subjects 24 New Zealand white rabbits. Interventions The subglottis was approached via cricothyroidotomy. Sham control airways were immediately closed, while injured airways were subjected to graded CO2 laser exposures prior to closure. Airways were endoscopically monitored preoperatively, postoperatively, and on postoperative days 7, 14, 28, 42, 56, 70, and 84. Animals were sacrificed at 14 and 84 days. Subglottic tissue was harvested for histological evaluation (re-epithelialization, extracellular matrix, vascularity, and inflammation). Results 1) Increases in UAM thickness of up to five times that of normal mucosa were observed but were limited primarily to the lamina propria. The mucosal epithelium regenerated without chronic changes. Focal areas of cartilage repair were encountered acutely post-injury and to a greater extent in the chronic phases of repair. 2) Acutely, the thickened lamina propria was comprised of poorly organized extracellular matrix components and demonstrated increases in blood vessel size and number. 3) Histological changes present in the acute phase only partially resolved in the progression to chronic SGS. Chronic SGS was characterized by thick collagen fiber bundles extending into the remodeled subglottic cartilage. Conclusions The CO2 laser induces acute changes to lamina propria architecture and vascularity that persist chronically. Elucidating the responsible signaling pathways may facilitate the development of therapeutic agents to prevent or reduce the formation of SGS. PMID:18645117

  12. Defining new aims for BME programs in Latin America: the case of UAM-Iztapalapa.

    PubMed

    Azpiroz-Leehan, J; Martinez, L F; Urbina, M E G; Cadena, M M; Sacristan, E

    2016-08-01

    The need for upkeep and management of medical technology has fostered the creation of a large number of undergraduate programs in the field of biomedical engineering. In Latin America alone, there are over 85 programs dedicated to this. This contrasts with programs in other regions, where most undergraduates continue on to pursue graduate degrees or work as research and development engineers in the biomedical industry. In this work we analyze the situation regarding curricular design in the 48 BME programs in Mexico and compare this to suggestions and classifications of programs according to needs and possibilities. We then focus on a particular institution, the Universidad Autónoma Metropolitana, and, given its characteristics and performance, we propose that it redefine its aims from the undergraduate program onward, in order not only to generate research but also to provide a nurturing environment for a budding biomedical industry in Mexico.

  13. Operational Solution to the Nonlinear Klein-Gordon Equation

    NASA Astrophysics Data System (ADS)

    Bengochea, G.; Verde-Star, L.; Ortigueira, M.

    2018-05-01

    We obtain solutions of the nonlinear Klein-Gordon equation using a novel operational method combined with the Adomian polynomial expansion of nonlinear functions. Our operational method does not use any integral transforms or integration processes. We illustrate the application of our method by solving several examples and present numerical results that show the accuracy of the truncated series approximations to the solutions. Supported by Grant SEP-CONACYT 220603; the first author was supported by SEP-PRODEP through the project UAM-PTC-630; the third author was supported by Portuguese National Funds through the FCT (Foundation for Science and Technology) under the project PEst-UID/EEA/00066/2013.

  14. Naphthalene and Naphthoquinone: Distributions and Human Exposure in the Los Angeles Basin

    NASA Astrophysics Data System (ADS)

    Lu, R.; Wu, J.; Turco, R.; Winer, A. M.; Atkinson, R.; Paulson, S.; Arey, J.; Lurmann, F.

    2003-12-01

    Naphthalene is the simplest and most abundant of the polycyclic aromatic hydrocarbons (PAHs). It is found primarily in the gas phase and has been detected in both outdoor and indoor samples. Evaporation from naphthalene-containing products (including gasoline) and releases during refining operations are important sources of naphthalene in air. Naphthalene is also emitted during the combustion of fossil fuels and wood and is a component of vehicle exhaust. Exposure to high concentrations of naphthalene can damage or destroy red blood cells, causing hemolytic anemia. If inhaled over a long period of time, naphthalene may cause kidney and liver damage, skin allergy and dermatitis, and cataracts and retinal damage, as well as attack the central nervous system. Naphthalene has been found to cause cancer in animal inhalation tests. Naphthoquinones are photooxidation products of naphthalene, and the potential health effects of exposure to these quinones are a current focus of research. We are developing and applying models that can be used to assess human exposure to naphthalene and its photooxidation products in major air basins such as the California South Coast Air Basin (SoCAB). The work utilizes the Surface Meteorology and Ozone Generation (SMOG) airshed model and the REgional Human EXposure (REHEX) model, including an analysis of individual exposure. We will present and discuss simulations of basin-wide distributions of, and human exposures to, naphthalene and naphthoquinone, with emphasis on the uncertainties in these estimates of atmospheric concentrations and human exposure. Regional modeling of pollutant sources and exposures can lead to cost-effective and optimally health-protective emission control strategies.

  15. Anthropogenic Sources of Arsenic and Copper to Sediments of a Suburban Lake, 1964-1998

    NASA Astrophysics Data System (ADS)

    Rice, K. C.; Conko, K. M.; Hornberger, G. M.

    2002-05-01

    Nonpoint-source pollution from urbanization is becoming a widespread problem. Long-term monitoring data are necessary to document geochemical processes in urban settings and changes in sources of chemical contaminants over time. In the absence of long-term data, lake-sediment cores can be used to reconstruct past processes, because they serve as integrators of sources of pollutants from the contributing airshed and catchment. Lake Anne is a 10.9-ha man-made lake in a 235-ha suburban catchment in Reston, Virginia, with a population density of 1,116 people/km2. Three sediment cores, collected in 1996 and 1997, indicate increasing concentrations of arsenic and copper since 1964, when the lake was formed. The cores were compared to a core collected from a forested catchment in the same airshed that showed no increases in concentrations of these elements. Neither an increase in atmospheric deposition nor diagenesis and remobilization were responsible for the trends in the Lake Anne cores. Mass balances of sediment, arsenic, and copper were calculated using 1998 data on precipitation, streamwater, road runoff, and a laboratory leaching experiment on pressure-treated lumber. Sources of arsenic to the lake in 1998 were in-lake leaching of pressure-treated lumber (52%) and streamwater (47%). Road runoff was a greater (93%) source of copper than leaching of pressure-treated lumber (4%). Atmospheric deposition was an insignificant source (<3%) of both elements. Urbanization of the catchment was confirmed as a major cause of the increasing arsenic and copper in the lake cores through an annual historical reconstruction of the deposition of sediment, arsenic, and copper to the lake for 1964-1997. Aerial photography indicated that the area of roads and parking lots in the catchment increased to 26% by 1997 and that the number of docks on the lake also increased over time. The increased mass of arsenic and copper in the lake sediments corresponded to the increased amount of

  16. Chemical composition and source apportionment of size fractionated particulate matter in Cleveland, Ohio, USA.

    PubMed

    Kim, Yong Ho; Krantz, Q Todd; McGee, John; Kovalcik, Kasey D; Duvall, Rachelle M; Willis, Robert D; Kamal, Ali S; Landis, Matthew S; Norris, Gary A; Gilmour, M Ian

    2016-11-01

    The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers, and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ∼2, ∼7, and ∼3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO3− levels in winter and high SO42− levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., steel plant and coal-fired power plant), but their contributions were influenced by meteorological conditions and the emission sources' operating conditions. Taken together, the year-long PM collection and data analysis provide valuable insights into the characteristics and sources of PM impacting the Cleveland airshed in both the urban center and the rural upwind background locations. These data will be used to classify the PM samples for toxicology studies to determine which PM sources, species, and size fractions are

  17. The IAEA coordinated research programme on HTGR uncertainty analysis: Phase I status and Ex. I-1 prismatic reference results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik

    The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted but has in many cases become the preferred replacement for traditional conservative analysis in safety and licensing work. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and the robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach in which sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double heterogeneous fuel design and the large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase, and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of SCALE results, the KENO-VI 238-group Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the

  18. The IAEA coordinated research programme on HTGR uncertainty analysis: Phase I status and Ex. I-1 prismatic reference results

    DOE PAGES

    Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik; ...

    2016-01-11

    The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted but has in many cases become the preferred replacement for traditional conservative analysis in safety and licensing work. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and the robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach in which sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double heterogeneous fuel design and the large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase, and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of SCALE results, the KENO-VI 238-group Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the

  19. KSC-2012-3009

    NASA Image and Video Library

    2012-05-23

    CAPE CANAVERAL, Fla. – At NASA’s Kennedy Space Center Visitor Complex in Florida, Lunabotics UAM Team students from the Universidad Autonoma Metropolitano in Mexico transport their lunabot to the Lunarena during NASA’s Lunabotics Mining Competition. The mining competition is sponsored by NASA Kennedy Space Center’s Education Office for the agency’s Exploration Systems Mission Directorate. Undergraduate and graduate students from more than 50 universities and colleges in the U.S. and other countries use their remote-controlled Lunabots to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to lunar soil. For more information, visit www.nasa.gov/lunabotics. Photo credit: NASA/Frankie Martin

  20. Peds PLACE: Quality Continuing Medical Education in Arkansas

    PubMed Central

    González-Espada, Wilson J.; Hall-Barrow, Julie; Hall, R. Whit; Burke, Bryan L.; Smith, Christopher E.

    2017-01-01

    The University of Arkansas for Medical Sciences (UAMS) and Arkansas Children’s Hospital (ACH) sponsor Peds PLACE (Pediatric Physician Learning and Collaborative Education), a telemedicine continuing education program. This study assessed to what extent participants were satisfied with Peds PLACE and how to improve it. It was found that 95% of the participants agreed that the presentations related to their professional needs and 98% that it increased their knowledge. In addition, 81% evaluated the presentations as some of the best they have attended and 93% agreed that the information would translate into professional practice and enhance patient care. Comments were positive and correlated with the survey data. Participants recommended several ways to improve Peds PLACE. PMID:19385271

  1. Adapting industry-style business model to academia in a system of Performance-based Incentive Compensation.

    PubMed

    Reece, E Albert; Nugent, Olan; Wheeler, Richard P; Smith, Charles W; Hough, Aubrey J; Winter, Charles

    2008-01-01

    Performance-Based Incentive Compensation (PBIC) plans currently prevail throughout industry and have repeatedly demonstrated effectiveness as powerful motivational tools for attracting and retaining top talent, enhancing key indicators, increasing employee productivity, and, ultimately, enhancing mission-based parameters. The University of Arkansas for Medical Sciences (UAMS) College of Medicine introduced its PBIC plan to further the transition of the college to a high-performing academic and clinical enterprise. A forward-thinking compensation plan was progressively implemented during a three-year period. After the introduction of an aggressive five-year vision plan in 2002, the college introduced a PBIC plan designed to ensure the retention and recruitment of high-quality faculty through the use of uncapped salaries that reflect each faculty member's clinical, research, and education duties. The PBIC plan was introduced with broad, schoolwide principles adaptable to each department and purposely flexible to allow for tailor-made algorithms to fit the specific approaches required by individual departments. As of July 2006, the college had begun to reap a variety of short-term benefits from Phase I of its PBIC program, including increases in revenue and faculty salaries, and increased faculty morale and satisfaction. Successful implementation of a PBIC plan depends on a host of factors, including the development of a process for evaluating performance that is considered fair and reliable by the entire faculty. The college has become more efficient and effective by adopting such a program, which has helped it to increase overall productivity. The PBIC program continues to challenge our faculty members to attain their highest potential while rewarding them accordingly.

  2. Solar cycle variation of Mars exospheric temperatures: Critical review of available dayside measurements and recent model simulations

    NASA Astrophysics Data System (ADS)

    Bougher, Stephen; Huestis, David

    The responses of the Martian dayside thermosphere to solar flux variations (on both solar rotation and solar cycle timescales) have been the subject of considerable debate and study for many years. Available datasets include: Mariner 6, 7, 9 (UVS dayglow), Viking Lander 1-2 (UAMS densities upon descent), several aerobraking campaigns (MGS, Odyssey, MRO densities), and Mars Express (SPICAM dayglow). Radio Science derived plasma scale heights near the ionospheric peak can be used to derive neutral temperatures in this region (only); such values are not applicable to exobase heights (e.g. Forbes et al. 2008; Bougher et al. 2009). Recently, densities and temperatures derived from precise orbit determination of the MGS spacecraft (1999-2005) have been used to establish the responses of Mars' exosphere to long-term solar flux variations (Forbes et al., 2008). From this multi-year dataset, dayside exospheric temperatures weighted toward moderate southern latitudes are found to change by about 120 K over the solar cycle. However, the applicability of these drag-derived exospheric temperatures to near solar minimum conditions is suspect (e.g. Bruinsma and Lemoine, 2002). Finally, re-evaluation of production mechanisms for UV dayglow emissions implies revised values for exospheric temperatures (e.g. Simon et al., 2009; Huestis et al., 2010). Several processes are known to influence Mars' exospheric temperatures and their variability (Bougher et al., 1999; 2000; 2009). Solar EUV heating and its variations with solar fluxes received at Mars, CO2 15-micron cooling, molecular thermal conduction, and hydrodynamic heating/cooling associated with global dynamics all contribute to regulate dayside thermospheric temperatures. Poorly measured dayside atomic oxygen abundances render CO2 cooling rates uncertain at the present time. However, global thermospheric circulation models can be exercised for conditions spanning the solar cycle and Mars seasons to address the relative roles of

  3. The relationship between changes in daily air pollution and hospitalizations in Perth, Australia 1992-1998: a case-crossover study.

    PubMed

    Hinwood, A L; De Klerk, N; Rodriguez, C; Jacoby, P; Runnion, T; Rye, P; Landau, L; Murray, F; Feldwick, M; Spickett, J

    2006-02-01

    A case-crossover study was undertaken to investigate the relationship between daily air pollutant concentrations and daily hospitalizations for selected disease categories in Perth, Western Australia. Daily measurements of particles (measured by nephelometry and PM2.5), photochemical oxidants (measured as ozone), nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations were obtained from 1992 to 1998 via a metropolitan network of monitoring stations. Daily PM2.5 concentrations were estimated using monitored data, modelling and interpolation. Hospital morbidity data for respiratory diseases, cardiovascular diseases (CVD), gastrointestinal (GI) diseases, chronic obstructive pulmonary diseases (COPD) excluding asthma, pneumonia/influenza, and asthma were obtained and categorized into all ages, less than 15 years, and greater than 65 years. Gastrointestinal morbidity was used as a control disease. The data were analyzed using conditional logistic regression. The results showed a small number of significant associations between daily changes in particle, nitrogen dioxide and carbon monoxide concentrations and hospitalizations for respiratory diseases, COPD, pneumonia, asthma and CVD. Changes in ozone concentrations were not significantly associated with any disease outcomes. These data provide useful information on the potential health impacts of air pollution in an airshed with very low sulphur dioxide concentrations and nitrogen dioxide concentrations lower than those commonly found in many other cities.

  4. Partitioning phase preference for secondary organic aerosol in an urban atmosphere.

    PubMed

    Chang, Wayne L; Griffin, Robert J; Dabdub, Donald

    2010-04-13

    Secondary organic aerosol (SOA) comprises a significant portion of atmospheric particulate matter. The impact of particulate matter on both human health and global climate has long been recognized. Despite its importance, there are still many unanswered questions regarding the formation and evolution of SOA in the atmosphere. This study uses a modeling approach to understand the preferred partitioning behavior of SOA species into aqueous or organic condensed phases. More specifically, this work uses statistical analyses of approximately 24,000 data values for each variable from a state-of-the-art 3D airshed model. Spatial and temporal distributions of the fraction of SOA residing in the aqueous phase (fAQ) in the South Coast Air Basin of California are presented. Typical values of fAQ within the basin near the surface range from 5 to 80%. Results show that the likelihood of large fAQ values is inversely proportional to the total SOA loading. Analysis of various meteorological parameters indicates that large fAQ values are predicted because modeled aqueous-phase SOA formation is less sensitive than that of organic-phase SOA to atmospheric conditions that are not conducive to SOA formation. There is a diurnal variation of fAQ near the surface: It tends to be larger during daytime hours than during nighttime hours. Results also indicate that the largest fAQ values are simulated in layers above ground level at night. In summary, one must consider SOA in both organic and aqueous phases for proper regional and global SOA budget estimation.
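    The quantity fAQ discussed above is simply the share of total SOA mass residing in the aqueous condensed phase. A minimal sketch of the definition; the function name and the example concentrations are illustrative, not values from the study:

```python
def aqueous_fraction(soa_aq: float, soa_org: float) -> float:
    """Fraction of total SOA mass residing in the aqueous phase (fAQ).

    soa_aq, soa_org: SOA mass concentrations (e.g. micrograms per cubic
    metre) in the aqueous and organic condensed phases, respectively.
    """
    total = soa_aq + soa_org
    if total == 0.0:
        raise ValueError("no SOA present")
    return soa_aq / total

# Hypothetical concentrations, not values from the study:
f_aq = aqueous_fraction(soa_aq=1.2, soa_org=0.8)   # 0.6, i.e. 60% aqueous
```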

  5. Gas-Phase Formation Rates of Nitric Acid and Its Isomers Under Urban Conditions

    NASA Technical Reports Server (NTRS)

    Okumura, M.; Mollner, A. K.; Fry, J. L.; Feng, L.

    2005-01-01

    Ozone formation in urban smog is controlled by a complex set of reactions which includes radical production from photochemical processes, catalytic cycles which convert NO to NO2, and termination steps that tie up reactive intermediates in long-lived reservoirs. The reaction OH + NO2 + M → HONO2 + M (1a) is a key termination step because it transforms two short-lived reactive intermediates, OH and NO2, into relatively long-lived nitric acid. Under certain conditions (low VOC/NOx), ozone production in polluted urban airsheds can be highly sensitive to this reaction, but the rate parameters are not well constrained. This report summarizes the results of new laboratory studies of the OH + NO2 + M reaction, including direct determination of the overall rate constant and branching ratio for the two reaction channels under atmospherically relevant conditions.
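    Reaction (1a) is a termolecular association whose effective rate constant falls off between low- and high-pressure limits. A sketch of the standard JPL-style falloff expression follows; the parameter values are representative placeholders of roughly the magnitude recommended in kinetics evaluations, not the new measurements this report describes:

```python
import math

def falloff_rate(M, T, k0_300, n, kinf_300, m):
    """Effective bimolecular rate constant (cm^3 molecule^-1 s^-1) for a
    termolecular association reaction, JPL-style falloff expression."""
    k0 = k0_300 * (T / 300.0) ** (-n)       # low-pressure limit, cm^6 s^-1
    kinf = kinf_300 * (T / 300.0) ** (-m)   # high-pressure limit, cm^3 s^-1
    ratio = k0 * M / kinf
    broadening = 0.6 ** (1.0 / (1.0 + math.log10(ratio) ** 2))
    return (k0 * M / (1.0 + ratio)) * broadening

# Air number density near the surface (~298 K, 1 atm), molecule cm^-3:
M_air = 2.46e19
# Placeholder parameters of plausible magnitude for OH + NO2 + M:
k_eff = falloff_rate(M_air, 298.0, k0_300=1.8e-30, n=3.0,
                     kinf_300=2.8e-11, m=0.0)   # of order 1e-11 cm^3 s^-1
```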

  6. Department of the Navy Justification of Estimates for Fiscal Year 1988/ 1989 Submitted to Congress. Operation and Maintenance, Navy Reserve

    DTIC Science & Technology

    1987-01-01


  7. A Study on the Armillary Spheres of the Confucianists in Joseon Dynasty

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sam; Kim, Sang Hyuk; Lee, Min Soo; Jeong, Jang Hae

    2010-12-01

    The armillary sphere is generally known not only as an astronomical instrument for observing celestial phenomena but also as a symbol of royal authority and of a royal political ideology grounded in Confucianism. Several well-reputed Confucian scholars built their own armillary spheres. The armillary spheres that survive, however, are damaged, and most of their parts have been lost. We analyzed and measured the remnants of the armillary spheres made by Toegye Lee Hwang, Uam Song Si-Yeol, and Goedam Bae Sang-Yeol, well-reputed Confucian scholars of the Joseon Dynasty, and carried out restorations of Toegye Lee Hwang's and Uam Song Si-Yeol's armillary spheres based on drawings of the original form produced from the analysis and measurement of the remnants.

  8. POLAR ORGANIC COMPOUNDS IN FINE PARTICLES FROM THE NEW YORK, NEW JERSEY, AND CONNECTICUT REGIONAL AIRSHED

    EPA Science Inventory

    Five key scientific questions guiding this research were explored. They are given here with results generated from the project.
     
    B.1.        How can polar organic compounds be measured in atmospheric fine particulate matter? Is there potential a...

  9. Fate of Airborne Contaminants in Okefenokee National Wildlife Refuge

    USGS Publications Warehouse

    Winger, P.V.; Lasier, P.J.

    1997-01-01

    Designation of Okefenokee National Wildlife Refuge as a Class I Air Quality Area (given the highest level of protection possible from air pollutants under the Clean Air Act Amendments of 1977) affords mandatory protection of the Refuge's airshed through the permit-review process for planned developments. Rainfall is the major source of water to the swamp, and potential impacts from developments to the airshed are high. To meet management needs for baseline information, chemical contributions from atmospheric deposition and partitioning of anions and cations, with emphasis on mercury and lead, in the various matrices of the Swamp were determined between July 1993 and April 1995. Chemistry of rainfall was determined on an event basis from one site located at Refuge Headquarters. Field samples of surface water, pore water, floc and sediment were collected from four locations on the Refuge: Chesser Prairie, Chase Prairie, Durden Prairie, and the Narrows. A sediment core sample was collected from the Refuge interior at Bluff Lake for aging of mercury deposition. Rainfall was acidic (pH 4.8) with sulfate concentrations averaging 1.2 mg/L and nitrate averaging 0.8 mg/L. Lead in rainfall averaged 1 µg/L and total and methylmercury concentrations were 11.7 ng/L and 0.025 ng/L, respectively. The drought of 1993 followed by heavy rains during the fall and winter caused a temporary alteration in the cycling and availability of trace-elements within the different matrices of the Swamp. Surface water was acidic (pH 3.8 to 4.1), dilute (specific conductance 35-60 µS/cm), and highly organic (DOC 35-50 mg/L). Sediment and floc were also highly organic (>90%). Total mercury averaged 3.6 ng/L in surface water, 9.0 ng/L in pore water and about 170 ng/g in floc and sediments. Mercury bioaccumulated in the biota of the Refuge: fish fillets (Centrarchus macropterus, Esox niger, Lepomus gulosus and Amia calva) had >2 µg/g dry weight, alligators (Alligator mississippiensis) >4 µg/g dry

  10. Odzwierciedlenie zmian klimatycznych w przebiegu fenologicznych pór roku. w Poznaniu w latach 1958-2009

    NASA Astrophysics Data System (ADS)

    Górska-Zajączkowska, Maria; Wójtowicz, Wanda

    2011-01-01

    The term "phenology" is derived from the Greek word phainomenon. It combines indications of changes in climate, plants and animals. The interrelationship is close enough to determine phenological seasons on the basis of plant phases of development (foliage, flowering, seed dispersal, leaf shedding) of certain species of plants taken as indicator species. They act as "measurement devices" of sorts in a domain known as climate phenology. The Botanical Garden of Adam Mickiewicz University (UAM) in Poznań has been conducting phenological observations and phenological season analyses since 1958, following A. Łukasiewicz's slight 1967 revision of the selection of indicator species. Using the obtained data, the start dates and duration of subsequent phenological seasons have been determined. Phenological season analysis in successive years is performed by comparing meteorological data from the IMiGW (Institute of Meteorology and Water Economy) weather station at Poznań Ławica and, since 2006, data from the UAM Botanical Garden's own weather station. Among the five decades of observations, the last one is characterized by a distinctly higher average temperature. During this decade, the pre-spring vegetation period in plants started earlier than in the previous forty years. The onset of spring and early summer was also accelerated, whereas summer, autumn, and late autumn were delayed. The average duration of successive phenological seasons, compared to the data from 1958-1997, was as follows: pre-spring, early summer and all autumn seasons lasted longer; the average length of early spring and spring was the same; while summer was slightly shorter in this ten-year period. Winter, defined using meteorological criteria, was an unstable season often starting only in the new calendar year. The difference between the average duration of this season in the periods under consideration was very significant, with 89 days in the 40-year period and 67 days in the last

  11. Airport Activity Statistics of Certificated Route Air Carriers.

    DTIC Science & Technology

    1981-12-31


  12. AtlasCBS: a web server to map and explore chemico-biological space

    NASA Astrophysics Data System (ADS)

    Cortés-Cabrera, Álvaro; Morreale, Antonio; Gago, Federico; Abad-Zapatero, Celerino

    2012-09-01

    New approaches are needed that can help decrease the unsustainable failure in small-molecule drug discovery. Ligand Efficiency Indices (LEI) are making a great impact on early-stage compound selection and prioritization. Given a target-ligand database with chemical structures and associated biological affinities/activities for a target, the AtlasCBS server generates two-dimensional, dynamical representations of its contents in terms of LEI. These variables allow an effective decoupling of the chemical (angular) and biological (radial) components. BindingDB, PDBBind and ChEMBL databases are currently implemented. Proprietary datasets can also be uploaded and compared. The utility of this atlas-like representation in the future of drug design is highlighted with some examples. The web server can be accessed at http://ub.cbm.uam.es/atlascbs and https://www.ebi.ac.uk/chembl/atlascbs.
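    The LEI variables plotted by AtlasCBS can be computed per ligand from affinity, size, and polarity. A minimal sketch, assuming the commonly used definitions BEI = pKi per kDa of molecular weight and SEI = pKi per 100 Å² of polar surface area; the function names and the example ligand are hypothetical:

```python
def bei(p_affinity: float, mw_da: float) -> float:
    """Binding Efficiency Index: pKi (or pIC50) per kDa of molecular weight."""
    return p_affinity / (mw_da / 1000.0)

def sei(p_affinity: float, psa_a2: float) -> float:
    """Surface-binding Efficiency Index: pKi per 100 A^2 of polar surface area."""
    return p_affinity / (psa_a2 / 100.0)

# Hypothetical ligand: pKi = 8, MW = 400 Da, PSA = 80 A^2
b = bei(8.0, 400.0)   # BEI ~ 20
s = sei(8.0, 80.0)    # SEI ~ 10
```

    In an atlas-like plot of SEI versus BEI, the angle of a compound's position encodes its chemistry (polarity per unit size) while the radial distance encodes binding, which is the decoupling the abstract describes.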

  13. Physical Training Methods For Mine Rescuers In 2015

    NASA Astrophysics Data System (ADS)

    Marin, Laurentiu; Pavel, Topala; Marin, Catalina Daniela; Sandu, Teodor

    2015-07-01

    The research and development activities presented here aimed at obtaining a nanocomposite polyurethane matrix with special anti-wear, anti-slip and fire-resistant properties. The work comprised obtaining the polyurethane nanocomposite matrix, modifying it physico-chemically to impart the desired technological properties, and characterizing the resulting material. The polyurethane nanocomposite matrix was obtained by reacting a PETOL 3 type polyetherpolyol (having a molecular weight of 5000 UAM) with a diisocyanate under well-established reaction conditions. The targeted technological properties were obtained by physical and chemical modification of the polyurethane nanocomposite matrix. The final result was a film material based on modified nanocomposite polyurethane, with anti-wear, anti-slip and fire-resistant properties, compatible with most substrates encountered in civil and industrial construction: wood, concrete, metal.

  14. [Investigation of bacterial diversity in the biological desulfurization reactor for treating high salinity wastewater by the 16S rDNA cloning method].

    PubMed

    Liu, Wei-Guo; Liang, Cun-Zhen; Yang, Jin-Sheng; Wang, Gui-Ping; Liu, Miao-Miao

    2013-02-01

    The bacterial diversity in a biological desulfurization reactor operated continuously for 1 year was studied by the 16S rDNA cloning and sequencing method. Forty clones were randomly selected and their partial 16S rDNA genes (ca. 1,400 bp) were sequenced and blasted. The results indicated that there were dominant bacteria in the biological desulfurization reactor: 33 clones belonged to 3 different published phyla, while 1 clone belonged to an unknown phylum. The dominant bacterial community in the system was Proteobacteria, which accounted for 85.3%. The bacterial community composition was as follows: gamma-Proteobacteria (55.9%), beta-Proteobacteria (17.6%), Actinobacteridae (8.8%), delta-Proteobacteria (5.9%), alpha-Proteobacteria (5.9%), and Sphingobacteria (2.9%). Halothiobacillus sp. ST15 and Thiobacillus sp. UAM-I were the major desulfurization strains.

  15. Fabrication of (U,Am)O2 pellet with controlled porosity from oxide microspheres

    NASA Astrophysics Data System (ADS)

    Ramond, Laure; Coste, Philippe; Picart, Sébastien; Gauthé, Aurélie; Bataillea, Marc

    2017-08-01

    U1-xAmxO2±δ mixed oxides are considered promising compounds for americium heterogeneous transmutation in sodium-cooled fast neutron reactors. A porous microstructure is envisaged in order to facilitate helium and fission gas release and to reduce pellet swelling during irradiation and under self-irradiation. In this study, the porosity is created by reducing (U,Am)3O8 microspheres into (U,Am)O2 during sintering. This reduction is accompanied by a decrease of the lattice volume that leads to the creation of open porosity. Finally, a (U0.90Am0.10)O2 porous ceramic pellet (D∼89% of the theoretical density TD) with controlled porosity (≥8% open porosity) was obtained from mixed-oxide microspheres produced by the Weak Acid Resin (WAR) process.

  16. AtlasCBS: a web server to map and explore chemico-biological space.

    PubMed

    Cortés-Cabrera, Alvaro; Morreale, Antonio; Gago, Federico; Abad-Zapatero, Celerino

    2012-09-01

    New approaches are needed that can help decrease the unsustainable failure in small-molecule drug discovery. Ligand Efficiency Indices (LEI) are making a great impact on early-stage compound selection and prioritization. Given a target-ligand database with chemical structures and associated biological affinities/activities for a target, the AtlasCBS server generates two-dimensional, dynamical representations of its contents in terms of LEI. These variables allow an effective decoupling of the chemical (angular) and biological (radial) components. BindingDB, PDBBind and ChEMBL databases are currently implemented. Proprietary datasets can also be uploaded and compared. The utility of this atlas-like representation in the future of drug design is highlighted with some examples. The web server can be accessed at http://ub.cbm.uam.es/atlascbs and https://www.ebi.ac.uk/chembl/atlascbs.

  17. PREFACE: XXIX International Conference on Photonic, Electronic, and Atomic Collisions (ICPEAC2015)

    NASA Astrophysics Data System (ADS)

    Díaz, C.; Rabadán, I.; García, G.; Méndez, L.; Martín, F.

    2015-09-01

    The 29th International Conference on Photonic, Electronic and Atomic Collisions (XXIX ICPEAC) was held at the Palacio de Congresos ''El Greco'', Toledo, Spain, on 22-28 July, 2015, and was organized by the Universidad Autónoma de Madrid (UAM) and the Consejo Superior de Investigaciones Científicas (CSIC). ICPEAC is held biennially and is one of the most important international conferences on atomic and molecular physics. The topic of the conference covers recent progress in photonic, electronic, and atomic collisions with matter. With a history dating back to 1958, ICPEAC came to Spain in 2015 for the very first time. UAM and CSIC had been preparing the conference for six years, ever since the ICPEAC International General Committee made the decision to hold the XXIX ICPEAC in Toledo. The conference gathered 670 participants from 52 countries and attracted 854 contributed papers for presentation in poster sessions. Among the latter, 754 are presented in issues 2-12 of this volume of the Journal of Physics Conference Series. In addition, five plenary lectures, including the opening one by the Nobel laureate Prof. Ahmed H. Zewail and the lectures by Prof. Maciej Lewenstein, Prof. Paul Scheier, Prof. Philip H. Bucksbaum, and Prof. Stephen J. Buckman, 62 progress reports and 26 special reports were presented following the decision of the ICPEAC International General Committee. Detailed write-ups of most of the latter are presented in issue 1 of this volume, constituting a comprehensive tangible record of the meeting. On the occasion of the International Year of Light (IYL2015) and with the support of the Fundación Española para la Ciencia y la Tecnología (FECYT), the program was completed with two public lectures delivered by the Nobel laureate Prof. Serge Haroche and the Príncipe de Asturias laureate Prof. Pedro M. Echenique on, respectively, ''Fifty years of laser revolutions in physics'' and ''The sublime usefulness of useless science''. Also a

  18. Vitamin E Phosphate Coating Stimulates Bone Deposition in Implant-related Infections in a Rat Model.

    PubMed

    Lovati, Arianna B; Bottagisio, Marta; Maraldi, Susanna; Violatto, Martina B; Bortolin, Monica; De Vecchi, Elena; Bigini, Paolo; Drago, Lorenzo; Romanò, Carlo L

    2018-06-01

    Implant-related infections are associated with impaired bone healing and osseointegration. In vitro antiadhesive and antibacterial properties and in vivo antiinflammatory effects protecting against bone loss of various formulations of vitamin E have been demonstrated in animal models. However, to the best of our knowledge, no in vivo studies have demonstrated the synergistic activity of vitamin E in preventing bacterial adhesion to orthopaedic implants, thus supporting the bone-implant integration. The purpose of this study was to test whether a vitamin E phosphate coating on titanium implants may be able to reduce (1) the bacterial colonization of prosthetic implants and (2) bone resorption and osteomyelitis in a rat model of Staphylococcus aureus-induced implant-related infection. Twelve rats were bilaterally injected in the femurs with S aureus UAMS-1-Xen40 and implanted with uncoated or vitamin E phosphate-coated titanium Kirschner wires without local or systemic antibiotic prophylaxis. Eight rats represented the uninfected control group. A few hours after surgery, two control and three infected animals died as a result of unexpected complications. With the remaining rats, we assessed the presence of bacterial contamination with qualitative bioluminescence imaging and Gram-positive staining and with quantitative bacterial count. Bone changes in terms of resorption and osteomyelitis were quantitatively analyzed through micro-CT (bone mineral density) and semiquantitatively through histologic scoring systems. 
Six weeks after implantation, we found only a mild decrease in bacterial count in coated versus uncoated implants (Ti versus controls: mean difference [MD], -3.705; 95% confidence interval [CI], -4.416 to -2.994; p < 0.001; TiVE versus controls: MD, -3.063; 95% CI, -3.672 to -2.454; p < 0.001), whereas micro-CT analysis showed a higher bone mineral density at the knee and femoral metaphysis in the vitamin E-treated group compared with uncoated implants (knee

  19. SURVEY OF VOLATILE ORGANIC COMPOUNDS ASSOCIATED WITH AUTOMOTIVE EMISSIONS IN THE URBAN AIRSHED OF SAO PAULO, BRAZIL

    EPA Science Inventory

    The Metropolitan Region of Sao Paulo (MRSP), Brazil, is one of the largest metropolitan areas in the world (population 17 million, approx.) and relies heavily on alcohol-based fuels for automobiles. It is estimated that about 40% of the total volume of fuel is ethanol with som...

  20. Atmospheric production of oxalic acid/oxalate and nitric acid/nitrate in the Tampa Bay airshed: Parallel pathways

    NASA Astrophysics Data System (ADS)

    Martinelango, P. Kalyani; Dasgupta, Purnendu K.; Al-Horr, Rida S.

    Oxalic acid is the dominant dicarboxylic acid (DCA), and it constitutes up to 50% of total atmospheric DCAs, especially in non-urban and marine atmospheres. A significant amount of particulate H2Ox/oxalate (Ox) occurred in the coarse particle fraction of a dichotomous sampler; the ratio of oxalate concentrations in the PM10 to PM2.5 fractions ranged from 1 to 2, with mean±sd being 1.4±0.2. These results suggest that oxalate does not solely originate in the gas phase and condense into particles. Gaseous H2Ox concentrations are much lower than particulate Ox concentrations and are well correlated with HNO3, HCHO, and O3, supporting a photochemical origin. Of special relevance to the Bay Region Atmospheric Chemistry Experiment (BRACE) is the extent of nitrogen deposition in the Tampa Bay estuary. The hydroxyl radical is primarily responsible for the conversion of NO2 to HNO3, the latter being much more easily deposited. The hydroxyl radical is also responsible for the aqueous-phase formation of oxalic acid from alkenes. Hence, we propose that an estimate of ·OH can be obtained from the H2Ox/Ox production rate, and we accordingly show that the product of total oxalate concentration and NO2 concentration approximately predicts the total nitrate concentration during the same period.
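    The closing proposal, that the product of total oxalate and NO2 concentrations approximately predicts total nitrate, amounts to a one-parameter proxy. A sketch of how such a scaling constant could be fit, using synthetic placeholder data rather than BRACE measurements:

```python
# Sketch of the proposed proxy: nitrate ~ c * (total oxalate x NO2).
# The data below are synthetic placeholders, not BRACE measurements.
oxalate = [0.10, 0.25, 0.40, 0.60]   # total oxalate, ug m^-3
no2     = [8.0, 12.0, 6.0, 15.0]     # NO2, ppb
nitrate = [0.45, 1.62, 1.30, 4.90]   # total nitrate, ug m^-3

product = [ox * n for ox, n in zip(oxalate, no2)]
# Least-squares slope through the origin: c = sum(x*y) / sum(x^2)
c = sum(x * y for x, y in zip(product, nitrate)) / sum(x * x for x in product)
predicted = [c * x for x in product]
```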

  1. Recirculation, stagnation and ventilation: The 2014 legionella episode

    NASA Astrophysics Data System (ADS)

    Russo, Ana; Soares, Pedro M. M.; Gouveia, Célia M.; Cardoso, Rita M.; Trigo, Ricardo M.

    2017-04-01

    Legionella transmission through the atmosphere is unusual, but not unprecedented. A scientific paper published in 2006 reports an outbreak in Pas-de-Calais, France, in which 86 people were infected by bacteria released by a cooling tower more than 6 km away [3]. Similarly, in Norway, in 2005, there was another case where contamination spread beyond 10 km, although it was more concentrated within a radius of 1 km from an industrial unit [2]. An unprecedentedly large Legionella outbreak occurred in November 2014 near Lisbon, Portugal. As of 7 November 2014, 375 individuals had become ill and 12 had died, infected by the Legionella pneumophila bacterium, contracted by inhalation of droplets of contaminated water (aerosols). These droplets are so small that they can carry the bacteria directly to the lungs, depositing them in the alveoli. One way of studying the propagation of legionella episodes is through the use of aerosol dispersion models. However, such approaches often require detailed 3D high-resolution wind data over the region, which is often unavailable for long periods. The likely impact of wind on legionella transmission can also be understood through the analysis of special types of flow conditions such as stagnation, recirculation and ventilation [1, 4]. The Allwine and Whiteman (AW) approach constitutes a straightforward method to assess the assimilative and dispersal capacities of different airsheds [1, 4], as it only requires hourly wind components. Thus, it has the advantage of not needing surface and upper-air meteorological observations or prior knowledge of the atmospheric transport and dispersion conditions. The objective of this study is to analyze whether the legionella outbreak event that took place in November 2014 had extreme potential recirculation and/or stagnation characteristics. In order to accomplish the proposed objective, the AW approach was applied to a hindcast time series covering the affected area (1989-2007) and then to an independent
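    The AW integral quantities can be computed from hourly wind components alone. A sketch under the usual definitions (scalar wind run S, resultant transport distance L, recirculation factor R = 1 − L/S); the thresholds used to classify stagnation, recirculation, and ventilation are site-specific and not shown:

```python
import math

def aw_indices(u, v, dt_s=3600.0):
    """Allwine-Whiteman integral quantities from hourly wind components.

    u, v  : hourly wind components (m s^-1) over the averaging window
    dt_s  : time step in seconds (hourly data by default)

    Returns (S, L, R): scalar wind run and resultant transport distance in
    metres, and recirculation factor R = 1 - L/S (0 = straight-line flow,
    approaching 1 = closed recirculation).
    """
    S = sum(math.hypot(ui, vi) for ui, vi in zip(u, v)) * dt_s  # wind run
    L = math.hypot(sum(u) * dt_s, sum(v) * dt_s)                # net transport
    R = 1.0 - L / S if S > 0 else 0.0
    return S, L, R

# A flow that reverses halfway gives strong recirculation:
u = [3.0] * 12 + [-3.0] * 12   # eastward 12 h, then westward 12 h
v = [0.0] * 24
S, L, R = aw_indices(u, v)     # net transport L is zero, so R = 1
```

    Stagnation is then flagged when S falls below a transport threshold, and recirculation when R exceeds a critical value, both tuned to the airshed under study.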

  2. The Universal Anaesthesia Machine (UAM): assessment of a new anaesthesia workstation built with global health in mind.

    PubMed

    de Beer, D A H; Nesbitt, F D; Bell, G T; Rapuleng, A

    2017-04-01

    The Universal Anaesthesia Machine has been developed as a complete anaesthesia workstation for use in low- and middle-income countries, where the provision of safe general anaesthesia is often compromised by unreliable supply of electricity and anaesthetic gases. We performed a functional and clinical assessment of this anaesthetic machine, with particular reference to novel features and functioning in the intended environment. The Universal Anaesthesia Machine was found to be reliable, safe and consistent across a range of tests during targeted functional testing. © 2016 The Association of Anaesthetists of Great Britain and Ireland.

  3. EIR: enterprise imaging repository, an alternative imaging archiving and communication system.

    PubMed

    Bian, Jiang; Topaloglu, Umit; Lane, Cheryl

    2009-01-01

    The enormous number of studies performed at the Nuclear Medicine Department of the University of Arkansas for Medical Sciences (UAMS) generates a huge volume of PET/CT images daily. A DICOM workstation had been used as a "mini-PACS" to route all studies, an arrangement that historically proved slow for various reasons. However, replacing the workstation with a commercial PACS server is not only cost-inefficient; more often, PACS vendors are reluctant to take responsibility for the final integration of these components. Therefore, in this paper, we propose an alternative imaging archiving and communication system called the Enterprise Imaging Repository (EIR). EIR consists of two distinct components: an image processing daemon and a user-friendly web interface. EIR not only reduces the overall waiting time for transferring a study from the modalities to radiologists' workstations, but also provides a preferable presentation.

  4. The prevalence of fluorosis in children is associated with naturally occurring water fluoride concentration in Mexico.

    PubMed

    Mariño, Rodrigo

    2013-09-01

    Fluorosis and dental caries in Mexican schoolchildren residing in areas with different water fluoride concentrations and receiving fluoridated salt. Garcia-Perez A, Irigoyen-Camacho ME, Borges-Yanez A. Caries Res 2013;47(4):299-308. Reviewer: Rodrigo Mariño. Questions addressed: Is there an association between the presence of dental fluorosis and fluoride concentration in drinking water? And is there an association between the severity of fluorosis and dental caries experience in schoolchildren residing in two rural towns in Mexico (with water fluoride concentrations of 0.70 and 1.50 ppm) that also receive fluoridated salt? Funding: Government: National Council of Science and Technology (Consejo Nacional de Ciencia y Tecnologia, CONACYT); Other: Autonomous Metropolitan University, Xochimilco (Universidad Autonoma Metropolitana, UAM-X). TYPE OF STUDY/DESIGN: Cross-sectional. Level 3: Other evidence. Not applicable. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. The MUMBA campaign: measurements of urban, marine and biogenic air

    NASA Astrophysics Data System (ADS)

    Paton-Walsh, Clare; Guérette, Élise-Andrée; Kubistin, Dagmar; Humphries, Ruhi; Wilson, Stephen R.; Dominick, Doreena; Galbally, Ian; Buchholz, Rebecca; Bhujel, Mahendra; Chambers, Scott; Cheng, Min; Cope, Martin; Davy, Perry; Emmerson, Kathryn; Griffith, David W. T.; Griffiths, Alan; Keywood, Melita; Lawson, Sarah; Molloy, Suzie; Rea, Géraldine; Selleck, Paul; Shi, Xue; Simmons, Jack; Velazco, Voltaire

    2017-06-01

    The Measurements of Urban, Marine and Biogenic Air (MUMBA) campaign took place in Wollongong, New South Wales (a small coastal city approximately 80 km south of Sydney, Australia) from 21 December 2012 to 15 February 2013. Like many Australian cities, Wollongong is surrounded by dense eucalyptus forest, so the urban airshed is heavily influenced by biogenic emissions. Instruments were deployed during MUMBA to measure the gaseous and aerosol composition of the atmosphere with the aim of providing a detailed characterisation of the complex environment of the ocean-forest-urban interface that could be used to test the skill of atmospheric models. The gases measured included ozone, oxides of nitrogen, carbon monoxide, carbon dioxide, methane and many of the most abundant volatile organic compounds. The aerosol characterisation included total particle counts above 3 nm, total cloud condensation nuclei counts, mass concentration, number concentration size distribution, aerosol chemical analyses and elemental analysis. The campaign captured varied meteorological conditions, including two extreme heat events, providing a potentially valuable test for models of future air quality in a warmer climate. There was also an episode when the site sampled clean marine air for many hours, providing a useful additional measure of the background concentrations of these trace gases within this poorly sampled region of the globe. In this paper we describe the campaign, the meteorology and the resulting observations of atmospheric composition in general terms in order to equip the reader with a sufficient understanding of the Wollongong regional influences to use the MUMBA datasets as a case study for testing a chemical transport model. The data are available from PANGAEA (http://doi.pangaea.de/10.1594/PANGAEA.871982).

  6. Partitioning phase preference for secondary organic aerosol in an urban atmosphere

    NASA Astrophysics Data System (ADS)

    Chang, Wayne Li-Wen

    Secondary organic aerosol (SOA) comprises a significant portion of atmospheric particulate matter (PM). The impact of PM on both human health and global climate has long been recognized. Despite its importance, there are still many unanswered questions regarding the formation and evolution of SOA in the atmosphere. This study uses a modeling approach to understand the preferred partitioning behavior of SOA species into aqueous or organic condensed phases. More specifically, this work uses statistical analyses of approximately 24,000 data values for each variable from a state-of-the-art 3-D airshed model. Spatial and temporal distributions of the fraction of SOA residing in the aqueous phase (fAQ) in the South Coast Air Basin of California are presented. Typical values of fAQ within the basin near the surface range from 5 to 80%. Results show that the distribution of fAQ values is inversely proportional to the total SOA loading. Further analysis accounting for various meteorological parameters indicates that large fAQ values are the result of the insensitivity of aqueous-phase SOA to ambient conditions; while organic-phase SOA concentrations are dramatically reduced under unfavorable SOA formation conditions, aqueous-phase SOA levels remain relatively unchanged, thus increasing fAQ at low SOA loading. Diurnal variations of fAQ near the surface are also observed: it tends to be larger during daytime hours than nighttime hours. When examining the vertical gradient of fAQ, the largest values are found at heights above the surface layer. In summary, one must consider SOA in both organic and aqueous phases for proper regional and global SOA budget estimation.
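
    The aqueous fraction analyzed above follows directly from the phase-resolved SOA concentrations. A minimal sketch (with purely illustrative numbers, not model output) of how fAQ rises at low loading when only the organic phase responds to unfavorable formation conditions:

```python
def f_aq(c_aq, c_org):
    """Fraction of total SOA mass residing in the aqueous phase,
    given aqueous- and organic-phase concentrations (e.g. ug/m3)."""
    total = c_aq + c_org
    return c_aq / total if total > 0 else 0.0

# Favorable formation conditions: organic-phase SOA dominates, fAQ is low
print(f_aq(0.5, 9.5))    # → 0.05 (high total loading, 5% aqueous)

# Unfavorable conditions: organic phase collapses, aqueous stays put
print(f_aq(0.5, 0.125))  # → 0.8 (low total loading, 80% aqueous)
```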

  7. Chlorine activation indoors and outdoors via surface-mediated reactions of nitrogen oxides with hydrogen chloride

    PubMed Central

    Raff, Jonathan D.; Njegic, Bosiljka; Chang, Wayne L.; Gordon, Mark S.; Dabdub, Donald; Gerber, R. Benny; Finlayson-Pitts, Barbara J.

    2009-01-01

    Gaseous HCl generated from a variety of sources is ubiquitous in both outdoor and indoor air. Oxides of nitrogen (NOy) are also globally distributed, because NO formed in combustion processes is oxidized to NO2, HNO3, N2O5 and a variety of other nitrogen oxides during transport. Deposition of HCl and NOy onto surfaces is commonly regarded as providing permanent removal mechanisms. However, we show here a new surface-mediated coupling of nitrogen oxide and halogen activation cycles in which uptake of gaseous NO2 or N2O5 on solid substrates generates adsorbed intermediates that react with HCl to generate gaseous nitrosyl chloride (ClNO) and nitryl chloride (ClNO2), respectively. These are potentially harmful gases that photolyze to form highly reactive chlorine atoms. The reactions are shown both experimentally and theoretically to be enhanced by water, a surprising result given the availability of competing hydrolysis reaction pathways. Airshed modeling incorporating HCl generated from sea salt shows that in coastal urban regions, this heterogeneous chemistry increases surface-level ozone, a criteria air pollutant, greenhouse gas and source of atmospheric oxidants. In addition, it may contribute to recently measured high levels of ClNO2 in the polluted coastal marine boundary layer. This work also suggests the potential for chlorine atom chemistry to occur indoors where significant concentrations of oxides of nitrogen and HCl coexist. PMID:19620710

  8. Naphthalene distributions and human exposure in Southern California

    NASA Astrophysics Data System (ADS)

    Lu, Rong; Wu, Jun; Turco, Richard P.; Winer, Arthur M.; Atkinson, Roger; Arey, Janet; Paulson, Suzanne E.; Lurmann, Fred W.; Miguel, Antonio H.; Eiguren-Fernandez, Arantzazu

    The regional distribution of, and human exposure to, naphthalene are investigated for Southern California. A comprehensive approach is taken in which advanced models are linked for the first time to quantify population exposure to the emissions of naphthalene throughout Southern California. Naphthalene is the simplest and most abundant of the polycyclic aromatic hydrocarbons found in polluted urban environments, and has been detected in both outdoor and indoor air samples. Exposure to high concentrations of naphthalene may have adverse health effects, possibly causing cancer in humans. Among the significant emission sources are volatilization from naphthalene-containing products, petroleum refining, and combustion of fossil fuels and wood. Gasoline and diesel engine exhaust, with related vaporization from fuels, are found to contribute roughly half of the daily total naphthalene burden in Southern California. As part of this study, the emission inventory for naphthalene has been verified against new field measurements of the naphthalene-to-benzene ratio in a busy traffic tunnel in Los Angeles, supporting the modeling work carried out here. The Surface Meteorology and Ozone Generation (SMOG) airshed model is used to compute the spatial and temporal distributions of naphthalene and its photooxidation products in Southern California. The present simulations reveal a high degree of spatial variability in the concentrations of naphthalene-related species, with large diurnal and seasonal variations as well. Peak naphthalene concentrations are estimated to occur in the early morning hours in the winter season. The naphthalene concentration estimates obtained from the SMOG model are employed in the Regional Human Exposure (REHEX) model to calculate population exposure statistics. Results show average hourly naphthalene exposures in Southern California under summer and winter conditions of 270 and 430 ng m-3, respectively. Exposure to significantly higher concentrations

  9. Medical Simulation as a Vital Adjunct to Identifying Clinical Life-Threatening Gaps in Austere Environments.

    PubMed

    Chima, Adaora M; Koka, Rahul; Lee, Benjamin; Tran, Tina; Ogbuagu, Onyebuchi U; Nelson-Williams, Howard; Rosen, Michael; Koroma, Michael; Sampson, John B

    2018-04-01

    Maternal mortality and morbidity are major causes of death in low-resource countries, especially those in Sub-Saharan Africa. Healthcare workforce scarcities in these locations result in poor access to, and quality of, perioperative care. These scarcities also limit the capacity for progressive development and enhancement of workforce training and skills through continuing medical education. Newly available low-cost, in-situ simulation systems make it possible for a small cadre of trainers to use simulation to identify areas needing improvement and to rehearse best-practice approaches relevant to the context of target environments. Nurse anesthetists were recruited throughout Sierra Leone to participate in simulation-based obstetric anesthesia scenarios at the country's national referral maternity hospital. All subjects participated in a detailed computer-assisted training program to familiarize themselves with the Universal Anaesthesia Machine (UAM). An expert panel rated the morbidity/mortality risk of pre-identified critical incidents within the scenario via the Delphi process. Participant responses to critical incidents were observed during these scenarios. Participants completed an obstetric anesthesia pretest and post-test, as well as debrief sessions focused on reviewing the significance of the critical incident responses observed during the scenario. 21 nurse anesthetists (20% of anesthesia providers nationally) participated. Median age was 41 years and median experience practicing anesthesia was 3.5 years. Most participants (57.1%) were female, and two-thirds (66.7%) performed obstetric anesthesia daily, but 57.1% had no experience using the UAM. During the simulation, participants were observed and assessed on critical incident responses for case preparation with a median score of 7 out of 13 points, anesthesia management with a median score of 10 out of 20 points, and rapid sequence intubation with a median score of 3 out of 10 points. This study identified

  10. Sonochemical synthesis of highly crystalline photocatalyst for industrial applications.

    PubMed

    Noman, Muhammad Tayyab; Militky, Jiri; Wiener, Jakub; Saskova, Jana; Ashraf, Muhammad Azeem; Jamshaid, Hafsa; Azeem, Musaddaq

    2018-02-01

    A highly photoactive, pure anatase form of TiO2 nanoparticles with an average particle size of 4 nm has been successfully synthesized by an ultrasonic acoustic method (UAM). The effects of process variables, i.e. precursor concentrations and sonication time, were investigated based on a central composite design and response surface methodology. The characteristics of the resulting nanoparticles (RNP) were analyzed by scanning electron microscopy, dynamic light scattering, transmission electron microscopy, X-ray diffractometry and Raman spectroscopy. Photocatalytic experiments were performed with methylene blue dye, which is considered a model organic pollutant in the textile industry. A comparative analysis of photocatalytic performance against dye removal efficiency between the RNP and commercially available Degussa P25 was performed. The rapid removal of methylene blue in the case of the RNP indicates their higher photocatalytic activity than P25. A maximum dye removal efficiency of 98.45% was achieved under the optimal conditions, i.e. TTIP conc. 10 mL, EG conc. 4 mL and sonication time 1 h. Interestingly, no significant difference was found in the photocatalytic performance of the RNP after calcination. Moreover, the self-cleaning efficiency of RNP deposited on cotton was evaluated in RGB color space. The obtained results indicate the significant impact of ultrasonic irradiation on the photocatalytic performance of the pure anatase form compared with any other hybrid type of TiO2 nanoparticles. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
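
    The core of the XSUSA approach, drawing correlated random samples of the data according to a covariance matrix and rerunning the downstream calculation for each sample, can be sketched in a few lines. Everything below is a toy stand-in: two groups, invented covariance numbers, and a trivial response function in place of the HELIOS2 lattice calculation.

```python
import math
import random
import statistics

# Toy 2-group nominal data and covariance matrix (invented numbers)
mean = [1.50, 0.80]
cov = [[0.0009, 0.0003],
       [0.0003, 0.0004]]

# Closed-form Cholesky factor of the 2x2 covariance, for correlated sampling
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 ** 2)

def sample_xs(rng):
    """Draw one correlated random sample of the 2-group data."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2

def response(sigma_a, sigma_p):
    """Stand-in for the lattice calculation (a simple ratio response)."""
    return sigma_p / sigma_a

rng = random.Random(42)
samples = [response(*sample_xs(rng)) for _ in range(5000)]
# The spread of the sampled responses is the propagated uncertainty
print(round(statistics.mean(samples), 3), round(statistics.stdev(samples), 3))
```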

  12. Evidence of community structure in biomedical research grant collaborations.

    PubMed

    Nagarajan, Radhakrishnan; Kalinka, Alex T; Hogan, William R

    2013-02-01

    Recent studies have clearly demonstrated a shift towards collaborative research and team science approaches across a spectrum of disciplines. Such collaborative efforts have also been acknowledged and nurtured by popular extramurally funded programs including the Clinical Translational Science Award (CTSA) conferred by the National Institutes of Health. Since its inception, the number of CTSA awardees has steadily increased to 60 institutes across 30 states. One of the objectives of CTSA is to accelerate translation of research from bench to bedside to community and train a new genre of researchers under the translational research umbrella. Feasibility of such a translation implicitly demands multi-disciplinary collaboration and mentoring. Networks have proven to be convenient abstractions for studying research collaborations. The present study is a part of the CTSA baseline study and investigates existence of possible community-structure in Biomedical Research Grant Collaboration (BRGC) networks across data sets retrieved from the internally developed grants management system, the Automated Research Information Administrator (ARIA) at the University of Arkansas for Medical Sciences (UAMS). Fastgreedy and link-community community-structure detection algorithms were used to investigate the presence of non-overlapping and overlapping community-structure and their variation across years 2006 and 2009. A surrogate testing approach in conjunction with appropriate discriminant statistics, namely: the modularity index and the maximum partition density is proposed to investigate whether the community-structure of the BRGC networks were different from those generated by certain types of random graphs. Non-overlapping as well as overlapping community-structure detection algorithms indicated the presence of community-structure in the BRGC network. 
Subsequent surrogate testing revealed that the random graph models considered in the present study may not necessarily be appropriate
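
    The modularity index used above as a discriminant statistic has a compact definition: the fraction of edges that fall within communities minus the fraction expected under a degree-preserving random null model. A stdlib-only sketch on a toy graph (not the BRGC data):

```python
def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected, unweighted graph.
    edges: list of (u, v) pairs; community: dict mapping node -> label."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # observed fraction of intra-community edges
    e_in = sum(1 for u, v in edges if community[u] == community[v]) / m
    # expected fraction under the configuration-model null
    expected = sum(
        (sum(deg[n] for n in deg if community[n] == c) / (2 * m)) ** 2
        for c in set(community.values())
    )
    return e_in - expected

# Two triangles joined by a single bridge edge: clear community structure
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, parts), 3))  # → 0.357
```

    Community-detection algorithms such as fastgreedy search for the partition maximizing this Q; surrogate testing then asks whether the observed Q exceeds what comparable random graphs achieve.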

  13. Community Health Warriors: Marshallese Community Health Workers' Perceptions and Experiences with CBPR and Community Engagement.

    PubMed

    Purvis, Rachel S; Bing, Williamina Ioanna; Jacob, Christopher J; Lang, Sharlynn; Mamis, Sammie; Ritok, Mandy; Rubon-Chutaro, Jellesen; McElfish, Pearl Anna

    2017-01-01

    Our manuscript highlights the viewpoints and reflections of the native Marshallese community health workers (CHWs) engaged in research with the local Marshallese community in Northwest Arkansas. In particular, this paper documents the vital role Marshallese CHWs play in the success of programs and research efforts. The negative health effects of nuclear testing in the Marshall Islands has been passed down through many generations, along with unfavorable attitudes toward the U.S. government and researchers. However, the community-based participatory research (CBPR) approach used by the University of Arkansas for Medical Sciences (UAMS) has allowed the native Marshallese CHWs to become advocates for the Marshallese community. The use of native CHWs has also leveled the power dynamics that can be a barrier to community-based research, and has strengthened trust with community stakeholders. Our paper shows how using Marshallese CHWs can produce positive health outcomes for the Marshallese community.

  14. Materials research at CMAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zucchiatti, Alessandro

    2013-07-18

    The Centro de Micro Analisis de Materiales (CMAM) is a research centre of the Universidad Autonoma de Madrid dedicated to the modification and analysis of materials using ion beam techniques. The infrastructure, based on an HVEE 5 MV tandem accelerator provided with a coaxial Cockcroft-Walton charging system, is fully open to research groups of the UAM, to other public research institutions and to private enterprises. CMAM research covers a few important lines such as advanced materials, surface science, biomedical materials, cultural heritage, and materials for energy production. The Centre also supports university teaching and technical training. A more detailed description of the research infrastructures and their use statistics will be given. Some of the main research results will be presented to show the progress of research in the Centre in the past few years and to motivate the strategic plans for the forthcoming years.

  15. GOFC-GOLD/LCLUC/START Regional Networking: building capacity for science and decision-making.

    NASA Astrophysics Data System (ADS)

    Justice, C. O.; Vadrevu, K.; Gutman, G.

    2016-12-01

    Over the past 20 years, the international GOFC-GOLD Program and START, with core funding from the NASA LCLUC program and ESA have been developing regional networks of scientists and data users for scientific capacity building and sharing experience in the use and application of Earth Observation data. Regional networks connect scientists from countries with similar environmental and social issues and often with shared water and airsheds. Through periodic regional workshops, regional and national projects are showcased and national priorities and policy drivers are articulated. The workshops encourage both north-south and south-south exchange and collaboration. The workshops are multi-sponsored and each include a training component, targeting early career scientists and data users from the region. The workshops provide an opportunity for regional scientists to publish in peer-reviewed special editions focused on regional issues. Currently, the NASA LCLUC program funded "South and Southeast Asia Regional Initiative (SARI)" team is working closely with the USAID/NASA SERVIR program to implement some capacity building and training activities jointly in south/southeast Asian countries to achieve maximum benefit.

  16. Breathing easier? The known impacts of biodiesel on air quality

    PubMed Central

    Traviss, Nora

    2013-01-01

    Substantial scientific evidence exists on the negative health effects of exposure to petroleum diesel exhaust. Many view biodiesel as a ‘green’, more environmentally friendly alternative fuel, especially with respect to measured reductions of particulate matter in tailpipe emissions. Tailpipe emissions data sets from heavy-duty diesel engines comparing diesel and biodiesel fuels provide important information regarding the composition and potential aggregate contribution of particulate matter and other pollutants to regional airsheds. However, exposure – defined in this instance as human contact with tailpipe emissions – is another key link in the chain between emissions and human health effects. Although numerous biodiesel emissions studies exist, biodiesel exposure studies are nearly absent from the literature. This article summarizes the known impacts of biodiesel on air quality and health effects, comparing emissions and exposure research. In light of rapidly changing engine, fuel and exhaust technologies, both emissions and exposure studies are necessary for developing a fuller understanding of the impact of biodiesel on air quality and human health. PMID:23585814

  17. Challenge theme 7: Information support for management of border security and environmental protection: Chapter 9 in United States-Mexican Borderlands: Facing tomorrow's challenges through USGS science

    USGS Publications Warehouse

    Parcher, Jean W.; Page, William R.

    2013-01-01

    Historically, international borders were located far from the major political and economic capitals of their countries and rarely received adequate planning or infrastructure development. Today, as a result of global economics and increased movement of goods between nations, border regions play a much greater role in commerce, tourism, and transportation. For example, Mexico is the second largest destination for United States exports (Woodrow Wilson Center Mexico Institute, 2009). The rapid population and economic growth along the United States–Mexican border, undocumented human border crossings, and the unique natural diversity of resources in the Borderlands present challenges for border security and environmental protection. Assessing risks and implementing sustainable growth policies to protect the environment and quality of life greatly increase in complexity when the issues cross an international border, where social services, environmental regulations, lifestyles, and cultural beliefs are unique for each country. Shared airsheds, water and biological resources, national security issues, and disaster management needs require an integrated binational approach to assess risks and develop binational management strategies.

  18. Environmental behavior and fate of methyl tert-butyl ether (MTBE)

    USGS Publications Warehouse

    Squillace, Paul J.; Pankow, James F.; Korte, Nic E.; Zogorski, John S.

    1996-01-01

    When gasoline that has been oxygenated with methyl tert-butyl ether (MTBE) comes in contact with water, large amounts of MTBE can dissolve; at 25 degrees Celsius the water solubility of MTBE is about 5,000 milligrams per liter for a gasoline that is 10 percent MTBE by weight. In contrast, for a nonoxygenated gasoline, the total hydrocarbon solubility in water is typically about 120 milligrams per liter. MTBE sorbs only weakly to soil and aquifer materials; therefore, sorption will not significantly retard MTBE's transport by ground water. In addition, MTBE generally resists degradation in ground water. The half-life of MTBE in the atmosphere can be as short as 3 days in a regional airshed. MTBE in the air tends to partition into atmospheric water, including precipitation. However, washout of gas-phase MTBE by precipitation would not, by itself, greatly alter the gas-phase concentration of the compound in the air. The partitioning of MTBE to precipitation is nevertheless strong enough to allow inputs of MTBE of up to 3 micrograms per liter or more to surface and ground water.

  19. Introduction: Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5)

    NASA Astrophysics Data System (ADS)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; Manzi, A. O.; Souza, R. A. F.; Schumacher, C.; Wang, J.; Andreae, M. O.; Barbosa, H. M. J.; Fan, J.; Fisch, G.; Goldstein, A. H.; Guenther, A.; Jimenez, J. L.; Pöschl, U.; Silva Dias, M. A.; Smith, J. N.; Wendisch, M.

    2015-11-01

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin during two years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the Introduction to the GoAmazon2014/5 Special Issue, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the two-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. The G1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs also correspond to the clean

  20. Introduction: Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5)

    NASA Astrophysics Data System (ADS)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; Manzi, A. O.; Souza, R. A. F.; Schumacher, C.; Wang, J.; Andreae, M. O.; Barbosa, H. M. J.; Fan, J.; Fisch, G.; Goldstein, A. H.; Guenther, A.; Jimenez, J. L.; Pöschl, U.; Silva Dias, M. A.; Smith, J. N.; Wendisch, M.

    2016-04-01

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. The G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. 
In the context of the Amazon basin, the two IOPs also correspond to the clean and

  1. Structure and non-structure of centrosomal proteins.

    PubMed

    Dos Santos, Helena G; Abia, David; Janowski, Robert; Mortuza, Gulnahar; Bertero, Michela G; Boutin, Maïlys; Guarín, Nayibe; Méndez-Giraldez, Raúl; Nuñez, Alfonso; Pedrero, Juan G; Redondo, Pilar; Sanz, María; Speroni, Silvia; Teichert, Florian; Bruix, Marta; Carazo, José M; Gonzalez, Cayetano; Reina, José; Valpuesta, José M; Vernos, Isabelle; Zabala, Juan C; Montoya, Guillermo; Coll, Miquel; Bastolla, Ugo; Serrano, Luis

    2013-01-01

    Here we perform a large-scale study of the structural properties and the expression of proteins that constitute the human centrosome. Centrosomal proteins tend to be larger than generic human proteins (control set), since their genes contain on average more exons (20.3 versus 14.6). They are rich in predicted disordered regions, which cover 57% of their length, compared to 39% in the general human proteome. They also contain several regions that are predicted to be both disordered and coiled-coil: 55 proteins (15%) contain disordered and coiled-coil fragments that cover more than 20% of their length. Helices prevail over strands in regions homologous to known structures (47% predicted helical residues against 17% predicted as strands), and even more in the whole centrosomal proteome (52% against 7%), while for control human proteins 34.5% of the residues are predicted as helical and 12.8% are predicted as strands. This difference is mainly due to residues predicted as disordered and helical (30% in centrosomal and 9.4% in control proteins), which may correspond to alpha-helix-forming molecular recognition features (α-MoRFs). We performed expression assays for 120 full-length centrosomal proteins and 72 domain constructs that we predicted to be globular. These full-length proteins are often insoluble: only 39 out of 120 expressed proteins (32%) and 19 out of 72 domains (26%) were soluble. We built or retrieved structural models for 277 out of 361 human proteins whose centrosomal localization has been experimentally verified. We could not find any suitable structural template with more than 20% sequence identity for 84 centrosomal proteins (23%), for which around 74% of the residues are predicted to be disordered or coiled-coil. The three-dimensional models that we built are available at http://ub.cbm.uam.es/centrosome/models/index.php.

  2. Source apportionments of ambient fine particulate matter in Israeli, Jordanian, and Palestinian cities.

    PubMed

    Heo, Jongbae; Wu, Bo; Abdeen, Ziad; Qasrawi, Radwan; Sarnat, Jeremy A; Sharf, Geula; Shpund, Kobby; Schauer, James J

    2017-06-01

    This manuscript evaluates spatial and temporal variations of source contributions to ambient fine particulate matter (PM2.5) in Israeli, Jordanian, and Palestinian cities. Twenty-four hour integrated PM2.5 samples were collected every six days over a 1-year period (January to December 2007) in four cities in Israel (West Jerusalem, Eilat, Tel Aviv, and Haifa), four cities in Jordan (Amman, Aqaba, Rahma, and Zarka), and three cities in Palestine (Nablus, East Jerusalem, and Hebron). The PM2.5 samples were analyzed for major chemical components, including organic carbon and elemental carbon, ions, and metals, and the results were used in a positive matrix factorization (PMF) model to estimate source contributions to PM2.5 mass. Nine sources, including secondary sulfate, secondary nitrate, mobile, industrial lead sources, dust, construction dust, biomass burning, fuel oil combustion and sea salt, were identified across the sampling sites. Secondary sulfate was the dominant source, contributing 35% of the total PM2.5 mass, and it showed relatively homogeneous temporal trends of daily source contribution in the study area. Mobile sources were found to be the second greatest contributor to PM2.5 mass in the large metropolitan cities, such as Tel Aviv, Hebron, and West and East Jerusalem. Other sources (i.e. industrial lead sources, construction dust, and fuel oil combustion) were closely related to local emissions within individual cities. This study demonstrates how international cooperation can facilitate air pollution studies that address regional air pollution issues and the incremental differences across cities in a common airshed. It also provides a model to study air pollution in regions with limited air quality monitoring capacity that have persistent and emerging air quality problems, such as Africa, South Asia and Central America. Copyright © 2017 Elsevier Ltd. All rights reserved.
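PMF as used in this record decomposes a samples × species concentration matrix into nonnegative source contributions and source profiles. Real PMF (e.g., EPA PMF) additionally weights residuals by per-measurement uncertainty; the unweighted sketch below, with synthetic data and illustrative names, only shows the basic nonnegative decomposition step:

```python
import numpy as np

def nmf_receptor_model(X, n_factors, n_iter=500, seed=0):
    """Decompose a samples-x-species concentration matrix X into
    nonnegative contributions G (samples x factors) and profiles
    F (factors x species) via Lee-Seung multiplicative updates.
    This is an unweighted stand-in for PMF, which also weights
    residuals by measurement uncertainty."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors)) + 1e-3
    F = rng.random((n_factors, m)) + 1e-3
    eps = 1e-12  # guard against division by zero
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)  # update profiles
        G *= (X @ F.T) / (G @ F @ F.T + eps)  # update contributions
    return G, F

# Synthetic example: two known sources mixed into 50 samples
rng = np.random.default_rng(1)
true_F = np.array([[5.0, 1.0, 0.1], [0.2, 2.0, 4.0]])  # factor profiles
true_G = rng.random((50, 2))                           # daily contributions
X = true_G @ true_F
G, F = nmf_receptor_model(X, n_factors=2)
print(np.linalg.norm(X - G @ F) / np.linalg.norm(X))   # small relative error
```

In practice the recovered profiles are then matched to known source signatures (secondary sulfate, mobile sources, and so on) by inspection of the species loadings.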

  3. The future of airborne sulfur-containing particles in the absence of fossil fuel sulfur dioxide emissions

    DOE PAGES

    Perraud, Véronique; Horne, Jeremy R.; Martinez, Andrew S.; ...

    2015-10-19

    Sulfuric acid (H2SO4), formed from oxidation of sulfur dioxide (SO2) emitted during fossil fuel combustion, is a major precursor of new airborne particles, which have well-documented detrimental effects on health, air quality, and climate. Another precursor is methanesulfonic acid (MSA), produced simultaneously with SO2 during the atmospheric oxidation of organosulfur compounds (OSCs), such as dimethyl sulfide. In the present paper, a multidisciplinary approach is used to examine how contributions of H2SO4 and MSA to particle formation will change in a large coastal urban area as anthropogenic fossil fuel emissions of SO2 decline. The 3-dimensional University of California Irvine–California Institute of Technology airshed model is used to compare atmospheric concentrations of gas phase MSA, H2SO4, and SO2 under current emissions of fossil fuel-associated SO2 and a best-case futuristic scenario with zero fossil fuel sulfur emissions. Model additions include results from (i) quantum chemical calculations that clarify the previously uncertain gas phase mechanism of formation of MSA and (ii) a combination of published and experimental estimates of OSC emissions, such as those from marine, agricultural, and urban processes, which include pet waste and human breath. Results show that in the zero anthropogenic SO2 emissions case, particle formation potential from H2SO4 will drop by about two orders of magnitude compared with the current situation. However, particles will continue to be generated from the oxidation of natural and anthropogenic sources of OSCs, with contributions from MSA and H2SO4 of a similar order of magnitude. Finally, this could be particularly important in agricultural areas where there are significant sources of OSCs.

  4. The future of airborne sulfur-containing particles in the absence of fossil fuel sulfur dioxide emissions.

    PubMed

    Perraud, Véronique; Horne, Jeremy R; Martinez, Andrew S; Kalinowski, Jaroslaw; Meinardi, Simone; Dawson, Matthew L; Wingen, Lisa M; Dabdub, Donald; Blake, Donald R; Gerber, R Benny; Finlayson-Pitts, Barbara J

    2015-11-03

    Sulfuric acid (H2SO4), formed from oxidation of sulfur dioxide (SO2) emitted during fossil fuel combustion, is a major precursor of new airborne particles, which have well-documented detrimental effects on health, air quality, and climate. Another precursor is methanesulfonic acid (MSA), produced simultaneously with SO2 during the atmospheric oxidation of organosulfur compounds (OSCs), such as dimethyl sulfide. In the present work, a multidisciplinary approach is used to examine how contributions of H2SO4 and MSA to particle formation will change in a large coastal urban area as anthropogenic fossil fuel emissions of SO2 decline. The 3-dimensional University of California Irvine-California Institute of Technology airshed model is used to compare atmospheric concentrations of gas phase MSA, H2SO4, and SO2 under current emissions of fossil fuel-associated SO2 and a best-case futuristic scenario with zero fossil fuel sulfur emissions. Model additions include results from (i) quantum chemical calculations that clarify the previously uncertain gas phase mechanism of formation of MSA and (ii) a combination of published and experimental estimates of OSC emissions, such as those from marine, agricultural, and urban processes, which include pet waste and human breath. Results show that in the zero anthropogenic SO2 emissions case, particle formation potential from H2SO4 will drop by about two orders of magnitude compared with the current situation. However, particles will continue to be generated from the oxidation of natural and anthropogenic sources of OSCs, with contributions from MSA and H2SO4 of a similar order of magnitude. This could be particularly important in agricultural areas where there are significant sources of OSCs.

  5. The future of airborne sulfur-containing particles in the absence of fossil fuel sulfur dioxide emissions

    PubMed Central

    Perraud, Véronique; Horne, Jeremy R.; Martinez, Andrew S.; Kalinowski, Jaroslaw; Meinardi, Simone; Dawson, Matthew L.; Wingen, Lisa M.; Dabdub, Donald; Blake, Donald R.; Gerber, R. Benny; Finlayson-Pitts, Barbara J.

    2015-01-01

    Sulfuric acid (H2SO4), formed from oxidation of sulfur dioxide (SO2) emitted during fossil fuel combustion, is a major precursor of new airborne particles, which have well-documented detrimental effects on health, air quality, and climate. Another precursor is methanesulfonic acid (MSA), produced simultaneously with SO2 during the atmospheric oxidation of organosulfur compounds (OSCs), such as dimethyl sulfide. In the present work, a multidisciplinary approach is used to examine how contributions of H2SO4 and MSA to particle formation will change in a large coastal urban area as anthropogenic fossil fuel emissions of SO2 decline. The 3-dimensional University of California Irvine–California Institute of Technology airshed model is used to compare atmospheric concentrations of gas phase MSA, H2SO4, and SO2 under current emissions of fossil fuel-associated SO2 and a best-case futuristic scenario with zero fossil fuel sulfur emissions. Model additions include results from (i) quantum chemical calculations that clarify the previously uncertain gas phase mechanism of formation of MSA and (ii) a combination of published and experimental estimates of OSC emissions, such as those from marine, agricultural, and urban processes, which include pet waste and human breath. Results show that in the zero anthropogenic SO2 emissions case, particle formation potential from H2SO4 will drop by about two orders of magnitude compared with the current situation. However, particles will continue to be generated from the oxidation of natural and anthropogenic sources of OSCs, with contributions from MSA and H2SO4 of a similar order of magnitude. This could be particularly important in agricultural areas where there are significant sources of OSCs. PMID:26483454

  6. The future of airborne sulfur-containing particles in the absence of fossil fuel sulfur dioxide emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perraud, Véronique; Horne, Jeremy R.; Martinez, Andrew S.

    Sulfuric acid (H2SO4), formed from oxidation of sulfur dioxide (SO2) emitted during fossil fuel combustion, is a major precursor of new airborne particles, which have well-documented detrimental effects on health, air quality, and climate. Another precursor is methanesulfonic acid (MSA), produced simultaneously with SO2 during the atmospheric oxidation of organosulfur compounds (OSCs), such as dimethyl sulfide. In the present paper, a multidisciplinary approach is used to examine how contributions of H2SO4 and MSA to particle formation will change in a large coastal urban area as anthropogenic fossil fuel emissions of SO2 decline. The 3-dimensional University of California Irvine–California Institute of Technology airshed model is used to compare atmospheric concentrations of gas phase MSA, H2SO4, and SO2 under current emissions of fossil fuel-associated SO2 and a best-case futuristic scenario with zero fossil fuel sulfur emissions. Model additions include results from (i) quantum chemical calculations that clarify the previously uncertain gas phase mechanism of formation of MSA and (ii) a combination of published and experimental estimates of OSC emissions, such as those from marine, agricultural, and urban processes, which include pet waste and human breath. Results show that in the zero anthropogenic SO2 emissions case, particle formation potential from H2SO4 will drop by about two orders of magnitude compared with the current situation. However, particles will continue to be generated from the oxidation of natural and anthropogenic sources of OSCs, with contributions from MSA and H2SO4 of a similar order of magnitude. Finally, this could be particularly important in agricultural areas where there are significant sources of OSCs.

  7. [Non-neoplastic enlargement of salivary glands: clinico-histologic analysis].

    PubMed

    González Guevara, Martha Beatriz; Torres Tejero, Marco Antonio; Martínez Mata, Guillermo

    2005-01-01

    We carried out a retrospective study on non-neoplastic enlargement of the salivary glands at the Oral Histopathology Diagnostic Center of the Autonomous Metropolitan University at Xochimilco (UAM-Xochimilco) in Mexico over a period of 24 years (1979-2003). Of 5,625 biopsies received and analyzed, a total of 461 (8.2%) were non-neoplastic enlargements of the salivary glands; for each case, we registered demographic data as well as clinical characteristics. These lesions comprised a heterogeneous group of pathologic entities, including local, obstructive, infectious, and immunopathologic lesions. The most frequent lesion was the extravasation cyst, with 341 (74%) cases, followed by chronic sialoadenitis and Sjögren's syndrome with 54 (11.7%) and 41 (8.8%) cases, respectively, and, at lower percentages, mucous retention cyst, sialosis, benign lymphoepithelial lesions, and lesions related to sialoliths. Females were affected more frequently, and most cases occurred in the second to third decades of life. The lesions were most frequently localized on the inferior labial mucosa.

  8. OncoSimulR: genetic simulation with arbitrary epistasis and mutator genes in asexual populations.

    PubMed

    Diaz-Uriarte, Ramon

    2017-06-15

    OncoSimulR implements forward-time genetic simulations of biallelic loci in asexual populations with special focus on cancer progression. Fitness can be defined as an arbitrary function of genetic interactions between multiple genes or modules of genes, including epistasis, restrictions in the order of accumulation of mutations, and order effects. Mutation rates can differ among genes, and can be affected by (anti)mutator genes. Also available are sampling from simulations (including single-cell sampling), plotting the genealogical relationships of clones, and generating and plotting fitness landscapes. Implemented in R and C++, freely available from Bioconductor for Linux, Mac and Windows under the GNU GPL license. Version 2.5.9 or higher available from: http://www.bioconductor.org/packages/devel/bioc/html/OncoSimulR.html . GitHub repository at: https://github.com/rdiaz02/OncoSimul. ramon.diaz@iib.uam.es. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  9. Investigating the Potential Barrier Function of Nanostructured Materials Formed in Engineered Barrier Systems (EBS) Designed for Nuclear Waste Isolation.

    PubMed

    Cuevas, Jaime; Ruiz, Ana Isabel; Fernández, Raúl

    2018-02-21

    Clay and cement are known nano-colloids originating from natural processes or traditional materials technology. Currently, they are used together as part of the engineered barrier system (EBS) to isolate high-level nuclear waste (HLW) metallic containers in deep geological repositories (DGR). The EBS should prevent radionuclide (RN) migration into the biosphere until the canisters fail, which is not expected for approximately 10³ years. The interactions of cementitious materials with bentonite swelling clay have been the focus of our research team at the Autonomous University of Madrid (UAM), with participation in several European Union (EU) projects from 1998 up to now. Here, we describe the mineral and chemical nature and microstructure of the alteration rim generated by the contact between concrete and bentonite. Its ability to buffer the surrounding chemical environment may have potential for further protection against RN migration. © 2018 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Coupling meteorology, metal concentrations, and Pb isotopes for source attribution in archived precipitation samples.

    PubMed

    Graney, Joseph R; Landis, Matthew S

    2013-03-15

    A technique that couples lead (Pb) isotopes and multi-element concentrations with meteorological analysis was used to assess source contributions to precipitation samples at the Bondville, Illinois USA National Trends Network (NTN) site. Precipitation samples collected over a 16-month period (July 1994-October 1995) at Bondville were parsed into six unique meteorological flow regimes using a minimum variance clustering technique on back trajectory endpoints. Pb isotope ratios and multi-element concentrations were measured using high resolution inductively coupled plasma-sector field mass spectrometry (ICP-SFMS) on the archived precipitation samples. Bondville is located in central Illinois, ~250 km downwind from smelters in southeast Missouri. The Mississippi Valley Type ore deposits in Missouri provided a unique multi-element and Pb isotope fingerprint for smelter emissions, which could be contrasted with industrial emissions from the Chicago and Indianapolis urban areas (~125 km north and east of Bondville, respectively) and regional emissions from electric utility facilities. Differences in Pb isotopes and element concentrations in precipitation corresponded to flow regime. Industrial sources from urban areas, and thorogenic Pb from coal use, could be differentiated from smelter emissions from Missouri by coupling Pb isotopes with variations in element ratios and relative mass factors. Using a three-endmember mixing model based on Pb isotope ratio differences, industrial processes in urban airsheds contributed 56±19%, smelters in southeast Missouri 26±13%, and coal combustion 18±7% of the Pb in precipitation collected in Bondville in the mid-1990s. Copyright © 2012 Elsevier B.V. All rights reserved.
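A three-endmember isotope mixing model of the kind this record describes can be posed as a 3 × 3 linear system: two measured isotope ratios plus a mass-balance (sum-to-one) constraint determine the three source fractions. The sketch below uses hypothetical endmember ratios for illustration, not the study's measured values, and omits the concentration weighting a full treatment would apply:

```python
import numpy as np

def three_endmember_fractions(sample, endmembers):
    """Solve for source fractions f (summing to 1) such that the
    sample's two Pb isotope ratios equal the mixture of three
    endmember ratios: two ratio equations plus a mass-balance row.
    `sample` = (r1, r2); `endmembers` = 3 rows of (r1, r2)."""
    E = np.asarray(endmembers, dtype=float)   # shape (3, 2)
    A = np.vstack([E.T, np.ones(3)])          # 3x3: ratio rows + sum-to-1
    b = np.array([sample[0], sample[1], 1.0])
    return np.linalg.solve(A, b)

# Hypothetical endmember ratios (206Pb/207Pb, 208Pb/207Pb);
# illustrative values only, not the study's endmembers.
endmembers = [(1.20, 2.45),   # urban/industrial
              (1.32, 2.40),   # Missouri MVT smelter Pb
              (1.25, 2.52)]   # coal combustion
sample = (1.24, 2.46)
f = three_endmember_fractions(sample, endmembers)
print(f, f.sum())
```

The system is solvable whenever the three endmembers are not collinear in ratio space; degenerate (collinear) endmembers make the matrix singular and the apportionment non-unique.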

  11. The influence of scales of atmospheric motion on air pollution over Portugal

    NASA Astrophysics Data System (ADS)

    Russo, Ana; Trigo, Ricardo; Mendes, Manuel; Jerez, Sonia; Gouveia, Célia Marina

    2014-05-01

    Air pollution is determined by the combination of different factors, namely emissions, physical constraints, meteorology and chemical processes [1,2,3]. The relative importance of these factors is influenced by their interaction on diverse scales of atmospheric motion. Each scale depicts different meteorological conditions, which, when combined with the different air pollution sources and photochemistry, result in varying ambient concentrations [2]. Identifying the dominant scales of atmospheric motion over a given airshed can be of great importance for many applications, such as air pollution and pollen dispersion or wind energy management [2]. Portugal has been affected by numerous air pollution episodes during the last decade. These episodes are often related to peak emissions from local industry or transport, but can also be associated with regional transport from other urban areas or with exceptional emission events, such as forest fires. This research aims to identify the scales of atmospheric motion which contribute to an increase of air pollution. A method is proposed for differentiating between the scales of atmospheric motion that can be applied on a daily basis to data collected at several wind-measuring sites in a given airshed and to reanalysis datasets. The method is based on the daily mean wind recirculation and the mean and standard deviation between sites. The determination of the thresholds between scales is performed empirically following the approach of Levy et al. [2] and also through an automatic statistical approach that takes into account the tails of the distributions (e.g. the 95th and 99th percentiles) of the different wind samples. A comparison is made with two objective approaches: 1) a daily synoptic classification for the same period over the region [4] and 2) a 3-D backward trajectory approach [5,6] for specific episodes. Furthermore, the outcomes are expected to support the Portuguese authorities on the implementation of strategies for a
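The "daily mean wind recirculation" this record builds on is commonly quantified by the Allwine and Whiteman (1994) recirculation factor, R = 1 − L/S, where L is the magnitude of the net (resultant) transport vector and S the total scalar wind run. Assuming that standard form, a minimal sketch from hourly wind components is:

```python
import numpy as np

def recirculation_factor(u, v, dt_hours=1.0):
    """Daily wind recirculation after Allwine & Whiteman (1994):
    R = 1 - L/S, with L the magnitude of the resultant (net)
    transport vector and S the total scalar wind run.
    R near 0: straight-line flow; R near 1: strong recirculation.
    u, v: hourly wind components (m/s)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    S = np.sum(np.hypot(u, v)) * dt_hours      # total wind run
    L = np.hypot(u.sum(), v.sum()) * dt_hours  # net transport distance
    return 1.0 - L / S if S > 0 else 0.0

# Steady westerly flow: no recirculation
print(recirculation_factor([3, 3, 3, 3], [0, 0, 0, 0]))    # 0.0
# Flow that reverses (sea-breeze-like): full recirculation
print(recirculation_factor([3, 3, -3, -3], [0, 0, 0, 0]))  # 1.0
```

Daily R values from each site can then be thresholded (empirically, or via distribution tails as the abstract proposes) to separate synoptically driven days from locally driven, recirculating ones.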

  12. Enhancing non-refractory aerosol apportionment from an urban industrial site through receptor modeling of complete high time-resolution aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    McGuire, M. L.; Chang, R. Y.-W.; Slowik, J. G.; Jeong, C.-H.; Healy, R. M.; Lu, G.; Mihele, C.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.

    2014-08-01

    Receptor modeling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's Canadian Regional and Urban Investigation System for Environmental Research (CRUISER) mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMFFull MS) encompassing both organic and inorganic components. This approach was compared to the more common method of analyzing only the organic mass spectra (PMFOrg MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulfate- and oxygenated organic aerosol-containing factor (Sulfate-OA); an ammonium nitrate- and oxygenated organic aerosol-containing factor (Nitrate-OA); an ammonium chloride-containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMFFull MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analyzing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case the Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles.
It is expected that this

  13. Enhancing non-refractory aerosol apportionment from an urban industrial site through receptor modelling of complete high time-resolution aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    McGuire, M. L.; Chang, R. Y.-W.; Slowik, J. G.; Jeong, C.-H.; Healy, R. M.; Lu, G.; Mihele, C.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.

    2014-02-01

    Receptor modelling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's CRUISER mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMFFull MS) encompassing both organic and inorganic components. This approach was compared to the more common method of analysing only the organic mass spectra (PMFOrg MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulphate and oxygenated organic aerosol containing factor (Sulphate-OA); an ammonium nitrate and oxygenated organic aerosol containing factor (Nitrate-OA); an ammonium chloride containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMFFull MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analysing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case, an Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles. 
It is expected that this method would be even more useful for HR-ToF-AMS data, due to the ability

  14. Introduction: Observations and modeling of the Green Ocean Amazon (GoAmazon2014/5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. In addition, the G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs

  15. Introduction: Observations and modeling of the Green Ocean Amazon (GoAmazon2014/5)

    DOE PAGES

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; ...

    2016-04-19

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. In addition, the G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simoneit, B.R.T.; Radzi bin Abas, M.; Cass, G.R.

    Biomass combustion is an important primary source of carbonaceous particles in the global atmosphere. Various molecular markers have been proposed for this process, but additional specific tracers are needed. The injection of natural product organic compounds into smoke occurs primarily by direct volatilization/steam stripping and by pyrolysis. Although the composition of organic matter in smoke particles is highly variable, the molecular structures of the tracers are generally source specific. Homologous compounds and biomarkers present in smoke are derived directly from plant wax, gum and resin by volatilization, and secondarily from pyrolysis of biopolymers (e.g., lignin, cutin, suberin), wax, gum and resin. The component complexity is illustrated with examples from controlled burns of temperate and tropical biomass fuels. Conifer smoke contains characteristic tracers from diterpenoids as well as phenolics and other oxygenated species. These are recognizable in urban airsheds. The major organic components of smoke from tropical biomass are straight-chain, aliphatic and oxygenated compounds and triterpenoids. Several compounds are potential key indicators for combustion of such biomass. The precursor-to-product approach of organic geochemistry can be applied successfully to provide molecular tracers for studying smoke plume chemistry and dispersion.

  17. Dynamic behavior of semivolatile organic compounds in indoor air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loy, Michael David Van

    1998-12-09

    Exposures to a wide range of air pollutants are often dominated by those occurring in buildings because of three factors: 1) most people spend a large fraction of their time indoors, 2) many pollutants have strong indoor sources, and 3) the dilution volume in buildings is generally several orders of magnitude smaller than that of an urban airshed. Semivolatile organic compounds (SVOCs) are emitted by numerous indoor sources, including tobacco combustion, cooking, carpets, paints, resins, and glues, so indoor gas-phase concentrations of these compounds are likely to be elevated relative to ambient levels. The rates of uptake and release of reversibly sorbing SVOCs by indoor materials directly affect both peak concentrations and persistence of the pollutants indoors after source elimination. Thus, accurate predictions of SVOC dynamics in indoor air require an understanding of contaminant sorption on surface materials such as carpet and wallboard. The dynamic behaviors of gas-phase nicotine and phenanthrene were investigated in a 20 m³ stainless steel chamber containing carpet and painted wallboard. Each compound was studied independently, first in the empty chamber, then with each sorbent individually, and finally with both sorbents in the chamber.
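The sorption dynamics described above are commonly represented by a well-mixed chamber mass balance coupled to a reversibly sorbing surface. The sketch below uses that generic model form with purely illustrative rate constants, not the chamber study's fitted parameters:

```python
import numpy as np

def chamber_svoc(t_end_h, dt_h, lam, ka, kd, E=0.0, C0=0.0, M0=0.0):
    """Well-mixed chamber with a reversibly sorbing surface:
        dC/dt = E - lam*C - ka*C + kd*M   (gas phase, ug/m^3)
        dM/dt = ka*C - kd*M               (sorbed mass per m^3 of air)
    lam: air-exchange rate (1/h); ka, kd: sorption/desorption rates (1/h);
    E: emission rate (ug m^-3 h^-1). Explicit Euler integration.
    Returns gas-phase and sorbed time series."""
    n = int(round(t_end_h / dt_h))
    C, M = C0, M0
    Cs, Ms = np.empty(n), np.empty(n)
    for i in range(n):
        dC = E - (lam + ka) * C + kd * M
        dM = ka * C - kd * M
        C += dC * dt_h
        M += dM * dt_h
        Cs[i], Ms[i] = C, M
    return Cs, Ms

# Source on for 24 h, then eliminated: the sorbed reservoir re-emits,
# so the gas-phase decay is far slower than ventilation alone.
C_on, M_on = chamber_svoc(24, 0.01, lam=0.5, ka=1.0, kd=0.05, E=10.0)
C_off, _ = chamber_svoc(24, 0.01, lam=0.5, ka=1.0, kd=0.05,
                        C0=C_on[-1], M0=M_on[-1])
```

With these illustrative rates the post-elimination concentration after 24 h remains far above the pure-ventilation prediction C0·exp(−λt), which is the persistence effect the record attributes to reversible sorption.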

  18. The University of Utah Urban Undertaking (U4)

    NASA Astrophysics Data System (ADS)

    Lin, J. C.; Mitchell, L.; Bares, R.; Mendoza, D. L.; Fasoli, B.; Bowling, D. R.; Garcia, M. A.; Buchert, M.; Pataki, D. E.; Crosman, E.; Horel, J.; Catharine, D.; Strong, C.; Ehleringer, J. R.

    2015-12-01

    The University of Utah is leading efforts to understand the spatiotemporal patterns in both emissions and concentrations of greenhouse gases (GHG) and criteria pollutants within urban systems. The urbanized corridor in northern Utah along the Wasatch Front, anchored by Salt Lake City, is undergoing rapid population growth that is projected to double in the next few decades. The Wasatch Front offers multiple advantages as a unique "urban laboratory": urban regions in multiple valleys spanning numerous orders of magnitude in population, each with unique airsheds, well-defined boundary conditions along deserts and tall mountains, strong signals during cold air pool events, seasonal contrasts in pollution, and a legacy of productive partnerships with local stakeholders and governments. We will show results from GHG measurements from the Wasatch Front, including one of the longest running continuous CO2 records in urban areas. Complementing this record are comprehensive meteorological observations and GHG/pollutant concentrations on mobile platforms: light rail, helicopter, and research vans. Variations in the GHG and pollutant observations illustrate human behavior and the resulting "urban metabolism" taking place on hourly, weekly, and seasonal cycles, resulting in a coupling between GHG and criteria pollutants. Moreover, these observations illustrate systematic spatial gradients in GHG and pollutant distributions between and within urban areas, traced to underlying gradients in population, energy use, terrain, and land use. Over decadal time scales the observations reveal growth of the "urban dome" due to expanding urban development. Using numerical models of the atmosphere, we further link concentrations of GHG and air quality-relevant pollutants to underlying emissions at the neighborhood scale as well as urban planning considerations.

  19. Characterization of metals emitted from motor vehicles.

    PubMed

    Schauer, James J; Lough, Glynis C; Shafer, Martin M; Christensen, William F; Arndt, Michael F; DeMinter, Jeffrey T; Park, June-Soo

    2006-03-01

    A systematic approach was used to quantify the metals present in particulate matter emissions associated with on-road motor vehicles. Consistent sampling and chemical analysis techniques were used to determine the chemical composition of particulate matter less than 10 μm in aerodynamic diameter (PM10) and particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5), including analysis of trace metals by inductively coupled plasma mass spectrometry (ICP-MS). Four sources of metals were analyzed in emissions associated with motor vehicles: tailpipe emissions from gasoline- and diesel-powered vehicles, brake wear, tire wear, and resuspended road dust. Profiles for these sources were used in a chemical mass balance (CMB) model to quantify their relative contributions to the metal emissions measured in roadway tunnel tests in Milwaukee, Wisconsin. Roadway tunnel measurements were supplemented by parallel measurements of atmospheric particulate matter and associated metals at three urban locations: Milwaukee and Waukesha, Wisconsin, and Denver, Colorado. Ambient aerosol samples were collected every sixth day for one year and analyzed by the same chemical analysis techniques used for the source samples. The two Wisconsin sites were studied to assess the spatial differences, within one urban airshed, of trace metals present in atmospheric particulate matter. The measurements were evaluated to help understand source and seasonal trends in atmospheric concentrations of trace metals. ICP-MS methods have not been widely used in analyses of ambient aerosols for metals despite demonstrated advantages over traditional techniques. In a preliminary study, ICP-MS techniques were used to assess the leachability of trace metals present in atmospheric particulate matter samples and motor vehicle source samples in a synthetic lung fluid.
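
    The CMB step described above amounts to a linear least-squares inversion: ambient species concentrations are modeled as fixed source profiles times unknown source contributions. A minimal sketch with made-up profiles and synthetic ambient data (not the study's measurements):

```python
import numpy as np

# Toy chemical mass balance (CMB): ambient concentrations of a few tracer
# species are expressed as (source profiles) x (source contributions), and
# the contributions are recovered by least squares. All numbers below are
# illustrative, not the study's data.
species = ["Fe", "Cu", "Zn", "Ba"]
profiles = np.array([      # mass fraction of each species in each source
    #  brake   tire   road dust
    [0.20,  0.005, 0.030],   # Fe
    [0.10,  0.002, 0.002],   # Cu
    [0.02,  0.010, 0.005],   # Zn
    [0.05,  0.001, 0.004],   # Ba
])
true_contrib = np.array([2.0, 5.0, 10.0])   # ug/m^3 from each source
ambient = profiles @ true_contrib           # synthetic "measured" ambient

# Overdetermined system (4 species, 3 sources) solved by least squares:
est, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
```

    Real CMB applications add measurement uncertainties and enforce non-negative contributions; this noiseless version recovers the true values exactly.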

  20. Towards a Consistent and Scientifically Accurate Drug Ontology.

    PubMed

    Hogan, William R; Hanna, Josh; Joseph, Eric; Brochhausen, Mathias

    2013-01-01

    Our use case for comparative effectiveness research requires an ontology of drugs that enables querying National Drug Codes (NDCs) by active ingredient, mechanism of action, physiological effect, and therapeutic class of the drug products they represent. We conducted an ontological analysis of drugs from the realist perspective, and evaluated existing drug terminology, ontology, and database artifacts from (1) the technical perspective, (2) the perspective of pharmacology and medical science, (3) the perspective of description logic semantics (if they were available in Web Ontology Language or OWL), and (4) the perspective of our realism-based analysis of the domain. No existing resource was sufficient. Therefore, we built the Drug Ontology (DrOn) in OWL, which we populated with NDCs and other classes from RxNorm using only content created by the National Library of Medicine. We also built an application that uses DrOn to query for NDCs as outlined above, available at: http://ingarden.uams.edu/ingredients. The application uses an OWL-based description logic reasoner to execute end-user queries. DrOn is available at http://code.google.com/p/dr-on.

  1. Expression of an alkane monooxygenase (alkB) gene and methyl tert-butyl ether co-metabolic oxidation in Pseudomonas citronellolis.

    PubMed

    Bravo, Ana Luisa; Sigala, Juan Carlos; Le Borgne, Sylvie; Morales, Marcia

    2015-04-01

    Pseudomonas citronellolis UAM-Ps1 co-metabolically transforms methyl tert-butyl ether (MTBE) to tert-butyl alcohol with n-pentane (2.6 mM), n-octane (1.5 mM) or dicyclopropylketone (DCPK) (4.4 mM), a gratuitous inducer of alkane hydroxylase (AlkB) activity. Reverse-transcription quantitative real-time PCR was used to quantify alkane monooxygenase (alkB) gene expression. The alkB gene was expressed in the presence of n-alkanes and DCPK, and MTBE oxidation occurred only in cultures in which alkB was transcribed. A correlation between the number of alkB transcripts and MTBE consumption was found (MTBE consumption in μmol = 1.44e(-13) x DNA copies, R(2) = 0.99) when MTBE (0.84 mM) was added. Furthermore, alkB was cloned and expressed in Escherichia coli, and the recombinant AlkB had a molecular weight of 42 kDa. This is the first report in which the expression of alkB is related to the co-metabolic oxidation of MTBE.
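
    The reported regression can be applied directly. A minimal sketch using the slope quoted in the abstract; the transcript copy number in the example is a hypothetical input, not a measurement from the study.

```python
# The abstract reports a linear relation between alkB transcript copies and
# MTBE consumed: consumption (umol) = 1.44e-13 * (DNA copies), R^2 = 0.99.
SLOPE_UMOL_PER_COPY = 1.44e-13

def predicted_mtbe_umol(alkb_copies):
    """Predicted MTBE consumption (umol) for a given alkB transcript count."""
    return SLOPE_UMOL_PER_COPY * alkb_copies

# Hypothetical example: 1e13 transcript copies.
pred = predicted_mtbe_umol(1e13)
```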

  2. Acoustical phenomenon in ancient Totonac's monument

    NASA Astrophysics Data System (ADS)

    Sánchez-Dehesa, José; Håkansson, Andreas; Cervera, Francisco; Meseguer, Francisco; Manzanares-Martínez, Betsabé; Ramos-Mendieta, Felipe

    2004-05-01

    The circle of gladiators is a monument built by the Totonac Indians in the ceremonial site of Cempoala, located near Veracruz (Mexico). The city is believed to date to around 1200 A.D. The monument is a round structure with crenellated wall tops and a diameter of 13.4 m. Though the deterioration of this monument is noticeable, it presents a singular acoustical phenomenon whose strength was probably extraordinary at the time of its construction. In brief, along any diameter of the circle one can find two focal points such that if one person speaks at one focus, another person located at the other hears the sound reinforced. In other words, this circular space acoustically behaves as if it were elliptical. Here, we report the experimental characterization of the phenomenon and present a theoretical explanation. We also speculate on the intentionality of the Totonacs, since these people are associated with the Mayan culture, which is known for its realization of environments with astonishing sonic properties. [Work supported by CEAL-UAM of Spain.]

  3. Ultrasound assisted microextraction-nano material solid phase dispersion for extraction and determination of thymol and carvacrol in pharmaceutical samples: experimental design methodology.

    PubMed

    Roosta, Mostafa; Ghaedi, Mehrorang; Daneshfar, Ali; Sahraei, Reza

    2015-01-15

    In the present study, a new extraction method, "ultrasound-assisted microextraction-nanomaterial solid phase dispersion" (UAME-NMSPD), was developed for the first time to preconcentrate low quantities of thymol and carvacrol in pharmaceutical samples prior to their HPLC-UV separation and determination. The analytes were accumulated on a nickel sulfide nanomaterial loaded on activated carbon (NiS-NP-AC), which was characterized in detail by XRD, FESEM and UV-vis techniques. Central composite design (CCD) combined with a desirability function (DF) was used to search for optimum operational conditions. Working under the optimum conditions (10 min ultrasonication, pH 3, 0.011 g of adsorbent, and 600 μL of extraction solvent) gave a wide linear range of 0.005-2.0 μg mL(-1) (r(2) > 0.9993), with LODs for thymol and carvacrol of 0.23 and 0.21 μg L(-1), respectively. The relative standard deviations (RSDs) were less than 4.93% (n = 3).
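
    The abstract does not state how its LODs were derived, but a common convention is LOD = 3·(blank standard deviation)/(calibration slope) and LOQ = 10·sd/slope. A generic sketch with hypothetical blank signals and slope:

```python
import statistics

# Generic IUPAC-style detection/quantification limit estimate (a sketch,
# not necessarily the paper's exact procedure). Values are hypothetical.
blank_signals = [0.011, 0.013, 0.010, 0.012, 0.011, 0.014, 0.012]  # a.u.
calib_slope = 0.150   # signal units per (ug/L), hypothetical

sd_blank = statistics.stdev(blank_signals)   # sample standard deviation
lod = 3 * sd_blank / calib_slope             # limit of detection, ug/L
loq = 10 * sd_blank / calib_slope            # limit of quantification, ug/L
```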

  4. Featured Image: Stars from Broken Clouds and Disks

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2018-04-01

    This still from a simulation captures binary star formation in action. Researchers have long speculated on the processes that lead clouds of gas and dust to break up into smaller pieces to form multiple-star systems, but these processes take place over a large range of scales, making them difficult to simulate. In a new study led by Leonardo Sigalotti (UAM Azcapotzalco, Mexico), researchers have used a smoothed-particle hydrodynamics code to model binary star formation on scales from thousands of AU down to as small as 0.1 AU. In the scene shown above, a collapsing cloud of gas and dust has recently fragmented into two pieces, forming a pair of disks separated by around 200 AU. In addition, we can see that smaller-scale fragmentation is just starting in one of these disks, Disk B. Here, one of the disk's spiral arms has become unstable and is beginning to condense; it will eventually form another star, producing a hierarchical system: a close binary within the larger-scale binary. Check out the broader process in the four panels below (which show the system as it evolves over time), or visit the paper linked below for more information about what the authors learned. Evolution of a collapsed cloud after large-scale fragmentation into a binary protostar: (a) 44.14 kyr, (b) 44.39 kyr, (c) 44.43 kyr, and (d) 44.68 kyr. The insets show magnifications of the binary cores. [Adapted from Sigalotti et al. 2018] Citation: Leonardo Di G. Sigalotti et al 2018 ApJ 857 40. doi:10.3847/1538-4357/aab619

  5. Interactions between reactive nitrogen and the Canadian landscape: A budget approach

    NASA Astrophysics Data System (ADS)

    Clair, Thomas A.; Pelletier, Nathan; Bittman, Shabtai; Leip, Adrian; Arp, Paul; Moran, Michael D.; Dennis, Ian; Niemi, David; Sterling, Shannon; Drury, Craig F.; Yang, Jingyi

    2014-11-01

    The movement of excess reactive nitrogen (Nr) from anthropogenic activities to natural ecosystems has been described as one of the most serious environmental threats facing modern society. One of the approaches for tracking this movement is the use of budgets that quantify fluxes. We constructed an Nr budget for Canada using measured and modeled values from the scientific literature, government databases, and data from new agri-environmental indicators, in order to produce information for policy makers and scientists to understand the major flows of nitrogen and allow a better assessment of risks to the Canadian environment. We divided the Canadian territory south of 60°N into areas dominated by natural ecosystems and areas dominated by agricultural and urban/industrial activities to evaluate Nr flows within, between, and out of these units. We show that Canada is a major exporter of Nr due to the availability of inexpensive commercial fertilizers. The large land area suitable for agriculture makes Canada a significant agricultural Nr exporter of both grain crops and livestock. Finally, Canada exports petroleum N mainly to the United States. Because of its location and prevailing atmospheric transport patterns, Canada is a net receptor of Nr air pollution from the United States, receiving approximately 20% of the Nr leaving the U.S. airshed. We found that, overall, terrestrial natural ecosystems as well as the atmosphere are in balance between Nr inputs and outputs when all reactive and nonreactive N fluxes are included. However, when only reactive forms are considered, almost 50% of N entering the Canadian atmosphere cannot be accounted for and is assumed to be lost to the Atlantic and Arctic oceans or to unmeasured dry deposition. Agricultural and freshwater landscapes, however, show large differences between measured inputs and outputs of N, as our data suggest that denitrification in soils and aquatic systems is larger than what models predict.
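
    The budget approach described above reduces to compartment bookkeeping: sum the fluxes in, sum the fluxes out, and treat the residual as accumulation or an unaccounted term. A sketch with hypothetical flux values (kt N/yr), not the study's numbers:

```python
# Single-compartment reactive-N budget sketch. All flux values are
# hypothetical, chosen only to illustrate the bookkeeping.
inputs = {"fertilizer": 2000, "deposition": 600, "fixation": 300}
outputs = {"crop_export": 1500, "emission": 500, "leaching": 400}

total_in = sum(inputs.values())      # kt N/yr entering the compartment
total_out = sum(outputs.values())    # kt N/yr leaving the compartment

# Positive residual = accumulation or unmeasured loss (e.g. denitrification,
# ocean export); this is the term the budget cannot close.
residual = total_in - total_out
unaccounted_frac = residual / total_in
```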

  6. Quantification of hourly variability in NO(x) emissions for baseload coal-fired power plants.

    PubMed

    Abdel-Aziz, Amr; Frey, H Christopher

    2003-11-01

    The objectives of this paper are to (1) quantify variability in hourly utility oxides of nitrogen (NO(x)) emission factors, activity factors, and total emissions; (2) investigate the autocorrelation structure and evaluate cyclic effects at short and long scales of the time series of total hourly emissions; (3) compare emissions for the ozone (O3) season versus the entire year to identify seasonal differences, if any; and (4) evaluate interannual variability. Continuous emissions monitoring data were analyzed for 1995 and 1998 for 32 units from nine baseload power plants in the Charlotte, NC, airshed. Unit emissions have a strong 24-hr cycle attributable primarily to the capacity factor. Typical ranges of the coefficient of variation for emissions at a given hour of the day were from 0.2 to 0.45. Little difference was found when comparing weekend emissions with the entire week or when comparing the O3 season with the entire year. There were substantial differences in the mean and standard deviation of emissions when comparing 1995 and 1998 data, indicative of the effect of retrofits of control technology during the intervening time. The wide range of variability and its autocorrelation should be accounted for when developing probabilistic utility emission inventories for analysis of near-term future episodes.
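
    The two diagnostics named above (per-hour-of-day coefficient of variation and the autocorrelation that reveals the 24-hr cycle) can be sketched on synthetic hourly emissions; the data below are a made-up sinusoid plus noise, not CEM data.

```python
import math, random

# Synthetic hourly "NOx emissions" with a 24-h cycle plus noise.
random.seed(0)
n_hours = 24 * 120
series = [10 + 4 * math.sin(2 * math.pi * (t % 24) / 24) + random.gauss(0, 1)
          for t in range(n_hours)]

def cv_by_hour(x):
    """Coefficient of variation (sd/mean) for each hour of day 0..23."""
    out = []
    for h in range(24):
        vals = x[h::24]
        m = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))
        out.append(sd / m)
    return out

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t + lag] - m) for t in range(len(x) - lag))
    den = sum((v - m) ** 2 for v in x)
    return num / den

cvs = cv_by_hour(series)
r24 = autocorr(series, 24)   # strong positive value reveals the diurnal cycle
```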

  7. Heavy haze in winter Beijing driven by fast gas phase oxidation

    NASA Astrophysics Data System (ADS)

    Lu, K.; Tan, Z.; Wang, H.; Li, X.; Wu, Z.; Chen, Q.; Wu, Y.; Ma, X.; Liu, Y.; Chen, X.; Shang, D.; Dong, H.; Zeng, L.; Shao, M.; Hu, M.; Fuchs, H.; Novelli, A.; Broch, S.; Hofzumahaus, A.; Holland, F.; Rohrer, F.; Bohn, B.; Georgios, G.; Schmitt, S. H.; Schlag, P.; Kiendler-Scharr, A.; Wahner, A.; Zhang, Y.

    2017-12-01

    Heavy haze conditions frequently occur in the airsheds of Beijing and surrounding areas, especially during wintertime. To explore trace-gas oxidation and the subsequent formation of aerosols, a comprehensive field campaign was performed at a regional site (on the campus of the University of Chinese Academy of Sciences, UCAS) in Beijing in winter 2016. Serious haze pollution episodes were often observed with a fast increase of inorganic salts (especially nitrate), and these episodes were always associated with enhanced humidity and elevated concentrations of PAN (peroxyacyl nitrates), which is normally a marker of gas-phase oxidation of NOx and VOCs. Moreover, based on the measurements of OH, HO2, RO2, total OH reactivity, N2O5, NO, NO2, SO2, particle concentrations/size distributions/chemical compositions, and meteorological parameters, the gas-phase oxidation rates that lead to the formation of sulfate, nitrate and secondary organic aerosols were estimated. These formation rates were enhanced severalfold during pollution episodes compared to clean air masses. Preliminary analysis showed that the gas-phase formation potential of nitrate and secondary organic aerosols was larger than the observed concentrations of nitrate and SOA; the excess production may be explained by deposition and dilution.
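
    A production-rate estimate of the kind described can be sketched for nitric acid (the precursor of particulate nitrate) from OH + NO2. The rate constant is a representative near-298 K, 1 atm value and the OH/NO2 levels are illustrative, not the campaign's measurements.

```python
# Sketch: gas-phase HNO3 production rate P = k * [OH] * [NO2], converted to
# ug m^-3 h^-1. Rate constant and concentrations are representative values,
# not data from this campaign.
K_OH_NO2 = 1.1e-11           # cm^3 molecule^-1 s^-1, effective 2nd order
AVOGADRO = 6.022e23
AIR_NUM_DENSITY = 2.46e19    # molecules cm^-3 at ~298 K, 1 atm
M_HNO3 = 63.0                # g/mol

oh = 3.0e6                              # OH, molecules cm^-3 (illustrative)
no2 = 20e-9 * AIR_NUM_DENSITY           # 20 ppb NO2 -> molecules cm^-3

p_molec = K_OH_NO2 * oh * no2           # molecules cm^-3 s^-1
# molecules cm^-3 s^-1 -> ug m^-3 h^-1:
#   x3600 s/h, /N_A -> mol, x63 g/mol, x1e6 cm^3/m^3, x1e6 ug/g
p_mass = p_molec * 3600 / AVOGADRO * M_HNO3 * 1e6 * 1e6
```

    Values of a few µg m⁻³ h⁻¹, as this sketch yields, are the order of magnitude relevant for rapid nitrate growth in haze episodes.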

  8. Secondary organic aerosol from atmospheric photooxidation of indole

    NASA Astrophysics Data System (ADS)

    Montoya-Aguilera, Julia; Horne, Jeremy R.; Hinks, Mallory L.; Fleming, Lauren T.; Perraud, Véronique; Lin, Peng; Laskin, Alexander; Laskin, Julia; Dabdub, Donald; Nizkorodov, Sergey A.

    2017-09-01

    Indole is a heterocyclic compound emitted by various plant species under stressed conditions or during flowering events. The formation, optical properties, and chemical composition of secondary organic aerosol (SOA) formed by low-NOx photooxidation of indole were investigated. The SOA yield (1.3 ± 0.3) was estimated from measuring the particle mass concentration with a scanning mobility particle sizer (SMPS) and correcting it for wall loss effects. The high value of the SOA mass yield suggests that most oxidized indole products eventually end up in the particle phase. The SOA particles were collected on filters and analysed offline with UV-vis spectrophotometry to measure the mass absorption coefficient (MAC) of the bulk sample. The samples were visibly brown and had MAC values of ~2 m2 g-1 at λ = 300 nm and ~0.5 m2 g-1 at λ = 400 nm, comparable to strongly absorbing brown carbon emitted from biomass burning. The chemical composition of SOA was examined with several mass spectrometry methods. Direct analysis in real-time mass spectrometry (DART-MS) and nanospray desorption electrospray high-resolution mass spectrometry (nano-DESI-HRMS) were both used to provide information about the overall distribution of SOA compounds. High-performance liquid chromatography, coupled to photodiode array spectrophotometry and high-resolution mass spectrometry (HPLC-PDA-HRMS), was used to identify chromophoric compounds that are responsible for the brown colour of SOA. Indole derivatives, such as tryptanthrin, indirubin, indigo dye, and indoxyl red, were found to contribute significantly to the visible absorption spectrum of indole SOA. The potential effect of indole SOA on air quality was explored with an airshed model, which found elevated concentrations of indole SOA during the afternoon hours, contributing considerably to the total organic aerosol under selected scenarios. Because of its high MAC values, indole SOA can contribute to decreased visibility and poor air quality.
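
    The yield calculation described above is a simple ratio once the wall-loss correction is applied. A minimal sketch with hypothetical numbers (not the indole-experiment data):

```python
# SOA mass yield sketch: Y = (wall-loss-corrected aerosol mass formed)
#                          / (mass of precursor reacted).
# All values below are hypothetical.
def soa_yield(delta_m_measured_ug_m3, wall_loss_correction, delta_voc_ug_m3):
    """Yield from measured aerosol mass, a multiplicative wall-loss
    correction factor (>1), and the reacted precursor mass."""
    return delta_m_measured_ug_m3 * wall_loss_correction / delta_voc_ug_m3

# e.g. 90 ug/m^3 measured aerosol, 30% lost to walls (factor 1/0.7),
# from 100 ug/m^3 of reacted precursor:
y = soa_yield(90.0, 1 / 0.7, 100.0)
```

    A yield above 1 is possible because the oxidized products are heavier (more oxygen) than the parent VOC, consistent with the 1.3 ± 0.3 reported above.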

  9. New Zealand traffic and local air quality.

    PubMed

    Irving, Paul; Moncrieff, Ian

    2004-12-01

    Since 1996 the New Zealand Ministry of Transport (MOT) has been investigating the effects of road transport on local air quality. The outcome has been the government's Vehicle Fleet Emissions Control Strategy (VFECS), a programme of measures designed to assist with the improvement of local air quality, and especially with the appropriate management of transport-sector emissions. Key to the VFECS has been the development of tools to assess and predict the contribution of vehicle emissions to local air pollution in a given urban situation. Determining how vehicles behave as an emissions source and, more importantly, how the combined traffic flows contribute to the total emissions within a given airshed location was an important element of the programme. The actual emissions output of a vehicle is determined by more than its certified emission standard at the point of manufacture: it is the engine technology's general performance capability, in conjunction with the local driving conditions, that determines the actual output. As vehicles are a mobile emissions source, understanding the effect of vehicle technology requires working with the average fleet performance, or "fleet-weighted average emissions rate". This is the unit measure of performance of the general traffic flow passing through a given road corridor or network, as an average, over time. The flow composition can be representative of the national fleet population, but it may also feature particular vehicle types in a given locality and thereby have a different emissions 'signature'. A summary of the range of work completed as part of the VFECS programme is provided. The NZ Vehicle Fleet Emissions Model and the derived data set available in the NZ Traffic Emission Rates provide a significant step forward in the consistent analysis of practical, sustainable vehicle emissions policy and air-quality management in New Zealand.
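
    A fleet-weighted average emission rate is a share-weighted mean over vehicle classes. A sketch with illustrative classes, rates, and traffic shares (not NZ fleet data):

```python
# Fleet-weighted average emission rate sketch: each vehicle class's rate
# (g/km) weighted by its fraction of the traffic flow. All classes, rates,
# and shares below are illustrative.
fleet = [
    # (class, emission rate g/km, fraction of traffic flow)
    ("petrol car, pre-1996", 2.5, 0.20),
    ("petrol car, post-1996", 0.8, 0.55),
    ("diesel light commercial", 1.6, 0.15),
    ("heavy diesel", 6.0, 0.10),
]

# Shares must describe the whole flow:
assert abs(sum(f for _, _, f in fleet) - 1.0) < 1e-9

fleet_weighted_rate = sum(rate * frac for _, rate, frac in fleet)  # g/km
```

    Changing the shares, e.g. for a freight corridor with more heavy diesels, changes the corridor's emissions "signature" even with identical per-class rates.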

  10. MEPSA: minimum energy pathway analysis for energy landscapes.

    PubMed

    Marcos-Alcalde, Iñigo; Setoain, Javier; Mendieta-Moreno, Jesús I; Mendieta, Jesús; Gómez-Puertas, Paulino

    2015-12-01

    From conformational studies to atomistic descriptions of enzymatic reactions, potential and free energy landscapes can be used to describe biomolecular systems in detail. However, extracting the relevant data from complex 3D energy surfaces can sometimes be laborious. In this article, we present MEPSA (Minimum Energy Path Surface Analysis), a cross-platform, user-friendly tool for the analysis of energy landscapes from a transition state theory perspective. Some of its most relevant features are: identification of all the barriers and minima of the landscape at once, description of maxima edge profiles, detection of the lowest energy path connecting two minima, and generation of transition state theory diagrams along these paths. In addition to a built-in plotting system, MEPSA can save most of the generated data into easily parseable text files, allowing more versatile uses of MEPSA's output, such as the generation of molecular dynamics restraints from a calculated path. MEPSA is freely available (under the GPLv3 license) at http://bioweb.cbm.uam.es/software/MEPSA/. Contact: pagomez@cbm.csic.es. Supplementary data are available at Bioinformatics online.
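
    The core idea behind "the lowest energy path connecting two minima" can be illustrated with a toy minimax search on a gridded landscape: among all paths between two minima, find the one whose highest crossed energy (the saddle) is lowest. This is a toy illustration of the concept, not MEPSA's actual algorithm.

```python
import heapq

def lowest_barrier(grid, start, goal):
    """Smallest possible maximum energy along any 4-connected path from
    start to goal (row, col) on the energy grid (Dijkstra-like minimax)."""
    rows, cols = len(grid), len(grid[0])
    best = {start: grid[start[0]][start[1]]}
    heap = [(best[start], start)]
    while heap:
        barrier, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return barrier
        if barrier > best.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nb = max(barrier, grid[nr][nc])
                if nb < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nb
                    heapq.heappush(heap, (nb, (nr, nc)))
    return float("inf")

# Two minima (energy 0) separated by a ridge whose lowest saddle is 2:
grid = [
    [0, 5, 0],
    [1, 2, 1],
    [5, 5, 5],
]
barrier = lowest_barrier(grid, (0, 0), (0, 2))
```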

  11. Dynamic assessment of microbial ecology (DAME): a web app for interactive analysis and visualization of microbial sequencing data.

    PubMed

    Piccolo, Brian D; Wankhade, Umesh D; Chintapalli, Sree V; Bhattacharyya, Sudeepa; Chunqiao, Luo; Shankar, Kartik

    2018-03-15

    Dynamic assessment of microbial ecology (DAME) is a Shiny-based web application for interactive analysis and visualization of microbial sequencing data. DAME provides researchers not familiar with R programming the ability to access the most current R functions utilized for ecology and gene-sequencing data analyses. Currently, DAME supports group comparisons of several ecological estimates of α-diversity and β-diversity, along with differential abundance analysis of individual taxa. Using the Shiny framework, the user has complete control of all aspects of the data analysis, including sample/experimental group selection and filtering, estimate selection, statistical methods and visualization parameters. Furthermore, graphical and tabular outputs are supported by R packages using D3.js and are fully interactive. DAME was implemented in R but can be modified by Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript. It is freely available on the web at https://acnc-shinyapps.shinyapps.io/DAME/. Local installation and source code are available through GitHub (https://github.com/bdpiccolo/ACNC-DAME); any system with R can launch DAME locally provided the shiny package is installed. Contact: bdpiccolo@uams.edu.
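
    Two of the standard α-diversity estimates DAME exposes can be written down directly (generic textbook formulas, not DAME's internal R code): Shannon's H = −Σ pᵢ ln pᵢ and Simpson's D = 1 − Σ pᵢ².

```python
import math

def shannon(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) from raw taxon counts."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def simpson(counts):
    """Simpson diversity D = 1 - sum(p_i^2) from raw taxon counts."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

even = [25, 25, 25, 25]   # perfectly even community of 4 taxa
skewed = [97, 1, 1, 1]    # one dominant taxon: lower diversity
```

    An even community of S taxa maximizes Shannon diversity at ln S, which makes the indices easy to sanity-check.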

  12. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard; Bostelmann, F.

    It was recommended that an IAEA Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities, different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.

  13. The unprecedented 2014 Legionnaires' disease outbreak in Portugal: atmospheric driving mechanisms

    NASA Astrophysics Data System (ADS)

    Russo, Ana; Gouveia, Célia M.; Soares, Pedro M. M.; Cardoso, Rita M.; Mendes, Manuel T.; Trigo, Ricardo M.

    2018-03-01

    A large outbreak of Legionnaires' disease occurred in November 2014 near Lisbon, Portugal. This epidemic infected 377 individuals with the bacterium Legionella pneumophila, resulting in 14 deaths. The primary source of transmission was contaminated aerosolized water which, when inhaled, led to atypical pneumonia. The unseasonably warm temperatures during October 2014 may have played a role in the proliferation of Legionella species in cooling tower systems. The episode was further exacerbated by high relative humidity and a thermal inversion which limited the bacterial dispersion. Here, we analyze whether the Legionella outbreak occurred under conditions of extreme potential recirculation and/or stagnation. To achieve this goal, the Allwine and Whiteman approach was applied to a hindcast simulation covering the affected area during a nearly 20-year period (1989-2007) and then to an independent period covering the 2014 event (15 October to 13 November 2014). The average daily critical transport indices for the 1989-2007 period clearly indicate that the airshed is prone to stagnation, as these events are dominant through most of the study period (42% of days), relative to the occurrence of recirculation (18%) and ventilation (17%) events. The year 2014, however, was exceptional compared to the 1989-2007 period, with 53 and 33% of the days classified as under stagnation and recirculation conditions, respectively.
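
    The Allwine and Whiteman style transport indices named above can be sketched from hourly wind components: the wind run S (scalar sum of speeds), the resultant transport L (magnitude of the vector sum), and the recirculation factor R = 1 − L/S. Thresholds for labeling a day "stagnant" or "recirculating" are site-specific; the wind data below are synthetic.

```python
import math

def transport_indices(u, v):
    """u, v: hourly wind components (m/s) over one day.
    Returns (S, L, R): wind run, resultant transport, and recirculation
    factor R = 1 - L/S (0 = straight-line flow, 1 = full recirculation).
    Multiply S and L by 3600 for distances in metres."""
    S = sum(math.hypot(ui, vi) for ui, vi in zip(u, v))
    L = math.hypot(sum(u), sum(v))
    R = 1 - L / S if S > 0 else 0.0
    return S, L, R

# Steady westerly flow -> no recirculation (R ~ 0):
_, _, r_steady = transport_indices([3.0] * 24, [0.0] * 24)

# Flow that reverses halfway through the day -> strong recirculation (R ~ 1):
u_rev = [3.0] * 12 + [-3.0] * 12
_, _, r_recirc = transport_indices(u_rev, [0.0] * 24)
```

    Stagnation, by contrast, is flagged when the wind run S itself falls below a threshold, i.e. little transport in any direction.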

  14. The unprecedented 2014 Legionnaires' disease outbreak in Portugal: atmospheric driving mechanisms.

    PubMed

    Russo, Ana; Gouveia, Célia M; Soares, Pedro M M; Cardoso, Rita M; Mendes, Manuel T; Trigo, Ricardo M

    2018-03-23

    A large outbreak of Legionnaires' disease occurred in November 2014 near Lisbon, Portugal. This epidemic infected 377 individuals with the bacterium Legionella pneumophila, resulting in 14 deaths. The primary source of transmission was contaminated aerosolized water which, when inhaled, led to atypical pneumonia. The unseasonably warm temperatures during October 2014 may have played a role in the proliferation of Legionella species in cooling tower systems. The episode was further exacerbated by high relative humidity and a thermal inversion which limited the bacterial dispersion. Here, we analyze whether the Legionella outbreak occurred under conditions of extreme potential recirculation and/or stagnation. To achieve this goal, the Allwine and Whiteman approach was applied to a hindcast simulation covering the affected area during a nearly 20-year period (1989-2007) and then to an independent period covering the 2014 event (15 October to 13 November 2014). The average daily critical transport indices for the 1989-2007 period clearly indicate that the airshed is prone to stagnation, as these events are dominant through most of the study period (42% of days), relative to the occurrence of recirculation (18%) and ventilation (17%) events. The year 2014, however, was exceptional compared to the 1989-2007 period, with 53 and 33% of the days classified as under stagnation and recirculation conditions, respectively.

  15. [A prospective cohort study on injuries among school-age children with and without behavior problems].

    PubMed

    Peng, Ying-chun; Ni, Jin-fa; Tao, Fang-biao; Wu, Xi-ke

    2003-08-01

    To study the annual incidence of injuries and the relationship between behavior problems and injuries among school-age children. A prospective cohort study of injuries over a 1-year follow-up period was conducted among 2005 school-age children selected by cluster sampling from three primary schools in Maanshan city. The subjects were divided into exposed and unexposed groups according to behavior problems rated with the Rutter Child Behavior Questionnaire at the beginning of the study. A nonparametric test was performed to analyze the differences in injuries between the two groups of children, and the influential factors for injuries were analyzed with a logistic regression model for an ordinal multi-category response variable. The overall incidence rate for injuries in school-age children was 42.51%, while the rates among children with and without behavior problems were 64.87% and 38.85%, respectively, a significant difference (u = -6.054, P = 0.000). The incidence rates of injuries in children with antisocial (A), neurotic (N) and mixed (M) behavior were 66.99%, 67.41% and 61.40%, respectively, with no significant differences among them (u(A,N) = -0.052, P = 0.958; u(A,M) = -0.400, P = 0.689; u(N,M) = -0.364, P = 0.716). Multivariate analysis indicated that injuries in school-age children were associated with behavior problems, maternal age at childbirth, adverse conditions during the mother's pregnancy, the mother's educational background, safety precautions at home, and whether the child was accompanied by adults when travelling between school and home. Behavior problems seemed to be the major risk factor for injuries, and children with behavior problems represented a significant risk group among school-age children. When planning intervention strategies for injuries, behavior problems should be emphasized to ensure optimal effectiveness of the intervention.
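
    The incidence figures above come from straightforward cohort bookkeeping: cumulative incidence = injured children / children followed, computed per exposure group. The counts below are hypothetical, chosen only to illustrate the calculation (they roughly mimic the reported percentages but are not the study's data).

```python
# Cohort incidence sketch with hypothetical counts.
def cumulative_incidence(cases, n_followed):
    """Proportion of the followed group injured during the follow-up year."""
    return cases / n_followed

exposed = cumulative_incidence(180, 278)      # children with behavior problems
unexposed = cumulative_incidence(671, 1727)   # children without

# Risk ratio: how many times higher the injury risk is in the exposed group.
risk_ratio = exposed / unexposed
```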

  16. Experimental Constraints on a Dark Matter Origin for the DAMA Annual Modulation Effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aalseth, Craig E.; Barbeau, Phil; Cerdeno, D. G.

    2008-12-17

    (CoGeNT Collaboration) Received 7 July 2008; revised 6 August 2008; published 17 December 2008. A claim for evidence of dark matter interactions in the DAMA experiment has been recently reinforced. We employ a new type of germanium detector to conclusively rule out a standard isothermal galactic halo of weakly interacting massive particles as the explanation for the annual modulation effect leading to the claim. Bounds are similarly imposed on a suggestion that dark pseudoscalars might lead to the effect. We describe the sensitivity to light dark matter particles achievable with our device, in particular, to next-to-minimal supersymmetric model candidates.

  17. The status of the AMS system at MALT in its 20th year

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Hiroyuki; Nakano, Chuichiro; Tsuchiya, Yoko S.; Ito, Seiji; Morita, Akira; Kusuno, Haruka; Miyake, Yasuto; Honda, Maki; Bautista VII, Angel T.; Kawamoto, Marina; Tokuyama, Hironori

    2015-10-01

    MALT (Micro Analysis Laboratory, Tandem accelerator, The University of Tokyo) was designed for a 'highly sensitive and precise elemental and isotopic microanalysis system' using an ion-beam generated by a Pelletron™ 5UD tandem accelerator. Currently, a multi-nuclide AMS (10Be, 14C, 26Al, 36Cl, 129I) system is available and shows good performance in both precision and sensitivity, and the accelerator serves for PIXE, NRA, ERDA/RBS measurements as well. The total operation time of the accelerator has been over 95,000 hours since the start of MALT, 20 years ago. After the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident, many projects related to 129I have been conducted. The retrospective reconstruction of the 131I distribution at the accident from 129I is one of the most important missions for dose evaluation of the residents. The accident-derived 129I is also quite useful as a tracer for the general iodine dynamics in the environment. As a new tool for environmental assessment related to nuclear activity, including the global fallout from past atmospheric nuclear bomb testing, effects from the spent fuel reprocessing plant, and nuclear accidents such as Chernobyl and FDNPP, a 236U-AMS system is now under development.

  18. A positive correlation between immunohistochemical expression of CD31 and mast cell tryptase in odontogenic tumors.

    PubMed

    Kouhsoltani, Maryam; Halimi, Monireh; Dibazar, Sana

    2015-06-01

    In this study, we compared mast cell tryptase and CD31 expression between odontogenic tumors with the aim of predicting the clinical behavior of these lesions at the time of initial biopsy. We also evaluated the correlation between mast cell tryptase and CD31 expression to clarify the role of mast cells (MCs) in the growth of odontogenic tumors. Immunohistochemical staining with anti-MC tryptase and anti-CD31 antibodies was performed on 48 cases of odontogenic tumors, including solid ameloblastoma (SAM), unicystic ameloblastoma (UAM), odontogenic myxoma (OM), cystic calcifying odontogenic tumor (CCOT) and adenomatoid odontogenic tumor (AOT). Ten high-power fields were analyzed for each sample. Total MC count was significantly increased in SAM compared to the other odontogenic tumors (p<0.05). Microvessel density was statistically higher in SAM and AOT compared to the remaining odontogenic tumors (p<0.05). A significant correlation was observed between MCs and microvessels in odontogenic tumors (p=0.018, r=0.34). Our findings suggest a role for MCs in the aggressive clinical behavior of odontogenic tumors. The significant correlation found between MC count and microvessel density in odontogenic tumors is in agreement with the theory of participation of MCs in tumor progression. Targeting MC activity may represent an important nonsurgical therapeutic approach, especially for aggressive odontogenic tumors.

  19. Impact of implicit effects on uncertainties and sensitivities of the Doppler coefficient of a LWR pin cell

    NASA Astrophysics Data System (ADS)

    Hursin, Mathieu; Leray, Olivier; Perret, Gregory; Pautz, Andreas; Bostelmann, Friederike; Aures, Alexander; Zwermann, Winfried

    2017-09-01

    In the present work, the PSI and GRS sensitivity analysis (SA) and uncertainty quantification (UQ) methods, SHARK-X and XSUSA respectively, are compared for reactivity coefficient calculations; for reference, results from the TSUNAMI and SAMPLER modules of the SCALE code package are also provided. The main objective of this paper is to assess the impact of the implicit effect, i.e., the effect of cross-section perturbations on the self-shielding calculation, on Doppler coefficient SA and UQ. Analyses are done for a Light Water Reactor (LWR) pin cell based on Phase I of the UAM LWR benchmark. Neglecting implicit effects in XSUSA and TSUNAMI leads to deviations of a few percent in the sensitivity profiles relative to SAMPLER and TSUNAMI (with implicit effects), except for 238U elastic scattering. The implicit effect is much larger for the SHARK-X calculations because of its coarser energy group structure between 10 eV and 10 keV compared to the applied SCALE libraries. It is concluded that the influence of the implicit effect strongly depends on the energy mesh of the nuclear data library of the neutron transport solver involved in the UQ calculations and may be magnified by the response considered.

  20. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. Results of model-model comparisons are also presented, comparing standard AP8 and AE8 model predictions with the European Space Agency versions of AP8 and AE8 and with Russian trapped-radiation models.

  1. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented, comparing standard AP8 and AE8 model predictions with the European Space Agency versions of AP8 and AE8 and with Russian trapped-radiation models.

  2. Assessing transboundary influences in the lower Rio Grande Valley

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukerjee, S.; Shadwick, D.S.; Dean, K.E.

    1999-07-01

    The Lower Rio Grande Valley Transboundary Air Pollution Project (TAPP) was a US-Mexico Border XXI Program project to assess transboundary air pollution in and near Brownsville, Texas. The study used a three-site air monitoring network very close to the border to capture the direct impact of local sources and transboundary transport. Ambient data included particulate mass and elemental composition, VOCs, PAHs, pesticides, and meteorology. Also, near real-time PM{sub 2.5} mass measurements captured potential pollutant plume events occurring over 1-h periods. Data collected were compared to screening levels and other monitoring data to assess general air pollution impacts on nearby border communities. Wind sector analyses, chemical tracer analyses, principal component analyses, and other techniques were used to assess the extent of transboundary transport of air pollutants and identify possible transboundary air pollution sources. Overall, ambient levels were comparable to or lower than other urban and rural areas in Texas and elsewhere. Movement of air pollution across the border did not appear to cause noticeable deterioration of air quality on the US side of the Lower Rio Grande Valley. Dominant southeasterly winds from the Gulf of Mexico were largely responsible for the clean air conditions in the Brownsville airshed. Few observations of pollutants exceeded effects screening levels, almost all being VOCs; these appeared to be due to local events and immediate influences, not regional phenomena or persistent transboundary plumes.

  3. Chemical Composition and Source Apportionment of Size ...

    EPA Pesticide Factsheets

    The Cleveland airshed comprises a complex mixture of industrial source emissions that contribute to periods of non-attainment for fine particulate matter (PM2.5) and are associated with increased adverse health outcomes in the exposed population. The specific PM sources responsible for health effects, however, are not fully understood. Size-fractionated PM (coarse, fine, and ultrafine) samples were collected using a ChemVol sampler at an urban site (G.T. Craig (GTC)) and a rural site (Chippewa Lake (CLM)) from July 2009 to June 2010, and then chemically analyzed. The resulting speciated PM data were apportioned by EPA positive matrix factorization to identify emission sources for each size fraction and location. For comparison with the ChemVol results, PM samples were also collected with sequential dichotomous and passive samplers, and evaluated for source contributions to each sampling site. The ChemVol results showed that annual average concentrations of PM, elemental carbon, and inorganic elements in the coarse fraction at GTC were ~2, ~7, and ~3 times higher than those at CLM, respectively, while the smaller size fractions at both sites showed similar annual average concentrations. Seasonal variations of secondary aerosols (e.g., high NO3- levels in winter and high SO42- levels in summer) were observed at both sites. Source apportionment results demonstrated that the PM samples at GTC and CLM were enriched with local industrial sources (e.g., steel plant and coa
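    EPA PMF, used above, factorizes a speciated-concentration matrix into non-negative source profiles and source contributions. The sketch below shows the core idea with plain Lee-Seung multiplicative-update non-negative matrix factorization on a tiny made-up matrix; it is not EPA PMF, which additionally weights the fit by measurement uncertainties.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def frob_err(V, W, H):
    """Frobenius norm of the residual V - W*H."""
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0]))) ** 0.5

def nmf(V, k, iters=200, seed=0):
    """Factor a non-negative matrix V (m x n, samples x species) into
    contributions W (m x k) and profiles H (k x n) using Lee-Seung
    multiplicative updates for the Frobenius objective."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H
```

    The multiplicative form keeps every factor entry non-negative, which is what lets the rows of H be read as chemical source profiles.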

  4. Particle size distribution and composition in a mechanically ventilated school building during air pollution episodes.

    PubMed

    Parker, J L; Larson, R R; Eskelson, E; Wood, E M; Veranth, J M

    2008-10-01

    Particle count-based size distribution and PM(2.5) mass were monitored inside and outside an elementary school in Salt Lake City (UT, USA) during the winter atmospheric inversion season. The site is influenced by urban traffic, and the airshed is subject to periods of high PM(2.5) concentrations that are mainly submicron ammonium and nitrate. The school building has mechanical ventilation with filtration and variable-volume makeup air. Comparison of the indoor and outdoor particle size distributions on the five cleanest and five most polluted school days during the study showed that ambient submicron particulate matter (PM) penetrated the building, but indoor concentrations were about one-eighth of outdoor levels. The indoor:outdoor PM(2.5) mass ratio averaged 0.12, and the particle number ratio for sizes smaller than 1 microm averaged 0.13. The indoor submicron particle count and indoor PM(2.5) mass increased slightly during pollution episodes but remained well below outdoor levels. When the building was occupied, the indoor coarse particle count was much higher than ambient levels. These results contribute to understanding the relationship between ambient monitoring station data and actual human exposure inside institutional buildings, and confirm that staying inside a mechanically ventilated building during PM pollution episodes reduces exposure to outdoor submicron particles. The new data on a mechanically ventilated institutional building supplement similar studies made in residences.

  5. Evaluation of microstructure stability at the interfaces of Al-6061 welds fabricated using ultrasonic additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridharan, Niyanth S.; Gussev, Maxim N.; Parish, Chad M.

    Here, ultrasonic additive manufacturing (UAM) is a solid-state additive manufacturing process that uses fundamental principles of ultrasonic welding and sequential layering of tapes to fabricate complex three-dimensional (3-D) components. One of the factors limiting the use of this technology is the poor tensile strength along the z-axis. Recent work has demonstrated the improvement of the z-axis properties after post-processing treatments. The abnormally high stability of the grains at the interface during post-weld heat treatments is, however, not yet well understood. In this work we use multiscale characterization to understand the stability of the grains during post-weld heat treatments. Aluminum alloy (6061) builds, fabricated using ultrasonic additive manufacturing, were post-weld heat treated at 180, 330 and 580 °C. The grains close to the tape interfaces are stable during post-weld heat treatments at high temperatures (i.e., 580 °C). This is in contrast to rapid grain growth that takes place in the bulk. Transmission electron microscopy and atom-probe tomography display a significant enrichment of oxygen and magnesium near the stable interfaces. Based on the detailed characterization, two mechanisms are proposed and evaluated: nonequilibrium nano-dispersed oxides impeding the grain growth due to grain boundary pinning, or grain boundary segregation of magnesium and oxygen reducing the grain boundary energy.

  6. Evaluation of microstructure stability at the interfaces of Al-6061 welds fabricated using ultrasonic additive manufacturing

    DOE PAGES

    Sridharan, Niyanth S.; Gussev, Maxim N.; Parish, Chad M.; ...

    2018-03-06

    Here, ultrasonic additive manufacturing (UAM) is a solid-state additive manufacturing process that uses fundamental principles of ultrasonic welding and sequential layering of tapes to fabricate complex three-dimensional (3-D) components. One of the factors limiting the use of this technology is the poor tensile strength along the z-axis. Recent work has demonstrated the improvement of the z-axis properties after post-processing treatments. The abnormally high stability of the grains at the interface during post-weld heat treatments is, however, not yet well understood. In this work we use multiscale characterization to understand the stability of the grains during post-weld heat treatments. Aluminum alloy (6061) builds, fabricated using ultrasonic additive manufacturing, were post-weld heat treated at 180, 330 and 580 °C. The grains close to the tape interfaces are stable during post-weld heat treatments at high temperatures (i.e., 580 °C). This is in contrast to rapid grain growth that takes place in the bulk. Transmission electron microscopy and atom-probe tomography display a significant enrichment of oxygen and magnesium near the stable interfaces. Based on the detailed characterization, two mechanisms are proposed and evaluated: nonequilibrium nano-dispersed oxides impeding the grain growth due to grain boundary pinning, or grain boundary segregation of magnesium and oxygen reducing the grain boundary energy.

  7. A two-period assessment of changes in specialist contact in a high-risk pregnancy telemedical program.

    PubMed

    Britt, David W; Norton, Jonathan D; Hubanks, Amanda S; Navidad, Susan A; Perkins, Rosalyn J; Lowery, Curtis L

    2006-02-01

    The purpose was to examine the organizational impact of a state-wide high-risk pregnancy telemedical system, the Antenatal and Neonatal Guidelines, Education and Learning System (ANGELS), after the first year of its rollout. The focus is on several aspects of system organization, including the volume and diversity of patient-based telemedical consultations and weekly telemedical case discussions, telephone consultations, and changes in the pattern of birth-related patient transports. Individual data on patient transports and associated hospital days, provider-specialist telephone calls, and telemedical consultations were collected for two time periods: December 2002-May 2003 (prior to initiation of ANGELS) and December 2003-May 2004 (post-initiation of ANGELS). Appropriate statistical tests were constructed to compare the two periods. Significant increases were observed in the volume and geographic diversity of telemedical consultations and the volume of telephone consultations. There was a moderate but nonsignificant decrease in the number of maternal transports to the University of Arkansas for Medical Sciences (UAMS), and the average length of stay decreased. The type of specialist-provider and specialist-patient contact has changed as the ANGELS high-risk pregnancy telemedical system has evolved over its first year. We conclude that the rollout of the ANGELS program is changing the shape of high-risk patient care in Arkansas, and we attribute that to an evolving collegial network between specialists and generalists.

  8. Propionibacterium-Produced Coproporphyrin III Induces Staphylococcus aureus Aggregation and Biofilm Formation

    PubMed Central

    Wollenberg, Michael S.; Claesen, Jan; Escapa, Isabel F.; Aldridge, Kelly L.; Fischbach, Michael A.

    2014-01-01

    The majority of bacteria detected in the nostril microbiota of most healthy adults belong to three genera: Propionibacterium, Corynebacterium, and Staphylococcus. Among these staphylococci is the medically important bacterium Staphylococcus aureus. Almost nothing is known about interspecies interactions among bacteria in the nostrils. We observed that crude extracts of cell-free conditioned medium from Propionibacterium spp. induce S. aureus aggregation in culture. Bioassay-guided fractionation implicated coproporphyrin III (CIII), the most abundant extracellular porphyrin produced by human-associated Propionibacterium spp., as a cause of S. aureus aggregation. This aggregation response depended on the CIII dose and occurred during early stationary-phase growth, and a low pH (~4 to 6) was necessary but was not sufficient for its induction. Additionally, CIII induced plasma-independent S. aureus biofilm development on an abiotic surface in multiple S. aureus strains. In strain UAMS-1, CIII stimulation of biofilm depended on sarA, a key biofilm regulator. This study is one of the first demonstrations of a small-molecule-mediated interaction among medically relevant members of the nostril microbiota and the first description of a role for CIII in bacterial interspecies interactions. Our results indicate that CIII may be an important mediator of S. aureus aggregation and/or biofilm formation in the nostril or other sites inhabited by Propionibacterium spp. and S. aureus. PMID:25053784

  9. In situ characterization of uranium and americium oxide solid solution formation for CRMP process: first combination of in situ XRD and XANES measurements.

    PubMed

    Caisso, Marie; Picart, Sébastien; Belin, Renaud C; Lebreton, Florent; Martin, Philippe M; Dardenne, Kathy; Rothe, Jörg; Neuville, Daniel R; Delahaye, Thibaud; Ayral, André

    2015-04-14

    Transmutation of americium in heterogeneous mode through the use of U1-xAmxO2±δ ceramic pellets, also known as Americium Bearing Blankets (AmBB), has become a major research axis. Nevertheless, in order to consider future large-scale deployment, the processes involved in AmBB fabrication have to minimize fine-particle dissemination, which, due to the presence of americium, considerably increases the risk of contamination. New synthesis routes avoiding the use of pulverulent precursors are thus currently under development, such as the Calcined Resin Microsphere Pelletization (CRMP) process. It is based on the use of weak-acid resin (WAR) microspheres as precursors, loaded with actinide cations. After two specific calcinations under controlled atmospheres, the resin microspheres are converted into oxide microspheres composed of a monophasic U1-xAmxO2±δ phase. Understanding the different mechanisms during thermal conversion that lead to the release of organic matter and the formation of a solid solution thus appears essential. By combining in situ techniques such as XRD and XAS, it has become possible to identify the key temperatures for oxide formation and the corresponding oxidation states taken by uranium and americium during mineralization. This paper thus presents the first results on the mineralization of (U,Am)-loaded resin microspheres into a solid solution, through in situ XAS analysis correlated with HT-XRD.

  10. Contribution of the Staphylococcus aureus Atl AM and GL murein hydrolase activities in cell division, autolysis, and biofilm formation.

    PubMed

    Bose, Jeffrey L; Lehman, McKenzie K; Fey, Paul D; Bayles, Kenneth W

    2012-01-01

    The most prominent murein hydrolase of Staphylococcus aureus, AtlA, is a bifunctional enzyme that undergoes proteolytic cleavage to yield two catalytically active proteins, an amidase (AM) and a glucosaminidase (GL). Although the bifunctional nature of AtlA has long been recognized, most studies have focused on the combined functions of this protein in cell wall metabolism and biofilm development. In this study, we generated mutant derivatives of the clinical S. aureus isolate, UAMS-1, in which one or both of the AM and GL domains of AtlA have been deleted. Examination of these strains revealed that each mutant exhibited growth rates comparable to the parental strain, but showed clumping phenotypes and lysis profiles that were distinct from the parental strain and each other, suggesting distinct roles in cell wall metabolism. Given the known function of autolysis in the release of genomic DNA for use as a biofilm matrix molecule, we also tested the mutants in biofilm assays and found both AM and GL necessary for biofilm development. Furthermore, the use of enzymatically inactive point mutations revealed that both AM and GL must be catalytically active for S. aureus to form a biofilm. The results of this study provide insight into the relative contributions of AM and GL in S. aureus and demonstrate the contribution of Atl-mediated lysis in biofilm development.

  11. Daily ambient air pollution metrics for five cities: Evaluation of data-fusion-based estimates and uncertainties

    NASA Astrophysics Data System (ADS)

    Friberg, Mariel D.; Kahn, Ralph A.; Holmes, Heather A.; Chang, Howard H.; Sarnat, Stefanie Ebelt; Tolbert, Paige E.; Russell, Armistead G.; Mulholland, James A.

    2017-06-01

    Birmingham. The greatest improvements occur for O3 and PM2.5. Squared spatiotemporal correlation coefficients between simulations and observations, determined using cross-validation across all cities, are R2 = 0.88-0.93 (O3), 0.81-0.89 (SO4), 0.67-0.83 (PM2.5), 0.52-0.72 (NO3), 0.43-0.80 (NH4), 0.32-0.51 (OC), and 0.14-0.71 (PM10) for air pollutants of secondary and mixed origin. Results for relatively homogeneous pollutants of secondary origin tend to be better than those for more spatially heterogeneous (larger spatial gradients) pollutants of primary origin (NOx, CO, SO2 and EC). Generally, background concentrations and spatial concentration gradients reflect interurban airshed complexity and the effects of regional transport, whereas daily spatial pattern variability shows intra-urban consistency in the fused data. With sufficiently high CTM spatial resolution, traffic-related pollutants exhibit gradual concentration gradients that peak toward the urban centers. Ambient pollutant concentration uncertainty estimates for the fused data are both more accurate and smaller than those for either the observations or the model simulations alone.
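    The R2 values above are scored by cross-validation: each observation is predicted by a model fitted without it, and the pooled out-of-fold predictions are compared with the held-out observations. A minimal k-fold sketch for a simple least-squares line on hypothetical data (the paper's data-fusion model is far richer):

```python
import random

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def kfold_cv_r2(x, y, k=5, seed=0):
    """Pool out-of-fold predictions from a least-squares line y ~ a + b*x
    and score them against the held-out observations."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    obs, pred = [], []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        xs = [x[i] for i in train]
        ys = [y[i] for i in train]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
             / sum((xi - mx) ** 2 for xi in xs))
        a = my - b * mx
        for i in fold:
            obs.append(y[i])
            pred.append(a + b * x[i])
    return r_squared(obs, pred)
```

    Because every prediction comes from a model that never saw the corresponding observation, the score measures generalization rather than in-sample fit.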

  12. Models and role models.

    PubMed

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that fluoride is perhaps not sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Unlike studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries, and to evaluate products and novel agents that interfere with either of the processes. With these separate models in place, we suggest designing computer models to encompass the available information. Models, but also role models, are of the utmost importance in guiding research and researchers.

  13. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale behind their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found that the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered and represented by specialized technical terms. Based on these findings, we offer suggestions for improving model-based science teaching and learning using Model-It.

  14. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  15. Anthropogenic climate change has driven Lake Superior productivity beyond the range of Holocene variability

    NASA Astrophysics Data System (ADS)

    OBeirne, M. D.; Werne, J. P.; Hecky, R. E.; Johnson, T. C.; Katsev, S.; Reavie, E. D.

    2013-12-01

    Recent studies have noted that changes in Lake Superior's physical, chemical and biological processes are apparent, including a warming of the surface waters over the last 20 years at a rate twice as great as that of the surrounding airshed. These changes are often difficult to perceive as cause for concern when not placed within a historical context. In this study, bulk C and N abundances and stable isotope compositions were determined on sediments from three piston and corresponding gravity cores, representing a record of lake-wide paleoproductivity trends spanning the Holocene. These data are compared with the same measurements on eight multi-cores sampled at high resolution spanning the past ~200 years, which allows for comparison with recent (1800 A.D. to present) productivity trends. Throughout the Holocene, Lake Superior experienced a slow, steady increase in productivity consistent with conventional lake ontogeny. During the last 200 years, however, the Lake Superior basin has undergone biogeochemical changes that are unique in the context of the Holocene. Lake-wide sedimentary bulk organic carbon data indicate increasing primary production between 1900 and the present, as indicated by a ~2‰ increase in δ13Corg. In contrast, δ15Norg values, which increased throughout the Holocene, become progressively 15N-depleted after 1900, likely due to atmospheric deposition of NOx from fossil fuel combustion. The most recent increases in primary productivity are likely a response to increasing water temperatures, leading to longer ice-free periods as previously documented in Lake Superior.
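    The δ13C and δ15N values above use standard delta notation: the per-mil deviation of a sample's isotope ratio from a reference standard. A minimal sketch of the conversion (the ratio values below are illustrative, not the study's data):

```python
def delta_permil(r_sample, r_standard):
    """Delta notation: per-mil deviation of a sample isotope ratio
    (e.g. 13C/12C or 15N/14N) from a reference standard (VPDB for
    carbon, atmospheric N2 for nitrogen)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A 2 per-mil shift, the magnitude of the post-1900 delta13Corg increase
# reported above, corresponds to a 0.2% change in the raw isotope ratio.
shift = delta_permil(1.002, 1.000)
```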

  16. The relationship between airborne small ions and particles in urban environments

    NASA Astrophysics Data System (ADS)

    Ling, Xuan; Jayaratne, Rohan; Morawska, Lidia

    2013-11-01

    Ions play an important role in affecting climate and particle formation in the atmosphere. Small ions rapidly attach to particles in the air and, therefore, studies have shown that they are suppressed in polluted environments. Urban environments in particular are dominated by motor vehicle emissions and, since motor vehicles are a source of both particles and small ions, the relationship between these two parameters is not well known. In order to gain a better understanding of this relationship, an intensive campaign was undertaken in which particles and small ions of both signs were monitored over two-week periods at each of three sites, A, B and C, that were affected to varying degrees by vehicle emissions. Site A was close to a major road and reported the highest particle number and lowest small ion concentrations. Precursors from motor vehicle emissions gave rise to clear particle formation events on five days and, on each day, this was accompanied by a suppression of small ions. Observations at Site B, which was located within the urban airshed though not adjacent to motor traffic, showed particle enhancement but no formation events. Site C was a clean site, away from urban sources; it reported the lowest particle number and highest small ion concentration. The positive small ion concentration was 10%-40% higher than the corresponding negative value at all sites. These results confirm previous findings that there is a clear inverse relationship between small ions and particles in urban environments dominated by motor vehicle emissions.

  17. Source-to-exposure assessment with the Pangea multi-scale framework - case study in Australia.

    PubMed

    Wannaz, Cedric; Fantke, Peter; Lane, Joe; Jolliet, Olivier

    2018-01-24

    Effective planning of airshed pollution mitigation is often constrained by a lack of integrative analysis able to relate the relevant emitters to the receptor populations at risk. Both emitter and receptor perspectives are therefore needed to consistently inform emission and exposure reduction measures. This paper aims to extend the Pangea spatial multi-scale multimedia framework to evaluate source-to-receptor relationships for industrial sources of organic pollutants in Australia. Pangea solves a large compartmental system in parallel by block to determine arrays of masses at steady state for 100 000+ compartments and 4000+ emission scenarios, and further computes population exposure by inhalation and ingestion. From an emitter perspective, radial spatial distributions of population intakes show high spatial variation in intake fractions, from 0.68 to 33 ppm for benzene and from 0.006 to 9.5 ppm for formaldehyde, contrasting urban, rural, desert, and sea source locations. Extending the analysis to the receptor perspective, population exposures from the combined emissions of 4101 Australian point sources are more spatially extended for benzene, which travels over longer distances, than for formaldehyde, which has a more local impact. Decomposing exposure per industrial sector shows the petroleum and steel industries as the highest-contributing sectors for benzene, whereas the electricity sector and petroleum refining contribute most to formaldehyde exposures. Source apportionment identifies the main sources contributing to exposure at five locations. Overall, this paper demonstrates the value of addressing exposures both from an emitter perspective, well suited to inform product-oriented approaches such as LCA, and from a receptor perspective for health risk mitigation.
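    The intake fractions quoted in ppm above express the mass a population inhales per unit mass emitted. A minimal sketch of that unit conversion for the inhalation pathway, with hypothetical population, breathing-rate, concentration and emission numbers (not values from the paper):

```python
def intake_fraction_ppm(population, breathing_rate_m3_day, conc_ug_m3, emission_kg_day):
    """Population intake fraction: mass inhaled by the exposed population
    per unit mass emitted, expressed in parts per million (ppm)."""
    intake_ug_day = population * breathing_rate_m3_day * conc_ug_m3
    emission_ug_day = emission_kg_day * 1e9   # 1 kg = 1e9 micrograms
    return 1e6 * intake_ug_day / emission_ug_day

# Hypothetical illustration: one million people breathing 13 m3/day at a
# modelled increment of 1 microgram/m3 from a source emitting 1000 kg/day.
iF = intake_fraction_ppm(1_000_000, 13.0, 1.0, 1000.0)
```

    Dense urban source locations drive both the concentration increment and the exposed population upward, which is why the paper's urban-versus-remote intake fractions span more than an order of magnitude.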

  18. Summarising climate and air quality (ozone) data on self-organising maps: a Sydney case study.

    PubMed

    Jiang, Ningbo; Betts, Alan; Riley, Matt

    2016-02-01

    This paper explores the classification and visualisation utility of the self-organising map (SOM) method in the context of New South Wales (NSW), Australia, using gridded NCEP/NCAR geopotential height reanalysis data for east Australia, together with multi-site meteorological and air quality data for Sydney from the NSW Office of Environment and Heritage Air Quality Monitoring Network. A twice-daily synoptic classification has been derived for east Australia for the period 1958-2012. The classification has not only reproduced the typical synoptic patterns previously identified in the literature but also provided an opportunity to visualise the subtle, non-linear changes in the eastward-migrating synoptic systems influencing NSW (including Sydney). The summarisation of long-term, multi-site air quality/meteorological data from the Sydney basin on the SOM plane has identified a set of typical air pollution/meteorological spatial patterns in the region. Importantly, the examination of these patterns in relation to synoptic weather types has provided important visual insights into how local and synoptic meteorological conditions interact and jointly affect the variability of air quality. The study illustrates that while synoptic circulation types are influential, the within-type variability in mesoscale flows plays a critical role in determining local ozone levels in Sydney. These results indicate that the SOM can be a useful tool for assessing the impact of weather and climatic conditions on air quality in the regional airshed. This study further promotes the use of the SOM method in environmental research.
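A self-organising map of the kind used for such synoptic classification can be sketched in a few lines: each map node holds a codebook vector that is pulled toward presented samples, with a neighbourhood kernel that shrinks over training. This is a generic illustration only; the grid size, decay schedules, and toy 6-dimensional "fields" below are assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, rows=4, cols=5, iters=2000, lr0=0.5, sigma0=2.0):
    """Minimal online SOM: returns a (rows*cols, n_features) codebook."""
    n_nodes = rows * cols
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = X[rng.integers(0, len(X), n_nodes)].astype(float)   # init from data
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))         # best-matching unit
        lr = lr0 * np.exp(-t / iters)                       # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                 # shrinking neighbourhood
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # distances on the map grid
        h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood kernel
        W += lr * h[:, None] * (x - W)
    return W

# Toy "pressure fields": two synthetic regimes plus noise
X = np.vstack([rng.normal(0, 0.1, (100, 6)) + [1, 0, 1, 0, 1, 0],
               rng.normal(0, 0.1, (100, 6)) + [0, 1, 0, 1, 0, 1]])
W = train_som(X)
labels = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
print("distinct map nodes used:", len(set(labels.tolist())))
```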

  19. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  20. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
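The first step described above, extracting a time series of a hidden parameter by Bayesian filtering of a physics-based model, can be illustrated with a toy augmented-state ensemble Kalman filter. This is a generic sketch, not the paper's algorithm, and the scalar model and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "physics": x_{t+1} = a * x_t + w_t, with the parameter a unknown.
a_true, q, r = 0.9, 0.05, 0.1     # true parameter, process and obs variances
T, N = 400, 100                   # time steps, ensemble size

x, obs = 1.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    obs.append(x + rng.normal(0, np.sqrt(r)))

# Augmented ensemble: columns are (x, a); a evolves as a slow random walk.
ens = np.column_stack([rng.normal(0, 1, N), rng.uniform(0.0, 1.5, N)])
a_est = []
for y in obs:
    ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0, np.sqrt(q), N)  # forecast
    ens[:, 1] += rng.normal(0, 0.01, N)                               # parameter walk
    P = np.cov(ens.T)                          # joint covariance of (x, a)
    K = P[:, 0] / (P[0, 0] + r)                # gain for the scalar obs y = x + v
    innov = y + rng.normal(0, np.sqrt(r), N) - ens[:, 0]   # perturbed observations
    ens += K[None, :] * innov[:, None]         # update both state and parameter
    a_est.append(ens[:, 1].mean())

print(f"true a = {a_true}, estimated a (last 100 steps) = {np.mean(a_est[-100:]):.3f}")
```

The cross-covariance between the state and the appended parameter is what lets the filter recover the parameter from state observations alone.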

  1. Global Carbon Cycle Modeling in GISS ModelE2 GCM

    NASA Astrophysics Data System (ADS)

    Aleinov, I. D.; Kiang, N. Y.; Romanou, A.; Romanski, J.

    2014-12-01

    Consistent and accurate modeling of the Global Carbon Cycle remains one of the main challenges for Earth System Models. The NASA Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (GCM) was recently equipped with a complete Global Carbon Cycle algorithm, consisting of three integrated components: the Ent Terrestrial Biosphere Model (Ent TBM), an Ocean Biogeochemistry Module, and an atmospheric CO2 tracer. Ent TBM provides CO2 fluxes from the land surface to the atmosphere. Its biophysics utilizes the well-known photosynthesis functions of Farquhar, von Caemmerer, and Berry and of Farquhar and von Caemmerer, and the stomatal conductance model of Ball and Berry. Its phenology is based on temperature, drought, and radiation fluxes, and growth is controlled via allocation of carbon from labile carbohydrate reserve storage to different plant components. Soil biogeochemistry is based on the Carnegie-Ames-Stanford (CASA) model of Potter et al. The ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM) computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. Atmospheric CO2 is advected with a quadratic upstream algorithm implemented in the atmospheric part of ModelE2. Here we present the results for pre-industrial equilibrium and modern transient simulations and provide a comparison to available observations. We also discuss the process of validation and tuning of particular algorithms used in the model.
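The Ball and Berry stomatal conductance relation mentioned above has a simple closed form, gs = m * A * rh / cs + b. A sketch with assumed slope and intercept values (m and b vary by vegetation type, and the example conditions are illustrative, not taken from Ent TBM):

```python
def ball_berry_gs(A, rh, cs, m=9.0, b=0.01):
    """Stomatal conductance to water vapour (mol m^-2 s^-1) from the
    Ball-Berry relation gs = m * A * rh / cs + b.

    A  : net photosynthesis (umol CO2 m^-2 s^-1)
    rh : relative humidity at the leaf surface (0-1)
    cs : CO2 mole fraction at the leaf surface (umol mol^-1)
    m, b : empirical slope and intercept (assumed values here)
    """
    return m * A * rh / cs + b

# Illustrative conditions: well-watered midday leaf vs a stressed leaf
gs_wet = ball_berry_gs(A=12.0, rh=0.70, cs=400.0)
gs_dry = ball_berry_gs(A=3.0, rh=0.35, cs=400.0)
print(f"gs well-watered = {gs_wet:.3f}, gs stressed = {gs_dry:.3f} mol m^-2 s^-1")
```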

  2. Target Scattering Metrics: Model-Model and Model-Data Comparisons

    DTIC Science & Technology

    2017-12-13

    measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals...candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may...be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons. INTRODUCTION Metrics for

  3. Target Scattering Metrics: Model-Model and Model-Data Comparisons

    DTIC Science & Technology

    2017-12-13

    measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals...candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may...be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons. INTRODUCTION Metrics for

  4. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406

  5. [Bone remodeling and modeling/mini-modeling].

    PubMed

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone at the fetal and developmental stages, while bone remodeling, the replacement of old bone with new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become active osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporosis treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts form over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, are introduced.

  6. Vector models and generalized SYK models

    DOE PAGES

    Peng, Cheng

    2017-05-23

    Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  7. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-09-08

    Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. Copyright © 2014 John Wiley & Sons, Inc.

  8. The Essential Role of Tethered Balloons in Characterizing Boundary Layer Structure and Evolution during Discover-AQ

    NASA Astrophysics Data System (ADS)

    Clark, R. D.

    2014-12-01

    The NASA DISCOVER-AQ (Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality) mission provided the opportunity to observe the influence of local and regional circulations on the structure and evolution of the boundary layer (BL), and in turn to study the associated effects on air quality and aerosol trends within four different airsheds. An extended network of ground-based instruments, balloon-borne profilers, and remote sensing instruments supported the in-situ airborne measurements made by the NASA aircraft in capturing the structure and evolution of the daytime BL. The Millersville University Atmospheric Research and Aerostat Facility (MARAF) is one of many assets deployed for DISCOVER-AQ. Central to MARAF is a heavy-lift-capacity tethered balloon (aerostat) used to obtain high-resolution profiles of meteorological variables, trace gases, and particulates in the BL. The benefit of including a tethered balloon is that it can fill a data void between the surface and the lowest altitudes flown by the aircraft and provide critical time-height series for ground-based remote sensing instruments in the layer below their first range gate. MARAF also includes an acoustic sodar with RASS, an MPL4 micropulse lidar, a 4-meter flux tower, a rawinsonde system, a suite of trace gas analyzers (O3, NOx/NO2/NO, CO, and SO2), a 3-wavelength nephelometer, and particle sizers/counters spanning the range from 10 nm to 10 microns. MARAF is capable of providing a detailed and nearly continuous Eulerian characterization of the surface layer and lower BL and, with proper FAA airspace authorization, can be deployed both day and night. Three case studies will be presented that incorporate the MARAF observations into the combined assets of DISCOVER-AQ to better characterize: 1) bay breeze convergence, recirculation, and ramp-up events in Edgewood, MD in July 2011; 2) aerosol transport over the Central Valley, CA in January 2013; and 3) multiple sea-bay breeze

  9. The SELGIFS data challenge: generating synthetic observations of CALIFA galaxies from hydrodynamical simulations

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Casado, J.; Ascasibar, Y.; García-Benito, R.; Galbany, L.; Sánchez-Blázquez, P.; Sánchez, S. F.; Rosales-Ortega, F. F.; Scannapieco, C.

    2018-06-01

    In this work we present a set of synthetic observations that mimic the properties of the Integral Field Spectroscopy (IFS) survey CALIFA, generated using radiative transfer techniques applied to hydrodynamical simulations of galaxies in a cosmological context. The simulated spatially-resolved spectra include stellar and nebular emission, kinematic broadening of the lines, and dust extinction and scattering. The results of the radiative transfer simulations have been post-processed to reproduce the main properties of the CALIFA V500 and V1200 observational setups. The data have been further formatted to mimic the CALIFA survey in terms of field-of-view size, spectral range and sampling. We have included the effect of the spatial and spectral Point Spread Functions affecting CALIFA observations, and added detector noise after characterizing it on a sample of 367 galaxies. The simulated datacubes are suited to be analysed by the same algorithms used on real IFS data. In order to provide a benchmark to compare the results obtained applying IFS observational techniques to our synthetic datacubes, and to test the calibration and accuracy of the analysis tools, we have computed the spatially-resolved properties of the simulations. Hence, we provide maps derived directly from the hydrodynamical snapshots or the noiseless spectra, in a way that is consistent with the values recovered by the observational analysis algorithms. Both the synthetic observations and the product datacubes are public and can be found in the collaboration website http://astro.ft.uam.es/selgifs/data_challenge/.
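The PSF-and-noise post-processing step can be sketched generically: blur a noiseless model image with a spatial PSF and add Gaussian detector noise. The kernel size, FWHM, and noise level below are assumptions for illustration, not CALIFA's characterised values:

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """2-D Gaussian PSF kernel, normalised to unit sum."""
    sigma = fwhm / 2.3548                      # FWHM -> sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def mimic_observation(image, fwhm_pix=2.5, noise_sigma=0.02, seed=0):
    """Blur a noiseless model image with the spatial PSF and add detector
    noise (a hedged sketch of the post-processing; numbers are assumed)."""
    psf = gaussian_psf(11, fwhm_pix)
    # FFT convolution (same-size output, periodic boundaries); the padded
    # kernel's centre sits at (5, 5), so roll back to recentre the result.
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(psf, s=image.shape)))
    blurred = np.roll(blurred, (-5, -5), axis=(0, 1))
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0, noise_sigma, image.shape)

model = np.zeros((64, 64))
model[32, 32] = 1.0                            # unit-flux point source
obs = mimic_observation(model)
print(f"peak drops from 1.0 to {obs.max():.2f} after PSF blurring")
```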

  10. Temperature Influences the Production and Transport of Saxitoxin and the Expression of sxt Genes in the Cyanobacterium Aphanizomenon gracile

    PubMed Central

    Cirés, Samuel; Delgado, Adrián; González-Pleiter, Miguel; Quesada, Antonio

    2017-01-01

    The cyanobacterium Aphanizomenon gracile is the most widely distributed producer of the potent neurotoxin saxitoxin in freshwaters. In this work, total and extracellular saxitoxin and the transcriptional response of three genes linked to saxitoxin biosynthesis (sxtA) and transport (sxtM, sxtPer) were assessed in Aphanizomenon gracile UAM529 cultures under temperatures covering its annual cycle (12 °C, 23 °C, and 30 °C). Temperature influenced saxitoxin production being maximum at high temperatures (30 °C) above the growth optimum (23 °C), concurring with a 4.3-fold increased sxtA expression at 30 °C. Extracellular saxitoxin transport was temperature-dependent, with maxima at extremes of temperature (12 °C with 16.9% extracellular saxitoxin; and especially 30 °C with 53.8%) outside the growth optimum (23 °C), coinciding with a clear upregulation of sxtM at both 12 °C and 30 °C (3.8–4.1 fold respectively), and yet with just a slight upregulation of sxtPer at 30 °C (2.1-fold). Nitrate depletion also induced a high extracellular saxitoxin release (51.2%), although without variations of sxtM and sxtPer transcription, and showing evidence of membrane damage. This is the first study analysing the transcriptional response of sxtPer under environmental gradients, as well as the effect of temperature on putative saxitoxin transporters (sxtM and sxtPer) in cyanobacteria in general. PMID:29027918

  11. Temperature Influences the Production and Transport of Saxitoxin and the Expression of sxt Genes in the Cyanobacterium Aphanizomenon gracile.

    PubMed

    Cirés, Samuel; Delgado, Adrián; González-Pleiter, Miguel; Quesada, Antonio

    2017-10-13

    The cyanobacterium Aphanizomenon gracile is the most widely distributed producer of the potent neurotoxin saxitoxin in freshwaters. In this work, total and extracellular saxitoxin and the transcriptional response of three genes linked to saxitoxin biosynthesis (sxtA) and transport (sxtM, sxtPer) were assessed in Aphanizomenon gracile UAM529 cultures under temperatures covering its annual cycle (12 °C, 23 °C, and 30 °C). Temperature influenced saxitoxin production being maximum at high temperatures (30 °C) above the growth optimum (23 °C), concurring with a 4.3-fold increased sxtA expression at 30 °C. Extracellular saxitoxin transport was temperature-dependent, with maxima at extremes of temperature (12 °C with 16.9% extracellular saxitoxin; and especially 30 °C with 53.8%) outside the growth optimum (23 °C), coinciding with a clear upregulation of sxtM at both 12 °C and 30 °C (3.8-4.1-fold respectively), and yet with just a slight upregulation of sxtPer at 30 °C (2.1-fold). Nitrate depletion also induced a high extracellular saxitoxin release (51.2%), although without variations of sxtM and sxtPer transcription, and showing evidence of membrane damage. This is the first study analysing the transcriptional response of sxtPer under environmental gradients, as well as the effect of temperature on putative saxitoxin transporters (sxtM and sxtPer) in cyanobacteria in general.

  12. Geologic Framework Model Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models

  13. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  14. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
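The paper's specific weighting and confidence-interval construction are not reproduced here; as a generic illustration of frequentist model averaging, smoothed AIC weights can combine candidate-model estimates instead of picking a single winner (all numbers below are hypothetical):

```python
import numpy as np

def aic_weights(aics):
    """Smoothed AIC weights: w_k proportional to exp(-deltaAIC_k / 2)."""
    d = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-d / 2.0)
    return w / w.sum()

# Hypothetical candidate models: (AIC, estimate of the parameter of interest)
aics = [102.3, 100.1, 104.8]
estimates = [0.42, 0.47, 0.35]

w = aic_weights(aics)
avg = float(np.dot(w, estimates))
print("weights:", w.round(3), "averaged estimate:", round(avg, 3))
```

Rather than discarding all but the best model, every candidate contributes in proportion to its support, which is the compromise between selection and the full model that the abstract describes.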

  15. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    PubMed

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  16. Translating building information modeling to building energy modeling using model view definition.

    PubMed

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  17. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    PubMed Central

    Kim, Jong Bum; Clayton, Mark J.; Haberl, Jeff S.

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process. PMID:25309954

  18. Models Archive and ModelWeb at NSSDC

    NASA Astrophysics Data System (ADS)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 accesses to the models archive.

  19. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  20. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    PubMed

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13,734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/. The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  1. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies in the literature investigate the accuracy of these models, for safety-of-life applications it is crucial to study and model their worst-case performance at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project, funded by the ESA PECS programme, is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data are present) and taking the point where the cumulative distribution reaches an exceedance level of 10^-7. While the resulting standard deviation is much higher than the estimated standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual
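The overbounding idea can be illustrated numerically: find the smallest zero-mean Gaussian whose tail probability dominates the empirical exceedance of the residuals at every sampled quantile. This is a simplified, symmetric tail-overbounding sketch on synthetic residuals, not the RTCA or INTRO procedure:

```python
import numpy as np
from statistics import NormalDist

def overbound_sigma(residuals):
    """Smallest sigma of a zero-mean Gaussian whose two-sided tail probability
    overbounds the empirical exceedance of |residual| at every sample beyond
    the median (tail region only, to avoid the degenerate near-zero quantiles)."""
    r = np.sort(np.abs(residuals))
    n = len(r)
    exceed = (n - np.arange(n)) / n               # empirical P(|R| >= r_(i))
    mask = exceed <= 0.5                          # restrict to the tail
    z = np.array([NormalDist().inv_cdf(1.0 - p / 2.0) for p in exceed[mask]])
    return float(np.max(r[mask] / z))             # sigma needed at the worst quantile

rng = np.random.default_rng(2)
res = rng.normal(0, 0.05, 100_000)                # synthetic zenith-delay residuals (m)
sigma_ob = overbound_sigma(res)
print(f"best-fit sigma ~0.05 m, overbounding sigma = {sigma_ob:.3f} m")
```

With heavier-tailed residuals, the gap between the best-fit sigma and the overbounding sigma grows, which is the effect the 0.05 m versus 0.12 m figures above reflect.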

  2. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  3. Modeling complexes of modeled proteins.

    PubMed

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
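The Cα RMSD used above as the accuracy measure is simple to compute once the two structures share a coordinate frame. A minimal sketch (it assumes optimal superposition, e.g. by the Kabsch algorithm, has already been performed; the coordinates are made up for illustration):

```python
import math

def ca_rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) C-alpha coordinates, assumed already superimposed."""
    assert len(coords_a) == len(coords_b) and coords_a
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Two hypothetical 2-residue traces, the second shifted by (3, 4, 0):
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(3.0, 4.0, 0.0), (4.0, 4.0, 0.0)]
deviation = ca_rmsd(model, target)  # each atom is displaced by 5 Å
```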

  4. Metabolic network modeling with model organisms.

    PubMed

    Yilmaz, L Safak; Walhout, Albertha Jm

    2017-02-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems-level predictions of metabolism in a variety of organisms. Different types of predictions with different accuracy levels can be made depending on the applied experimental constraints, ranging from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for a basic understanding of metabolism and, in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of the most highly used model organisms, with emphasis on recent reconstructions. Published by Elsevier Ltd.
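At its core, FBA is a linear program: maximize an objective flux (typically biomass) subject to steady-state mass balance S·v = 0 and flux bounds, where measured exchange fluxes enter as bounds. A minimal sketch assuming SciPy is available; the toy three-reaction network and the uptake bound of 10 are invented for illustration, not taken from any reconstruction discussed in the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A_ext -> A -> B -> B_ext ("biomass" export).
# Columns = reactions (v1 uptake, v2 conversion, v3 biomass export);
# rows = internal metabolites A and B. Steady state requires S @ v = 0.
S = np.array([[1.0, -1.0,  0.0],   # A: produced by v1, consumed by v2
              [0.0,  1.0, -1.0]])  # B: produced by v2, consumed by v3

bounds = [(0.0, 10.0),    # uptake capped at 10 (a "measured exchange flux")
          (0.0, 1000.0),  # internal conversion, effectively unbounded
          (0.0, 1000.0)]  # biomass export

c = np.array([0.0, 0.0, -1.0])  # linprog minimizes, so negate the objective

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
biomass_flux = -res.fun  # growth is capped by the uptake constraint
```

In this toy, the steady-state constraint forces v1 = v2 = v3, so the optimum equals the uptake bound; genome-scale models have the same structure with thousands of reactions.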

  5. Metabolic network modeling with model organisms

    PubMed Central

    Yilmaz, L. Safak; Walhout, Albertha J.M.

    2017-01-01

    Flux balance analysis (FBA) with genome-scale metabolic network models (GSMNM) allows systems-level predictions of metabolism in a variety of organisms. Different types of predictions with different accuracy levels can be made depending on the applied experimental constraints, ranging from measurement of exchange fluxes to the integration of gene expression data. Metabolic network modeling with model organisms has pioneered method development in this field. In addition, model organism GSMNMs are useful for a basic understanding of metabolism and, in the case of animal models, for the study of metabolic human diseases. Here, we discuss GSMNMs of the most highly used model organisms, with emphasis on recent reconstructions. PMID:28088694

  6. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allows for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, as well as other classes of models that are only possible to describe within the multinomial N-mixture framework.
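As an illustration of how such protocols induce a multinomial observation model, consider removal sampling with a constant per-pass detection probability p: an individual is first removed on pass k with probability (1-p)^(k-1)·p, and the cell probabilities plus the "never detected" remainder sum to one. A hypothetical minimal sketch (not code from the book chapter):

```python
def removal_cell_probs(p, passes):
    """Multinomial cell probabilities for a removal design:
    first capture on pass k has probability (1 - p)**(k - 1) * p,
    with a final cell for individuals never detected."""
    probs = [(1.0 - p) ** k * p for k in range(passes)]
    probs.append(1.0 - sum(probs))  # never-detected cell
    return probs

# Three removal passes with p = 0.5
probs = removal_cell_probs(0.5, 3)

# Expected counts per cell for a local abundance of N = 80 individuals
expected = [80 * q for q in probs]
```

The per-pass structure is what gives these protocols their extra information: the decay of counts across passes identifies p separately from abundance, which a single total count cannot do.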

  7. EasyModeller: A graphical interface to MODELLER

    PubMed Central

    2010-01-01

    Background MODELLER is a program for automated protein homology modeling. It is one of the most widely used tools for homology or comparative modeling of protein three-dimensional structures, but most users find it difficult to start with MODELLER as it is command-line based and requires knowledge of basic Python scripting to use efficiently. Findings The study was designed with the aim of developing the "EasyModeller" tool as a frontend graphical interface to MODELLER using Perl/Tk, which can be used as a standalone tool on the Windows platform with MODELLER and Python preinstalled. It helps inexperienced users to perform modeling, assessment, visualization, and optimization of protein models in a simple and straightforward way. Conclusion EasyModeller provides a straightforward graphical interface and functions as a stand-alone tool which can be used on a standard personal computer with Microsoft Windows as the operating system. PMID:20712861

  8. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
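Of the criterion-based techniques mentioned, AIC-based model averaging is the easiest to illustrate: each model's weight is proportional to exp(-ΔAIC/2), where ΔAIC is its AIC minus the minimum AIC across the candidate set. A minimal sketch with made-up AIC values (not from the case studies):

```python
import math

def akaike_weights(aics):
    """Model-averaging weights from AIC values: w_i ∝ exp(-ΔAIC_i / 2).
    Subtracting the best AIC first keeps the exponentials well scaled."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Three hypothetical candidate groundwater models
weights = akaike_weights([100.0, 102.0, 110.0])
```

The same weighting template applies to BIC- or KIC-based averaging by swapping in the corresponding criterion, which is precisely why the choice of criterion (and its underlying assumptions) can shift the relative model ranks so strongly.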

  9. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  10. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  11. Using Moss to Detect Fine-Scaled Deposition of Heavy Metals in Urban Environments

    NASA Astrophysics Data System (ADS)

    Jovan, S.; Donovan, G.; Demetrios, G.; Monleon, V. J.; Amacher, M. C.

    2017-12-01

    Mosses are commonly used as bio-indicators of heavy metal deposition to forests. Their application in urban airsheds is relatively rare. Our objective was to develop fine-scaled, city-wide maps for heavy metals in Portland, Oregon, to identify pollution "hotspots" and serve as a screening tool for more effective placement of air quality monitoring instruments. In 2013 we measured twenty-two elements in epiphytic moss sampled on a 1 km x 1 km sampling grid (n = 346). We detected large hotspots of cadmium and arsenic in two neighborhoods associated with stained-glass manufacturers. Air instruments deployed by local regulators measured cadmium concentrations 49 times and arsenic levels 155 times the state health benchmarks. Moss maps also detected a large nickel hotspot in a neighborhood near a forge, where air instruments later measured concentrations 4 times the health benchmark. In response, the facilities implemented new pollution controls, air quality improved in all three affected neighborhoods, revision of regulations for stained-glass furnace emissions is underway, and Oregon's governor launched an initiative to develop health-based (vs. technology-based) regulations for air toxics in the state. The moss maps also indicated a couple dozen smaller hotspots of heavy metals, including lead, chromium, and cobalt, in Portland neighborhoods. Ongoing follow-up work includes: 1) use of moss sampling by local regulators to investigate the source and extent of the smaller hotspots, 2) use of lead isotopes to determine the origins of the higher lead levels observed in moss collected from the inner city, and 3) co-location of air instruments and moss sampling to determine the accuracy, timeframe represented, and seasonality of heavy metals in moss.

  12. A satellite view of the sources and interannual variability of free tropospheric PAN over the eastern Pacific Ocean during summer and its timeline for trend detection

    NASA Astrophysics Data System (ADS)

    Zhu, L.; Fischer, E. V.; Payne, V.; Walker, T. W.; Worden, J. R.; Jiang, Z.; Kulawik, S. S.

    2016-12-01

    Peroxyacetyl nitrate (PAN) is the most important reservoir for nitrogen oxide radicals (NOx = NO + NO2) in the troposphere and plays a significant role in the redistribution of NOx to remote regions. There is strong evidence that PAN decomposition in specific plumes of Asian origin subsiding over the eastern Pacific Ocean can lead to significant ozone (O3) enhancements in the troposphere. Thus, quantifying the spatial and temporal variability of PAN over the eastern Pacific Ocean is an important part of understanding the O3 budget upwind of the North American airshed. Here we present observations of PAN from the Tropospheric Emission Spectrometer (TES) over the eastern Pacific for July 2006-2010. We focus our analysis on July because prior work based on in situ observations has primarily addressed the transpacific transport of PAN in spring. Plumes containing elevated PAN are present almost every day in the month of July, and we show that elevated PAN observed in July has multiple sources, including fires in Siberia, anthropogenic and lightning sources in eastern China, and re-circulated pollution from the continental U.S. We provide examples of each type of source using both HYSPLIT trajectories and a GEOS-Chem adjoint sensitivity analysis. Based on the variability observed in the TES PAN retrievals over this region, we predict it would be faster to detect a trend of a given magnitude in PAN using satellite observations over the eastern Pacific Ocean region than using surface in situ observations at one site, and that a trend of a given magnitude would be more quickly detected in summer than in spring.

  13. Atmospheric mercury and fine particulate matter in coastal New England: implications for mercury and trace element sources in the northeastern United States

    USGS Publications Warehouse

    Kolker, Allan; Engle, Mark A.; Peucker-Ehrenbrink, Bernhard; Geboy, Nicholas J.; Krabbenhoft, David P.; Bothner, Michael H.; Tate, Michael T.

    2013-01-01

    Intensive sampling of ambient atmospheric fine particulate matter was conducted at Woods Hole, Massachusetts over a four-month period from 3 April to 29 July, 2008, in conjunction with a year-long deployment of the USGS Mobile Mercury Lab. Results were obtained for trace elements in fine particulate matter concurrently with determination of ambient atmospheric mercury speciation and concentrations of ancillary gases (SO2, NOx, and O3). For particulate matter, trace element enrichment factors greater than 10 relative to crustal background values were found for As, Bi, Cd, Cu, Hg, Pb, Sb, V, and Zn, indicating contribution of these elements by anthropogenic sources. For other elements, enrichments are consistent with natural marine (Na, Ca, Mg, Sr) or crustal (Ba, Ce, Co, Cs, Fe, Ga, La, Rb, Sc, Th, Ti, U, Y) sources, respectively. Positive matrix factorization was used together with concentration-weighted air-mass back trajectories to better define element sources and their locations. Our analysis, based on events exhibiting the 10% highest PM2.5 contributions for each source category, identifies coal-fired power stations concentrated in the U.S. Ohio Valley, metal smelting in eastern Canada, and marine and crustal sources with surprisingly similar back trajectories, each at times sampling Atlantic coastal airsheds. This pattern is consistent with delivery of Saharan dust, peaking in summer at the latitude of Florida, and its northward transport up the Atlantic Coast by the clockwise circulation of the summer Bermuda High. Results for mercury speciation show diurnal production of RGM by photochemical oxidation of Hg° in a marine environment, and periodic traverse of the study area by correlated RGM-SO2(NOx) plumes, indicative of coal combustion sources.
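The enrichment factors referred to above follow the standard double-ratio definition: EF = (X/Ref)_sample / (X/Ref)_crust, where Ref is a conservative crustal element. A minimal sketch; the concentrations below are hypothetical, and the choice of Ti as the reference element is an assumption for illustration (Al and Sc are also common choices):

```python
def enrichment_factor(x_sample, ref_sample, x_crust, ref_crust):
    """Crustal enrichment factor: the element-to-reference ratio in the
    sample divided by the same ratio in average crustal material."""
    return (x_sample / ref_sample) / (x_crust / ref_crust)

# Hypothetical numbers: 50 ppm Pb vs 1000 ppm Ti in the aerosol sample,
# against assumed crustal abundances of 15 ppm Pb and 4000 ppm Ti.
ef_pb = enrichment_factor(50.0, 1000.0, 15.0, 4000.0)
anthropogenic = ef_pb > 10  # the >10 threshold used in the study above
```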

  14. High Precision, Absolute Total Column Ozone Measurements from the Pandora Spectrometer System: Comparisons with Data from a Brewer Double Monochromator and Aura OMI

    NASA Technical Reports Server (NTRS)

    Tzortziou, Maria A.; Herman, Jay R.; Cede, Alexander; Abuhassan, Nader

    2012-01-01

    We present new, high-precision, high-temporal-resolution measurements of total column ozone (TCO) amounts derived from ground-based direct-sun irradiance measurements using our recently deployed Pandora single-grating spectrometers. Pandora's small size and portability allow deployment at multiple sites within an urban airshed and development of a ground-based monitoring network for studying small-scale atmospheric dynamics, spatial heterogeneities in trace gas distribution, local pollution conditions, photochemical processes, and interdependencies of ozone and its major precursors. Results are shown for four mid- to high-latitude sites where different Pandora instruments were used. Comparisons with a well-calibrated double-grating Brewer spectrometer over a period of more than a year in Greenbelt, MD showed excellent agreement and a small bias of approximately 2 DU (or 0.6%). This bias was constant with slant column ozone amount over the full range of observed solar zenith angles (15-80 degrees), indicating adequate Pandora stray light correction. A small (1-2%) seasonal difference was found, consistent with sensitivity studies showing that the Pandora spectral fitting TCO retrieval has a temperature dependence of 1% per 3 K, with an underestimation in temperature (e.g., during summer) resulting in an underestimation of TCO. Pandora agreed well with Aura-OMI (Ozone Monitoring Instrument) satellite data, with average residuals of <1% at the different sites when the OMI view was within 50 km of the Pandora location and the OMI-measured cloud fraction was <0.2. The frequent and continuous measurements by Pandora revealed significant short-term (hourly) temporal changes in TCO that are not possible to capture by sun-synchronous satellites, such as OMI, alone.

  15. Leadership Models.

    ERIC Educational Resources Information Center

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  16. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model dissection" as an alternative to "model building," whereby instructors could make efficient use of supervisory resources while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a significantly greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool. © 2015 The International Union of Biochemistry and Molecular Biology.

  17. Better models are more effectively connected models

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. 
The discussion will focus on the different approaches through which connectivity

  18. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A. S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.
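The variance segregation at the heart of HBMA builds on the standard BMA decomposition of total predictive variance into a within-model part (average of each model's own variance) and a between-model part (spread of the model means). A minimal single-level sketch with made-up posterior model probabilities, not the multi-level groundwater application itself:

```python
def bma_moments(probs, means, variances):
    """Bayesian model averaging: posterior-probability-weighted mean, plus
    the within-model / between-model decomposition of total variance."""
    mean = sum(p * m for p, m in zip(probs, means))
    within = sum(p * v for p, v in zip(probs, variances))           # E[Var]
    between = sum(p * (m - mean) ** 2 for p, m in zip(probs, means))  # Var[E]
    return mean, within, between, within + between

# Two hypothetical candidate models with equal posterior probability
mean, within, between, total = bma_moments(
    [0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
```

HBMA applies this decomposition recursively up a tree of uncertain model components, which is what lets the contribution of each component (e.g. geological architecture vs. boundary conditions) be isolated and ranked.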

  19. SUMMA and Model Mimicry: Understanding Differences Among Land Models

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.

    2016-12-01

    Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. 
SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in

  20. When smoke comes to town - effects of biomass burning smoke on air quality down under

    NASA Astrophysics Data System (ADS)

    Keywood, Melita; Cope, Martin; Meyer, Mick (C. P.); Iinuma, Yoshi; Emmerson, Kathryn

    2014-05-01

    Annually, biomass burning results in the emission of large quantities of trace gases and aerosol to the atmosphere. Biomass burning emissions have a significant effect on atmospheric chemistry due to the presence of reactive species. Biomass burning aerosols influence the radiative balance of the earth-atmosphere system directly through the scattering and absorption of radiation, and indirectly through their influence on cloud microphysical processes, and therefore constitute an important forcing in climate models. They also reduce visibility, influence atmospheric photochemistry and can be inhaled into the deepest parts of the lungs, so that they can have a significant effect on human health. Australia experiences bushfires on an annual basis. In most years fires are restricted to the tropical savannah forests of Northern Australia. However, in the summer of 2006/2007 (December 2006 - February 2007), South Eastern Australia was affected by the longest recorded fires in its history. During this time the State of Victoria was ravaged by 690 separate bushfires, including the major Great Divide Fire, which devastated 1,048,238 hectares over 69 days. On several occasions, thick smoke haze was transported to the Melbourne central business district and PM10 concentrations at several air quality monitoring stations peaked at over 200 µg m-3 (four times the National Environment Protection Measure PM10 24-hour standard). During this period, a comprehensive suite of air quality measurements was carried out at a location 25 km south of the Melbourne CBD, including detailed aerosol microphysical and chemical composition measurements. Here we examine the chemical and physical properties of the smoke plume as it impacted Melbourne's airshed and discuss its impact on air quality over the city. We estimate the aerosol emission rates of the source fires, the age of the plumes and investigate the transformation of the smoke as it progressed from its source to the Melbourne airshed. We

  1. Modeller's attitude in catchment modelling: a comparative study

    NASA Astrophysics Data System (ADS)

    Battista Chirico, Giovanni

    2010-05-01

    Ten modellers were invited to predict, independently of each other, the discharge of the artificial Chicken Creek catchment in North-East Germany for a simulation period of three years, given only soil texture, terrain and meteorological data. No data concerning the discharge or other sources of state variables and fluxes within the catchment were provided. Modellers did, however, have the opportunity to visit the experimental catchment and inspect aerial photos of the catchment since its initial development stage. This has been a unique comparative study focussing on how different modellers deal with the key issues in predicting the discharge of ungauged catchments: 1) choice of the model structure; 2) identification of model parameters; 3) identification of model initial and boundary conditions. The first general lesson learned during this study was that the modeller is just part of the entire modelling process and has a major bearing on the model results, particularly in ungauged catchments where there are more degrees of freedom in making modelling decisions. Modellers' attitudes during the stages of model implementation and parameterisation were deeply influenced by their own experience from previous modelling studies. A common outcome was that modellers were mainly oriented towards applying process-based models able to exploit the available data concerning the physical properties of the catchment, which could therefore be more suitable to cope with the lack of data concerning state variables or fluxes. The second general lesson learned during this study was the role of dominant processes. We believed that the modelling task would be much easier in an artificial catchment, where heterogeneity was expected to be negligible and processes simpler, than in catchments that have evolved over a longer time period. The results of the models were expected to converge, and this would have been a good starting point to proceed for a model

  2. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. A second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
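
    The classical side of the first question can be sketched by comparing information criteria of nested logistic models. The toy below (synthetic data, a plain Newton-Raphson fit; an illustration of the idea, not the methods of the paper) shows BIC preferring the model without a spurious regressor.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                        # IRLS weights
        beta += np.linalg.solve((X.T * W) @ X, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, loglik

def bic(loglik, n_par, n_obs):
    return -2.0 * loglik + n_par * np.log(n_obs)

# synthetic data: y depends on x1 only; x2 is a spurious candidate regressor
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x1))))

X_small = np.column_stack([np.ones(n), x1])
X_full = np.column_stack([np.ones(n), x1, x2])
_, ll_small = fit_logistic(X_small, y)
_, ll_full = fit_logistic(X_full, y)

bic_small = bic(ll_small, 2, n)   # smaller BIC identifies the model without x2
bic_full = bic(ll_full, 3, n)
```

    The same comparison could be run with a random intercept added, which is where the mixed-model and Bayesian machinery discussed in the paper becomes necessary.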

  3. Building Mental Models by Dissecting Physical Models

    ERIC Educational Resources Information Center

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…

  4. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    PubMed Central

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510

  5. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibrating a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
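
    The division of labour described here (cheap proxy runs to fill the Jacobian, expensive model runs to test parameter upgrades) can be sketched with a toy Gauss-Newton loop. Both functions below are hypothetical stand-ins, not PEST code; for simplicity the "proxy" is taken to be exact.

```python
import numpy as np

def complex_model(theta):
    # stand-in for a long-running, numerically noisy simulator
    return np.array([np.exp(0.5 * theta[0]) + theta[1],
                     theta[0] * theta[1],
                     np.sin(theta[0]) + theta[1] ** 2])

def proxy_model(theta):
    # smooth analytical surrogate; identical here for simplicity,
    # in practice a fitted set of analytical functions
    return complex_model(theta)

obs = complex_model(np.array([0.8, 1.2]))   # synthetic "observations"
theta = np.array([0.5, 1.0])                # initial parameter guess

for _ in range(20):
    eps = 1e-6
    base = proxy_model(theta)
    # Jacobian populated with cheap proxy runs (finite differences)
    J = np.column_stack([(proxy_model(theta + eps * e) - base) / eps
                         for e in np.eye(2)])
    r = obs - complex_model(theta)          # residual from the real model
    step = np.linalg.solve(J.T @ J, J.T @ r)
    # parameter upgrade is tested against the (expensive) real model
    if np.linalg.norm(obs - complex_model(theta + step)) < np.linalg.norm(r):
        theta = theta + step
# theta converges to the synthetic truth (0.8, 1.2)
```

    In the real workflow the proxy is only approximate, so the upgrade test against the original model is what keeps the iteration honest.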

  6. Modelling, teachers' views on the nature of modelling, and implications for the education of modellers

    NASA Astrophysics Data System (ADS)

    Justi, Rosária S.; Gilbert, John K.

    2002-04-01

    In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.

  7. Measurement of the Isotopic Signature of Soil Carbon Dioxide: Methods Development and Initial Field Results

    NASA Astrophysics Data System (ADS)

    Kayler, Z.; Rugh, W.; Mix, A. C.; Bond, B. J.; Sulzman, E. W.

    2005-12-01

    Soil respiration is a significant component of ecosystem respiration and its isotopic composition is likely to lend insight into ecosystem processes. We have designed probes to determine the isotopic signature of soil-respired CO2 using a two end-member mixing model approach (i.e., Keeling plot). Each probe consists of three 35 ml PVC chambers cased in fiberglass mesh and connected to the soil surface via stainless steel tubing with a septum-lined Swagelok fitting. Chambers are vertically connected such that they sample gases at depth intervals centered on 5, 15, and 30 cm. Gases are sampled via a hand vacuum pump equipped with a two-way valve, which allows vials pre-filled with N2 gas in the laboratory to be evacuated and re-filled with only a single septum puncture in the field. Data indicate samples can be stored reliably for up to three days if punctured septa are coated in silicone sealant. To test whether this field sampling method was robust, we constructed a carbon-free sand column out of PVC pipe into which we plumbed a tank of known CO2 concentration and isotopic composition. We have tested the effects of wetting and flow rate on our ability to reproduce tank values. A linear model (geometric mean regression) yielded a more negative isotopic value than the actual gas, but a simple polynomial curve fit the tank value. After laboratory testing, the probes were established in a steep drainage in the H.J. Andrews LTER site in the Cascade Mountains of western Oregon (as part of the Andrews Airshed project). We established a transect of five 10 m2 plots with four soil probes and a companion respiration collar and measured soil CO2 efflux and soil δ13CO2 values biweekly from June to September. Results indicate there is a clear difference in isotopic and respiration flux patterns between the north- and south-facing slopes, with the north-facing slope exhibiting higher fluxes and more 13C-enriched respiration. The temporal pattern of respiration correlates well with
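
    The two end-member mixing model behind the Keeling plot can be sketched numerically: the δ13C of mixed air is linear in 1/[CO2], and the intercept at 1/[CO2] → 0 recovers the source signature. All values below are hypothetical, and the data are noise-free (where both regressions are exact; on real data the choice between ordinary least squares and geometric mean regression matters, as the abstract notes).

```python
import numpy as np

def keeling_intercept(co2_ppm, delta13c, method="ols"):
    """Estimate delta13C of the respired source as the intercept of
    delta13C vs. 1/[CO2] (two end-member mixing model, i.e. Keeling plot)."""
    x = 1.0 / np.asarray(co2_ppm, dtype=float)
    y = np.asarray(delta13c, dtype=float)
    if method == "ols":
        slope, intercept = np.polyfit(x, y, 1)
    else:  # geometric mean (reduced major axis) regression
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
        intercept = y.mean() - slope * x.mean()
    return intercept

# synthetic mixing of background air (400 ppm, -8 permil) with a soil
# source of -26 permil (hypothetical values for illustration)
source, bg_c, bg_d = -26.0, 400.0, -8.0
co2 = np.array([450.0, 550.0, 700.0, 900.0, 1200.0])
# exact two end-member mixing: delta = bg_c * (bg_d - source) / C + source
delta = bg_c * (bg_d - source) / co2 + source

est = keeling_intercept(co2, delta)   # recovers the -26 permil signature
```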

  8. NARSTO critical review of photochemical models and modeling

    NASA Astrophysics Data System (ADS)

    Russell, Armistead; Dennis, Robin

    Photochemical air quality models play a central role both in scientific investigation of how pollutants evolve in the atmosphere and in developing policies to manage air quality. In the past 30 years, these models have evolved from rather crude representations of the physics and chemistry impacting trace species to their current state: comprehensive, but not complete. The evolution has included advancements not only in the level of process descriptions, but also in the computational implementation, including numerical methods. As part of the NARSTO Critical Reviews, this article discusses the current strengths and weaknesses of air quality models and the modeling process. Current Eulerian models are found to represent well the primary processes impacting the evolution of trace species in most cases, though some exceptions exist. For example, sub-grid-scale processes, such as concentrated power plant plumes, are treated only approximately. It is not apparent how much such approximations affect model results and the policies based upon those results. A significant weakness has been in how investigators have addressed, and communicated, such uncertainties. Studies find that major uncertainties are due to model inputs, e.g., emissions and meteorology, more so than to the model itself. One of the primary weaknesses identified is in the modeling process, not the models. Evaluation has been limited, largely due to data constraints. Seldom is there ample observational data to conduct a detailed model intercomparison using consistent data (e.g., the same emissions and meteorology). Further model advancement, and development of greater confidence in the use of models, is hampered by the lack of thorough evaluation and intercomparisons. Model advances are seen in the use of new tools for extending the interpretation of model results, e.g., process and sensitivity analysis, modeling systems to facilitate their use, and extension of model capabilities, e.g., aerosol dynamics

  9. Coupling Climate Models and Forward-Looking Economic Models

    NASA Astrophysics Data System (ADS)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin. Current climate models range from General Circulation Models (GCMs) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCMs) help us understand the dynamics of GCMs. The same is true in economics with Computable General Equilibrium Models (CGEs), where some models are infinite-dimensional systems of differential equations while others are simple. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCMs do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre-based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005), but in only a "one-dimensional" EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two-dimensional EBCM on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward
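
    For readers unfamiliar with EBCMs, the zero-dimensional version integrates a single global mean temperature to equilibrium. The sketch below uses Budyko-style parameter values (the presentation itself uses one- and two-dimensional models on the sphere):

```python
# zero-dimensional energy balance climate model:
#   C dT/dt = S(1 - a)/4 - (A + B*T)
S = 1361.0          # solar constant, W/m^2
a = 0.30            # planetary albedo
A, B = 203.3, 2.09  # linearized outgoing longwave radiation, W/m^2 and W/m^2/K
C = 2.0e8           # effective heat capacity, J/m^2/K (~50 m ocean mixed layer)

def step(T, dt, forcing=0.0):
    """Forward-Euler step of the global mean temperature T (deg C)."""
    return T + dt * (S * (1.0 - a) / 4.0 - (A + B * T) + forcing) / C

T = 0.0
dt = 30 * 86400.0        # 30-day step (stable, since dt*B/C << 1)
for _ in range(2000):    # ~160 years, far beyond the ~3-year e-folding time C/B
    T = step(T, dt)

T_eq = (S * (1.0 - a) / 4.0 - A) / B   # analytical equilibrium, about 16.7 deg C
```

    Coupling to an economic model amounts to letting the forcing term depend on emissions chosen by forward-looking agents, which is where the solution methods discussed above come in.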

  10. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-30

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM models petroleum refining activities, the marketing of products, and the production of natural gas liquids and domestic methanol, and projects petroleum prices and sources of supply for meeting demand. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption.

  11. The Impact of TexAQS 2000 on Air Quality Planning in Houston

    NASA Astrophysics Data System (ADS)

    Thomas, J. W.; Price, J. H.

    2002-12-01

    Before the Texas 2000 Air Quality Study (TexAQS 2000), the State used the Urban Airshed Model to model nine different episodes in Houston with very poor results: only one episode met EPA model performance criteria. Questions existed regarding emissions uncertainties, meteorological modeling, and model chemistry. NOAA, DOE, and SOS led more than 35 organizations and 250 investigators who participated in TexAQS 2000. Major findings from TexAQS 2000 are: 1. There are two types of meteorological patterns that lead to ozone episodes in the Houston area: (i) stagnation associated with the sea breeze flow reversal causes a pool of industrial emissions and ozone to accumulate, then to move across the city as the wind flow picks up, and (ii) plumes of ozone form when relatively persistent winds carry the emissions away from the city and industrial areas. 2. The chemistry that produces high ozone concentrations and rapid rises in ozone in the Houston area has been explained: multiple investigators in TexAQS 2000 have documented more rapid and more efficient formation of ozone in the plume from the Houston industrial area than any of them has observed in any previous field study. Houston's exceptionally rapid ozone formation arises from large amounts of anthropogenic VOCs in the atmosphere, often from the same plants that provide sufficient NOx. 3. This rapid and efficient ozone formation results most often from the presence of a specific subclass of hydrocarbons, the light olefins, primarily ethylene and propylene. 4. Sometimes other specific hydrocarbons cause the rapid formation of high concentrations of ozone, and sometimes it is simply the total mass of many relatively unreactive hydrocarbons. 5. The current emissions inventory for ethylene and propylene, as well as other VOCs, underestimates their routine emissions by a factor of roughly five to ten, or perhaps even more. 6. It is not clear whether the emissions causing Houston's rapid ozone formation are

  12. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders
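
    A minimal sketch of such a standardized interface (inspired by, but not identical to, the CSDMS Basic Model Interface; the class, function and variable names below are illustrative) shows the control and self-description functions a framework would call:

```python
class HeatModel:
    """Toy 1-D diffusion model exposing standardized control and
    self-description functions, as a framework would require."""

    # --- control functions ---
    def initialize(self, n=10, alpha=0.1):
        self.dt = 1.0
        self.time = 0.0
        self.alpha = alpha
        self.state = [0.0] * n
        self.state[n // 2] = 1.0       # initial heat pulse

    def update(self):
        s = self.state
        new = s[:]                     # boundaries held fixed at zero
        for i in range(1, len(s) - 1):
            new[i] = s[i] + self.alpha * (s[i - 1] - 2 * s[i] + s[i + 1])
        self.state = new
        self.time += self.dt

    def finalize(self):
        self.state = None

    # --- self-description functions ---
    def get_output_var_names(self):
        return ["plate_surface__temperature"]

    def get_value(self, name):
        assert name == "plate_surface__temperature"
        return list(self.state)

    def get_current_time(self):
        return self.time

# a framework can drive any model exposing this interface generically:
model = HeatModel()
model.initialize()
while model.get_current_time() < 5.0:
    model.update()
temps = model.get_value("plate_surface__temperature")
```

    Because the caller only touches `initialize`/`update`/`finalize` and the description functions, a second model with a completely different interior can be swapped in or coupled without changing the driving code.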

  13. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
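
    The calibration idea in the last sentences can be sketched with a one-parameter example: recharge R enters an analytical (Dupuit) head solution for an unconfined aquifer between two fixed-head streams, and R is adjusted until simulated heads match "observed" heads. All numbers below are hypothetical.

```python
def simulated_heads(R, K=10.0, L=1000.0, hL=20.0, hR=18.0,
                    xs=(250.0, 500.0, 750.0)):
    """Dupuit heads (m) between two fixed-head boundaries with uniform
    recharge R (m/day) and hydraulic conductivity K (m/day)."""
    return [(hL**2 + (hR**2 - hL**2) * x / L + (R / K) * x * (L - x)) ** 0.5
            for x in xs]

def misfit(R, obs):
    """Sum of squared differences between simulated and observed heads."""
    return sum((h - o) ** 2 for h, o in zip(simulated_heads(R), obs))

obs = simulated_heads(2.0e-4)      # synthetic "truth": R = 2e-4 m/day

# simple scan over candidate recharge rates; the best fit is the
# model-generated recharge estimate
best_R = min((misfit(R, obs), R)
             for R in [i * 1.0e-5 for i in range(0, 101)])[1]
```

    Real calibrations adjust many parameters at once and use gradient or global search methods, but the logic is the same: the best-fit simulation defines the recharge estimate.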

  14. Models for Models: An Introduction to Polymer Models Employing Simple Analogies

    NASA Astrophysics Data System (ADS)

    Tarazona, M. Pilar; Saiz, Enrique

    1998-11-01

    An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
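
    The freely jointed chain itself can be explored with nothing beyond averaging: draw random bond directions (the "human chain" taking random steps) and check that the mean square end-to-end distance approaches N·b². A sketch:

```python
import math
import random

def end_to_end_sq(n_bonds, bond_len=1.0):
    """One freely jointed chain: sum of n_bonds random unit-length bonds,
    returning the squared end-to-end distance."""
    x = y = z = 0.0
    for _ in range(n_bonds):
        # random direction uniform on the sphere
        cos_t = random.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = random.uniform(0.0, 2.0 * math.pi)
        x += bond_len * sin_t * math.cos(phi)
        y += bond_len * sin_t * math.sin(phi)
        z += bond_len * cos_t
    return x * x + y * y + z * z

random.seed(42)
N, samples = 100, 2000
mean_r2 = sum(end_to_end_sq(N) for _ in range(samples)) / samples
# mean_r2 is close to N * b^2 = 100, within Monte Carlo error
```

    The same loop, with an added bond-angle constraint or excluded-volume check, turns into the more refined models the article builds up to.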

  15. Hybrid Model of IRT and Latent Class Models.

    ERIC Educational Resources Information Center

    Yamamoto, Kentaro

    This study developed a hybrid of item response theory (IRT) models and latent class models, which combined the strengths of each type of model. The primary motivation for developing the new model is to describe characteristics of examinees' knowledge at the time of the examination. Hence, the application of the model lies mainly in so-called…

  16. New 3D model for dynamics modeling

    NASA Astrophysics Data System (ADS)

    Perez, Alain

    1994-05-01

    The wrist articulation is one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surfaces and along the faces of the five metacarpals of the hand and the two bones of the arm. Wrist dynamics are, however, fundamental to hand movement, yet the system is so complex that it remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists of developing computer models to refine surgical acts by predicting their consequences. The modeling of wrist dynamics is based first on a static 3D model of the bones. This 3D model must optimise the collision detection procedure, which is the necessary step for estimating the physical contact constraints. Because other available computer vision models do not fit this problem with enough precision, a new 3D model has been developed based on the medial axis of the digital distance map of the reconstructed bone volume. The collision detection procedure is then simplified, because contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer-animated images of solids in contact. It now remains to detect ligaments in digital medical images and to model them in order to complete the wrist model.
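
    The reason a medial-axis sphere representation simplifies collision detection is that a sphere-sphere test reduces to a single distance comparison. A sketch (all coordinates and radii below are made up):

```python
def spheres_collide(c1, r1, c2, r2):
    """True if two spheres overlap or touch: one squared-distance check."""
    dx, dy, dz = (a - b for a, b in zip(c1, c2))
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2

def solids_collide(spheres_a, spheres_b):
    """Each solid is a list of (center, radius) spheres sampled along its
    medial axis; the solids collide if any sphere pair does."""
    return any(spheres_collide(ca, ra, cb, rb)
               for ca, ra in spheres_a
               for cb, rb in spheres_b)

# two toy "bones", each a short chain of unit spheres
bone_a = [((0.0, 0.0, 0.0), 1.0), ((2.0, 0.0, 0.0), 1.0)]
bone_b = [((2.0, 2.5, 0.0), 1.0)]
hit = solids_collide(bone_a, bone_b)   # nearest pair has a 0.5 gap: no contact
```

    Compared with mesh-mesh intersection tests, this pairwise check is trivially cheap, which is what makes the dynamic simulation of many bones in contact tractable.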

  17. Bayesian model evidence as a model evaluation metric

    NASA Astrophysics Data System (ADS)

    Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang

    2017-04-01

    When building environmental systems models, we are typically confronted with the questions of how to choose an appropriate model (i.e., which processes to include or neglect) and how to measure its quality. Various metrics have been proposed that shall guide the modeller towards a most robust and realistic representation of the system under study. Criteria for evaluation often address aspects of accuracy (absence of bias) or of precision (absence of unnecessary variance) and need to be combined in a meaningful way in order to address the inherent bias-variance dilemma. We suggest using Bayesian model evidence (BME) as a model evaluation metric that implicitly performs a tradeoff between bias and variance. BME is typically associated with model weights in the context of Bayesian model averaging (BMA). However, it can also be seen as a model evaluation metric in a single-model context or in model comparison. It combines a measure for goodness of fit with a penalty for unjustifiable complexity. Unjustifiable refers to the fact that the appropriate level of model complexity is limited by the amount of information available for calibration. Derived in a Bayesian context, BME naturally accounts for measurement errors in the calibration data as well as for input and parameter uncertainty. BME is therefore perfectly suitable to assess model quality under uncertainty. We will explain in detail and with schematic illustrations what BME measures, i.e. how complexity is defined in the Bayesian setting and how this complexity is balanced with goodness of fit. We will further discuss how BME compares to other model evaluation metrics that address accuracy and precision, such as the predictive logscore, or to other model selection criteria such as the AIC, BIC or KIC. Although computationally more expensive than other metrics or criteria, BME represents an appealing alternative because it provides a global measure of model quality. Even if not applicable to each and every case, we aim
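
    The integral that defines BME, the likelihood averaged over the prior, can be estimated by brute-force Monte Carlo in toy settings. The sketch below (synthetic data, hypothetical priors) compares a constant, a linear and a quadratic model for data generated by a linear process; the clearly wrong constant model scores far lower on goodness of fit, while the comparison between the two adequate models is decided by the penalty for unjustified complexity.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
sigma = 0.5
y = 2.0 * x + rng.normal(0.0, sigma, x.size)   # truth is linear

def log_like(pred):
    """Gaussian log-likelihood of the data given model predictions."""
    return (-0.5 * np.sum(((y - pred) / sigma) ** 2)
            - y.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def log_bme(model, n_par, n_samples=20000):
    """log BME = log of the likelihood averaged over prior samples."""
    theta = rng.normal(0.0, 2.0, size=(n_samples, n_par))  # wide prior
    ll = np.array([log_like(model(t)) for t in theta])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))   # stable log-mean-exp

lbme_const = log_bme(lambda t: np.full_like(x, t[0]), 1)
lbme_lin = log_bme(lambda t: t[0] * x, 1)
lbme_quad = log_bme(lambda t: t[0] * x + t[1] * x ** 2, 2)
# lbme_lin far exceeds lbme_const; it typically also edges out lbme_quad,
# whose extra parameter is penalized by the implicit Occam factor
```

    Brute-force prior sampling scales poorly with parameter dimension, which is exactly the computational expense the abstract alludes to; practical BME estimation uses more sophisticated integration schemes.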

  18. Integrative structure modeling with the Integrative Modeling Platform.

    PubMed

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  19. Modeling volatility using state space models.

    PubMed

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
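
    A minimal state space model with explicit observational noise (a local-level sketch with hypothetical parameters, far simpler than the volatility models estimated in the paper) shows why separating the two noise sources matters: a Kalman filter that models both recovers the hidden state far better than the raw measurements do.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
q, r = 0.05, 1.0   # dynamic and observational noise variances

# simulate: hidden random-walk state, plus observation noise that does
# not feed back into future values of the state
state = np.cumsum(rng.normal(0.0, np.sqrt(q), n))
obs = state + rng.normal(0.0, np.sqrt(r), n)

# Kalman filter for  x_t = x_{t-1} + w_t,  y_t = x_t + v_t
xhat, P = 0.0, 1.0
filtered = np.empty(n)
for t in range(n):
    P = P + q                           # predict
    K = P / (P + r)                     # Kalman gain
    xhat = xhat + K * (obs[t] - xhat)   # update with the new observation
    P = (1.0 - K) * P
    filtered[t] = xhat

err_filt = np.mean((filtered - state) ** 2)   # filtered-state error
err_obs = np.mean((obs - state) ** 2)         # raw-observation error, ~r
```

    An autoregressive model fitted directly to `obs` would attribute all variability to the dynamics, which is the mechanism behind the underestimated relaxation times reported in the abstract.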

  20. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  1. Downscaling GISS ModelE Boreal Summer Climate over Africa

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent; it eliminates the ModelE double ITCZ over the Atlantic and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  2. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models, facilitating the complex process of merging such models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a good starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914
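
    To give a flavour of the SBML document structure the tools hide from the user, the sketch below parses a toy SBML string with only the Python standard library (real applications such as PathCase-SB would use a full SBML library; the model content here is invented):

```python
import xml.etree.ElementTree as ET

# a minimal, hand-written SBML Level 3 document (toy content)
SBML = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
  <model id="toy_model">
    <listOfSpecies>
      <species id="glucose" compartment="cytosol"/>
      <species id="atp" compartment="cytosol"/>
    </listOfSpecies>
  </model>
</sbml>"""

root = ET.fromstring(SBML)

def local(tag):
    """Strip the XML namespace from a tag like '{ns}species'."""
    return tag.rsplit('}', 1)[-1]

species_ids = [el.get("id") for el in root.iter() if local(el.tag) == "species"]
model_id = next(el.get("id") for el in root.iter() if local(el.tag) == "model")
```

    Merging two models, the job of the composition tool, then amounts to reconciling such element lists (species, compartments, reactions) across documents, which is what makes an automated tool valuable.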

  3. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  4. Mineralogic Model (MM3.0) Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components

  5. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come with larger numbers of unknowns and greater computational requirements compared to stand-alone models. If large model domains are to be represented, e.g. at catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the processes present. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models, or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zone as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated
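    The Ensemble Kalman filter mentioned above updates an ensemble of model states whenever observations arrive. As a minimal sketch (not the study's implementation; the state layout, observation operator `H` and error variance are placeholder assumptions), a stochastic EnKF analysis step can be written as:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_error_var, H, seed=0):
    """One stochastic Ensemble Kalman filter analysis step.

    ensemble      : (n_state, n_members) forecast states
    obs           : (n_obs,) observation vector
    obs_error_var : scalar observation-error variance
    H             : (n_obs, n_state) linear observation operator
    """
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = H @ anomalies
    # sample covariances: P_hh = cov(Hx, Hx) + R, P_xh = cov(x, Hx)
    P_hh = HA @ HA.T / (n_members - 1) + obs_error_var * np.eye(len(obs))
    P_xh = anomalies @ HA.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)  # Kalman gain
    # perturb observations so the analysis ensemble keeps the right spread
    rng = np.random.default_rng(seed)
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_error_var),
                                         (len(obs), n_members))
    return ensemble + K @ (obs_pert - H @ ensemble)
```

    The perturbed observations are the standard device that keeps the analysis ensemble from collapsing; whether updating states alone can compensate for the violated model assumptions is exactly the question the abstract poses.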

  6. BATMAN: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2017-04-01

    This paper describes the Bayesian Technique for Multi-image Analysis (BATMAN), a novel image-segmentation technique based on Bayesian statistics that characterizes any astronomical data set containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). We illustrate its operation and performance with a set of test cases including both synthetic and real integral-field spectroscopic data. The output segmentations adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. The quality of the recovered signal represents an improvement with respect to the input, especially in regions with low signal-to-noise ratio. However, the algorithm may be sensitive to small-scale random fluctuations, and its performance in the presence of spatial gradients is limited. Due to these effects, errors may be underestimated by as much as a factor of 2. Our analysis reveals that the algorithm prioritizes conservation of all the statistically significant information over noise reduction, and that the precise choice of the input data has a crucial impact on the results. Hence, the philosophy of BaTMAn is not to be used as a 'black box' to improve the signal-to-noise ratio, but as a new approach to characterize spatially resolved data prior to its analysis. The source code is publicly available at http://astro.ft.uam.es/SELGIFS/BaTMAn.
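    The merging criterion can be illustrated on a 1-D signal: elements are joined into one segment while their values agree within the combined errors. This is a simplified sketch of that idea, not the published algorithm (which works on 2-D tessellations with a full Bayesian consistency test):

```python
import numpy as np

def consistent(v1, e1, v2, e2, n_sigma=1.0):
    """True if two measurements agree within their combined errors."""
    return abs(v1 - v2) <= n_sigma * np.hypot(e1, e2)

def merge_pass(values, errors, n_sigma=1.0):
    """Single left-to-right sweep over a 1-D signal: adjacent elements
    that are statistically consistent share one segment label."""
    labels = np.zeros(len(values), dtype=int)
    for i in range(1, len(values)):
        if consistent(values[i - 1], errors[i - 1],
                      values[i], errors[i], n_sigma):
            labels[i] = labels[i - 1]
        else:
            labels[i] = labels[i - 1] + 1
    return labels
```

    Note how the criterion depends only on the measurements and their errors, which is why (as the abstract warns) small-scale random fluctuations can trigger spurious splits or merges.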

  7. Development of an unbiased, semi-automated approach for classifying plasma cell immunophenotype following multicolor flow cytometry of bone marrow aspirates.

    PubMed

    Post, Steven R; Post, Ginell R; Nikolic, Dejan; Owens, Rebecca; Insuasti-Beltran, Giovanni

    2018-03-24

    Despite increased usage of multiparameter flow cytometry (MFC) to assess diagnosis, prognosis, and therapeutic efficacy (minimal residual disease, MRD) in plasma cell neoplasms (PCNs), standardization of methodology and data analysis is suboptimal. We investigated the utility of using the mean and median fluorescence intensities (FI) obtained from MFC to objectively describe parameters that distinguish plasma cell (PC) phenotypes. In this retrospective study, flow cytometry results from bone marrow aspirate specimens from 570 patients referred to the Myeloma Institute at UAMS were evaluated. Mean and median FI data were obtained from 8-color MFC of non-neoplastic, malignant, and mixed PC populations using antibodies to CD38, CD138, CD19, CD20, CD27, CD45, CD56, and CD81. Of 570 cases, 252 showed only non-neoplastic PCs, 168 showed only malignant PCs, and 150 showed mixed PC populations. Statistical analysis of median FI data for each CD marker showed no difference in expression intensity on non-neoplastic and malignant PCs between pure and mixed PC populations. ROC analysis of the median FI of CD expression in non-neoplastic and malignant PCs was used to develop an algorithm that converts quantitative FI values to qualitative assessments, including "negative," "positive," "dim," and "heterogeneous" expression. FI data derived from 8-color MFC can be used to define marker expression on PCs. Translation of FI data from Infinicyt software to an Excel worksheet streamlines workflow and eliminates transcriptional errors when generating flow reports. © 2018 International Clinical Cytometry Society.
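    The conversion step described above amounts to thresholding each marker's FI into a qualitative call. A minimal sketch follows; the cutoffs and the `cv` heterogeneity proxy are hypothetical placeholders, whereas the study derives marker-specific cutoffs from its ROC analysis:

```python
def classify_marker(median_fi, cv, neg_cutoff, pos_cutoff, cv_cutoff=0.8):
    """Map a marker's median fluorescence intensity (FI) to a qualitative
    call. All cutoffs here are illustrative placeholders; `cv`
    (coefficient of variation) stands in for whatever spread statistic
    flags heterogeneous expression."""
    if cv > cv_cutoff:
        return "heterogeneous"
    if median_fi < neg_cutoff:
        return "negative"
    if median_fi < pos_cutoff:
        return "dim"
    return "positive"
```

    Automating this mapping is what lets FI values flow from the acquisition software into a report without manual transcription.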

  8. The immunoprofile of odontogenic keratocyst (keratocystic odontogenic tumor) that includes expression of PTCH, SMO, GLI-1 and bcl-2 is similar to ameloblastoma but different from odontogenic cysts.

    PubMed

    Vered, M; Peleg, O; Taicher, S; Buchner, A

    2009-08-01

    The aggressive biological behavior of odontogenic keratocysts (OKCs), unlike that of other odontogenic cysts, has argued for its recent re-classification as a neoplasm, 'keratocystic odontogenic tumor'. Identification of mutations in the PTCH gene in some of the OKCs that were expected to produce truncated proteins, resulting in loss of control of the cell cycle, provided additional support for OKCs having a neoplastic nature. We investigated the immunohistochemical expression of the sonic hedgehog (SHH) signaling pathway-related proteins, PTCH, smoothened (SMO) and GLI-1, and of the SHH-induced bcl-2 oncoprotein in a series of primary OKC (pOKC), recurrent OKC (rOKC) and nevoid basal cell carcinoma syndrome-associated OKCs (NBCCS-OKCs), and compared them to solid ameloblastomas (SAMs), unicystic ameloblastomas (UAMs), 'orthokeratinized' OKCs (oOKCs), dentigerous cysts (DCs) and radicular cysts (RCs). All studied lesions expressed the SHH pathway-related proteins in a similar pattern. The expression of bcl-2 in OKCs (pOKCs and NBCCS-OKCs) and SAMs was significantly higher than in oOKCs, DCs and RCs (P < 0.001). The present results of the immunoprofile of OKCs (that includes the expression of the SHH-related proteins and the SHH-induced bcl-2 oncoprotein) further support the notion of OKC having a neoplastic nature. As OKCs vary considerably in their biologic behavior, it is suggested that the quality and quantity of interactions between the SHH and other cell cycle regulatory pathways are likely to work synergistically to define the individual phenotype and corresponding biological behavior of this lesion.

  9. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    NASA Astrophysics Data System (ADS)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, little if any attention has been paid to the useful information about the spatial location of target symbols contained in responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single-trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work motivates the search for information in peripheral-stimulation responses to improve the performance of emerging visual ERP-based spellers.
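    The combination idea can be sketched for a row/column speller matrix: the standard classifier scores each flashed row and column, and the four adjacent-stimulus classifiers add evidence pointing back at the neighbouring cell. The weight `w` and the score layout below are illustrative assumptions, not the paper's fitted combination:

```python
import numpy as np

def combined_cell_scores(row_scores, col_scores,
                         upper_row, lower_row, left_col, right_col, w=0.5):
    """Score every cell of a row/column speller matrix by combining the
    standard target classifier (row + column scores) with four
    adjacent-stimulus classifiers. Higher score = more likely target."""
    score = np.add.outer(row_scores, col_scores)  # standard classifier
    # an adjacent-stimulus response points back at the neighbouring cell:
    score[:-1, :] += w * lower_row[1:, None]   # response at row r+1 supports row r
    score[1:, :] += w * upper_row[:-1, None]   # response at row r-1 supports row r
    score[:, :-1] += w * right_col[None, 1:]   # response at col c+1 supports col c
    score[:, 1:] += w * left_col[None, :-1]    # response at col c-1 supports col c
    return score
```

    The selected symbol is then the argmax over the combined score matrix, so consistent adjacent-stimulus evidence can break ties the standard classifier alone would get wrong.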

  10. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that lead to unrealistic results. Therefore, to overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to manage the existing errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three common types of error in hydrological modelling: timing, shape and volume errors. The new lumped ERM model has been selected for this study to evaluate whether its parameters can be used in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.
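    The three error types can be made concrete by comparing a simulated hydrograph against observations. The definitions below are generic illustrations (peak-time shift, relative volume error, residual shape mismatch after alignment), not the ones used in the ERM study:

```python
import numpy as np

def hydrograph_errors(simulated, observed):
    """Quantify the three common hydrograph errors: timing, volume, shape.
    Generic illustrative definitions, not those of the ERM study."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    timing = int(np.argmax(sim)) - int(np.argmax(obs))  # peak-time shift (steps)
    volume = (sim.sum() - obs.sum()) / obs.sum()        # relative volume error
    aligned = np.roll(sim, -timing)                     # remove the timing shift
    shape = 1.0 - np.corrcoef(aligned, obs)[0, 1]       # residual shape mismatch
    return timing, volume, shape
```

    Separating the three components is what makes it possible to ask, as the study does, which of them parameter updating can actually absorb.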

  11. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning leads to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms are embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  12. TUNS/TCIS information model/process model

    NASA Technical Reports Server (NTRS)

    Wilson, James

    1992-01-01

    An Information Model comprises graphical and textual notation suitable for describing and defining the problem domain - in our case, TUNS or TCIS. The model focuses on the real world under study. It identifies what is in the problem and organizes the data into a formal structure for documentation and communication purposes. The Information Model is composed of an Entity Relationship Diagram (ERD) and a Data Dictionary component. The combination of these components provides an easy-to-understand methodology for expressing the entities in the problem space, the relationships between entities, and the characteristics (attributes) of the entities. This approach is the first step in information system development. The Information Model identifies the complete set of data elements processed by TUNS. This representation provides a conceptual view of TUNS from the perspective of entities, data, and relationships. The Information Model reflects the business practices and real-world entities that users must deal with.

  13. Modelling human skull growth: a validated computational model

    PubMed Central

    Marghoub, Arsalan; Johnson, David; Khonsari, Roman H.; Fagan, Michael J.; Moazen, Mehran

    2017-01-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions (n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. PMID:28566514

  14. Modelling human skull growth: a validated computational model.

    PubMed

    Libby, Joseph; Marghoub, Arsalan; Johnson, David; Khonsari, Roman H; Fagan, Michael J; Moazen, Mehran

    2017-05-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions (n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. © 2017 The Author(s).

  15. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    NASA Technical Reports Server (NTRS)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P - E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a

  16. A Distributed Snow Evolution Modeling System (SnowModel)

    NASA Astrophysics Data System (ADS)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  17. Implementing multiresolution models and families of models: from entity-level simulation to desktop stochastic models and "repro" models

    NASA Astrophysics Data System (ADS)

    McEver, Jimmie; Davis, Paul K.; Bigelow, James H.

    2000-06-01

    We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools, especially for use with personal computers. We conclude with some lessons learned from our experience.

  18. The Relationships Between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    NASA Astrophysics Data System (ADS)

    Cardoso Mendonça, Paula Cristina; Justi, Rosária

    2013-09-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities were planned from the transposition of the main modelling stages that constitute the 'Model of Modelling Diagram' so that students could experience each of such stages. All the lessons were video recorded and their transcriptions supported the elaboration of case studies for each group of students. From the analysis of the case studies, we identified argumentative situations when students performed all of the modelling stages. Our data show that the argumentative situations were related to sense making, articulating and persuasion purposes, and were closely related to the generation of explanations in the modelling processes. They also show that representations are important resources for argumentation. Our results are consistent with some of those already reported in the literature regarding the relationship between modelling and argumentation, but are also divergent when they show that argumentation is not only related to the model evaluation phase.

  19. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or only informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model brings inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty that can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically: because models combine sub-models of many systems and processes, each of which may be conceptualised and represented mathematically in various ways, it is not simple to determine exactly why a model produces the results it does or to identify which model assumptions are key. We present a novel modelling framework—the multi-assumption architecture and testbed (MAAT)—that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser, PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.
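    The combinatorial core of a multi-assumption ensemble is simple to sketch: enumerate every combination of alternative representations, one per process. This is an illustration of the idea, not MAAT's actual API, and the process and hypothesis names in the example are invented:

```python
from itertools import product

def build_ensemble(process_options):
    """Enumerate every combination of alternative process representations,
    in the spirit of a multi-assumption ensemble. `process_options` maps a
    process name to a list of candidate hypotheses (e.g. functions)."""
    names = list(process_options)
    for combo in product(*(process_options[name] for name in names)):
        yield dict(zip(names, combo))
```

    Running every member of such an ensemble is what allows hypothesis uncertainty to be evaluated systematically rather than ad hoc, at the cost of an ensemble size that grows multiplicatively with the number of processes.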

  20. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    NASA Astrophysics Data System (ADS)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic—models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  1. A Model for Math Modeling

    ERIC Educational Resources Information Center

    Lin, Tony; Erfan, Sasan

    2016-01-01

    Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…

  2. Model fit evaluation in multilevel structural equation models

    PubMed Central

    Ryu, Ehri

    2014-01-01

    Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other, level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882

  3. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. ?? 2008 Elsevier Ltd. All rights reserved.
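
    The coupling pattern described here, model "1" and model "2" stepping independently and exchanging fields only at predetermined synchronization intervals, can be illustrated with a serial toy sketch. The two component classes and their toy dynamics are invented for illustration; MCT itself manages distributed-memory transfers between processor groups:

```python
# Toy sketch (not MCT itself) of the coupling pattern: two component models
# step independently and exchange fields only at fixed synchronization
# intervals, as in the ROMS-SWAN and ROMS-COAMPS couplings. All physics here
# is invented single-number toy dynamics.
class OceanModel:
    def __init__(self):
        self.current = 0.0
    def step(self, wave_stress):
        self.current += 0.1 * wave_stress      # currents forced by waves

class WaveModel:
    def __init__(self):
        self.stress = 1.0
    def step(self, current):
        self.stress = 1.0 + 0.5 * current      # waves modulated by currents

def run_coupled(n_steps, sync_every):
    ocean, waves = OceanModel(), WaveModel()
    stress, current = waves.stress, ocean.current  # fields held between syncs
    for n in range(n_steps):
        ocean.step(stress)       # each model sees the last exchanged field,
        waves.step(current)      # not the other model's live state
        if (n + 1) % sync_every == 0:
            stress, current = waves.stress, ocean.current   # exchange
    return ocean.current, waves.stress

u, tau = run_coupled(n_steps=10, sync_every=2)
```

    Between synchronizations each model integrates against a frozen copy of the other's field, which is exactly the trade-off a coupler's exchange interval controls.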

  4. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    PubMed

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
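
    The cdf-inverse-cdf construction at the heart of these copula models can be sketched directly: Gaussian AR(1) dynamics are pushed through the normal cdf and then through the inverse cdf of the target marginal. The exponential marginal below is an arbitrary stand-in for the nonparametric Bayesian marginal the authors use:

```python
import numpy as np
from scipy import stats

# Sketch of a (Gaussian) copula transformed autoregressive model: normal-theory
# AR(1) dynamics supply the internal dependence; the marginal is imposed by a
# cdf-inverse-cdf transformation. The exponential marginal is illustrative.
rng = np.random.default_rng(0)

def copula_ar1(n, phi, marginal_ppf):
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):  # stationary Gaussian AR(1) dynamics
        z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
    u = stats.norm.cdf(z)          # map to uniforms: the copula step
    return marginal_ppf(u)         # impose the non-Gaussian marginal

x = copula_ar1(5000, phi=0.8, marginal_ppf=stats.expon.ppf)
print(x.min() >= 0)   # the exponential marginal is nonnegative
```

    The series keeps the temporal dependence of the latent Gaussian process while its marginal distribution is whatever `marginal_ppf` specifies.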

  5. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory, have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, whereas quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models: singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models.

  6. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model-averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that the contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has a more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more across the different geological models than across different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
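
    One of the two model-averaging techniques mentioned, weighting on the basis of an information criterion, reduces to a small computation. The AIC values and head predictions below are illustrative numbers only, not results from the DVRFS models:

```python
import math

# Sketch of information-criterion model averaging: AIC differences become
# model weights, and predictions are averaged across the alternative models.
def aic_weights(aics):
    d = [a - min(aics) for a in aics]           # AIC differences
    w = [math.exp(-0.5 * di) for di in d]       # relative likelihoods
    s = sum(w)
    return [wi / s for wi in w]                 # normalized model weights

aics = [100.0, 102.0, 110.0]    # three alternative models (invented values)
heads = [730.0, 731.5, 728.0]   # each model's simulated hydraulic head, m

w = aic_weights(aics)
averaged = sum(wi * h for wi, h in zip(w, heads))
print(round(sum(w), 6))   # weights sum to 1.0
```

    The better-supported model (lowest AIC) dominates the average, but every plausible model contributes, which is how model uncertainty enters the prediction.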

  7. Climate Models

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.

    2012-01-01

    Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

  8. Predicting category intuitiveness with the rational model, the simplicity model, and the generalized context model.

    PubMed

    Pothos, Emmanuel M; Bailey, Todd M

    2009-07-01

    Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM; the authors refer to this way of applying the GCM as the "unsupervised GCM." The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
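
    The leave-one-out scoring idea behind the "unsupervised GCM" can be sketched for one-dimensional stimuli. The stimuli and the exponential similarity function below are invented for illustration:

```python
import math

# Sketch of the "unsupervised GCM": score each candidate category assignment
# by how well a GCM-style exemplar rule predicts every instance from all the
# others; the assignment with the smallest error is the most intuitive.
def gcm_prob(item, exemplars, labels, c=1.0):
    """Category probabilities for `item` from summed exemplar similarity."""
    sims = {}
    for x, lab in zip(exemplars, labels):
        sims[lab] = sims.get(lab, 0.0) + math.exp(-c * abs(item - x))
    total = sum(sims.values())
    return {lab: s / total for lab, s in sims.items()}

def assignment_error(items, labels):
    """Leave-one-out prediction error of a candidate assignment."""
    err = 0.0
    for i, (item, lab) in enumerate(zip(items, labels)):
        rest = items[:i] + items[i + 1:]
        rest_lab = labels[:i] + labels[i + 1:]
        err += 1.0 - gcm_prob(item, rest, rest_lab)[lab]
    return err

items = [1.0, 1.2, 1.1, 5.0, 5.3, 5.1]
intuitive = ["A", "A", "A", "B", "B", "B"]    # two tight clusters
odd = ["A", "B", "A", "B", "A", "B"]          # interleaved assignment
print(assignment_error(items, intuitive) < assignment_error(items, odd))
```

    The clustered assignment yields a much smaller leave-one-out error than the interleaved one, which is what makes it the "intuitive" grouping under this measure.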

  9. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision-tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Trees (CART), a population decision-tree method, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach to predictive modeling that can perform better than a population approach. PMID:26098570

  10. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty of choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
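
    The IEM mixing model that the new frequency model is paired with has a simple particle-level form. The sketch below uses a fixed mixing frequency `omega`, whereas the paper's contribution is computing the frequency dynamically from Lagrangian dissipation statistics:

```python
import numpy as np

# Sketch of the IEM (interaction by exchange with the mean) mixing model:
# each notional particle's scalar relaxes toward the ensemble mean at a rate
# set by the mixing frequency omega.
def iem_step(phi, omega, dt):
    """One IEM update: d(phi)/dt = -0.5 * omega * (phi - <phi>)."""
    return phi - 0.5 * omega * dt * (phi - phi.mean())

rng = np.random.default_rng(1)
phi = rng.normal(0.5, 0.2, size=1000)   # particle mixture-fraction samples
mu0, var0 = phi.mean(), phi.var()
for _ in range(100):
    # Fixed omega here for illustration; the paper's model would supply a
    # dynamic frequency computed from the dissipation of the scalars.
    phi = iem_step(phi, omega=2.0, dt=0.05)
print(phi.var() < var0)   # mixing decays scalar variance
```

    IEM conserves the scalar mean while decaying its variance, and the decay rate is entirely controlled by the mixing frequency, which is why choosing that frequency well matters.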

  11. Modelling MIZ dynamics in a global model

    NASA Astrophysics Data System (ADS)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the polar oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with greater fidelity. Current models are not fully fit for this purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include a sea ice rheology that represents both the now-thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from the global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin-scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However, with the current increase in summer MIZ area, which is projected to continue and may make the MIZ the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting both sea ice and the ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  12. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
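
    The componentwise boosting idea (refit every base-learner to the residuals each iteration, but update only the best one, so variable selection falls out automatically) can be sketched with linear base-learners standing in for the paper's penalized splines. The data are simulated for illustration:

```python
import numpy as np

# Sketch of componentwise L2 boosting: each iteration updates only the single
# best-fitting base-learner, so covariates that are never selected drop out.
rng = np.random.default_rng(5)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.1, n)  # only x0, x2 matter

coef = np.zeros(p)
nu = 0.1                                     # step length (shrinkage)
for _ in range(300):
    r = y - X @ coef                         # current residuals
    fits = [(X[:, j] @ r) / (X[:, j] @ X[:, j]) for j in range(p)]
    sse = [np.sum((r - b * X[:, j]) ** 2) for j, b in enumerate(fits)]
    j = int(np.argmin(sse))                  # best single component wins
    coef[j] += nu * fits[j]

selected = {j for j in range(p) if abs(coef[j]) > 0.2}
print(sorted(selected))   # the informative covariates are picked out
```

    The shrinkage factor `nu` plays the role of the one-degree-of-freedom smooth components in the paper: each model term grows slowly, so comparisons between terms stay fair.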

  13. Modeling arson - An exercise in qualitative model building

    NASA Technical Reports Server (NTRS)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.

  14. Air Quality Dispersion Modeling - Alternative Models

    EPA Pesticide Factsheets

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  15. Model Organisms and Traditional Chinese Medicine Syndrome Models

    PubMed Central

    Xu, Jin-Wen

    2013-01-01

    Traditional Chinese medicine (TCM) is an ancient medical system with a unique cultural background. Nowadays, more and more Western countries are accepting it because of its therapeutic efficacy. However, the safety and the precise pharmacological mechanisms of action of TCM remain uncertain. Given the potential application of TCM in healthcare, it is necessary to construct a scientific evaluation system with TCM characteristics and to benchmark its differences from the standards of Western medicine. Model organisms have played an important role in the understanding of basic biological processes: they are easier to study in certain research respects, and the information obtained can be extended to other species. Despite the controversy over suitable syndrome animal models under TCM theoretical guidance, it is unquestionable that model organisms should be used in studies of TCM modernization, which will bring modern scientific standards into this ancient medical system. In this review, we aim to summarize the utilization of model organisms in the construction of TCM syndrome models and highlight the relevance of modern medicine to TCM syndrome animal models. This review will serve as a foundation for further research on model organisms and their application in TCM syndrome models. PMID:24381636

  16. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  17. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. Calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correcting the mis-specifications leads to a simplified gravity-model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes, based on the model calibrated for 1970, produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
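
    The gravity model that the corrected IVF model reduces to distributes trips in proportion to zonal productions, attractions, and an inverse-impedance friction factor. The zones, costs, and `beta` below are illustrative numbers:

```python
# Sketch of gravity-model trip distribution: trips from zone i to zone j are
# proportional to productions, attractions, and a friction factor cost**-beta,
# normalized so each row reproduces its zone's productions.
def gravity_trips(productions, attractions, cost, beta=1.0):
    n = len(productions)
    trips = [[0.0] * n for _ in range(n)]
    for i in range(n):
        denom = sum(attractions[j] * cost[i][j] ** -beta for j in range(n))
        for j in range(n):
            trips[i][j] = (productions[i] * attractions[j]
                           * cost[i][j] ** -beta / denom)
    return trips

P = [100.0, 200.0]                 # trip productions by zone
A = [150.0, 150.0]                 # trip attractions by zone
C = [[1.0, 2.0], [2.0, 1.0]]       # travel-cost (impedance) matrix
T = gravity_trips(P, A, C)
print(round(sum(T[0]), 1))  # row sums reproduce productions: 100.0
```

    Because each row is normalized, the model is production-constrained; doubly constrained variants additionally balance the column sums against attractions.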

  18. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    Over the last decade, the Smart Water Grid (SWG) concept has emerged globally and gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, and this has led to various studies regarding energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups according to how they model and predict behavior in time series. One group considers embedded patterns such as seasonality, periodicity and trends; the other uses autoregressive models based on short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the abovementioned models is that the predictability of water demand at sub-daily scales is limited because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting that allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the Bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are a linear regression model, polynomial regression, multivariate adaptive regression splines (MARS), and support vector machines (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, nonlinear model, ensemble forecasting model, uncertainty. Acknowledgements: This work is supported by the Korea Ministry of Environment under "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)".
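
    The weighting step of such an ensemble can be sketched with two toy member models. Inverse in-sample error weights stand in here for the Bagging-derived cross-validation weights the study uses with MARS, SVM, and the regression members:

```python
import numpy as np

# Toy sketch of a weighted multi-model ensemble for hourly demand: member
# forecasts are combined with weights derived from their error. The data and
# member models are invented for illustration.
rng = np.random.default_rng(2)
t = np.arange(48, dtype=float)                       # 48 hourly observations
demand = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

# Two simple member models: a linear trend and a daily-harmonic regression.
X1 = np.column_stack([np.ones_like(t), t])
X2 = np.column_stack([np.ones_like(t),
                      np.sin(2 * np.pi * t / 24),
                      np.cos(2 * np.pi * t / 24)])

def fit_predict(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares
    return X @ coef

preds = [fit_predict(X1, demand), fit_predict(X2, demand)]
errs = np.array([np.mean((p - demand) ** 2) for p in preds])
w = (1 / errs) / (1 / errs).sum()                    # inverse-error weights
ensemble = sum(wi * p for wi, p in zip(w, preds))
ens_err = np.mean((ensemble - demand) ** 2)
print(ens_err <= errs.max())                         # never worse than worst
```

    The harmonic member captures the daily cycle and so earns most of the weight; a Bagging scheme would estimate these weights out-of-sample instead of in-sample.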

  19. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    A recently developed, analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, was studied with the objective of increasing the understanding, ease of use, and accuracy of the model. The mathematical model is described, along with a description of the UH-60A Black Hawk helicopter and the flight tests used to validate the model. To aid in using the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected, along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim-file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  20. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical
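
    The surrogate idea, spending roughly 50 runs of a demanding model to train a fast emulator, can be sketched with a radial-basis-function interpolant (a zero-noise Gaussian-process mean). The "expensive" model here is a cheap analytic stand-in:

```python
import numpy as np

# Minimal surrogate sketch: ~50 runs of a "demanding" model train a
# radial-basis-function interpolant, which is then queried cheaply many times.
def expensive_model(x):              # stand-in for a long-running simulation
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 2, 50))          # ~50 original-model runs
y = expensive_model(X)

K = rbf(X, X) + 1e-6 * np.eye(X.size)       # jitter for numerical stability
alpha = np.linalg.solve(K, y)               # fit the interpolant weights

def surrogate(x_new):
    return rbf(np.atleast_1d(x_new), X) @ alpha

x_test = np.linspace(0.1, 1.9, 200)         # stay inside the training range
err = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
print(err < 0.3)   # accurate because the response is smooth in the parameter
```

    The accuracy hinges on the smoothness requirement noted in the abstract: a rough or discontinuous response would defeat both this surrogate and frugal local methods.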

  1. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    DTIC Science & Technology

    2015-11-04

    Coastal Inlets Research Program: Particle Tracking Model (PTM) with the Coastal Modeling System (CMS). The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development
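
    A minimal Lagrangian particle-tracking step of the kind PTM performs (advection by model-supplied currents plus a random-walk term for unresolved diffusion) can be sketched as follows. The velocity field and coefficients are invented stand-ins for CMS forcing:

```python
import numpy as np

# Toy Lagrangian particle-tracking sketch: particles are advected by a
# current field supplied by a (here analytic) circulation model, with a
# random-walk term representing unresolved turbulent diffusion.
rng = np.random.default_rng(4)

def velocity(x, y):
    """Stand-in for model-supplied currents: uniform drift (u, v) in m/s."""
    return 0.2, 0.05

def track(n_particles=100, n_steps=200, dt=10.0, diffusivity=0.1):
    x = np.zeros(n_particles)   # all particles released at the origin
    y = np.zeros(n_particles)
    for _ in range(n_steps):
        u, v = velocity(x, y)
        # advection + random walk with step std = sqrt(2 * K * dt)
        x += u * dt + np.sqrt(2 * diffusivity * dt) * rng.standard_normal(n_particles)
        y += v * dt + np.sqrt(2 * diffusivity * dt) * rng.standard_normal(n_particles)
    return x, y

x, y = track()
print(round(x.mean() / (0.2 * 200 * 10.0), 1))  # ~1.0: mean drift matches u*t
```

    In a real coupled run the `velocity` lookup would interpolate the circulation model's gridded currents (and wave-driven transport) at each particle position and time.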

  2. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect hydrological conditions, and analyses will consequently yield misleading results if these structures are not taken into account properly. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature and precipitation, in the monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature, and gamma distributions for precipitation. A comparison was made between the C-Vine, D-Vine and Student t-copula, using observed and simulated spatial dependence structures, to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performance. The copula models captured the patterns of spatial dependence between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the magnitude of the dependence structure. Important statistics of the observed data were closely approximated, except for maximum values of maximum temperature and minimum values of minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C-Vines and D-Vines are better tools for modelling the dependence between variables; however, Student t-copulas compete closely for precipitation. Keywords: copula model, C-Vine, D-Vine, spatial dependence structure, monsoon-dominated region of Pakistan

  3. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. One solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface with a Gaussian Process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10,000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude more computation time than an MCMC run of the same length with its emulator; this difference would be greater still for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from the Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from ESM analysis, constrained model parameters controlling demographic
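    A minimal sketch of the emulation idea described above, with an invented one-parameter "model" standing in for SIPNET or ED2: the expensive log-posterior is evaluated at a small design of parameter values, a Gaussian Process is fit to those evaluations, and MCMC then runs on the cheap GP mean rather than on the model itself. Everything here (the toy posterior, kernel length scale, design size) is illustrative, not PEcAn's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive process model's log-posterior (1-D parameter);
# in practice each evaluation would be a full model run
def log_post(theta):
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

# 1) Run the expensive model at a small design of parameter values
X = np.linspace(0.0, 4.0, 9)
y = log_post(X)

# 2) Fit a Gaussian Process (squared-exponential kernel) to the surface
def k(a, b, ell=0.8):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

alpha = np.linalg.solve(k(X, X) + 1e-8 * np.eye(len(X)), y)

def emu(theta):
    return float(k(np.atleast_1d(theta), X) @ alpha)  # GP posterior mean

# 3) Cheap Metropolis sampling on the emulated surface, restricted to the
#    design range where the emulator can be trusted
theta, chain = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.5)
    if X[0] <= prop <= X[-1] and np.log(rng.uniform()) < emu(prop) - emu(theta):
        theta = prop
    chain.append(theta)
post = np.array(chain[5000:])
print(round(post.mean(), 1))  # ≈ 2.0
```

    The payoff is the cost ratio: after nine model runs, every MCMC iteration costs only a kernel evaluation.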

  4. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting densely measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials was proposed. The refined model improves the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with existing empirical models, it contains six simple parameters that can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets; the fitting errors of all materials were below 6%, much lower than those of the five-parameter model. The refined model was further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere were given, in which the strength of the optical scattering of different materials is clearly shown, demonstrating the refined model's ability to characterize materials.
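    The genetic-algorithm inversion step can be illustrated independently of the BRDF model itself. The sketch below fits an invented two-parameter curve (a stand-in for the six-parameter model, which the abstract does not specify in full) with a basic selection/crossover/mutation loop; population size, mutation scale, and the target curve are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measured" data from a simple two-parameter model
# (a stand-in for the six-parameter BRDF; true a=0.8, b=3.0)
x = np.linspace(0.0, 1.0, 40)
y_obs = 0.8 * np.exp(-3.0 * x)

def fitness(pop):
    """Negative RMS fitting error for each (a, b) candidate."""
    pred = pop[:, 0:1] * np.exp(-pop[:, 1:2] * x[None, :])
    return -np.sqrt(((pred - y_obs) ** 2).mean(axis=1))

pop = rng.uniform([0.0, 0.0], [2.0, 6.0], size=(60, 2))  # initial population
for gen in range(80):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]                    # keep the best 20
    kids = parents[rng.integers(0, 20, (60, 2)), [0, 1]]  # uniform crossover
    kids += rng.normal(0.0, 0.05, kids.shape)             # mutation
    pop = kids
best = pop[np.argmax(fitness(pop))]
err = -fitness(best[None, :])[0]
print(best.round(1), round(err, 3))   # best ≈ (0.8, 3.0) with small error
```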

  5. EIA model documentation: Petroleum market model of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-28

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level.

  6. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  7. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow the presence of such gravity waves. In addition, exponentially increasing computer capabilities have allowed time integrations to increase from hours to days, domain grid boxes (points) to increase from less than 2,000 to more than 2,500,000 grid points with 500 to 1,000 m resolution, and 3-D models to become increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information about the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.

  8. AIR QUALITY MODELING OF AMMONIA: A REGIONAL MODELING PERSPECTIVE

    EPA Science Inventory

    The talk will address the status of modeling of ammonia from a regional modeling perspective, yet the observations and comments should have general applicability. The air quality modeling system components that are central to modeling ammonia will be noted and a perspective on ...

  9. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    NASA Astrophysics Data System (ADS)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting the reference evapotranspiration (ET0) process is essential for several agricultural and hydrological applications, especially for rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small set of climatic inputs derived from the minimum and maximum air temperatures and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), adaptive neuro-fuzzy inference systems with grid partitioning and with subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The implemented monthly time-scale data set was collected at the Antalya and Isparta stations, which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. The validation accuracy was assessed with multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R2), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the conducted models were highly practical and reliable for the investigated case studies. At the Antalya station, the GEP and GRNN models performed better than the other investigated models, while the RBNN and ANFIS-SC models performed best at the Isparta station. Except for the MLP model, all the investigated models presented better accuracy than the HS and CHS empirical models when applied in a cross-station scenario. A cross-station scenario examination implies the
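    The Hargreaves-Samani benchmark used above is a simple closed-form, temperature-based estimate: ET0 = c · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin), with the extraterrestrial radiation Ra converted from MJ m-2 day-1 to its evaporation equivalent. A sketch follows; the input values are invented, and c = 0.0023 is the standard coefficient that the "CHS" calibrated variant adjusts.

```python
import math

def hargreaves_samani(tmax, tmin, ra_mj, c=0.0023):
    """Reference evapotranspiration (mm/day) from the Hargreaves-Samani
    equation; ra_mj is extraterrestrial radiation in MJ m-2 day-1.
    The coefficient c is the term recalibrated in 'CHS' variants."""
    tmean = (tmax + tmin) / 2.0
    ra_mm = ra_mj / 2.45          # radiation to mm/day evaporation equivalent
    return c * ra_mm * (tmean + 17.8) * math.sqrt(tmax - tmin)

et0 = hargreaves_samani(tmax=32.0, tmin=18.0, ra_mj=38.0)
print(round(et0, 1))   # ≈ 5.7 mm/day
```

    The equation's appeal, and the reason it serves as the baseline for the AI models above, is that it needs only daily temperature extremes plus Ra, which depends only on latitude and day of year.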

  10. The medical model versus the just deserts model.

    PubMed

    Wolfgang, M E

    1988-01-01

    This paper traces the history of two models that have been influential in shaping modern views toward criminals. One of these two--the medical model--is based on the concept of rehabilitation, that is, treatment predicated on the attributes of the offender. The second of these two--the just deserts model--centers on retribution, that is, punishment deserved for the seriousness of the crime. Each model has been dominant in various periods of history.

  11. DRI Model of the U.S. Economy -- Model Documentation

    EIA Publications

    1993-01-01

    Provides documentation on the Data Resources, Inc. (DRI) Model of the U.S. Economy and the DRI Personal Computer Input/Output Model. It describes the theoretical basis, structure, and functions of both DRI models and contains brief descriptions of the models and their equations.

  12. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN), and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different, resulting in significant between-model variances that are normally ignored when a single AI model is used.
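    The BMA arithmetic the abstract describes is compact enough to sketch directly: BIC scores give the weights (w_i ∝ exp(−ΔBIC_i/2)), and the total variance splits into within-model and between-model terms. All numbers below are invented; note how a model with a clearly worse BIC, like NF in the study, is effectively discarded by the weighting.

```python
import numpy as np

# Hypothetical per-model outputs: mean hydraulic conductivity estimates
# (m/day), within-model variances, and BIC scores for three AI models
means = np.array([4.2, 6.1, 5.0])        # e.g. TS-FL, ANN, NF
variances = np.array([0.30, 0.50, 0.40])
bic = np.array([112.0, 112.4, 150.0])    # lower BIC = more parsimonious fit

# BMA weights: w_i proportional to exp(-delta_BIC_i / 2)
delta = bic - bic.min()
w = np.exp(-delta / 2.0)
w /= w.sum()

bma_mean = w @ means
within = w @ variances                       # propagated input uncertainty
between = w @ (means - bma_mean) ** 2        # model non-uniqueness
total_var = within + between

print(w.round(3), round(bma_mean, 2), round(total_var, 2))
```

    The between-model term is exactly the contribution that vanishes when only one AI model is used, which is why single-model estimates understate uncertainty.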

  13. Quelccaya Ice Core Evidence of Widespread Atmospheric Pollution from Colonial Metallurgy after the Spanish Conquest of South America (1532 AD)

    NASA Astrophysics Data System (ADS)

    Gabrielli, P.; Uglietti, C.; Cooke, C. A.; Thompson, L. G.

    2014-12-01

    A few ice core records recovered from remote Arctic regions suggest a widespread impact of toxic trace elements (Pb, Cu, Sb, As and Bi) on the Northern Hemisphere atmosphere prior to the onset of the Industrial Revolution (1780s-1830s). In the Southern Hemisphere, evidence for preindustrial trace element emissions is, to date, limited to sediment cores recovered from lakes located within the immediate airshed of major metallurgical centers of South America. Thus it remains unresolved whether these emissions could have had a larger-scale impact. Here, we present an annually resolved ice core record of anthropogenic trace element deposition from the remote drilling site of the Quelccaya Ice Cap (Peru) that spans 793-1989 AD. During the pre-Inca period (i.e., prior to ~1450 AD), the deposition of trace elements was dominated by the fallout of aeolian dust from the deglaciated margins of the ice cap and of ash from occasional volcanic eruptions. In contrast, the ice core record indicates a clear anthropogenic signal emerging after the onset of large-scale colonial mining and metallurgy (1532-1820 AD), ~300 years prior to the Industrial Revolution, during the last part of the Little Ice Age. This shift coincided with a major technological transition for silver extraction (1572 AD), from lead-based smelting to mercury amalgamation, which initiated a major increase in ore mining and milling and likely resulted in an increase of metallic dust emissions. While atmospheric trace element deposition resulting from colonial metallurgy was certainly much larger than during the pre-Colonial period, trace element fallout during the Colonial era was still several factors lower than during the 20th century, when the construction of the trans-Andean railway and highways promoted widespread societal and industrial development of South America.

  14. Optimizing best management practices to control anthropogenic sources of atmospheric phosphorus deposition to inland lakes.

    PubMed

    Weiss, Lee; Thé, Jesse; Winter, Jennifer; Gharabaghi, Bahram

    2018-04-18

    Excessive phosphorus loading to inland freshwater lakes around the globe has resulted in nuisance plant growth along the waterfronts, degraded habitat for cold water fisheries, and impaired beaches, marinas and waterfront property. The direct atmospheric deposition of phosphorus can be a significant contributing source to inland lakes. The atmospheric deposition monitoring program for Lake Simcoe, Ontario indicates that roughly 20% of the annual total phosphorus load (2010-2014 period) is due to direct atmospheric deposition (both wet and dry) on the lake. This study presents a first application of the Genetic Algorithm (GA) methodology to optimize the application of best management practices (BMPs) related to agriculture and mobile sources to achieve atmospheric phosphorus reduction targets and restore the ecological health of the lake. The methodology takes into account the spatial distribution of emission sources in the airshed, the complex atmospheric long-range transport and deposition processes, the cost and efficiency of popular management practices, and social constraints on the adoption of BMPs. The optimization scenarios suggest that overall annual capital investments of approximately $2M, $4M, and $10M can achieve reductions of roughly 3, 4, and 5 tonnes, respectively, in the atmospheric P load to the lake. The exponential trend indicates diminishing returns on investment beyond roughly $3M per year and shows that focusing much of this investment in the upwind, nearshore area will significantly affect deposition to the lake. The optimization combines the lowest-cost, most beneficial and socially acceptable management practices into a science-informed promotion of an implementation/BMP adoption strategy. The geospatial aspect of the optimization (i.e. proximity and location with respect to the lake) will help land managers encourage the use of these targeted best practices in areas that

  15. Source Characterization of Volatile Organic Compounds Affecting the Air Quality in a Coastal Urban Area of South Texas

    PubMed Central

    Sanchez, Marciano; Karnae, Saritha; John, Kuruvilla

    2008-01-01

    Selected volatile organic compounds (VOC) emitted from various anthropogenic sources, including industries and motor vehicles, act as primary precursors of ozone, while some VOC are classified as air toxics. Significantly large VOC emission sources impact the air quality in Corpus Christi, Texas. This urban area is located in a semi-arid region of South Texas and is home to several large petrochemical refineries and industrial facilities along a busy ship channel. The Texas Commission on Environmental Quality (TCEQ) has set up two continuous ambient monitoring stations (CAMS 633 and 634) along the ship channel to monitor VOC concentrations in the urban atmosphere. The hourly concentrations of 46 VOC were acquired from TCEQ for a comprehensive source apportionment study. The primary objective of this study was to identify and quantify the sources affecting the ambient air quality within this urban airshed. Principal Component Analysis/Absolute Principal Component Scores (PCA/APCS) was applied to the dataset. PCA identified five possible sources accounting for 69% of the total variance in the VOC levels measured at CAMS 633, and six possible sources accounting for 75% of the total variance at CAMS 634. APCS identified natural gas emissions as the major source contributor at CAMS 633, accounting for 70% of the measured VOC concentrations. The other major sources identified at CAMS 633 included flare emissions (12%), fugitive gasoline emissions (9%), refinery operations (7%), and vehicle exhaust (2%). At CAMS 634, natural gas sources were identified as the major source category, contributing 31% of the observed VOC. The other sources affecting this site included refinery operations (24%), flare emissions (22%), secondary industrial processes (12%), fugitive gasoline emissions (8%), and vehicle exhaust (3%). PMID:19139530
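    The PCA step of PCA/APCS reduces the standardized species-by-hour concentration matrix to a few components whose eigenvalues measure explained variance, which is how the "five sources, 69% of variance" figures above arise. A toy version with invented data (two latent "sources" and six species rather than the study's 46):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly VOC concentrations at one monitor: 500 hours x 6
# species, driven by two latent "sources" plus noise (a stand-in for the
# study's 46-species dataset)
sources = rng.lognormal(0.0, 1.0, size=(500, 2))
loadings = rng.uniform(0.0, 1.0, size=(2, 6))
X = sources @ loadings + rng.normal(0.0, 0.05, size=(500, 6))

# PCA on the correlation matrix (equivalent to standardizing each species)
corr = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # descending eigenvalues
explained = eigvals / eigvals.sum()           # fraction of total variance

# With two latent sources, the first two components should dominate,
# mirroring how PCA picks the number of contributing source factors
print(explained[:2].round(2))
```

    The APCS stage (not shown) then regresses the measured concentrations on the rescaled component scores to turn each retained component into an absolute source contribution.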

  16. Gas and Particulate Aircraft Emissions Measurements: Impacts on local air quality.

    NASA Astrophysics Data System (ADS)

    Jayne, J. T.; Onasch, T.; Northway, M.; Canagaratna, M.; Worsnop, D.; Timko, M.; Wood, E.; Miake-Lye, R.; Herndon, S.; Knighton, B.; Whitefield, P.; Hagen, D.; Lobo, P.; Anderson, B.

    2007-12-01

    Air travel and freight shipping by air are becoming increasingly important and are expected to continue to expand. The resulting increases in local concentrations of pollutants, including particulate matter (PM), volatile organic compounds (VOCs), and nitrogen oxides (NOx), can negatively affect regional air quality and human health and can contribute to climate change. In order to construct valid emission inventories, accurate measurements of aircraft emissions are needed. These measurements must be made both at the engine exit plane (certification) and downwind, following the rapid cooling, dilution, and initial atmospheric processing of the exhaust plume. We present here results from multiple field experiments, including the Experiment to Characterize Volatile Aerosol and Trace Species Emissions (EXCAVATE) and the four Aircraft Particle Emissions eXperiments (APEX-1/Atlanta/2/3), which characterized gas and particle emissions from both stationary and in-use aircraft. Emission indices (EIs) for NOx and VOCs and for particle number concentration, refractory PM (black carbon soot), and volatile PM (primarily sulfate and organic) particles are reported. Measurements were made at the engine exit plane and at several downstream locations (10 and 30 meters) for a number of different engine types and engine thrust settings. A significant fraction of the organic particle mass is composed of low-volatility oil-related compounds and is not combustion related, being potentially emitted by vents or heated surfaces within aircraft engines. Advected plume measurements from in-use aircraft show that the practice of reduced-thrust take-offs has a significant effect on the total NOx and soot emitted in the vicinity of the airport. The measurements reported here represent a first observation of this effect, and new insights have been gained with respect to the chemical processing of gases and particulates important to the urban airshed.

  17. Patterns of mercury dispersion from local and regional emission sources, rural Central Wisconsin, USA

    USGS Publications Warehouse

    Kolker, A.; Olson, M.L.; Krabbenhoft, D.P.; Tate, M.T.; Engle, M.A.

    2010-01-01

    Simultaneous real-time changes in mercury (Hg) speciation, i.e. reactive gaseous Hg (RGM), elemental Hg (Hg0), and fine particulate Hg (Hg-PM2.5), were determined from June to November 2007 in ambient air at three locations in rural Central Wisconsin. Known Hg emission sources within the airshed of the monitoring sites include: 1) a 1114 megawatt (MW) coal-fired electric utility generating station; 2) a Hg-bed chlor-alkali plant; and 3) a smaller (465 MW) coal-burning electric utility. Monitoring sites, showing sporadic elevation of RGM, Hg0 and Hg-PM2.5, were positioned at distances of 25, 50 and 100 km northward of the larger electric utility. A series of RGM events were recorded at each site. The largest, on 23 September, occurred under prevailing southerly winds, with a maximum RGM value (56.8 pg m-3) measured at the 100 km site, and corresponding elevated SO2 (10.41 ppbv; measured at the 50 km site). The finding that RGM, Hg0, and Hg-PM2.5 are not always highest at the 25 km site, closest to the large generating station, contradicts the idea that RGM decreases with distance from a large point source. This may be explained if: 1) the 100 km site was influenced by emissions from the chlor-alkali facility or by RGM from regional urban sources; 2) the emission stack height of the larger power plant promoted plume transport at an elevation where the Hg is carried over the closest site; or 3) RGM was being generated in the plume through oxidation of Hg0. Operational changes at each emitter since 2007 should reduce their Hg output, potentially allowing quantification of the environmental benefit in future studies.

  18. Assessing the spatial and temporal variability of fine particulate matter components in Israeli, Jordanian, and Palestinian cities

    NASA Astrophysics Data System (ADS)

    Sarnat, Jeremy A.; Moise, Tamar; Shpund, Jacob; Liu, Yang; Pachon, Jorge E.; Qasrawi, Radwan; Abdeen, Ziad; Brenner, Shmuel; Nassar, Khaled; Saleh, Rami; Schauer, James J.

    2010-07-01

    This manuscript presents results from an extensive, multi-country comparative monitoring study of fine particulate matter (PM2.5) and its primary chemical components in Israeli, Jordanian and Palestinian cities. This study represented the first time that researchers from these countries have worked together to examine spatial and temporal relationships for PM2.5 and its major components among the study sites. The findings indicated that total PM2.5 mass was relatively homogeneous among many of the 11 sites, as shown by strong between-site correlations. Mean annual concentrations ranged from 19.9 to 34.9 μg m-3 in Haifa and Amman, respectively, and exceeded accepted international air quality standards for annual PM2.5 mass. The similarity of total mass was largely driven by the SO₄²⁻ and crustal PM2.5 components. Despite the close proximity of the seven well-correlated sites with respect to PM2.5, there were pronounced differences among the cities for EC and, to a lesser degree, OC. EC, in particular, exhibited spatiotemporal trends that were indicative of strong local source contributions. Interestingly, there were moderate to strong EC correlations (r > 0.65) among the large metropolitan cities, West Jerusalem, Tel Aviv and Amman. For these relatively large cities, EC sources from the fleet of buses and cars typical of many urban areas predominate and likely drive the spatiotemporal EC distributions. As new airshed management strategies and public health interventions are implemented throughout the Middle East, our findings support regulatory strategies that target integrated regional and local controls to reduce PM2.5 mass and the specific components suspected to drive the adverse health effects of particulate matter exposure.

  19. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but users must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Users can also add "tags" to their threads to facilitate later searches. The knowledge base comprises documents used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  20. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
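    One common way to implement an applicability domain, among several in use, is a k-nearest-neighbour distance cutoff in descriptor space: a query compound too far from the training set gets no prediction. A sketch with invented descriptors (the cutoff rule, mean + 0.5·std of training k-NN distances, is one of many reasonable choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training-set descriptors (100 compounds x 5 descriptors)
train = rng.normal(0.0, 1.0, size=(100, 5))
mu, sd = train.mean(axis=0), train.std(axis=0)
Zt = (train - mu) / sd

def knn_dist(z, k=5):
    """Mean Euclidean distance to the k nearest training compounds."""
    d = np.sort(np.sqrt(((Zt - z) ** 2).sum(axis=1)))
    d = d[d > 1e-12]       # drop the self-distance for training points
    return d[:k].mean()

# Domain cutoff from the training set's own k-NN distance distribution
train_d = np.array([knn_dist(z) for z in Zt])
cutoff = train_d.mean() + 0.5 * train_d.std()

def in_domain(x):
    """True if a query compound falls inside the applicability domain."""
    return knn_dist((x - mu) / sd) < cutoff

print(in_domain(np.zeros(5)), in_domain(np.full(5, 8.0)))  # True False
```

    In a virtual-screening pipeline this check runs before prediction, so model output is only reported for compounds the training chemistry actually covers.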

  1. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has previously been certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with a root-mean-square error and mean bias of no more than 2 cm s-1 and 8 mm s-1, respectively, for the modeled velocity.

  2. Modelling Students' Construction of Energy Models in Physics.

    ERIC Educational Resources Information Center

    Devi, Roshni; And Others

    1996-01-01

    Examines students' construction of experimentation models for physics theories in energy storage, transformation, and transfers involving electricity and mechanics. Student problem solving dialogs and artificial intelligence modeling of these processes is analyzed. Construction of models established relations between elements with linear causal…

  3. An empirical model to forecast solar wind velocity through statistical modeling

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Ridley, A. J.

    2013-12-01

    The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies have proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either historical observations, e.g. the persistence model, or instantaneous observations of the Sun, e.g. the Wang-Sheeley-Arge model. In this study, we use one-minute WIND data from January 1995 to August 2012 to investigate and compare the performance of four models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. By comparing its performance against the four aforementioned models, it is found that the general persistence model outperforms the other four models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecasting of solar wind velocity and has the potential to be modified to arrive at better models.
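    The 'general persistence model' described above is simply a least-squares linear blend of three baseline forecasts. A minimal sketch on synthetic data (the series, the 3-day horizon, and the resulting weights are illustrative, not the WIND results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly solar-wind speed series (km/s): a base level plus a component
# that loosely recurs with the ~27-day solar rotation, plus noise.
n_hours = 24 * 27 * 6
t = np.arange(n_hours)
speed = 450 + 80 * np.sin(2 * np.pi * t / (27 * 24)) + 30 * rng.standard_normal(n_hours)

lead = 24 * 3            # 3-day forecast horizon (hours)
rot = 24 * 27            # one solar rotation (hours)

# The three simple predictors, aligned with the target values:
target = speed[rot:]                              # value to be predicted
null = np.full_like(target, speed.mean())         # null model: climatological mean
persist = speed[rot - lead:-lead]                 # persistence: last observed value
rot_ago = speed[:-rot]                            # one-solar-rotation-ago value

# Least-squares weights for the linear blend ("general persistence model").
X = np.column_stack([null, persist, rot_ago])
w, *_ = np.linalg.lstsq(X, target, rcond=None)
combo = X @ w

rmse = lambda pred: np.sqrt(np.mean((pred - target) ** 2))
for name, pred in [("null", null), ("persistence", persist),
                   ("rotation-ago", rot_ago), ("combined", combo)]:
    print(f"{name:12s} RMSE = {rmse(pred):6.1f} km/s")
```

    Because each baseline is itself one choice of weight vector, the least-squares blend can never do worse than any of the three on the fitting data, which is the intuition behind its strong performance.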

  4. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
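    As a concrete illustration of how criteria from these classes trade fit against complexity, the sketch below scores polynomial models of a noisy quadratic with AIC and BIC under a Gaussian error model; the data and candidate degrees are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "truth": a quadratic signal observed with noise.
n = 60
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.1 * rng.standard_normal(n)

def aic_bic(degree):
    """Fit a polynomial of the given degree by least squares and score it
    with AIC and BIC, counting k = degree + 2 parameters (coefficients
    plus the noise variance)."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # Gaussian log-likelihood
    k = degree + 2
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    return aic, bic

for d in (1, 2, 8):
    aic, bic = aic_bic(d)
    print(f"degree {d}: AIC = {aic:7.1f}  BIC = {bic:7.1f}")
```

    Both criteria reward the fit (log-likelihood) and penalize parameter count, but BIC's log(n) penalty grows with sample size, which is why BIC is consistent (it recovers the true model as n grows) while AIC targets predictive performance.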

  5. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain.

  6. Spin-Up and Tuning of the Global Carbon Cycle Model Inside the GISS ModelE2 GCM

    NASA Technical Reports Server (NTRS)

    Aleinov, Igor; Kiang, Nancy Y.; Romanou, Anastasia

    2015-01-01

    The planetary carbon cycle involves multiple phenomena acting at a variety of temporal and spatial scales. The typical times range from minutes for leaf stomata physiology to centuries for passive soil carbon pools and deep ocean layers, so finding a satisfactory equilibrium state becomes a challenging and computationally expensive task. Here we present the spin-up processes for different configurations of the GISS Carbon Cycle model, from the model forced with MODIS-observed Leaf Area Index (LAI) and a prescribed ocean, to prognostic LAI, to the model fully coupled to the dynamic ocean and ocean biology. We investigate the time it takes the model to reach equilibrium and discuss ways to speed up this process. The NASA Goddard Institute for Space Studies General Circulation Model (GISS ModelE2) is currently equipped with all major algorithms necessary for the simulation of the Global Carbon Cycle. The terrestrial part is represented by the Ent Terrestrial Biosphere Model (Ent TBM), which includes leaf biophysics, prognostic phenology and a soil biogeochemistry module (based on the Carnegie-Ames-Stanford model). The ocean part is based on the NASA Ocean Biogeochemistry Model (NOBM). The transport of atmospheric CO2 is performed by the atmospheric part of ModelE2, which employs a quadratic upstream algorithm for this purpose.

  7. Culturicon model: A new model for cultural-based emoticon

    NASA Astrophysics Data System (ADS)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention and improve communication among speakers of different native languages. However, despite the benefits that emoticons provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be an extensively researched aspect of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden-Turner Culture Model and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  8. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another, and against the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.

  9. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited range of temporal scales, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that, compared to single models, model combinations improve performance in key statistics such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
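    The fine-scale member of such a composite, a microcanonical multiplicative random cascade, is simple to sketch: each interval's depth is split between its two halves by a random weight, so mass is conserved exactly at every level. The Beta(a, a) weight distribution and its parameter below are illustrative assumptions, not the calibration of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_disaggregate(daily_depth, levels=7, a=0.7):
    """Disaggregate one daily precipitation total into 2**levels sub-intervals
    with a microcanonical multiplicative random cascade: at each branching a
    weight W ~ Beta(a, a) sends a fraction W of the mass to the left child and
    1 - W to the right, so the daily total is conserved at every level."""
    depths = np.array([daily_depth])
    for _ in range(levels):
        w = rng.beta(a, a, size=depths.size)
        # Interleave left/right children in time order.
        depths = np.column_stack([w * depths, (1 - w) * depths]).ravel()
    return depths

fine = cascade_disaggregate(24.0, levels=7)   # e.g. 24 mm spread over 128 intervals
print(f"intervals: {fine.size}, total: {fine.sum():.3f} mm, max: {fine.max():.2f} mm")
```

    Choosing a < 1 makes the weights U-shaped, producing the intermittency (many near-dry sub-intervals, a few intense ones) characteristic of fine-scale rainfall.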

  10. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  11. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    PubMed

    Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
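    The two validation metrics used above are medians of relative prediction errors: MDPE measures bias and MDAPE measures imprecision. A small sketch with made-up serum levels (not the study's data):

```python
import numpy as np

def mdpe_mdape(observed, predicted):
    """Median Prediction Error (bias, %) and Median Absolute Prediction
    Error (imprecision, %) of predicted vs. observed concentrations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    pe = 100.0 * (predicted - observed) / observed
    return np.median(pe), np.median(np.abs(pe))

# Illustrative (made-up) gentamicin serum levels and model predictions, mg/L.
obs = np.array([1.1, 8.4, 0.9, 10.2, 7.6, 1.4])
pred = np.array([1.0, 8.8, 0.85, 10.0, 8.1, 1.3])

mdpe, mdape = mdpe_mdape(obs, pred)
print(f"MDPE = {mdpe:+.2f}%  MDAPE = {mdape:.2f}%")
```

    Medians are used rather than means so that a single outlying level does not dominate the assessment of a population model.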

  13. Agent-based modeling: case study in cleavage furrow models

    PubMed Central

    Mogilner, Alex; Manhart, Angelika

    2016-01-01

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328

  14. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
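    The contrast between the two approximation types can be reproduced in one dimension: a least-squares quadratic cannot follow a response with several local extrema, while a kriging-style interpolator with a Gaussian covariance can. The test function, correlation length and nugget below are illustrative choices, not those of the study:

```python
import numpy as np

# A response with multiple local extrema -- hard for a single quadratic.
f = lambda x: np.sin(3 * np.pi * x) + 0.5 * x
x_train = np.linspace(0, 1, 12)
y_train = f(x_train)
x_test = np.linspace(0, 1, 101)

# Approximation 1: quadratic polynomial by least squares.
poly = np.polyfit(x_train, y_train, 2)
y_poly = np.polyval(poly, x_test)

# Approximation 2: simple kriging with a Gaussian covariance (assumed
# correlation length theta; a tiny nugget keeps the solve well-conditioned).
theta, nugget = 0.1, 1e-10
cov = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / theta) ** 2)
K = cov(x_train, x_train) + nugget * np.eye(x_train.size)
weights = np.linalg.solve(K, y_train)
y_krig = cov(x_test, x_train) @ weights

rmse = lambda y: np.sqrt(np.mean((y - f(x_test)) ** 2))
print(f"quadratic RMSE = {rmse(y_poly):.3f}, kriging RMSE = {rmse(y_krig):.3f}")
```

    The price of the interpolator's flexibility is visible even here: it requires solving a dense linear system in the training points, which grows quickly with sample size and dimension.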

  15. Groundwater modelling in conceptual hydrological models - introducing space

    NASA Astrophysics Data System (ADS)

    Boje, Søren; Skaugen, Thomas; Møen, Knut; Myrabø, Steinar

    2017-04-01

    The tiny Sæternbekken Minifelt (Muren) catchment (7500 m2) in Bærumsmarka, Norway, was, during the 1990s, densely instrumented with more than 100 observation points for measuring groundwater levels. The aim was to investigate the link between shallow groundwater dynamics and runoff. The DDD (Distance Distribution Dynamics) model is a newly developed rainfall-runoff model used operationally by the Norwegian Flood-Forecasting service at NVE. The model estimates the capacity of the subsurface reservoir at different levels of saturation and predicts overland flow. The subsurface in the DDD model has a 2-D representation that calculates the saturated and unsaturated soil moisture along a hillslope representing the entire catchment in question. The groundwater observations from more than two decades ago are used to verify assumptions about the subsurface reservoir in the DDD model and to validate its spatial representation of the subsurface reservoir. The Muren catchment will, during 2017, be re-instrumented in order to continue the work of bridging the gap between conceptual hydrological models, with typically single-value or 0-dimensional representations of the subsurface, and models with more realistic 2- or 3-dimensional representations of the subsurface.

  16. Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum

    2011-01-01

    Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…

  17. BioModels: expanding horizons to include more modelling approaches and formats

    PubMed Central

    Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi

    2018-01-01

    Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614

  18. Is the Voter Model a Model for Voters?

    NASA Astrophysics Data System (ADS)

    Fernández-Gracia, Juan; Suchecki, Krzysztof; Ramasco, José J.; San Miguel, Maxi; Eguíluz, Víctor M.

    2014-04-01

    The voter model has been studied extensively as a paradigmatic opinion dynamics model. However, its ability to model real opinion dynamics has not been addressed. We introduce a noisy voter model (accounting for social influence) with recurrent mobility of agents (as a proxy for social context), where spatial and population diversity are taken as inputs to the model. We show that the dynamics can be described as a noisy diffusive process that contains the proper anisotropic coupling topology given by population and mobility heterogeneity. The model captures statistical features of U.S. presidential elections, such as the stationary vote-share fluctuations across counties and the long-range spatial correlations that decay logarithmically with distance. Furthermore, it recovers the behavior of these properties when the geographical space is coarse-grained at different scales, from the county level through congressional districts and up to states. Finally, we analyze the role of the mobility range and of randomness in decision making, which are consistent with the empirical observations.
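    A stripped-down, mean-field sketch of a noisy voter model (without the recurrent mobility and spatial structure of the paper's model) already shows the persistent stationary vote-share fluctuations that pure imitation alone would not sustain; the population size, noise rate and step count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_voter(n_agents=500, noise=0.02, steps=20000):
    """Mean-field noisy voter model: at each step one agent either picks an
    opinion at random (with probability `noise`) or copies a randomly chosen
    agent (imitation). Returns the trajectory of the vote share for opinion 1."""
    state = rng.integers(0, 2, n_agents)
    share = np.empty(steps)
    for t in range(steps):
        i = rng.integers(n_agents)
        if rng.random() < noise:
            state[i] = rng.integers(0, 2)              # idiosyncratic switch (noise)
        else:
            state[i] = state[rng.integers(n_agents)]   # copy another agent
        share[t] = state.mean()
    return share

traj = noisy_voter()
print(f"late-time mean vote share {traj[-5000:].mean():.2f}, std {traj[-5000:].std():.3f}")
```

    Without the noise term the model would eventually freeze at consensus (share 0 or 1); the noise keeps the vote share fluctuating around an interior value, the qualitative behavior the paper quantifies with real election data.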

  19. Sequence modelling and an extensible data model for genomic database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Peter Wei-Der

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP, and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that incorporates the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs of the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  1. Log-Multiplicative Association Models as Item Response Models

    ERIC Educational Resources Information Center

    Anderson, Carolyn J.; Yu, Hsiu-Ting

    2007-01-01

    Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…

  2. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  3. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996 (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  4. Dynamic Emulation Modelling (DEMo) of large physically-based environmental models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2012-12-01

    In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be satisfactorily solved. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between inputs and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships, but preserves the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the

  5. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
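
    The finite-mixture idea can be sketched for a hypothetical two-state tree (a "detect" state reached with probability d, a "guess" state otherwise), each emitting a Gaussian response-time component with a shared sigma. This is not the authors' code; all names and parameter values are illustrative.

    ```python
    import math

    def gaussian_pdf(x, mu, sigma):
        """Density of a Gaussian with mean mu and standard deviation sigma."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def gpt_rt_density(rt, d, mu_detect, mu_guess, sigma):
        """Mixture density of a response time under a two-branch tree:
        the 'detect' state (probability d) and the 'guess' state (1 - d)
        each contribute a Gaussian component; weights come from the tree."""
        return (d * gaussian_pdf(rt, mu_detect, sigma)
                + (1 - d) * gaussian_pdf(rt, mu_guess, sigma))
    ```

    Maximizing the product of such densities over trials would jointly estimate the tree probability d and the continuous component parameters.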

  6. Models of ovarian cancer metastasis: Murine models

    PubMed Central

    Šale, Sanja; Orsulic, Sandra

    2008-01-01

    Mice have mainly been used in ovarian cancer research as immunodeficient hosts for cell lines derived from the primary tumors and ascites of ovarian cancer patients. These xenograft models have provided a valuable system for pre-clinical trials, however, the genetic complexity of human tumors has precluded the understanding of key events that drive metastatic dissemination. Recently developed immunocompetent, genetically defined mouse models of epithelial ovarian cancer represent significant improvements in the modeling of metastatic disease. PMID:19337569

  7. Addressing Hydro-economic Modeling Limitations - A Limited Foresight Sacramento Valley Model and an Open-source Modeling Platform

    NASA Astrophysics Data System (ADS)

    Harou, J. J.; Hansen, K. M.

    2008-12-01

    Increased scarcity of world water resources is inevitable given the limited supply and increased human pressures. The idea that "some scarcity is optimal" must be accepted for rational resource use and infrastructure management decisions to be made. Hydro-economic systems models are unique in representing the overlap of economic drivers, socio-political forces and distributed water resource systems. They demonstrate the tangible benefits of cooperation and integrated flexible system management. Further improvement of models, quality control practices and software will be needed for these academic policy tools to become accepted into mainstream water resource practice. Promising features include calibration methods, limited foresight optimization formulations, linked simulation-optimization approaches (e.g. embedding pre-existing calibrated simulation models), spatial groundwater models, stream-aquifer interactions and stream routing. Conventional user-friendly decision support systems helped spread simulation models on a massive scale. Hydro-economic models must also find a means to facilitate construction, distribution and use. Some of these issues and model features are illustrated with a hydro-economic optimization model of the Sacramento Valley. Carry-over storage value functions are used to limit hydrologic foresight of the multi-period optimization model. Pumping costs are included in the formulation by tracking regional piezometric head of groundwater sub-basins. To help build and maintain this type of network model, an open-source water management modeling software platform is described and initial project work is discussed. The objective is to generically facilitate the connection of models, such as those developed in a modeling environment (GAMS, MatLab, Octave, …), to a geographic user interface (drag and drop node-link network) and a database (topology, parameters and time series). These features aim to incrementally move hydro-economic models

  8. JEDI International Model | Jobs and Economic Development Impact Models |

    Science.gov Websites

    NREL International Model JEDI International Model The Jobs and Economic Development Impacts (JEDI) International Model allows users to estimate economic development impacts from international

  9. Modeling of the Global Water Cycle - Analytical Models

    Treesearch

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  10. Seven challenges for metapopulation models of epidemics, including households models.

    PubMed

    Ball, Frank; Britton, Tom; House, Thomas; Isham, Valerie; Mollison, Denis; Pellis, Lorenzo; Scalia Tomba, Gianpaolo

    2015-03-01

    This paper considers metapopulation models in the general sense, i.e. where the population is partitioned into sub-populations (groups, patches,...), irrespective of the biological interpretation they have, e.g. spatially segregated large sub-populations, small households or hosts themselves modelled as populations of pathogens. This framework has traditionally provided an attractive approach to incorporating more realistic contact structure into epidemic models, since it often preserves analytic tractability (in stochastic as well as deterministic models) but also captures the most salient structural inhomogeneity in contact patterns in many applied contexts. Despite the progress that has been made in both the theory and application of such metapopulation models, we present here several major challenges that remain for future work, focusing on models that, in contrast to agent-based ones, are amenable to mathematical analysis. The challenges range from clarifying the usefulness of systems of weakly-coupled large sub-populations in modelling the spread of specific diseases to developing a theory for endemic models with household structure. They also include developing inferential methods for data on the emerging phase of epidemics, extending metapopulation models to more complex forms of human social structure, developing metapopulation models to reflect spatial population structure, developing computationally efficient methods for calculating key epidemiological model quantities, and integrating within- and between-host dynamics in models. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  11. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
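
    The GM(1,1) component of such a hybrid can be sketched as follows: accumulate the series (AGO), fit the grey differential equation x0(k) + a·z1(k) = b by least squares, and invert the accumulation to forecast. This is an illustrative implementation of the standard grey model, not the authors' code; the function name is invented.

    ```python
    import numpy as np

    def gm11_forecast(x0, n_ahead=1):
        """Fit a GM(1,1) grey model to a positive series x0 and return
        fitted values plus n_ahead forecasts of the original series."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
    ```

    In the hybrid scheme described in the record, an ARIMA model would then be fitted to the residuals of this grey forecast.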

  12. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  13. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification

    PubMed Central

    Sager, Jennifer E.; Yu, Jingjing; Ragueneau-Majlessi, Isabelle

    2015-01-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling-related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms “PBPK” and “physiologically based pharmacokinetic model” to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines. PMID:26296709

  14. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

    Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model with the observed data. The framework is applied to simulated data as well as data obtained from experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show good concordance with the true values, indicating that the two-zone model assumptions agree with reality to a large extent and that the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. 
The approach illustrates how exposure models and information on model parameters together with the knowledge of
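
    The equilibrium solutions mentioned above have a compact closed form in the standard two-zone (near-field/far-field) model: the far field settles at G/Q and the near field adds G/beta on top of it. A hedged sketch, not the authors' code; the function name and units are illustrative.

    ```python
    def two_zone_steady_state(G, Q, beta):
        """Steady-state concentrations (e.g. mg/m^3) for a constant source
        of strength G (mg/min), room ventilation rate Q (m^3/min), and
        interzonal airflow beta (m^3/min) between near and far fields."""
        c_far = G / Q            # far field sees the fully mixed room
        c_near = c_far + G / beta  # near field adds the source excess
        return c_near, c_far
    ```

    In the Bayesian setting of the record, G, Q and beta would carry priors and these equilibria serve as a sanity check on the posterior airflow estimates.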

  15. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

    Selecting the model is an essential step in model-based fault detection and diagnosis (FDD). Factors considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.

  16. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It is also shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative class of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  17. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777

  18. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
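
    As a minimal illustration of the spiking-neuron side of this comparison (not the authors' code; all parameter values are invented), a noiseless leaky integrate-and-fire neuron with constant suprathreshold drive fires at a rate that can be checked against the analytic interspike interval T = tau * ln(drive / (drive - v_th)).

    ```python
    def lif_rate(tau, v_th, drive, dt=1e-5, t_max=1.0):
        """Euler-simulate a noiseless LIF neuron, dv/dt = (drive - v)/tau,
        with threshold v_th and reset to 0; return the firing rate (Hz)."""
        v, spikes = 0.0, 0
        for _ in range(int(t_max / dt)):
            v += dt * (drive - v) / tau
            if v >= v_th:
                v = 0.0        # reset after each spike
                spikes += 1
        return spikes / t_max
    ```

    With tau = 20 ms, v_th = 1 and drive = 2, the analytic rate is 1 / (0.02 * ln 2) ≈ 72 Hz, which the simulation reproduces; the LN-cascade reduction studied in the record approximates this input-to-rate mapping for time-varying inputs.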

  19. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  20. IHY Modeling Support at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliospheric Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.

  1. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

    An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes, such as the supply or demand of energy or of certain types of energy, emergency management of energy, and energy economics. Ten models are described. The purpose, use, and history of each model are discussed, and information is given on its outputs, inputs, and mathematical structure. The models include five dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  2. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  3. Modeling of batch sorber system: kinetic, mechanistic, and thermodynamic modeling

    NASA Astrophysics Data System (ADS)

    Mishra, Vishal

    2017-10-01

    The present investigation has dealt with the biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase. Various rate models were evaluated to elucidate the kinetics of copper and zinc biosorption, and the results indicated that the pseudo-second-order model was more appropriate than the pseudo-first-order model. The curve of the initial sorption rate versus the initial concentration of copper and zinc ions also complemented the results of the pseudo-second-order model. The models used for mechanistic modeling were the intra-particle model of pore diffusion and Bangham's model of film diffusion. The results of the mechanistic modeling, together with the values of pore and film diffusivities, indicated that the preferential mode of biosorption of copper and zinc ions on the surface of egg-shell particles in the liquid phase was film diffusion. The results of the intra-particle model showed that the biosorption of copper and zinc ions was not dominated by pore diffusion, owing to the macro-pores with open void spaces present on the surface of egg-shell particles. The thermodynamic modeling confirmed that the sorption of copper and zinc was spontaneous and exothermic, with increased randomness at the solid-liquid interface.
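
    The pseudo-second-order fit mentioned above is commonly done through its linearized form t/q = 1/(k·qe²) + t/qe, so a straight-line fit of t/q against t recovers the equilibrium uptake qe and rate constant k. A sketch, not the authors' code; names are illustrative.

    ```python
    import numpy as np

    def fit_pseudo_second_order(t, q):
        """Fit the linearized pseudo-second-order kinetic model
        t/q = 1/(k*qe^2) + t/qe; return (qe, k)."""
        t, q = np.asarray(t, float), np.asarray(q, float)
        slope, intercept = np.polyfit(t, t / q, 1)   # t/q vs t is linear
        qe = 1.0 / slope
        k = 1.0 / (intercept * qe ** 2)
        return qe, k
    ```

    On data generated from the model itself the fit is exact; on real uptake curves the quality of this straight line is what distinguishes pseudo-second-order from pseudo-first-order kinetics.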

  4. SWIFT MODELLER: a Java based GUI for molecular modeling.

    PubMed

    Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S

    2011-10-01

    MODELLER is command-line software that requires tedious formatting of inputs and the writing of Python scripts, which many users are not comfortable with; visualization of its output is also cumbersome because of verbose files. This makes the whole software protocol complex and requires extensive study of MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting and data extraction processes and presents them in an interactive way, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow depicts its steps. It eliminates the formatting of inputs, the scripting process and the analysis of verbose output files through automation, making the pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI; it opens and displays the protein data bank files created by MODELLER. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required to use the software by automating many of the steps in the original protocol, saving an enormous amount of time per instance and making MODELLER very easy to work with.

  5. Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level, and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  6. Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread

    PubMed Central

    Miller, Joel C.; Volz, Erik M.

    2012-01-01

    We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242

  7. The Relationships between Modelling and Argumentation from the Perspective of the Model of Modelling Diagram

    ERIC Educational Resources Information Center

    Mendonça, Paula Cristina Cardoso; Justi, Rosária

    2013-01-01

    Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities…

  8. Multiscale Modeling of Structurally-Graded Materials Using Discrete Dislocation Plasticity Models and Continuum Crystal Plasticity Models

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.

    2012-01-01

    A multiscale modeling methodology that combines the predictive capability of discrete dislocation plasticity and the computational efficiency of continuum crystal plasticity is developed. Single crystal configurations of different grain sizes modeled with periodic boundary conditions are analyzed using discrete dislocation plasticity (DD) to obtain grain size-dependent stress-strain predictions. These relationships are mapped into crystal plasticity parameters to develop a multiscale DD/CP model for continuum level simulations. A polycrystal model of a structurally-graded microstructure is developed, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and the DD predictions. The multiscale DD/CP model follows the DD predictions closely up to an initial peak stress and then follows a strain hardening path that is parallel but somewhat offset from the DD predictions. The difference is believed to be from a combination of the strain rate in the DD simulation and the inability of the DD/CP model to represent non-monotonic material response.

  9. Modeling influenza-like illnesses through composite compartmental models

    NASA Astrophysics Data System (ADS)

    Levy, Nir; Michael, Iv; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the population, and we show that these matches agree with independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
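
    The composite-signal idea can be sketched by summing independently simulated single-strain SIR components: each strain contributes one infected-fraction curve, and the observed ILI-like signal is their superposition. This is an illustrative sketch, not the authors' decomposition method; all parameter values are invented.

    ```python
    import numpy as np

    def sir(beta, gamma, i0, days, dt=0.1):
        """Euler-integrate a normalized SIR model; return the infected
        fraction at each time step (R0 = beta / gamma)."""
        s, i = 1.0 - i0, i0
        out = []
        for _ in range(int(days / dt)):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s, i = s + dt * ds, i + dt * di
            out.append(i)
        return np.array(out)

    # A composite ILI-like signal as the sum of two single-strain components
    # (hypothetical strains with R0 = 2.0 and R0 = 1.75).
    ili = sir(0.5, 0.25, 1e-3, 120) + sir(0.35, 0.2, 1e-3, 120)
    ```

    The blind-partitioning step in the record works in the opposite direction: given only a signal like `ili`, it recovers the individual components from a dictionary of feasible SIR realizations.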

  10. Continuous system modeling

    NASA Technical Reports Server (NTRS)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.

  11. Agent-based modeling and systems dynamics model reproduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Macal, C. M.

    2009-01-01

    Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.

  12. National Transonic Facility model and model support vibration problems

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Popernack, Thomas G., Jr.; Gloss, Blair B.

    1990-01-01

    Vibrations of models and the model support system were encountered during testing in the National Transonic Facility. Model support system yaw plane vibrations have resulted in model strain gage balance design load limits being reached. These high levels of vibrations resulted in limited aerodynamic testing for several wind tunnel models. The yaw vibration problem was the subject of an intensive experimental and analytical investigation which identified the primary source of the yaw excitation and resulted in attenuation of the yaw oscillations to acceptable levels. This paper presents the principal results of analyses and experimental investigation of the yaw plane vibration problems. Also, an overview of plans for development and installation of a permanent model system dynamic and aeroelastic response measurement and monitoring system for the National Transonic Facility is presented.

  13. An improved interfacial bonding model for material interface modeling

    PubMed Central

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path-dependence study of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  14. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED2) outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis studied only one model, we were unable to comment on the effect of variability in model structure on overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial

  15. Coupled atmosphere-biophysics-hydrology models for environmental modeling

    USGS Publications Warehouse

    Walko, R.L.; Band, L.E.; Baron, Jill S.; Kittel, T.G.F.; Lammers, R.; Lee, T.J.; Ojima, D.; Pielke, R.A.; Taylor, C.; Tague, C.; Tremback, C.J.; Vidale, P.L.

    2000-01-01

    The formulation and implementation of LEAF-2, the Land Ecosystem–Atmosphere Feedback model, which comprises the representation of land–surface processes in the Regional Atmospheric Modeling System (RAMS), is described. LEAF-2 is a prognostic model for the temperature and water content of soil, snow cover, vegetation, and canopy air, and includes turbulent and radiative exchanges between these components and with the atmosphere. Subdivision of a RAMS surface grid cell into multiple areas of distinct land-use types is allowed, with each subgrid area, or patch, containing its own LEAF-2 model, and each patch interacts with the overlying atmospheric column with a weight proportional to its fractional area in the grid cell. A description is also given of TOPMODEL, a land hydrology model that represents surface and subsurface downslope lateral transport of groundwater. Details of the incorporation of a modified form of TOPMODEL into LEAF-2 are presented. Sensitivity tests of the coupled system are presented that demonstrate the potential importance of the patch representation and of lateral water transport in idealized model simulations. Independent studies that have applied LEAF-2 and verified its performance against observational data are cited. Linkage of RAMS and TOPMODEL through LEAF-2 creates a modeling system that can be used to explore the coupled atmosphere–biophysical–hydrologic response to altered climate forcing at local watershed and regional basin scales.

  16. Modeling fractal cities using the correlated percolation model.

    NASA Astrophysics Data System (ADS)

    Makse, Hernán A.; Havlin, Shlomo; Stanley, H. Eugene

    1996-03-01

    Cities grow in a way that might be expected to resemble the growth of two-dimensional aggregates of particles, and this has led to recent attempts to model urban growth using ideas from the statistical physics of clusters. In particular, the model of diffusion limited aggregation (DLA) has been invoked to rationalize the apparently fractal nature of urban morphologies (M. Batty and P. Longley, Fractal Cities, Academic, San Diego, 1994). The DLA model predicts that there should exist only one large fractal cluster, which is almost perfectly screened from incoming 'development units' (representing, for example, people, capital or resources), so that almost all of the cluster growth takes place at the tips of the cluster's branches. We show that an alternative model (H. A. Makse, S. Havlin, H. E. Stanley, Nature 377, 608 (1995)), in which development units are correlated rather than being added to the cluster at random, is better able to reproduce the observed morphology of cities and the area distribution of sub-clusters ('towns') in an urban system, and can also describe urban growth dynamics. Our physical model, which corresponds to the correlated percolation model in the presence of a density gradient, is motivated by the fact that in urban areas development attracts further development. The model offers the possibility of predicting the global properties (such as scaling behavior) of urban morphologies.
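
    The density-gradient occupation rule can be sketched in a few lines; note that this toy version omits the spatial correlations that are the paper's key ingredient, and all parameters are hypothetical:

    ```python
    # Toy sketch of gradient percolation: each lattice site is "developed" with
    # probability p(r) that decays with distance r from the city center.
    # (Uncorrelated occupation for brevity; the paper adds correlated noise.)
    import math
    import random

    random.seed(42)
    L, lam = 41, 10.0              # lattice size and decay length (hypothetical)
    center = L // 2
    city = [[0] * L for _ in range(L)]
    for i in range(L):
        for j in range(L):
            r = math.hypot(i - center, j - center)
            if random.random() < math.exp(-r / lam):   # p(r) = exp(-r / lam)
                city[i][j] = 1

    occupied = sum(map(sum, city))  # total developed area
    ```

    In the correlated version, nearby sites share long-range correlated random numbers, which produces the compact core with fractal edges and the town-size distribution reported in the paper.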

  17. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    ERIC Educational Resources Information Center

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  18. Phoenix model

    EPA Science Inventory

    Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...

  19. BioModels Database: a repository of mathematical models of biological processes.

    PubMed

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies and linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are added to BioModels Database at regular releases, about every 4 months.

  20. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, uncovering their metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interfaces using BMI, as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example is presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014.
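
    The calling pattern a BMI-enabled component exposes can be sketched as below. The method names follow the CSDMS BMI convention; the model itself is a hypothetical stand-in, not one of the topoflow components:

    ```python
    # Sketch of a minimal BMI-style component: a framework (or web client) can
    # initialize it, step it forward, and query its self-describing metadata.
    class HeatModel:
        """Toy relaxation model exposing a minimal Basic Model Interface."""

        def initialize(self, config=None):
            self.time, self.temp = 0.0, 20.0

        def get_output_var_names(self):
            return ["surface_temperature"]        # self-describing metadata

        def update(self):
            self.temp += 0.5 * (15.0 - self.temp)  # relax toward 15 degrees
            self.time += 1.0

        def get_value(self, name):
            assert name == "surface_temperature"
            return self.temp

        def finalize(self):
            pass

    # A coupling framework only needs this generic sequence, never model internals:
    m = HeatModel()
    m.initialize()
    for _ in range(3):
        m.update()
    val = m.get_value("surface_temperature")
    m.finalize()
    ```

    Because every component answers the same calls, the framework can discover variable names at runtime and wire outputs of one service to inputs of another.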

  1. Orbital Debris Modeling

    NASA Technical Reports Server (NTRS)

    Liou, J. C.

    2012-01-01

    Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- a mathematical model capable of predicting OD impact risks for the ISS and other critical space assets; (2) the NASA OD Evolutionary Model -- a physical model capable of predicting the future debris environment based on user-specified scenarios; (3) the NASA Standard Satellite Breakup Model -- a model describing the outcome of a satellite breakup (explosion or collision).

  2. Combining abiotic and biotic models - Hydraulical modeling to fill the gap between catchment and hydro-dynamic models

    NASA Astrophysics Data System (ADS)

    Guse, B.; Sulc, D.; Schmalz, B.; Fohrer, N.

    2012-04-01

    The European Water Framework Directive (WFD) requires a catchment-based approach, which is assessed in the IMPACT project by combining abiotic and biotic models. The core of IMPACT is a model chain (catchment model -> 1-D hydraulic model -> 3-D hydro-morphodynamic model -> biotic habitat model) with the aim of estimating the occurrence of the target species of the WFD. Firstly, the model chain is developed for the current land use and climate conditions. Secondly, land use and climate change scenarios are developed at the catchment scale. The outputs of the catchment model for the scenarios are used as input for the next models within the model chain to estimate the effect of these changes on the target species. The eco-hydrological catchment model SWAT is applied to the Treene catchment in Northern Germany and delivers discharge and water quality parameters as spatially explicit output for each subbasin. No water level information is given by SWAT. However, water level values are needed as a lower boundary condition for the hydro-dynamic and habitat models, which are applied to the 300 m candidate reference reach. In order to fill the gap between the catchment and the hydro-morphodynamic model, the 1-D hydraulic model HEC-RAS is applied to a 3 km long reach transect from the next upstream hydrological station to the upper bound of the candidate study reach. The channel geometry for HEC-RAS was estimated based on 96 cross-sections which were measured in the IMPACT project. Using available discharge and water level measurements from the hydrological station and our own flow velocity measurements, the channel resistance was estimated. HEC-RAS was run with different statistical indices (mean annual drought, mean discharge, …) for steady flow conditions. The rating curve was then constructed for the target cross-section, i.e. the lower bound of the candidate study reach, to enable coupling with the hydro- and morphodynamic models. These statistical
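
    The rating-curve construction step can be sketched as a power-law fit through steady-flow stage/discharge pairs such as HEC-RAS would provide at the target cross-section. All numbers below are synthetic placeholders, not the Treene data:

    ```python
    # Sketch: fit a rating curve Q = a * (h - h0)^b by log-linear regression
    # on steady-flow water-level/discharge pairs (synthetic values).
    import numpy as np

    h0 = 0.20                                         # datum offset (m), assumed known
    stage = np.array([0.5, 0.8, 1.1, 1.5, 2.0])       # water levels h (m)
    discharge = np.array([0.9, 2.4, 4.6, 8.5, 14.8])  # simulated flows Q (m^3/s)

    # ln Q = ln a + b * ln(h - h0)  ->  straight line in log-log space
    b, log_a = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
    a = np.exp(log_a)

    def rating(h):
        """Discharge (m^3/s) predicted from water level h (m)."""
        return a * (h - h0) ** b
    ```

    Once fitted, the rating function converts the catchment model's discharge output into the water-level boundary condition the downstream models require.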

  3. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  4. Model verification of large structural systems. [space shuttle model response

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1978-01-01

    A computer program for the application of parameter identification on the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.

  5. Modeling Operations Other Than War: Non-Combatants in Combat Modeling

    DTIC Science & Technology

    1994-09-01

    …supposition that non-combatants are an essential feature in OOTW. The model proposal includes a methodology for civilian unit decision making. The model also includes… A numerical example demonstrated that the model appeared to perform in an acceptable manner, in that it produced output within a reasonable range. During the

  6. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, James C.

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  7. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
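
    The workflow can be sketched on a deliberately simple problem (not the paper's test case): estimate a hydraulic conductivity K by nonlinear least-squares from synthetic head observations for steady 1-D Darcy flow, h(x) = h0 - q*x/K, then derive a linearized confidence measure from the Jacobian:

    ```python
    # Sketch of nonlinear least-squares inverse modeling for a 1-D flow problem.
    import numpy as np
    from scipy.optimize import least_squares

    h0, q = 100.0, 0.5                       # known boundary head (m), discharge term
    x_obs = np.array([10.0, 20.0, 30.0, 40.0])   # observation locations (m)
    K_true = 2.0
    rng = np.random.default_rng(0)
    h_obs = h0 - q * x_obs / K_true + rng.normal(0, 0.05, x_obs.size)  # noisy heads

    def residuals(params):
        K = params[0]
        return (h0 - q * x_obs / K) - h_obs   # model minus observation

    fit = least_squares(residuals, x0=[1.0])
    K_hat = fit.x[0]                          # best-fit conductivity

    # Linearized parameter variance from the Jacobian at the optimum
    s2 = np.sum(fit.fun**2) / (x_obs.size - 1)
    var_K = s2 * np.linalg.inv(fit.jac.T @ fit.jac)[0, 0]
    ```

    This is exactly the paper's point: the same machinery that finds the best-fit value also quantifies calibration quality and parameter confidence, which manual trial-and-error calibration does not.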

  8. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    NASA Astrophysics Data System (ADS)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  9. Pharmacokinetic modeling in aquatic animals. 1. Models and concepts

    USGS Publications Warehouse

    Barron, M.G.; Stehly, Guy R.; Hayton, W.L.

    1990-01-01

    While clinical and toxicological applications of pharmacokinetics have continued to evolve both conceptually and experimentally, pharmacokinetics modeling in aquatic animals has not progressed accordingly. In this paper we present methods and concepts of pharmacokinetic modeling in aquatic animals using multicompartmental, clearance-based, non-compartmental and physiologically-based pharmacokinetic models. These models should be considered as alternatives to traditional approaches, which assume that the animal acts as a single homogeneous compartment based on apparent monoexponential elimination.
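
    The single-compartment, monoexponential baseline that the paper argues should be generalized can be written down in a few lines. Dose and parameter values are hypothetical:

    ```python
    # Clearance-based one-compartment PK model with first-order (monoexponential)
    # elimination after an IV bolus dose.
    import math

    dose, V, CL = 10.0, 4.0, 0.5   # dose (mg), volume of distribution (L), clearance (L/h)
    k_el = CL / V                  # first-order elimination rate constant (1/h)
    half_life = math.log(2) / k_el # elimination half-life (h)
    auc = dose / CL                # area under the concentration-time curve (mg*h/L)

    def conc(t):
        """Plasma concentration (mg/L) at time t hours after the bolus."""
        return (dose / V) * math.exp(-k_el * t)
    ```

    Multicompartmental and physiologically based models replace the single exponential with sums of exponentials or organ-level mass balances, which is why the authors present them as alternatives for aquatic animals.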

  10. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherent in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Here we propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
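
    A BOOT-style procedure can be sketched as follows: resample the data, pick the AIC-best candidate model on each resample, and average the PM coefficient across resamples. This is a hedged illustration on synthetic data, not the authors' implementation, and "double BOOT" would add a second bootstrap layer on top of it:

    ```python
    # Sketch of bootstrap model averaging over two candidate regression models.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    pm = rng.normal(10, 3, n)                         # synthetic PM series
    temp = rng.normal(20, 5, n)                       # a synthetic confounder
    y = 0.05 * pm + 0.02 * temp + rng.normal(0, 1, n)  # synthetic mortality index

    # Candidate models: PM only, and PM plus confounder (design-matrix builders)
    candidates = [
        lambda p, t: np.column_stack([np.ones_like(p), p]),
        lambda p, t: np.column_stack([np.ones_like(p), p, t]),
    ]

    def aic_and_beta(X, y):
        """OLS fit; return (AIC, PM coefficient)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        return len(y) * np.log(rss / len(y)) + 2 * X.shape[1], beta[1]

    boot_betas = []
    for _ in range(200):
        idx = rng.integers(0, n, n)                   # bootstrap resample
        fits = [aic_and_beta(c(pm[idx], temp[idx]), y[idx]) for c in candidates]
        boot_betas.append(min(fits)[1])               # AIC-best model's PM effect

    beta_avg = float(np.mean(boot_betas))             # model-averaged PM estimate
    ```

    Averaging over resamples lets the final estimate reflect how often each candidate model wins, rather than conditioning on a single selected model.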

  11. A Generic Modeling Process to Support Functional Fault Model Development

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system, including the design, operation and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  12. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    NASA Technical Reports Server (NTRS)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  13. REGIONAL PARTICULATE MODEL - 1. MODEL DESCRIPTION AND PRELIMINARY RESULTS

    EPA Science Inventory

    The gas-phase chemistry and transport mechanisms of the Regional Acid Deposition Model have been modified to create the Regional Particulate Model, a three-dimensional Eulerian model that simulates the chemistry, transport, and dynamics of sulfuric acid aerosol resulting from pri...

  14. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis

    DTIC Science & Technology

    2017-02-01

    Working Paper: Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis. Paul K. Davis, RAND National Security Research… This paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might better be called model-exercise-model in some cases) for… to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning

  15. WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING

    EPA Science Inventory

    A combination of lectures, demonstrations, and hands-on excercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...

  16. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    NASA Astrophysics Data System (ADS)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

    Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or “checking in” new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of

  17. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales that link regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
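The "algorithms based on light and temperature" are not spelled out in the abstract; a minimal sketch of such a flux calculation, assuming the widely used Guenther et al. (1993)-style correction factors (the constants and function names below are illustrative of that formulation, not taken from this paper), might look like:

```python
import math

# Guenther et al. (1993)-style environmental correction factors (an assumption;
# the abstract does not name the exact algorithm the authors used).
ALPHA, C_L1 = 0.0027, 1.066          # light-response constants
C_T1, C_T2 = 95000.0, 230000.0       # temperature-response constants (J/mol)
T_S, T_M, R = 303.0, 314.0, 8.314    # standard temp (K), optimum (K), gas constant

def gamma_light(par):
    """Light correction factor; par is PAR in umol m-2 s-1."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def gamma_temp(t):
    """Temperature correction factor; t in kelvin."""
    num = math.exp(C_T1 * (t - T_S) / (R * T_S * t))
    den = 1.0 + math.exp(C_T2 * (t - T_M) / (R * T_S * t))
    return num / den

def isoprene_flux(emission_factor, biomass, par, t):
    """Flux = ecosystem emission factor x foliar biomass x environmental corrections."""
    return emission_factor * biomass * gamma_light(par) * gamma_temp(t)
```

At standard conditions (PAR = 1000 umol m-2 s-1, T = 303 K) both correction factors are close to one, so the flux reduces to the biomass-weighted emission factor.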

  18. Modelling Complex Fenestration Systems using physical and virtual models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanachareonkit, Anothai; Scartezzini, Jean-Louis

    2010-04-15

    Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have however revealed a general tendency of physical models to over-estimate this performance, compared to those of real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons in-between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of physical and virtual models were assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment. In case of improper mocking-up of the glazing for instance, relative divergences of 25

  19. The Trimeric Model: A New Model of Periodontal Treatment Planning

    PubMed Central

    Tarakji, Bassel

    2014-01-01

    Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy-to-remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known textbooks in the specialty of periodontics internationally. We then modify them to arrive at a new model of periodontal treatment planning, The Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order of the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662

  20. Modelling total solar irradiance using a flux transport model

    NASA Astrophysics Data System (ADS)

    Dasi Espuig, Maria; Jiang, Jie; Krivova, Natalie; Solanki, Sami

    2014-05-01

    Reconstructions of solar irradiance into the past are of considerable interest for studies of solar influence on climate. Models based on the assumption that irradiance changes are caused by the evolution of the photospheric magnetic field have been the most successful in reproducing the measured irradiance variations. Our SATIRE-S model is one of these. It uses solar full-disc magnetograms as an input, and these are available for less than four decades. Thus, to reconstruct the irradiance back to times when no observed magnetograms are available, we combine the SATIRE-S model with synthetic magnetograms, produced using a surface flux transport model. The model is fed with daily records (observed or statistically modelled) of sunspot positions, areas, and tilt angles. To describe the secular change in the irradiance, we used the concept of overlapping ephemeral region cycles. With this technique TSI can be reconstructed back to 1700.

  1. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  2. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
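The contrast between the two model families in this study can be made concrete: dominance models (such as the 2PL) predict that endorsement probability rises monotonically with trait level, while ideal point models predict a single peak near the item's location. Below is a sketch assuming the standard 2PL form; the single-peaked curve is a simplified illustration of the ideal point idea, not the actual GGUM response function:

```python
import math

def p_2pl(theta, a, b):
    """Dominance model (2PL): endorsement probability rises monotonically
    with trait level theta; a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_ideal_point(theta, delta, tau=1.0):
    """Simplified single-peaked (ideal point) curve -- an illustration only,
    NOT the GGUM formula: endorsement is highest when the person's trait
    level theta is close to the item location delta, falling off on both sides."""
    return math.exp(-((theta - delta) ** 2) / (2 * tau ** 2))
```

Fitting a dominance model to data generated by an ideal point process (or vice versa) misrepresents the right-hand side of the peaked curve, which is the kind of relative model-data misfit the study quantifies.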

  3. Evaluating Conceptual Site Models with Multicomponent Reactive Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Heffner, D.; Price, V.; Temples, T. J.; Nicholson, T. J.

    2005-05-01

    Modeling ground-water flow and multicomponent reactive chemical transport is a useful approach for testing conceptual site models and assessing the design of monitoring networks. A graded approach with three conceptual site models is presented here with a field case of tetrachloroethene (PCE) transport and biodegradation near Charleston, SC. The first model assumed a one-layer homogeneous aquifer structure with semi-infinite boundary conditions, in which an analytical solution of the reactive solute transport can be obtained with BIOCHLOR (Aziz et al., 1999). Due to the over-simplification of the aquifer structure, this simulation cannot reproduce the monitoring data. In the second approach we used GMS to develop the conceptual site model, a layer-cake multi-aquifer system, and applied a numerical module (MODFLOW and RT3D within GMS) to solve the flow and reactive transport problem. The results were better than the first approach but still did not fit the plume well because the geological structures were still inadequately defined. In the third approach we developed a complex conceptual site model by interpreting log and seismic survey data with Petra and PetraSeis. We detected a major channel and a younger channel through the PCE source area. These channels control the local ground-water flow direction and provide a preferential chemical transport pathway. Results using the third conceptual site model agree well with the monitoring concentration data. This study confirms that the bias and uncertainty from inadequate conceptual models are much larger than those introduced from an inadequate choice of model parameter values (Neuman and Wierenga, 2003; Meyer et al., 2004). Numerical modeling in this case provides key insight into the hydrogeology and geochemistry of the field site for predicting contaminant transport in the future. Finally, critical monitoring points and performance indicator parameters are selected for future monitoring to confirm system

  4. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  5. Selected aspects of modelling monetary transmission mechanism by BVAR model

    NASA Astrophysics Data System (ADS)

    Vaněk, Tomáš; Dobešová, Anna; Hampel, David

    2013-10-01

    In this paper we use the BVAR model with the specifically defined prior to evaluate data including high-lag dependencies. The results are compared to both restricted and common VAR model. The data depicts the monetary transmission mechanism in the Czech Republic and Slovakia from January 2002 to February 2013. The results point to the inadequacy of the common VAR model. The restricted VAR model and the BVAR model appear to be similar in the sense of impulse responses.

  6. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    PubMed Central

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  7. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    PubMed

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.

  8. EPA EXPOSURE MODELS LIBRARY AND INTEGRATED MODEL EVALUATION SYSTEM

    EPA Science Inventory

    The third edition of the U.S. Environmental Protection Agency's (EPA) EML/IMES (Exposure Models Library and Integrated Model Evaluation System) on CD-ROM is now available. The purpose of the disc is to provide a compact and efficient means to distribute exposure models, documentat...

  9. Modelling the Shuttle Remote Manipulator System: Another flexible model

    NASA Technical Reports Server (NTRS)

    Barhorst, Alan A.

    1993-01-01

    High fidelity elastic system modeling algorithms are discussed. The particular system studied is the Space Shuttle Remote Manipulator System (RMS) undergoing full articulated motion. The model incorporates flexibility via a methodology the author has been developing. The technique is based in variational principles, so rigorous boundary condition generation and weak formulations for the associated partial differential equations are realized, yet the analyst need not integrate by parts. The methodology is formulated using vector-dyad notation with minimal use of tensor notation, therefore the technique is believed to be accessible to practicing engineers. The objectives of this work are as follows: (1) determine the efficacy of the modeling method; and (2) determine if the method affords an analyst advantages in the overall modeling and simulation task. Generated out of necessity were Mathematica algorithms that quasi-automate the modeling procedure and simulation development. The project was divided into sections as follows: (1) model development of a simplified manipulator; (2) model development of the full-freedom RMS including a flexible movable base on a six degree of freedom orbiter (a rigid-body is attached to the manipulator end-effector); (3) simulation development for item 2; and (4) comparison to the currently used model of the flexible RMS in the Structures and Mechanics Division of NASA JSC. At the time of the writing of this report, items 3 and 4 above were not complete.

  10. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  11. Generic magnetohydrodynamic model at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Honkonen, I. J.; Rastaetter, L.; Glocer, A.

    2016-12-01

    The Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center is a multi-agency partnership to enable, support and perform research and development for next-generation space science and space weather models. CCMC currently hosts nearly 100 numerical models and a cornerstone of this activity is the Runs on Request (RoR) system which allows anyone to request a model run and analyse/visualize the results via a web browser. CCMC is also active in the education community by organizing student research contests, heliophysics summer schools, and space weather forecaster training for students, government and industry representatives. Recently a generic magnetohydrodynamic (MHD) model was added to the CCMC RoR system which allows the study of a variety of fluid and plasma phenomena in one, two and three dimensions using a dynamic point-and-click web interface. For example, students can experiment with the physics of fundamental wave modes of hydrodynamic and MHD theory, behavior of discontinuities and shocks as well as instabilities such as Kelvin-Helmholtz. Students can also use the model to experiment with the numerical effects of models, i.e. how the process of discretizing a system of equations and solving them on a computer changes the solution. This can provide valuable background understanding, e.g. for space weather forecasters, on the effects of model resolution, numerical resistivity, etc. on the prediction.

  12. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  13. Constraints Modeling in FRBR Data Model Using OCL

    NASA Astrophysics Data System (ADS)

    Rudić, Gordana

    2011-09-01

    Transformation of the conceptual FRBR data model to the class diagram in UML 2.0 notation is given. The class diagram is formed using the MagicDraw CASE tool. The paper presents a class diagram for the first group of FRBR entities, i.e. classes (the product of intellectual or artistic endeavour). It is demonstrated how to model constraints over relationships between classes in the FRBR object data model using OCL 2.0.

  14. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  15. The CAFE model: A net production model for global ocean phytoplankton

    NASA Astrophysics Data System (ADS)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model that incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR) and the efficiency (ϕμ) by which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
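The core arithmetic described in the abstract (NPP as the product of absorbed energy and conversion efficiency, with μ as NPP normalized to phytoplankton carbon) can be sketched directly; the numbers below are made up for illustration, not values from the paper:

```python
def cafe_npp(q_par, phi_mu):
    """Net primary production as the product of energy absorption (QPAR)
    and the efficiency (phi_mu) of converting absorbed energy to carbon."""
    return q_par * phi_mu

def division_rate(npp, c_phyto):
    """Phytoplankton growth rate mu: NPP normalized to phytoplankton
    carbon biomass (CPhyto)."""
    return npp / c_phyto

# Illustrative magnitudes only (not from the paper):
npp = cafe_npp(q_par=100.0, phi_mu=0.005)
mu = division_rate(npp, c_phyto=1.5)
```

The normalization step is what lets the same satellite-derived quantities yield both a production map and a division-rate map.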

  16. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application to groundwater modeling of these methods, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
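A toy instance of the data-driven category: fit a cheap polynomial to a handful of input-output samples of an "expensive" simulator, then evaluate the polynomial in its place. The slow groundwater model is stood in for here by an arbitrary smooth function:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a slow simulator (e.g., a distributed groundwater model);
    any smooth input-output map works for this illustration."""
    return np.sin(x)

# Sample the expensive model at a modest number of design points...
x_train = np.linspace(0.0, np.pi, 50)
y_train = expensive_model(x_train)

# ...and fit a cheap data-driven surrogate to the input-output pairs.
coeffs = np.polyfit(x_train, y_train, deg=5)
surrogate = np.poly1d(coeffs)

# The surrogate can now be evaluated many thousands of times (e.g., inside a
# calibration or Monte Carlo uncertainty loop) at negligible cost.
x_new = 1.234
error = abs(surrogate(x_new) - expensive_model(x_new))
```

The drawback the review flags for this category is visible even here: the surrogate only captures the mapping over the sampled range, and says nothing reliable outside it.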

  17. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from ordering a book online until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  18. Modeling Ni-Cd performance. Planned alterations to the Goddard battery model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1986-01-01

    The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel-Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.

  19. A Test of Maxwell's Z Model Using Inverse Modeling

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, T.

    2003-01-01

    In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flowfield center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.
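The point-source flow field being tested above has a simple closed form. In the idealized baseline case of a stationary flow-field center located at the target surface, the subsurface radial speed falls off as r to the minus Z, and material leaves the surface at an angle satisfying tan(theta) = Z - 2; the abstract's point is that matching real and numerical impacts requires relaxing exactly these assumptions (burying and migrating the center). A sketch of that baseline:

```python
import math

def radial_velocity(alpha, r, z):
    """Radial flow speed u_r = alpha / r**Z in Maxwell's point-source field;
    alpha sets the strength, r is distance from the flow-field center."""
    return alpha / r ** z

def ejection_angle_deg(z):
    """Ejection angle above the target surface for a STATIONARY flow-field
    center at the surface: tan(theta) = Z - 2. This is the idealized case
    the inverse modeling perturbs (buried, migrating center)."""
    return math.degrees(math.atan(z - 2.0))

# Z ~ 3, a value often quoted for cratering flow fields, gives 45-degree ejecta.
angle = ejection_angle_deg(3.0)
```

Comparing angles like this one against the 3D PIV ejection-angle measurements is what constrains Z and the center depth in the inverse problem.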

  20. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
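A concrete instance of the analytical end of this spectrum: for a 1-D steady-state plume with advection and first-order biodecay, and dispersion neglected for brevity (screening tools in this family use richer closed-form solutions), the concentration profile reduces to C(x) = C0 * exp(-k*x/v). A minimal sketch:

```python
import math

def steady_plume_concentration(c0, x, v, k):
    """Steady-state concentration at distance x downgradient of a constant
    source: C(x) = c0 * exp(-k * x / v).
    c0: source concentration, v: seepage velocity (m/d),
    k: first-order biodegradation rate (1/d).
    Dispersion is neglected -- a deliberate simplification relative to the
    fuller analytical solutions used by screening models."""
    return c0 * math.exp(-k * x / v)

# One e-folding of attenuation occurs at x = v / k:
c = steady_plume_concentration(c0=100.0, x=50.0, v=0.1, k=0.002)
```

Even this one-liner supports the chapter's screening use case: it answers "roughly how far does the plume persist?" on no computational budget at all, which is the tradeoff against a full numerical reactive-transport run.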

  1. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate-level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. In particular, altering Poisson assumptions, adding heterogeneity to the decision-making processes of agents, and adopting a discrete-time formulation are investigated and their impacts illustrated. The goal is to demonstrate both the promise and the danger of agent-based modeling in the context of a relatively simple model, and to delineate the importance of modeling decisions that are often overlooked.
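    The contrast between an aggregate system-dynamics stock and its agent-based disaggregation can be sketched as follows. The workforce numbers, rates, and function names are invented for illustration and are not the authors' model:

```python
import random

def aggregate_step(N, inflow, exit_rate, dt=1.0):
    """One system-dynamics update of a workforce stock:
    dN/dt = inflow - exit_rate * N (explicit Euler step)."""
    return N + dt * (inflow - exit_rate * N)

def agent_step(agents, inflow, exit_prob, rng):
    """Agent-based analogue: each agent exits independently with
    probability exit_prob, then `inflow` new agents enter."""
    survivors = [a for a in agents if rng.random() > exit_prob]
    survivors.extend(object() for _ in range(inflow))
    return survivors

# The aggregate stock settles at inflow / exit_rate; the agent-based
# run fluctuates around the same equilibrium.
N, agents, rng = 0.0, [], random.Random(42)
for _ in range(400):
    N = aggregate_step(N, 10.0, 0.05)
    agents = agent_step(agents, 10, 0.05, rng)
print(N, len(agents))
```

    The interesting effects the abstract points to arise precisely when the agent rules are made heterogeneous, so that the population no longer averages back to the aggregate equation.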

  2. Forest-fire models

    Treesearch

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  3. SPITFIRE within the MPI Earth system model: Model development and evaluation

    NASA Astrophysics Data System (ADS)

    Lasslop, Gitta; Thonicke, Kirsten; Kloster, Silvia

    2014-09-01

    Quantification of the role of fire within the Earth system requires an adequate representation of fire as a climate-controlled process within an Earth system model. To be able to address questions on the interaction between fire and the Earth system, we implemented the mechanistic fire model SPITFIRE in JSBACH, the land surface model of the MPI Earth system model. Here, we document the model implementation as well as model modifications. We evaluate our model results by comparing the simulation to the GFED version 3 satellite-based data set. In addition, we assess the sensitivity of the model to the meteorological forcing and to the spatial variability of a number of fire-relevant model parameters. A first comparison of model results with burned area observations showed a strong correlation of the residuals with wind speed. Further analysis revealed that the response of the fire spread to wind speed was too strong for application on the global scale. Therefore, we developed an improved parametrization to account for this effect. The evaluation of the improved model shows that the model is able to capture the global gradients and the seasonality of burned area. Some areas of model-data mismatch can be explained by differences in vegetation cover compared to observations. We achieve benchmarking scores comparable to other state-of-the-art fire models. The global total burned area is sensitive to the meteorological forcing. Adjustment of parameters leads to similar model results for both forcing data sets with respect to spatial and seasonal patterns. This article was corrected on 29 SEP 2014. See the end of the full text for details.

  4. Modeling Heterogeneous Variance-Covariance Components in Two-Level Models

    ERIC Educational Resources Information Center

    Leckie, George; French, Robert; Charlton, Chris; Browne, William

    2014-01-01

    Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…

  5. Measurement Model Specification Error in LISREL Structural Equation Models.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  6. Radiation Models

    ERIC Educational Resources Information Center

    James, W. G. G.

    1970-01-01

    Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)

  7. Gene expression profiling in multiple myeloma--reporting of entities, risk, and targets in clinical routine.

    PubMed

    Meissner, Tobias; Seckinger, Anja; Rème, Thierry; Hielscher, Thomas; Möhler, Thomas; Neben, Kai; Goldschmidt, Hartmut; Klein, Bernard; Hose, Dirk

    2011-12-01

    Multiple myeloma is an incurable malignant plasma cell disease characterized by survival ranging from several months to more than 15 years. Assessment of risk and underlying molecular heterogeneity can be done excellently by gene expression profiling (GEP), but its way into clinical routine is hampered by the lack of an appropriate reporting tool and of integration with other prognostic factors into a single "meta" risk stratification. The GEP-report (GEP-R) was built as open-source software, developed in R, for gene expression reporting in clinical practice using Affymetrix microarrays. GEP-R processes new samples by applying a documentation-by-value strategy to the raw data to be able to assign thresholds and grouping algorithms defined on a reference cohort of 262 patients with multiple myeloma. Furthermore, we integrated expression-based and conventional prognostic factors within one risk stratification (HM-metascore). The GEP-R comprises (i) quality control, (ii) sample identity control, (iii) biologic classification, (iv) risk stratification, and (v) assessment of target genes. The resulting HM-metascore is defined as the sum over the weighted factors gene expression-based risk assessment (UAMS-, IFM-score), proliferation, International Staging System (ISS) stage, t(4;14), and expression of prognostic target genes (AURKA, IGF1R) for which clinical-grade inhibitors exist. The HM-score delineates three significantly different groups of 13.1%, 72.1%, and 14.7% of patients with a 6-year survival rate of 89.3%, 60.6%, and 18.6%, respectively. GEP reporting allows prospective assessment of risk and target gene expression and integration of current prognostic factors in clinical routine, and can be customized for novel parameters or other cancer entities. ©2011 AACR.

  8. Agent-based modeling: case study in cleavage furrow models.

    PubMed

    Mogilner, Alex; Manhart, Angelika

    2016-11-07

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as "differential equation based" (DE) or "agent based" (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem-positioning of the cleavage furrow in dividing cells-to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. © 2016 Mogilner and Manhart. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  9. Communication system modeling

    NASA Technical Reports Server (NTRS)

    Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.

    1971-01-01

    This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
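    The frequency-domain modeling described above can be illustrated with the classic case of single-tone FM, where the carrier and sideband amplitudes are Bessel functions J_n of the modulation index. The sketch below (with a home-grown power-series evaluation of J_n; nothing here is from the report itself) shows the idea:

```python
import math

def bessel_J(n, x, terms=30):
    """Bessel function of the first kind J_n(x), evaluated from its
    power series (adequate for the small arguments used here)."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2.0) ** (2 * m + n) for m in range(terms))

def fm_sideband_amplitudes(beta, n_max=8):
    """Relative amplitudes of the carrier (n = 0) and the n-th sideband
    pair of a single-tone FM signal with modulation index beta:
    the n-th pair has amplitude J_n(beta)."""
    return {n: bessel_J(n, beta) for n in range(n_max + 1)}

amps = fm_sideband_amplitudes(2.0)
# Total power is conserved across carrier and sidebands:
# J_0^2 + 2 * sum_n J_n^2 = 1.
total = amps[0] ** 2 + 2.0 * sum(amps[n] ** 2 for n in range(1, 9))
print(total)
```

    The power-conservation identity printed at the end is a convenient sanity check when computing modulation spectra of this kind.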

  10. Modeling and control design of a wind tunnel model support

    NASA Technical Reports Server (NTRS)

    Howe, David A.

    1990-01-01

    The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.

  11. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular parametric regression models, in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared with the semi-parametric proportional hazards model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge of the model and then illustrates how to fit it with the R software. The SurvRegCensCov package is useful for converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variables. The eha package provides an alternative way to fit the Weibull regression model. The check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way of model development. Visualization of the Weibull regression model after model development is useful in that it provides another way to report the findings. PMID:28149846
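    The coefficient conversion that SurvRegCensCov performs can be stated compactly: for a Weibull accelerated-failure-time coefficient β and shape parameter k, the event time ratio is ETR = exp(β) and the hazard ratio is HR = exp(−kβ). The Python function below is a hypothetical stand-in for the R tooling discussed in the article, shown only to make the arithmetic explicit:

```python
import math

def weibull_aft_to_hr_etr(beta, shape):
    """Translate a Weibull accelerated-failure-time coefficient into
    clinically familiar quantities: event time ratio ETR = exp(beta)
    and hazard ratio HR = exp(-shape * beta), where `shape` is the
    Weibull shape parameter. (Function name is ours, not the package's.)"""
    return {"HR": math.exp(-shape * beta), "ETR": math.exp(beta)}

# A covariate that lengthens event times (beta > 0) lowers the hazard.
print(weibull_aft_to_hr_etr(0.5, 2.0))
```

    The sign flip between the two scales is the usual source of confusion when moving between AFT and proportional-hazards reporting.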

  12. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    NASA Astrophysics Data System (ADS)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g. the Last Glacial Maximum, the mid-Piacenzian Warm Period). However, these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently, forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas 2013 (WOA13) oceanographic data. Multiple modeling methods (e.g. multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g. PlioMIP) are discussed.
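    A minimal sketch of the simplest method in the list above, logistic regression of a taxon's presence or absence against an environmental predictor, might look like the following. The bare-bones gradient-descent fitter and all names are illustrative, not the authors' implementation:

```python
import math

def predict(w, b, x):
    """Predicted probability of presence for predictor vector x."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit presence (1) / absence (0) of a taxon against environmental
    predictors by stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            g = predict(w, b, xi) - yi   # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Toy calibration: the taxon is absent at low SST, present at high SST.
w, b = fit_logistic([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
print(predict(w, b, [2.5]), predict(w, b, [0.5]))
```

    The same presence/absence framing underlies the maximum-entropy and neural-network methods the study compares, each with a different functional form for the response surface.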

  13. Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective

    USGS Publications Warehouse

    Barker, Richard J.; Link, William A.

    2015-01-01

    Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self-consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
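    The AIC model weights at issue here are computed as w_i = exp(−Δ_i/2) / Σ_j exp(−Δ_j/2), where Δ_i is each model's AIC minus the minimum AIC in the set; a minimal sketch:

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    with delta_i = AIC_i - min(AIC). Interpreting these as model
    probabilities is exactly the step the authors scrutinize."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# A 2-unit AIC gap translates into roughly a 73/27 split in weight.
print(aic_weights([100.0, 102.0]))
```

    The arithmetic is trivial; the paper's point is the interpretive step of reading these normalized quantities as probabilities over the model set.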

  14. Pseudo-Boltzmann model for modeling the junctionless transistors

    NASA Astrophysics Data System (ADS)

    Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.

    2014-05-01

    Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities for high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
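    The over-estimation by Boltzmann statistics that motivates this work can be demonstrated numerically. The sketch below brute-force integrates the Fermi-Dirac integral of order 1/2 and compares it with the nondegenerate (Boltzmann) limit (√π/2)·e^η; it illustrates the general point only and is not the authors' Pseudo-Boltzmann approximation:

```python
import math

def fermi_dirac_half(eta, xmax=40.0, n=40000):
    """F_{1/2}(eta) = integral_0^inf sqrt(x) / (1 + exp(x - eta)) dx,
    evaluated with the trapezoidal rule on [0, xmax]."""
    dx = xmax / n
    xs = [i * dx for i in range(n + 1)]
    ys = [math.sqrt(x) / (1.0 + math.exp(x - eta)) for x in xs]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

def boltzmann_limit(eta):
    """Nondegenerate (Boltzmann) limit of F_{1/2}: (sqrt(pi)/2) * exp(eta)."""
    return 0.5 * math.sqrt(math.pi) * math.exp(eta)

# Deep in the nondegenerate regime (eta << 0) the two agree; as eta
# rises toward degeneracy (high doping), Boltzmann statistics overshoot.
for eta in (-5.0, 0.0, 2.0):
    print(eta, fermi_dirac_half(eta), boltzmann_limit(eta))
```

    Any improved analytical approximation, such as the Pseudo-Boltzmann model, must close this gap at positive η without losing closed-form tractability.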

  15. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate for inputs that differ considerably from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
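    The assembly of local linear (first-order Taylor) models with radial-basis-function weights can be sketched in one dimension as follows. The Gaussian RBF form, the width parameter, and all names are assumptions for illustration, not the authors' formulation:

```python
import math

def rbf_blend(x, centers, local_models, width=0.5):
    """Evaluate an assembly of local linear models
    y_i(x) = a_i + b_i * (x - c_i), blended with normalized Gaussian
    radial-basis-function weights centered on the sampling states c_i."""
    w = [math.exp(-((x - c) / width) ** 2) for c in centers]
    s = sum(w)
    return sum(wi / s * (a + b * (x - c))
               for wi, c, (a, b) in zip(w, centers, local_models))

# Tangent-line models of f(x) = x^2 at x = 0, 1, 2, blended globally.
centers = [0.0, 1.0, 2.0]
tangents = [(c * c, 2.0 * c) for c in centers]  # (value, slope) at each center
print(rbf_blend(1.5, centers, tangents))
```

    Near each sampling state the blend reduces to the local tangent model, while between states the RBF weights interpolate smoothly, which is how the assembly retains nonlinearity that any single linear model would miss.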

  16. Model selection for multi-component frailty models.

    PubMed

    Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert

    2007-11-20

    Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.

  17. Aerosol Modeling for the Global Model Initiative

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.

    2001-01-01

    The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.

  18. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    PubMed

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. CMIP5 Historical Simulations (1850-2012) with GISS ModelE2

    NASA Technical Reports Server (NTRS)

    Miller, Ronald Lindsay; Schmidt, Gavin A.; Nazarenko, Larissa S.; Tausnev, Nick; Bauer, Susanne E.; DelGenio, Anthony D.; Kelley, Max; Lo, Ken K.; Ruedy, Reto; Shindell, Drew T.; hide

    2014-01-01

    Observations of climate change during the CMIP5 extended historical period (1850-2012) are compared to trends simulated by six versions of the NASA Goddard Institute for Space Studies ModelE2 Earth System Model. The six models are constructed from three versions of the ModelE2 atmospheric general circulation model, distinguished by their treatment of atmospheric composition and the aerosol indirect effect, combined with two ocean general circulation models, HYCOM and Russell. Forcings that perturb the model climate during the historical period are described. Five-member ensemble averages from each of the six versions of ModelE2 simulate trends of surface air temperature, atmospheric temperature, sea ice and ocean heat content that are in general agreement with observed trends, although simulated warming is slightly excessive within the past decade. Only simulations that include increasing concentrations of long-lived greenhouse gases match the warming observed during the twentieth century. Differences in twentieth-century warming among the six model versions can be attributed to differences in climate sensitivity, aerosol and ozone forcing, and heat uptake by the deep ocean. Coupled models with HYCOM export less heat to the deep ocean, associated with reduced surface warming in regions of deepwater formation, but greater warming elsewhere at high latitudes along with reduced sea ice. All ensembles show twentieth-century annular trends toward reduced surface pressure at southern high latitudes and a poleward shift of the midlatitude westerlies, consistent with observations.

  20. Building generic anatomical models using virtual model cutting and iterative registration.

    PubMed

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and surface model for the generic 3D model are created at the final step. Our method is very flexible and easy to use such that anyone can use image stacks to create models and retrieve a sub-region from it at their ease. Java