Sample records for cetop dnb model

  1. Evaluation of CASL boiling model for DNB performance in full scale 5x5 fuel bundle with spacer grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seung Jun

    As one of the main tasks of the FY17 CASL-THM activity, an evaluation study on the applicability of the CASL baseline boiling model for 5x5 DNB applications was conducted, and the predictive capability of the DNB analysis is reported here. While the baseline CASL boiling model (GEN-1A) approach was successfully implemented and validated for a single-pipe application in the previous year's task, extended DNB validation for realistic sub-channels with detailed spacer grid configurations was tasked in FY17. The focus of the current study is to demonstrate the robustness and feasibility of the CASL baseline boiling model for DNB performance in a full 5x5 fuel bundle application. A quantitative evaluation of the DNB predictive capability is performed by comparison with corresponding experimental measurements (i.e., the reference for model validation). The reference data were provided by the Westinghouse Electric Company (WEC). The two grid configurations tested here are the Non-Mixing Vane Grid (NMVG) and the Mixing Vane Grid (MVG). Thorough validation studies with the two sub-channel configurations are performed over a wide range of realistic PWR operational conditions.

  2. Descriptive normative beliefs and the self-regulation in alcohol use among Slovak university students.

    PubMed

    Brutovská, Monika; Orosova, Olga; Kalina, Ondrej; Šebeňa, René

    2015-12-01

    This study aims (i) to understand how descriptive normative beliefs (DNB) about typical students' alcohol use and self-regulation (SRG) are related to alcohol use (AU) by exploring the indirect effect of SRG on AU through DNB and (ii) to explore gender differences and differences between universities in DNB, SRG and AU. Cross-sectional data were collected online from 817 Slovak university students from four universities (75.22% female; Mage = 19.61; SD = 1.42), who filled in the AUDIT-C items and items measuring DNB about typical students' AU and SRG. T-tests, one-way ANOVA and structural equation modelling were used for data analysis. Gender differences in AU and DNB were found, with males having higher levels of both AU and DNB. The tested model of AU fit the data well. A significant association was found between DNB and (i) AU (positive) and (ii) SRG (negative). The analysis confirmed an indirect effect of SRG on AU through DNB. The study contributes to research concerning AU by showing how DNB and SRG are linked to AU among Slovak university students. The research findings can also be used in developing prevention and intervention programs.
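    The product-of-coefficients logic behind the reported indirect effect can be sketched in a few lines. This is a minimal stand-in for the paper's structural equation model, assuming simple linear paths; the variable construction and simulated effect sizes below are illustrative only, not the study's data.

```python
import numpy as np

def indirect_effect(srg, dnb, au):
    """Product-of-coefficients estimate of the indirect effect of
    self-regulation (SRG) on alcohol use (AU) through descriptive
    normative beliefs (DNB): a (SRG -> DNB) times b (DNB -> AU | SRG)."""
    a = np.polyfit(srg, dnb, 1)[0]                      # SRG -> DNB slope
    X = np.column_stack([np.ones_like(srg), srg, dnb])  # OLS with both predictors
    b = np.linalg.lstsq(X, au, rcond=None)[0][2]        # DNB -> AU, SRG held fixed
    return a * b
```

    With a simulated negative SRG-to-DNB path and positive DNB-to-AU path, the product recovers a negative indirect effect, matching the signs reported in the abstract.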

  3. Temporal monitoring of vessels activity using day/night band in Suomi NPP on South China Sea

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takashi; Asanuma, Ichio; Park, Jong Geol; Mackin, Kenneth J.; Mittleman, John

    2017-05-01

    In this research, we focus on vessel detection using day/night band (DNB) satellite imagery from Suomi NPP in order to monitor changes in vessel activity in the South China Sea region. In this paper, we consider the relation between temporal changes in vessel activity and maritime environmental events, based on vessel traffic density estimation using DNB. DNB is moderate-resolution (350-700 m) satellite imagery, but it can detect the fishing lights of fishing boats at night on a daily basis. The advantage of DNB is continuous monitoring over a wide area compared to other vessel detection and locating systems. However, DNB is strongly influenced by cloud and lunar reflection. Therefore, we additionally used the brightness temperature at 3.7 μm (BT3.7) for cloud information. In our previous research, we constructed an empirical vessel detection model based on DNB contrast and an estimation of cloud conditions using BT3.7, and we proposed a vessel traffic density estimation method based on that empirical model. In this paper, we construct temporal density estimation maps of the South China Sea and East China Sea in order to extract knowledge from changes in vessel activity.
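    The contrast-plus-cloud-screen idea described above can be illustrated with a toy detector. The ratio definition of contrast, the window size, and both thresholds are assumptions for illustration, not the authors' calibrated empirical model.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_vessels(dnb, bt37, bg_window=5, contrast_thresh=3.0, bt_cloud_thresh=280.0):
    """Toy vessel detection: flag DNB pixels that stand out against their
    local background and are not cloud-contaminated (warm BT3.7).
    All thresholds here are illustrative placeholders."""
    background = median_filter(dnb, size=bg_window)   # local background radiance
    contrast = dnb / (background + 1e-12)             # DNB contrast ratio
    clear = bt37 > bt_cloud_thresh                    # crude cloud screen
    return (contrast > contrast_thresh) & clear
```

    A bright point source over a dark, cloud-free sea is flagged; the same scene under a cold (cloudy) BT3.7 field yields no detections.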

  4. Mixed Inhibition of Adenosine Deaminase Activity by 1,3-Dinitrobenzene: A Model for Understanding Cell-Selective Neurotoxicity in Chemically-Induced Energy Deprivation Syndromes in Brain

    PubMed Central

    Wang, Yipei; Liu, Xin; Schneider, Brandon; Zverina, Elaina A.; Russ, Kristen; Wijeyesakere, Sanjeeva J.; Fierke, Carol A.; Richardson, Rudy J.; Philbert, Martin A.

    2012-01-01

    Astrocytes are acutely sensitive to 1,3-dinitrobenzene (1,3-DNB), while adjacent neurons are relatively unaffected, consistent with other chemically-induced energy deprivation syndromes. Previous studies have investigated the role of astrocytes in protecting neurons from hypoxia and chemical injury via adenosine release. Adenosine is considered neuroprotective, but it is rapidly removed by extracellular deaminases such as adenosine deaminase (ADA). The present study tested the hypothesis that ADA is inhibited by 1,3-DNB as a substrate mimic, thereby preventing adenosine catabolism. ADA was inhibited by 1,3-DNB with an IC50 of 284 μM (Hill slope, n = 4.8 ± 0.4). Native gel electrophoresis showed that 1,3-DNB did not denature ADA. Furthermore, adding Triton X-100 (0.01–0.05%, wt/vol), Nonidet P-40 (0.0015–0.0036%, wt/vol), or bovine serum albumin (0.05 mg/ml), or changing [ADA] (0.2 and 2 nM), did not substantially alter the 1,3-DNB IC50 value. Likewise, dynamic light scattering showed no particle formation over a [1,3-DNB] range of 149–1043 μM. Kinetics revealed mixed inhibition, with 1,3-DNB binding to ADA (KI = 520 ± 100 μM, n = 1 ± 0.6) and the ADA-adenosine complex (KIS = 262 ± 7 μM, n = 6 ± 0.6, indicating positive cooperativity). In accord with the kinetics, docking predicted binding of 1,3-DNB to the active site and three peripheral sites. In addition, exposure of DI TNC-1 astrocytes to 10–500 μM 1,3-DNB produced concentration-dependent increases in extracellular adenosine at 24 h. Overall, the results demonstrate that 1,3-DNB is a mixed inhibitor of ADA and may thus lead to increases in extracellular adenosine. The finding may provide insights to guide future work on chemically-induced energy deprivation. PMID:22106038
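    The reported inhibition constants can be dropped into the textbook mixed-inhibition rate law to see how adenosine turnover falls as 1,3-DNB rises. The Hill-type cooperativity terms reported in the abstract are omitted here for simplicity, and Vmax and Km are placeholder values, not measured ones.

```python
def mixed_inhibition_rate(S, I, Vmax=1.0, Km=30.0, KI=520.0, KIS=262.0):
    """Classic mixed-inhibition rate law:
    v = Vmax*S / (Km*(1 + I/KI) + S*(1 + I/KIS)).
    S, I, Km, KI, KIS in µM; KI and KIS are the abstract's values,
    Vmax and Km are illustrative placeholders."""
    return Vmax * S / (Km * (1.0 + I / KI) + S * (1.0 + I / KIS))
```

    At S = Km with no inhibitor this reduces to Vmax/2, and the rate falls monotonically as [1,3-DNB] increases, since the inhibitor binds both free enzyme and the enzyme-substrate complex.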

  5. The absorption and first-pass metabolism of [14C]-1,3-dinitrobenzene in the isolated vascularly perfused rat small intestine.

    PubMed

    Adams, P C; Rickert, D E

    1996-11-01

    We tested the hypothesis that the small intestine is capable of first-pass, reductive metabolism of xenobiotics. A simplified version of the isolated vascularly perfused rat small intestine was developed to test this hypothesis with 1,3-dinitrobenzene (1,3-DNB) as a model xenobiotic. Both 3-nitroaniline (3-NA) and 3-nitroacetanilide (3-NAA) were formed and absorbed following intralumenal doses of 1,3-DNB (1.8 or 4.2 mumol) to the isolated vascularly perfused rat small intestine. Dose, fasting, or antibiotic pretreatment had no effect on the absorption and metabolism of 1,3-DNB in this model system. The failure of antibiotic pretreatment to alter the metabolism of 1,3-DNB indicated that 1,3-DNB metabolism was mammalian rather than microfloral in origin. All data from experiments initiated with lumenal 1,3-DNB were fit to a pharmacokinetic model (model A). ANOVA revealed that dose, fasting, or antibiotic pretreatment had no statistically significant effect on the model-dependent parameters. 3-NA (1.5 mumol) was administered to the lumen of the isolated vascularly perfused rat small intestine to evaluate model A predictions for the absorption and metabolism of this metabolite. All data from experiments initiated with 3-NA were fit to a pharmacokinetic model (model B). Comparison of corresponding model-dependent pharmacokinetic parameters (i.e., those parameters which describe the same processes in models A and B) revealed quantitative differences. Evidence for significant quantitative differences in the pharmacokinetics or metabolism of formed versus preformed 3-NA in rat small intestine may require better definition of the rate constants used to describe tissue and lumenal processes, or identification and incorporation of the remaining unidentified metabolites into the models.
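    A compartmental scheme of the kind fit in such studies can be sketched as a small ODE system. The topology and rate constants below are invented for illustration; the paper's actual model A structure and fitted parameters are not given in the abstract.

```python
import numpy as np
from scipy.integrate import odeint

def model_a(y, t, ka, km, km2):
    """Toy lumenal absorption/metabolism scheme: lumenal 1,3-DNB is
    absorbed into the vascular perfusate (ka) or reduced to 3-NA (km);
    3-NA is then acetylated to 3-NAA (km2). Units: µmol, 1/min."""
    dnb_lumen, dnb_vasc, na, naa = y
    return [-(ka + km) * dnb_lumen,       # lumenal 1,3-DNB depletion
            ka * dnb_lumen,               # absorbed, unmetabolized 1,3-DNB
            km * dnb_lumen - km2 * na,    # 3-NA formed, then acetylated
            km2 * na]                     # 3-NAA accumulation

t = np.linspace(0.0, 120.0, 241)          # 2 h perfusion, minutes
sol = odeint(model_a, [1.8, 0.0, 0.0, 0.0], t, args=(0.02, 0.01, 0.03))
```

    Total mass is conserved across the four pools at every time point, which is a quick sanity check before fitting rate constants to perfusate data.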

  6. CL-20/DNB co-crystal based PBX with PEG: molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Gao, Pei; Xiao, Ji Jun; Zhao, Feng; Xiao, He Ming

    2016-12-01

    Molecular dynamics simulation was carried out for CL-20/DNB co-crystal based PBX (polymer-bonded explosive) blended with polymer PEG (polyethylene glycol). In this paper, the miscibility of the PBX models is investigated through the calculated binding energy. Pair correlation function (PCF) analysis is applied to study the interaction of the interface structures in the PBX models. The mechanical properties of PBXs are also discussed to understand the change of the mechanical properties after adding the polymer. Moreover, the calculated diffusion coefficients of the interfacial explosive molecules are used to discuss the dispersal ability of CL-20 and DNB molecules in the interface layer.

  7. Aurora Research: Earth/Space Data Fusion Powered by GIS and Python

    NASA Astrophysics Data System (ADS)

    Kalb, V. L.; Collado-Vega, Y. M.; MacDonald, E.; Kosar, B.

    2017-12-01

    The Aurora Borealis and Aurora Australis are visually spectacular, but they are also an indicator of Sun-magnetosphere-ionosphere energy transfer during geomagnetic storms. The Saint Patrick's Day Storm of 2015 is a stellar example of this, and it is the focus of our study, which utilizes the geographic information system (GIS) capabilities of ArcGIS to bring together diverse, cross-disciplinary data for analysis. This research leverages data from a polar-orbiting Earth science sensor band that is exquisitely sensitive to visible light, namely the Day/Night Band (DNB) of the VIIRS instrument onboard the Suomi NPP satellite. This Sun-synchronous data source can provide high temporal and spatial resolution observations of the aurorae, which is not possible with current space science instruments. These data can be compared with auroral model data, solar wind measurements, and citizen science data of aurora observations and tweets. While the proposed data sources are diverse in type and format, their common attribute is location. This is exploited by bringing all the data into ArcGIS for mapping and analysis. The Python programming language is used extensively to automate the data preprocessing, group the DNB and citizen science observations into temporal windows associated with an auroral model timestep, and print the data to a PDF mapbook for sharing with team members. There are several goals for this study: compare the auroral model predictions with DNB data, look for fine-grained structure of the aurora in the DNB data, compare citizen science data with DNB values, and correlate DNB intensity with solar wind data. This study demonstrates the benefits of using a GIS platform to bring together data that are diverse in type and format for scientific exploration, and shows how Python can be used to scale up to large datasets.

  8. Lunar BRDF Correction of Suomi-NPP VIIRS Day/Night Band Time Series Product

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Roman, M. O.; Kalb, V.; Stokes, E.; Miller, S. D.

    2015-12-01

    Since the first-light images from the Suomi-NPP VIIRS low-light visible Day/Night Band (DNB) sensor were received in November 2011, the NASA Suomi-NPP Land Science Investigator Processing System (SIPS) has focused on evaluating this new capability for quantitative science applications, as well as developing and testing refined algorithms to meet operational and Land science research needs. While many promising DNB applications have been developed since the Suomi-NPP launch, most studies to date have been limited by the traditional qualitative image display and spatially-temporally aggregated statistical analysis methods inherent in current heritage algorithms. This has resulted in strong interest in a new generation of science-quality products that can be used to monitor both the magnitude and signature of nighttime phenomena and anthropogenic sources of light emissions. In one particular case study, Román and Stokes (2015) demonstrated that tracking daily dynamic DNB radiances can provide valuable information about the character of the human activities and behaviors that influence energy, consumption, and vulnerability. Here we develop and evaluate a new suite of DNB science-quality algorithms that can exclude a primary source of background noise: the Lunar BRDF (Bidirectional Reflectance Distribution Function) effect. Every day, the operational NASA Land SIPS DNB algorithm makes use of 16 days' worth of DNB-derived surface reflectances (SR) (based on the heritage MODIS SR algorithm) and a semiempirical kernel-driven bidirectional reflectance model to determine a global set of parameters describing the BRDF of the land surface. The nighttime period of interest is heavily weighted as a function of observation coverage. These gridded parameters, combined with Miller and Turner's (2009) top-of-atmosphere spectral irradiance model, are then used to determine the DNB's lunar radiance contribution at any point in time and under specific illumination conditions.
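    A kernel-driven BRDF model of the kind described above is a linear combination of an isotropic term and angular kernels. The sketch below evaluates the standard RossThick volumetric kernel and an assumed Lambertian-style conversion from surface reflectance to lunar radiance; the geometric (LiSparse) kernel, atmospheric terms, and the operational algorithm's details are deliberately omitted.

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric kernel used in kernel-driven BRDF models.
    theta_s, theta_v: source/view zenith angles; phi: relative azimuth
    (all radians). Equals 0 for nadir source and view."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))       # phase angle
    return (((np.pi / 2 - xi) * cos_xi + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def lunar_toa_radiance(f_iso, f_vol, k_vol, lunar_irradiance, theta_s):
    """Predicted lunar contribution: kernel-driven surface reflectance
    (geometric kernel omitted for brevity) times lunar irradiance,
    assuming a Lambertian conversion L = R * E * cos(theta_s) / pi."""
    refl = f_iso + f_vol * k_vol
    return refl * lunar_irradiance * np.cos(theta_s) / np.pi
```

    With f_iso and f_vol retrieved from multi-night DNB reflectances and the lunar irradiance from a model such as Miller and Turner's, this is the shape of the correction: a per-pixel predicted moonlight radiance that can be subtracted from the observed DNB signal.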

  9. Removal of Direct N Blue-106 from artificial textile dye effluent using activated carbon from orange peel: adsorption isotherm and kinetic studies.

    PubMed

    Khaled, Azza; El Nemr, Ahmed; El-Sikaily, Amany; Abdelwahab, Ola

    2009-06-15

    The purpose of this study is to suggest an efficient process, one that does not require a huge investment, for the removal of direct dye from wastewater. Activated carbon developed from agricultural waste material was characterized and utilized for the removal of Direct Navy Blue 106 (DNB-106) from wastewater. Systematic studies of DNB-106 adsorption equilibrium and kinetics on the low-cost activated carbon were carried out. Adsorption studies were carried out at different initial concentrations of DNB-106 (50, 75, 100, 125 and 150 mg L⁻¹), contact times (5-180 min), pH values (2.0, 3.0, 4.7, 6.3, 7.2, 8.0, 10.3 and 12.7) and sorbent doses (2.0, 4.0 and 6.0 g L⁻¹). Both the Langmuir and Freundlich models fitted the adsorption data quite reasonably (R² > 0.97). The maximum adsorption capacity was 107.53 mg g⁻¹ at a DNB-106 concentration of 150 mg L⁻¹ and a carbon concentration of 2 g L⁻¹. Various mechanisms were established for DNB-106 adsorption on the developed adsorbents. The kinetic studies were conducted to delineate the effects of initial dye concentration, contact time and solid-to-liquid concentration. The developed carbon might be successfully used for the removal of DNB-106 from liquid industrial wastes.
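    Both isotherms used in the study have simple closed forms, so the fitting step can be sketched directly. The equilibrium data below are synthetic points generated near the reported capacity (107.53 mg g⁻¹) with an assumed Langmuir constant; they are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# Synthetic equilibrium data (Ce in mg/L, qe in mg/g), illustrative only
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([35.8, 53.8, 71.7, 86.0, 95.6])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[100.0, 0.05])
```

    Fitting both models and comparing R² (as the authors did) is then a one-line extension with `freundlich` in place of `langmuir`.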

  10. Theoretical investigation of the structures and properties of CL-20/DNB cocrystal and associated PBXs by molecular dynamics simulation.

    PubMed

    Hang, Gui-Yun; Yu, Wen-Li; Wang, Tao; Li, Zhen

    2018-03-19

    In this work, a CL-20/DNB cocrystal explosive model was established and six different fluoropolymers, i.e., PVDF, PCTFE, F2311, F2312, F2313 and F2314, were added onto the (1 0 0), (0 1 0) and (0 0 1) crystal orientations to obtain the corresponding polymer-bonded explosives (PBXs). The influence of the fluoropolymers on PBX properties (energetic performance, stability and mechanical properties) was investigated and evaluated using molecular dynamics (MD) methods. The results reveal a decrease in engineering moduli, an increase in Cauchy pressure (i.e., rigidity and stiffness are lessened), and an increase in plasticity and ductility, indicating that the fluoropolymers have a beneficial influence on the mechanical properties of PBXs. Of all the PBX models tested, the mechanical properties of CL-20/DNB/F2311 were the best. Binding energies show that CL-20/DNB/F2311 has the highest intermolecular interaction energy and the best compatibility and stability. Therefore, F2311 is the most suitable fluoropolymer for PBXs. The mechanical properties and binding energies of the three crystal orientations vary in the order (0 1 0) > (0 0 1) > (1 0 0); i.e., the mechanical properties of the (0 1 0) crystal orientation are best, and this is the most stable crystal orientation. Detonation performance results show that the density and detonation parameters of the PBXs are lower than those of the pure CL-20 and the CL-20/DNB cocrystal explosive. The power and energetic performance of the PBXs are thus weakened; however, these PBXs still have excellent detonation performance and are very promising. The results and conclusions provide helpful guidance for the design and manufacture of PBXs.

  11. Trauma associated sleep disorder: a proposed parasomnia encompassing disruptive nocturnal behaviors, nightmares, and REM without atonia in trauma survivors.

    PubMed

    Mysliwiec, Vincent; O'Reilly, Brian; Polchinski, Jason; Kwon, Herbert P; Germain, Anne; Roth, Bernard J

    2014-10-15

    To characterize the clinical, polysomnographic and treatment responses of patients with disruptive nocturnal behaviors (DNB) and nightmares following traumatic experiences. A case series of four young male, active duty U.S. Army Soldiers who presented with DNB and trauma-related nightmares. Patients underwent a clinical evaluation in a sleep medicine clinic and an attended overnight polysomnogram (PSG), and received treatment. We report pertinent clinical and PSG findings from our patients and review prior literature on sleep disturbances in trauma survivors. DNB ranged from vocalizations and somnambulism to combative behaviors that injured bed partners. Nightmares were replays of the patients' traumatic experiences. All patients had REM without atonia during polysomnography; one patient had DNB and a nightmare captured during REM sleep. Prazosin improved DNB and nightmares in all patients. We propose Trauma associated Sleep Disorder (TSD) as a unique sleep disorder encompassing the clinical features, PSG findings, and treatment responses of patients with DNB, nightmares, and REM without atonia after trauma.

  12. Using Ground Targets to Validate S-NPP VIIRS Day-Night Band Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Xuexia; Wu, Aisheng; Xiong, Xiaoxiong; Lei, Ning; Wang, Zhipeng; Chiang, Kwofu

    2016-01-01

    In this study, observations of Libya 4 and Dome C from the S-NPP VIIRS Day-Night Band (DNB) and moderate-resolution bands (M bands) over the first four years of the mission are used to assess the DNB low-gain calibration stability. The Sensor Data Records produced by the NASA Land Product Evaluation and Algorithm Testing Element (PEATE) are acquired from nearly nadir overpasses of the Libya 4 desert and Dome C snow surfaces. A kernel-driven bidirectional reflectance distribution function (BRDF) correction model is used at both the Libya 4 and Dome C sites to correct for the surface BRDF influence. At both sites, the simulated top-of-atmosphere (TOA) DNB reflectances based on SCIAMACHY spectral data are compared with Land PEATE TOA reflectances based on the modulated Relative Spectral Response (RSR). At the Libya 4 site, the results indicate a decrease of 1.03% in Land PEATE TOA reflectance and a decrease of 1.01% in SCIAMACHY-derived TOA reflectance over the period from April 2012 to January 2016. At the Dome C site, the decreases are 0.29% and 0.14%, respectively. The consistency between the SCIAMACHY and Land PEATE data trends is good. The small difference between SCIAMACHY- and Land PEATE-derived TOA reflectances could be caused by changes in the surface targets, atmospheric conditions, and on-orbit calibration. The reflectances and radiances of the Land PEATE DNB are also compared with matching M bands and the integral M bands based on M4, M5, and M7. The fitted trends of the DNB-to-integral-M-band ratios indicate a 0.75% decrease at the Libya 4 site and a 1.89% decrease at the Dome C site. Part of the difference is due to an insufficient number of sampled bands available within the DNB wavelength range. The above results indicate that the Land PEATE VIIRS DNB product is accurate and stable. The methods used in this study can be applied to other satellite instruments to provide quantitative assessments of calibration stability.

  13. Understanding the links between humans, climate change, water and carbon in a Corn Belt Watershed

    NASA Astrophysics Data System (ADS)

    Secchi, S.; Perez Lapena, B.; Teshager, A. D.; Bhattarai, M. D.; Schoof, J. T.

    2014-12-01

    As part of the High Latitude Proving Ground, the Geographic Information Network of Alaska (GINA) at the University of Alaska Fairbanks (UAF) receives data from the Suomi National Polar-orbiting Partnership (SNPP) satellite via direct broadcast antennas in Fairbanks, including data from the SNPP's Visible Infrared Imaging Radiometer Suite (VIIRS) instrument. These data are processed by GINA, and the resulting imagery is delivered in near real-time to the National Weather Service (NWS) in Alaska for use in weather analysis and forecasting. The VIIRS Day-Night Band (DNB) produces what is functionally visible imagery at night and has been used extensively by operational meteorologists in Alaska, especially during the prolonged darkness of the arctic winter. The DNB has proven to be a powerful tool when combined with other observational and model data sets and has offered NWS meteorologists a more complete picture of weather processes in a region where coverage from surface-based observations is generally poor. Thanks to its high latitude, Alaska benefits from much more frequent temporal coverage by polar-orbiting satellites such as SNPP and its DNB channel. Also, the sparse population of Alaska and the vast stretches of ocean that surround Alaska on three sides allow meteorological and topographical signatures to be detected by the DNB with minimal interference from anthropogenic sources of light. Examples of how the DNB contributes to the NWS forecast process in Alaska will be presented and discussed.

  14. Proteomic Identification of Carbonylated Proteins in 1,3-Dinitrobenzene Neurotoxicity

    PubMed Central

    Steiner, Stephen R.; Philbert, Martin A.

    2011-01-01

    This study demonstrated that 1,3-dinitrobenzene-induced (1,3-DNB) oxidative stress led to the oxidative carbonylation of specific protein targets in DI TNC1 cells. 1,3-DNB-induced mitochondrial dysfunction, as indicated by loss of tetramethyl rhodamine methyl ester (TMRM) fluorescence, was initially observed at 5 h and coincided with peak reactive oxygen species (ROS) production. ROS production was inhibited in cells pre-treated with the mitochondrial permeability transition (MPT) inhibitor, bongkrekic acid (BkA). Pre-incubation with the antioxidant deferoxamine inhibited loss of TMRM fluorescence until 24 h after initial exposure to 1,3-DNB. Two-dimensional polyacrylamide gel electrophoresis (2D PAGE) and subsequent Oxyblot analysis were used to determine if 1,3-DNB exposure led to the formation of protein carbonyls. Exposing DI TNC1 cells to 1,3-DNB led to marked protein carbonylation 45 min following initial exposure. Pre-treatment with deferoxamine or Trolox reduced the intensity of protein carbonylation in DI TNC1 cells exposed to 1 mM 1,3-DNB. Tandem MS/MS performed on protein samples isolated from 1,3-DNB-treated cells revealed that specific proteins within the mitochondria, endoplasmic reticulum (ER), and cytosol are targets of protein carbonylation. The results presented in this study are the first to suggest that the molecular mechanism of 1,3-DNB neurotoxicity may occur through selective carbonylation of protein targets found within certain intracellular compartments of susceptible cells. PMID:21402099

  15. The para isomer of dinitrobenzene disrupts redox homeostasis in liver and kidney of male Wistar rats.

    PubMed

    Sangodele, Janet Olayemi; Olaleye, Mary Tolulope; Monsees, Thomas K; Akinmoladun, Afolabi Clement

    2017-07-01

    para-Dinitrobenzene (p-DNB) is one of the isomers of dinitrobenzene that have been detected as environmental toxicants. Skin irritation and organ toxicities are likely for industrial workers exposed to p-DNB. This study evaluated the effect of sub-chronic exposure of rats to p-DNB on cellular redox balance and hepatic and renal integrity. Forty-eight male Wistar rats weighing 160-180 g were administered 50, 75, 1000 and 2000 mg/kg b.wt (body weight) of p-DNB or an equivalent volume of vehicle (control), orally and topically, for 14 days. After the treatment period, the activities of kidney and liver catalase (CAT), alkaline phosphatase (ALP) and superoxide dismutase (SOD), as well as the extent of renal and hepatic lipid peroxidation (LPO), were determined. Serum ALP activity and plasma urea concentration were also evaluated. Compared with control animals, p-DNB-administered rats showed decreases in body weight and relative kidney and liver weights, as well as increased renal and hepatic hydrogen peroxide and lipid peroxidation levels, accompanied by decreased superoxide dismutase and catalase activities. In addition, p-DNB caused a significant increase in plasma urea concentration and in serum, liver and kidney ALP activities relative to control, and produced periportal infiltration, severe macrovesicular steatosis and hepatic necrosis in the liver. Our findings show that sub-chronic oral and sub-dermal administration of p-DNB may produce hepato-nephrotoxicity through oxidative stress.

  16. New Departure from Nucleate Boiling model relying on first principle energy balance at the boiling surface

    NASA Astrophysics Data System (ADS)

    Demarly, Etienne; Baglietto, Emilio

    2017-11-01

    Prediction of Departure from Nucleate Boiling (DNB) has been a longstanding challenge in the design of heat exchangers such as boilers or nuclear reactors. Many mechanistic models have been postulated over more than 50 years to explain this phenomenon, but none accurately predicts the conditions that trigger the sudden change of heat transfer mode. This work aims to demonstrate the pertinence of a new approach for detecting DNB by leveraging recent experimental insights. The model proposed here departs from all previous models by making DNB inception arise from an energy balance instability at the heating surface rather than a hydrodynamic instability of the bubbly layer above the surface (Zuber, 1959). The main idea is to modulate the amount of heat flux exchanged via the nucleate boiling mechanism by the wetted area fraction on the surface, thus allowing a completely automatic trigger of DNB that does not require any parameter prescription. This approach is implemented as a surrogate model in MATLAB in order to validate the principles of the model in a simple and controlled geometry. Good agreement is found with experimental data from the MIT flow boiling experiments at various flow regimes. Consortium for Advanced Simulation of Light Water Reactors (CASL).
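    The mechanism described, nucleate-boiling heat flux modulated by a shrinking wetted-area fraction, can be sketched as a toy boiling curve: the deliverable flux rises with wall superheat, peaks, and falls, and DNB inception is the peak beyond which the surface energy balance has no steady nucleate-boiling solution. Both closures and all constants below are illustrative assumptions, not the CASL model's actual correlations.

```python
import numpy as np

def wall_heat_flux(dT, C_nb=1.4e4, dT_ref=25.0):
    """Toy boiling curve: a quadratic nucleate-boiling flux (W/m^2)
    modulated by a wetted-area fraction that decays as wall superheat
    dT (K) grows. C_nb and dT_ref are illustrative constants."""
    q_nb = C_nb * dT**2                  # nucleate-boiling contribution
    f_wet = np.exp(-(dT / dT_ref)**2)    # wetted fraction -> 0 at high superheat
    return f_wet * q_nb

def critical_heat_flux(dT_max=100.0, n=10001):
    """DNB inception in this sketch: the maximum deliverable wall flux
    and the superheat at which it occurs."""
    dT = np.linspace(0.0, dT_max, n)
    q = wall_heat_flux(dT)
    i = np.argmax(q)
    return q[i], dT[i]
```

    With these placeholder constants the peak lands near 3.2 MW/m² at 25 K superheat, the right order of magnitude for PWR-type critical heat flux, and the trigger emerges from the balance itself rather than from a prescribed CHF correlation.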

  17. Energy metabolism and biotransformation as endpoints to pre-screen hepatotoxicity using a liver spheroid model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Jinsheng; Purcell, Wendy M.

    2006-10-15

    The current study investigated liver spheroid culture as an in vitro model to evaluate endpoints relevant to the status of energy metabolism and biotransformation after exposure to test toxicants. Mature rat liver spheroids were exposed to diclofenac, galactosamine, isoniazid, paracetamol, m-dinitrobenzene (m-DNB) and 3-nitroaniline (3-NA) for 24 h. Pyruvate uptake, galactose biotransformation, lactate release and glucose secretion were evaluated after exposure. The results showed that pyruvate uptake and lactate release by mature liver spheroids in culture were maintained at a relatively stable level. These endpoints, together with glucose secretion and galactose biotransformation, were related to and could reflect the status of energy metabolism and biotransformation in hepatocytes. After exposure, all of the test agents significantly reduced glucose secretion, which was shown to be the most sensitive endpoint of those evaluated. Diclofenac, isoniazid, paracetamol and galactosamine reduced lactate release (P < 0.01), but m-DNB increased lactate release (P < 0.01). Diclofenac, isoniazid and paracetamol also reduced pyruvate uptake (P < 0.01), while galactosamine had little discernible effect. Diclofenac, galactosamine, paracetamol and m-DNB also reduced galactose biotransformation (P < 0.01); by contrast, isoniazid did not. The metabolite of m-DNB, 3-NA, which served as a negative control, did not cause significant changes in lactate release, pyruvate uptake or galactose biotransformation. It is concluded that pyruvate uptake, galactose biotransformation, lactate release and glucose secretion can be used as endpoints for evaluating the status of energy metabolism and biotransformation after exposure to test agents, using the liver spheroid model to pre-screen hepatotoxicity.

  18. Indian Test Facility (INTF) and its updates

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, M.; Chakraborty, A.; Rotti, C.; Joshi, J.; Patel, H.; Yadav, A.; Shah, S.; Tyagi, H.; Parmar, D.; Sudhir, Dass; Gahlaut, A.; Bansal, G.; Soni, J.; Pandya, K.; Pandey, R.; Yadav, R.; Nagaraju, M. V.; Mahesh, V.; Pillai, S.; Sharma, D.; Singh, D.; Bhuyan, M.; Mistry, H.; Parmar, K.; Patel, M.; Patel, K.; Prajapati, B.; Shishangiya, H.; Vishnudev, M.; Bhagora, J.

    2017-04-01

    To characterize the ITER Diagnostic Neutral Beam (DNB) system at full specification, and to support IPR's development program for a negative-ion-based neutral beam injector (NBI) system, an R&D facility named INTF is in the commissioning phase. Implementing a successful DNB at ITER requires that several challenges be overcome. These issues are related to negative ion production, its neutralization, and the transport of the resulting neutral beam over a path length of ∼20.67 m to reach the ITER plasma. The DNB is a procurement package of India, delivered as an in-kind contribution to ITER. Since ITER is considered a nuclear facility, only the minimum diagnostic systems linked to safe operation of the machine are planned to be incorporated in it, so it is difficult to characterize the DNB after onsite commissioning. Therefore, the delivery of the DNB to ITER will benefit if the DNB is operated and characterized prior to onsite commissioning. INTF is envisaged to become operational with large-size ion source activities on a similar timeline to the SPIDER (RFX, Padova) facility. This paper describes some development updates for the facility.

  19. Organic/inorganic-doped aromatic derivative crystals: Growth and properties

    NASA Astrophysics Data System (ADS)

    Stanculescu, F.; Ionita, I.; Stanculescu, A.

    2014-09-01

    Results of a comparative study on the growth from melt, by the Bridgman-Stockbarger method, of meta-dinitrobenzene (m-DNB) and benzil (Bz) crystals in the same experimental set-up and under the same experimental conditions are presented. The incorporation of an inorganic (iodine) dopant in m-DNB was analyzed under the given experimental conditions from the point of view of solid-liquid interface stability. The limits for stable growth and the conditions that favor the generation of morphological instability are emphasized. These limits for m-DNB are compatible with those previously determined for Bz, and therefore, even for a high concentration gradient at the growth interface, it is possible to grow m-DNB and Bz crystals under the same experimental conditions, characterized by high ΔT and v. The optical properties were investigated in relation to the dopant incorporation in the crystal under the mentioned experimental conditions. Effects of the dopant (m-DNB/iodine in Bz and iodine in m-DNB) on the optical band gap and non-linear optical properties of the crystals are discussed.

  20. Recent advances in using VIIRS DNB for surface PM2.5 and fire monitoring

    NASA Astrophysics Data System (ADS)

    Wang, J.; Polivka, T. N.; Hyer, E. J.; Xu, X.; Ichoku, I.

    2017-12-01

    The launch of the Suomi National Polar-orbiting Partnership (S-NPP) satellite on 28 October 2011 has opened up unprecedented capabilities with the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument. With a heritage extending back over 40 years to the Defense Meteorological Satellite Program (DMSP) Sensor Aerospace Vehicle Electronics Package (SAP), first launched in 1970, the Advanced Very High Resolution Radiometer (AVHRR, first launched in 1978), and the Moderate Resolution Imaging Spectroradiometer (MODIS, first launched in 1999), VIIRS boasts improved spatial resolution and a higher signal-to-noise ratio than these legacy sensors. In particular, at a spatial resolution of 750 m, the VIIRS day-night band (DNB) can monitor visible light reflected by the Earth and atmosphere in all conditions, from strong reflection of sunlight by clouds to weak reflection of moonlight by desert at night. While several studies have looked into the potential use of the DNB for mapping city lights and for retrieving aerosol optical depth at night, there is still a lot to learn about the DNB. Here, we present our recent work using the DNB together with other VIIRS data to improve detection of smaller and cooler fires, to characterize the smoldering versus flaming phases of fires, and to derive surface PM2.5 at night. A quantitative understanding of visible-light transfer from the surface to the top of the atmosphere will be presented, along with a study of fire radiation from the visible to the infrared spectrum. Various case studies will be shown in which 30% more fire pixels were detected compared to the traditional infrared-only method. Cross validation of the DNB-based regression model shows that the estimated surface PM2.5 concentration has nearly no bias and a linear correlation coefficient (R) of 0.67 with respect to the corresponding hourly observed surface PM2.5 concentration.
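The gain from adding the DNB to an infrared-only fire test can be sketched numerically. The thresholds, radiance units, and planted "fires" below are illustrative assumptions, not the detection algorithm used in this work:

```python
import numpy as np

# Hypothetical sketch: augment an IR-only fire test with a DNB brightness
# test so that smaller/cooler (DNB-visible) fires are also flagged.
rng = np.random.default_rng(0)
n = 1000
bt4um = rng.normal(290.0, 5.0, n)            # 4-um brightness temp [K]
dnb_rad = np.exp(rng.normal(-20.0, 1.0, n))  # DNB radiance [W cm^-2 sr^-1]

bt4um[:20] += 40.0                           # strong fires: hot in the IR
bt4um[20:50] += 10.0                         # smaller, cooler fires...
dnb_rad[20:50] *= 500.0                      # ...but visibly bright at night

ir_only = bt4um > 320.0                      # traditional IR-only test
combined = ir_only | ((dnb_rad > 1e-7) & (bt4um > 295.0))
extra = int(combined.sum() - ir_only.sum())
print(f"IR-only: {int(ir_only.sum())}, DNB-augmented: {int(combined.sum())}")
```

With these synthetic numbers the combined test recovers most of the cooler fires that the IR-only threshold misses, which is the qualitative effect behind the reported 30% increase in detected fire pixels.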

  1. Seeing the Night in a New Light—VIIRS Day/Night Band Capabilities and Prospects for a Joint Suomi/JPSS-1 Observing System

    NASA Astrophysics Data System (ADS)

    Solbrig, J. E.; Miller, S. D.; Straka, W. C.; Seaman, C.; Combs, C.; Heidinger, A.; Walther, A.

    2017-12-01

    The Day/Night Band (DNB), a special sensor on board the Visible/Infrared Imaging Radiometer Suite (VIIRS) devoted to low-light visible imaging, has represented a kind of `disruptive technology' in terms of how we observe the nocturnal environment. Since its debut on the Suomi National Polar-orbiting Partnership (NPP), launched in Fall 2011, the DNB has solidified its claim to fame as the most novel addition to the National Oceanic and Atmospheric Administration's future polar-orbiting program, represented by the Joint Polar Satellite System (JPSS), whose first member (JPSS-1) is scheduled to launch in Fall 2017, joining Suomi in its 1330 local time ascending node orbit. JPSS-1 will be displaced by ½ orbit ahead of Suomi, providing roughly 50 min between overpasses. Importantly, JPSS-1 will provide a second DNB observation, enabling the first time-resolved low-light visible measurements at low and mid-latitudes from this new sensor technology. The DNB provides unprecedented capability to leverage light emissions from natural and artificial nocturnal sources, ranging from moonlight and city lights to ships, fires, lightning flashes, and even atmospheric nightglow. The calibrated DNB observations enable the use of moonlight in a way similar to daytime visible imagery, allowing for quantitative description of cloud and aerosol optical properties. This presentation updates the community on DNB-related research initiatives. Statistics based on a multi-year collection of data at Salar de Uyuni, Bolivia, and White Sands, New Mexico, lend confidence to the performance of a lunar irradiance model used to enable nighttime optical property retrievals. Selected examples of notable events, including the devastating Portugal wildfires, the emergence of the massive rift in the Larsen C ice shelf, and examples from the growing compilation of atmospheric gravity waves in nightglow, will also be highlighted.

  2. NPP-VIIRS DNB-based reallocating subpopulations to mercury in Urumqi city cluster, central Asia

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Feng, X. B.; Dai, W.; Li, P.; Ju, C. Y.; Bao, Z. D.; Han, Y. L.

    2017-02-01

    Accurate and up-to-date assignment of population-related environmental matters onto fine grid cells in the oasis cities of arid areas remains challenging. We present an approach based on the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) for reallocating population onto a regular, finer surface. The population potentially exposed to mercury was reallocated onto a 0.1 × 0.1 km reference grid in the Urumqi city cluster of China's Xinjiang, central Asia. Monte Carlo modelling indicated that the range of 0.5 to 2.4 million people was reliable. The study highlights that the NPP-VIIRS DNB-based multi-layered, dasymetric, spatial method enhances our ability to remotely estimate the distribution and size of a target population at the street-level scale and has the potential to transform control strategies for epidemiology, public policy and other socioeconomic fields.
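The core dasymetric step can be sketched in a few lines: distribute a census total over grid cells in proportion to DNB radiance, so brighter (more built-up) cells receive more people. The weighting scheme, masking threshold, and all values below are illustrative assumptions, not the study's multi-layered method:

```python
import numpy as np

# Minimal dasymetric-reallocation sketch with synthetic data:
# population is spread over a fine grid using DNB radiance as the weight.
rng = np.random.default_rng(42)
district_pop = 1_200_000.0                   # census total for the district
dnb = rng.gamma(shape=0.5, scale=2.0, size=(100, 100))  # radiance grid
dnb[dnb < 0.3] = 0.0                         # assumed dim-cell mask (uninhabited)

weights = dnb / dnb.sum()                    # radiance-proportional weights
pop_grid = district_pop * weights            # people per 0.1 x 0.1 km cell

print(f"reallocated total: {pop_grid.sum():.0f}")
```

Because the weights sum to one, the reallocation preserves the district total exactly while refining its spatial distribution.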

  3. Mapping nighttime PM2.5 from VIIRS DNB using a linear mixed-effect model

    NASA Astrophysics Data System (ADS)

    Fu, D.; Xia, X.; Duan, M.; Zhang, X.; Li, X.; Wang, J.; Liu, J.

    2018-04-01

    Estimation of particulate matter with aerodynamic diameter less than 2.5 μm (PM2.5) from daytime satellite aerosol products is widely reported in the literature; however, remote sensing of nighttime surface PM2.5 from space is very limited. PM2.5 shows a distinct diurnal cycle, and PM2.5 concentration at 1:00 local standard time (LST) has a linear correlation coefficient (R) of 0.80 with daily-mean PM2.5. Therefore, estimation of nighttime PM2.5 is required for an improved understanding of the temporal variation of PM2.5 and its effects on air quality. Using data from the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) and hourly PM2.5 data at 35 stations in Beijing, a mixed-effect model is developed here to estimate nighttime PM2.5 from nighttime light radiance measurements, based on the assumption that the DNB-PM2.5 relationship is constant spatially but varies temporally. Cross-validation showed that the model developed using all stations predicts daily PM2.5 with mean coefficients of determination (R2) of 0.87 ± 0.12, 0.83 ± 0.10, 0.87 ± 0.09, and 0.83 ± 0.10 in spring, summer, autumn and winter, respectively. Further analysis showed that the best model performance was achieved at urban stations, with an average cross-validation R2 of 0.92. At rural stations, the DNB light signal is weak and was likely smeared by lunar illuminance, which resulted in relatively poor estimation of PM2.5. The fixed and random parameters of the mixed-effect model at urban stations differed from those at suburban stations, which indicates that the assumption of the mixed-effect model should be carefully evaluated when it is used at a regional scale.
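The central assumption, a DNB-PM2.5 relationship that is spatially constant but varies by night, can be illustrated with per-night least-squares fits on synthetic data. The actual study fits a formal linear mixed-effects model; all numbers below are invented:

```python
import numpy as np

# Sketch: one intercept/slope per night, shared by all 35 stations.
rng = np.random.default_rng(1)
nights, stations = 30, 35
true_a = rng.normal(20.0, 5.0, nights)        # night-specific intercepts
true_b = rng.normal(3.0, 0.5, nights)         # night-specific slopes
radiance = rng.uniform(1.0, 30.0, (nights, stations))
pm25 = (true_a[:, None] + true_b[:, None] * radiance
        + rng.normal(0.0, 2.0, (nights, stations)))

# Fit each night separately (the "varies temporally" part of the assumption)
coeffs = np.array([np.polyfit(radiance[t], pm25[t], 1) for t in range(nights)])
pred = coeffs[:, 1][:, None] + coeffs[:, 0][:, None] * radiance

ss_res = ((pm25 - pred) ** 2).sum()
ss_tot = ((pm25 - pm25.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                    # pooled R^2 of per-night fits
print(f"pooled R^2 of per-night fits: {r2:.2f}")
```

When the per-night relationship really holds, this simple scheme already explains most of the variance, which is the intuition behind the high cross-validation R2 reported above.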

  4. On-Orbit Calibration and Performance of S-NPP VIIRS DNB

    NASA Technical Reports Server (NTRS)

    Chen, H.; Sun, C.; Chen, X.; Chiang, K.; Xiong, X.

    2016-01-01

    The S-NPP VIIRS instrument has successfully operated since its launch in October 2011. The VIIRS Day-Night Band (DNB) is a panchromatic channel covering wavelengths from 0.5 to 0.9 μm that is capable of observing Earth scenes during both daytime and nighttime orbits at a spatial resolution of 750 m. To cover the large dynamic range, the DNB operates at low, mid, or high gain stages, and it uses an onboard solar diffuser (SD) for its low gain stage calibration. The SD observations also provide a means to compute the gain ratios of the low-to-mid and mid-to-high gain stages. This paper describes the DNB on-orbit calibration methodologies used by the VIIRS Characterization Support Team (VCST) in supporting the NASA Earth science community with consistent VIIRS sensor data records (SDRs) made available by the Land Science Investigator-led Processing Systems (SIPS). It provides an assessment and update of DNB on-orbit performance, including the SD degradation in the DNB spectral range, detector gain and gain ratio trending, and stray light contamination and its correction. Also presented in this paper are performance validations based on Earth scenes and lunar observations.
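A simplified sketch of the gain-ratio idea: when two gain stages view the same unsaturated solar-diffuser radiance, the ratio of their linearized responses estimates the stage-to-stage gain ratio used to transfer the SD-based LGS calibration upward. The gain values and noise levels below are illustrative, not VIIRS calibration constants:

```python
import numpy as np

# Two stages observe the same SD scene radiances; recover their gain ratio.
rng = np.random.default_rng(7)
radiance = np.linspace(5.0, 50.0, 40)             # shared SD scene radiance
g_lgs, g_mgs = 1.0, 119.0                         # illustrative stage gains
lgs = g_lgs * radiance + rng.normal(0, 0.05, 40)  # LGS response (linearized)
mgs = g_mgs * radiance + rng.normal(0, 5.0, 40)   # MGS response (linearized)

# Least-squares ratio of the paired responses
ratio_ml = np.dot(lgs, mgs) / np.dot(lgs, lgs)    # mid-to-low gain ratio
print(f"estimated MGS/LGS gain ratio: {ratio_ml:.1f}")
```

The same construction applied to mid- and high-gain pairs yields the mid-to-high ratio, and trending these ratios over time is part of the on-orbit performance assessment described above.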

  5. Species differences in susceptibility to 1,3-dinitrobenzene-induced testicular toxicity and methemoglobinemia.

    PubMed

    Obasaju, M F; Katz, D F; Miller, M G

    1991-02-01

    The testicular toxicity and methemoglobinemia induced by 1,3-dinitrobenzene (1,3-DNB) were compared in two species, the Sprague-Dawley rat and the golden Syrian hamster. A marked difference in susceptibility to both endpoints of toxicity was observed. The hamster showed no testicular lesions at dose levels up to 50 mg/kg, whereas, as previously reported by others, damage to rat testicular tubules in the later stages of spermatogenesis was readily apparent at a 25 mg/kg dose level. Similarly, administration of 1,3-DNB induced substantially less methemoglobinemia in the hamster than in the rat. For example, at the 25 mg/kg dose level, peak methemoglobin levels in the hamster were 15%, compared with 80% in the rat. Mortality in the rat also occurred at lower doses than in the hamster (50 vs 100 mg/kg, respectively). In in vitro studies, the capacity of 1,3-DNB and 1,3-DNB metabolites (nitroaniline, nitroacetanilide, aminoacetanilide, diacetamidobenzene) to induce methemoglobinemia was examined in suspensions of red blood cells obtained from both species. Only 1,3-DNB caused the formation of methemoglobin, and rat red blood cells were twice as sensitive as hamster red blood cells. The species difference in susceptibility to both methemoglobinemia and testicular toxicity could indicate differences in 1,3-DNB clearance and/or formation of toxic metabolites. Additional metabolic work is under way. This study demonstrates that the hamster is more resistant than the rat to the testicular lesion and methemoglobinemia induced by 1,3-DNB.

  6. Status and Prospects for Low-Light Visible Sensing from the VIIRS Day/Night Band on Suomi NPP and JPSS-1

    NASA Astrophysics Data System (ADS)

    Miller, S. D.; Seaman, C.; Combs, C.; Solbrig, J. E.; Straka, W. C.; Walther, A.; NOH, Y. J.; Heidinger, A.

    2016-12-01

    Since its launch in October 2011, the Visible/Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) on the Suomi National Polar-orbiting Partnership (S-NPP) satellite has delivered above and beyond expectations, revolutionizing our ability to observe and characterize the nocturnal environment. Taking advantage of natural and artificial (man-made) light sources, the DNB offers unique information content ranging from the surface to the upper atmosphere. Notable developments include the quantitative use of moonlight for cloud property retrievals and the discovery of nightglow sensitivity revealing the signatures of gravity waves. The DNB represents a remarkable advance over the heritage low-light visible sensing of the Operational Linescan System (OLS), providing spatial and radiometric resolution unprecedented from a space platform. Soon, we will have yet another dimension of resolution to consider: temporal. In early 2017, NOAA's Joint Polar Satellite System-1 (J1) will join S-NPP in the early-afternoon (1330 local time, ascending node) sun-synchronous orbital plane, displaced ½ orbit (~50 min) from S-NPP. Having two DNB sensors will offer an expanded ability (at lower latitudes) to examine the temporal properties of various light sources, track the motion of ships, low-level clouds and dust storms, fire line evolution, cloud optical properties, and even the dynamics of mesospheric gravity wave structures such as thunderstorm-induced concentric gravity waves and mesospheric bores. This presentation will provide an update on the science and application-oriented research involving the S-NPP/DNB, examples of key capabilities, first results of lunar irradiance model validation, and a look ahead toward the new research opportunities to be afforded by tandem S-NPP/J1 observations. The AGU meeting is well positioned for anticipating these capabilities "on the eve" of the J1 launch.

  7. Verification and Enhancement of VIIRS Day-Night Band (DNB) Power Outage Detection Product

    NASA Technical Reports Server (NTRS)

    Burke, Angela; Schultz, Lori A.; Omitaomu, Olufemi; Molthan, Andrew L.; Cole, Tony; Griffin, Robert

    2017-01-01

    This case study of Hurricane Matthew (October 2016) uses the NASA Short-Term Prediction Research and Transition (SPoRT) Center DNB power outage product (based on the GSFC VIIRS DNB preliminary Black Marble product; Roman et al., 2017) and 2013 LandScan Global population data to look for correlations between the post-event %-of-normal radiance and utility company-reported outage numbers (obtained from EAGLE-1).
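A minimal sketch of a %-of-normal radiance outage metric, assuming a pre-event baseline composite and a 50% threshold (both assumptions; the SPoRT product's actual processing differs):

```python
import numpy as np

# Compare a post-event DNB composite against a pre-event "normal" baseline
# per grid cell; cells far below 100% are candidate power outages.
rng = np.random.default_rng(3)
baseline = rng.uniform(10.0, 60.0, (50, 50))      # pre-event mean radiance
post = baseline * rng.uniform(0.9, 1.1, (50, 50)) # normal night-to-night jitter
post[10:25, 10:25] *= 0.2                         # simulated outage region

pct_of_normal = 100.0 * post / baseline
outage_mask = pct_of_normal < 50.0                # assumed outage threshold

print(f"flagged cells: {int(outage_mask.sum())}")  # → flagged cells: 225
```

Summing flagged cells against a population grid (here, LandScan) is what allows the radiance deficit to be compared with utility-reported outage counts.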

  8. Trauma Associated Sleep Disorder: A Proposed Parasomnia Encompassing Disruptive Nocturnal Behaviors, Nightmares, and REM without Atonia in Trauma Survivors

    PubMed Central

    Mysliwiec, Vincent; O'Reilly, Brian; Polchinski, Jason; Kwon, Herbert P.; Germain, Anne; Roth, Bernard J.

    2014-01-01

    Study Objectives: To characterize the clinical, polysomnographic and treatment responses of patients with disruptive nocturnal behaviors (DNB) and nightmares following traumatic experiences. Methods: A case series of four young male, active duty U.S. Army Soldiers who presented with DNB and trauma-related nightmares. Patients underwent a clinical evaluation in a sleep medicine clinic and an attended overnight polysomnogram (PSG), and received treatment. We report pertinent clinical and PSG findings from our patients and review the prior literature on sleep disturbances in trauma survivors. Results: DNB ranged from vocalizations and somnambulism to combative behaviors that injured bed partners. Nightmares were replays of the patient's traumatic experiences. All patients had REM without atonia during polysomnography; one patient had DNB and a nightmare captured during REM sleep. Prazosin improved DNB and nightmares in all patients. Conclusions: We propose Trauma associated Sleep Disorder (TSD) as a unique sleep disorder encompassing the clinical features, PSG findings, and treatment responses of patients with DNB, nightmares, and REM without atonia after trauma. Citation: Mysliwiec V, O'Reilly B, Polchinski J, Kwon HP, Germain A, Roth BJ. Trauma associated sleep disorder: a proposed parasomnia encompassing disruptive nocturnal behaviors, nightmares, and REM without atonia in trauma survivors. J Clin Sleep Med 2014;10(10):1143-1148. PMID:25317096

  9. VIIRS S-NPP Nighttime DNB Spectral Response Function (SRF): The At-launch Characteristics and How the SRF Changes with Time Due to Tungsten Oxides Chromaticity

    NASA Astrophysics Data System (ADS)

    Guenther, B.; Lei, N.; Moeller, C.

    2015-12-01

    The VIIRS Day-Night Band (DNB) is designed with three gain stages: low (LGS), mid (MGS) and high (HGS), to span Earth-scene signal levels from bright daytime to moonlit night. The published at-launch DNB relative spectral response (RSR) is based upon the LGS spectral measurements, since the LGS was well measured in the pre-launch test program and can be calibrated by the on-board solar diffuser (the MGS and HGS saturate on the SD). The LGS RSR, however, does not fully represent the spectral characteristics of nighttime DNB data from the MGS and HGS. Nighttime data users who apply the detailed DNB spectral characteristics in their analyses should use a modulated RSR appropriate to the MGS and HGS observations. The RSR modulation is due to spectral darkening of the four mirrors of the S-NPP VIIRS telescope, which were contaminated with tungsten oxides during fabrication. These tungsten oxides are 'in family' with the transition lenses on eyeglasses that darken when exposed to sunlight, but they do not recover when VIIRS goes into darkness because VIIRS in space is in a vacuum (transition lenses require atmospheric oxygen to recover). The ongoing mirror darkening has caused a time-dependent shift of the DNB RSR toward blue wavelengths. This presentation will provide access to the correct RSR to use for S-NPP DNB nighttime data over the mission time on-orbit. The changes in characteristics will be described in engineering terms to facilitate a clear user understanding of how to handle the RSR for nighttime observations over the mission lifetime.
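The blueward centroid shift can be reproduced with a toy calculation, assuming a Gaussian at-launch RSR and a throughput loss that grows linearly toward the red end (both shapes are illustrative, not the measured VIIRS curves):

```python
import numpy as np

# Multiply an at-launch RSR by a red-weighted throughput loss and compare
# the response-weighted centroid wavelengths before and after.
wl = np.linspace(0.5, 0.9, 401)                  # wavelength [um]
rsr0 = np.exp(-0.5 * ((wl - 0.7) / 0.1) ** 2)    # assumed at-launch RSR
loss = 1.0 - 0.4 * (wl - 0.5) / 0.4              # stronger loss at red end
rsr_t = rsr0 * loss                              # modulated (on-orbit) RSR

centroid0 = (wl * rsr0).sum() / rsr0.sum()       # response-weighted centroid
centroid_t = (wl * rsr_t).sum() / rsr_t.sum()
shift_nm = 1000.0 * (centroid_t - centroid0)
print(f"centroid shift: {shift_nm:+.1f} nm")
```

Because the darkening removes relatively more response at long wavelengths, the effective band centroid moves toward the blue, which is the time-dependent modulation nighttime users must account for.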

  10. Anti-D in a mother, hemizygous for the variant RHD*DNB gene, associated with hemolytic disease of the fetus and newborn.

    PubMed

    Quantock, Kelli M; Lopez, Genghis H; Hyland, Catherine A; Liew, Yew-Wah; Flower, Robert L; Niemann, Frans J; Joyce, Arthur

    2017-08-01

    Individuals with the partial D phenotype, when exposed to D+ red blood cells (RBCs) carrying the epitopes they lack, may develop anti-D specific for the missing epitopes. DNB is the most common partial D in Caucasians, and the clinical significance of anti-D in these individuals is unknown. This article describes the serologic and genotyping results and clinical manifestations in two group D+ babies of a mother presenting as group O, D+ with alloanti-D. The mother was hemizygous for the RHD*DNB gene, and sequencing confirmed a single-nucleotide change at c.1063G>A. One baby (group A, D+) displayed bilirubinemia at birth with a normal hemoglobin level. Anti-A and anti-D were eluted from the RBCs. For the next pregnancy, the anti-D titer increased from 32 to 256. On delivery, the baby typed as group O, and anti-D was eluted from the RBCs. This baby at birth exhibited anemia, reticulocytosis, and hyperbilirubinemia requiring intensive phototherapy treatment from Day 0 to Day 9 after birth and was discharged on Day 13. Intravenous immunoglobulin was also administered. Both babies were heterozygous for RHD and RHD*DNB. The anti-D produced by this woman with partial D DNB resulted in a case of hemolytic disease of the fetus and newborn (HDFN) requiring intensive treatment in the perinatal period. Anti-D formed by women with the partial D DNB phenotype has the potential to cause HDFN where the fetus is D+. Women carrying RHD*DNB should be offered appropriate prophylactic anti-D and be transfused with D- RBCs if not already alloimmunized. © 2017 AABB.

  11. Interactions among K+-Ca2+ exchange, sorption of m-dinitrobenzene, and smectite quasicrystal dynamics.

    PubMed

    Chatterjee, Ritushree; Laird, David A; Thompson, Michael L

    2008-12-15

    The fate of organic contaminants in soils and sediments is influenced by sorption of the compounds to surfaces of soil materials. We investigated the interaction among sorption of an organic compound, cation exchange reactions, and both the size and swelling of smectite quasicrystals. Two reference smectites that vary in the location and amount of layer charge, SPV (a Wyoming bentonite) and SAz-1, were initially Ca- and K-saturated and then equilibrated with mixed 0.01 M KCl and 0.005 M CaCl2 salt solutions, both with and without the presence of 200 mg/L m-dinitrobenzene (m-DNB). In general, sorption of m-DNB increased with the amount of K+ in the system for both clays, and the SPV sorbed more m-DNB than the SAz-1. Sorption of m-DNB increased the preference of Ca-SPV for K+ relative to Ca2+ but had little effect on K+-Ca2+ selectivity for K-SPV. Selectivity for K+ relative to Ca2+ was slightly higher for both K-SAz-1 and Ca-SAz-1 in the presence of m-DNB than in its absence. Distinct hysteresis loops were observed for the K+-Ca2+ cation exchange reactions for both clays, and the legacy of having been initially Ca- or K-saturated influenced sorption of m-DNB by SPV but had little effect for SAz-1. Suspension X-ray diffraction was used to measure changes in d-spacing and the relative thickness of smectite quasicrystals during the cation exchange and m-DNB sorption reactions. The results suggest that interactions among cation exchange and organic sorption reactions are controlled by an inherently hysteretic, complex feedback process that is regulated by changes in the size and extent of swelling of smectite quasicrystals.

  12. Experimental validation of prototype high voltage bushing

    NASA Astrophysics Data System (ADS)

    Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.

    2017-08-01

    The Prototype High Voltage Bushing (PHVB) is a scaled-down configuration of the DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to verify operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, a ceramic ring and a fiber-reinforced polymer (FRP) ring, form a double-layered vacuum boundary providing 50 kV isolation between the grounded and high-voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and to calculate the stresses precisely, a quantitative analysis was performed using scanning electron microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the finite element (FE) analysis. FE analysis of the PHVB was performed to determine the electrical stresses on different areas of the PHVB, which are kept similar to those of the DNB HV Bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. An initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand was performed for one hour. A withstand test at 60 kV DC (20% above rated voltage) was also performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB is presented together with the experimental validation data.

  13. VIIRS day-night band gain and offset determination and performance

    NASA Astrophysics Data System (ADS)

    Geis, J.; Florio, C.; Moyer, D.; Rausch, K.; De Luccia, F. J.

    2012-09-01

    On October 28th, 2011, the Visible-Infrared Imaging Radiometer Suite (VIIRS) was launched on board the Suomi National Polar-orbiting Partnership (NPP) spacecraft. The instrument has 22 spectral bands: 14 reflective solar bands (RSB), 7 thermal emissive bands (TEB), and a Day Night Band (DNB). The DNB is a panchromatic, solar-reflective band that provides visible through near-infrared (IR) imagery of earth scenes with radiances spanning 7 orders of magnitude. In order to function over this large dynamic range, the DNB employs a focal plane array (FPA) consisting of three gain stages: the low gain stage (LGS), the medium gain stage (MGS), and the high gain stage (HGS). The final product generated from a DNB raw data record (RDR) is a radiance sensor data record (SDR). Generation of the SDR requires accurate knowledge of the dark offsets and gain coefficients for each DNB stage. These are measured on-orbit and stored in lookup tables (LUTs) that are used during ground processing. This paper will discuss the details of the offset and gain measurement, the data analysis methodologies, the operational LUT update process, and results to date, including a first look at trending of these parameters over the early life of the instrument.
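The three-stage scheme can be sketched as a stage-selection rule: report the most sensitive stage that has not saturated, then invert its offset and gain to recover radiance. The gains, offset, and 14-bit full scale below are assumptions for illustration, not operational values:

```python
import numpy as np  # kept for consistency with the other sketches

# Illustrative per-stage calibration constants (NOT VIIRS LUT values)
gains = {"HGS": 47_000.0, "MGS": 119.0, "LGS": 1.0}
offset, full_scale = 200.0, 16_383.0         # dark offset, 14-bit counts

def dnb_radiance(scene_radiance):
    """Return (stage, retrieved radiance), preferring the most sensitive
    unsaturated stage, as a sketch of how ~7 decades are covered."""
    for stage in ("HGS", "MGS", "LGS"):      # most to least sensitive
        counts = offset + gains[stage] * scene_radiance
        if counts < full_scale:              # unsaturated -> usable
            return stage, (counts - offset) / gains[stage]
    return "LGS", (full_scale - offset) / gains["LGS"]  # clipped

for L in (1e-4, 5.0, 1000.0):
    stage, rad = dnb_radiance(L)
    print(stage, f"{rad:.4g}")
```

Accurate dark offsets and gains per stage are exactly what make the counts-to-radiance inversion above consistent across stage handoffs, which is why they are trended on-orbit and stored in LUTs.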

  14. Improvements to Lunar BRDF-Corrected Nighttime Satellite Imagery: Uses and Applications

    NASA Technical Reports Server (NTRS)

    Cole, Tony A.; Molthan, Andrew L.; Schultz, Lori A.; Roman, Miguel O.; Wanik, David W.

    2016-01-01

    Observations made by the VIIRS day/night band (DNB) provide daily, nighttime measurements to monitor Earth surface processes. However, these observations are impacted by variations in reflected solar radiation on the moon's surface. As the moon transitions from new to full phase, increasing radiance is reflected to the Earth's surface and contributes additional reflected moonlight from clouds and land surface, in addition to emissions from other light sources observed by the DNB. The introduction of a bi-directional reflectance distribution function (BRDF) algorithm serves to remove these lunar variations and normalize observed radiances. Provided by the Terrestrial Information Systems Laboratory at Goddard Space Flight Center, a 1 km gridded lunar BRDF-corrected DNB product and VIIRS cloud mask can be used for a multitude of nighttime applications without influence from the moon. Such applications include the detection of power outages following severe weather events using pre- and post-event DNB imagery, as well as the identification of boat features to curtail illegal fishing practices. This presentation will provide context on the importance of the lunar BRDF correction algorithm and explore the aforementioned uses of this improved DNB product for applied science applications.

  15. Improvements to Lunar BRDF-Corrected Nighttime Satellite Imagery: Uses and Applications

    NASA Astrophysics Data System (ADS)

    Cole, T.; Molthan, A.; Schultz, L. A.; Roman, M. O.; Wanik, D. W.

    2016-12-01

    Observations made by the VIIRS day/night band (DNB) provide daily, nighttime measurements to monitor Earth surface processes. However, these observations are impacted by variations in reflected solar radiation on the moon's surface. As the moon transitions from new to full phase, increasing radiance is reflected to the Earth's surface and contributes additional reflected moonlight from clouds and land surface, in addition to emissions from other light sources observed by the DNB. The introduction of a bi-directional reflectance distribution function (BRDF) algorithm serves to remove these lunar variations and normalize observed radiances. Provided by the Terrestrial Information Systems Laboratory at Goddard Space Flight Center, a 1 km gridded lunar BRDF-corrected DNB product and VIIRS cloud mask can be used for a multitude of nighttime applications without influence from the moon. Such applications include the detection of power outages following severe weather events using pre- and post-event DNB imagery, as well as the identification of boat features to curtail illegal fishing practices. This presentation will provide context on the importance of the lunar BRDF correction algorithm and explore the aforementioned uses of this improved DNB product for applied science applications.
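The normalization idea behind the correction can be sketched by removing a modeled moonlight term from each observation. The additive model, effective albedo, and irradiance scaling below are simplifying assumptions, not the BRDF algorithm itself:

```python
import numpy as np

# Synthetic scenes: stable artificial lights plus a moonlight term that
# grows with lunar phase. Removing the modeled moonlight leaves radiances
# comparable across the lunar cycle.
rng = np.random.default_rng(5)
city_lights = rng.uniform(0.0, 20.0, 200)     # stable artificial signal
lunar_irr = np.linspace(0.0, 1.0, 200)        # new moon -> full moon
reflectance = 0.3                             # assumed effective albedo
observed = city_lights + reflectance * lunar_irr * 100.0

corrected = observed - reflectance * lunar_irr * 100.0
drift_raw = np.corrcoef(lunar_irr, observed)[0, 1]
drift_cor = np.corrcoef(lunar_irr, corrected)[0, 1]
print(f"corr. with lunar phase: raw {drift_raw:.2f}, corrected {drift_cor:.2f}")
```

After the correction, the radiance time series no longer tracks the lunar cycle, which is what makes pre- versus post-event comparisons (e.g., for outage detection) meaningful at any moon phase.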

  16. Fine spatiotemporal control of nitric oxide release by infrared pulse-laser irradiation of a photolabile donor.

    PubMed

    Nakagawa, Hidehiko; Hishikawa, Kazuhiro; Eto, Kei; Ieda, Naoya; Namikawa, Tomotaka; Kamada, Kenji; Suzuki, Takayoshi; Miyata, Naoki; Nabekura, Jun-ichi

    2013-11-15

    Two-photon-excitation release of nitric oxide (NO) from our recently synthesized photolabile NO donor, Flu-DNB, was confirmed to allow fine spatial and temporal control of NO release at the subcellular level in vitro. We then evaluated in vivo applications. Femtosecond near-infrared pulse laser irradiation of predefined regions of interest in living mouse brain treated with Flu-DNB induced NO-release-dependent, transient vasodilation specifically at the irradiated site. Photoirradiation in the absence of Flu-DNB had no effect. Further, NO release from Flu-DNB by pulse laser irradiation was shown to cause chemoattraction of microglial processes to the irradiated area in living mouse brain. To our knowledge, this is the first demonstration of induction of biological responses in vitro and in vivo by means of precisely controlled, two-photon-mediated release of NO.

  17. Suomi-NPP VIIRS Day-Night Band On-Orbit Calibration and Performance

    NASA Technical Reports Server (NTRS)

    Chen, Hongda; Xiong, Xiaoxiong; Sun, Chengbo; Chen, Xuexia; Chiang, Kwofu

    2017-01-01

    The Suomi national polar-orbiting partnership Visible Infrared Imaging Radiometer Suite (VIIRS) instrument has successfully operated since its launch in October 2011. The VIIRS day-night band (DNB) is a panchromatic channel covering wavelengths from 0.5 to 0.9 microns that is capable of observing Earth scenes during both daytime and nighttime at a spatial resolution of 750 m. To cover the large dynamic range, the DNB operates at low-, middle-, and high-gain stages, and it uses an on-board solar diffuser (SD) for its low-gain stage calibration. The SD observations also provide a means to compute the gain ratios of low-to-middle and middle-to-high gain stages. This paper describes the DNB on-orbit calibration methodology used by the VIIRS characterization support team in supporting the NASA Earth science community with consistent VIIRS sensor data records made available by the land science investigator-led processing systems. It provides an assessment and update of the DNB on-orbit performance, including the SD degradation in the DNB spectral range, detector gain and gain ratio trending, and stray-light contamination and its correction. Also presented in this paper are performance validations based on Earth scenes and lunar observations, and comparisons to the calibration methodology used by the operational interface data processing segment.

  18. DNB heat flux on inner side of a vertical pipe in forced flow of liquid hydrogen and liquid nitrogen

    NASA Astrophysics Data System (ADS)

    Shirai, Yasuyuki; Tatsumoto, Hideki; Shiotsu, Masahiro; Hata, Koichi; Kobayashi, Hiroaki; Naruo, Yoshihiro; Inatani, Yoshifumi

    2018-06-01

    Heat transfer from the inner side of a heated vertical pipe to liquid hydrogen flowing upward was measured at pressures of 0.4, 0.7 and 1.1 MPa over wide ranges of flow rate and liquid temperature. Nine test heaters with inner diameters of 3, 4, 6 and 9 mm and lengths of 50, 100, 150, 200, 250 and 300 mm were used. The DNB (departure from nucleate boiling) heat fluxes in forced flow of liquid hydrogen were measured for various subcoolings and flow velocities at pressures of 0.4, 0.7 and 1.1 MPa. The effect of L/d (the ratio of heater length to diameter) was clarified for the range L/d ⩽ 50. A new correlation for the DNB heat flux was presented, based on a simple model and the experimental data. Similar experiments were performed with liquid nitrogen at pressures of 0.5 MPa and 1.0 MPa using the same experimental system and some of the test heaters. It was confirmed that the new correlation can describe not only the hydrogen data but also the liquid nitrogen data.

  19. Leveraging CubeSat Technology to Address Nighttime Imagery Requirements over the Arctic

    NASA Astrophysics Data System (ADS)

    Pereira, J. J.; Mamula, D.; Caulfield, M.; Gallagher, F. W., III; Spencer, D.; Petrescu, E. M.; Ostroy, J.; Pack, D. W.; LaRosa, A.

    2017-12-01

    The National Oceanic and Atmospheric Administration (NOAA) has begun planning for the future operational environmental satellite system by conducting the NOAA Satellite Observing System Architecture (NSOSA) study. In support of the NSOSA study, NOAA is exploring how CubeSat technology funded by NASA can be used to demonstrate the ability to measure three-dimensional profiles of global temperature and water vapor. These measurements are critical for the National Weather Service's (NWS) weather prediction mission. NOAA is conducting design studies on Earth Observing Nanosatellites (EON) for microwave (EON-MW) and infrared (EON-IR) soundings, with MIT Lincoln Laboratory and NASA JPL, respectively. The next step is to explore the technology required for a CubeSat mission to address NWS nighttime imagery requirements over the Arctic. The concept is called EON-Day/Night Band (DNB). The DNB is a 0.5-0.9 micron channel currently on the operational Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, which is part of the Suomi-National Polar-orbiting Partnership and Joint Polar Satellite System satellites. NWS has found DNB very useful during the long periods of darkness that occur during the Alaskan cold season. The DNB enables nighttime imagery products of fog, clouds, and sea ice. EON-DNB will leverage experiments carried out by The Aerospace Corporation's CUbesat MULtispectral Observation System (CUMULOS) sensor and other related work. CUMULOS is a DoD-funded demonstration of COTS camera technology integrated as a secondary mission on the JPL Integrated Solar Array and Reflectarray Antenna mission. CUMULOS is demonstrating a staring visible Si CMOS camera. The EON-DNB project will leverage proven, advanced compact visible lens and focal plane camera technologies to meet NWS user needs for nighttime visible imagery. 
Expanding this technology to an operational demonstration carries several areas of risk that need to be addressed prior to an operational mission. These include, but are not limited to: calibration, swath coverage, resolution, scene gain control, compact fast optical systems, downlink choices, and mission life. NOAA plans to conduct risk reduction efforts similar to those on EON-MW and EON-IR. This paper will explore EON-DNB risks and mitigation options.

  20. Comparison between DMSP-OLS and S-NPP Day-Night Band in Correlating with Regional Socio-economic Variables

    NASA Astrophysics Data System (ADS)

    Jing, X.; Shao, X.; Cao, C.; Fu, X.

    2013-12-01

    Night-time light imagery offers a unique view of the Earth's surface. In the past, nighttime light data collected by the DMSP-OLS sensors have been used as an efficient means of correlating with global socio-economic activities. With the launch of the Suomi National Polar-orbiting Partnership (S-NPP) satellite in October 2011, the Day Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard S-NPP represents a major advancement in night time imaging capabilities, surpassing its predecessor DMSP-OLS in radiometric accuracy, spatial resolution, and geometric quality. In this paper, we compared the performance of DNB and DMSP imagery in correlating with regional socio-economic activities and analyzed the leading causes of the differences. The correlation coefficients between socio-economic variables, such as population and regional GDP, and the characteristic variables derived from the night time light images of DNB and DMSP at the provincial level in China were computed as performance metrics for comparison. In general, the correlation between DNB data and socio-economic data is better than that of DMSP data. To explain the difference in correlation, we further analyzed the effects of several factors, such as radiometric saturation and quantization of DMSP data, low spatial resolution, different data acquisition times between DNB and DMSP images, and differences in the transformation used to convert digital number (DN) values to radiance.
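    The comparison above hinges on computing Pearson correlation coefficients between radiance-derived variables and socio-economic statistics. A minimal sketch of that metric, using invented provincial-level numbers rather than the study's actual DNB/DMSP data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical provincial data: summed night-light radiance vs. regional GDP
# (arbitrary units, for illustration only).
dnb_radiance_sum = [1.2, 3.4, 2.1, 5.6, 4.3]
regional_gdp     = [0.9, 3.1, 2.4, 5.9, 4.0]

r = pearson_r(dnb_radiance_sum, regional_gdp)
```

In the study's setup, an `r` computed this way per sensor serves as the performance metric; a higher value for the DNB-derived variables than for the DMSP-derived ones indicates better correlation with the socio-economic data.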

  1. Investigation of emission properties of doped aromatic derivative organic semiconductor crystals

    NASA Astrophysics Data System (ADS)

    Stanculescu, A.; Mihut, L.; Stanculescu, F.; Alexandru, H.

    2008-04-01

    Fluorescence measurements have been made on pure and doped bulk, mechanically polished wafers of crystalline m-DNB and benzil, obtained by cutting ingots grown by the Bridgman-Stockbarger method modified for the crystallization of organic compounds. By comparison with the pure matrices, we have investigated the effect of inorganic dopants (iodine, silver, sodium) and organic dopants (m-DNB, naphthalene) on the emission characteristics (position and shape) of these molecular crystals. In iodine-doped m-DNB, a slight shift of the emission peaks toward high energy and an intense emission peak situated around 2.35 eV, correlated with a local trapping level attributed to structural defects involved in radiative processes, have been evidenced. The emission peak of m-DNB-doped benzil situated in the high-energy range (2.97 eV) is associated with direct emission activity of m-DNB, suggesting that this is an active impurity in the benzil molecular matrix. In benzil we have not observed any evidence of indirect action of the impurity molecules (atoms), associated with the traps represented by the structural defects that generate changes in the energy levels of the neighbouring molecules and are correlated with different growth conditions. Nor have we observed any involvement of the studied inorganic metallic impurities, or of organic impurities such as naphthalene, in the radiative recombination processes in the benzil matrix.

  2. 76 FR 24883 - DNB Exports LLC, and AFI Elektromekanikanik Ve Elektronik San. Tic. Ltd. Sti. v. Barsan Global...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-03

    ... FEDERAL MARITIME COMMISSION [Docket No. 11-07] DNB Exports LLC, and AFI Elektromekanikanik Ve Elektronik San. Tic. Ltd. Sti. v. Barsan Global Lojistiks Ve Gumruk Musavirligi A.S., Barsan International... AFI Elektromekanikanik Ve Elektronik San. Tic. Ltd. Sti. (``AFI''), hereinafter ``Complainants...

  3. Synergistic Use of Nighttime Satellite Data, Electric Utility Infrastructure, and Ambient Population to Improve Power Outage Detections in Urban Areas

    NASA Technical Reports Server (NTRS)

    Cole, Tony A.; Wanik, David W.; Molthan, Andrew L.; Roman, Miguel O.; Griffin, Robert E.

    2017-01-01

    Natural and anthropogenic hazards are frequently responsible for disaster events, leading to damaged physical infrastructure, which can result in loss of electrical power for affected locations. Remotely-sensed, nighttime satellite imagery from the Suomi National Polar-orbiting Partnership (Suomi-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) can monitor power outages in disaster-affected areas through the identification of missing city lights. When combined with locally-relevant geospatial information, these observations can be used to estimate power outages, defined as geographic locations requiring manual intervention to restore power. In this study, we produced a power outage product based on Suomi-NPP VIIRS DNB observations to estimate power outages following Hurricane Sandy in 2012. This product, combined with known power outage data and ambient population estimates, was then used to predict power outages in a layered, feedforward neural network model. We believe this is the first attempt to synergistically combine such data sources to quantitatively estimate power outages. The VIIRS DNB power outage product was able to identify initial loss of light following Hurricane Sandy, as well as the gradual restoration of electrical power. The neural network model predicted power outages with reasonable spatial accuracy, achieving Pearson coefficients (r) between 0.48 and 0.58 across all folds. Our results show promise for producing a continental United States (CONUS)- or global-scale power outage monitoring network using satellite imagery and locally-relevant geospatial data.
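    The outage predictor described above is a layered, feedforward neural network. The sketch below shows only the forward pass of such a network on invented features and untrained, illustrative weights; the study's actual architecture, inputs, and trained parameters are not given in the abstract:

```python
import math

def dense(x, W, b):
    """Fully connected layer: y[j] = sum_i x[i]*W[i][j] + b[j]."""
    return [sum(xi * row[j] for xi, row in zip(x, W)) + bj
            for j, bj in enumerate(b)]

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_outage(features, params):
    """Two-layer feedforward pass: features -> hidden (ReLU) -> outage score."""
    W1, b1, W2, b2 = params
    hidden = relu(dense(features, W1, b1))
    return sigmoid(dense(hidden, W2, b2)[0])

# Hypothetical inputs for one grid cell: [fractional DNB radiance drop,
# scaled ambient population, utility infrastructure density].
# Weights are illustrative, not trained.
params = (
    [[0.8, -0.2], [0.3, 0.5], [-0.4, 0.6]],  # W1: 3 inputs -> 2 hidden units
    [0.1, -0.1],                              # b1
    [[1.2], [0.7]],                           # W2: 2 hidden -> 1 output
    [-0.5],                                   # b2
)
score = predict_outage([0.9, 0.4, 0.2], params)
```

A trained version of such a model would be fit on known outage data and then evaluated fold by fold, reporting the Pearson r between predicted and observed outages as in the abstract.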

  4. DNB heat flux in forced convection of liquid hydrogen for a wire set in central axis of vertically mounted flow channel

    NASA Astrophysics Data System (ADS)

    Matsumoto, T.; Shirai, Y.; Shiotsu, M.; Fujita, K.; Kainuma, T.; Tatsumoto, H.; Naruo, Y.; Kobayashi, H.; Nonaka, S.; Inatani, Y.

    2017-12-01

    Liquid hydrogen has excellent physical properties, high latent heat and low viscosity of liquid, as a coolant for superconductors like MgB2. The knowledge of Departure from Nucleate Boiling (DNB) heat flux of liquid hydrogen is necessary for designing and cooling analysis of high critical temperature superconducting devices. In this paper, DNB heat fluxes of liquid hydrogen were measured under saturated and subcooled conditions at absolute pressures of 400, 700 and 1100 kPa for various flow velocities. Two wire test heaters made by Pt-Co alloy with the length of 200 mm and the diameter of 0.7 mm were used. And these round heaters were set in central axis of a flow channel made of Fiber Reinforced Plastic (FRP) with inner diameters of 8 mm and 12 mm. These test bodies were vertically mounted and liquid hydrogen flowed upward through the channel. From these experimental values, the correlations of DNB heat flux under saturated and subcooled conditions are presented in this paper.

  5. VIIRS day-night band (DNB) electronic hysteresis: characterization and correction

    NASA Astrophysics Data System (ADS)

    Mills, Stephen

    2016-09-01

    The VIIRS Day-Night Band (DNB) offers measurements over a dynamic range from full daylight to the dimmest nighttime. This makes radiometric calibration difficult because effects that are otherwise negligible become significant for the DNB. One of these effects is electronic hysteresis; this paper evaluates the effect and its impact on calibration, and considers possible correction algorithms. The cause of this hysteresis is uncertain, but since the DNB uses a charge-coupled device (CCD) detector array, it is likely the result of residual charge or charge depletion. The effects of hysteresis are evident in DNB imagery: streaks are visible in the cross-track direction around very bright objects such as gas flares, and dark streaks are also visible after lightning flashes. Each VIIRS scan is a sequence of four sectors: space view (SV), Earth view (EV), blackbody (BB) view, and solar diffuser (SD) view. There are differences in offset among these sectors that can only be explained as the result of hysteresis from one sector to the next. The most dramatic hysteresis effect occurs when the sun illuminates the SD and hysteresis is then observed in the SV and EV. Previously this was hypothesized to be stray light leaking from the SD chamber, but more careful evaluation shows that it can only be the result of hysteresis. The existing stray light correction algorithm treats this effect as stray light; its problems could be remedied by instead using the characterization presented here.
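    One way to picture the residual-charge explanation is a first-order hysteresis model in which each sample deposits charge that bleeds into later samples. This is an illustrative toy model, not the paper's characterization or correction algorithm; the `decay` and `gain` parameters are invented:

```python
def add_hysteresis(true_signal, decay=0.6, gain=0.05):
    """Forward toy model: each sample deposits residual charge that decays
    and bleeds into later samples (illustrative, not instrument physics)."""
    residual, out = 0.0, []
    for s in true_signal:
        out.append(s + residual)
        residual = decay * residual + gain * s
    return out

def correct_hysteresis(raw, decay=0.6, gain=0.05):
    """Invert the forward model sample by sample, tracking the same state."""
    residual, out = 0.0, []
    for r in raw:
        s = r - residual          # recover the true sample
        out.append(s)
        residual = decay * residual + gain * s
    return out

scene = [0.0, 100.0, 0.0, 0.0, 0.0]   # a gas-flare-like bright spike
raw = add_hysteresis(scene)           # a streak appears after the spike
recovered = correct_hysteresis(raw)   # the streak is removed exactly
```

The point of the sketch is that a known hysteresis response can be inverted deterministically, which is why a characterization-based correction can outperform treating the artifact as stray light.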

  6. Aurora Activities Observed by SNPP VIIRS Day-Night Band during St. Patrick's Day, 2015 G4 Level Geomagnetic Storm

    NASA Astrophysics Data System (ADS)

    Liu, T. C.; Shao, X.; Cao, C.; Zhang, B.; Fung, S. F.; Sharma, S.

    2015-12-01

    A G4 level (severe) geomagnetic storm occurred on March 17 (St. Patrick's Day), 2015, and it is among the strongest geomagnetic storms of the current solar cycle (Solar Cycle 24). The storm is attributed to Coronal Mass Ejections (CMEs) that erupted on March 15 from Region 2297 of the solar surface. During this event, the geomagnetic storm index Dst reached -223 nT, and the auroral electrojet (AE) index increased to more than 2200 nT with large amplitude fluctuations. Aurorae occurred in both hemispheres; ground sightings were reported from Michigan to Alaska and as far south as southern Colorado. The Day Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard Suomi-NPP represents a major advancement in night time imaging capabilities. The DNB senses radiance spanning 7 orders of magnitude in one panchromatic (0.5-0.9 μm) reflective solar band and provides imagery of clouds and other Earth features over illumination levels ranging from full sunlight to quarter moon. In this paper, DNB observations of aurora activity during the St. Patrick's Day geomagnetic storm are analyzed. Aurorae were observed by the DNB to evolve with salient features during night-side orbital passes (~1:30 am local time) in both hemispheres. The radiance data from the DNB observations were collected on the night sides of the southern and northern hemispheres and geolocated onto geomagnetic local time (MLT) coordinates. Regions of aurora during each orbital pass were identified through image processing by contouring radiance values and excluding regions with stray light near the day-night terminator. The evolution of the aurora is characterized with time series of the poleward and low-latitude boundaries of the aurora, their latitude span and area, and the peak radiance and total light emission of the aurora region in the DNB observations. These characteristic parameters are correlated with solar wind and geomagnetic index parameters.
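    The region-identification step (contouring radiance and extracting per-region area and peak radiance) can be sketched as a simple threshold-and-label pass over a radiance grid. This is a generic stand-in for the image processing described; the grid and threshold are invented:

```python
def aurora_regions(grid, threshold):
    """Label 4-connected pixel regions whose radiance exceeds `threshold`;
    return (area, peak radiance) per region. Illustrative stand-in for the
    contouring step described in the abstract."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                stack, area, peak = [(r, c)], 0, grid[r][c]
                seen[r][c] = True
                while stack:                      # flood fill one region
                    y, x = stack.pop()
                    area += 1
                    peak = max(peak, grid[y][x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append((area, peak))
    return regions

# Tiny invented radiance grid with two bright regions.
grid = [[0, 0, 0],
        [0, 5, 6],
        [0, 4, 0],
        [9, 0, 0]]
regions = aurora_regions(grid, threshold=1)
```

Per-region area and peak radiance computed this way correspond to the characteristic parameters (area, peak radiance, total emission) that the study trends against solar wind and geomagnetic indices.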

  7. Ab initio kinetics and thermal decomposition mechanism of mononitrobiuret and 1,5-dinitrobiuret

    NASA Astrophysics Data System (ADS)

    Sun, Hongyan; Vaghjiani, Ghanshyam L.

    2015-05-01

    Mononitrobiuret (MNB) and 1,5-dinitrobiuret (DNB) are tetrazole-free, nitrogen-rich, energetic compounds. For the first time, a comprehensive ab initio kinetics study on the thermal decomposition mechanisms of MNB and DNB is reported here. In particular, the intramolecular interactions of the amine H-atom with the electronegative nitro O-atom and carbonyl O-atom have been analyzed for biuret, MNB, and DNB at the M06-2X/aug-cc-pVTZ level of theory. The results show that the MNB and DNB molecules are stabilized through six-member-ring moieties via intramolecular H-bonding with interatomic distances between 1.8 and 2.0 Å, due to electrostatic as well as polarization and dispersion interactions. Furthermore, it was found that the stable molecules in the solid state have the smallest dipole moment amongst all the conformers in the nitrobiuret series of compounds, thus revealing a simple way for evaluating the reactivity of fuel conformers. The potential energy surface for thermal decomposition of MNB was characterized by spin-restricted coupled cluster theory at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level. It was found that the thermal decomposition of MNB is initiated by the elimination of HNCO and HNN(O)OH intermediates. Intramolecular transfer of an H-atom from the terminal NH2 group to the adjacent carbonyl O-atom via a six-member-ring transition state eliminates HNCO with an energy barrier of 35 kcal/mol, while transfer from the central NH group to the adjacent nitro O-atom eliminates HNN(O)OH with an energy barrier of 34 kcal/mol. Elimination of HNN(O)OH is also the primary process involved in the thermal decomposition of DNB, which possesses C2v symmetry. The rate coefficients for the primary decomposition channels of MNB and DNB were quantified as functions of temperature and pressure.
In addition, the thermal decomposition of HNN(O)OH was analyzed via Rice-Ramsperger-Kassel-Marcus/multi-well master equation simulations, the results of which reveal the formation of (NO2 + H2O) to be the major decomposition path. Furthermore, we provide fundamental interpretations for the experimental results of Klapötke et al. [Combust. Flame 139, 358-366 (2004)] regarding the thermal stability of MNB and DNB, and their decomposition products. Notably, a fundamental understanding of fuel stability, decomposition mechanism, and key reactions leading to ignition is essential in the design and manipulation of molecular systems for the development of new energetic materials for advanced propulsion applications.
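    Rate coefficients like those quantified above are commonly expressed in modified Arrhenius form, k(T) = A·T^n·exp(-Ea/RT). The sketch below evaluates that form using a barrier of the ~35 kcal/mol magnitude mentioned in the abstract; the pre-exponential factor A and temperature exponent n are illustrative placeholders, not the paper's fitted values:

```python
import math

R_KCAL = 1.987204e-3  # gas constant in kcal/(mol*K)

def rate_coefficient(T, A, n, Ea):
    """Modified Arrhenius form: k(T) = A * T**n * exp(-Ea / (R*T)).

    Ea is in kcal/mol; A and n here are invented placeholders, not the
    paper's fitted parameters.
    """
    return A * T**n * math.exp(-Ea / (R_KCAL * T))

# Illustrative evaluation at two temperatures for a 35 kcal/mol barrier.
k_600 = rate_coefficient(600.0, A=1.0e13, n=0.0, Ea=35.0)
k_800 = rate_coefficient(800.0, A=1.0e13, n=0.0, Ea=35.0)
```

The strong growth of k with temperature for a ~35 kcal/mol barrier is what makes these elimination channels the gatekeepers of thermal decomposition onset.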

  8. Ab Initio Kinetics and Thermal Decomposition Mechanism of Mononitrobiuret and 1,5- Dinitrobiuret

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Hongyan; Vaghjiani, Ghanshyam G.

    2015-05-26

    Mononitrobiuret (MNB) and 1,5-dinitrobiuret (DNB) are tetrazole-free, nitrogen-rich, energetic compounds. For the first time, a comprehensive ab initio kinetics study on the thermal decomposition mechanisms of MNB and DNB is reported here. In particular, the intramolecular interactions of the amine H-atom with the electronegative nitro O-atom and carbonyl O-atom have been analyzed for biuret, MNB, and DNB at the M06-2X/aug-cc-pVTZ level of theory. The results show that the MNB and DNB molecules are stabilized through six-member-ring moieties via intramolecular H-bonding with interatomic distances between 1.8 and 2.0 Å, due to electrostatic as well as polarization and dispersion interactions. Furthermore, it was found that the stable molecules in the solid state have the smallest dipole moment amongst all the conformers in the nitrobiuret series of compounds, thus revealing a simple way for evaluating the reactivity of fuel conformers. The potential energy surface for thermal decomposition of MNB was characterized by spin-restricted coupled cluster theory at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level. It was found that the thermal decomposition of MNB is initiated by the elimination of HNCO and HNN(O)OH intermediates. Intramolecular transfer of an H-atom from the terminal NH2 group to the adjacent carbonyl O-atom via a six-member-ring transition state eliminates HNCO with an energy barrier of 35 kcal/mol, while transfer from the central NH group to the adjacent nitro O-atom eliminates HNN(O)OH with an energy barrier of 34 kcal/mol. Elimination of HNN(O)OH is also the primary process involved in the thermal decomposition of DNB, which possesses C2v symmetry. The rate coefficients for the primary decomposition channels of MNB and DNB were quantified as functions of temperature and pressure.
In addition, the thermal decomposition of HNN(O)OH was analyzed via Rice–Ramsperger–Kassel–Marcus/multi-well master equation simulations, the results of which reveal the formation of (NO2 + H2O) to be the major decomposition path. Furthermore, we provide fundamental interpretations for the experimental results of Klapötke et al. [Combust. Flame 139, 358–366 (2004)] regarding the thermal stability of MNB and DNB, and their decomposition products. Notably, a fundamental understanding of fuel stability, decomposition mechanism, and key reactions leading to ignition is essential in the design and manipulation of molecular systems for the development of new energetic materials for advanced propulsion applications.

  9. Ab initio kinetics and thermal decomposition mechanism of mononitrobiuret and 1,5-dinitrobiuret.

    PubMed

    Sun, Hongyan; Vaghjiani, Ghanshyam L

    2015-05-28

    Mononitrobiuret (MNB) and 1,5-dinitrobiuret (DNB) are tetrazole-free, nitrogen-rich, energetic compounds. For the first time, a comprehensive ab initio kinetics study on the thermal decomposition mechanisms of MNB and DNB is reported here. In particular, the intramolecular interactions of the amine H-atom with the electronegative nitro O-atom and carbonyl O-atom have been analyzed for biuret, MNB, and DNB at the M06-2X/aug-cc-pVTZ level of theory. The results show that the MNB and DNB molecules are stabilized through six-member-ring moieties via intramolecular H-bonding with interatomic distances between 1.8 and 2.0 Å, due to electrostatic as well as polarization and dispersion interactions. Furthermore, it was found that the stable molecules in the solid state have the smallest dipole moment amongst all the conformers in the nitrobiuret series of compounds, thus revealing a simple way for evaluating the reactivity of fuel conformers. The potential energy surface for thermal decomposition of MNB was characterized by spin-restricted coupled cluster theory at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level. It was found that the thermal decomposition of MNB is initiated by the elimination of HNCO and HNN(O)OH intermediates. Intramolecular transfer of an H-atom from the terminal NH2 group to the adjacent carbonyl O-atom via a six-member-ring transition state eliminates HNCO with an energy barrier of 35 kcal/mol, while transfer from the central NH group to the adjacent nitro O-atom eliminates HNN(O)OH with an energy barrier of 34 kcal/mol. Elimination of HNN(O)OH is also the primary process involved in the thermal decomposition of DNB, which possesses C2v symmetry. The rate coefficients for the primary decomposition channels of MNB and DNB were quantified as functions of temperature and pressure.
In addition, the thermal decomposition of HNN(O)OH was analyzed via Rice-Ramsperger-Kassel-Marcus/multi-well master equation simulations, the results of which reveal the formation of (NO2 + H2O) to be the major decomposition path. Furthermore, we provide fundamental interpretations for the experimental results of Klapötke et al. [Combust. Flame 139, 358-366 (2004)] regarding the thermal stability of MNB and DNB, and their decomposition products. Notably, a fundamental understanding of fuel stability, decomposition mechanism, and key reactions leading to ignition is essential in the design and manipulation of molecular systems for the development of new energetic materials for advanced propulsion applications.

  10. Validation of S-NPP VIIRS Day-Night Band and M Bands Performance Using Ground Reference Targets of Libya 4 and Dome C

    NASA Technical Reports Server (NTRS)

    Chen, Xuexia; Wu, Aisheng; Xiong, Xiaoxiong; Lei, Ning; Wang, Zhipeng; Chiang, Kwofu

    2015-01-01

    This paper provides methodologies developed and implemented by the NASA VIIRS Calibration Support Team (VCST) to validate the S-NPP VIIRS Day-Night Band (DNB) and M bands calibration performance. The Sensor Data Records produced by the Interface Data Processing Segment (IDPS) and the NASA Land Product Evaluation and Algorithm Testing Element (PEATE) are acquired from near-nadir overpasses of the Libya 4 desert and Dome C snow surfaces. In the past 3.5 years, the modulated relative spectral responses (RSR) have changed with time, leading to a 3.8% increase in the DNB sensed solar irradiance and increases of 0.1% or less in the M4-M7 bands. After excluding data before April 5th, 2013, IDPS DNB radiance and reflectance data are consistent with Land PEATE data, with a difference of 0.6% or less for the Libya 4 site and 2% or less for the Dome C site. These differences are caused by inconsistent LUTs and algorithms used in calibration. For the Libya 4 site, the SCIAMACHY spectral and modulated RSR derived top-of-atmosphere (TOA) reflectances are compared with the Land PEATE TOA reflectance, indicating decreases of 1.2% and 1.3%, respectively. The radiance of the Land PEATE DNB is compared with the simulated radiance from aggregated M bands (M4, M5, and M7). These data trends match well, with a difference of 2% or less for the Libya 4 site and 4% or less for Dome C. This study demonstrates the consistent quality of the DNB and M bands calibration for Land PEATE products during the operational period and for IDPS products after April 5th, 2013.

  11. Effect of culture age on 1,3-dinitrobenzene metabolism and indicators of cellular toxicity in rat testicular cells.

    PubMed

    Brown, C D; Miller, M G

    1991-01-01

    The metabolism and toxicity of 1,3-dinitrobenzene (1,3-DNB) were examined in rat testicular cells that had been cultured for various amounts of time. The three cell systems utilized were: freshly isolated suspensions of Sertoli/germ cells; the same Sertoli/germ cells co-cultured for 24 hr; and Sertoli cell-enriched monolayers derived from the co-cultures and cultured for 96 hr. Indicators of toxicity were MTT reduction, neutral red incorporation, cellular ATP levels and lactate secretion into the media. 1,3-DNB (5-50 µM) caused a significant concentration-dependent decline in cellular ATP levels in the fresh cell suspension, but not in the cells that had been cultured for longer. No changes were observed in either MTT reduction or neutral red incorporation. Increased secretion of lactate into the media also did not prove to be a sensitive indicator of toxicity. Interestingly, 1,3-DNB metabolism to nitroaniline, nitroacetanilide and a covalently bound species was two to three times greater in the fresh cells compared with either the 24- or 96-hr cell cultures. The data indicate that time in culture may have significant effects on both the capacity of testicular cells to metabolize 1,3-DNB and their susceptibility to toxicity.

  12. [Mechanism of the oxidation reaction of NADH models and phenylglyoxal with hydrogen peroxide. Hypothesis on the separate transport of the hydrogen atom and electron in certain enzymatic reactions with the participation of NADH and NADPH].

    PubMed

    Iasnikov, A A; Ponomarenko, S P

    1976-05-01

    The kinetics of the co-oxidation of 1-benzyl-3-carbamido-1,4-dihydropyridine (BDN) and phenylglyoxal (PG) with hydrogen peroxide were studied. A dimeric product (di[1-benzyl-5-carbamido-1,2-dihydropyridyl-2]) was found to be formed at pH 9, and a quaternary pyridinium salt (BNA) at pH 7. Molecular oxygen was determined to participate in the reaction at pH 7, and copper(II) ions catalyze this process. A significant catalytic effect of p-dinitrobenzene (p-DNB) was found. The postulated reaction mechanism involves the formation of a hydroperoxide from PG and hydrogen peroxide, which is capable of abstracting the hydrogen atom from the dihydropyridine, with molecular oxygen or p-DNB acting as the electron acceptor. A hypothesis on the separate transfer of the hydrogen atom and the electron in biological systems is proposed.

  13. Ab initio kinetics and thermal decomposition mechanism of mononitrobiuret and 1,5-dinitrobiuret

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Hongyan, E-mail: hongyan.sun1@gmail.com; Vaghjiani, Ghanshyam L., E-mail: ghanshyam.vaghjiani@us.af.mil

    2015-05-28

    Mononitrobiuret (MNB) and 1,5-dinitrobiuret (DNB) are tetrazole-free, nitrogen-rich, energetic compounds. For the first time, a comprehensive ab initio kinetics study on the thermal decomposition mechanisms of MNB and DNB is reported here. In particular, the intramolecular interactions of the amine H-atom with the electronegative nitro O-atom and carbonyl O-atom have been analyzed for biuret, MNB, and DNB at the M06-2X/aug-cc-pVTZ level of theory. The results show that the MNB and DNB molecules are stabilized through six-member-ring moieties via intramolecular H-bonding with interatomic distances between 1.8 and 2.0 Å, due to electrostatic as well as polarization and dispersion interactions. Furthermore, it was found that the stable molecules in the solid state have the smallest dipole moment amongst all the conformers in the nitrobiuret series of compounds, thus revealing a simple way for evaluating the reactivity of fuel conformers. The potential energy surface for thermal decomposition of MNB was characterized by spin-restricted coupled cluster theory at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level. It was found that the thermal decomposition of MNB is initiated by the elimination of HNCO and HNN(O)OH intermediates. Intramolecular transfer of an H-atom from the terminal NH2 group to the adjacent carbonyl O-atom via a six-member-ring transition state eliminates HNCO with an energy barrier of 35 kcal/mol, while transfer from the central NH group to the adjacent nitro O-atom eliminates HNN(O)OH with an energy barrier of 34 kcal/mol. Elimination of HNN(O)OH is also the primary process involved in the thermal decomposition of DNB, which possesses C2v symmetry. The rate coefficients for the primary decomposition channels of MNB and DNB were quantified as functions of temperature and pressure.
In addition, the thermal decomposition of HNN(O)OH was analyzed via Rice–Ramsperger–Kassel–Marcus/multi-well master equation simulations, the results of which reveal the formation of (NO2 + H2O) to be the major decomposition path. Furthermore, we provide fundamental interpretations for the experimental results of Klapötke et al. [Combust. Flame 139, 358–366 (2004)] regarding the thermal stability of MNB and DNB, and their decomposition products. Notably, a fundamental understanding of fuel stability, decomposition mechanism, and key reactions leading to ignition is essential in the design and manipulation of molecular systems for the development of new energetic materials for advanced propulsion applications.

  14. Sensitive, Selective Test For Hydrazines

    NASA Technical Reports Server (NTRS)

    Roundbehler, David; Macdonald, Stephen

    1993-01-01

    Derivatives of hydrazines formed, then subjected to gas chromatography and detected via chemiluminescence. In method of detecting and quantifying hydrazine vapors, vapors reacted with dinitro compound to enhance sensitivity and selectivity. Hydrazine (HZ), monomethyl hydrazine (MMH), and unsymmetrical dimethylhydrazine (UDMH) analyzed quantitatively and qualitatively, either alone or in mixtures. Vapors collected and reacted with 2,4-dinitrobenzaldehyde (DNB), making it possible to concentrate hydrazine in derivative form, thereby increasing sensitivity to low initial concentrations. Also increases selectivity, because only those constituents of sample that react with DNB are concentrated for analysis.

  15. Ab initio Kinetics and Thermal Decomposition Mechanism of Mononitrobiuret and 1,5-Dinitrobiuret

    DTIC Science & Technology

    2016-03-14

    Journal article; dates covered: Feb 2015-May 2015. ...tetrazole-free, nitrogen-rich, energetic compounds. For the first time, the thermal decomposition mechanisms of MNB and DNB have been investigated...potential energy surfaces for thermal decomposition of MNB and DNB were characterized at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level of theory

  16. Subchronic toxicity studies on 1,3,5-trinitrobenzene, l,3-dinitrobenzene, and tetryl in rats. Final report, 15 January 1992-20 September 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, T.V.; Daniel, F.B.; Robinson, M.

    Nitroaromatics, such as 1,3-dinitrobenzene (DNB), 1,3,5-trinitrobenzene (TNB), and N-methyl-N,2,4,6-tetranitroaniline (tetryl), have been detected as environmental contaminants of groundwater and soil near production sites and, in some instances, at military test grounds. DNB is formed as a by-product during 2,4,6-trinitrotoluene (TNT) production. It is also formed through photochemical oxidative degradation of 2,4-dinitrotoluene, a by-product released into the environment from TNT manufacturing (Spanggord et al., 1980). DNB and TNB are not easily biodegradable; they persist in the environment, eventually leach out, and contaminate groundwater near waste disposal sites. Tetryl is an explosive that has been in use, largely for military purposes, since 1906. Wastewaters and soil at the original production sites and other plants devoted to munitions assembly contain large quantities of these compounds (Walsh and Jenkins, 1992).

  17. Estimating global per-capita carbon emissions with VIIRS nighttime lights satellite data

    NASA Astrophysics Data System (ADS)

    Jasmin, T.; Desai, A. R.; Pierce, R. B.

    2015-12-01

    With the launch of the Suomi National Polar-orbiting Partnership (NPP) satellite in October 2011, nighttime lights remote sensing capability is vastly improved over that of the predecessor Defense Meteorological Satellite Program (DMSP), owing to the improved spatial and radiometric resolution provided by the Visible Infrared Imaging Radiometer Suite (VIIRS) Day Night Band (DNB), along with technology improvements in data transfer, processing, and storage. This development opens doors for novel scientific applications utilizing remotely sensed low-level visible light, for purposes ranging from estimating population to inferring factors relating to economic development. For example, the success of future international agreements to reduce greenhouse gas emissions will depend on mechanisms for remote compliance monitoring. Here, we discuss the implementation and evaluation of the VRCE system (VIIRS Remote Carbon Estimates), developed at the University of Wisconsin-Madison, which provides monthly independent, unbiased estimates of per-capita carbon emissions. Cloud-free global composites of Earth nocturnal lighting are generated from VIIRS DNB at full spatial resolution (750 meters). A population equation is derived from a linear regression of state-level DNB radiance sums against U.S. Census data. CO2 emissions are derived from a linear regression of VIIRS DNB radiance sums against U.S. Department of Energy emission estimates. Regional coefficients for factors such as the percentage of energy use from renewable sources are factored in, and together these equations are used to generate per-capita CO2 emission estimates at the country level.
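    The population and emissions equations described above are linear regressions of radiance sums against reference statistics. A minimal ordinary least-squares sketch with invented state-level numbers (not Census or DOE figures):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ~ a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical training data: state-level DNB radiance sums vs. reported
# CO2 emissions (arbitrary units, invented for illustration).
radiance  = [10.0, 20.0, 30.0, 40.0]
emissions = [12.0, 21.0, 33.0, 40.0]

a, b = fit_line(radiance, emissions)
predicted = a * 25.0 + b   # emissions estimate for an unmeasured region
```

In the VRCE scheme, one such regression maps radiance to population and another maps radiance to emissions; the two together yield per-capita estimates once regional adjustment coefficients are applied.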

  18. Improving Nocturnal Fire Detection with the VIIRS Day-Night Band

    NASA Technical Reports Server (NTRS)

    Polivka, Thomas N.; Wang, Jun; Ellison, Luke T.; Hyer, Edward J.; Ichoku, Charles M.

    2016-01-01

    Building on existing techniques for satellite remote sensing of fires, this paper takes advantage of the day-night band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) to develop the Firelight Detection Algorithm (FILDA), which characterizes fire pixels based on both visible-light and infrared (IR) signatures at night. By adjusting fire pixel selection criteria to include visible-light signatures, FILDA allows for significantly improved detection of pixels with smaller and/or cooler subpixel hotspots than the operational Interface Data Processing Segment (IDPS) algorithm. VIIRS scenes with near-coincident Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) overpasses are examined after applying the operational VIIRS fire product algorithm and including a modified "candidate fire pixel selection" approach from FILDA that lowers the 4-µm brightness temperature (BT) threshold but requires a minimum DNB radiance. FILDA is shown to be effective in detecting gas flares and characterizing fire lines during large forest fires (such as the Rim Fire in California and the High Park fire in Colorado). Compared with the operational VIIRS fire algorithm for the study period, FILDA shows a large increase (up to 90%) in the number of detected fire pixels that can be verified with the finer-resolution ASTER data (90 m). Part (30%) of this increase is likely due to the combined use of the DNB and lower 4-µm BT thresholds for fire detection in FILDA. Although further studies are needed, quantitative use of the DNB to improve fire detection could lead to reduced response times to wildfires and better estimates of fire characteristics (smoldering and flaming) at night.
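    The modified candidate-selection idea (a lowered 4-µm BT threshold paired with a minimum DNB radiance) can be sketched as below. All threshold values and pixel records are invented for illustration; they are not FILDA's or the operational algorithm's actual values:

```python
def candidate_fire_pixels(pixels, bt4_hot=305.0, bt4_warm=295.0, dnb_min=5e-9):
    """Flag candidate nighttime fire pixels.

    A pixel qualifies if its 4-um brightness temperature (K) exceeds a hot
    threshold outright, or exceeds a lowered (warm) threshold while also
    showing visible-light emission in the DNB. Thresholds are illustrative.
    """
    return [p for p in pixels
            if p["bt4"] > bt4_hot
            or (p["bt4"] > bt4_warm and p["dnb"] > dnb_min)]

pixels = [
    {"id": 1, "bt4": 310.0, "dnb": 0.0},    # hot enough on IR alone
    {"id": 2, "bt4": 298.0, "dnb": 2e-8},   # warm but glowing -> candidate
    {"id": 3, "bt4": 298.0, "dnb": 1e-10},  # warm but dark -> rejected
]
candidates = candidate_fire_pixels(pixels)
```

Pixel 2 illustrates the gain reported in the abstract: a cooler subpixel hotspot that an IR-only threshold would miss is recovered because it also emits visible light.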

  19. An investigation of transition boiling mechanisms of subcooled water under forced convective conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwang-Won, Lee; Sang-Yong, Lee

    1995-09-01

    A mechanistic model for forced convective transition boiling has been developed to investigate transition boiling mechanisms and to predict the transition boiling heat flux realistically. The model is based on a postulated multi-stage boiling process occurring during the passage time of the elongated vapor blanket specified at the critical heat flux (CHF) condition. Between the departure from nucleate boiling (DNB) and the departure from film boiling (DFB) points, boiling heat transfer proceeds through three stages: macrolayer evaporation and dryout, both governed by nucleate boiling in a thin liquid film, and unstable film boiling, characterized by frequent contact between the vapor-liquid interface and the heated wall. The total heat transfer rate after DNB is weighted by the time fraction of each stage, defined as the ratio of that stage's duration to the vapor blanket passage time. The model predictions are compared with available experimental transition boiling data, and the parametric effects of pressure, mass flux, and inlet subcooling on transition boiling heat transfer are investigated. From these comparisons, the model is seen to capture the crucial mechanisms of forced convective transition boiling, and the transition boiling heat fluxes, including the maximum heat flux and the minimum film boiling heat flux, are well predicted at low qualities and high pressures near 10 bar. In the future, the model will be improved in the unstable film boiling stage and generalized to high-quality, low-pressure conditions.
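The time-fraction weighting described above can be written as a short sketch: the post-DNB heat flux is a sum of stage heat fluxes, each weighted by that stage's share of the vapor-blanket passage time. All numbers are illustrative, not values from the cited model.

```python
# Illustrative stage durations (s) and heat fluxes (W/m^2) for the three
# postulated stages: macrolayer evaporation, dryout, unstable film boiling.
stage_duration = [2.0e-3, 5.0e-3, 3.0e-3]
stage_flux     = [1.5e6, 0.8e6, 0.3e6]

t_passage = sum(stage_duration)               # vapor blanket passage time
fractions = [t / t_passage for t in stage_duration]

# Transition boiling heat flux as the time-fraction-weighted sum.
q_transition = sum(f * q for f, q in zip(fractions, stage_flux))
print(round(q_transition))
```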

  20. Long-Term Impact of Earthquakes on Sleep Quality

    PubMed Central

    Tempesta, Daniela; Curcio, Giuseppe; De Gennaro, Luigi; Ferrara, Michele

    2013-01-01

    Purpose We investigated the impact of the 6.3 magnitude 2009 L’Aquila (Italy) earthquake on standardized self-report measures of sleep quality (Pittsburgh Sleep Quality Index, PSQI) and frequency of disruptive nocturnal behaviours (Pittsburgh Sleep Quality Index-Addendum, PSQI-A) two years after the natural disaster. Methods Self-reported sleep quality was assessed in 665 L’Aquila citizens exposed to the earthquake and compared with a different sample (n = 754) of L'Aquila citizens tested 24 months before the earthquake. In addition, sleep quality and disruptive nocturnal behaviours (DNB) of people exposed to the traumatic experience were compared with those of people who in the same period lived in areas between 40 and 115 km from the earthquake epicenter (n = 3574). Results The comparison between L’Aquila citizens before and after the earthquake showed a significant deterioration of sleep quality after exposure to the trauma. In addition, two years after the earthquake L'Aquila citizens showed the highest PSQI scores and the highest incidence of DNB compared to subjects living in the surroundings. Interestingly, above-threshold PSQI scores were found in participants living within 70 km of the epicenter, while trauma-related DNBs were found in people living within 40 km. Multiple regressions confirmed that proximity to the epicenter is predictive of sleep disturbances and DNB, and suggested a possible mediating effect of depression on PSQI scores. Conclusions The psychological effects of an earthquake may be much more pervasive and longer-lasting than its destruction of buildings, persisting for years and involving a much larger population. Reduced sleep quality and an increased frequency of DNB two years on may be risk factors for the development of depression and posttraumatic stress disorder. PMID:23418478

  1. Long-term impact of earthquakes on sleep quality.

    PubMed

    Tempesta, Daniela; Curcio, Giuseppe; De Gennaro, Luigi; Ferrara, Michele

    2013-01-01

    We investigated the impact of the 6.3 magnitude 2009 L'Aquila (Italy) earthquake on standardized self-report measures of sleep quality (Pittsburgh Sleep Quality Index, PSQI) and frequency of disruptive nocturnal behaviours (Pittsburgh Sleep Quality Index-Addendum, PSQI-A) two years after the natural disaster. Self-reported sleep quality was assessed in 665 L'Aquila citizens exposed to the earthquake and compared with a different sample (n = 754) of L'Aquila citizens tested 24 months before the earthquake. In addition, sleep quality and disruptive nocturnal behaviours (DNB) of people exposed to the traumatic experience were compared with those of people who in the same period lived in areas between 40 and 115 km from the earthquake epicenter (n = 3574). The comparison between L'Aquila citizens before and after the earthquake showed a significant deterioration of sleep quality after exposure to the trauma. In addition, two years after the earthquake L'Aquila citizens showed the highest PSQI scores and the highest incidence of DNB compared to subjects living in the surroundings. Interestingly, above-threshold PSQI scores were found in participants living within 70 km of the epicenter, while trauma-related DNBs were found in people living within 40 km. Multiple regressions confirmed that proximity to the epicenter is predictive of sleep disturbances and DNB, and suggested a possible mediating effect of depression on PSQI scores. The psychological effects of an earthquake may be much more pervasive and longer-lasting than its destruction of buildings, persisting for years and involving a much larger population. Reduced sleep quality and an increased frequency of DNB two years on may be risk factors for the development of depression and posttraumatic stress disorder.

  2. Intermolecular electron-transfer mechanisms via quantitative structures and ion-pair equilibria for self-exchange of anionic (dinitrobenzenide) donors.

    PubMed

    Rosokha, Sergiy V; Lü, Jian-Ming; Newton, Marshall D; Kochi, Jay K

    2005-05-25

    Definitive X-ray structures of "separated" versus "contact" ion pairs, together with their spectral (UV-NIR, ESR) characterizations, provide the quantitative basis for evaluating the complex equilibria and intrinsic (self-exchange) electron-transfer rates for the potassium salts of p-dinitrobenzene radical anion (DNB(-)). Three principal types of ion pairs, K(L)(+)DNB(-), are designated as Classes S, M, and C via the specific ligation of K(+) with different macrocyclic polyether ligands (L). For Class S, the self-exchange rate constant for the separated ion pair (SIP) is essentially the same as that of the "free" anion, and we conclude that dinitrobenzenide reactivity is unaffected when the interionic distance in the separated ion pair is r(SIP) > or =6 Angstroms. For Class M, the dynamic equilibrium between the contact ion pair (with r(CIP) = 2.7 Angstroms) and its separated ion pair is quantitatively evaluated, and the rather minor fraction of SIP is nonetheless the principal contributor to the overall electron-transfer kinetics. For Class C, the SIP rate is limited by the slow rate of CIP right arrow over left arrow SIP interconversion, and the self-exchange proceeds via the contact ion pair by default. Theoretically, the electron-transfer rate constant for the separated ion pair is well-accommodated by the Marcus/Sutin two-state formulation when the precursor in Scheme 2 is identified as the "separated" inner-sphere complex (IS(SIP)) of cofacial DNB(-)/DNB dyads. By contrast, the significantly slower rate of self-exchange via the contact ion pair requires an associative mechanism (Scheme 3) in which the electron-transfer rate is strongly governed by cationic mobility of K(L)(+) within the "contact" precursor complex (IS(CIP)) according to the kinetics in Scheme 4.

  3. Thermal Decomposition of 1,5-Dinitrobiuret (DNB): Direct Dynamics Trajectory Simulations and Statistical Modeling

    DTIC Science & Technology

    2011-05-03

    ...branching using Rice-Ramsperger-Kassel-Marcus (RRKM) theory, and finally to the analysis of inter-conversions of primary decomposition products... RRKM theory was employed to examine the properties of the reactant, intermediate complex, and transition states as a function of the total internal energy

  4. Supporting Disaster Assessment and Response with the VIIRS Day-Night Band

    NASA Technical Reports Server (NTRS)

    Schultz, Lori A.; Cole, Tony; Molthan, Andrew L.

    2015-01-01

    When meteorological or man-made disasters occur, first responders often focus on impacts to the affected population and other human activities. These disasters frequently cause significant damage to local infrastructure, resulting in widespread power outages. For minor events, these outages are often short-lived, but major disasters can produce long-term outages with a significant impact on wellness, safety, and recovery efforts within the affected areas. Staff at NASA's Short-term Prediction Research and Transition (SPoRT) Center have been investigating the use of the VIIRS day-night band (DNB) for monitoring power outages that result from significant disasters, and developing techniques to identify damaged areas in near real-time following events. Beyond immediate assessment, the VIIRS DNB can be used to monitor and assess ongoing recovery efforts. In this presentation, we will highlight previous applications of the VIIRS DNB following Superstorm Sandy in 2012, and applications to more recent disaster events, including detection of outages following the Moore, Oklahoma tornado of May 2013 and the Chilean earthquake of April 2014. Examples of current products will be shown, along with future work and other goals for supporting disaster assessment and response with VIIRS capabilities.
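The outage-mapping idea described above can be sketched as a comparison between a pre-event DNB composite and a post-event scene, flagging pixels whose radiance dropped sharply. The radiance values, the 60% drop criterion, and the lit-pixel floor are illustrative assumptions, not the SPoRT product algorithm.

```python
import numpy as np

# Toy pre- and post-event DNB radiance grids (arbitrary units).
pre  = np.array([[20.0, 0.5, 15.0], [30.0, 8.0, 0.2]])
post = np.array([[ 5.0, 0.4, 14.0], [ 6.0, 7.5, 0.2]])

LIT_MIN = 1.0   # ignore pixels that were essentially dark before the event
DROP    = 0.6   # flag pixels losing more than 60% of their radiance

# Outage candidates: previously lit pixels with a large fractional drop.
outage = (pre > LIT_MIN) & ((pre - post) / pre > DROP)
print(outage)
```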

  5. Tweezering the core of dendrimers: medium effect on the kinetic and thermodynamic properties.

    PubMed

    Giansante, Carlo; Mazzanti, Andrea; Baroncini, Massimo; Ceroni, Paola; Venturi, Margherita; Klärner, Frank-Gerrit; Vögtle, Fritz

    2009-10-02

    We have investigated the complex formation between dendritic guests and a molecular tweezer host by NMR, absorption, and emission spectroscopy as well as electrochemical techniques. The dendrimers are constituted by an electron-acceptor 4,4'-bipyridinium core appended with one (DnB(2+)) or two (Dn(2)B(2+)) polyaryl-ether dendrons. Tweezer T comprises a naphthalene and four benzene components bridged by four methylene groups. Medium effects on molecular recognition phenomena are discussed and provide insight into the conformation of dendrimers: change in solvent polarity from pure CH(2)Cl(2) to CH(2)Cl(2)/CH(3)CN mixtures and addition of tetrabutylammonium hexafluorophosphate (NBu(4)PF(6), up to 0.15 M), the supporting electrolyte used in the electrochemical measurements, have been investigated. The association constants measured in different media show the following trend: (i) they decrease upon increasing polarity of the solvent, as expected for host-guest complexes stabilized by electron donor-acceptor interactions; (ii) no effect of generation and number of dendrons (one for the DnB(2+) family and two for the Dn(2)B(2+) family) appended to the core is observed in higher polarity media; and (iii) in a low-polarity solvent, like CH(2)Cl(2), the stability of the inclusion complexes is higher for DnB(2+) dendrimers than for Dn(2)B(2+) ones, while within each dendrimer family it increases by decreasing dendron generation, and upon addition of NBu(4)PF(6). The last result has been ascribed to a partial dendron unfolding. Kinetic investigations performed in lower polarity media evidence that the rate constants of complex formation are slower for symmetric Dn(2)B(2+) dendrimers than for the nonsymmetric DnB(2+) ones, and that within the Dn(2)B(2+) family, they decrease by increasing dendron generation. The dependence of the rate constants for the formation and dissociation of the complexes upon addition of NBu(4)PF(6) has also been investigated and discussed.

  6. The 2015 Academic College of Emergency Experts in India's INDO-US Joint Working Group White Paper on Establishing an Academic Department and Training Pediatric Emergency Medicine Specialists in India

    PubMed Central

    Mahajan, Prashant; Batra, Prerna; Shah, Binita R; Saha, Abhijeet; Galwankar, Sagar; Aggrawal, Praveen; Hassoun, Ameer; Batra, Bipin; Bhoi, Sanjeev; Kalra, Om Prakash; Shah, Dheeraj

    2015-01-01

    The concept of pediatric emergency medicine (PEM) is virtually nonexistent in India. Suboptimally organized prehospital services substantially hinder the evaluation, management, and subsequent transport of the acutely ill and/or injured child to an appropriate facility. Furthermore, the management of the ill child at the hospital level is often provided by overburdened providers who, by virtue of their training, lack experience in the skills required to effectively manage pediatric emergencies. Finally, the care of the traumatized child often requires the involvement of providers trained in different specialities, which further impedes timely access to appropriate care. The recent recognition of Doctor of Medicine (MD) in Emergency Medicine (EM) as an approved discipline of study as per the Indian Medical Council Act provides an unprecedented opportunity to introduce PEM as a formal academic program in India. PEM has to be developed as a 3-year superspeciality course (in PEM) after completion of MD/Diplomate of National Board (DNB) Pediatrics or MD/DNB in EM. The National Board of Examinations (NBE) that accredits and administers postgraduate and postdoctoral programs in India also needs to develop an academic program – DNB in PEM. The goals of such a program would be to impart theoretical knowledge, training in the appropriate skills and procedures, development of communication and counseling techniques, and research. In this paper, the Joint Working Group of the Academic College of Emergency Experts in India (JWG-ACEE-India) gives its recommendations for starting 3-year DM/DNB in PEM, including the curriculum, infrastructure, staffing, and training in India. This is an attempt to provide a uniform framework and a set of guiding principles to start PEM as a structured superspeciality to enhance emergency care for Indian children. PMID:26807394

  7. Interactive chemistry management system (ICMS); Field demonstration results at United Illuminating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noto, F.A.; Farrell, D.M.; Lombard, E.V.

    1988-01-01

    The authors report on a field demonstration of the interactive chemistry management system (ICMS) performed in the late summer of 1987 at the New Haven Harbor Station of United Illuminating Co. This demonstration was the first installation of the ICMS at an actual plant site. The ICMS is a computer-based system designed to monitor, diagnose, and provide optional automatic control of water and steam chemistry throughout the steam generator cycle. It is one of the diagnostic modules that comprise CE-TOPS (Combustion Engineering total on-line performance system), which continuously monitors operating conditions and suggests priority actions to increase operating efficiency, extend the performance life of boiler components, and reduce maintenance costs. By reducing the number of forced outages through early identification of potentially detrimental conditions, diagnosis of possible causes, and execution of corrective actions, the system improves unit availability and reliability.

  8. MEH-PPV film thickness influenced fluorescent quenching of tip-coated plastic optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Yusufu, A. M.; Noor, A. S. M.; Tamchek, N.; Abidin, Z. Z.

    2017-12-01

    The performance of plastic optical fiber sensors in detecting the nitroaromatic explosive 1,4-dinitrobenzene (DNB) has been investigated by fluorescence spectroscopy and analyzed using the fluorescence quenching technique. The plastic optical fiber has a 90-degree cut tip, dip-coated with thin films of the conjugated polymer MEH-PPV, poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene], as the detection layer. The thickness of the MEH-PPV coating was varied to improve the sensitivity, at the cost of gradually reduced fluorescence intensity. The fluorescence intensity of the thinner film decreased by 82% in 40 s in the presence of DNB, a 28% greater reduction reached 13 s sooner than with the thicker film.
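Fluorescence quenching of this kind is commonly quantified with the Stern-Volmer relation, I0/I = 1 + Ksv[Q]. The sketch below fits Ksv from synthetic intensity data; the intensities, concentrations, and the assumed quenching constant are illustrative, not measurements from this paper.

```python
# Synthetic quenching data generated from an assumed Ksv (1/M).
quencher = [0.0, 1e-5, 2e-5, 3e-5, 4e-5]   # DNB concentration (M)
I0 = 1000.0                                # unquenched intensity (arbitrary units)
Ksv_true = 5.0e4
I = [I0 / (1 + Ksv_true * q) for q in quencher]

# Stern-Volmer plot: (I0/I - 1) vs [Q] is linear through the origin,
# so Ksv is recovered by a through-origin least-squares slope.
xs = quencher
ys = [I0 / i - 1.0 for i in I]
Ksv_fit = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(Ksv_fit)
```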

  9. Charge Exchange Recombination Spectroscopy Based on Diagnostic Neutral Beam in HT-7 Tokamak

    NASA Astrophysics Data System (ADS)

    Shi, Yuejiang; Fu, Jia; Li, Yingying; William, Rowan; Huang, He; Wang, Fudi; Gao, Huixian; Huang, Juann; Zhou, Qian; Liu, Sheng; Zhang, Jian; Li, Jun; Xie, Yuanlai; Liu, Zhimin; Huang, Yiyun; Hu, Chundong; Wan, Baonian

    2010-02-01

    Charge exchange recombination spectroscopy (CXRS) based on a diagnostic neutral beam (DNB) installed in the HT-7 tokamak is introduced. DNB can provide a 6 A extracted current at 50 kV for 0.1 s in hydrogen. It can penetrate into the core plasma in HT-7. The CXRS system is designed to observe charge exchange (CX) transitions in the visible spectrum. CX light from the beam is focused onto 10 optical fibers, which view the plasma from -5 cm to 20 cm. The CXRS system can measure the ion temperature as low as 0.1 keV. With CXRS, the local ion temperature profile in HT-7 was obtained for the first time.
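In CXRS the local ion temperature is routinely inferred from the Doppler broadening of the charge-exchange line: for a Gaussian profile, the FWHM relates to Ti via dlam = lam0 * sqrt(8 ln2 * Ti / (m c^2)). The sketch below inverts that relation; the carbon line choice and the example width are illustrative assumptions, not HT-7 measurements.

```python
import math

AMU_EV = 931.494e6        # atomic mass unit in eV/c^2
M_C = 12.0 * AMU_EV       # carbon ion rest energy in eV (C VI is a common CXRS ion)
LAM0 = 529.05             # nm, a C VI charge-exchange line often used in CXRS

def ion_temp_eV(fwhm_nm):
    """Ion temperature (eV) from the Doppler FWHM of a Gaussian line."""
    return M_C / (8.0 * math.log(2.0)) * (fwhm_nm / LAM0) ** 2

# Example: a 0.05 nm FWHM corresponds to a few tens of eV for carbon.
print(ion_temp_eV(0.05))
```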

  10. Nitroreductase-dependent mutagenicity of p-nitrophenylhydroxylamine and its N-acetyl and N-formyl hydroxamic acids.

    PubMed

    Corbett, M D; Wei, C; Corbett, B R

    1985-05-01

    p-Nitrophenylhydroxylamine (NPH) and two hydroxamic acids derived from it were synthesized and subjected to mutagenicity testing in Salmonella typhimurium strains TA98, TA98NR, TA1538 and TA1538NR. In addition, p-dinitrobenzene (DNB), p-nitroaniline (NA) and p-nitroacetanilide (AcNA) were simultaneously examined for mutagenic action against these four tester strains. NPH, its N-acetyl (AcNPH) and N-formyl (FoNPH) derivatives, and also DNB displayed strong mutagenic action to the nitroreductase-containing strains, TA98 and TA1538. NPH was the most potent chemical in this series against both of these strains, while the two hydroxamic acids AcNPH and FoNPH, and also DNB displayed approximately the same degree of mutagenicity. In the nitroreductase-deficient strains, TA98NR and TA1538NR, the mutagenicity of these four compounds was markedly reduced. The necessity for nitroreduction in order to activate these promutagens is fairly certain; however, the lack of mutagenicity of NA and AcNA towards all four tester strains made the interpretation of these data somewhat more complicated. Several possible bioactivation pathways were presented, with one mechanism in particular being proposed. This mechanism requires only that the strong electron-withdrawing nitro group be converted to an electron-donating group by bacterial nitroreductase. Such a mechanism is unique for the bioactivation of nitro aromatics by nitroreductase, since the enzymatic reduction need not produce the intermediary hydroxylamine metabolite.

  11. Verification of bubble tracking method and DNS examinations of single- and two-phase turbulent channel flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tryggvason, Gretar; Bolotnov, Igor; Fang, Jun

    2017-03-30

    Direct numerical simulation (DNS) has been regarded as a reliable data source for the development and validation of turbulence models along with experiments. The realization of DNS usually involves a very fine mesh that resolves all relevant turbulence scales down to the Kolmogorov scale [1]. As the most computationally expensive approach compared to other CFD techniques, DNS applications used to be limited to flow studies at very low Reynolds numbers. Thanks to the tremendous growth of computing power over the past decades, the simulation capability of DNS has now started overlapping with some of the most challenging engineering problems. One such example in nuclear engineering is the turbulent coolant flow inside reactor cores. Coupled with interface tracking methods (ITM), the simulation capability of DNS can be extended to more complicated two-phase flow regimes. Departure from nucleate boiling (DNB) is the limiting critical heat flux phenomenon for the majority of accidents that are postulated to occur in pressurized water reactors (PWR) [2]. As one of the major modeling and simulation (M&S) challenges pursued by CASL, a prediction capability is being developed for the onset of DNB using the multiphase-CFD (M-CFD) approach. DNS coupled with ITM can be employed to provide closure law information for multiphase flow modeling at the CFD scale. In the presented work, research groups at NCSU and UND will focus on applying different ITM to different geometries. Higher void fraction flow analysis at reactor prototypical conditions will be performed, and novel analysis methods will be developed, implemented, and verified for the challenging flow conditions.
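A back-of-the-envelope sketch shows why resolving down to the Kolmogorov scale makes DNS so expensive: eta = (nu^3/eps)^(1/4), so the cell count per dimension grows as L/eta. The viscosity, dissipation rate, and channel scale below are rough illustrative values, not CASL simulation parameters.

```python
# Rough numbers: kinematic viscosity of hot PWR coolant, an assumed turbulent
# dissipation rate, and an illustrative channel length scale.
nu  = 1.2e-7   # m^2/s
eps = 10.0     # W/kg (assumed)
L   = 0.01     # m

# Kolmogorov length scale and the implied per-dimension resolution.
eta = (nu ** 3 / eps) ** 0.25
cells_per_dim = L / eta
print(f"eta = {eta:.2e} m, ~{cells_per_dim:.0f} cells per dimension")
```

Cubing the per-dimension count gives on the order of 10^10 cells for even this small domain, which is why DNS data is used to build closure laws for coarser M-CFD models rather than to simulate full cores directly.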

  12. Detorsion night-time bracing for the treatment of early onset idiopathic scoliosis.

    PubMed

    Moreau, S; Lonjon, G; Mazda, K; Ilharreborde, B

    2014-12-01

    Management for early onset scoliosis has recently changed, with the development of new surgical procedures. However, multiple surgeries are often required and high complication rates are still reported. Conservative management remains an alternative, serial casting achieving excellent results in young children. Better compliance and improvement over natural history have been reported with night-time bracing in adolescent idiopathic scoliosis (AIS), but this treatment has never been reported in early onset idiopathic scoliosis (EOIS). All patients treated for progressive EOIS by detorsion night-time bracing (DNB) and meeting the Scoliosis Research Society (SRS) criteria for brace studies were reviewed. Patients were instructed to wear the DNB 8 h/night, with no restriction on sports activities. Radiological parameters were compared between referral and latest follow-up. Based on the SRS criteria defined for AIS, the course of the curves was classified as follows: success group, patients with a progression of 5° or less; unsuccessful group (progression or failure), patients with a progression > 5°, patients with curves exceeding 45° at maturity or who had been recommended for or undergone surgery, patients who changed orthopaedic treatment, or who were lost to follow-up. Thirty-three patients were included (21 girls and 12 boys), with a median Cobb angle of 31° (Q1-Q3: 22-40). Age at brace initiation averaged 50 months (Q1-Q3: 25-60). Median follow-up was 102 months (Q1-Q3: 63-125). Fifteen patients (45.5%) had reached skeletal maturity at last follow-up. The success rate was 67% (22 patients), with a median Cobb angle reduction of 15° (P<0.001). Four patients stopped DNB because of a marked curve regression. Eleven patients (33%) were in the unsuccessful group; only one had surgery. All patients remained balanced in the frontal plane and normokyphotic. Initial curve magnitude and age at brace initiation appeared to be important prognostic factors. DNB is an effective conservative treatment, which can be considered a delaying tactic in the management of EOIS. This brace offers potential psychosocial and compliance benefits, and allows unconstrained spinal and chest wall growth, resulting in normokyphosis at maturity. Therapeutic study (retrospective consecutive case series): Level IV. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
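The outcome classification described above can be sketched as a small decision function: success is progression of 5° or less; progression beyond 5°, a curve exceeding 45° at maturity, or surgery counts as unsuccessful. Loss to follow-up and treatment changes are omitted here for brevity; this is an illustrative sketch, not the study's analysis code.

```python
def brace_outcome(progression_deg, final_cobb_deg, had_surgery=False):
    """Classify a braced curve per the simplified SRS-style criteria above."""
    if had_surgery or final_cobb_deg > 45 or progression_deg > 5:
        return "unsuccessful"
    return "success"

# A curve that regressed 15 degrees vs. one that progressed past 45 degrees.
print(brace_outcome(progression_deg=-15, final_cobb_deg=20))
print(brace_outcome(progression_deg=8, final_cobb_deg=48))
```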

  13. 48 CFR 2909.105 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...'s responsibility in accordance with FAR 9.105. In addition to past performance information, the...” (available on the Internet at www.epls.gov). In addition, contracting officers should base their... & Bradstreet (available on the Internet for a fee at http://www.dnb.com/). ...

  14. 48 CFR 52.204-6 - Data Universal Numbering System (DUNS) Number.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Via the Internet at http://fedgov.dnb.com/webform or if the offeror does not have internet access, it...) accounts (see Subpart 32.11) for the same concern. (b) If the offeror does not have a DUNS number, it...

  15. Cationic metal complex, carbonatobis(1,10-phenanthroline)cobalt(III) as anion receptor: Synthesis, characterization, single crystal X-ray structure and packing analysis of [Co(phen)2CO3](3,5-dinitrobenzoate)·5H2O

    NASA Astrophysics Data System (ADS)

    Sharma, Raj Pal; Singh, Ajnesh; Brandão, Paula; Felix, Vitor; Venugopalan, Paloth

    2009-03-01

    To explore the potential of [Co(phen)2CO3]+ as an anion receptor, red single crystals of [Co(phen)2CO3](dnb)·5H2O (dnb = 3,5-dinitrobenzoate) were obtained by recrystallizing the red microcrystalline product synthesized by the reaction of carbonatobis(1,10-phenanthroline)cobalt(III) chloride with the sodium salt of 3,5-dinitrobenzoic acid in aqueous medium (1:1 molar ratio). The newly synthesized complex salt has been characterized by elemental analysis, spectroscopic studies (IR, UV/visible, 1H and 13C NMR), solubility, and conductance measurements. The complex salt crystallizes in the triclinic crystal system with space group P-1, with cell dimensions a = 10.3140(8) Å, b = 12.2885(11) Å, c = 12.8747(13) Å, α = 82.095(4)°, β = 85.617(4)°, γ = 79.221(4)°, V = 1585.6(2) Å³, Z = 2. Single crystal X-ray structure determination revealed an ionic structure consisting of the cationic carbonatobis(1,10-phenanthroline)cobalt(III), the dnb anion, and five lattice water molecules. In the complex cation [Co(phen)2CO3]+, the cobalt(III) is bonded to four nitrogen atoms originating from the two phenanthroline ligands and two oxygen atoms from the bidentate carbonato group, giving an octahedral geometry around the cobalt(III) center. Supramolecular networks between ionic groups [C-H(phen)+⋯X(anion)-] through second-sphere coordination, i.e. C-H⋯O (benzoate), C-H⋯O (nitro), and C-H⋯O (water), together with electrostatic attraction and π-π interactions, stabilize the crystal lattice.

  16. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate, and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
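The Grid Convergence Index evaluation mentioned above follows the standard Richardson-extrapolation recipe for three systematically refined meshes: estimate the observed order of convergence, then bound the fine-grid discretization uncertainty with a safety factor. The solution values below are synthetic, not results from the STAR-CCM+ study.

```python
import math

# Synthetic solution values on fine, medium, and coarse meshes.
f1, f2, f3 = 1.000, 1.020, 1.100
r = 2.0     # constant grid refinement ratio between meshes
Fs = 1.25   # customary safety factor for a three-grid study

# Observed order of convergence from the three solutions.
p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)

# Fine-grid GCI: relative discretization uncertainty of f1.
gci_fine = Fs * abs((f2 - f1) / f1) / (r ** p - 1.0)
print(f"p = {p:.2f}, GCI_fine = {100 * gci_fine:.2f}%")
```

As the abstract notes, this works for quantities tied to a fixed location; a global extremum can move between meshes, which breaks the underlying assumption of pointwise convergence.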

  17. LFER and CoMFA studies on optical resolution of alpha-alkyl alpha-aryloxy acetic acid methyl esters on DACH-DNB chiral stationary phase.

    PubMed

    Carotti, A; Altomare, C; Cellamare, S; Monforte, A; Bettoni, G; Loiodice, F; Tangari, N; Tortorella, V

    1995-04-01

    The HPLC resolution of a series of racemic alpha-substituted alpha-aryloxy acetic acid methyl esters I on a pi-acid chiral stationary phase bearing N,N'-(3,5-dinitrobenzoyl)-trans-1,2-diaminocyclohexane as the chiral selector was modelled by linear free energy-related (LFER) equations and comparative molecular field analysis (CoMFA). Our results indicate that the retention process mainly depends on solute lipophilicity and steric properties, whereas enantioselectivity is primarily influenced by electrostatic and steric interactions. CoMFA provided additional information with respect to the LFER study, allowed the mixing of different subsets of I, and led to a quantitative 3D model of the steric and electrostatic factors responsible for chiral recognition.

  18. Synthesis, spectroscopic characterization and structural investigations of a new charge transfer complex of 2,6-diaminopyridine with 3,5-dinitrobenzoic acid: DNA binding and antimicrobial studies

    NASA Astrophysics Data System (ADS)

    Khan, Ishaat M.; Ahmad, Afaq; Kumar, Sarvendra

    2013-03-01

    A new charge transfer (CT) complex [(DAPH)+(DNB)-], consisting of 2,6-diaminopyridine (DAP) as donor and 3,5-dinitrobenzoic acid (DNB-H) as acceptor, was synthesized and characterized by FTIR, 1H and 13C NMR, ESI mass spectrometry, and X-ray crystallographic techniques. Hydrogen bonding (N+-H⋯O-) plays an important role in holding the cation and anion together. The CT complex shows considerable interaction with calf thymus DNA. The CT complex was also tested for antibacterial activity against two Gram-positive bacteria (Staphylococcus aureus and Bacillus subtilis) and two Gram-negative bacteria (Escherichia coli and Pseudomonas aeruginosa) using tetracycline as the standard, and for antifungal activity against Aspergillus niger, Candida albicans, and Penicillium sp. using nystatin as the standard; the results were compared with the standard drugs and significant conclusions were obtained. A polymeric network through H-bonding interactions between neighboring moieties was observed; this has been attributed to the formation of a 1:1 type CT complex.

  19. The use of nitrate, bacteria and fluorescent tracers to characterize groundwater recharge and contamination in a karst catchment, Chongqing, China

    NASA Astrophysics Data System (ADS)

    He, Qiufang; Yang, Pingheng; Yuan, Wenhao; Jiang, Yongjun; Pu, Junbin; Yuan, Daoxian; Kuang, Yinglun

    2010-08-01

    The Qingmuguan subterranean river system is located in the suburb of Chongqing, China, and it is the drinking water source that local people downstream rely on. The study aims to provide a scientific basis for groundwater protection in that area, using a hydrogeological framework, tracer tests, hydrological online monitoring, and hydrochemical and microbiological investigation, including heterotrophic plate count (HPC) and the analysis of denitrifying bacteria (DNB) and nitrobacteria (NB). The tracer tests proved simple and direct connections between two important sinkholes and the main springs, and also proved that the underground flows here are fast and turbulent. DNB and NB analyses revealed that the main recharge to the underground river in the dry season is the soil-leached water passing through the fissures of the epikarst, while in the rainy season, it is the surface water flow through sinkholes. The hydrochemical and microbiological data confirmed the notable impact of agriculture and sewage on the spring water quality. In the future, groundwater protection here should focus on targeted vulnerability mapping that yields different protection strategies for different seasons.

  20. Reproductive toxicity of a single dose of 1,3-dinitrobenzene in two ages of young adult male rats

    EPA Science Inventory

    These studies evaluated the reproductive response and the possible influence of testicular maturation on the reproductive parameters, in male rats treated with 1,3-dinitrobenzene (m-DNB). Young adult male rats (75 or 105 days of age) were given a single oral dose of 0, 8, 16, 24,...

  1. 2 CFR 176.50 - Award term-Reporting and registration requirements under section 1512 of the Recovery Act.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... registrations in the Central Contractor Registration (http://www.ccr.gov) at all times during which they have... System (DUNS) Number (http://www.dnb.com) is one of the requirements for registration in the Central... Recovery Act using the reporting instructions and data elements that will be provided online at http://www...

  2. Ignition of Ionic Liquids. Volume 2

    DTIC Science & Technology

    2010-09-01

    TOFMS time-of-flight mass spectrometry; TS transition state; VUV vacuum ultraviolet; ZPE zero-point energy. Approved for public...energies (ZPEs) were scaled by factors of 0.9613 and 0.9804, respectively, and when necessary intrinsic reaction coordinate (IRC) calculations were...oscillations in the PE reflect the vibration of the DNB molecule, including ZPE. The trajectory shows three dissociation steps, eliminating NO2 followed

  3. 78 FR 33445 - Office of Small Credit Unions (OSCUI) Grant Program Access For Credit Unions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... hardware and software necessary to convert to computerized operations. The maximum award amount for this...,000 per credit union for financial education projects that improve financial capability in the... found on D&B's Web site at http://fedgov.dnb.com/webform or by calling D&B, toll-free, at 1-866-705-5711...

  4. Vibrational signatures in the THz spectrum of 1,3-DNB: A first-principles and experimental study

    NASA Astrophysics Data System (ADS)

    Ahmed, Towfiq; Azad, Abul K.; Chellappa, Raja; Higginbotham-Duque, Amanda; Dattelbaum, Dana M.; Zhu, Jian-Xin; Moore, David; Graf, Matthias J.

    2016-05-01

    Understanding the fundamental processes of light-matter interaction is important for detection of explosives and other energetic materials, which are active in the infrared and terahertz (THz) region. We report a comprehensive study on electronic and vibrational lattice properties of structurally similar 1,3-dinitrobenzene (1,3-DNB) crystals through first-principles electronic structure calculations and THz spectroscopy measurements on polycrystalline samples. Starting from reported x-ray crystal structures, we use density-functional theory (DFT) with periodic boundary conditions to optimize the structures and perform linear response calculations of the vibrational properties at zero phonon momentum. The theoretically identified normal modes agree qualitatively with those obtained experimentally in a frequency range up to 2.5 THz and quantitatively at much higher frequencies. The latter frequencies are set by intra-molecular forces. Our results suggest that van der Waals dispersion forces need to be included to improve the agreement between theory and experiment in the THz region, which is dominated by intermolecular modes and sensitive to details in the DFT calculation. An improved comparison is needed to assess and distinguish between intra- and intermolecular vibrational modes characteristic of energetic materials.

  5. The optimal ecological factors and the denitrifying population of a denitrifying process for sulfate reducing bacteria inhibition

    NASA Astrophysics Data System (ADS)

    Li, Chunying

    2018-02-01

    SRB have a great negative impact on oil production in the Daqing Oil Field. A continuous-flow anaerobic baffled reactor (ABR) was applied to investigate the feasibility and optimal ecological factors for the inhibition of SRB by denitrifying bacteria (DNB). The results showed that the SO42- to NO3- concentration ratio (SO42-/NO3-) is the most important ecological factor. The input of NO3- and a lower COD can effectively enhance the inhibition of S2- production. The effective time of sulfate reduction is 6 h. Complete inhibition of SRB is obtained when the influent COD concentration is 600 mg/L, the SO42-/NO3- ratio is 1/1 (600 mg/L each), and N is added simultaneously in the 2# and 5# ABR chambers. By extracting the total DNA of wastewater from the effective chamber, a 16S rDNA clone library was constructed. Proteobacteria accounted for 84% of the total clones, with Neisseria as the dominant genus; the remaining 16% of the clones were Bacilli of Firmicutes. This indicates that DNB are effective and feasible for SRB inhibition.

  6. Forced Convection Heat Transfer of Subcooled Liquid Nitrogen in Horizontal Tube

    NASA Astrophysics Data System (ADS)

    Tatsumoto, H.; Shirai, Y.; Hata, K.; Kato, T.; Shiotsu, M.

    2008-03-01

    The knowledge of forced convection heat transfer of liquid hydrogen is important for the cooling design of an HTS superconducting magnet and a cold neutron moderator material. An experimental apparatus that could obtain forced flow without a pump was developed. As a first step of the study, the forced flow heat transfer of subcooled liquid nitrogen in a horizontal tube, instead of liquid hydrogen, was measured for pressures ranging from 0.3 to 2.5 MPa. The inlet temperature was varied from 78 K to around the saturation temperature. The flow velocities were varied from 0.1 to 7 m/s. The heat transfer coefficients in the non-boiling region and the departure from nucleate boiling (DNB) heat fluxes were higher for higher flow velocity and higher subcooling. The measured values of Nu/Pr0.4 in the non-boiling region were proportional to the Reynolds number (Re) to the power of 0.8. With a decrease in Re, Nu/Pr0.4 approached a constant value corresponding to that in a pool of liquid nitrogen. A correlation for the DNB heat flux was derived that describes the experimental data to within ±15%.
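    The reported non-boiling scaling, Nu/Pr^0.4 proportional to Re^0.8, has the same form as the classic Dittus-Boelter correlation. The sketch below is illustrative only: the coefficient 0.023 is the textbook Dittus-Boelter value, not the fit derived in this study.

    ```python
    def nusselt_forced_convection(re, pr, c=0.023):
        """Dittus-Boelter-type correlation: Nu = c * Re**0.8 * Pr**0.4.

        Matches the reported scaling Nu / Pr**0.4 proportional to Re**0.8;
        the coefficient c is the textbook value, not this study's fit.
        """
        return c * re ** 0.8 * pr ** 0.4

    # Doubling the Reynolds number raises Nu by a factor of 2**0.8 (about 1.74),
    # consistent with the higher heat transfer at higher flow velocity.
    nu_low = nusselt_forced_convection(1.0e5, 2.0)
    nu_high = nusselt_forced_convection(2.0e5, 2.0)
    ```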

  7. SNPP VIIRS Spectral Bands Co-Registration and Spatial Response Characterization

    NASA Technical Reports Server (NTRS)

    Lin, Guoqing; Tilton, James C.; Wolfe, Robert E.; Tewari, Krishna P.; Nishihama, Masahiro

    2013-01-01

    The Visible Infrared Imager Radiometer Suite (VIIRS) instrument onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite was launched on 28 October 2011. The VIIRS has 5 imagery spectral bands (I-bands), 16 moderate resolution spectral bands (M-bands) and a panchromatic day/night band (DNB). Performance of the VIIRS spatial response and band-to-band co-registration (BBR) was measured through intensive pre-launch tests. These measurements were made in the non-aggregated zones near the start (or end) of scan for the I-bands and M-bands, and for a limited number of aggregation modes for the DNB, in order to test requirement compliance. This paper presents results based on recently re-processed pre-launch test data. Sensor (detector) spatial impulse responses in the scan direction are parameterized in terms of ground dynamic field of view (GDFOV), horizontal spatial resolution (HSR), modulation transfer function (MTF), ensquared energy (EE) and integrated out-of-pixel (IOOP) spatial response. Results are presented for the non-aggregation, 2-sample and 3-sample aggregation zones for the I-bands and M-bands, and for a limited number of aggregation modes for the DNB. On-orbit GDFOVs measured for the 5 I-bands in the scan direction using a straight bridge are also presented. Band-to-band co-registration is quantified using the pre-launch measured band-to-band offsets. These offsets may be expressed as fractions of horizontal sampling intervals (HSIs) or of the detector spatial response parameters GDFOV or HSR. BBR values based on HSIs in the non-aggregation, 2-sample and 3-sample aggregation zones are presented. BBR matrices based on scan-direction GDFOV and HSR are compared to the BBR matrix based on HSI in the non-aggregation zone. We demonstrate that BBR based on GDFOV is a better representation of footprint overlap, and so this definition should be used in BBR requirement specifications.
We propose that HSR not be used as the primary image quality indicator, since we show that it is neither an adequate representation of the size of sensor spatial response nor an adequate measure of imaging quality.

  8. Applying n-bit floating point numbers and integers, and the n-bit filter of HDF5 to reduce file sizes of remote sensing products in memory-sensitive environments

    NASA Astrophysics Data System (ADS)

    Zinke, Stephan

    2017-02-01

    Memory-sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user-defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32-bit floating point numbers to user-defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user-defined floating point numbers are derived or determined automatically based on the data present in a product.
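    The precision loss behind such a user-defined float type can be imitated in NumPy by masking low-order mantissa bits of IEEE-754 float32 values. This is an illustrative sketch of the precision/size trade-off only, not EUMETSAT's actual pipeline, which uses genuine HDF5 user-defined datatypes together with the n-bit filter so that the dropped bits are not even stored.

    ```python
    import numpy as np

    def truncate_significand(values, keep_bits):
        """Zero out the low (23 - keep_bits) bits of a float32 trailing significand.

        Mimics the precision of an n-bit float with `keep_bits` significand bits;
        the relative truncation error is bounded by 2**-keep_bits.
        """
        bits = np.asarray(values, dtype=np.float32).view(np.uint32)
        drop = 23 - keep_bits
        mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
        return (bits & mask).view(np.float32)

    # Hypothetical high-dynamic-range radiance values (illustrative, not VIIRS data)
    radiance = np.array([3.2e-4, 1.7e-2, 4.1e1], dtype=np.float32)
    coarse = truncate_significand(radiance, 8)   # 8-bit trailing significand
    ```

    With an 8-bit significand the stored mantissa shrinks from 23 to 8 bits, which is where the roughly twofold size reduction of the radiance dataset comes from once the n-bit filter strips the unused bits.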

  9. Validation of VIIRS Cloud Base Heights at Night Using Ground and Satellite Measurements over Alaska

    NASA Astrophysics Data System (ADS)

    NOH, Y. J.; Miller, S. D.; Seaman, C.; Forsythe, J. M.; Brummer, R.; Lindsey, D. T.; Walther, A.; Heidinger, A. K.; Li, Y.

    2016-12-01

    Knowledge of Cloud Base Height (CBH) is critical to describing cloud radiative feedbacks in numerical models and is of practical significance to aviation communities. We have developed a new CBH algorithm constrained by Cloud Top Height (CTH) and Cloud Water Path (CWP) by performing a statistical analysis of A-Train satellite data. It includes an extinction-based method for thin cirrus. In the algorithm, cloud geometric thickness is derived with upstream CTH and CWP input and subtracted from CTH to generate the topmost layer CBH. The CBH information is a key parameter for an improved Cloud Cover/Layers product. The algorithm has been applied to the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi NPP spacecraft. Nighttime cloud optical properties for CWP are retrieved from the nighttime lunar cloud optical and microphysical properties (NLCOMP) algorithm based on a lunar reflectance model for the VIIRS Day/Night Band (DNB) measuring nighttime visible light such as moonlight. The DNB has innovative capabilities to fill the polar winter and nighttime gap of cloud observations which has been an important shortfall from conventional radiometers. The CBH products have been intensively evaluated against CloudSat data. The results showed the new algorithm yields significantly improved performance over the original VIIRS CBH algorithm. However, since CloudSat is now operational during daytime only due to a battery anomaly, the nighttime performance has not been fully assessed. This presentation will show our approach to assess the performance of the CBH algorithm at night. VIIRS CBHs are retrieved over the Alaska region from October 2015 to April 2016 using the Clouds from AVHRR Extended (CLAVR-x) processing system. Ground-based measurements from ceilometer and micropulse lidar at the Atmospheric Radiation Measurement (ARM) site on the North Slope of Alaska are used for the analysis. 
Local weather conditions are checked using temperature and precipitation observations at the site. CALIPSO data with near-simultaneous colocation are added for multi-layered cloud cases which may have high clouds aloft beyond the ground measurements. Multi-month statistics of performance and case studies will be shown. Additional efforts for algorithm refinements will be also discussed.

  10. Shared Roles of Halobacteriovorax and Viruses in Bacterial Mortality: The Environment Dictates the Winner

    NASA Astrophysics Data System (ADS)

    Chen, H.; Laws, E. A.; Gulig, P. A.; Berhane, T. K.; Martin, J. L.; Williams, H.

    2016-02-01

    Bacteriophages (phages) are considered to be a major contributor to bacterial mortality. Although recent evidence shows a similar role for the predatory bacterium Halobacteriovorax (HBx), this organism has been largely ignored. We designed controlled laboratory microcosm studies to examine and compare the predation of a virus and an HBx strain on Vibrio vulnificus (Vv) under a range of environmental conditions. Predator-prey models, implemented in Matlab, were used to simulate the results. The results show that although the HBx and virus both preyed on Vv, the magnitudes of their respective responses were different and were largely driven by environmental conditions. In low nutrient seawater, HBx was highly active in preying on Vv, resulting in a 4.4 log reduction of prey within 40 hours, whereas phage contributed little to bacterial mortality. However, when nutrients were added to the seawater, phage was the more active predator. At moderate nutrient concentrations (DNB 1:10 and DNB 1:100) both predators were active. Both virus and HBx grew well at salt concentrations ranging from 9 to 30 ppt. Phage reproduction was optimal at 30 ppt and also occurred at the higher salinities of 40 and 45 ppt. HBx, on the other hand, grew best at 9 ppt and did not grow at 40 and 45 ppt. At temperatures between 15 and 37 °C both predators grew well. The impact of predation on Vv was positively correlated with temperature. The collective results of this study suggest that both HBx and phages can play significant roles in bacterial mortality and hence in shaping microbial communities and cycling nutrients. However, whether HBx or phages play the larger role in any circumstance may be orchestrated by environmental conditions. These results warrant reconsideration of the roles of different biological agents and the environment in bacterial mortality.
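    Predator-prey dynamics of the kind simulated here can be sketched with a minimal Lotka-Volterra model. The study's simulations were done in Matlab with fitted models; the equations and coefficients below are generic placeholders, not the authors' model or parameters.

    ```python
    def lotka_volterra(prey0, pred0, growth, attack, efficiency, death,
                       dt=0.001, steps=5000):
        """Explicit-Euler integration of the classic Lotka-Volterra system:

            dN/dt = growth*N - attack*N*P
            dP/dt = efficiency*attack*N*P - death*P

        N is the prey density (e.g. a Vibrio-like host), P the predator
        density (e.g. HBx or phage). All coefficients are illustrative.
        """
        n, p = prey0, pred0
        for _ in range(steps):
            dn = (growth * n - attack * n * p) * dt
            dp = (efficiency * attack * n * p - death * p) * dt
            n, p = n + dn, p + dp
        return n, p

    # Placeholder parameters: prey grows, predator grazes it down and
    # declines when prey is scarce; densities oscillate around equilibrium.
    n_final, p_final = lotka_volterra(10.0, 5.0, 1.0, 0.1, 0.5, 0.4)
    ```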

  11. Substituent effects on the enantioselective retention of anti-HIV 5-aryl-delta 2-1,2,4-oxadiazolines on R,R-DACH-DNB chiral stationary phase.

    PubMed

    Altomare, C; Cellamare, S; Carotti, A; Barreca, M L; Chimirri, A; Monforte, A M; Gasparrini, F; Villani, C; Cirilli, M; Mazza, F

    1996-01-01

    A series of racemic 3-phenyl-4-(1-adamantyl)-5-X-phenyl-delta 2-1,2,4-oxadiazolines (PAdOx) were directly resolved by HPLC using a Pirkle-type stationary phase containing N,N'-(3,5-dinitrobenzoyl)-1(R),2(R)-diaminocyclohexane as the chiral selector. The more retained enantiomers have the S configuration, as demonstrated by X-ray crystallography and circular dichroism measurements. The influence of aromatic ring substituents on enantioselective retention was quantitatively assessed by traditional linear free energy-related (LFER) equations and comparative molecular field analysis (CoMFA). In good agreement with previous findings, the results from this study indicate that the increase in retention (k') is favoured mainly by the π-basicity and the hydrophilicity of the solute, whereas enantioselectivity (alpha) can be satisfactorily modeled by electronic and bulk parameters or CoMFA descriptors. The LFER equations and CoMFA models gave helpful insights into chiral recognition mechanisms.

  12. Dual targeted polymeric nanoparticles based on tumor endothelium and tumor cells for enhanced antitumor drug delivery.

    PubMed

    Gupta, Madhu; Chashoo, Gousia; Sharma, Parduman Raj; Saxena, Ajit Kumar; Gupta, Prem Narayan; Agrawal, Govind Prasad; Vyas, Suresh Prasad

    2014-03-03

    Some specific types of tumor cells and tumor endothelial cells express the CD13 protein, which acts as a receptor for Asn-Gly-Arg (NGR) motif-containing peptides. These CD13 receptors are specifically recognized and bound by the cyclic NGR (cNGR) peptide sequence, which shows high affinity and specificity toward them. The cNGR peptide was conjugated to the poly(ethylene glycol) (PEG) terminal end of the poly(lactic-co-glycolic) acid PLGA-PEG block copolymer. Ligand-conjugated nanoparticles (cNGR-DNB-NPs) encapsulating docetaxel (DTX) were then synthesized from the preformed block copolymer by the emulsion/solvent evaporation method and characterized for different parameters. In vitro cytotoxicity, cell apoptosis, and cell cycle analyses demonstrated the enhanced therapeutic potential of cNGR-DNB-NPs. Higher cellular uptake was also found for cNGR peptide-anchored NPs in HUVEC and HT-1080 cells. Free cNGR could inhibit receptor-mediated intracellular uptake of the NPs into both cell types at 37 and 4 °C, revealing the involvement of receptor-mediated endocytosis. In vivo biodistribution and antitumor efficacy studies indicated that the targeted NPs achieve higher therapeutic efficacy by targeting the tumor-specific site. The study thus shows that cNGR-functionalized PEG-PLGA-NPs could be a promising approach for efficient antitumor drug delivery.

  13. Platelet-rich plasma, low-level laser therapy, or their combination promotes periodontal regeneration in fenestration defects: a preliminary in vivo study.

    PubMed

    Nagata, Maria J H; de Campos, Natália; Messora, Michel R; Pola, Natália M; Santinoni, Carolina S; Bomfim, Suely R M; Fucini, Stephen E; Ervolino, Edilson; de Almeida, Juliano M; Theodoro, Letícia H; Garcia, Valdir G

    2014-06-01

    This study histomorphometrically analyzes the influence of platelet-rich plasma (PRP), low-level laser therapy (LLLT), or their combination on the healing of periodontal fenestration defects (PFDs) in rats. PFDs were surgically created in the mandibles of 80 rats. The animals were randomly divided into four groups: 1) C (control) and 2) PRP, in which defects were filled with blood clot or PRP, respectively; and 3) LLLT and 4) PRP/LLLT, in which defects received laser irradiation, were filled with blood clot or PRP, respectively, and were then irradiated again. Animals were euthanized at either 10 or 30 days post-surgery. Percentage of new bone (NB), density of newly formed bone (DNB), new cementum (NC), and extension of the remaining defect (ERD) were histomorphometrically evaluated. Data were statistically analyzed (analysis of variance; Tukey test, P < 0.05). At 10 days, group PRP presented ERD significantly lower than group C. At 30 days, group PRP presented NB and DNB significantly greater than group C. Groups LLLT, PRP, and PRP/LLLT showed significant NC formation at 30 days, with collagen fibers inserted obliquely or perpendicularly to the root surface. NC formation was not observed in any group C specimen. LLLT, PRP, and their combination all promoted NC formation with a functional periodontal ligament. The combination PRP/LLLT did not show additional positive effects compared to the use of either therapy alone.

  14. From OLS to VIIRS, an overview of nighttime satellite aerosol retrievals using artificial light sources

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Miller, S. D.; Reid, J. S.; Hyer, E. J.; McHardy, T. M.

    2015-12-01

    Compared to abundant daytime satellite-based observations of atmospheric aerosol, observations at night are relatively scarce. In particular, conventional satellite passive imaging radiometers, which offer expansive swaths of spatial coverage compared to non-scanning lidar systems, lack sensitivity to most aerosol types via the thermal infrared bands available at night. In this talk, we make the fundamental case for the importance of nighttime aerosol information in forecast models and the need to mitigate the existing nocturnal gap. We review early attempts at estimating nighttime aerosol optical properties using the modulation of stable artificial surface lights. Initial algorithm development using the DMSP Operational Linescan System (OLS) has graduated to refined techniques based on the Suomi-NPP Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). We present examples of these retrievals for selected cases and compare the results to available surface-based point-source validation data.

  15. Perspectives on the neuroscience of alcohol from the National Institute on Alcohol Abuse and Alcoholism.

    PubMed

    Reilly, Matthew T; Noronha, Antonio; Warren, Kenneth

    2014-01-01

    Mounting evidence over the last 40 years clearly indicates that alcoholism (alcohol dependence) is a disorder of the brain. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) has taken significant steps to advance research into the neuroscience of alcohol. The Division of Neuroscience and Behavior (DNB) was formed within NIAAA in 2002 to oversee, fund, and direct all research areas that examine the effects of alcohol on the brain, the genetic underpinnings of alcohol dependence, the neuroadaptations resulting from excessive alcohol consumption, advanced behavioral models of the various stages of the addiction cycle, and preclinical medications development. This research portfolio has produced important discoveries in the etiology, treatment, and prevention of alcohol abuse and dependence. Several of these salient discoveries are highlighted and future areas of neuroscience research on alcohol are presented. © 2014 Elsevier B.V. All rights reserved.

  16. Development of Sampling and Preservation Techniques to Retard Chemical and Biological Changes in Water Samples

    DTIC Science & Technology

    1983-06-24

    and validate methods for the analysis of the 12 munitions in water and sediment. Two high performance liquid chromatographic (HPLC-UV) systems...Methods development...munition and 4-munition groups in sediment. The method for eight munitions (DNP, RDX, TNB, DNB, 2,4-DNT, TNT, tetryl and DPA) in water samples consists of

  17. Ab initio Quantum Chemical Reaction Kinetics: Recent Applications in Combustion Chemistry (Briefing Charts)

    DTIC Science & Technology

    2015-06-28

    HMX RDX  Recent Works  See Geith et al...Propellants, Explosives, Pyrotechnics, 29, 3 (2004)  ∆Hcomb(DNB) = (5195 ± 300) kJ kg-1 (bomb calorimetry and MP2/cc-pVTZ ∆Hf), cf. HMX 9435 & RDX 9560 kJ kg-1  Vd = 8660 m s-1, cf. HMX 9100 & RDX 8750 m s-1  See Geith et al., Combust and Flame, 139, 358 (2004)  Recent synthesis (known since 1898 by

  18. Preparation of SRN1-type coupling adducts from aliphatic gem-dinitro compounds in ionic liquids.

    PubMed

    Kamimura, Akio; Toyoshima, Seiichi

    2012-04-25

    S(RN)1-type coupling adducts are readily prepared by the reaction between α-sulfonyl esters or α-cyanosulfones and gem-dinitro compounds in ionic liquids. The reactions proceed smoothly, and the recovered ionic liquids can be reused for several iterations as long as they are washed with water to remove alkali metal salts. The reaction rate is slower than that of the corresponding S(RN)1 reaction in DMSO, but neither acceleration upon irradiation nor inhibition in the presence of m-DNB is observed.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinh, Nam; Athe, Paridhi; Jones, Christopher

    The Virtual Environment for Reactor Applications (VERA) code suite is assessed in terms of capability and credibility against the Consortium for Advanced Simulation of Light Water Reactors (CASL) Verification and Validation Plan (presented herein) in the context of three selected challenge problems: CRUD-Induced Power Shift (CIPS), Departure from Nucleate Boiling (DNB), and Pellet-Clad Interaction (PCI). Capability refers to evidence of required functionality for capturing phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements against which the VERA software is assessed. This approach, in turn, enables the focused assessment of only those capabilities relevant to the challenge problem. The evaluation of VERA against the challenge problem requirements represents a capability assessment. The mechanism for assessment is the Sandia-developed Predictive Capability Maturity Model (PCMM) that, for this assessment, evaluates VERA on 8 major criteria: (1) Representation and Geometric Fidelity, (2) Physics and Material Model Fidelity, (3) Software Quality Assurance and Engineering, (4) Code Verification, (5) Solution Verification, (6) Separate Effects Model Validation, (7) Integral Effects Model Validation, and (8) Uncertainty Quantification. For each attribute, a maturity score from zero to three is assigned in the context of each challenge problem. The evaluation of these eight elements constitutes the credibility assessment for VERA.

  20. Suomi NPP VIIRS Prelaunch and On-orbit Geometric Calibration and Characterization

    NASA Technical Reports Server (NTRS)

    Wolfe, Robert E.; Lin, Guoqing; Nishihama, Masahiro; Tewari, Krishna P.; Tilton, James C.; Isaacman, Alice R.

    2013-01-01

    The Visible Infrared Imager Radiometer Suite (VIIRS) sensor was launched 28 October 2011 on the Suomi National Polar-orbiting Partnership (SNPP) satellite. VIIRS has 22 spectral bands covering the spectrum between 0.412 µm and 12.01 µm, including 16 moderate resolution bands (M-bands) with a spatial resolution of 750 m at nadir, 5 imaging resolution bands (I-bands) with a spatial resolution of 375 m at nadir, and 1 day-night band (DNB) with a near-constant 750 m spatial resolution throughout the scan. These bands are located in a visible and near infrared (VisNIR) focal plane assembly (FPA), a short- and mid-wave infrared (SWMWIR) FPA and a long-wave infrared (LWIR) FPA. All bands, except the DNB, are co-registered for proper environmental data record (EDR) retrievals. Observations from the VIIRS instrument provide long-term measurements of biogeophysical variables for climate research and a polar satellite data stream for the operational community's use in weather forecasting, disaster relief and other applications. Accurately Earth-located (geolocated) instrument data are important for retrieving accurate biogeophysical variables. This paper describes prelaunch pointing and alignment measurements, and two sets of on-orbit corrections of geolocation errors, the first of which reduced the error from 1,300 m to within 75 m (20% of the I-band pixel size), and the second of which fine-tuned scan-angle-dependent errors, bringing VIIRS geolocation products to high maturity within one and a half years of SNPP VIIRS on-orbit operations. Prelaunch calibration and the on-orbit characterization of sensor spatial impulse responses and band-to-band co-registration (BBR) are also described.

  1. A deep belief network approach using VDRAS data for nowcasting

    NASA Astrophysics Data System (ADS)

    Han, Lei; Dai, Jie; Zhang, Wei; Zhang, Changjiang; Feng, Hanlei

    2018-04-01

    Nowcasting, or very short-term forecasting of convective storms, is still a challenging problem due to the high nonlinearity and insufficient observation of convective weather. As the understanding of the physical mechanisms of convective weather is also insufficient, numerical weather models cannot predict convective storms well. Machine learning approaches provide a potential way to nowcast convective storms using various meteorological data. In this study, a deep belief network (DBN) is proposed to nowcast convective storms using real-time re-analysis meteorological data. The nowcasting problem is formulated as a classification problem. The 3D meteorological variables are fed directly to the DBN, with an input layer of dimension 6*6*80. Three hidden layers are used in the DBN, and the dimension of the output layer is two. A box-moving method is presented to provide the input features containing the temporal and spatial information. The results show that the DBN can generate reasonable predictions of the movement and growth of convective storms.
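    The input/output shaping described in the abstract (a flattened 6*6*80 meteorological patch in, a two-class decision out) can be sketched with a plain feed-forward pass in NumPy. A real DBN is pretrained layer-wise from stacked restricted Boltzmann machines; the hidden-layer sizes and random weights here are placeholders, not the paper's trained model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        # numerically stable softmax over the output logits
        e = np.exp(z - z.max())
        return e / e.sum()

    # Dimensions from the abstract: 6*6*80 inputs, 3 hidden layers, 2 outputs.
    # The hidden sizes (256, 128, 64) are illustrative placeholders.
    sizes = [6 * 6 * 80, 256, 128, 64, 2]
    weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes, sizes[1:])]

    x = rng.standard_normal(sizes[0])    # one flattened 3D meteorological patch
    for w in weights[:-1]:
        x = np.tanh(x @ w)               # hidden activations
    probs = softmax(x @ weights[-1])     # [P(convective storm), P(no storm)]
    ```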

  2. Powerloads on the front end components and the duct of the heating and diagnostic neutral beam lines at ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M. J.; Boilson, D.; Hemsworth, R. S.

    2015-04-08

    The heating and current drive beam lines (HNB) at ITER are expected to deliver ∼16.7 MW power per beam line for H beams at 870 keV and D beams at 1 MeV during the H-He and the DD/DT phases of ITER operation, respectively. The diagnostic neutral beam (DNB) line, on the other hand, shall deliver ∼2 MW power for H beams at 100 keV during both phases. The path lengths over which the beams from the HNB and DNB beam lines need to be transported are 25.6 m and 20.7 m, respectively. The transport of the beams over these path lengths results in beam losses, mainly by direct interception of the beam with the beam line components and by re-ionisation. The lost power is deposited on the surfaces of the various components of the beam line. In order to ensure the survival of these components over the operational lifetime of ITER, it is important to determine, to the best possible extent, the operational power loads and power densities on the various surfaces impacted by the beam during its transport. The main factors contributing to these are the divergence of the beamlets and the halo fraction in the beam, the beam aiming, the horizontal and vertical misalignment of the beam, and the gas profile along the beam path, which, together with the re-ionisation cross sections, determines the re-ionisation loss. The estimations have been made using a combination of a modified version of the Monte Carlo Gas Flow code (MCGF) and the BTR code. The MCGF is used to determine the gas profile in the beam line and takes into account the active gas feed into the ion source and neutraliser, the HNB-DNB cross-over, the gas entering the beamline from the ITER machine, the additional gas atoms generated in the beam line due to impacting ions, and the pumping speed of the cryopumps.
The BTR code has been used to obtain the power loads and power densities on the various surfaces of the front end components and the duct modules for different scenarios of ITER operation. The gas profile and the magnetic field distribution for each scenario have been considered in these evaluations. The worst-case power loads and power densities for each surface have been used to study their thermo-mechanical behaviour and manufacturing feasibility. The details of these calculations and the results obtained are presented and discussed.

  3. Engagement of National Board of Examinations in strengthening public health education in India: present landscape, opportunities and future directions.

    PubMed

    Sharma, Anjali; Zodpey, Sanjay; Batra, Bipin

    2014-01-01

    A trained and adequate health workforce forms the crux of designing, implementing and monitoring health programs and delivering quality health services. Education is recognized as a critical instrument for creating such trained health professionals who can effectively address 21st-century health challenges. At present, Public Health Education in India is offered through medical colleges and also outside the corridors of medical colleges, which was not the scenario earlier. Traditionally, Public Health Education has been a domain of medical colleges and was open to medical graduates only. In order to standardize Postgraduate Medical Education in India, the National Board of Examinations (NBE) was set up as an independent autonomous body, one of its kind in the country in the field of medical sciences, with the prime objective of improving the quality of medical education. The NBE has also played a significant role in enhancing Public Health Education in India through its Diplomate of National Board (DNB) programs in Social and Preventive Medicine, Health and Hospital Administration, Maternal and Child Health, Family Medicine and Field Epidemiology. It envisions creating a cadre of skilled and motivated public health professionals and developing a roadmap for postgraduate career pathways. However, there still exists a gamut of opportunities for it to engage in expanding the scope of Public Health Education. It can play a key role in the accreditation of public health programs and institutions, which can transform the present landscape of the education of health professionals. It also needs to revisit and re-initiate programs like the DNB in Tropical Medicine and Occupational Health, which were discontinued. It is imperative for the NBE to seize these opportunities and take the necessary actions to strengthen and expand the scope of Public Health Education in India.

  4. Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts

    NASA Astrophysics Data System (ADS)

    Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.

    2015-12-01

The Joint Polar Satellite System (JPSS) is the next generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series of satellites, J-1, is scheduled to launch in early 2017. J-1 will carry similar versions of the instruments that are on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite, which was launched on October 28, 2011. The Center for Satellite Applications and Research Algorithm Integration Team (STAR AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug-and-play algorithms. The Perl Chain Run Scripts have been developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. The JPSS J1 VIIRS Day Night Band (DNB) has an anomalous non-linear response at high scan angles based on prelaunch testing. The flight project has proposed multiple mitigation options through onboard aggregation, and Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS geolocation (GEO) code analysis results show that the J1 DNB GEO product cannot be generated correctly without a software update. The modified code will support both Op21 and Op21/26 and is backward compatible with S-NPP. The J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we will discuss how to use the Chain Run Scripts to verify the code change and Lookup Table (LUT) updates in ADL Block2.

  5. Proton transfer complexes based on some π-acceptors having acidic protons with 3-amino-6-[2-(2-thienyl)vinyl]-1,2,4-triazin-5(4H)-one donor: Synthesis and spectroscopic characterizations

    NASA Astrophysics Data System (ADS)

    Refat, Moamen S.; Saad, Hosam A.; Adam, Abdel Majid A.

    2011-05-01

Charge transfer complexes based on the organic base donor 3-amino-6-[2-(2-thienyl)vinyl]-1,2,4-triazin-5(4H)-one (ArNH2) and π-acceptors having acidic protons, namely picric acid (PiA), hydroquinone (Q(OH)2) and 3,5-dinitrobenzene (DNB), have been synthesized and studied spectroscopically. In all of the resulting charge transfer complexes, the -NH3+ ammonium ion was formed, in accordance with acid-base theory, through proton transfer from the acidic to the basic centers. The formation constants (KCT) and molar extinction coefficients (εCT) estimated from the spectrophotometric studies vary markedly among the complexes with the different π-acceptors. The 1:1 (donor:acceptor) charge transfer complexes [(ArNH3+)(PiA-)] (1), [(ArNH3+)(Q(OH)2-)] (2) and [(ArNH3+)(DNB-)] (3) were further characterized by elemental analysis, infrared spectra, Raman spectra, and 1H and 13C NMR spectra. The experimental elemental analyses of complexes (1), (2) and (3) agreed with the calculated values. The IR and Raman spectra of (1), (2) and (3) indicate the presence of bands around 3100 and 1600 cm-1 characteristic of -NH3+. Thermogravimetric analysis (TG) and differential scanning calorimetry (DSC) were performed to characterize the thermal stability of the synthesized charge transfer complexes. The morphological features of the starting materials and the charge transfer complexes were investigated using scanning electron microscopy (SEM) and optical microscopy.
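The formation constant (KCT) and molar extinction coefficient (εCT) mentioned in this abstract are commonly estimated from spectrophotometric data via the Benesi-Hildebrand relation for 1:1 complexes. A minimal sketch of that treatment, using synthetic absorbance data (not this paper's measurements), assuming the donor is in large excess:

```python
import numpy as np

def benesi_hildebrand(c_acceptor, c_donor, absorbance):
    """Fit the 1:1 Benesi-Hildebrand line  c_A/A = 1/eps + 1/(K*eps*c_D)
    (1 cm path length assumed) and return (K_CT, eps_CT)."""
    x = 1.0 / np.asarray(c_donor)
    y = c_acceptor / np.asarray(absorbance)
    slope, intercept = np.polyfit(x, y, 1)
    eps = 1.0 / intercept          # intercept = 1/eps
    K = intercept / slope          # slope = 1/(K*eps)
    return K, eps

# Synthetic data generated from K = 500 L/mol, eps = 2000 L/(mol cm)
K_true, eps_true, cA = 500.0, 2000.0, 1e-4
cD = np.array([0.002, 0.004, 0.008, 0.016, 0.032])
A = eps_true * cA * K_true * cD / (1.0 + K_true * cD)  # 1:1 binding isotherm
K_fit, eps_fit = benesi_hildebrand(cA, cD, A)
print(round(K_fit), round(eps_fit))
```

On the exact synthetic isotherm the fit recovers the input constants; with real absorbance data the linearity of the Benesi-Hildebrand plot is itself a check of 1:1 stoichiometry.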

  6. Development of an Analytical Method for Explosive Residues in Soil,

    DTIC Science & Technology

    1987-06-01

confirm peak identities. The eluent for both columns should be 50:50 methanol-water. The elution time for all the analytes of interest on the LC-18 column ... nitrate at 1.77 min for LC-8, 1.73 min for LC-DP, and 1.80 for LC-1. Table A2. Instrument calibration results for HMX. [The remainder of this excerpt, calibration and retention-time tables for TNT, HMX, DNB, TNB, RDX and tetryl, is garbled beyond recovery.] Approved for public release

  7. CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, Natalie

    2016-09-30

COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh comprised of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, its simulation run time is not computationally expensive, on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can then be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.
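The two modes described for the mixing coefficient β, a constant value or a Reynolds-number correlation, can be sketched as below. The power-law coefficients here are placeholders for illustration only, not CTF's actual calibrated values:

```python
def mixing_coefficient(reynolds, a=0.0058, b=-0.1, beta_const=None):
    """Two-phase turbulent mixing coefficient for a subchannel code.

    If beta_const is given, return it (constant-beta mode); otherwise
    evaluate an illustrative power-law correlation beta = a * Re**b.
    The coefficients a and b are hypothetical, not CTF's values.
    """
    if beta_const is not None:
        return beta_const
    return a * reynolds ** b

# Constant mode vs. correlation mode at a typical subchannel Reynolds number
print(mixing_coefficient(5.0e5, beta_const=0.005))
print(mixing_coefficient(5.0e5))
```

Calibration against STAR-CCM+ results, as the abstract describes, would amount to adjusting a and b (or the constant) until CTF reproduces the high-fidelity mixing behavior.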

  8. Assessing microbial competition in a hydrogen-based membrane biofilm reactor (MBfR) using multidimensional modeling.

    PubMed

    Martin, Kelly J; Picioreanu, Cristian; Nerenberg, Robert

    2015-09-01

The membrane biofilm reactor (MBfR) is a novel technology that safely delivers hydrogen to the base of a denitrifying biofilm via gas-supplying membranes. While hydrogen is an effective electron donor for denitrifying bacteria (DNB), it also supports sulfate-reducing bacteria (SRB) and methanogens (MET), which consume hydrogen and create undesirable by-products. SRB and MET are only competitive for hydrogen when local nitrate concentrations are low; therefore, SRB and MET primarily grow near the base of the biofilm. In an MBfR, hydrogen concentrations are greatest at the base of the biofilm, making SRB and MET more likely to proliferate in an MBfR system than in a conventional biofilm reactor. Modeling results showed that, because of this, control of the hydrogen concentration via the intramembrane pressure was a key tool for limiting SRB and MET development. Another means is biofilm management; the model supported both sloughing and erosive detachment. For the conditions simulated, maintaining thinner biofilms promoted higher denitrification fluxes and limited the presence of SRB and MET. The 2-d modeling showed that periodic biofilm sloughing helped control slow-growing SRB and MET. Moreover, the rough (non-flat) membrane assembly in the 2-d model provided a special niche for SRB and MET that was not represented in the 1-d model. This study compared 1-d and 2-d biofilm model applicability for simulating competition in counter-diffusional biofilms. Although more computationally expensive, the 2-d model captured important mechanisms unseen in the 1-d model. © 2015 Wiley Periodicals, Inc.
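The counter-diffusional geometry described above (hydrogen highest at the membrane base, nitrate diffusing in from the bulk) can be illustrated with a minimal steady-state 1-d sketch. For first-order consumption, each substrate follows the classic cosh profile from its own source boundary; all parameter values below are illustrative only, not taken from this study:

```python
import numpy as np

# Steady-state profiles in a counter-diffusional biofilm.
# H2 enters at the membrane (x = 0); nitrate enters from the bulk (x = L).
# First-order consumption gives c(x) = c_s * cosh(phi*(1 - s)) / cosh(phi),
# where s is the fractional distance from the source boundary and
# phi = L * sqrt(k / D) is the Thiele modulus.

L = 200e-6          # biofilm thickness, m (illustrative)
k = 0.5             # first-order rate constant, 1/s (illustrative)
D = 1.0e-9          # diffusivity, m^2/s (illustrative)
phi = L * np.sqrt(k / D)

x = np.linspace(0.0, L, 101)
h2  = np.cosh(phi * (1 - x / L)) / np.cosh(phi)   # normalized, source at x = 0
no3 = np.cosh(phi * x / L) / np.cosh(phi)         # normalized, source at x = L

# SRB/methanogens are competitive where nitrate is scarce but H2 is plentiful
niche = (no3 < 0.2) & (h2 > 0.5)
print("SRB/MET niche spans x/L =", x[niche][0] / L, "to", x[niche][-1] / L)
```

The overlap region falls at the membrane side of the film, consistent with the abstract's observation that SRB and MET grow near the base, where hydrogen is abundant and nitrate is depleted.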

  9. Development of Optimized Core Design and Analysis Methods for High Power Density BWRs

    NASA Astrophysics Data System (ADS)

    Shirvan, Koroush

Increasing the economic competitiveness of nuclear energy is vital to its future. Improving the economics of BWRs is the main goal of this work, focusing on designing cores with higher power density to reduce the BWR capital cost. Generally, the core power density in BWRs is limited by the thermal Critical Power of its assemblies, below which heat removal can be accomplished with low fuel and cladding temperatures. The present study investigates both increases in the heat transfer area between the fuel and coolant and changes in operating parameters to achieve higher power levels while meeting the appropriate thermal as well as materials and neutronic constraints. A scoping study is conducted under the constraints of using fuel with cylindrical geometry, traditional materials and enrichments below 5% to enhance its licensability. The reactor vessel diameter is limited to the largest proposed thus far. The BWR with High Power Density (BWR-HD) is found to have a power level of 5000 MWth, equivalent to a 26% uprated ABWR, resulting in 20% cheaper O&M and capital costs. This is achieved by utilizing the same number of assemblies, but with wider 16x16 assemblies and 50% shorter active fuel than that of the ABWR. The fuel rod diameter and pitch are reduced to just over 45% of the ABWR values. Traditional cruciform control rods are used, which restricts the assembly span to less than 1.2 times the current GE14 design due to the limitation on shutdown margin. Thus, it is possible to increase the power density and specific power by 65%, while maintaining the nominal ABWR Minimum Critical Power Ratio (MCPR) margin. The plant systems outside the vessel are assumed to be the same as in the ABWR-II design, utilizing a combination of active and passive safety systems. Safety analyses applied a void reactivity coefficient calculated by SIMULATE-3 for an equilibrium cycle core that showed a 15% less negative coefficient for the BWR-HD compared to the ABWR. 
The feedwater temperature was kept the same for the BWR-HD and ABWR, which resulted in a 4 K cooler core inlet temperature for the BWR-HD given that its feedwater makes up a larger fraction of total core flow. The stability analysis using the STAB and S3K codes showed satisfactory results for the hot channel, coupled regional out-of-phase and coupled core-wide in-phase modes. A RELAP5 model of the ABWR system was constructed and applied to six transients for the BWR-HD and ABWR. The ΔMCPRs during all the transients were found to be equal or smaller for the new design, and the core remained covered for both. The lower void coefficient along with the smaller core volume proved to be advantageous for the simulated transients. Helical Cruciform Fuel (HCF) rods were proposed in prior MIT studies to enhance the fuel surface-to-volume ratio. In this work, higher fidelity models (e.g. CFD instead of subchannel methods for the hydraulic behaviour) are used to investigate the resolution needed for accurate assessment of the HCF design. For neutronics, conserving the fuel area of cylindrical rods results in a different reactivity level with a lower void coefficient for the HCF design. In single-phase flow, for which experimental results existed, the friction factor is found to be sensitive to the HCF geometry and cannot be calculated using current empirical models. A new approach for the analysis of flow crisis conditions for HCF rods, in the context of Departure from Nucleate Boiling (DNB) and dryout, using the two-phase interface tracking method was proposed and initial results are presented. It is shown that the twist of the HCF rods promotes detachment of a vapour bubble along the elbows, which indicates no possibility of an early DNB for the HCF rods and in fact a potential for a higher DNB heat flux. 
Under annular flow conditions, it was found that the twist suppressed the liquid film thickness on the HCF rods at the locations of the highest heat flux, which increases the possibility of reaching early dryout. It was also shown that modeling the 3D heat and stress distribution in the HCF rods is necessary for accurate steady state and transient analyses. (Abstract shortened by UMI.) (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  10. Superfund Record of Decision (EPA Region 4): Milan Army Ammunition Plant, TN. (First remedial action), September 1992. Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-09-30

    The 22,436-acre Milan Army Ammunition Plant (MAAP) is located in western Tennessee, 5 miles east of Milan, Tennessee. The facility was constructed in 1941 to produce and store fuses, boosters, and small- and large-caliber ammunition. The ROD addresses an interim remedy for the contaminated ground water beneath and immediately downgradient from the former ponds as OU1. The primary contaminants of concern affecting the ground water are VOCs, including carbon disulfide; other organics, including HMX, RDX, 2,4,6-TNT, 2,4-DNT, 2,6-DNT, 1,3-DNB, 1,3,5-trinitrobenzene, and nitrobenzene; and inorganics, including nitrate.

  11. MORB mantle hosts the missing Eu (Sr, Nb, Ta and Ti) in the continental crust: New perspectives on crustal growth, crust-mantle differentiation and chemical structure of oceanic upper mantle

    NASA Astrophysics Data System (ADS)

    Niu, Yaoling; O'Hara, Michael J.

    2009-09-01

    We have examined the high quality data of 306 mid-ocean ridge basalt (MORB) glass samples from the East Pacific Rise (EPR), near-EPR seamounts, Pacific Antarctic Ridge (PAR), near-PAR seamounts, Mid-Atlantic Ridge (MAR), and near-MAR seamounts. The data show a correlated variation between Eu/Eu* and Sr/Sr*, and both decrease with decreasing MgO, pointing to the effect of plagioclase crystallization. The observation that samples with MgO > 9.5 wt.% (before plagioclase on the liquidus) show Eu/Eu* > 1 and Sr/Sr* > 1 and that none of the major phases (i.e., olivine, orthopyroxene, clinopyroxene, spinel and garnet) in the sub-ridge mantle melting region can effectively fractionate Eu and Sr from otherwise similarly incompatible elements indicates that the depleted MORB mantle (DMM) possesses excess Sr and Eu, i.e., [Sr/Sr*]DMM > 1 and [Eu/Eu*]DMM > 1. Furthermore, the well-established observation that DNb ≈ DTh, DTa ≈ DU and DTi ≈ DSm during MORB mantle melting, yet primitive MORB melts all have [Nb/Th]PMMORB > 1, [Ta/U]PMMORB > 1 and [Ti/Sm]PMMORB > 1 (where PM indicates primitive mantle normalized), also points to the presence of excess Nb, Ta and Ti in the DMM, i.e., [Nb/Th]PMDMM > 1, [Ta/U]PMDMM > 1 and [Ti/Sm]PMDMM > 1. The excesses of Eu, Sr, Nb, Ta and Ti in the DMM complement the well-known deficiencies of these elements in the bulk continental crust (BCC). These new observations, which support the notion that the DMM and BCC are complementary in terms of the overall abundances of incompatible elements, offer new insights into the crust-mantle differentiation. These observations are best explained by partial melting of amphibolite of MORB protolith during continental collision, which produces andesitic melts with a remarkable compositional (major and trace element abundances as well as key elemental ratios) similarity to the BCC, as revealed by andesites in southern Tibet produced during the India-Asia continental collision. 
An average amphibolite of MORB protolith consists of ~ 66.4% amphibole, ~ 29.2% plagioclase and 4.4% ilmenite. In terms of simple modal melting models, the bulk distribution coefficient ratios D2Eu/(Sm + Gd) = 1.21, D2Sr/(Pr + Nd) = 1.04, DNb/Th = 44, DTa/U = 57, DTi/Sm = 3.39 and DNb/Ta = 1.30 readily explain the small but significant negative Eu and Sr anomalies, the moderate negative Ti anomaly and the huge negative Nb and Ta anomalies, as well as the more sub-chondritic Nb/Ta ratio, in the syncollisional andesitic melt that is characteristic of and contributes to the continental crust mass. These results support the hypothesis that continental collision zones are primary sites of net continental crust growth, whereas the standard "island arc" model has many more difficulties than certainties. That is, it is continental collision (vs. "island arc magmatism" or "episodic super mantle avalanche events") that produces and preserves the juvenile crust, and hence maintains net continental growth. The data also allow us to establish the robust composition of depleted and most primitive (or "primary") MORB melt with 13% MgO. This, together with the estimated positive Eu and Sr anomalies in the DMM, further permits the estimate that the DMM may occupy the uppermost ~ 680 km of the convective mantle, following the tradition that the DMM lies in the shallowest mantle. However, the tradition may be in error. The seismic low velocity zone (LVZ) may be compositionally stratified, with small melt fractions concentrated towards the interface with the growing lithosphere because of buoyancy. Such small melt fractions, enriched in volatiles and incompatible elements, continue to metasomatize the growing lithosphere before it reaches its full thickness after ~ 70 Myrs. Hence, the oceanic mantle lithosphere is a huge enriched geochemical reservoir. 
On the other hand, deep portions of the LVZ, which are thus relatively depleted, become the primary source feeding the ridge because of ridge-suction-driven lateral material supply to form the crust and much of the lithosphere at and in the vicinity of the ridge.
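The Eu/Eu* and Sr/Sr* anomalies discussed in this abstract are conventionally computed from normalized abundances, with the expected (interpolated) value taken as the geometric mean of the neighboring elements, Sm-Gd for Eu and Pr-Nd for Sr, matching the element pairs in the bulk-D ratios quoted above. A sketch with a hypothetical glass composition (the primitive-mantle normalizing values are quoted from memory after McDonough & Sun, 1995, and should be verified before real use):

```python
import math

# Primitive-mantle normalizing values (ppm); quoted from memory, verify.
PM = {"Pr": 0.254, "Nd": 1.250, "Sm": 0.406, "Eu": 0.154, "Gd": 0.544, "Sr": 19.9}

def eu_anomaly(sample):
    """Eu/Eu* = Eu_N / sqrt(Sm_N * Gd_N), where _N is PM-normalized."""
    n = {el: sample[el] / PM[el] for el in sample}
    return n["Eu"] / math.sqrt(n["Sm"] * n["Gd"])

def sr_anomaly(sample):
    """Sr/Sr* = Sr_N / sqrt(Pr_N * Nd_N), interpolating Sr between Pr and Nd."""
    n = {el: sample[el] / PM[el] for el in sample}
    return n["Sr"] / math.sqrt(n["Pr"] * n["Nd"])

# Hypothetical primitive MORB glass (ppm); not a sample from this study
glass = {"Pr": 1.2, "Nd": 7.0, "Sm": 2.6, "Eu": 1.05, "Gd": 3.6, "Sr": 130.0}
print(round(eu_anomaly(glass), 2), round(sr_anomaly(glass), 2))
```

Values above 1, as in this hypothetical composition, would indicate the excess Eu and Sr that the abstract attributes to the depleted MORB mantle.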

  12. Amphibole Fractional Crystallization and Delamination in Arc Roots: Implications for the `Missing' Nb Reservoir in the Earth

    NASA Astrophysics Data System (ADS)

    Galster, F.; Chatterjee, R. N.; Stockli, D. F.

    2017-12-01

Most geologic processes should not fractionate Nb from Ta, but Earth's major silicate reservoirs have subchondritic Nb/Ta values. Nb/Ta values of >10000 basalts and basaltic andesites from different tectonic settings (GEOROC) cluster around 16, indistinguishable from upper mantle values. In contrast, Nb/Ta values in more evolved arc volcanics are progressively lower, reaching continental crust estimates, and correlate negatively with SiO2 (see figure) and positively with TiO2 and MgO. This global trend suggests that differentiation processes in magmatic arcs could explain bulk crustal Nb/Ta estimates. Understanding processes that govern fractionation of Nb from Ta in arcs can provide key insights on continental crust formation and help identify Earth's `missing' Nb reservoir. Ti-rich phases (rutile, titanite and ilmenite) have DNb/DTa <1, and therefore, their fractionation from mafic to intermediate liquids cannot explain the observed trend. Instead, fractionation of biotite and amphibole could lower Nb/Ta values in the evolved liquid. Lack of correlation between Nb/Ta and K2O in global volcanic rocks implies that biotite plays a minor role in fractionating Nb from Ta during differentiation. Experimental petrology and evidence from exposed arc sections indicate that amphibole fractionation and delamination of island arc roots can explain the andesitic composition of bulk continental crust. Experimental studies have shown that amphibole Mg# correlates with DNb/DTa and that amphibole could effectively fractionate Nb from Ta. Preliminary data from lower to middle crustal amphiboles from preserved arcs show sub- to super-chondritic Nb/Ta up to >60. This suggests that delamination of amphibole-rich cumulates can be a viable mechanism for the preferential removal of Nb from the continental crust. Future examination of Nb/Ta ratios in lower crustal amphiboles from various preserved arcs will provide improved constraints on the Nb-Ta paradox of the silicate Earth.
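The effect of amphibole fractionation on the melt's Nb/Ta ratio can be illustrated with a standard Rayleigh fractional-crystallization sketch: each element evolves as C = C0 * F**(D - 1), so the ratio evolves as F**(D_Nb - D_Ta). The partition coefficients and melt fraction below are illustrative placeholders, not measured values from this study:

```python
def rayleigh_ratio(r0, F, D_num, D_den):
    """Nb/Ta of the residual liquid after Rayleigh fractional crystallization.

    Each element follows C = C0 * F**(D - 1), so the element ratio evolves
    as r0 * F**(D_num - D_den), where F is the remaining melt fraction and
    D_num, D_den are the bulk partition coefficients of Nb and Ta.
    """
    return r0 * F ** (D_num - D_den)

# Illustrative only: mantle-like initial Nb/Ta = 16 and an amphibole-bearing
# cumulate with D_Nb > D_Ta (high-Mg# amphibole), after 60% crystallization.
print(round(rayleigh_ratio(16.0, 0.4, D_num=0.8, D_den=0.5), 2))
```

With D_Nb > D_Ta, as the abstract reports for high-Mg# amphibole, the residual liquid's Nb/Ta drops below the mantle value, in the direction of continental crust estimates, while the amphibole cumulate retains the complementary high Nb/Ta.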

  13. Use of VIIRS DNB Data to Monitor Power Outages and Restoration for Significant Weather Events

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Molthan, Andrew

    2008-01-01

NASA's Short-term Prediction Research and Transition (SPoRT) project operates from NASA's Marshall Space Flight Center in Huntsville, Alabama. The team provides unique satellite data to the National Weather Service (NWS) and other agencies and organizations for weather analysis. While much of its work is focused on improving short-term weather forecasting, the SPoRT team supported damage assessment and response to Superstorm Sandy by providing imagery that highlighted regions without power. The team used data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite. The VIIRS low-light sensor, known as the day-night band (DNB), can detect nighttime light from wildfires, urban and rural communities, and other human activity which emits light. It can also detect moonlight reflected from clouds and surface features. Using real-time VIIRS data collected by our collaborative partner at the Space Science and Engineering Center of the University of Wisconsin, the SPoRT team created composite imagery to help detect power outages and restoration. This blackout imagery allowed emergency response teams from a variety of agencies to better plan and marshal resources for recovery efforts. The blackout product identified large-scale outages, offering a comprehensive perspective beyond the patchwork GIS mapping of outages that utility companies provide based on customer complaints. To support the relief efforts, the team provided its imagery to the USGS data portal, which the Federal Emergency Management Agency (FEMA) and other agencies used in their relief efforts. The team's product helped FEMA, the U.S. Army Corps of Engineers, and the U.S. Army monitor regions without power as part of their disaster response activities. Disaster responders used the images to identify possible outages and effectively distribute relief resources. 
An enhanced product is being developed and integrated into a web mapping service (WMS) for dissemination and use by a broader end-user community.
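The blackout-composite idea described above, comparing a post-event DNB scene against a pre-event baseline so that darkened pixels stand out, can be sketched as a percent-of-normal ratio. This is a simplified illustration, not SPoRT's operational algorithm:

```python
import numpy as np

def blackout_ratio(baseline_stack, post_event, cloud_mask=None):
    """Percent-of-normal nighttime lights from VIIRS DNB radiances.

    baseline_stack: (n_nights, H, W) cloud-free pre-event radiances,
    reduced to a per-pixel median composite; post_event: (H, W) scene.
    Ratios well below 1 flag candidate power outages. cloud_mask is an
    optional boolean (H, W) array, True where the post-event scene is cloudy.
    """
    baseline = np.nanmedian(baseline_stack, axis=0)
    ratio = post_event / np.where(baseline > 0, baseline, np.nan)
    if cloud_mask is not None:
        ratio = np.where(cloud_mask, np.nan, ratio)
    return ratio

# Tiny synthetic example: a 2x2 "city" where one pixel loses ~90% of its light
base = np.full((5, 2, 2), 20.0)               # five pre-event nights
post = np.array([[20.0, 2.0], [20.0, 20.0]])  # upper-right pixel goes dark
print(blackout_ratio(base, post))
```

In practice the hard part is everything this sketch ignores: cloud screening, varying moonlight, and viewing geometry, which is why the operational product relies on carefully built composites.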

  14. An Antarctic Magnetometer Chain Along the Cusp Latitude: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Liu, Y.

    2016-12-01

A magnetometer chain from Zhongshan Station to Dome-A in Antarctica has been in place since February 2009, consisting of five fluxgate magnetometers: one regular magnetometer at Zhongshan Station and four low-power magnetometers along the cusp latitude in the southern hemisphere, over a distance of 1260 km. It is one part of the magnetometer network on the Antarctic continent, filling the void in magnetic observation over east-southern Antarctica and greatly enlarging the coverage of Zhongshan Station. It is also magnetically conjugate with the Svalbard region in the Arctic, with a leg extending to DNB on the east coast of Greenland. Conjugate observations among these magnetometers can provide excellent tracking of a series of typical space physics phenomena such as FTEs, TCVs, MIEs, ULF waves, etc.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.
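The DNBR figure of merit evaluated here is, at each axial location, the ratio of the predicted critical heat flux to the local heat flux; the margin metric is its minimum over the channel (MDNBR). A schematic calculation with made-up axial profiles, not the VERA-CS evaluation itself:

```python
def minimum_dnbr(q_local, q_chf):
    """DNBR(z) = q''_CHF(z) / q''_local(z) along a channel.

    Returns the minimum DNBR and the index of the limiting axial node.
    Values below ~1 would indicate departure from nucleate boiling.
    """
    dnbr = [chf / q for q, chf in zip(q_local, q_chf)]
    return min(dnbr), dnbr.index(min(dnbr))

# Illustrative axial heat-flux and CHF profiles (MW/m^2), not plant data
q_local = [0.6, 1.1, 1.5, 1.3, 0.8]
q_chf   = [3.0, 2.6, 2.1, 1.9, 2.0]
mdnbr, node = minimum_dnbr(q_local, q_chf)
print(round(mdnbr, 2), node)
```

Comparing MDNBR between the high-flow and low-flow cases, as the abstract does, identifies which scenario is more DNB limiting.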

  16. NASA's Black Marble Nighttime Lights Product Suite

    NASA Technical Reports Server (NTRS)

    Wang, Zhuosen; Sun, Qingsong; Seto, Karen C.; Oda, Tomohiro; Wolfe, Robert E.; Sarkar, Sudipta; Stevens, Joshua; Ramos Gonzalez, Olga M.; Detres, Yasmin; Esch, Thomas; hide

    2018-01-01

NASA's Black Marble nighttime lights product suite (VNP46) is available at 500 m resolution since January 2012, with data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite. The retrieval algorithm, developed and implemented for routine global processing at NASA's Land Science Investigator-led Processing System (SIPS), utilizes all high-quality, cloud-free, atmospheric-, terrain-, vegetation-, snow-, lunar-, and stray light-corrected radiances to estimate daily nighttime lights (NTL) and other intrinsic surface optical properties. Key algorithm enhancements include: (1) lunar irradiance modeling to resolve non-linear changes in phase and libration; (2) vector radiative transfer and lunar bidirectional surface anisotropic reflectance modeling to correct for atmospheric and BRDF (Bidirectional Reflectance Distribution Function) effects; (3) geometric-optical and canopy radiative transfer modeling to account for seasonal variations in NTL; and (4) temporal gap-filling to reduce persistent data gaps. Extensive benchmark tests at representative spatial and temporal scales were conducted on the VNP46 time series record to characterize the uncertainties stemming from upstream data sources. Initial validation results are presented together with example case studies illustrating the scientific utility of the products. This includes an evaluation of temporal patterns of NTL dynamics associated with urbanization, socioeconomic variability, cultural characteristics, and displaced populations affected by conflict. Current and planned activities under the Group on Earth Observations (GEO) Human Planet Initiative are aimed at evaluating the products at different geographic locations and time periods representing the full range of retrieval conditions.
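The temporal gap-filling step listed as enhancement (4) can be illustrated with a much-simplified stand-in: replace missing nightly radiances with a centered temporal median of nearby valid observations. This sketch is for illustration only and is not the VNP46 algorithm:

```python
import numpy as np

def gap_fill(series, window=5):
    """Fill missing nightly radiances (NaN) with the median of valid
    observations in a centered temporal window, a toy stand-in for the
    VNP46 gap-filling enhancement."""
    series = np.asarray(series, dtype=float)
    filled = series.copy()
    half = window // 2
    for i in np.where(np.isnan(series))[0]:
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        neighborhood = series[lo:hi]
        if np.any(~np.isnan(neighborhood)):      # leave the gap if no data
            filled[i] = np.nanmedian(neighborhood)
    return filled

# Hypothetical nightly radiances with cloud-induced gaps
nightly = [10.0, 11.0, np.nan, 12.0, np.nan, np.nan, 11.0]
print(gap_fill(nightly))
```

The operational product uses far richer information (angular, lunar, and quality flags) when filling gaps; the point here is only the structure of a temporal fill.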

  17. Remedial Investigation Report for Lake City Army Ammunition Plant. Volume 1

    DTIC Science & Technology

    1990-06-01

[Garbled table excerpt: analytical results for explosive compounds (1,3-DNB, 1,3,5-TNB, HMX, RDX) not recoverable.] Evidence suggests that 1,1,2-TCA is embryotoxic to chicken eggs (Elovaara 1979). 1,1,2-TCA was found to be weakly mutagenic

  18. Triphenylbenzene Sensor for Selective Detection of Picric Acid.

    PubMed

    Nagendran, S; Vishnoi, Pratap; Murugavel, Ramaswamy

    2017-07-01

A C3-symmetric triphenylbenzene-based photoluminescent compound, 1,3,5-tris(4'-(N-methylamino)phenyl)benzene ([NHMe]3TAPB), has been synthesized by mono-N-methylation of 1,3,5-tris(4'-aminophenyl)benzene (TAPB) and structurally characterized. [NHMe]3TAPB acts as a selective fluorescent sensor for picric acid (PA) with a detection limit as low as 2.25 ppm at a signal-to-noise ratio of 3. Other related analytes (i.e. TNT, DNT and DNB) show very little effect on the fluorescence intensity of [NHMe]3TAPB. The selectivity is triggered by proton transfer from picric acid to the fluorophore and ground-state complex formation between the protonated fluorophore and the picrate anion through hydrogen bonding interactions. The fluorescence lifetime measurements reveal the static nature of the fluorescence quenching.
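A detection limit quoted "at a signal-to-noise ratio of 3," as in this abstract, is conventionally computed as LOD = 3 * sigma_blank / slope, where the slope comes from a linear calibration of the sensor response versus analyte concentration. A sketch with synthetic numbers (not the paper's calibration data):

```python
import numpy as np

def detection_limit(conc, response, blank_sd):
    """LOD at S/N = 3: LOD = 3 * sigma_blank / |slope| of the linear
    calibration of fluorescence response vs. concentration."""
    slope, _ = np.polyfit(conc, response, 1)
    return 3.0 * blank_sd / abs(slope)

# Synthetic quenching calibration: intensity falls linearly with analyte
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])           # ppm (hypothetical)
response = np.array([100.0, 80.0, 60.0, 40.0, 20.0])  # quenched intensity
print(round(detection_limit(conc, response, blank_sd=7.5), 2))
```

With real quenching data one would typically also check a Stern-Volmer plot (F0/F vs. concentration) for the linearity expected of the static quenching the lifetime measurements indicate.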

  19. Effects of Bubble-Mediated Processes on Nitrous Oxide Dynamics in Denitrifying Bioreactors

    NASA Astrophysics Data System (ADS)

    McGuire, P. M.; Falk, L. M.; Reid, M. C.

    2017-12-01

    To mitigate groundwater and surface water impacts of reactive nitrogen (N), agricultural and stormwater management practices can employ denitrifying bioreactors (DNBs) as low-cost solutions for enhancing N removal. Due to the variable nature of hydrologic events, DNBs experience dynamic flows which can impact physical and biological processes within the reactors and affect performance. A particular concern is incomplete denitrification, which can release the potent greenhouse gas nitrous oxide (N2O) to the atmosphere. This study aims to provide insight into the effects of varying hydrologic conditions upon the operation of DNBs by disentangling abiotic and biotic controls on denitrification and N2O dynamics within a laboratory-scale bioreactor. We hypothesize that under transient hydrologic flows, rising water levels lead to air entrapment and bubble formation within the DNB porous media. Mass transfer of oxygen (O2) between trapped gas and liquid phases creates aerobic microenvironments that can inhibit N2O reductase (NosZ) enzymes and lead to N2O accumulation. These bubbles also retard N2O transport and make N2O unavailable for biological reduction, further enhancing atmospheric fluxes when water levels fall. The laboratory-scale DNB permits measurements of longitudinal and vertical profiles of dissolved constituents as well as trace gas concentrations in the reactor headspace. We describe a set of experiments quantifying denitrification pathway biokinetics under steady-state and transient hydrologic conditions and evaluate the role of bubble-mediated processes in enhancing N2O accumulation and fluxes. We use sulfur hexafluoride and helium as dissolved gas tracers to examine the impact of bubble entrapment upon retarded gas transport and enhanced trace gas fluxes. A planar optode sensor within the bioreactor provides near-continuous 2-D profiles of dissolved O2 within the bioreactor and allows for identification of aerobic microenvironments. 
We use qPCR to examine the relative abundance of the denitrifying genes nitrate reductase and NosZ within the bioreactor and explore gradients in denitrification biomarkers coinciding with denitrification intermediate profiles. Insights gained from this study will advance understanding of gas dynamics within environmental porous media.

  20. Cross-links (XL) and Zn action in ferritin related to an H-specific site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yablonski, M.J.; Theil, E.C.

    1991-03-15

Zn and subunit cross-links (F2DNB) alter ferritin iron core formation in vivo and in vitro; the effect is observed in ferritins composed of two subunit types (H and L). Protein coats from sheep spleen ferritin (SSF) (± XL), a model for a protein with H and L subunits (1:1), and horse spleen ferritin (HSF), a model for an H-deficient protein, were reconstituted with Fe2+, ± Zn, at pH 6.1 and 7.0 in order to investigate the effects of Zn and XLs on H and L subunits. Core formation was measured both as ΔA420 and as the accessibility of Fe2+ to 1,10-phenanthroline. At pH 6.1, Zn decreased the ΔA420 at 1 min ≥87-fold (SSF) or 15-fold (HSF). XLs (± Zn) decreased ΔA420 at 1 min similarly; at pH 7.0, Zn reduced ΔA420 at 1 min in SSF 3-fold, with no effect on HSF. At both pH values, Zn increased accessibility equally for SSF and HSF. The data indicate that Zn has different effects on core formation measured as ΔA420 at 1 min or as Fe2+ entry into ferritin; that cross-links and Zn affect a common site involved in core formation; and that Zn affects an H subunit-specific site which may involve histidine.

  1. A formal approach for the prediction of the critical heat flux in subcooled water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardi, C.

    1995-09-01

The critical heat flux (CHF) in subcooled water at high mass fluxes is not yet satisfactorily correlated. To this end, a formal approach is followed here, based on an extension of the parameters and the correlation used for dryout prediction in medium-to-high-quality mixtures. The resulting correlation, in spite of its simplicity and its explicit form, yields satisfactory predictions, also when applied to more conventional CHF data at low-to-medium mass fluxes and high pressures. Further improvements are possible if a more complete data bank becomes available. The main and general open item is the definition of a criterion, depending only on independent parameters such as mass flux, pressure, inlet subcooling and geometry, to predict whether the heat transfer crisis will occur as a DNB or a dryout phenomenon.

  2. Inclusion complexes of β-cyclodextrin-dinitrocompounds as UV absorber for ballpoint pen ink.

    PubMed

    Srinivasan, Krishnan; Radhakrishnan, S; Stalin, Thambusamy

    2014-08-14

2,4-Dinitrophenol (2,4-DNP), 2,4-dinitroaniline (2,4-DNA), 2,6-dinitroaniline (2,6-DNA) and 2,6-dinitrobenzoic acid (2,6-DNB) exhibit UV absorption bands in different wavelength regions below 400 nm; a combination of these dinitroaromatic compounds gives broad absorption across the UV region. The absorption intensities were increased by preparing inclusion complexes of the dinitro compounds with β-cyclodextrin (β-CD). The prepared inclusion complexes are used to improve the UV-protection properties of ballpoint pen ink against photodegradation. The formation of the solid inclusion complexes was characterized by FT-IR and (1)H NMR spectroscopy. The UV-protecting properties of these inclusion complexes were quantified by calculating their sun protection factor (SPF), which is also discussed. The stability of the ballpoint pen ink was confirmed by UV-visible spectroscopy. Copyright © 2014 Elsevier B.V. All rights reserved.
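The abstract does not state how the SPF was computed; one widely used spectrophotometric approach is the Mansur equation, SPF = CF · Σ EE(λ)·I(λ)·Abs(λ) over 290-320 nm. The sketch below assumes that method: the normalized EE·I weights are the commonly tabulated Sayre values, and the absorbance readings are invented for illustration.

```python
# Hedged sketch of a Mansur-equation SPF estimate (assumed method, not
# necessarily the one used in the paper). EE_I are the commonly tabulated
# normalized erythemal-effect x intensity weights; absorbances are made up.
EE_I = {290: 0.0150, 295: 0.0817, 300: 0.2874, 305: 0.3278,
        310: 0.1864, 315: 0.0839, 320: 0.0180}   # weights sum to ~1.0
CF = 10.0  # correction factor

def mansur_spf(absorbance: dict) -> float:
    """SPF estimate from absorbance readings at the EE_I wavelengths (nm)."""
    return CF * sum(w * absorbance[wl] for wl, w in EE_I.items())

abs_readings = {wl: 0.9 for wl in EE_I}  # hypothetical flat absorbance of 0.9
print(f"estimated SPF = {mansur_spf(abs_readings):.1f}")
```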

  3. Remedial action suitability for the Cornhusker Army Ammunition Plant site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nonavinakere, S.; Rappa, P. III

    1995-12-31

Numerous Department of Defense (DOD) sites across the nation are contaminated with explosive wastes from munitions production during World War II, the Korean Conflict, and the Vietnam Conflict. Production activities included explosives manufacturing, loading, packing, assembling, machining, casting and curing. Contaminants often present at these sites include TNT, RDX, HMX, Tetryl, 2,4-DNT, 2,6-DNT, 1,3-DNB, 1,3,5-TNB and nitrobenzene. The Cornhusker Army Ammunition Plant (CAAP) is one such DOD site that has been determined to be contaminated with explosives. The CAAP is located approximately 2 miles west of the City of Grand Island in Hall County, Nebraska. The plant produced artillery, bombs, boosters, supplementary charges and various other experimental explosives. The purpose of this paper is to provide an overview of the site background, a review of the remedial alternatives evaluation process, and the rationale behind the selection of the present remedial action.

  4. Thermo-optical properties of Alexandrite laser crystal

    NASA Astrophysics Data System (ADS)

    Loiko, Pavel; Ghanbari, Shirin; Matrosov, Vladimir; Yumashev, Konstantin; Major, Arkady

    2018-02-01

Alexandrite is a well-known material for broadly tunable and power-scalable near-IR lasers. We measured the thermal coefficients of the optical path (TCOP) and thermo-optic coefficients (TOCs) of Alexandrite at 632.8 nm for the three principal light polarizations, E || a, E || b and E || c. All TOCs are positive and show a notable polarization anisotropy: dna/dT = 5.5×10⁻⁶ K⁻¹, dnb/dT = 7.0×10⁻⁶ K⁻¹ and dnc/dT = 14.9×10⁻⁶ K⁻¹. We also characterized thermal lensing in a continuous-wave Alexandrite laser which used a Brewster-oriented c-cut 0.16 at.% Cr³⁺-doped BeAl₂O₄ crystal pumped at 532 nm and emitted at 750.9 nm (E || b). The measured thermal lens was positive and astigmatic. The sensitivity factors of the thermal lens (Mx,y = dDx,y/dPabs) were found to be Mx = 1.74 m⁻¹/W and My = 2.38 m⁻¹/W.
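The reported sensitivity factors make the thermal lens easy to scale to any pump level, since the dioptric power grows linearly with absorbed power, D = M · Pabs. A minimal check, with a hypothetical absorbed power of 5 W:

```python
# Thermal-lens dioptric power from the sensitivity factors quoted above,
# D_{x,y} = M_{x,y} * P_abs. The 5 W absorbed power is a hypothetical example.
M_x, M_y = 1.74, 2.38   # m^-1 / W (from the abstract)

def thermal_lens(p_abs_w: float):
    """Return (Dx, Dy) in m^-1 and focal lengths (fx, fy) in m."""
    d_x, d_y = M_x * p_abs_w, M_y * p_abs_w
    return d_x, d_y, 1.0 / d_x, 1.0 / d_y

dx, dy, fx, fy = thermal_lens(5.0)
print(f"Dx = {dx:.2f} m^-1 (fx = {fx*100:.1f} cm), "
      f"Dy = {dy:.2f} m^-1 (fy = {fy*100:.1f} cm)")
```

The unequal Mx and My values are what make the lens astigmatic: the two transverse focal lengths differ at every pump power.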

  5. Monitoring Disaster-Related Power Outages Using NASA Black Marble Nighttime Light Product

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Román, M. O.; Sun, Q.; Molthan, A. L.; Schultz, L. A.; Kalb, V. L.

    2018-04-01

Timely and accurate monitoring of disruptions to the electricity grid, including the magnitude, spatial extent, timing, and duration of net power losses, is needed to improve situational awareness of disaster response and long-term recovery efforts. Satellite-derived Nighttime Lights (NTL) provide an indication of human activity patterns and have been successfully used to monitor disaster-related power outages. The global 500 m spatial resolution National Aeronautics and Space Administration (NASA) Black Marble NTL daily standard product suite (VNP46) is generated from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) onboard the NASA/National Oceanic and Atmospheric Administration (NOAA) Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite, which began operations on 19 January 2012. With its improvements in product accuracy (including critical atmospheric and BRDF correction routines), the daily VIIRS Black Marble product enables systematic monitoring of outage conditions across all stages of the disaster management cycle.

  6. CASL Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousseau, Vincent Andrew; Dinh, Nam

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This document will be a living document that will track progress on CASL to do verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and for the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation andmore » verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.« less

  7. Association between nighttime artificial light pollution and sea turtle nest density along Florida coast: A geospatial study using VIIRS remote sensing data.

    PubMed

    Hu, Zhiyong; Hu, Hongda; Huang, Yuxia

    2018-08-01

Artificial lighting at night has become a new type of pollution, posing an important anthropogenic environmental pressure on organisms. The objective of this research was to examine the potential association between nighttime artificial light pollution and nest densities of the three main sea turtle species along Florida beaches: green turtles, loggerheads, and leatherbacks. Sea turtle survey data were obtained from the "Florida Statewide Nesting Beach Survey program". We used the new generation of satellite sensor data, the "Visible Infrared Imaging Radiometer Suite (VIIRS)" (version 1 D/N Band) nighttime annual average radiance composite image. We defined light pollution as artificial light brightness greater than 10% of the natural sky brightness above 45° of elevation (>1.14 × 10⁻¹¹ W m⁻² sr⁻¹). We fitted a generalized linear model (GLM), a GLM with eigenvector spatial filtering (GLM-ESF), and a generalized estimating equations (GEE) approach for each species to examine the potential correlation of nest density with light pollution. Our models are robust and reliable in their ability to deal with the data-distribution and spatial autocorrelation (SA) issues that violate model assumptions. All three models found that nest density is significantly negatively correlated with light pollution for each sea turtle species: the higher the light pollution, the lower the nest density. The two spatially extended models (GLM-ESF and GEE) show that light pollution influences nest density in descending order from green turtles, to loggerheads, and then to leatherbacks. The research findings have implications for sea turtle conservation policy and ordinance making. Near-coastal lights-out ordinances and other approaches to shield lights can protect sea turtles and their nests. The VIIRS DNB light data, which offer significant improvements over comparable data from the predecessor DMSP-OLS, show promise for continued and improved research on the ecological effects of artificial light pollution. Copyright © 2018 Elsevier Ltd. All rights reserved.
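The core modeling step, a count-data GLM with a log link, can be sketched in miniature. This is not the authors' code: the data are synthetic, constructed so that higher "light" lowers nest counts, and the spatial-autocorrelation corrections (ESF, GEE) the study applied are omitted.

```python
# Hedged sketch: Poisson GLM with log link, nest_count ~ exp(b0 + b1*light),
# fitted by Newton-Raphson on synthetic data (true b0 = 3.0, b1 = -0.8).
# The study's actual models also handled spatial autocorrelation (omitted here).
import math, random

random.seed(1)

def rpois(lam: float) -> int:
    """Crude Poisson sampler by CDF inversion (fine for small lambda)."""
    u, k = random.random(), 0
    p = math.exp(-lam)
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

light = [random.uniform(0.0, 3.0) for _ in range(200)]
counts = [rpois(math.exp(3.0 - 0.8 * x)) for x in light]

# Newton-Raphson iterations on the Poisson log-likelihood
b0, b1 = math.log(sum(counts) / len(counts)), 0.0
for _ in range(25):
    mu = [math.exp(b0 + b1 * x) for x in light]
    g0 = sum(y - m for y, m in zip(counts, mu))                 # score vector
    g1 = sum((y - m) * x for y, m, x in zip(counts, mu, light))
    h00 = sum(mu)                                               # Fisher info
    h01 = sum(m * x for m, x in zip(mu, light))
    h11 = sum(m * x * x for m, x in zip(mu, light))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

print(f"fitted b0 = {b0:.2f}, b1 = {b1:.2f} (b1 < 0: more light, fewer nests)")
```

A significantly negative slope on the light covariate is the minimal analogue of the study's finding; the spatial extensions exist because neighbouring beach segments are not independent observations.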

  8. Studies on the nature of the primary reactions of photosystem II in photosynthesis. I. The electrochromic 515 nm absorption change as an appropriate indicator for the functional state of the photochemically active centers of system II in DCMU-poisoned chloroplasts.

    PubMed

    Renger, G; Wolff, C

    1975-01-01

The field-indicating electrochromic 515 nm absorption change has been measured under different excitation conditions in DCMU-poisoned chloroplasts in the presence of benzylviologen as electron acceptor. It has been found: 1. The amplitude of the 515 nm absorption change is nearly completely suppressed under repetitive single-turnover flash excitation conditions which kinetically block the back reaction around system II (P. Bennoun, Biochim. Biophys. Acta 216, 357 [1970]). 2. The amplitude of the 515 nm absorption change measured under repetitive single-turnover flash excitation conditions which allow the completion of the back reaction during the dark time between the flashes (measuring light beam switched off) amounts, in the presence of 2 μM DCMU, to nearly 50% of the electrochromic 515 nm amplitude obtained in the absence of DCMU. In DCMU-poisoned chloroplasts this amplitude is significantly decreased by hydroxylamine hydrochloride, but nearly doubled in the presence of DCIP + ascorbate. 3. The dependence of the 515 nm amplitude on the time td between the flashes kinetically resembles the back reaction around system II. The time course of the back reaction can be fairly well described either by a second-order reaction or by two-phase exponential kinetics. 4. 1,3-Dinitrobenzene (DNB) or alpha-bromo-alpha-benzylmalonodinitrile (BBMD) reduce the 515 nm amplitude in DCMU-poisoned chloroplasts, but seem to influence only slightly the kinetics of the back reaction. 5. The dependence of the 515 nm amplitude on the flash light intensity (the amplitude normalized to 1 at 100% flash light intensity) is not changed by DNB. Based on these experimental data it has been concluded that in DCMU-poisoned chloroplasts the amplitude of the 515 nm absorption change reflects the functional state of photosystem II centers (designated as photoelectric dipole generators II) under suitable excitation conditions. 
Furthermore, it is inferred that in DCMU-poisoned chloroplasts the photoelectric dipole generators II either cooperate (probably as twin pairs) or exist in two functionally different forms. With respect to BBMD and DNB, it is assumed that these agents transform the photoelectric dipole generators II into powerful nonphotochemical quenchers, which significantly reduce the variable fluorescence in DCMU-poisoned chloroplasts.

  9. Two Polymorphic Forms of a Six-Coordinate Mononuclear Cobalt(II) Complex with Easy-Plane Anisotropy: Structural Features, Theoretical Calculations, and Field-Induced Slow Relaxation of the Magnetization.

    PubMed

    Roy, Subhadip; Oyarzabal, Itziar; Vallejo, Julia; Cano, Joan; Colacio, Enrique; Bauza, Antonio; Frontera, Antonio; Kirillov, Alexander M; Drew, Michael G B; Das, Subrata

    2016-09-06

A mononuclear cobalt(II) complex [Co(3,5-dnb)2(py)2(H2O)2] {3,5-Hdnb = 3,5-dinitrobenzoic acid; py = pyridine} was isolated in two polymorphs, in space groups C2/c (1) and P21/c (2). Single-crystal X-ray diffraction analyses reveal that 1 and 2 are not isostructural in spite of having equal formulas and ligand connectivity. In both structures, the Co(II) centers adopt octahedral {CoN2O4} geometries filled by pairs of mutually trans terminal 3,5-dnb, py, and water ligands. However, the structures of 1 and 2 disclose distinct packing patterns driven by strong intermolecular O-H···O hydrogen bonds, leading to their 0D→2D (1) or 0D→1D (2) extension. The resulting two-dimensional layers and one-dimensional chains were topologically classified as the sql and 2C1 underlying nets, respectively. By means of DFT calculations, the energy differences between the polymorphs were estimated, and the binding energies associated with the noncovalent interactions observed in the crystal structures were also evaluated. The study of the direct-current magnetic properties, together with ab initio calculations, reveals that both 1 and 2 present a strong easy-plane magnetic anisotropy (D > 0), which is larger for the latter polymorph (D exhibits values between +58 and +117 cm⁻¹ depending on the method). Alternating-current dynamic susceptibility measurements show that these polymorphs exhibit field-induced slow relaxation of the magnetization, with Ueff values of 19.5 and 21.1 cm⁻¹ for 1 and 2, respectively. The analysis of the whole magnetic data set allows the conclusion that the magnetization relaxation in these polymorphs mainly takes place through a virtual excited state (Raman process). It is worth noting that, despite the notable difference between the supramolecular networks of 1 and 2, they exhibit almost identical magnetization dynamics. 
This fact suggests that the relaxation process is intramolecular in nature and that the virtual state involved in the two-phonon Raman process lies at a similar energy in polymorphs 1 and 2 (∼20 cm⁻¹). Interestingly, this value is recurrent in Co(II) single-ion magnets, even for those displaying different coordination numbers and geometries.
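An effective barrier like the Ueff values quoted above is typically extracted from the temperature dependence of the relaxation time via an Arrhenius-type law, tau = tau0 · exp(Ueff/(kB·T)), i.e. a linear fit of ln(tau) against 1/T. The sketch below uses synthetic tau(T) data generated with Ueff = 20 cm⁻¹ (the recurrent value noted in the abstract) and an assumed tau0 of 1e-7 s; it is an illustration of the fitting procedure, not the authors' analysis.

```python
# Hedged sketch: recovering Ueff from relaxation times via
# ln(tau) = ln(tau0) + (Ueff/kB)(1/T). Synthetic data, assumed tau0.
import math

KB_CM = 0.695                  # Boltzmann constant in cm^-1 / K
TAU0, UEFF = 1e-7, 20.0        # assumed pre-factor (s) and barrier (cm^-1)

temps = [2.0, 3.0, 4.0, 5.0, 6.0, 8.0]                    # K
taus = [TAU0 * math.exp(UEFF / (KB_CM * t)) for t in temps]

# Ordinary least-squares fit of ln(tau) vs 1/T: slope = Ueff / kB
xs = [1.0 / t for t in temps]
ys = [math.log(tau) for tau in taus]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(f"recovered Ueff = {slope * KB_CM:.1f} cm^-1")
```

In practice a pure Arrhenius fit only holds in the high-temperature regime; the abstract's attribution of the relaxation to a Raman process implies a power-law term would be needed across the full temperature range.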

  10. Aurora over North America

    NASA Image and Video Library

    2015-03-23

    Using the “day-night band” (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS), the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite acquired this view of the aurora borealis on March 18, 2015. The northern lights stretch across Canada’s Quebec, Ontario, Manitoba, Nunavut, and Newfoundland provinces in the image, and are part of the auroral oval that expanded to middle latitudes because of a geomagnetic storm on March 17, 2015. The DNB sensor detects dim light signals such as auroras, airglow, gas flares, city lights, and reflected moonlight. In the case of the image above, the sensor detected the visible light emissions as energetic particles rained down from Earth’s magnetosphere and into the gases of the upper atmosphere. The images are similar to those collected by the Operational Linescan System flown on U.S. Defense Meteorological Satellite Program (DMSP) satellites for the past three decades. Auroras typically occur when solar flares and coronal mass ejections—or even an active solar wind stream—disturb and distort the magnetosphere, the cocoon of space protected by Earth’s magnetic field. The collision of solar particles and pressure into our planet’s magnetosphere accelerates particles trapped in the space around Earth (such as in the radiation belts). Those particles are sent crashing down into Earth’s upper atmosphere—at altitudes of 100 to 400 kilometers (60 to 250 miles)—where they excite oxygen and nitrogen molecules and release photons of light. The results are rays, sheets, and curtains of dancing light in the sky. Read more: earthobservatory.nasa.gov/NaturalHazards/view.php?id=8555... NASA Earth Observatory image by Jesse Allen, using VIIRS day-night band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Mike Carlowicz and Adam Voiland. 
Credit: NASA Earth Observatory

  11. Photochemistry in a 3D metal-organic framework (MOF): monitoring intermediates and reactivity of the fac-to-mer photoisomerization of Re(diimine)(CO)3Cl incorporated in a MOF.

    PubMed

    Easun, Timothy L; Jia, Junhua; Calladine, James A; Blackmore, Danielle L; Stapleton, Christopher S; Vuong, Khuong Q; Champness, Neil R; George, Michael W

    2014-03-03

The mechanism and intermediates in the UV-light-initiated ligand rearrangement of fac-Re(diimine)(CO)3Cl to form the mer isomer, when incorporated into a 3D metal-organic framework (MOF), have been investigated. The structure hosting the rhenium diimine complex is a 3D network with the formula {Mn(DMF)2[LRe(CO)3Cl]}∞ (ReMn; DMF = N,N-dimethylformamide), where the diimine ligand L, 2,2'-bipyridine-5,5'-dicarboxylate, acts as a strut of the MOF. The incorporation of ReMn into a KBr disk allows the spatial distribution of the mer-isomer photoproduct in the disk to be mapped and spectroscopically characterized by both Fourier transform infrared and Raman microscopy. Photoisomerization has been monitored by IR spectroscopy and proceeds via dissociation of a CO to form more than one dicarbonyl intermediate. The dicarbonyl species are stable in the solid state at 200 K. The photodissociated CO ligand appears to be trapped within the crystal lattice and, upon warming above 200 K, readily recombines with the dicarbonyl intermediates to form both the fac-Re(diimine)(CO)3Cl starting material and the mer-Re(diimine)(CO)3Cl photoproduct. Experiments over a range of temperatures (265-285 K) allow estimates of the activation enthalpy of recombination for each process of ca. 16 (±6) kJ mol⁻¹ (mer formation) and 23 (±4) kJ mol⁻¹ (fac formation) within the MOF. We have compared the photochemistry of the ReMn MOF with a related alkane-soluble Re(dnb)(CO)3Cl complex (dnb = 4,4'-dinonyl-2,2'-bipyridine). Time-resolved IR measurements clearly show that, in an alkane solution, the photoinduced dicarbonyl species again recombines with CO to both re-form the fac-isomer starting material and form the mer-isomer photoproduct. 
Density functional theory calculations of the possible dicarbonyl species aid the assignment of the experimental data: the ν(CO) IR bands of the CO-loss intermediate are, as expected, shifted to lower energy when the metal is bound to DMF rather than to an alkane, and both the solution data and the calculations suggest that the ν(CO) band positions in the photoproduced dicarbonyl intermediates of ReMn are consistent with DMF binding.

  12. The motional stark effect with laser-induced fluorescence diagnostic

    NASA Astrophysics Data System (ADS)

    Foley, E. L.; Levinton, F. M.

    2010-05-01

    The motional Stark effect (MSE) diagnostic is the worldwide standard technique for internal magnetic field pitch angle measurements in magnetized plasmas. Traditionally, it is based on using polarimetry to measure the polarization direction of light emitted from a hydrogenic species in a neutral beam. As the beam passes through the magnetized plasma at a high velocity, in its rest frame it perceives a Lorentz electric field. This field causes the H-alpha emission to be split and polarized. A new technique under development adds laser-induced fluorescence (LIF) to a diagnostic neutral beam (DNB) for an MSE measurement that will enable radially resolved magnetic field magnitude as well as pitch angle measurements in even low-field (<1 T) experiments. An MSE-LIF system will be installed on the National Spherical Torus Experiment (NSTX) at the Princeton Plasma Physics Laboratory. It will enable reconstructions of the plasma pressure, q-profile and current as well as, in conjunction with the existing MSE system, measurements of radial electric fields.
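The Lorentz electric field at the heart of the MSE technique is straightforward to estimate from E = v × B. The sketch below uses a hypothetical 50 keV hydrogen diagnostic beam (the abstract does not specify the NSTX beam energy) and compares a low-field case of the kind the MSE-LIF approach targets with a conventional 1 T field.

```python
# Back-of-envelope estimate of the motional Stark field E = v x B seen by a
# beam neutral. The 50 keV beam energy and field values are hypothetical.
import math

E_BEAM_EV = 50e3        # assumed hydrogen diagnostic-beam energy, eV
M_H = 1.673e-27         # kg, hydrogen atom mass
Q_E = 1.602e-19         # J per eV

v = math.sqrt(2 * E_BEAM_EV * Q_E / M_H)   # non-relativistic beam speed, m/s
for b_tesla in (0.3, 1.0):                 # low-field vs. conventional device
    e_field = v * b_tesla                  # V/m, for v perpendicular to B
    print(f"B = {b_tesla:.1f} T -> v = {v:.2e} m/s, |E| = {e_field:.2e} V/m")
```

Even at sub-tesla fields the motional field reaches the MV/m scale, which is why the Stark splitting of the H-alpha line remains a usable signal; LIF then compensates for the weaker passive emission at low field.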

  13. Usefulness of charge-transfer complexation for the assessment of sympathomimetic drugs: Spectroscopic properties of drug ephedrine hydrochloride complexed with some π-acceptors

    NASA Astrophysics Data System (ADS)

    Refat, Moamen S.; Ibrahim, Omar B.; Saad, Hosam A.; Adam, Abdel Majid A.

    2014-05-01

Recently, the assessment of ephedrine (Eph) in food products, pharmaceutical formulations and the body fluids of athletes, and the detection of drug toxicity and abuse, have gained growing interest. To provide basic data that can be used to assess Eph quantitatively based on charge-transfer (CT) complexation, the CT complexes of Eph with 7,7,8,8-tetracyanoquinodimethane (TCNQ), dichlorodicyanobenzoquinone (DDQ), 1,3-dinitrobenzene (DNB) or tetrabromothiophene (TBT) were synthesized and spectroscopically investigated. The newly synthesized complexes have been characterized via elemental analysis and IR, Raman, 1H NMR, and UV-visible spectroscopy. The formation constant (KCT), molar extinction coefficient (εCT) and other spectroscopic data have been determined using the Benesi-Hildebrand method and its modifications. Sharp, well-defined Bragg reflections at specific 2θ angles have been identified from the powder X-ray diffraction patterns. The thermal decomposition behavior of these complexes was also studied, and their kinetic and thermodynamic parameters were calculated with the Coats-Redfern and Horowitz-Metzger equations.
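The Benesi-Hildebrand analysis named above reduces to a linear fit: for a 1:1 complex with the donor in large excess, [A]0/A = 1/ε + 1/(K·ε·[D]0), so plotting [A]0/A against 1/[D]0 gives K = intercept/slope and ε = 1/intercept. The concentrations, K and ε values below are invented for illustration; the data are generated noise-free so the fit recovers them exactly.

```python
# Hedged sketch of a 1:1 Benesi-Hildebrand fit (1 cm path length assumed).
# A0, K_TRUE and EPS_TRUE are hypothetical values, not from the paper.
A0 = 1e-4                          # acceptor concentration, mol/L
K_TRUE, EPS_TRUE = 500.0, 4000.0   # L/mol and L mol^-1 cm^-1

donors = [0.002, 0.004, 0.008, 0.016, 0.032]                     # [D]0, mol/L
absorb = [EPS_TRUE * A0 * K_TRUE * d / (1 + K_TRUE * d) for d in donors]

# Linear regression of [A]0/A vs 1/[D]0
xs = [1.0 / d for d in donors]
ys = [A0 / a for a in absorb]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar
print(f"K = {intercept / slope:.0f} L/mol, eps = {1.0 / intercept:.0f} L/(mol cm)")
```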

  14. Environmental process descriptors for TNT, TNT-related compounds and picric acid in marine sediment slurries.

    PubMed

    Yost, Sally L; Pennington, Judith C; Brannon, James M; Hayes, Charolett A

    2007-08-01

Process descriptors were determined for picric acid, TNT, and the TNT-related compounds 2,4-DNT, 2,6-DNT, 2-ADNT, 4-ADNT, 2,4-DANT, 2,6-DANT, TNB and DNB in marine sediment slurries. Three marine sediments of various physical characteristics (particle size ranging from 15 to >90% fines and total organic carbon ranging from <0.10 to 3.60%) were kept in suspension with 20 ppt saline water. Concentrations of TNT and its related compounds decreased immediately upon contact with the marine sediment slurries, with aqueous concentrations slowly declining throughout the remaining test period. Sediment-water partition coefficients could not be determined for these compounds since solution-phase concentrations were unstable. Kinetic rates and half-lives were influenced by the sediment properties, with the finer-grained, higher-organic-carbon sediment being the most reactive. Aqueous concentrations of picric acid were very stable, demonstrating little partitioning to the sediments. Degradation to picramic acid was minimal, exhibiting concentrations at or just above the detection limit.
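The kinetic rates and half-lives mentioned above are related by simple first-order decay, C(t) = C0·exp(-k·t) with t½ = ln(2)/k. The rate constant below is a made-up example, not a value reported by the study.

```python
# First-order disappearance kinetics sketch: t_half = ln(2)/k.
# k is a hypothetical example value, not from the study.
import math

k = 0.05  # d^-1, hypothetical first-order rate constant
t_half = math.log(2) / k
print(f"t1/2 = {t_half:.1f} days")

# Fraction of the initial aqueous concentration remaining over time
for t in (7, 30):
    print(f"  day {t:2d}: {math.exp(-k * t) * 100:.1f}% remaining")
```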

  15. Spectroscopic and physical measurements on charge-transfer complexes: Interactions between norfloxacin and ciprofloxacin drugs with picric acid and 3,5-dinitrobenzoic acid acceptors

    NASA Astrophysics Data System (ADS)

    Refat, Moamen S.; Elfalaky, A.; Elesh, Eman

    2011-03-01

Charge-transfer complexes formed between the norfloxacin (nor) or ciprofloxacin (cip) drugs as donors and picric acid (PA) and/or 3,5-dinitrobenzoic acid (DNB) as π-acceptors have been studied spectrophotometrically in methanol at room temperature. The results indicated the formation of CT complexes with a 1:1 donor:acceptor molar ratio at the maximum CT bands. The formation constant (KCT), molar extinction coefficient (εCT), standard free energy (ΔG°), oscillator strength (f), transition dipole moment (μ), resonance energy (RN) and ionization potential (ID) were estimated. IR, 1H NMR and UV-Vis techniques, elemental analyses (CHN) and TG-DTG investigations were used to characterize the structures of the charge-transfer complexes. They indicate that the CT interaction is associated with a proton migration from each acceptor to the nor or cip donor, followed by the formation of an intermolecular hydrogen bond. In addition, an X-ray investigation was carried out to scrutinize the crystal structure of the resulting CT complexes.

  16. Suomi satellite brings to light a unique frontier of nighttime environmental sensing capabilities

    PubMed Central

    Miller, Steven D.; Mills, Stephen P.; Elvidge, Christopher D.; Lindsey, Daniel T.; Lee, Thomas F.; Hawkins, Jeffrey D.

    2012-01-01

    Most environmental satellite radiometers use solar reflectance information when it is available during the day but must resort at night to emission signals from infrared bands, which offer poor sensitivity to low-level clouds and surface features. A few sensors can take advantage of moonlight, but the inconsistent availability of the lunar source limits measurement utility. Here we show that the Day/Night Band (DNB) low-light visible sensor on the recently launched Suomi National Polar-orbiting Partnership (NPP) satellite has the unique ability to image cloud and surface features by way of reflected airglow, starlight, and zodiacal light illumination. Examples collected during new moon reveal not only meteorological and surface features, but also the direct emission of airglow structures in the mesosphere, including expansive regions of diffuse glow and wave patterns forced by tropospheric convection. The ability to leverage diffuse illumination sources for nocturnal environmental sensing applications extends the advantages of visible-light information to moonless nights. PMID:22984179

  17. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    NASA Astrophysics Data System (ADS)

    Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.

    2009-03-01

The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative-ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option [1]. In addition, a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge-exchange recombination spectroscopy and other diagnostics. Recently the RF-driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc-driven ion source. The RF-driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half-ITER-size ion source are ongoing at IPP, and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Facility (ISTF), which will be part of the Neutral Beam Test Facility (NBTF) in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given, and the ongoing integration effort into the ITER plant will be highlighted. It will be shown how installation and maintenance logistics have influenced the design, notably the top-access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. 
The low-current hydrogen phase now envisaged for start-up imposes specific requirements for operating the HNBs at full beam power. It has been decided to address the shine-through issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB-related issues identified by the Design Review process will be discussed and possible changes to the ITER baseline indicated.
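The figures quoted in the abstract permit a quick consistency check: 40 A accelerated to 1 MeV is 40 MW of D- beam power, of which 16.5 MW reaches the plasma as neutrals, implying a combined neutralization-plus-transmission efficiency of roughly 40%.

```python
# Arithmetic check of the quoted HNB figures: P = I * V for the accelerated
# beam, compared against the delivered neutral power.
I_ACCEL = 40.0      # A of D-
V_ACCEL = 1.0e6     # V (1 MeV per charge)
P_D0 = 16.5e6       # W of D0 delivered to the plasma

p_accel = I_ACCEL * V_ACCEL
efficiency = P_D0 / p_accel
print(f"accelerated power = {p_accel/1e6:.0f} MW, "
      f"overall efficiency = {efficiency*100:.0f}%")
```

The sub-50% figure is the main physics motivation for negative ions in the first place: positive-ion neutralization efficiency collapses at MeV energies, whereas D- retains a workable gas-neutralizer efficiency.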

  18. Quantification and aging of the post-blast residue of TNT landmines.

    PubMed

    Oxley, Jimmie C; Smith, James L; Resende, Elmo; Pearce, Evan

    2003-07-01

    Post-blast residues are potential interferents to chemical detection of landmines. To assess the potential problem related to 2,4,6-trinitrotoluene (TNT), its post-blast residue was identified and quantified. In the first part of this study laboratory-scale samples of TNT (2 g) were detonated in a small-scale explosivity device (SSED) to evaluate the explosive power and collect post-blast residue for chemical analysis. Initiator size was large relative to the TNT charge; thus, issues arose regarding choice of initiator, residue from the initiator, and afterburning of TNT. The second part of this study detonated 75 to 150 g of military-grade TNT (typical of antipersonnel mines) in 55-gal barrels containing various witness materials (metal plates, sand, barrel walls, the atmosphere). The witness materials were analyzed for explosive residue. In a third set of tests, 75-g samples of TNT were detonated over soil (from Fort Leonard Wood or Sandia National Laboratory) in an indoor firing chamber (100 by 4.6 by 2.7 m high). Targeted in these studies were TNT and four explosive-related compounds (ERC): 2,4-dinitrotoluene (DNT), 1,3-dinitrobenzene (DNB), 2- and 4-aminodinitrotoluene (2-ADNT and 4-ADNT). The latter two are microbial degradation products of TNT. Post-blast residue was allowed to age in the soils as a function of moisture contents (5 and 10%) in order to quantify the rate of degradation of the principal residues (TNT, DNT, and DNB) and formation of the TNT microbial degradation products (2-ADNT and 4-ADNT). The major distinction between landmine leakage and post-blast residue was not the identity of the species but relative ratios of amounts. In landmine leakage the DNT/TNT ratio was usually greater than 1. In post-blast residue it was on the order of 1 to 1/100th of a percent, and the total amount of pre-blast residue (landmine leakage) was a factor of 1/100 to 1/1000 less than post-blast. 
In addition, landmine leakage resulted in low DNT/ADNT ratios, usually less than 1, whereas pre-blast residues started with ratios above 20. Because with time DNT decreased and ADNT increased, over a month the ratio decreased by a factor of 2. The rate of TNT degradation in soil observed in this study was much slower than that reported when initial concentrations of TNT were lower. Degradation rates yielded half-lives of 40 and 100 days for 2,4-DNT and TNT, respectively.
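The degradation half-lives reported above translate directly into residue persistence via first-order decay, where the fraction remaining after t days is 0.5^(t / t½). A worked example using the study's 40-day (2,4-DNT) and 100-day (TNT) half-lives:

```python
# Fraction of post-blast residue remaining, from the half-lives reported in
# the abstract (40 d for 2,4-DNT, 100 d for TNT), assuming first-order decay.
T_HALF = {"2,4-DNT": 40.0, "TNT": 100.0}  # days

def fraction_remaining(compound: str, t_days: float) -> float:
    return 0.5 ** (t_days / T_HALF[compound])

for name in T_HALF:
    print(f"{name}: {fraction_remaining(name, 30) * 100:.0f}% left after 30 days")
```

Because 2,4-DNT decays faster than TNT, the DNT/TNT ratio itself drifts downward with time, which is one reason the ratio-based discrimination between leakage and post-blast residue described above must account for residue age.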

  19. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B.; Bora, D.; Hemsworth, R.

    2009-03-12

    The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D{sup -} and capable of delivering 16.5 MW of D{sup 0} to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H{sup -} to 100 keV will inject {approx_equal}15 A equivalent of H{sup 0} for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion sourcemore » as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D{sup -} and H{sup -} current densities as well as long-pulse operation. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R and D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. 
The low-current hydrogen phase now envisaged for start-up imposes specific requirements for operating the HNBs at full beam power. It has been decided to address the shine-through issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB-related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  20. Light pollution: measuring and modelling skyglow. An application in two Portuguese reserves

    NASA Astrophysics Data System (ADS)

    Lima, Raul Cerveira Pinto Sousa

    Human-made outdoor lighting at night causes sky glow, one of the effects of light pollution. Sky glow is rising with the growth of the world population, and urban inhabitants are increasingly deprived of a starry sky. Moreover, since light propagates to regions far from where it is produced, light pollution spreads to places where little or no artificial light at night previously existed, degrading the quality of the night sky. In this work we assess for the first time the sky brightness of two regions in Portugal: the Peneda-Geres National Park and the recently created Starlight Reserve Dark Sky® Alqueva. We used a portable unit, a Unihedron Sky Quality Meter-L (SQM-L), to measure the luminance of the night sky. We also tested the SQM-L in the laboratory for a more thorough characterization of the device and to check for an effect of polarization on the unit, suggested by our observations and by other users. Our results suggest that the SQM-L is not affected by any measurable polarization effect, and we provide guidelines, based on our work, for using the SQM-L in the field. The field measurements were compared with a light pollution propagation model (Kocifaj, 2007), using VIIRS DNB satellite upward radiance as the model input. The model results compare favourably with the field measurements. We then ran a set of tests with the model to find the best fit; our best results were achieved by analysing the data night by night rather than as a single global data set. Our first results were used in the application of the Alqueva region for classification as a Starlight Tourism Destination, which was attained during the course of this work (December 2011). A guideline for the Peneda-Geres National Park was also implemented after our first results were provided. We believe these results, spanning a set of parallel issues related to light pollution, contribute to the current knowledge in this area of research.
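Comparing SQM-L readings with a radiance-driven propagation model requires converting the meter's magnitudes per square arcsecond into luminance. A minimal sketch of that conversion, assuming the standard photometric zero point (10.8×10⁴ cd/m² at 0 mag/arcsec²), which is not stated in the record itself:

```python
def sqm_to_luminance(mag_per_arcsec2: float) -> float:
    """Convert an SQM reading (mag/arcsec^2) to luminance in cd/m^2.

    Uses the standard photometric relation L = 10.8e4 * 10**(-0.4 * m),
    where 10.8e4 cd/m^2 corresponds to 0 mag/arcsec^2.
    """
    return 10.8e4 * 10 ** (-0.4 * mag_per_arcsec2)

# A pristine dark sky (~21.9 mag/arcsec^2) versus a bright suburban
# sky (~16.9 mag/arcsec^2): each 5 mag step is a factor of 100.
dark = sqm_to_luminance(21.9)
bright = sqm_to_luminance(16.9)
```

The logarithmic magnitude scale is why a rural and an urban sky that differ by only 5 mag/arcsec² differ by two orders of magnitude in luminance.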

  1. Upper atmospheric gravity wave details revealed in nightglow satellite imagery

    PubMed Central

    Miller, Steven D.; Straka, William C.; Yue, Jia; Smith, Steven M.; Alexander, M. Joan; Hoffmann, Lars; Setvák, Martin; Partain, Philip T.

    2015-01-01

    Gravity waves (disturbances to the density structure of the atmosphere whose restoring forces are gravity and buoyancy) comprise the principal form of energy exchange between the lower and upper atmosphere. Wave breaking drives the mean upper atmospheric circulation, determining boundary conditions to stratospheric processes, which in turn influence tropospheric weather and climate patterns on various spatial and temporal scales. Despite their recognized importance, very little is known about upper-level gravity wave characteristics. The knowledge gap is mainly due to lack of global, high-resolution observations from currently available satellite observing systems. Consequently, representations of wave-related processes in global models are crude, highly parameterized, and poorly constrained, limiting the description of various processes influenced by them. Here we highlight, through a series of examples, the unanticipated ability of the Day/Night Band (DNB) on the NOAA/NASA Suomi National Polar-orbiting Partnership environmental satellite to resolve gravity structures near the mesopause via nightglow emissions at unprecedented subkilometric detail. On moonless nights, the Day/Night Band observations provide all-weather viewing of waves as they modulate the nightglow layer located near the mesopause (∼90 km above mean sea level). These waves are launched by a variety of physical mechanisms, ranging from orography to convection, intensifying fronts, and even seismic and volcanic events. Cross-referencing the Day/Night Band imagery with conventional thermal infrared imagery also available helps to discern nightglow structures and in some cases to attribute their sources. The capability stands to advance our basic understanding of a critical yet poorly constrained driver of the atmospheric circulation. PMID:26630004

  2. Upper atmospheric gravity wave details revealed in nightglow satellite imagery.

    PubMed

    Miller, Steven D; Straka, William C; Yue, Jia; Smith, Steven M; Alexander, M Joan; Hoffmann, Lars; Setvák, Martin; Partain, Philip T

    2015-12-08

    Gravity waves (disturbances to the density structure of the atmosphere whose restoring forces are gravity and buoyancy) comprise the principal form of energy exchange between the lower and upper atmosphere. Wave breaking drives the mean upper atmospheric circulation, determining boundary conditions to stratospheric processes, which in turn influence tropospheric weather and climate patterns on various spatial and temporal scales. Despite their recognized importance, very little is known about upper-level gravity wave characteristics. The knowledge gap is mainly due to lack of global, high-resolution observations from currently available satellite observing systems. Consequently, representations of wave-related processes in global models are crude, highly parameterized, and poorly constrained, limiting the description of various processes influenced by them. Here we highlight, through a series of examples, the unanticipated ability of the Day/Night Band (DNB) on the NOAA/NASA Suomi National Polar-orbiting Partnership environmental satellite to resolve gravity structures near the mesopause via nightglow emissions at unprecedented subkilometric detail. On moonless nights, the Day/Night Band observations provide all-weather viewing of waves as they modulate the nightglow layer located near the mesopause (∼ 90 km above mean sea level). These waves are launched by a variety of physical mechanisms, ranging from orography to convection, intensifying fronts, and even seismic and volcanic events. Cross-referencing the Day/Night Band imagery with conventional thermal infrared imagery also available helps to discern nightglow structures and in some cases to attribute their sources. The capability stands to advance our basic understanding of a critical yet poorly constrained driver of the atmospheric circulation.

  3. Adsorptive removal of hydrophobic organic compounds by carbonaceous adsorbents: a comparative study of waste-polymer-based, coal-based activated carbon, and carbon nanotubes.

    PubMed

    Lian, Fei; Chang, Chun; Du, Yang; Zhu, Lingyan; Xing, Baoshan; Liu, Chang

    2012-01-01

    Adsorption of the hydrophobic organic compounds (HOCs) trichloroethylene (TCE), 1,3-dichlorobenzene (DCB), 1,3-dinitrobenzene (DNB) and gamma-hexachlorocyclohexane (HCH) on five different carbonaceous materials was compared. The adsorbents included three polymer-based activated carbons, one coal-based activated carbon (F400) and multiwalled carbon nanotubes (MWNT). The polymer-based activated carbons were prepared by KOH activation from waste polymers: polyvinyl chloride (PVC), polyethylene terephthalate (PET) and tire rubber (TR). Compared with F400 and MWNT, the activated carbons derived from PVC and PET exhibited fast adsorption kinetics and high adsorption capacity toward the HOCs, attributed to their extremely large hydrophobic surface area (2700 m2/g) and highly mesoporous structures. Adsorption of small-sized TCE was stronger on the tire-rubber-based carbon and F400, resulting from the pore-filling effect; in contrast, adsorption of HCH on them was lower due to the molecular sieving effect. MWNT exhibited the lowest adsorption capacity toward the HOCs because of its low surface area and its tendency to aggregate in aqueous solution.

  4. Utilization of charge-transfer complexation for the detection of carcinogenic substances in foods: Spectroscopic characterization of ethyl carbamate with some traditional π-acceptors

    NASA Astrophysics Data System (ADS)

    Adam, Abdel Majid A.; Refat, Moamen S.; Saad, Hosam A.

    2013-04-01

    The study of toxic and carcinogenic substances in foods represents one of the most demanding areas in food safety, due to their repercussions for public health. One compound potentially toxic to humans is ethyl carbamate (EC), a multi-site genotoxic carcinogen of widespread occurrence in fermented foods and alcoholic beverages. The structural and thermal stability of charge-transfer (CT) complexes formed between EC as a donor and quinol (QL), picric acid (PA), chloranilic acid (CLA), p-chloranil (p-CHL) and 1,3-dinitrobenzene (DNB) as acceptors is reported. Elemental analysis (CHN), electronic absorption spectra, photometric titration, IR, and 1H NMR spectra show that the interaction between EC and the acceptors is stabilized by hydrogen bonding, with a 1:1 stoichiometry. Thermogravimetric (TG) analysis indicates that the formation of the molecular CT complexes is exothermic and spontaneous and that the complexes are thermally stable. Finally, the CT complexes were screened for their antibacterial and antifungal activities. The results indicate that the [(EC)(QL)] complex exhibits strong antimicrobial activity against various bacterial and fungal strains compared with standard drugs.

  5. Application of urban neighborhoods in understanding of local level electricity consumption patterns

    NASA Astrophysics Data System (ADS)

    Roy Chowdhury, P. K.; Bhaduri, B. L.

    2017-12-01

    Aggregated national- or regional-level electricity consumption data fail to capture the spatial variation in consumption, which is a function of location, climate, topography, and local economics. Spatial monitoring of electricity usage patterns helps to understand the drivers of location-specific consumption behavior and to develop models that cater to consumer needs, plan efficiency measures, identify settled areas lacking access, and allow for future planning by assessing requirements. Developed countries have started to deploy sensor systems such as smart meters to gather information on local-level consumption patterns, but such infrastructure is virtually nonexistent in developing nations, resulting in a serious dearth of reliable data for planners and policy makers. Remote sensing of artificial nighttime lights from human settlements has proven useful for studying electricity consumption at global to regional scales; however, local-level studies remain scarce. Using the differences in spatial characteristics among urban neighborhood types such as industrial, commercial and residential, observable in very high resolution daytime satellite images (<0.5 m), formal urban neighborhoods have been delineated through texture analysis. In this study, we explore the applicability of these urban neighborhoods to understanding local-level electricity consumption patterns by examining possible correlations between the spatial characteristics of the neighborhoods, the associated general economic activities, and the corresponding VIIRS day-night band (DNB) nighttime lights observations, which we use as a proxy for electricity consumption in the absence of ground-level consumption data. The overall trends observed through this analysis provide useful explanations of broad electricity consumption patterns in urban areas lacking ground-level observations. This study thus highlights the potential of remote sensing data driven methods to provide novel insights into local-level socio-economic patterns that were hitherto undetected due to a lack of ground data.
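The correlation step described above can be sketched in a few lines: given a mean DNB radiance per neighborhood and a consumption figure (or proxy), a Pearson coefficient summarizes how closely nighttime lights track usage. The numbers below are illustrative placeholders, not values from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-neighborhood values: mean VIIRS DNB radiance
# (nW/cm^2/sr) and monthly electricity consumption (MWh).
radiance = [5.2, 12.8, 33.1, 48.7, 61.0]
consumption = [110.0, 260.0, 700.0, 980.0, 1250.0]

r = pearson(radiance, consumption)
```

A coefficient near 1 would support using DNB radiance as a consumption proxy for neighborhoods where no metered data exist.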

  6. Socio-economic impact of Trans-Siberian railway after the collapse of Soviet Union by integrated spatial data analysis

    NASA Astrophysics Data System (ADS)

    Uchida, Seina; Takeuchi, Wataru; Hatoyama, Kiichiro; Mazurov, Yuri

    2016-06-01

    This paper discusses how Russian cities have recovered since the collapse of the Soviet Union. To understand how the cities managed the difficult period after the change of social system, transitions in urban area, population, and nighttime light are examined. Although the Far East remains one of the most important regions, with abundant resources, overpopulation in towns and depopulation in the countryside are ongoing. By examining the present situation, this research also aims to predict the future of the Far East and of Russia. First, Landsat data from 1987 to 2015 are collected over Moscow, Vladivostok, Novosibirsk, Tynda, and Blagoveshchensk, and urban area is calculated by land cover classification. Secondly, population and retail turnover data are collected from Russian year books. Thirdly, gross regional product (GRP) is estimated from nighttime light images in the DMSP-OLS and VIIRS DNB datasets. These data are then compared, and differences in development stage after the collapse of the Soviet Union between the unstable era (1990s-2000) and the development era (2000-) are discussed. These analyses are expected to provide useful information about Russian strategy for the future.

  7. Cross Matching of VIIRS Boat Detection and Vessel Monitoring System Tracks

    NASA Astrophysics Data System (ADS)

    Hsu, F. C.; Elvidge, C.; Zhizhin, M. N.; Baugh, K.; Ghosh, T.

    2016-12-01

    One approach to commercial fishing is to use bright lights at night to attract catch. This is a widely used practice in East and Southeast Asia, but it can also be found in other fisheries. In some cases, the deployed lighting exceeds 100,000 watts. Such lighting is distinctive against the dark ocean and can be seen from space with sensors such as the Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS-DNB). We have developed a VIIRS Boat Detection (VBD) system, which outputs lists of boat locations in near real time. One of the standard methods fishery agencies use to collect geospatial data on fishing boats is to require boats to carry Vessel Monitoring System (VMS) beacons. We developed an algorithm to cross-match VBD data with VMS tracks. With this we are able to identify fishing boats that do not carry VMS beacons; in certain situations, this is an indicator of illegal fishing. The other application of this cross-matching is to define the VIIRS detection limits and to develop a calibration for estimating deployed wattage. Here we demonstrate the results of cross-matching VBD and VMS for Indonesia as an example to showcase the potential of the method.
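The cross-matching described above can be sketched as a gated nearest-neighbor search: a VBD detection is matched to the closest VMS ping within a distance and time window, and detections with no match flag vessels that lack beacons. The gates, field names, and coordinates below are illustrative assumptions, not the VBD system's actual parameters:

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_detection(det, vms_points, max_km=5.0, max_dt=timedelta(minutes=30)):
    """Return the VMS point closest to a VBD detection within the distance
    and time gates, or None if no beacon-carrying vessel is nearby."""
    best, best_km = None, max_km
    for p in vms_points:
        if abs(p["time"] - det["time"]) > max_dt:
            continue
        d = haversine_km(det["lat"], det["lon"], p["lat"], p["lon"])
        if d <= best_km:
            best, best_km = p, d
    return best

# Toy example: one detection, two VMS pings; only vessel "A" is nearby.
det = {"lat": -5.10, "lon": 106.50, "time": datetime(2016, 3, 1, 18, 0)}
vms = [
    {"id": "A", "lat": -5.11, "lon": 106.52, "time": datetime(2016, 3, 1, 18, 10)},
    {"id": "B", "lat": -6.50, "lon": 107.90, "time": datetime(2016, 3, 1, 18, 5)},
]
hit = match_detection(det, vms)
```

A detection for which this search returns no match is the "no VMS beacon" case that, in certain situations, indicates illegal fishing.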

  8. VIIRS thermal emissive bands on-orbit calibration coefficient performance using vicarious calibration results

    NASA Astrophysics Data System (ADS)

    Moyer, D.; Moeller, C.; De Luccia, F.

    2013-09-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS), a primary sensor on board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, was launched October 28, 2011. It has 22 bands: 7 thermal emissive bands (TEBs), 14 reflective solar bands (RSBs) and a Day/Night Band (DNB). The TEBs cover the spectral wavelengths between 3.7 and 12 μm and comprise two 371 m and five 742 m spatial resolution bands. A VIIRS Key Performance Parameter (KPP) is the sea surface temperature (SST), which uses the calibrated Science Data Records (SDRs) of bands M12 (3.7 μm), M15 (10.8 μm) and M16 (12.0 μm). The TEB SDRs rely on pre-launch calibration coefficients used in a quadratic algorithm to convert the detector's response to calibrated radiance. This paper evaluates the performance of these pre-launch calibration coefficients using vicarious calibration information from the Cross-track Infrared Sounder (CrIS), also on board the SNPP spacecraft, and the Infrared Atmospheric Sounding Interferometer (IASI) on board the Meteorological Operational (MetOp) satellite. Changes to the pre-launch calibration coefficients' offset term c0 to improve SDR performance at cold scene temperatures are also discussed.
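The quadratic conversion referred to above, and why the offset term c0 matters most for cold scenes, can be sketched directly. The coefficient and digital-number values here are placeholders for illustration, not actual VIIRS calibration coefficients:

```python
def teb_radiance(dn, c0, c1, c2):
    """Quadratic TEB calibration: radiance = c0 + c1*dn + c2*dn**2."""
    return c0 + c1 * dn + c2 * dn ** 2

# Illustrative coefficients (not real VIIRS values). A shift in c0 moves
# every scene radiance by the same absolute amount, which is a large
# relative error for cold scenes where the radiance itself is small.
rad_warm = teb_radiance(4000.0, 0.05, 0.002, 1e-9)
rad_cold = teb_radiance(200.0, 0.05, 0.002, 1e-9)
rad_cold_adj = teb_radiance(200.0, 0.03, 0.002, 1e-9)  # adjusted c0
```

The same 0.02-unit change in c0 is a few tenths of a percent of the warm-scene radiance but several percent of the cold-scene radiance, which is why the paper focuses the c0 adjustment on cold scene temperatures.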

  9. Studying the Light Pollution around Urban Observatories: Columbus State University’s WestRock Observatory

    NASA Astrophysics Data System (ADS)

    O'Keeffe, Brendon Andrew; Johnson, Michael

    2017-01-01

    Light pollution plays an ever-increasing role in the operations of observatories across the world. This is especially true in urban environments like Columbus, GA, where Columbus State University’s WestRock Observatory is located. Light pollution raises the background level at an observatory, lowering the signal-to-noise ratio; this limits what the telescope can detect and therefore the capabilities of the observatory as a whole. Light pollution in Columbus has been mapped before using VIIRS DNB composites. However, that approach did not provide the resolution required to narrow down the problem areas in the vicinity of the observatory. The purpose of this study is to assess the current state of light pollution surrounding the WestRock Observatory by measuring and mapping the brightness of the sky due to light pollution using light meters and geographic information system (GIS) software. Compared to VIIRS data, this study offers improved spatial resolution and a direct measurement of the sky background. This assessment will enable future studies to compare their results to the baseline established here, ensuring that any changes in how the outdoors is illuminated, and their effects, can be accurately measured and counterbalanced.

  10. Experimental investigation on heat transfer and frictional characteristics of vertical upward rifled tube in supercritical CFB boiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Dong; Pan, Jie; Zhu, Xiaojing

    2011-02-15

    Water wall design is a key issue for supercritical Circulating Fluidized Bed (CFB) boilers. On account of its good heat transfer performance, rifled tube is applied in the water wall design of a 600 MW supercritical CFB boiler in China. In order to investigate the heat transfer and frictional characteristics of the rifled tube with vertical upward flow, an in-depth experiment was conducted over the range of pressure from 12 to 30 MPa, mass flux from 230 to 1200 kg/(m{sup 2} s), and inner wall heat flux from 130 to 720 kW/m{sup 2}. The wall temperature distribution and pressure drop in the rifled tube were obtained in the experiment. Normal, enhanced and deteriorated heat transfer characteristics were also captured. In this paper, the effects of pressure, inner wall heat flux and mass flux on the heat transfer characteristics are analyzed, the heat transfer mechanism and the frictional resistance performance are discussed, and the corresponding empirical correlations are presented. The experimental results show that the rifled tube can effectively prevent the occurrence of departure from nucleate boiling (DNB) and keep the tube wall temperature in a permissible range under the operating conditions of a supercritical CFB boiler. (author)

  11. Supersensitive and selective detection of picric acid explosive by fluorescent Ag nanoclusters.

    PubMed

    Zhang, Jian Rong; Yue, Yuan Yuan; Luo, Hong Qun; Li, Nian Bing

    2016-02-07

    Picric acid (PA) explosive is a hazard to public safety and health, so the sensitive and selective detection of PA is very important. In the present work, polyethyleneimine stabilized Ag nanoclusters were successfully used for the sensitive and selective quantification of PA on the basis of fluorescence quenching. The quenching efficiency of Ag nanoclusters is proportional to the concentration of PA and the logarithm of PA concentration over two different concentration ranges (1.0 nM-1 μM for the former and 0.25-20 μM for the latter), thus the proposed quantitative strategy for PA provides a wide linear range of 1.0 nM-20 μM. The detection limit based on 3σ/K is 0.1 nM. The quenching mechanism of Ag nanoclusters by PA is discussed in detail. The results indicate that the selective detection of PA over other nitroaromatics including 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (2,4-DNT), p-nitrotoluene (p-NT), m-dinitrobenzene (m-DNB), and nitrobenzene (NB), is due to the electron transfer and energy transfer between PA and polyethyleneimine-capped Ag nanoclusters. In addition, the experimental data obtained for the analysis of artificial samples show that the proposed PA sensor is potentially applicable in the determination of trace PA explosive in real samples.
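The reported 0.1 nM detection limit follows the standard 3σ/K criterion: three times the standard deviation of the blank signal divided by the calibration slope. A minimal sketch of that arithmetic; the calibration points and blank noise below are illustrative numbers chosen to reproduce a 0.1 nM LOD, not the paper's measured data:

```python
def least_squares_slope(xs, ys):
    """Slope of an unweighted least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def detection_limit(blank_sd, slope):
    """LOD by the 3-sigma criterion: 3 * sd(blank) / calibration slope K."""
    return 3.0 * blank_sd / slope

# Hypothetical linear calibration: quenching efficiency vs PA
# concentration (nM) in the low-concentration range.
conc = [0.0, 2.0, 4.0, 6.0, 8.0]
quench = [0.00, 0.06, 0.12, 0.18, 0.24]

k = least_squares_slope(conc, quench)             # 0.03 per nM
lod = detection_limit(blank_sd=0.001, slope=k)    # 0.1 nM
```

Lowering the blank noise or steepening the calibration slope both improve (lower) the LOD, which is why sensor optimization targets quenching efficiency per unit analyte.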

  12. [Determination of nitroaromatics and cyclic ketones in sea water by gas chromatography coupled with activated carbon fiber solid-phase micro-extraction].

    PubMed

    Ma, Hanna; Zhu, Mengya; Wang, Yalin; Sun, Tonghua; Jia, Jinping

    2009-05-01

    A gas chromatography (GC) method coupled with solid-phase micro-extraction using a special activated carbon fiber (ACF) was developed for the analysis of six nitroaromatics and cyclic ketones in sea water samples: nitrobenzene (NB), 1,3-dinitrobenzene (1,3-DNB), 2,4-dinitrotoluene (2,4-DNT), 2,6-dinitrotoluene (2,6-DNT), isophorone, and 1,4-naphthaquinone (1,4-NPQ). The sample was extracted in the headspace for 30 min under NaCl saturation at 1,500 r/min and 60 degrees C. Desorption was performed at 280 degrees C for 2 min. The linear ranges were from 0.01 to 400 microg/L, and the limits of detection (LODs) were 1.4-3.2 ng/L. The method has been successfully applied to the determination of nitroaromatics and cyclic ketones in sea water samples obtained from the East China Sea. The concentrations of nitrobenzene, 1,3-dinitrobenzene and 2,6-dinitrotoluene in the sea water sample were 0.756, 0.944 and 0.890 microg/L, respectively. The recoveries were 86.3%-101.8%, with relative standard deviations (RSDs) of 3.7%-7.8%. The method is suitable for analyzing nitroaromatics and cyclic ketones at low concentration levels in sea water samples.

  13. SPoRT Participation in the GOES-R and JPSS Proving Grounds

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Fuell, Kevin; Smith, Matthew

    2013-01-01

    For the last several years, the NASA Short-term Prediction Research and Transition (SPoRT) project has been working with the various algorithm working groups and science teams to demonstrate the utility of the future operational sensors for GOES-R and the suite of instruments on the JPSS observing platforms. For GOES-R, imagery and products have been developed from polar-orbiting sensors such as MODIS, geostationary observations from SEVIRI, simulated imagery, enhanced products derived from existing GOES satellites, and data from ground-based observing systems to generate pseudo or proxy products for the ABI and GLM instruments. The suite of products includes GOES-POES basic and RGB hybrid imagery, total lightning flash products, quantitative precipitation estimates, and convective initiation products. SPoRT is using imagery and products from VIIRS, CrIS, ATMS, and OMPS to show the utility of data and products from their operational counterparts on JPSS. The products include VIIRS imagery in swath form, the GOES-POES hybrid, a suite of RGB products including the air mass RGB using water vapor and ozone channels from CrIS, and several DNB products. Over a dozen SPoRT collaborative WFOs and several National Centers are involved in an intensive evaluation of the operational utility of these products.

  14. Silicon controlled rectifier polyphase bridge inverter commutated with gate-turn-off thyristor

    NASA Technical Reports Server (NTRS)

    Edwards, Dean B. (Inventor); Rippel, Wally E. (Inventor)

    1986-01-01

    A polyphase SCR inverter (10) having N switching poles, each comprised of two SCR switches (1A, 1B; 2A, 2B . . . NA, NB) and two diodes (D1A, D1B; D2A, D2B . . . DNA, DNB) in series opposition, with saturable reactors (L1A, L1B; L2A, L2B . . . LNA, LNB) connecting the junctions between the SCR switches and diodes to an output terminal (1, 2 . . . N), is commutated with only one GTO thyristor (16) connected between the common negative terminal of a dc source and a tap of a series inductor (14) connected to the positive terminal of the dc source. A clamp winding (22) and diode (24) are provided, as is a snubber (18) which may have its capacitance (C) sized for maximum load current and divided into a plurality of capacitors (C.sub.1, C.sub.2 . . . C.sub.N), each in series with an SCR switch (S.sub.1, S.sub.2 . . . S.sub.N). The total capacitance may be selected by activating selected switches as a function of load current. A resistor (28) and SCR switch (26) shunt reverse current when the load acts as a generator, such as a motor while braking.

  15. Construction of 2,4,6-Trinitrotoluene Biosensors with Novel Sensing Elements from Escherichia coli K-12 MG1655.

    PubMed

    Tan, Junjie; Kan, Naipeng; Wang, Wei; Ling, Jingyi; Qu, Guolong; Jin, Jing; Shao, Yu; Liu, Gang; Chen, Huipeng

    2015-06-01

    Detection of 2,4,6-trinitrotoluene (TNT) has been extensively studied, since it is a common explosive filling for landmines and poses significant threats to the environment and human safety. Rapid advances in synthetic biology offer new hope for detecting such toxic and hazardous compounds in a more sensitive and safer way. Constructing a biosensor requires finding sensing elements able to detect TNT. As TNT can induce physiological responses in E. coli, it may be possible to derive TNT-sensing elements from E. coli. An E. coli MG1655 genomic promoter library containing nearly 5,400 elements was constructed. Five elements, yadG, yqgC, aspC, recE, and topA, displayed high sensing specificity to TNT and its indicator compounds 1,3-DNB and 2,4-DNT. Based on this, a whole-cell biosensor was constructed in E. coli, in which green fluorescent protein was positioned downstream of each of the five sensing elements via genetic fusion. The threshold value, detection time, EC200 value, and other characteristics of the five sensing elements were determined, and the minimum responding concentration of TNT was 4.75 mg/L. The five sensing elements enrich the reservoir of TNT-sensing elements and provide a more applicable toolkit for use in the genetic circuits and living systems of future biosensors.

  16. Utilizing Suomi NPP's Day-Night Band to Assess Energy Consumption in Rural and Urban Areas as an Input for Poverty Analysis

    NASA Astrophysics Data System (ADS)

    Baldwin, H. B.; Klug, M.; Tapracharoen, K.; Visudchindaporn, C.

    2017-12-01

    While poverty in Thailand decreased from 67% in 1986 to 13% in 2012, 6.7 million people were still living within 20% of the poverty line in 2014. Economic uncertainty caused by recurring droughts and decreasing agricultural prices puts this vulnerable part of the population at risk of dropping below the national poverty line in the future. To address this issue, the team worked with the Office of Science and Technology (OSTC) at the Royal Thai Embassy, the Asian Disaster Preparedness Center (ADPC), and the NASA SERVIR Coordination Office to formulate a new method of analyzing poverty within Thailand. This project utilizes the 2012-2015 monthly composite product produced by the Earth Observations Group (EOG) at the National Oceanic and Atmospheric Administration (NOAA) National Geophysical Data Center (NGDC). EOG created this product from imagery from the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite Day/Night Band (Suomi NPP VIIRS DNB). Additionally, the project incorporated socio-economic data from Thailand's Ministry of Information and Communication Technology's National Statistical Office and the Ministry of Education's National Education Information System to create an enhanced poverty index. This new poverty index will provide the Thai government with a cost-effective way to analyze changes in poverty within the nation and inform policy making.
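A composite poverty index of the kind described above is typically built by normalizing each input layer and taking a weighted combination. A minimal sketch under assumed inputs: the district values, the "lower light implies higher poverty signal" framing, and the 50/50 weights are illustrative assumptions, not the project's actual index formula:

```python
def min_max(values):
    """Rescale a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical district-level inputs: mean DNB radiance (dimmer lights
# taken as a higher poverty signal) and a socio-economic indicator such
# as school enrolment rate. Weights are an illustrative 50/50 split.
radiance = [2.0, 10.0, 30.0, 60.0]
enrolment = [0.55, 0.70, 0.85, 0.95]

light_poverty = [1.0 - v for v in min_max(radiance)]
edu_poverty = [1.0 - v for v in min_max(enrolment)]
index = [0.5 * a + 0.5 * b for a, b in zip(light_poverty, edu_poverty)]
```

Normalizing before combining keeps the index from being dominated by whichever layer happens to have the larger numeric range.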

  17. Evaluation of ingredients for the development of new insensitive munitions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maharrey, Sean P.; Johnston, Lois A.; Behrens, Richard, Jr.

    2004-12-01

    Several ingredients being considered by the U.S. Army for the development of new insensitive munitions have been examined. One set of ingredients consists of 2,4-dinitrophenylhydrazine (DNPH) and hexahydro-1,3,5-trinitro-s-triazine (RDX). For this set, the decomposition of the mixture was examined to determine whether adding DNPH to RDX would generate a sufficient quantity of gas to rupture the case of a munition prior to the onset of the rapid reaction of RDX, thus mitigating the violence of reaction. The second set of ingredients consists of three different reduced-sensitivity RDX (RS-RDX) powders manufactured by SNPE and Dyno-Nobel. For this set, the objective was to determine properties of RS-RDX powders that may distinguish them from normal RDX powder and may account for their reduced shock sensitivity. The decomposition reactions and sublimation properties of these materials were examined using two unique instruments: the simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) instrument and the Fourier transform ion cyclotron resonance (FTICR) mass spectrometry instrument. These instruments provide the capability to examine the details of decomposition reactions in energetic materials. DNPH does not appear to be a good candidate to mitigate the violence of the RDX reaction in a munition. DNPH decomposes between 170 C and 180 C; when mixed with RDX it decomposes between 155 C and 170 C. It decomposes to form 1,3-dinitrobenzene (DNB), ammonia, water and nitrogen. Of these compounds, only nitrogen and ammonia are capable of generating high pressures within a munition. When DNPH is mixed with RDX, the DNB formed in the decomposition of DNPH interacts with RDX on the surface of the RDX powder, leading to a higher rate of formation of CH2O and N2O. The CH2O is consumed by reaction with DNPH to form 2-methylene-1-(2,4-dinitrophenyl)hydrazine.
As a result, DNPH does not generate a large quantity of gas that would lead to rupture of a munition case. Another compound to consider as an additive is 2-oxo-1,3,5-trinitro-1,3,5-triazacyclohexane (K-6), which generates more gas in the required temperature range. Examination of several different RS-RDX materials has shown that their sublimation rates and decomposition behavior differ from those of Holston-grade RDX. The results suggest that insensitive RDX materials from both SNPE and Dyno-Nobel may have a shell-like structure of RDX on the surface of the particles that is less stable and more reactive than the material in the core of the particles. The origin of this shell-like RDX structure is uncertain, but it may be due to some aspect of the manufacturing process. It is possible that this less stable RDX on the surface of the particles may be more fluid than the interior, allowing more slip between particle surfaces under impact or shock; this may play a role in the reduced shock sensitivity of the insensitive RDX materials. The results of over 50 experiments with DNPH, mixtures of DNPH and RDX, and insensitive RDX are presented. The results characterize the decomposition behavior of each of these materials.

  18. Flow regimes and mechanistic modeling of critical heat flux under subcooled flow boiling conditions

    NASA Astrophysics Data System (ADS)

    Le Corre, Jean-Marie

    The thermal performance of heat-flux-controlled boiling heat exchangers is usually limited by the Critical Heat Flux (CHF), above which the heat transfer degrades quickly, possibly leading to heater overheating and destruction. In an effort to better understand the phenomena, a literature review of CHF experimental visualizations under subcooled flow boiling conditions was performed and systematically analyzed. Three major types of CHF flow regimes were identified (bubbly, vapor clot and slug flow regimes) and a CHF flow regime map was developed, based on a dimensional analysis of the phenomena and the available data. It was found that, for similar geometric characteristics and pressure, a Weber number (We)/thermodynamic quality (x) map can be used to predict the CHF flow regime. Based on the experimental observations and a review of the available CHF mechanistic models under subcooled flow boiling conditions, hypothetical CHF mechanisms were selected for each CHF flow regime, all based on a concept of wall dry spot overheating, rewetting prevention and subsequent dry spot spreading. It is postulated that a high wall superheat occurs locally in a dry area of the heated wall, due to a cyclical event inherent to the considered CHF two-phase flow regime, preventing rewetting (Leidenfrost effect). The selected modeling concept has the potential to span the CHF conditions from highly subcooled bubbly flow to the early stage of annular flow. A numerical model using a two-dimensional transient thermal analysis of the heater undergoing nucleation was developed to mechanistically predict CHF in the case of a bubbly flow regime. In this type of CHF two-phase flow regime, the high local wall superheat occurs underneath a nucleating bubble at the time of bubble departure. The model simulates the spatial and temporal heater temperature variations during nucleation at the wall, accounting for the stochastic nature of the boiling phenomena.
The model also has the potential to evaluate the post-DNB heater temperature up to the point of heater melting. Validation of the proposed model was performed using detailed measured wall boiling parameters near CHF, thereby bypassing most of the needed constitutive relations. It was found that under limiting nucleation conditions, a peak wall temperature at the time of bubble departure can be reached at CHF, preventing wall cooling by quenching. The simulations show that the resulting dry patch can survive the surrounding quenching event, preventing further nucleation and leading to a fast heater temperature increase. For more practical applications, the model was applied at known CHF conditions in simple geometry, coupled with one-dimensional and three-dimensional (CFD) codes. It was found that, in the case where CHF occurs under bubbly flow conditions, the local wall superheat underneath nucleating bubbles is predicted to reach the Leidenfrost temperature. However, a better knowledge of statistical variations in wall boiling parameters would be necessary to correctly capture the CHF trends with mass flux (or Weber number). In addition, consideration of relevant parameter influences on the Leidenfrost temperature and of interfacial microphysics at the wall would allow improved simulation of the wall rewetting prevention and subsequent dry patch spreading.
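The dry-spot mechanism above rests on a two-dimensional transient heat-conduction calculation in the heater. As a minimal illustration of that kind of calculation (not the authors' model; material properties, grid, and temperatures are placeholder values), an explicit finite-difference step can be sketched as:

```python
import numpy as np

# Illustrative sketch of a 2D explicit finite-difference heat-conduction step,
# the kind of transient thermal analysis such a heater model is built on.
alpha = 1e-5                   # thermal diffusivity, m^2/s (placeholder)
dx = 1e-4                      # grid spacing, m (placeholder)
dt = 0.05 * dx**2 / alpha      # time step, well inside the 2D stability limit (0.25)

def step(T):
    """Advance the temperature field one explicit Euler step (interior nodes;
    boundaries held fixed, a crude stand-in for a rewetted wall)."""
    Tn = T.copy()
    lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
           - 4.0 * T[1:-1, 1:-1]) / dx**2
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * dt * lap
    return Tn

# A hot "dry spot" (local superheat under a departing bubble) diffuses outward.
T = np.full((21, 21), 300.0)
T[10, 10] = 600.0
for _ in range(50):
    T = step(T)
```

The time step is chosen from the explicit 2D stability criterion (alpha*dt/dx^2 <= 0.25); the paper's actual model additionally couples this conduction solve to stochastic nucleation events at the wall.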

  19. Towards the Development of a Low-Cost Device for the Detection of Explosives Vapors by Fluorescence Quenching of Conjugated Polymers in Solid Matrices.

    PubMed

    Martelo, Liliana M; das Neves, Tiago F Pimentel; Figueiredo, João; Marques, Lino; Fedorov, Alexander; Charas, Ana; Berberan-Santos, Mário N; Burrows, Hugh D

    2017-11-03

    Conjugated polymers (CPs) have proved to be promising chemosensory materials for detecting nitroaromatic explosives vapors, as they quickly convert a chemical interaction into an easily-measured high-sensitivity optical output. The nitroaromatic analytes are strongly electron-deficient, whereas the conjugated polymer sensing materials are electron-rich. As a result, the photoexcitation of the CP is followed by electron transfer to the nitroaromatic analyte, resulting in a quenching of the light-emission from the conjugated polymer. The best CP in our studies was found to be poly[(9,9-dioctylfluorenyl-2,7-diyl)-co-bithiophene] (F8T2). It is photostable, has a good absorption between 400 and 450 nm, and a strong and structured fluorescence around 550 nm. Our studies indicate up to 96% quenching of light-emission, accompanied by a marked decrease in the fluorescence lifetime, upon exposure of the films of F8T2 in ethyl cellulose to nitrobenzene (NB) and 1,3-dinitrobenzene (DNB) vapors at room temperature. The effects of the polymeric matrix, plasticizer, and temperature have been studied, and the morphology of films determined by scanning electron microscopy (SEM) and confocal fluorescence microscopy. We have used ink jet printing to produce sensor films containing both sensor element and a fluorescence reference. In addition, a high dynamic range, intensity-based fluorometer, using a laser diode and a filtered photodiode was developed for use with this system.
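Quenching of this kind is conventionally quantified with the Stern-Volmer relation. A minimal sketch, where the Stern-Volmer constant and vapor concentration are placeholder values rather than quantities measured in this work:

```python
# The Stern-Volmer relation for collisional/static quenching:
#   I0 / I = 1 + Ksv * [Q],
# where I0/I is the unquenched-to-quenched intensity ratio and [Q] the vapor
# concentration of the analyte (e.g. NB or DNB). Ksv and q below are
# illustrative placeholders, not values from this study.
def quenching_efficiency(ksv, q):
    """Fraction of emission quenched, 1 - I/I0."""
    return 1.0 - 1.0 / (1.0 + ksv * q)

# A 96% quenching, as reported for the F8T2 films, corresponds to I0/I = 25,
# i.e. a Stern-Volmer product Ksv*[Q] of 24.
eff = quenching_efficiency(ksv=24.0, q=1.0)
```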

  20. Alternative method for VIIRS Moon in space view process

    NASA Astrophysics Data System (ADS)

    Anderson, Samuel; Chiang, Kwofu V.; Xiong, Xiaoxiong

    2013-09-01

The Visible Infrared Imaging Radiometer Suite (VIIRS) is a radiometric sensing instrument currently operating onboard the Suomi National Polar-orbiting Partnership (S-NPP) spacecraft. It provides high spatial-resolution images of the emitted and reflected radiation from the Earth and its atmosphere in 22 spectral bands (16 moderate resolution bands M1-M16, 5 imaging bands I1-I5, and 1 panchromatic day/night band, the DNB) spanning the visible and infrared wavelengths from 412 nm to 12 μm. Just prior to each Earth scan, the VIIRS instrument takes a measurement of deep space to serve as a background reference. These space view (SV) measurements form a crucial input to the VIIRS calibration process and are a major determinant of its accuracy. On occasion, the orientation of the Suomi NPP spacecraft coincides with the position of the Moon in such a fashion that the SV measurements include light from the Moon, rendering them unusable for calibration. This paper investigates improvements to the existing baseline SV data processing algorithm of the Sensor Data Record (SDR) processing software. The proposed method makes use of a Moon-in-SV detection algorithm that identifies moon-contaminated SV data on a scan-by-scan basis. Use of this algorithm minimizes the number of SV scans that are rejected initially, so that subsequent substitution processes are always able to find alternative substitute SV scans in the near vicinity of detected moon-contaminated scans.
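The scan-by-scan detect-and-substitute idea can be sketched as follows (a conceptual illustration only, not the operational SDR algorithm; the contamination threshold is a made-up placeholder):

```python
# Flag space-view (SV) scans whose mean background jumps above a threshold
# (lunar contamination), then replace each flagged scan with the nearest
# clean scan in its vicinity. Threshold and radiance units are placeholders.
def substitute_moon_scans(sv_means, threshold):
    flagged = [m > threshold for m in sv_means]
    clean_idx = [i for i, f in enumerate(flagged) if not f]
    out = []
    for i, m in enumerate(sv_means):
        if flagged[i]:
            j = min(clean_idx, key=lambda k: abs(k - i))  # nearest clean scan
            out.append(sv_means[j])
        else:
            out.append(m)
    return out

# Scans 3 and 4 see the Moon; both are replaced by nearby clean scans.
cleaned = substitute_moon_scans([10.1, 10.0, 10.2, 55.0, 60.3, 10.1],
                                threshold=20.0)
```

Flagging only the contaminated scans, rather than rejecting a whole window around the lunar intrusion, is what keeps a nearby substitute available for every affected scan.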

  1. Polar2Grid 2.0: Reprojecting Satellite Data Made Easy

    NASA Astrophysics Data System (ADS)

    Hoese, D.; Strabala, K.

    2015-12-01

Polar-orbiting multi-band meteorological sensors such as those on the Suomi National Polar-orbiting Partnership (SNPP) satellite pose substantial challenges for taking imagery the last mile to forecast offices, scientific analysis environments, and the general public. To do this quickly and easily, the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin has created an open-source, modular application system, Polar2Grid. This bundled solution automates tools for converting various satellite products, like those from VIIRS and MODIS, into a variety of output formats, including GeoTIFFs, AWIPS-compatible NetCDF files, and NinJo forecasting workstation compatible TIFF images. In addition to traditional visible and infrared imagery, Polar2Grid includes three perceptual enhancements for the VIIRS Day-Night Band (DNB), as well as the capability to create sharpened true color, sharpened false color, and user-defined RGB images. Polar2Grid performs conversions and projections in seconds on large swaths of data. Polar2Grid is currently providing VIIRS imagery over the Continental United States, as well as Alaska and Hawaii, from various Direct-Broadcast antennas to operational forecasters at the NOAA National Weather Service (NWS) offices in their AWIPS terminals, within minutes of an overpass of the Suomi NPP satellite. Three years after Polar2Grid development started, the Polar2Grid team is now releasing version 2.0 of the software, supporting more sensors, generating more products, and providing all of its features in an easy-to-use command-line interface.

  2. Demonstrating S-NPP VIIRS Products with the Naval Research Laboratory R&D Websites

    NASA Astrophysics Data System (ADS)

    Kuciauskas, A. P.; Hawkins, J.; Solbrig, J.; Bankert, R.; Richardson, K.; Surratt, M.; Miller, S. D.; Kent, J.

    2014-12-01

The Naval Research Laboratory, Marine Meteorology Division in Monterey, CA (NRL-MRY) has been developing and providing the global community with state-of-the-art VIIRS-derived image products on three operational websites: NexSat (www.nrlmry.navy.mil/NEXSAT.html), the VIIRS Page (www.nrlmry.navy.mil/VIIRS.html), and the Tropical Cyclone Page (www.nrlmry.navy.mil/TC.html). These user-friendly websites are accessed by the global public, with daily averages of 250,000 and 310,000 web hits for the NexSat and Tropical Cyclone websites, respectively. Users consist of operational, research, scientific field campaign, academic, and weather-enthusiast communities. The websites also contain ancillary products from 5 geostationary and 27 low-earth-orbiting sensors, ranging from visible through microwave channels. NRL-MRY also leverages the NRL global and regional numerical weather prediction (NWP) models for assessing cloud-top measurements and synoptic overlays. With collaborations at the CIMSS Direct Readout site along with the AFWA IDPS-FNMOC and NOAA IDPS portals, a robust component of our websites is product latency that typically satisfies the operational time constraints necessary for planning purposes. Given these resources, NRL-MRY acquires ~2 TBytes of data and produces 100,000 image products on a daily basis. In partnership with the COMET program, our product tutorials contain simple and graphically enhanced descriptions that accommodate users ranging from basic to advanced understanding of satellite meteorology. This presentation will provide an overview of our website functionality: animations, co-registered formats, and Google Earth viewing. Through imagery, we will also demonstrate the superiority of VIIRS over its heritage sensor counterparts. A focal aspect will be the demonstration of the VIIRS Day Night Band (DNB) in detecting nighttime features such as wildfires, volcanic ash, Arctic sea ice, and tropical cyclones.
We also plan to illustrate how NexSat and VIIRS websites demonstrate CAL/VAL ocean color activity. We will also discuss outreach and training efforts designed for research and operational applications. Our goal is to encourage the audience to add our URLs into their suite of web-based satellite resources.

  3. A User Guide to PARET/ANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, A. P.; Dionne, B.; Marin-Lafleche, A.

    2015-01-01

PARET was originally created in 1969 at what is now Idaho National Laboratory (INL), to analyze reactivity insertion events in research and test reactor cores cooled by light or heavy water, with fuel composed of either plates or pins. The use of PARET is also appropriate for fuel assemblies with curved fuel plates when their radii of curvature are large with respect to the fuel plate thickness. The PARET/ANL version of the code has been developed at Argonne National Laboratory (ANL) under the sponsorship of the U.S. Department of Energy/NNSA, and has been used by the Reactor Conversion Program to determine the expected transient behavior of a large number of reactors. PARET/ANL models the various fueled regions of a reactor core as channels. Each of these channels consists of a single flat fuel plate/pin (including cladding and, optionally, a gap) with water coolant on each side. In slab geometry the coolant channels for a given fuel plate are of identical dimensions (mirror symmetry), but they can be of different thickness in each channel. There can be many channels, but each channel is independent and coupled only through reactivity feedback effects to the whole core. The time-dependent differential equations that represent the system are replaced by an equivalent set of finite-difference equations in space and time, which are integrated numerically. PARET/ANL uses fundamentally the same numerical scheme as RELAP5 for the time-integration of the point-kinetics equations. The one-dimensional thermal-hydraulic model includes temperature-dependent thermal properties of the solid materials, such as heat capacity and thermal conductivity, as well as the transient heat production and heat transfer from the fuel meat to the coolant. Temperature- and pressure-dependent thermal properties of the coolant such as enthalpy, density, thermal conductivity, and viscosity are also used in determining parameters such as friction factors and heat transfer coefficients.
The code first determines the steady-state solution for the initial state. Then the solution of the transient is obtained by integration in time and space. Multiple heat transfer, DNB, and flow instability correlations are available. The code was originally developed to model reactors cooled by an open loop, which was adequate for rapid transients in pool-type cores. An external loop model appropriate for Miniature Neutron Source Reactors (MNSRs) was also added to PARET/ANL to model natural circulation within the vessel, heat transfer from the vessel to the pool, and heat loss by evaporation from the pool. PARET/ANL also contains models for decay heat after shutdown, control rod reactivity versus time or position, time-dependent pump flow, and loss-of-flow events with flow reversal, as well as logic for trips on period, power, and flow. Feedback reactivity effects from coolant density changes and temperature changes are represented by tables. Feedback reactivity from fuel heat-up (Doppler effect) is represented by a four-term polynomial in powers of fuel temperature. Photo-neutrons produced in beryllium or in heavy water may be included in the point-kinetics equations by using additional delayed neutron groups.
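The point-kinetics equations at the heart of such codes can be illustrated with a one-delayed-group explicit integration (a toy sketch with placeholder kinetics parameters, not PARET's or RELAP5's actual numerical scheme):

```python
# Point-kinetics equations, one delayed-neutron group, explicit Euler:
#   dn/dt = (rho - beta)/Lambda * n + lam * c
#   dc/dt = beta/Lambda * n - lam * c
# where n is power, c the precursor concentration, rho reactivity,
# beta the delayed fraction, Lambda the generation time, lam the decay
# constant. All numbers below are illustrative placeholders.
beta, Lambda, lam = 0.0065, 1e-4, 0.08

def integrate(n, c, rho, dt, steps):
    for _ in range(steps):
        dn = ((rho - beta) / Lambda * n + lam * c) * dt
        dc = (beta / Lambda * n - lam * c) * dt
        n, c = n + dn, c + dc
    return n, c

# At the steady state c = beta*n/(Lambda*lam), zero reactivity keeps power
# flat; a positive reactivity insertion makes power rise.
n0 = 1.0
c0 = beta * n0 / (Lambda * lam)
n_flat, _ = integrate(n0, c0, rho=0.0, dt=1e-6, steps=1000)
n_up, _ = integrate(n0, c0, rho=0.001, dt=1e-6, steps=1000)
```

In PARET/ANL this kinetics solve is further coupled, step by step, to the channel thermal-hydraulics through the tabulated reactivity feedback terms described above.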

  4. Land, Cryosphere, and Nighttime Environmental Products from Suomi NPP VIIRS: Overview and Status

    NASA Technical Reports Server (NTRS)

    Roman, Miguel O.; Justice, Chris; Csiszar, Ivan

    2014-01-01

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was launched in October 2011 as part of the Suomi National Polar-orbiting Partnership (S-NPP: http://npp.gsfc.nasa.gov/). VIIRS was designed to improve upon the capabilities of the operational Advanced Very High Resolution Radiometer (AVHRR) and provide observation continuity with NASA's Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS). Since the VIIRS first-light images were received in November 2011, NASA- and NOAA-funded scientists have been working to evaluate the instrument performance and derived products to meet the needs of the NOAA operational users and the NASA science community. NOAA's focus has been on refining a suite of operational products known as Environmental Data Records (EDRs), which were developed according to project specifications under the former National Polar-orbiting Environmental Satellite System (NPOESS). The NASA S-NPP Science Team has focused on evaluating the EDRs for science use, developing and testing additional products to meet science data needs, and providing MODIS data product continuity. This paper will present the findings to date of the NASA Science Team's evaluation of the VIIRS Land and Cryosphere EDRs, specifically Surface Reflectance, Land Surface Temperature, Surface Albedo, Vegetation Indices, Surface Type, Active Fires, Snow Cover, Ice Surface Temperature, and Sea Ice Characterization (http://viirsland.gsfc.nasa.gov/index.html). The paper will also discuss new capabilities being developed at NASA's Land Product Evaluation and Test Element (http://landweb.nascom.nasa.gov/NPP_QA/), including downstream data and products derived from the VIIRS Day/Night Band (DNB).

  5. Data and Geocomputation: Time Critical Mission Support for the 2017 Hurricane Season

    NASA Astrophysics Data System (ADS)

    Bhaduri, B. L.; Tuttle, M.; Rose, A.; Sanyal, J.; Thakur, G.; White, D.; Yang, H. H.; Laverdiere, M.; Whitehead, M.; Taylor, H.; Jacob, M.

    2017-12-01

A strong spatial data infrastructure and geospatial analysis capabilities are the nucleus of the decision-making process during emergency preparedness, response, and recovery operations. For over a decade, the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) has been developing critical data and analytical capabilities that help the Federal Emergency Management Agency (FEMA) and the rest of the federal response community assess and evaluate the impacts of natural hazards on population and critical infrastructure, including the status of the national electricity and oil and natural gas networks. These capabilities range from identifying structures or buildings from very high-resolution satellite imagery, utilizing machine learning and high-performance computing, to daily assessment of electricity restoration highlighting changes in nighttime lights for the impacted region based on the analysis of NOAA JPSS VIIRS Day/Night Band (DNB) imagery. This presentation will highlight our time-critical mission support efforts for the 2017 hurricane season, which witnessed unprecedented devastation from hurricanes Harvey, Irma, and Maria. ORNL provided 90 m resolution LandScan USA population distribution data for identifying vulnerable populations, as well as structure (building) data extracted from 1 m imagery for damage assessment. Spatially accurate data for solid waste facilities were developed and delivered to the response community. Human activity signatures were assessed from large-scale collection of open-source social media data around points of interest (POI) to ascertain the level of destruction. The electricity transmission system was monitored in real time through data integration from hundreds of utilities, and electricity outage information was provided back to the response community via standardized web services.
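The nighttime-lights restoration assessment can be sketched as a comparison of post-event DNB radiances against a pre-event baseline composite (a conceptual illustration only, with a made-up dimming threshold, not ORNL's production pipeline):

```python
# Count lit baseline pixels whose post-event DNB radiance dropped below an
# assumed fraction of their pre-event baseline; radiance values and the
# dim_ratio threshold are illustrative placeholders.
def outage_fraction(baseline, post, dim_ratio=0.5):
    """Fraction of lit baseline pixels dimmed below dim_ratio * baseline."""
    lit = [(b, p) for b, p in zip(baseline, post) if b > 0.0]
    dimmed = sum(1 for b, p in lit if p < dim_ratio * b)
    return dimmed / len(lit)

# Of the four lit pixels, two dropped well below half their baseline radiance.
frac = outage_fraction(baseline=[20.0, 15.0, 0.0, 30.0, 12.0],
                       post=[19.0, 2.0, 0.0, 5.0, 11.5])
```

A production workflow would additionally screen for clouds, lunar illumination, and viewing geometry before interpreting a radiance drop as an outage.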

  6. Satellite-based Tropical Cyclone Monitoring Capabilities

    NASA Astrophysics Data System (ADS)

    Hawkins, J.; Richardson, K.; Surratt, M.; Yang, S.; Lee, T. F.; Sampson, C. R.; Solbrig, J.; Kuciauskas, A. P.; Miller, S. D.; Kent, J.

    2012-12-01

Satellite remote sensing capabilities to monitor tropical cyclone (TC) location, structure, and intensity have evolved by utilizing a combination of operational and research and development (R&D) sensors. The microwave imagers from the operational Defense Meteorological Satellite Program [Special Sensor Microwave/Imager (SSM/I) and the Special Sensor Microwave Imager Sounder (SSMIS)] form the "base" for structure observations due to their ability to view through upper-level clouds, their modest-size swaths, and their ability to capture most storm structure features. The NASA TRMM microwave imager and precipitation radar continue their 15+ year-long missions in serving the TC warning and research communities. The cessation of NASA's QuikSCAT satellite after more than a decade of service is sorely missed, but India's OceanSat-2 scatterometer is now providing crucial ocean surface wind vectors in addition to the Navy's WindSat ocean surface wind vector retrievals. Another Advanced Scatterometer (ASCAT) onboard EUMETSAT's MetOp-2 satellite is slated for launch soon. Passive microwave imagery received a much-needed boost with the launch of the French/Indian Megha-Tropiques imager in September 2011, greatly supplementing the very successful NASA TRMM pathfinder with a larger swath and more frequent temporal sampling. While initial data issues have delayed data utilization, current news indicates this data will be available in 2013. Future NASA Global Precipitation Mission (GPM) sensors, starting in 2014, will provide enhanced capabilities. Also, the inclusion of the new microwave sounder data from the NPP ATMS (Oct 2011) will assist in mapping TC convective structures. The National Polar-orbiting Partnership (NPP) program's VIIRS sensor includes a day-night band (DNB) with the capability to view TC cloud structure at night when sufficient lunar illumination exists. Examples highlighting this new capability will be discussed in concert with additional data fusion efforts.

  7. Indigenous Manufacturing realization of TWIN Source

    NASA Astrophysics Data System (ADS)

    Pandey, R.; Bandyopadhyay, M.; Parmar, D.; Yadav, R.; Tyagi, H.; Soni, J.; Shishangiya, H.; Sudhir Kumar, D.; Shah, S.; Bansal, G.; Pandya, K.; Parmar, K.; Vuppugalla, M.; Gahlaut, A.; Chakraborty, A.

    2017-04-01

The TWIN source is a two-RF-driver-based negative ion source that has been planned to bridge the gap between the single-driver-based ROBIN source (currently operational) and the eight-driver-based DNB source (to be operated under the IN-TF test facility). TWIN source experiments have been planned at IPR, keeping the objective of the long-term domestic fusion programme to gain operational experience on vacuum-immersed multi-driver RF-based negative ion sources. The high-vacuum-compatible components of the TWIN source are designed at IPR with an emphasis on indigenous manufacture. These components are mainly stainless steel and OFC-Cu. Being components that receive high heat flux, one of their major functional requirements is continuous heat removal with water as the cooling medium. Hence, the stainless steel parts are provided with externally milled cooling lines that are covered with a layer of OFC-Cu on the side receiving the high heat flux. Manufacturability of the TWIN source components requires joining these dissimilar materials via processes like electrodeposition, electron-beam welding, and vacuum brazing. Any of these manufacturing processes shall give a vacuum-tight joint having proper joint strength at the operating temperature and pressure. In the indigenous development effort, vacuum brazing (in a non-nuclear environment) has been chosen for joining the dissimilar materials of the TWIN source, being one of the most reliable joining techniques and commercially feasible across the country's suppliers. The manufacturing design of the components has been improved to suit the vacuum brazing process requirements and to ease some of the machining, without compromising the functional and operational requirements. This paper illustrates the details of the indigenous development effort, the design improvements made to suit manufacturability, the basics of vacuum brazing, and its procedures for the TWIN source components.

  8. Use of JPSS ATMS, CrIS, and VIIRS data to Improve Tropical Cyclone Track and Intensity Forecasting

    NASA Astrophysics Data System (ADS)

    Chirokova, G.; Demaria, M.; DeMaria, R.; Knaff, J. A.; Dostalek, J.; Musgrave, K. D.; Beven, J. L.

    2015-12-01

JPSS data provide unique information that could be critical for the forecasting of tropical cyclone (TC) track and intensity and that is currently underutilized. Preliminary results from several TC applications using data from the Advanced Technology Microwave Sounder (ATMS), the Cross-Track Infrared Sounder (CrIS), and the Visible Infrared Imaging Radiometer Suite (VIIRS), carried by the Suomi National Polar-Orbiting Partnership satellite (SNPP), will be discussed. The first group of applications, which includes applications for moisture flux and for eye detection, aims to improve rapid intensification (RI) forecasts, one of the highest priorities within NOAA. The applications could be used by forecasters directly and will also provide additional input to the Rapid Intensification Index (RII), the statistical-dynamical tool for forecasting RI events that is operational at the National Hurricane Center. The moisture flux application uses bias-corrected ATMS-MIRS (Microwave Integrated Retrieval System) and NUCAPS (NOAA Unique CrIS ATMS Processing System) retrievals, which provide very accurate temperature and humidity soundings in the TC environment, to detect dry air intrusions. The objective automated eye-detection application uses geostationary and VIIRS data in combination with machine learning and computer vision techniques to determine the onset of eye formation, which is very important for TC intensity forecasts but is usually determined by subjective methods. The first version of the algorithm showed very promising results, with a 75% success rate. The second group of applications develops tools to better utilize VIIRS data, including day-night band (DNB) imagery, for tropical cyclone forecasting. Disclaimer: The views, opinions, and findings contained in this article are those of the authors and should not be construed as an official National Oceanic and Atmospheric Administration (NOAA) or U.S. Government position, policy, or decision.

  9. JPSS-1 VIIRS Pre-Launch Response Versus Scan Angle Testing and Performance

    NASA Technical Reports Server (NTRS)

    Moyer, David; McIntire, Jeff; Oudrari, Hassan; McCarthy, James; Xiong, Xiaoxiong; De Luccia, Frank

    2016-01-01

The Visible Infrared Imaging Radiometer Suite (VIIRS) instruments on board both the Suomi National Polar-orbiting Partnership (S-NPP) and the first Joint Polar Satellite System (JPSS-1) spacecraft, with launch dates of October 2011 and December 2016 respectively, are cross-track scanners with an angular swath of +/-56.06 deg. A four-mirror Rotating Telescope Assembly (RTA) is used for scanning, combined with a Half Angle Mirror (HAM) that directs light exiting from the RTA into the aft-optics. It has 14 Reflective Solar Bands (RSBs), seven Thermal Emissive Bands (TEBs), and a panchromatic Day Night Band (DNB). There are three internal calibration targets, the Solar Diffuser, the Blackbody, and the Space View, that have fixed scan angles within the internal cavity of VIIRS. VIIRS has calibration requirements of 2% on RSB reflectance and as tight as 0.4% on TEB radiance, which requires the sensor's gain change across scan, or Response Versus Scan angle (RVS), to be well quantified. A flow-down of the top-level calibration requirements puts constraints of 0.2%-0.3% on the characterization of the RVS, but there are no specified limitations on the magnitude of the response change across scan. The RVS change across scan angle can vary significantly between bands, with the RSBs having smaller changes of approximately 2% and some TEBs having approximately 10% variation. Within a band, the RVS has both detector and HAM-side dependencies that vary across scan. Errors in the RVS characterization will contribute to image banding and striping artifacts if their magnitudes are above the noise level of the detectors. The RVS was characterized pre-launch for both S-NPP and JPSS-1 VIIRS, and a comparison of the RVS curves between these two sensors will be discussed.
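How an RVS factor enters calibration can be sketched as follows (illustrative only; the quadratic shape and its coefficient are placeholder assumptions, not VIIRS's characterized RVS, and the point is simply that the response is normalized to 1 at a reference angle and divided out elsewhere):

```python
# Toy scan-angle-dependent response model, normalized to 1.0 at a reference
# angle (e.g. a fixed internal calibration view). Coefficient a2 is a
# placeholder chosen to give a roughly 2% roll-off at the edge of scan.
def rvs(theta_deg, a2=-6e-6, ref_deg=0.0):
    return 1.0 + a2 * (theta_deg**2 - ref_deg**2)

def calibrated_radiance(counts, gain, theta_deg):
    """Divide out the scan-angle response so radiance is angle-independent."""
    return counts * gain / rvs(theta_deg)

# If the ~2% edge-of-scan roll-off (around +/-56 deg) were ignored,
# edge-of-scan radiances would carry roughly the same ~2% bias.
edge_rvs = rvs(56.0)
L_center = calibrated_radiance(1000.0, 0.01, 0.0)
L_edge = calibrated_radiance(1000.0, 0.01, 56.0)
```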

  10. Experimental investigation of low temperature garnet-melt partitioning in CMASH, with application to subduction zone processes.

    NASA Astrophysics Data System (ADS)

    Morizet, Y.; Blundy, J.; McDade, P.

    2003-04-01

During subduction, the slab undergoes several processes such as dehydration and partial melting at pressures of 2-3 GPa and temperatures of 600-900 °C. Under these conditions, there is little or no distinction between melt and fluid phases (Bureau & Keppler, 1999, EPSL 165, 187-196). To investigate the behaviour of trace elements under these conditions we have carried out partitioning experiments in the system CMASH at 2.2 GPa, 700-920 °C. CMAS starting compositions were doped with trace elements, and loaded together with quartz and water into a Pt capsule, which was in turn contained within a Ni-lined Ti capsule. Run durations were 3-7 days. A run at 810 °C produced euhedral calcic garnet, zoisite, quartz, hydrous melt and tiny clinopyroxene interpreted as quench crystals. LA-ICPMS and SIMS were used to quantify trace element concentrations of the phases. Garnet-melt D's for the HREE decrease from ~300 for Lu to less than 0.2 for La. D_Sc and D_V are less than 5, consistent with the large X-site dimension in the garnet. D_Li, D_Sr and D_Ba are considerably less than those of the adjacent REE. There is a very slight negative partitioning anomaly for Zr and Hf relative to Nd and Sm; D_Hf is slightly greater than D_Zr. D_U < D_Th, due largely to the oxidizing conditions of the experiment (NNO). The most striking result is very high D's for Nb and Ta: 18±10 and 5.4±1.9 (LA-ICPMS), 25.8±11.9 and 6.6±1.3 (SIMS) for Nb and Ta respectively. These are considerably larger than any previously measured (at much higher temperatures). The observed partitioning behaviour is consistent with the large temperature dependence for D_REE proposed by Van Westrenen et al. (2001, Contrib Min Pet, 142, 219-234), and an even larger temperature dependence for D_Nb and D_Ta. These preliminary results suggest that garnet (rather than rutile) may play the key role in controlling the Nb and Ta budget of arc magmas and the Nb/Ta ratio of residual eclogites.
For example, modelling of eclogite melting, using an N-MORB source and the new D's, shows that a residue with Nb > 2 ppm and 19 < Nb/Ta < 37 (as proposed by Rudnick et al., 2000, Science 287, 278-281) can be produced by ~30% partial melting. Slightly lower melt fractions (~15%) reproduce their proposed Nb/La (>1.2).
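Residue calculations of this kind follow the standard batch-melting mass balance; a sketch with assumed bulk D values and source concentrations of the right order (placeholders, not the study's exact inputs):

```python
# Batch (equilibrium) melting mass balance:
#   C_melt = C0 / (D + F*(1 - D)),   C_residue = D * C_melt,
# where C0 is the source concentration, D the bulk partition coefficient,
# and F the melt fraction. C0 and D values below are illustrative only.
def batch_melt(c0, d, f):
    c_melt = c0 / (d + f * (1.0 - d))
    return c_melt, d * c_melt

F_melt = 0.30                                         # ~30% partial melting
nb_melt, nb_res = batch_melt(2.3, d=3.0, f=F_melt)    # Nb: assumed C0, bulk D
ta_melt, ta_res = batch_melt(0.13, d=1.0, f=F_melt)   # Ta: assumed C0, bulk D
nb_ta_residue = nb_res / ta_res
```

With a bulk D_Nb exceeding D_Ta, as the garnet measurements imply, the residue retains Nb preferentially, raising its Nb/Ta into the proposed 19-37 range.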

  11. Velvet bean severe mosaic virus: a distinct begomovirus species causing severe mosaic in Mucuna pruriens (L.) DC.

    PubMed

    Zaim, Mohammad; Kumar, Yogesh; Hallan, Vipin; Zaidi, A A

    2011-08-01

Velvet bean [Mucuna pruriens (L.) DC] is one of the most important medicinal plants. It is used to treat many ailments, but is most widely used for the treatment of Parkinson's disease because of the presence of 3,4-dihydroxyphenylalanine (L-dopa). Over the last 5 years, it has been noticed that plants in the field show severe mosaic, downward curling of the leaves, stunting, etc. This has been consistently observed over the years in India. The disease was transmitted by whiteflies and by grafting, and the causal agent was found to be a bipartite begomovirus. The whole genome was amplified by rolling circle amplification (RCA) using ϕ-29 DNA polymerase and characterized. DNA-A and DNA-B shared a 124-nucleotide (nt) long, highly conserved (98%) common region (CR). Comparisons with other begomoviruses showed that the DNA-A sequence has its highest identity (76%) with an isolate of Mungbean yellow mosaic India virus (MYMIV; AY937195) reported from India. These data suggested that the present isolate is a new species of the genus Begomovirus, for which the name "Velvet bean severe mosaic virus" (VbSMV) is proposed. DNA-B has a maximum sequence identity of 49% with an isolate of Horsegram yellow mosaic virus (HgYMV; AM932426) reported from India. Infectious clones consisting of a 1.7-mer partial tandem repeat of DNA-A and a dimer of DNA-B were constructed and agro-inoculated to Mucuna pruriens (L.) DC plants, which showed the field-observed symptoms 24 days post-infiltration (dpi). In phylogenetic analysis, DNA-A and DNA-B of the present isolate grouped with DNA-A of different begomoviruses reported from fabaceous crops. The study presents the first molecular evidence of any disease in velvet bean and a whole-genome analysis of the causative virus, which is a distinct bipartite species of Begomovirus.

  12. Single-Cell-Based Analysis Highlights a Surge in Cell-to-Cell Molecular Variability Preceding Irreversible Commitment in a Differentiation Process

    PubMed Central

    Boullu, Loïs; Morin, Valérie; Vallin, Elodie; Guillemin, Anissa; Papili Gao, Nan; Cosette, Jérémie; Arnaud, Ophélie; Kupiec, Jean-Jacques; Espinasse, Thibault

    2016-01-01

In some recent studies, a view emerged that the stochastic dynamics governing the switching of cells from one differentiation state to another could be characterized by a peak in gene expression variability at the point of fate commitment. We have tested this hypothesis at the single-cell level by analyzing primary chicken erythroid progenitors through their differentiation process and measuring the expression of selected genes at six sequential time-points after induction of differentiation. In contrast to population-based expression data, single-cell gene expression data revealed a high cell-to-cell variability, which was masked by averaging. We were able to show that the correlation network is a highly dynamic entity and that a subgroup of genes tends to follow the predictions of dynamical network biomarker (DNB) theory. In addition, we also identified a small group of functionally related genes encoding proteins involved in sterol synthesis that could act as the initial drivers of the differentiation. In order to quantitatively assess the cell-to-cell variability in gene expression and its evolution in time, we used Shannon entropy as a measure of heterogeneity. Entropy values showed a significant increase in the first 8 h of the differentiation process, reaching a peak between 8 and 24 h, before decreasing to significantly lower values. Moreover, we observed that this point of maximum entropy precedes two key events: an irreversible commitment to differentiation between 24 and 48 h, followed by a significant increase in cell size variability at 48 h. In conclusion, when analyzed at the single-cell level, the differentiation process looks very different from its classical population-averaged view.
New observables (like entropy) can be computed, the behavior of which is fully compatible with the idea that differentiation is not a “simple” program that all cells execute identically but results from the dynamical behavior of the underlying molecular network. PMID:28027290
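The entropy measure used above can be sketched in a few lines. This is a generic illustration only: the binning scheme and bin count are our assumptions, not the paper's exact protocol.

```python
import math
from collections import Counter

def shannon_entropy(values, n_bins=8):
    """Shannon entropy (bits) of one gene's expression across single cells.
    Continuous measurements are binned first; higher entropy indicates
    greater cell-to-cell heterogeneity. The fixed bin count is an
    illustrative assumption, not the study's protocol."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0          # guard: all values identical
    counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Computed per gene and per time-point, a rise and fall of this quantity would trace the entropy peak the abstract describes between 8 and 24 h.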

  13. Study and reflections on the functional and organizational role of neuromessenger nitric oxide in learning: An artificial and biological approach

    NASA Astrophysics Data System (ADS)

    Suárez Araujo, C. P.

    2000-05-01

    We present in this work a theoretical and conceptual study, with some reflections, on a fundamental aspect of brain structure and function: cellular communication. The main interests of our study are signal transmission mechanisms and the neuronal mechanisms responsible for learning. We propose the consideration of a new kind of communication mechanism, different from synaptic transmission: "Diffusion or Volume Transmission." This alternative is based on a diffusing messenger such as nitric oxide (NO). Our study aims at the design of a conceptual framework that covers the implications of NO for artificial neural networks (ANNs), both in neural architecture and in learning processing. This conceptual frame might provide possible biological support for many aspects of ANNs and generate new concepts to improve their structure and operation. Some of these new concepts are the Fast Diffusion Neural Propagation (FDNP), the Diffuse Neighborhood (DNB), the Diffusive Hybrid Neuromodulation (DHN), and the Virtual Weights. We also propose a new mathematical formulation for the Hebb learning law that takes the NO effect into account. Along the same lines, we reflect on the possibility of a new formal framework for learning processes in ANNs, consisting of slow and fast learning and concerning the co-operation between classical neurotransmission and FDNP. We develop this work from a computational neuroscience point of view, proposing a global study framework of the diffusion messenger NO (GSFNO) using a hybrid natural/artificial approach. Finally, it is important to note that this paper can be considered the first in a set of scientific works on nitric oxide (NO) and artificial neural networks (ANNs): the NO and ANNs Series. This paper has the character of a search and query on both subjects, their implications, and their co-existence.
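As a rough illustration of how a diffusing-messenger term could enter a Hebbian update, consider the sketch below. The multiplicative "NO field" gain and its Gaussian decay with distance are our hypothetical stand-ins; the abstract does not give the proposed formulation.

```python
import math

def hebb_update(w, x, y, eta=0.01, no_field=None):
    """Classic Hebb rule dw_ij = eta * x_i * y_j, optionally scaled by a
    per-synapse gain from a diffusing-messenger field. The NO modulation
    is a hypothetical illustration, not the article's formulation."""
    new_w = []
    for i, xi in enumerate(x):
        row = []
        for j, yj in enumerate(y):
            gain = 1.0 if no_field is None else no_field[i][j]
            row.append(w[i][j] + eta * gain * xi * yj)
        new_w.append(row)
    return new_w

def gaussian_no_field(n_pre, n_post, src=(0, 0), sigma=1.0):
    """Hypothetical NO concentration decaying with distance from a source
    synapse, giving each synapse a 'diffuse neighborhood' gain."""
    return [[math.exp(-((i - src[0]) ** 2 + (j - src[1]) ** 2)
                      / (2 * sigma ** 2))
             for j in range(n_post)] for i in range(n_pre)]
```

Under this toy rule, synapses near the NO source are strengthened more than distant ones, which is one way a "virtual weight" effect beyond direct synaptic contact could arise.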

  14. Single-Cell-Based Analysis Highlights a Surge in Cell-to-Cell Molecular Variability Preceding Irreversible Commitment in a Differentiation Process.

    PubMed

    Richard, Angélique; Boullu, Loïs; Herbach, Ulysse; Bonnafoux, Arnaud; Morin, Valérie; Vallin, Elodie; Guillemin, Anissa; Papili Gao, Nan; Gunawan, Rudiyanto; Cosette, Jérémie; Arnaud, Ophélie; Kupiec, Jean-Jacques; Espinasse, Thibault; Gonin-Giraud, Sandrine; Gandrillon, Olivier

    2016-12-01

    In some recent studies, a view emerged that the stochastic dynamics governing the switching of cells from one differentiation state to another could be characterized by a peak in gene expression variability at the point of fate commitment. We tested this hypothesis at the single-cell level by analyzing primary chicken erythroid progenitors through their differentiation process and measuring the expression of selected genes at six sequential time-points after induction of differentiation. In contrast to population-based expression data, single-cell gene expression data revealed a high cell-to-cell variability that was masked by averaging. We were able to show that the correlation network was a highly dynamic entity and that a subgroup of genes tends to follow the predictions of dynamical network biomarker (DNB) theory. In addition, we identified a small group of functionally related genes encoding proteins involved in sterol synthesis that could act as the initial drivers of the differentiation. To assess quantitatively the cell-to-cell variability in gene expression and its evolution in time, we used Shannon entropy as a measure of heterogeneity. Entropy values showed a significant increase in the first 8 h of the differentiation process, reaching a peak between 8 and 24 h before decreasing to significantly lower values. Moreover, we observed that the point of maximum entropy precedes two key events: an irreversible commitment to differentiation between 24 and 48 h, followed by a significant increase in cell size variability at 48 h. In conclusion, when analyzed at the single-cell level, the differentiation process looks very different from its classical population-averaged view. New observables (like entropy) can be computed, the behavior of which is fully compatible with the idea that differentiation is not a "simple" program that all cells execute identically but results from the dynamical behavior of the underlying molecular network.

  15. JPSS Products, Applications and Training

    NASA Astrophysics Data System (ADS)

    Torres, J. R.; Connell, B. H.; Miller, S. D.

    2017-12-01

    The Joint Polar Satellite System (JPSS) is a new-generation polar-orbiting operational environmental satellite system that will monitor the weather and environment around the globe. JPSS will provide technological and scientific improvements in environmental monitoring via high-resolution satellite imagery and derived products that stand to improve weather forecasting capabilities for National Weather Service (NWS) forecasters and complement operational geostationary satellites. JPSS will consist of four satellites, JPSS-1 through JPSS-4, with JPSS-1 due to launch in Fall 2017. A predecessor, prototype, and operational risk-reduction mission for JPSS is the Suomi National Polar-orbiting Partnership (S-NPP) satellite, launched on 28 October 2011. The following instruments on board S-NPP will also be hosted on JPSS-1: the Visible Infrared Imaging Radiometer Suite (VIIRS), Cross-track Infrared Sounder (CrIS), Advanced Technology Microwave Sounder (ATMS), Ozone Mapping and Profiler Suite (OMPS), and the Clouds and Earth's Radiant Energy System (CERES). JPSS-1 instruments will provide satellite imagery, products, and applications to users. The applications include detecting water and ice clouds, snow, sea surface temperatures, fog, fire, severe weather, vegetation health, and aerosols, and sensing reflected lunar and emitted visible-wavelength light at night via the Day/Night Band (DNB) sensor on VIIRS. Currently, only a few polar products are operational for forecasters; however, more will become available in the near future via the Advanced Weather Interactive Processing System-II (AWIPS-II), a forecast analysis software package that forecasters can use to analyze meteorological data. To complement the polar products, a wealth of training materials is currently in development. 
Denoted the Satellite Foundational Course for JPSS (SatFC-J), this training will help NWS forecasters utilize satellite data in their forecasts and daily operations as they discover its operational value in the NWS forecast process. As the JPSS-1 launch nears, training materials will be produced in the form of modules, videos, quick guides, fact sheets, and hands-on exercises.

  16. Experimental determination of trace-element partitioning between pargasite and a synthetic hydrous andesitic melt

    NASA Astrophysics Data System (ADS)

    Brenan, J. M.; Shaw, H. F.; Ryerson, F. J.; Phinney, D. L.

    1995-10-01

    In order to more fully establish a basis for quantifying the role of amphibole in trace-element fractionation processes, we have measured pargasite/silicate melt partitioning of a variety of trace elements (Rb, Ba, Nb, Ta, Hf, Zr, Ce, Nd, Sm, Yb), including the first published values for U, Th and Pb. Experiments conducted at 1000°C and 1.5 GPa yielded large crystals free of compositional zoning. Partition coefficients were found to be constant at total concentrations ranging from ~1 to >100 ppm, indicating Henry's Law is operative over this interval. Comparison of partition coefficients measured in this study with previous determinations yields good agreement for similar compositions at comparable pressure and temperature. The compatibility of U, Th and Pb in amphibole decreases in the order Pb > Th > U. Partial melting or fractional crystallization of amphibole-bearing assemblages will therefore result in the generation of excesses in 238U activity relative to 230Th, similar in magnitude to that produced by clinopyroxene. The compatibility of Pb in amphibole relative to U or Th indicates that melt generation in the presence of residual amphibole will result in the long-term enrichment in Pb relative to U or Th in the residue. This process is therefore incapable of producing the depletion in Pb relative to U or Th inferred from the Pb isotopic composition of MORB and OIB. Comparison of partition coefficients measured in this study with previous values for clinopyroxene allows some distinction to be made between expected trace-element fractionations produced during dry (cpx present) and wet (cpx + amphibole present) melting. Rb, Ba, Nb and Ta are dramatically less compatible in clinopyroxene than in amphibole, whereas Th, U, Hf and Zr have similar compatibilities in both phases. Interelement fractionations, such as DNb/DBa, are also different for clinopyroxene and amphibole. 
Changes in certain ratios, such as Ba/Nb, Ba/Th, and Nb/Th within comagmatic suites may therefore offer a means to discern the loss of amphibole from the melting assemblage. Elastic strain theory is applied to the partitioning data after the approaches of Beattie and Blundy and Wood and is used to predict amphibole/melt partition coefficients at conditions of P, T and composition other than those employed in this study. Given values of DCa, DTi and DK from previous partitioning studies, this approach yields amphibole/melt trace-element partition coefficients that reproduce measured values from the literature to within 40-45%. This degree of reproducibility is considered reasonable given that model parameters are derived from partitioning relations involving iron- and potassium-free amphibole.
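The elastic strain approach mentioned above commonly takes the Blundy and Wood lattice-strain form, in which a partition coefficient falls off with the misfit between the cation radius and the optimal site radius. A minimal sketch follows; the numerical inputs (D0, apparent Young's modulus, site radius) are placeholders that would be fitted to measured partition coefficients, as in the study.

```python
import math

R = 8.314        # gas constant, J/(mol K)
NA = 6.022e23    # Avogadro's number, 1/mol

def lattice_strain_D(D0, E_pa, r0, ri, T):
    """Blundy & Wood lattice-strain model: partition coefficient for a
    cation of radius ri (m) entering a site with optimal radius r0 (m),
    given an apparent Young's modulus E_pa (Pa) and temperature T (K).
    D equals the strain-free D0 when ri == r0 and decreases with misfit."""
    strain = (r0 / 2.0) * (ri - r0) ** 2 + (1.0 / 3.0) * (ri - r0) ** 3
    return D0 * math.exp(-4.0 * math.pi * E_pa * NA * strain / (R * T))
```

Given fitted values of D0, E, and r0 for an amphibole site (anchored, e.g., by DCa, DTi, and DK as the abstract describes), this expression predicts partition coefficients for other cations at other P-T-composition conditions.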

  17. Auroras over North America as Seen from Space

    NASA Image and Video Library

    2017-12-08

    Overnight on October 4-5, 2012, a mass of energetic particles from the atmosphere of the Sun was flung out into space, a phenomenon known as a coronal mass ejection. Three days later, the storm from the Sun stirred up the magnetic field around Earth and produced gorgeous displays of northern lights. NASA satellites track such storms from their origin, through their crossing of interplanetary space, to their arrival in the atmosphere of Earth. Using the “day-night band” (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS), the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite acquired this view of the aurora borealis early on the morning of October 8, 2012. The northern lights stretch across Canada’s Quebec and Ontario provinces in the image, and are part of the auroral oval that expanded to middle latitudes because of a geomagnetic storm. The DNB sensor detects dim light signals such as auroras, airglow, gas flares, city lights, and reflected moonlight. In the case of the image above, the sensor detected the visible light emissions as energetic particles rained down from Earth’s magnetosphere into the gases of the upper atmosphere. The images are similar to those collected by the Operational Linescan System flown on U.S. Defense Meteorological Satellite Program (DMSP) satellites for the past three decades. “When I first saw images like this as a graduate student, I was immediately struck by the fluid dynamic characteristics of the aurora,” said Tom Moore, a space physicist at NASA's Goddard Space Flight Center. “Viewing the aurora in this way makes it immediately clear that space weather is an interaction of fluids from the Sun with those of the Earth's upper atmosphere. The electrodynamics make for important differences between plasmas and ordinary fluids, but familiar behaviors (for example, waves and vortices) are still very apparent. 
It makes me wonder at the ability of apparently empty space to behave like a fluid.” Auroras typically occur when solar flares and coronal mass ejections—or even an active solar wind stream—disturb and distort the magnetosphere, the cocoon of space protected by Earth’s magnetic field. The collision of solar particles and pressure into our planet’s magnetosphere accelerates particles trapped in the space around Earth (such as in the radiation belts). Those particles are sent crashing down into Earth’s upper atmosphere—at altitudes of 100 to 400 kilometers (60 to 250 miles)—where they excite oxygen and nitrogen molecules and release photons of light. The results are rays, sheets, and curtains of dancing light in the sky. Auroras are a beautiful expression of the connection between Sun and Earth, but not all of the connections are benign. Auroras are connected to geomagnetic storms, which can distort radio communications (particularly high frequencies), disrupt electric power systems on the ground, and give slight but detectable doses of radiation to flight crews and passengers on high-latitude airplane flights and on spacecraft. The advantage of images like those from VIIRS and DMSP is resolution, according to space physicist Patrick Newell of the Johns Hopkins University Applied Physics Laboratory. “You can see very fine detail in the aurora because of the low altitude and the high resolution of the camera,” he said. Most aurora scientists prefer to use images from missions dedicated to aurora studies (such as Polar, IMAGE, and ground-based imagers), which can offer many more images of a storm (rather than one per orbit) and can allow researchers to calculate the energy moving through the atmosphere. There are no science satellites flying right now that provide such a view, though astronauts regularly photograph and film auroras from the International Space Station. 
    NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership (Suomi NPP) and the University of Wisconsin's Community Satellite Processing Package. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Mike Carlowicz. Instrument: Suomi NPP - VIIRS Credit: NASA Earth Observatory

  18. Remote sensing of Alaskan boreal forest fires at the pixel and sub-pixel level: multi-sensor approaches and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Waigl, C.; Stuefer, M.; Prakash, A.

    2013-12-01

    Wildfire is the main disturbance regime of the boreal forest ecosystem, a region acutely sensitive to climate change. Large fires impact the carbon cycle, permafrost, and air quality on a regional and even hemispheric scale. Because of their significance as a hazard to human health and economic activity, monitoring wildfires is relevant not only to science but also to government agencies. The goal of this study is to develop pathways towards a near real-time assessment of fire characteristics in the boreal zones of Alaska based on satellite remote sensing data. We map the location of active burn areas and derive fire parameters such as fire temperature, intensity, stage (smoldering or flaming), emission injection points, carbon consumed, and energy released. For monitoring wildfires in the sub-arctic region, we benefit from the high temporal resolution of data (as high as 8 images a day) from MODIS on the Aqua and Terra platforms and VIIRS on NPP/Suomi, downlinked and processed to level 1 by the Geographic Information Network of Alaska at the University of Alaska Fairbanks. To transcend the low spatial resolution of these sensors, a sub-pixel analysis is carried out. By applying techniques from Bayesian inverse modeling to Dozier's two-component approach, the uncertainties and sensitivity of the retrieved fire temperatures and fractional pixel areas to background temperature and atmospheric factors are assessed. A set of test cases - large fires from the 2004 to 2013 fire seasons complemented by a selection of smaller burns at the lower end of the MODIS detection threshold - is used to evaluate the methodology. While the VIIRS principal fire detection band M13 (centered at 4.05 μm, similar to MODIS bands 21 and 22 at 3.959 μm) does not usually saturate for Alaskan wildfire areas, the thermal IR band M15 (10.763 μm, comparable to MODIS band 31 at 11.03 μm) does saturate for some, though not all, of the fire pixels of intense burns. 
As this limits the application of the classical version of Dozier's model for this particular combination to lower intensity and smaller fires, or smaller fractional fire areas, other VIIRS band combinations are evaluated as well. Furthermore, the higher spatial resolution of the VIIRS sensor compared to MODIS and its constant along-scan resolution DNB (day/night band) dataset provide additional options for fire mapping, detection and quantification. Higher spatial resolution satellite-borne remote sensing data is used to validate the pixel and sub-pixel level analysis and to assess lower detection thresholds. For each sample fire, moderate-resolution imagery is paired with data from the ASTER instrument (simultaneous with MODIS data on the Terra platform) and/or Landsat scenes acquired in close temporal proximity. To complement the satellite-borne imagery, aerial surveys using a FLIR thermal imaging camera with a broadband TIR sensor provide additional ground truthing and a validation of fire location and background temperature.
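The classical two-band Dozier retrieval referenced above models a mixed pixel as a weighted sum of fire and background Planck radiances and solves for the fire temperature and fractional area. Below is a minimal sketch under that classical formulation with a known background temperature; the band centers follow the VIIRS M13/M15 values quoted in the abstract, while the brute-force grid-search solver is our illustrative choice, not the authors' Bayesian approach.

```python
import math

# Physical constants for the Planck function
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Planck spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2.0 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * KB * T))

def dozier_solve(L4, L11, tb, lam4=4.05e-6, lam11=10.763e-6):
    """Grid-search the fire temperature Tf; for each candidate, the 4 um
    mixed-pixel equation L4 = p*B4(Tf) + (1-p)*B4(Tb) gives the fire
    fraction p, which is then checked against the 11 um observation."""
    b4_bg, b11_bg = planck(lam4, tb), planck(lam11, tb)
    best = (float("inf"), None, None)
    for tf10 in range(4000, 15000):   # Tf from 400.0 to 1499.9 K, 0.1 K steps
        tf = tf10 / 10.0
        p = (L4 - b4_bg) / (planck(lam4, tf) - b4_bg)
        if not 0.0 < p < 1.0:
            continue
        resid = abs(p * planck(lam11, tf) + (1.0 - p) * b11_bg - L11)
        if resid < best[0]:
            best = (resid, tf, p)
    return best[1], best[2]
```

When the 11 μm band saturates, the right-hand observation L11 is clipped and this two-band system can no longer be solved, which is exactly the limitation the abstract notes for intense burns.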

  19. JPSS Data Product Applications for Monitoring Severe Weather and Environmental Hazards

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhou, L.; Divakarla, M. G.; Atkins, T.

    2016-12-01

    The Joint Polar Satellite System (JPSS) is the National Oceanic and Atmospheric Administration's (NOAA's) next-generation polar-orbiting operational environmental satellite system. The Suomi National Polar-orbiting Partnership (S-NPP) is the first satellite in the JPSS series. One of the key JPSS-supported mission areas is to reduce the loss of life from high-impact weather events while improving efficient economies through environmental information. Combined with the sensors on other polar and geostationary satellite platforms, JPSS observations provide much-enhanced capabilities for the Nation's essential products and services, including forecasting severe weather like hurricanes, potential tornadic outbreaks, and blizzards days in advance, and assessing environmental hazards such as droughts, floods, forest fires, poor air quality, and harmful coastal waters. Sensor and Environmental Data Records (SDRs/EDRs) derived from S-NPP and follow-on JPSS satellites provide critical data for environmental assessments, forecasts, and warnings. This paper demonstrates the use of S-NPP science data products in the analysis of severe weather and environmental hazard events, such as the Paraguay flooding, Hurricane Iselle, the record-breaking winter storm system that impacted the US East Coast area early this year, and the Fort McMurray wildfire. A brief description of these examples and a detailed discussion of the winter storm event are presented in this paper. VIIRS (Visible Infrared Imaging Radiometer Suite) and ATMS (Advanced Technology Microwave Sounder) SDR/EDR products collected from multiple days of S-NPP observations are analyzed to study the progression of the winter storm and illustrate how JPSS products captured the storm system. The products used for this study included VIIRS day/night band (DNB) and true color images, ocean turbidity images, snow cover fraction, and multi-sensor snowfall rates. 
Quantitative evaluation of the ATMS-derived snowfall rates against radar estimates revealed good agreement. The use of STAR JPSS product monitoring and visualization tools to evaluate these events, and applications of these tools for anomaly detection, mitigation, and science maintenance of the long-term stability of the data products, are also presented in this paper.

  20. Characterization of Nighttime Light Variability over the Southeastern United States

    NASA Astrophysics Data System (ADS)

    Cole, T.; Molthan, A.; Schultz, L. A.

    2015-12-01

    Severe meteorological events such as thunderstorms, tropical cyclones, and winter ice storms often produce prolonged, widespread power outages affecting large populations and regions. The spatial impact of these events can extend from relatively rural, small towns (e.g., the November 17, 2013 Washington, IL EF-4 tornado) to a series of adjoined states (e.g., the April 27, 2011 severe weather outbreak) to entire regions (e.g., 2012's Hurricane Sandy) during their lifespans. As such, affected populations can vary greatly, depending on the event's intensity, location, and duration. Actions taken by disaster response agencies like FEMA, the American Red Cross, and NOAA to provide support to communities during the recovery process need accurate and timely information on the extent and location(s) of power disruption. This information is often not readily available to these agencies given communication interruptions, independent storm damage reports, and other response-inhibiting factors. VIIRS DNB observations, which provide daily nighttime measurements of light sources, can be used to detect and monitor power outages caused by these meteorological disaster events. To generate such an outage product, normal nighttime light variability must be analyzed and understood at varying spatial scales (e.g., individual pixels, clustered land uses/covers, entire city extents). The southeastern portion of the United States serves as the study area, in which the mean, median, and standard deviation of nighttime lights are examined over numerous temporal periods (e.g., monthly, seasonally, annually, inter-annually). It is expected that isolated pixels with low population density (rural) will have tremendous variability, in which an outage "signal" is difficult to detect. Small towns may have more consistent lighting (over a few pixels), making it easier to identify outages and reductions. 
Finally, large metropolitan areas may be the most "stable" light source, but the entire area may rarely experience a complete outage. The goal is to determine the smallest spatial scale at which an outage can be detected. Presented work will highlight nighttime light variability over the southeastern U.S., which will serve as a baseline for the production of a near real-time power outage product.
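A per-pixel baseline of the kind described above might be sketched as follows. This is a minimal illustration in which the outage test (radiance dropping more than k standard deviations below the baseline mean) is an assumed threshold for demonstration, not the study's actual product logic.

```python
import statistics

def baseline_stats(series):
    """Baseline statistics for one pixel's time series of clear-sky
    DNB radiances (the mean/median/std examined in the study)."""
    return {
        "mean": statistics.mean(series),
        "median": statistics.median(series),
        "std": statistics.pstdev(series),
    }

def outage_signal(obs, stats, k=2.0):
    """Flag a possible outage when the observed radiance falls more than
    k standard deviations below the baseline mean. High-variability
    (rural) pixels have a large std, so they rarely trigger -- matching
    the abstract's expectation that rural outage signals are hard to
    detect."""
    return obs < stats["mean"] - k * stats["std"]
```

Aggregating such tests over clusters of pixels (town or city extents) is one way to probe the smallest spatial scale at which an outage remains detectable.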

  1. Overview of Suomi National Polar-Orbiting Partnership (NPP) Satellite Instrument Calibration and Validation

    NASA Astrophysics Data System (ADS)

    Weng, F.

    2015-12-01

    The Suomi National Polar-Orbiting Partnership (SNPP) satellite carries five instruments on board: ATMS, CrIS, VIIRS, OMPS, and CERES. During the SNPP intensive cal/val, ATMS was pitched over to observe the cold-space radiation. This unique data set was used for diagnostics of the ATMS scan-angle-dependent bias and scan-to-scan variation. A new algorithm is proposed to correct the ATMS scan-angle-dependent bias related to the reflector emission. ATMS radiometric calibration is also revised in IDPS with the full radiance processing (FRP). CrIS is the first Fourier transform Michelson interferometer and measures three infrared spectral bands, from 650 to 1095, 1210 to 1750, and 2155 to 2550 cm-1, each with a spectral resolution of 0.625 cm-1. Its spectral calibration accuracy is better than 2 ppm, and its noise is well characterized with the Allan variance. In January 2015, CrIS was switched to transmitting full-spectral-resolution (FSR) RDR data to the ground. The CrIS FSR SDR data are also produced offline at NOAA STAR. VIIRS has 22 spectral bands covering the spectrum between 0.412 μm and 12.01 μm, including 16 moderate-resolution bands (M-bands) with a spatial resolution of 750 m at nadir, five imaging-resolution bands (I-bands) with a spatial resolution of 375 m at nadir, and one day-night band (DNB) with a nearly constant 750 m spatial resolution throughout the scan. The calibration of the VIIRS reflective solar bands (RSB) requires a solar diffuser (SD) and a solar diffuser stability monitor (SDSM). Using the SNPP yaw maneuver data, the SDSM screen transmission function can be updated to better capture the fine structures of the vignetting function. For the OMPS nadir mapper (NM) and nadir profiler (NP), the detector and sensor signal-to-noise ratios meet the system requirements. Detector gain and bias performance trends are generally stable. 
System linearity performance is stable and highly consistent with the prelaunch values. The recent updates on OMPS wavelength, solar flux and radiance coefficients have resulted in viewing angle dependent bias in the earth view observations. OMPS dark currents are updated weekly and monitored for further improving the radiometric calibration.

  2. Health status of and health-care provision to asylum seekers in Germany: protocol for a systematic review and evidence mapping of empirical studies.

    PubMed

    Schneider, Christine; Mohsenpour, Amir; Joos, Stefanie; Bozorgmehr, Kayvan

    2014-11-29

    There are more than 100,000 asylum seekers registered in Germany, who are granted limited access to health services. This study aims to provide a systematic overview of the empirical literature on the health status of and health-care provision to asylum seekers in Germany in order to consolidate knowledge, avoid scientific redundancy, and identify research gaps. A systematic review and evidence mapping of empirical literature on the health status of and health-care provision to asylum seekers in Germany will be performed. We will apply a three-tiered search strategy: 1. search in databases (PubMed/MEDLINE, Web of Science, IBSS, Sociological Abstracts, Worldwide Political Science Abstracts, CINAHL, Sowiport, Social Sciences Citation Index, ASSIA, MedPilot, DNB), dissertation and theses databases, and the internet (Google); 2. screening references of included studies; 3. contacting authors and civil society organizations for grey literature. Included will be studies which report quantitative and/or qualitative data, or review articles, on asylum seekers in Germany, published in German or English. Outcome measures will include physical, mental, or social well-being, and all aspects of health-care provision (access, availability, affordability, and quality). Search results will be screened for eligibility by screening titles, abstracts, and full texts. Data extraction comprises information on study characteristics, research aims, and domains of health or health-care services analyzed. The quality of studies will be appraised and documented by appropriate assessment tools. A descriptive evidence map will be drawn by categorizing all included articles by research design and the health conditions and/or domains of health-care provision analyzed. 
The body of evidence will be evaluated, and a narrative evidence synthesis will be performed by means of a multi-level approach, whereby quantitative and qualitative evidence are analyzed as separate streams and the product of each stream is configured in a final summary. This systematic review will provide an evidence map and synthesis of available research findings on the health status of and health-care provision to asylum seekers in Germany. In anticipation of identifying areas which are amenable to health-care interventions, deserve immediate action, or further exploration, this review will be of major importance for policy-makers, health-care providers, as well as researchers. PROSPERO 2014: CRD42014013043.

  3. Radiation Environment Modeling for Spacecraft Design: New Model Developments

    NASA Technical Reports Server (NTRS)

    Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray

    2006-01-01

    A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.

  4. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    PubMed

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
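For intuition, the basic actor-partner structure compared above can be written as a pooled regression of each person's outcome on their own predictor (actor effect) and their partner's predictor (partner effect). The pure-Python OLS sketch below is illustrative only; the article's analyses use multilevel modeling and SEM, which additionally handle the dyadic error structure and measurement error.

```python
def fit_apim(data):
    """Minimal pooled-OLS sketch of the basic actor-partner interdependence
    model: y = b0 + a*own_x + p*partner_x. Rows of `data` are
    (own_x, partner_x, y) triples, one per individual. Solved via the
    normal equations with Gaussian elimination (illustration only)."""
    X = [(1.0, own, partner) for own, partner, _ in data]
    y = [row[2] for row in data]
    n = 3
    # Normal equations (X'X) b = X'y
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return {"intercept": coef[0], "actor": coef[1], "partner": coef[2]}
```

The MLM and SEM approaches in the article estimate these same actor and partner coefficients but, unlike this pooled sketch, also model the within-dyad correlation of residuals.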

  5. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of near-infrared spectra qualitative analysis models were studied. Separate modeling can significantly improve model stability and adaptability, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability of the model but also its stability; at the same time, compared to separate modeling, it shortens the modeling time, reduces the modeling workload, extends the model's term of validity, and improves modeling efficiency. The model-adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The model-stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and the method has good application value.

  6. Evaluating the Bias of Alternative Cost Progress Models: Tests Using Aerospace Industry Acquisition Programs

    DTIC Science & Technology

    1992-12-01

    [Abstract garbled in the source scan; recoverable fragments:] ... the random walk, learning curve, fixed-variable and Bemis models ... Keywords: Production Functions, Production Rate Adjustment Model, Learning Curve Model, Random Walk Model, Bemis Model, Evaluating Model Bias, Cost Prediction Bias ... of four cost progress models -- a random walk model, the traditional learning curve model, a production rate model (fixed-variable model), and a model ...

  7. Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.

    NASA Technical Reports Server (NTRS)

    Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.

    1995-01-01

    This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and FLUENT equilibrium model. As of now, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues remain, such as near-wall closure, with all classes of models.

  8. Leadership Models.

    ERIC Educational Resources Information Center

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  9. SUMMA and Model Mimicry: Understanding Differences Among Land Models

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.

    2016-12-01

    Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.

  10. Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects

    ERIC Educational Resources Information Center

    Easley, J. A., Jr.

    1977-01-01

    The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)

  11. Pursuing the method of multiple working hypotheses to understand differences in process-based snow models

    NASA Astrophysics Data System (ADS)

    Clark, Martyn; Essery, Richard

    2017-04-01

    When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.

  12. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can describe aerospace general embedded software from multiple angles, at multiple levels and in multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
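The persistence round trip described in the abstract (in-memory object model to binary stream and back) can be sketched in Python; the class and field names below are illustrative, not taken from the MDDT system:

```python
import pickle

class ModelElement:
    """A minimal in-memory object model: a named node with child elements."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def __eq__(self, other):
        return self.name == other.name and self.children == other.children

# Build a small object model in memory.
root = ModelElement("software", [ModelElement("module_a"), ModelElement("module_b")])

# Persist: data model -> storage model (a binary stream).
stream = pickle.dumps(root)
assert isinstance(stream, bytes)

# Restore: storage model -> data model in memory.
restored = pickle.loads(stream)
assert restored == root
```

In a multi-person setting, such a binary stream is what would be written to shared storage and synchronized between collaborating modelers.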

  13. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
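The variance decomposition that the BMA tree exposes follows the law of total variance: the total model variance is the probability-weighted within-model variance plus the between-model variance. A minimal numeric sketch (all numbers illustrative, not from the groundwater study):

```python
# BMA variance decomposition:
#   total variance = within-model variance + between-model variance,
# weighted by posterior model probabilities.

means = [2.0, 3.0, 5.0]        # each candidate model's prediction
variances = [0.5, 0.4, 0.6]    # each model's within-model variance
probs = [0.5, 0.3, 0.2]        # posterior model probabilities (sum to 1)

bma_mean = sum(p * m for p, m in zip(probs, means))
within = sum(p * v for p, v in zip(probs, variances))
between = sum(p * (m - bma_mean) ** 2 for p, m in zip(probs, means))
total = within + between

print(bma_mean, within, between, total)
```

A large between-model term relative to the within-model term signals that the choice among candidate models, rather than noise inside any one model, dominates the uncertainty.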

  14. The Influence of a Model's Reinforcement Contingency and Affective Response on Children's Perceptions of the Model

    ERIC Educational Resources Information Center

    Thelen, Mark H.; And Others

    1977-01-01

    Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)

  15. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
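The abstract states that model weights were determined using BIC. A standard way to turn BIC scores into normalized BMA weights, shown here as an assumption rather than a quote from the paper, is w_i ∝ exp(-ΔBIC_i / 2):

```python
import math

def bic_weights(bics):
    """Convert BIC scores into normalized model weights via
    w_i ∝ exp(-ΔBIC_i / 2), where ΔBIC_i = BIC_i - min(BIC)."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Illustrative BIC scores for three AI models (values are made up):
weights = bic_weights([100.0, 100.0, 112.0])
print(weights)  # the third model is heavily down-weighted
```

This illustrates the parsimony effect described in the abstract: a model whose BIC is even modestly higher receives a near-zero weight, which is how the NF model can be "nearly discarded" despite being a plausible estimator.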

  16. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Serviced-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the serviced-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interface using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
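The CSDMS BMI referenced above standardizes control calls such as initialize, update, get_value and finalize, so a framework can drive any compliant model without knowing its internals. The toy class below is a simplified illustration of that contract, not the official BMI specification:

```python
class MinimalBMIModel:
    """A toy model exposing a BMI-style interface: a framework (or a web
    service wrapper) only needs these methods, not the model internals."""
    def __init__(self):
        self._time = 0.0
        self._storage = 0.0

    def initialize(self, config=None):
        self._storage = (config or {}).get("initial_storage", 0.0)

    def update(self):
        # One time step of a trivial linear-reservoir process.
        self._storage *= 0.9
        self._time += 1.0

    def get_current_time(self):
        return self._time

    def get_value(self, name):
        if name == "storage":
            return self._storage
        raise KeyError(name)

    def finalize(self):
        pass

model = MinimalBMIModel()
model.initialize({"initial_storage": 100.0})
for _ in range(3):
    model.update()
print(model.get_value("storage"))  # 100 * 0.9**3
model.finalize()
```

Exposing such a model as a web service then amounts to mapping each of these methods onto a remote call, which is essentially what the abstract describes.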

  17. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.

  18. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide updated state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These models vary in complexity and software structure. We also describe how these complexities were overcome using this approach, along with results of model benchmarks within LIS.
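The general model interface described above — retrieve forcings, parameters and states; return updated states and outputs — can be sketched in Python (LIS wrappers are written in FORTRAN 90; the bucket model and all names here are placeholders, not LIS code):

```python
def model_wrapper(forcings, parameters, states):
    """Generic model-interface contract: take forcing inputs, parameters,
    and current state variables; return updated states and outputs.
    The linear-bucket model inside is a stand-in for any wrapped model."""
    storage = states["storage"] + forcings["precip"]
    runoff = parameters["runoff_coeff"] * storage
    storage -= runoff
    new_states = {"storage": storage}
    outputs = {"runoff": runoff}
    return new_states, outputs

# The driving framework only loops over this one contract:
states = {"storage": 10.0}
for precip in [5.0, 0.0, 2.0]:
    states, outputs = model_wrapper({"precip": precip},
                                    {"runoff_coeff": 0.2}, states)
print(states, outputs)
```

Because every model exposes the same signature, the framework-side code that calls it can be generated from a specification, which is the automation the abstract describes.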

  19. Literature review of models on tire-pavement interaction noise

    NASA Astrophysics Data System (ADS)

    Li, Tan; Burdisso, Ricardo; Sandu, Corina

    2018-04-01

    Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. Finally, the modeling trend and future direction in this area are given.

  20. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
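The two simplest techniques named above, SMA and WAM, reduce to an equal-weight mean and a skill-weighted mean of the member predictions. A minimal sketch with made-up predictions and weights (DMIP derived its weights from model skill, not assumed here):

```python
def simple_multimodel_average(predictions):
    """SMA: equal-weight mean of the member model predictions per time step."""
    return [sum(step) / len(step) for step in zip(*predictions)]

def weighted_average(predictions, weights):
    """WAM: skill-based weights, assumed already normalized to sum to 1."""
    return [sum(w * p for w, p in zip(weights, step))
            for step in zip(*predictions)]

# Three models' streamflow predictions over four time steps (made up):
preds = [
    [10.0, 12.0, 11.0, 9.0],
    [14.0, 13.0, 12.0, 10.0],
    [12.0, 14.0, 10.0, 11.0],
]
print(simple_multimodel_average(preds))
print(weighted_average(preds, [0.5, 0.3, 0.2]))
```

MMSE and M3SE add regression-based bias correction on top of this weighting, which is the refinement the abstract found to outperform the simple average.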

  1. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale behind their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found that the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered and represented by specialized technical terms. Based on these findings, we made suggestions for improving model-based science teaching and learning using Model-It.

  2. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis

    DTIC Science & Technology

    2017-02-01

    Working Paper Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis Paul K. Davis RAND National Security Research... paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might be better called model-exercise-model in some cases) for... to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning

  3. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets to improve model simulations and reduce the variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and a more objective model calibration procedure should be included in further analysis.
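The calibration step the abstract describes — tuning model parameters so simulated fluxes match eddy-flux observations — can be illustrated with a one-parameter grid search minimizing RMSE (the toy flux model and data are stand-ins, not any of the nine models listed):

```python
def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def toy_gpp_model(light, efficiency):
    """Stand-in flux model: GPP proportional to incoming light."""
    return [efficiency * l for l in light]

light = [100.0, 200.0, 300.0, 250.0]    # forcing data
observed_gpp = [2.1, 3.9, 6.1, 4.9]     # "eddy flux" observations (made up)

# Grid-search calibration of the efficiency parameter.
candidates = [i / 1000.0 for i in range(1, 51)]
best = min(candidates,
           key=lambda e: rmse(toy_gpp_model(light, e), observed_gpp))
print(best)
```

Real calibrations tune many parameters against GPP, RE and NEP jointly, but the principle is the same: pick the parameter set that minimizes the mismatch with the flux observations.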

  4. Conceptual and logical level of database modeling

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.

  5. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Dräger, Andreas; ...

    2015-10-17

    In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.

  6. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  7. Service-oriented model-encapsulation strategy for sharing and integrating heterogeneous geo-analysis models in an open web environment

    NASA Astrophysics Data System (ADS)

    Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian

    2016-04-01

    Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted, aimed at shielding differences in their original execution environments so that the resulting services can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service onto another computer still encounter problems (e.g., they cannot access information about the model's deployment dependencies). This study presents a strategy for encapsulating geo-analysis models that reduces the problems encountered when sharing models between model providers and model users, and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program into model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.

  8. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and for defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way, complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. The paper includes both a language tutorial and the formal language syntax and semantic description.

  9. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  10. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
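    The "statistical post-model calibration" step can be pictured as an ordinary least-squares fit of observed yields against the raw model predictions; the sketch below (hypothetical variable names, not the authors' implementation) both calibrates a process-model prediction and combines it with a statistical one in a single regression:

    ```python
    import numpy as np

    def calibrate_and_combine(process_pred, stat_pred, observed):
        """Fit observed = b0 + b1*process_pred + b2*stat_pred by ordinary
        least squares, mimicking a post-model calibration that also
        combines a process-based prediction with a statistical one."""
        X = np.column_stack([np.ones_like(process_pred), process_pred, stat_pred])
        beta, *_ = np.linalg.lstsq(X, observed, rcond=None)
        return beta, X @ beta
    ```

    Setting `b2 = 0` recovers pure calibration of the process model; the combined model lets the data decide how much weight each component deserves.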

  11. An empirical model to forecast solar wind velocity through statistical modeling

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Ridley, A. J.

    2013-12-01

    The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies have proposed many empirical and semi-empirical models to forecast the solar wind velocity, based either on historical observations, e.g. the persistence model, or on instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of four models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. Comparing its performance against the four aforementioned models shows that the general persistence model outperforms all of them within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
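    The "general persistence model" described above is a least-squares linear combination of three component forecasts. A minimal sketch, assuming the component forecasts are already aligned on a common time grid (function and variable names are illustrative):

    ```python
    import numpy as np

    def fit_general_persistence(null_f, pers_f, rot_f, observed):
        # Stack the three component forecasts as regressors and solve for
        # the weights that minimize squared error against observations.
        X = np.column_stack([null_f, pers_f, rot_f])
        w, *_ = np.linalg.lstsq(X, observed, rcond=None)
        return w

    def predict(w, null_f, pers_f, rot_f):
        # Weighted combination of the three component forecasts.
        return w[0] * null_f + w[1] * pers_f + w[2] * rot_f
    ```

    In practice the weights would be fit on a training interval and evaluated out of sample at each forecast lead time.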

  12. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
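    Two of the most widely used criteria illustrate how different complexity penalties encode different modeling goals: AIC targets predictive performance, while BIC, whose penalty grows with sample size, targets model probability and consistent selection. A minimal sketch:

    ```python
    import math

    def aic(log_likelihood, k):
        # Akaike information criterion: trades fit against the number
        # of parameters k; oriented toward predictive performance.
        return 2 * k - 2 * log_likelihood

    def bic(log_likelihood, k, n):
        # Bayesian information criterion: the penalty k*log(n) grows
        # with sample size n, which makes BIC consistent (it favors
        # the data-generating model as n grows), unlike AIC.
        return k * math.log(n) - 2 * log_likelihood
    ```

    Lower values are better for both; which criterion is "right" depends on whether the goal is prediction or identification of the true model, exactly the distinction the classification scheme above formalizes.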

  13. Women's Endorsement of Models of Sexual Response: Correlates and Predictors.

    PubMed

    Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert

    2016-02-01

    Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.

  14. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  15. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub-problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.

  16. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGES

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
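    The core of an MLBMA-style analysis is weighting each model's prediction by an approximate posterior model probability derived from an information criterion evaluated at the maximum-likelihood estimate (MLBMA commonly uses KIC; the sketch below works with any such criterion, and the names are illustrative):

    ```python
    import math

    def model_weights(criteria):
        """Approximate posterior model probabilities from information-
        criterion values (smaller is better), via the usual
        exp(-0.5 * delta) normalization relative to the best model."""
        cmin = min(criteria)
        raw = [math.exp(-0.5 * (c - cmin)) for c in criteria]
        total = sum(raw)
        return [r / total for r in raw]

    def averaged_prediction(criteria, predictions):
        # Model-averaged prediction: weight each model's prediction by
        # its approximate posterior probability.
        w = model_weights(criteria)
        return sum(wi * p for wi, p in zip(w, predictions))
    ```

    The observation that correlated model structures degrade performance corresponds, in this sketch, to adjusting the (here implicit, uniform) prior model probabilities before normalization.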

  17. Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
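    The rule structure of a Takagi-Sugeno-Kang model can be sketched for a single input as Gaussian antecedents with linear consequents, the output being the firing-strength-weighted average of the rule consequents (a toy illustration, not the calibrated rainfall-runoff models of the study):

    ```python
    import math

    def gauss(x, c, s):
        # Gaussian membership function with center c and spread s.
        return math.exp(-0.5 * ((x - c) / s) ** 2)

    def tsk_output(x, rules):
        """First-order Takagi-Sugeno-Kang inference for a single input.
        Each rule is (center, spread, (a, b)): IF x is A THEN y = a*x + b.
        The crisp output is the firing-strength-weighted average of the
        rule consequents."""
        num = den = 0.0
        for c, s, (a, b) in rules:
            w = gauss(x, c, s)       # firing strength of this rule
            num += w * (a * x + b)   # weighted linear consequent
            den += w
        return num / den
    ```

    A rainfall-runoff application would use several inputs (e.g. rainfall and an index of soil moisture or season), but the weighted-average mechanics are the same.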

  18. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences, namely the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, in which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, with our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
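    The "eligibility adjustment" idea can be sketched for one episode of a two-stage task: the stage-2 reward prediction error is passed back to the stage-1 value through an eligibility trace whose strength w is set by the model-based system. This is an illustrative reconstruction of that idea, not the authors' exact algorithm:

    ```python
    def two_stage_update(q1, q2, a1, a2, reward, alpha, w):
        """One TD-style update for a two-stage choice task. q1 and q2 are
        lists of action values for stages 1 and 2; a1, a2 are the chosen
        actions; w in [0, 1] scales the eligibility of the stage-1 action
        for the stage-2 reward prediction error (hypothetical formulation)."""
        delta1 = q2[a2] - q1[a1]   # stage-1 error: bootstrap on stage-2 value
        delta2 = reward - q2[a2]   # stage-2 reward prediction error
        q1[a1] += alpha * (delta1 + w * delta2)  # trace-weighted stage-1 credit
        q2[a2] += alpha * delta2
        return q1, q2
    ```

    With w fixed this reduces to a standard TD(lambda)-style update; letting the model-based system set w per trial is what makes the trace "adjusted."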

  19. Airborne Wireless Communication Modeling and Analysis with MATLAB

    DTIC Science & Technology

    2014-03-27

    This research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey…

  20. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  1. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    PubMed

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.

  2. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks

    PubMed Central

    Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina

    2018-01-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699

  3. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. With an implemented example, it is illustrated how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  4. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    PubMed

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  5. The applicability of turbulence models to aerodynamic and propulsion flowfields at McDonnell-Douglas Aerospace

    NASA Technical Reports Server (NTRS)

    Kral, Linda D.; Ladd, John A.; Mani, Mori

    1995-01-01

    The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log-layer and free shear layers. Also, the Spalart-Allmaras model is not grid dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon model are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-omega/k-epsilon model and algorithmic improvements to reduce the CPU time of the two-equation models.

  6. The determination of third order linear models from a seventh order nonlinear jet engine model

    NASA Technical Reports Server (NTRS)

    Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex

    1989-01-01

    Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
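    The second method above, direct identification of a low-order model from I/O data, can be sketched with the standard recursive least-squares (RLS) recursion; the regressor contents and tuning constants here are illustrative, not the paper's setup:

    ```python
    import numpy as np

    def rls(xs, ys, lam=1.0, delta=1000.0):
        """Recursive least-squares parameter estimation. xs: sequence of
        regressor vectors, ys: scalar outputs; returns the parameter
        estimate. For I/O model identification, the regressor would hold
        past outputs and inputs, e.g. [y[k-1], y[k-2], u[k-1]] for a
        low-order ARX model. lam is a forgetting factor; delta sets the
        (large) initial covariance."""
        n = len(xs[0])
        theta = np.zeros(n)
        P = delta * np.eye(n)
        for x, y in zip(xs, ys):
            x = np.asarray(x, dtype=float)
            Px = P @ x
            gain = Px / (lam + x @ Px)          # Kalman-style gain
            theta = theta + gain * (y - x @ theta)  # correct by prediction error
            P = (P - np.outer(gain, Px)) / lam  # covariance update
        return theta
    ```

    Run against I/O data from the high-order nonlinear model, the recursion converges to the least-squares fit of the chosen low-order structure, which is the essence of identifying a reduced-order model directly from data.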

  7. BioModels: expanding horizons to include more modelling approaches and formats

    PubMed Central

    Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi

    2018-01-01

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614

  8. Modelling, teachers' views on the nature of modelling, and implications for the education of modellers

    NASA Astrophysics Data System (ADS)

    Justi, Rosária S.; Gilbert, John K.

    2002-04-01

    In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.

  9. Modelling land use change with generalized linear models--a multi-model analysis of change between 1860 and 2000 in Gallatin Valley, Montana.

    PubMed

    Aspinall, Richard

    2004-08-01

    This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. 
The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed based on hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
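The model-selection workflow this record describes (several candidate logistic-regression models of a land-use pattern, compared by an information criterion) can be sketched as follows. Everything here is synthetic and illustrative: the variable names (`dist_road`, `elevation`), data, and the plain AIC comparison are assumptions, not the paper's actual drivers or method.

```python
import numpy as np

# Illustrative sketch: two candidate logistic-regression models of a binary
# land-use outcome (e.g. presence of rural housing), compared via AIC.
# Synthetic data; in the paper, spatial drivers come from a GIS database.

rng = np.random.default_rng(0)
n = 500
dist_road = rng.uniform(0, 10, n)     # hypothetical driver: distance to road (km)
elevation = rng.uniform(0, 1, n)      # hypothetical driver: normalised elevation

# True process here depends on distance to road only.
p_true = 1 / (1 + np.exp(-(2.0 - 0.6 * dist_road)))
housing = rng.binomial(1, p_true)

def fit_logistic(X, y, iters=5000, lr=0.1):
    """Fit logistic regression by gradient ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def log_lik(X, y, w):
    Xb = np.column_stack([np.ones(len(y)), X])
    p = 1 / (1 + np.exp(-Xb @ w))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def aic(X, y):
    w = fit_logistic(X, y)
    return 2 * len(w) - 2 * log_lik(X, y, w)   # AIC = 2k - 2 ln L

aic_road = aic(dist_road[:, None], housing)                       # model 1
aic_both = aic(np.column_stack([dist_road, elevation]), housing)  # model 2
best = "road-only" if aic_road < aic_both else "road+elevation"
print(best, round(aic_road, 1), round(aic_both, 1))
```

Fitting period-specific models and comparing their AICs in this way is one concrete form of the "multiple working hypotheses" comparison the abstract describes.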

  10. Investigation of prospective teachers' knowledge and understanding of models and modeling and their attitudes towards the use of models in science education

    NASA Astrophysics Data System (ADS)

    Aktan, Mustafa B.

    The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. 
Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.

  11. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  12. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.

  13. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
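The control flow the abstract describes, two models advancing independently and exchanging fields at predetermined synchronization intervals, can be mimicked with a toy single-process sketch. The `ToyModel` class, its trivial relaxation dynamics, and all numbers are invented for illustration; MCT's actual job, distributed-memory transfer across M and N processors with grid decomposition and sparse-matrix interpolation, is not shown.

```python
# Toy sketch of the coupling pattern: two models step independently and
# swap fields at every synchronization point. Dynamics are a stand-in.

class ToyModel:
    def __init__(self, name, state):
        self.name, self.state, self.forcing = name, state, 0.0

    def step(self, dt):
        # trivial dynamics: relax state toward the forcing from the other model
        self.state += dt * (self.forcing - self.state) * 0.1

ocean = ToyModel("ocean", state=10.0)   # stands in for ROMS
waves = ToyModel("waves", state=2.0)    # stands in for SWAN

t, dt, sync_interval = 0.0, 1.0, 5.0
while t < 50.0:
    ocean.step(dt)
    waves.step(dt)
    t += dt
    if t % sync_interval == 0:          # predetermined synchronization point
        ocean.forcing = waves.state     # exchange fields both ways
        waves.forcing = ocean.state

print(round(ocean.state, 3), round(waves.state, 3))
```

Between synchronization points each model is free to run at its own pace (and, in the real system, on its own processor set), which is the efficiency argument for interval-based exchange.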

  14. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  15. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  16. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    NASA Astrophysics Data System (ADS)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
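The copula construction underlying this kind of premium analysis, dependent claim severities built from separate marginal distributions, can be sketched as below. This is not the paper's Frank-copula fit: it uses a Gaussian copula (simpler to simulate with numpy/scipy alone), Gamma marginals with made-up parameters, and two hypothetical claim categories.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulate dependent claim severities for two claim
# categories via a Gaussian copula with Gamma marginals, then estimate the
# combined premium (mean total claim). All parameters are hypothetical.

rng = np.random.default_rng(1)
rho = 0.6          # assumed dependence between the two claim categories
n = 100_000

# Step 1: correlated normals -> uniforms through the normal CDF (the copula).
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)

# Step 2: push uniforms through the Gamma marginal inverses
# (the role played by the regression marginals in the study).
own_damage = stats.gamma.ppf(u[:, 0], a=2.0, scale=500.0)    # mean 1000
third_party = stats.gamma.ppf(u[:, 1], a=1.5, scale=800.0)   # mean 1200

# The copula changes the joint behaviour (and hence tail risk of the total)
# without changing the marginal means.
total = own_damage + third_party
print(round(total.mean(), 1),
      round(np.corrcoef(own_damage, third_party)[0, 1], 2))
```

Comparing such a dependent simulation against one with independent marginals is the independent-vs-dependent model comparison the abstract describes.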

  17. Utilizing Biological Models to Determine the Recruitment of the IRA by Modeling the Voting Behavior of Sinn Fein

    DTIC Science & Technology

    2006-03-01

    Noting the strengths and weaknesses of sociological and biological models, the thesis applies a biological model, the Lotka-Volterra predator-prey model, to a highly suggestive case study: that of the Irish Republican Army, whose recruitment is inferred by modeling the voting behavior of Sinn Féin. Keywords: Irish Republican Army; Sinn Féin; Lotka-Volterra predator-prey model; recruitment; British Army.
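The Lotka-Volterra predator-prey equations named in this record are a standard coupled ODE system, integrated numerically below. The parameter values and initial populations are illustrative only, not those used in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey system:
#   dx/dt = a*x - b*x*y   (prey grows, is eaten)
#   dy/dt = d*x*y - c*y   (predator grows by eating, otherwise dies off)
# Parameters chosen purely for illustration.

a, b, c, d = 1.0, 0.1, 1.5, 0.075

def lotka_volterra(t, state):
    x, y = state
    return [a * x - b * x * y, d * x * y - c * y]

sol = solve_ivp(lotka_volterra, (0, 15), [10.0, 5.0], rtol=1e-8, atol=1e-8)
x, y = sol.y
print(round(x.max(), 2), round(y.max(), 2))   # both populations oscillate
```

The characteristic behaviour, out-of-phase oscillation of the two populations, is what makes the model attractive for coupled recruitment/response dynamics of the kind the thesis studies.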

  18. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  19. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).

  20. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
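The criterion-based branch of model averaging described above can be made concrete with Akaike weights: each candidate model's information-criterion score is converted to a relative weight, and predictions are combined as a weighted average. The AIC values and predictions below are hypothetical, and this simple AIC weighting is only one of the variants (GLUE, MLBMA with KIC or BIC) the paper compares.

```python
import numpy as np

# Akaike-weight model averaging over three hypothetical groundwater models.
aic = np.array([210.3, 212.1, 218.7])   # candidate models' AIC scores
preds = np.array([4.2, 4.9, 6.1])       # each model's head prediction (m)

delta = aic - aic.min()                 # AIC differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                # Akaike weights, sum to 1

averaged = weights @ preds              # model-averaged prediction
print(np.round(weights, 3), round(averaged, 2))
```

Because the weights decay exponentially in the AIC difference, models a few AIC units behind the best still contribute, while clearly inferior models are effectively discounted; this sensitivity to the criterion is exactly why the different techniques can produce significantly different weights and ranks.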

  1. Examination of various turbulence models for application in liquid rocket thrust chambers

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1991-01-01

    There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The natures of turbulence are randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strength, weakness, limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop, specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.

  2. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  4. [The reliability of dento-maxillary models created by cone-beam CT and rapid prototyping:a comparative study].

    PubMed

    Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua

    2012-04-01

    To analyze the reliability of dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT, and 3-D models were formed after software calculation and reconstruction. Computerized composite models (RP models) were then produced by the rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with the SPSS 17.0 software package. For crown widths, dental arch lengths and crowding there were significant differences among the 3 models (P<0.05), whereas the dental arch widths showed no significant differences. Measurements on 3-D models were significantly smaller than those on the other two models (P<0.05). Compared with 3-D models, RP models had more measurements that did not differ significantly from those on plaster models (P>0.05). The regression coefficients among the three models were significant (P<0.01), ranging from 0.8 to 0.9, and the coefficient between RP and plaster models was larger than that between 3-D and plaster models. The three models show high consistency, and the remaining differences are clinically acceptable. It is therefore possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.

  5. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
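The two halves of a standardized interface described above, control functions and self-description functions, can be sketched with a toy hydrologic component. The class below is modeled on, but not identical to, the CSDMS Basic Model Interface: the method names follow the BMI pattern, while the component itself and the output variable name are invented for illustration.

```python
# Minimal BMI-style component: a caller (e.g. a framework) can initialize,
# advance, and finalize it, and can query what it produces before coupling.

class LinearReservoir:
    """Toy hydrologic component: storage drains at a fixed fractional rate."""

    # --- control functions: make the model fully caller-controllable ---
    def initialize(self, storage=100.0, k=0.1, dt=1.0):
        self.storage, self.k, self.dt, self.time = storage, k, dt, 0.0

    def update(self):
        """Advance the state variables by one time step."""
        self.storage -= self.k * self.storage * self.dt
        self.time += self.dt

    def finalize(self):
        self.storage = None

    # --- description functions: make the model self-describing ---
    def get_output_var_names(self):
        # a CSDMS-Standard-Names-like identifier (hypothetical here)
        return ["channel_water__outflow_volume_flux"]

    def get_value(self, name):
        assert name in self.get_output_var_names()
        return self.k * self.storage

# A framework-like caller drives the component through its interface only.
model = LinearReservoir()
model.initialize()
while model.time < 10.0:
    model.update()
flux = model.get_value("channel_water__outflow_volume_flux")
print(round(model.storage, 3), round(flux, 4))
```

Because the caller touches nothing but this interface, a second component exposing a matching input variable name could be time-stepped alongside it and fed `flux` at each step, which is the semantic-matching idea behind pairing BMI with the CSDMS Standard Names.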

  6. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.

  7. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  8. 10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  9. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  10. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    PubMed

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  11. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, and it poses several challenges, including the effect of the model's spatial resolution and the model's performance and accuracy. To address the resolution effect, the distributed hydrological Liuxihe model was built at five different resolutions: 1000 m, 600 m, 500 m, 400 m and 200 m grid cells. The purpose is to find the best resolution for the Liuxihe model in large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data, namely the digital elevation model (DEM), soil type and land use type, were downloaded from freely available websites. The model parameters were optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists in physically derived model parameters. Across the tested resolutions (200 m to 1000 m), the best spatial resolution for flood simulation and forecasting was the 200 m grid, and model performance and accuracy deteriorated as the spatial resolution became coarser. At a resolution of 1000 m, the flood simulation and forecasting results were the worst, and the river channel delineated at this resolution differed from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold spatial resolution for modeling floods in the Liujiang River basin is a 500 m grid cell, but a 200 m grid cell is recommended in this study to keep the model at its best performance.

  12. Computational Models for Calcium-Mediated Astrocyte Functions.

    PubMed

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. 
Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
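    To give a flavor of the biophysical descriptions these astrocyte models contain, a single-compartment calcium balance can be written as a sum of flux terms (IP3-gated release from the ER, a passive leak, and SERCA pump uptake). The functional forms and all parameter values below are illustrative inventions for this sketch, not taken from any of the reviewed models.

```python
import numpy as np

def simulate_calcium(t_end=60.0, dt=0.01, ip3=0.4):
    """Toy single-astrocyte cytosolic calcium model (illustrative only).
    Fluxes: IP3-gated ER release, passive ER leak, SERCA pump uptake."""
    n = int(t_end / dt)
    ca = np.empty(n)
    ca[0] = 0.1      # cytosolic Ca2+ concentration (uM), invented initial value
    ca_er = 10.0     # ER Ca2+ treated as constant for simplicity (uM)
    for i in range(1, n):
        c = ca[i - 1]
        j_release = 0.05 * (ip3 / (ip3 + 0.3)) * (ca_er - c)  # IP3-gated release
        j_leak = 0.01 * (ca_er - c)                           # passive leak
        j_pump = 0.9 * c**2 / (c**2 + 0.1**2)                 # SERCA uptake (Hill n=2)
        ca[i] = c + dt * (j_release + j_leak - j_pump)        # forward Euler step
    return ca
```

    With these toy parameters the trajectory relaxes to a steady cytosolic level set by the balance of release, leak and pump; richer published models add IP3 receptor gating variables and ER calcium dynamics, which is what produces oscillations.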

  14. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM). I: Model intercomparison with current land use

    USGS Publications Warehouse

    Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.

    2009-01-01

    This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the way for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed, with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for these types of models. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. © 2008 Elsevier Ltd. All rights reserved.

  15. Benchmarking test of empirical root water uptake models

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman

    2017-01-01

    Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions as predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depend on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at reproducing RWU patterns similar to those of the physical model, and the statistical indices point to them as the best alternatives for mimicking its RWU predictions.
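    For reference, the reduction function of the standard Feddes model mentioned above is a piecewise-linear stress factor alpha(h) in [0, 1] applied to potential uptake: zero near saturation (oxygen stress), rising to one in an optimal pressure-head range, and falling back to zero at the wilting point. The threshold heads below are illustrative placeholders, not values from the paper.

```python
def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    """Feddes-type water stress reduction factor alpha(h) in [0, 1].
    h is the soil water pressure head (cm, negative = unsaturated).
    Thresholds h1 > h2 > h3 > h4 are illustrative, not crop-specific."""
    if h >= h1 or h <= h4:
        return 0.0                     # too wet (oxygen stress) or too dry
    if h1 > h > h2:
        return (h1 - h) / (h1 - h2)    # linear rise just below saturation
    if h2 >= h >= h3:
        return 1.0                     # optimal range, no reduction
    return (h - h4) / (h3 - h4)        # linear decline toward wilting point
```

    Actual uptake at a depth is then typically the product of alpha(h), a root distribution factor, and the potential transpiration rate; compensation mechanisms, as in the Jarvis variants, redistribute the reduced uptake to less stressed depths.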

  16. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, the modelers themselves, and users of the results. This paper addresses important components of uncertainty in modeling water temperatures and discusses several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  17. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

    An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes, such as the supply or demand of energy or of certain types of energy, emergency management of energy, and energy economics. Ten models are described. The purpose, use, and history of each model are discussed, and information is given on its outputs, inputs, and mathematical structure. The models include five dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  18. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. 
regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
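    A minimal sketch of what such a standardized interface looks like in practice is given below. The method names follow the convention of the CSDMS Basic Model Interface (initialize/update/finalize control functions plus self-description functions); the toy cooling model, its parameter values, and the variable name are invented for illustration.

```python
class ToyModelBmi:
    """Sketch of a BMI-style wrapper around a toy exponential-cooling model.
    The control and description method names follow the BMI convention;
    the model itself and the variable name are illustrative inventions."""

    def initialize(self, config=None):
        self.time = 0.0
        self.dt = 1.0
        self.temperature = 100.0   # model state (degC), invented initial value

    def update(self):
        # advance the state one step: relax toward 20 degC
        self.temperature += self.dt * (-0.1 * (self.temperature - 20.0))
        self.time += self.dt

    def finalize(self):
        pass                       # release resources if the model held any

    # --- self-description functions a framework can query ---
    def get_output_var_names(self):
        return ("land_surface__temperature",)

    def get_var_units(self, name):
        return {"land_surface__temperature": "degC"}[name]

    def get_value(self, name):
        return self.temperature

    def get_current_time(self):
        return self.time
```

    Because every wrapped model exposes the same calls, a framework can initialize, step, and interrogate heterogeneous process models uniformly, inserting regridders or unit converters between them as needed.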

  19. [A review on research of land surface water and heat fluxes].

    PubMed

    Sun, Rui; Liu, Changming

    2003-03-01

    Many field experiments have been conducted, and soil-vegetation-atmosphere transfer (SVAT) models have been established to estimate land surface heat fluxes. In this paper, the progress of experimental research on land surface water and heat fluxes is reviewed, and three kinds of SVAT models (single-layer, two-layer and multi-layer models) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and the energy balance equation, different models, such as the simplified model, single-layer model, extra resistance model, crop water stress index model and two-source resistance model, have been developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.

  20. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correcting the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.

  1. MPTinR: analysis of multinomial processing tree models in R.

    PubMed

    Singmann, Henrik; Kellen, David

    2013-06-01

    We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
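    To illustrate what an MPT model is, independently of the MPTinR package itself (which is R software), consider the classic one-high-threshold recognition model: category probabilities are products along processing-tree branches, P(hit) = r + (1 - r)g and P(false alarm) = g, with detection parameter r and guessing parameter g. For this simple tree the maximum-likelihood estimates even have a closed form; the function below is a toy Python sketch, not part of MPTinR.

```python
def fit_1ht(hits, misses, fas, crs):
    """Closed-form ML estimates for the one-high-threshold MPT model.
    Tree equations: P(hit) = r + (1 - r) * g, P(false alarm) = g.
    hits/misses are counts for old items, fas/crs for new items."""
    g = fas / (fas + crs)                 # guessing rate from new items
    hit_rate = hits / (hits + misses)
    r = (hit_rate - g) / (1.0 - g)        # detection rate from old items
    return {"r": r, "g": g}
```

    Larger MPT models generally lack closed forms and are fitted numerically, which is where software such as MPTinR comes in.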

  2. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
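    A plain (non-latent) log-linear model is a softmax over linear scores, p(c|x) proportional to exp(w_c . x); the mixture variant described above would attach several weight vectors per class and introduce a latent component indicator. Below is a minimal gradient-descent sketch of the plain model on invented toy data, not the authors' implementation.

```python
import numpy as np

def train_loglinear(X, y, n_classes, lr=0.5, epochs=200):
    """Train a plain log-linear (softmax) classifier by batch gradient
    descent on the cross-entropy loss. Toy sketch, no regularization."""
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        scores = X @ W.T
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(scores)
        p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
        onehot = np.eye(n_classes)[y]
        W -= lr * (p - onehot).T @ X / len(X)         # gradient step
    return W

def predict(W, X):
    return np.argmax(X @ W.T, axis=1)
```

    The latent extension replaces the single score w_c . x with a log-sum-exp over component scores, which makes the objective non-convex and motivates the alternating optimization mentioned in the abstract.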

  3. Understanding and Predicting Urban Propagation Losses

    DTIC Science & Technology

    2009-09-01

    Table-of-contents excerpt listing the propagation models treated in the report: the Extended Hata model, Modified Hata model, COST (Extended) Hata model, and Walfisch-Ikegami model, applied in scenario analyses (Scenario One: Walfisch-Ikegami model; Scenario Two: Modified Hata model; Scenario Three: Urban Hata).
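    The Hata family of models referenced above predicts median urban path loss from carrier frequency, antenna heights, and distance. Below is a sketch of the standard Okumura-Hata urban formulation with the small/medium-city mobile-antenna correction (valid roughly for 150-1500 MHz and 1-20 km); the usage values in the test are arbitrary examples.

```python
import math

def hata_urban_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    """Okumura-Hata median urban path loss (dB), small/medium-city
    mobile antenna correction a(hm). Valid roughly 150-1500 MHz, 1-20 km."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))     # mobile antenna correction
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))
```

    The Extended (COST) Hata and Modified Hata variants adjust the constants and frequency range; the Walfisch-Ikegami model instead builds the loss from free-space, rooftop-to-street, and multi-screen diffraction terms.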

  4. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    PubMed Central

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016

  6. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
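    To make the heuristic rules concrete, here is a sketch of the classical (non-extended) lexicographic and conjunctive rules; the attribute names, scores, and thresholds are invented for illustration, and the paper's contribution of threshold heterogeneity is not modeled here.

```python
def lexicographic_choice(alternatives, attribute_order):
    """Lexicographic rule: compare alternatives on the most important
    attribute first; later attributes only break ties (higher = better).
    `alternatives` maps a name to a dict of attribute scores."""
    candidates = list(alternatives)
    for attr in attribute_order:
        best = max(alternatives[c][attr] for c in candidates)
        candidates = [c for c in candidates if alternatives[c][attr] == best]
        if len(candidates) == 1:
            break
    return candidates[0]

def conjunctive_ok(alternative, thresholds):
    """Conjunctive rule: accept only if every attribute clears its threshold."""
    return all(alternative[a] >= t for a, t in thresholds.items())
```

    Rational choice models would instead weight and sum all attributes into a utility; the heuristics above ignore trade-offs, which is what makes them "boundedly rational."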

  7. The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.

    Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with the objective of understanding and, if possible, quantifying model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.

  8. Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models.

    PubMed

    Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B

    2015-01-01

    The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.

  9. Comparison of dark energy models after Planck 2015

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Yao; Zhang, Xin

    2016-11-01

    We compare ten typical, popular dark energy models according to their capability to fit the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae, the Planck 2015 distance priors from cosmic microwave background observations, baryon acoustic oscillation measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis shows that, according to the capability of explaining the observations, the cosmological constant model is still the best among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but are still good models compared to the others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevallier-Polarski-Linder model can still fit the current observations well, but from the perspective of economy (parsimony) they are not as good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
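    The information criteria used for such comparisons penalize the best-fit likelihood by the number of free parameters k (AIC) or by k times the log of the number of data points (BIC). A minimal sketch follows; the numerical values in the test are invented, not results from the paper.

```python
import math

def aic(neg2loglik, k):
    """Akaike information criterion: -2 ln L_max + 2 k,
    where k is the number of free model parameters."""
    return neg2loglik + 2 * k

def bic(neg2loglik, k, n):
    """Bayesian information criterion: -2 ln L_max + k ln n,
    where n is the number of data points; penalizes parameters
    more heavily than AIC for typical sample sizes."""
    return neg2loglik + k * math.log(n)
```

    A model with an extra parameter is preferred under AIC only if it improves -2 ln L_max by more than 2, which is how parameter-hungry models end up "economically" disfavored even when they fit well.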

  10. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular parametric regression models in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared to the semi-parametric proportional hazards model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge about it and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful for converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variables. The eha package provides an alternative method to fit the Weibull regression model, and its check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualizing the Weibull regression model after model development is useful in that it provides another way to report the findings. PMID:28149846
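    The abstract's workflow is R-specific; as a language-neutral illustration of the quantities involved, the sketch below gives the Weibull survival and hazard functions and the standard conversion between a proportional-hazards coefficient (hazard ratio) and the event time ratio, which is the kind of conversion SurvRegCensCov performs. This is a mathematical sketch, not a reimplementation of either R package.

```python
import math

def weibull_survival(t, shape_k, scale_lam):
    """Weibull survival function S(t) = exp(-(t/lambda)^k)."""
    return math.exp(-((t / scale_lam) ** shape_k))

def weibull_hazard(t, shape_k, scale_lam):
    """Weibull hazard h(t) = (k/lambda) * (t/lambda)^(k-1):
    increasing for k > 1, decreasing for k < 1, constant for k = 1."""
    return (shape_k / scale_lam) * (t / scale_lam) ** (shape_k - 1)

def event_time_ratio(beta_ph, shape_k):
    """For a Weibull model, a proportional-hazards coefficient beta
    (HR = exp(beta)) corresponds to an accelerated-failure-time
    coefficient -beta/k, so ETR = exp(-beta/k)."""
    return math.exp(-beta_ph / shape_k)
```

    For example, with shape k = 1 (constant hazard), a hazard ratio of 2 halves the expected event time, i.e. ETR = 0.5.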

  11. Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping

    NASA Astrophysics Data System (ADS)

    Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.

    2013-12-01

    Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin, and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu that was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.

  12. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging.

    PubMed

    Kim, Steven B; Kodell, Ralph L; Moon, Hojin

    2014-03-01

    In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
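The paper's diversity index is its own contribution, but the baseline MA machinery it builds on can be sketched with standard Akaike weights: each model's ED estimate is weighted by exp(-0.5 · ΔAIC), normalized over the model space. A hypothetical illustration (the ED and AIC values are invented):

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values:
    w_i proportional to exp(-0.5 * (AIC_i - AIC_min))."""
    a_min = min(aics)
    raw = [math.exp(-0.5 * (a - a_min)) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

def model_averaged_ed(ed_estimates, aics):
    """Model-averaged effective dose: each model's ED estimate
    weighted by its Akaike weight."""
    w = akaike_weights(aics)
    return sum(wi * ed for wi, ed in zip(w, ed_estimates))

# Hypothetical ED estimates from three dose-response models that fit
# comparably well: the point estimates differ substantially, and MA
# pools them instead of betting on a single model.
ed_avg = model_averaged_ed([0.8, 1.4, 2.9], [100.0, 101.5, 104.0])
```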

  13. Standard fire behavior fuel models: a comprehensive set for use with Rothermel's surface fire spread model

    Treesearch

    Joe H. Scott; Robert E. Burgan

    2005-01-01

    This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.

  14. [Parameters modification and evaluation of two evapotranspiration models based on Penman-Monteith model for summer maize].

    PubMed

    Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing

    2017-06-18

    The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons of 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model) based on the Penman-Monteith model was analyzed. First, the key parameters in the two models were calibrated with the measured data from 2013 and 2014; second, the daily ET in 2015 calculated by each model was compared to the observed ET. Finally, the coefficients in the KP-PM model were further revised with coefficients calculated for the different growth stages, and the performance of the revised KP-PM model was also evaluated. The statistical measures indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that by the original KP-PM model, while the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. These results provide guidance for predicting ET with the two models.
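For reference, the FAO-PM model discussed above is the FAO-56 form of the Penman-Monteith equation for reference evapotranspiration. A sketch of the daily computation (the function names and sample inputs are ours; the equations are the standard FAO-56 ones):

```python
import math

def sat_vp(t_c):
    """Saturation vapour pressure (kPa) at air temperature t_c (deg C)."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def fao56_et0(t_c, rn, g, u2, ea, pressure=101.3):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    t_c: mean air temperature (C); rn: net radiation (MJ m-2 day-1);
    g: soil heat flux (MJ m-2 day-1); u2: wind speed at 2 m (m/s);
    ea: actual vapour pressure (kPa); pressure: air pressure (kPa)."""
    delta = 4098.0 * sat_vp(t_c) / (t_c + 237.3) ** 2   # slope of vapour-pressure curve
    gamma = 0.000665 * pressure                          # psychrometric constant
    es = sat_vp(t_c)                                     # saturation vapour pressure
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_c + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# A warm, sunny, moderately humid day gives roughly 5-6 mm/day.
et0 = fao56_et0(t_c=25.0, rn=15.0, g=0.0, u2=2.0, ea=1.8)
```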

  15. Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation

    NASA Astrophysics Data System (ADS)

    Ichwanul Hakim, Teuku Mohd; Arifianto, Ony

    2018-04-01

    Turbulence is small-scale air motion in the atmosphere caused by instabilities in the pressure and temperature distribution. A turbulence model is integrated into a flight mechanical model as an atmospheric disturbance. The turbulence models commonly used in flight mechanical models are the Dryden and the Von Karman models. In this minor research, only the Dryden continuous turbulence model was implemented, following the military specification MIL-HDBK-1797. The model was implemented in Matlab Simulink and will be integrated with a flight mechanical model to observe the response of the aircraft when it flies through a turbulence field. The turbulence is generated by passing band-limited Gaussian white noise through filters derived from the Dryden power spectral densities. To ensure that the model provides good results, it was verified by comparison with the similar model provided in the Aerospace Blockset. The results show some differences in two linear velocities (vg and wg) and the three angular rates (pg, qg and rg), caused by a different determination of the turbulence scale length in the Aerospace Blockset. After adjusting the turbulence scale length in the implemented model, both models produce similar outputs.
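As a sanity check on a Dryden implementation, the one-sided power spectral density of the longitudinal gust component, Φ_u(ω) = σ_u² (2L_u/(πV)) / (1 + (L_u ω/V)²), should integrate to the gust variance σ_u². A sketch using the MIL-HDBK-1797 PSD form (the numerical values are illustrative, not taken from the paper):

```python
import math

def dryden_psd_u(omega, sigma_u, L_u, V):
    """One-sided Dryden PSD of the longitudinal gust component
    (MIL-HDBK-1797 form); omega in rad/s."""
    return sigma_u ** 2 * (2.0 * L_u / (math.pi * V)) / (1.0 + (L_u * omega / V) ** 2)

def integrate(f, a, b, n=100000):
    """Simple trapezoidal quadrature on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

sigma_u, L_u, V = 1.5, 533.0, 100.0  # gust intensity (m/s), scale length (m), airspeed (m/s)
# Integrating the one-sided PSD over frequency recovers the gust variance sigma_u^2
# (up to truncation of the high-frequency tail).
var = integrate(lambda w: dryden_psd_u(w, sigma_u, L_u, V), 0.0, 200.0)
```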

  16. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability

    PubMed Central

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.

    2017-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125

  17. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.

    PubMed

    Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R

    2016-07-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  18. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    NASA Technical Reports Server (NTRS)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; hide

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  19. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    DOE PAGES

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...

    2016-08-22

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.

  20. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.

  1. An ontology for component-based models of water resource systems

    NASA Astrophysics Data System (ADS)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
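The kind of query the WRC ontology enables, matching one component's declared outputs against another's inputs before coupling them, can be illustrated with plain subject-predicate-object triples. The class and property names below are hypothetical stand-ins, not actual WRC terms, and the component names are used only as examples:

```python
# Hypothetical WRC-style triples describing two model components;
# the real WRC ontology is expressed in OWL and its terms may differ.
triples = {
    ("TOPMODEL", "rdf:type", "wrc:HydrologicModelComponent"),
    ("TOPMODEL", "wrc:hasOutput", "streamflow"),
    ("SWAT", "rdf:type", "wrc:HydrologicModelComponent"),
    ("SWAT", "wrc:hasInput", "streamflow"),
}

def query(triples, pred=None, obj=None):
    """Subjects matching an optional predicate/object pattern."""
    return sorted(s for (s, p, o) in triples
                  if (pred is None or p == pred) and (obj is None or o == obj))

# Find components that could consume TOPMODEL's streamflow output,
# i.e. candidates for proper model coupling.
consumers = query(triples, pred="wrc:hasInput", obj="streamflow")
```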

  2. Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.

    PubMed

    Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah

    2018-07-01

    In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975) and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some individual models, variability among their predictions was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
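Two key ingredients of this evaluation, the AUROC score and the EMmedian ensemble, are both easy to state precisely. A sketch with invented predictions (not the paper's data):

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative
    (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def ensemble_median(predictions):
    """EMmedian-style ensemble: the per-site median across model predictions."""
    out = []
    for col in zip(*predictions):
        s = sorted(col)
        m = len(s) // 2
        out.append(s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m]))
    return out

# Three hypothetical models' susceptibility scores at four sites.
preds = [[0.9, 0.2, 0.7, 0.1],
         [0.8, 0.3, 0.6, 0.2],
         [0.7, 0.4, 0.9, 0.3]]
med = ensemble_median(preds)  # [0.8, 0.3, 0.7, 0.2]
```

The per-site median discards each model's most extreme prediction, which is one intuition for why EMmedian is less sensitive to individual-model variability than a plain mean.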

  3. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using four approaches: (1) rank the models by the root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and a fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected, as the best model, the one with average complexity (10 parameters) and the best parameter estimation (model 3).
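The information criteria in approach (3) can be written down concretely for a least-squares calibration: under Gaussian errors, AIC = n ln(RSS/n) + 2k, AICc adds a small-sample correction, and BIC replaces 2k with k ln(n). A sketch with hypothetical calibration results showing how the criteria penalize an over-parameterized model despite its lower residual sum of squares:

```python
import math

def information_criteria(n, k, rss):
    """AIC, AICc, and BIC (Gaussian-error, least-squares form) for a
    calibration with n observations, k parameters, and residual sum
    of squares rss. Constant terms common to all models are dropped."""
    aic = n * math.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * math.log(rss / n) + k * math.log(n)
    return aic, aicc, bic

# Two hypothetical calibrated models: the 15-parameter model fits
# slightly better (lower RSS) but is penalized for its extra parameters.
simple = information_criteria(n=50, k=6, rss=12.0)
complex_ = information_criteria(n=50, k=15, rss=10.0)
```

Here all three criteria prefer the simpler model; KIC (not shown) additionally weighs parameter uncertainty through the Fisher information matrix, which is why the abstract singles it out.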

  4. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    The surrogate-based simulation-optimization technique is an effective approach for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such studies. However, previous studies have generally been based on a stand-alone surrogate model and have rarely tried to improve the surrogate's approximation accuracy by combining several methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model were lower than 1.5% for 100 testing samples. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
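The set-pair-analysis weighting itself is specific to the paper, but the general pattern of a performance-weighted ensemble surrogate can be sketched with simple inverse-RMSE weights (an illustrative stand-in for SPA, not SPA itself; the error and prediction values are invented):

```python
def ensemble_weights(val_errors):
    """Weights inversely proportional to each surrogate's validation RMSE:
    a more accurate stand-alone surrogate gets a larger say."""
    inv = [1.0 / e for e in val_errors]
    s = sum(inv)
    return [i / s for i in inv]

def ensemble_predict(predictions, weights):
    """Weighted average of the stand-alone surrogates' predictions."""
    return sum(w * p for w, p in zip(weights, predictions))

# Hypothetical validation RMSEs for RBFANN, SVR, and Kriging surrogates;
# Kriging (smallest error) dominates the ensemble, mirroring the paper's
# finding that it was the best of the three methods.
w = ensemble_weights([0.08, 0.05, 0.02])
est = ensemble_predict([0.61, 0.64, 0.66], w)
```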

  5. Models Archive and ModelWeb at NSSDC

    NASA Astrophysics Data System (ADS)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their usage and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSISE-90) model, the International Geomagnetic Reference Field (IGRF) and the AP-8/AE-8 models for the radiation belt protons and electrons. User accesses to both systems have been steadily increasing in recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 to the models archive.

  6. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    NASA Astrophysics Data System (ADS)

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models consist of a particular arrangement of stores, fluxes and transformation functions representing spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, having model structures that are relatively easy to reconfigure and having relatively low input data demands. This makes them well suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method for selecting appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
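The three building blocks the study identifies, a store, fluxes in and out, and a transformation function mapping storage to outflow, can be shown in the smallest possible conceptual model: a single linear reservoir. This is an illustration of the element types, not any specific reviewed model:

```python
def bucket_model(precip, k=5.0, s0=0.0):
    """Minimal conceptual model: one store S with outflow Q = S / k
    (a linear reservoir), stepped at a daily resolution.
    precip: daily precipitation depths; k: residence time (days);
    s0: initial storage."""
    s, q_out = s0, []
    for p in precip:
        s += p       # flux in: precipitation fills the store
        q = s / k    # transformation function: storage -> outflow
        s -= q       # flux out: outflow drains the store
        q_out.append(q)
    return q_out

# A single rain pulse produces the classic exponential recession:
# each day's outflow is a fixed fraction of the remaining storage.
q = bucket_model([10.0, 0.0, 0.0, 0.0])
```

Swapping the linear `s / k` for a nonlinear function, or chaining several such stores, reproduces exactly the kind of structural variation between conceptual models that the study catalogues.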

  7. Synthesizing models useful for ecohydrology and ecohydraulic approaches: An emphasis on integrating models to address complex research questions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert

    Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which model best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio-economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user-support. Forty-one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user-friendly forms, increasing user-support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines.
Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.

  8. Synthesizing models useful for ecohydrology and ecohydraulic approaches: An emphasis on integrating models to address complex research questions

    USGS Publications Warehouse

    Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree

    2018-01-01

    Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which model best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio-economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user-support. Forty-one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user-friendly forms, increasing user-support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines.
Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.

  9. How does a three-dimensional continuum muscle model affect the kinematics and muscle strains of a finite element neck model compared to a discrete muscle model in rear-end, frontal, and lateral impacts.

    PubMed

    Hedenstierna, Sofia; Halldin, Peter

    2008-04-15

    A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.

  10. Synthesizing models useful for ecohydrology and ecohydraulic approaches: An emphasis on integrating models to address complex research questions

    DOE PAGES

    Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...

    2018-04-06

Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which model best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, while integrating environmental and socio-economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user support. Forty-one of the 43 reviewed models were linked to at least one other model, most notably the Water Quality Analysis Simulation Program (linked to 21 other models), the Soil and Water Assessment Tool (19), and the Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Increasing the interoperability of model platforms, transforming models into user-friendly forms, increasing user support, defining the reliability and risk associated with model results, and raising awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.

  11. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment

    PubMed Central

    2014-01-01

Background: Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results: MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions: Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387

  12. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment.

    PubMed

    Cao, Renzhi; Wang, Zheng; Cheng, Jianlin

    2014-04-15

    Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
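The clustering-style global quality measure described above (scoring each model by its average pairwise structural similarity to the rest of the pool) can be sketched in a few lines. This is a simplified illustration, not the MULTICOM code: `similarity` is a placeholder for a real structural comparison score such as a GDT-TS-like value in [0, 1], and the toy pool uses plain vectors.

```python
from itertools import combinations

def pairwise_global_quality(models, similarity):
    """Clustering-style global quality: each model's score is its average
    structural similarity to every other model in the pool."""
    n = len(models)
    scores = {m: 0.0 for m in models}
    for a, b in combinations(models, 2):
        s = similarity(a, b)
        scores[a] += s
        scores[b] += s
    return {m: scores[m] / (n - 1) for m in models}

# Toy pool: "models" are vectors; similarity = 1 / (1 + Euclidean distance).
pool = {"m1": (0.0, 0.0), "m2": (0.1, 0.0), "m3": (3.0, 4.0)}
sim = lambda a, b: 1.0 / (1.0 + sum((x - y) ** 2
                                    for x, y in zip(pool[a], pool[b])) ** 0.5)
q = pairwise_global_quality(list(pool), sim)
# The mutually similar m1 and m2 score higher than the outlier m3,
# mirroring how consensus methods reward models near the pool's center.
```

This also illustrates the failure mode noted in the abstract: when most of the pool is poor, the consensus shifts toward the poor models, which is where single-model methods take over.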

  13. Replicating Health Economic Models: Firm Foundations or a House of Cards?

    PubMed

    Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee

    2017-11-01

Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, replicating an existing model may require substantially less effort than developing a model de novo, by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines.
To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.

  14. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (the abcd model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model that exhibits a different structure from that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the candidate model that performs best under the calibration period.
Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
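An optimal-combination scheme in the spirit of MM-O can be sketched by choosing, over a calibration period, the affine weight that minimizes the squared error of the combined prediction. This is an illustrative least-squares sketch under assumed data, not the paper's algorithm (MM-1's state-contingent weighting is more involved):

```python
def optimal_combination_weight(p1, p2, obs):
    """Closed-form least-squares weight w for the combination
    w*p1 + (1-w)*p2 fitted against observations on a calibration period."""
    num = sum((a - b) * (o - b) for a, b, o in zip(p1, p2, obs))
    den = sum((a - b) ** 2 for a, b in zip(p1, p2))
    return num / den

def sse(pred, obs):
    """Sum of squared errors of a prediction series."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs))

# Hypothetical monthly flows from two candidate models and observations
# (illustrative numbers, not outputs of the actual abcd or VIC models).
p1 = [1.1, 2.1, 2.9, 4.2]
p2 = [0.5, 1.5, 3.5, 3.5]
obs = [1.0, 2.0, 3.0, 4.0]

w = optimal_combination_weight(p1, p2, obs)
mm = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
# By construction, the combined prediction's SSE on the calibration data
# is no worse than either single model's SSE.
```

The closed form follows from setting the derivative of the SSE with respect to w to zero; in practice the weight would be re-estimated per month or per predictor state.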

  15. Comparing the cognitive differences resulting from modeling instruction: Using computer microworld and physical object instruction to model real world problems

    NASA Astrophysics Data System (ADS)

    Oursland, Mark David

This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld Interactive Physics with that of students receiving instruction using physical objects. Modeling instruction included activities where students applied (a) the linear model to a variety of situations, (b) the linear model to two-rate situations with a constant rate, and (c) the quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance in which modeling instruction, beginning modeling ability, and computer literacy were the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling: the instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use the equation to solve the problem. Based on the conclusions of this study, the researcher recommends the following practices for modeling instruction. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties of squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling did. The use of computer microworlds such as Interactive Physics in conjunction with cooperative groups is a viable method of modeling instruction.

  16. A physical data model for fields and agents

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek

    2016-04-01

    Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. 
Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.

  17. Students' Models of Curve Fitting: A Models and Modeling Perspective

    ERIC Educational Resources Information Center

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  18. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  19. Climate and atmospheric modeling studies

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.

  20. Models in Science Education: Applications of Models in Learning and Teaching Science

    ERIC Educational Resources Information Center

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  1. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.

  2. Vector models and generalized SYK models

    DOE PAGES

    Peng, Cheng

    2017-05-23

    Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  3. Validation of the PVSyst Performance Model for the Concentrix CPV Technology

    NASA Astrophysics Data System (ADS)

    Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault

    2011-12-01

    The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.

  4. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406

  5. A comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh

    1993-01-01

    A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).

  6. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  7. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  8. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped-radiation models.

  9. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped-radiation models.

  10. Analysis of terahertz dielectric properties of pork tissue

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Xie, Qiaoling; Sun, Ping

    2017-10-01

Since about 70% of fresh biological tissue is water, many scientists use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are the Debye model, the double Debye model, and the Cole-Cole model. This work aims to determine a suitable model by comparing the three models above with experimental data from fresh pork tissue. By means of the least-squares method, the parameters of the different models are fitted to the experimental data. Comparing the models against the measured dielectric function, the Cole-Cole model is verified as the best at describing the pork tissue experiments. The correction factor α of the Cole-Cole model is an important modification for biological tissues, so the Cole-Cole model should be the preferred choice for describing the dielectric properties of biological tissues in the terahertz range.
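The Cole-Cole fit described above can be sketched in a few lines. The relaxation model is eps*(omega) = eps_inf + d_eps / (1 + (i*omega*tau)^(1-alpha)); setting alpha = 0 recovers the single Debye model. The parameter values below are illustrative stand-ins (loosely in the range quoted for water in the THz literature), not the paper's fitted values, and the least-squares fit is reduced to a grid search over alpha alone for brevity:

```python
import cmath

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    # Cole-Cole relaxation: eps* = eps_inf + d_eps / (1 + (i*omega*tau)^(1-alpha))
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

# Synthetic "measurement" generated with a known alpha (illustrative values).
eps_inf, d_eps, tau = 3.5, 75.0, 8.3e-12
true_alpha = 0.1
freqs = [0.1e12, 0.3e12, 0.5e12, 1.0e12, 1.5e12]  # Hz
omegas = [2 * cmath.pi * f for f in freqs]
data = [cole_cole(w, eps_inf, d_eps, tau, true_alpha) for w in omegas]

# Least-squares fit of alpha by grid search (a stand-in for the full
# multi-parameter least-squares fit used in the paper).
def sum_sq_err(alpha):
    return sum(abs(cole_cole(w, eps_inf, d_eps, tau, alpha) - d) ** 2
               for w, d in zip(omegas, data))

best_alpha = min((i / 1000 for i in range(500)), key=sum_sq_err)
# best_alpha recovers true_alpha on this noise-free synthetic data;
# with alpha = 0 the expression reduces to the single Debye model.
```

On real measurements one would fit eps_inf, d_eps, tau, and alpha jointly and compare the residuals of the Debye, double Debye, and Cole-Cole fits.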

  11. Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model

    NASA Astrophysics Data System (ADS)

    Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus

    2017-12-01

The purpose of this study is to implement the integration of Quality Function Deployment (QFD) and Kano's model as a mathematical model. Voice-of-customer data for the QFD were collected using a questionnaire developed on the basis of Kano's model. An operational research methodology was then applied to build the objective function and constraints of the mathematical model. The relationship between the voice of customer and the engineering characteristics was modelled using linear regression. The output of the mathematical model is the detailed engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction; the result of this model is 62%. The major contribution of this research is to implement the existing mathematical model integrating QFD and Kano's model in the case study of a shoe cabinet.
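The structure of such a model can be sketched as a small discrete optimization: regression slopes (playing the role of the QFD relationship matrix) link engineering-characteristic levels to Kano-style satisfaction and dissatisfaction scores, and the objective maximizes the former while minimizing the latter under a cost budget. Every name and number below is a hypothetical illustration, not the paper's data:

```python
from itertools import product

# Hypothetical data: two engineering characteristics (ECs) at discrete levels,
# linear-regression slopes linking EC levels to customer satisfaction (S) and
# dissatisfaction (D), plus unit costs and a budget. Illustrative numbers only.
levels = [0, 1, 2]
sat_slope = {"ec1": 0.4, "ec2": 0.3}    # d(satisfaction)/d(level)
dis_slope = {"ec1": -0.2, "ec2": -0.5}  # d(dissatisfaction)/d(level)
cost = {"ec1": 1.0, "ec2": 2.0}
budget = 4.0

def objective(x):
    """Maximize satisfaction and minimize dissatisfaction jointly."""
    s = sum(sat_slope[k] * v for k, v in x.items())
    d = sum(dis_slope[k] * v for k, v in x.items())
    return s - d

# Brute-force search over EC levels subject to the budget constraint.
best = None
for l1, l2 in product(levels, repeat=2):
    x = {"ec1": l1, "ec2": l2}
    if sum(cost[k] * v for k, v in x.items()) <= budget:
        if best is None or objective(x) > objective(best):
            best = x
```

In the paper's setting the decision variables are the detailed engineering characteristics and the search would be a formal mathematical program rather than enumeration; the sketch only shows how the QFD regression and the Kano-derived objective fit together.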

  12. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2017-06-01

The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models: singularized probabilistic (SP) models, some of which are time-dependent (TDSP models).
The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as it happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.

  13. Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.

    2017-12-01

    Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost. 
In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
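As a rough, hypothetical illustration of the mass-transfer idea behind the dual-continuum coupling (the actual transfer function derived in the work is not reproduced here), a linear exchange between the fracture and matrix domains can be sketched in a few lines; the transfer coefficient and saturations are invented:

```python
# Minimal sketch of dual-continuum mass exchange between a high-permeability
# fracture domain and a low-permeability matrix domain (explicit Euler).
# The transfer coefficient `sigma` and the initial saturations are hypothetical.

def simulate_transfer(s_frac, s_matrix, sigma=0.5, dt=0.01, steps=200):
    """March the two domain saturations forward in time; mass leaving the
    fractures enters the matrix, so total mass is conserved."""
    for _ in range(steps):
        q = sigma * (s_frac - s_matrix)   # transfer rate, fractures -> matrix
        s_frac -= q * dt
        s_matrix += q * dt
    return s_frac, s_matrix

f, m = simulate_transfer(1.0, 0.0)
# the two domains equilibrate over time while total mass (f + m) is conserved
```

In the actual VE-3D model the transfer term would depend on matrix block geometry and relative permeabilities; this sketch only shows the bilateral-exchange structure.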

  14. ATMOSPHERIC DISPERSAL AND DEPOSITION OF TEPHRA FROM A POTENTIAL VOLCANIC ERUPTION AT YUCCA MOUNTAIN, NEVADA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Harrington

    2004-10-25

    The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada'' and ''Number of Waste Packages Hit by Igneous Intrusion''. 
This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.

  15. Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.

    PubMed

    Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong

    2007-09-01

    Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model, as well as to problems related to over-fitting the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. 
In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.
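The negative binomial benchmark used in this comparison is typically motivated by overdispersion in crash counts (variance exceeding the mean). A minimal pure-Python sketch, not taken from the study and with all parameters invented, draws NB counts as a gamma-mixed Poisson and exhibits that overdispersion:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's multiplicative Poisson sampler (adequate for modest lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def nb_sample(mean, alpha):
    """Negative binomial draw as a gamma-mixed Poisson.
    alpha is the overdispersion parameter: Var = mean + alpha * mean**2."""
    lam = random.gammavariate(1.0 / alpha, alpha * mean)
    return poisson(lam)

draws = [nb_sample(mean=3.0, alpha=1.0) for _ in range(5000)]
m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / len(draws)
# sample variance well above the sample mean is the signature of
# overdispersed count data that NB regression (unlike Poisson) captures
```

In an actual crash model the mean would be a log-linear function of roadway covariates; the fixed mean here just isolates the mean-variance relationship.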

  16. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability approach using three alternative error models to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections amongst model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. Probability integral transform histograms and other diagnostic graphs show that the hierarchical error model is more reliable than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are highly sensitive to each cross-validation period, while the hierarchical error model produces much more robust and reliable model parameters. 
Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow prediction.
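The distributional restrictions a hierarchical error model places on monthly parameters amount to shrinking per-month estimates toward a common mean, which is what stabilizes them across cross-validation periods. A minimal sketch of that normal-normal shrinkage, with all variances and bias values invented for illustration:

```python
# Sketch of the partial-pooling idea behind a hierarchical error model:
# per-month bias estimates are shrunk toward a global mean, with the degree
# of shrinkage set by between-month vs within-month variance.
# All numbers below are illustrative, not taken from the study.

monthly_bias = [0.8, 0.5, -0.2, -0.6, -0.9, -0.4, 0.1, 0.3, 0.7, 0.9, 0.6, 0.2]
n_obs = 10             # observations behind each monthly estimate (assumed)
sigma2_within = 0.5    # within-month error variance (assumed)
tau2_between = 0.1     # between-month variance of the bias (assumed)

global_mean = sum(monthly_bias) / len(monthly_bias)
# classic normal-normal shrinkage factor
w = tau2_between / (tau2_between + sigma2_within / n_obs)
shrunk = [global_mean + w * (b - global_mean) for b in monthly_bias]
# every shrunk estimate lies between the raw monthly value and the global
# mean; a seasonally variant model corresponds to w = 1 (no pooling),
# a seasonally invariant model to w = 0 (complete pooling)
```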

  17. [Suitability of four stomatal conductance models in agro-pastoral ecotone in North China: A case study for potato and oil sunflower].

    PubMed

    Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu

    2016-11-18

    The suitability of four popular empirical and semi-empirical stomatal conductance models (Jarvis model, Ball-Berry model, Leuning model and Medlyn model) was evaluated based on parallel observation data of leaf stomatal conductance, leaf net photosynthetic rate and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in the agro-pastoral ecotone in North China. It was found that there was a significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate for potato, whereas the linear relationship appeared weaker for oil sunflower. The results of model evaluation showed that the Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by the Leuning model and the Medlyn model, while the Jarvis model performed worst. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456 and 0.0794 mol·m⁻²·s⁻¹, the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9% and 64.3%, and R-squared (R²) was 0.96, 0.61, 0.91 and 0.88 between simulated and observed leaf stomatal conductance of potato for the Ball-Berry model, Leuning model, Medlyn model and Jarvis model, respectively. For leaf stomatal conductance of oil sunflower, the Jarvis model performed slightly better than the Leuning model, Ball-Berry model and Medlyn model. RMSE was 0.2221, 0.2534, 0.2547 and 0.2758 mol·m⁻²·s⁻¹, NRMSE was 40.3%, 46.0%, 46.2% and 50.1%, and R² was 0.38, 0.22, 0.23 and 0.20 between simulated and observed leaf stomatal conductance of oil sunflower for the Jarvis model, Leuning model, Ball-Berry model and Medlyn model, respectively. A path analysis was conducted to identify the effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by vapour pressure saturation deficit for both potato and oil sunflower. 
The model evaluation suggested that the stomatal conductance models for oil sunflower need to be improved in further research.
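The Ball-Berry relation that performed best for potato is commonly written as gs = g0 + g1·A·hs/cs, with A the net photosynthetic rate, hs the relative humidity and cs the CO2 concentration at the leaf surface. A hedged sketch, with illustrative parameter values and made-up observations, of how such a model would be scored with the NRMSE used in the study:

```python
# Hedged sketch of the Ball-Berry stomatal conductance model,
#   gs = g0 + g1 * A * hs / cs.
# Parameter values and "observations" below are illustrative only; units are
# nominal (gs in mol m-2 s-1, A in umol m-2 s-1, cs in umol mol-1).

def ball_berry(A, hs, cs, g0=0.01, g1=9.0):
    """Stomatal conductance from the Ball-Berry relation."""
    return g0 + g1 * A * hs / cs

def nrmse(sim, obs):
    """Normalized RMSE (RMSE divided by the observed mean)."""
    rmse = (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5
    return rmse / (sum(obs) / len(obs))

obs = [0.12, 0.18, 0.25, 0.31]          # observed gs (made up)
A_vals = [5.0, 8.0, 12.0, 15.0]         # net photosynthesis (made up)
sim = [ball_berry(a, hs=0.6, cs=380.0) for a in A_vals]
score = nrmse(sim, obs)
# simulated gs rises linearly with A, reflecting the strong gs-A coupling
# reported for potato
```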

  18. Evaluation of chiller modeling approaches and their usability for fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

    Selecting a model is an essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two based on first principles and a third that is empirical in nature. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model as implemented in CoolTools{trademark} was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building, and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The ''CoolTools'' package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. 
The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
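The practical payoff of a model that is linear in the parameters, as noted for the Gordon-Ng model, is that calibration reduces to ordinary least squares with a closed-form solution. A hedged sketch of that idea with a generic two-regressor model and synthetic data, not the actual chiller functional form:

```python
# Why "linear in the parameters" matters: coefficients follow directly from
# the normal equations. A generic 2-parameter OLS fit on synthetic data; the
# regressors stand in for transformed chiller measurements, not the real model.

def ols_2param(x1, x2, y):
    """Solve min ||a*x1 + b*x2 - y||^2 via the 2x2 normal equations."""
    s11 = sum(v * v for v in x1)
    s12 = sum(u * v for u, v in zip(x1, x2))
    s22 = sum(v * v for v in x2)
    r1 = sum(u * v for u, v in zip(x1, y))
    r2 = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

# synthetic "calibration" data generated from known coefficients a=2, b=-0.5
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.0, 1.5, 0.5, 2.0, 1.0]
y = [2 * u - 0.5 * v for u, v in zip(x1, x2)]
a, b = ols_2param(x1, x2, y)   # recovers a ~ 2.0, b ~ -0.5
```

A model that is nonlinear in its parameters, like the Toolkit model here, instead requires iterative schemes such as the direct search mentioned in the abstract.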

  19. PyMT: A Python package for model-coupling in the Earth sciences

    NASA Astrophysics Data System (ADS)

    Hutton, E.

    2016-12-01

    The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. On the other hand, the heterogeneity in model type inhibits model coupling, model development, and even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors becomes an obstruction to scientists wanting to couple, extend, or simply run existing models. For scientists whose main focus is not computer science, these barriers become significant logistical hurdles, and this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Model Interface (BMI). It contains: tools necessary for coupling models of disparate time and space scales (including grid mappers); time-steppers that coordinate the sequencing of coupled models; exchange of data between BMI-enabled models; wrappers that automatically load BMI-enabled models into the PyMT framework; utilities that support open-source interfaces (UGRID, SGRID, CSDMS Standard Names, etc.); a collection of community-submitted models, written in a variety of programming languages and drawn from a variety of process domains, all usable from within the Python programming language; and a plug-in framework for adding additional BMI-enabled models to the framework. In this presentation we introduce the basics of PyMT and provide an example of coupling models of different domains and grid types.
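PyMT's coupling rests on every model exposing the same Basic Model Interface. The real BMI specification defines many more methods (grid, variable, and time metadata); the toy class below only sketches the core pattern of a uniform initialize/update/finalize contract that lets a framework step a model without knowing its physics:

```python
# Illustrative BMI-flavored model: a single state variable decaying toward
# zero. The class and the variable name "state" are invented for this sketch
# and do not reproduce the actual BMI method set.

class ToyDecayModel:
    """A minimal 'model' driven entirely through a generic interface."""

    def initialize(self, value=1.0, rate=0.1, dt=1.0):
        self.value, self.rate, self.dt, self.time = value, rate, dt, 0.0

    def update(self):
        """Advance the model one time step."""
        self.value -= self.rate * self.value * self.dt
        self.time += self.dt

    def get_value(self, name):
        assert name == "state"      # single hypothetical variable name
        return self.value

    def finalize(self):
        pass

# a framework loop that knows nothing about the model's internals
model = ToyDecayModel()
model.initialize(value=2.0)
for _ in range(10):
    model.update()
result = model.get_value("state")   # 2.0 * 0.9**10
model.finalize()
```

Because every wrapped model answers the same calls, a framework can interleave `update()` calls on several such objects and map data between their grids, which is essentially what PyMT's time-steppers and grid mappers coordinate.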

  20. Can the super model (SUMO) method improve hydrological simulations? Exploratory tests with the GR hydrological models

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2017-04-01

    Errors made by hydrological models may come from problems in parameter estimation, uncertainty in observed measurements, numerical problems, and from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Because of this continuous exchange, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
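The bilateral state exchange at each time step can be sketched with two toy linear models nudged toward each other; the dynamics and the coupling strength are invented for illustration and are not taken from van den Berge et al. (2011):

```python
# Sketch of the SUMO idea: two imperfect models run side by side, and after
# each model step their internal states are nudged toward each other
# (a simultaneous bilateral nudge). All dynamics and coefficients are made up.

def step(x, a):
    """One step of an imperfect model: a damped linear system."""
    return a * x

def supermodel(x1, x2, a1=0.95, a2=0.85, c=0.2, steps=50):
    """Run two models with bilateral nudging of their internal variables."""
    gap0 = abs(x1 - x2)
    for _ in range(steps):
        x1, x2 = step(x1, a1), step(x2, a2)
        # simultaneous exchange: each state is pulled toward the other
        x1, x2 = x1 + c * (x2 - x1), x2 + c * (x1 - x2)
    return x1, x2, gap0

x1, x2, gap0 = supermodel(1.0, -1.0)
# after coupling, the two trajectories end far closer together than they began
```

In the actual method the exchanged quantities are the hydrological stores of the GR4J state-space models, and the coupling weights would themselves be learned rather than fixed.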

  1. Downscaling GISS ModelE Boreal Summer Climate over Africa

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2° latitude by 2.5° longitude and the RM3 grid spacing is 0.44°. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent; it eliminates the ModelE double ITCZ over the Atlantic, and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  2. A tool for multi-scale modelling of the renal nephron

    PubMed Central

    Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.

    2011-01-01

    We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user may choose to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210

  3. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to the user's input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the three supporting tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a good starting point for beginners, and, for more advanced purposes, users will also be able to access and employ models of the BioModels Database. PMID:24006914

  4. A parsimonious dynamic model for river water quality assessment.

    PubMed

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

    Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. A possible middle ground between detailed and simplified models is the parsimonious model, which represents the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is necessary to adopt a simple river water quality model rather than a detailed one. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: one for water quantity and one for water quality. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH4, and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output with respect to the model inputs and parameters was performed based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
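The "series of linear channels and reservoirs" concept can be sketched as a cascade of linear reservoirs routing a pollutant pulse: each reservoir releases outflow in proportion to its storage, which delays and disperses the input wave. The storage coefficient and reservoir count below are illustrative only:

```python
# Route a pollutant pulse through a cascade of linear reservoirs
# (explicit time stepping, outflow = k * storage). Parameters are invented.

def route_pulse(inflow, n_reservoirs=3, k=0.3, dt=1.0, steps=20):
    """Return the outflow series of the cascade for a given inflow series."""
    storage = [0.0] * n_reservoirs
    outflow = []
    for t in range(steps):
        q_in = inflow[t] if t < len(inflow) else 0.0
        for i in range(n_reservoirs):
            storage[i] += q_in * dt        # upstream water enters
            q_out = k * storage[i]         # linear release
            storage[i] -= q_out * dt
            q_in = q_out                   # feeds the next reservoir
        outflow.append(q_in)
    return outflow

hydrograph = route_pulse([10.0])           # a single unit-time pulse of pollutant
peak_time = hydrograph.index(max(hydrograph))
# the peak arrives late and attenuated: the delay comes from the cascade,
# the spreading (dispersion) from the storages
```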

  5. The cost of simplifying air travel when modeling disease spread.

    PubMed

    Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V

    2009-01-01

    Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
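A minimal sketch of the pipe approximation described above, with all airport counts invented: the expected trips between two airports depend only on total departures and arrivals, not on the actual route network, which is precisely why route-specific error can appear:

```python
# Toy "pipe" model of air travel: travelers leave the system in proportion to
# an airport's departures and are assigned destinations in proportion to each
# airport's arrivals. All counts below are fabricated for illustration.

departures = {"A": 100, "B": 50, "C": 10}
arrivals = {"A": 90, "B": 55, "C": 15}
total = sum(departures.values())

def pipe_trips(origin, dest):
    """Expected daily trips origin -> dest under the pipe approximation."""
    return departures[origin] * arrivals[dest] / total

est = pipe_trips("A", "B")     # 100 * 55 / 160 = 34.375
# a true route matrix could concentrate travel on a few city pairs; the gap
# between it and pipe_trips() is where false or missed disease introductions
# would enter an epidemic model
```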

  6. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

    An increasing number of risk prediction models have been developed to estimate the risk of breast cancer in individual women. However, the performance of these models is questionable. We therefore conducted a study with the aim of systematically reviewing previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model, guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed for four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive for measuring improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the performance improvement of a newly developed model.
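The concordance statistic quoted above can be computed directly from pairwise comparisons of predicted risks between cases and non-cases; the risks and outcomes below are fabricated for illustration:

```python
# Concordance statistic (c-statistic): the fraction of case/non-case pairs in
# which the case received the higher predicted risk; ties count one half.
# A value near 0.5 is chance-level discrimination, 1.0 is perfect.

def c_statistic(risks, outcomes):
    """Pairwise concordance of predicted risks with binary outcomes."""
    cases = [r for r, o in zip(risks, outcomes) if o == 1]
    controls = [r for r, o in zip(risks, outcomes) if o == 0]
    pairs = concordant = 0.0
    for rc in cases:
        for rn in controls:
            pairs += 1
            if rc > rn:
                concordant += 1
            elif rc == rn:
                concordant += 0.5
    return concordant / pairs

risks = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]    # predicted risks (made up)
outcomes = [1, 0, 1, 0, 1, 0]             # 1 = developed cancer (made up)
c = c_statistic(risks, outcomes)
# values in the 0.53-0.66 range reported by the review sit close to 0.5,
# which is what "poor to fair" discrimination means in practice
```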

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. A. Wasiolek

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  8. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be shown.

  9. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
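
The "model confusion matrix" idea can be sketched with a toy Bayesian model averaging (BMA) setup. Everything here is illustrative: the two "models" are simple stand-ins for the paper's homogeneous-to-geostatistical hierarchy, and the predicted means, noise level, and equal priors are assumptions, not values from the study.

```python
import math, random

random.seed(1)

SIGMA = 1.0  # assumed observation-error standard deviation

# Toy stand-ins for models of different complexity: each simply predicts
# a constant mean level for the observations.
MODEL_MEANS = {"simple": 0.0, "complex": 0.8}

def log_likelihood(data, mu):
    # Gaussian i.i.d. log-likelihood of the data under a model's prediction.
    c = math.log(SIGMA * math.sqrt(2.0 * math.pi))
    return sum(-0.5 * ((x - mu) / SIGMA) ** 2 - c for x in data)

def bma_weights(data):
    # Posterior model weights under equal priors (normalized likelihoods).
    logls = {m: log_likelihood(data, mu) for m, mu in MODEL_MEANS.items()}
    top = max(logls.values())
    raw = {m: math.exp(v - top) for m, v in logls.items()}
    z = sum(raw.values())
    return {m: w / z for m, w in raw.items()}

def confusion_matrix(n_data=20, n_reps=200):
    # Row = model that generated the synthetic data; column = average BMA
    # weight each model receives. A strong diagonal means the planned data
    # volume is enough to identify (justify) each model.
    cm = {}
    for generator, mu in MODEL_MEANS.items():
        row = {m: 0.0 for m in MODEL_MEANS}
        for _ in range(n_reps):
            synthetic = [random.gauss(mu, SIGMA) for _ in range(n_data)]
            for m, w in bma_weights(synthetic).items():
                row[m] += w / n_reps
        cm[generator] = row
    return cm

cm = confusion_matrix()
```

Increasing `n_data` sharpens the diagonal, mirroring the paper's point that more (or better-designed) data can justify more complex models.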

  10. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    PubMed

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Nursing resources and responsibilities according to hospital organizational model for management of inflammatory bowel disease in Spain.

    PubMed

    Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle

    2011-06-01

Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and the different organizational models in managing IBD in Spain. Cross-sectional study with data obtained by questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); an IBD outpatient office for physician consultations (Model B); a general outpatient office for nurse consultations (Model C); both Model B and Model C (Model D); or an IBD Unit (Model E), when the hospital has a Comprehensive Care Unit for IBD with a telephone helpline and computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). Nurses have more resources and responsibilities the more complete is the organizational model for IBD management. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.

  12. Template-free modeling by LEE and LEER in CASP11.

    PubMed

    Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server-model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.

  13. Plausible combinations: An improved method to evaluate the covariate structure of Cormack-Jolly-Seber mark-recapture models

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.

    2013-01-01

Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
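
The combinatorial growth of the candidate set described above can be illustrated by enumerating sub-model combinations. The covariate names below are hypothetical; the point is only that a Cormack-Jolly-Seber model space is the cross product of survival and recapture sub-models.

```python
from itertools import combinations, product

# Hypothetical covariates for each sub-model.
survival_covariates = ["sex", "age", "winter_severity"]
recapture_covariates = ["effort", "year"]

def covariate_subsets(covariates):
    # Every subset, from the intercept-only sub-model to the full sub-model.
    for r in range(len(covariates) + 1):
        yield from combinations(covariates, r)

# A candidate model pairs one survival (phi) sub-model with one
# recapture (p) sub-model, so the model space is their cross product.
candidate_models = [
    {"phi": phi, "p": p}
    for phi, p in product(covariate_subsets(survival_covariates),
                          covariate_subsets(recapture_covariates))
]

print(len(candidate_models))  # 2**3 subsets x 2**2 subsets = 32 structures
```

With ten covariates per sub-model the set exceeds a million structures, which is why search strategies that fit only a subset, and their effect on inference, matter.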

  14. Framework for Understanding Structural Errors (FUSE): A modular framework to diagnose differences between hydrological models

    USGS Publications Warehouse

    Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.

    2008-01-01

    The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.

  15. Moving alcohol prevention research forward-Part II: new directions grounded in community-based system dynamics modeling.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller

    2018-02-01

Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties-stocks, flows and feedbacks-capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
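
A minimal stock-and-flow sketch illustrates the foundational properties named above. The two stocks, two flows, and all rate constants below are invented for illustration and are not calibrated to any student population.

```python
def simulate_misuse(weeks=52, onset_rate=0.01, recovery_rate=0.05,
                    students=10_000, misusing0=1_000):
    # Two stocks (non-misusing and misusing students) linked by two flows
    # (onset and recovery). All rates and sizes are illustrative only.
    non_misusing = float(students - misusing0)
    misusing = float(misusing0)
    history = []
    for _ in range(weeks):
        onset = onset_rate * non_misusing      # flow into the misusing stock
        recovery = recovery_rate * misusing    # flow back out of it
        non_misusing += recovery - onset
        misusing += onset - recovery
        history.append(misusing)
    return history

trajectory = simulate_misuse()
```

The stocks settle where the flows balance, at students * onset_rate / (onset_rate + recovery_rate), about 1,667 misusing students here; adding feedback (for example, making onset_rate depend on the misusing stock to represent peer influence) is what produces the non-linear, time-delayed behavior the abstract refers to.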

  16. A Comparison of Two Mathematical Modeling Frameworks for Evaluating Sexually Transmitted Infection Epidemiology.

    PubMed

    Johnson, Leigh F; Geffen, Nathan

    2016-03-01

    Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
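
The frequency-dependent simplification can be made concrete with a one-line incidence term in which per-susceptible risk is proportional to current prevalence. This SIS-type sketch uses assumed transmission and recovery rates, not parameters from either model in the study; a network model would instead track each individual's partners' infection status.

```python
def frequency_dependent_sis(beta=0.3, recovery=0.1, steps=2000, dt=0.1, i0=0.01):
    # Frequency-dependent assumption: a susceptible's infection risk is
    # beta * (current prevalence i), regardless of partnership structure.
    i = i0
    for _ in range(steps):
        di = beta * i * (1.0 - i) - recovery * i  # incidence minus recovery
        i += di * dt
    return i

print(round(frequency_dependent_sis(), 3))  # → 0.667, i.e. 1 - recovery/beta
```

Because incidence here depends only on aggregate prevalence, the sketch cannot represent concurrency or high-risk core groups, which is exactly the structural difference the paper probes.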

  17. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
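
The ROC comparison reduces to computing the area under the curve for each model's urbanization suitability scores. A minimal sketch, using the Mann-Whitney formulation of AUC and invented labels and scores for a handful of cells (the study's actual values were 94.77% for MARS and 95.36% for ANN):

```python
def auc(labels, scores):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # probability that a randomly chosen urban cell outscores a randomly
    # chosen non-urban cell (ties count half).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical urbanization labels (1 = became urban) and model scores:
labels  = [1, 1, 1, 0, 0, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]  # one misranked pair
model_b = [0.9, 0.8, 0.7, 0.5, 0.3, 0.2, 0.1]  # ranks all urban cells first

print(auc(labels, model_a), auc(labels, model_b))
```

Model b attains a perfect 1.0 while model a's single misranked pair costs 1/12 of the area, which is the same ranking logic behind the 94.77% versus 95.36% comparison.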

  18. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  19. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.

    2010-07-01

Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  20. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.

  1. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
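
The finding that empirical models transfer best with minimal extrapolation can be sketched with a toy space-for-time experiment: fit a linear SWE model at cold "sites", then validate where the input range is similar versus distinctly warmer. The SWE-temperature relation, noise level, and clipping at zero snow are all assumptions for illustration, not the paper's fitted models.

```python
import random

random.seed(7)

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rmse(coeffs, xs, ys):
    a, b = coeffs
    return (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

def swe(temp):
    # Hypothetical SWE response: linear in mean winter temperature, noisy,
    # clipped at zero because snowpack cannot be negative.
    return max(0.0, 60.0 - 10.0 * temp + random.gauss(0.0, 5.0))

# "Train" at cold sites, then transfer in space to similar and warmer climates.
train = [random.uniform(-10.0, 0.0) for _ in range(100)]
model = fit_line(train, [swe(t) for t in train])

similar = [random.uniform(-10.0, 0.0) for _ in range(100)]  # little extrapolation
warmer  = [random.uniform(3.0, 8.0) for _ in range(100)]    # strong extrapolation

interp_rmse = rmse(model, similar, [swe(t) for t in similar])
extrap_rmse = rmse(model, warmer, [swe(t) for t in warmer])
print(interp_rmse < extrap_rmse)  # transfer degrades under extrapolation
```

The linear model is nearly exact inside the training range but predicts negative SWE at warm sites where the true response is clipped at zero, so transfer error grows exactly where extrapolation in the input variable is required.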

  2. Geospace environment modeling 2008--2009 challenge: Dst index

    USGS Publications Warehouse

Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wiltberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.

    2013-01-01

This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can also be used to compare the spectral variability of model outputs with that of the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
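
One common metric in such model-observation comparisons is the prediction efficiency skill score, 1 - MSE/Var(obs). The Dst values below are a hypothetical hourly segment, and the challenge used several skill scores, of which this is just one representative choice.

```python
def prediction_efficiency(observed, modeled):
    # Skill score 1 - MSE / Var(obs): 1 is a perfect forecast, 0 is no
    # better than always predicting the observed mean.
    n = len(observed)
    mean_obs = sum(observed) / n
    mse = sum((o - m) ** 2 for o, m in zip(observed, modeled)) / n
    var = sum((o - mean_obs) ** 2 for o in observed) / n
    return 1.0 - mse / var

# Hypothetical 1 hour averaged Dst (nT) over a small storm interval:
obs = [-10, -40, -80, -120, -90, -60, -30]
run = [-5, -50, -70, -110, -100, -55, -35]

print(round(prediction_efficiency(obs, run), 3))  # → 0.945
```

Because the denominator is the observed variance, the same absolute model error yields a much lower score during weak events than during strong storms, which is one way statistical models can rank well on big storms yet struggle on the weakest events.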

  3. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and a deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable for daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again provided comparisons of the models' performance.
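
A GARCH(1,1) error series can be simulated directly from its defining recursion, which is one way to see the volatility clustering (heteroscedasticity) the paper tests for. The parameter values below are assumed for illustration only; the constraint alpha + beta < 1 keeps the unconditional variance finite at omega / (1 - alpha - beta).

```python
import math, random

random.seed(3)

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85):
    # GARCH(1,1): e_t = sigma_t * z_t with z_t ~ N(0, 1), and
    # sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2.
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    errors = []
    for _ in range(n):
        e = math.sqrt(var) * random.gauss(0.0, 1.0)
        errors.append(e)
        var = omega + alpha * e * e + beta * var  # next conditional variance
    return errors

errs = simulate_garch11(20_000)
sample_var = sum(e * e for e in errs) / len(errs)
print(round(sample_var, 2))  # close to the unconditional variance 1.0
```

Fitting runs in the reverse direction: maximize the Gaussian likelihood of the observed forecast-error series over (omega, alpha, beta) using the same variance recursion, then forecast tomorrow's conditional variance from today's error.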

  4. RECURSIVE PROTEIN MODELING: A DIVIDE AND CONQUER STRATEGY FOR PROTEIN STRUCTURE PREDICTION AND ITS CASE STUDY IN CASP9

    PubMed Central

    CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN

    2013-01-01

After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379

  5. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
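
The flavor of the index (the output variance a process contributes through both its alternative models and their parameters) can be sketched with an additive toy response. The two recharge laws, two conductivity parameterizations, equal model weights, and the additive head response below are all invented for illustration; the paper's index is more general than this reduction.

```python
import random

random.seed(11)

def sample_recharge():
    # Process 1: two alternative precipitation-to-recharge laws (equal
    # weights), each with its own parametric scatter.
    if random.random() < 0.5:
        return random.gauss(1.0, 0.1)
    return random.gauss(1.5, 0.1)

def sample_geology():
    # Process 2: homogeneous vs zoned conductivity parameterizations.
    if random.random() < 0.5:
        return random.gauss(0.0, 0.2)
    return random.gauss(0.1, 0.6)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n = 50_000
recharge = [sample_recharge() for _ in range(n)]
geology = [sample_geology() for _ in range(n)]
head = [r + g for r, g in zip(recharge, geology)]  # toy additive head response

# For an independent additive response, each process sensitivity index
# reduces to the share of output variance that process contributes via
# both its alternative models and their parameters.
ps_recharge = variance(recharge) / variance(head)
ps_geology = variance(geology) / variance(head)
print(round(ps_recharge, 2), round(ps_geology, 2))
```

Here geology dominates (roughly three quarters of the output variance) even though the recharge models disagree more in their means, because the zoned parameterization's parametric spread is wide; the index deliberately mixes both sources of uncertainty.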

  6. Comparison of childbirth care models in public hospitals, Brazil.

    PubMed

    Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos

    2014-04-01

    To compare collaborative and traditional childbirth care models. Cross-sectional study with 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women for the collaborative model and 322 for the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (collaborative model 16.1% and traditional model 85.2%; p < 0.001) were less used in the collaborative model, with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduction in the use of oxytocin, artificial rupture of membranes and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers, nor with the use of spinal or epidural analgesia. The results suggest that the collaborative model may reduce interventions performed in labor care with similar perinatal outcomes.

  7. Developing and upgrading of solar system thermal energy storage simulation models. Technical progress report, March 1, 1979-February 29, 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn, J K; von Fuchs, G F; Zob, A P

    1980-05-01

    Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.

  8. Modeling approaches in avian conservation and the role of field biologists

    USGS Publications Warehouse

    Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.

    2006-01-01

    This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
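    The first model type in the list above, a deterministic single-population matrix model, reduces to a dominant-eigenvalue computation. A minimal sketch with a hypothetical three-age-class Leslie matrix (the vital rates are invented for illustration, not taken from any study):

    ```python
    import numpy as np

    # Hypothetical Leslie matrix for a bird population with 3 age classes:
    # row 0 holds per-class fecundities, the subdiagonal holds survival rates.
    L = np.array([[0.0, 1.5, 2.0],
                  [0.5, 0.0, 0.0],
                  [0.0, 0.7, 0.0]])

    eigvals, eigvecs = np.linalg.eig(L)
    lam = max(eigvals.real)            # dominant eigenvalue = asymptotic growth rate
    w = np.abs(eigvecs[:, np.argmax(eigvals.real)])
    stable_age = w / w.sum()           # stable age distribution (Perron vector)

    print(f"lambda = {lam:.3f}")       # >1 means growth, <1 means decline
    ```

    The same eigen-decomposition also yields reproductive values (from the left eigenvector), which is why matrix models are a common starting point before the stochastic PVA and metapopulation extensions the review discusses.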

  9. Review: Regional groundwater flow modeling in heavily irrigated basins of selected states in the western United States

    NASA Astrophysics Data System (ADS)

    Rossman, Nathan R.; Zlotnik, Vitaly A.

    2013-09-01

    Water resources in agriculture-dominated basins of the arid western United States are stressed due to long-term impacts from pumping. A review of 88 regional groundwater-flow modeling applications from seven intensively irrigated western states (Arizona, California, Colorado, Idaho, Kansas, Nebraska and Texas) was conducted to provide hydrogeologists, modelers, water managers, and decision makers insight about past modeling studies that will aid future model development. Groundwater models were classified into three types: resource evaluation models (39 %), which quantify water budgets and act as preliminary models intended to be updated later, or constitute re-calibrations of older models; management/planning models (55 %), used to explore and identify management plans based on the response of the groundwater system to water-development or climate scenarios, sometimes under water-use constraints; and water rights models (7 %), used to make water administration decisions based on model output and to quantify water shortages incurred by water users or climate changes. Results for 27 model characteristics are summarized by state and model type, and important comparisons and contrasts are highlighted. Consideration of modeling uncertainty and the management focus toward sustainability, adaptive management and resilience are discussed, and future modeling recommendations, in light of the reviewed models and other published works, are presented.

  10. Interpreting Musculoskeletal Models and Dynamic Simulations: Causes and Effects of Differences Between Models.

    PubMed

    Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A

    2017-11-01

    With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.

  11. Inter-sectoral comparison of model uncertainty of climate change impacts in Africa

    NASA Astrophysics Data System (ADS)

    van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse

    2016-04-01

    We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative Concentration Pathways (RCPs). In this study, we analysed model uncertainty for the regional hydrological models, global hydrological models, malaria models and crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty originate from the GCMs and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes in low flows than in mean or high flows. For the other sectors, the impact models contribute the largest share of uncertainty, exceeding that of the GCMs and RCPs, especially for malaria and crop modelling. The overall conclusion of the ISI-MIP is that an ensemble modelling approach is strongly advised for climate change impact studies throughout the whole modelling chain.
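    The partitioning of ensemble spread among GCMs, RCPs, and impact models can be illustrated with a simple main-effect variance decomposition over a synthetic factorial ensemble. All numbers below are invented stand-ins (the real ISI-MIP analysis is more involved); the factor standard deviations are chosen so the GCM effect dominates, as in the two basins:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical factorial ensemble of projected changes:
    # 5 GCMs x 4 RCPs x 3 impact models (synthetic values).
    G, R, M = 5, 4, 3
    y = (rng.normal(0, 1.0, (G, 1, 1))      # GCM effect (largest by design)
         + rng.normal(0, 0.6, (1, R, 1))    # RCP effect
         + rng.normal(0, 0.2, (1, 1, M))    # impact-model effect
         + rng.normal(0, 0.1, (G, R, M)))   # interaction / residual

    def main_effect_share(y, axis):
        """Fraction of total ensemble variance explained by one factor."""
        other = tuple(a for a in range(y.ndim) if a != axis)
        return y.mean(axis=other).var() / y.var()

    shares = {name: main_effect_share(y, ax)
              for ax, name in enumerate(["GCM", "RCP", "impact model"])}
    print(shares)
    ```

    Whatever does not appear in the main-effect shares is interaction plus residual variance, which is one reason the study recommends propagating the full ensemble through the whole modelling chain rather than a single chain member.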

  12. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling of transistors are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton-Genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is fitted using multi-bias s-parameter measurements, employing a hybrid of a multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.

  13. The regionalization of national-scale SPARROW models for stream nutrients

    USGS Publications Warehouse

    Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.

    2011-01-01

    This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.

  14. Modeling of Stiffness and Strength of Bone at Nanoscale.

    PubMed

    Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M

    2017-05-01

    Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.

  15. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. The National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities, including airflow, flight path, aircraft, scheduling, human performance (HPM), and bioinformatics models, among a host of others, that are used for predicting whether proposed designs will meet specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural/task models. The challenge to model comprehensibility is heightened as the number of integrated models and the requisite fidelity of the procedural sets increase. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model, and plans for future model refinements will be presented.

  16. Processing Speed in Children: Examination of the Structure in Middle Childhood and Its Impact on Reading

    ERIC Educational Resources Information Center

    Gerst, Elyssa H.

    2017-01-01

    The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…

  17. The Application of Various Nonlinear Models to Describe Academic Growth Trajectories: An Empirical Analysis Using Four-Wave Longitudinal Achievement Data from a Large Urban School District

    ERIC Educational Resources Information Center

    Shin, Tacksoo

    2012-01-01

    This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…

  18. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    ERIC Educational Resources Information Center

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  19. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    ERIC Educational Resources Information Center

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  20. A toolbox and a record for scientific model development

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling them: (1) designing a 'Model Development Toolbox' that includes a basic set of model-constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models, and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.

  1. A decision support model for investment on P2P lending platform.

    PubMed

    Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we performed extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone.
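    The lender-loan bipartite structure naturally supports a HITS-style fixed-point iteration, which is one plausible minimal sketch of the iterative evaluation idea: loan scores and lender-judgement scores reinforce each other. The adjacency matrix and update rule below are illustrative assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    # Hypothetical lender-loan bipartite adjacency: A[i, j] = 1 if lender i
    # invested in loan j.
    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 0, 1]], dtype=float)

    lender = np.ones(A.shape[0])
    for _ in range(50):                  # iterate to a fixed point
        loan = A.T @ lender              # loans backed by good lenders score high
        loan /= np.linalg.norm(loan)
        lender = A @ loan                # lenders holding good loans score high
        lender /= np.linalg.norm(lender)

    print(np.round(loan, 3))
    ```

    At convergence the loan vector is the dominant eigenvector of the co-investment matrix A.T @ A, so loans that share many well-judged investors rank highest; a classifier such as logistic regression could then be layered on top, as in the hybrid model the abstract describes.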

  2. A decision support model for investment on P2P lending platform

    PubMed Central

    Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we performed extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone. PMID:28877234

  3. First-Order Model Management With Variable-Fidelity Physics Applied to Multi-Element Airfoil Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.

    2000-01-01

    First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depends on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
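    The core of first-order model management is a corrected low-fidelity model that matches the high-fidelity value and gradient at the current iterate, so a trust-region optimizer can work mostly with the cheap model. A minimal one-dimensional sketch, with toy stand-in functions in place of the Euler and Navier-Stokes analyses:

    ```python
    import numpy as np

    # Toy stand-ins for the two physics fidelities (NOT the actual CFD models):
    f_hi = lambda x: np.sin(x) + 0.05 * x**2      # "high fidelity"
    f_lo = lambda x: np.sin(x)                    # "low fidelity"

    def d(f, x, h=1e-6):                          # central finite difference
        return (f(x + h) - f(x - h)) / (2 * h)

    def corrected(f_lo, f_hi, x0):
        """Additive first-order correction: matches f_hi value and slope at x0."""
        a = f_hi(x0) - f_lo(x0)
        b = d(f_hi, x0) - d(f_lo, x0)
        return lambda x: f_lo(x) + a + b * (x - x0)

    x0 = 1.0
    f_tilde = corrected(f_lo, f_hi, x0)
    # Zeroth- and first-order consistency at the trust-region centre:
    print(abs(f_tilde(x0) - f_hi(x0)))            # ~0
    print(abs(d(f_tilde, x0) - d(f_hi, x0)))      # ~0
    ```

    First-order consistency at the current point is what guarantees convergence to a high-fidelity solution even though most function evaluations use the low-fidelity model.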

  4. Macro-level pedestrian and bicycle crash analysis: Incorporating spatial spillover effects in dual state count models.

    PubMed

    Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed

    2016-08-01

    This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, namely the zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
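    The dual-state idea can be made concrete with the zero-inflated negative binomial (ZINB) probability mass function: a zone records zero crashes either because it is in a structural-zero state (probability pi) or because the count process happens to produce zero. The parameter values below are illustrative, not estimates from the study.

    ```python
    import numpy as np
    from scipy.stats import nbinom

    def zinb_pmf(y, pi, mu, alpha):
        """ZINB pmf with mean mu and dispersion alpha (Var = mu + alpha*mu^2).

        scipy's nbinom uses the (n, p) parameterisation, so convert first.
        """
        n = 1.0 / alpha
        p = n / (n + mu)
        base = nbinom.pmf(y, n, p)
        return np.where(y == 0, pi + (1 - pi) * base, (1 - pi) * base)

    y = np.arange(0, 50)
    pmf = zinb_pmf(y, pi=0.3, mu=2.5, alpha=0.8)
    print(pmf[:3].round(3))        # extra probability mass piled on zero
    ```

    The hurdle variant differs only in how zeros are handled: it removes all zeros from the count component and models them with a separate binary process, which is why the two dual-state forms can fit zero-heavy TAZ crash data differently.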

  5. BioModels Database: a repository of mathematical models of biological processes.

    PubMed

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats and stored in SBML; they are available for download in various other common formats, such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are added to BioModels Database at regular releases, about every 4 months.

  6. Documenting Models for Interoperability and Reusability ...

    EPA Pesticide Factsheets

    Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod

  7. CSR Model Implementation from School Stakeholder Perspectives

    ERIC Educational Resources Information Center

    Herrmann, Suzannah

    2006-01-01

    Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…

  8. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

    Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
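    The simplest member of the model family compared here, a single-step first-order model dV/dt = A·exp(-E/RT)·(Vinf - V), can be integrated along constant-heating-rate ramps spanning the quoted range. The kinetic constants below are illustrative placeholders, not fitted CPD values.

    ```python
    import math

    # Single-step devolatilization sketch: dV/dt = A*exp(-E/(R*T))*(Vinf - V),
    # integrated with explicit Euler along a constant-heating-rate ramp.
    A, E, R = 2.0e5, 7.5e4, 8.314          # 1/s, J/mol, J/(mol K) (illustrative)
    Vinf = 0.55                            # ultimate volatiles yield (mass frac.)

    def volatiles_yield(heating_rate, T0=300.0, Tend=1600.0, n=100000):
        """Total volatiles released by the time the particle reaches Tend."""
        dt = (Tend - T0) / heating_rate / n
        V, T = 0.0, T0
        for _ in range(n):
            V += dt * A * math.exp(-E / (R * T)) * (Vinf - V)
            T += heating_rate * dt
        return V

    V_slow = volatiles_yield(5e3)          # 5x10^3 K/s ramp
    V_fast = volatiles_yield(1e6)          # 10^6 K/s: less time at temperature
    print(f"{V_slow:.3f} {V_fast:.3f}")
    ```

    The faster ramp reaches the final temperature having spent far less time at reactive temperatures, so its yield on arrival is much lower; distributed-activation-energy and two-step variants modify the rate expression, not this integration structure.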

  9. [Bone remodeling and modeling/mini-modeling].

    PubMed

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone at the fetal and developmental stages, while bone remodeling, the replacement of old bone with new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, the mature, active form of osteoblasts is localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts form over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, are introduced.

  10. An Introduction to Markov Modeling: Concepts and Uses

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Markov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Markov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Markov modeling: what are the capabilities and limitations of Markov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Markov modeling techniques? These questions and others will be addressed in this tutorial.
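
As a concrete illustration of the kind of dependability analysis the tutorial describes, the sketch below integrates the Kolmogorov forward equations for a small continuous-time Markov model of a two-component parallel system with no repair. The redundancy scheme and failure rate are hypothetical, not taken from the tutorial:

```python
def markov_reliability(lam=1e-3, t_end=1000.0, dt=0.1):
    """Transient reliability of a two-component parallel system
    (no repair), via forward-Euler integration of the Kolmogorov
    forward equations. States: 0 = both components up,
    1 = one up, 2 = system failed (absorbing). lam is the
    per-component failure rate (illustrative value)."""
    p = [1.0, 0.0, 0.0]                  # initial state probabilities
    for _ in range(int(t_end / dt)):
        dp0 = -2 * lam * p[0]            # either component may fail
        dp1 = 2 * lam * p[0] - lam * p[1]
        dp2 = lam * p[1]                 # flow into the absorbing state
        p = [p[0] + dp0 * dt, p[1] + dp1 * dt, p[2] + dp2 * dt]
    return 1.0 - p[2]                    # P(system still operational)
```

The analytic answer for this model is R(t) = 2e^(-λt) - e^(-2λt), which the numerical result should closely match.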

  11. The cerebro-cerebellum: Could it be loci of forward models?

    PubMed

    Ishikawa, Takahiro; Tomatsu, Saeka; Izawa, Jun; Kakei, Shinji

    2016-03-01

    It is widely accepted that the cerebellum acquires and maintains internal models for motor control. An internal model simulates the mapping between a set of causes and effects. There are two candidate types of cerebellar internal model: forward models and inverse models. A forward model transforms a motor command into a prediction of the sensory consequences of a movement. In contrast, an inverse model inverts the information flow of the forward model. Despite the clearly different formulations of the two internal models, it is still controversial whether the cerebro-cerebellum, the phylogenetically newer part of the cerebellum, provides inverse models or forward models for voluntary limb movements or other higher brain functions. In this article, we review physiological and morphological evidence that suggests the existence in the cerebro-cerebellum of a forward model for limb movement. We will also discuss how the characteristic input-output organization of the cerebro-cerebellum may contribute to forward models for non-motor higher brain functions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Second Generation Crop Yield Models Review

    NASA Technical Reports Server (NTRS)

    Hodges, T. (Principal Investigator)

    1982-01-01

    Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.

  13. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interaction processes are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  14. Mechanical model development of rolling bearing-rotor systems: A review

    NASA Astrophysics Data System (ADS)

    Cao, Hongrui; Niu, Linkai; Xi, Songtao; Chen, Xuefeng

    2018-03-01

    The rolling bearing rotor (RBR) system is the kernel of many rotating machines, which affects the performance of the whole machine. Over the past decades, extensive research work has been carried out to investigate the dynamic behavior of RBR systems. However, to the best of the authors' knowledge, no comprehensive review on RBR modelling has been reported yet. To address this gap in the literature, this paper reviews and critically discusses the current progress of mechanical model development of RBR systems, and identifies future trends for research. Firstly, five kinds of rolling bearing models, i.e., the lumped-parameter model, the quasi-static model, the quasi-dynamic model, the dynamic model, and the finite element (FE) model are summarized. Then, the coupled modelling between bearing models and various rotor models including De Laval/Jeffcott rotor, rigid rotor, transfer matrix method (TMM) models and FE models are presented. Finally, the paper discusses the key challenges of previous works and provides new insights into understanding of RBR systems for their advanced future engineering applications.

  15. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    NASA Astrophysics Data System (ADS)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic—models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  16. Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread

    PubMed Central

    Miller, Joel C.; Volz, Erik M.

    2012-01-01

    We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
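
The standard mass-action SIR model, which the abstract identifies as the limiting case of the edge-based hierarchy, can be sketched in a few lines. Parameter values here are illustrative only:

```python
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, t_end=200.0, dt=0.01):
    """Standard mass-action SIR model, forward-Euler integration.
    beta: transmission rate, gamma: recovery rate (illustrative
    values giving a basic reproduction number R0 = beta/gamma = 3).
    Returns the final susceptible, infected, recovered fractions."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i           # mass-action incidence
        rec = gamma * i                  # recoveries
        s, i, r = s - new_inf * dt, i + (new_inf - rec) * dt, r + rec * dt
    return s, i, r
```

Because incidence depends only on the product s*i, this model implicitly assumes contacts are fleeting and uncorrelated; the paper's edge-based models relax exactly that assumption.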

  17. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  18. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE PAGES

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...

    2017-07-11

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  19. Modeling of near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Shih, T. H.; Mansour, N. N.

    1990-01-01

    An improved k-epsilon model and a second order closure model are presented for low Reynolds number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity having correct asymptotic near wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second order closure model, the existing models are modified for the Reynolds stress equations to have proper near wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing. The calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second order closure model, perform well in predicting the behavior of the near wall turbulence. Significant improvements over previous models are obtained.
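
For orientation, low-Reynolds-number k-epsilon models of this family are built around the standard eddy-viscosity relation below; the asymptotic argument is textbook near-wall theory, not the specific damping function proposed in this paper:

```latex
\nu_t = C_\mu \, f_\mu \, \frac{k^2}{\varepsilon}
```

Near a wall, the turbulent kinetic energy behaves as k ~ y² while the dissipation rate ε tends to a finite wall value, so k²/ε ~ y⁴; the exact eddy viscosity, however, vanishes as ν_t ~ y³. Consistency therefore requires the damping function to scale as f_μ ~ y⁻¹ as y → 0, which is the "correct asymptotic near wall behavior" the abstract refers to.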

  20. [Modeling in value-based medicine].

    PubMed

    Neubauer, A S; Hirneiss, C; Kampik, A

    2010-03-01

    Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of the model's predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better-informed decisions than would be possible without this additional information.
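
Of the model types named above, a Markov cohort model is the easiest to sketch: a cohort is pushed through health states cycle by cycle, accumulating costs and quality-adjusted life years. The states, transition probabilities, utilities, and costs below are purely illustrative placeholders, not clinical data:

```python
def markov_cohort(n_cycles=40):
    """Three-state Markov cohort model (well / sick / dead).
    Transition probabilities per annual cycle; utilities in QALYs
    per cycle, costs per cycle. All numbers are invented for
    illustration and carry no clinical meaning."""
    P = {                              # from-state -> (well, sick, dead)
        "well": (0.85, 0.10, 0.05),
        "sick": (0.00, 0.80, 0.20),
        "dead": (0.00, 0.00, 1.00),    # absorbing state
    }
    utility = {"well": 0.95, "sick": 0.60, "dead": 0.0}
    cost = {"well": 100.0, "sick": 2500.0, "dead": 0.0}
    cohort = {"well": 1.0, "sick": 0.0, "dead": 0.0}
    total_qalys = total_cost = 0.0
    for _ in range(n_cycles):
        total_qalys += sum(cohort[s] * utility[s] for s in cohort)
        total_cost += sum(cohort[s] * cost[s] for s in cohort)
        nxt = {"well": 0.0, "sick": 0.0, "dead": 0.0}
        for s, frac in cohort.items():           # apply transition matrix
            for to, p in zip(("well", "sick", "dead"), P[s]):
                nxt[to] += frac * p
        cohort = nxt
    return total_qalys, total_cost
```

Running two such models (with and without an intervention) and comparing the cost and QALY totals yields the incremental cost-effectiveness ratio; sensitivity analyses then vary the inputs to produce tornado plots and acceptability curves.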

  1. Sequential Modelling of Building Rooftops by Integrating Airborne LIDAR Data and Optical Imagery: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.

    2013-05-01

    This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating them with linear cues retrieved from single imagery. Cue integration between the two datasets is facilitated by creating new topological features connecting the initial model and image lines, with which new model hypotheses (variants of the initial model) are produced. We adopt the Minimum Description Length (MDL) principle to compare the competing model candidates and select the optimal model, considering the balanced trade-off between model closeness and model complexity. Our preliminary results with the Vaihingen data provided by ISPRS WG III/4 demonstrate that the image-driven modelling cues can compensate for the limitations posed by LiDAR data in rooftop modelling.

  2. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  3. [Model-based biofuels system analysis: a review].

    PubMed

    Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin

    2011-03-01

    Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.

  4. An Immuno-epidemiological Model of Paratuberculosis

    NASA Astrophysics Data System (ADS)

    Martcheva, M.

    2011-11-01

    The primary objective of this article is to introduce an immuno-epidemiological model of paratuberculosis (Johne's disease). To develop the immuno-epidemiological model, we first develop an immunological model and an epidemiological model. Then, we link the two models through time-since-infection structure and parameters of the epidemiological model. We use the nested approach to compose the immuno-epidemiological model. Our immunological model captures the switch between the T-cell immune response and the antibody response in Johne's disease. The epidemiological model is a time-since-infection model and captures the variability of transmission rate and the vertical transmission of the disease. We compute the immune-response-dependent epidemiological reproduction number. Our immuno-epidemiological model can be used for investigation of the impact of the immune response on the epidemiology of Johne's disease.

  5. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  6. FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.

    2018-01-01

    The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.

  7. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
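
The Akaike and Bayesian information criteria mentioned above penalize a model's log-likelihood by its number of free parameters. A minimal sketch of criterion-based selection follows; the model names are real substitution models, but the likelihood scores are invented for illustration:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L,
    where n is the sample size (e.g. alignment sites)."""
    return k * math.log(n) - 2 * log_lik

def select_model(candidates):
    """candidates maps model name -> (log-likelihood, free parameters).
    Returns the name with the smallest AIC."""
    return min(candidates, key=lambda m: aic(*candidates[m]))

# Hypothetical likelihood scores for three nested substitution models;
# extra parameters must buy enough likelihood to justify themselves.
models = {"JC69": (-3500.0, 0), "HKY85": (-3410.0, 4), "GTR": (-3405.0, 8)}
best = select_model(models)
```

With BIC the penalty grows with the data size, so for large alignments BIC tends to favor simpler models than AIC does.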

  8. Application of surface complexation models to anion adsorption by natural materials

    USDA-ARS?s Scientific Manuscript database

    Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...

  9. Space Environments and Effects: Trapped Proton Model

    NASA Technical Reports Server (NTRS)

    Huston, S. L.; Kauffman, W. (Technical Monitor)

    2002-01-01

    An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the model: a FORTRAN-callable library, a stand-alone model, and a Web-based model.

  10. The NASA Marshall engineering thermosphere model

    NASA Technical Reports Server (NTRS)

    Hickey, Michael Philip

    1988-01-01

    Described is the NASA Marshall Engineering Thermosphere (MET) Model, which is a modified version of the MSFC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MSFC/J70 model required for the MET model are described, graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output from the MET model and the MSFC/J70 model are discussed.

  11. Wind turbine model and loop shaping controller design

    NASA Astrophysics Data System (ADS)

    Gilev, Bogdan

    2017-12-01

    A model of a wind turbine is presented, consisting of a wind speed model, mechanical and electrical models of the generator, and a tower oscillation model. The model of the whole system is linearized around a nominal operating point. Using the linear model with uncertainties, an uncertain model is synthesized. Based on the uncertain model, an H∞ controller is developed, which stabilizes the rotor frequency and damps the tower oscillations. Finally, the operation of the nonlinear system with the H∞ controller is simulated.

  12. Simulated Students and Classroom Use of Model-Based Intelligent Tutoring

    NASA Technical Reports Server (NTRS)

    Koedinger, Kenneth R.

    2008-01-01

    Two educational uses of models and simulations: 1) students create models and use simulations; and 2) researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring - data are crucial to create an effective model. Pittsburgh Science of Learning Center: resources for modeling, authoring, and experimentation; repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model. Help-seeking model: tutors metacognition. Scooter uses machine-learning detectors of student engagement.

  13. Modeling for Battery Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicles (be it unmanned aerial vehicles, small passenger aircraft, or assets in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanisms has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in developing the model, and as a result of such approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. 
In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are made that retain computational efficiency for online implementation of the model. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development is presented, including model validation results and EOD prediction results.
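
As a toy contrast to the electrochemistry-based model described above, the empirical (equivalent-circuit) end of the modeling spectrum can be sketched as follows. The open-circuit-voltage curve, internal resistance, and cutoff voltage are invented illustrative values, not parameters of the authors' model:

```python
def simulate_eod(current=2.0, capacity_ah=5.0, r_int=0.05,
                 v_cutoff=3.0, dt=1.0):
    """Toy equivalent-circuit battery model under constant-current
    discharge. Terminal voltage = OCV(soc) - I*R_int; end of
    discharge (EOD) is when the terminal voltage hits the cutoff.
    The linear OCV curve and all parameters are illustrative only."""
    soc, t = 1.0, 0.0
    while True:
        ocv = 3.0 + 1.2 * soc                 # toy open-circuit voltage
        v_term = ocv - current * r_int        # ohmic drop under load
        if v_term <= v_cutoff or soc <= 0.0:
            return t                          # seconds until EOD
        soc -= current * dt / (capacity_ah * 3600.0)  # coulomb counting
        t += dt
```

Running the model forward from the current state estimate is the basic mechanism behind the EOD predictions discussed above; the electrochemistry-based model replaces the toy OCV and resistance with physically meaningful state equations.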

  14. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance as measured by the area under the receiver operating characteristic curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (∼50%) when moving from the training data to the testing data. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. 
We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly worse than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
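
The AUC metric used throughout the comparison has a simple rank-based definition (the Mann-Whitney U formulation): the probability that a randomly chosen presence site receives a higher predicted score than a randomly chosen absence site. A self-contained sketch:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation: the fraction of (presence, absence) pairs in which
    the presence site outscores the absence site (ties count half).
    labels: 1 = presence, 0 = absence; scores: model predictions."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random scores and 1.0 to perfect separation, which puts the reported test-data range of 0.66 to 0.82 in context.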

  15. A toy terrestrial carbon flow model

    NASA Technical Reports Server (NTRS)

    Parton, William J.; Running, Steven W.; Walker, Brian

    1992-01-01

    A generalized carbon flow model for the major terrestrial ecosystems of the world is reported. The model is a simplification of the Century model and the Forest-Biogeochemical model. Topics covered include plant production, decomposition and nutrient cycling, biomes, the utility of the carbon flow model for predicting carbon dynamics under global change, and possible applications to state-and-transition models and environmentally driven global vegetation models.
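
A "toy" carbon flow model in the spirit described above can be reduced to two pools with first-order turnover; the pool structure and rate constants here are illustrative assumptions, not the actual Century/Forest-BGC simplification:

```python
def carbon_pools(npp=500.0, k_plant=0.1, k_soil=0.02,
                 years=500, dt=0.1):
    """Toy two-pool terrestrial carbon model: net primary production
    (NPP) enters a plant pool; first-order turnover transfers carbon
    to a soil pool, which respires to the atmosphere. Illustrative
    units: g C m^-2 for stocks, g C m^-2 yr^-1 for NPP, yr^-1 rates."""
    plant = soil = 0.0
    for _ in range(int(years / dt)):
        d_plant = npp - k_plant * plant          # input minus turnover
        d_soil = k_plant * plant - k_soil * soil # litter in, respiration out
        plant += d_plant * dt
        soil += d_soil * dt
    return plant, soil
```

At steady state the pools settle at npp/k_plant and npp/k_soil, so the equilibrium stocks are set entirely by production and turnover rates; global-change scenarios are explored by perturbing npp or the rate constants.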

  16. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. 
It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024

  17. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Dixon

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. 
The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.

  18. Review: To be or not to be an identifiable model. Is this a relevant question in animal science modelling?

    PubMed

    Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P

    2018-04-01

    What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability, a mathematical property defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the typical academic background in animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the notion of structural identifiability for the animal science modelling community, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. 
By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
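The notion of structural non-identifiability can be seen in a toy example (invented here for illustration, not one of the paper's lactation models): in a first-order decay model whose two rate parameters enter the equations only through their sum, no ideal experiment observing the state can separate them.

```python
import math

def simulate(x0, k1, k2, times):
    """Analytic solution of dx/dt = -(k1 + k2) * x, observing y = x.
    Only the sum k1 + k2 appears in the output, so k1 and k2 are not
    individually (structurally) identifiable from y alone."""
    return [x0 * math.exp(-(k1 + k2) * t) for t in times]

times = [0.0, 0.5, 1.0, 2.0, 4.0]
y_a = simulate(10.0, k1=0.3, k2=0.7, times=times)  # k1 + k2 = 1.0
y_b = simulate(10.0, k1=0.9, k2=0.1, times=times)  # k1 + k2 = 1.0 as well

# Identical observations from different parameter values: the ideal
# experiment cannot distinguish them, so the model is unidentifiable.
assert all(abs(a - b) < 1e-12 for a, b in zip(y_a, y_b))
```

A numerical fitting routine applied to such a model would report some "best" pair (k1, k2), but the value would be an artefact of the optimiser; checking identifiability first, as the paper advocates, avoids that trap.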

  19. Ecosystem Model Skill Assessment. Yes We Can!

    PubMed Central

    Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S.

    2016-01-01

    Need to Assess the Skill of Ecosystem Models: Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for only a limited set of biophysical models. A range of skill assessment methods have been reviewed, but skill assessment of full marine ecosystem models has not yet been attempted. Northeast US Atlantis Marine Ecosystem Model: We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation) and a suite of time series (species biomass, fisheries landings, and ecosystem indicators) were used to measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. Skill Assessment Is Both Possible and Advisable: We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. 
We show that it is not only possible to assess the skill of a complicated marine ecosystem model, but that it is necessary to do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment). PMID:26731540
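The skill metrics named above are straightforward to compute against any observed/predicted time-series pair. A minimal sketch (the observed and predicted series are hypothetical, and the Spearman implementation ignores ties for brevity):

```python
import math

def mae(obs, pred):
    """Average absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean squared error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def modeling_efficiency(obs, pred):
    """Nash-Sutcliffe style efficiency: 1 is perfect, 0 matches the
    observed mean, negative is worse than predicting the mean."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def spearman(obs, pred):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling in this sketch)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ro, rp = ranks(obs), ranks(pred)
    mo, mp = sum(ro) / len(ro), sum(rp) / len(rp)
    cov = sum((a - mo) * (b - mp) for a, b in zip(ro, rp))
    so = math.sqrt(sum((a - mo) ** 2 for a in ro))
    sp = math.sqrt(sum((b - mp) ** 2 for b in rp))
    return cov / (so * sp)

obs = [2.0, 3.5, 4.0, 5.5, 7.0]    # hypothetical observed biomass
pred = [2.2, 3.1, 4.4, 5.0, 7.3]   # hypothetical model forecast
print(mae(obs, pred), rmse(obs, pred),
      modeling_efficiency(obs, pred), spearman(obs, pred))
```

Using several complementary metrics, as the study does, guards against a single score hiding a systematic bias or a rank-order failure.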

  20. Challenges and opportunities for integrating lake ecosystem modelling approaches

    USGS Publications Warehouse

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. 
Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.

  1. Combination of Alternative Models by Mutual Data Assimilation: Supermodeling With A Suite of Primitive Equation Models

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Selten, F.

    2016-12-01

    Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or the frequency of such. 
Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only with highly nonlinear toy systems, but with smaller nonlinearities as occur in climate models.
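The core idea of nudging imperfect models toward one another can be sketched with a toy linear system rather than SPEEDO (the decay rates and coupling strength below are invented for illustration): two biased models, mutually coupled, track the truth better than either model does alone.

```python
import math

def euler_supermodel(a1, a2, c, x0, dt, steps):
    """Two imperfect linear models dx/dt = -a*x, nudged toward each other
    with coupling strength c; the supermodel output is their average."""
    x1 = x2 = x0
    out = []
    for _ in range(steps):
        out.append(0.5 * (x1 + x2))
        d1 = -a1 * x1 + c * (x2 - x1)   # model 1 assimilates model 2
        d2 = -a2 * x2 + c * (x1 - x2)   # model 2 assimilates model 1
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return out

def rmse(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))

dt, steps = 0.001, 2000
times = [i * dt for i in range(steps)]
truth = [math.exp(-1.0 * t) for t in times]   # "true" decay rate 1.0
solo1 = [math.exp(-0.8 * t) for t in times]   # biased model 1 alone
solo2 = [math.exp(-1.2 * t) for t in times]   # biased model 2 alone
combined = euler_supermodel(0.8, 1.2, c=50.0, x0=1.0, dt=dt, steps=steps)

# The partially synchronized pair tracks the truth better than either
# constituent model run separately.
assert rmse(combined, truth) < rmse(solo1, truth)
assert rmse(combined, truth) < rmse(solo2, truth)
```

In this linear toy the benefit reduces to averaging the decay rates, but in nonlinear systems (the setting of the abstract) run-time coupling can outperform any weighted average of separately run model outputs.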

  2. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ, based on portable near infrared spectroscopy analytical technology, with 66 rock core samples from the No. 2 well drilling of the Fuyu oil shale base in Jilin, the modeling and analysis methods for in-situ detection were researched. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. With 4 different modeling data optimization methods: principal component-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection and their combinations: PCA-MD + UVE and UVE + PCA-MD, 2 modeling methods: partial least squares (PLS) and back propagation artificial neural network (BPANN), and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the proper spectrum format of the modeling database for the two modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions obtained from the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum data formats. Except for the case of using the reflectance spectra and the PCA-MD data optimization method, modeling precision by the BPANN method is better than that by the PLS method. 
Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision: the correlation coefficient (Rp) is 0.92 and the standard error of prediction (SEP) is 0.69%.
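The absorbance and Kubelka-Munk (K-M) transforms of diffuse reflectance used above as alternative spectrum formats are standard conversions; a minimal sketch (the reflectance readings are hypothetical):

```python
import math

def absorbance(r):
    """Absorbance from diffuse reflectance R (0 < R <= 1): A = -log10(R)."""
    return -math.log10(r)

def kubelka_munk(r):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R), proportional to
    the ratio of absorption to scattering coefficients (K/S)."""
    return (1.0 - r) ** 2 / (2.0 * r)

reflectance = [0.82, 0.55, 0.31]   # hypothetical NIR band readings
print([round(absorbance(r), 4) for r in reflectance])
print([round(kubelka_munk(r), 4) for r in reflectance])
```

Since the study finds that the choice among these formats changes model precision, applying both transforms to the same raw reflectance and building parallel calibration databases is the natural experimental design.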

  3. Bayesian multimodel inference of soil microbial respiration models: Theory, application and future prospective

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2015-12-01

    Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate the theoretical and practical challenges of multimodel analysis, we use an example from microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon: the soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah that is characterized by pulsed precipitation. The five models evolve stepwise such that the first model includes none of the three mechanisms, while the fifth model includes all three. The basic component of Bayesian multimodel analysis is the estimation of the marginal likelihood to rank the candidate models based on their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between the five candidate models. The second part discusses some theoretical and practical challenges, mainly the effect of the choice of likelihood function and of the marginal likelihood estimation method on both model ranking and Bayesian model averaging. 
The study shows that making valid inference from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.
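The basic machinery described above, estimating a marginal likelihood by averaging the likelihood over prior draws and converting evidences into model weights, can be sketched for two toy models (the data values, priors and Monte Carlo setup are illustrative, not the paper's respiration models):

```python
import math, random

def loglik(y, mu):
    """Gaussian log-likelihood with unit variance."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (v - mu) ** 2 for v in y)

y = [0.11, -0.23, 0.05, -0.08, 0.17]   # hypothetical residual data

# Model 1: fixed mean 0 -> the evidence is just the likelihood
# (no free parameter to integrate over).
ev_m1 = math.exp(loglik(y, 0.0))

# Model 2: the mean is uncertain with an N(0, 1) prior -> Monte Carlo
# estimate of the marginal likelihood by averaging the likelihood over
# draws from the prior.
rng = random.Random(42)
n_draws = 20000
ev_m2 = sum(math.exp(loglik(y, rng.gauss(0.0, 1.0)))
            for _ in range(n_draws)) / n_draws

# Posterior model probabilities under equal prior model weights; the
# simpler model is favoured here (an Occam penalty for the free mean).
p_m1 = ev_m1 / (ev_m1 + ev_m2)
p_m2 = ev_m2 / (ev_m1 + ev_m2)
print(round(p_m1, 3), round(p_m2, 3))
```

The abstract's caution applies even to this toy: swapping the likelihood function or the marginal likelihood estimator (here, naive prior sampling) can change the resulting weights, which is exactly the sensitivity the second part of the study examines.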

  4. Ecosystem Model Skill Assessment. Yes We Can!

    PubMed

    Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S

    2016-01-01

    Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. 
Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment).

  5. iMarNet: an ocean biogeochemistry model inter-comparison project within a common physical ocean modelling framework

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.

    2014-07-01

    Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.

  6. iMarNet: an ocean biogeochemistry model intercomparison project within a common physical ocean modelling framework

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.

    2014-12-01

    Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.

  7. Design of Soil Salinity Policies with Tinamit, a Flexible and Rapid Tool to Couple Stakeholder-Built System Dynamics Models with Physically-Based Models

    NASA Astrophysics Data System (ADS)

    Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. 
In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
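The exchange of variables between an SD model and a physical model at each time step can be sketched generically (this is an illustrative coupling loop with invented toy models and rules, not the actual Tinamit or SAHYSMOD API):

```python
class SocioeconomicSD:
    """Toy stand-in for a stakeholder-built system dynamics model."""
    def __init__(self):
        self.irrigation = 1.0
    def step(self, salinity):
        # Hypothetical rule: farmers cut irrigation as salinity rises.
        self.irrigation = max(0.2, 1.0 - 0.1 * salinity)
        return self.irrigation

class PhysicalSalinity:
    """Toy stand-in for a physically based salinity model."""
    def __init__(self):
        self.salinity = 2.0
    def step(self, irrigation):
        # Hypothetical balance: salinity builds with irrigation and
        # leaches away at a constant rate.
        self.salinity += 0.5 * irrigation - 0.3
        return self.salinity

sd, phys = SocioeconomicSD(), PhysicalSalinity()
history = []
for _ in range(20):                      # exchange variables every step
    irrigation = sd.step(phys.salinity)  # SD reads the physical state
    history.append(phys.step(irrigation))  # physical model reads SD output
print(round(history[-1], 3))
```

A coupling tool like the one described takes over exactly this loop: it maps the linking variables between the two models so that stakeholders can change either model, or the links themselves, without rewriting the exchange code.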

  8. Bayesian Model Selection under Time Constraints

    NASA Astrophysics Data System (ADS)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another relevant factor in model weighting. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the observation that more expensive models can be sampled much less often under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of the model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
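The bootstrapping error estimate of a sampling-based evidence value can be sketched as follows (likelihood samples and run budgets are synthetic, not from the study): under a fixed time budget the expensive model affords fewer evaluations and therefore yields a noisier BME estimate.

```python
import math, random

def bootstrap_bme_se(lik_samples, n_boot, rng):
    """Bootstrap standard error of a sampling-based Bayesian model
    evidence (BME) estimate, where BME is approximated by the mean
    likelihood over prior draws."""
    n = len(lik_samples)
    estimates = []
    for _ in range(n_boot):
        resample = [lik_samples[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum(resample) / n)
    mean = sum(estimates) / n_boot
    var = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
    return math.sqrt(var)

rng = random.Random(7)
# Under a fixed time budget the fast model affords 4000 likelihood
# evaluations, the expensive model only 100 (synthetic values).
fast_liks = [math.exp(-0.5 * rng.gauss(1.0, 0.5) ** 2) for _ in range(4000)]
slow_liks = [math.exp(-0.5 * rng.gauss(1.0, 0.5) ** 2) for _ in range(100)]

se_fast = bootstrap_bme_se(fast_liks, 500, rng)
se_slow = bootstrap_bme_se(slow_liks, 500, rng)

# Fewer affordable runs -> a statistically weaker evidence estimate,
# which can then be folded into the model weights as a penalty.
assert se_slow > se_fast
```

The larger bootstrap error for the expensive model quantifies the "insufficient significance" of its evidence, which is the quantity the abstract proposes to feed back into the model weights.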

  9. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is the Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robust updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. 
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
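The difference between fixing the prediction-error variance empirically and treating it as a free parameter can be illustrated with a toy Gaussian likelihood (the residuals below are hypothetical): when the variance is allowed to vary, the likelihood peaks at the mean squared residual rather than at whatever constant was assumed.

```python
import math

def log_likelihood(residuals, var):
    """Gaussian log-likelihood of zero-mean prediction errors with
    variance `var` (the maximum-entropy model given the first two
    moments)."""
    n = len(residuals)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - 0.5 * sum(r * r for r in residuals) / var)

residuals = [0.3, -0.5, 0.1, 0.4, -0.2]   # hypothetical model-data misfits

# Treating the variance as uncertain: a grid search over the likelihood
# recovers (up to grid resolution) the analytic optimum, which is the
# mean squared residual.
mle_var = sum(r * r for r in residuals) / len(residuals)
grid = [0.02 * k for k in range(1, 51)]
best = max(grid, key=lambda v: log_likelihood(residuals, v))
assert abs(best - mle_var) <= 0.02        # within one grid step
```

A full Bayesian treatment, as in option 3) above, would place a prior on the variance and sample it jointly with the stiffness parameters; this sketch only shows why an empirically fixed variance can sit far from where the data want it.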

  10. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR

    PubMed Central

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-01-01

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model to users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy, under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
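The rigorous RD model rests on the range and Doppler equations. A minimal sketch for a stationary target (the geometry and wavelength below are hypothetical) checks the zero-Doppler condition, which holds when the sensor velocity is perpendicular to the line of sight:

```python
import math

def slant_range(sat_pos, target_pos):
    """Range equation: R = |S - P|."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(sat_pos, target_pos)))

def doppler_centroid(sat_pos, sat_vel, target_pos, wavelength):
    """Doppler equation of the RD model for a stationary target:
    f_d = -2 * V . (S - P) / (lambda * R)."""
    r = slant_range(sat_pos, target_pos)
    los = [s - t for s, t in zip(sat_pos, target_pos)]
    v_dot_los = sum(v * l for v, l in zip(sat_vel, los))
    return -2.0 * v_dot_los / (wavelength * r)

# Hypothetical side-looking geometry (metres, m/s): the velocity is
# perpendicular to the line of sight, so the Doppler centroid is zero.
sat = (7.0e6, 0.0, 0.0)
vel = (0.0, 7500.0, 0.0)
target = (6.371e6, 0.0, 0.0)
fd = doppler_centroid(sat, vel, target, wavelength=0.031)  # X-band ~3.1 cm
assert abs(fd) < 1e-9
```

Geolocating a pixel with the full RD model inverts these two equations (plus an Earth-surface constraint) for the target position, which is why the model is accurate but computationally the least efficient of the four compared.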

  11. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this is a question that, undoubtedly, many people, businesses, and institutions ponder with regard to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view held by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do with how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.

  12. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  13. Preparing the Model for Prediction Across Scales (MPAS) for global retrospective air quality modeling

    EPA Science Inventory

    The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...

  14. Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum

    2011-01-01

    Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…

  15. National Centers for Environmental Prediction

    Science.gov Websites


  16. Computer Models of Personality: Implications for Measurement

    ERIC Educational Resources Information Center

    Cranton, P. A.

    1976-01-01

    Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…

  17. Uses of Computer Simulation Models in Ag-Research and Everyday Life

    USDA-ARS?s Scientific Manuscript database

    When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...

  18. A Framework of Operating Models for Interdisciplinary Research Programs in Clinical Service Organizations

    ERIC Educational Resources Information Center

    King, Gillian; Currie, Melissa; Smith, Linda; Servais, Michelle; McDougall, Janette

    2008-01-01

    A framework of operating models for interdisciplinary research programs in clinical service organizations is presented, consisting of a "clinician-researcher" skill development model, a program evaluation model, a researcher-led knowledge generation model, and a knowledge conduit model. Together, these models comprise a tailored, collaborative…

  19. Modelling Students' Visualisation of Chemical Reaction

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.; Gilbert, John K.

    2017-01-01

    This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

  20. Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders

    2007-01-01

    Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…

  1. Planning Major Curricular Change.

    ERIC Educational Resources Information Center

    Kirkland, Travis P.

    Decision-making and change models can take many forms. One researcher (Nordvall, 1982) has suggested five conceptual models for introducing change: a political model; a rational decision-making model; a social interaction decision model; the problem-solving method; and an adaptive/linkage model which is an amalgam of each of the other models.…

  2. UNITED STATES METEOROLOGICAL DATA - DAILY AND HOURLY FILES TO SUPPORT PREDICTIVE EXPOSURE MODELING

    EPA Science Inventory

    ORD numerical models for pesticide exposure include a model of spray drift (AgDisp), a cropland pesticide persistence model (PRZM), a surface water exposure model (EXAMS), and a model of fish bioaccumulation (BASS). A unified climatological database for these models has been asse...

  3. Enhancement of the Acquisition Process for a Combat System-A Case Study to Model the Workflow Processes for an Air Defense System Acquisition

    DTIC Science & Technology

    2009-12-01

    Supporting technologies for modeling the acquisition workflow include Business Process Modeling Notation (BPMN), Unified Modeling Language (UML), and model-driven architecture…

  4. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  5. A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca

    2014-02-15

    Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
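
    The maximum-likelihood step described in the Methods can be sketched compactly: count, for each neighborhood configuration, how often a surface voxel changed state between imaging sessions. The sketch below is a simplified stand-in (the observations are invented for illustration, and the paper conditions on full 3D voxel neighborhoods rather than a single neighbor count):

```python
# MLE of voxel state-transition probabilities, conditioned on the number of
# tumor neighbors. All observations below are made-up illustrative data.
from collections import defaultdict

def fit_transition_probs(observations):
    """observations: list of (tumor_neighbor_count, flipped_to_nontumor) pairs.
    Returns {neighbor_count: MLE probability of flipping}."""
    counts = defaultdict(lambda: [0, 0])  # neighbor_count -> [flips, total]
    for n_neighbors, flipped in observations:
        counts[n_neighbors][1] += 1
        if flipped:
            counts[n_neighbors][0] += 1
    return {k: flips / total for k, (flips, total) in counts.items()}

obs = [(1, True), (1, True), (1, False), (1, False),   # few tumor neighbors
       (5, False), (5, False), (5, False), (5, True)]  # many tumor neighbors
probs = fit_transition_probs(obs)   # {1: 0.5, 5: 0.25}
```

    Voxels surrounded by tumor are estimated as less likely to flip to nontumor, which is the kind of neighborhood dependence the abstract describes.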

  6. The Radiative Forcing Model Intercomparison Project (RFMIP): Assessment and characterization of forcing to enable feedback studies

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Stevens, B. B.; Forster, P.; Collins, W.; Ramaswamy, V.

    2014-12-01

    An enormous amount of attention has been paid to the diversity of responses in the CMIP and other multi-model ensembles. This diversity is normally interpreted as a distribution in climate sensitivity driven by some distribution of feedback mechanisms. Identification of these feedbacks relies on precise identification of the forcing to which each model is subject, including distinguishing true error from model diversity. The Radiative Forcing Model Intercomparison Project (RFMIP) aims to disentangle the role of forcing from model sensitivity as determinants of varying climate model response by carefully characterizing the radiative forcing to which such models are subject and by coordinating experiments in which it is specified. RFMIP consists of four activities: 1) an assessment of accuracy in flux and forcing calculations for greenhouse gases under past, present, and future climates, using off-line radiative transfer calculations in specified atmospheres with climate model parameterizations and reference models; 2) characterization and assessment of model-specific historical forcing by anthropogenic aerosols, based on coordinated diagnostic output from climate models and off-line radiative transfer calculations with reference models; 3) characterization of model-specific effective radiative forcing, including contributions of model climatology and rapid adjustments, using coordinated climate model integrations and off-line radiative transfer calculations with a single fast model; and 4) assessment of climate model response to precisely-characterized radiative forcing over the historical record, including efforts to infer true historical forcing from patterns of response, by direct specification of non-greenhouse-gas forcing in a series of coordinated climate model integrations. This talk discusses the rationale for RFMIP, provides an overview of the four activities, and presents preliminary motivating results.

  7. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a predetermined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
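
    The greedy sampling loop described above can be sketched in a few lines. In this simplified stand-in, piecewise-linear interpolation over a 1D parameter takes the place of the Kriging surrogate, and pointwise error against a benchmark function replaces the frequency-domain relative error over input-output channels; only the loop structure (add the worst-error point, refit, repeat until tolerance) follows the abstract:

```python
# Greedy sampling sketch: start from the endpoints, repeatedly add the grid
# point where the surrogate disagrees most with the benchmark, stop when the
# worst remaining error falls below tolerance.

def interp(xs, ys, x):
    """Piecewise-linear interpolation; xs must be sorted and bracket x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside sampled range")

def greedy_sample(benchmark, grid, tol):
    sampled = sorted([grid[0], grid[-1]])          # start with the endpoints
    while True:
        values = [benchmark(x) for x in sampled]
        errs = {x: abs(benchmark(x) - interp(sampled, values, x))
                for x in grid if x not in sampled}
        if not errs:                               # every grid point sampled
            return sampled
        worst = max(errs, key=errs.get)
        if errs[worst] < tol:                      # surrogate is good enough
            return sampled
        sampled = sorted(sampled + [worst])        # refit with the worst point

grid = [i / 10 for i in range(11)]                 # 1D "flight parameter" grid
points = greedy_sample(lambda x: x * x, grid, tol=0.01)
```

    As in the paper, the retained points cluster where the benchmark curves most, so the surrogate meets the tolerance with a subset of the grid.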

  8. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    PubMed

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results that show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory for computational modeling studies.
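
    The core finding, that sparse data favor simpler models even when a more complex model generated the data, can be illustrated with a cruder criterion than the exceedance probabilities used in the study. The sketch below uses the Bayesian information criterion with invented fit qualities as a stand-in:

```python
# BIC-based illustration of why few data points favor the simpler model:
# the complexity penalty k*ln(n) dominates until the better fit of the
# data-generating model accumulates enough evidence. The per-point residual
# sums (1.00 vs 0.90) are illustrative numbers, not from the study.
import math

def bic(n, rss_per_point, k):
    """Bayesian information criterion for a least-squares fit (lower is better)."""
    return n * math.log(rss_per_point) + k * math.log(n)

def preferred(n):
    simple = bic(n, 1.00, k=1)    # misses some structure: slightly worse fit
    complex_ = bic(n, 0.90, k=2)  # the data-generating form: slightly better fit
    return "simple" if simple < complex_ else "complex"
```

    With this setup, `preferred(10)` picks the simple model and `preferred(1000)` picks the data-generating one, mirroring the abstract's dependence on the number of data points.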

  9. Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.

    2015-12-01

    Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.

  10. Clarity versus complexity: land-use modeling as a practical tool for decision-makers

    USGS Publications Warehouse

    Sohl, Terry L.; Claggett, Peter

    2013-01-01

    The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues caused by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) using scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.

  11. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
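
    The calibration idea in the last sentences can be sketched with a toy forward model. The one-cell "model" below, in which simulated head rises linearly with recharge, is a made-up stand-in for a real groundwater-flow code; only the loop structure (adjust recharge until simulated and measured heads agree best) reflects the text:

```python
# Recharge estimation by model calibration: grid-search the recharge value
# whose simulated head best matches the measured head. The linear forward
# model and all numbers are illustrative placeholders.

def simulate_head(recharge, base_head=100.0, response=50.0):
    """Toy forward model: head (m) as a linear response to recharge (m/yr)."""
    return base_head + response * recharge

def calibrate_recharge(measured_head, candidates):
    """Return the candidate recharge with the smallest head misfit (best fit)."""
    return min(candidates, key=lambda r: abs(simulate_head(r) - measured_head))

candidates = [i / 100 for i in range(0, 51)]       # 0.00 to 0.50 m/yr
best = calibrate_recharge(measured_head=110.0, candidates=candidates)
```

    The recharge value in the best-fit simulation is the model-generated estimate of recharge, in the sense the chapter describes; real calibrations adjust several parameters against many water-level observations.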

  12. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE PAGES

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less

  13. Forecasting plant phenology: evaluating the phenological models for Betula pendula and Padus racemosa spring phases, Latvia.

    PubMed

    Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta

    2015-02-01

    A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch (Betula pendula) and bird cherry (Padus racemosa), in Latvia. Model stability is estimated by performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal relationship between development rate and temperature and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in the case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
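
    The degree-day mechanism shared by these models can be sketched directly. The base temperature of 5.6 °C is the abstract's estimate for B. pendula leaf unfolding; the required heat sum and the temperature series are illustrative placeholders, and the sketch omits the diurnal-fluctuation term that distinguishes DDcos:

```python
# Minimal degree-day phenology sketch: accumulate daily mean temperature
# above a base threshold and predict the phenophase on the day the running
# heat sum first reaches a required total.

def predict_phenophase(daily_mean_temps, base_temp=5.6, required_heat_sum=70.0):
    """Return the 1-based day index when the heat sum is reached, or None."""
    heat_sum = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        heat_sum += max(0.0, temp - base_temp)  # only warmth above base counts
        if heat_sum >= required_heat_sum:
            return day
    return None

# Example: a gradually warming spring series (degC), hypothetical values.
temps = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
day = predict_phenophase(temps, required_heat_sum=40.0)  # reached on day 9
```

    The collinearity the authors report is visible even in this sketch: raising `base_temp` while lowering `required_heat_sum` can yield nearly the same predicted dates, so the two parameters trade off during fitting.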

  14. A toolbox and record for scientific models

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.

  15. More than a name: Heterogeneity in characteristics of models of maternity care reported from the Australian Maternity Care Classification System validation study.

    PubMed

    Donnolley, Natasha R; Chambers, Georgina M; Butler-Henderson, Kerryn A; Chapman, Michael G; Sullivan, Elizabeth A

    2017-08-01

    Without a standard terminology to classify models of maternity care, it is problematic to compare and evaluate clinical outcomes across different models. The Maternity Care Classification System is a novel system developed in Australia to classify models of maternity care based on their characteristics and an overarching broad model descriptor (Major Model Category). This study aimed to assess the extent of variability in the defining characteristics of models of care grouped to the same Major Model Category, using the Maternity Care Classification System. All public hospital maternity services in New South Wales, Australia, were invited to complete a web-based survey classifying two local models of care using the Maternity Care Classification System. A descriptive analysis of the variation in 15 attributes of models of care was conducted to evaluate the level of heterogeneity within and across Major Model Categories. Sixty-nine out of seventy hospitals responded, classifying 129 models of care. There was wide variation in a number of important attributes of models classified to the same Major Model Category. The category of 'Public hospital maternity care' contained the most variation across all characteristics. This study demonstrated that although models of care can be grouped into a distinct set of Major Model Categories, there are significant variations in models of the same type. This could result in seemingly 'like' models of care being incorrectly compared if grouped only by the Major Model Category. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  16. The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)

    PubMed Central

    Smith, Philip L.; Ratcliff, Roger; McKoon, Gail

    2015-01-01

    Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314

  17. Evaluating model structure adequacy: The case of the Maggia Valley groundwater system, southern Switzerland

    USGS Publications Warehouse

    Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,

    2013-01-01

    Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments, and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests a need to reconsider the utility of model selection criteria, and/or of the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that the difficulties are associated with wide variations in the sensitivity term of KIC, resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
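
The least-squares forms of these criteria are easy to compute and show why different criteria can rank the same pair of models differently. A minimal sketch (illustrative textbook formulas with made-up fit statistics, not the exact UCODE_2005/MMA implementation):

```python
import math

def aic_aicc_bic(rss, n, k):
    """Least-squares forms of AIC, AICc, and BIC.

    rss: residual sum of squares of the fitted model
    n:   number of observations
    k:   number of estimated parameters
    """
    aic = n * math.log(rss / n) + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)   # small-sample correction
    bic = n * math.log(rss / n) + k * math.log(n)
    return aic, aicc, bic

# Two hypothetical candidates: a closer fit with more parameters vs. a simpler one.
aic1, aicc1, bic1 = aic_aicc_bic(rss=12.0, n=50, k=8)
aic2, aicc2, bic2 = aic_aicc_bic(rss=15.0, n=50, k=4)
# BIC penalizes the extra parameters more heavily than AIC,
# so here AIC prefers the complex model while BIC prefers the simple one.
```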

  18. The Hyper-Envelope Modeling Interface (HEMI): A Novel Approach Illustrated Through Predicting Tamarisk (Tamarix spp.) Habitat in the Western USA

    USGS Publications Warehouse

    Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.

    2013-01-01

    Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used a modified area under the curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values than all approaches except Maxent, and better AIC values overall. HEMI created a model with only ten parameters, while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
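
AUC can be estimated directly from model scores at presence and background points as the probability that a random presence outscores a random background point (the Mann-Whitney interpretation). A minimal sketch with hypothetical scores, not the paper's modified AUC variant:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of AUC: the probability that a randomly
    chosen presence scores higher than a randomly chosen background
    point (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical suitability scores from a fitted model
presences = [0.9, 0.8, 0.75, 0.6]
background = [0.7, 0.4, 0.3, 0.2]
auc_val = auc(presences, background)   # 0.9375: 15 of 16 pairs ranked correctly
```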

  19. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. This model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.

  20. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
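
The core idea, breaking a complex joint distribution into conditionally independent factors that can be read directly off the graph, can be sketched for a toy three-node chain (hypothetical normal distributions, not a phylogenetic model):

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Chain graph mu -> rate -> obs: the joint density is simply the
# product of the conditionally independent factors of the DAG.
def joint(mu, rate, obs):
    return (normal_pdf(mu, 0.0, 1.0)       # p(mu): prior on the mean
            * normal_pdf(rate, mu, 0.5)    # p(rate | mu)
            * normal_pdf(obs, rate, 0.1))  # p(obs | rate)

density = joint(0.0, 0.1, 0.12)
# Each factor depends only on a node's parents, which is what makes
# Metropolis-Hastings or Gibbs updates of single nodes cheap.
```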

  1. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  2. Documenting Models for Interoperability and Reusability (proceedings)

    EPA Science Inventory

    Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...

  3. Documenting Models for Interoperability and Reusability

    EPA Science Inventory

    Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...

  4. Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process

    NASA Astrophysics Data System (ADS)

    Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.

    2018-06-01

    A novel modeling strategy is presented for simulating the blast furnace iron making process. Such physical and chemical phenomena are taking place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping output and input between models and an iterative scheme are developed to establish communications between models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution on local phenomena and minimizing the model assumptions.
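
The output-to-input mapping with an iterative scheme amounts to a fixed-point loop over the coupled sub-models. A minimal sketch in which two linear stand-in functions (hypothetical, not the actual tuyere/shaft physics) exchange a single scalar until it stops changing:

```python
# Hypothetical stand-ins for two coupled sub-models: each maps its
# input boundary condition to an output boundary condition.
def tuyere(gas_temp):
    return 0.5 * gas_temp + 100.0

def shaft(hot_blast):
    return 0.8 * hot_blast + 50.0

g = 1000.0                              # initial guess for the exchanged quantity
for _ in range(100):
    g_next = shaft(tuyere(g))           # feed tuyere output into the shaft model
    converged = abs(g_next - g) < 1e-6
    g = g_next
    if converged:
        break
# The loop converges to the fixed point g = 130 / 0.6, where the
# exchanged quantity is consistent between both sub-models.
```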

  5. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
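
Weighting competing models by an information criterion is commonly done with Akaike weights, and the averaged prediction is then the weight-sum of each model's prediction. A minimal sketch with made-up AIC values and predictions:

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative support for each candidate model,
    computed from differences to the best (lowest) AIC."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs and point predictions from three candidate structures
aics = [100.0, 102.0, 110.0]
preds = [5.0, 6.0, 9.0]

w = akaike_weights(aics)
averaged = sum(wi * pi for wi, pi in zip(w, preds))
# The averaged prediction leans toward the best-supported model
# instead of committing to a single structure.
```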

  6. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  7. A Hybrid 3D Indoor Space Model

    NASA Astrophysics Data System (ADS)

    Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information about the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings which do not have proper 2D/3D geometrical models or which lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantic space.

  8. Modified hyperbolic sine model for titanium dioxide-based memristive thin films

    NASA Astrophysics Data System (ADS)

    Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana

    2018-03-01

    Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models have been based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model and hyperbolic-sine-function-based models. Although the hyperbolic-sine function model could predict the memristor's electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. The addition of a window function did not provide an improved fit; multiplying Yakopcic's state variable model with Chang's model, on the other hand, resulted in closer agreement with the TiO2 thin-film experimental data. The percentage error was approximately 2.15%.
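
The general shape of a hyperbolic-sine memristor model can be sketched with a sinh conduction term and a bounded state variable integrated over a voltage sweep. The coefficients below are hypothetical placeholders, not the paper's fitted TiO2 parameters:

```python
import math

# Hypothetical coefficients for a generic hyperbolic-sine memristor sketch:
# current follows I = x * a * sinh(b * V), and the normalized state
# x in [0, 1] drifts with the applied voltage.
a, b, eta, dt = 1e-4, 3.0, 0.5, 1e-4

def step(x, v):
    i = x * a * math.sinh(b * v)      # electron transport (sinh I-V term)
    dx = eta * v * x * (1.0 - x)      # ionic drift, bounded by a logistic window
    x = min(max(x + dx * dt, 0.0), 1.0)
    return x, i

# Drive the device through one sinusoidal voltage cycle.
x = 0.5
for n in range(1000):
    v = math.sin(2.0 * math.pi * n / 1000.0)
    x, i = step(x, v)
# Because the state lags the voltage, sweeping I against V traces the
# pinched hysteresis loop characteristic of memristive devices.
```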

  9. Resident Role Modeling: "It Just Happens".

    PubMed

    Sternszus, Robert; Macdonald, Mary Ellen; Steinert, Yvonne

    2016-03-01

    Role modeling by staff physicians is a significant component of the clinical teaching of students and residents. However, the importance of resident role modeling has only recently emerged, and residents' understanding of themselves as role models has yet to be explored. This study sought to understand residents' perceptions of themselves as role models, describe how residents learn about role modeling, and identify ways to improve resident role modeling. Fourteen semistructured interviews were conducted with residents in internal medicine, general surgery, and pediatrics at the McGill University Faculty of Medicine between April and September 2013. Interviews were audio-recorded and subsequently transcribed for analysis; iterative analysis followed principles of qualitative description. Four primary themes were identified through data analysis: residents perceived role modeling as the demonstration of "good" behaviors in the clinical context; residents believed that learning from their role modeling "just happens" as long as learners are "watching"; residents did not equate role modeling with being a role model; and residents learned about role modeling from watching their positive and negative role models. While residents were aware that students and junior colleagues learned from their modeling, they were often not aware of role modeling as it was occurring; they also believed that learning from role modeling "just happens" and did not always see themselves as role models. Helping residents view effective role modeling as a deliberate process rather than something that "just happens" may improve clinical teaching across the continuum of medical education.

  10. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  11. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  12. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
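
A hydrological signature is any summary statistic of catchment behavior that a consistent model should reproduce alongside the hydrograph. One common example, the mid-segment slope of the flow duration curve, can be sketched as follows (hypothetical flows; one of many possible signatures, not necessarily those used in the study):

```python
import math

def fdc_slope(flows, lo=0.33, hi=0.66):
    """Slope of the flow duration curve between two exceedance
    probabilities -- a signature of flow variability."""
    q = sorted(flows, reverse=True)            # flows ranked by exceedance
    q_lo = q[int(lo * (len(q) - 1))]           # flow exceeded lo of the time
    q_hi = q[int(hi * (len(q) - 1))]           # flow exceeded hi of the time
    return (math.log(q_lo) - math.log(q_hi)) / (hi - lo)

# Hypothetical daily streamflow sample (must be positive for the logs)
flows = [0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.5, 5.0, 8.0]
slope = fdc_slope(flows)
# A steeper slope indicates flashier runoff; comparing observed and
# simulated signatures tests the model beyond hydrograph fit alone.
```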

  13. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
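
The benchmark AR model of the comparison can be illustrated at its simplest: an AR(1) fitted by ordinary least squares, then used for a one-step out-of-sample forecast (toy data; real applications would use a proper time series library and higher orders):

```python
# Ordinary-least-squares fit of an AR(1) model, y_t = c + phi * y_{t-1} + e_t,
# followed by a one-step-ahead out-of-sample forecast.
def fit_ar1(y):
    x, z = y[:-1], y[1:]                 # lagged and current values
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    phi = (sum((a - mx) * (b - mz) for a, b in zip(x, z))
           / sum((a - mx) ** 2 for a in x))
    c = mz - phi * mx
    return c, phi

series = [2.0, 2.5, 2.2, 2.8, 2.6, 3.0, 2.9]   # hypothetical observations
c, phi = fit_ar1(series)
forecast = c + phi * series[-1]                 # one-step-ahead prediction
```

Threshold models such as SETAR extend this by switching between several AR specifications depending on whether a transition variable exceeds a threshold.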

  14. Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data

    USGS Publications Warehouse

    Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.

    2018-03-28

    Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models were considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.

  15. How Qualitative Methods Can be Used to Inform Model Development.

    PubMed

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  16. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
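    The consensus idea behind large-scale quality assessment can be sketched in a few lines: normalise each QA method's scores and average them per model. This is a minimal illustration, not the MULTICOM implementation; the method names and scores below are invented.

```python
def consensus_rank(scores_by_method):
    """Rank models by averaging min-max normalised scores from several
    quality-assessment (QA) methods (higher raw score = better model).

    scores_by_method: dict of {qa_method: {model_name: raw_score}}.
    Returns model names sorted best-first.
    """
    consensus = {}
    for method_scores in scores_by_method.values():
        lo, hi = min(method_scores.values()), max(method_scores.values())
        span = (hi - lo) or 1.0  # guard against a constant-score method
        for model, s in method_scores.items():
            consensus[model] = consensus.get(model, 0.0) + (s - lo) / span
    return sorted(consensus, key=consensus.get, reverse=True)

# Invented scores from three hypothetical QA methods over three models.
scores = {
    "qa1": {"m1": 0.80, "m2": 0.60, "m3": 0.40},
    "qa2": {"m1": 0.70, "m2": 0.90, "m3": 0.10},
    "qa3": {"m1": 0.55, "m2": 0.50, "m3": 0.30},
}
ranking = consensus_rank(scores)
```

    A single noisy QA method cannot outvote the others here, which is the robustness argument the abstract makes.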

  17. An algorithm to detect and communicate the differences in computational models describing biological systems.

    PubMed

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.
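    The core of difference detection can be illustrated at its simplest level, the models' encoding: compare the sets of entities in two versions of an XML-encoded model. This is a toy sketch, not the BiVeS algorithm, which additionally aligns network structure and mathematical expressions.

```python
import xml.etree.ElementTree as ET

def diff_species(xml_a, xml_b):
    """Report species added and removed between two versions of an
    SBML-like model (toy example; element and attribute names invented)."""
    ids = lambda xml: {s.get("id") for s in ET.fromstring(xml).iter("species")}
    a, b = ids(xml_a), ids(xml_b)
    return {"added": sorted(b - a), "removed": sorted(a - b)}

# Two hypothetical versions of the same model.
v1 = "<model><species id='A'/><species id='B'/></model>"
v2 = "<model><species id='B'/><species id='C'/></model>"
delta = diff_species(v1, v2)
```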

  18. Experiments in concept modeling for radiographic image reports.

    PubMed Central

    Bell, D S; Pattison-Gordon, E; Greenes, R A

    1994-01-01

    OBJECTIVE: Development of methods for building concept models to support structured data entry and image retrieval in chest radiography. DESIGN: An organizing model for chest-radiographic reporting was built by analyzing manually a set of natural-language chest-radiograph reports. During model building, clinician-informaticians judged alternative conceptual structures according to four criteria: content of clinically relevant detail, provision for semantic constraints, provision for canonical forms, and simplicity. The organizing model was applied in representing three sample reports in their entirety. To explore the potential for automatic model discovery, the representation of one sample report was compared with the noun phrases derived from the same report by the CLARIT natural-language processing system. RESULTS: The organizing model for chest-radiographic reporting consists of 62 concept types and 17 relations, arranged in an inheritance network. The broadest types in the model include finding, anatomic locus, procedure, attribute, and status. Diagnoses are modeled as a subtype of finding. Representing three sample reports in their entirety added 79 narrower concept types. Some CLARIT noun phrases suggested valid associations among subtypes of finding, status, and anatomic locus. CONCLUSIONS: A manual modeling process utilizing explicitly stated criteria for making modeling decisions produced an organizing model that showed consistency in early testing. A combination of top-down and bottom-up modeling was required. Natural-language processing may inform model building, but algorithms that would replace manual modeling were not discovered. Further progress in modeling will require methods for objective model evaluation and tools for formalizing the model-building process. PMID:7719807

  19. A strategy to establish Food Safety Model Repositories.

    PubMed

    Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M

    2015-07-02

    Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.

  20. The LUE data model for representation of agents and fields

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2017-04-01

    Traditionally, agent-based and field-based modelling environments use different data models to represent the state of information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models that store properties grouped per class instance in collections are less efficient (execute slower) than models in which the properties themselves are grouped into collections. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based data modelling. This removes the barrier for writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model.
The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
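    The efficiency trade-off described above, class instances grouped in collections versus collections of properties, can be sketched as the classic array-of-structures versus structure-of-arrays layouts (a minimal illustration with invented agent properties, not the LUE API):

```python
# Array-of-structures: intuitive, one record per agent,
# as in typical agent-based models.
agents_aos = [
    {"id": 1, "x": 0.0, "mass": 2.0},
    {"id": 2, "x": 1.5, "mass": 3.0},
]

# Structure-of-arrays: one array per property, mirroring the field
# representation; bulk operations touch contiguous data and execute faster.
agents_soa = {
    "id":   [1, 2],
    "x":    [0.0, 1.5],
    "mass": [2.0, 3.0],
}

# The same aggregate computed against either layout.
total_aos = sum(a["mass"] for a in agents_aos)
total_soa = sum(agents_soa["mass"])
```

    A unified data model lets the modeller keep the intuitive per-agent view while the storage layer uses the array layout internally.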

  1. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

    A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which itself is a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and it is assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model.
Then the results were compared with those of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes. Furthermore, the errors associated with the former were much smaller than those of the latter for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. Comparison of the observed (black) and synthetic (red) Fourier spectra: the spectra are the composition of two horizontal components and are smoothed with a Parzen window with a band width of 0.05 Hz.
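    A minimal sketch of the subevent source spectrum assumed in the model, the omega-square shape controlled by seismic moment and corner frequency (the numerical values below are illustrative, not taken from the Tohoku model):

```python
def omega_square_spectrum(f, m0, fc):
    """Omega-square (Brune-type) source spectrum: flat at the seismic
    moment m0 below the corner frequency fc, decaying as f**-2 above it."""
    return m0 / (1.0 + (f / fc) ** 2)

# Toy subevent: seismic moment m0 (N*m) and corner frequency fc (Hz).
m0, fc = 1.0e20, 0.2
low  = omega_square_spectrum(0.001, m0, fc)  # flat, low-frequency plateau
high = omega_square_spectrum(2.0, m0, fc)    # f**-2 decay above fc
```

    Multiplying this spectrum by path and site factors, as described above, gives the Fourier amplitude at a target site.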

  2. Model and Interoperability using Meta Data Annotations

    NASA Astrophysics Data System (ADS)

    David, O.

    2011-12-01

    Software frameworks and architectures are in need of meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format, consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach for meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code.
Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. While providing all those capabilities, a significant reduction in the size of the model source code was achieved. To assess the benefit of annotations for a modeler, studies were conducted to compare the effectiveness of an annotation-based framework approach with other modeling frameworks and libraries; a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
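    OMS3 itself is Java-based; purely as an illustration of the idea of attaching declarative metadata to a model component, a Python decorator can play a similar role. The decorator and component names below are hypothetical, not the OMS API:

```python
# Hypothetical sketch of declarative component metadata, in the spirit of
# code annotations: the framework can read inputs/outputs for assembly,
# documentation, and testing without the component calling any framework API.
def metadata(**meta):
    def wrap(cls):
        cls.__component_meta__ = meta  # attach metadata to the class
        return cls
    return wrap

@metadata(inputs={"precip": "mm/day"}, outputs={"runoff": "mm/day"},
          description="Toy runoff component")
class Runoff:
    def execute(self, precip):
        return 0.3 * precip  # illustrative runoff coefficient

meta = Runoff.__component_meta__          # what a framework would introspect
r = Runoff().execute(10.0)                # the component runs standalone too
```

    The component has no dependency on a framework type or interface, which is the non-invasiveness argument made above.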

  3. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    In order to solve the problem of poor performance when modeling high-density measured BRDF data with the five-parameter semi-empirical model, a refined statistical BRDF model suitable for modeling multiple classes of space target materials was proposed. The refined model improves the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters for 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The effectiveness of the refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional modeling visualizations of these samples over the upper hemisphere were given, in which the strength of the optical scattering of different materials could be clearly seen, demonstrating the refined model's ability to characterize materials.

  4. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    PubMed

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  5. Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold

    2016-01-01

    Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models in a crop multi-model ensemble (MME); exploring criteria for evaluating the degree of relatedness of models in an MME; studying the effect of the number of models in the ensemble; development of a statistical model of model sampling; creation of a repository for MME results; studies of possible differential weighting of models in an ensemble; creation of single-model ensembles, based on sampling from the uncertainty distribution of parameter values or inputs, specifically oriented toward uncertainty estimation; the creation of super ensembles that sample more than one source of uncertainty; the analysis of super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty; and finally further investigation of the use of the multi-model mean or median as a predictor.
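    The basic ensemble predictors discussed above, the multi-model mean and median together with a crude spread-based uncertainty indicator, can be sketched as follows (the yield values are invented for illustration):

```python
from statistics import mean, median

def ensemble_predictions(model_outputs):
    """Combine per-model projections into ensemble mean and median,
    plus a simple range-based spread as a rough uncertainty indicator."""
    return {
        "mean":   mean(model_outputs),
        "median": median(model_outputs),
        "spread": max(model_outputs) - min(model_outputs),
    }

# Hypothetical yield projections (t/ha) from five crop models.
yields = [5.2, 4.8, 6.1, 5.0, 5.4]
summary = ensemble_predictions(yields)
```

    The median is less sensitive than the mean to a single outlying model, one of the practical reasons both are considered as ensemble predictors.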

  6. Assessing Ecosystem Model Performance in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.

  7. The relationship between digital model accuracy and time-dependent deformation of alginate impressions.

    PubMed

    Alcan, Toros; Ceylanoğlu, Cenk; Baysal, Bekir

    2009-01-01

    To investigate the effects of different storage periods of alginate impressions on digital model accuracy. A total of 105 impressions were taken from a master model with three different brands of alginates and were poured into stone models in five different storage periods. In all, 21 stone models were poured and immediately were scanned, and 21 digital models were prepared. The remaining 84 impressions were poured after 1, 2, 3, and 4 days, respectively. Five linear measurements were made by three researchers on the master model, the stone models, and the digital models. Time-dependent deformation of alginate impressions at different storage periods and the accuracy of traditional stone models and digital models were evaluated separately. Both the stone models and the digital models were highly correlated with the master model. Significant deformities in the alginate impressions were noted at different storage periods of 1 to 4 days. Alginate impressions of different brands also showed significant differences between each other on the first, third, and fourth days. Digital orthodontic models are as reliable as traditional stone models and probably will become the standard for orthodontic clinical use. Storing alginate impressions in sealed plastic bags for up to 4 days caused statistically significant deformation of alginate impressions, but the magnitude of these deformations did not appear to be clinically relevant and had no adverse effect on digital modeling.

  8. Chemometrics-assisted spectrophotometry method for the determination of chemical oxygen demand in pulping effluent.

    PubMed

    Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu

    2011-04-01

    A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established using UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Simca-P. Correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when the COD of samples is in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2) and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). A validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, the average relative error of model 2 (4.25%) was lower than that of model 1 (5.00%), which indicated that the predictability of model 2 was better than that of model 1. The chemometrics-assisted spectrophotometry method does not need the chemical reagents and digestion required by conventional methods, and the testing time of the new method is significantly shorter than that of the conventional ones. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.
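    The underlying principle, a calibration model mapping spectral response to COD, can be illustrated with a univariate least-squares fit. This is a deliberate simplification: the paper's models are multivariate chemometric models, and the absorbance/COD pairs below are invented:

```python
def fit_line(x, y):
    """Ordinary least-squares line y = a*x + b, a univariate stand-in
    for a multivariate chemometric calibration model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical calibration set: absorbance vs. known COD (mg/L).
absorbance = [0.05, 0.10, 0.20, 0.40]
cod        = [20.0, 40.0, 80.0, 160.0]
slope, intercept = fit_line(absorbance, cod)

# Predict COD for a new sample directly from its absorbance,
# with no reagents or digestion step.
predicted = slope * 0.30 + intercept
```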

  9. Improved two-equation k-omega turbulence models for aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Menter, Florian R.

    1992-01-01

    Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary-layer but changes gradually to the high Reynolds number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Launder model for free shear layers. The second version of the model is called the Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary-layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy-viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
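    The SST modification described above limits the eddy viscosity so that Bradshaw's relation holds in adverse pressure gradients. A sketch of the standard SST eddy-viscosity limiter (with the usual constant a1 = 0.31; the flow values below are illustrative):

```python
def sst_eddy_viscosity(k, omega, strain_rate, f2, a1=0.31):
    """Menter SST eddy-viscosity limiter: nu_t = a1*k / max(a1*omega, S*F2).
    When the strain term dominates, the limiter enforces Bradshaw's
    relation (shear stress proportional to turbulent kinetic energy);
    otherwise it reduces to the standard nu_t = k/omega."""
    return a1 * k / max(a1 * omega, strain_rate * f2)

# Mild strain: limiter inactive, nu_t equals k/omega.
nu_standard = sst_eddy_viscosity(k=1.0, omega=100.0, strain_rate=10.0, f2=1.0)
# Strong strain (adverse pressure gradient): nu_t capped at a1*k/(S*F2).
nu_limited = sst_eddy_viscosity(k=1.0, omega=100.0, strain_rate=500.0, f2=1.0)
```

    The blending function F2 is one inside boundary layers and zero in free shear flows, so the limiter only acts where Bradshaw's assumption applies.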

  10. Efficient polarimetric BRDF model.

    PubMed

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  11. SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3

    PubMed Central

    Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram

    2017-01-01

    Summary Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with elements of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can choose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so choose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566

  12. A demonstrative model of a lunar base simulation on a personal computer

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model is planned in the near future.

  13. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It is also shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative type of hybrid vector autoregressive model. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
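    The rotation indeterminacy behind the "infinite set of equivalent solutions" can be demonstrated directly: a factor model's implied covariance ΛΛᵀ + Ψ is unchanged by any orthogonal rotation of the loadings. A minimal static illustration (the paper's argument concerns the dynamic case, but the algebra is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 6, 2                                 # observed variables, factors
Lambda = rng.normal(size=(p, q))            # factor loadings
Psi = np.diag(rng.uniform(0.5, 1.0, p))     # unique (residual) variances

Sigma = Lambda @ Lambda.T + Psi             # model-implied covariance

# Any orthogonal rotation R of the loadings yields an equivalent solution,
# since (Lambda R)(Lambda R)^T = Lambda (R R^T) Lambda^T = Lambda Lambda^T.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Sigma_rot = (Lambda @ R) @ (Lambda @ R).T + Psi
```

    Both parameterizations fit any data set equally well, which is exactly why the loadings cannot be identified from the observed series alone.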

  14. Application of multiple modelling to hyperthermia estimation: reducing the effects of model mismatch.

    PubMed

    Potocki, J K; Tharp, H S

    1993-01-01

    Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
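    The multiple-model idea can be sketched with a bank of scalar Kalman filters weighted by accumulated measurement likelihood. This is a deliberately simplified stand-in: the paper's estimators are extended Kalman filters on a thermal model with unknown perfusion zones, whereas here each "model" is just a candidate state-transition coefficient.

```python
import numpy as np

def kf_bank(z, models, q=0.09, r=0.04):
    """Run one scalar Kalman filter per candidate model and weight the
    hypotheses by their accumulated measurement log-likelihood."""
    n = len(models)
    x, P = np.zeros(n), np.ones(n)
    loglik = np.zeros(n)
    for zk in z:
        for i, a in enumerate(models):
            x[i] = a * x[i]                    # predict state
            P[i] = a * a * P[i] + q            # predict covariance
            s = P[i] + r                       # innovation variance
            loglik[i] += -0.5 * (np.log(2 * np.pi * s) + (zk - x[i])**2 / s)
            k = P[i] / s                       # Kalman gain
            x[i] += k * (zk - x[i])
            P[i] *= (1.0 - k)
    w = np.exp(loglik - loglik.max())
    return w / w.sum(), x

# Simulate a system with a = 0.9 and see which hypothesis the bank prefers
rng = np.random.default_rng(1)
truth = np.zeros(300)
for k in range(1, 300):
    truth[k] = 0.9 * truth[k - 1] + rng.normal(scale=0.3)
z = truth + rng.normal(scale=0.2, size=300)
weights, estimates = kf_bank(z, models=[0.9, 0.5])
```

    The posterior weights provide the "spatial discrimination" between hypotheses without enlarging any single filter's unknown-parameter vector.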

  15. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
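    The decomposition-based emulation idea can be sketched end to end with a toy simulator. The simulator, design, and linear (first-order) statistical model below are illustrative assumptions, not the paper's ocean ecosystem application.

```python
import numpy as np

def simulator(theta, t):
    # Toy "mechanistic model", standing in for an expensive computer model
    return theta[0] * np.sin(t) + theta[1] * np.cos(t)

t = np.linspace(0, 2 * np.pi, 50)
rng = np.random.default_rng(0)
design = rng.uniform(-2, 2, size=(20, 2))           # experimental parameter set
Y = np.array([simulator(th, t) for th in design])   # runs x output grid

# Decompose the ensemble of runs; keep the leading singular directions
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 2
coeffs = U[:, :k] * s[:k]                           # each run's projection

# First-order statistical model: projections as a linear map of parameters
X = np.column_stack([np.ones(len(design)), design])
B, *_ = np.linalg.lstsq(X, coeffs, rcond=None)

def emulate(theta):
    """Cheap surrogate: predict the projections, reassemble the output."""
    c = np.concatenate([[1.0], theta]) @ B
    return c @ Vt[:k]

y_emu = emulate(np.array([1.3, -0.7]))
y_true = simulator(np.array([1.3, -0.7]), t)
```

    Because this toy simulator is exactly linear in its parameters, the emulator reproduces it; in realistic use the statistical model for the projections is nonlinear and only approximate.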

  16. Reliability of a new biokinetic model of zirconium in internal dosimetry: part I, parameter uncertainty analysis.

    PubMed

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurements performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence intervals and distributions of the model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. Based on computational biokinetic modeling, the mean, standard uncertainty, and confidence interval of the model predictions calculated from the model parameter uncertainties were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; this phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model.
It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
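    Propagating parameter uncertainty to a model prediction, as done above, can be sketched by Monte Carlo sampling on a drastically simplified retention model. The single-compartment form and the lognormal clearance-rate distribution are assumptions for illustration, not the ICRP or HMGU Zr models.

```python
import numpy as np

def retention(t, k):
    """Single-compartment plasma retention after intravenous
    administration: fraction remaining with clearance rate k [1/d]."""
    return np.exp(-k * t)

rng = np.random.default_rng(42)
t = np.array([0.1, 1.0, 5.0, 10.0])     # days after administration

# Assumed parameter uncertainty: lognormal clearance rate
k_samples = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=5000)
curves = retention(t, k_samples[:, None])        # 5000 sampled predictions

mean_pred = curves.mean(axis=0)
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)   # 95% interval
```

    The same machinery also shows why the parameter distribution type matters: replacing the lognormal with, say, a uniform distribution of equal variance changes the predictive interval even though the model is unchanged.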

  17. EzGal: A Flexible Interface for Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Mancone, Conor L.; Gonzalez, Anthony H.

    2012-06-01

    We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old, solar-metallicity models. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsating AGB stars are important and the scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). 
We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.

  18. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
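    The aggregate-and-compare step can be sketched in a few lines. The mean aggregate and the deviation scores are a minimal illustration of the claimed feedback loop, not the patent's actual scoring method.

```python
import numpy as np

def comparative_feedback(predictions):
    """Aggregate several models' forecasts of a common event and return
    each model's deviation from the aggregate, i.e. the comparative
    information that would be fed back to refine the models."""
    preds = np.asarray(predictions, dtype=float)
    aggregate = preds.mean(axis=0)      # simple ensemble aggregate
    deviations = preds - aggregate      # per-model comparative information
    return aggregate, deviations

models = [[0.2, 0.5, 0.9],   # model A forecasts over three time points
          [0.4, 0.5, 0.7],   # model B
          [0.3, 0.8, 0.8]]   # model C
agg, dev = comparative_feedback(models)
```

    In the feedback loop, each model would use its own row of `dev` to adjust itself before the next round of forecasts.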

  19. Comment on ``Glassy Potts model: A disordered Potts model without a ferromagnetic phase''

    NASA Astrophysics Data System (ADS)

    Carlucci, Domenico M.

    1999-10-01

    We report the equivalence of the ``glassy Potts model,'' recently introduced by Marinari et al., and the ``chiral Potts model'' investigated by Nishimori and Stephen. Neither model exhibits spontaneous magnetization at low temperature, in contrast to the ordinary Potts glass model. The phase transition of the glassy Potts model is easily interpreted as the spin-glass transition of the ordinary random Potts model.

  20. The Disk Instability Model for SU UMa systems - a Comparison of the Thermal-Tidal Model and Plain Vanilla Model

    NASA Astrophysics Data System (ADS)

    Cannizzo, John K.

    2017-01-01

    We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.

  1. A novel microfluidic model can mimic organ-specific metastasis of circulating tumor cells.

    PubMed

    Kong, Jing; Luo, Yong; Jin, Dong; An, Fan; Zhang, Wenyuan; Liu, Lilu; Li, Jiao; Fang, Shimeng; Li, Xiaojie; Yang, Xuesong; Lin, Bingcheng; Liu, Tingjiao

    2016-11-29

    A biomimetic microsystem might compensate for costly and time-consuming animal metastatic models. Herein we developed a biomimetic microfluidic model to study cancer metastasis. Primary cells isolated from different organs were cultured on the microfluidic model to represent individual organs. Breast and salivary gland cancer cells were driven to flow over primary cell culture chambers, mimicking dynamic adhesion of circulating tumor cells (CTCs) to endothelium in vivo. These flowing artificial CTCs showed different metastatic potentials to lung on the microfluidic model. The traditional nude mouse model of lung metastasis was performed to investigate the physiological similarity of the microfluidic model to animal models. It was found that the metastatic potential of different cancer cells assessed by the microfluidic model was in agreement with that assessed by the nude mouse model. Furthermore, it was demonstrated that the metastatic inhibitor AMD3100 inhibited lung metastasis effectively in both the microfluidic model and the nude mouse model. The microfluidic model was then used to mimic liver and bone metastasis of CTCs, confirming its potential for research on multiple-organ metastasis. Thus, the metastasis of CTCs to different organs was reconstituted on the microfluidic model. It may expand the capabilities of traditional cell culture models, providing a low-cost, time-saving, and rapid alternative to animal models.

  2. A simple analytical infiltration model for short-duration rainfall

    NASA Astrophysics Data System (ADS)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by these complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by 5 models (i.e., the SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange models) was compared based on numerical experiments and soil column experiments. In numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the Nash-Sutcliffe efficiency values were greater than 0.83. Additionally, in soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and the coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
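    One of the comparison models named above, the classical Philip two-term model, together with the two goodness-of-fit metrics used in the abstract, can be sketched directly (the SHIP model itself is not reproduced here; the parameter values are assumed):

```python
import numpy as np

def philip_rate(t, S, K):
    """Philip two-term infiltration rate f(t) = S / (2*sqrt(t)) + K,
    with sorptivity S and saturated hydraulic conductivity K."""
    return S / (2.0 * np.sqrt(t)) + K

def percent_bias(obs, sim):
    return 100.0 * (sim - obs).sum() / obs.sum()

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect match."""
    return 1.0 - ((obs - sim)**2).sum() / ((obs - obs.mean())**2).sum()

t = np.linspace(0.1, 2.0, 40)               # hours since rainfall onset
obs = philip_rate(t, S=2.0, K=0.5)           # synthetic "observations"
sim = philip_rate(t, S=1.8, K=0.5)           # a slightly mis-parameterized run
```

    The rate decays as 1/sqrt(t) toward the conductivity K, which is why errors in the initial soil state matter most early in a short-duration event.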

  3. Mutant mice: experimental organisms as materialised models in biomedicine.

    PubMed

    Huber, Lara; Keuck, Lara K

    2013-09-01

    Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Modelling and characterization of primary settlers in view of whole plant and resource recovery modelling.

    PubMed

    Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A

    2015-01-01

    Characterization and modelling of primary settlers have been largely neglected to date. However, whole plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on the improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on the particle settling velocity distribution is proposed, which is then applied to the development of an improved primary settler model as well as to its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck), proving better than the former and statistically comparable to the latter, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes induced by primary settling in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet are investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole plant modelling.
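    The settling-velocity-distribution concept can be sketched for an idealized settler: each velocity class is removed in proportion to its settling velocity relative to the surface overflow rate. This is a minimal illustration of the distribution idea, not the paper's calibrated model; the class values and fractions are assumptions.

```python
import numpy as np

def removal_efficiency(v_classes, mass_fractions, overflow_rate):
    """Overall removal for a discretized particle settling velocity
    distribution in an ideal settler: each class is removed with
    efficiency min(1, v_s / v_o), where v_o is the surface overflow
    rate (flow / surface area)."""
    eff = np.minimum(1.0, np.asarray(v_classes, float) / overflow_rate)
    return float(np.sum(eff * np.asarray(mass_fractions, float)))

v = [0.1, 0.5, 2.0, 5.0]          # m/h, settling velocity classes
f = [0.3, 0.3, 0.2, 0.2]          # mass fraction in each class
low_load = removal_efficiency(v, f, overflow_rate=1.0)
high_load = removal_efficiency(v, f, overflow_rate=4.0)
```

    Because removal is resolved per class, the outlet particulate distribution (and hence the COD fractionation seen downstream) shifts toward slow-settling material, which is the effect the paper quantifies.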

  5. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in terms of the flood forecasting process, errors propagated from the rainfall-runoff model are the main source of uncertainty in the forecasting model. Hence, to address these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, timing, shape and volume, the common errors in hydrological modelling. A new lumped model, the ERM model, has been selected for this study, and its parameters are evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
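    The three error types can be quantified for any simulated hydrograph with simple diagnostics; the definitions below (relative bias for volume, peak lag for timing, one minus correlation for shape) are one common set of choices, not necessarily those used in the study, and the hydrographs are synthetic.

```python
import numpy as np

def hydrograph_errors(obs, sim, dt=1.0):
    """Volume error (relative bias), timing error (lag of the simulated
    peak, in time units), and shape error (1 - linear correlation)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    volume = (sim.sum() - obs.sum()) / obs.sum()
    timing = (np.argmax(sim) - np.argmax(obs)) * dt
    shape = 1.0 - np.corrcoef(obs, sim)[0, 1]
    return volume, timing, shape

t = np.arange(48.0)                                   # hours
obs = np.exp(-0.5 * ((t - 20.0) / 4.0) ** 2)          # synthetic storm peak
sim = 1.1 * np.exp(-0.5 * ((t - 23.0) / 4.0) ** 2)    # late, over-predicting run
vol, lag, shape = hydrograph_errors(obs, sim)
```

    Separating the three components tells an updating scheme which parameters to adjust: volume errors point to water-balance parameters, timing errors to routing, and shape errors to the response function.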

  6. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
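    An applicability domain in descriptor space can be sketched with a common distance-based rule: a query compound is in-domain if its mean distance to its k nearest training compounds is no larger than a threshold derived from the training set itself. This is one standard scheme chosen for illustration; the review discusses the general requirement, not this exact rule, and the descriptor data here are random stand-ins.

```python
import numpy as np

def applicability_domain(train, k=3, z=2.0):
    """Build an in-domain test from mean k-nearest-neighbor distances;
    threshold = mean + z * std of the leave-one-out statistic on the
    training set."""
    train = np.asarray(train, float)

    def knn_dist(x, pool):
        d = np.sort(np.linalg.norm(pool - x, axis=1))
        return d[:k].mean()

    # leave-one-out distances within the training set define the threshold
    d_train = np.array([
        knn_dist(train[i], np.delete(train, i, axis=0))
        for i in range(len(train))
    ])
    threshold = d_train.mean() + z * d_train.std()

    def in_domain(x):
        return knn_dist(np.asarray(x, float), train) <= threshold

    return in_domain

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(50, 4))   # hypothetical training descriptors
check = applicability_domain(descriptors)
```

    Predictions for compounds failing the check should be flagged as extrapolations rather than reported alongside in-domain predictions.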

  7. Inter-model comparison of the landscape determinants of vector-borne disease: implications for epidemiological and entomological risk modeling.

    PubMed

    Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V

    2014-01-01

    Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine the implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or the Ixodes scapularis vector using Spearman rank correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.

  8. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. 
Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. 
    The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. 
There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.

  9. Money Earlier or Later? Simple Heuristics Explain Intertemporal Choices Better than Delay Discounting

    PubMed Central

    Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.

    2015-01-01

    Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
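    The winning model class, a weighted average of absolute and relative differences, can be sketched as a simple decision rule. The feature set follows the description above, but the weights are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def heuristic_choice(x1, t1, x2, t2, beta):
    """Predict the choice between (x1 at delay t1) and (x2 at delay t2)
    from weighted absolute and relative (percentage) differences in
    amount and delay. Returns True if the larger-later option is chosen."""
    xm, tm = (x1 + x2) / 2.0, (t1 + t2) / 2.0
    features = np.array([
        x2 - x1,             # absolute amount difference
        (x2 - x1) / xm,      # relative amount difference
        -(t2 - t1),          # absolute delay difference (longer wait is bad)
        -(t2 - t1) / tm,     # relative delay difference
    ])
    return float(features @ beta) > 0.0

beta = np.array([0.05, 2.0, 0.01, 1.0])   # assumed weights, for illustration
# $100 today vs $105 in a year, and $100 today vs $200 in a year
impatient = heuristic_choice(100, 0, 105, 365, beta)
patient = heuristic_choice(100, 0, 200, 365, beta)
```

    Unlike a discount function, this rule reacts to both the absolute and the percentage size of the reward difference, which is how it captures anomalies such as the magnitude effect.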

  10. ASTP ranging system mathematical model

    NASA Technical Reports Server (NTRS)

    Ellis, M. R.; Robinson, L. H.

    1973-01-01

    A mathematical model of the VHF ranging system is presented to analyze its performance for the Apollo-Soyuz Test Project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented, along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early/late gate model development, unlock probability development, system error model development, probability of acquisition and model development, and math model validation testing.

  11. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    PubMed Central

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
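    The hybrid idea, keeping the analytical model and learning only its residual error, can be sketched on a one-dimensional toy plant. The sinusoidal torque law, the tanh friction-like term, and the polynomial error model are all illustrative assumptions standing in for the paper's robot models and learned error model.

```python
import numpy as np

def fit_hybrid(x, y, analytic, degree=3):
    """Hybrid feed-forward model: analytical prediction plus a polynomial
    fitted to the analytical model's residual errors on the data."""
    residual = y - analytic(x)
    coeffs = np.polyfit(x, residual, degree)
    return lambda xq: analytic(xq) + np.polyval(coeffs, xq)

# "Plant": an analytical torque law plus an unmodeled friction-like term
analytic = lambda q: 2.0 * np.sin(q)
plant = lambda q: 2.0 * np.sin(q) + 0.3 * np.tanh(5.0 * q)

rng = np.random.default_rng(0)
q = np.linspace(-1.5, 1.5, 80)
torque = plant(q) + rng.normal(scale=0.01, size=q.size)   # noisy measurements

hybrid = fit_hybrid(q, torque, analytic)
err_analytic = np.abs(plant(q) - analytic(q)).mean()
err_hybrid = np.abs(plant(q) - hybrid(q)).mean()
```

    The learner only has to capture the (small, smooth) residual rather than the full plant, which is exactly why the hybrid typically needs far less data than a fully data-driven model.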

  12. Quantification of model uncertainty in aerosol optical thickness retrieval from Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2013-09-01

    We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top-of-the-atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). The focus is on the uncertainty in aerosol model selection among pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model-selection and model-error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
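    The discrepancy-modelling step can be illustrated with a minimal NumPy Gaussian-process regression. The squared-exponential kernel, its length-scale, and the synthetic sinusoidal discrepancy are assumptions of this sketch, not the OMAERO configuration:

```python
import numpy as np

# Squared-exponential (RBF) kernel for a smooth discrepancy function
def rbf(a, b, length=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 2.0, 25)              # e.g. a viewing-geometry coordinate
true_disc = 0.05 * np.sin(3.0 * x_train)         # modelled-minus-observed reflectance
y_train = true_disc + 0.002 * rng.standard_normal(25)

noise_var = 0.002 ** 2
K = rbf(x_train, x_train) + noise_var * np.eye(25)
alpha = np.linalg.solve(K, y_train)              # precompute K^{-1} y

def gp_mean(x_new):
    """GP posterior mean of the systematic discrepancy at new points."""
    return rbf(x_new, x_train) @ alpha

x_test = np.array([0.5, 1.0, 1.5])
```

    In the paper's setting the training pairs come from an ensemble of operational retrievals rather than a synthetic sine curve; the mechanics of learning a smooth correction are the same.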

  13. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    PubMed

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.

  14. Atmospheric Dispersal and Deposition of Tephra From a Potential Volcanic Eruption at Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Keating; W. Statham

    2004-02-12

    The purpose of this model report is to provide documentation of the conceptual and mathematical model (ASHPLUME) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. The ASHPLUME conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The ASHPLUME mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report will improve and clarify the previous documentation of the ASHPLUME mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model.
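    As an illustration of deposit-thinning behaviour only (not the ASHPLUME equations themselves), downwind attenuation of tephra deposit thickness is often approximated by a simple exponential decay with distance; the near-vent thickness and length scale below are arbitrary assumed values:

```python
import numpy as np

T0 = 0.10   # deposit thickness near the vent (m), assumed
L = 15.0    # e-folding thinning distance (km), assumed

def thickness(x_km):
    """Illustrative exponential thinning: T(x) = T0 * exp(-x / L)."""
    return T0 * np.exp(-np.asarray(x_km, dtype=float) / L)

x = np.linspace(0.0, 60.0, 601)
t = thickness(x)
# at 60 km (four e-folding lengths) only exp(-4) ~ 1.8% of the
# near-vent thickness remains
```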

  15. Model-Based Reasoning in Upper-division Lab Courses

    NASA Astrophysics Data System (ADS)

    Lewandowski, Heather

    2015-05-01

    Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory, where measurements of real phenomena intersect with theoretical models, leading to refinement of both models and experimental apparatus. However, experimental physicists use models in complex ways, and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such as the modeling of measurement tools, the modeling of "black boxes," and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and to guide students to effective methods and strategies.

  16. Trends in parameterization, economics and host behaviour in influenza pandemic modelling: a review and reporting protocol

    PubMed Central

    2013-01-01

    Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemic models rely on parameters from previous modelling studies, that models are rarely validated using observed data, and that models are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models, with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results and transparency in their assessment and comparison. PMID:23651557

  17. Model Selection in Systems Biology Depends on Experimental Design

    PubMed Central

    Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.

    2014-01-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483
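    The core observation, that the selected model can depend on which experiment was performed, can be illustrated with two nested (and both wrong) polynomial candidates scored by AIC under two sampling designs. The true exponential system, the polynomial candidates, and the AIC criterion are assumptions of this sketch, not the paper's stochastic state-space framework:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_poly_sse(x, y, degree):
    """Least-squares polynomial fit; returns SSE and parameter count."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(resid @ resid), degree + 1

def aic(sse, n, k):
    # Gaussian-likelihood AIC up to an additive constant
    return n * np.log(sse / n) + 2 * k

def true_system(x):
    return np.exp(x)            # neither candidate model is "true"

for lo, hi in [(0.0, 0.3), (0.0, 2.0)]:     # two experimental designs
    x = np.linspace(lo, hi, 30)
    y = true_system(x) + 0.05 * rng.standard_normal(30)
    sse1, k1 = fit_poly_sse(x, y, 1)        # linear candidate
    sse2, k2 = fit_poly_sse(x, y, 2)        # quadratic candidate
    chosen = "linear" if aic(sse1, 30, k1) < aic(sse2, 30, k2) else "quadratic"
    # the AIC winner can differ between designs: over a narrow range the
    # extra curvature parameter may not pay for itself, over a wide range it does
```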

  18. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  19. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration, which does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.
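    The surrogate idea can be sketched with a stand-in "expensive" simulator emulated by a cheap polynomial fit to a handful of full runs; the response curve and polynomial form below are illustrative assumptions, not the study's machine-learning emulator:

```python
import numpy as np

def expensive_model(k):
    """Stand-in for a full groundwater-flow run per parameter value k."""
    return np.exp(-k) * np.cos(k)          # illustrative smooth response

# A few "expensive" runs over the parameter range of interest
k_train = np.linspace(0.0, 1.0, 12)
y_train = expensive_model(k_train)

# Cheap polynomial surrogate fitted to the expensive runs
coeffs = np.polyfit(k_train, y_train, 5)

def surrogate(k):
    return np.polyval(coeffs, k)

# The surrogate can now replace the simulator inside the many
# likelihood evaluations of an MCMC-style Bayesian calibration
k_test = np.linspace(0.0, 1.0, 101)
max_err = float(np.max(np.abs(surrogate(k_test) - expensive_model(k_test))))
```

    The trade-off is standard: a one-time cost of a few full runs buys near-free evaluations afterwards, at the price of a small, quantifiable emulation error.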

  20. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study is conducted to determine a model that fits well the gold production in Malaysia over the years 1995-2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance, measured by the sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model; once again, the Weibull model gave the lowest readings on all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, which could be an important finding for Malaysia in planning its economic activities.
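    A minimal sketch of fitting a Weibull growth curve to cumulative production by coarse grid search; the data here are synthetic (generated from known parameters placed on the grid), not Malaysia's actual production figures:

```python
import numpy as np

def weibull_growth(t, A, b, c):
    """Cumulative production: A * (1 - exp(-(t/b)**c))."""
    return A * (1.0 - np.exp(-(t / b) ** c))

# Synthetic cumulative production over 16 years (a 1995-2010 analogue)
t = np.arange(1, 17, dtype=float)
data = weibull_growth(t, A=60.0, b=8.0, c=1.5)

# Coarse grid search for the least-squares parameters
best = None
for A in (50.0, 60.0, 70.0):
    for b in (5.0, 8.0, 10.0):
        for c in (1.0, 1.5, 2.0):
            sse = float(np.sum((data - weibull_growth(t, A, b, c)) ** 2))
            if best is None or sse < best[0]:
                best = (sse, A, b, c)

sse, A_hat, b_hat, c_hat = best
rmse = np.sqrt(sse / t.size)     # one of the error measures used for selection
```

    In practice a nonlinear least-squares routine would refine the parameters, and the competing Logistic, Gompertz, Richards and Chapman-Richards curves would be fitted and scored the same way.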
