A systematic analysis of model performance during simulations based on observed land-cover/land-use change is used to quantify errors associated with simulations of known "future" conditions. Calibrated and uncalibrated assessments of relative change over different lengths of...
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
Modeling and analysis to quantify MSE wall behavior and performance.
DOT National Transportation Integrated Search
2009-08-01
To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used are bias, mean square error, and the linear correlation coefficient. These metrics assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be derived directly from the parameters of a simple linear error model. Since a correct error model captures the full error information, it is argued that specifying a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
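As a concrete illustration of the abstract's central claim, the following sketch assumes the simple additive error model y = a + b*x + eps with Gaussian eps (one standard reading of the "simple linear error model") and shows numerically that bias, mean square error, and correlation follow directly from the error-model parameters. All data and parameter values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference ("truth") series and a measurement generated by the simple additive
# linear error model y = a + b*x + eps, eps ~ N(0, sigma^2).
x = rng.gamma(shape=2.0, scale=5.0, size=10_000)
a, b, sigma = 1.5, 0.8, 2.0
y = a + b * x + rng.normal(0.0, sigma, size=x.size)

# Conventional performance metrics computed directly from the data.
bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
rho = np.corrcoef(x, y)[0, 1]

# The same metrics expressed through the error-model parameters (a, b, sigma)
# and the first two moments of the reference series.
mu_x, var_x = np.mean(x), np.var(x)
bias_model = a + (b - 1.0) * mu_x
mse_model = bias_model ** 2 + (b - 1.0) ** 2 * var_x + sigma ** 2
rho_model = b * np.sqrt(var_x) / np.sqrt(b ** 2 * var_x + sigma ** 2)

print(f"bias: {bias:.3f} vs {bias_model:.3f}")   # agree up to sampling noise
print(f"MSE:  {mse:.3f} vs {mse_model:.3f}")
print(f"corr: {rho:.3f} vs {rho_model:.3f}")
```

Once (a, b, sigma) are estimated, all three conventional metrics are determined, which is the interdependence the abstract points to.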
Displacement Models for THUNDER Actuators having General Loads and Boundary Conditions
NASA Technical Reports Server (NTRS)
Wieman, Robert; Smith, Ralph C.; Kackley, Tyson; Ounaies, Zoubeida; Bernd, Jeff; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper summarizes techniques for quantifying the displacements generated in THUNDER actuators in response to applied voltages for a variety of boundary conditions and exogenous loads. The PDE (partial differential equation) models for the actuators are constructed in two steps. In the first, previously developed theory quantifying thermal and electrostatic strains is employed to model the actuator shapes that result from the manufacturing process and subsequent repoling. Newtonian principles are then employed to develop PDE models that quantify displacements in the actuator due to voltage inputs to the piezoceramic patch. For this analysis, drive levels are assumed to be moderate so that linear piezoelectric relations can be employed. Finite element methods for discretizing the models are developed, and the performance of the discretized models is illustrated through comparison with experimental data.
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Structural failures have occasionally been attributed to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Societal, physical, professional, psychological, and many other factors introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and the associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte Carlo simulation.
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
This paper discusses ways of improving the productivity of the turboexpander/refrigeration system's radial expander and radial compressor through systematic review of component performance. It reviews several techniques to determine the performance of an expander and compressor. It suggests that any performance improvement program requires quantifying the performance of separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curves of individual components. The model is used to quantify the economic benefits of any change in the system, either a change in operating procedures or a hardware modification. Topics include proper ways of using antisurge control valves and modifying flow rate/shaft speed (Q/N). It is noted that compressor efficiency depends on the incidence angle of the blade at the rotor leading edge and the angle of the incoming gas stream.
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
Gray, Adrian J; Shorter, Kathleen; Cummins, Cloe; Murphy, Aron; Waldron, Mark
2018-06-01
Quantifying the training and competition loads of players in contact team sports can be performed in a variety of ways, including kinematic, perceptual, heart rate or biochemical monitoring methods. Whilst these approaches provide data relevant for team sports practitioners and athletes, their application to a contact team sport setting can sometimes be challenging or illogical. Furthermore, these methods can generate large fragmented datasets, do not provide a single global measure of training load and cannot adequately quantify all key elements of performance in contact team sports. A previous attempt to address these limitations via the estimation of metabolic energy demand (global energy measurement) has been criticised for its inability to fully quantify the energetic costs of team sports, particularly during collisions. This is despite the seemingly unintentional misapplication of the model's principles to settings outside of its intended use. There are other hindrances to the application of such models, which are discussed herein, such as the data-handling procedures of Global Positioning System manufacturers and the unrealistic expectations of end users. Nevertheless, we propose an alternative energetic approach, based on Global Positioning System-derived data, to improve the assessment of mechanical load in contact team sports. We present a framework for the estimation of mechanical work performed during locomotor and contact events with the capacity to globally quantify the work done during training and matches.
Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W
2016-01-01
We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
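A minimal sketch of the prediction-impact-curve idea follows. The data, the assumed 25% relative risk reduction from the intervention, and the function prediction_impact_curve are hypothetical illustrations introduced here, not the authors' implementation or the ARIC data.

```python
import numpy as np

def prediction_impact_curve(risk, event, rrr=0.25, thresholds=None):
    """Illustrative PIC: percent of all events prevented if everyone at or above
    a risk threshold receives an intervention with relative risk reduction rrr.
    A simplified reading of the PIC/auPIC idea, not the authors' implementation."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    total_events = event.sum()
    prevented = [rrr * event[risk >= t].sum() / total_events * 100.0 for t in thresholds]
    return thresholds, np.array(prevented)

rng = np.random.default_rng(1)
risk = rng.beta(2, 8, size=5_000)           # hypothetical predicted risks
event = rng.random(5_000) < risk            # outcomes consistent with those risks
thr, pic = prediction_impact_curve(risk, event)
au_pic = pic.mean()                         # average over [0, 1], i.e. area under the PIC
print(f"auPIC (average % of events prevented) = {au_pic:.1f}")
```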
The Five Key Questions of Human Performance Modeling.
Wu, Changxu
2018-01-01
By building computational models (typically mathematical and computer-simulation models), human performance modeling (HPM) quantifies, predicts, and maximizes human performance and human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking on HPM's motivations, expectations, procedures, system integration and future directions.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
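The following is a rough one-dimensional sketch of the general pipeline the abstract describes (autocorrelation, then a Levinson-Durbin recursion whose prediction-error variance serves as a noise-floor estimate, then an SNR). It is not the authors' ACLDR implementation, and the test signal is synthetic rather than an SEM image.

```python
import numpy as np

def autocorr(x, max_lag):
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:]
    return r[:max_lag + 1] / x.size

def levinson_durbin(r, order):
    """Classic Levinson-Durbin recursion: returns AR coefficients and the
    final prediction-error variance from the autocorrelation sequence r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] + np.dot(a[1:k], r[k - 1:0:-1])
        ref = -acc / err
        a_new = a.copy()
        a_new[1:k] = a[1:k] + ref * a[k - 1:0:-1]
        a_new[k] = ref
        a = a_new
        err *= (1.0 - ref ** 2)
    return a, err

# 1-D stand-in for an image row: a smooth signal plus white noise.
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 60 * np.pi, 4096))
noisy = clean + rng.normal(0.0, 0.3, clean.size)

r = autocorr(noisy, max_lag=16)
_, noise_var = levinson_durbin(r, order=16)    # unpredictable part ~ noise floor
signal_var = max(r[0] - noise_var, 1e-12)      # total variance minus noise estimate
snr_db = 10.0 * np.log10(signal_var / noise_var)
print(f"estimated SNR ≈ {snr_db:.1f} dB (true ≈ {10 * np.log10(0.5 / 0.09):.1f} dB)")
```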
NASA Astrophysics Data System (ADS)
Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.
2015-12-01
The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is devoted to quantifying the efficacy of these models through different metrics of model performance, but evaluation is currently specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains: error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, which provides error quantification for each CMAQ grid cell so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data alone.
Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?
NASA Astrophysics Data System (ADS)
Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.
2016-12-01
Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10⁶ km² in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. Performance is reduced for water-scarce regions, and further research should focus on improving this aspect of regression-based global hydrological models.
A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test
2012-01-01
used the coefficient of determination (R²) and the P-values based on Bartels's test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels's
Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C
2018-05-16
A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model was developed to compute an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how the prediction model performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R² statistic, and discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC). We computed the slope of the calibration plot to judge prediction accuracy. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R² was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery for the present population, a better fitting prediction model should be developed.
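For readers unfamiliar with these validation statistics, the sketch below computes an AUC (as a Mann-Whitney statistic) and a calibration slope (logistic regression of outcomes on the model's linear predictor) for an entirely hypothetical validation cohort. It assumes the statsmodels package and is not the study's analysis code.

```python
import numpy as np
import statsmodels.api as sm

def auc_mann_whitney(y, p):
    """AUC as the probability that a random event has a higher predicted
    risk than a random non-event (Mann-Whitney statistic)."""
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical validation cohort: predicted SSI probabilities from an existing
# model and observed infection status (0/1), with only a weak true association.
rng = np.random.default_rng(3)
p_pred = np.clip(rng.beta(1.5, 20, size=898), 1e-4, 1 - 1e-4)
y_obs = (rng.random(898) < 0.5 * p_pred + 0.035).astype(int)

auc = auc_mann_whitney(y_obs, p_pred)

# Calibration slope: regress the observed outcome on the model's linear
# predictor (logit of the predicted probability); a slope well below 1
# signals miscalibration in the new population.
lp = np.log(p_pred / (1 - p_pred))
fit = sm.Logit(y_obs, sm.add_constant(lp)).fit(disp=False)
print(f"AUC = {auc:.2f}, calibration slope = {fit.params[1]:.2f}")
```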
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
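A simplified frequentist sketch of the windowing trade-off described above is shown below: a straight-line fit near short circuit whose formal uncertainty shrinks as the window widens while the bias from curvature (model discrepancy) grows. The paper's actual method is an objective Bayesian regression with evidence-based window selection; this example, on synthetic data, only illustrates the underlying trade-off.

```python
import numpy as np

# Synthetic I-V points near short circuit; a gentle curvature stands in for the
# model discrepancy that a straight-line fit cannot capture.
rng = np.random.default_rng(4)
v = np.linspace(-0.05, 0.4, 25)                                            # volts
i_meas = 9.20 - 0.15 * v - 0.8 * v ** 2 + rng.normal(0.0, 0.01, v.size)    # amps

def isc_from_window(v, i, v_max):
    """Straight-line fit I = c1*V + c0 over the points with V <= v_max; returns
    the extrapolated Isc (intercept at V = 0) and its standard uncertainty."""
    sel = v <= v_max
    coef, cov = np.polyfit(v[sel], i[sel], deg=1, cov=True)
    return coef[1], np.sqrt(cov[1, 1]), sel.sum()

# Wider windows shrink the formal fit uncertainty but increase the bias from
# curvature (model discrepancy): the trade-off the abstract addresses.
for v_max in (0.1, 0.2, 0.4):
    isc, u, n = isc_from_window(v, i_meas, v_max)
    print(f"window V <= {v_max:.1f} V: n = {n:2d}, Isc = {isc:.3f} +/- {u:.3f} A")
```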
A Generalizable Methodology for Quantifying User Satisfaction
NASA Astrophysics Data System (ADS)
Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung
Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
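A minimal sketch of the survival-analysis idea follows: session durations, some right-censored, summarized with a hand-rolled Kaplan-Meier estimator and compared across two hypothetical network conditions. The data, group labels, and censoring scheme are invented for illustration and are not taken from the case studies above.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve: S(t) drops at each observed session end;
    censored sessions (still running at measurement end) only leave the risk set."""
    order = np.argsort(durations)
    t, d = np.asarray(durations)[order], np.asarray(observed)[order]
    times, surv, s, at_risk = [], [], 1.0, len(t)
    for ti in np.unique(t):
        mask = t == ti
        deaths = d[mask].sum()
        if deaths:
            s *= 1.0 - deaths / at_risk
            times.append(ti)
            surv.append(s)
        at_risk -= mask.sum()
    return np.array(times), np.array(surv)

# Hypothetical session times (minutes) under two network conditions; longer
# sessions are read as a proxy for higher user satisfaction.
rng = np.random.default_rng(5)
good = rng.exponential(45.0, 400)      # low-loss network
bad = rng.exponential(20.0, 400)       # high-loss network
censored = rng.random(400) > 0.9       # ~10% of sessions still running

for label, dur in (("low loss", good), ("high loss", bad)):
    t, s = kaplan_meier(dur, ~censored)
    median = t[np.searchsorted(-s, -0.5)]    # first time at which S(t) <= 0.5
    print(f"{label}: median session ≈ {median:.0f} min")
```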
An integrated physiology model to study regional lung damage effects and the physiologic response
2014-01-01
Background: This work expands upon a previously developed exercise dynamic physiology model (DPM) with the addition of an anatomic pulmonary system in order to quantify the impact of lung damage on oxygen transport and physical performance decrement. Methods: A pulmonary model is derived with an anatomic structure based on morphometric measurements, accounting for heterogeneous ventilation and perfusion observed experimentally. The model is incorporated into an existing exercise physiology model; the combined system is validated using human exercise data. Pulmonary damage from blast, blunt trauma, and chemical injury is quantified in the model based on lung fluid infiltration (edema), which reduces oxygen delivery to the blood. The pulmonary damage component is derived and calibrated based on published animal experiments; scaling laws are used to predict the human response to lung injury in terms of physical performance decrement. Results: The augmented dynamic physiology model (DPM) accurately predicted the human response to hypoxia, altitude, and exercise observed experimentally. The pulmonary damage parameters (shunt and diffusing capacity reduction) were fit to experimental animal data obtained in blast, blunt trauma, and chemical damage studies which link lung damage to lung weight change; the model is able to predict the reduced oxygen delivery in damage conditions. The model accurately estimates physical performance reduction with pulmonary damage. Conclusions: We have developed a physiologically-based mathematical model to predict performance decrement endpoints in the presence of thoracic damage; simulations can be extended to estimate human performance and escape in extreme situations. PMID:25044032
NASA Astrophysics Data System (ADS)
Steinschneider, S.; Wi, S.; Brown, C. M.
2013-12-01
Flood risk management performance is investigated within the context of integrated climate and hydrologic modeling uncertainty to explore system robustness. The research question investigated is whether structural and hydrologic parameterization uncertainties are significant relative to other uncertainties such as climate change when considering water resources system performance. Two hydrologic models are considered, a conceptual, lumped parameter model that preserves the water balance and a physically-based model that preserves both water and energy balances. In the conceptual model, parameter and structural uncertainties are quantified and propagated through the analysis using a Bayesian modeling framework with an innovative error model. Mean climate changes and internal climate variability are explored using an ensemble of simulations from a stochastic weather generator. The approach presented can be used to quantify the sensitivity of flood protection adequacy to different sources of uncertainty in the climate and hydrologic system, enabling the identification of robust projects that maintain adequate performance despite the uncertainties. The method is demonstrated in a case study for the Coralville Reservoir on the Iowa River, where increased flooding over the past several decades has raised questions about potential impacts of climate change on flood protection adequacy.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
NASA Technical Reports Server (NTRS)
Campbell, B. H.
1974-01-01
A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.
Validation of a national hydrological model
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Booker, D. J.; Cattoën, C.
2016-10-01
Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
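The two headline scores used in this evaluation are easy to state precisely; the short sketch below implements Nash-Sutcliffe efficiency and percent bias on a synthetic daily flow series. The sign convention for percent bias varies between studies and is an assumption here.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance of the observations; 1 is perfect, 0 means the
    model is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """PBIAS > 0 with this convention means the model over-predicts total flow volume."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

# Hypothetical daily flow series at one gauging station (m^3/s).
rng = np.random.default_rng(6)
q_obs = np.exp(rng.normal(2.0, 0.8, 365))
q_sim = q_obs * rng.lognormal(0.05, 0.3, 365)      # imperfect, slightly high model

print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.2f}, PBIAS = {percent_bias(q_obs, q_sim):+.1f}%")
```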
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate (PZT) piezoelectric ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclic loading. A multi-feature integration method is developed to quantify the crack size using signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
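One common hit/miss formulation of a POD model is a logistic regression of detection outcome on log crack size; the sketch below illustrates that formulation on invented inspection data (it is not claimed to be the authors' POD construction) and derives the crack size detected with 90% probability.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical hit/miss inspection data: detection becomes more likely as
# crack size (mm) grows.
rng = np.random.default_rng(10)
size = rng.uniform(0.2, 8.0, 300)
p_true = 1 / (1 + np.exp(-(np.log(size) - np.log(1.5)) / 0.25))
detected = (rng.random(300) < p_true).astype(int)

# Hit/miss POD model: logistic regression on log crack size.
X = sm.add_constant(np.log(size))
pod_fit = sm.Logit(detected, X).fit(disp=False)
b0, b1 = pod_fit.params

# a90: crack size at which the probability of detection reaches 90%.
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print(f"POD(a) = logistic(b0 + b1*ln a); a90 ≈ {a90:.2f} mm")
```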
Data driven models of the performance and repeatability of NIF high foot implosions
NASA Astrophysics Data System (ADS)
Gaffney, Jim; Casey, Dan; Callahan, Debbie; Hartouni, Ed; Ma, Tammy; Spears, Brian
2015-11-01
Recent high foot (HF) inertial confinement fusion (ICF) experiments performed at the National Ignition Facility (NIF) have consisted of enough laser shots that a data-driven analysis of capsule performance is feasible. In this work we use 20-30 individual implosions of similar design, spanning laser drive energies from 1.2 to 1.8 MJ, to quantify our current understanding of the behavior of HF ICF implosions. We develop a probabilistic model for the projected performance of a given implosion and use it to quantify uncertainties in predicted performance, including shot-to-shot variations and observation uncertainties. We investigate the statistical significance of the observed performance differences between different laser pulse shapes, ablator materials, and capsule designs. Finally, using a cross-validation technique, we demonstrate that 5-10 repeated shots of a similar design are required before real trends in the data can be distinguished from shot-to-shot variations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674957.
Developing and testing a global-scale regression model to quantify mean annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.
2017-01-01
Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, ranging from 2 to 10⁶ km² in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment-averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
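To make the model form concrete, the sketch below fits an ordinary-least-squares regression in log space and reports R², RMSE and the modified (absolute-value) index of agreement. The predictor set mirrors the abstract, but the data are synthetic and the log-log functional form is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1885
# Hypothetical catchment predictors (the study's actual dataset is not reproduced here).
area = 10 ** rng.uniform(0.3, 6.0, n)     # km^2
precip = rng.uniform(200, 3000, n)        # mm/yr
temp = rng.uniform(-5, 28, n)             # deg C
slope = rng.uniform(0.5, 30, n)           # degrees
elev = rng.uniform(10, 3000, n)           # m
log_maf = (-3.0 + 0.95 * np.log10(area) + 1.1 * np.log10(precip)
           - 0.01 * temp + 2e-5 * elev + rng.normal(0, 0.25, n))

# Ordinary least squares in log space, mirroring the predictor set in the abstract.
X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip), temp, slope, elev])
beta, *_ = np.linalg.lstsq(X, log_maf, rcond=None)
pred = X @ beta

r2 = 1 - np.sum((log_maf - pred) ** 2) / np.sum((log_maf - log_maf.mean()) ** 2)
rmse = np.sqrt(np.mean((log_maf - pred) ** 2))
# Modified index of agreement (absolute-value form, often denoted d1).
d1 = 1 - np.sum(np.abs(pred - log_maf)) / np.sum(
    np.abs(pred - log_maf.mean()) + np.abs(log_maf - log_maf.mean()))
print(f"R^2 = {r2:.2f}, RMSE(log10 MAF) = {rmse:.2f}, d1 = {d1:.2f}")
```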
Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model
NASA Astrophysics Data System (ADS)
Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.
2018-03-01
Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a polydomain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
Multi-Scale Multi-Domain Model | Transportation Research | NREL
NREL's Multi-Scale Multi-Domain (MSMD) model quantifies the impacts of the electrical/thermal pathway. Macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems.
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 100 to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
Temporal Delineation and Quantification of Short Term Clustered Mining Seismicity
NASA Astrophysics Data System (ADS)
Woodward, Kyle; Wesseloo, Johan; Potvin, Yves
2017-07-01
The assessment of the temporal characteristics of seismicity is fundamental to understanding and quantifying the seismic hazard associated with mining, the effectiveness of strategies and tactics used to manage seismic hazard, and the relationship between seismicity and changes to the mining environment. This article aims to improve the accuracy and precision with which the temporal dimension of seismic responses can be quantified and delineated. We present a review and discussion of the occurrence of time-dependent mining seismicity, with a specific focus on temporal modelling and the modified Omori law (MOL). This forms the basis for the development of a simple weighted metric that allows for the consistent temporal delineation and quantification of a seismic response. The optimisation of this metric allows for the selection of the most appropriate modelling interval given the temporal attributes of time-dependent mining seismicity. We evaluate the performance of the weighted metric for the modelling of a synthetic seismic dataset. This assessment shows that seismic responses can be quantified and delineated by the MOL, with reasonable accuracy and precision, when the modelling is optimised by evaluating the weighted maximum-likelihood (MLE) metric. Furthermore, this assessment highlights that decreased weighted MLE metric performance can be expected if there is a lack of contrast between the temporal characteristics of events associated with different processes.
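For reference, the modified Omori law can be fitted to an event-time catalogue by maximum likelihood for a non-homogeneous Poisson process; the sketch below does this on a synthetic aftershock-like sequence. It illustrates only the MOL rate λ(t) = K/(t + c)^p, not the authors' weighted-metric procedure, and the parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize

def mol_neg_log_likelihood(params, t, T):
    """Negative log-likelihood of a non-homogeneous Poisson process with
    modified Omori law rate lambda(t) = K / (t + c)^p on (0, T]."""
    log_k, log_c, p = params
    k, c = np.exp(log_k), np.exp(log_c)
    rate_term = np.sum(np.log(k) - p * np.log(t + c))
    if abs(p - 1.0) < 1e-9:
        integral = k * (np.log(T + c) - np.log(c))
    else:
        integral = k * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return -(rate_term - integral)

# Synthetic aftershock-like sequence generated by thinning a dominating Poisson process.
rng = np.random.default_rng(8)
k_true, c_true, p_true, T = 120.0, 0.05, 1.1, 30.0      # K, c, p, catalogue length (days)
lam_max = k_true / c_true ** p_true
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=200_000))
cand = cand[cand < T]
keep = rng.random(cand.size) < (k_true / (cand + c_true) ** p_true) / lam_max
events = cand[keep]

res = minimize(mol_neg_log_likelihood, x0=[np.log(50), np.log(0.1), 1.0],
               args=(events, T), method="Nelder-Mead")
k_hat, c_hat, p_hat = np.exp(res.x[0]), np.exp(res.x[1]), res.x[2]
print(f"{events.size} events; K = {k_hat:.0f}, c = {c_hat:.3f}, p = {p_hat:.2f}")
```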
Performance prediction of a synchronization link for distributed aerospace wireless systems.
Wang, Wen-Qin; Shao, Huaizong
2013-01-01
For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate for oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2002-01-01
A study was performed to develop an understanding of the key factors that govern the performance of metallic thermal protection systems for reusable launch vehicles. A current advanced metallic thermal protection system (TPS) concept was systematically analyzed to discover the most important factors governing the thermal performance of metallic TPS. A large number of relevant factors that influence the thermal analysis and thermal performance of metallic TPS were identified and quantified. Detailed finite element models were developed for predicting the thermal performance of design variations of the advanced metallic TPS concept mounted on a simple, unstiffened structure. The computational models were also used, in an automated iterative procedure, for sizing the metallic TPS to maintain the structure below a specified temperature limit. A statistical sensitivity analysis method, based on orthogonal matrix techniques used in robust design, was used to quantify and rank the relative importance of the various modeling and design factors considered in this study. Results of the study indicate that radiation, even in small gaps between panels, can reduce significantly the thermal performance of metallic TPS, so that gaps should be eliminated by design if possible. Thermal performance was also shown to be sensitive to several analytical assumptions that should be chosen carefully. One of the factors that was found to have the greatest effect on thermal performance is the heat capacity of the underlying structure. Therefore the structure and TPS should be designed concurrently.
Quantifying errors in trace species transport modeling.
Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M
2008-12-16
One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.
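The resolution-doubling argument can be illustrated generically: the sketch below advects a tracer with a first-order upwind scheme at successively doubled resolutions and estimates the observed convergence order from the error ratio. It is a toy 1-D example under assumed conditions, not the chemical transport model experiment described in the paper.

```python
import numpy as np

def advect_upwind(n_cells, cfl=0.5):
    """Advect a Gaussian tracer once around a periodic 1-D domain with a
    first-order upwind scheme; return the numerical and exact solutions."""
    x = (np.arange(n_cells) + 0.5) / n_cells
    q = np.exp(-200.0 * (x - 0.5) ** 2)
    exact = q.copy()                  # after one full revolution the exact solution is unchanged
    dt = cfl / n_cells                # unit velocity, dx = 1 / n_cells
    for _ in range(int(round(1.0 / dt))):
        q = q - cfl * (q - np.roll(q, 1))
    return q, exact

errors = {}
for n in (100, 200, 400):
    q, exact = advect_upwind(n)
    errors[n] = np.sqrt(np.mean((q - exact) ** 2))
    print(f"n = {n:3d}: RMS transport error = {errors[n]:.4f}")

# The error ratio between successive grid doublings gives the observed order of accuracy.
print(f"observed convergence order ≈ {np.log2(errors[100] / errors[200]):.2f}")
```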
Parametric Modeling of the Safety Effects of NextGen Terminal Maneuvering Area Conflict Scenarios
NASA Technical Reports Server (NTRS)
Rogers, William H.; Waldron, Timothy P.; Stroiney, Steven R.
2011-01-01
The goal of this work was to analytically identify and quantify the issues, challenges, technical hurdles, and pilot-vehicle interface issues associated with conflict detection and resolution (CD&R) in emerging operational concepts for a NextGen terminal maneuvering area, including surface operations. To this end, the work entailed analytical and trade studies focused on modeling the achievable safety benefits of different CD&R strategies and concepts in the current and future airport environment. In addition, crew-vehicle interface and pilot performance enhancements and potential issues were analyzed based on review of envisioned NextGen operations, expected equipage advances, and human factors expertise. The results of the perturbation analysis, which quantify the high-level performance impact of changes to key parameters such as median response time and surveillance position error, show that the analytical model developed could be useful in making technology investment decisions.
Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I
2009-06-01
A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach, the model is applicable to both isotropic and anisotropic materials. The model requires several inputs, including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available, thus efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving model physics parameterizations.
Energy Efficient Operation of Ammonia Refrigeration Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Abdul Qayyum; Wenning, Thomas J; Sever, Franc
Ammonia refrigeration systems typically offer many energy efficiency opportunities because of their size and complexity. This paper develops a model for simulating single-stage ammonia refrigeration systems, describes common energy saving opportunities, and uses the model to quantify those opportunities. The simulation model uses data that are typically available during site visits to ammonia refrigeration plants and can be calibrated to actual consumption and performance data if available. Annual electricity consumption for a base-case ammonia refrigeration system is simulated. The model is then used to quantify energy savings for six specific energy efficiency opportunities: reduce refrigeration load, increase suction pressure, employ dual suction, decrease the minimum head pressure set-point, increase evaporative condenser capacity, and reclaim heat. Methods and considerations for achieving each saving opportunity are discussed. The model captures synergistic effects that result when more than one component or parameter is changed. This methodology represents an effective method to model and quantify common energy saving opportunities in ammonia refrigeration systems. The results indicate the range of savings that might be expected from common energy efficiency opportunities.
The probability heuristics model of syllogistic reasoning.
Chater, N; Oaksford, M
1999-03-01
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
Electronic Equalization of Multikilometer 10-Gb/s Multimode Fiber Links: Mode-Coupling Effects
NASA Astrophysics Data System (ADS)
Balemarthy, Kasyapa; Polley, Arup; Ralph, Stephen E.
2006-12-01
This paper investigates the ability of electronic equalization to compensate for modal dispersion in the presence of mode coupling in multimode fibers (MMFs) at 10 Gb/s. Using a new time-domain experimental method, mode coupling is quantified in MMF. These results, together with a comprehensive link model, allow the impact of mode coupling on the performance of MMF links to be determined. The equalizer performance on links from 300 m to 8 km is quantified with and without modal coupling. It is shown that mode-coupling effects are influenced by the specific index profile and increase the equalizer penalty by as much as 1 dBo for 1-km links and 2.3 dBo for 2-km links when using a standard model of fiber profiles at 1310 nm.
A mathematical method for quantifying in vivo mechanical behaviour of heel pad under dynamic load.
Naemi, Roozbeh; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan
2016-03-01
The mechanical behaviour of the heel pad, as a shock-attenuating interface during a foot strike, determines the loading on the musculoskeletal system during walking. Mathematical models that describe the force-deformation relationship of the heel pad structure can determine the mechanical behaviour of the heel pad under load. Hence, the purpose of this study was to propose a method for quantifying the heel pad stress-strain relationship using force-deformation data from an indentation test. The energy input and energy returned densities were calculated by numerically integrating the area below the stress-strain curve during loading and unloading, respectively. Elastic energy and energy absorbed densities were calculated as the sum of and the difference between the energy input and energy returned densities, respectively. By fitting the energy function, derived from a nonlinear viscoelastic model, to the energy density-strain data, the elastic and viscous model parameters were quantified. The viscous and elastic exponent model parameters were significantly correlated with maximum strain, indicating the need to perform indentation tests at realistic maximum strains relevant to walking. The proposed method was shown to be able to differentiate between the elastic and viscous components of the heel pad response to loading and to allow quantification of the corresponding stress-strain model parameters.
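The energy-density calculations described here reduce to numerical integration of the loading and unloading branches of the stress-strain curve; the sketch below does exactly that on hypothetical indentation data (the curve shapes and magnitudes are invented for illustration).

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule (kept explicit to stay independent of version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical stress-strain branches from an indentation test: a stiffening
# loading curve and a lower unloading curve (hysteresis loop).
strain = np.linspace(0.0, 0.35, 200)
stress_load = 80e3 * strain ** 2.2        # Pa, loading branch
stress_unload = 55e3 * strain ** 2.6      # Pa, unloading branch

energy_input = trapezoid(stress_load, strain)         # J/m^3, area under loading curve
energy_returned = trapezoid(stress_unload, strain)    # J/m^3, area under unloading curve
energy_absorbed = energy_input - energy_returned      # dissipated (viscous) part

print(f"input {energy_input:.0f} J/m^3, returned {energy_returned:.0f} J/m^3, "
      f"absorbed {energy_absorbed:.0f} J/m^3")
```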
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.
2016-01-01
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
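For illustration, a minimal sketch of one commonly used Information Theory Quantifier, the Bandt-Pompe permutation entropy; the specific quantifiers used in the study are not reproduced here, and the series below are synthetic:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n_patterns = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n_patterns):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))        # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n_patterns
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(order))               # normalize to [0, 1]

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=2000)))         # near 1 for white noise
print(permutation_entropy(np.sin(np.arange(2000) / 10)))  # lower for regular dynamics
```

Low entropy indicates strongly structured dynamics; values near one indicate noise-like behaviour, which is how such quantifiers separate model structures independently of absolute error.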
Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate
NASA Astrophysics Data System (ADS)
Takaidza, I.; Makinde, O. D.; Okosun, O. K.
2017-03-01
The 2014 Ebola outbreak in West Africa exposed the need to connect modellers with those holding relevant data as pivotal to better understanding how the disease spreads and to quantifying the effects of possible interventions. In this paper, we model and analyse Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.
NASA Astrophysics Data System (ADS)
Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc
2017-02-01
Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (model deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified wet inorganic N deposition performance of several widely-used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and the 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as a percentage of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjusted deposition estimates based on deposition measurements to reduce bias, or that spatially interpolated measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty that can affect critical load and exceedance calculations, results demonstrate that expressing bias as a percentage of critical loads, at a spatial scale consistent with the calculations, may be a useful exercise for those performing them. It may help decide if model performance is adequate for a particular calculation, help assess confidence in calculation results, and highlight cases where a non-deterministic approach may be needed.
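A sketch of the normalization step described above, expressing model bias and error as percentages of a critical load; the deposition and critical-load values are hypothetical (units assumed to be kg N ha-1 yr-1):

```python
import numpy as np

def bias_as_pct_of_critical_load(modeled, observed, critical_load):
    """Express model bias and absolute error relative to a critical load
    (all quantities in the same units)."""
    bias = np.asarray(modeled) - np.asarray(observed)
    error = np.abs(bias)
    return 100.0 * bias / critical_load, 100.0 * error / critical_load

# Hypothetical site values, not data from the study
modeled = np.array([2.1, 1.4, 3.0])
observed = np.array([1.5, 1.6, 2.2])
print(bias_as_pct_of_critical_load(modeled, observed, critical_load=1.5))
```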
NASA Astrophysics Data System (ADS)
Carpenter, Matthew H.; Jernigan, J. G.
2007-05-01
We present examples of an analysis progression consisting of a synthesis of the Photon Clean Method (Carpenter, Jernigan, Brown, Beiersdorfer 2007) and bootstrap methods to quantify errors and variations in many-parameter models. The Photon Clean Method (PCM) works well for model spaces with large numbers of parameters, proportional to the number of photons; therefore, a Monte Carlo paradigm is a natural numerical approach. Consequently, PCM, an "inverse Monte Carlo" method, requires a new approach for quantifying errors as compared to common analysis methods for fitting models of low dimensionality. This presentation will explore the methodology and presentation of analysis results derived from a variety of public data sets, including observations with XMM-Newton, Chandra, and other NASA missions. Special attention is given to the visualization of both data and models, including dynamic interactive presentations. This work was performed under the auspices of the Department of Energy under contract No. W-7405-Eng-48. We thank Peter Beiersdorfer and Greg Brown for their support of this technical portion of a larger program related to science with the LLNL EBIT program.
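A minimal bootstrap sketch of the kind of error quantification described above, applied to a toy exponential model rather than the PCM itself; all data and parameter values are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)   # toy spectral model, not the PCM itself

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 200)
y = model(x, 3.0, 0.8) + rng.normal(scale=0.1, size=x.size)

# Non-parametric bootstrap: refit on resampled data to estimate parameter spreads
boot = []
for _ in range(500):
    idx = rng.integers(0, x.size, x.size)
    p, _ = curve_fit(model, x[idx], y[idx], p0=[1.0, 1.0])
    boot.append(p)
boot = np.array(boot)
print("bootstrap means:   ", boot.mean(axis=0))
print("bootstrap std devs:", boot.std(axis=0))
```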
Investigating comfort temperatures and heat transfer in sleeping bags
NASA Astrophysics Data System (ADS)
Hill, Trevor; Hill, Lara
2017-07-01
After many years of confusion, thermal performance of sleeping bags has now been quantified and unified using expensive test techniques. Based on Newton’s law of cooling, we present a simple inexpensive test and model to check manufacturers’ claims on the temperature performance of a range of modern sleeping bags.
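A minimal sketch of the kind of inexpensive test analysis described above: fitting Newton's law of cooling, T(t) = T_env + (T0 - T_env)e^{-kt}, to hypothetical cool-down data recorded inside a sleeping bag:

```python
import numpy as np
from scipy.optimize import curve_fit

def newton_cooling(t, k, T_env, T0):
    """Temperature of a warm body cooling toward ambient."""
    return T_env + (T0 - T_env) * np.exp(-k * t)

# Hypothetical cool-down data for a heated bottle inside a sleeping bag (minutes, deg C)
t = np.array([0, 10, 20, 40, 60, 90, 120], dtype=float)
T = np.array([40.0, 37.2, 34.8, 30.9, 27.9, 24.4, 21.9])

(k, T_env, T0), _ = curve_fit(newton_cooling, t, T, p0=[0.01, 15.0, 40.0])
print(f"cooling constant k = {k:.4f} per minute (smaller k -> better insulation)")
```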
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
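A minimal Monte Carlo sketch of forward uncertainty propagation for an ignition-delay prediction; the paper uses non-intrusive spectral projection, so this is only an analogue, and the one-step Arrhenius-style correlation and parameter spreads below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314            # J/(mol K)
T, P = 900.0, 60.0   # temperature [K] and pressure [bar], illustrative engine-like condition

# Posterior-like samples of correlation parameters (hypothetical means/spreads; the
# extra spread on ln A stands in for the embedded model-error term)
lnA = rng.normal(loc=-23.0, scale=0.3, size=20_000)
Ea = rng.normal(loc=1.5e5, scale=5e3, size=20_000)   # activation energy, J/mol
n = rng.normal(loc=-1.0, scale=0.05, size=20_000)    # pressure exponent

tau = np.exp(lnA) * P**n * np.exp(Ea / (R * T))      # ignition delay correlation [s]
lo, med, hi = np.percentile(tau, [2.5, 50, 97.5])
print(f"ignition delay: median {med:.2e} s, 95% band [{lo:.2e}, {hi:.2e}] s")
```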
A multi-fidelity analysis selection method using a constrained discrete optimization formulation
NASA Astrophysics Data System (ADS)
Stults, Ian C.
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically moderate and high dimensional problems' variability can often be reduced to only a few dimensions and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. 
To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, health monitoring of the major gas-path components of gas turbines has mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean-engine performance parameters, free of any faults, calculated by the base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve the model-based method. Among them, NNs are most often used for engine fault diagnosis because of their good learning performance, but they suffer from low accuracy and long training times when the learning database must be built from large amounts of learning data. In addition, a very complex structure is needed to effectively identify single or multiple faults of gas-path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault-learning database obtained from the developed base performance model. In training the NN, the Feed-Forward Back-Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
NASA Astrophysics Data System (ADS)
Shepson, P. B.; Lavoie, T. N.; Kerlo, A. E.; Stirm, B. H.
2016-12-01
Understanding the contribution of anthropogenic activities to atmospheric greenhouse gas concentrations requires an accurate characterization of emission sources. Previously, we have reported the use of a novel aircraft-based mass balance measurement technique to quantify greenhouse gas emission rates from point and area sources; however, the accuracy of this approach has not been evaluated to date. Here, an assessment of method accuracy and precision was performed by conducting a series of six aircraft-based mass balance experiments at a power plant in southern Indiana and comparing the calculated CO2 emission rates to the reported hourly emission measurements made by continuous emissions monitoring systems (CEMS) installed directly in the exhaust stacks at the facility. For all flights, CO2 emissions were quantified before CEMS data were released online to ensure unbiased analysis. Additionally, we assess the uncertainties introduced to the final emission rate by our analysis method, which employs a statistical kriging model to interpolate and extrapolate the CO2 fluxes across the flight transects from the ground to the top of the boundary layer. Subsequently, using the results from these flights combined with the known emissions reported by the CEMS, we perform an inter-model comparison of alternative kriging methods to evaluate the performance of the kriging approach.
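A sketch of the interpolation-and-integration step described above, using a Gaussian process regressor as a stand-in for the study's kriging model; the transect coordinates and flux values are synthetic:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical flux samples along flight transects:
# (crosswind distance [km], altitude [km]) -> CO2 flux (arbitrary units)
rng = np.random.default_rng(3)
coords = rng.uniform([0.0, 0.2], [10.0, 1.5], size=(60, 2))
flux = (np.exp(-((coords[:, 0] - 5.0) ** 2) / 4.0) * (1.2 - coords[:, 1])
        + rng.normal(scale=0.02, size=60))

# Kriging-like interpolation of the flux field across the plume cross-section
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[2.0, 0.5]) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(coords, flux)

# Extrapolate to the ground and boundary-layer top, then integrate over the screen
xg, zg = np.linspace(0, 10, 50), np.linspace(0.0, 1.5, 30)
xs, zs = np.meshgrid(xg, zg)
mean_flux = gp.predict(np.column_stack([xs.ravel(), zs.ravel()])).reshape(xs.shape)
emission = np.trapz(np.trapz(mean_flux, xg, axis=1), zg)
print(f"integrated emission (arbitrary units): {emission:.3f}")
```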
Development of task network models of human performance in microgravity
NASA Technical Reports Server (NTRS)
Diaz, Manuel F.; Adam, Susan
1992-01-01
This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.
GIS-based hydrologic modeling offers a convenient means of assessing the impacts associated with land-cover/use change for environmental planning efforts. Future scenarios can be developed through a combination of modifications to the land-cover/use maps used to parameterize hydr...
HYDROLOGIC MODEL CALIBRATION AND UNCERTAINTY IN SCENARIO ANALYSIS
A systematic analysis of model performance during simulations based on observed land-cover/use change is used to quantify error associated with water-yield simulations for a series of known landscape conditions over a 24-year period with the goal of evaluatin...
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, D.L.
1995-11-01
The objective of this work was to develop an improved performance model for modules and systems, valid for all operating conditions, for use in module specifications, system and BOS component design, and system rating or monitoring. The approach taken was to identify and quantify the influence of the dominant factors of solar irradiance, cell temperature, angle-of-incidence, and solar spectrum; to use outdoor test procedures to separate the effects of electrical, thermal, and optical performance; to use fundamental cell characteristics to improve the analysis; and to combine the factors in a simple model using common variables.
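A minimal sketch of the factor-based structure described above, combining irradiance, cell temperature, and angle-of-incidence effects; the coefficients are illustrative placeholders, not Sandia model parameters:

```python
import math

def pv_power(irradiance, cell_temp, aoi_deg, p_ref=250.0,
             g_ref=1000.0, t_ref=25.0, gamma=-0.004, b=0.05):
    """Toy PV module power model combining the dominant factors named above:
    irradiance scaling, a linear temperature coefficient, and a simple
    angle-of-incidence (AOI) loss. All coefficients are illustrative only."""
    f_irr = irradiance / g_ref                                # irradiance factor
    f_temp = 1.0 + gamma * (cell_temp - t_ref)                # temperature factor
    f_aoi = max(0.0, 1.0 - b * (1.0 / math.cos(math.radians(aoi_deg)) - 1.0))  # AOI loss
    return p_ref * f_irr * f_temp * f_aoi

print(pv_power(irradiance=800.0, cell_temp=45.0, aoi_deg=30.0))
```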
Quantifying Parkinson's disease progression by simulating gait patterns
NASA Astrophysics Data System (ADS)
Cárdenas, Luisa; Martínez, Fabio; Atehortúa, Angélica; Romero, Eduardo
2015-12-01
Modern rehabilitation protocols for most neurodegenerative diseases, in particular Parkinson's disease, rely on a clinical analysis of gait patterns. Currently, such analysis is highly dependent on both the examiner's expertise and the type of evaluation. Development of evaluation methods with objective measures is therefore crucial. Physical models arise as a powerful alternative to quantify movement patterns and to emulate the progression and performance of specific treatments. This work introduces a novel quantification of Parkinson's disease progression using a physical model that accurately represents the main gait biomarker, the body Center of Gravity (CoG). The model tracks the whole gait cycle by a coupled double inverted pendulum that emulates the leg swinging for the single-support phase and by a damper-spring system (SDP) that recreates both legs in contact with the ground for the double-support phase. The patterns generated by the proposed model are compared with actual ones learned from 24 subjects in stages 2, 3, and 4. The evaluation performed demonstrates a better performance of the proposed model when compared with a baseline model (SP) composed of a coupled double pendulum and a mass-spring system. The Fréchet distance measured the differences between model estimates and real trajectories, showing distances for stages 2, 3, and 4 of 0.137, 0.155, and 0.38 for the baseline and 0.07, 0.09, and 0.29 for the proposed method.
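A minimal sketch of the discrete Fréchet distance used above to compare model-generated and measured CoG trajectories; the trajectories below are hypothetical:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two trajectories P and Q (arrays of 2-D points)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    ca = np.full((n, m), -1.0)
    d = lambda i, j: np.linalg.norm(P[i] - Q[j])
    ca[0, 0] = d(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d(i, j))
    return ca[n - 1, m - 1]

# Compare a measured CoG path against a model-generated one (hypothetical points)
measured = [(0, 0), (1, 0.9), (2, 1.0), (3, 0.8)]
simulated = [(0, 0.1), (1, 1.0), (2, 1.1), (3, 0.7)]
print(discrete_frechet(measured, simulated))
```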
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
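A minimal Monte Carlo sketch of the reliability-physics idea described above: the human error probability is the probability that operator performance time exceeds the phenomenological time available. The distributions and parameters below are hypothetical, not values from the MNR study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical distributions (minutes): time available before core damage
# (phenomenological) and operator response time (performance).
t_phenom = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)
t_perform = rng.lognormal(mean=np.log(12.0), sigma=0.6, size=n)

# Human error probability: the action is not completed within the time available
hep = np.mean(t_perform > t_phenom)
print(f"estimated HEP = {hep:.4f}")
```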
Performance Prediction of a Synchronization Link for Distributed Aerospace Wireless Systems
Shao, Huaizong
2013-01-01
For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate for oscillator frequency instability is investigated in this paper. Using statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link. PMID:23970828
Materials Flow through Industry Supply Chain Modeling Tool
...efficiency. It also performs supply chain scale analyses to quantify the impacts and benefits of next... (related article: Evaluating opportunities to improve material and energy impacts in commodity supply chains)
Kanevce, A.; Reese, Matthew O.; Barnes, T. M.; ...
2017-06-06
CdTe devices have reached efficiencies of 22% due to continuing improvements in bulk material properties, including minority carrier lifetime. Device modeling has helped to guide these device improvements by quantifying the impacts of material properties and different device designs on device performance. One of the barriers to truly predictive device modeling is the interdependence of these material properties. For example, interfaces become more critical as bulk properties, particularly hole density and carrier lifetime, increase. We present device-modeling analyses that describe the effects of recombination at the interfaces and grain boundaries as lifetime and doping of the CdTe layer change. The doping and lifetime should be priorities for maximizing open-circuit voltage (Voc) and efficiency improvements. However, interface and grain boundary recombination become bottlenecks for device performance at increased lifetime and doping levels. In conclusion, this work quantifies and discusses these emerging challenges for next-generation CdTe device efficiency.
Using deep neural networks to augment NIF post-shot analysis
NASA Astrophysics Data System (ADS)
Humbird, Kelli; Peterson, Luc; McClarren, Ryan; Field, John; Gaffney, Jim; Kruse, Michael; Nora, Ryan; Spears, Brian
2017-10-01
Post-shot analysis of National Ignition Facility (NIF) experiments is the process of determining which simulation inputs yield results consistent with experimental observations. This analysis is typically accomplished by running suites of manually adjusted simulations, or by Monte Carlo sampling of surrogate models that approximate the response surfaces of the physics code. These approaches are expensive and often find simulations that match only a small subset of observables simultaneously. We demonstrate an alternative method for performing post-shot analysis using inverse models, which map directly from experimental observables to simulation inputs with quantified uncertainties. The models are created using a novel machine learning algorithm which automates the construction and initialization of deep neural networks to optimize predictive accuracy. We show how these neural networks, trained on large databases of post-shot simulations, can rigorously quantify the agreement between simulation and experiment. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiser, I; Lu, Z
2014-06-01
Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs (N = 20, 40, 60, 80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference in PC between 20 and 80 image pairs. Results: For N = 100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for the CHO (Gabor), 7% for the CHO (LG), and 3% for the TM. The relative standard deviation, σ(PC)/PC, at N = 20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
Dripps, W.R.; Bradbury, K.R.
2007-01-01
Quantifying the spatial and temporal distribution of natural groundwater recharge is usually a prerequisite for effective groundwater modeling and management. As flow models become increasingly utilized for management decisions, there is an increased need for simple, practical methods to delineate recharge zones and quantify recharge rates. Existing models for estimating recharge distributions are data intensive, require extensive parameterization, and take a significant investment of time in order to establish. The Wisconsin Geological and Natural History Survey (WGNHS) has developed a simple daily soil-water balance (SWB) model that uses readily available soil, land cover, topographic, and climatic data in conjunction with a geographic information system (GIS) to estimate the temporal and spatial distribution of groundwater recharge at the watershed scale for temperate humid areas. To demonstrate the methodology and the applicability and performance of the model, two case studies are presented: one for the forested Trout Lake watershed of north central Wisconsin, USA and the other for the urban-agricultural Pheasant Branch Creek watershed of south central Wisconsin, USA. Overall, the SWB model performs well and presents modelers and planners with a practical tool for providing recharge estimates for modeling and water resource planning purposes in humid areas. © Springer-Verlag 2007.
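A minimal sketch of a daily soil-water balance of the kind described above, in which water in excess of the soil's capacity becomes recharge; this is a simplified illustration (no runoff or snow terms) with hypothetical inputs, not the WGNHS SWB code:

```python
def daily_recharge(precip, pet, soil_capacity=100.0, initial_storage=50.0):
    """Minimal daily soil-water balance: storage above field capacity after
    precipitation and evapotranspiration drains to groundwater (all in mm)."""
    storage = initial_storage
    recharge = []
    for p, e in zip(precip, pet):
        storage += p - min(e, storage + p)   # add rain, remove ET limited by available water
        if storage > soil_capacity:          # surplus above capacity becomes recharge
            recharge.append(storage - soil_capacity)
            storage = soil_capacity
        else:
            recharge.append(0.0)
    return recharge

# Hypothetical 5-day sequence of precipitation and potential ET (mm/day)
print(daily_recharge(precip=[0, 60, 55, 0, 5], pet=[3, 2, 2, 4, 3]))
```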
Are Water-lean Solvent Systems Viable for Post-Combustion CO2 Capture?
Heldebrant, David J.; Koech, Phillip K.; Rousseau, Roger; ...
2017-08-18
Here we present an overview of water-lean solvents that compares their projected costs and performance to aqueous amine systems, emphasizing critical areas of study needed to evaluate their performance against their water-based brethren. The work presented here focuses on bridging these knowledge gaps. Because the majority of water-lean solvents are still at the lab scale, substantial studies are still needed to model their performance at scale. This presents a significant challenge, as each formulation has different physical and thermodynamic properties and behavior, and quantifying how these different properties manifest themselves in conventional absorber-stripper configurations, or identifying new configurations that are specific to a solvent's signature behavior, is nontrivial. We identify critical areas of study that are needed, and our efforts (e.g. custom infrastructure, molecular models) to predict, measure, and model these behaviors. Such findings are critical for determining the rheology required for heat exchanger design; absorber designs and packing to accommodate solvents with gradient changes (e.g. viscosity, contact angle, surface tension); and stripper configurations without direct steam utilization or water reflux. Another critical area of research need is to understand the molecular structure of the liquid interface and bulk as a function of CO2 loading, and to assess whether conventional film theories accurately quantify solvent behavior, or if thermodynamic models adequately quantify activity coefficients of ions in solution. We conclude with an assessment of our efforts to aid in bridging the knowledge gaps in understanding water-lean solvents, and suggestions of what is needed to enable large-scale demonstrations to meet the United States Department of Energy's year 2030 goal.
Andrew J. Shirk; Michael A. Schroeder; Leslie A. Robb; Samuel A. Cushman
2015-01-01
The ability of landscapes to impede species' movement or gene flow may be quantified by resistance models. Few studies have assessed the performance of resistance models parameterized by expert opinion. In addition, resistance models differ in terms of spatial and thematic resolution as well as their focus on the ecology of a particular species or more generally on the...
Ali, A F; Taha, M M Reda; Thornton, G M; Shrive, N G; Frank, C B
2005-06-01
In normal daily activities, ligaments are subjected to repeated loads, and respond to this environment with creep and fatigue. While progressive recruitment of the collagen fibers is responsible for the toe region of the ligament stress-strain curve, recruitment also represents an elegant feature to help ligaments resist creep. The use of artificial intelligence techniques in computational modeling allows a large number of parameters and their interactions to be incorporated beyond the capacity of classical mathematical models. The objective of the work described here is to demonstrate a tool for modeling creep of the rabbit medial collateral ligament that can incorporate the different parameters while quantifying the effect of collagen fiber recruitment during creep. An intelligent algorithm was developed to predict ligament creep. The modeling is performed in two steps: first, the ill-defined fiber recruitment is quantified using fuzzy logic. Second, this fiber recruitment is incorporated along with creep stress and creep time to model creep using an adaptive neuro-fuzzy inference system. The model was trained and tested using an experimental database including creep tests and crimp image analysis. The model confirms that quantification of fiber recruitment is important for accurate prediction of ligament creep behavior at physiological loads.
AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6
Collins, William J.; Lamarque, Jean -François; Schulz, Michael; ...
2017-02-09
The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically reactive gases. These are specifically near-term climate forcers (NTCFs: methane, tropospheric ozone and aerosols, and their precursors), nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions. 1. How have anthropogenic emissions contributed to global radiative forcing and affected regional climate over the historical period? 2. How might future policies (on climate, air quality and land use) affect the abundances of NTCFs and their climate impacts? 3. How do uncertainties in historical NTCF emissions affect radiative forcing estimates? 4. How important are climate feedbacks to natural NTCF emissions, atmospheric composition, and radiative effects? These questions will be addressed through targeted simulations with CMIP6 climate models that include an interactive representation of tropospheric aerosols and atmospheric chemistry. These simulations build on the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the CMIP6 historical simulations, and future projections performed elsewhere in CMIP6, allowing the contributions from aerosols and/or chemistry to be quantified. As a result, specific diagnostics are requested as part of the CMIP6 data request to highlight the chemical composition of the atmosphere, to evaluate the performance of the models, and to understand differences in behaviour between them.
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
NASA Astrophysics Data System (ADS)
Shim, J. S.; Tsagouri, I.; Goncharenko, L. P.; Kuznetsova, M. M.
2017-12-01
To address the challenges of assessing space weather modeling capabilities, the CCMC (Community Coordinated Modeling Center) is leading the newly established "International Forum for Space Weather Modeling Capabilities Assessment." This presentation will focus on preliminary outcomes of the International Forum on validation of modeled foF2 and TEC during geomagnetic storms. We investigate the ionospheric response to the March 2013 geomagnetic storm event using ionosonde and GPS TEC observations in the North American and European sectors. To quantify storm impacts on foF2 and TEC, we first quantify the quiet-time variations of foF2 and TEC (e.g., the median and the average of the five quietest days within the 30-day quiet period). The quiet-time variations of foF2 and TEC are about 10% and 20-30%, respectively. Therefore, to quantify the storm impact, we focus on foF2 and TEC changes during the storm main phase larger than 20% and 50%, respectively, compared to the 30-day median. We find that in the European sector, both the foF2 and TEC responses to the storm are mainly positive, with foF2 increases of up to 100% and TEC increases of up to 150%. In the North American sector, however, foF2 shows negative effects (up to about a 50% decrease), while TEC shows a positive response (the largest increase is about 200%). To assess the capability of models to reproduce the changes in foF2 and TEC due to the storm, we use various model simulations obtained from empirical, physics-based, and data assimilation models. The performance of each model depends on the selected metric; therefore, a single metric is not enough to evaluate the models' predictive capabilities in capturing the storm impact. Model performance also varies with latitude and longitude.
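A minimal sketch of the storm-impact metric described above: the percent change of foF2 or TEC relative to a quiet-time median, flagged against the thresholds mentioned; all values are hypothetical:

```python
import numpy as np

def storm_impact_percent(values, quiet_values):
    """Percent change of foF2 or TEC relative to the quiet-time median."""
    quiet_median = np.median(quiet_values)
    return 100.0 * (np.asarray(values) - quiet_median) / quiet_median

# Hypothetical hourly TEC (TECU) during the storm main phase vs. a quiet-time sample
storm_tec = [25.0, 38.0, 52.0]
quiet_tec = [18.0, 20.0, 22.0, 19.0, 21.0]
impact = storm_impact_percent(storm_tec, quiet_tec)
print(impact, impact > 50.0)   # flag changes larger than the 50% TEC threshold
```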
NASA Astrophysics Data System (ADS)
Luna, Aderval S.; Gonzaga, Fabiano B.; da Rocha, Werickson F. C.; Lima, Igor C. A.
2018-01-01
Laser-induced breakdown spectroscopy (LIBS) analysis was carried out on eleven steel samples to quantify the concentrations of chromium, nickel, and manganese. LIBS spectral data were correlated to the known concentrations of the samples using different strategies in partial least squares (PLS) regression models. For the PLS analysis, one predictive model was generated separately for each element, while different approaches were used for variable selection (VIP: variable importance in projection; iPLS: interval partial least squares) in the PLS model to quantify the contents of the elements. A comparison of the models' performance showed no statistically significant difference according to the Wilcoxon signed-rank test. The elliptical joint confidence region (EJCR) did not detect systematic errors in the proposed methodologies for each metal.
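A minimal sketch of the PLS calibration step described above, using scikit-learn and synthetic stand-ins for the LIBS spectra and concentrations (the VIP/iPLS variable-selection strategies are not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(11, 300))          # stand-in for LIBS spectra (11 samples x 300 channels)
true_coef = np.zeros(300)
true_coef[40:45] = 1.0                  # a few informative "emission lines"
y = X @ true_coef + rng.normal(scale=0.1, size=11)   # stand-in Cr concentration

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()    # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```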
Control of maglev vehicles with aerodynamic and guideway disturbances
NASA Technical Reports Server (NTRS)
Flueckiger, Karl; Mark, Steve; Caswell, Ruth; Mccallum, Duncan
1994-01-01
A modeling, analysis, and control design methodology is presented for maglev vehicle ride quality performance improvement as measured by the Pepler Index. Ride quality enhancement is considered through active control of secondary suspension elements and active aerodynamic surfaces mounted on the train. To analyze and quantify the benefits of active control, the authors have developed a five degree-of-freedom lumped parameter model suitable for describing a large class of maglev vehicles, including both channel and box-beam guideway configurations. Elements of this modeling capability have been recently employed in studies sponsored by the U.S. Department of Transportation (DOT). A perturbation analysis about an operating point, defined by vehicle and average crosswind velocities, yields a suitable linearized state space model for multivariable control system analysis and synthesis. Neglecting passenger compartment noise, the ride quality as quantified by the Pepler Index is readily computed from the system states. A statistical analysis is performed by modeling the crosswind disturbances and guideway variations as filtered white noise, whereby the Pepler Index is established in closed form through the solution to a matrix Lyapunov equation. Data is presented which indicates the anticipated ride quality achieved through various closed-loop control arrangements.
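A minimal sketch of the closed-form step described above: obtaining the steady-state state covariance of a disturbance-driven linear model from a matrix Lyapunov equation. The two-state system and disturbance intensity are illustrative, and the Pepler Index weighting itself is not included:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy 2-state suspension model driven by filtered white-noise guideway/crosswind disturbances:
# x' = A x + B w with E[w w'] = W; the steady-state covariance P solves A P + P A' = -B W B'.
A = np.array([[0.0, 1.0],
              [-40.0, -6.0]])   # illustrative stiffness/damping values
B = np.array([[0.0],
              [1.0]])
W = np.array([[0.5]])           # assumed disturbance intensity

P = solve_continuous_lyapunov(A, -B @ W @ B.T)

# RMS states from which a ride-quality index could be assembled
print("steady-state state covariance:\n", P)
print("RMS of each state:", np.sqrt(np.diag(P)))
```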
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
POPEYE: A production rule-based model of multitask supervisory control (POPCORN)
NASA Technical Reports Server (NTRS)
Townsend, James T.; Kadlec, Helena; Kantowitz, Barry H.
1988-01-01
Recent studies of relationships between subjective ratings of mental workload, performance, and human operator and task characteristics have indicated that these relationships are quite complex. In order to study the various relationships and place subjective mental workload within a theoretical framework, we developed a production system model for the performance component of the complex supervisory task called POPCORN. The production system model is represented by a hierarchical structure of goals and subgoals, and the information flow is controlled by a set of condition-action rules. The implementation of this production system, called POPEYE, generates computer-simulated data under different task difficulty conditions which are comparable to those of human operators performing the task. This model is the performance aspect of an overall dynamic psychological model which we are developing to examine and quantify relationships between performance and psychological aspects in a complex environment.
Net reclassification index at event rate: properties and relationships.
Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B
2017-12-10
The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance due to the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it was quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision-analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
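A minimal sketch of the characterization given above, computing the Kolmogorov-Smirnov distance between predicted-risk distributions for events and non-events and taking the new-minus-old difference; the risk samples are synthetic:

```python
import numpy as np

def ks_distance(risk_events, risk_nonevents):
    """Kolmogorov-Smirnov distance between predicted-risk distributions for events
    and non-events (maximum gap between sensitivity and 1-specificity over all cut-offs)."""
    thresholds = np.unique(np.concatenate([risk_events, risk_nonevents]))
    tpr = np.array([(risk_events >= t).mean() for t in thresholds])     # sensitivity
    fpr = np.array([(risk_nonevents >= t).mean() for t in thresholds])  # 1 - specificity
    return np.max(np.abs(tpr - fpr))

rng = np.random.default_rng(7)
# Hypothetical predicted risks from the old and new models
old_events, old_non = rng.beta(3, 4, 200), rng.beta(2, 6, 800)
new_events, new_non = rng.beta(4, 3, 200), rng.beta(2, 7, 800)
nri_at_event_rate = ks_distance(new_events, new_non) - ks_distance(old_events, old_non)
print(f"NRI at event rate (difference of KS distances): {nri_at_event_rate:.3f}")
```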
Opportunities of probabilistic flood loss models
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno
2016-04-01
Oftentimes, traditional uni-variate damage models such as depth-damage curves fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of model results. Innovative multi-variate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after each of the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial-transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and the sharpness of the predictions as well as their reliability, which is represented by the proportion of observations that fall within the 5- to 95-quantile predictive interval. The comparison of the uni-variable stage-damage function and the multi-variable model approach emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variable model reveals an additional source of uncertainty. However, the predictive performance in terms of precision (mbe), accuracy (mae) and reliability (HR) is clearly improved in comparison to the uni-variable stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
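A minimal sketch of the split-sample evaluation metrics described above (mean bias, mean absolute error, and the hit rate of the 5-95% predictive interval); the damage observations and predictive samples are synthetic:

```python
import numpy as np

def evaluate(y_true, pred_samples):
    """Split-sample evaluation of a probabilistic damage model.

    pred_samples: array of shape (n_buildings, n_samples) with predictive samples
    of relative damage for each hold-out record (hypothetical input)."""
    pred_mean = pred_samples.mean(axis=1)
    mbe = np.mean(pred_mean - y_true)                    # mean bias error
    mae = np.mean(np.abs(pred_mean - y_true))            # mean absolute error
    lo = np.quantile(pred_samples, 0.05, axis=1)
    hi = np.quantile(pred_samples, 0.95, axis=1)
    hit_rate = np.mean((y_true >= lo) & (y_true <= hi))  # reliability of the 5-95% interval
    return mbe, mae, hit_rate

rng = np.random.default_rng(11)
y_true = rng.beta(2, 8, size=100)                        # observed relative damage (synthetic)
pred_samples = rng.beta(2, 8, size=(100, 500)) + rng.normal(0, 0.02, (100, 1))
print(evaluate(y_true, pred_samples))
```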
NASA Astrophysics Data System (ADS)
Deng, Wei; Wang, Jun
2015-06-01
We investigate and quantify the multifractal detrended cross-correlation of return-interval series for Chinese stock markets and a proposed price model, where the price model is established by oriented percolation. The return interval is the waiting time between two successive price volatilities that exceed a given threshold, and the present work attempts to quantify the level of multifractal detrended cross-correlation for these return intervals. Further, the concept of an MF-DCCA coefficient of return intervals is introduced, and the corresponding empirical analysis is performed. The empirical results show that the return intervals of the SSE and SZSE are weakly positively multifractal power-law cross-correlated and exhibit characteristic fluctuation patterns of the MF-DCCA coefficients. Similar behavior of the return intervals is also demonstrated for the price model.
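For orientation, the sketch below extracts return intervals from a synthetic return series, assuming a volatility threshold expressed in standard deviations of absolute returns (the paper's exact threshold convention may differ); the MF-DCCA computation itself is not reproduced.

```python
import numpy as np

def return_intervals(returns, q=2.0):
    """Waiting times between successive volatilities exceeding q standard deviations."""
    vol = np.abs(returns)
    threshold = q * vol.std()
    exceed_times = np.flatnonzero(vol > threshold)
    return np.diff(exceed_times)

rng = np.random.default_rng(0)
r = rng.standard_normal(10000) * 0.01   # placeholder return series, not real market data
print(return_intervals(r)[:10])
```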
Focal ratio degradation: a new perspective
NASA Astrophysics Data System (ADS)
Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss
2008-07-01
We have developed an alternative FRD empirical model for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify and potentially minimize the various sources of FRD, and optimise the fiber and instrument performance.
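A hedged sketch of the fitting step implied above: a Voigt profile, the convolution of a Gaussian (modal diffusion) and a Lorentzian (scattering), is fitted to a hypothetical radial intensity profile using SciPy; the data and parameter values are placeholders rather than measured FRD distributions.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def voigt(x, amp, center, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (width sigma) and a Lorentzian (width gamma)."""
    return amp * voigt_profile(x - center, sigma, gamma)

# hypothetical radial intensity profile of the output ring (replace with measured data)
x = np.linspace(-5, 5, 200)
data = voigt(x, 1.0, 0.0, 0.8, 0.3) + np.random.default_rng(2).normal(0, 0.01, x.size)

popt, _ = curve_fit(voigt, x, data, p0=[1.0, 0.0, 1.0, 0.1])
amp, center, sigma, gamma = popt
# the Lorentzian width (gamma) tracks scattering, the Gaussian width (sigma) modal diffusion
print(f"sigma={sigma:.3f}, gamma={gamma:.3f}")
```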
Quantifying climate feedbacks in polar regions.
Goosse, Hugues; Kay, Jennifer E; Armour, Kyle C; Bodas-Salcedo, Alejandro; Chepfer, Helene; Docquier, David; Jonko, Alexandra; Kushner, Paul J; Lecomte, Olivier; Massonnet, François; Park, Hyo-Seok; Pithan, Felix; Svensson, Gunilla; Vancoppenolle, Martin
2018-05-15
The concept of feedback is key in assessing whether a perturbation to a system is amplified or damped by mechanisms internal to the system. In polar regions, climate dynamics are controlled by both radiative and non-radiative interactions between the atmosphere, ocean, sea ice, ice sheets and land surfaces. Precisely quantifying polar feedbacks is required for a process-oriented evaluation of climate models, a clear understanding of the processes responsible for polar climate changes, and a reduction in uncertainty associated with model projections. This quantification can be performed using a simple and consistent approach that is valid for a wide range of feedbacks, offering the opportunity for more systematic feedback analyses and a better understanding of polar climate changes.
Thermal comparison of buried-heterostructure and shallow-ridge lasers
NASA Astrophysics Data System (ADS)
Rustichelli, V.; Lemaître, F.; Ambrosius, H. P. M. M.; Brenot, R.; Williams, K. A.
2018-02-01
We present finite difference thermal modeling to predict temperature distribution, heat flux, and thermal resistance inside lasers with different waveguide geometries. We provide a quantitative experimental and theoretical comparison of the thermal behavior of shallow-ridge (SR) and buried-heterostructure (BH) lasers. We investigate the influence of a split heat source to describe p-layer Joule heating and nonradiative energy loss in the active layer, and the heat-sinking from the top as well as the bottom, when quantifying thermal impedance. From both measured values and numerical modeling we quantify the thermal resistance of BH and SR lasers, showing an improvement in thermal resistance from 50 K/W to 30 K/W for otherwise equivalent BH laser designs.
Quantifying transfer after perceptual-motor sequence learning: how inflexible is implicit learning?
Sanchez, Daniel J; Yarnik, Eric N; Reber, Paul J
2015-03-01
Studies of implicit perceptual-motor sequence learning have often shown learning to be inflexibly tied to the training conditions. Since sequence learning is seen as a model task of skill acquisition, limits on the ability to transfer knowledge from the training context to a performance context indicate important constraints on skill learning approaches. Lack of transfer across contexts has been demonstrated by showing that when task elements are changed following training, performance is disrupted. These results have typically been taken to suggest that sequence knowledge relies on integrated representations across task elements (Abrahamse, Jiménez, Verwey, & Clegg, Psychon Bull Rev 17:603-623, 2010a). Using a relatively new sequence learning task, serial interception sequence learning, three experiments are reported that quantify the magnitude of performance disruption after selectively manipulating individual aspects of motor performance or perceptual information. In Experiment 1, selective disruption of the timing or order of sequential actions was examined using a novel response manipulandum that allowed for separate analysis of these two motor response components. In Experiments 2 and 3, transfer was examined after selective disruption of perceptual information that left the motor response sequence intact. All three experiments provided quantifiable estimates of partial transfer to novel contexts, suggesting some level of information integration across task elements. However, the ability to identify quantifiable levels of successful transfer indicates that integration is not all-or-none and that measurement sensitivity is key to understanding sequence knowledge representations.
WATER COLUMN DATA AND SPECTRAL IRRADIANCE MODEL
Water samples collected monthly, for 18 months, from six sites in the Laguna Madre were analyzed to identify and quantify phytopigments using High Performance Liquid Chromatography (HPLC). In addition, water column pigment and nutrient data were acquired at 12 stations in Upper ...
Simulating the Use of Alternative Fuels in a Turbofan Engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Chin, Jeffrey Chevoor; Liu, Yuan
2013-01-01
The interest in alternative fuels for aviation has created a need to evaluate their effect on engine performance. The use of dynamic turbofan engine simulations enables comparative modeling of the performance of these fuels against traditional fuels on a realistic test bed, in terms of dynamic response and control. The analysis of overall engine performance and response characteristics can lead to a determination of the practicality of using specific alternative fuels in commercial aircraft. This paper describes a procedure to model the use of alternative fuels in a large commercial turbofan engine, and quantifies their effects on engine and vehicle performance. In addition, the modeling effort notionally demonstrates that engine performance may be maintained by modifying engine control system software parameters to account for the alternative fuel.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, WanYin; Zhang, Jie; Florita, Anthony
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
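For reference, a small sketch of the NRMSE metric used above, assuming normalization by the 51-kW plant capacity (the report's exact normalization convention is not restated here) and purely hypothetical forecast/observation values.

```python
import numpy as np

def nrmse(forecast, observed, capacity=51.0):
    """Normalized root mean squared error; here normalized by plant capacity in kW (assumed convention)."""
    return np.sqrt(np.mean((forecast - observed) ** 2)) / capacity

# hypothetical day-ahead power forecasts from two models vs. theoretical output (kW)
obs = np.array([0, 5, 18, 30, 41, 38, 22, 8, 0], dtype=float)
nwp = obs + np.array([0, 3, -6, 5, 7, -4, 6, -2, 0])
ensemble = obs + np.array([0, 1, -2, 2, 3, -2, 2, -1, 0])
print(nrmse(nwp, obs), nrmse(ensemble, obs))
```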
Assessment of Modeling Capability for Reproducing Storm Impacts on TEC
NASA Astrophysics Data System (ADS)
Shim, J. S.; Kuznetsova, M. M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Foerster, M.; Foster, B.; Fuller-Rowell, T. J.; Huba, J. D.; Goncharenko, L. P.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2014-12-01
During geomagnetic storms, the energy transfer from the solar wind to the magnetosphere-ionosphere system adversely affects communication and navigation systems. Quantifying storm impacts on TEC (Total Electron Content) and assessing the capability of models to reproduce storm impacts on TEC are important for specifying and forecasting space weather. In order to quantify storm impacts on TEC, we considered several parameters: TEC changes compared to quiet time (the day before the storm), TEC differences between 24-hour intervals, and the maximum increase/decrease during the storm. We investigated the spatial and temporal variations of the parameters during the 2006 AGU storm event (14-15 Dec. 2006) using ground-based GPS TEC measurements in eight selected 5-degree-wide longitude sectors. The latitudinal variations were also studied in two of the eight sectors where data coverage is relatively better. We obtained modeled TEC from various ionosphere/thermosphere (IT) models. The parameters from the models were compared with each other and with the observed values. We quantified the performance of the models in reproducing the TEC variations during the storm using skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
Quantifying risk and benchmarking performance in the adult intensive care unit.
Higgins, Thomas L
2007-01-01
Morbidity, mortality, and length-of-stay outcomes in patients receiving critical care are difficult to interpret unless they are risk-stratified for diagnosis, presenting severity of illness, and other patient characteristics. Acuity adjustment systems for adults include the Acute Physiology And Chronic Health Evaluation (APACHE), the Mortality Probability Model (MPM), and the Simplified Acute Physiology Score (SAPS). All have recently been updated and recalibrated to reflect contemporary results. Specialized scores are also available for patient subpopulations where general acuity scores have drawbacks. Demand for outcomes data is likely to grow with pay-for-performance initiatives as well as for routine clinical, prognostic, administrative, and research applications. It is important for clinicians to understand how these scores are derived and how they are properly applied to quantify patient severity of illness and benchmark intensive care unit performance.
Energetic arousal and language: predictions from the computational theory of quantifiers processing.
Zajenkowski, Marcin
2013-10-01
The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers. Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence-picture verification task. In the present study, 201 students were presented with a sentence-picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants' EA was measured before and after the verification task. It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship. This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA-cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate. The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.
Kaneko, Hiromasa; Funatsu, Kimito
2013-09-23
We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.
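The sketch below shows one plausible reading of these criteria, assuming that midpoints between each sample and its k nearest neighbours act as pseudo-validation points whose reference response is the average of the two endpoint responses; the data, model, and parameter choices are illustrative only and are not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def midknn_criteria(model, X, y, k=5):
    """r^2 and RMSE evaluated at midpoints between k-nearest-neighbour pairs (no cross-validation)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # idx[:, 0] is the point itself
    i = np.repeat(np.arange(len(X)), k)
    j = idx[:, 1:].ravel()
    X_mid = (X[i] + X[j]) / 2.0               # pseudo-validation inputs
    y_ref = (y[i] + y[j]) / 2.0               # reference responses at the midpoints
    y_pred = model.predict(X_mid)
    rmse = np.sqrt(np.mean((y_pred - y_ref) ** 2))
    r2 = 1.0 - np.sum((y_pred - y_ref) ** 2) / np.sum((y_ref - y_ref.mean()) ** 2)
    return r2, rmse

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 300)
model = SVR(C=10.0, gamma=0.3).fit(X, y)      # a generic nonlinear regressor as a stand-in
print(midknn_criteria(model, X, y))
```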
Uncertainty in the Modeling of Tsunami Sediment Transport
NASA Astrophysics Data System (ADS)
Jaffe, B. E.; Sugawara, D.; Goto, K.; Gelfenbaum, G. R.; La Selle, S.
2016-12-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. A recent study (Jaffe et al., 2016) explores sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami properties, study site characteristics, available input data, sediment grain size, and the model used. Although uncertainty has the potential to be large, case studies for both forward and inverse models have shown that sediment transport modeling provides useful information on tsunami inundation and hydrodynamics that can be used to improve tsunami hazard assessment. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and the development of hybrid modeling approaches to exploit the strengths of forward and inverse models. As uncertainty in tsunami sediment transport modeling is reduced, and with increased ability to quantify uncertainty, the geologic record of tsunamis will become more valuable in the assessment of tsunami hazard. Jaffe, B., Goto, K., Sugawara, D., Gelfenbaum, G., and La Selle, S., "Uncertainty in Tsunami Sediment Transport Modeling", Journal of Disaster Research Vol. 11 No. 4, pp. 647-661, 2016, doi: 10.20965/jdr.2016.p0647 https://www.fujipress.jp/jdr/dr/dsstr001100040647/
Sun, Guoxiang; Zhang, Jingxian
2009-05-01
The three-wavelength fusion high performance liquid chromatographic fingerprint (TWFFP) of Longdanxiegan pill (LDXGP) was established to identify the quality of LDXGP by the systematic quantified fingerprint method. The chromatographic fingerprints (CFPs) of 12 batches of LDXGP were determined by reversed-phase high performance liquid chromatography. The technique of multi-wavelength fusion fingerprinting was applied in processing the fingerprints. TWFFPs containing 63 co-possessing peaks were obtained when the baicalin peak was chosen as the reference peak. The 12 batches of LDXGP were classified with hierarchical clustering analysis using the macro qualitative similarity (S(m)) as the variable. According to the classification results, the referential fingerprint (RFP) was synthesized from 10 batches of LDXGP. Taking the RFP as the qualified model, all 12 batches of LDXGP were evaluated by the systematic quantified fingerprint method. Among the 12 batches of LDXGP, 9 batches were completely qualified, the contents of 1 batch were markedly higher, and the quantity and distributed proportion of chemical constituents in 2 batches were not qualified. The systematic quantified fingerprint method based on multi-wavelength fusion fingerprinting can effectively identify the authentic quality of traditional Chinese medicine.
Chimpanzees (Pan troglodytes) and bonobos (Pan paniscus) quantify split solid objects.
Cacchione, Trix; Hrubesch, Christine; Call, Josep
2013-01-01
Recent research suggests that gorillas' and orangutans' object representations survive cohesion violations (e.g., a split of a solid object into two halves), but that their processing of quantities may be affected by them. We assessed chimpanzees' (Pan troglodytes) and bonobos' (Pan paniscus) reactions to various fission events in the same series of action tasks modelled after infant studies previously run on gorillas and orangutans (Cacchione and Call in Cognition 116:193-203, 2010b). Results showed that all four non-human great ape species managed to quantify split objects but that their performance varied as a function of the non-cohesiveness produced in the splitting event. Spatial ambiguity and shape invariance had the greatest impact on apes' ability to represent and quantify objects. Further, we observed species differences with gorillas performing lower than other species. Finally, we detected a substantial age effect, with ape infants below 6 years of age being outperformed by both juvenile/adolescent and adult apes.
Bradley, Paul S; Ade, Jack D
2018-01-18
Time-motion analysis is a valuable data-collection technique used to quantify the physical match performance of elite soccer players. For over 40 years researchers have adopted a 'traditional' approach when evaluating match demands by simply reporting the distance covered or time spent along a motion continuum from walking through to sprinting. This methodology quantifies physical metrics in isolation without integrating other factors, and this ultimately leads to a one-dimensional insight into match performance. Thus, this commentary proposes a novel 'integrated' approach that focuses on a sensitive physical metric such as high-intensity running but contextualizes it in relation to key tactical activities for each position and collectively for the team. In the example presented, the 'integrated' model clearly unveils the unique high-intensity profile that exists due to distinct tactical roles, rather than the one-dimensional 'blind' distances produced by 'traditional' models. Intuitively, this innovative concept may aid coaches' understanding of physical performance in relation to the tactical roles and instructions given to the players. Additionally, it will enable practitioners to more effectively translate match metrics into training and testing protocols. This innovative model may well aid advances in other team sports that incorporate similar intermittent movements with tactical purpose. Evidence of the merits and application of this new concept is needed before the scientific community accepts this model, as it may well add complexity to an area that conceivably needs simplicity.
A framework for quantifying net benefits of alternative prognostic models.
Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G
2012-01-30
New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.
PDF investigations of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Pope, Stephen
2012-11-01
PDF (probability density function) modeling studies are carried out for the Sydney piloted jet flames. These Sydney flames feature much thinner reaction zones in the mixture fraction space compared to those in the well-studied Sandia piloted jet flames. The performance of the different turbulent combustion models in the Sydney flames with thin reaction zones has not been examined extensively before, and this work aims at evaluating the capability of the PDF method to represent the thin turbulent flame structures in the Sydney piloted flames. Parametric and sensitivity PDF studies are performed with respect to the different models and model parameters. A global error parameter is defined to quantify the departure of the simulation results from the experimental data, and is used to assess the performance of the different set of models and model parameters.
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
Focuses on the turboexpander/refrigeration system's radial expander and radial compressor. Explains that radial expander efficiency depends on mass flow rate, inlet pressure, inlet temperature, discharge pressure, gas composition, and shaft speed. Discusses quantifying the performance of the separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curve of individual components. Emphasizes antisurge control and modifying Q/N (flow rate/ shaft speed).
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
Williamson, Tanja N.; Taylor, Charles J.; Newson, Jeremy K.
2013-01-01
The Water Availability Tool for Environmental Resources (WATER) is a TOPMODEL-based hydrologic model that depends on spatially accurate soils data to function in diverse terranes. In Kentucky, this includes mountainous regions, karstic plateau, and alluvial plains. Soils data are critical because they quantify the space to store water, as well as how water moves through the soil to the stream during storm events. We compared how the model performs using two different sources of soils data--Soil Survey Geographic Database (SSURGO) and State Soil Geographic Database laboratory data (STATSGO)--for 21 basins ranging in size from 17 to 1564 km2. Model results were consistently better when SSURGO data were used, likely due to the higher field capacity, porosity, and available-water holding capacity, which cause the model to store more soil-water in the landscape and improve streamflow estimates for both low- and high-flow conditions. In addition, there were significant differences in the conductivity multiplier and scaling parameter values that describe how water moves vertically and laterally, respectively, as quantified by TOPMODEL. We also evaluated whether partitioning areas that drain to streams via sinkholes in karstic basins as separate hydrologic modeling units (HMUs) improved model performance. There were significant differences between HMUs in properties that control soil-water storage in the model, although the effect of partitioning these HMUs on streamflow simulation was inconclusive.
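The abstract does not restate its evaluation statistics, so the sketch below uses the Nash-Sutcliffe efficiency, a common streamflow skill score, with made-up flows to illustrate how simulations driven by SSURGO versus STATSGO soils data might be compared for a single basin.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency, a common skill score for streamflow simulations (1 is perfect)."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

# hypothetical daily streamflow (m^3/s) for one basin under the two soil parameterizations
obs = np.array([1.2, 1.1, 5.4, 9.8, 4.2, 2.0, 1.5, 1.3])
sim_ssurgo = np.array([1.3, 1.2, 5.0, 9.1, 4.6, 2.3, 1.6, 1.4])
sim_statsgo = np.array([1.6, 1.5, 3.9, 7.0, 5.5, 3.0, 2.1, 1.9])
print(nash_sutcliffe(sim_ssurgo, obs), nash_sutcliffe(sim_statsgo, obs))
```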
The value of compressed air energy storage in energy and reserve markets
Drury, Easan; Denholm, Paul; Sioshansi, Ramteen
2011-06-28
Storage devices can provide several grid services; however, it is challenging to quantify the value of providing several services and to optimally allocate storage resources to maximize value. We develop a co-optimized Compressed Air Energy Storage (CAES) dispatch model to characterize the value of providing operating reserves in addition to energy arbitrage in several U.S. markets. We use the model to: (1) quantify the added value of providing operating reserves in addition to energy arbitrage; (2) evaluate the dynamic nature of optimally allocating storage resources into energy and reserve markets; and (3) quantify the sensitivity of CAES net revenues to several design and performance parameters. We find that conventional CAES systems could earn an additional $23 ± 10/kW-yr by providing operating reserves, and adiabatic CAES systems could earn an additional $28 ± 13/kW-yr. We find that arbitrage-only revenues are unlikely to support a CAES investment in most market locations, but the addition of reserve revenues could support a conventional CAES investment in several markets. Adiabatic CAES revenues are not likely to support an investment in most regions studied. As a result, modifying CAES design and performance parameters primarily impacts arbitrage revenues, and optimizing CAES design will be nearly independent of dispatch strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A
2014-01-01
Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at large spatial scales so as to generalize the directionality of hydrologic responses.
Muñoz, Mario A; Smith-Miles, Kate A
2017-01-01
This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
NASA Astrophysics Data System (ADS)
Nacif el Alaoui, Reda
Mechanical structure-property relations have been quantified for AISI 4140 steel under different strain rates and temperatures. The structure-property relations were used to calibrate a microstructure-based internal state variable plasticity-damage model for monotonic tension, compression and torsion plasticity, as well as damage evolution. Strong stress-state and temperature dependences were observed for the AISI 4140 steel. Tension tests on three different notched Bridgman specimens were undertaken to study the damage-triaxiality dependence for model validation purposes. Fracture surface analysis was performed using Scanning Electron Microscopy (SEM) to quantify the void nucleation and void sizes in the different specimens. The stress-strain behavior exhibited a fairly large applied stress-state dependence (tension, compression, and torsion), a moderate temperature dependence, and a relatively small strain-rate dependence.
Fogt, Donovan L; Kalns, John E; Michael, Darren J
2010-12-01
Fatigue is known to impair cognitive performance, but it remains unclear whether concurrent common stressors affect cognitive performance similarly. We used the Stroop Color-Word Conflict Test to assess cognitive performance over 24 hours for four groups: control, sleep-deprived (SD), SD + energy deficit, and SD + energy deficit + fluid restricted. Fatigue levels were quantified using the Profile of Mood States (POMS) survey. Linear mixed-effects (LME) models allowed for testing of group-specific differences in cognitive performance while accounting for subject-level variation. Starting fatigue levels were similar among all groups, while 24-hour fatigue levels differed significantly. For each cognitive performance test, results were modeled separately. The simplest LME model contained a significant fixed-effects term for slope and intercept. Moreover, the simplest LME model used a single slope coefficient to fit data from all four groups, suggesting that loss in cognitive performance over a 24-hour duty cycle with respect to fatigue level is similar regardless of the cause.
Geothermal Impact Analysis | Geothermal Technologies | NREL
NREL's geothermal impact analysis examines potential geothermal growth scenarios, jobs and economic impacts, clean energy manufacturing, and geothermal resources. The work includes performing resource analysis, developing techno-economic models, and quantifying environmental impacts of growth scenarios across multiple market sectors (see the GeoVision Study).
Nassar, Cíntia Cristina Souza; Bondan, Eduardo Fernandes; Alouche, Sandra Regina
2009-09-01
Multiple sclerosis is a demyelinating disease of the central nervous system associated with varied levels of disability. The impact of early physiotherapeutic interventions on disease progression is unknown. We used an experimental model of demyelination with the gliotoxic agent ethidium bromide and early aquatic exercise to evaluate the motor performance of the animals. We quantified the number of footsteps and errors during the beam walking test. The demyelinated animals walked fewer steps with a greater number of errors than the control group. The demyelinated animals that performed aquatic exercise presented better motor performance than those that did not exercise. Therefore, aquatic exercise was beneficial to the motor performance of rats in this experimental model of demyelination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
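A simplified sketch of the final confidence statement, assuming an ensemble of simulated capture efficiencies at candidate gas flow rates (all values are placeholders): the lowest flow rate whose 5th-percentile efficiency still reaches 90% is reported, which is a crude stand-in for the study's full uncertainty-quantification workflow.

```python
import numpy as np

rng = np.random.default_rng(7)
flow_rates = np.array([2.0, 2.5, 3.0, 3.5, 4.0])     # kg/s, placeholder design grid

# placeholder ensemble of 500 simulated efficiencies per flow rate, with spread
# standing in for parameter uncertainty propagated from the calibrated models
efficiency = np.clip(0.80 + 0.04 * flow_rates[:, None]
                     + rng.normal(0, 0.02, (flow_rates.size, 500)), 0, 1)

lower_bound = np.percentile(efficiency, 5, axis=1)    # one-sided 95%-confidence floor
feasible = flow_rates[lower_bound >= 0.90]
print(feasible.min() if feasible.size else "no candidate meets the target")
```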
A model-based test for treatment effects with probabilistic classifications.
Cavagnaro, Daniel R; Davis-Stober, Clintin P
2018-05-21
Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.
2017-12-01
Complexity of hydrogeological systems arises from the multi-scale heterogeneity and insufficient measurements of their underlying parameters such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed in order to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF) approach, a widely used data assimilation technique. It reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including rank histograms, RMSE and ensemble spread. However, these commonly used methods do not account for the spatial structure and variability of geological formations. This can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that can quantify the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our new approach consists of adapting an image-comparison metric, Color Coherence Vectors (CCV), to evaluate the accuracy of estimated fields achieved by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. For various data assimilation settings, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms other existing evaluation metrics.
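A hedged sketch of a color-coherence-vector comparison for single-channel fields, assuming a simple quantization into bins and an L1 distance (the study uses three-channel image representations, and its exact binning, coherence threshold, and distance are not specified here).

```python
import numpy as np
from scipy import ndimage

def color_coherence_vector(field, n_bins=8, tau=25):
    """CCV of a 2D field: per bin, counts of 'coherent' pixels (in connected regions
    of at least tau pixels) and 'incoherent' pixels (in smaller regions)."""
    edges = np.linspace(field.min(), field.max(), n_bins + 1)[1:-1]
    bins = np.digitize(field, edges)
    ccv = np.zeros((n_bins, 2))
    for b in range(n_bins):
        labels, n = ndimage.label(bins == b)
        sizes = ndimage.sum(bins == b, labels, index=np.arange(1, n + 1))
        ccv[b, 0] = sizes[sizes >= tau].sum()   # coherent pixels
        ccv[b, 1] = sizes[sizes < tau].sum()    # incoherent pixels
    return ccv

def ccv_distance(field_a, field_b, **kw):
    """L1 distance between CCVs, used here to score an estimated field against the truth."""
    return np.abs(color_coherence_vector(field_a, **kw) - color_coherence_vector(field_b, **kw)).sum()

rng = np.random.default_rng(4)
true_lnK = ndimage.gaussian_filter(rng.standard_normal((100, 100)), 5)   # synthetic log-conductivity field
estimate = ndimage.gaussian_filter(rng.standard_normal((100, 100)), 5)
print(ccv_distance(true_lnK, estimate))
```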
MIXING STUDY FOR JT-71/72 TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2013-11-26
All modeling calculations for the mixing operations of miscible fluids contained in the HB-Line tanks, JT-71/72, were performed using a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against literature results and previous SRNL test results to validate the model. Final performance calculations were performed with the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, which is well below the 2.5 hours of recirculation pump operation. Therefore, the results demonstrate that 2.5 hours of mixing by one recirculation pump is adequate to achieve well-mixed tank contents.
Quantitative Evaluation of Ionosphere Models for Reproducing Regional TEC During Geomagnetic Storms
NASA Astrophysics Data System (ADS)
Shim, J. S.; Kuznetsova, M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B.; Foster, B.; Fuller-Rowell, T. J.; Goncharenko, L. P.; Huba, J.; Mitchell, C. N.; Ridley, A. J.; Fedrizzi, M.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2015-12-01
TEC (Total Electron Content) is one of the key parameters in description of the ionospheric variability that has influence on the accuracy of navigation and communication systems. To assess current TEC modeling capability of ionospheric models during geomagnetic storms and to establish a baseline against which future improvement can be compared, we quantified the ionospheric models' performance by comparing modeled vertical TEC values with ground-based GPS TEC measurements and Multi-Instrument Data Analysis System (MIDAS) TEC. The comparison focused on North America and Europe sectors during selected two storm events: 2006 AGU storm (14-15 Dec. 2006) and 2013 March storm (17-19 Mar. 2013). The ionospheric models used for this study range from empirical to physics-based, and physics-based data assimilation models. We investigated spatial and temporal variations of TEC during the storms. In addition, we considered several parameters to quantify storm impacts on TEC: TEC changes compared to quiet time, rate of TEC change, and maximum increase/decrease during the storms. In this presentation, we focus on preliminary results of the comparison of the models performance in reproducing the storm-time TEC variations using the parameters and skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
Performance analysis of successive over relaxation method for solving glioma growth model
NASA Astrophysics Data System (ADS)
Hussain, Abida; Faye, Ibrahima; Muthuvalu, Mohana Sundaram
2016-11-01
Brain tumors are among the prevalent cancers in the world that lead to death. In light of present knowledge of the properties of gliomas, mathematical models have been developed to quantify the proliferation and invasion dynamics of glioma. In this study, a one-dimensional glioma growth model is considered, and the finite difference method is used to discretize the problem. Then, two stationary iterative methods, namely Gauss-Seidel (GS) and Successive Over-Relaxation (SOR), are used to solve the governing algebraic system. The performance of the methods is evaluated in terms of number of iterations and computational time. On the basis of this performance analysis, the SOR method is shown to be superior to the GS method.
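A minimal sketch contrasting Gauss-Seidel and SOR on a tridiagonal system such as arises from an implicit finite-difference step of a 1D diffusion-type glioma model; the grid size, diffusion number, and relaxation factor below are placeholders rather than the study's settings.

```python
import numpy as np

def stationary_solve(A, b, omega=1.0, tol=1e-8, max_iter=50000):
    """Gauss-Seidel (omega = 1) or SOR (1 < omega < 2) iteration for A x = b."""
    x = np.zeros(len(b))
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it
    return x, max_iter

# Tridiagonal system from one implicit finite-difference diffusion step:
# (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = u_i_old, with r = D*dt/dx^2 (placeholder value)
n, r = 100, 20.0
A = (np.diag(np.full(n, 1 + 2 * r))
     + np.diag(np.full(n - 1, -r), 1)
     + np.diag(np.full(n - 1, -r), -1))
b = np.exp(-np.linspace(-5, 5, n) ** 2)            # initial tumour cell density profile

_, it_gs = stationary_solve(A, b, omega=1.0)       # Gauss-Seidel
_, it_sor = stationary_solve(A, b, omega=1.6)      # SOR with a near-optimal relaxation factor
print(it_gs, it_sor)                               # SOR needs markedly fewer sweeps here
```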
NASA Astrophysics Data System (ADS)
Huang, Brendan K.; Gamm, Ute A.; Jonas, Stephan; Khokha, Mustafa K.; Choma, Michael A.
2015-03-01
Cilia-driven fluid flow is a critical yet poorly understood aspect of pulmonary physiology. Here, we demonstrate that optical coherence tomography-based particle tracking velocimetry can be used to quantify subtle variability in cilia-driven flow performance in Xenopus, an important animal model of ciliary biology. Changes in flow performance were quantified in the setting of normal development, as well as in response to three types of perturbations: mechanical (increased fluid viscosity), pharmacological (disrupted serotonin signaling), and genetic (diminished ciliary motor protein expression). Of note, we demonstrate decreased flow secondary to gene knockdown of kif3a, a protein involved in ciliogenesis, as well as a dose-response decrease in flow secondary to knockdown of dnah9, an important ciliary motor protein.
Antonelli, Cristian; Mecozzi, Antonio; Shtaif, Mark; Winzer, Peter J
2015-02-09
Mode-dependent loss (MDL) is a major factor limiting the achievable information rate in multiple-input multiple-output space-division multiplexed systems. In this paper we show that its impact on system performance, which we quantify in terms of the capacity reduction relative to a reference MDL-free system, may depend strongly on the operation of the inline optical amplifiers. This dependency is particularly strong in low mode-count systems. In addition, we discuss ways in which the signal-to-noise ratio of the MDL-free reference system can be defined and quantify the differences in the predicted capacity loss. Finally, we stress the importance of correctly accounting for the effect of MDL on the accumulation of amplification noise.
NASA Astrophysics Data System (ADS)
Platiša, Ljiljana; Goossens, Bart; Vansteenkiste, Ewout; Badano, Aldo; Philips, Wilfried
2010-02-01
Clinical practice is rapidly moving in the direction of volumetric imaging. Often, radiologists interpret these images on liquid crystal displays at browsing rates of 30 frames per second or higher. However, recent studies suggest that the slow response of the display can compromise image quality. In order to quantify the temporal effect of medical displays on detection performance, we investigate two designs of a multi-slice channelized Hotelling observer (msCHO) model in the task of detecting a single-slice signal in multi-slice simulated images. The design of the msCHO models is inspired by simplifying assumptions about how humans observe while viewing in the stack-browsing mode. For comparison, we consider a standard CHO applied only to the slice where the signal is located, recently used in a similar study; we refer to it as a single-slice CHO (ssCHO). Overall, our results confirm previous findings that the slow response of displays degrades the detection performance of observers. More specifically, the observed performance range of the msCHO designs is higher than that of the ssCHO, suggesting that the extent and rate of degradation, though significant, may be less drastic than previously estimated with the ssCHO. In particular, the difference between the msCHO and ssCHO is more pronounced for higher browsing speeds than for slow image sequences or static images. This, together with design criteria driven by assumptions about human observers, makes the msCHO models promising candidates for further studies aimed at building anthropomorphic observer models for stack-mode image presentation.
Gradient approach to quantify the gradation smoothness for output media
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun
2010-01-01
We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method determined as a function of the 95th percentile of the second derivative for the tone-jump estimator and the 5th percentile of the first derivative for the tone-clipping estimator. The performance of the model and a previously suggested method were evaluated psychophysically, and their prediction accuracies were compared. Our model showed a stronger Pearson correlation with the corresponding visual data, with the magnitude of the Pearson correlation reaching up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of representative memory colors (blue sky, green grass, and Caucasian skin) were rendered as gradational scales and used as the test stimuli.
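A small sketch of the two gradient-based estimators named above, applied to a measured lightness ramp; the derivative scaling and the way the estimators combine into the final smoothness score are not specified here, so the values are illustrative only.

```python
import numpy as np

def gradation_smoothness_features(lightness):
    """Gradient-based estimators for a measured lightness ramp (e.g. CIELAB L* vs. step index)."""
    first = np.gradient(lightness)
    second = np.gradient(first)
    tone_jump = np.percentile(np.abs(second), 95)   # large local curvature -> visible jumps/banding
    tone_clip = np.percentile(np.abs(first), 5)     # near-zero slope -> clipped, flat regions
    return tone_jump, tone_clip

steps = np.linspace(0, 1, 64)
smooth_ramp = 20 + 60 * steps
banded_ramp = 20 + 60 * np.round(steps * 8) / 8     # quantized ramp with visible tone jumps
print(gradation_smoothness_features(smooth_ramp))
print(gradation_smoothness_features(banded_ramp))
```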
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
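The same likelihood-ratio inversion underlies PL CIs for item parameters; for brevity, the sketch below profiles a normal-model mean with the variance as a nuisance parameter, so it is a generic illustration of the construction rather than an IRT-specific implementation.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def profile_loglik(mu, x):
    """Profile log-likelihood for a normal mean, with the variance profiled out at its MLE."""
    n = len(x)
    sigma2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

def pl_confidence_interval(x, level=0.95):
    """Invert the likelihood-ratio statistic: keep mu with 2*(l(mu_hat) - l_p(mu)) <= chi2_1 quantile."""
    mu_hat = x.mean()
    cutoff = profile_loglik(mu_hat, x) - 0.5 * chi2.ppf(level, df=1)

    def g(mu):
        return profile_loglik(mu, x) - cutoff

    span = 10 * x.std() / np.sqrt(len(x))           # bracket wide enough to contain both roots
    return brentq(g, mu_hat - span, mu_hat), brentq(g, mu_hat, mu_hat + span)

x = np.random.default_rng(5).normal(loc=1.0, scale=2.0, size=50)
print(pl_confidence_interval(x))
```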
Performance improvement: the organization's quest.
McKinley, C O; Parmer, D E; Saint-Amand, R A; Harbin, C B; Roulston, J C; Ellis, R A; Buchanan, J R; Leonard, R B
1999-01-01
In today's health care marketplace, quality has become an expectation. Stakeholders are demanding quality clinical outcomes, and accrediting bodies are requiring clinical performance data. The Roosevelt Institute's quest was to define and quantify quality outcomes, develop an organizational culture of performance improvement, and ensure customer satisfaction. Several of the organization's leaders volunteered to work as a team to develop a specific performance improvement approach tailored to the organization. To date, over 200 employees have received an orientation to the model and its philosophy and nine problem action and process improvement teams have been formed.
Maximizing Educator Enhancement: Aligned Seminar and Online Professional Development
ERIC Educational Resources Information Center
Shaha, Steven; Glassett, Kelly; Copas, Aimee; Huddleston, T. Lisa
2016-01-01
Professional development and learning has a long history in seminar-like models, as well as in the more educator-personal delivery approaches. The question is whether an intentionally coordinated, integrated combination of the two PDL approaches will have best impacts for educators as quantified in improved student performance. Contrasts between…
Quantifying Non-Equilibrium in Hypersonic Flows Using Entropy Generation
2007-03-01
To do this, two experimental cases performed at the Calspan-University of Buffalo Research Center (CUBRC) were modeled using Navier-Stokes based CFD...data provided by the CUBRC hypersonic wind tunnel facility (Holden and Wadhams, 2004). The wall data in Figure 9 and Figure 10 reveals some difference
Application of a Transient Storage Zone Model to Soil Pipeflow Tracer Injection Experiments
USDA-ARS?s Scientific Manuscript database
Soil pipes, defined here as discrete preferential flow paths generally parallel to the slope, are important subsurface flow pathways that play a role in many soil erosion phenomena. However, limited research has been performed on quantifying and characterizing their flow and transport characteristic...
Soil pipe flow tracer experiments: 2. Application of a transient storage zone model
USDA-ARS?s Scientific Manuscript database
Soil pipes, defined here as discrete preferential flow paths generally parallel to the slope, are important subsurface flow pathways that play a role in many soil erosion phenomena. However, limited research has been performed on quantifying and characterizing their flow and transport characteristic...
This presentation covers work performed by the authors to characterize changes in emissions over the 1990 – 2010 time period, quantify the effects of these emission changes on air quality and aerosol/radiation feedbacks using both observations and model simulations, and fin...
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantitative Rheological Model Selection
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2014-11-01
The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
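The sketch below illustrates the general idea on synthetic data (it is not the authors' code): the evidence for a one-parameter decay model with a tight, physically motivated prior is compared against a two-parameter empirical power law with broad priors, with each evidence approximated by averaging the likelihood over a uniform prior grid. Model forms, priors, and noise level are assumptions chosen only for illustration.

```python
# Minimal evidence-based model comparison sketch on synthetic relaxation-like data.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 40)
data = np.exp(-t / 1.2) + rng.normal(0.0, 0.02, t.size)   # synthetic decay data
sigma = 0.02                                              # assumed measurement noise

def log_like(pred):
    return -0.5 * np.sum(((data - pred) / sigma) ** 2) - t.size * np.log(sigma * np.sqrt(2.0 * np.pi))

# Model A: one-parameter exponential decay with a tight, physically motivated prior.
tau = np.linspace(0.5, 2.0, 400)                          # uniform prior on [0.5, 2.0]
log_ev_a = logsumexp([log_like(np.exp(-t / v)) for v in tau]) - np.log(tau.size)

# Model B: two-parameter empirical power law with broad priors on both parameters.
a = np.linspace(0.01, 5.0, 150)
b = np.linspace(0.01, 3.0, 150)
ll_b = [log_like(av * t ** (-bv)) for av in a for bv in b]
log_ev_b = logsumexp(ll_b) - np.log(len(ll_b))

print("log evidence, exponential model:", log_ev_a)
print("log evidence, power-law model:  ", log_ev_b)
```

Broadening either prior range lowers the corresponding log evidence even when the best fit is unchanged, which is exactly the penalty for a priori uncertainty described above.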
Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin
2017-09-01
In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
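A minimal sketch of one ingredient described above, the truncated Karhunen-Loève representation of a log-Gaussian field, is shown below for an assumed 1-D domain with an exponential covariance; the grid, correlation length, and variance are illustrative only.

```python
# Minimal sketch: truncated KL realization of a log-Gaussian permeability field (1-D, illustrative).
import numpy as np

n, corr_len, sigma, n_modes = 200, 0.2, 0.5, 10
x = np.linspace(0.0, 1.0, n)
cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance

eigval, eigvec = np.linalg.eigh(cov)                    # eigenpairs of the discretized covariance
keep = np.argsort(eigval)[::-1][:n_modes]               # retain the n_modes largest modes
eigval, eigvec = eigval[keep], eigvec[:, keep]

xi = np.random.default_rng(0).standard_normal(n_modes)  # independent standard normal KL variables
log_k = eigvec @ (np.sqrt(eigval) * xi)                 # truncated KL realization (zero mean assumed)
permeability = np.exp(log_k)                            # one log-Gaussian permeability realization
print(permeability[:5])
```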
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
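The full e-Bay algorithm is dynamic and conditions the weights on discharge magnitude and timing; the sketch below shows only the simpler underlying idea of likelihood-based weighting of ensemble members on a training window, with a Gaussian error assumption and invented numbers.

```python
# Minimal sketch of likelihood-weighted averaging of ensemble discharge simulations.
import numpy as np

def weighted_discharge(sims, obs, sigma=1.0):
    """sims: (n_members, n_time) simulated discharges; obs: observations on the first
    obs.size time steps (training window). Returns member weights and expected discharge."""
    loglik = np.array([-0.5 * np.sum(((obs - s[:obs.size]) / sigma) ** 2) for s in sims])
    w = np.exp(loglik - loglik.max())      # unnormalized Gaussian likelihood weights
    w /= w.sum()
    return w, w @ sims                      # expected discharge series over the whole record

obs = np.array([5.0, 7.2, 9.1, 6.5])
sims = np.array([[4.8, 7.5, 9.0, 6.3, 5.9, 5.1],
                 [6.0, 8.5, 10.2, 7.8, 6.6, 5.9]])
weights, expected = weighted_discharge(sims, obs)
print(weights, expected)
```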
Quantification of Hepatitis C Virus Cell-to-Cell Spread Using a Stochastic Modeling Approach
Martin, Danyelle N.; Perelson, Alan S.; Dahari, Harel
2015-01-01
ABSTRACT It has been proposed that viral cell-to-cell transmission plays a role in establishing and maintaining chronic infections. Thus, understanding the mechanisms and kinetics of cell-to-cell spread is fundamental to elucidating the dynamics of infection and may provide insight into factors that determine chronicity. Because hepatitis C virus (HCV) spreads from cell to cell and has a chronicity rate of up to 80% in exposed individuals, we examined the dynamics of HCV cell-to-cell spread in vitro and quantified the effect of inhibiting individual host factors. Using a multidisciplinary approach, we performed HCV spread assays and assessed the appropriateness of different stochastic models for describing HCV focus expansion. To evaluate the effect of blocking specific host cell factors on HCV cell-to-cell transmission, assays were performed in the presence of blocking antibodies and/or small-molecule inhibitors targeting different cellular HCV entry factors. In all experiments, HCV-positive cells were identified by immunohistochemical staining and the number of HCV-positive cells per focus was assessed to determine focus size. We found that HCV focus expansion can best be explained by mathematical models assuming focus size-dependent growth. Consistent with previous reports suggesting that some factors impact HCV cell-to-cell spread to different extents, modeling results estimate a hierarchy of efficacies for blocking HCV cell-to-cell spread when targeting different host factors (e.g., CLDN1 > NPC1L1 > TfR1). This approach can be adapted to describe focus expansion dynamics under a variety of experimental conditions as a means to quantify cell-to-cell transmission and assess the impact of cellular factors, viral factors, and antivirals. IMPORTANCE The ability of viruses to efficiently spread by direct cell-to-cell transmission is thought to play an important role in the establishment and maintenance of viral persistence. As such, elucidating the dynamics of cell-to-cell spread and quantifying the effect of blocking the factors involved has important implications for the design of potent antiviral strategies and controlling viral escape. Mathematical modeling has been widely used to understand HCV infection dynamics and treatment response; however, these models typically assume only cell-free virus infection mechanisms. Here, we used stochastic models describing focus expansion as a means to understand and quantify the dynamics of HCV cell-to-cell spread in vitro and determined the degree to which cell-to-cell spread is reduced when individual HCV entry factors are blocked. The results demonstrate the ability of this approach to recapitulate and quantify cell-to-cell transmission, as well as the impact of specific factors and potential antivirals. PMID:25833046
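As a toy illustration of focus size-dependent growth (not the authors' fitted model), the discrete-time stochastic sketch below draws the number of newly infected cells per step from a Poisson distribution whose rate scales with the current focus size; all parameter values are invented.

```python
# Minimal stochastic sketch of size-dependent focus expansion (illustrative parameters).
import numpy as np

def simulate_focus(beta=0.15, alpha=1.0, t_max=72.0, dt=1.0, rng=None):
    """Start from one infected cell; new infections per step ~ Poisson(beta * size**alpha * dt)."""
    rng = rng or np.random.default_rng()
    size, t = 1, 0.0
    while t < t_max:
        size += rng.poisson(beta * size**alpha * dt)   # focus size-dependent growth step
        t += dt
    return size

sizes = [simulate_focus(rng=np.random.default_rng(i)) for i in range(500)]
print(np.mean(sizes), np.percentile(sizes, [5, 95]))
```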
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joints kinematics with performance/cost trade-off. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold-standard, during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups resulted critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar in quantifying the rotation along bone longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono- (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To take its full benefits, the motion task to be investigated should be designed to maintain the joint inside the visible volume introducing constraints with respect to mono-planar analysis.
NASA Astrophysics Data System (ADS)
Allec, N.; Abbaszadeh, S.; Scott, C. C.; Lewin, J. M.; Karim, K. S.
2012-12-01
In contrast-enhanced mammography (CEM), the dual-energy dual-exposure technique, which can leverage existing conventional mammography infrastructure, relies on acquiring the low- and high-energy images using two separate exposures. The finite time between image acquisition leads to motion artifacts in the combined image. Motion artifacts can lead to greater anatomical noise in the combined image due to increased mismatch of the background tissue in the images to be combined, however the impact has not yet been quantified. In this study we investigate a method to include motion artifacts in the dual-energy noise and performance analysis. The motion artifacts are included via an extended cascaded systems model. To validate the model, noise power spectra of a previous dual-energy clinical study are compared to that of the model. The ideal observer detectability is used to quantify the effect of motion artifacts on tumor detectability. It was found that the detectability can be significantly degraded when motion is present (e.g., detectability of 2.5 mm radius tumor decreased by approximately a factor of 2 for translation motion on the order of 1000 μm). The method presented may be used for a more comprehensive theoretical noise and performance analysis and fairer theoretical performance comparison between dual-exposure techniques, where motion artifacts are present, and single-exposure techniques, where low- and high-energy images are acquired simultaneously and motion artifacts are absent.
McEntire, John E.; Kuo, Kenneth C.; Smith, Mark E.; Stalling, David L.; Richens, Jack W.; Zumwalt, Robert W.; Gehrke, Charles W.; Papermaster, Ben W.
1989-01-01
A wide spectrum of modified nucleosides has been quantified by high-performance liquid chromatography in serum of 49 male lung cancer patients, 35 patients with other cancers, and 48 patients hospitalized for nonneoplastic diseases. Data for 29 modified nucleoside peaks were normalized to an internal standard and analyzed by discriminant analysis and stepwise discriminant analysis. A model based on peaks selected by a stepwise discriminant procedure correctly classified 79% of the cancer and 75% of the noncancer subjects. It also demonstrated 84% sensitivity and 79% specificity when comparing lung cancer to noncancer subjects, and 80% sensitivity and 55% specificity in comparing lung cancer to other cancers. The nucleoside peaks having the greatest influence on the models varied dependent on the subgroups compared, confirming the importance of quantifying a wide array of nucleosides. These data support and expand previous studies which reported the utility of measuring modified nucleoside levels in serum and show that precise measurement of an array of 29 modified nucleosides in serum by high-performance liquid chromatography with UV scanning with subsequent data modeling may provide a clinically useful approach to patient classification in diagnosis and subsequent therapeutic monitoring.
NASA Technical Reports Server (NTRS)
Orme, John S.; Schkolnik, Gerard S.
1995-01-01
Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.
A Taxonomy-Based Approach to Shed Light on the Babel of Mathematical Models for Rice Simulation
NASA Technical Reports Server (NTRS)
Confalonieri, Roberto; Bregaglio, Simone; Adam, Myriam; Ruget, Francoise; Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Buis, Samuel;
2016-01-01
For most biophysical domains, differences in model structures are seldom quantified. Here, we used a taxonomy-based approach to characterise thirteen rice models. Classification keys and binary attributes for each key were identified, and models were categorised into five clusters using a binary similarity measure and the unweighted pair-group method with arithmetic mean. Principal component analysis was performed on model outputs at four sites. Results indicated that (i) differences in structure often resulted in similar predictions and (ii) similar structures can lead to large differences in model outputs. User subjectivity during calibration may have hidden expected relationships between model structure and behaviour. This explanation, if confirmed, highlights the need for shared protocols to reduce the degrees of freedom during calibration, and to limit, in turn, the risk that user subjectivity influences model performance.
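A minimal sketch of the clustering step is given below, using a Jaccard dissimilarity on binary key attributes and average-linkage (UPGMA) agglomeration; the attribute matrix is randomly generated here, standing in for the thirteen models and their classification keys.

```python
# Minimal sketch: binary-similarity clustering of models with UPGMA (average linkage).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
attributes = rng.integers(0, 2, size=(13, 20)).astype(bool)   # 13 models x 20 binary keys (invented)
dist = pdist(attributes, metric="jaccard")                     # binary dissimilarity
tree = linkage(dist, method="average")                         # UPGMA agglomeration
clusters = fcluster(tree, t=5, criterion="maxclust")           # cut the tree into five clusters
print(clusters)
```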
Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.
2000-01-01
A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantify the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
Cathcart, Kelsey; Shin, Seo Yim; Milton, Joanna; Ellerby, David
2017-10-01
Mobility is essential to the fitness of many animals, and the costs of locomotion can dominate daily energy budgets. Locomotor costs are determined by the physiological demands of sustaining mechanical performance, yet performance is poorly understood for most animals in the field, particularly aquatic organisms. We have used 3-D underwater videography to quantify the swimming trajectories and propulsive modes of bluegill sunfish (Lepomis macrochirus, Rafinesque) in the field with high spatial (1-3 mm per pixel) and temporal (60 Hz frame rate) resolution. Although field swimming trajectories were variable and nonlinear in comparison to quasi steady-state swimming in recirculating flumes, they were much less unsteady than the volitional swimming behaviors that underlie existing predictive models of field swimming cost. Performance analyses suggested that speed and path curvature data could be used to derive reasonable estimates of locomotor cost that fit within measured capacities for sustainable activity. The distinct differences between field swimming behavior and performance measures obtained under steady-state laboratory conditions suggest that field observations are essential for informing approaches to quantifying locomotor performance in the laboratory.
Chemical Feedback From Decreasing Carbon Monoxide Emissions
NASA Astrophysics Data System (ADS)
Gaubert, B.; Worden, H. M.; Arellano, A. F. J.; Emmons, L. K.; Tilmes, S.; Barré, J.; Martinez Alonso, S.; Vitt, F.; Anderson, J. L.; Alkemade, F.; Houweling, S.; Edwards, D. P.
2017-10-01
Understanding changes in the burden and growth rate of atmospheric methane (CH4) has been the focus of several recent studies but still lacks scientific consensus. Here we investigate the role of decreasing anthropogenic carbon monoxide (CO) emissions since 2002 on hydroxyl radical (OH) sinks and tropospheric CH4 loss. We quantify this impact by contrasting two model simulations for 2002-2013: (1) a Measurement of the Pollution in the Troposphere (MOPITT) CO reanalysis and (2) a Control-Run without CO assimilation. These simulations are performed with the Community Atmosphere Model with Chemistry of the Community Earth System Model fully coupled chemistry climate model with prescribed CH4 surface concentrations. The assimilation of MOPITT observations constrains the global CO burden, which significantly decreased over this period by 20%. We find that this decrease results in (a) an increase in CO chemical production, (b) higher CH4 oxidation by OH, and (c) an 8% shorter CH4 lifetime. We elucidate this coupling by a surrogate mechanism for CO-OH-CH4 that is quantified from the full chemistry simulations.
Liang, Ningjian; Lu, Xiaonan; Hu, Yaxi; Kitts, David D
2016-01-27
The chlorogenic acid isomer profile and antioxidant activity of both green and roasted coffee beans are reported herein using ATR-FTIR spectroscopy combined with chemometric analyses. High-performance liquid chromatography (HPLC) quantified different chlorogenic acid isomer contents for reference, whereas ORAC, ABTS, and DPPH were used to determine the antioxidant activity of the same coffee bean extracts. FTIR spectral data and reference data of 42 coffee bean samples were processed to build optimized PLSR models, and 18 samples were used for external validation of constructed PLSR models. In total, six PLSR models were constructed for six chlorogenic acid isomers to predict content, with three PLSR models constructed to forecast the free radical scavenging activities, obtained using different chemical assays. In conclusion, FTIR spectroscopy, coupled with PLSR, serves as a reliable, nondestructive, and rapid analytical method to quantify chlorogenic acids and to assess different free radical-scavenging capacities in coffee beans.
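The sketch below shows the general calibration/validation pattern described (42 calibration and 18 validation samples) using a PLS regression from scikit-learn; the spectra and reference values are synthetic stand-ins, and the number of latent components is an arbitrary choice.

```python
# Minimal PLS regression calibration/validation sketch on synthetic "spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 800))                                      # 60 synthetic spectra x 800 points
y = 2.0 * X[:, 100] + X[:, 400] + rng.normal(scale=0.1, size=60)    # stand-in HPLC reference values

model = PLSRegression(n_components=6).fit(X[:42], y[:42])           # calibration on 42 samples
print("external validation R^2:", r2_score(y[42:], model.predict(X[42:]).ravel()))
```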
NASA Astrophysics Data System (ADS)
Dallon, Kathryn L.; Yao, Jing; Wheeler, Dean R.; Mazzeo, Brian A.
2018-04-01
Measurements of the mechanical properties of lithium-ion battery electrode films can be used to quantify and improve manufacturing processes and to predict the mechanical and electrochemical performance of the battery. This paper demonstrates the use of acoustic resonances to distinguish among commercial-grade battery films with different active electrode materials, thicknesses, and densities. Resonances are excited in a clamped circular area of the film using a pulsed infrared laser, and responses are measured using an electret condenser microphone. A numerical model is used to quantify the sensitivity of resonances to changes in mechanical properties. When the numerical model is compared to simple analytical models for thin plates and membranes, the battery films measured here trend more similarly to the membrane model. Resonance measurements are also used to monitor the drying process. Results from a scanning laser Doppler vibrometer verify the modes excited in the films, and a combination of experimental and simulated results is used to estimate the Young's modulus of the battery electrode coating layer.
NASA Technical Reports Server (NTRS)
Genc, K. O.; Gopalakrishnan, R.; Kuklis, M. M.; Maender, C. C.; Rice, A. J.; Cavanagh, P. R.
2009-01-01
Despite the use of exercise countermeasures during long-duration space missions, bone mineral density (BMD) and predicted bone strength of astronauts continue to show decreases in the lower extremities and spine. This site-specific bone adaptation is most likely caused by the effects of microgravity on the mechanical loading environment of the crew member. There is, therefore, a need to quantify the mechanical loading experienced on Earth and on-orbit to define the effect of a given "dose" of loading on bone homeostasis. Genc et al. recently proposed an enhanced DLS (EDLS) model that, when used with entire days of in-shoe forces, takes into account recently developed theories on the importance of factors such as saturation, recovery, and standing and their effects on the osteogenic response of bone to daily physical activity. This algorithm can also quantify the timing and type of activity (sit/unload, stand, walk, run or other loaded activity) performed throughout the day. The purpose of the current study was to use in-shoe force measurements from entire typical work days on Earth and on-orbit in order to quantify the type and amount of loading experienced by crew members. The specific aim was to use these measurements as inputs into the EDLS model to determine activity timing/type and the mechanical "dose" imparted on the musculoskeletal system of crew members and relate this dose to changes in bone homeostasis.
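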
Quantifying structural states of soft mudrocks
NASA Astrophysics Data System (ADS)
Li, B.; Wong, R. C. K.
2016-05-01
In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to such transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach and the critical intervals with low strength shale formations are identified.
Bioenergetics modeling of percid fishes: Chapter 14
Madenjian, Charles P.; Kestemont, Patrick; Dabrowski, Konrad; Summerfelt, Robert C.
2015-01-01
A bioenergetics model for a percid fish represents a quantitative description of the fish’s energy budget. Bioenergetics modeling can be used to identify the important factors determining growth of percids in lakes, rivers, or seas. For example, bioenergetics modeling applied to yellow perch (Perca flavescens) in the western and central basins of Lake Erie revealed that the slower growth in the western basin was attributable to limitations in suitably sized prey in western Lake Erie, rather than differences in water temperature between the two basins. Bioenergetics modeling can also be applied to a percid population to estimate the amount of food being annually consumed by the percid population. For example, bioenergetics modeling applied to the walleye (Sander vitreus) population in Lake Erie has provided fishery managers valuable insights into changes in the population’s predatory demand over time. In addition, bioenergetics modeling has been used to quantify the effect of the difference in growth between the sexes on contaminant accumulation in walleye. Field and laboratory evaluations of percid bioenergetics model performance have documented a systematic bias, such that the models overestimate consumption at low feeding rates but underestimate consumption at high feeding rates. However, more recent studies have shown that this systematic bias was due, at least in part, to an error in the energy budget balancing algorithm used in the computer software. Future research work is needed to more thoroughly assess the field and laboratory performance of percid bioenergetics models and to quantify differences in activity and standard metabolic rate between the sexes of mature percids.
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
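A minimal sketch of the Bayesian updating idea is shown below with a one-parameter grid posterior; the linear "simulator", the measured value, and the error standard deviation are invented stand-ins rather than an EnergyPlus model.

```python
# Minimal grid-based Bayesian calibration sketch for one uncertain model input.
import numpy as np

def simulator(infiltration):                 # hypothetical one-parameter surrogate, not EnergyPlus
    return 120.0 + 35.0 * infiltration       # predicted energy use for a given infiltration rate

measured, sigma = 148.0, 5.0                 # assumed measurement + model-discrepancy std. dev.
theta = np.linspace(0.1, 1.5, 500)           # prior support of the uncertain input (flat prior)
posterior = np.exp(-0.5 * ((measured - simulator(theta)) / sigma) ** 2)  # prior x likelihood
posterior /= posterior.sum()

mean = np.sum(theta * posterior)
sd = np.sqrt(np.sum((theta - mean) ** 2 * posterior))
print(f"posterior mean {mean:.3f}, posterior sd {sd:.3f}")
```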
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
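A minimal sketch of a sparse polynomial chaos surrogate recovered by L1-regularized regression (one common compressive-sensing formulation) is given below; the target function, dimensionality, and truncation order are invented for illustration and do not reproduce the authors' active-space construction.

```python
# Minimal sparse-PC surrogate sketch: Hermite basis + Lasso on random samples.
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermevander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
dim, order, n_samples = 4, 3, 80
xi = rng.standard_normal((n_samples, dim))               # stand-in "conformational" variables
y = np.sin(xi[:, 0]) + 0.3 * xi[:, 1] * xi[:, 2]         # stand-in target property

# Tensor-product probabilists' Hermite basis truncated by total degree.
multi_idx = [m for m in product(range(order + 1), repeat=dim) if sum(m) <= order]
V = np.stack(
    [np.prod([hermevander(xi[:, d], order)[:, m[d]] for d in range(dim)], axis=0)
     for m in multi_idx], axis=1)

coeffs = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(V, y).coef_
print("nonzero PC coefficients:", int(np.sum(np.abs(coeffs) > 1e-6)), "of", len(multi_idx))
```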
NASA Astrophysics Data System (ADS)
Mandal, D.; Bhatia, N.; Srivastav, R. K.
2016-12-01
Soil Water Assessment Tool (SWAT) is one of the most comprehensive hydrologic models to simulate streamflow for a watershed. The two major inputs for a SWAT model are: (i) Digital Elevation Models (DEM), and (ii) Land Use and Land Cover Maps (LULC). This study aims to quantify the uncertainty in streamflow predictions using SWAT for San Bernard River in Brazos-Colorado coastal watershed, Texas, by incorporating the respective datasets from different sources: (i) DEM data will be obtained from ASTER GDEM V2, GMTED2010, NHD DEM, and SRTM DEM datasets with resolutions ranging from 1/3 arc-second to 30 arc-second, and (ii) LULC data will be obtained from GLCC V2, MRLC NLCD2011, NOAA's C-CAP, USGS GAP, and TCEQ databases. Weather variables (Precipitation and Max-Min Temperature at daily scale) will be obtained from the National Climatic Data Center (NCDC) and SWAT's in-built STATSGO tool will be used to obtain the soil maps. The SWAT model will be calibrated using the SWAT-CUP SUFI-2 approach and its performance will be evaluated using the statistical indices of Nash-Sutcliffe efficiency (NSE), ratio of Root-Mean-Square-Error to standard deviation of observed streamflow (RSR), and Percent-Bias Error (PBIAS). The study will help understand the performance of the SWAT model with varying data sources and eventually aid the regional state water boards in planning, designing, and managing hydrologic systems.
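For reference, the three evaluation statistics named above can be computed as in the sketch below, following their standard definitions; the observation and simulation vectors are placeholders.

```python
# Minimal sketch of NSE, RSR, and PBIAS for paired observed/simulated streamflow.
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(sim, obs):
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def pbias(sim, obs):
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = np.array([12.0, 18.5, 30.2, 22.1, 15.0])   # placeholder observed discharges
sim = np.array([11.0, 20.0, 27.5, 23.4, 14.2])   # placeholder simulated discharges
print(nse(sim, obs), rsr(sim, obs), pbias(sim, obs))
```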
Characterization of a hypersonic quiet wind tunnel nozzle
NASA Astrophysics Data System (ADS)
Sweeney, Cameron J.
The Boeing/AFOSR Mach-6 Quiet Tunnel at Purdue University has been able to achieve low-disturbance flows at high Reynolds numbers for approximately ten years. The flow in the nozzle was last characterized in 2010. However, researchers have noted that the performance of the nozzle has changed in the intervening years. Understanding the tunnel characteristics is critical for the hypersonic boundary-layer transition research performed at the facility, and any change in performance could have significant effects on research performed at the facility. Pitot probe measurements were made using Kulite and PCB pressure transducers to quantify the performance changes since characterization was last performed. Aspects of the nozzle that were investigated include the radial uniformity of the flow, the effects that time and stagnation pressure have on the flow, and the Reynolds number limits of low-disturbance flows. Measurements showed that freestream noise levels are consistently around 0.01% to 0.02% for the majority of the quiet flow core, with quiet flow now achievable for Reynolds numbers up to Re = 13.0 × 10^6/m. Additionally, while pitot probes are a widely used measurement technique for quantifying freestream disturbances, pitot probes are not without drawbacks. In order to provide a more complete methodology for freestream noise measurement, other researchers have started experimenting with alternate geometries, such as cones. Using a newly designed 30° half-angle cone model, measurements were performed to quantify the freestream noise in the BAM6QT and compare the performance with another hypersonic wind tunnel. Also, measurements were made with three newly designed pitot sleeves to study the effects of probe geometry on freestream noise measurements. The results were compared to recent DNS calculations.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from limited accuracy and the long learning time needed to build a learning database. This work inversely builds a base performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system that combines the base performance model with artificial intelligence methods such as fuzzy logic and neural networks. Each real engine's performance model, named the base performance model because it can simulate a new engine's performance, is built inversely from its performance test data. Condition monitoring of each engine can therefore be carried out more precisely by comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault learning database obtained from the developed base performance model. The FFBP (Feed Forward Back Propagation) algorithm is used to learn the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is coded with a GUI in MATLAB.
Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19
ERIC Educational Resources Information Center
Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2008-01-01
Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…
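A small simulation sketch of one way measurement error affects estimated effect sizes is given below: noise added to test scores inflates the observed standard deviation and attenuates the standardized effect by roughly the square root of the reliability. The effect size, reliability, and sample size are illustrative, and the sketch is not drawn from the working paper itself.

```python
# Minimal simulation sketch: attenuation of an estimated effect size by test measurement error.
import numpy as np

rng = np.random.default_rng(0)
n, true_effect, reliability = 20000, 0.20, 0.7
treated = rng.integers(0, 2, n)                           # hypothetical intervention indicator
true_scores = rng.standard_normal(n) + true_effect * treated
error_sd = np.sqrt((1 - reliability) / reliability)       # observed-score variance ~ 1/reliability
observed = true_scores + rng.normal(0.0, error_sd, n)     # error-prone measured scores

def effect_size(y, g):
    return (y[g == 1].mean() - y[g == 0].mean()) / y.std()

print("effect size on true scores:    ", round(effect_size(true_scores, treated), 3))
print("effect size on observed scores:", round(effect_size(observed, treated), 3))
```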
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
DARPA/AFRL/NASA Smart Wing Second Wind Tunnel Test Results
NASA Technical Reports Server (NTRS)
Scherer, L. B.; Martin, C. A.; West, M.; Florance, J. P.; Wieseman, C. D.; Burner, A. W.; Fleming, G. A.
2001-01-01
To quantify the benefits of smart materials and structures adaptive wing technology, Northrop Grumman Corp. (NGC) built and tested two 16% scale wind tunnel models (a conventional and a "smart" model) of a fighter/attack aircraft under the DARPA/AFRL/NASA Smart Materials and Structures Development - Smart Wing Phase 1. Performance gains quantified included increased pitching moment (C_M), increased rolling moment (C_l) and improved pressure distribution. The benefits were obtained for hingeless, contoured trailing edge control surfaces with embedded shape memory alloy (SMA) wires and spanwise wing twist effected by SMA torque tube mechanisms, compared to conventional hinged control surfaces. This paper presents an overview of the results from the second wind tunnel test performed at the NASA Langley Research Center's (LaRC) 16-ft Transonic Dynamics Tunnel (TDT) in June 1998. Successful results obtained were: 1) 5 degrees of spanwise twist and 8-12% increase in rolling moment utilizing a single SMA torque tube, 2) 12 degrees of deflection, and 10% increase in rolling moment due to hingeless, contoured aileron, and 3) demonstration of optical techniques for measuring spanwise twist and deflected shape.
Geothermal probabilistic cost study
NASA Technical Reports Server (NTRS)
Orren, L. H.; Ziman, G. M.; Jones, S. C.; Lee, T. K.; Noll, R.; Wilde, L.; Sadanand, V.
1981-01-01
A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM model was used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of the geothermal risk which can shift the risk among different agents were analyzed. The leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance were examined.
Hydroclimatic Controls over Global Variations in Phenology and Carbon Flux
NASA Technical Reports Server (NTRS)
Koster, Randal; Walker, G.; Thornton, Patti; Collatz, G. J.
2012-01-01
The connection between phenological and hydroclimatological variations is quantified through joint analyses of global NDVI, LAI, and precipitation datasets. The global distributions of both NDVI and LAI in the warm season are strongly controlled by three quantities: mean annual precipitation, the standard deviation of annual precipitation, and Budyko's index of dryness. Upon demonstrating that these same basic (if biased) relationships are produced by a dynamic vegetation model (the dynamic vegetation and carbon storage components of the NCAR Community Land Model version 4 combined with the water and energy balance framework of the Catchment Land Surface Model of the NASA Global Modeling and Assimilation Office), we use the model to perform a sensitivity study focusing on how phenology and carbon flux might respond to climatic change. The offline (decoupled from the atmosphere) simulations show us, for example, where on the globe a given small increment in precipitation mean or variability would have the greatest impact on carbon uptake. The analysis framework allows us in addition to quantify the degree to which climatic biases in a free-running GCM are manifested as biases in simulated phenology.
Histological Image Feature Mining Reveals Emergent Diagnostic Properties for Renal Cancer
Kothari, Sonal; Phan, John H.; Young, Andrew N.; Wang, May D.
2016-01-01
Computer-aided histological image classification systems are important for making objective and timely cancer diagnostic decisions. These systems use combinations of image features that quantify a variety of image properties. Because researchers tend to validate their diagnostic systems on specific cancer endpoints, it is difficult to predict which image features will perform well given a new cancer endpoint. In this paper, we define a comprehensive set of common image features (consisting of 12 distinct feature subsets) that quantify a variety of image properties. We use a data-mining approach to determine which feature subsets and image properties emerge as part of an “optimal” diagnostic model when applied to specific cancer endpoints. Our goal is to assess the performance of such comprehensive image feature sets for application to a wide variety of diagnostic problems. We perform this study on 12 endpoints including 6 renal tumor subtype endpoints and 6 renal cancer grade endpoints. Keywords-histology, image mining, computer-aided diagnosis PMID:28163980
Passive autonomous infrared sensor technology
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz
1987-10-01
This study was conducted in response to the DoD's need for establishing understanding of algorithm's modules for passive infrared sensors and seekers and establishing a standardized systematic procedure for applying this understanding to DoD applications. We quantified the performances of Honeywell's Background Adaptive Convexity Operator Region Extractor (BACORE) detection and segmentation modules, as functions of a set of image metrics for both single-frame and multiframe processing. We established an understanding of the behavior of the BACORE's internal parameters. We characterized several sets of stationary and sequential imagery and extracted TIR squared, TBIR squared, ESR, and range for each target. We generated a set of performance models for multi-frame processing BACORE that could be used to predict the behavior of BACORE in image metric space. A similar study was conducted for another of Honeywell's segmentors, namely Texture Boundary Locator (TBL), and its performances were quantified. Finally, a comparison of TBL and BACORE on the same data base and same number of frames was made.
NASA Astrophysics Data System (ADS)
Mignani, A. G.; Ciaccheri, L.; Mencaglia, A. A.; Di Sanzo, R.; Carabetta, S.; Russo, M. T.
2005-05-01
Raman spectroscopy performed using optical fibers, with excitation at 1064 nm and a dispersive detection scheme, was utilized to analyze a selection of unifloral honeys produced in the Italian region of Calabria. The honey samples had three different botanical origins: chestnut, citrus, and acacia, respectively. A multivariate processing of the spectroscopic data enabled us to distinguish their botanical origin, and to build predictive models for quantifying their main sugars. This experiment indicates the excellent potentials of Raman spectroscopy as an analytical tool for the nondestructive and rapid assessment of food-quality indicators.
Hingerl, Ferdinand F.; Yang, Feifei; Pini, Ronny; ...
2016-02-02
In this paper we present the results of an extensive multiscale characterization of the flow properties and structural and capillary heterogeneities of the Heletz sandstone. We performed petrographic, porosity and capillary pressure measurements on several subsamples. We quantified mm-scale heterogeneity in saturation distributions in a rock core during multi-phase flow using conventional X-ray CT scanning. Core-flooding experiments were conducted under reservoir conditions (9 MPa, 50 °C) to obtain primary drainage and secondary imbibition relative permeabilities, and residual trapping was analyzed and quantified. We provide parameters for relative permeability, capillary pressure and trapping models for further modeling studies. A synchrotron-based microtomography study complements our cm- to mm-scale investigation by providing links between the micromorphology and mm-scale saturation heterogeneities.
Measuring and Modeling Behavioral Decision Dynamics in Collective Evacuation
Carlson, Jean M.; Alderson, David L.; Stromberg, Sean P.; Bassett, Danielle S.; Craparo, Emily M.; Guiterrez-Villarreal, Francisco; Otani, Thomas
2014-01-01
Identifying and quantifying factors influencing human decision making remains an outstanding challenge, impacting the performance and predictability of social and technological systems. In many cases, system failures are traced to human factors including congestion, overload, miscommunication, and delays. Here we report results of a behavioral network science experiment, targeting decision making in a natural disaster. In a controlled laboratory setting, our results quantify several key factors influencing individual evacuation decision making. The experiment includes tensions between broadcast and peer-to-peer information, and contrasts the effects of temporal urgency associated with the imminence of the disaster and the effects of limited shelter capacity for evacuees. Based on empirical measurements of the cumulative rate of evacuations as a function of the instantaneous disaster likelihood, we develop a quantitative model for decision making that captures remarkably well the main features of observed collective behavior across many different scenarios. Moreover, this model captures the sensitivity of individual- and population-level decision behaviors to external pressures, and systematic deviations from the model provide meaningful estimates of variability in the collective response. Identification of robust methods for quantifying human decisions in the face of risk has implications for policy in disasters and other threat scenarios, specifically the development and testing of robust strategies for training and control of evacuations that account for human behavior and network topologies. PMID:24520331
Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward
2014-01-01
Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
Functional constraints on tooth morphology in carnivorous mammals
2012-01-01
Background The range of potential morphologies resulting from evolution is limited by complex interacting processes, ranging from development to function. Quantifying these interactions is important for understanding adaptation and convergent evolution. Using three-dimensional reconstructions of carnivoran and dasyuromorph tooth rows, we compared statistical models of the relationship between tooth row shape and the opposing tooth row, a static feature, as well as measures of mandibular motion during chewing (occlusion), which are kinetic features. This is a new approach to quantifying functional integration because we use measures of movement and displacement, such as the amount the mandible translates laterally during occlusion, as opposed to conventional morphological measures, such as mandible length and geometric landmarks. By sampling two distantly related groups of ecologically similar mammals, we study carnivorous mammals in general rather than a specific group of mammals. Results Statistical model comparisons demonstrate that the best performing models always include some measure of mandibular motion, indicating that functional and statistical models of tooth shape as purely a function of the opposing tooth row are too simple and that increased model complexity provides a better understanding of tooth form. The predictors of the best performing models always included the opposing tooth row shape and a relative linear measure of mandibular motion. Conclusions Our results provide quantitative support of long-standing hypotheses of tooth row shape as being influenced by mandibular motion in addition to the opposing tooth row. Additionally, this study illustrates the utility and necessity of including kinetic features in analyses of morphological integration. PMID:22899809
A side-by-side comparison of CPV module and system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew; Marion, Bill; Kurtz, Sarah
A side-by-side comparison is made between concentrator photovoltaic module and system direct current aperture efficiency data with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss of 3.7% relative is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct current aperture efficiency with a root mean square error of 2.3% relative.
Quantifying the impact of smoke aerosol on the UV radiation
NASA Astrophysics Data System (ADS)
Sokolik, I. N.; Tatarskii, V.; Hall, S. R.; Petropavlovskikh, I. V.
2017-12-01
We present an analysis of the impact of smoke on UV radiation. The analysis is performed for a case study combining modeling and measurements. The case study focuses on wildfires that occurred in California in ????. The fires have been affecting the environment in the region, posing a serious threat to human well-being. The modeling is performed using a fully coupled WRF-Chem-SMOKE model. The model uses MODIS fire radiative power (FRP) satellite data to generate the smoke emissions for an actual event. The smoke aerosol is treated in a size- and composition-resolved manner. The optical properties are computed online and provided to the TUV model that is incorporated in the WRF-Chem-SMOKE model. The impact of smoke on the UV radiation is analyzed, and we also assess the impact of smoke on the TOA radiative forcing. Our results show a significant impact of smoke on the radiative regime of the atmosphere.
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
Mozetič, Igor; Grčar, Miha; Smailović, Jasmina
2016-01-01
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered. PMID:27149621
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Bonacuse, Peter J.; Mital, Subodh K.
2012-01-01
To develop methods for quantifying the effects of the microstructural variations of woven ceramic matrix composites on the effective properties and response of the material, a research program has been undertaken, which is described in this paper. In order to characterize and quantify the variations in the microstructure of a five-harness satin weave CVI SiC/SiC composite material, specimens were serially sectioned and polished to capture images that detailed the fiber tows, matrix, and porosity. Open source quantitative image analysis tools were then used to isolate the constituents and collect relevant statistics such as within-ply tow spacing. This information was then used to build two-dimensional finite element models that approximated the observed section geometry. With the aid of geometrical models generated by the microstructural characterization process, finite element models were generated and analyses were performed to quantify the effects of the microstructure and its variation on the effective stiffness and areas of stress concentration of the material. The results indicated that the geometry and distribution of the porosity appear to have significant effects on the through-thickness modulus. Similarly, stress concentrations on the outer surface of the composite appear to correlate to regions where the transverse tows are separated by a critical amount.
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
Surface Adsorption in Nonpolarizable Atomic Models.
Whitmer, Jonathan K; Joshi, Abhijeet A; Carlton, Rebecca J; Abbott, Nicholas L; de Pablo, Juan J
2014-12-09
Many ionic solutions exhibit species-dependent properties, including surface tension and the salting-out of proteins. These effects may be loosely quantified in terms of the Hofmeister series, first identified in the context of protein solubility. Here, our interest is to develop atomistic models capable of capturing Hofmeister effects rigorously. Importantly, we aim to capture this dependence in computationally cheap "hard" ionic models, which do not exhibit dynamic polarization. To do this, we have performed an investigation detailing the effects of the water model on these properties. Though incredibly important, the role of water models in simulation of ionic solutions and biological systems is essentially unexplored. We quantify this via the ion-dependent surface attraction of the halide series (Cl, Br, I) and, in so doing, determine the relative importance of various hypothesized contributions to ionic surface free energies. Importantly, we demonstrate that surface adsorption can arise in hard ionic models when they are combined with a thermodynamically accurate representation of the water molecule (TIP4Q). The effect observed in simulations of iodide is commensurate with previous calculations of the surface potential of mean force in rigid molecular dynamics and polarizable density-functional models. Our calculations are direct simulation evidence of the subtle but sensitive role of water thermodynamics in atomistic simulations.
Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.
Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael
2015-03-03
Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ(-1) of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel
2014-11-01
With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping, antenna- and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
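As an illustration of the general approach described above, the following minimal sketch (in Python, with an invented stand-in propagation model and made-up input distributions rather than the study's actual RF-EMF model and data) shows how Monte Carlo perturbation of uncertain inputs can be combined with a rank-based agreement measure such as the Spearman correlation:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def exposure_model(power, building_height, receptor_height):
    # Invented stand-in for a propagation model: any deterministic function
    # mapping input parameters to a predicted field strength at a receptor.
    return power / (1.0 + building_height) * (1.0 + 0.1 * receptor_height)

n_sites, n_draws = 252, 1000
power = rng.uniform(20, 40, n_sites)      # nominal (best-guess) inputs per site
bldg_h = rng.uniform(5, 30, n_sites)
rec_h = rng.uniform(1, 20, n_sites)
measured = exposure_model(power, bldg_h, rec_h) * rng.lognormal(0.0, 0.3, n_sites)

# Monte Carlo: perturb the uncertain inputs, rerun the model, and record the
# Spearman rank correlation between modelled and measured values per draw.
rho = np.empty(n_draws)
for i in range(n_draws):
    modelled = exposure_model(
        power * rng.lognormal(0.0, 0.2, n_sites),
        np.clip(bldg_h + rng.normal(0.0, 2.0, n_sites), 0.5, None),
        np.clip(rec_h + rng.normal(0.0, 1.0, n_sites), 0.5, None),
    )
    rho[i], _ = spearmanr(modelled, measured)

print(f"median rho = {np.median(rho):.2f}, "
      f"2.5-97.5 percentile = {np.percentile(rho, 2.5):.2f} "
      f"to {np.percentile(rho, 97.5):.2f}")
```

The spread of the correlation across draws indicates how strongly input-data uncertainty degrades exposure ranking; the same loop can accumulate weighted Cohen's Kappa on exposure tertiles.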
Predictive ability of a comprehensive incremental test in mountain bike marathon.
Ahrend, Marc-Daniel; Schneeweiss, Patrick; Martus, Peter; Niess, Andreas M; Krauss, Inga
2018-01-01
Traditional performance tests in mountain bike marathon (XCM) primarily quantify aerobic metabolism and may not describe the relevant capacities in XCM. We aimed to validate a comprehensive test protocol quantifying its intermittent demands. Forty-nine athletes (38.8±9.1 years; 38 male; 11 female) performed a laboratory performance test, including an incremental test, to determine individual anaerobic threshold (IAT), peak power output (PPO) and three maximal efforts (10 s all-out sprint, 1 min maximal effort and 5 min maximal effort). Within 2 weeks, the athletes participated in one of three XCM races (n=15, n=9 and n=25). Correlations between test variables and race times were calculated separately. In addition, multiple regression models of the predictive value of laboratory outcomes were calculated for race 3 and across all races (z-transformed data). All variables were correlated with race times 1, 2 and 3: 10 s all-out sprint (r=-0.72; r=-0.59; r=-0.61), 1 min maximal effort (r=-0.85; r=-0.84; r=-0.82), 5 min maximal effort (r=-0.57; r=-0.85; r=-0.76), PPO (r=-0.77; r=-0.73; r=-0.76) and IAT (r=-0.71; r=-0.67; r=-0.68). The best-fitting multiple regression models for race 3 (r² = 0.868) and across all races (r² = 0.757) comprised 1 min maximal effort, IAT and body weight. Aerobic and intermittent variables correlated least strongly with race times. Their use in a multiple regression model confirmed additional explanatory power to predict XCM performance. These findings underline the usefulness of the comprehensive incremental test to predict performance in that sport more precisely.
Biophysical Assessment and Predicted Thermophysiologic Effects of Body Armor
Potter, Adam W.; Gonzalez, Julio A.; Karis, Anthony J.; Xu, Xiaojiang
2015-01-01
Introduction Military personnel are often required to wear ballistic protection in order to defend against enemies. However, this added protection increases mass carried and imposes additional thermal burden on the individual. Body armor (BA) is known to reduce combat casualties, but the effects of BA mass and insulation on the physical performance of soldiers are less well documented. Until recently, the emphasis has been increasing personal protection, with little consideration of the adverse impacts on human performance. Objective The purpose of this work was to use sweating thermal manikin and mathematical modeling techniques to quantify the tradeoff between increased BA protection, the accompanying mass, and thermal effects on human performance. Methods Using a sweating thermal manikin, total insulation (IT, clo) and vapor permeability indexes (im) were measured for a baseline clothing ensemble with and without one of seven increasingly protective U.S. Army BA configurations. Using mathematical modeling, predictions were made of thermal impact on humans wearing each configuration while working in hot/dry (desert), hot/humid (jungle), and temperate environmental conditions. Results In nearly still air (0.4 m/s), IT ranged from 1.57 to 1.63 clo and im from 0.35 to 0.42 for the seven BA conditions, compared to IT and im values of 1.37 clo and 0.45 respectively, for the baseline condition (no BA). Conclusion Biophysical assessments and predictive modeling show a quantifiable relationship exists among increased protection and increased thermal burden and decreased work capacity. This approach enables quantitative analysis of the tradeoffs between ballistic protection, thermal-work strain, and physical work performance. PMID:26200906
NASA Astrophysics Data System (ADS)
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit, has been lacking. The goal of this research, therefore, is to provide a statistical comparison of the accuracy and performance of hybrid network theory with pure Bayesian and Fuzzy systems and with an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed to quantify the benefit of hybrid inference over other fusion tools.
Revisiting the Boeing B-47 and the Avro Vulcan with implications on aircraft design today
NASA Astrophysics Data System (ADS)
van Seeters, Philip A.
This project compares the cruise mission performance of the historic Boeing B-47 and Avro Vulcan. The author aims to demonstrate that despite superficial similarities, these aircraft perform quite differently away from their intended design points. The investigation uses computer-aided design software and an aircraft sizing program to generate digital models of both airplanes. Subsequent simulations of various missions quantify the performance mainly in terms of fuel efficiency and productivity. Based on this comparison, the effort concludes that these aircraft do indeed perform differently and that a performance comparison based on a design mission alone is insufficient.
Performance bounds on parallel self-initiating discrete-event
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest here is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
Forecasting seasonal hydrologic response in major river basins
NASA Astrophysics Data System (ADS)
Bhuiyan, A. M.
2014-05-01
Seasonal precipitation variation due to natural climate variation influences stream flow and the apparent frequency and severity of extreme hydrological conditions such as flood and drought. To study hydrologic response and understand the occurrence of extreme hydrological events, the relevant forcing variables must be identified. This study attempts to assess and quantify the historical occurrence and context of extreme hydrologic flow events and to quantify the relation between the relevant climate variables. Once identified, the flow data and climate variables are evaluated to identify the primary relationship indicators of hydrologic extreme event occurrence. Existing studies focus on developing basin-scale forecasting techniques based on climate anomalies in El Nino/La Nina episodes linked to global climate. Building on earlier work, the goal of this research is to quantify variations in historical river flows at the seasonal temporal scale and at regional to continental spatial scales. The work identifies and quantifies runoff variability of major river basins and correlates flow with environmental forcing variables such as El Nino, La Nina, and the sunspot cycle. These variables are expected to be the primary external natural indicators of inter-annual and inter-seasonal patterns of regional precipitation and river flow. Relations between continental-scale hydrologic flows and external climate variables are evaluated through direct correlations in a seasonal context with environmental phenomena such as sunspot numbers (SSN), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Methods including stochastic time series analysis and artificial neural networks are developed to represent the seasonal variability evident in the historical records of river flows. River flows are categorized into low, average and high flow levels to evaluate and simulate flow variations under associated climate variable variations. Results demonstrated that no single method is best suited to represent scenarios leading to extreme flow conditions. For selected flow scenarios, the persistence model performance may be comparable to more complex multivariate approaches, and complex methods did not always improve flow estimation. Overall model performance indicates that inclusion of river flows and forcing variables on average improves extreme event forecasting skill. As a means to further refine the flow estimation, an ensemble forecast method is implemented to provide a likelihood-based indication of expected river flow magnitude and variability. Results indicate seasonal flow variations are well captured in the ensemble range; therefore, the ensemble approach can often prove efficient in estimating extreme river flow conditions. The discriminant prediction approach, a probabilistic measure to forecast streamflow, is also adopted to assess model performance. Results show the efficiency of the method in terms of representing uncertainties in the forecasts.
Estimate of main local sources to ambient ultrafine particle number concentrations in an urban area
NASA Astrophysics Data System (ADS)
Rahman, Md Mahmudur; Mazaheri, Mandana; Clifford, Sam; Morawska, Lidia
2017-09-01
Quantifying and apportioning the contribution of a range of sources to ultrafine particles (UFPs, D < 100 nm) is a challenge due to the complex nature of urban environments. Although vehicular emissions have long been considered one of the major sources of ultrafine particles in urban areas, the contribution of other major urban sources is not yet fully understood. This paper aims to determine and quantify the contribution of local ground traffic, nucleated particle (NP) formation and distant non-traffic sources (e.g. airport, oil refineries, and seaport) to the total ambient particle number concentration (PNC) in a busy, inner-city area in Brisbane, Australia, using Bayesian statistical modelling and other exploratory tools. The Bayesian model was trained on the PNC data for days on which NP formation was known not to have occurred, hourly traffic counts, solar radiation data, and a smooth daily trend. The model was applied to apportion and quantify the contribution of NP formation and local traffic and non-traffic sources to UFPs. The data analysis incorporated long-term measured time series of total PNC (D ≥ 6 nm), particle number size distributions (PSD, D = 8 to 400 nm), PM2.5, PM10, NOx, CO, meteorological parameters and traffic counts at a stationary monitoring site. The developed Bayesian model showed reliable predictive performance in quantifying the contribution of NP formation events to UFPs (up to 4 × 10⁴ particles cm⁻³), with significant day-to-day variability. The model identified potential NP formation and no-formation days based on PNC data and quantified the source contributions to UFPs. Exploratory statistical analyses show that total mean PNC during the middle of the day was up to 32% higher than during peak morning and evening traffic periods, which was associated with NP formation events. The majority of UFPs measured during the peak traffic and NP formation periods were between 30-100 nm and smaller than 30 nm, respectively. To date, this is the first application of a Bayesian model to apportion the contributions of different sources to UFPs, and therefore the importance of this study lies not only in its modelling outcomes but in demonstrating the applicability and advantages of this statistical approach to air pollution studies.
Pitot-tube flowmeter for quantification of airflow during sleep.
Kirkness, J P; Verma, M; McGinley, B M; Erlacher, M; Schwartz, A R; Smith, P L; Wheatley, J R; Patil, S P; Amis, T C; Schneider, H
2011-02-01
The gold-standard pneumotachograph is not routinely used to quantify airflow during overnight polysomnography due to the size, weight, bulkiness and discomfort of the equipment that must be worn. To overcome these deficiencies, which have precluded the use of a pneumotachograph in routine sleep studies, our group developed a lightweight, low-dead-space 'pitot flowmeter' (based on the pitot-tube principle) for use during sleep. We aimed to examine the characteristics of the flowmeter and validate it for quantifying airflow and detecting hypopneas during polysomnography by performing a head-to-head comparison with a pneumotachograph. Four experimental paradigms were utilized to determine the technical performance characteristics and the clinical usefulness of the pitot flowmeter in a head-to-head comparison with a pneumotachograph. In each study (1-4), the pitot flowmeter was connected in series with a pneumotachograph under either static flow (flow generator inline or on a face model) or dynamic flow (subject breathing via a polyester face model or on a nasal mask) conditions. The technical characteristics of the pitot flowmeter showed that (1) the airflow resistance ranged from 0.065 ± 0.002 to 0.279 ± 0.004 cm H₂O L⁻¹ s⁻¹ over airflow rates of 10 to 50 L min⁻¹, and (2) on the polyester face model there was a linear relationship between airflow as measured by the pitot flowmeter output voltage and the calibrated pneumotachograph signal (β₁ = 1.08 V L⁻¹ s⁻¹; β₀ = 2.45 V). The clinically relevant performance characteristics (hypopnea detection) showed that (3) when the pitot flowmeter was connected via a mask to the human face model, both the sensitivity and specificity for detecting a 50% decrease in peak-to-peak airflow amplitude were 99.2%, and (4) when tested in sleeping human subjects, the pitot flowmeter signal displayed 94.5% sensitivity and 91.5% specificity for the detection of 50% peak-to-peak reductions in pneumotachograph-measured airflow. Our data validate the pitot flowmeter for quantification of airflow and detection of breathing reductions during polysomnographic sleep studies. We speculate that quantifying airflow during sleep can differentiate phenotypic traits related to sleep-disordered breathing.
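The hypopnea-detection figures above are sensitivities and specificities for identifying a ≥50% drop in peak-to-peak airflow amplitude relative to a reference pneumotachograph. A small sketch (with synthetic breath amplitudes, not the study's recordings) of how such a comparison can be computed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic peak-to-peak amplitudes for 500 breaths from a reference device
# (pneumotachograph) and a test device (flowmeter) with measurement noise.
reference = rng.uniform(0.2, 1.0, 500)
test = reference * rng.normal(1.0, 0.05, 500)

baseline = np.median(reference)            # per-study baseline amplitude
truth = reference < 0.5 * baseline         # "true" hypopnea: >=50% reduction
detected = test < 0.5 * np.median(test)    # detection by the test device

tp = np.sum(detected & truth)
tn = np.sum(~detected & ~truth)
fp = np.sum(detected & ~truth)
fn = np.sum(~detected & truth)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```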
Thermodynamic work from operational principles
NASA Astrophysics Data System (ADS)
Gallego, R.; Eisert, J.; Wilming, H.
2016-10-01
In recent years we have witnessed a concentrated effort to make sense of thermodynamics for small-scale systems. One of the main difficulties is to capture a suitable notion of work that models realistically the purpose of quantum machines, in an analogous way to the role played, for macroscopic machines, by the energy stored in the idealisation of a lifted weight. Despite several attempts to resolve this issue by putting forward specific models, these are far from realistically capturing the transitions that a quantum machine is expected to perform. In this work, we adopt a novel strategy by considering arbitrary kinds of systems that one can attach to a quantum thermal machine and defining work quantifiers. These are functions that measure the value of a transition and generalise the concept of work beyond those models familiar from phenomenological thermodynamics. We do so by imposing simple operational axioms that any reasonable work quantifier must fulfil and by deriving from them stringent mathematical conditions with a clear physical interpretation. Our approach allows us to derive much of the structure of the theory of thermodynamics without taking the definition of work as a primitive. We can derive, for any work quantifier, a quantitative second law in the sense of bounding the work that can be performed using some non-equilibrium resource by the work that is needed to create it. We also discuss in detail the role of reversibility and correlations in connection with the second law. Furthermore, we recover the usual identification of work with energy in degrees of freedom with vanishing entropy as a particular case of our formalism. Our mathematical results can be formulated abstractly and are general enough to carry over to other resource theories than quantum thermodynamics.
Briggs, Martin A.; Buckley, Sean F.; Bagtzoglou, Amvrossios C.; Werkema, Dale D.; Lane, John W.
2016-01-01
Zones of strong groundwater upwelling to streams enhance thermal stability and moderate thermal extremes, which is particularly important to aquatic ecosystems in a warming climate. Passive thermal tracer methods used to quantify vertical upwelling rates rely on downward conduction of surface temperature signals. However, moderate to high groundwater flux rates (>−1.5 m d⁻¹) restrict downward propagation of diurnal temperature signals, and therefore the applicability of several passive thermal methods. Active streambed heating from within high-resolution fiber-optic temperature sensors (A-HRTS) has the potential to define multidimensional fluid-flux patterns below the extinction depth of surface thermal signals, allowing better quantification and separation of local and regional groundwater discharge. To demonstrate this concept, nine A-HRTS were emplaced vertically into the streambed in a grid with ∼0.40 m lateral spacing at a stream with strong upward vertical flux in Mashpee, Massachusetts, USA. Long-term (8–9 h) heating events were performed to confirm the dominance of vertical flow to the 0.6 m depth, well below the extinction of ambient diurnal signals. To quantify vertical flux, short-term heating events (28 min) were performed at each A-HRTS, and heat-pulse decay over vertical profiles was numerically modeled in two-dimensional (2-D) radial coordinates using SUTRA. Modeled flux values are similar to those obtained with seepage meters, Darcy methods, and analytical modeling of shallow diurnal signals. We also observed repeatable differential heating patterns along the length of vertically oriented sensors that may indicate sediment layering and hyporheic exchange superimposed on regional groundwater discharge.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
The Urban Forest Effects (UFORE) model: quantifying urban forest structure and functions
David J. Nowak; Daniel E. Crane
2000-01-01
The Urban Forest Effects (UFORE) computer model was developed to help managers and researchers quantify urban forest structure and functions. The model quantifies species composition and diversity, diameter distribution, tree density and health, leaf area, leaf biomass, and other structural characteristics; hourly volatile organic compound emissions (emissions that...
A novel framework for virtual prototyping of rehabilitation exoskeletons.
Agarwal, Priyanshu; Kuo, Pei-Hsin; Neptune, Richard R; Deshpande, Ashish D
2013-06-01
Human-worn rehabilitation exoskeletons have the potential to make therapeutic exercises increasingly accessible to disabled individuals while reducing the cost and labor involved in rehabilitation therapy. In this work, we propose a novel human-model-in-the-loop framework for virtual prototyping (design, control and experimentation) of rehabilitation exoskeletons by merging computational musculoskeletal analysis with simulation-based design techniques. The framework allows one to iteratively optimize the design and control algorithm of an exoskeleton using simulation. We introduce biomechanical, morphological, and controller measures to quantify the performance of the device for the optimization study. Furthermore, the framework allows one to carry out virtual experiments for testing specific "what-if" scenarios to quantify device performance and recovery progress. To illustrate the application of the framework, we present a case study wherein the design and analysis of an index-finger exoskeleton are carried out using the proposed framework.
Computation of Neutral Gas Flow from a Hall Thruster into a Vacuum Chamber
2002-10-18
To try to quantify these effects, the direct simulation Monte Carlo method is applied to model a cold flow of xenon gas expanding from a Hall thruster into a vacuum chamber. The simulations are performed for the P5 Hall thruster operating in a large vacuum tank at the University of Michigan. Comparison…
ERIC Educational Resources Information Center
Ellerby, David J.
2009-01-01
The medicinal leech is a useful animal model for investigating undulatory swimming in the classroom. Unlike many swimming organisms, its swimming performance can be quantified without specialized equipment. A large blood meal alters swimming behavior in a way that can be used to generate a discussion of the hydrodynamics of swimming, muscle…
Quantifying light-dependent circadian disruption in humans and animal models.
Rea, Mark S; Figueiro, Mariana G
2014-12-01
Although circadian disruption is an accepted term, little has been done to develop methods to quantify the degree of disruption or entrainment individual organisms actually exhibit in the field. A variety of behavioral, physiological and hormonal responses vary in amplitude over a 24-h period and the degree to which these circadian rhythms are synchronized to the daily light-dark cycle can be quantified with a technique known as phasor analysis. Several studies have been carried out using phasor analysis in an attempt to measure circadian disruption exhibited by animals and by humans. To perform these studies, species-specific light measurement and light delivery technologies had to be developed based upon a fundamental understanding of circadian phototransduction mechanisms in the different species. When both nocturnal rodents and diurnal humans experienced different species-specific light-dark shift schedules, they showed, based upon phasor analysis of the light-dark and activity-rest patterns, similar levels of light-dependent circadian disruption. Indeed, both rodents and humans show monotonically increasing and quantitatively similar levels of light-dependent circadian disruption with increasing shift-nights per week. Thus, phasor analysis provides a method for quantifying circadian disruption in the field and in the laboratory as well as a bridge between ecological measurements of circadian entrainment in humans and parametric studies of circadian disruption in animal models, including nocturnal rodents.
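Phasor analysis, as used above, extracts the 24-h relationship between the light-dark and activity-rest time series. The following schematic sketch shows one plausible formulation based on circular cross-correlation and its 24-h harmonic; the published implementation may differ in its details:

```python
import numpy as np

def phasor(light, activity, samples_per_day):
    """Schematic phasor analysis: circular cross-correlation of the light-dark
    and activity-rest series, then the 24-h harmonic of that correlation.
    Magnitude ~ strength of 24-h coupling; angle ~ phase relationship."""
    light = (light - light.mean()) / light.std()
    activity = (activity - activity.mean()) / activity.std()
    n = len(light)
    # Circular cross-correlation via FFT
    xcorr = np.fft.ifft(np.fft.fft(light) * np.conj(np.fft.fft(activity))).real / n
    # Index of the one-cycle-per-day harmonic equals the number of days recorded
    k = int(round(n / samples_per_day))
    comp = np.fft.fft(xcorr)[k] / n
    return 2 * np.abs(comp), np.angle(comp)   # phasor magnitude and angle (rad)

# Example: 7 days sampled every 10 min, activity lagging light by 1 h
spd = 144
t = np.arange(7 * spd)
light = (np.sin(2 * np.pi * t / spd) > 0).astype(float)
activity = (np.sin(2 * np.pi * (t - 6) / spd) > 0).astype(float)
mag, ang = phasor(light, activity, spd)
print(f"phasor magnitude = {mag:.2f}, angle = {np.degrees(ang):.1f} deg")
```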
Quantifying the Incoming Jet Past Heart Valve Prostheses Using Vortex Formation Dynamics
NASA Astrophysics Data System (ADS)
Pierrakos, Olga
2005-11-01
Heart valve (HV) replacement prostheses are associated with hemodynamic compromises compared to their native counterparts. Traditionally, HV performance and hemodynamics have been quantified using effective orifice size and pressure gradients. However, quality and direction of flow are also important aspects of HV function and relate to HV design, implantation technique, and orientation. The flow past any HV is governed by the generation of shear layers followed by the formation and shedding of organized flow structures in the form of vortex rings (VR). For the first time, vortex formation (VF) in the left ventricle (LV) is quantified. Vortex energy measurements allow for calculation of the critical formation number (FN), which is the time at which the VR reaches its maximum strength. Inefficiencies in HV function result in a decrease in the critical FN. This study uses the concept of FN to compare mitral HV prostheses in an in-vitro model (a silicone LV model housed in a piston-driven heart simulator) using Time-resolved Digital Particle Image Velocimetry. Two HVs were studied: a porcine HV and a bileaflet mechanical HV (MHV), which was tested in an anatomic and a non-anatomic orientation. The results suggest that HV orientation and design affect the critical FN. We propose that the critical FN, which is contingent on the HV design, orientation, and physical flow characteristics, serve as a parameter to quantify the incoming jet and the efficiency of the HV.
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.
2010-01-01
Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
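A minimal sketch of the generic Monte Carlo step described above, using an invented limit-state function and parameter distributions rather than the Ares I PDA models:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Each driving parameter is treated as a random variable with a distribution.
burst_pressure = rng.normal(1000.0, 50.0, n)                 # hypothetical capacity
operating_pressure = rng.lognormal(np.log(800.0), 0.08, n)   # hypothetical demand

# Physics-based limit state: failure when demand exceeds capacity.
failures = operating_pressure > burst_pressure
p_fail = failures.mean()

# Standard error of the Monte Carlo estimate of the failure probability
se = np.sqrt(p_fail * (1.0 - p_fail) / n)
print(f"P(failure) ~ {p_fail:.2e} +/- {1.96 * se:.1e} (95% CI)")
```

Sensitivity information can be obtained from the same samples, for example by comparing parameter distributions conditioned on failure versus non-failure.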
Schaefferkoetter, Joshua; Casey, Michael; Townsend, David; Fakhri, Georges El
2013-01-01
Time-of-flight (TOF) and point spread function (PSF) modeling have been shown to improve PET reconstructions, but the impact on physicians in the clinical setting has not been thoroughly investigated. A lesion detection and localization study was performed using simulated lesions in real patient images. Four reconstruction schemes were considered: ordinary Poisson OSEM (OP) alone and combined with TOF, PSF, and TOF+PSF. The images were presented to physicians experienced in reading PET images, and the performance of each was quantified using localization receiver operating characteristic (LROC). Numerical observers (non-prewhitening and Hotelling) were used to identify optimal reconstruction parameters, and observer SNR was compared to the performance of the physicians. The numerical models showed good agreement with human performance, and best performance was achieved by both when using TOF+PSF. These findings suggest a large potential benefit of TOF+PSF for oncology PET studies, especially in the detection of small, low-intensity, focal disease in larger patients. PMID:23403399
Major hydrogeochemical processes in an acid mine drainage affected estuary.
Asta, Maria P; Calleja, Maria Ll; Pérez-López, Rafael; Auqué, Luis F
2015-02-15
This study provides geochemical data with the aim of identifying and quantifying the main processes occurring in an Acid Mine Drainage (AMD) affected estuary. With that purpose, water samples of the Huelva estuary were collected during a tidal half-cycle and ion-ion plots and geochemical modeling were performed to obtain a general conceptual model. Modeling results indicated that the main processes responsible for the hydrochemical evolution of the waters are: (i) the mixing of acid fluvial water with alkaline ocean water; (ii) precipitation of Fe oxyhydroxysulfates (schwertmannite) and hydroxides (ferrihydrite); (iii) precipitation of Al hydroxysulfates (jurbanite) and hydroxides (amorphous Al(OH)3); (iv) dissolution of calcite; and (v) dissolution of gypsum. All these processes, thermodynamically feasible in the light of their calculated saturation states, were quantified by mass-balance calculations and validated by reaction-path calculations. In addition, sorption processes were deduced by the non-conservative behavior of some elements (e.g., Cu and Zn). Copyright © 2014 Elsevier Ltd. All rights reserved.
Surface area-volume ratios in insects.
Kühsel, Sara; Brückner, Adrian; Schmelzle, Sebastian; Heethoff, Michael; Blüthgen, Nico
2017-10-01
Body mass, volume and surface area are important for many aspects of the physiology and performance of species. Whereas body mass scaling has received a lot of attention in the literature, surface areas of animals have not been measured explicitly in this context. We quantified surface area-volume (SA/V) ratios for the first time using 3D surface models based on a structured-light scanning method for 126 species of pollinating insects from 4 orders (Diptera, Hymenoptera, Lepidoptera, and Coleoptera). Water loss of 67 species was measured gravimetrically under very dry conditions for 2 h at 15 and 30 °C to demonstrate the applicability of the new 3D surface measurements and their relevance for predicting the performance of insects. Quantified SA/V ratios significantly explained the variation in water loss across species, both directly and after accounting for isometric scaling (residuals of the SA/V ~ mass^(2/3) relationship). Small insects with a proportionally larger surface area had the highest water loss rates. Surface scans of insects to quantify allometric SA/V ratios thus provide a promising method to predict physiological responses, improving on the potential of body mass isometry alone, which assumes geometric similarity. © 2016 Institute of Zoology, Chinese Academy of Sciences.
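A brief sketch (with simulated values, not the measured insect data) of the residual approach mentioned above: fit the isometric SA/V-mass scaling on log-log axes and use the residuals as a size-corrected predictor of water loss:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-ins for the measured quantities (not the study's data)
mass = rng.lognormal(np.log(50.0), 1.0, 126)                      # body mass
sa_v = 5.0 * mass ** (-1.0 / 3.0) * rng.lognormal(0, 0.1, 126)    # SA/V ratio
water_loss = 0.8 * sa_v * rng.lognormal(0, 0.15, 126)             # water loss rate

# Isometric expectation: on log-log axes SA/V scales linearly with mass,
# so the residuals from this fit are the size-corrected SA/V signal.
slope, intercept = np.polyfit(np.log(mass), np.log(sa_v), 1)
resid = np.log(sa_v) - (slope * np.log(mass) + intercept)

# Do the size-corrected SA/V residuals still explain water loss?
r = np.corrcoef(resid, np.log(water_loss))[0, 1]
print(f"isometric exponent ~ {slope:.2f}; corr(residual SA/V, water loss) = {r:.2f}")
```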
AR(p)-based detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Alvarez-Ramirez, J.; Rodriguez, E.
2018-07-01
Autoregressive models are commonly used for modeling time series from nature, economics and finance. This work explored simple autoregressive AR(p) models to remove long-term trends in detrended fluctuation analysis (DFA). Crude oil prices and the bitcoin exchange rate were considered, with the former corresponding to a mature market and the latter to an emergent market. Results showed that AR(p)-based DFA performs similarly to traditional DFA. However, the former DFA provides information on the stability of long-term trends, which is valuable for understanding and quantifying the dynamics of complex time series from financial systems.
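A compact sketch of the idea: standard DFA, but with each window of the integrated profile detrended by a local AR(p) fit instead of the usual polynomial. This is one plausible reading of "AR(p)-based DFA" and an illustrative implementation under that assumption, not the authors' code:

```python
import numpy as np

def ar_fit_predict(x, p):
    """Least-squares AR(p) fit to x; returns the in-sample fitted values
    (the first p points are passed through unchanged so lengths match)."""
    n = len(x)
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)] + [np.ones(n - p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return np.concatenate([x[:p], X @ coef])

def dfa_ar(signal, scales, p=2):
    """DFA fluctuation function F(s) and scaling exponent, with each window of
    the cumulative profile detrended by a local AR(p) fit."""
    profile = np.cumsum(signal - np.mean(signal))
    F = []
    for s in scales:
        rms = []
        for i in range(len(profile) // s):
            seg = profile[i * s:(i + 1) * s]
            trend = ar_fit_predict(seg, p)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # log-log slope
    return np.array(F), alpha

# Example with a synthetic series (stand-in for returns of oil or bitcoin prices)
rng = np.random.default_rng(0)
returns = rng.normal(size=4096)
F, alpha = dfa_ar(returns, scales=[16, 32, 64, 128, 256], p=2)
print("F(s) =", np.round(F, 3), " alpha ~", round(alpha, 2))
```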
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.
Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes. Highlights: • Proposed a physics-informed framework to quantify uncertainty in RANS simulations. • Framework incorporates physical prior knowledge and observation data. • Based on a rigorous Bayesian framework yet fully utilizes physical model. • Applicable for many complex physical systems beyond turbulent flows.
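The assimilation step above can be illustrated with a minimal stochastic iterative ensemble Kalman update on a toy forward model; the parameters, forward model, and observations below are invented stand-ins, not the paper's Reynolds-stress parameterization or RANS solver:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward_model(theta):
    # Toy stand-in for the expensive forward solver mapping parameters
    # (e.g., Reynolds-stress perturbation coefficients) to observable QoIs.
    return np.array([theta[0] + 0.5 * theta[1], theta[0] * theta[1], theta[1] ** 2])

n_ens, n_obs = 50, 3
obs = np.array([1.2, 0.35, 0.20])
obs_err = 0.05 * np.ones(n_obs)

# Prior ensemble of parameters (prior knowledge enters through this sampling)
theta = rng.normal([1.0, 0.5], [0.3, 0.2], size=(n_ens, 2))

for _ in range(5):  # iterative ensemble Kalman method: repeat the update
    hx = np.array([forward_model(t) for t in theta])      # predicted observations
    C_td = np.cov(theta.T, hx.T)[:2, 2:]                  # parameter-obs cross-covariance
    C_dd = np.cov(hx.T) + np.diag(obs_err ** 2)           # innovation covariance
    K = C_td @ np.linalg.inv(C_dd)                        # Kalman gain
    perturbed_obs = obs + rng.normal(0, obs_err, size=(n_ens, n_obs))
    theta = theta + (perturbed_obs - hx) @ K.T            # analysis ensemble

print("posterior mean parameters:", theta.mean(axis=0))
```

The posterior ensemble of parameters is then propagated through the forward model to obtain posterior distributions of the quantities of interest.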
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) perform poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations of LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
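A minimal sketch of the kind of purely empirical benchmark discussed above: an out-of-sample linear regression predicting a flux from meteorological drivers alone (synthetic data; the published benchmark ensemble also includes nonlinear, cluster-based regressions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# Synthetic half-hourly met forcing (stand-ins for FLUXNET drivers)
sw_down = rng.uniform(0, 1000, n)          # shortwave radiation (W m-2)
tair = rng.normal(288, 8, n)               # air temperature (K)
rel_hum = rng.uniform(20, 100, n)          # relative humidity (%)
qle_obs = 0.4 * sw_down + 2.0 * (tair - 273.15) - 0.5 * rel_hum + rng.normal(0, 20, n)

X = np.column_stack([np.ones(n), sw_down, tair, rel_hum])
train, test = slice(0, n // 2), slice(n // 2, n)

# Fit on one half of the record, evaluate out of sample on the other half
beta, *_ = np.linalg.lstsq(X[train], qle_obs[train], rcond=None)
pred = X[test] @ beta

rmse = np.sqrt(np.mean((pred - qle_obs[test]) ** 2))
corr = np.corrcoef(pred, qle_obs[test])[0, 1]
print(f"empirical benchmark: RMSE = {rmse:.1f} W m-2, r = {corr:.2f}")
```

Any LSM that cannot beat such a benchmark, which uses no vegetation or soil information, is leaving information in the forcing data unused.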
Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model
NASA Astrophysics Data System (ADS)
Fu, Li Fang; Meng, Jun; Liu, Ying
2015-12-01
Performance evaluation of supply chains (SCs) is a vital topic in SC management and an inherently complex problem, with multilayered internal linkages and activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which opened the "black box" of conventional DEA, were developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models, which cannot take into consideration nonproportional changes of inputs and outputs simultaneously. This paper extends the Slack-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, that accounts for the presence of undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying the efficiency and ranking supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insights for decision makers about the source of inefficiency as well as guidance to improve SC performance.
NASA Astrophysics Data System (ADS)
Moonen, P.; Gromke, C.; Dorer, V.
2013-08-01
The potential of a Large Eddy Simulation (LES) model to reliably predict near-field pollutant dispersion is assessed. To that end, detailed time-resolved numerical simulations of coupled flow and dispersion are conducted for a street canyon with tree planting. Different crown porosities are considered. The model performance is assessed in several steps, ranging from qualitative comparison with measured concentrations, through statistical data analysis by means of scatter plots and box plots, to the calculation of objective validation metrics. The extensive validation effort highlights and quantifies notable features and shortcomings of the model, which would otherwise remain unnoticed. The model performance is found to be spatially non-uniform. Closer agreement with measurement data is achieved near the canyon ends than for the central part of the canyon, and typical model acceptance criteria are satisfied more easily for the leeward than for the windward canyon wall. This demonstrates the need for rigorous model evaluation. Only quality-assured models can be used with confidence to support assessment, planning and implementation of pollutant mitigation strategies.
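The abstract does not name its validation metrics, but metrics commonly used for such dispersion-model evaluations include fractional bias (FB), normalized mean square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2); a short sketch, assuming these as examples:

```python
import numpy as np

def validation_metrics(c_obs, c_mod):
    """Common dispersion-model validation metrics: fractional bias, normalized
    mean square error, and fraction of predictions within a factor of two.
    These are assumed examples; the paper's exact metric set may differ."""
    c_obs, c_mod = np.asarray(c_obs, float), np.asarray(c_mod, float)
    fb = 2.0 * (c_obs.mean() - c_mod.mean()) / (c_obs.mean() + c_mod.mean())
    nmse = np.mean((c_obs - c_mod) ** 2) / (c_obs.mean() * c_mod.mean())
    ratio = c_mod / c_obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    return fb, nmse, fac2

# Example with made-up normalized wall concentrations
obs = np.array([1.0, 0.8, 0.6, 1.2, 0.4, 0.9])
mod = np.array([0.9, 0.7, 0.8, 1.0, 0.2, 1.1])
fb, nmse, fac2 = validation_metrics(obs, mod)
print(f"FB = {fb:.2f}, NMSE = {nmse:.2f}, FAC2 = {fac2:.2f}")
```

Computing such metrics separately for canyon ends, canyon center, and the leeward and windward walls exposes exactly the kind of spatially non-uniform performance described above.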
Evaluation of Limb Load Asymmetry Using Two New Mathematical Models
Kumar, Senthil NS; Omar, Baharudin; Joseph, Leonard H.; Htwe, Ohnmar; Jagannathan, K.; Hamdan, Nor M Y; Rajalakshmi, D.
2015-01-01
Quantitative measurement of limb loading is important in orthopedic and neurological rehabilitation. In current practice, mathematical models such as the Symmetry index (SI), Symmetry ratio (SR), and Symmetry angle (SA) are used to quantify limb loading asymmetry. The literature has identified certain limitations of these mathematical models. Hence this study presents two new mathematical models, the Modified symmetry index (MSI) and the Limb loading error (LLE), that address these limitations. Furthermore, the current mathematical models were compared against the new models with the goal of achieving a better model. This study uses hypothetical data in a preliminary computational simulation covering all numerical possibilities of even and uneven limb loading that can occur in human legs. Descriptive statistics are used to interpret the limb loading patterns: symmetry, asymmetry and maximum asymmetry. The five mathematical models were similar in analyzing symmetry between limbs. However, for asymmetry and maximum asymmetry data, the SA and SR values do not give any meaningful interpretation, and SI gives an inflated value. The MSI and LLE are direct, easy to interpret, and identify the loading patterns along with the side of asymmetry. The new models are notable as they quantify the amount and side of asymmetry under different loading patterns. PMID:25716372
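For reference, a short sketch of the three conventional indices named above, using their commonly published formulas (the paper's new MSI and LLE definitions are not reproduced here):

```python
import numpy as np

def symmetry_indices(x_left, x_right):
    """Conventional limb-load symmetry measures.
    SI: symmetry index (%); SR: symmetry ratio; SA: symmetry angle (%)."""
    si = 100.0 * (x_left - x_right) / (0.5 * (x_left + x_right))
    sr = x_left / x_right
    sa = 100.0 * (45.0 - np.degrees(np.arctan(x_left / x_right))) / 90.0
    return si, sr, sa

# Even, moderately uneven, and maximally uneven loading examples
for left, right in [(50.0, 50.0), (60.0, 40.0), (90.0, 10.0)]:
    si, sr, sa = symmetry_indices(left, right)
    print(f"L={left:5.1f} R={right:5.1f} -> SI={si:6.1f}%  SR={sr:4.2f}  SA={sa:6.1f}%")
```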
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; ...
2016-03-16
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cellmore » represents a grain. An face centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.« less
Modeling Unsteady Cavitation and Dynamic Loads in Turbopumps
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Ahuja, Vineet; Ungewitter, Ronald; Dash, Sanford M.
2009-01-01
A computational fluid dynamics (CFD) model that includes representations of effects of unsteady cavitation and associated dynamic loads has been developed to increase the accuracy of simulations of the performances of turbopumps. Although the model was originally intended to serve as a means of analyzing preliminary designs of turbopumps that supply cryogenic propellant liquids to rocket engines, the model could also be applied to turbopumping of other liquids: this can be considered to have been already demonstrated, in that the validation of the model was performed by comparing results of simulations performed by use of the model with results of sub-scale experiments in water. The need for this or a similar model arises as follows: Cavitation instabilities in a turbopump are generated as inlet pressure drops and vapor cavities grow on inducer blades, eventually becoming unsteady. The unsteady vapor cavities lead to rotating cavitation, in which the cavities detach from the blades and become part of a fluid mass that rotates relative to the inducer, thereby generating a fluctuating load. Other instabilities (e.g., surge instabilities) can couple with cavitation instabilities, thereby compounding the deleterious effects of unsteadiness on other components of the fluid-handling system of which the turbopump is a part and thereby, further, adversely affecting the mechanical integrity and safety of the system. Therefore, an ability to predict cavitation-instability-induced dynamic pressure loads on the blades, the shaft, and other pump parts would be valuable in helping to quantify safe margins of inducer operation and in contributing to understanding of design compromises. Prior CFD models do not afford this ability. Heretofore, the primary parameter used in quantifying cavitation performance of a turbopump inducer has been the critical suction specific speed at which head breakdown occurs. This parameter is a mean quantity calculated on the basis of assumed steady-state operation of the inducer; it does not account for dynamic pressure loads associated with unsteady flow caused by instabilities. Because cavitation instabilities occur well before mean breakdown in inducers, engineers have, until now, found it necessary to use conservative factors of safety when analyzing the results of numerical simulations of flows in turbopumps.
Campbell, Kieran R.
2016-01-01
Single cell gene expression profiling can be used to quantify transcriptional dynamics in temporal processes, such as cell differentiation, using computational methods to label each cell with a ‘pseudotime’ where true time series experimentation is too difficult to perform. However, owing to the high variability in gene expression between individual cells, there is an inherent uncertainty in the precise temporal ordering of the cells. Pre-existing methods for pseudotime estimation have predominantly given point estimates precluding a rigorous analysis of the implications of uncertainty. We use probabilistic modelling techniques to quantify pseudotime uncertainty and propagate this into downstream differential expression analysis. We demonstrate that reliance on a point estimate of pseudotime can lead to inflated false discovery rates and that probabilistic approaches provide greater robustness and measures of the temporal resolution that can be obtained from pseudotime inference. PMID:27870852
Toward agile control of a flexible-spine model for quadruped bounding
NASA Astrophysics Data System (ADS)
Byl, Katie; Satzinger, Brian; Strizic, Tom; Terry, Pat; Pusey, Jason
2015-05-01
Legged systems should exploit non-steady gaits both for improved recovery from unexpected perturbations and also to enlarge the set of reachable states toward negotiating a range of known upcoming terrain obstacles. We present a 4-link planar, bounding, quadruped model with compliance in its legs and spine and describe design of an intuitive and effective low-level gait controller. We extend our previous work on meshing hybrid dynamic systems and demonstrate that our control strategy results in stable gaits with meshable, low-dimension step- to-step variability. This meshability is a first step toward enabling switching control, to increase stability after perturbations compared with any single gait control, and we describe how this framework can also be used to find the set of n-step reachable states. Finally, we propose new guidelines for quantifying "agility" for legged robots, providing a preliminary framework for quantifying and improving performance of legged systems.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
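To make the batch estimation step concrete, here is a toy illustration of the extended maximum likelihood idea described above: fit model parameters and the noise amplitude to an entire response history, then bound parameter uncertainty with the Cramer-Rao lower bound built from response sensitivities. A simple analytic response stands in for the nonlinear FE model, and finite differences replace the paper's direct differentiation method; all numbers are invented.

```python
# Toy sketch of batch (extended) ML estimation plus a CRLB uncertainty bound.
# A closed-form response replaces the nonlinear FE model; finite-difference
# sensitivities replace the DDM. All data are synthetic.
import numpy as np
from scipy.optimize import minimize

def response(theta, t):
    a, b = theta                        # hypothetical "model parameters"
    return a * np.exp(-b * t) * np.cos(2.0 * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
theta_true = np.array([2.0, 0.6])
y = response(theta_true, t) + 0.05 * rng.standard_normal(t.size)

def neg_log_like(x):
    theta, log_sigma = x[:2], x[2]      # noise amplitude estimated jointly
    r = y - response(theta, t)
    return t.size * log_sigma + 0.5 * np.sum(r**2) / np.exp(2.0 * log_sigma)

fit = minimize(neg_log_like, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
theta_hat, sigma_hat = fit.x[:2], np.exp(fit.x[2])

# Finite-difference response sensitivities -> Fisher information -> CRLB
eps = 1e-6
S = np.column_stack([(response(theta_hat + eps * np.eye(2)[i], t)
                      - response(theta_hat, t)) / eps for i in range(2)])
fisher = S.T @ S / sigma_hat**2
crlb = np.linalg.inv(fisher)            # lower bound on the parameter covariance
print("theta_hat =", theta_hat, "sigma_hat =", sigma_hat)
print("CRLB standard deviations =", np.sqrt(np.diag(crlb)))
```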
Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Humble, Travis S.; McCaskey, Alex
A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.
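The classical analogue of the mapping described above is a Hopfield-style recurrent network, whose Ising-type energy is what gets embedded on the quantum processor. The sketch below shows content-addressable recall from a corrupted probe with such a network; the stored patterns, network size, and corruption level are invented, and no quantum hardware is involved.

```python
# Classical Hopfield CAM recall: the Ising-energy structure that the abstract
# maps onto an adiabatic quantum optimizer. Patterns and corruption are invented.
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_spins))

# Hebbian couplings (Ising J matrix); no self-coupling
J = (patterns.T @ patterns) / n_spins
np.fill_diagonal(J, 0.0)

def recall(probe, sweeps=10):
    """Asynchronous spin updates that descend the Ising energy E = -0.5 s'Js."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_spins):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Corrupt 20% of one stored pattern and try to recall it
probe = patterns[0].copy()
flip = rng.choice(n_spins, size=n_spins // 5, replace=False)
probe[flip] *= -1
recalled = recall(probe)
print("overlap with stored pattern:", (recalled @ patterns[0]) / n_spins)
```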
NASA Technical Reports Server (NTRS)
Santanello, Joseph A., Jr.; Peters-Lidard, Christa D.; Kumar, Sujay V.
2011-01-01
The inherent coupled nature of Earth's energy and water cycles places significant importance on the proper representation and diagnosis of land-atmosphere (LA) interactions in hydrometeorological prediction models. However, the precise nature of the soil moisture-precipitation relationship at the local scale is largely determined by a series of nonlinear processes and feedbacks that are difficult to quantify. To quantify the strength of the local LA coupling (LoCo), this process chain must be considered both in full and as individual components through their relationships and sensitivities. To address this, recent modeling and diagnostic studies have been extended to 1) quantify the processes governing LoCo utilizing the thermodynamic properties of mixing diagrams, and 2) diagnose the sensitivity of coupled systems, including clouds and moist processes, to perturbations in soil moisture. This work employs NASA's Land Information System (LIS) coupled to the Weather Research and Forecasting (WRF) mesoscale model and simulations performed over the U.S. Southern Great Plains. The behavior of different planetary boundary layer (PBL) and land surface scheme couplings in LIS-WRF is examined in the context of the evolution of thermodynamic quantities that link the surface soil moisture condition to the PBL regime, clouds, and precipitation. Specifically, the tendency toward saturation in the PBL is quantified by the lifting condensation level (LCL) deficit and addressed as a function of time and space. The sensitivity of the LCL deficit to the soil moisture condition is indicative of the strength of LoCo, where both positive and negative feedbacks can be identified. Overall, this methodology can be applied to any model or observations and is a crucial step toward improved evaluation and quantification of LoCo within models, particularly given the advent of next-generation satellite measurements of PBL and land surface properties along with advances in data assimilation schemes.
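A back-of-envelope sketch of an LCL-deficit style diagnostic follows. The ~125 m per degree of dewpoint depression estimate of LCL height, the sign convention (LCL height minus PBL height, negative once the PBL reaches the LCL), and the sample values are simplifying assumptions, not the paper's exact formulation.

```python
# Back-of-envelope LCL-deficit sketch. The 125 m/degC rule of thumb, the sign
# convention, and the sample values below are assumptions for illustration only.
def lcl_height_m(t2m_c, td2m_c):
    """Rough LCL height above ground from 2-m temperature and dewpoint (deg C)."""
    return 125.0 * (t2m_c - td2m_c)

def lcl_deficit_m(t2m_c, td2m_c, pbl_height_m):
    return lcl_height_m(t2m_c, td2m_c) - pbl_height_m

# Hypothetical afternoon values over a wet and a dry soil patch
for label, t, td, pblh in [("wet soil", 29.0, 21.0, 900.0),
                           ("dry soil", 33.0, 14.0, 1800.0)]:
    print(label, "LCL deficit =", round(lcl_deficit_m(t, td, pblh)), "m")
```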
Model-based synthesis of aircraft noise to quantify human perception of sound quality and annoyance
NASA Astrophysics Data System (ADS)
Berckmans, D.; Janssens, K.; Van der Auweraer, H.; Sas, P.; Desmet, W.
2008-04-01
This paper presents a method to synthesize aircraft noise as perceived on the ground. The developed method gives designers the opportunity to make a quick and economical evaluation of the sound quality of different design alternatives or improvements on existing aircraft. By presenting several synthesized sounds to a jury, it is possible to evaluate the quality of different aircraft sounds and to construct a sound that can serve as a target for future aircraft designs. Combining a sound synthesis method that can apply changes to a recorded aircraft sound with jury testing makes it possible to quantify the human perception of aircraft noise.
Patient-specific in silico models can quantify primary implant stability in elderly human bone.
Steiner, Juri A; Hofmann, Urs A T; Christen, Patrik; Favre, Jean M; Ferguson, Stephen J; van Lenthe, G Harry
2018-03-01
Secure implant fixation is challenging in osteoporotic bone. Due to the high variability in inter- and intra-patient bone quality, ex vivo mechanical testing of implants in bone is very material- and time-consuming. Alternatively, in silico models could substantially reduce costs and speed up the design of novel implants if they had the capability to capture the intricate bone microstructure. Therefore, the aim of this study was to validate a micro-finite element model of a multi-screw fracture fixation system. Eight human cadaveric humeri were scanned using micro-CT and mechanically tested to quantify bone stiffness. Osteotomy and fracture fixation were performed, followed by mechanical testing to quantify displacements at 12 different locations on the instrumented bone. For each experimental case, a micro-finite element model was created. From the micro-finite element analyses of the intact model, the patient-specific bone tissue modulus was determined such that the simulated apparent stiffness matched the measured stiffness of the intact bone. Similarly, the tissue modulus of a small damage region around each screw was determined for the instrumented bone. For validation, all in silico models were rerun using averaged material properties, resulting in an average coefficient of determination of 0.89 ± 0.04 with a slope of 0.93 ± 0.19 and a mean absolute error of 43 ± 10 μm when correlating in silico marker displacements with the ex vivo test. In conclusion, we validated a patient-specific computer model of an entire organ bone-implant system at the tissue level at high resolution with excellent overall accuracy. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:954-962, 2018.
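The style of agreement statistics quoted in this abstract (coefficient of determination, regression slope, mean absolute error) can be reduced from paired simulated/measured displacements as in the sketch below; the displacement values are invented.

```python
# Minimal sketch of in silico vs. ex vivo agreement statistics (R^2, slope, MAE).
# Marker-displacement values are invented for illustration.
import numpy as np

def agreement_stats(measured, simulated):
    slope, intercept = np.polyfit(measured, simulated, 1)
    resid = simulated - (slope * measured + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((simulated - simulated.mean())**2)
    mae = np.mean(np.abs(simulated - measured))
    return r2, slope, mae

measured  = np.array([120.0, 240.0, 310.0, 90.0, 410.0, 530.0])   # micrometres
simulated = np.array([110.0, 255.0, 290.0, 100.0, 430.0, 505.0])
r2, slope, mae = agreement_stats(measured, simulated)
print(f"R^2 = {r2:.2f}, slope = {slope:.2f}, MAE = {mae:.0f} um")
```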
NASA Astrophysics Data System (ADS)
Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.
2016-11-01
Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes.
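The data-assimilation ingredient of the framework above is an ensemble Kalman update. The sketch below shows a single, generic perturbed-observation analysis step in numpy; it is not the paper's iterative, physics-constrained implementation (where the Reynolds stress field is perturbed), and the state vector, observation operator, and noise levels are invented.

```python
# One generic ensemble Kalman analysis step (perturbed observations), shown only
# to illustrate the assimilation machinery. State, observations, and noise invented.
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 10, 3, 50

ensemble = rng.normal(1.0, 0.3, size=(n_state, n_ens))    # prior ensemble of states
H = np.zeros((n_obs, n_state))
H[0, 2] = H[1, 5] = H[2, 8] = 1.0                          # observe three state entries
obs = np.array([1.4, 0.7, 1.1])
obs_err = 0.05
R = obs_err**2 * np.eye(n_obs)

# Sample covariance and Kalman gain
A = ensemble - ensemble.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update every member against perturbed observations
perturbed_obs = obs[:, None] + obs_err * rng.standard_normal((n_obs, n_ens))
posterior = ensemble + K @ (perturbed_obs - H @ ensemble)

print("prior mean of observed entries:    ", (H @ ensemble).mean(axis=1).round(2))
print("posterior mean of observed entries:", (H @ posterior).mean(axis=1).round(2))
```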
Balmer, Nigel; Pleasence, Pascoe; Nevill, Alan
2012-01-01
A number of studies have pointed to a plateauing of athletic performance, with the suggestion that further improvements will need to be driven by revolutions in technology or technique. In the present study, we examine post-war men's Olympic performance in jumping events (pole vault, long jump, high jump, triple jump) to determine whether performance has indeed plateaued and to present techniques, derived from models of human growth, for assessing the impact of technological and technical innovation over time (logistic and double logistic models of growth). Significantly, two of the events involve well-documented changes in technology (pole material in pole vault) or technique (the Fosbury Flop in high jump), while the other two do not. We find that in all four cases, performance appears to have plateaued and that no further "general" improvement should be expected. In the case of high jump, the double logistic model provides a convenient method for modelling and quantifying a performance intervention (in this case the Fosbury Flop). However, some shortcomings are revealed for pole vault, where evolutionary post-war improvements and innovation (fibre glass poles) were concurrent, preventing their separate identification in the model. In all four events, it is argued that further general growth in performance will indeed need to rely predominantly on technological or technical innovation.
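The two growth curves named in this abstract can be fitted to a performance series as sketched below. The parameterizations are common textbook forms, not necessarily the authors' exact ones, the winning-performance data are synthetic, and convergence of the seven-parameter double-logistic fit depends on reasonable starting values.

```python
# Sketch: fit logistic and double-logistic growth curves to a synthetic
# performance series (e.g., Olympic winning heights). Forms and data are
# illustrative assumptions, not the authors' exact models.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def logistic(year, lower, upper, rate, midpoint):
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (year - midpoint)))

def double_logistic(year, lower, gain1, rate1, mid1, gain2, rate2, mid2):
    """Baseline growth plus a second S-shaped boost (e.g., a technique change)."""
    return (lower
            + gain1 / (1.0 + np.exp(-rate1 * (year - mid1)))
            + gain2 / (1.0 + np.exp(-rate2 * (year - mid2))))

years = np.arange(1948, 2013, 4).astype(float)
perf = (1.98
        + 0.25 / (1.0 + np.exp(-0.2 * (years - 1958)))
        + 0.20 / (1.0 + np.exp(-0.5 * (years - 1970)))
        + 0.02 * rng.standard_normal(years.size))          # synthetic heights (m)

p_log, _ = curve_fit(logistic, years, perf, p0=[2.0, 2.5, 0.1, 1965], maxfev=20000)
p_dbl, _ = curve_fit(double_logistic, years, perf,
                     p0=[2.0, 0.25, 0.2, 1958, 0.2, 0.5, 1970], maxfev=20000)
print("logistic asymptote:        ", round(p_log[1], 2), "m")
print("double-logistic total gain:", round(p_dbl[0] + p_dbl[1] + p_dbl[4], 2), "m")
```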
NASA Astrophysics Data System (ADS)
Ilias, Mohammad
Pavement preservation is a rapidly growing strategy for prolonging pavement service life. Pavement preservation consists of applying a thin layer of asphalt binder or emulsion with or without aggregate to the surface of an existing pavement. Preservation treatments do not provide any structural strength to the pavement but restore skid resistance, seal existing cracks, protect the underlying pavement from intrusion of water, and reduce further oxidative aging of the underlying pavement. In recent years, significant research has been dedicated to improving the design of pavement preservation treatments. In pavement preservation treatments, asphalt emulsion is the predominant binding material used because of its low viscosity compared to asphalt cement, which allows for production at greatly reduced temperatures, leading to energy efficiency and potential cost savings. Currently, specifications for emulsions used in pavement preservation treatments are empirical and lack a direct relationship to performance. This study seeks to improve specifications for emulsions used in preservation treatments by developing performance-related specifications (PRS) for (a) fresh emulsion properties, (b) microsurfacing emulsion residue, and (c) low-temperature raveling of chip seal emulsion residues. Fresh emulsion properties dictate constructability and stability, and consequently the resultant performance of a preservation treatment once placed. Specification test methods are proposed for chip seals, microsurfacings, and spray seals that reflect storage and construction conditions of the emulsions. Performance is quantified using viscosity measurements. Specification limits are determined based on prior knowledge of emulsion performance coupled with statistical analyses. Microsurfacings are a preservation treatment consisting of the application of a thin layer of an asphalt emulsion-fine aggregate mixture. Presently, mixture design and performance of microsurfacing mixtures are appraised using the procedure specified by the International Slurry Surfacing Association (ISSA), with no provision for quantifying microsurfacing residue performance. In this study, residue performance is quantified using the Multiple Stress Creep and Recovery (MSCR) test for rutting and bleeding, the Bitumen Bond Strength Test (BBS) for raveling, the Low Temperature Frequency (LTF) test for prediction of low-temperature Bending Beam Rheometer (BBR) properties, and the Single Edge Notched Bend (SENB) fracture test developed under this work. Microsurfacing mixture performance is quantified using the Wet Track Abrasion Test (WTAT) for raveling, the Model Mobile Load Simulator (MMLS3) for rutting and bleeding, and the SENB test developed for low-temperature cracking. Microsurfacing mixture performance is correlated to residue properties in order to identify critical emulsion residue properties in determining performance and to derive specification limits. Results indicate rutting and thermal cracking are the distresses most directly related to the emulsion residue performance. Correspondingly, specifications are proposed to address rutting at high temperature and thermal cracking at low temperature based on the relationship between residue and mixture results. In addition, test methods and specification criteria are developed to address low-temperature raveling resistance of emulsion residues used in chip seals.
The SENB test is used to quantify residue resistance to thermal cracking under the assumption that low-temperature raveling occurs primarily by cohesive fracture of the residue in the chip seal. The Vialit test is modified and employed for quantifying the raveling resistance of chip seal mixtures, to determine whether the SENB test captures the binder contribution to mixture raveling. The correlation between residue and mixture properties has been used to assess the applicability of the residue tests and to derive specification limits.
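For readers unfamiliar with the MSCR test mentioned above, the sketch below reduces a single creep-recovery cycle to percent recovery and non-recoverable creep compliance (Jnr). Real MSCR testing averages ten cycles at each stress level and follows the governing specification; the strain values and stress level here are invented.

```python
# Hedged sketch: reduce one MSCR creep-recovery cycle to %recovery and Jnr.
# Not a full specification implementation; strains and stress are invented.
def mscr_cycle_metrics(strain_start, strain_end_creep, strain_end_recovery, stress_kpa):
    """Cumulative shear strains at the start of the cycle, the end of the 1 s
    creep segment, and the end of the 9 s recovery segment."""
    creep_strain = strain_end_creep - strain_start
    nonrecovered = strain_end_recovery - strain_start
    percent_recovery = (creep_strain - nonrecovered) / creep_strain * 100.0
    jnr = nonrecovered / stress_kpa          # non-recoverable compliance, 1/kPa
    return percent_recovery, jnr

rec, jnr = mscr_cycle_metrics(strain_start=0.00, strain_end_creep=0.80,
                              strain_end_recovery=0.55, stress_kpa=3.2)
print(f"%recovery = {rec:.1f}, Jnr = {jnr:.3f} 1/kPa")
```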
Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R
2017-10-23
The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged in the last decade between different fields of study, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results obtained and the quantification of its statistical reliability are lost. Despite its relevance, there is not a standard form for the in vitro endpoint tumor cell lines' assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. The analysis of all the specific problems associated with the diverse nature of the available TCLA used is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as the convenient approximation to test the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of all the different parametric criteria derived from the model, which enable the quantification of the dose-response drug-effects, are extensively discussed. Therefore, model and standard criteria for easily performing the comparisons between different compounds are established. The advantages include a simple application, provision of parametric estimations that characterize the response as standard criteria, economization of experimental effort and enabling rigorous comparisons among the effects of different compounds and experimental approaches. In all experimental data fitted, the calculated parameters were always statistically significant, the equations proved to be consistent and the correlation coefficient of determination was, in most of the cases, higher than 0.98.
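As an illustration of the fitting step described above, the sketch below fits a Weibull-type cumulative dose-response curve to cytotoxicity readings and extracts a parametric half-maximal dose. The exact parameterization used by the authors is not given in the abstract, so this form, and the data, are illustrative assumptions.

```python
# Sketch: fit a Weibull-type dose-response curve and extract a half-maximal dose.
# Parameterization and data are assumptions, not the authors' exact model.
import numpy as np
from scipy.optimize import curve_fit

def weibull_response(dose, k_max, scale, shape):
    """Fraction of the maximal cytotoxic effect as a function of dose."""
    return k_max * (1.0 - np.exp(-(dose / scale) ** shape))

dose = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)          # e.g. ug/mL
effect = np.array([0.03, 0.07, 0.16, 0.34, 0.58, 0.78, 0.90, 0.95])

params, cov = curve_fit(weibull_response, dose, effect, p0=[1.0, 8.0, 1.0])
k_max, scale, shape = params
ed50 = scale * np.log(2.0) ** (1.0 / shape)    # dose giving half of k_max
print("fitted k_max, scale, shape:", np.round(params, 3))
print("dose giving half of k_max:", round(ed50, 2))
```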
Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model
NASA Astrophysics Data System (ADS)
Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho
2016-06-01
Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence black pod disease incidence, and it is important to quantify the variation in black pod disease due to the effect of climate variables. Time series analysis, especially the auto-regressive moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control the disease. However, the ARIMA model does not capture some turning points in cocoa black pod incidence. In order to improve forecasting performance, other explanatory variables such as climate variables should be included in the ARIMA model, giving an ARIMAX model. Therefore, this paper studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings showed that the ARIMAX model using MA(1) and relative humidity at a lag of 7 days, RHt-7, gave a better R-squared value than the ARIMA model using MA(1) alone, and could be used to forecast black pod incidence to help farmers determine the timely application of fungicide spraying and cultural practices to control the disease.
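A minimal sketch of an ARIMAX-style fit in the spirit of this abstract follows: an MA(1) model for incidence with lag-7-day relative humidity as an exogenous regressor. The data are synthetic placeholders, and the statsmodels SARIMAX class is assumed to be available.

```python
# Minimal ARIMAX sketch: MA(1) plus lag-7 relative humidity as an exogenous term.
# Incidence/humidity series are synthetic; statsmodels is assumed to be installed.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
n = 200
humidity = 80 + 5 * np.sin(np.arange(n) / 10.0) + rng.normal(0, 2, n)
incidence = 10 + 0.3 * np.roll(humidity, 7) + rng.normal(0, 1, n)   # synthetic lag-7 link

df = pd.DataFrame({"incidence": incidence,
                   "rh_lag7": pd.Series(humidity).shift(7)}).dropna()

# ARIMAX = ARIMA(0,0,1) plus the exogenous lagged-humidity regressor
model = SARIMAX(df["incidence"], exog=df[["rh_lag7"]], order=(0, 0, 1), trend="c")
result = model.fit(disp=False)
print(result.summary().tables[1])
```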
Drosophila Melanogaster as an Emerging Translational Model of Human Nephrolithiasis
Miller, Joe; Chi, Thomas; Kapahi, Pankaj; Kahn, Arnold J.; Kim, Man Su; Hirata, Taku; Romero, Michael F.; Dow, Julian A.T.; Stoller, Marshall L.
2013-01-01
Purpose The limitations imposed by human clinical studies and mammalian models of nephrolithiasis have hampered the development of effective medical treatments and preventative measures for decades. The simple but elegant Drosophila melanogaster is emerging as a powerful translational model of human disease, including nephrolithiasis and may provide important information essential to our understanding of stone formation. We present the current state of research using D. melanogaster as a model of human nephrolithiasis. Materials and Methods A comprehensive review of the English language literature was performed using PUBMED. When necessary, authoritative texts on relevant subtopics were consulted. Results The genetic composition, anatomic structure and physiologic function of Drosophila Malpighian tubules are remarkably similar to those of the human nephron. The direct effects of dietary manipulation, environmental alteration, and genetic variation on stone formation can be observed and quantified in a matter of days. Several Drosophila models of human nephrolithiasis, including genetically linked and environmentally induced stones, have been developed. A model of calcium oxalate stone formation is among the most recent fly models of human nephrolithiasis. Conclusions The ability to readily manipulate and quantify stone formation in D. melanogaster models of human nephrolithiasis presents the urologic community with a unique opportunity to increase our understanding of this enigmatic disease. PMID:23500641
Modeling level-of-safety for bus stops in China.
Ye, Zhirui; Wang, Chao; Yu, Yongbo; Shi, Xiaomeng; Wang, Wei
2016-08-17
Safety performance at bus stops is generally evaluated by using historical traffic crash data or traffic conflict data. However, in China, it is quite difficult to obtain such data mainly due to the lack of traffic data management and organizational issues. In light of this, the primary objective of this study is to develop a quantitative approach to evaluate bus stop safety performance. The concept of level-of-safety for bus stops is introduced and corresponding models are proposed to quantify safety levels, which consider conflict points, traffic factors, geometric characteristics, traffic signs and markings, pavement conditions, and lighting conditions. Principal component analysis and k-means clustering methods were used to model and quantify safety levels for bus stops. A case study was conducted to show the applicability of the proposed model with data collected from 46 samples for the 7 most common types of bus stops in China, using 32 of the samples for modeling and 14 samples for illustration. Based on the case study, 6 levels of safety for bus stops were defined. Finally, a linear regression analysis between safety levels and the number of traffic conflicts showed that they had a strong relationship (R2 value of 0.908). The results indicated that the method was well validated and could be practically used for the analysis and evaluation of bus stop safety in China. The proposed model was relatively easy to implement without the requirement of traffic crash data and/or traffic conflict data. In addition, with the proposed method, it was feasible to evaluate countermeasures to improve bus stop safety (e.g., exclusive bus lanes).
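The quantification pipeline described above (standardize indicators, reduce with principal component analysis, group into safety levels with k-means) can be sketched as below. Feature names and values are invented; the paper's exact indicator set and six-level mapping are not reproduced.

```python
# Sketch of the level-of-safety pipeline: standardize indicators, PCA, k-means.
# Indicators and values are invented stand-ins for the paper's data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# hypothetical indicators for 46 stops: conflict points, peak traffic volume,
# lane width (m), marking score, pavement score, lighting score
X = np.column_stack([
    rng.integers(2, 12, 46),
    rng.normal(900, 250, 46),
    rng.normal(3.3, 0.3, 46),
    rng.integers(1, 6, 46),
    rng.integers(1, 6, 46),
    rng.integers(1, 6, 46),
]).astype(float)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
print("stops per safety level:", np.bincount(labels))
```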
Full field reservoir modelling of Central Oman gas/condensate fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leemput, L.E.C. van de; Bertram, D.A.; Bentley, M.R.
1995-12-31
Gas reserves sufficient for a major export scheme have been found in Central Oman. To support appraisal and development planning of the gas/condensate fields, a dedicated, multi-disciplinary study team comprising both surface and subsurface engineers was assembled. The team fostered a high level of awareness of cross-disciplinary needs and challenges, resulting in timely data acquisition and a good fit between the various work-activities. The foundation of the subsurface contributions was a suite of advanced full-field reservoir models which: (1) provided production and well requirement forecasts; (2) quantified the impact of uncertainties on field performance and project costs; (3) supported the appraisal campaign; (4) optimised the field development plan; and (5) derived recovery factor ranges for reserves estimates. Geological/petrophysical uncertainties were quantified using newly-developed, 3-D probabilistic modelling tools. An efficient computing environment allowed a large number of sensitivities to be run in a timely, cost-effective manner. The models also investigated a key concern in gas/condensate fields: well impairment due to near-well condensate precipitation. Its impact was assessed using measured, capillary number-dependent, relative permeability curves. Well performance ranges were established on the basis of Equation of State single-well simulations, and translated into the volatile oil full-field models using pseudo relative permeability curves for the wells. The models used the sparse available data in an optimal way and, as part of the field development plan, sustained confidence in the reserves estimates and the project, which is currently in the project specification phase.
Plasma brake model for preliminary mission analysis
NASA Astrophysics Data System (ADS)
Orsini, Leonardo; Niccolai, Lorenzo; Mengali, Giovanni; Quarta, Alessandro A.
2018-03-01
Plasma brake is an innovative propellantless propulsion system concept that exploits the Coulomb collisions between a charged tether and the ions in the surrounding environment (typically, the ionosphere) to generate an electrostatic force orthogonal to the tether direction. Previous studies on the plasma brake effect have emphasized the existence of a number of different parameters necessary to obtain an accurate description of the propulsive acceleration from a physical viewpoint. The aim of this work is to discuss an analytical model capable of estimating, with the accuracy required by a preliminary mission analysis, the performance of a spacecraft equipped with a plasma brake in a (near-circular) low Earth orbit. The simplified mathematical model is first validated through numerical simulations, and is then used to evaluate the plasma brake performance in some typical mission scenarios, in order to quantify the influence of the system parameters on the mission performance index.
Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.
This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns in a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. These data also provide an archival benchmark dataset that can be used by the radiation detection community. The data reported here span four data collection campaigns conducted between May 2015 and September 2016.
Stabilization of benthic algal biomass in a temperate stream draining agroecosystems.
Ford, William I; Fox, James F
2017-01-01
Results of the present study quantified carbon sequestration due to algal stabilization in low order streams, which has not been considered previously in carbon stream ecosystem studies. The authors used empirical mode decomposition of an 8-year carbon elemental and isotope dataset to quantify carbon accrual and fingerprint carbon derived from algal stabilization. The authors then applied a calibrated, process-based stream carbon model (ISOFLOC) that elicits further evidence of algal stabilization. Data and modeling results suggested that processes of shielding and burial during an extreme hydrologic event enhance algal stabilization. Given that previous studies assumed stream algae are turned over or sloughed downstream, the authors performed scenario simulations of the calibrated model in order to assess how changing environmental conditions might impact algae stabilization within the stream. Results from modeling scenarios showed an increase in algal stabilization as mean annual water temperature increases, ranging from 0 to 0.04 tC km^-2 °C^-1 for the study watershed. The dependence of algal stabilization on temperature highlighted the importance of accounting for benthic fate of carbon in streams under projected warming scenarios. This finding contradicts the evolving paradigm that net efflux of CO2 from streams increases with increasing temperatures. Results also quantified sloughed algae that is transported and potentially stabilized downstream and showed that benthos-derived sloughed algae was on the same order of magnitude, and at times greater, than phytoplankton within downstream water bodies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M
2015-03-01
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
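Two quantities typically examined at external validation of a diagnostic model, the c-statistic (discrimination) and the calibration slope of the linear predictor in the validation sample, can be computed as in the sketch below. The linear predictor and outcomes are simulated stand-ins for a real validation data set.

```python
# Sketch of external-validation summaries: c-statistic and calibration slope.
# The linear predictor and outcomes are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
linear_predictor = rng.normal(-1.0, 1.2, n)          # log-odds from an existing model
outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * linear_predictor)))  # miscalibrated truth

c_stat = roc_auc_score(outcome, linear_predictor)
calib = sm.Logit(outcome, sm.add_constant(linear_predictor)).fit(disp=False)
intercept, slope = calib.params
print(f"c-statistic = {c_stat:.2f}")
print(f"calibration intercept = {intercept:.2f}, slope = {slope:.2f}  (ideal: 0 and 1)")
```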
Localized Smart-Interpretation
NASA Astrophysics Data System (ADS)
Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom
2014-05-01
The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also of ensuring consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model, the geologist needs to rely on the geophysical data. The problem, however, is that the amount of geophysical data in many cases is so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that a lot of the information available from the geophysical surveys is unexploited, which is a problem, because the resulting geological model does not reach its full potential and hence is less trustworthy. We suggest an approach to geological modeling that 1. allows all geophysical data to be considered when building the geological model, 2. is fast, and 3. allows quantification of geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologists interpret, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc... The parameter d reflects an actual interpretation, such as for example the depth to the base of a ground water reservoir. First we infer a statistical model f(d,m), by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation; [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds with the interpretation, the number of interpreted datapoints from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f(d,m) has been successfully inferred, we are able to simulate how the geological expert would perform an interpretation given some external information m, through f(d|m). We will demonstrate this method applied to geological interpretation and densely sampled airborne electromagnetic data. In short, our goal is to build a statistical model describing how a geological expert performs geological interpretation given some geophysical data. We then wish to use this statistical model to perform semi-automatic interpretation, everywhere where such geophysical data exist, in a manner consistent with the choices made by a geological expert. Benefits of such a statistical model are that 1. it provides a quantification of how a geological expert performs interpretation based on available diverse data, 2. all available geophysical information can be used, and 3. it allows much faster interpretation of large data sets.
Role of Nitric Oxide in MPTP-Induced Dopaminergic Neuron Degeneration
2004-09-01
...peroxynitrite exposure, that of dityrosine and nitrotyrosine by gas chromatography with mass spectrometry. Quantification will be performed ... following MPTP administration by quantifying the two main products of peroxynitrite oxidation of tyrosine, dityrosine and nitrotyrosine, using gas ... effectiveness as a neuroprotective agent was demonstrated against experimental brain ischaemia (21) and disease progression in the R6/2 mouse model of ...
RESPIRATORY DYSFUNCTION IN UNSEDATED DOGS WITH GOLDEN RETRIEVER MUSCULAR DYSTROPHY
DeVanna, Justin C.; Kornegay, Joe N.; Bogan, Daniel J.; Bogan, Janet R.; Dow, Jennifer L.; Hawkins, Eleanor C.
2013-01-01
Golden retriever muscular dystrophy (GRMD) is a well-established model of Duchenne muscular dystrophy. The value of this model would be greatly enhanced with practical tools to monitor progression of respiratory dysfunction during treatment trials. Arterial blood gas analysis, tidal breathing spirometry, and respiratory inductance plethysmography (RIP) were performed to determine if quantifiable abnormalities could be identified in unsedated, untrained, GRMD dogs. Results from 11 dogs with a mild phenotype of GRMD and 11 age-matched carriers were compared. Arterial blood gas analysis was successfully performed in all dogs, spirometry in 21 of 22 (95%) dogs, and RIP in 18 of 20 (90%) dogs. Partial pressure of carbon dioxide and bicarbonate concentration were higher in GRMD dogs. Tidal breathing peak expiratory flows were markedly higher in GRMD dogs. Abnormal abdominal motion was present in 7 of 10 (70%) GRMD dogs. Each technique provided objective, quantifiable measures that will be useful for monitoring respiratory function in GRMD dogs during clinical trials while avoiding the influence of sedation on results. Increased expiratory flows and the pattern of abdominal breathing are novel findings, not reported in people with Duchenne muscular dystrophy, and might be a consequence of hyperinflation. PMID:24295812
Pattern formation study of dissolution-driven convection
NASA Astrophysics Data System (ADS)
Aljahdaly, Noufe; Hadji, Layachi
2017-11-01
A three-dimensional pattern formation analysis is performed to investigate the dissolution-driven convection induced by the sequestration of carbon dioxide. We model this situation by considering a Rayleigh-Taylor-like base state consisting of carbon-rich heavy brine overlying a carbon-free layer and seek, through a linear stability analysis, the instability threshold conditions as a function of the thickness of the CO2-rich brine layer. Our model accounts for carbon diffusion anisotropy, permeability dependence on depth and the presence of a first-order chemical reaction between the carbon-rich brine and host mineralogy. A small-amplitude nonlinear stability analysis is performed to isolate the preferred regular pattern and solute flux conditions at the interface. The latter are used to derive equations for the time and space evolution of the interface as it migrates upward. We quantify the terminal time when the interface reaches the top boundary as a function of the type of solute boundary conditions at the top boundary, thereby also quantifying the beginning of the shutdown regime. The analysis will also shed light on the development of the three-dimensional fingering pattern that is observed when the constant flux regime is attained.
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
NASA Astrophysics Data System (ADS)
Lazzaro, G.; Soulsby, C.; Tetzlaff, D.; Botter, G.
2017-03-01
Atlantic salmon is an economically and ecologically important fish species, whose survival is dependent on successful spawning in headwater rivers. Streamflow dynamics often have a strong control on spawning because fish require sufficiently high discharges to move upriver and enter spawning streams. However, these streamflow effects are modulated by biological factors such as the number and the timing of returning fish in relation to the annual spawning window in the fall/winter. In this paper, we develop and apply a novel probabilistic approach to quantify these interactions using a parsimonious outflux-influx model linking the number of female salmon emigrating (i.e., outflux) and returning (i.e., influx) to a spawning stream in Scotland. The model explicitly accounts for the interannual variability of the hydrologic regime and the hydrological connectivity of spawning streams to main rivers. Model results are evaluated against a detailed long-term (40 years) hydroecological data set that includes annual fluxes of salmon, allowing us to explicitly assess the role of discharge variability. The satisfactory model results show quantitatively that hydrologic variability contributes to the observed dynamics of salmon returns, with a good correlation between the positive (negative) peaks in the immigration data set and the exceedance (nonexceedance) probability of a threshold flow (0.3 m3/s). Importantly, model performance deteriorates when the interannual variability of flow regime is disregarded. The analysis suggests that flow thresholds and hydrological connectivity for spawning return represent a quantifiable and predictable feature of salmon rivers, which may be helpful in decision making where flow regimes are altered by water abstractions.
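The hydrologic ingredient highlighted by these model results, the probability that discharge exceeds the threshold needed for fish to enter the spawning stream during the annual spawning window, can be computed as in the sketch below. The 0.3 m3/s threshold comes from the abstract; the daily-flow series and window length are synthetic.

```python
# Sketch: exceedance probability of a threshold flow within each year's spawning
# window. Threshold from the abstract; flows and window length are synthetic.
import numpy as np

rng = np.random.default_rng(7)
threshold = 0.3                                    # m3/s, from the abstract
n_years, window_days = 40, 90                      # hypothetical fall/winter window

# Synthetic lognormal daily flows for each year's spawning window
flows = rng.lognormal(mean=np.log(0.2), sigma=0.8, size=(n_years, window_days))

exceedance = (flows > threshold).mean(axis=1)      # fraction of window above threshold
print("mean exceedance probability:", round(exceedance.mean(), 2))
print("years with low connectivity (<10% of window):", int((exceedance < 0.1).sum()))
```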
NASA Astrophysics Data System (ADS)
Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.
2011-12-01
Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher-order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a global sensitivity analysis (GSA) on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher-order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: Mass density of live and dead roots found in the upper inch of soil (C factor), slope angle (L and S factor), and percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a vital ecosystem service provider - soil loss reduction. Concurrently, progress has already been made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models, such as RUSLE, can help managers quantify the effect of specific land use changes. Moreover, effective land management of vegetation has other important benefits, such as bundled ecosystem services (e.g. pollination, habitat connectivity, etc.) and improvements in communities' livelihoods.
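A compact sketch of the GSA workflow described above follows: sample RUSLE factors over wide ranges, compute soil loss A = R*K*LS*C*P, and rank factor importance with a random forest so higher-order interactions are included. The sampling ranges are invented, and the factor-level (rather than 25-parameter) version of RUSLE is used for brevity.

```python
# Sketch of a global sensitivity screening of RUSLE factors with a random forest.
# Sampling ranges are invented; only the six-factor form of the equation is used.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
n = 5000
R  = rng.uniform(500, 12000, n)     # rainfall-runoff erosivity
K  = rng.uniform(0.01, 0.06, n)     # soil erodibility
LS = rng.uniform(0.1, 20.0, n)      # slope length/steepness
C  = rng.uniform(0.001, 1.0, n)     # cover management
P  = rng.uniform(0.1, 1.0, n)       # support practices

A = R * K * LS * C * P              # long-term average annual soil loss

X = np.column_stack([R, K, LS, C, P])
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, A)
for name, imp in zip(["R", "K", "LS", "C", "P"], forest.feature_importances_):
    print(f"{name}: importance = {imp:.2f}")
```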
Storm water infiltration in a monitored green roof for hydrologic restoration.
Palla, A; Sansalone, J J; Gnecco, I; Lanza, L G
2011-01-01
The objectives of this study are to provide detailed information about green roof performance in the Mediterranean climate (retained volume, peak flow reduction, runoff delay) and to identify a suitable modelling approach for describing the associated hydrologic response. Data collected during a 13-month monitoring campaign and a seasonal monitoring campaign (September-December 2008) at the green roof experimental site of the University of Genova (Italy) are presented together with results obtained in quantifying the green roof hydrologic performance. In order to examine the green roof hydrologic response, the SWMS_2D model, that solves the Richards' equation for two-dimensional saturated-unsaturated water flow, has been implemented. Modelling results confirm the suitability of the SWMS_2D model to properly describe the hydrologic response of the green roofs. The model adequately reproduces the hydrographs; furthermore, the predicted soil water content profile generally matches the observed values along a vertical profile where measurements are available.
NASA Technical Reports Server (NTRS)
Campbell, Anthony B.; Nair, Satish S.; Miles, John B.; Iovine, John V.; Lin, Chin H.
1998-01-01
The present NASA space suit (the Shuttle EMU) is a self-contained environmental control system, providing life support, environmental protection, earth-like mobility, and communications. This study considers the thermal dynamics of the space suit as they relate to astronaut thermal comfort control. A detailed dynamic lumped capacitance thermal model of the present space suit is used to analyze the thermal dynamics of the suit with observations verified using experimental and flight data. Prior to using the model to define performance characteristics and limitations for the space suit, the model is first evaluated and improved. This evaluation includes determining the effect of various model parameters on model performance and quantifying various temperature prediction errors in terms of heat transfer and heat storage. The observations from this study are being utilized in two future design efforts, automatic thermal comfort control design for the present space suit and design of future space suit systems for Space Station, Lunar, and Martian missions.
The confluence model: birth order as a within-family or between-family dynamic?
Zajonc, R B; Sulloway, Frank J
2007-09-01
The confluence model explains birth-order differences in intellectual performance by quantifying the changing dynamics within the family. Wichman, Rodgers, and MacCallum (2006) claimed that these differences are a between-family phenomenon--and hence are not directly related to birth order itself. The study design and analyses presented by Wichman et al. nevertheless suffer from crucial shortcomings, including their use of unfocused tests, which cause statistically significant trends to be overlooked. In addition, Wichman et al. treated birth-order effects as a linear phenomenon thereby ignoring the confluence model's prediction that these two samples may manifest opposing results based on age. This article cites between- and within-family data that demonstrate systematic birth-order effects as predicted by the confluence model. The corpus of evidence invoked here offers strong support for the assumption of the confluence model that birth-order differences in intellectual performance are primarily a within-family phenomenon.
Interactive design and analysis of future large spacecraft concepts
NASA Technical Reports Server (NTRS)
Garrett, L. B.
1981-01-01
An interactive computer-aided design program used to perform systems-level design and analysis of large spacecraft concepts is presented. Emphasis is on rapid design, analysis of integrated spacecraft, and automatic spacecraft modeling for lattice structures. Capabilities and performance of multidiscipline applications modules, the executive and data management software, and graphics display features are reviewed. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of Earth-orbiting spacecraft with relative ease. Data generated in the design, analysis, and performance evaluation of an Earth-orbiting large-diameter antenna satellite are used to illustrate current capabilities. Computer run-time statistics for the individual modules quantify the speed at which modeling, analysis, and design evaluation of integrated spacecraft concepts are accomplished in a user-interactive computing environment.
Mapping Tamarix: New techniques for field measurements, spatial modeling and remote sensing
NASA Astrophysics Data System (ADS)
Evangelista, Paul H.
Native riparian ecosystems throughout the southwestern United States are being altered by the rapid invasion of Tamarix species, commonly known as tamarisk. The effects that tamarisk has on ecosystem processes have been poorly quantified largely due to inadequate survey methods. I tested new approaches for field measurements, spatial models and remote sensing to improve our ability to measure and map tamarisk occurrence, and to provide new methods that will assist in management and control efforts. Examining allometric relationships between basal cover and height measurements collected in the field, I was able to produce several models to accurately estimate aboveground biomass. The two best models explained 97% of the variance (R2 = 0.97). Next, I tested five commonly used predictive spatial models to identify which methods performed best for tamarisk using different types of data collected in the field. Most spatial models performed well for tamarisk, with logistic regression performing best with an Area Under the receiver-operating characteristic Curve (AUC) of 0.89 and overall accuracy of 85%. The results of this study also suggested that models may not perform equally with different invasive species, and that results may be influenced by species traits and their interaction with environmental factors. Lastly, I tested several approaches to improve the ability to remotely sense tamarisk occurrence. Using Landsat7 ETM+ satellite scenes and derived vegetation indices for six different months of the growing season, I examined their ability to detect tamarisk individually (single-scene analyses) and collectively (time-series). My results showed that time-series analyses were best suited to distinguish tamarisk from other vegetation and landscape features (AUC = 0.96, overall accuracy = 90%). June, August and September were the best months to detect unique phenological attributes that are likely related to the species' extended growing season and green-up during peak growing months. These studies demonstrate that new techniques can further our understanding of tamarisk's impacts on ecosystem processes, predict potential distribution and new invasions, and improve our ability to detect occurrence using remote sensing techniques. Collectively, the results of my studies may increase our ability to map tamarisk distributions and better quantify its impacts over multiple spatial and temporal scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy; Lunden, Melissa; Wilson, Daniel
2012-08-01
Air pollution levels in Ulaanbaatar, Mongolia's capital, are among the highest in the world. A primary source of this pollution is emissions from traditional coal-burning space heating stoves used in the Ger (tent) regions around Ulaanbaatar. Significant investment has been made to replace traditional heating stoves with improved low-emission, high-efficiency stoves. Testing performed to support selection of replacement stoves or to optimize performance may not be representative of true field performance of the improved stoves. Field observations and lab measurements indicate that performance is impacted, often adversely, by how stoves are actually being used in the field. The objective of this project is to identify factors that influence stove emissions under typical field operating conditions and to quantify the impact of these factors. A highly instrumented stove testing facility was constructed to allow rapid and precise adjustment of factors influencing stove performance. Tests were performed using one of the improved stove models currently available in Ulaanbaatar. Complete burn cycles were conducted with Nailakh coal from the Ulaanbaatar region using various startup parameters, refueling conditions, and fuel characteristics. Measurements were collected simultaneously from undiluted chimney gas, diluted gas drawn directly from the chimney, and plume gas collected from a dilution tunnel above the chimney. CO, CO2, O2, temperature, pressure, and particulate matter (PM) were measured. We found that both refueling events and coal characteristics strongly influenced PM emissions and stove performance. Start-up and refueling events lead to increased PM emissions, with more than 98% of PM mass emitted during the 20% of the burn where coal ignition occurs. CO emissions are distributed more evenly over the burn cycle, peaking both during ignition and late in the burn cycle. We anticipate these results being useful for quantifying public health outcomes related to the distribution of improved stoves and for identifying opportunities for improving and sustaining performance of the new stoves.
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units, and green roofs; these improvements were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
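The hydrograph comparison described above typically reduces to goodness-of-fit statistics such as the Nash-Sutcliffe efficiency and a total-volume error. The following is a minimal sketch of those two calculations on illustrative outflow series; it is not SWMM or PEST++ code, and the numbers are invented.

```python
# Hedged sketch of an outflow-hydrograph comparison: Nash-Sutcliffe efficiency
# and total-volume error between observed and simulated series.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def volume_error_pct(observed, simulated, dt_s=60.0):
    v_obs = np.trapz(observed, dx=dt_s)   # integrate flow over time
    v_sim = np.trapz(simulated, dx=dt_s)
    return 100.0 * (v_sim - v_obs) / v_obs

obs = np.array([0.0, 0.5, 2.0, 3.5, 2.2, 1.0, 0.3, 0.0])   # outflow, L/s (illustrative)
sim = np.array([0.0, 0.4, 1.8, 3.8, 2.5, 0.9, 0.2, 0.0])
print("NSE:", round(nse(obs, sim), 3),
      "volume error (%):", round(volume_error_pct(obs, sim), 1))
```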
Cognitive performance modeling based on general systems performance theory.
Kondraske, George V
2010-01-01
General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
NASA Astrophysics Data System (ADS)
Poppett, Claire; Allington-Smith, Jeremy
2010-07-01
We investigate the FRD performance of a 150 μm core fibre for its suitability to the SIDE project.1 This work builds on our previous work2 (Paper 1), where we examined the dependence of FRD on length in fibres with a core size of 100 μm and proposed a new multi-component model to explain the results. In order to predict the FRD characteristics of a fibre, the most commonly used model is an adaptation of the Gloge model by Carrasco and Parry,3 which quantifies the number of scattering defects within an optical fibre using a single parameter, d0. The model predicts many trends which are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However, the model also predicts a strong dependence of FRD on length that is not seen experimentally. By adapting the single-fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data, we find that polishing the fibre causes a small increase in stress to be induced in the end of the fibre compared to a simple cleave technique.
Robust Bayesian Experimental Design for Conceptual Model Discrimination
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2015-12-01
A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as uncertainty sources on potential pumping and observation locations.
Technology developments integrating a space network communications testbed
NASA Technical Reports Server (NTRS)
Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee
2006-01-01
As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enables its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions.
Understanding neuromotor strategy during functional upper extremity tasks using symbolic dynamics.
Nathan, Dominic E; Guastello, Stephen J; Prost, Robert W; Jeutter, Dean C
2012-01-01
The ability to model and quantify brain activation patterns that pertain to natural neuromotor strategy of the upper extremities during functional task performance is critical to the development of therapeutic interventions such as neuroprosthetic devices. The mechanisms of information flow, activation sequence and patterns, and the interaction between anatomical regions of the brain that are specific to movement planning, intention and execution of voluntary upper extremity motor tasks were investigated here. This paper presents a novel method using symbolic dynamics (orbital decomposition) and nonlinear dynamic tools of entropy, self-organization and chaos to describe the underlying structure of activation shifts in regions of the brain that are involved with the cognitive aspects of functional upper extremity task performance. Several questions were addressed: (a) How is it possible to distinguish deterministic or causal patterns of activity in brain fMRI from those that are really random or non-contributory to the neuromotor control process? (b) Can the complexity of activation patterns over time be quantified? (c) What are the optimal ways of organizing fMRI data to preserve patterns of activation, activation levels, and extract meaningful temporal patterns as they evolve over time? Analysis was performed using data from a custom developed time resolved fMRI paradigm involving human subjects (N=18) who performed functional upper extremity motor tasks with varying time delays between the onset of intention and onset of actual movements. The results indicate that there is structure in the data that can be quantified through entropy and dimensional complexity metrics and statistical inference, and furthermore, orbital decomposition is sensitive in capturing the transition of states that correlate with the cognitive aspects of functional task performance.
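One of the complexity metrics named above, entropy of a symbolized state sequence, can be sketched as follows. This is a minimal Shannon-entropy calculation on a made-up symbol stream, not the authors' orbital-decomposition implementation.

```python
# Hedged sketch: Shannon entropy of a symbolized activation sequence, one of
# the complexity metrics used alongside orbital decomposition. The symbol
# stream below is an invented stand-in for coded fMRI states.
import math
from collections import Counter

def shannon_entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Example: a sequence of discrete brain-state labels over time
states = list("AABABCCABBACCCABAB")
print("Shannon entropy (bits/symbol):", round(shannon_entropy(states), 3))
```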
Wylie, Bruce K.; Rigge, Matthew B.; Brisco, Brian; Mrnaghan, Kevin; Rover, Jennifer R.; Long, Jordan
2014-01-01
A warming climate influences boreal forest productivity, dynamics, and disturbance regimes. We used ecosystem models and 250 m satellite Normalized Difference Vegetation Index (NDVI) data averaged over the growing season (GSN) to model current, and estimate future, ecosystem performance. We modeled Expected Ecosystem Performance (EEP), or anticipated productivity, in undisturbed stands over the 2000–2008 period from a variety of abiotic data sources, using a rule-based piecewise regression tree. The EEP model was applied to a future climate ensemble A1B projection to quantify expected changes to mature boreal forest performance. Ecosystem Performance Anomalies (EPA) were identified as the residuals of the EEP and GSN relationship and represent departures from expected performance conditions. These performance data were used to monitor successional events following fire. Results suggested that maximum EPA occurs 30–40 years following fire, and deciduous stands generally have higher EPA than coniferous stands. Mean undisturbed EEP is projected to increase 5.6% by 2040 and 8.7% by 2070, suggesting an increased deciduous component in boreal forests. Our results contribute to the understanding of boreal forest successional dynamics and its response to climate change. This information enables informed decisions to prepare for, and adapt to, climate change in the Yukon River Basin forest.
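The anomaly logic, fitting an expected-performance model from abiotic drivers and treating residuals of observed GSN as EPA, can be sketched as below. A decision-tree regressor stands in for the rule-based piecewise regression tree, and all inputs are synthetic assumptions rather than the study's data.

```python
# Hedged sketch: fit a model of expected performance (EEP) from abiotic
# drivers, then take residuals of observed growing-season NDVI (GSN) as
# Ecosystem Performance Anomalies (EPA). Inputs are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 500
abiotic = rng.normal(size=(n, 4))              # e.g., climate, soils, terrain
gsn = 0.5 + 0.1 * abiotic[:, 0] - 0.05 * abiotic[:, 1] + rng.normal(0, 0.02, n)

eep_model = DecisionTreeRegressor(max_depth=4).fit(abiotic, gsn)
eep = eep_model.predict(abiotic)               # expected ecosystem performance
epa = gsn - eep                                # performance anomalies
print("mean |EPA|:", round(np.abs(epa).mean(), 4))
```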
Kwon, Seolim; Wadholm, Robert R; Carmody, Laurie E
2014-06-01
The American Society of Training and Development's (ASTD) Certified Professional in Learning and Performance (CPLP) program is purported to be based on the ASTD's competency model, a model which outlines foundational competencies, roles, and areas of expertise in the field of training and performance improvement. This study seeks to uncover the relationship between the competency model and the CPLP knowledge exam questions and work product submissions (two of the major instruments used to test for competency of CPLP applicants). A mixed qualitative-quantitative approach is used to identify themes, quantify relationships, and assess questions and guidelines. Multiple raters independently analyzed the data and identified key themes, and Fleiss' Kappa coefficient was used in measuring inter-rater agreement. The study concludes that several discrepancies exist between the competency model and the knowledge exam and work product submission guidelines. Recommendations are given for possible improvement of the CPLP program.
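Fleiss' kappa, the agreement statistic named above, can be computed directly from a matrix of category counts per item. The sketch below is a standard textbook implementation on an invented example, not the study's data.

```python
# Hedged sketch: Fleiss' kappa for multiple raters assigning items to
# categories. The ratings matrix is an invented example.
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) array; each row sums to n_raters."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_items * n_raters)        # category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# 5 exam questions rated by 4 raters into 3 competency themes (hypothetical)
ratings = [[4, 0, 0], [2, 2, 0], [0, 3, 1], [1, 1, 2], [0, 0, 4]]
print("Fleiss' kappa:", round(fleiss_kappa(ratings), 3))
```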
Current Status in Cavitation Modeling
NASA Technical Reports Server (NTRS)
Singhal, Ashok K.; Avva, Ram K.
1993-01-01
Cavitation is a common problem for many engineering devices in which the main working fluid is in liquid state. In turbomachinery applications, cavitation generally occurs on the inlet side of pumps. The deleterious effects of cavitation include: lowered performance, load asymmetry, erosion and pitting of blade surfaces, vibration and noise, and reduction of the overall machine life. Cavitation models in use today range from rather crude approximations to sophisticated bubble dynamics models. Details about bubble inception, growth and collapse are relevant to the prediction of blade erosion, but are not necessary to predict the performance of pumps. An engineering model of cavitation is proposed to predict the extent of cavitation and performance. The vapor volume fraction is used as an indicator variable to quantify cavitation. A two-phase flow approach is employed with the assumption of thermal equilibrium between liquid and vapor. At present, velocity slip between the two phases is selected. Preliminary analyses of 2D flows show qualitatively correct results.
Novikov Engine with Fluctuating Heat Bath Temperature
NASA Astrophysics Data System (ADS)
Schwalbe, Karsten; Hoffmann, Karl Heinz
2018-04-01
The Novikov engine is a model for heat engines that takes the irreversible character of heat fluxes into account. Using this model, the maximum power output as well as the corresponding efficiency of the heat engine can be deduced, leading to the well-known Curzon-Ahlborn efficiency. The classical model assumes constant heat bath temperatures, which is not a reasonable assumption in the case of fluctuating heat sources. Therefore, in this article the influence of stochastic fluctuations of the hot heat bath's temperature on the optimal performance measures is investigated. For this purpose, a Novikov engine with fluctuating heat bath temperature is considered. Doing so, a generalization of the Curzon-Ahlborn efficiency is found. The results can help to quantify how the distribution of fluctuating quantities affects the performance measures of power plants.
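For reference, the classical Curzon-Ahlborn efficiency at maximum power is eta = 1 - sqrt(T_c/T_h). The sketch below evaluates it for a constant hot-bath temperature and, as a crude stand-in for the generalization discussed above, averages it over Monte Carlo samples of a fluctuating hot-bath temperature; the distribution parameters are arbitrary assumptions.

```python
# Hedged sketch: Curzon-Ahlborn efficiency at maximum power and its Monte
# Carlo average over a fluctuating hot-bath temperature. Not the paper's
# analytical generalization; numbers are illustrative only.
import numpy as np

def curzon_ahlborn(t_cold, t_hot):
    return 1.0 - np.sqrt(t_cold / t_hot)

rng = np.random.default_rng(2)
t_cold = 300.0                                    # K
t_hot_samples = rng.normal(900.0, 50.0, 100_000)  # fluctuating hot bath, K
eta_fixed = curzon_ahlborn(t_cold, 900.0)
eta_fluct = curzon_ahlborn(t_cold, t_hot_samples).mean()
print("eta (constant T_h):", round(eta_fixed, 4),
      "mean eta (fluctuating T_h):", round(eta_fluct, 4))
```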
Groundwater flux estimation in streams: A thermal equilibrium approach
Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon K.
2018-01-01
Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash–Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.
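The agreement statistics reported above (R2, RMSE, NSE) between the thermal-equilibrium and seepage-run estimates can be sketched as follows; the flux values are invented placeholders.

```python
# Hedged sketch: agreement statistics between two groundwater-flux estimates.
# Values below are illustrative, in m/d.
import numpy as np

def agreement_stats(reference, estimate):
    reference, estimate = np.asarray(reference), np.asarray(estimate)
    resid = reference - estimate
    rmse = np.sqrt(np.mean(resid ** 2))
    nse = 1.0 - np.sum(resid ** 2) / np.sum((reference - reference.mean()) ** 2)
    r = np.corrcoef(reference, estimate)[0, 1]
    return r ** 2, rmse, nse

seepage = [0.10, -0.05, 0.30, 0.55, 0.02, -0.20]     # seepage-run fluxes
thermal = [0.12, -0.02, 0.28, 0.50, 0.05, -0.18]     # thermal-equilibrium fluxes
r2, rmse, nse = agreement_stats(seepage, thermal)
print(f"R^2={r2:.2f}  RMSE={rmse:.3f} m/d  NSE={nse:.2f}")
```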
Nenadic, Ivan Z.; Urban, Matthew W.; Mitchell, Scott A.; Greenleaf, James F.
2011-01-01
Diastolic dysfunction is the inability of the left ventricle to supply sufficient stroke volumes under normal physiological conditions and is often accompanied by stiffening of the left-ventricular myocardium. A noninvasive technique capable of quantifying viscoelasticity of the myocardium would be beneficial in clinical settings. Our group has been investigating the use of Shearwave Dispersion Ultrasound Vibrometry (SDUV), a noninvasive ultrasound based method for quantifying viscoelasticity of soft tissues. The primary motive of this study is the design and testing of viscoelastic materials suitable for validation of the Lamb wave Dispersion Ultrasound Vibrometry (LDUV), an SDUV-based technique for measuring viscoelasticity of tissues with plate-like geometry. We report the results of quantifying viscoelasticity of urethane rubber and gelatin samples using LDUV and an embedded sphere method. The LDUV method was used to excite antisymmetric Lamb waves and measure the dispersion in urethane rubber and gelatin plates. An antisymmetric Lamb wave model was fitted to the wave speed dispersion data to estimate elasticity and viscosity of the materials. A finite element model of a viscoelastic plate submerged in water was used to study the appropriateness of the Lamb wave dispersion equations. An embedded sphere method was used as an independent measurement of the viscoelasticity of the urethane rubber and gelatin. The FEM dispersion data were in excellent agreement with the theoretical predictions. Viscoelasticity of the urethane rubber and gelatin obtained using the LDUV and embedded sphere methods agreed within one standard deviation. LDUV studies on an excised porcine myocardium sample were performed to investigate the feasibility of the approach in preparation for open-chest in vivo studies. The results suggest that the LDUV technique can be used to quantify mechanical properties of soft tissues with a plate-like geometry.
Garcés-Vega, Francisco; Marks, Bradley P
2014-08-01
In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility of directly quantifying the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
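A minimal sketch of the comparison described above: fit a log-linear reduction curve either from positive plate counts only or after substituting the LOD for negative plates, and compare the slopes. The simulated counts and detection limit are assumptions for illustration, not the paper's data.

```python
# Hedged sketch: effect of LOD substitution on a log-linear reduction fit
# (log10 N = log10 N0 - k*t). Counts are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 11, dtype=float)                 # minutes
true_log_n = 7.0 - 0.8 * t                        # true log-linear reduction
obs_log_n = true_log_n + rng.normal(0, 0.15, t.size)
lod_log = 1.0                                     # log10 CFU detection limit
detected = obs_log_n >= lod_log

def fit_slope(times, logs):
    slope, intercept = np.polyfit(times, logs, 1)
    return slope

slope_pos = fit_slope(t[detected], obs_log_n[detected])       # positives only
logs_sub = np.where(detected, obs_log_n, lod_log)             # LOD substitution
slope_sub = fit_slope(t, logs_sub)
print("slope, positives only:", round(slope_pos, 3),
      "| slope, LOD substituted:", round(slope_sub, 3))
```

With the substituted data, the fitted slope comes out shallower in magnitude, which is the same direction of bias as the 22% effect reported above.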
Quantifying Ballistic Armor Performance: A Minimally Invasive Approach
NASA Astrophysics Data System (ADS)
Holmes, Gale; Kim, Jaehyun; Blair, William; McDonough, Walter; Snyder, Chad
2006-03-01
Theoretical and non-dimensional analyses suggest a critical link between the performance of ballistic resistant armor and the fundamental mechanical properties of the polymeric materials that comprise them. Therefore, a test methodology that quantifies these properties without compromising an armored vest that is exposed to the industry standard V-50 ballistic performance test is needed. Currently, there is considerable speculation about the impact that competing degradation mechanisms (e.g., mechanical, humidity, ultraviolet) may have on ballistic resistant armor. We report on the use of a new test methodology that quantifies the mechanical properties of ballistic fibers and how each proposed degradation mechanism may impact a vest's ballistic performance.
Mukherjee, Anondo; Stanton, Levi G; Graham, Ashley R; Roberts, Paul T
2017-08-05
The use of low-cost air quality sensors has proliferated among non-profits and citizen scientists, due to their portability, affordability, and ease of use. Researchers are examining the sensors for their potential use in a wide range of applications, including the examination of the spatial and temporal variability of particulate matter (PM). However, few studies have quantified the performance (e.g., accuracy, precision, and reliability) of the sensors under real-world conditions. This study examined the performance of two models of PM sensors, the AirBeam and the Alphasense Optical Particle Counter (OPC-N2), over a 12-week period in the Cuyama Valley of California, where PM concentrations are impacted by wind-blown dust events and regional transport. The sensor measurements were compared with observations from two well-characterized instruments: the GRIMM 11-R optical particle counter, and the Met One beta attenuation monitor (BAM). Both sensor models demonstrated a high degree of collocated precision (R² = 0.8-0.99), and a moderate degree of correlation against the reference instruments (R² = 0.6-0.76). Sensor measurements were influenced by the meteorological environment and the aerosol size distribution. Quantifying the performance of sensors in real-world conditions is a requisite step to ensuring that sensors will be used in ways commensurate with their data quality.
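The two performance checks described above, collocated precision between paired sensors and correlation against a reference monitor, amount to R2 calculations on paired time series. The sketch below uses synthetic concentrations; it is not the study's data pipeline.

```python
# Hedged sketch: collocated precision (R^2 between two identical sensors) and
# accuracy against a reference monitor. Concentrations are synthetic.
import numpy as np

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(4)
reference = rng.gamma(shape=2.0, scale=10.0, size=500)      # PM2.5, ug/m^3
sensor_a = 0.8 * reference + rng.normal(0, 3, 500)          # low-cost sensor 1
sensor_b = 0.8 * reference + rng.normal(0, 3, 500)          # collocated sensor 2

print("collocated precision R^2:", round(r_squared(sensor_a, sensor_b), 2))
print("sensor vs. reference R^2:", round(r_squared(sensor_a, reference), 2))
```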
Comparative study on the wake deflection behind yawed wind turbine models
NASA Astrophysics Data System (ADS)
Schottler, Jannik; Mühle, Franz; Bartl, Jan; Peinke, Joachim; Adaramola, Muyiwa S.; Sætran, Lars; Hölling, Michael
2017-05-01
In this wind tunnel campaign, detailed wake measurements behind two different model wind turbines in yawed conditions were performed. The wake deflections were quantified by estimating the rotor-averaged available power within the wake. By using two different model wind turbines, the influence of the rotor design and turbine geometry on the wake deflection caused by a yaw misalignment of 30° could be judged. It was found that the wake deflections three rotor diameters downstream were equal while at six rotor diameters downstream insignificant differences were observed. The results compare well with previous experimental and numerical studies.
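One common way to quantify wake deflection from a measured velocity field, consistent with the rotor-averaged available power approach mentioned above, is sketched below: scan a virtual rotor disk across the wake and take the lateral position of minimum available power as the wake center. The Gaussian wake and all parameters are synthetic assumptions, not the wind-tunnel data.

```python
# Hedged sketch: rotor-averaged available power P ~ 0.5*rho*A*mean(u^3) as a
# function of lateral position; the minimum marks the deflected wake center.
import numpy as np

rho, radius = 1.2, 0.45                          # air density, model rotor radius (m)
y = np.linspace(-2.0, 2.0, 201)                  # lateral coordinate (m)
z = np.linspace(-1.0, 1.0, 101)                  # vertical coordinate (m)
Y, Z = np.meshgrid(y, z)
u_inf, deficit, y_shift = 10.0, 4.0, 0.35        # free stream, max deficit, true deflection
U = u_inf - deficit * np.exp(-((Y - y_shift) ** 2 + Z ** 2) / (2 * 0.3 ** 2))

def available_power(y_center):
    mask = (Y - y_center) ** 2 + Z ** 2 <= radius ** 2
    return 0.5 * rho * np.pi * radius ** 2 * np.mean(U[mask] ** 3)

centers = np.linspace(-1.0, 1.0, 81)
powers = np.array([available_power(c) for c in centers])
print("estimated wake deflection (m):", round(centers[powers.argmin()], 3))
```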
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Muscular contribution to low-back loading and stiffness during standard and suspended push-ups.
Beach, Tyson A C; Howarth, Samuel J; Callaghan, Jack P
2008-06-01
Push-up exercises are normally performed to challenge muscles that span upper extremity joints. However, it is also recognized that push-ups provide an effective abdominal muscle challenge, especially when the hands are in contact with a labile support surface. The purpose of this study was to compare trunk muscle activation levels and resultant intervertebral joint (IVJ) loading when standard and suspended push-ups were performed, and to quantify and compare the contribution of trunk muscles to IVJ rotational stiffness in both exercises. Eleven recreationally trained male volunteers performed sets of standard and suspended push-ups. Upper body kinematic, kinetic, and EMG data were collected and input into a 3D biomechanical model of the lumbar torso to quantify lumbar IVJ loading and the contributions of trunk muscles to IVJ rotational stiffness. When performing suspended push-ups, muscles of the abdominal wall and the latissimus dorsi were activated to levels that were significantly greater than those elicited when performing standard push-ups (p<.05). As a direct result of these increased activation levels, model-predicted muscle forces increased and consequently led to significantly greater mean (p=.0008) and peak (p=.0012) lumbar IVJ compressive forces when performing suspended push-ups. Also directly resulting from the increased activation levels of the abdominal muscles and the latissimus dorsi during suspended push-ups was increased muscular contribution to lumbar IVJ rotational stiffness (p<.05). In comparison to the standard version of the exercise, suspended push-ups appear to provide a superior abdominal muscle challenge. However, for individuals unable to tolerate high lumbar IVJ compressive loads, potential benefits gained by incorporating suspended push-ups into their resistance training regimen may be outweighed by the risk of overloading low-back tissues.
Application of Probabilistic Analysis to Aircraft Impact Dynamics
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
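A response surface approach of the kind evaluated above can be sketched as: fit a quadratic surrogate to a small design of experiments from the expensive simulation, then run Monte Carlo on the surrogate. The stand-in "simulation", variables, and distributions below are hypothetical, not the paper's crash models.

```python
# Hedged sketch of a response-surface probabilistic method: quadratic surrogate
# fitted to a few expensive runs (here a cheap analytic stand-in), then Monte
# Carlo on the surrogate to characterize the response distribution.
import numpy as np

def expensive_simulation(x):                       # stand-in for an FE crash run
    thickness, velocity = x
    return 50.0 + 8.0 * velocity - 30.0 * thickness + 2.0 * velocity * thickness

def quad_features(X):
    t, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(t), t, v, t * v, t ** 2, v ** 2])

# Small design of experiments and surrogate fit
rng = np.random.default_rng(5)
doe = rng.uniform([1.0, 5.0], [3.0, 12.0], size=(20, 2))   # thickness (mm), velocity (m/s)
responses = np.array([expensive_simulation(x) for x in doe])
coef, *_ = np.linalg.lstsq(quad_features(doe), responses, rcond=None)

# Monte Carlo on the surrogate with uncertain inputs
samples = np.column_stack([rng.normal(2.0, 0.1, 50_000), rng.normal(9.0, 0.8, 50_000)])
pred = quad_features(samples) @ coef
print("mean response:", round(pred.mean(), 1),
      "| 99th percentile:", round(np.percentile(pred, 99), 1))
```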
Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.
2017-01-01
Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods.
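The pooling step described above can be sketched with a DerSimonian-Laird random-effects model over hospital-specific c-statistics, with I2 summarizing heterogeneity. The per-hospital values below are invented for illustration.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of per-hospital
# c-statistics, with Cochran's Q and I^2 as heterogeneity summaries.
import numpy as np

def random_effects_pool(estimates, variances):
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    w = 1.0 / variances
    fixed = np.sum(w * estimates) / np.sum(w)
    q = np.sum(w * (estimates - fixed) ** 2)                 # Cochran's Q
    df = len(estimates) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                            # between-hospital variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * estimates) / np.sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, i2

c_stats = [0.73, 0.77, 0.75, 0.71, 0.79, 0.76]               # per-hospital c-statistics
ses = [0.02, 0.03, 0.025, 0.03, 0.02, 0.025]                 # their standard errors
pooled, tau2, i2 = random_effects_pool(c_stats, np.array(ses) ** 2)
print(f"pooled c-statistic={pooled:.3f}  tau^2={tau2:.5f}  I^2={i2:.1f}%")
```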
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
Mapping Reef Fish and the Seascape: Using Acoustics and Spatial Modeling to Guide Coastal Management
Costa, Bryan; Taylor, J. Christopher; Kracker, Laura; Battista, Tim; Pittman, Simon
2014-01-01
Reef fish distributions are patchy in time and space with some coral reef habitats supporting higher densities (i.e., aggregations) of fish than others. Identifying and quantifying fish aggregations (particularly during spawning events) are often top priorities for coastal managers. However, the rapid mapping of these aggregations using conventional survey methods (e.g., non-technical SCUBA diving and remotely operated cameras) is limited by depth, visibility and time. Acoustic sensors (i.e., splitbeam and multibeam echosounders) are not constrained by these same limitations, and were used to concurrently map and quantify the location, density and size of reef fish along with seafloor structure in two separate locations in the U.S. Virgin Islands. Reef fish aggregations were documented along the shelf edge, an ecologically important ecotone in the region. Fish were grouped into three classes according to body size, and relationships with the benthic seascape were modeled in one area using Boosted Regression Trees. These models were validated in a second area to test their predictive performance in locations where fish have not been mapped. Models predicting the density of large fish (≥29 cm) performed well (i.e., AUC = 0.77). Water depth and standard deviation of depth were the most influential predictors at two spatial scales (100 and 300 m). Models of small (≤11 cm) and medium (12–28 cm) fish performed poorly (i.e., AUC = 0.49 to 0.68) due to the high prevalence (45–79%) of smaller fish in both locations, and the unequal prevalence of smaller fish in the training and validation areas. Integrating acoustic sensors with spatial modeling offers a new and reliable approach to rapidly identify fish aggregations and to predict the density of large fish in un-surveyed locations. This integrative approach will help coastal managers to prioritize sites, and focus their limited resources on areas that may be of higher conservation value.
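A minimal sketch of the train-in-one-area, validate-in-another workflow described above, using a boosted-tree classifier on depth and depth variability and AUC for evaluation. The synthetic presence data and predictor effects are assumptions, not the survey data.

```python
# Hedged sketch: boosted trees trained on seascape predictors from one area,
# AUC evaluated on an independent second area. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

def make_area(n):
    depth = rng.uniform(5, 60, n)                 # m
    depth_sd = rng.uniform(0, 5, n)               # local rugosity proxy
    logit = -3.0 + 0.05 * depth + 0.6 * depth_sd  # large fish favor deep, rugose sites
    present = rng.random(n) < 1 / (1 + np.exp(-logit))
    return np.column_stack([depth, depth_sd]), present.astype(int)

X_train, y_train = make_area(800)                 # training area
X_valid, y_valid = make_area(400)                 # independent validation area
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print("validation AUC:", round(auc, 2))
```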
Uncertainty Propagation in Hypersonic Vehicle Aerothermoelastic Analysis
NASA Astrophysics Data System (ADS)
Lamorte, Nicolas Etienne
Hypersonic vehicles face a challenging flight environment. The aerothermoelastic analysis of their components requires numerous simplifying approximations. Identifying and quantifying the effect of uncertainties pushes the limits of the existing deterministic models, and is pursued in this work. An uncertainty quantification framework is used to propagate the effects of identified uncertainties on the stability margins and performance of the different systems considered. First, the aeroelastic stability of a typical section representative of a control surface on a hypersonic vehicle is examined. Variability in the uncoupled natural frequencies of the system is modeled to mimic the effect of aerodynamic heating. Next, the stability of an aerodynamically heated panel representing a component of the skin of a generic hypersonic vehicle is considered. Uncertainty in the location of transition from laminar to turbulent flow and the heat flux prediction is quantified using CFD. In both cases significant reductions of the stability margins are observed. A loosely coupled airframe-integrated scramjet engine is considered next. The elongated body and cowl of the engine flow path are subject to harsh aerothermodynamic loading, which causes them to deform. Uncertainty associated with deformation prediction is propagated to the engine performance analysis. The cowl deformation is the main contributor to the sensitivity of the propulsion system performance. Finally, a framework for aerothermoelastic stability boundary calculation for hypersonic vehicles using CFD is developed. The use of CFD enables one to consider different turbulence conditions, laminar or turbulent, and different models of the air mixture, in particular a real gas model which accounts for dissociation of molecules at high temperature. The system is found to be sensitive to turbulence modeling as well as the location of the transition from laminar to turbulent flow. Real gas effects play a minor role in the flight conditions considered. These studies demonstrate the advantages of accounting for uncertainty at an early stage of the analysis. They emphasize the important relation between heat flux modeling, thermal stresses, and stability margins of hypersonic vehicles.
Quantifying predictive capability of electronic health records for the most harmful breast cancer
NASA Astrophysics Data System (ADS)
Wu, Yirong; Fan, Jun; Peissig, Peggy; Berg, Richard; Tafti, Ahmad Pahlavan; Yin, Jie; Yuan, Ming; Page, David; Cox, Jennifer; Burnside, Elizabeth S.
2018-03-01
Improved prediction of the "most harmful" breast cancers that cause the most substantive morbidity and mortality would enable physicians to target more intense screening and preventive measures at those women who have the highest risk; however, such prediction models for the "most harmful" breast cancers have rarely been developed. Electronic health records (EHRs) represent an underused data source that has great research and clinical potential. Our goal was to quantify the value of EHR variables in the "most harmful" breast cancer risk prediction. We identified 794 subjects who had breast cancer with primary non-benign tumors with their earliest diagnosis on or after 1/1/2004 from an existing personalized medicine data repository, including 395 "most harmful" breast cancer cases and 399 "least harmful" breast cancer cases. For these subjects, we collected EHR data comprised of 6 components: demographics, diagnoses, symptoms, procedures, medications, and laboratory results. We developed two regularized prediction models, Ridge Logistic Regression (Ridge-LR) and Lasso Logistic Regression (Lasso-LR), to predict the "most harmful" breast cancer one year in advance. The area under the ROC curve (AUC) was used to assess model performance. We observed that the AUCs of the Ridge-LR and Lasso-LR models were 0.818 and 0.839, respectively. For both the Ridge-LR and Lasso-LR models, the predictive performance of the whole EHR variables was significantly higher than that of each individual component (p<0.001). In conclusion, EHR variables can be used to predict the "most harmful" breast cancer, providing the possibility to personalize care for those women at the highest risk in clinical practice.
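A minimal sketch of the two models named above: L2-penalized (ridge) and L1-penalized (lasso) logistic regression evaluated by AUC. The feature matrix is random data standing in for EHR variables, and the regularization strengths are arbitrary assumptions.

```python
# Hedged sketch: ridge (L2) and lasso (L1) logistic regression with AUC
# evaluation on a held-out split. Features are synthetic stand-ins for EHR data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, p = 794, 200
X = rng.normal(size=(n, p))                         # EHR-like feature matrix
logit = X[:, :5] @ np.array([0.8, -0.6, 0.5, 0.4, -0.7])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=5000)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0, max_iter=5000)
for name, clf in [("Ridge-LR", ridge), ("Lasso-LR", lasso)]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(name, "AUC:", round(auc, 3))
```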
Kelder, Johannes C; Cowie, Martin R; McDonagh, Theresa A; Hardman, Suzanna M C; Grobbee, Diederick E; Cost, Bernard; Hoes, Arno W
2011-06-01
Diagnosing early stages of heart failure with mild symptoms is difficult. B-type natriuretic peptide (BNP) has promising biochemical test characteristics, but its diagnostic yield on top of readily available diagnostic knowledge has not been sufficiently quantified in early stages of heart failure. To quantify the added diagnostic value of BNP for the diagnosis of heart failure in a population relevant to GPs and validate the findings in an independent primary care patient population. Individual patient data meta-analysis followed by external validation. The additional diagnostic yield of BNP above standard clinical information was compared with ECG and chest x-ray results. Derivation was performed on two existing datasets from Hillingdon (n=127) and Rotterdam (n=149) while the UK Natriuretic Peptide Study (n=306) served as validation dataset. Included were patients with suspected heart failure referred to a rapid-access diagnostic outpatient clinic. Case definition was according to the ESC guideline. Logistic regression was used to assess discrimination (with the c-statistic) and calibration. Of the 276 patients in the derivation set, 30.8% had heart failure. The clinical model (encompassing age, gender, known coronary artery disease, diabetes, orthopnoea, elevated jugular venous pressure, crackles, pitting oedema and S3 gallop) had a c-statistic of 0.79. Adding, respectively, chest x-ray results, ECG results or BNP to the clinical model increased the c-statistic to 0.84, 0.85 and 0.92. Neither ECG nor chest x-ray added significantly to the 'clinical plus BNP' model. All models had adequate calibration. The 'clinical plus BNP' diagnostic model performed well in an independent cohort with comparable inclusion criteria (c-statistic=0.91 and adequate calibration). Using separate cut-off values for 'ruling in' (typically implying referral for echocardiography) and for 'ruling out' heart failure--creating a grey zone--resulted in insufficient proportions of patients with a correct diagnosis. BNP has considerable diagnostic value in addition to signs and symptoms in patients suspected of heart failure in primary care. However, using BNP alone with the currently recommended cut-off levels is not sufficient to make a reliable diagnosis of heart failure.
Performance Benefits for Wave Rotor-Topped Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Jones, Scott M.; Welch, Gerard E.
1996-01-01
The benefits of wave rotor-topping in turboshaft engines, subsonic high-bypass turbofan engines, auxiliary power units, and ground power units are evaluated. The thermodynamic cycle performance is modeled using a one-dimensional steady-state code; wave rotor performance is modeled using one-dimensional design/analysis codes. Design and off-design engine performance is calculated for baseline engines and wave rotor-topped engines, where the wave rotor acts as a high pressure spool. The wave rotor-enhanced engines are shown to have benefits in specific power and specific fuel flow over the baseline engines without increasing turbine inlet temperature. The off-design steady-state behavior of a wave rotor-topped engine is shown to be similar to a conventional engine. Mission studies are performed to quantify aircraft performance benefits for various wave rotor cycle and weight parameters. Gas turbine engine cycles most likely to benefit from wave rotor-topping are identified. Issues of practical integration and the corresponding technical challenges with various engine types are discussed.
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
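The three-grid verification idea can be sketched with Richardson extrapolation and a grid convergence index (GCI); this is a generic textbook calculation, not the cited methodology's exact procedure, and the airflow speeds below are invented.

```python
# Hedged sketch: observed order of accuracy, Richardson-extrapolated value, and
# Roache's grid convergence index (GCI) from coarse/medium/fine grid solutions.
import math

def gci_three_grids(f_fine, f_medium, f_coarse, refinement_ratio=2.0, fs=1.25):
    """Returns observed order p, extrapolated value, and fine-grid GCI (%)."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(refinement_ratio)
    f_exact = f_fine + (f_fine - f_medium) / (refinement_ratio ** p - 1.0)
    gci = fs * abs((f_medium - f_fine) / f_fine) / (refinement_ratio ** p - 1.0) * 100.0
    return p, f_exact, gci

# Peak airflow speed (m/s) predicted on coarse, medium, and fine grids (illustrative)
p, f_exact, gci = gci_three_grids(f_fine=3.62, f_medium=3.70, f_coarse=3.86)
print(f"observed order p={p:.2f}  extrapolated speed={f_exact:.3f} m/s  GCI={gci:.2f}%")
```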
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
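A minimal sketch of the parameter-uncertainty step (Monte Carlo propagation) is given below, assuming hypothetical lognormal distributions for the global-warming scores of two insulation options; scenario and model uncertainty would be handled by resampling discrete choices and model formulations inside the same loop.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of Monte Carlo draws

# Hypothetical lognormal distributions of the global-warming score per functional
# unit for two insulation options (values are purely illustrative)
gwp_option_a = rng.lognormal(mean=np.log(120.0), sigma=0.15, size=n)
gwp_option_b = rng.lognormal(mean=np.log(100.0), sigma=0.20, size=n)

diff = gwp_option_a - gwp_option_b
low, high = np.percentile(diff, [2.5, 97.5])
prob_b_lower = np.mean(diff > 0)

print(f"95% interval for the difference in impact: [{low:.1f}, {high:.1f}]")
print(f"P(option B has the lower impact) = {prob_b_lower:.2f}")
```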
Adaptation of Mesoscale Weather Models to Local Forecasting
NASA Technical Reports Server (NTRS)
Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.
2003-01-01
Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.
Quantifying Safety Performance of Driveways on State Highways
DOT National Transportation Integrated Search
2012-08-01
This report documents a research effort to quantify the safety performance of driveways in the State of Oregon. In particular, this research effort focuses on driveways located adjacent to principal arterial state highways with urban or rural des...
Role of Nitric Oxide in MPTP-Induced Dopaminergic Neuron Degeneration
2006-06-01
peroxynitrite exposure, that of dityrosine and nitrotyrosine by gas chromatography with mass spectrometry. Quantification will be performed in different... MPTP administration by quantifying the two main products of peroxynitrite oxidation of tyrosine, dityrosine and nitrotyrosine, using gas ... as a neuroprotective agent was demonstrated against experimental brain ischaemia (21) and disease progression in the R6/2 mouse model of Huntington's
Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology
NASA Technical Reports Server (NTRS)
Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus
2013-01-01
Changing trends in ecosystem productivity can be quantified using satellite observations of the Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on the analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performance than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. In particular, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be made more robust against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
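The contrast between trend slopes from raw versus annually aggregated series can be reproduced with a toy example; the sketch below uses a synthetic monthly NDVI series with a seasonal cycle and a weak greening trend (all values hypothetical) and fits ordinary least-squares slopes to both versions.

```python
import numpy as np

# Synthetic monthly NDVI with a seasonal cycle, noise, and a weak greening trend
rng = np.random.default_rng(42)
years = 30
months = np.arange(years * 12)
ndvi = (0.45 + 0.25 * np.sin(2 * np.pi * months / 12.0)
        + 0.0003 * months + 0.05 * rng.standard_normal(months.size))

def ols_slope(t, y):
    """Least-squares trend slope of y against time t."""
    return np.polyfit(t, y, deg=1)[0]

slope_monthly = ols_slope(months / 12.0, ndvi)              # trend per year, raw monthly series
annual_means = ndvi.reshape(years, 12).mean(axis=1)
slope_annual = ols_slope(np.arange(years), annual_means)    # trend per year, annual means

print(f"trend from the raw monthly series:    {slope_monthly:.5f} NDVI/yr")
print(f"trend from annually aggregated means: {slope_annual:.5f} NDVI/yr")
```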
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building statistical shape models for different inner organs efficiently. This scheme combines traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia
2017-07-27
Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today's popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice for the best model depends on the characteristics of the signal one is interested in. Model performances vary also from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
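The metric-dependence of model rankings can be illustrated with two made-up predictions of the same observed signal: one with the right amplitude but a phase error, the other well correlated but damped. This is only a schematic example; the study itself uses five metrics, real observatory recordings, and 14 models.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 400)
obs = np.sin(t) + 0.1 * rng.standard_normal(t.size)  # stand-in for an observed perturbation

model_a = np.sin(t - 0.4)   # correct amplitude, phase error
model_b = 0.5 * np.sin(t)   # well correlated, damped amplitude

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

def corr(pred, obs):
    return np.corrcoef(pred, obs)[0, 1]

for name, pred in [("A", model_a), ("B", model_b)]:
    print(f"model {name}: RMSE = {rmse(pred, obs):.3f}, correlation = {corr(pred, obs):.3f}")
# Model A tends to win on RMSE while model B wins on correlation,
# so the "best" model depends on the chosen metric.
```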
Untangling Performance from Success
NASA Astrophysics Data System (ADS)
Yucesoy, Burcu; Barabasi, Albert-Laszlo
Fame, popularity and celebrity status, frequently used tokens of success, are often loosely related to, or even divorced from professional performance. This dichotomy is partly rooted in the difficulty to distinguish performance, an individual measure that captures the actions of a performer, from success, a collective measure that captures a community's reactions to these actions. Yet, finding the relationship between the two measures is essential for all areas that aim to objectively reward excellence, from science to business. Here we quantify the relationship between performance and success by focusing on tennis, an individual sport where the two quantities can be independently measured. We show that a predictive model, relying only on a tennis player's performance in tournaments, can accurately predict an athlete's popularity, both during a player's active years and after retirement. Hence the model establishes a direct link between performance and momentary popularity. The agreement between the performance-driven and observed popularity suggests that in most areas of human achievement exceptional visibility may be rooted in detectable performance measures. This research was supported by Air Force Office of Scientific Research (AFOSR) under agreement FA9550-15-1-0077.
SPS pilot signal design and power transponder analysis, volume 2, phase 3
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.
1980-01-01
The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D
2018-06-01
The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.
Characterizing Time Series Data Diversity for Wind Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong
Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve the forecasting accuracy. However, it is challenging to accurately compare the true forecasting performances from different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then the principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.
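A minimal sketch of the workflow (characteristic indices, PCA, convex-hull volume) is shown below. The five indices are illustrative stand-ins rather than the paper's six, and the synthetic series only serve to make the example runnable.

```python
import numpy as np
from scipy.spatial import ConvexHull

def characteristic_indices(series):
    """A few illustrative time-series characteristics (not the paper's exact set)."""
    x = np.asarray(series, dtype=float)
    return np.array([
        x.mean(),
        x.std(),
        np.abs(np.diff(x)).mean(),                    # roughness
        np.corrcoef(x[:-1], x[1:])[0, 1],             # lag-1 autocorrelation
        ((x - x.mean()) ** 3).mean() / x.std() ** 3,  # skewness
    ])

def diversity_volume(list_of_series, n_components=3):
    """Project the index vectors onto principal components and return the
    volume of the convex hull spanned by the dataset in that space."""
    features = np.array([characteristic_indices(s) for s in list_of_series])
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    scores = centered @ vt[:n_components].T
    return ConvexHull(scores).volume

# Synthetic wind-speed-like series standing in for a forecasting test dataset
rng = np.random.default_rng(1)
dataset = [rng.gamma(2.0, 3.0, size=500) for _ in range(40)]
print(f"diversity (3D convex-hull volume): {diversity_volume(dataset):.2f}")
```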
Effects of vehicle front-end stiffness on rear seat dummies in NCAP and FMVSS208 tests.
Sahraei, Elham; Digges, Kennerly; Marzougui, Dhafer
2013-01-01
This study is devoted to quantifying changes in mass and stiffness of vehicles tested by the National Highway Traffic Safety Administration (NHTSA) over the past 3 decades (model years 1982 to 2010) and understanding the effect of those changes on protection of rear seat occupants. A total of 1179 tests were used, and the changes in mass and stiffness versus model year were quantified. Additionally, data from 439 dummies tested in rear seats of NHTSA's full frontal crashes were analyzed. Dummies were divided into 3 groups based on their reference injury criteria. Multiple regressions were performed with speed, stiffness, and mass as predicting variables for head, neck, and chest injury criteria. A significant increase in mass and stiffness over model year of vehicles was observed, for passenger cars as well as large platform vehicles. The results showed a significant correlation (P-value < .05) between the increase in stiffness of the vehicles and increase in head and chest injury criteria for all dummy sizes. These results indicate that stiffness is a significant contributor to previously reported decreases in protection of rear seat occupants over model years of vehicles.
Callén, M S; Iturmendi, A; López, J M; Mastral, A M
2014-02-01
In order to perform a study of the carcinogenic potential of polycyclic aromatic hydrocarbons (PAH), benzo(a)pyrene equivalent (BaP-eq) concentration was calculated and modelled by a receptor model based on positive matrix factorization (PMF). Nineteen PAH associated with airborne PM10 of Zaragoza, Spain, were quantified during the sampling period 2001-2009 and used as potential variables by the PMF model. Afterwards, multiple linear regression analysis was used to quantify the potential sources of BaP-eq. Five sources were obtained as the optimal solution, and vehicular emission was identified as the main carcinogenic source (35 %), followed by heavy-duty vehicles (28 %), light-oil combustion (18 %), natural gas (10 %) and coal combustion (9 %). Two of the most prevailing directions contributing to this carcinogenic character were the NE and N directions, associated with a highway, industrial parks and a paper factory. The lifetime lung cancer risk exceeded the unit risk of 8.7 × 10⁻⁵ per ng/m³ BaP in both winter and autumn seasons, and the largest contributing source was the vehicular emission factor, making it an important target for control strategies.
A comparison of hydrologic models for ecological flows and water availability
Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G
2015-01-01
Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a ±30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.
Detailed performance and environmental monitoring of aquifer heating and cooling systems
NASA Astrophysics Data System (ADS)
Acuna, José; Ahlkrona, Malva; Zandin, Hanna; Singh, Ashutosh
2016-04-01
The project intends to quantify the performance and environmental impact of large-scale aquifer thermal energy storage, as well as to provide recommendations for operating future systems and estimating their environmental footprint. Field measurements, tests of innovative equipment as well as advanced modelling work and analysis will be performed. The following aspects are introduced and covered in the presentation: thermal, chemical and microbiological influence of aquifer thermal energy storage systems (measurement and evaluation of real conditions and the influence of one system in operation); follow-up of energy extraction from the aquifer as compared to projected values, with recommendations for improvements; evaluation of the most used thermal modeling tool for design and calculation of groundwater temperatures, with calculations using MODFLOW/MT3DMS; and test and evaluation of optical fiber cables as a way to measure temperatures in aquifer thermal energy storages.
Evaluation of the durability of composite tidal turbine blades.
Davies, Peter; Germain, Grégory; Gaurier, Benoît; Boisseau, Amélie; Perreux, Dominique
2013-02-28
The long-term reliability of tidal turbines is critical if these structures are to be cost effective. Optimized design requires a combination of material durability models and structural analyses. Composites are a natural choice for turbine blades, but there are few data available to predict material behaviour under coupled environmental and cycling loading. The present study addresses this problem, by introducing a multi-level framework for turbine blade qualification. At the material scale, static and cyclic tests have been performed, both in air and in sea water. The influence of ageing in sea water on fatigue performance is then quantified, and much lower fatigue lives are measured after ageing. At a higher level, flume tank tests have been performed on three-blade tidal turbines. Strain gauging of blades has provided data to compare with numerical models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-30
Xanthos is a Python package designed to quantify and analyze global water availability, both historical and future, at 0.5° × 0.5° spatial resolution and a monthly time step under a changing climate. Its performance was also tested through real applications. It is open source, extendable and convenient for researchers who work on long-term climate data for studies of global water supply and the Global Change Assessment Model (GCAM). This package integrates inherent global gridded data maps, I/O modules, Water-Balance Model modules and diagnostics modules through a user-defined configuration.
Predictive Modeling for NASA Entry, Descent and Landing Missions
NASA Technical Reports Server (NTRS)
Wright, Michael
2016-01-01
Entry, Descent and Landing (EDL) Modeling and Simulation (M&S) is an enabling capability for complex NASA entry missions such as MSL and Orion. M&S is used in every mission phase to define mission concepts, select appropriate architectures, design EDL systems, quantify margin and risk, ensure correct system operation, and analyze data returned from the entry. In an environment where it is impossible to fully test EDL concepts on the ground prior to use, accurate M&S capability is required to extrapolate ground test results to expected flight performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Consuelo Juanita
Recent amendments to the Safe Drinking Water Act emphasize efforts toward safeguarding our nation's water supplies against attack and contamination. Specifically, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 established requirements for each community water system serving more than 3300 people to conduct an assessment of the vulnerability of its system to a terrorist attack or other intentional acts. Integral to evaluating system vulnerability is the threat assessment, which is the process by which the credibility of a threat is quantified. Unfortunately, full probabilistic assessment is generally not feasible, as there is insufficient experience and/or data to quantify the associated probabilities. For this reason, an alternative approach is proposed based on Markov Latent Effects (MLE) modeling, which provides a framework for quantifying imprecise subjective metrics through possibilistic or fuzzy mathematics. Here, an MLE model for water systems is developed and demonstrated to determine threat assessments for different scenarios identified by the assailant, asset, and means. Scenario assailants include terrorists, insiders, and vandals. Assets include a water treatment plant, water storage tank, node, pipeline, well, and a pump station. Means used in attacks include contamination (onsite chemicals, biological and chemical), explosives and vandalism. Results demonstrated that the highest threats are vandalism events and the least likely events are those performed by a terrorist.
ERIC Educational Resources Information Center
Aida, Misako; Watanabe, Satoshi P.
2016-01-01
Universities throughout the world are trending toward more performance based methods to capture their strengths, weaknesses and productivity. Hiroshima University has developed an integrated objective measure for quantifying multifaceted faculty activities, namely the "Achievement-Motivated Key Performance Indicator" (A-KPI), in order to…
Weijerman, Mariska; Fulton, Elizabeth A.; Brainard, Russell E.
2016-01-01
Ecosystem modelling is increasingly used to explore ecosystem-level effects of changing environmental conditions and management actions. For coral reefs there has been increasing interest in recent decades in the use of ecosystem models for evaluating the effects of fishing and the efficacy of marine protected areas. However, ecosystem models that integrate physical forcings, biogeochemical and ecological dynamics, and human induced perturbations are still underdeveloped. We applied an ecosystem model (Atlantis) to the coral reef ecosystem of Guam using a suite of management scenarios prioritized in consultation with local resource managers to review the effects of each scenario on performance measures related to the ecosystem, the reef-fish fishery (e.g., fish landings) and coral habitat. Comparing tradeoffs across the selected scenarios showed that each scenario performed best for at least one of the selected performance indicators. The integrated ‘full regulation’ scenario outperformed other scenarios with four out of the six performance metrics at the cost of reef-fish landings. This model application quantifies the socio-ecological costs and benefits of alternative management scenarios. When the effects of climate change were taken into account, several scenarios performed equally well, but none prevented a collapse in coral biomass over the next few decades assuming a business-as-usual greenhouse gas emissions scenario. PMID:27023183
Predictive ability of a comprehensive incremental test in mountain bike marathon
Schneeweiss, Patrick; Martus, Peter; Niess, Andreas M; Krauss, Inga
2018-01-01
Objectives Traditional performance tests in mountain bike marathon (XCM) primarily quantify aerobic metabolism and may not describe the relevant capacities in XCM. We aimed to validate a comprehensive test protocol quantifying its intermittent demands. Methods Forty-nine athletes (38.8±9.1 years; 38 male; 11 female) performed a laboratory performance test, including an incremental test, to determine individual anaerobic threshold (IAT), peak power output (PPO) and three maximal efforts (10 s all-out sprint, 1 min maximal effort and 5 min maximal effort). Within 2 weeks, the athletes participated in one of three XCM races (n=15, n=9 and n=25). Correlations between test variables and race times were calculated separately. In addition, multiple regression models of the predictive value of laboratory outcomes were calculated for race 3 and across all races (z-transformed data). Results All variables were correlated with race times 1, 2 and 3: 10 s all-out sprint (r=−0.72; r=−0.59; r=−0.61), 1 min maximal effort (r=−0.85; r=−0.84; r=−0.82), 5 min maximal effort (r=−0.57; r=−0.85; r=−0.76), PPO (r=−0.77; r=−0.73; r=−0.76) and IAT (r=−0.71; r=−0.67; r=−0.68). The best-fitting multiple regression models for race 3 (r2=0.868) and across all races (r2=0.757) comprised 1 min maximal effort, IAT and body weight. Conclusion Aerobic and intermittent variables correlated least strongly with race times. Their use in a multiple regression model confirmed additional explanatory power to predict XCM performance. These findings underline the usefulness of the comprehensive incremental test to predict performance in that sport more precisely. PMID:29387445
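The multiple-regression step can be sketched as an ordinary least-squares fit of race time on the three retained predictors. The data below are synthetic z-scores generated for illustration only; they do not reproduce the study's coefficients.

```python
import numpy as np

# Synthetic z-transformed predictors: 1-min maximal effort, IAT, body weight
rng = np.random.default_rng(7)
n = 49
X = rng.standard_normal((n, 3))
beta_true = np.array([-0.6, -0.3, 0.25])            # made-up effect sizes
race_time = X @ beta_true + 0.3 * rng.standard_normal(n)

# Ordinary least squares with an intercept, and the resulting coefficient of determination
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, race_time, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((race_time - pred) ** 2) / np.sum((race_time - race_time.mean()) ** 2)
print("coefficients (intercept, effort, IAT, weight):", np.round(coef, 2), " r^2 =", round(r2, 3))
```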
Validation of the SWMF Magnetosphere: Fields and Particles
NASA Astrophysics Data System (ADS)
Welling, D. T.; Ridley, A. J.
2009-05-01
The Space Weather Modeling Framework has been developed at the University of Michigan to allow many independent space environment numerical models to be executed simultaneously and coupled together to create a more accurate, all-encompassing system. This work explores the capabilities of the framework when using the BATS-R-US MHD code, Rice Convection Model (RCM), the Ridley Ionosphere Model (RIM), and the Polar Wind Outflow Model (PWOM). Ten space weather events, ranging from quiet to extremely stormy periods, are modeled by the framework. All simulations are executed in a manner that mimics an operational environment where fewer resources are available and predictions are required in a timely manner. The results are compared against in-situ measurements of magnetic fields from GOES, Polar, Geotail, and Cluster satellites as well as MPA particle measurements from the LANL geosynchronous spacecraft. Various metrics are calculated to quantify performance. Results when using only two to all four components are compared to evaluate the increase in performance as new physics are included in the system.
NASA Astrophysics Data System (ADS)
Wernet, A. K.; Beighley, R. E.
2006-12-01
Soil erosion is a powerful process that continuously alters the Earth's landscape. Human activities, such as construction and agricultural practices, and natural events, such as forest fires and landslides, disturb the landscape and intensify erosion processes, leading to sudden increases in runoff sediment concentrations and degraded stream water quality. Understanding soil erosion and sediment transport processes is of great importance to researchers and practicing engineers, who routinely use models to predict soil erosion and sediment movement for varied land use and climate change scenarios. However, existing erosion models are limited in their applicability to construction sites, which have highly variable soil conditions (density, moisture, surface roughness, and best management practices) that change often in both space and time. The goal of this research is to improve the understanding, predictive capabilities and integration of treatment methodologies for controlling soil erosion and sediment export from construction sites. This research combines modeling with field monitoring and laboratory experiments to quantify: (a) the spatial and temporal distribution of soil conditions on construction sites, (b) soil erosion due to event rainfall, and (c) potential offsite discharge of sediment with and without treatment practices. Field sites in southern California were selected to monitor the effects of common construction activities (e.g., cut/fill, grading, foundations, roads) on soil conditions and sediment discharge. Laboratory experiments were performed in the Soil Erosion Research Laboratory (SERL), part of the Civil and Environmental Engineering department at San Diego State University, to quantify the impact of individual factors leading to sediment export. SERL experiments utilize a 3-m by 10-m tilting soil bed with soil depths up to 1 m, slopes ranging from 0 to 50 percent, and rainfall rates up to 150 mm/hr (6 in/hr). Preliminary modeling, field and laboratory results are presented.
Nguyen, Tri-Hung; Lieu, Linh Thuy; Nguyen, Gary; Bischof, Robert J.; Meeusen, Els N.; Li, Jian; Nation, Roger L.
2016-01-01
Colistin, administered as its inactive prodrug colistin methanesulfonate (CMS), is often used in multidrug-resistant Gram-negative pulmonary infections. The CMS and colistin pharmacokinetics in plasma and epithelial lining fluid (ELF) following intravenous and pulmonary dosing have not been evaluated in a large-animal model with pulmonary architecture similar to that of humans. Six merino sheep (34 to 43 kg body weight) received an intravenous or pulmonary dose of 4 to 8 mg/kg CMS (sodium) or 2 to 3 mg/kg colistin (sulfate) in a 4-way crossover study. Pulmonary dosing was achieved via jet nebulization through an endotracheal tube cuff. CMS and colistin were quantified in plasma and bronchoalveolar lavage fluid (BALF) samples by high-performance liquid chromatography (HPLC). ELF concentrations were calculated via the urea method. CMS and colistin were comodeled in S-ADAPT. Following intravenous CMS or colistin administration, no concentrations were quantifiable in BALF samples. Elimination clearance was 1.97 liters/h (4% interindividual variability) for CMS (other than conversion to colistin) and 1.08 liters/h (25%) for colistin. On average, 18% of a CMS dose was converted to colistin. Following pulmonary delivery, colistin was not quantifiable in plasma and CMS was detected in only one sheep. Average ELF concentrations (standard deviations [SD]) of formed colistin were 400 (243), 384 (187), and 184 (190) mg/liter at 1, 4, and 24 h after pulmonary CMS administration. The population pharmacokinetic model described CMS and colistin in plasma and ELF well following intravenous and pulmonary administration. Pulmonary dosing provided high ELF and low plasma colistin concentrations, representing a substantial targeting advantage over intravenous administration. Predictions from the pharmacokinetic model indicate that sheep are an advantageous model for translational research. PMID:27821445
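The urea-dilution step used to convert BALF concentrations to ELF concentrations can be written as a one-line correction; the numbers below are hypothetical and chosen only to give an ELF value of the same order as those reported above.

```python
def elf_concentration(drug_balf, urea_plasma, urea_balf):
    """Standard urea-dilution correction: the BALF drug concentration is scaled by the
    plasma-to-BALF urea ratio (urea is assumed to equilibrate freely with ELF)."""
    return drug_balf * (urea_plasma / urea_balf)

# Hypothetical inputs: colistin 12 mg/L in BALF, urea 5.0 mmol/L in plasma vs 0.15 mmol/L in BALF
print(f"ELF colistin ~ {elf_concentration(12.0, 5.0, 0.15):.0f} mg/L")
```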
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analyses for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
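The basic building block behind the adaptive samplers named above (DRAM, DREAM) is a random-walk Metropolis update. The sketch below calibrates a toy straight-line stand-in for the steady-state heat model; the data, priors, and step size are all hypothetical, and no delayed-rejection or adaptation step is included.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=20_000, step=0.05, seed=0):
    """Plain random-walk Metropolis sampler (DRAM/DREAM refine the proposal)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

# Toy calibration target: y = a*x + b observed with Gaussian noise
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 25)
y_obs = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(x.size)

def log_post(theta):
    a, b = theta
    resid = y_obs - (a * x + b)
    # Gaussian likelihood plus a weak Gaussian prior on both parameters
    return -0.5 * np.sum(resid**2) / 0.05**2 - 0.5 * (a**2 + b**2) / 10.0**2

chain = metropolis(log_post, theta0=[1.0, 1.0])
print("posterior means after burn-in:", chain[5000:].mean(axis=0))
```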
Modelling the multidimensional niche by linking functional traits to competitive performance
Maynard, Daniel S.; Leonard, Kenneth E.; Drake, John M.; Hall, David W.; Crowther, Thomas W.; Bradford, Mark A.
2015-01-01
Linking competitive outcomes to environmental conditions is necessary for understanding species' distributions and responses to environmental change. Despite this importance, generalizable approaches for predicting competitive outcomes across abiotic gradients are lacking, driven largely by the highly complex and context-dependent nature of biotic interactions. Here, we present and empirically test a novel niche model that uses functional traits to model the niche space of organisms and predict competitive outcomes of co-occurring populations across multiple resource gradients. The model makes no assumptions about the underlying mode of competition and instead applies to those settings where relative competitive ability across environments correlates with a quantifiable performance metric. To test the model, a series of controlled microcosm experiments were conducted using genetically related strains of a widespread microbe. The model identified trait microevolution and performance differences among strains, with the predicted competitive ability of each organism mapped across a two-dimensional carbon and nitrogen resource space. Areas of coexistence and competitive dominance between strains were identified, and the predicted competitive outcomes were validated in approximately 95% of the pairings. By linking trait variation to competitive ability, our work demonstrates a generalizable approach for predicting and modelling competitive outcomes across changing environmental contexts. PMID:26136444
Validated Predictions of Metabolic Energy Consumption for Submaximal Effort Movement
Tsianos, George A.; MacFadden, Lisa N.
2016-01-01
Physical performance emerges from complex interactions among many physiological systems that are largely driven by the metabolic energy demanded. Quantifying metabolic demand is an essential step for revealing the many mechanisms of physical performance decrement, but accurate predictive models do not exist. The goal of this study was to investigate if a recently developed model of muscle energetics and force could be extended to reproduce the kinematics, kinetics, and metabolic demand of submaximal effort movement. Upright dynamic knee extension against various levels of ergometer load was simulated. Task energetics were estimated by combining the model of muscle contraction with validated models of lower limb musculotendon paths and segment dynamics. A genetic algorithm was used to compute the muscle excitations that reproduced the movement with the lowest energetic cost, which was determined to be an appropriate criterion for this task. Model predictions of oxygen uptake rate (VO2) were well within experimental variability for the range over which the model parameters were confidently known. The model's accurate estimates of metabolic demand make it useful for assessing the likelihood and severity of physical performance decrement for a given task as well as investigating underlying physiologic mechanisms. PMID:27248429
Semiparametric mixed-effects analysis of PK/PD models using differential equations.
Wang, Yi; Eskridge, Kent M; Zhang, Shunpu
2008-08-01
Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) of the form dx/dt = A(t)x + B(t), where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
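A minimal sketch of the model form dx/dt = A(t)x + B(t) is given below, with A taken as a constant and B(t) represented by a cubic B-spline; the knots and coefficients are hypothetical, and the penalized estimation of the spline (and the mixed-effects layer) is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import BSpline

A = -0.3  # hypothetical constant elimination-type term
# Cubic B-spline representing the nonparametric input function B(t) on t in [0, 12]
knots = np.array([0, 0, 0, 0, 4, 8, 12, 12, 12, 12], dtype=float)
coefs = np.array([0.0, 2.0, 1.0, 0.2, 0.0, 0.0])  # hypothetical spline coefficients
B = BSpline(knots, coefs, k=3)

def rhs(t, x):
    """Right-hand side of dx/dt = A*x + B(t)."""
    return A * x + B(t)

sol = solve_ivp(rhs, t_span=(0.0, 12.0), y0=[0.0], dense_output=True)
print(np.round(sol.sol(np.array([1.0, 4.0, 8.0, 12.0])).ravel(), 3))
```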
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
Modeling and experimental study on near-field acoustic levitation by flexural mode.
Liu, Pinkuan; Li, Jin; Ding, Han; Cao, Wenwu
2009-12-01
Near-field acoustic levitation (NFAL) has been used in noncontact handling and transportation of small objects to avoid contamination. We have performed a theoretical analysis based on nonuniform vibrating surface to quantify the levitation force produced by the air film and also conducted experimental tests to verify our model. Modal analysis was performed using ANSYS on the flexural plate radiator to obtain its natural frequency of desired mode, which is used to design the measurement system. Then, the levitation force was calculated as a function of levitation distance based on squeeze gas film theory using measured amplitude and phase distributions on the vibrator surface. Compared with previous fluid-structural analyses using a uniform piston motion, our model based on the nonuniform radiating surface of the vibrator is more realistic and fits better with experimentally measured levitation force.
Application of Probability Methods to Assess Crash Modeling Uncertainty
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
The Crossover Time as an Evaluation of Ocean Models Against Persistence
NASA Astrophysics Data System (ADS)
Phillipson, L. M.; Toumi, R.
2018-01-01
A new ocean evaluation metric, the crossover time, is defined as the time it takes for a numerical model to equal the performance of persistence. As an example, the average crossover time calculated using the Lagrangian separation distance (the distance between simulated trajectories and observed drifters) for the global MERCATOR ocean model analysis is found to be about 6 days. Conversely, the model forecast has an average crossover time longer than 6 days, suggesting limited skill in Lagrangian predictability by the current generation of global ocean models. The crossover time of the velocity error is less than 3 days, which is similar to the average decorrelation time of the observed drifters. The crossover time is a useful measure to quantify future ocean model improvements.
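Computing the crossover time from paired error series is straightforward; the sketch below uses hypothetical daily Lagrangian separation distances for a model and for persistence, and returns the first forecast day on which the model matches or beats persistence.

```python
import numpy as np

def crossover_time(error_model, error_persistence, times):
    """First time at which the model's error is no worse than persistence
    (np.nan if the model never catches up within the forecast window)."""
    error_model = np.asarray(error_model, dtype=float)
    error_persistence = np.asarray(error_persistence, dtype=float)
    better = np.nonzero(error_model <= error_persistence)[0]
    return times[better[0]] if better.size else np.nan

# Hypothetical mean Lagrangian separation distances (km) by forecast day
times = np.arange(1, 11)
sep_model = np.array([22, 40, 55, 68, 78, 86, 92, 97, 101, 104], dtype=float)
sep_persistence = np.array([18, 34, 50, 65, 80, 95, 110, 124, 138, 151], dtype=float)
print("crossover time (days):", crossover_time(sep_model, sep_persistence, times))
```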
O'Connor, B.L.; Hondzo, Miki; Harvey, J.W.
2009-01-01
Traditionally, dissolved oxygen (DO) fluxes have been calculated using the thin-film theory with DO microstructure data in systems characterized by fine sediments and low velocities. However, recent experimental evidence of fluctuating DO concentrations near the sediment-water interface suggests that turbulence and coherent motions control the mass transfer, and the surface renewal theory gives a more mechanistic model for quantifying fluxes. Both models involve quantifying the mass transfer coefficient (k) and the relevant concentration difference (ΔC). This study compared several empirical models for quantifying k based on both thin-film and surface renewal theories, and presents a new method for quantifying ΔC (dynamic approach) that is consistent with the observed DO concentration fluctuations near the interface. Data were used from a series of flume experiments that includes both physical and kinetic uptake limitations of the flux. Results indicated that methods for quantifying k and ΔC using the surface renewal theory better estimated the DO flux across a range of fluid-flow conditions. © 2009 ASCE.
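Both theories ultimately express the flux as k multiplied by a concentration difference; the sketch below contrasts a thin-film estimate of k (molecular diffusivity over a boundary-layer thickness) with a Danckwerts-type surface-renewal estimate (square root of diffusivity times a renewal rate). The layer thickness, renewal rate, and concentration difference are hypothetical.

```python
import numpy as np

D = 2.1e-9       # molecular diffusivity of O2 in water, m^2/s (typical literature value)
delta = 500e-6   # hypothetical diffusive boundary-layer thickness, m
s = 0.05         # hypothetical surface-renewal rate, 1/s
dC = 3.0         # hypothetical concentration difference across the interface, g/m^3

k_film = D / delta           # thin-film (stagnant boundary layer) mass transfer coefficient
k_renewal = np.sqrt(D * s)   # surface-renewal mass transfer coefficient

print(f"thin film:       k = {k_film:.2e} m/s, flux = {k_film * dC:.2e} g m^-2 s^-1")
print(f"surface renewal: k = {k_renewal:.2e} m/s, flux = {k_renewal * dC:.2e} g m^-2 s^-1")
```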
Márquez-Ruiz, G; Holgado, F; García-Martínez, M C; Dobarganes, M C
2007-09-21
A new method based on high-performance size-exclusion chromatography (HPSEC) is proposed to quantitate primary and secondary oxidation compounds in model fatty acid methyl esters (FAMEs). The method consists of simply injecting an aliquot of the sample into the HPSEC system, without preliminary isolation procedures or addition of an internal standard. Four groups of compounds can be quantified, namely, unoxidised FAME, oxidised FAME monomers including hydroperoxides, FAME dimers and FAME polymers. Results showed high repeatability and sensitivity, and substantial advantages over determination of residual substrate by gas-liquid chromatography. The applicability of the method is shown through selected data obtained from numerous oxidation experiments on pure FAMEs, mainly methyl linoleate, at ambient and moderate temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Riley E.; Mangan, Niall M.; Li, Jian V.
2016-11-21
In novel photovoltaic absorbers, it is often difficult to assess the root causes of low open-circuit voltages, which may be due to bulk recombination or sub-optimal contacts. In the present work, we discuss the role of temperature- and illumination-dependent device electrical measurements in quantifying and distinguishing these performance losses - in particular, for determining bounds on interface recombination velocities, band alignment, and minority carrier lifetime. We assess the accuracy of this approach by direct comparison to photoelectron spectroscopy. Then, we demonstrate how more computationally intensive model parameter fitting approaches can draw more insights from this broad measurement space. We apply this measurement and modeling approach to high-performance III-V and thin-film chalcogenide devices.
The performativity of numbers in illness management: The case of Swedish Rheumatology.
Essén, Anna; Oborn, Eivor
2017-07-01
While there is a proliferation of numerical data in healthcare, little attention has been paid to the role of numbers in constituting the healthcare reality they are intended to depict. This study explores the performativity of numbers in the microlevel management of rheumatoid disease. We draw on a study of patients' and physicians' use of the numbers in the Swedish Rheumatology Quality Registry, conducted between 2009 and 2014. We show how the numbers performed by constructing the disease across time, and by framing action. The numerical performances influenced patients and physicians in different ways, challenging the former to quantify embodied disease and the latter to subsume the disease into one of many possible trajectory standards. Based on our findings, we provide a model of the dynamic performativity of numbers in the on-going management of illness. The model conceptualises how numbers generate new possibilities; by creating tension and alignment they may open up new avenues for communication between patients and physicians. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Rodent Model of Dynamic Facial Reanimation Using Functional Electrical Stimulation
Attiah, Mark A.; de Vries, Julius; Richardson, Andrew G.; Lucas, Timothy H.
2017-01-01
Facial paralysis can be a devastating condition, causing disfiguring facial droop, slurred speech, eye dryness, scarring and blindness. This study investigated the utility of closed-loop functional electric stimulation (FES) for reanimating paralyzed facial muscles in a quantitative rodent model. The right buccal and marginal mandibular branches of the rat facial nerve were transected for selective, unilateral paralysis of whisker muscles. Microwire electrodes were implanted bilaterally into the facial musculature for FES and electromyographic (EMG) recording. With the rats awake and head-fixed, whisker trajectories were tracked bilaterally with optical micrometers. First, the relationship between EMG and volitional whisker movement was quantified on the intact side of the face. Second, the effect of FES on whisker trajectories was quantified on the paralyzed side. Third, closed-loop experiments were performed in which the EMG signal on the intact side triggered FES on the paralyzed side to restore symmetric whisking. The results demonstrate a novel in vivo platform for developing control strategies for neuromuscular facial prostheses. PMID:28424583
Local figure-ground cues are valid for natural images.
Fowlkes, Charless C; Martin, David R; Malik, Jitendra
2007-06-08
Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.
Controlled laboratory experiments and modeling of vegetative filter strips with shallow water tables
NASA Astrophysics Data System (ADS)
Fox, Garey A.; Muñoz-Carpena, Rafael; Purvis, Rebecca A.
2018-01-01
Natural or planted vegetation at the edge of fields or adjacent to streams, also known as vegetative filter strips (VFS), are commonly used as an environmental mitigation practice for runoff pollution and agrochemical spray drift. The VFS position in lowlands near water bodies often implies the presence of a seasonal shallow water table (WT). In spite of its potential importance, there is limited experimental work that systematically studies the effect of shallow WTs on VFS efficacy. Previous research recently coupled a new physically based algorithm describing infiltration into soils bounded by a water table into the VFS numerical overland flow and transport model, VFSMOD, to simulate VFS dynamics under shallow WT conditions. In this study, we tested the performance of the model against laboratory mesoscale data under controlled conditions. A laboratory soil box (1.0 m wide, 2.0 m long, and 0.7 m deep) was used to simulate a VFS and quantify the influence of shallow WTs on runoff. Experiments included planted Bermuda grass on repacked silt loam and sandy loam soils. A series of experiments were performed including a free drainage case (no WT) and a static shallow water table (0.3-0.4 m below ground surface). For each soil type, this research first calibrated VFSMOD to the observed outflow hydrograph for the free drainage experiments to parameterize the soil hydraulic and vegetation parameters, and then evaluated the model based on outflow hydrographs for the shallow WT experiments. This research used several statistical metrics and a new approach based on hypothesis testing of the Nash-Sutcliffe model efficiency coefficient (NSE) to evaluate model performance. The new VFSMOD routines successfully simulated the outflow hydrographs under both free drainage and shallow WT conditions. Statistical metrics considered the model performance valid with greater than 99.5% probability across all scenarios. This research also simulated the shallow water table experiments with both free drainage and various water table depths to quantify the effect of assuming the former boundary condition. For these two soil types, shallow WTs within 1.0-1.2 m below the soil surface influenced infiltration. Existing models will suggest a more protective vegetative filter strip than what actually exists if shallow water table conditions are not considered.
Fitts' Law in the Control of Isometric Grip Force With Naturalistic Targets.
Thumser, Zachary C; Slifkin, Andrew B; Beckler, Dylan T; Marasco, Paul D
2018-01-01
Fitts' law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts' law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts' law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts' law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts' law (average r² = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts' law for explicit targets with vision (r² = 0.96) and implicit targets (r² = 0.89), but not as well-described for explicit targets without vision (r² = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts' law to quantify the relative speed-accuracy relationship of any given grasper.
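For reference, the Shannon formulation of Fitts' law commonly fitted in such analyses is shown below; the abstract does not state which variant the authors used, so this is an assumed standard form rather than the paper's exact equation.

```latex
\begin{align}
  ID &= \log_2\!\left(\frac{A}{W} + 1\right)
     && \text{index of difficulty (bits), amplitude } A,\ \text{target width (tolerance) } W \\
  MT &= a + b\, ID
     && \text{movement (or force-rise) time, with empirical intercept } a \text{ and slope } b
\end{align}
```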
Nallikuzhy, Jiss J; Dandapat, S
2017-06-01
In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads are evaluated using various diagnostic measures. Repeatability and diagnosability are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Balasubramanian, Chitralakshmi K.; Neptune, Richard R.; Kautz, Steven A.
2010-01-01
Background Foot placement during walking is closely linked to the body position, yet it is typically quantified relative to the other foot. The purpose of this study was to quantify foot placement patterns relative to body post-stroke and investigate its relationship to hemiparetic walking performance. Methods Thirty-nine participants with hemiparesis walked on a split-belt treadmill at their self-selected speeds and twenty healthy participants walked at matched slow speeds. Anterior-posterior and medial-lateral foot placements (foot center-of-mass) relative to body (pelvis center-of-mass) quantified stepping in body reference frame. Walking performance was quantified using step length asymmetry ratio, percent of paretic propulsion and paretic weight support. Findings Participants with hemiparesis placed their paretic foot further anterior than posterior during walking compared to controls walking at matched slow speeds (p < .05). Participants also placed their paretic foot further lateral relative to pelvis than non-paretic (p < .05). Anterior-posterior asymmetry correlated with step length asymmetry and percent paretic propulsion but some persons revealed differing asymmetry patterns in the translating reference frame. Lateral foot placement asymmetry correlated with paretic weight support (r = .596; p < .001), whereas step widths showed no relation to paretic weight support. Interpretation Post-stroke gait is asymmetric when quantifying foot placement in a body reference frame and this asymmetry related to the hemiparetic walking performance and explained motor control mechanisms beyond those explained by step lengths and step widths alone. We suggest that biomechanical analyses quantifying stepping performance in impaired populations should investigate foot placement in a body reference frame. PMID:20193972
Balasubramanian, Chitralakshmi K; Neptune, Richard R; Kautz, Steven A
2010-06-01
Foot placement during walking is closely linked to the body position, yet it is typically quantified relative to the other foot. The purpose of this study was to quantify foot placement patterns relative to body post-stroke and investigate its relationship to hemiparetic walking performance. Thirty-nine participants with hemiparesis walked on a split-belt treadmill at their self-selected speeds and 20 healthy participants walked at matched slow speeds. Anterior-posterior and medial-lateral foot placements (foot center-of-mass) relative to body (pelvis center-of-mass) quantified stepping in body reference frame. Walking performance was quantified using step length asymmetry ratio, percent of paretic propulsion and paretic weight support. Participants with hemiparesis placed their paretic foot further anterior than posterior during walking compared to controls walking at matched slow speeds (P<.05). Participants also placed their paretic foot further lateral relative to pelvis than non-paretic (P<.05). Anterior-posterior asymmetry correlated with step length asymmetry and percent paretic propulsion but some persons revealed differing asymmetry patterns in the translating reference frame. Lateral foot placement asymmetry correlated with paretic weight support (r=.596; P<.001), whereas step widths showed no relation to paretic weight support. Post-stroke gait is asymmetric when quantifying foot placement in a body reference frame and this asymmetry related to the hemiparetic walking performance and explained motor control mechanisms beyond those explained by step lengths and step widths alone. We suggest that biomechanical analyses quantifying stepping performance in impaired populations should investigate foot placement in a body reference frame. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1991-01-01
The accuracy of a nonintrusive high angle-of-attack flush airdata sensing (HI-FADS) system was verified for quasi-steady flight conditions up to 55 deg angle of attack during the F-18 High Alpha Research Vehicle (HARV) Program. The system is a matrix of nine pressure ports arranged in annular rings on the aircraft nose. The complete airdata set is estimated using nonlinear regression. Satisfactory frequency response was verified to the system Nyquist frequency (12.5 Hz). The effects of acoustical distortions within the individual pressure sensors of the nonintrusive pressure matrix on overall system performance are addressed. To quantify these effects, a frequency-response model describing the dynamics of acoustical distortion is developed and simple design criteria are derived. The model adjusts measured HI-FADS pressure data for the acoustical distortion and quantifies the effects of internal sensor geometries on system performance. Analysis results indicate that sensor frequency response characteristics vary greatly with altitude, thus it is difficult to select a satisfactory sensor geometry for all altitudes. The solution used presample filtering to eliminate resonance effects, and short pneumatic tubing sections to reduce lag effects. Without presample signal conditioning the system designer must use the pneumatic transmission line to attenuate the resonances and accept the resulting altitude variability.
Chen, Y. M.; Lin, P.; He, Y.; He, J. Q.; Zhang, J.; Li, X. L.
2016-01-01
A novel strategy based on near infrared hyperspectral imaging techniques and chemometrics was explored for rapidly quantifying the collision strength index of ethylene-vinyl acetate copolymer (EVAC) coverings in the field. The reflectance spectral data of EVAC coverings were obtained using a near infrared hyperspectral meter. Collision analysis equipment was employed to measure the collision intensity of the EVAC materials. Preprocessing algorithms were first applied before calibration. The random frog and successive projection (SP) algorithms were applied to extract the fingerprint wavebands. A correlation model between the significant spectral curves, which reflected the cross-linking attributes of the inner organic molecules, and the degree of collision strength was set up by taking advantage of the support vector machine regression (SVMR) approach. The SP-SVMR model attained a residual predictive deviation of 3.074, squared correlation coefficients of 93.48% and 93.05%, and root mean square errors of 1.963 and 2.091 for the calibration and validation sets, respectively, which exhibited the best forecast performance. The results indicated that the approach of integrating near infrared hyperspectral imaging techniques with chemometrics can be used to rapidly determine the degree of collision strength of EVAC. PMID:26875544
Witzke, Kathrin E; Rosowski, Kristin; Müller, Christian; Ahrens, Maike; Eisenacher, Martin; Megger, Dominik A; Knobloch, Jürgen; Koch, Andrea; Bracht, Thilo; Sitek, Barbara
2017-01-06
Quantitative secretome analyses are a high-performance tool for the discovery of physiological and pathophysiological changes in cellular processes. However, serum supplements in cell culture media limit secretome analyses, but serum depletion often leads to cell starvation and consequently biased results. To overcome these limiting factors, we investigated a model of T cell activation (Jurkat cells) and performed an approach for the selective enrichment of secreted proteins from conditioned medium utilizing metabolic marking of newly synthesized glycoproteins. Marked glycoproteins were labeled via bioorthogonal click chemistry and isolated by affinity purification. We assessed two labeling compounds conjugated with either biotin or desthiobiotin and the respective secretome fractions. 356 proteins were quantified using the biotin probe and 463 using desthiobiotin. 59 proteins were found differentially abundant (adjusted p-value ≤0.05, absolute fold change ≥1.5) between inactive and activated T cells using the biotin method and 86 using the desthiobiotin approach, with 31 mutual proteins cross-verified by independent experiments. Moreover, we analyzed the cellular proteome of the same model to demonstrate the benefit of secretome analyses and provide comprehensive data sets of both. 336 proteins (61.3%) were quantified exclusively in the secretome. Data are available via ProteomeXchange with identifier PXD004280.
NASA Astrophysics Data System (ADS)
Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.
2017-12-01
In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is, in part, due to a lack of faith in the fidelity of the optimization exercise with regards to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance from uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or aggregating regional storages. We create `rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternate objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance in an effort to generalize the validity of the optimized performance expectations.
Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model
NASA Astrophysics Data System (ADS)
Wilson, G. W.; Özkan-Haller, H. T.; Holman, R. A.
2010-12-01
A methodology is described for assimilating observations in a steady state two-dimensional horizontal (2-DH) model of nearshore hydrodynamics (waves and currents), using an ensemble-based statistical estimator. In this application, we treat bathymetry as a model parameter, which is subject to a specified prior uncertainty. The statistical estimator uses state augmentation to produce posterior (inverse, updated) estimates of bathymetry, wave height, and currents, as well as their posterior uncertainties. A case study is presented, using data from a 2-D array of in situ sensors on a natural beach (Duck, NC). The prior bathymetry is obtained by interpolation from recent bathymetric surveys; however, the resulting prior circulation is not in agreement with measurements. After assimilating data (significant wave height and alongshore current), the accuracy of modeled fields is improved, and this is quantified by comparing with observations (both assimilated and unassimilated). Hence, for the present data, 2-DH bathymetric uncertainty is an important source of error in the model and can be quantified and corrected using data assimilation. Here the bathymetric uncertainty is ascribed to inadequate temporal sampling; bathymetric surveys were conducted on a daily basis, but bathymetric change occurred on hourly timescales during storms, such that hydrodynamic model skill was significantly degraded. Further tests are performed to analyze the model sensitivities used in the assimilation and to determine the influence of different observation types and sampling schemes.
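A generic sketch of one ensemble analysis step with state augmentation (perturbed-observation form) follows; the estimator, variable names, and error model here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def ensemble_analysis(X, Y, y_obs, obs_err_var, seed=0):
    """One ensemble-based update with state augmentation.

    X : (n_state, n_ens) augmented state ensemble (e.g., bathymetry + wave height + currents)
    Y : (n_obs, n_ens)   model-predicted observations for each ensemble member
    y_obs : (n_obs,)     measurements (e.g., significant wave height, alongshore current)
    obs_err_var : (n_obs,) observation error variances
    """
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)             # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)             # predicted-observation anomalies
    C_xy = Xp @ Yp.T / (n_ens - 1)                     # state/observation cross-covariance
    C_yy = Yp @ Yp.T / (n_ens - 1) + np.diag(obs_err_var)
    K = C_xy @ np.linalg.inv(C_yy)                     # Kalman gain estimated from the ensemble
    rng = np.random.default_rng(seed)
    y_pert = y_obs[:, None] + rng.normal(0.0, np.sqrt(obs_err_var)[:, None], Y.shape)
    return X + K @ (y_pert - Y)                        # posterior ensemble (hydrodynamics + bathymetry)
```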
Infrastructure Vulnerability Assessment Model (I-VAM).
Ezell, Barry Charles
2007-06-01
Quantifying vulnerability to critical infrastructure has not been adequately addressed in the literature. Thus, the purpose of this article is to present a model that quantifies vulnerability. Vulnerability is defined as a measure of system susceptibility to threat scenarios. This article asserts that vulnerability is a condition of the system and it can be quantified using the Infrastructure Vulnerability Assessment Model (I-VAM). The model is presented and then applied to a medium-sized clean water system. The model requires subject matter experts (SMEs) to establish value functions and weights, and to assess protection measures of the system. Simulation is used to account for uncertainty in measurement, aggregate expert assessment, and to yield a vulnerability (Omega) density function. Results demonstrate that I-VAM is useful to decisionmakers who prefer quantification to qualitative treatment of vulnerability. I-VAM can be used to quantify vulnerability to other infrastructures, supervisory control and data acquisition systems (SCADA), and distributed control systems (DCS).
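A minimal sketch of the weighted value-function aggregation with Monte Carlo uncertainty that yields a vulnerability (Omega) density; the weights, scoring scale, and triangular distributions below are hypothetical placeholders, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SME inputs: three protection measures, each scored 0-100 with
# triangular uncertainty (low, mode, high), and importance weights summing to 1.
weights = np.array([0.5, 0.3, 0.2])
score_bounds = [(40, 60, 80), (20, 35, 55), (60, 75, 90)]

def simulate_omega(n=10_000):
    scores = np.column_stack([rng.triangular(lo, mode, hi, n)
                              for lo, mode, hi in score_bounds])
    protection = scores @ weights       # weighted value-function aggregation
    return 100.0 - protection           # vulnerability as the shortfall from full protection

omega = simulate_omega()
print(omega.mean(), np.percentile(omega, [5, 95]))  # summary of the Omega density
```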
Hansen, Scott K.; Berkowitz, Brian; Vesselinov, Velimir V.; ...
2016-12-01
Path reversibility and radial symmetry are often assumed in push-pull tracer test analysis. In reality, heterogeneous flow fields mean that both assumptions are idealizations. In this paper, to understand their impact, we perform a parametric study which quantifies the scattering effects of ambient flow, local-scale dispersion, and velocity field heterogeneity on push-pull breakthrough curves and compares them to the effects of mobile-immobile mass transfer (MIMT) processes including sorption and diffusion into secondary porosity. We identify specific circumstances in which MIMT overwhelmingly determines the breakthrough curve, which may then be considered uninformative about drift and local-scale dispersion. Assuming path reversibility, we develop a continuous-time-random-walk-based interpretation framework which is flow-field-agnostic and well suited to quantifying MIMT. Adopting this perspective, we show that the radial flow assumption is often harmless: to the extent that solute paths are reversible, the breakthrough curve is uninformative about velocity field heterogeneity. Our interpretation method determines a mapping function (i.e., subordinator) from travel time in the absence of MIMT to travel time in its presence. A mathematical theory allowing this function to be directly “plugged into” an existing Laplace-domain transport model to incorporate MIMT is presented and demonstrated. Algorithms implementing the calibration are presented and applied to interpretation of data from a push-pull test performed in a heterogeneous environment. A successful four-parameter fit is obtained, of comparable fidelity to one obtained using a million-node 3-D numerical model. In conclusion, we demonstrate analytically and numerically how push-pull tests quantifying MIMT are sensitive to remobilization, but not immobilization, kinetics.
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi
2015-08-31
The Alpha 2 release is the second release from the LASSO Pilot Phase that builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES inputs include model configuration information and forcing data. LES outputs include profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consist of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
Improving Climate Projections Using "Intelligent" Ensembles
NASA Technical Reports Server (NTRS)
Baker, Noel C.; Taylor, Patrick C.
2015-01-01
Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics-or the systematic determination of model biases-succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data-such as the Coupled Model Intercomparison Project (CMIP)-provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
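A minimal sketch of the weighting idea, assuming a simple normalized-skill scheme; the study's actual performance metrics and weighting formula are not specified here, so the scores and projections below are placeholders.

```python
import numpy as np

def intelligent_ensemble_mean(projections, skill_scores):
    """Weight each model's projection by its (non-negative) skill score instead of
    using equal weights; projections has shape (n_models, ...)."""
    w = np.asarray(skill_scores, float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(projections, float), axes=(0, 0))

# three hypothetical model projections of a regional temperature change (K)
proj = [2.1, 3.0, 2.4]
print(intelligent_ensemble_mean(proj, skill_scores=[0.9, 0.3, 0.6]))  # compare with np.mean(proj)
```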
A simple mathematical model of society collapse applied to Easter Island
NASA Astrophysics Data System (ADS)
Bologna, M.; Flores, J. C.
2008-02-01
In this paper we consider a mathematical model for the evolution and collapse of the Easter Island society. Based on historical reports, the available primary resources consisted almost exclusively of trees, so we describe the inhabitants and the resources as an isolated dynamical system. A mathematical and numerical analysis of the Easter Island community collapse is performed. In particular, we analyze the critical values of the fundamental parameters and a demographic curve is presented. The technological parameter, quantifying the exploitation of the resources, is calculated and applied to the case of another extinguished civilization (Copán Maya), confirming the consistency of the adopted model.
Ultrasound finite element simulation sensitivity to anisotropic titanium microstructures
NASA Astrophysics Data System (ADS)
Freed, Shaun; Blackshire, James L.; Na, Jeong K.
2016-02-01
Analytical wave models are inadequate to describe complex metallic microstructure interactions especially for near field anisotropic property effects and through geometric features smaller than the wavelength. In contrast, finite element ultrasound simulations inherently capture microstructure influences due to their reliance on material definitions rather than wave descriptions. To better understand and quantify heterogeneous crystal orientation effects to ultrasonic wave propagation, a finite element modeling case study has been performed with anisotropic titanium grain structures. A parameterized model has been developed utilizing anisotropic spheres within a bulk material. The resulting wave parameters are analyzed as functions of both wavelength and sphere to bulk crystal mismatch angle.
Sakieh, Yousef; Salmanmahiny, Abdolrassoul
2016-03-01
Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
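As an illustration of the configuration metrics involved, a short sketch computing number of patches and edge density for one class of a categorical map; the cell size, 4-connectivity, and edge-density definition are assumptions, and this is not the GEOMOD or FRAGSTATS code.

```python
import numpy as np
from scipy import ndimage

def patch_metrics(class_map, cell_size=30.0):
    """class_map: 2-D binary array (1 = class of interest, e.g. 'developed').
    Returns (number of patches, edge density = class/non-class edge length per unit area)."""
    _, n_patches = ndimage.label(class_map)                   # 4-connected patches
    horiz = np.sum(class_map[:, 1:] != class_map[:, :-1])     # edges between horizontal neighbors
    vert = np.sum(class_map[1:, :] != class_map[:-1, :])      # edges between vertical neighbors
    edge_length = (horiz + vert) * cell_size
    area = class_map.size * cell_size ** 2
    return n_patches, edge_length / area

demo = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 1, 1, 1]])
print(patch_metrics(demo))
```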
Macroscopic models for shape memory alloy characterization and design
NASA Astrophysics Data System (ADS)
Massad, Jordan Elias
Shape memory alloys (SMAs) are being considered for a number of high performance applications, such as deformable aircraft wings, earthquake-resistant structures, and microdevices, due to their capability to achieve very high work densities, produce large deformations, and generate high stresses. In general, the material behavior of SMAs is nonlinear and hysteretic. To achieve the full potential of SMA actuators, it is necessary to develop models that characterize the nonlinearities and hysteresis inherent in the constituent materials. Additionally, the design of SMA actuators necessitates the development of control algorithms based on those models. We develop two models that quantify the nonlinearities and hysteresis inherent to SMAs, each in formulations suitable for subsequent control design. In the first model, we employ domain theory to quantify SMA behavior under isothermal conditions. The model involves a single first-order, nonlinear ordinary differential equation and requires as few as seven parameters that are identifiable from measurements. We develop the second model using the Müller-Achenbach-Seelecke framework where a transition state theory of nonequilibrium processes is used to derive rate laws for the evolution of material phase fractions. The fully thermomechanical model predicts rate-dependent, polycrystalline SMA behavior, and it accommodates heat transfer issues pertinent to thin-film SMAs. Furthermore, the model admits a low-order formulation and has a small number of parameters which can be readily identified using attributes of measured data. We illustrate aspects of both models through comparison with experimental bulk and thin-film SMA data.
NASA Astrophysics Data System (ADS)
Kong, Hyun-Joon
This dissertation investigates a dispersion/stabilization technique to improve the fluidity of heteroflocculating concentrated suspensions, and applies the technique to develop self-compacting Engineered Cementitious Composites (ECC), defined as a cementitious material which compacts without any external consolidation in the fresh state, while exhibiting strain-hardening performance in the hardened state. To meet the criteria of micromechanical design to achieve the ductile performance and processing design to attain high fluidity, this work has focused on preparing cement suspensions with low viscosity and high cohesiveness at a particle loading determined by the micromechanical design. Therefore, the goal of this work is to quantify how to adjust the strong flocculation between cement particles due to electrostatic and van der Waals attractive forces. For this purpose, a strong polyelectrolyte, melamine formaldehyde sulfonate (MFS), to disperse the oppositely-charged particles present in the cement dispersion, is combined with a non-ionic polymer, hydroxypropylmethylcellulose (HPMC). The combination of these two polymers to prevent re-flocculation leads to "complementary electrosteric dispersion/stabilization". With these polymers, suspensions with the desired fluidity for processing are obtained. To quantify the roles of the two polymers in imparting stability, a heteroflocculating model suspension was developed, which facilitates the control of the interactions typical of cement suspensions, but without irreversible hydration. This model suspension is composed of alumina and silica particles, which bear surface potentials of opposite sign at intermediate pHs and have a Hamaker constant of comparable magnitude to that of cement particles. As a result, the model system displays not only van der Waals attraction but also electrostatic attraction between dissimilar particles. Rheological studies of the model system stabilized by MFS and HPMC show behavior identical to that of the cement suspensions, allowing the model system to be used to interpret the role of the stabilizers in altering the system microstructure and fluidity. Finally, the self-compacting performance of fresh ECC mixes made with the electrosterically stabilized fresh matrix mix and the ductile strain-hardening performance of the hardened ECC were demonstrated.
On the use of musculoskeletal models to interpret motor control strategies from performance data
NASA Astrophysics Data System (ADS)
Cheng, Ernest J.; Loeb, Gerald E.
2008-06-01
The intrinsic viscoelastic properties of muscle are central to many theories of motor control. Much of the debate over these theories hinges on varying interpretations of these muscle properties. In the present study, we describe methods whereby a comprehensive musculoskeletal model can be used to make inferences about motor control strategies that would account for behavioral data. Muscle activity and kinematic data from a monkey were recorded while the animal performed a single degree-of-freedom pointing task in the presence of pseudo-random torque perturbations. The monkey's movements were simulated by a musculoskeletal model with accurate representations of musculotendon morphometry and contractile properties. The model was used to quantify the impedance of the limb while moving rapidly, the differential action of synergistic muscles, the relative contribution of reflexes to task performance and the completeness of recorded EMG signals. Current methods to address these issues in the absence of musculoskeletal models were compared with the methods used in the present study. We conclude that musculoskeletal models and kinetic analysis can improve the interpretation of kinematic and electrophysiological data, in some cases by illuminating shortcomings of the experimental methods or underlying assumptions that may otherwise escape notice.
Quality of protection evaluation of security mechanisms.
Ksiezopolski, Bogdan; Zurek, Tomasz; Mokkas, Michail
2014-01-01
Recent research indicates that, during the design of teleinformatic systems, a tradeoff between system performance and system protection should be made. The traditional approach assumes that the best way is to apply the strongest possible security measures. Unfortunately, the overestimation of security measures can lead to an unreasonable increase in system load. This is especially important in multimedia systems where performance is critical. In many cases, determining the required level of protection and adjusting some security measures to these requirements increases system efficiency. Such an approach is achieved by means of quality of protection models, in which the security measures are evaluated according to their influence on the system security. In this paper, we propose a model for QoP evaluation of security mechanisms. Owing to this model, one can quantify the influence of particular security mechanisms on ensuring security attributes. The methodology of our model preparation is described, and a case study analysis based on it is presented. We support our method with a tool in which the models can be defined and the QoP evaluation can be performed. Finally, we have modelled the TLS cryptographic protocol and presented the QoP evaluation of security mechanisms for selected versions of this protocol.
Kim, Chang-Hyun; Song, Kwang-Soup; Trayanova, Natalia A; Lim, Ki Moo
2018-05-01
Intra-aortic balloon pump (IABP) is normally contraindicated in significant aortic regurgitation (AR). It causes and aggravates pre-existing AR while performing well in the event of mitral regurgitation (MR). Indirect parameters, such as the mean systolic pressure, the product of heart rate and peak systolic pressure, and pressure-volume are used to quantify the effect of IABP on ventricular workload. However, to date, no studies have directly quantified the reduction in workload with IABP. The goal of this study is to examine the effect of IABP therapy on ventricular mechanics under valvular insufficiency by using a computational model of the heart. For this purpose, the 3D electromechanical model of the failing ventricles used in previous studies was coupled with a lumped parameter model of valvular regurgitation and the IABP-treated vascular system. The ability of IABP therapy to reduce myocardial tension generation and contractile ATP consumption was disturbed by valvular regurgitation, particularly in the AR condition. The IABP worsened the ventricular expansion induced by the regurgitated blood volume during diastole under the AR condition. The IABP reduced the LV stroke work in the AR, MR, and no-regurgitation conditions. Therefore, the IABP helped the ventricle to pump blood and reduced the ventricular workload. In conclusion, the IABP partially performed its role in the MR condition. However, it was disturbed by the AR and worsened the cardiovascular responses that followed the AR. Therefore, this study computationally demonstrated the reason for the clinical contraindication of IABP in AR patients.
Roofline model toolkit: A practical tool for architectural and program analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian
We present preliminary results of the Roofline Toolkit for multicore, many-core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented micro benchmarks implemented with Message Passing Interface (MPI), and OpenMP used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
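The basic Roofline bound referenced here can be sketched in a few lines; the peak compute and bandwidth numbers below are placeholders, not measurements from the toolkit.

```python
def roofline_bound(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable performance (GFLOP/s) = min(peak compute,
    memory bandwidth * arithmetic intensity in FLOPs/byte)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# a kernel at 0.5 FLOPs/byte on a hypothetical 1000 GFLOP/s, 200 GB/s node is bandwidth-bound
print(roofline_bound(1000.0, 200.0, 0.5))   # -> 100.0 GFLOP/s
```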
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
NASA Astrophysics Data System (ADS)
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior with the γ-Ret shear stress transport model, including modeling of flow transition and surface roughness effects. The roughness effects are modeled to simulate sand grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour, as part of an automatic design evaluation process, is presented. A Design-of-Experiments (DoE) was performed and a surrogate model was built by the Kriging method. The new nacelle design process demonstrates that significant improvements of both the mean and variance of the efficiency are achieved, and the proposed method can be applied successfully to laminar flow nacelle design.
Hansen, Trine Lund; Bhander, Gurbakhash S; Christensen, Thomas Højlund; Bruun, Sander; Jensen, Lars Stoumann
2006-04-01
A model capable of quantifying the potential environmental impacts of agricultural application of composted or anaerobically digested source-separated organic municipal solid waste (MSW) is presented. In addition to the direct impacts, the model accounts for savings by avoiding the production and use of commercial fertilizers. The model is part of a larger model, Environmental Assessment of Solid Waste Systems and Technology (EASEWASTE), developed as a decision-support model, focusing on assessment of alternative waste management options. The environmental impacts of the land application of processed organic waste are quantified by emission coefficients referring to the composition of the processed waste and related to specific crop rotation as well as soil type. The model contains several default parameters based on literature data, field experiments and modelling by the agro-ecosystem model, Daisy. All data can be modified by the user allowing application of the model to other situations. A case study including four scenarios was performed to illustrate the use of the model. One tonne of nitrogen in composted and anaerobically digested MSW was applied as fertilizer to loamy and sandy soil at a plant farm in western Denmark. Application of the processed organic waste mainly affected the environmental impact categories global warming (0.4-0.7 PE), acidification (-0.06 (saving)-1.6 PE), nutrient enrichment (-1.0 (saving)-3.1 PE), and toxicity. The main contributors to these categories were nitrous oxide formation (global warming), ammonia volatilization (acidification and nutrient enrichment), nitrate losses (nutrient enrichment and groundwater contamination), and heavy metal input to soil (toxicity potentials). The local agricultural conditions as well as the composition of the processed MSW showed large influence on the environmental impacts. A range of benefits, mainly related to improved soil quality from long-term application of the processed organic waste, could not be generally quantified with respect to the chosen life cycle assessment impact categories and were therefore not included in the model. These effects should be considered in conjunction with the results of the life cycle assessment.
Quantifying parametric uncertainty in the Rothermel model
S. Goodrick
2008-01-01
The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model (implemented in software such as fire spread models in the United States). This model consists of a non-linear system of equations that relates environmental variables (input parameter groups...
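A schematic of the parametric-uncertainty propagation described here, using a placeholder surrogate in place of the full Rothermel equations; all coefficients and input distributions below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_spread_rate(wind, moisture, slope):
    """Stand-in for the Rothermel spread-rate calculation (a nonlinear function of
    fuel, moisture, wind and slope); not the actual model equations."""
    return 1.5 * (1.0 + 0.05 * wind ** 1.3) * np.exp(-4.0 * moisture) * (1.0 + 0.2 * slope)

n = 5000
wind = np.maximum(rng.normal(5.0, 1.0, n), 0.0)   # midflame wind speed, m/s (assumed distribution)
moisture = np.abs(rng.normal(0.08, 0.02, n))      # dead fuel moisture fraction
slope = rng.normal(0.10, 0.03, n)                 # slope steepness (rise/run)

ros = surrogate_spread_rate(wind, moisture, slope)
print(np.percentile(ros, [5, 50, 95]))            # uncertainty band on the spread rate
```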
Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...
Quantifying Drosophila food intake: comparative analysis of current methodology
Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.
2014-01-01
Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Ronald L. Boring
Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating HEPs. This paper describes research that has the objective to model and quantify team dynamics and teamwork within NPP control room crews for risk informed applications, thereby improving the technical basis of HRA, which improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.
Extensions to the visual predictive check to facilitate model performance evaluation.
Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert
2008-04-01
The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is taken into consideration by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover the influence of and the information residing in missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage, thus regardless the density of the data, above and below the predicted median at each time point, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
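A minimal sketch of both extensions at a single time point (percentages above/below the predicted median including the share of missing data, and bootstrap percentiles of the observed median); the function names and details are illustrative rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def qvpc_point(observations, predicted_median):
    """Percent of observations above, below, and unavailable (NaN) relative to the
    model-predicted median at one time point, each as a share of all planned samples."""
    obs = np.asarray(observations, float)
    missing = np.isnan(obs)
    n = obs.size
    above = 100.0 * np.sum(obs[~missing] > predicted_median) / n
    below = 100.0 * np.sum(obs[~missing] < predicted_median) / n
    return above, below, 100.0 * missing.sum() / n

def bvpc_point(observations, n_boot=1000):
    """Bootstrap 5th/50th/95th percentiles of the observed median at one time point,
    against which the model-predicted median can be weighed."""
    obs = np.asarray(observations, float)
    obs = obs[~np.isnan(obs)]
    meds = [np.median(rng.choice(obs, obs.size, replace=True)) for _ in range(n_boot)]
    return np.percentile(meds, [5, 50, 95])

data = np.array([1.2, 0.8, np.nan, 1.5, 0.9, 1.1])
print(qvpc_point(data, predicted_median=1.0), bvpc_point(data))
```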
Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C
1997-01-01
The authors present the experimental results of computerized quantification of the tissue structures involved in the reparative process of colonic anastomoses performed by manual suture and by biofragmentable ring. The variables quantified in this study were: oedema fluid, myofiber tissue, blood vessels and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was used to quantify the pathognomonic alterations of the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared with those obtained through traditional diagnosis by two pathologists, serving as a counterproof measure. The criteria for these diagnoses were defined in levels (absent, light, moderate and intense), which were compared to the analysis performed by the computer. There was a significant statistical difference between the two techniques: the biofragmentable ring technique exhibited less oedema fluid, more organized myofiber tissue and a higher number of elongated cellular nuclei in relation to the manual suture technique. The analysis of histometric variables through computational image processing was considered efficient and powerful for quantifying the main tissue inflammatory and reparative changes.
NASA Astrophysics Data System (ADS)
Seneca, S. M.; Rabideau, A. J.; Bandilla, K.
2010-12-01
Experimental and modeling studies are in progress to evaluate the long-term performance of a permeable treatment wall composed of zeolite-rich rock for the removal of strontium-90 from groundwater. Multiple column tests were performed at the University at Buffalo and on-site at West Valley Environmental Services (WVES); columns were supplied with synthetic groundwater formulated to anticipate field conditions and, on-site at WVES, with radioactive groundwater. The primary focus of this work is on quantifying the competitive ion exchange among five cations (Na+, K+, Ca2+, Mg2+, and Sr2+); the data obtained from the column studies are used to support the robust estimation of zeolite cation exchange parameters. This research will produce a five-solute cation exchange model describing the removal efficiency of the zeolite, using the various column tests to calibrate and validate the geochemical transport model. The field-scale transport model provides flexibility to explore design parameters and potential variations in groundwater geochemistry to investigate the long-term performance of a full-scale treatment wall at the Western New York nuclear facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
NASA Astrophysics Data System (ADS)
Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus
2017-04-01
Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional/national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycles of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and spatial scales. Process-based modelling requires high-spatial-resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practice data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability, followed by sampled sets of input datasets and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Further, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We applied the methodology to a regional inventory to assess the overall modelling uncertainty of a regional N2O and NO emissions inventory for the state of Saxony, Germany.
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions offer improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated by two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions are improved, we evaluate the proposed algorithm by testing it against streamflow generated from a known model ('abcd' model or VIC model) with errors being homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than the multimodels under almost no model error. Under increased model error, the multimodel consistently performed better than the single-model predictions in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow than the single-model predictions.
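As a rough illustration of the weighted multimodel idea (not the predictor-state-contingent algorithm of the study), the sketch below derives least-squares combination weights for two synthetic candidate models and compares single-model and multimodel RMSE; all data and names are invented.

```python
import numpy as np

def optimal_combination_weights(predictions, observed):
    """Least-squares weights for combining candidate model predictions.

    predictions: 2D array (n_times x n_models); observed: 1D array.
    Returns non-negative weights normalised to sum to one (simple
    illustrative scheme only).
    """
    w, *_ = np.linalg.lstsq(predictions, observed, rcond=None)
    w = np.clip(w, 0.0, None)           # discard negative weights
    return w / w.sum()

# Synthetic example: two imperfect "models" of the same true flow
rng = np.random.default_rng(1)
true_flow = 50.0 + 10.0 * np.sin(np.linspace(0, 6.0, 120))
model_a = true_flow + rng.normal(0, 4, 120)     # noisier candidate
model_b = true_flow + rng.normal(0, 2, 120)     # better candidate
stack = np.column_stack([model_a, model_b])
weights = optimal_combination_weights(stack, true_flow)
multimodel = stack @ weights
rmse = lambda sim: np.sqrt(np.mean((sim - true_flow) ** 2))
print(weights, rmse(model_a), rmse(model_b), rmse(multimodel))
```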
Quantifier Comprehension in Corticobasal Degeneration
ERIC Educational Resources Information Center
McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray
2006-01-01
In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…
The extension of total gain (TG) statistic in survival models: properties and applications.
Choodari-Oskooei, Babak; Royston, Patrick; Parmar, Mahesh K B
2015-07-01
The results of multivariable regression models are usually summarized in the form of parameter estimates for the covariates, goodness-of-fit statistics, and the relevant p-values. These statistics do not inform us about whether covariate information will lead to any substantial improvement in prediction. Predictive ability measures can be used for this purpose since they provide important information about the practical significance of prognostic factors. R²-type indices are the most familiar forms of such measures in survival models, but they all have limitations and none is widely used. In this paper, we extend the total gain (TG) measure, proposed for a logistic regression model, to survival models and explore its properties using simulations and real data. TG is based on the binary regression quantile plot, otherwise known as the predictiveness curve. Standardised TG ranges from 0 (no explanatory power) to 1 ('perfect' explanatory power). The results of our simulations show that unlike many of the other R²-type predictive ability measures, TG is independent of random censoring. It increases as the effect of a covariate increases and can be applied to different types of survival models, including models with time-dependent covariate effects. We also apply TG to quantify the predictive ability of multivariable prognostic models developed in several disease areas. Overall, TG performs well in our simulation studies and can be recommended as a measure to quantify the predictive ability in survival models.
Non-uniform overland flow-infiltration model for roadside swales
NASA Astrophysics Data System (ADS)
García-Serrana, María; Gulliver, John S.; Nieber, John L.
2017-09-01
There is a need to quantify the hydrologic performance of vegetated roadside swales (drainage ditches) as stormwater control measures (SCMs). To quantify their infiltration performance in both the side slope and the channel of the swale, a model has been developed that couples a Green-Ampt-Mein-Larson (GAML) infiltration submodel with kinematic wave submodels for overland flow down the side slope and for open channel flow in the ditch. The coupled GAML and overland flow submodels have been validated using data collected in twelve simulated runoff tests on three different highways in the Minneapolis-St. Paul metropolitan area, MN. The percentage of the total water infiltrated into the side slope is considerably greater than into the channel. Thus, for the typical design found in highways, the side slope of a roadside swale is the main component contributing to the loss of runoff by infiltration, and the channel primarily conveys the water that runs off the side slope. Finally, as demonstrated in field observations and the model, the fraction of the runoff/rainfall infiltrated (Vi∗) into the roadside swale appears to increase with a dimensionless saturated hydraulic conductivity (Ks∗), which is a function of the saturated hydraulic conductivity, rainfall intensity, and dimensions of the swale and contributing road surface. For design purposes, the relationship between Vi∗ and Ks∗ can provide a rough estimate of the fraction of runoff/rainfall infiltrated with the few essential parameters that appear to dominate the results.
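For readers unfamiliar with the infiltration component, the following sketch solves the classical ponded Green-Ampt equation by fixed-point iteration. It is only a simplified stand-in for the GAML submodel, does not reproduce the kinematic wave coupling or redistribution logic of the paper, and uses assumed, illustrative soil parameters.

```python
import numpy as np

def green_ampt_cumulative(t, Ks, psi, delta_theta, tol=1e-8):
    """Cumulative infiltration F(t) from the ponded Green-Ampt equation
    F = Ks*t + psi*delta_theta*ln(1 + F/(psi*delta_theta)),
    solved by fixed-point iteration (ponding assumed throughout)."""
    s = psi * delta_theta
    F = Ks * t if Ks * t > 0 else 1e-6       # initial guess
    for _ in range(200):
        F_new = Ks * t + s * np.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

# Example: assumed loam-like parameters
Ks = 1.0            # cm/h, saturated hydraulic conductivity
psi = 11.0          # cm, wetting-front suction head
delta_theta = 0.3   # change in volumetric water content
for hours in (0.25, 0.5, 1.0, 2.0):
    print(hours, round(green_ampt_cumulative(hours, Ks, psi, delta_theta), 3), "cm")
```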
NASA Astrophysics Data System (ADS)
Zaherpour, Jamal; Gosling, Simon N.; Mount, Nick; Müller Schmied, Hannes; Veldkamp, Ted I. E.; Dankers, Rutger; Eisner, Stephanie; Gerten, Dieter; Gudmundsson, Lukas; Haddeland, Ingjerd; Hanasaki, Naota; Kim, Hyungjun; Leng, Guoyong; Liu, Junguo; Masaki, Yoshimitsu; Oki, Taikan; Pokhrel, Yadu; Satoh, Yusuke; Schewe, Jacob; Wada, Yoshihide
2018-06-01
Global-scale hydrological models are routinely used to assess water scarcity, flood hazards and droughts worldwide. Recent efforts to incorporate anthropogenic activities in these models have enabled more realistic comparisons with observations. Here we evaluate simulations from an ensemble of six models participating in the second phase of the Inter-Sectoral Impact Model Inter-comparison Project (ISIMIP2a). We simulate monthly runoff in 40 catchments, spatially distributed across eight global hydrobelts. The performance of each model and the ensemble mean is examined with respect to their ability to replicate observed mean and extreme runoff under human-influenced conditions. Application of a novel integrated evaluation metric to quantify the models' ability to simulate time series of monthly runoff suggests that the models generally perform better in the wetter equatorial and northern hydrobelts than in drier southern hydrobelts. When model outputs are temporally aggregated to assess mean annual and extreme runoff, the models perform better. Nevertheless, we find a general trend in the majority of models towards the overestimation of mean annual runoff and all indicators of upper and lower extreme runoff. The models struggle to capture the timing of the seasonal cycle, particularly in northern hydrobelts, while in southern hydrobelts the models struggle to reproduce the magnitude of the seasonal cycle. It is noteworthy that over all hydrological indicators, the ensemble mean fails to perform better than any individual model—a finding that challenges the commonly held perception that model ensemble estimates deliver superior performance over individual models. The study highlights the need for continued model development and improvement. It also suggests that caution should be taken when summarising the simulations from a model ensemble based upon its mean output.
Hug, T; Maurer, M
2012-01-01
Distributed (decentralized) wastewater treatment can, in many situations, be a valuable alternative to a centralized sewer network and wastewater treatment plant. However, its acceptance depends critically on whether the same overall treatment performance can be achieved without on-site staff, and on whether that performance can be measured. In this paper we argue and illustrate that the system performance depends not only on the design performance and reliability of the individual treatment units, but also significantly on the monitoring scheme, i.e. on the reliability of the process information. For this purpose, we present a simple model of a fleet of identical treatment units. Their performance depends on four stochastic variables: the reliability of the treatment unit, the response time for the repair of failed units, the reliability of on-line sensors, and the frequency of routine inspections. The simulated scenarios show a significant difference between the true performance and the observations by the sensors and inspections. The results also illustrate the trade-off between investing in reactor and sensor technology and in human interventions in order to achieve a certain target performance. Modeling can quantify such effects and thereby support the identification of requirements for the centralized monitoring of distributed treatment units. The model approach is generic and can be extended and applied to various distributed wastewater treatment technologies and contexts.
Dose-dependent model of caffeine effects on human vigilance during total sleep deprivation.
Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Wesensten, Nancy J; Kamimori, Gary H; Balkin, Thomas J; Reifman, Jaques
2014-10-07
Caffeine is the most widely consumed stimulant to counter sleep-loss effects. While the pharmacokinetics of caffeine in the body are well understood, its alertness-restoring effects are still not well characterized. In fact, mathematical models capable of predicting the effects of varying doses of caffeine on objective measures of vigilance are not available. In this paper, we describe a phenomenological model of the dose-dependent effects of caffeine on psychomotor vigilance task (PVT) performance of sleep-deprived subjects. We used the two-process model of sleep regulation to quantify performance during sleep loss in the absence of caffeine and a dose-dependent multiplier factor derived from the Hill equation to model the effects of single and repeated caffeine doses. We developed and validated the model fits and predictions on PVT lapse (number of reaction times exceeding 500 ms) data from two separate laboratory studies. At the population-average level, the model captured the effects of a range of caffeine doses (50-300 mg), yielding up to a 90% improvement over the two-process model. Individual-specific caffeine models, on average, predicted the effects up to 23% better than population-average caffeine models. The proposed model serves as a useful tool for predicting the dose-dependent effects of caffeine on the PVT performance of sleep-deprived subjects and, therefore, can be used for determining caffeine doses that optimize the timing and duration of peak performance.
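A minimal sketch of the modelling idea, assuming first-order caffeine elimination and an illustrative Hill-type attenuation factor applied to a stand-in baseline impairment curve; the fitted parameters and the two-process model outputs of the study are not reproduced here.

```python
import numpy as np

def caffeine_multiplier(dose_mg, t_hours, ed50=200.0, half_life=5.0, hill_n=1.0):
    """Illustrative Hill-type attenuation factor for a caffeine dose.
    The circulating amount decays with first-order kinetics, and the factor
    approaches 1 (no effect) as the amount falls. ed50, half_life and hill_n
    are assumed values, not the parameters fitted in the paper."""
    k = np.log(2.0) / half_life
    amount = dose_mg * np.exp(-k * t_hours)
    return 1.0 / (1.0 + (amount / ed50) ** hill_n)

# Predicted lapses = baseline impairment (stand-in for the two-process model)
# multiplied by the caffeine factor after a single 200 mg dose at 24 h awake.
hours_awake = np.arange(0, 49, 6)
baseline_lapses = 2.0 + 0.5 * hours_awake
dose_time = 24.0
factor = np.where(hours_awake >= dose_time,
                  caffeine_multiplier(200.0, hours_awake - dose_time),
                  1.0)
print(np.round(baseline_lapses, 1))
print(np.round(baseline_lapses * factor, 1))
```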
Acoustic Noise Test Report for the U.S. Department of Energy 1.5-Megawatt Wind Turbine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roadman, Jason; Huskey, Arlinda
2015-07-01
A series of tests were conducted to characterize the baseline properties and performance of the U.S. Department of Energy (DOE) 1.5-megawatt wind turbine (DOE 1.5) to enable research model development and quantify the effects of future turbine research modifications. The DOE 1.5 is built on the platform of GE's 1.5-MW SLE commercial wind turbine model. It was installed in a nonstandard configuration at the NWTC with the objective of supporting DOE Wind Program research initiatives such as A2e. Therefore, the test results may not represent the performance capabilities of other GE 1.5-MW SLE turbines. The acoustic noise test documented in this report is one of a series of tests carried out to establish a performance baseline for the DOE 1.5 in the NWTC inflow environment.
Zibordi, Giuseppe
2016-03-21
Determination of the water-leaving radiance LW through above-water radiometry requires knowledge of accurate reflectance factors ρ of the sea surface. Publicly available ρ relevant to above-water radiometry include theoretical data sets generated (i) by assuming a sky radiance distribution that accounts for aerosols and multiple scattering but neglects polarization, quantifying sea surface effects through Cox-Munk wave slope statistics; or (ii) by accounting for polarization but assuming an ideal Rayleigh sky radiance distribution, quantifying sea surface effects through modeled wave elevation and slope variance spectra. The impact on above-water data products of differences between those factors ρ was quantified through comparison of LW from the Ocean Color component of the Aerosol Robotic Network (AERONET-OC) with collocated LW from in-water radiometry. Results from the analysis of radiance measurements from the sea, performed with a 40-degree viewing angle and a 90-degree azimuth offset with respect to the sun plane, indicated a slightly better agreement between above- and in-water LW, for wind speeds tentatively lower than 4 m s⁻¹, with ρ computed accounting for aerosols, multiple scattering and Cox-Munk surfaces. Nevertheless, analyses performed by partitioning the investigated data set also indicated that actual ρ values exhibit a dependence on sun zenith angle lying between those characterizing the two sets of reflectance factors.
A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2011-01-01
Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper describes the 3D ROC surface metric in detail and presents an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
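As a building block for such metrics, the sketch below computes a standard ROC curve and its area by trapezoidal integration from synthetic detector scores; the 3D extension would add a classification curve and a latency axis and integrate the resulting surface. The data and thresholds here are invented.

```python
import numpy as np

def roc_curve(scores, labels, thresholds):
    """True/false positive rates of a binary detector as the threshold varies."""
    tpr, fpr = [], []
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores)
    for th in thresholds:
        detected = scores >= th
        tpr.append(np.mean(detected[labels]))      # fraction of faults detected
        fpr.append(np.mean(detected[~labels]))     # fraction of false alarms
    return np.array(fpr), np.array(tpr)

rng = np.random.default_rng(2)
labels = rng.random(2000) < 0.3                        # 30% faulty cases
scores = rng.normal(loc=np.where(labels, 1.5, 0.0))    # detector output
ths = np.linspace(scores.min(), scores.max(), 200)
fpr, tpr = roc_curve(scores, labels, ths)
order = np.argsort(fpr)
auc = np.trapz(tpr[order], fpr[order])                 # area under the ROC curve
print(round(float(auc), 3))
```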
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is to combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions offer improved predictions. A new dynamic approach (MM-1) for combining multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model ("abcd" model or VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models (i.e., the "abcd" or VIC model). Results from the study show that streamflow estimated from single models performed better than the multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across the models, whereas MM-O always assigns higher weights to the candidate model that performed best in the calibration period. Applying the multimodel algorithms to predict streamflows at four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
The dependence of the properties of optical fibres on length
NASA Astrophysics Data System (ADS)
Poppett, C. L.; Allington-Smith, J. R.
2010-05-01
We investigate how the properties of optical fibres used in astronomy depend on length, especially the focal ratio degradation (FRD), which places constraints on the performance of fibre-fed spectrographs used for multiplexed spectroscopy. To this end, we present a modified version of the FRD model proposed by Carrasco & Parry to quantify the number of scattering defects within an optical fibre using a single parameter. The model predicts many trends which are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However, the model also predicts a strong dependence of FRD on length that is not seen experimentally. By adapting the single-fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data, we find that polishing the fibre induces more stress in the end of the fibre than a simple cleave technique. We estimate that the number of scattering defects caused by polishing is approximately double that produced by cleaving. By placing limits on the end effect, the model can be used to estimate the residual-length dependence in very long fibres, such as those required for Extremely Large Telescopes, without having to carry out costly experiments. We also use our data to compare different methods of fibre termination.
Comparing GIS-based habitat models for applications in EIA and SEA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gontier, Mikael, E-mail: gontier@kth.s; Moertberg, Ulla, E-mail: mortberg@kth.s; Balfors, Berit, E-mail: balfors@kth.s
Land use changes, and urbanisation and infrastructure developments in particular, cause fragmentation of natural habitats and threaten biodiversity. Tools and measures must be adapted to assess and remedy the potential effects on biodiversity caused by human activities and developments. Within physical planning, environmental impact assessment (EIA) and strategic environmental assessment (SEA) play important roles in the prediction and assessment of biodiversity-related impacts from planned developments. However, adapted prediction tools to forecast and quantify potential impacts on biodiversity components are lacking. This study tested and compared four different GIS-based habitat models and assessed their relevance for applications in environmental assessment. The models were implemented in the Stockholm region in central Sweden and applied to data on the crested tit (Parus cristatus), a sedentary bird species of coniferous forest. All four models performed well and allowed the distribution of suitable habitats for the crested tit in the Stockholm region to be predicted. The models were also used to predict and quantify habitat loss for two regional development scenarios. The study highlighted the importance of model selection in impact prediction. Criteria that are relevant for the choice of model for predicting impacts on biodiversity were identified and discussed. Finally, the importance of environmental assessment for the preservation of biodiversity within the general frame of biodiversity conservation is emphasised.
NASA Astrophysics Data System (ADS)
Erol, Serdar; Serkan Isık, Mustafa; Erol, Bihter
2016-04-01
Data from the recent Earth gravity field satellite missions have led to significant improvements in Global Geopotential Models in terms of both accuracy and resolution. However, the improvement in accuracy is not the same everywhere on Earth, and therefore quantifying the level of improvement locally using independent data is necessary. Validation of the level-3 products from the gravity field satellite missions, independently of the estimation procedures of these products, is possible using various data sets, such as terrestrial gravity observations, astrogeodetic vertical deflections, GPS/leveling data, and the stationary sea surface topography. Quantifying the quality of the gravity field functionals derived from recent products is important for regional geoid modeling based on the fusion of satellite and terrestrial data with an optimal algorithm, in addition to statistically reporting improvement rates by spatial location. In the validations, the errors and systematic differences between the data and the varying spectral content of the compared signals should be considered in order to have comparable results. In this manner, this study compares the performance of wavelet decomposition and spectral enhancement techniques in the validation of GOCE/GRACE-based Earth gravity field models using GPS/leveling and terrestrial gravity data in Turkey. The terrestrial validation data are filtered using the wavelet decomposition technique, and the numerical results from varying levels of decomposition are compared with results derived using the spectral enhancement approach with the contribution of an ultra-high-resolution Earth gravity field model. The tests include the GO-DIR-R5, GO-TIM-R5, GOCO05S, EIGEN-6C4 and EGM2008 global models. The conclusions discuss the advantages and drawbacks of both concepts and report the performance of the tested gravity field models with an estimate of their contribution to modeling the geoid in Turkish territory.
Understanding and quantifying foliar temperature acclimation for Earth System Models
NASA Astrophysics Data System (ADS)
Smith, N. G.; Dukes, J.
2015-12-01
Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represents a major uncertainty in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were done using multiple studies that employed differing methodologies. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), the maximum rate of ribulose-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves from plants acclimated to different temperatures was used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which the plants were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan. These data indicate that models using previous acclimation formulations are likely incorrectly simulating leaf carbon exchange responses to future warming. Therefore, our data, if used to parameterize large-scale models, are likely to provide an even greater improvement in model performance, resulting in more reliable projections of future carbon-climate feedbacks.
Ultrasound elasticity imaging of human posterior tibial tendon
NASA Astrophysics Data System (ADS)
Gao, Liang
Posterior tibial tendon dysfunction (PTTD) is a common degenerative condition leading to a severe impairment of gait. There is currently no effective method to determine whether a patient with advanced PTTD would benefit from several months of bracing and physical therapy or ultimately require surgery. Tendon degeneration is closely associated with irreversible degradation of its collagen structure, leading to changes in its mechanical properties. If these properties could be monitored in vivo, they could be used to quantify the severity of tendinosis and help determine the appropriate treatment. Ultrasound elasticity imaging (UEI) is a real-time, noninvasive technique to objectively measure mechanical properties in soft tissue. It consists of acquiring a sequence of ultrasound frames and applying speckle tracking to estimate displacement and strain at each pixel. The goals of my dissertation were to 1) use acoustic simulations to investigate the performance of UEI during tendon deformation with different geometries; 2) develop and validate UEI as a potentially noninvasive technique for quantifying tendon mechanical properties in human cadaver experiments; and 3) design a platform for UEI to measure mechanical properties of the PTT in vivo and determine whether there are detectable and quantifiable differences between healthy and diseased tendons. First, ultrasound simulations of tendon deformation were performed using an acoustic modeling program. The effects of different tendon geometries (cylinder and curved cylinder) on the performance of UEI were investigated. Modeling results indicated that UEI accurately estimated the strain in the cylinder geometry, but underestimated it in the curved cylinder. The simulation also predicted that the out-of-plane motion of the PTT would cause a non-uniform strain pattern within an incompressible homogeneous isotropic material. However, averaging within a small region of interest determined by principal component analysis (PCA) would improve the estimation. Next, UEI was performed on five human cadaver feet mounted in a materials testing system (MTS) while the PTT was attached to a force actuator. A portable ultrasound scanner collected 2D data during loading cycles. Young's modulus was calculated from the strain, loading force and cross-sectional area of the PTT. The average Young's modulus for the five tendons was 0.45 ± 0.16 GPa using UEI. This was consistent with simultaneous measurements made by the MTS across the whole tendon (0.52 ± 0.18 GPa). We also calculated the scaling factor (0.12 ± 0.01) between the load on the PTT and the inversion force at the forefoot, a measurable quantity in vivo. This study suggests that UEI could be a reliable in vivo technique for estimating the mechanical properties of the human PTT. Finally, we built a custom ankle inversion platform for in vivo imaging of human subjects (eight healthy volunteers and nine advanced PTTD patients). We found non-linear elastic properties of the PTT, which could be quantified by the slope between the elastic modulus (E) and the inversion force (F). This slope (ΔE/ΔF), or Non-linear Elasticity Parameter (NEP), was significantly different for the two groups: 0.16 ± 0.20 MPa/N for healthy tendons and 0.45 ± 0.43 MPa/N for PTTD tendons. A receiver operating characteristic (ROC) curve revealed an area under the curve (AUC) of 0.83 ± 0.07, which indicated that the classifier system is valid.
In summary, the acoustic modeling, cadaveric studies, and in vivo experiments together demonstrated that UEI accurately quantifies tendon mechanical properties. As a valuable clinical tool, UEI also has the potential to help guide treatment decisions for advanced PTTD and other tendinopathies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Venkat; Das, Trishna
Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess system frequency response and quantify the benefits of storage technologies in providing regulation service. The paper also presents the slow dynamics model of the power system integrated with storage technologies in a complete state-space form. The storage technologies have been integrated into the IEEE 24-bus system as a single area, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performance. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
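A minimal sketch of BIC-approximated Bayesian model averaging over small band subsets, using ordinary least squares and synthetic "spectra"; this is an illustrative approximation under assumed data, not the BMA implementation of the study.

```python
import itertools
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares fit; returns coefficients and residual variance."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    sigma2 = np.mean((y - Xd @ beta) ** 2)
    return beta, sigma2

def bma_predict(X, y, X_new, max_bands=2):
    """Average predictions over candidate band subsets, weighted by exp(-BIC/2)."""
    n, p = X.shape
    candidates = [c for k in range(1, max_bands + 1)
                  for c in itertools.combinations(range(p), k)]
    bics, preds = [], []
    for c in candidates:
        beta, sigma2 = fit_ols(X[:, c], y)
        k_params = len(c) + 2                      # coefficients + intercept + variance
        bics.append(n * np.log(sigma2) + k_params * np.log(n))
        preds.append(np.column_stack([np.ones(len(X_new)), X_new[:, c]]) @ beta)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()                                   # approximate posterior model weights
    return np.sum(w[:, None] * np.array(preds), axis=0), w

# Synthetic "spectra": 5 bands, the target depends only on bands 1 and 3
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))
y = 2.0 + 1.5 * X[:, 1] - 0.8 * X[:, 3] + rng.normal(0, 0.2, 60)
pred, weights = bma_predict(X[:40], y[:40], X[40:])
print(np.round(weights, 3))
print(np.sqrt(np.mean((pred - y[40:]) ** 2)))      # hold-out RMSE
```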
Learning About Climate and Atmospheric Models Through Machine Learning
NASA Astrophysics Data System (ADS)
Lucas, D. D.
2017-12-01
From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
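A minimal sketch of the surrogate-modelling step described above, assuming a random forest trained on a hypothetical ensemble of perturbed-parameter runs; the parameter names and the output quantity are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical ensemble: each row is one model run with perturbed input
# parameters; "output" stands in for a simulated quantity of interest.
rng = np.random.default_rng(4)
n_runs = 500
params = rng.uniform(0.0, 1.0, size=(n_runs, 4))       # e.g. entrainment, albedo, ...
output = (2.0 * params[:, 0] + np.sin(3.0 * params[:, 1])
          + 0.1 * params[:, 2] + rng.normal(0, 0.05, n_runs))

# Supervised learning maps inputs to outputs; importances rank the parameters
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(params[:400], output[:400])

test_pred = surrogate.predict(params[400:])
print("hold-out RMSE:", np.sqrt(np.mean((test_pred - output[400:]) ** 2)))
print("importances:", np.round(surrogate.feature_importances_, 3))
```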
Numerical modelling of glacial lake outburst floods using physically based dam-breach models
NASA Astrophysics Data System (ADS)
Westoby, M. J.; Brasington, J.; Glasser, N. F.; Hambrey, M. J.; Reynolds, J. M.; Hassan, M. A. A. M.; Lowe, A.
2015-03-01
The instability of moraine-dammed proglacial lakes creates the potential for catastrophic glacial lake outburst floods (GLOFs) in high-mountain regions. In this research, we use a unique combination of numerical dam-breach and two-dimensional hydrodynamic modelling, employed within a generalised likelihood uncertainty estimation (GLUE) framework, to quantify predictive uncertainty in model outputs associated with a reconstruction of the Dig Tsho failure in Nepal. Monte Carlo analysis was used to sample the model parameter space, and morphological descriptors of the moraine breach were used to evaluate model performance. Multiple breach scenarios were produced by differing parameter ensembles associated with a range of breach initiation mechanisms, including overtopping waves and mechanical failure of the dam face. The material roughness coefficient was found to exert a dominant influence over model performance. The downstream routing of scenario-specific breach hydrographs revealed significant differences in the timing and extent of inundation. A GLUE-based methodology for constructing probabilistic maps of inundation extent, flow depth, and hazard is presented and provides a useful tool for communicating uncertainty in GLOF hazard assessment.
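The GLUE weighting step can be sketched as follows, using a toy breach model and an informal inverse-error likelihood; the physically based dam-breach and hydrodynamic models of the study are not reproduced, and all parameter ranges and numbers are invented.

```python
import numpy as np

def toy_breach_model(erodibility, roughness):
    """Stand-in for a dam-breach simulator: returns (breach_width_m, peak_q_m3s).
    Purely illustrative; not the physically based model used in the paper."""
    width = 40.0 * erodibility / (0.5 + roughness)
    peak_q = 800.0 * erodibility / (0.2 + roughness)
    return width, peak_q

observed_width = 60.0           # morphological descriptor used to score each run
rng = np.random.default_rng(5)
n = 5000
erod = rng.uniform(0.5, 3.0, n)
rough = rng.uniform(0.02, 0.10, n)
widths, peaks = np.vectorize(toy_breach_model)(erod, rough)

# GLUE-style informal likelihood: inverse squared error, with a behavioural cutoff
likelihood = 1.0 / (1.0 + (widths - observed_width) ** 2)
behavioural = likelihood > np.quantile(likelihood, 0.9)
w = likelihood[behavioural] / likelihood[behavioural].sum()

# Likelihood-weighted quantiles of peak discharge express predictive uncertainty
order = np.argsort(peaks[behavioural])
cdf = np.cumsum(w[order])
q05, q95 = np.interp([0.05, 0.95], cdf, peaks[behavioural][order])
print(round(float(q05)), round(float(q95)))
```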
Modeling and Prediction of Corrosion-Fatigue Failures in AF1410 Steel Test Specimens
2009-01-12
...costs. To address these issues, NAVAIR has initiated a multiyear research program to investigate and quantify the fatigue life reduction due to
NASA Astrophysics Data System (ADS)
Mues, A.; Kuenen, J.; Hendriks, C.; Manders, A.; Segers, A.; Scholz, Y.; Hueglin, C.; Builtjes, P.; Schaap, M.
2013-07-01
In this study, the sensitivity of the performance of the chemistry transport model (CTM) LOTOS-EUROS to the description of the temporal variability of emissions was investigated. Currently, the temporal release of anthropogenic emissions is described by European-average diurnal, weekly and seasonal time profiles per sector. These default time profiles largely neglect the variation of emission strength with activity patterns, region, species, emission process and meteorology. The three sources dealt with in this study are combustion in energy and transformation industries (SNAP1), non-industrial combustion (SNAP2) and road transport (SNAP7). First, the impact of neglecting the temporal emission profiles for these SNAP categories on simulated concentrations was explored. In a second step, we constructed more detailed emission time profiles for the three categories and quantified their impact on model performance separately as well as combined. The performance against observations for Germany was quantified for the pollutants NO2, SO2 and PM10 and compared to a simulation using the default LOTOS-EUROS emission time profiles. In general, the largest impact on model performance was found when neglecting the default time profiles for the three categories. The daily average correlation coefficient, for instance, decreased by 0.04 (NO2), 0.11 (SO2) and 0.01 (PM10) at German urban background stations compared to the default simulation. A systematic increase of the correlation coefficient is found when using the new time profiles. The size of the increase depends on the source category, the component and the station. Using national profiles for road transport showed important improvements in the explained variability over the weekdays as well as in the diurnal cycle for NO2. The largest impact of the SNAP1 and SNAP2 profiles was found for SO2. When using all new time profiles simultaneously in one simulation, the daily average correlation coefficient increased by 0.05 (NO2), 0.07 (SO2) and 0.03 (PM10) at urban background stations in Germany. This exercise showed that, to improve the performance of a CTM, a better representation of the distribution of anthropogenic emissions in time is recommended. This can be done by developing a dynamic emission model that takes into account region-specific factors and meteorology.
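As an illustration of how such time profiles are typically applied, the sketch below distributes an annual sector emission total to a single hour using monthly, day-of-week and diurnal factors; the factor values are invented and are not the LOTOS-EUROS defaults or the new national profiles of the study.

```python
import numpy as np

def hourly_emission(annual_total, month_f, weekday_f, hour_f, month, weekday, hour):
    """Distribute an annual emission total to a given hour using sector-specific
    monthly, day-of-week and diurnal factors, each normalised to a mean of 1.
    All factor values below are illustrative assumptions."""
    hours_per_year = 8760.0
    return (annual_total / hours_per_year) * month_f[month] * weekday_f[weekday] * hour_f[hour]

# Illustrative road-transport-like profiles: rush-hour peaks, weekend reduction
hour_f = np.array([0.4, 0.3, 0.3, 0.3, 0.5, 0.9, 1.5, 1.9, 1.7, 1.3, 1.2, 1.2,
                   1.2, 1.2, 1.3, 1.5, 1.8, 1.9, 1.5, 1.1, 0.8, 0.7, 0.6, 0.5])
hour_f /= hour_f.mean()
weekday_f = np.array([1.08, 1.08, 1.08, 1.08, 1.08, 0.85, 0.75])
weekday_f /= weekday_f.mean()
month_f = np.ones(12)                        # no seasonal variation assumed here

e_total = 1.0e6                              # kg per year for one grid cell (assumed)
print(hourly_emission(e_total, month_f, weekday_f, hour_f, month=5, weekday=0, hour=8))
```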
Mabood, Fazal; Abbas, Ghulam; Jabeen, Farah; Naureen, Zakira; Al-Harrasi, Ahmed; Hamaed, Ahmad M; Hussain, Javid; Al-Nabhani, Mahmood; Al Shukaili, Maryam S; Khan, Alamgir; Manzoor, Suryyia
2018-03-01
Cows' butterfat may be adulterated with animal fat materials like tallow, which causes increased serum cholesterol and triglyceride levels upon consumption. There is no reliable and feasible technique to detect and quantify tallow adulteration in butter samples. In this study, a highly sensitive near-infrared (NIR) spectroscopy method combined with chemometrics was developed to detect as well as quantify the level of tallow adulterant in clarified butter samples. For this investigation, pure clarified butter samples were intentionally adulterated with tallow at the following percentage levels: 1%, 3%, 5%, 7%, 9%, 11%, 13%, 15%, 17% and 20% (wt/wt). Altogether 99 clarified butter samples were used, including nine pure samples (un-adulterated clarified butter) and 90 clarified butter samples adulterated with tallow. Each sample was analysed using NIR spectroscopy in reflection mode in the range 10,000-4000 cm⁻¹, at 2 cm⁻¹ resolution, using a transflectance sample accessory that provided a total path length of 0.5 mm. Chemometric models including principal components analysis (PCA), partial least-squares discriminant analysis (PLSDA), and partial least-squares regression (PLSR) were applied for statistical treatment of the obtained NIR spectral data. The PLSDA model was employed to differentiate pure butter samples from those adulterated with tallow. The model was then externally cross-validated using a test set comprising 30% of the butter samples. The excellent performance of the model was demonstrated by the low RMSEP value of 1.537% and the high correlation factor of 0.95. The newly developed method is robust, non-destructive, highly sensitive, and economical, with very minor sample preparation and the ability to quantify less than 1.5% tallow adulteration in clarified butter samples.
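A minimal sketch of the PLSR calibration/validation step, using scikit-learn and synthetic stand-in spectra rather than the measured NIR data; the number of components, the adulteration levels per sample, and the spectral features are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra: 99 samples x 300 wavenumbers, where the
# tallow level shifts a broad absorption feature. Not the measured data set.
rng = np.random.default_rng(6)
levels = np.repeat([0, 1, 3, 5, 7, 9, 11, 13, 15, 17, 20], 9).astype(float)  # % tallow
wavenumbers = np.linspace(10000, 4000, 300)
base = np.exp(-((wavenumbers - 5200) / 400) ** 2)
tallow_band = np.exp(-((wavenumbers - 5800) / 250) ** 2)
spectra = (base[None, :] + 0.01 * levels[:, None] * tallow_band[None, :]
           + rng.normal(0, 0.002, (len(levels), 300)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, levels, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5)
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))       # external validation error
r = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSEP = {rmsep:.2f}%  correlation = {r:.3f}")
```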
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
NASA Astrophysics Data System (ADS)
Rounds, S. A.; Sullivan, A. B.
2004-12-01
Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. The model was calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003; modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model performance was judged to be excellent for water temperature (annual ME: -0.22 to 0.05 °C; annual MAE: 0.62 to 0.68 °C) and dissolved oxygen (annual ME: -0.28 to 0.18 mg/L; annual MAE: 0.43 to 0.92 mg/L), showing that the model is sufficiently accurate for future water resources planning and management.
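The goodness-of-fit suite described here can be sketched in a few lines; the example below uses synthetic data rather than the Hagg Lake output and computes ME, MAE, RMSE, the coefficient of determination and a Wilcoxon test for systematic bias.

```python
import numpy as np
from scipy import stats

def goodness_of_fit(simulated, observed):
    """Suite of fit statistics of the kind used to judge the lake model."""
    err = simulated - observed
    me = err.mean()                                   # mean error (bias)
    mae = np.abs(err).mean()                          # mean absolute error
    rmse = np.sqrt((err ** 2).mean())                 # root mean square error
    r2 = np.corrcoef(simulated, observed)[0, 1] ** 2  # coefficient of determination
    wilcoxon_p = stats.wilcoxon(simulated, observed).pvalue  # paired test for bias
    return {"ME": me, "MAE": mae, "RMSE": rmse, "R2": r2, "Wilcoxon p": wilcoxon_p}

# Example with synthetic water-temperature data (not the Hagg Lake data)
rng = np.random.default_rng(7)
observed = 15.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, 365))
simulated = observed + rng.normal(0.1, 0.6, 365)      # slight warm bias plus noise
for name, value in goodness_of_fit(simulated, observed).items():
    print(f"{name}: {value:.3f}")
```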
Performance-based maintenance of gas turbines for reliable control of degraded power systems
NASA Astrophysics Data System (ADS)
Mo, Huadong; Sansavini, Giovanni; Xie, Min
2018-03-01
Maintenance actions are necessary for ensuring proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model for control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic, stochastic, control-block-diagram-based simulation model consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start times and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is used to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control-performance-based maintenance model can reduce maintenance costs compared to CBM and pre-scheduled maintenance.
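A minimal sketch of the cost analysis, assuming a Wiener-process degradation measure inspected at fixed intervals with preventive-maintenance and failure thresholds; the thresholds, costs and process parameters are illustrative and do not represent the gas-turbine case study.

```python
import numpy as np

def simulate_maintenance_cost(pm_threshold, inspect_interval, horizon=5000,
                              drift=0.002, vol=0.05, fail_level=1.0,
                              c_inspect=1.0, c_pm=20.0, c_fail=200.0, n_runs=2000):
    """Monte Carlo mean cost of an inspection-based maintenance policy in which
    the degradation measure follows a Wiener process (drift + Brownian noise).
    All thresholds, costs and parameters are assumed values."""
    rng = np.random.default_rng(8)
    total = 0.0
    for _ in range(n_runs):
        level, t, cost = 0.0, 0.0, 0.0
        while t < horizon:
            dt = inspect_interval
            level += drift * dt + vol * np.sqrt(dt) * rng.standard_normal()
            t += dt
            cost += c_inspect
            if level >= fail_level:          # failure discovered at inspection
                cost += c_fail
                level = 0.0
            elif level >= pm_threshold:      # preventive maintenance triggered
                cost += c_pm
                level = 0.0
        total += cost
    return total / n_runs

# Trade-off between preventive-maintenance threshold and expected cost
for thr in (0.5, 0.7, 0.9):
    print(thr, round(simulate_maintenance_cost(thr, inspect_interval=50.0), 1))
```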
A Statistical Comparison of Coupled Thermosphere-Ionosphere Models
NASA Astrophysics Data System (ADS)
Liuzzo, L. R.
2014-12-01
The thermosphere-ionosphere system is a highly dynamic, non-linearly coupled system that fluctuates on a daily basis. Many models attempt to quantify the relationship between the two atmospheric layers, and each approaches the problem differently. Because these models differ in the implementation of the equations that govern the dynamics of the thermosphere-ionosphere system, it is important to understand under which conditions each model performs best and under which conditions each model may have limitations in accuracy. With this in mind, this study examines the ability of two of the leading coupled thermosphere-ionosphere models in the community, TIE-GCM and GITM, to reproduce thermospheric and ionospheric quantities observed by the CHAMP satellite during times of differing geomagnetic activity. Neutral and electron densities are studied for three geomagnetic activity levels, ranging from high to minimal activity. Metrics used to quantify differences between the two models include root-mean-square error and prediction efficiency, and qualitative differences between a model and the observed data are also considered. The metrics are separated into high-, mid- and low-latitude regions to depict any latitudinal dependencies of the models during the various events. Despite solving for the same parameters, the models are shown to depend strongly on the level of geomagnetic activity and can differ significantly from each other. In addition, comparison with previous statistical studies that used the models shows a clear improvement in how each model resolves thermospheric and ionospheric constituents during the differing levels of activity.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2016-01-01
Arterial cerebral blood volume (aCBV) is associated with many physiologic and pathologic conditions. Recently, a multiphase balanced steady-state free precession (bSSFP) readout was introduced to measure labeled blood signals in the arterial compartment, based on the fact that the signal difference between labeled and unlabeled blood decreases with the number of RF pulses, which is affected by blood velocity. In this study, we evaluated the feasibility of a new 2D inter-slice bSSFP-based arterial spin labeling (ASL) technique, termed alternate ascending/descending directional navigation (ALADDIN), to quantify aCBV using multiphase acquisition in six healthy subjects. A new kinetic model considering bSSFP RF perturbations was proposed to describe the multiphase data and thus to quantify aCBV. Since the inter-slice time delay (TD) and gap affected the distribution of labeled blood spins in the arterial and tissue compartments, we performed the experiments with two TDs (0 and 500 ms) and two gaps (300% and 450% of slice thickness) to evaluate their roles in quantifying aCBV. Comparison studies using our technique and an existing method termed arterial volume using arterial spin tagging (AVAST) were also separately performed in five subjects. At 300% gap or 500-ms TD, significant tissue perfusion signals were demonstrated, while tissue perfusion signals were minimized and arterial signals were maximized at 450% gap and 0-ms TD. ALADDIN has the advantage of visualizing bi-directional flow effects (ascending/descending) in a single experiment. The labeling efficiency (α) of inter-slice blood flow effects could be measured in the superior sagittal sinus (SSS) (20.8±3.7%) and was used for aCBV quantification. As a result of fitting to the proposed model, aCBV values in gray matter (1.4-2.3 mL/100 mL) were in good agreement with those from the literature. Our technique showed high correlation with AVAST, especially when arterial signals were accentuated (i.e., when TD = 0 ms) (r = 0.53). Bi-directional perfusion imaging with the multiphase ALADDIN approach can be an alternative to existing techniques for quantification of aCBV.
Targeted numerical simulations of binary black holes for GW170104
NASA Astrophysics Data System (ADS)
Healy, J.; Lange, J.; O'Shaughnessy, R.; Lousto, C. O.; Campanelli, M.; Williamson, A. R.; Zlochower, Y.; Calderón Bustillo, J.; Clark, J. A.; Evans, C.; Ferguson, D.; Ghonge, S.; Jani, K.; Khamesra, B.; Laguna, P.; Shoemaker, D. M.; Boyle, M.; García, A.; Hemberger, D. A.; Kidder, L. E.; Kumar, P.; Lovelace, G.; Pfeiffer, H. P.; Scheel, M. A.; Teukolsky, S. A.
2018-03-01
In response to LIGO's observation of GW170104, we performed a series of full numerical simulations of binary black holes, each designed to replicate likely realizations of its dynamics and radiation. These simulations have been performed at multiple resolutions and with two independent techniques to solve Einstein's equations. For the nonprecessing and precessing simulations, we demonstrate that the two techniques agree mode by mode, at a precision substantially in excess of statistical uncertainties in LIGO's current observations. Conversely, we demonstrate that our full numerical solutions contain information which is not accurately captured with the approximate phenomenological models commonly used to infer compact binary parameters. To quantify the impact of these differences on parameter inference for GW170104 specifically, we compare the predictions of our simulations and these approximate models to LIGO's observations of GW170104.
Experimental and numerical modeling of heat transfer in directed thermoplates
Khalil, Imane; Hayes, Ryan; Pratt, Quinn; ...
2018-03-20
We present three-dimensional numerical simulations to quantify the design specifications of a directional thermoplate expanded channel heat exchanger, also called dimpleplate. Parametric thermofluidic simulations were performed independently varying the number of spot welds, the diameter of the spot welds, and the thickness of the fluid channel within the laminar flow regime. Results from computational fluid dynamics simulations show an improvement in heat transfer is achieved under a variety of conditions: when the thermoplate has a relatively large cross-sectional area normal to the flow, a ratio of spot weld spacing to channel length of 0.2, and a ratio of the spot weld diameter with respect to channel width of 0.3. Lastly, experimental results performed to validate the model are also presented.
Great Lakes Offshore Wind Project: Utility and Regional Integration Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sajadi, Amirhossein; Loparo, Kenneth A.; D'Aquila, Robert
This project aims to identify the transmission system upgrades needed to facilitate offshore wind projects, as well as the operational impacts of offshore generation on the regional transmission system in the Great Lakes region. A simulation model of the US Eastern Interconnection was used as the test system for a case study investigating the impact of integrating a 1,000-MW offshore wind farm operating in Lake Erie into the FirstEnergy/PJM service territory. The findings of this research provide recommendations on offshore wind integration scenarios, the locations of points of interconnection, wind profile modeling and simulation, and computational methods to quantify performance, along with operating changes and equipment upgrades needed to mitigate system performance issues introduced by an offshore wind project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roxas, R. M.; Monterola, C.; Carreon-Monterola, S. L.
2010-07-28
We probe the effect of seating arrangement, group composition and group-based competition on students' performance in Physics using a teaching technique adopted from Mazur's peer instruction method. Ninety-eight lectures, involving 2339 students, were conducted across nine learning institutions from February 2006 to June 2009. All the lectures were interspersed with student interaction opportunities (SIO), in which students worked in groups to discuss and answer concept tests. Two individual assessments were administered before and after the SIO. The ratio of the post-assessment score to the pre-assessment score and the Hake factor were calculated to establish the improvement in student performance. Using actual assessment results and neural network (NN) modeling, an optimal seating arrangement for a class was determined based on student seating location. The NN model also provided a quantifiable method for sectioning students. Lastly, the study revealed that competition-driven interactions increase within-group cooperation and lead to greater improvement in student performance.
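As a rough illustration of the improvement measures mentioned above, the following minimal Python sketch computes the post/pre score ratio and the Hake (normalized) gain from synthetic scores; the function name and the numbers are hypothetical and not from the study.

```python
import numpy as np

def improvement_measures(pre, post, max_score=100.0):
    """Return the mean post/pre score ratio and the mean Hake normalized gain,
    g = (post - pre) / (max_score - pre), over students."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    ratio = np.mean(post / pre)
    hake = np.mean((post - pre) / (max_score - pre))
    return ratio, hake

# Hypothetical pre/post concept-test scores for five students
ratio, hake = improvement_measures([40, 55, 60, 35, 50], [65, 80, 78, 60, 72])
print(f"post/pre ratio = {ratio:.2f}, Hake gain = {hake:.2f}")
```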
Oxidation kinetics for conversion of U3O8 to ε-UO3 with NO2
Johnson, J. A.; Rawn, C. J.; Spencer, B. B.; ...
2017-04-04
The oxidation kinetics of U3O8 powder to ε-UO3 in an NO2 environment was measured by in situ x-ray diffraction (XRD). Experiments were performed at temperatures of 195, 210, 235, and 250°C using a custom designed and fabricated sample isolation stage. Data were refined to quantify phase fractions using a newly proposed structure for the ε-UO3 polymorph. The kinetics data were modeled using a shrinking core approach. A proposed two-step reaction process is presented based on the developed models.
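For orientation, a minimal Python sketch of fitting a surface-reaction-controlled shrinking-core expression, 1 - (1 - X)^(1/3) = k*t, to conversion data; the time series and values below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def conversion(t, k):
    """Shrinking core, surface reaction control: 1 - (1 - X)^(1/3) = k*t, so X = 1 - (1 - k*t)^3."""
    return 1.0 - (1.0 - np.clip(k * t, 0.0, 1.0)) ** 3

# Synthetic conversion-vs-time data (time in minutes)
t_obs = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0])
x_obs = np.array([0.00, 0.12, 0.23, 0.42, 0.57, 0.74])

k_fit, _ = curve_fit(conversion, t_obs, x_obs, p0=[0.01])
print(f"fitted rate constant k = {k_fit[0]:.4f} 1/min")
```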
Deaf Learners' Knowledge of English Universal Quantifiers
ERIC Educational Resources Information Center
Berent, Gerald P.; Kelly, Ronald R.; Porter, Jeffrey E.; Fonzi, Judith
2008-01-01
Deaf and hearing students' knowledge of English sentences containing universal quantifiers was compared through their performance on a 50-item, multiple-picture task that required students to decide whether each of five pictures represented a possible meaning of a target sentence. The task assessed fundamental knowledge of quantifier sentences,…
Ultrasound image filtering using the multiplicative model
NASA Astrophysics Data System (ADS)
Navarrete, Hugo; Frery, Alejandro C.; Sanchez, Fermin; Anto, Joan
2002-04-01
Ultrasound images, as a special case of coherent images, are normally corrupted with multiplicative noise, i.e., speckle noise. Speckle noise reduction is a difficult task due to its multiplicative nature, but good statistical models of speckle formation are useful for designing adaptive speckle reduction filters. In this article a new statistical model, emerging from the Multiplicative Model framework, is presented and compared to previous models (Rayleigh, Rice and K laws). It is shown that the proposed model gives the best performance when modeling the statistics of ultrasound images. Finally, the parameters of the model can be used to quantify the extent of speckle formation; this quantification is applied to adaptive speckle reduction filter design. The effectiveness of the filter is demonstrated on typical in-vivo log-compressed B-scan images obtained by a clinical ultrasound system.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency in detecting contaminants and providing early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
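As a loose illustration of combining global sampling with local gradient refinement (in the spirit of the Particle Swarm/Levenberg-Marquardt coupling described above, though not the MADS implementation itself), the Python sketch below draws Latin Hypercube starting points and refines each with Levenberg-Marquardt least squares; the exponential-decay forward model and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import least_squares

# Hypothetical forward model: first-order decay of a contaminant concentration
def forward(params, t):
    c0, lam = params
    return c0 * np.exp(-lam * t)

t = np.linspace(0.0, 10.0, 25)
obs = forward([5.0, 0.3], t) + np.random.default_rng(0).normal(0.0, 0.1, t.size)

# Latin Hypercube sample of starting points spanning assumed parameter ranges
sampler = qmc.LatinHypercube(d=2, seed=1)
starts = qmc.scale(sampler.random(n=20), l_bounds=[0.1, 0.01], u_bounds=[10.0, 1.0])

# Levenberg-Marquardt refinement from each start; keep the lowest-cost calibration
best = min((least_squares(lambda p: forward(p, t) - obs, x0=s, method="lm") for s in starts),
           key=lambda r: r.cost)
print("calibrated parameters:", np.round(best.x, 3))
```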
Zanetti, Noelia I; Ferrero, Adriana A; Centeno, Néstor D
2016-12-01
The aims of this study were to detect and quantify fluoxetine, an antidepressant, from entomological samples. Larvae, pupae and adults of Dermestes maculatus (Coleoptera, Dermestidae) were reared on pig muscle previously treated with fluoxetine. The concentration selected, 2,000 mg/kg, emulates a fluoxetine overdose lethal to humans and laboratory animals. Thirty larvae in the fourth and fifth stages, 50 adults and several exuviae were analyzed for fluoxetine content. Detection of fluoxetine was performed by UV spectrophotometry at 270 and 277 nm. All developmental stages of D. maculatus and exuviae were positive for fluoxetine. We also quantified the drug; no significant differences were found either between the days or the stages in the general model, but at 277 nm a tendency of the concentration to decrease with time was observed. Concentrations of fluoxetine at 277 nm were almost equal to or greater than those at 270 nm. This is the first study to detect and quantify fluoxetine from entomological samples and, in particular, from D. maculatus beetles.
NASA Astrophysics Data System (ADS)
Giancardo, Luca; Ellmore, Timothy M.; Suescun, Jessika; Ocasio, Laura; Kamali, Arash; Riascos-Castaneda, Roy; Schiess, Mya C.
2018-02-01
Methods to identify neuroplasticity patterns in human brains are of the utmost importance in understanding and potentially treating neurodegenerative diseases. Parkinson disease (PD) research will greatly benefit and advance from the discovery of biomarkers to quantify brain changes in the early stages of the disease, a prodromal period when subjects show no obvious clinical symptoms. Diffusion tensor imaging (DTI) allows for an in-vivo estimation of the structural connectome inside the brain and may serve to quantify the degenerative process before the appearance of clinical symptoms. In this work, we introduce a novel strategy to compute longitudinal structural connectomes in the context of a whole-brain data-driven pipeline. In these initial tests, we show that our predictive models are able to distinguish controls from asymptomatic subjects at high risk of developing PD (REM sleep behavior disorder, RBD) with an area under the receiver operating characteristic curve of 0.90 (p<0.001), using a longitudinal dataset of 46 subjects from the Parkinson's Progression Markers Initiative. By analyzing the brain connections most relevant for the predictive ability of the best performing model, we find connections that are biologically relevant to the disease.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
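To make the idea of an a priori aggregation information-loss measure concrete, here is a minimal Python sketch of one plausible form: the area-weighted mean absolute deviation of a cell attribute from the mean of the HRU it is merged into. The function, attribute, and toy values are illustrative assumptions, not the metrics defined in the paper.

```python
import numpy as np

def hru_aggregation_error(areas, attribute, hru_ids):
    """Area-weighted mean absolute deviation of a cell attribute from its HRU mean,
    a simple stand-in for an aggregation-induced information-loss metric."""
    areas = np.asarray(areas, dtype=float)
    attribute = np.asarray(attribute, dtype=float)
    hru_ids = np.asarray(hru_ids)
    total = 0.0
    for hru in np.unique(hru_ids):
        mask = hru_ids == hru
        hru_mean = np.average(attribute[mask], weights=areas[mask])
        total += np.sum(areas[mask] * np.abs(attribute[mask] - hru_mean))
    return total / areas.sum()

# Toy example: six cells (areas in km^2, curve-number-like attribute) grouped into two HRUs
print(hru_aggregation_error(areas=[1, 1, 2, 1, 1, 2],
                            attribute=[60, 65, 70, 40, 45, 42],
                            hru_ids=[0, 0, 0, 1, 1, 1]))
```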
The comprehension and production of quantifiers in isiXhosa-speaking Grade 1 learners
Southwood, Frenette
2016-01-01
Background: Quantifiers form part of the discourse-internal linguistic devices that children need to access and produce narratives and other classroom discourse. Little is known about the development - especially the production - of quantifiers in child language, specifically in speakers of an African language. Objectives: The study aimed to ascertain how well Grade 1 isiXhosa first language (L1) learners perform at the beginning and at the end of Grade 1 on quantifier comprehension and production tasks. Method: Two low socioeconomic groups of L1 isiXhosa learners with either isiXhosa or English as language of learning and teaching (LOLT) were tested in February and November of their Grade 1 year with tasks targeting several quantifiers. Results: The isiXhosa LOLT group fully comprehended no/none, any and all by either February or November of Grade 1, and they produced all assessed quantifiers in February of Grade 1. For the English LOLT group, neither the comprehension nor the production of quantifiers was mastered by the end of Grade 1, although there was a significant increase in both their comprehension and production scores. Conclusion: The English LOLT group made significant progress in the comprehension and production of quantifiers, but still performed worse than peers who had their L1 as LOLT. Generally, children with no or very little prior knowledge of the LOLT need either (1) more deliberate exposure to quantifier-rich language or (2) longer exposure to general classroom language before quantifiers can be expected to be mastered sufficiently to allow access to quantifier-related curriculum content. PMID:27245132
Nonlinear dynamic model for magnetically-tunable Galfenol vibration absorbers
NASA Astrophysics Data System (ADS)
Scheidler, Justin J.; Dapino, Marcelo J.
2013-03-01
This paper presents a single degree of freedom model for the nonlinear vibration of a metal-matrix composite manufactured by ultrasonic additive manufacturing that contains seamlessly embedded magnetostrictive Galfenol alloys (FeGa). The model is valid under arbitrary stress and magnetic field. Changes in the composite's natural frequency are quantified to assess its performance as a semi-active vibration absorber. The effects of Galfenol volume fraction and location within the composite on natural frequency are quantified. The bandwidth over which the composite's natural frequency can be tuned with a bias magnetic field is studied for varying displacement excitation amplitudes. The natural frequency is tunable for all excitation amplitudes considered, but the maximum tunability occurs below an excitation amplitude threshold of 1 × 10⁻⁶ m for the composite geometry considered. Natural frequency shifts between 6% and 50% are found as the Galfenol volume fraction varies from 25% to 100% when Galfenol is located at the composite neutral axis. At a modest 25% Galfenol by volume, the model shows that up to 15% shifts in composite resonance are possible through magnetic bias field modulation if Galfenol is embedded away from the composite midplane. As the Galfenol volume fraction and distance between Galfenol and composite midplane are increased, linear and quadratic increases in tunability result, respectively.
NASA Technical Reports Server (NTRS)
Rhode, Matthew N.; Oberkampf, William L.
2012-01-01
A high-quality model validation experiment was performed in the NASA Langley Research Center Unitary Plan Wind Tunnel to assess the predictive accuracy of computational fluid dynamics (CFD) models for a blunt-body supersonic retro-propulsion configuration at Mach numbers from 2.4 to 4.6. Static and fluctuating surface pressure data were acquired on a 5-inch-diameter test article with a forebody composed of a spherically-blunted, 70-degree half-angle cone and a cylindrical aft body. One non-powered configuration with a smooth outer mold line was tested as well as three different powered, forward-firing nozzle configurations: a centerline nozzle, three nozzles equally spaced around the forebody, and a combination with all four nozzles. A key objective of the experiment was the determination of experimental uncertainties from a range of sources such as random measurement error, flowfield non-uniformity, and model/instrumentation asymmetries. This paper discusses the design of the experiment towards capturing these uncertainties for the baseline non-powered configuration, the methodology utilized in quantifying the various sources of uncertainty, and examples of the uncertainties applied to non-powered and powered experimental results. The analysis showed that flowfield non-uniformity was the dominant contributor to the overall uncertainty, a finding in agreement with other experiments that have quantified various sources of uncertainty.
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Burley, Casey L.; Guo, Yueping
2016-01-01
Aircraft system noise predictions have been performed for NASA-modeled hybrid wing body aircraft advanced concepts with 2025 entry-into-service technology assumptions. The system noise predictions evolved over the period from 2009 to 2016 as a result of improved modeling of the aircraft concepts, design changes, technology development, flight-path modeling, and the use of extensive integrated system-level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level. This uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.
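A minimal Python sketch of the direct Monte Carlo idea described above: assumed independent Gaussian uncertainties on hypothetical component noise levels are propagated through an energy sum in decibels, and a 95% band is read off the resulting distribution. The component names, levels, and sigmas are placeholders, not NASA's prediction process.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical component noise levels (dB) and assumed 1-sigma uncertainties
components = {"fan": 88.0, "jet": 84.5, "airframe": 82.0, "landing_gear": 79.5}
sigmas     = {"fan": 1.0,  "jet": 1.5,  "airframe": 0.8,  "landing_gear": 1.2}

def decibel_sum(levels):
    """Energy-sum of component levels expressed in dB."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(levels) / 10.0)))

n = 10_000
totals = np.empty(n)
for i in range(n):
    draw = [rng.normal(components[k], sigmas[k]) for k in components]
    totals[i] = decibel_sum(draw)

lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"95% uncertainty band on the total level: {hi - lo:.1f} dB")
```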
On the Way to Appropriate Model Complexity
NASA Astrophysics Data System (ADS)
Höge, M.
2016-12-01
When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition for model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? How can we use a quantified complexity to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as "degrees of freedom" of a model, e.g. the effective number of parameters; (2) model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher its complexity (e.g. in bits); (3) complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of the non-static concept of model complexity, such as data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
Selecting Tasks for Evaluating Human Performance as a Function of Gravity
NASA Technical Reports Server (NTRS)
Norcross, Jason R.; Gernhardt, Michael L.
2011-01-01
A challenge in understanding human performance as a function of gravity is determining which tasks to research. Initial studies began with treadmill walking, which was easy to quantify and control. However, with the development of pressurized rovers, it is less important to optimize human performance for ambulation, as pressurized rovers will likely perform gross translation for the crew. Future crews are likely to spend much of their extravehicular activity (EVA) performing geology, construction, and maintenance-type tasks. With these types of tasks, people have different performance strategies, and it is often difficult to quantify the task and measure steady-state metabolic rates or perform biomechanical analysis. For many of these types of tasks, subjective feedback may be the only data that can be collected. However, subjective data may not fully support a rigorous scientific comparison of human performance across different gravity levels and suit factors. NASA would benefit from having a wide variety of quantifiable tasks that allow human performance comparison across different conditions. In order to determine which tasks will effectively support scientific studies, many different tasks and data analysis techniques will need to be employed. Many of these tasks and techniques will not be effective, but some will produce quantifiable results that are sensitive enough to show performance differences. One of the primary concerns related to EVA performance is metabolic rate. The higher the metabolic rate, the faster the astronaut will exhaust consumables. The focus of this poster will be on how different tasks affect metabolic rate across different gravity levels.
Spectral Induced Polarization approaches to characterize reactive transport parameters and processes
NASA Astrophysics Data System (ADS)
Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.
2017-12-01
For almost a decade, geophysical methods have been explored for their potential to characterize reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double layer properties of geological materials and the critical role of the electrical double layer in reactive transport processes such as adsorption. In this presentation, we discuss results from several recent studies that have been performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For well-sorted samples, we find that grain size (as well as permeability, for some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including different dissolved ions in the pore fluid (Na+, Cu2+, Zn2+, Pb2+) owing to their different adsorption behavior. We also show the relevance of our approach for characterizing the fluid/matrix interaction for various organic contents (wetting and non-wetting oils). Finally, we discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to using SIP information to constrain mechanistic flow and transport models, and the potential to apply some of the approaches at the field scale.
NASA Astrophysics Data System (ADS)
Kim, Y.; Suk, H.
2011-12-01
In this study, about 2,000 deep observation wells, stream/river distribution, and river density were analyzed to identify regional groundwater flow trends, based on the regional groundwater survey of four major river watersheds in Korea: the Geum, Han, Youngsan-Seomjin, and Nakdong rivers. Hydrogeological data were collected to analyze regional groundwater flow characteristics according to geological units. Additionally, hydrological soil type data were collected to estimate direct runoff through the SCS-CN method. Temperature and precipitation data were used to quantify infiltration rate; they were also used to quantify evaporation by the Thornthwaite method and to evaluate groundwater recharge. Understanding regional groundwater characteristics requires a database of groundwater flow parameters, but most hydrogeological data include only limited information such as groundwater level and well configuration. In this study, therefore, groundwater flow parameters such as hydraulic conductivities or transmissivities were estimated from observed groundwater levels by an inverse model, namely PEST (Non-linear Parameter ESTimation). Since groundwater modeling studies have uncertainties in data collection, conceptualization, and model results, model calibration should be performed. Calibration may be performed manually by changing parameters step by step, or parameters may be changed simultaneously by an automatic procedure using the PEST program. In this study, both manual and automatic procedures were employed to calibrate and estimate hydraulic parameter distributions. In summary, regional groundwater survey data obtained from four major river watersheds and various data on hydrology, meteorology, geology, soil, and topography in Korea were used to estimate hydraulic conductivities using the PEST program. In order to estimate hydraulic conductivity effectively, it is important that areas of the same or similar hydrogeological characteristics be grouped into zones. Keywords: regional groundwater, database, hydraulic conductivity, PEST, Korean peninsula. Acknowledgements: This work was supported by the Radioactive Waste Management program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), funded by the Korea government Ministry of Knowledge Economy (2011T100200152).
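For reference, a minimal Python sketch of the SCS-CN direct-runoff relation mentioned above (metric form); the storm depth and curve number in the example are arbitrary illustration values.

```python
def scs_cn_runoff(p_mm, cn):
    """SCS Curve Number direct runoff depth (mm):
    S = 25400/CN - 254, Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: direct runoff for an 80 mm storm on a soil/landcover with curve number 75
print(f"{scs_cn_runoff(80.0, 75):.1f} mm of direct runoff")
```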
Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data
Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.
2018-03-28
Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models was considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
Skill assessment of Korea operational oceanographic system (KOOS)
NASA Astrophysics Data System (ADS)
Kim, J.; Park, K.
2016-02-01
For ocean forecasting in Korea, the Korea operational oceanographic system (KOOS) has been developed and pre-operated since 2009 by the Korea institute of ocean science and technology (KIOST), funded by the Korean government. KOOS provides real-time information and forecasts of marine environmental conditions in order to support all kinds of activities at sea. A further purpose of the KOOS information is to respond to and support maritime problems and accidents such as oil spills, red tides, shipwrecks, extraordinary waves, coastal inundation, and so on. Accordingly, it is essential to evaluate prediction accuracy and make efforts to improve it. The forecast accuracy should meet or exceed target benchmarks before its products are approved for release to the public. In this paper, we quantify forecast errors using skill assessment techniques in order to judge KOOS performance. Skill assessment statistics include measures of errors and correlations such as root-mean-square error (RMSE), mean bias (MB), correlation coefficient (R), and index of agreement (IOA), as well as the frequency with which errors lie within specified limits, termed the central frequency (CF). KOOS provides 72-hour daily forecast data such as air pressure, wind, water elevation, currents, waves, water temperature, and salinity produced by the meteorological and hydrodynamic numerical models WRF, ROMS, MOM5, WAM, WW3, and MOHID. The skill assessment was performed by comparing model results with in-situ observation data (Figure 1) for the period from 1 July 2010 to 31 March 2015 (Table 1), and model errors were quantified with skill scores and CF determined by acceptance criteria depending on the predicted variables (Table 2). Moreover, we conducted a quantitative evaluation of spatio-temporal pattern correlation between the numerical models and observation data such as sea surface temperature (SST) and sea surface currents obtained from satellite ocean sensors and high-frequency (HF) radar, respectively. These quantified errors allow an objective assessment of KOOS performance and can reveal different aspects of model deficiency. Based on these results, various model components are being tested and developed in order to improve forecast accuracy.
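For concreteness, a small Python sketch of the skill statistics listed above (RMSE, mean bias, correlation, Willmott's index of agreement, and central frequency); the series and the acceptance limit in the example are made up, not KOOS data.

```python
import numpy as np

def skill_scores(model, obs, tolerance):
    """Compute RMSE, mean bias (MB), Pearson correlation (R),
    Willmott's index of agreement (IOA), and central frequency (CF)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mean_bias = np.mean(err)
    r = np.corrcoef(model, obs)[0, 1]
    denom = np.sum((np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ioa = 1.0 - np.sum(err ** 2) / denom
    cf = np.mean(np.abs(err) <= tolerance)   # fraction of errors within the acceptable limit
    return {"RMSE": rmse, "MB": mean_bias, "R": r, "IOA": ioa, "CF": cf}

# Toy check with hypothetical water-level series (m) and a 0.15 m acceptance limit
print(skill_scores([1.0, 1.2, 0.9, 1.4], [1.1, 1.15, 1.0, 1.3], tolerance=0.15))
```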
Human casualties in earthquakes: Modelling and mitigation
Spence, R.J.S.; So, E.K.M.
2011-01-01
Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.
NASA Astrophysics Data System (ADS)
Fugett, James H.; Bennett, Haydon E.; Shrout, Joshua L.; Coad, James E.
2017-02-01
Expansions in minimally invasive medical devices and technologies with thermal mechanisms of action are continuing to advance the practice of medicine. These expansions have led to an increasing need for appropriate animal models to validate and quantify device performance. The planning of these studies should take into consideration a variety of parameters, including the appropriate animal model (test system - ex vivo or in vivo; species; tissue type), treatment conditions (test conditions), predicate device selection (as appropriate, control article), study timing (Day 0 acute to more than Day 90 chronic survival studies), and methods of tissue analysis (tissue dissection - staining methods). These considerations are discussed and illustrated using the fresh extirpated porcine longissimus muscle model for endometrial ablation.
This presentation, Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty, was given at the STAR Black Carbon 2016 Webinar Series: Changing Chemistry over Time held on Oct. 31, 2016.
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory.
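The two information-theoretic quantities named above are easy to illustrate in isolation. The Python sketch below computes the Shannon entropy of a methylation-level distribution and the Jensen-Shannon distance between a test and a reference distribution; the toy probability vectors are invented for illustration and do not reproduce the paper's Ising-model machinery.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical distributions over the number of methylated sites (0..3) in one region
test_sample      = np.array([0.10, 0.15, 0.25, 0.50])
reference_sample = np.array([0.40, 0.30, 0.20, 0.10])

print("entropy (test)        :", round(shannon_entropy(test_sample), 3))
print("Jensen-Shannon distance:", round(jensenshannon(test_sample, reference_sample, base=2), 3))
```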
Nguyen, Huu-Tho; Dawal, Siti Zawiah Md; Nukman, Yusoff; Rifai, Achmad P; Aoyama, Hideki
2016-01-01
The conveyor system plays a vital role in improving the performance of flexible manufacturing cells (FMCs). The conveyor selection problem involves the evaluation of a set of potential alternatives based on qualitative and quantitative criteria. This paper presents an integrated multi-criteria decision making (MCDM) model of a fuzzy AHP (analytic hierarchy process) and fuzzy ARAS (additive ratio assessment) for conveyor evaluation and selection. In this model, linguistic terms represented as triangular fuzzy numbers are used to quantify experts' uncertain assessments of alternatives with respect to the criteria. The fuzzy set is then integrated into the AHP to determine the weights of the criteria. Finally, a fuzzy ARAS is used to calculate the weights of the alternatives. To demonstrate the effectiveness of the proposed model, a case study of a practical example is performed, and the results obtained demonstrate practical potential for the implementation of FMCs.
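As a rough sketch of how triangular fuzzy judgments can be turned into criteria weights (one common simplification using a fuzzy geometric mean and centroid defuzzification, not necessarily the exact procedure of the paper), consider the following Python example with hypothetical pairwise comparisons of three criteria.

```python
import numpy as np

# Hypothetical triangular fuzzy pairwise judgments (l, m, u); row i compares criterion i
# against criteria 1..3 on a Saaty-like scale.
tfn = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])

# Component-wise fuzzy geometric mean of each row, then centroid defuzzification
geo_mean = np.prod(tfn, axis=1) ** (1.0 / tfn.shape[1])   # one (l, m, u) triple per criterion
crisp = geo_mean.mean(axis=1)                              # centroid of each triangular number
weights = crisp / crisp.sum()                              # normalized criteria weights
print("criteria weights:", np.round(weights, 3))
```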
Performance Evaluation of the Gravity Probe B Design
NASA Technical Reports Server (NTRS)
Francis, Ronnie; Wells, Eugene M.
1996-01-01
This report documents the simulation of the Lockheed Martin-designed Gravity Probe B (GPB) spacecraft, developed by bd Systems Inc. using the TREETOPS simulation tool. This study quantifies the effects of flexibility and liquid helium slosh on GPB spacecraft control performance. The TREETOPS simulation tool permits the simulation of flexible structures given that a flexible body model of the structure is available. For purposes of this study, a flexible model of the GPB spacecraft was obtained from Lockheed Martin. To model the liquid helium slosh effects, computational fluid dynamics (CFD) results were obtained and used to develop a dynamic model of the slosh effects. The flexible body and slosh effects were incorporated separately into the TREETOPS simulation, which places the vehicle in a 650 km circular polar orbit and subjects the spacecraft to realistic environmental disturbances and sensor error quantities. In all of the analysis conducted in this study the spacecraft is pointed at an inertially fixed guide star (GS) and is rotating at a constant rate about this line of sight.
The evaluation system of city's smart growth success rates
NASA Astrophysics Data System (ADS)
Huang, Yifan
2018-04-01
"Smart growth" is to pursue the best integrated perform+-ance of the Economically prosperous, socially Equitable, and Environmentally Sustainable(3E). Firstly, we establish the smart growth evaluation system(SGI) and the sustainable development evaluation system(SDI). Based on the ten principles and the definition of three E's of sustainability. B y using the Z-score method and the principal component analysis method, we evaluate and quantify indexes synthetically. Then we define the success of smart growth as the ratio of the SDI to the SGI composite score growth rate (SSG). After that we select two cities — Canberra and Durres as the objects of our model in view of the model. Based on the development plans and key data of these two cities, we can figure out the success of smart growth. And according to our model, we adjust some of the growth indicators for both cities. Then observe the results before and after adjustment, and finally verify the accuracy of the model.
Novel indexes based on network structure to indicate financial market
NASA Astrophysics Data System (ADS)
Zhong, Tao; Peng, Qinke; Wang, Xiao; Zhang, Jing
2016-02-01
Complex network models have yielded various insights into understanding and analyzing the financial market. However, current studies analyze financial network models but seldom present quantified indexes that indicate or forecast market price action. In this paper, the stock market is modeled as a dynamic network, in which the vertices refer to listed companies and edges refer to their rank-based correlation computed from price series. Characteristics of the network are analyzed and then novel indexes are introduced into market analysis, calculated from the maximum and fully-connected subnets. The indexes are compared with existing ones and the results confirm that our indexes perform better at indicating the daily trend of the market composite index in advance. Via investment simulation, the performance of our indexes is analyzed in detail. The results indicate that the dynamic complex network model can not only serve as a structural description of the financial market, but also predict the market and guide investment through the indexes.
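As an illustration of the rank-correlation network construction described above (not the paper's specific index definitions), the Python sketch below builds an adjacency matrix from the Spearman correlation of simulated return series and reports a simple per-node structural quantity; the threshold and data are arbitrary.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
prices = rng.normal(size=(250, 5)).cumsum(axis=0)        # hypothetical log-price paths for 5 stocks
returns = np.diff(prices, axis=0)                        # daily returns

# Rank-based (Spearman) correlation matrix between stocks, thresholded into network edges
rho, _ = spearmanr(returns)
adjacency = (np.abs(rho) > 0.3) & ~np.eye(5, dtype=bool)

degree = adjacency.sum(axis=1)                           # a simple structural index per stock
print("node degrees:", degree)
```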
Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea
2016-01-01
This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results make it possible to quantify the part of the disturbances corrected by retinal imaging systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking into account PT measurements in real time improves the performance of AO retinal imaging systems. PMID:27231607
DARPA/AFRL/NASA Smart Wing second wind tunnel test results
NASA Astrophysics Data System (ADS)
Scherer, Lewis B.; Martin, Christopher A.; West, Mark N.; Florance, Jennifer P.; Wieseman, Carol D.; Burner, Alpheus W.; Fleming, Gary A.
1999-07-01
To quantify the benefits of smart materials and structures adaptive wing technology, Northrop Grumman Corp. built and tested two 16 percent scale wind tunnel models of a fighter/attack aircraft under the DARPA/AFRL/NASA Smart Materials and Structures Development - Smart Wing Phase 1. Performance gains quantified included increased pitching moment, increased rolling moment and improved pressure distribution. The benefits were obtained for hingeless, contoured trailing edge control surfaces with embedded shape memory alloy (SMA) wires and spanwise wing twist effected by an SMA torque tube mechanism, compared to conventional hinged control surfaces. This paper presents an overview of the results from the second wind tunnel test performed at the NASA Langley Research Center's 16-ft Transonic Dynamics Tunnel in June 1998. Successful results obtained were: 1) 5 degrees of spanwise twist and an 8-12 percent increase in rolling moment utilizing a single SMA torque tube, 2) 12 degrees of deflection and a 10 percent increase in rolling moment due to the hingeless, contoured aileron, and 3) demonstration of optical techniques for measuring spanwise twist and deflected shape.
Kuss, S.; Tanner, E. E. L.; Ordovas-Montanes, M.
2017-01-01
The colorimetric identification of pathogenic and non-pathogenic bacteria in cell culture is commonly performed using the redox mediator N,N,N′,N′-tetramethyl-para-phenylene-diamine (TMPD) in the so-called oxidase test, which indicates the presence of bacterial cytochrome c oxidases. The presented study demonstrates the ability of electrochemistry to employ TMPD to detect bacteria and quantify the activity of bacterial cytochrome c oxidases. Cyclic voltammetry studies and chronoamperometry measurements performed on the model organism Bacillus subtilis result in a turnover number, calculated for single bacteria. Furthermore, trace amounts of cytochrome c oxidases were revealed in aerobically cultured Escherichia coli, which to our knowledge no other technique is currently able to quantify in molecular biology. The reported technique could be applied to a variety of pathogenic bacteria and has the potential to be employed in future biosensing technology. PMID:29568431
To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...
NASA Astrophysics Data System (ADS)
Papanikolaou, T. D.; Papadopoulos, N.
2015-06-01
The present study aims at the validation of global gravity field models through numerical investigation of gravity field functionals based on spherical harmonic synthesis of the geopotential models and the analysis of terrestrial data. We examine gravity models produced according to the latest approaches for gravity field recovery based on the principles of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) and Gravity Recovery And Climate Experiment (GRACE) satellite missions. Furthermore, we evaluate the overall spectrum of the ultra-high degree combined gravity models EGM2008 and EIGEN-6C3stat. The terrestrial data consist of gravity and collocated GPS/levelling data in the overall Hellenic region. The software presented here implements the algorithm of spherical harmonic synthesis in a degree-wise cumulative sense. This approach can quantify the band-limited performance of the individual models by monitoring the degree-wise computed functionals against the terrestrial data. The degree-wise analysis performed yields insight into the short wavelengths of the Earth's gravity field as expressed by the high-degree harmonics.
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
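A minimal Python sketch of the prediction-and-evaluation step described above, using an SVM regressor and reporting RMSE and Spearman correlation; the display-parameter ranges and the synthetic "quality" target are invented stand-ins for the subjective ratings.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical parameters: [max luminance, min luminance, gamut area, bit depth, local contrast]
X = rng.uniform([400, 0.001, 0.6, 8, 500], [4000, 0.05, 1.0, 12, 5000], size=(200, 5))
# Synthetic "subjective quality" driven mostly by log dynamic range (placeholder for real ratings)
y = np.log10(X[:, 0] / X[:, 1]) + 0.5 * X[:, 2] + rng.normal(0, 0.2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"RMSE = {rmse:.3f}, Spearman = {spearmanr(pred, y_te).correlation:.3f}")
```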
Effects of high-latitude drivers on Ionosphere/Thermosphere parameters
NASA Astrophysics Data System (ADS)
Shim, J.; Kuznetsova, M. M.; Rastaetter, L.; Berrios, D.; Codrescu, M.; Emery, B. A.; Fedrizzi, M.; Foerster, M.; Foster, B. T.; Fuller-Rowell, T. J.; Mannucci, A.; Negrea, C.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Coster, A. J.; Goncharenko, L.; Lomidze, L.; Scherliess, L.
2012-12-01
In order to study the effects of high-latitude drivers, we compared Ionosphere/Thermosphere (IT) model performance in predicting IT parameters obtained using different models for the high-latitude ionospheric electric potential, including Weimer 2005, AMIE (assimilative mapping of ionospheric electrodynamics) and global magnetosphere models (e.g. the Space Weather Modeling Framework). For this study, the physical parameters selected are Total Electron Content (TEC) obtained from GPS ground stations, and NmF2 and hmF2 from COSMIC LEO satellites in eight selected 5-degree longitude sectors. In addition, Ne, Te, Ti, and Tn at about 300 km height from ISRs are considered. We compared the modeled values with the observations for the 2006 AGU storm period and quantified the performance of the models using skill scores. Furthermore, the skill scores are obtained for three latitude regions (low, middle and high latitudes) in order to investigate the latitudinal dependence of the models' performance. This study is supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. The CCMC converted ionosphere drivers from a variety of sources and developed an interpolation tool that can be employed by any modelers for easy driver swapping. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) as a resource for the space science communities to use.
Modem transmission of data for 3D fracture modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhary, S.A.; Rodgerson, J.L.; Martinez, A.D.
1996-06-01
Hydraulic fracturing treatments require measurement of numerous parameters, including surface rates and pressures, to quantify fluids, proppant, and additives. Computers are used to acquire data for the purpose of calculating bottomhole pressure (BHP), compiling quality-control data, generating diagnostic plots, and, often, for modeling fracture geometry in real time. In the recent past, modems have been routinely used in conjunction with cellular phone systems to transmit field-monitored data to a remote office. More recently, these data have been used at the remote site to perform 3D fracture modeling for design verification and adjustment. This paper describes data-transmission technology and discusses the related cost and reliability.
The Logic of Collective Rating
NASA Astrophysics Data System (ADS)
Nax, Heinrich
2016-05-01
The introduction of participatory rating mechanisms on online sales platforms has had substantial impact on firms' sales and profits. In this note, we develop a dynamic model of consumer influences on ratings and of rating influences on consumers, focusing on the standard 5-star mechanisms implemented by many platforms. The key components of our social influence model are consumer trust in the "wisdom of crowds" during the purchase phase and indirect reciprocity during the rating decision. Our model provides an overarching explanation for well-corroborated empirical regularities. We quantify the performance of the voluntary rating mechanism in terms of realized consumer surplus relative to the no-mechanism and full-information benchmarks, and identify how it could be improved.
NASA Astrophysics Data System (ADS)
Yang, T.; Lee, C.
2017-12-01
The biases in Global Circulation Models (GCMs) are crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while incorporating non-stationarity into the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model consistently performs better at reducing biases than the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified; the single-model importance varies between 3.1% and 7.2%. For the future scenarios RCP 2.6, RCP 4.5, and RCP 8.5, the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970-1999.
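As a loose sketch of the residual-learning idea (not the authors' exact RBT configuration), the Python example below trains a bagged regression tree on the residual between synthetic observations and a synthetic multi-model ensemble mean, then adds the predicted residual back as the correction; all data are fabricated for illustration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 500
obs = 15 + 10 * np.sin(np.linspace(0, 6, n)) + rng.normal(0, 1, n)          # synthetic observed temperature
gcms = np.column_stack([obs + rng.normal(b, 2, n) for b in (1.5, -2.0, 3.0)])  # three biased model runs

# Predictors: individual model outputs; target: residual of the ensemble mean against observations
ensemble_mean = gcms.mean(axis=1)
residual = obs - ensemble_mean

bag = BaggingRegressor(DecisionTreeRegressor(max_depth=4), n_estimators=200, random_state=0)
bag.fit(gcms, residual)
corrected = ensemble_mean + bag.predict(gcms)

print("raw ensemble RMSE    :", round(np.sqrt(np.mean((ensemble_mean - obs) ** 2)), 2))
print("residual-corrected RMSE:", round(np.sqrt(np.mean((corrected - obs) ** 2)), 2))
```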
Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Abdol-Hamid, Khaled S.
2005-01-01
Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.
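As a pointer to how the two resolution parameters enter the closure, a minimal sketch of the commonly cited PANS modification of the dissipation-destruction coefficient is given below; the standard k-epsilon constants are used for illustration, and the exact coefficients in the paper's implementation may differ.

```python
def pans_ce2_star(f_k, f_eps, c_e1=1.44, c_e2=1.92):
    """Modified destruction coefficient in the unresolved-dissipation equation:
    Ce2* = Ce1 + (f_k / f_eps) * (Ce2 - Ce1).
    With f_k = f_eps = 1 the RANS value is recovered; lowering f_k reduces the
    unresolved eddy viscosity and pushes the closure toward a resolved limit."""
    return c_e1 + (f_k / f_eps) * (c_e2 - c_e1)

print(pans_ce2_star(1.0, 1.0))   # 1.92, the standard RANS value
print(pans_ce2_star(0.4, 1.0))   # smaller destruction coefficient at higher resolution
```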
Limitations of bootstrap current models
Belli, Emily A.; Candy, Jefferey M.; Meneghini, Orso; ...
2014-03-27
We assess the accuracy and limitations of two analytic models of the tokamak bootstrap current: (1) the well-known Sauter model and (2) a recent modification of the Sauter model by Koh et al. For this study, we use simulations from the first-principles kinetic code NEO as the baseline to which the models are compared. Tests are performed using both theoretical parameter scans as well as core-to-edge scans of real DIII-D and NSTX plasma profiles. The effects of extreme aspect ratio, large impurity fraction, energetic particles, and high collisionality are studied. In particular, the error in neglecting cross-species collisional coupling – an approximation inherent to both analytic models – is quantified. Moreover, the implications of the corrections from kinetic NEO simulations for MHD equilibrium reconstructions are studied via integrated modeling with kinetic EFIT.
Using multilevel models to quantify heterogeneity in resource selection
Wagner, Tyler; Diefenbach, Duane R.; Christensen, Sonja; Norton, Andrew S.
2011-01-01
Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or to quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provide an objective, quantifiable approach to assessing differences or changes in resource selection.
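A generic illustration (not the authors' deer analysis) of a random-slopes multilevel structure of this kind, in which each individual's distance-to-road coefficient varies and is modeled against home-range road density; the data, variable names, and the linear (rather than logistic) response below are all invented for the sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Invented data: relative use intensity for 30 animals at sampled locations.
rows = []
for deer in range(30):
    road_density = rng.uniform(0.2, 2.0)                    # km road per km^2 in home range
    slope = 0.8 - 0.3 * road_density + rng.normal(0, 0.1)   # avoidance weakens with density
    for _ in range(50):
        dist = rng.uniform(0, 3.0)                          # km to nearest road
        use = 1.0 + slope * dist + rng.normal(0, 0.3)
        rows.append((deer, road_density, dist, use))
df = pd.DataFrame(rows, columns=["deer_id", "road_density", "dist_road", "use"])

# Random intercept and slope per individual; the cross-level interaction tests
# whether the distance-to-road effect changes with home-range road density.
# (Linear response used for simplicity; RSF analyses typically use a logistic
# used/available design instead.)
model = smf.mixedlm("use ~ dist_road * road_density", df,
                    groups=df["deer_id"], re_formula="~dist_road")
print(model.fit().summary())
```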
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
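As a simplified stand-in for the MARS/FAST analyses mentioned above, the sketch below screens input importance with rank correlations on a toy upward-transport response; the response surface and parameter ranges are invented and bear no relation to the actual RWMS model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Toy stand-in for the transport model: surface flux = f(burrow depth, uptake, half-life).
n = 5000
burrow = rng.uniform(0.1, 1.5, n)       # m, characteristic burrow depth
uptake = rng.lognormal(-2.0, 0.5, n)    # unitless plant transfer factor
halflife = rng.uniform(5.0, 30.0, n)    # yr
response = uptake * np.exp(-burrow) * np.exp(-np.log(2) * 20.0 / halflife)

# Rank-based global sensitivity: a crude, monotonicity-friendly screen that
# stands in for the MARS/FAST machinery described in the abstract.
for name, x in [("burrow", burrow), ("uptake", uptake), ("halflife", halflife)]:
    rho, _ = spearmanr(x, response)
    print(f"{name:9s} Spearman rho = {rho:+.2f}")
```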
Calibration of the k-ε model constants for use in CFD applications
NASA Astrophysics Data System (ADS)
Glover, Nina; Guillias, Serge; Malki-Epshtein, Liora
2011-11-01
The k-ε turbulence model is a popular choice in CFD modelling due to its robust nature and the fact that it has been well validated. However, it has been noted in previous research that the k-ε model has problems predicting flow separation as well as unconfined and transient flows. The model contains five empirical constants whose values were found through data fitting for a wide range of flows (Launder 1972), but ad-hoc adjustments are often made to these values depending on the situation being modeled. Here we use the example of flow within a regular street canyon to perform a Bayesian calibration of the model constants against wind tunnel data. This allows us to assess the sensitivity of the CFD model to changes in these constants, find the most suitable values for the constants, and quantify the uncertainty related to the constants and to the CFD model as a whole.
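A compressed sketch of what such a Bayesian calibration can look like, using a random-walk Metropolis sampler over two of the constants and a cheap stand-in for the CFD model; the surrogate, priors, observations, and noise level below are all invented, and the authors' actual calibration machinery is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "wind tunnel" observations of a normalized canyon velocity at four points.
obs = np.array([0.42, 0.55, 0.61, 0.48])
sigma_obs = 0.03

def cfd_surrogate(c_mu, c_e2):
    """Cheap stand-in for a CFD run: maps two k-eps constants to the observed
    quantities. A real calibration would wrap the solver or an emulator of it."""
    base = np.array([0.40, 0.52, 0.60, 0.50])
    return base + 0.8 * (c_mu - 0.09) + 0.05 * (c_e2 - 1.92)

def log_post(theta):
    c_mu, c_e2 = theta
    if not (0.05 < c_mu < 0.15 and 1.6 < c_e2 < 2.2):   # flat priors on a box
        return -np.inf
    resid = obs - cfd_surrogate(c_mu, c_e2)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis over the two constants.
theta, lp, chain = np.array([0.09, 1.92]), None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.005, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])
print("posterior mean (C_mu, C_eps2):", chain.mean(axis=0))
print("posterior std:                ", chain.std(axis=0))
```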
Towards national-scale greenhouse gas emissions evaluation with robust uncertainty estimates
NASA Astrophysics Data System (ADS)
Rigby, Matthew; Swallow, Ben; Lunt, Mark; Manning, Alistair; Ganesan, Anita; Stavert, Ann; Stanley, Kieran; O'Doherty, Simon
2016-04-01
Through the Deriving Emissions related to Climate Change (DECC) network and the Greenhouse gAs Uk and Global Emissions (GAUGE) programme, the UK's greenhouse gases are now monitored by instruments mounted on telecommunications towers and churches, on a ferry that performs regular transects of the North Sea, on-board a research aircraft and from space. When combined with information from high-resolution chemical transport models such as the Met Office Numerical Atmospheric dispersion Modelling Environment (NAME), these measurements are allowing us to evaluate emissions more accurately than has previously been possible. However, it has long been appreciated that current methods for quantifying fluxes using atmospheric data suffer from uncertainties, primarily relating to the chemical transport model, that have been largely ignored to date. Here, we use novel model reduction techniques for quantifying the influence of a set of potential systematic model errors on the outcome of a national-scale inversion. This new technique has been incorporated into a hierarchical Bayesian framework, which can be shown to reduce the influence of subjective choices on the outcome of inverse modelling studies. Using estimates of the UK's methane emissions derived from DECC and GAUGE tall-tower measurements as a case study, we will show that such model systematic errors have the potential to significantly increase the uncertainty on national-scale emissions estimates. Therefore, we conclude that these factors must be incorporated in national emissions evaluation efforts, if they are to be credible.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
NASA Technical Reports Server (NTRS)
Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2004-01-01
This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g., whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.
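A minimal sketch of the underlying idea, propagating assumed input uncertainties through a simple power relation by Monte Carlo sampling; the distributions and array parameters below are hypothetical and are not the ISS values or the probabilistic method actually used.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical random variables affecting array output (not actual ISS values).
irradiance = rng.normal(1361.0, 15.0, n)                      # W/m^2
pointing_loss = np.cos(np.radians(rng.normal(0.0, 2.0, n)))   # tracking error
efficiency = rng.normal(0.14, 0.005, n)                       # cell efficiency spread
degradation = rng.uniform(0.90, 1.00, n)                      # array state of health
area = 300.0                                                  # m^2, fixed for the sketch

power = irradiance * area * efficiency * degradation * np.clip(pointing_loss, 0, 1) / 1e3  # kW
print(f"mean = {power.mean():.1f} kW, 5th-95th percentile = "
      f"{np.percentile(power, 5):.1f}-{np.percentile(power, 95):.1f} kW")
```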
Calovi, Daniel S.; Litchinko, Alexandra; Lopez, Ugo; Chaté, Hugues; Sire, Clément
2018-01-01
The development of tracking methods for automatically quantifying individual behavior and social interactions in animal groups has opened up new perspectives for building quantitative and predictive models of collective behavior. In this work, we combine extensive data analyses with a modeling approach to measure, disentangle, and reconstruct the actual functional form of interactions involved in the coordination of swimming in the rummy-nose tetra (Hemigrammus rhodostomus). This species of fish performs burst-and-coast swimming behavior that consists of sudden heading changes combined with brief accelerations followed by quasi-passive, straight decelerations. We quantify the spontaneous stochastic behavior of a fish and the interactions that govern wall avoidance and the reaction to a neighboring fish, the latter by exploiting general symmetry constraints for the interactions. In contrast with previous experimental works, we find that both attraction and alignment behaviors control the reaction of fish to a neighbor. We then exploit these results to build a model of spontaneous burst-and-coast swimming and interactions of fish, with all parameters being estimated or directly measured from experiments. This model quantitatively reproduces the key features of the motion and spatial distributions observed in experiments with a single fish and with two fish. This demonstrates the power of our method, which exploits large amounts of data for disentangling and fully characterizing the interactions that govern collective behaviors in animal groups. PMID:29324853
The importance of data curation on QSAR Modeling ...
During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These modern tools allow for more advanced techniques in terms of chemical structure representation and storage, as well as enabling automated data-mining and standardization approaches to examine and fix data quality issues. This presentation will investigate the impact of data curation on the reliability of QSAR models being developed within the EPA's National Center for Computational Toxicology. As part of this work we have attempted to disentangle the influence of the quality versus quantity of data based on the Syracuse PHYSPROP database partly used by EPISuite software. We will review our automated approaches to examining key datasets related to the EPISuite data to validate across chemical structure representations (e.g., mol file and SMILES) and identifiers (chemical names and registry numbers), and approaches to standardize data into QSAR-ready formats prior to modeling procedures. Our efforts to quantify and segregate data into quality categories have allowed us to evaluate the resulting models that can be developed from these data slices and to quantify to what extent efforts developing high-quality datasets have the expected pay-off in terms of predictive performance. The most accur
NASA Astrophysics Data System (ADS)
Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.
2017-12-01
Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing, and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high-frequency observations and less expensive, low-frequency observations.
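The workflow described (ensemble forward runs, a machine-learned surrogate, then Bayesian inversion) can be sketched compactly on a toy one-dimensional problem; the dispersion function, priors, and noise level below are invented, and a random-forest surrogate with a grid posterior stands in for the study's actual tools.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def dispersion(source_strength, release_time, t_obs=np.array([6.0, 12.0, 24.0])):
    """Toy puff-like concentration at a fixed receptor (arbitrary units)."""
    dt = np.clip(t_obs - release_time, 0.01, None)
    return source_strength * np.exp(-((dt - 8.0) ** 2) / 20.0) / np.sqrt(dt)

# 1) Ensemble of forward runs sampling the prior uniformly.
strength = rng.uniform(0.5, 5.0, 2000)
t_release = rng.uniform(0.0, 6.0, 2000)
Y = np.array([dispersion(s, t) for s, t in zip(strength, t_release)])

# 2) Train a cheap surrogate mapping source parameters to observations.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(np.column_stack([strength, t_release]), Y)

# 3) Synthetic "ground truth" observations with noise, then a grid posterior.
truth = dispersion(2.5, 3.0) + rng.normal(0, 0.05, 3)
S, T = np.meshgrid(np.linspace(0.5, 5.0, 80), np.linspace(0.0, 6.0, 80))
pred = surrogate.predict(np.column_stack([S.ravel(), T.ravel()]))
loglik = -0.5 * np.sum((pred - truth) ** 2, axis=1) / 0.05 ** 2
post = np.exp(loglik - loglik.max()).reshape(S.shape)
post /= post.sum()
print("posterior-mean strength:", (post * S).sum(), "release time:", (post * T).sum())
```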
Classification and Feature Selection Algorithms for Modeling Ice Storm Climatology
NASA Astrophysics Data System (ADS)
Swaminathan, R.; Sridharan, M.; Hayhoe, K.; Dobbie, G.
2015-12-01
Ice storms account for billions of dollars of winter storm loss across the continental US and Canada. In the future, increasing concentration of human populations in areas vulnerable to ice storms, such as the northeastern US, will only exacerbate the impacts of these extreme events on infrastructure and society. Quantifying the potential impacts of global climate change on ice storm prevalence and frequency is challenging, as ice storm climatology is driven by complex and incompletely defined atmospheric processes, processes that are in turn influenced by a changing climate. This makes the underlying atmospheric and computational modeling of ice storm climatology a formidable task. We propose a novel computational framework that uses sophisticated stochastic classification and feature selection algorithms to model ice storm climatology and quantify storm occurrences from both reanalysis and global climate model outputs. The framework is based on an objective identification of ice storm events by key variables derived from vertical profiles of temperature, humidity, and geopotential height. Historical ice storm records are used to identify days with synoptic-scale upper-air and surface conditions associated with ice storms. Evaluation using NARR reanalysis and historical ice storm records for the northeastern US demonstrates that an objective computational model with standard performance measures can identify ice storm events based on upper-air circulation patterns with a relatively high degree of accuracy and provide insights into the relationships between key climate variables associated with ice storms.
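A toy sketch of the classification-plus-feature-importance idea using a random forest on invented profile-derived predictors; the variables, labeling rule, and any resulting accuracy are purely illustrative and are not the framework evaluated against NARR.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

# Invented per-day predictors derived from vertical profiles.
n = 2000
X = np.column_stack([
    rng.normal(300, 150, n),   # warm-layer depth (m)
    rng.normal(-2, 3, n),      # surface temperature (degC)
    rng.normal(1, 4, n),       # 850-hPa temperature (degC)
    rng.gamma(2, 5, n),        # precipitable water (mm)
])
# Toy labeling rule: freezing surface under an elevated warm layer with moisture.
y = ((X[:, 1] < 0) & (X[:, 2] > 0) & (X[:, 0] > 250) & (X[:, 3] > 8)).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
print("feature importances:     ", clf.fit(X, y).feature_importances_)
```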
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate our suggested approach with an application to model selection between different soil-plant models, following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advocate including our suggested upgrade in the Bayesian model averaging routine.
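The core robustness check (re-perturbing the calibration data with realizations of measurement error and examining the induced spread in posterior model weights) can be sketched in a few lines; the two toy models, noise level, and Gaussian-likelihood weighting below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two competing model predictions of a calibration series, plus "observations".
t = np.arange(60)
truth = 2.0 + 0.05 * t
pred_a = 2.0 + 0.05 * t                 # structurally correct model
pred_b = 1.8 + 0.058 * t                # competing model
sigma = 0.15                            # assumed measurement noise std
obs = truth + rng.normal(0, sigma, t.size)

def bma_weights(y):
    """Posterior model weights from Gaussian likelihoods with equal priors."""
    ll = np.array([-0.5 * np.sum((y - p) ** 2) / sigma ** 2 for p in (pred_a, pred_b)])
    w = np.exp(ll - ll.max())
    return w / w.sum()

# Robustness check: perturb the observed data with fresh noise realizations
# and look at the spread ("weighting variance") of the resulting model weights.
weights = np.array([bma_weights(obs + rng.normal(0, sigma, t.size)) for _ in range(2000)])
print("mean weights:            ", weights.mean(axis=0))
print("std of weight of model A:", weights[:, 0].std())
```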
Solar Power System Options for the Radiation and Technology Demonstration Spacecraft
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Haraburda, Francis M.; Riehl, John P.
2000-01-01
The Radiation and Technology Demonstration (RTD) Mission has the primary objective of demonstrating high-power (10 kilowatts) electric thruster technologies in Earth orbit. This paper discusses the conceptual design of the RTD spacecraft photovoltaic (PV) power system and mission performance analyses. These power system studies assessed multiple options for PV arrays, battery technologies and bus voltage levels. To quantify performance attributes of these power system options, a dedicated Fortran code was developed to predict power system performance and estimate system mass. The low-thrust mission trajectory was analyzed and important Earth orbital environments were modeled. Baseline power system design options are recommended on the basis of performance, mass and risk/complexity. Important findings from parametric studies are discussed and the resulting impacts to the spacecraft design and cost.
On the short-term uncertainty in performance of a point absorber wave energy converter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coe, Ryan Geoffrey; Michelen, Carlos; Manuel, Lance
2016-03-01
Of interest in this study is the quantification of uncertainty in the performance of a two-body wave point absorber (Reference Model 3 or RM3), which serves as a wave energy converter (WEC). We demonstrate how simulation tools may be used to establish short-term relationships between any performance parameter of the WEC device and wave height in individual sea states. We demonstrate this methodology for two sea states. Efficient structural reliability methods, validated using more expensive Monte Carlo sampling, allow the estimation of uncertainty in performance of the device. Such methods, when combined with metocean data quantifying the likelihood of different sea states, can be useful in long-term studies and in reliability-based design.
Cost and performance prospects for composite bipolar plates in fuel cells and redox flow batteries
NASA Astrophysics Data System (ADS)
Minke, Christine; Hickmann, Thorsten; dos Santos, Antonio R.; Kunz, Ulrich; Turek, Thomas
2016-02-01
Carbon-polymer-composite bipolar plates (BPP) are suitable for fuel cell and flow battery applications. The advantages of both components are combined in a product with high electrical conductivity and good processability in convenient polymer forming processes. In a comprehensive techno-economic analysis of materials and production processes, cost factors are quantified. For the first time, a technical cost model for BPP is set up with tight integration of material characterization measurements.
Environmental Flow for Sungai Johor Estuary
NASA Astrophysics Data System (ADS)
Adilah, A. Kadir; Zulkifli, Yusop; Zainura, Z. Noor; Bakhiah, Baharim N.
2018-03-01
Sungai Johor estuary is a vital water body in the south of Johor and greatly affects the water quality in the Johor Straits. In the development of the hydrodynamic and water quality models for the Sungai Johor estuary, the Environmental Fluid Dynamics Code (EFDC) model was selected. In this application, the EFDC hydrodynamic model was configured to simulate time-varying surface elevation, velocity, salinity, and water temperature. The EFDC water quality model was configured to simulate dissolved oxygen (DO), dissolved organic carbon (DOC), chemical oxygen demand (COD), ammoniacal nitrogen (NH3-N), nitrate nitrogen (NO3-N), phosphate (PO4), and chlorophyll a. The hydrodynamic and water quality model calibration was performed using a set of site-specific data acquired in January 2008. The simulated water temperature, salinity, and DO showed good to fairly good agreement with observations. The calculated correlation coefficients between computed and observed temperature and salinity were lower than that for water level. Sensitivity analysis was performed on the hydrodynamic and water quality model input parameters to quantify their impact on modeling results such as water surface elevation, salinity, and dissolved oxygen concentration. It is anticipated and recommended that the development of this model be continued to synthesize additional field data into the modeling process.
Wicha, Sebastian G; Kloft, Charlotte
2016-08-15
For pharmacokinetic/pharmacodynamic (PK/PD) assessment of antibiotic combinations in in vitro infection models, accurate and precise quantification of drug concentrations in bacterial growth medium is crucial for derivation of valid PK/PD relationships. We aimed to (i) develop a high-performance liquid chromatography (HPLC) assay to simultaneously quantify linezolid (LZD), vancomycin (VAN) and meropenem (MER), as typical components of broad-spectrum antibiotic combination therapy, in the bacterial growth medium cation-adjusted Mueller-Hinton broth (CaMHB) and (ii) determine the stability profiles of LZD, VAN and MER under conditions in in vitro infection models. To separate sample matrix components, the final method comprised the pretreatment of a 100 μL sample with 400 μL methanol, the evaporation of supernatant and its reconstitution in water. A low sample volume of 2 μL processed sample was injected onto an Accucore C-18 column (2.6 μm, 100 × 2.1 mm) coupled to a Dionex Ultimate 3000 HPLC+ system. UV detection at 251, 240 and 302 nm allowed quantification limits of 0.5, 2 and 0.5 μg/mL for LZD, VAN and MER, respectively. The assay was successfully validated according to the relevant EMA guideline. The rapid method (14 min) was successfully applied to quantify significant degradation of LZD, VAN and MER in in vitro infection models: LZD was stable, while VAN degraded to 90.6% and MER to 62.9% within 24 h compared to t = 0 in CaMHB at 37 °C, which should be considered when deriving PK/PD relationships in in vitro infection models. Inclusion of further antibiotics into the flexible gradient-based HPLC assay seems promising. Copyright © 2016 Elsevier B.V. All rights reserved.
A Multialgorithm Approach to Land Surface Modeling of Suspended Sediment in the Colorado Front Range
Stewart, J. R.; Kasprzyk, J. R.; Rajagopalan, B.; Minear, J. T.; Raseman, W. J.
2017-01-01
A new paradigm of simulating suspended sediment load (SSL) with a Land Surface Model (LSM) is presented here. Five erosion and SSL algorithms were applied within a common LSM framework to quantify uncertainties and evaluate predictability in two steep, forested catchments (>1,000 km²). The algorithms were chosen from among widely used sediment models, including empirically based: monovariate rating curve (MRC) and the Modified Universal Soil Loss Equation (MUSLE); stochastically based: the Load Estimator (LOADEST); conceptually based: the Hydrologic Simulation Program—Fortran (HSPF); and physically based: the Distributed Hydrology Soil Vegetation Model (DHSVM). The algorithms were driven by the hydrologic fluxes and meteorological inputs generated from the Variable Infiltration Capacity (VIC) LSM. A multiobjective calibration was applied to each algorithm and optimized parameter sets were validated over an excluded period, as well as in a transfer experiment to a nearby catchment to explore parameter robustness. Algorithm performance showed consistent decreases when parameter sets were applied to periods with greatly differing SSL variability relative to the calibration period. Of interest was a joint calibration of all sediment algorithm and streamflow parameters simultaneously, from which trade‐offs between streamflow performance and partitioning of runoff and base flow to optimize SSL timing were noted, decreasing the flexibility and robustness of the streamflow to adapt to different time periods. Parameter transferability to another catchment was most successful in more process‐oriented algorithms, the HSPF and the DHSVM. This first‐of‐its‐kind multialgorithm sediment scheme offers a unique capability to portray acute episodic loading while quantifying trade‐offs and uncertainties across a range of algorithm structures. PMID:29399268
Advanced Hydrogen Liquefaction Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Joseph; Kromer, Brian; Neu, Ben
2011-09-28
The project identified and quantified ways to reduce the cost of hydrogen liquefaction and the cost of hydrogen distribution. The goal was to reduce the power consumption by 20% and then to reduce the capital cost. Optimizing the process, improving process equipment, and improving ortho-para conversion significantly reduced the power consumption of liquefaction, but by less than 20%. Because the efficiency improvement was less than the target, the program was stopped before the capital cost was addressed. These efficiency improvements could provide a benefit to the public by improving the design of future hydrogen liquefiers. The project increased the understanding of hydrogen liquefaction by modeling different processes and thoroughly examining ortho-para separation and conversion. The process modeling provided a benefit to the public because the project incorporated para hydrogen into the process modeling software, so liquefaction processes can be modeled more accurately than using only normal hydrogen. Adding catalyst to the first heat exchanger, a simple method to reduce liquefaction power, was identified, analyzed, and quantified. The demonstrated performance of ortho-para separation is sufficient for at least one identified process concept to show reduced power cost when compared to hydrogen liquefaction processes using conventional ortho-para conversion. The impact of improved ortho-para conversion can be significant because ortho-para conversion uses about 20-25% of the total liquefaction power, but performance improvement is necessary to realize a substantial benefit. Most of the energy used in liquefaction is for gas compression. Improvements in hydrogen compression will have a significant impact on overall liquefier efficiency. Improvements to turbines, heat exchangers, and other process equipment will have less impact.
NASA Astrophysics Data System (ADS)
Ibrahim, Elsy; Kim, Wonkook; Crawford, Melba; Monbaliu, Jaak
2017-02-01
Remote sensing has been successfully utilized to distinguish and quantify sediment properties in the intertidal environment. Classification approaches of imagery are popular and powerful yet can lead to site- and case-specific results. Such specificity creates challenges for temporal studies. Thus, this paper investigates the use of regression models to quantify sediment properties instead of classifying them. Two regression approaches, namely multiple regression (MR) and support vector regression (SVR), are used in this study for the retrieval of bio-physical variables of intertidal surface sediment of the IJzermonding, a Belgian nature reserve. In the regression analysis, mud content, chlorophyll a concentration, organic matter content, and soil moisture are estimated using radiometric variables of two airborne sensors, namely the airborne hyperspectral sensor (AHS) and the airborne prism experiment (APEX), and using field hyperspectral acquisitions by an analytical spectral device (ASD). The performance of the two regression approaches is best for the estimation of moisture content. SVR attains the highest accuracy without feature reduction, while MR achieves good results when feature reduction is carried out. Sediment property maps are successfully obtained using the models and hyperspectral imagery, where SVR used with all bands achieves the best performance. The study also involves the extraction of weights identifying the contribution of each band of the images in the quantification of each sediment property when MR and principal component analysis are used.
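A schematic comparison of the two regression approaches on invented band-reflectance data, assuming scikit-learn implementations of multiple (linear) regression and SVR; the band count, target relation, and scores are illustrative, not the AHS/APEX/ASD results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Invented stand-in for 20 band reflectances and measured moisture content (%).
n_samples, n_bands = 150, 20
bands = rng.uniform(0.05, 0.6, (n_samples, n_bands))
moisture = 80 - 90 * bands[:, 15] + 10 * bands[:, 3] + rng.normal(0, 3, n_samples)

mr = LinearRegression()
svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))

for name, model in [("MR", mr), ("SVR", svr)]:
    r2 = cross_val_score(model, bands, moisture, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```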
Peng, Xian; Yuan, Han; Chen, Wufan; Ding, Lei
2017-01-01
Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, which requires an elaborated stimulus sequence design to attenuate impacts from noise in data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of noise gain factor (NGF), which has been proposed previously but with less effectiveness in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise property on typical EEG recordings, as well as on five real CLAD electroencephalogram (EEG) recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies and better performance than the original NGF in various aspects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence and provide a more efficient means of generating CLAD sequences via incorporation with optimization algorithms. The present study can facilitate the specific application of the CLAD paradigm with desired sequences in the clinic. PMID:28414803
NASA Astrophysics Data System (ADS)
Mues, A.; Kuenen, J.; Hendriks, C.; Manders, A.; Segers, A.; Scholz, Y.; Hueglin, C.; Builtjes, P.; Schaap, M.
2014-01-01
In this study the sensitivity of the model performance of the chemistry transport model (CTM) LOTOS-EUROS to the description of the temporal variability of emissions was investigated. Currently the temporal release of anthropogenic emissions is described by European average diurnal, weekly and seasonal time profiles per sector. These default time profiles largely neglect the variation of emission strength with activity patterns, region, species, emission process and meteorology. The three sources dealt with in this study are combustion in energy and transformation industries (SNAP1), non-industrial combustion (SNAP2) and road transport (SNAP7). First of all, the impact of neglecting the temporal emission profiles for these SNAP categories on simulated concentrations was explored. In a second step, we constructed more detailed emission time profiles for the three categories and quantified their impact on the model performance both separately and combined. The performance in comparison to observations for Germany was quantified for the pollutants NO2, SO2 and PM10 and compared to a simulation using the default LOTOS-EUROS emission time profiles. The LOTOS-EUROS simulations were performed for the year 2006 with a temporal resolution of 1 h and a horizontal resolution of approximately 25 × 25 km². In general the largest impact on the model performance was found when neglecting the default time profiles for the three categories. The daily average correlation coefficient, for instance, decreased by 0.04 (NO2), 0.11 (SO2) and 0.01 (PM10) at German urban background stations compared to the default simulation. A systematic increase in the correlation coefficient is found when using the new time profiles. The size of the increase depends on the source category, component and station. Using national profiles for road transport showed important improvements in the explained variability over the weekdays as well as the diurnal cycle for NO2. The largest impact of the SNAP1 and SNAP2 profiles was found for SO2. When using all new time profiles simultaneously in one simulation, the daily average correlation coefficient increased by 0.05 (NO2), 0.07 (SO2) and 0.03 (PM10) at urban background stations in Germany. This exercise showed that, to improve the performance of a CTM, a better representation of the temporal distribution of anthropogenic emissions is recommended. This can be done by developing a dynamical emission model that takes into account region-specific factors and meteorology.
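To make the mechanics concrete, the sketch below applies illustrative monthly, weekly, and hourly factors to an annual sector total; the factor values are invented and are not the LOTOS-EUROS defaults or the new national profiles.

```python
import numpy as np

# Illustrative temporal factors for one sector; each set must average to 1
# so that the annual emission total is preserved.
monthly = np.array([1.2, 1.15, 1.05, 0.95, 0.9, 0.85,
                    0.8, 0.85, 0.95, 1.05, 1.1, 1.15])
weekly = np.array([1.06, 1.06, 1.06, 1.06, 1.06, 0.85, 0.85])
hourly = np.concatenate([np.full(6, 0.6), np.full(12, 1.3), np.full(6, 0.8)])

for prof in (monthly, weekly, hourly):
    prof /= prof.mean()          # enforce mass conservation

def hourly_emission(annual_total, month, weekday, hour):
    """Distribute an annual total (e.g., kt/yr) to one hour of the year."""
    return (annual_total / 8760.0) * monthly[month] * weekly[weekday] * hourly[hour]

# Hypothetical road-transport NOx total of 500 kt/yr: Monday 08:00 in January.
print(hourly_emission(500.0, month=0, weekday=0, hour=8), "kt per hour equivalent")
```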
Factors Influencing Obstacle Crossing Performance in Patients with Parkinson's Disease
Liao, Ying-Yi; Yang, Yea-Ru; Wu, Yih-Ru; Wang, Ray-Yau
2014-01-01
Background Tripping over obstacles is the major cause of falls in community-dwelling patients with Parkinson's disease (PD). Understanding the factors associated with the obstacle crossing behavior may help to develop possible training programs for crossing performance. This study aimed to identify the relationships and important factors determining obstacle crossing performance in patients with PD. Methods Forty-two idiopathic patients with PD (Hoehn and Yahr stages I to III) participated in this study. Obstacle crossing performance was recorded by the Liberty system, a three-dimensional motion capture device. Maximal isometric strength of the lower extremity was measured by a handheld dynamometer. Dynamic balance and sensory integration ability were assessed using the Balance Master system. Movement velocity (MV), maximal excursion (ME), and directional control (DC) were obtained during the limits of stability test to quantify dynamic balance. The sum of sensory organization test (SOT) scores was used to quantify sensory organization ability. Results Both crossing stride length and stride velocity correlated significantly with lower extremity muscle strength, dynamic balance control (forward and sideward), and sum of SOT scores. From the regression model, forward DC and ankle dorsiflexor strength were identified as two major determinants for crossing performance (R2 = .37 to .41 for the crossing stride length, R2 = .43 to .44 for the crossing stride velocity). Conclusions Lower extremity muscle strength, dynamic balance control and sensory integration ability significantly influence obstacle crossing performance. We suggest an emphasis on muscle strengthening exercises (especially ankle dorsiflexors), balance training (especially forward DC), and sensory integration training to improve obstacle crossing performance in patients with PD. PMID:24454723
Solar Field Optical Characterization at Stillwater Geothermal/Solar Hybrid Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Guangdong; Turchi, Craig
2017-01-27
Concentrating solar power (CSP) can provide additional thermal energy to boost geothermal plant power generation. For a newly constructed solar field at a geothermal power plant site, it is critical to properly characterize its performance so that the prediction of thermal power generation can be derived to develop an optimum operating strategy for a hybrid system. In the past, laboratory characterization of a solar collector has often extended into the solar field performance model and has been used to predict the actual solar field performance, disregarding realistic impacting factors. In this work, an extensive measurement of mirror slope error and receiver position error has been performed in the field by using the optical characterization tool called Distant Observer (DO). Combining a solar reflectance sampling procedure, a newly developed solar characterization program called FirstOPTIC and public software for annual performance modeling called System Advisor Model (SAM), a comprehensive solar field optical characterization has been conducted, thus allowing for an informed prediction of solar field annual performance. The paper illustrates this detailed solar field optical characterization procedure and demonstrates how the results help to quantify an appropriate tracking-correction strategy to improve solar field performance. In particular, it is found that an appropriate tracking-offset algorithm can improve the solar field performance by about 15%. The work here provides a valuable reference for the growing CSP industry.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
Quantifying uncertainty in climate change science through empirical information theory.
Majda, Andrew J; Gershgorin, Boris
2010-08-24
Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science, including the prototype behavior of tracer gases such as CO2. Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
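For a Gaussian approximation, the signal (mean error) and dispersion (covariance ratio) contributions described above reduce to the relative entropy between two Gaussian distributions; a small sketch with invented two-variable climate statistics follows, as an illustration rather than the paper's exact metric.

```python
import numpy as np

def gaussian_relative_entropy(mu, cov, mu_m, cov_m):
    """Relative entropy D(truth || model) between two Gaussians: the 'signal'
    term penalizes mean (bias) errors, the 'dispersion' term penalizes
    covariance-ratio errors -- the two contributions described above."""
    n = mu.size
    cov_m_inv = np.linalg.inv(cov_m)
    d_mu = mu - mu_m
    signal = 0.5 * d_mu @ cov_m_inv @ d_mu
    ratio = cov @ cov_m_inv
    dispersion = 0.5 * (np.trace(ratio) - n - np.log(np.linalg.det(ratio)))
    return signal + dispersion

# Two-variable toy climate statistics (invented numbers).
mu_truth = np.array([288.0, 0.5])
cov_truth = np.array([[1.0, 0.2], [0.2, 0.5]])
mu_model = np.array([287.4, 0.6])
cov_model = np.array([[1.4, 0.1], [0.1, 0.4]])
print(gaussian_relative_entropy(mu_truth, cov_truth, mu_model, cov_model))
```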
An optomechanical model eye for ophthalmological refractive studies.
Arianpour, Ashkan; Tremblay, Eric J; Stamenov, Igor; Ford, Joseph E; Schanzlin, David J; Lo, Yuhwa
2013-02-01
To create an accurate, low-cost optomechanical model eye for investigation of refractive errors in clinical and basic research studies. An optomechanical fluid-filled eye model with dimensions consistent with the human eye was designed and fabricated. Optical simulations were performed on the optomechanical eye model, and the quantified resolution and refractive errors were compared with the widely used Navarro eye model using the ray-tracing software ZEMAX (Radiant Zemax, Redmond, WA). The resolution of the physical optomechanical eye model was then quantified with a complementary metal-oxide semiconductor imager using the image resolution software SFR Plus (Imatest, Boulder, CO). Refractive, manufacturing, and assembling errors were also assessed. A refractive intraocular lens (IOL) and a diffractive IOL were added to the optomechanical eye model for tests and analyses of a 1951 U.S. Air Force target chart. Resolution and aberrations of the optomechanical eye model and the Navarro eye model were qualitatively similar in ZEMAX simulations. Experimental testing found that the optomechanical eye model reproduced properties pertinent to human eyes, including resolution better than 20/20 visual acuity and a decrease in resolution as the field of view increased in size. The IOLs were also integrated into the optomechanical eye model to image objects at distances of 15, 10, and 3 feet, and they indicated a resolution of 22.8 cycles per degree at 15 feet. A life-sized optomechanical eye model with the flexibility to be patient-specific was designed and constructed. The model had the resolution of a healthy human eye and recreated normal refractive errors. This model may be useful in the evaluation of IOLs for cataract surgery. Copyright 2013, SLACK Incorporated.
Suitability of ANSI standards for quantifying communication satellite system performance
NASA Technical Reports Server (NTRS)
Cass, Robert D.
1988-01-01
A study on the application of American National Standards X3.102 and X3.141 to various classes of communication satellite systems, from the simple analog bent-pipe to NASA's Advanced Communications Technology Satellite (ACTS), is discussed. These standards are proposed as a means for quantifying the end-to-end communication system performance of communication satellite systems. An introductory overview of the two standards is given, followed by a review of the characteristics, applications, and advantages of using X3.102 and X3.141 to quantify performance, with a description of the application of these standards to ACTS.
Hamel, Perrine; Falinski, Kim; Sharp, Richard; Auerbach, Daniel A; Sánchez-Canales, María; Dennedy-Frank, P James
2017-02-15
Geospatial models are commonly used to quantify sediment contributions at the watershed scale. However, the sensitivity of these models to variation in hydrological and geomorphological features, in particular to land use and topography data, remains uncertain. Here, we assessed the performance of one such model, the InVEST sediment delivery model, for six sites comprising a total of 28 watersheds varying in area (6-13,500 km²), climate (tropical, subtropical, Mediterranean), topography, and land use/land cover. For each site, we compared uncalibrated and calibrated model predictions with observations and alternative models. We then performed correlation analyses between model outputs and watershed characteristics, followed by sensitivity analyses on the digital elevation model (DEM) resolution. Model performance varied across sites (overall r² = 0.47), but estimates of the magnitude of specific sediment export were as or more accurate than global models. We found significant correlations between metrics of sediment delivery and watershed characteristics, including erosivity, suggesting that empirical relationships may ultimately be developed for ungauged watersheds. Model sensitivity to DEM resolution varied across and within sites, but did not correlate with other observed watershed variables. These results were corroborated by sensitivity analyses performed on synthetic watersheds ranging in mean slope and DEM resolution. Our study provides modelers using InVEST or similar geospatial sediment models with practical insights into model behavior and structural uncertainty: first, comparison of model predictions across regions is possible when environmental conditions differ significantly; second, local knowledge on the sediment budget is needed for calibration; and third, model outputs often show significant sensitivity to DEM resolution. Copyright © 2016 Elsevier B.V. All rights reserved.
Effect of Voltage Level on Power System Design for Solar Electric Propulsion Missions
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.
2003-01-01
This paper presents study results quantifying the benefits of higher voltage, electric power system designs for a typical solar electric propulsion spacecraft Earth orbiting mission. A conceptual power system architecture was defined and design points were generated for system voltages of 28-V, 50-V, 120-V, and 300-V using state-of-the-art or advanced technologies. A 300-V 'direct-drive' architecture was also analyzed to assess the benefits of directly powering the electric thruster from the photovoltaic array without up-conversion. Fortran and spreadsheet computational models were exercised to predict the performance and size power system components to meet spacecraft mission requirements. Pertinent space environments, such as electron and proton radiation, were calculated along the spiral trajectory. In addition, a simplified electron current collection model was developed to estimate photovoltaic array losses for the orbital plasma environment and that created by the thruster plume. The secondary benefits of power system mass savings for spacecraft propulsion and attitude control systems were also quantified. Results indicate that considerable spacecraft wet mass savings were achieved by the 300-V and 300-V direct-drive architectures.
Lag and seasonality considerations in evaluating AVHRR NDVI response to precipitation
Ji, Lei; Peters, Albert J.
2005-01-01
Assessment of the relationship between the normalized difference vegetation index (NDVI) and precipitation is important in understanding vegetation and climate interaction at a large scale. NDVI response to precipitation, however, is difficult to quantify due to the lag and seasonality effects, which will vary due to vegetation cover type, soils and climate. A time series analysis was performed on biweekly NDVI and precipitation around weather stations in the northern and central U.S. Great Plains. Regression models that incorporate lag and seasonality effects were used to quantify the relationship between NDVI and lagged precipitation in grasslands and croplands. It was found that the time lag was shorter in the early growing season, but longer in the mid- to late-growing season for most locations. The regression models with seasonal adjustment indicate that the relationship between NDVI and precipitation over the entire growing season was strong, with R2 values of 0.69 and 0.72 for grasslands and croplands, respectively. We conclude that vegetation greenness can be predicted using current and antecedent precipitation, if seasonal effects are taken into account.
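A minimal sketch of a lagged regression with a seasonal covariate of the kind described, applied to an invented biweekly NDVI/precipitation series; the lag structure, coefficients, and statsmodels usage are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

# Invented biweekly series for one grassland station over 10 growing seasons.
n = 26 * 10
precip = rng.gamma(2.0, 15.0, n)                   # mm per biweekly period
season = np.sin(2 * np.pi * (np.arange(n) % 26) / 26)

# Toy NDVI driven by current plus two lagged precipitation periods and seasonality.
ndvi = (0.25 + 0.002 * precip
        + 0.0015 * np.roll(precip, 1)
        + 0.001 * np.roll(precip, 2)
        + 0.15 * season + rng.normal(0, 0.02, n))

# Regression with lagged precipitation terms and a seasonal covariate,
# mirroring the lag/seasonality structure described above.
X = np.column_stack([precip, np.roll(precip, 1), np.roll(precip, 2), season])
X = sm.add_constant(X[2:])                          # drop rows contaminated by roll()
fit = sm.OLS(ndvi[2:], X).fit()
print(fit.params, "R^2 =", round(fit.rsquared, 2))
```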
NASA Astrophysics Data System (ADS)
Tran, H. N. Q.; Tran, T. T.; Mansfield, M. L.; Lyman, S. N.
2014-12-01
Contributions of emissions from oil and gas activities to elevated ozone concentrations in the Uintah Basin, Utah, were evaluated using the CMAQ Integrated Source Apportionment Method (CMAQ-ISAM) technique and were compared with the results of traditional budgeting methods. Unlike the traditional budgeting method, which compares simulations with and without emissions of the source(s) in question to quantify their impacts, the CMAQ-ISAM technique assigns tags to emissions of each source and tracks their evolution through physical and chemical processes to quantify the final ozone product yield from the source. Model simulations were performed for two episodes in winter 2013 of low and high ozone to provide better understanding of source contributions under different weather conditions. Due to the highly nonlinear ozone chemistry, results obtained from the two methods differed significantly. The growing oil and gas industry in the Uintah Basin is the largest contributor to the elevated ozone (>75 ppb) observed in the Basin. This study therefore provides insight into the impact of the oil and gas industry on the ozone issue and helps in determining effective control strategies.
An anatomical study comparing two surgical approaches for isolated talonavicular arthrodesis.
Higgs, Zoe; Jamal, Bilal; Fogg, Quentin A; Kumar, C Senthil
2014-10-01
Two operative approaches are commonly used for isolated talonavicular arthrodesis: the medial and the dorsal approach. It is recognized that access to the lateral aspect of the talonavicular joint can be limited when using the medial approach, and it is our experience that using the dorsal approach addresses this issue. We performed an anatomical study using cadaver specimens to compare the amount of articular surface that can be accessed by each operative approach. Medial and dorsal approaches to the talonavicular joint were performed on each of 11 cadaveric specimens (10 fresh frozen, 1 embalmed). Distraction of the joint was performed as used intraoperatively, and the accessible area of the articular surfaces was marked for each of the 2 approaches using a previously reported technique. Disarticulation was performed and the marked surface area was quantified using an immersion digital microscribe, allowing a 3-dimensional virtual model of the articular surfaces to be assessed. The median percentage of total accessible talonavicular articular surface area for the medial and dorsal approaches was 71% and 92%, respectively (Wilcoxon signed-rank test, P < .001). This study provides quantifiable measurements of the articular surface accessible by the medial and dorsal approaches to the talonavicular joint. These data support the use of the dorsal approach for talonavicular arthrodesis, particularly in cases where access to the lateral half of the joint is necessary. © The Author(s) 2014.
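For readers unfamiliar with the paired test cited above, here is a small Python sketch of a Wilcoxon signed-rank comparison of accessible-surface percentages; the specimen values are invented and do not reproduce the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical percentage of accessible articular surface per cadaveric specimen
medial = np.array([68, 72, 70, 75, 66, 71, 73, 69, 74, 70, 72], dtype=float)
dorsal = np.array([90, 93, 91, 95, 88, 92, 94, 90, 93, 91, 92], dtype=float)

# Paired, non-parametric comparison of the two surgical approaches
stat, p = stats.wilcoxon(medial, dorsal)
print(f"median medial = {np.median(medial):.0f}%, median dorsal = {np.median(dorsal):.0f}%")
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")
```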
Millimeter wave sensor requirements for maritime small craft identification
NASA Astrophysics Data System (ADS)
Krapels, Keith; Driggers, Ronald G.; Garcia, Jose; Boettcher, Evelyn; Prather, Dennis; Schuetz, Christopher; Samluk, Jesse; Stein, Lee; Kiser, William; Visnansky, Andrew; Grata, Jeremy; Wikner, David; Harris, Russ
2009-09-01
Passive millimeter wave (mmW) imagers have improved in terms of resolution, sensitivity, and frame rate. Currently, the Office of Naval Research (ONR), along with the US Army Research, Development and Engineering Command, Communications Electronics Research Development and Engineering Center (RDECOM CERDEC) Night Vision and Electronic Sensor Directorate (NVESD), are investigating the current state of the art of mmW imaging systems. The focus of this study was the field performance of mmW imaging systems for the task of small watercraft/boat identification. First, mmW signatures were collected for a set of eight small watercraft at five different aspects during daylight hours over a 48-hour period in the spring of 2008. Target characteristics were measured, and the characteristic dimension, signatures, and Root Sum Squared of Target's Temperature (RSSΔT) were tabulated. Then an eight-alternative forced-choice (8AFC) human perception experiment was developed and conducted at NVESD. The ability of observers to discriminate between small watercraft was quantified. Next, the task difficulty criterion, V50, was quantified by applying these data to NVESD's target acquisition models using the Targeting Task Performance (TTP) metric. These parameters can be used to evaluate sensor field performance for Anti-Terrorism / Force Protection (AT/FP) and navigation tasks for the U.S. Navy, as well as for design and evaluation of imaging passive mmW sensors for both the U.S. Navy and U.S. Coast Guard.
Quality of Protection Evaluation of Security Mechanisms
Ksiezopolski, Bogdan; Zurek, Tomasz; Mokkas, Michail
2014-01-01
Recent research indicates that during the design of a teleinformatic system, a tradeoff between system performance and system protection should be made. The traditional approach assumes that the best way is to apply the strongest possible security measures. Unfortunately, the overestimation of security measures can lead to an unreasonable increase in system load. This is especially important in multimedia systems, where performance is critical. In many cases, determining the required level of protection and adjusting security measures to these requirements increases system efficiency. Such an approach is achieved by means of quality of protection models, in which the security measures are evaluated according to their influence on system security. In this paper, we propose a model for QoP evaluation of security mechanisms. Owing to this model, one can quantify the influence of particular security mechanisms on ensuring security attributes. The methodology of our model preparation is described and, based on it, a case study analysis is presented. We support our method with a tool in which the models can be defined and QoP evaluation can be performed. Finally, we have modelled the TLS cryptographic protocol and presented the QoP security mechanisms evaluation for selected versions of this protocol. PMID:25136683
Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav
2013-01-21
Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray's proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization model. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular, we discuss the causes of variability in performance across the systems and also quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.
Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D
2017-01-01
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
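A hedged sketch of the approach recommended above: random-effects (DerSimonian-Laird) meta-analysis of cluster-specific C-statistics on the logit scale, followed by back-transformation of the pooled estimate. The C-statistics, standard errors, and the delta-method variance conversion are illustrative assumptions, not values from the QRISK2 example.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def dersimonian_laird(theta, var):
    """Random-effects meta-analysis (DerSimonian-Laird) of per-study estimates."""
    theta, var = np.asarray(theta, float), np.asarray(var, float)
    w = 1.0 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed) ** 2)
    k = len(theta)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)
    mean = np.sum(w_re * theta) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mean, se, tau2

# Hypothetical per-cluster C-statistics and standard errors from external validation
c_stat = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
se_c   = np.array([0.02, 0.03, 0.025, 0.02, 0.04])

# Meta-analyse on the logit scale, then back-transform the pooled estimate
theta = logit(c_stat)
var_logit = (se_c / (c_stat * (1 - c_stat))) ** 2   # delta-method variance on logit scale
mean, se, tau2 = dersimonian_laird(theta, var_logit)
back = lambda x: 1 / (1 + np.exp(-x))
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"pooled C = {back(mean):.3f} (95% CI {back(lo):.3f}-{back(hi):.3f}), tau^2 = {tau2:.4f}")
```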
Rank Order Entropy: why one metric is not enough
McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.
2011-01-01
The use of Quantitative Structure-Activity Relationship models to address problems in drug discovery has a mixed history, generally resulting from the mis-application of QSAR models that were either poorly constructed or used outside of their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r2, PRESS r2, F-tests, etc) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measure the stability of rank order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank order models created from representative data sets, descriptor sets, and modeling methods that were then assessed using Kendall Tau as a rank order metric, upon which the Shannon Entropy was evaluated as a means of quantifying rank-order stability. Random removal of data from the training set, also known as Data Truncation Analysis (DTA), was used as a means for systematically reducing the information content of each training set while examining both rank order performance and rank order stability in the face of training set data loss. The premise for DTA ROE model evaluation is that the response of a model to incremental loss of training information will be indicative of the quality and sufficiency of its training set, learning method, and descriptor types to cover a particular domain of applicability. This process is termed a “rank order entropy” evaluation, or ROE. By analogy with information theory, an unstable rank order model displays a high level of implicit entropy, while a QSAR rank order model which remains nearly unchanged during training set reductions would show low entropy. In this work, the ROE metric was applied to 71 data sets of different sizes, and was found to reveal more information about the behavior of the models than traditional metrics alone. Stable, or consistently performing models, did not necessarily predict rank order well. Models that performed well in rank order did not necessarily perform well in traditional metrics. In the end, it was shown that ROE metrics suggested that some QSAR models that are typically used should be discarded. ROE evaluation helps to discern which combinations of data set, descriptor set, and modeling methods lead to usable models in prioritization schemes, and provides confidence in the use of a particular model within a specific domain of applicability. PMID:21875058
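The ROE procedure is described above only qualitatively; the following Python sketch illustrates the general idea under stated assumptions: refit a simple (ridge) model on progressively truncated training sets, measure rank-order agreement with Kendall tau against the full-data model, and summarize the spread of tau values with a Shannon entropy. The data, model, binning, and choice of reference ranking are all illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Synthetic QSAR-like data: descriptor matrix X and an activity endpoint y
X = rng.normal(size=(200, 20))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)
X_train, y_train, X_test = X[:150], y[:150], X[150:]

# Reference rank order from the model trained on the full training set
ref_pred = Ridge(alpha=1.0).fit(X_train, y_train).predict(X_test)

taus = []
for frac in np.linspace(0.9, 0.5, 5):            # data truncation analysis (DTA)
    for _ in range(20):
        keep = rng.choice(len(X_train), size=int(frac * len(X_train)), replace=False)
        pred = Ridge(alpha=1.0).fit(X_train[keep], y_train[keep]).predict(X_test)
        tau, _ = stats.kendalltau(ref_pred, pred)
        taus.append(tau)

# Shannon entropy of the distribution of rank-order agreement: lower = more stable
hist, _ = np.histogram(taus, bins=10, range=(-1, 1))
p = hist[hist > 0] / hist.sum()
entropy = -np.sum(p * np.log2(p))
print(f"mean Kendall tau = {np.mean(taus):.3f}, rank-order entropy = {entropy:.3f} bits")
```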
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanninen, M.F.; O'Donoghue, P.E.; Popelar, C.F.
1993-02-01
The project was undertaken for the purposes of quantifying the Battelle slow crack growth (SCG) test for predicting long-term performance of polyethylene gas distribution pipes, and of demonstrating the applicability of the methodology for use by the gas industry for accelerated characterization testing, thereby bringing the SCG test development effort to a closure. The work has revealed that the Battelle SCG test, and the linear fracture mechanics interpretation that it currently utilizes, is valid for a class of PE materials. The long-term performance of these materials in various operating conditions can therefore be effectively predicted.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
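A minimal sketch of random-walk Metropolis sampling for a toy thermal-time phenology model (not one of the nine models used in the study); the observations, prior bounds, proposal widths, and error model are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy phenology "model": predicted days to heading from a thermal-time requirement
mean_temp = 15.0                                   # mean daily temperature (deg C), illustrative
def days_to_heading(gdd_req, t_base):
    return gdd_req / max(mean_temp - t_base, 1e-6)

# Hypothetical observed days to heading across trials
obs = np.array([62.0, 58.0, 65.0, 60.0, 63.0])
sigma = 3.0                                        # observation error (days), assumed known

def log_posterior(theta):
    gdd_req, t_base = theta
    if not (200 < gdd_req < 1500 and 0 < t_base < 10):   # flat priors with bounds
        return -np.inf
    pred = days_to_heading(gdd_req, t_base)
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2

# Random-walk Metropolis sampler
theta = np.array([600.0, 4.0])
logp = log_posterior(theta)
step = np.array([20.0, 0.3])                       # proposal standard deviations
samples = []
for _ in range(20000):
    prop = theta + step * rng.standard_normal(2)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])                    # discard burn-in
print("posterior mean (GDD requirement, base temp):", post.mean(axis=0).round(2))
print("posterior std:", post.std(axis=0).round(2))
```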
A model of motor performance during surface penetration: from physics to voluntary control.
Klatzky, Roberta L; Gershon, Pnina; Shivaprabhu, Vikas; Lee, Randy; Wu, Bing; Stetten, George; Swendsen, Robert H
2013-10-01
The act of puncturing a surface with a hand-held tool is a ubiquitous but complex motor behavior that requires precise force control to avoid potentially severe consequences. We present a detailed model of puncture over a time course of approximately 1,000 ms, which is fit to kinematic data from individual punctures, obtained via a simulation with high-fidelity force feedback. The model describes puncture as proceeding from purely physically determined interactions between the surface and tool, through decline of force due to biomechanical viscosity, to cortically mediated voluntary control. When fit to the data, it yields parameters for the inertial mass of the tool/person coupling, time characteristic of force decline, onset of active braking, stopping time and distance, and late oscillatory behavior, all of which the analysis relates to physical variables manipulated in the simulation. While the present data characterize distinct phases of motor performance in a group of healthy young adults, the approach could potentially be extended to quantify the performance of individuals from other populations, e.g., with sensory-motor impairments. Applications to surgical force control devices are also considered.
Flexible Fabrics with High Thermal Conductivity for Advanced Spacesuits
NASA Technical Reports Server (NTRS)
Trevino, Luis A.; Bue, Grant; Orndoff, Evelyne; Kesterson, Matt; Connel, John W.; Smith, Joseph G., Jr.; Southward, Robin E.; Working, Dennis; Watson, Kent A.; Delozier, Donovan M.
2006-01-01
This paper describes the effort and accomplishments for developing flexible fabrics with high thermal conductivity (FFHTC) for spacesuits to improve thermal performance, lower weight and reduce complexity. Commercial and additional space exploration applications that require substantial performance enhancements in removal and transport of heat away from equipment as well as from the human body can benefit from this technology. Improvements in thermal conductivity were achieved through the use of modified polymers containing thermally conductive additives. The objective of the FFHTC effort is to significantly improve the thermal conductivity of the liquid cooled ventilation garment by improving the thermal conductivity of the subcomponents (i.e., fabric and plastic tubes). This paper presents the initial system modeling studies, including a detailed liquid cooling garment model incorporated into the Wissler human thermal regulatory model, to quantify the necessary improvements in thermal conductivity and garment geometries needed to affect system performance. In addition, preliminary results of thermal conductivity improvements of the polymer components of the liquid cooled ventilation garment are presented. By improving thermal garment performance, major technology drivers will be addressed for lightweight, high thermal conductivity, flexible materials for spacesuits that are strategic technical challenges of the Exploration
Systems engineering analysis of five 'as-manufactured' SXI telescopes
NASA Astrophysics Data System (ADS)
Harvey, James E.; Atanassova, Martina; Krywonos, Andrey
2005-09-01
Four flight models and a spare of the Solar X-ray Imager (SXI) telescope mirrors have been fabricated. The first of these is scheduled to be launched on the NOAA GOES-N satellite on July 29, 2005. A complete systems engineering analysis of the "as-manufactured" telescope mirrors has been performed that includes diffraction effects, residual design errors (aberrations), surface scatter effects, and all of the miscellaneous errors in the mirror manufacturer's error budget tree. Finally, a rigorous analysis of mosaic detector effects has been included. SXI is a staring telescope providing full solar disc images at X-ray wavelengths. For wide-field applications such as this, a field-weighted-average measure of resolution has been modeled. Our performance predictions have allowed us to use metrology data to model the "as-manufactured" performance of the X-ray telescopes and to adjust the final focal plane location to optimize the number of spatial resolution elements in a given operational field-of-view (OFOV) for either the aerial image or the detected image. The resulting performance predictions from five separate mirrors allow us to evaluate and quantify the optical fabrication process for producing these very challenging grazing incidence X-ray optics.
Task-based lens design with application to digital mammography
NASA Astrophysics Data System (ADS)
Chen, Liying; Barrett, Harrison H.
2005-01-01
Recent advances in model observers that predict human perceptual performance now make it possible to optimize medical imaging systems for human task performance. We illustrate the procedure by considering the design of a lens for use in an optically coupled digital mammography system. The channelized Hotelling observer is used to model human performance, and the channels chosen are differences of Gaussians. The task performed by the model observer is detection of a lesion at a random but known location in a clustered lumpy background mimicking breast tissue. The entire system is simulated with a Monte Carlo application according to physics principles, and the main system component under study is the imaging lens that couples a fluorescent screen to a CCD detector. The signal-to-noise ratio (SNR) of the channelized Hotelling observer is used to quantify this detectability of the simulated lesion (signal) on the simulated mammographic background. Plots of channelized Hotelling SNR versus signal location for various lens apertures, various working distances, and various focusing places are presented. These plots thus illustrate the trade-off between coupling efficiency and blur in a task-based manner. In this way, the channelized Hotelling SNR is used as a merit function for lens design.
Task-based lens design, with application to digital mammography
NASA Astrophysics Data System (ADS)
Chen, Liying
Recent advances in model observers that predict human perceptual performance now make it possible to optimize medical imaging systems for human task performance. We illustrate the procedure by considering the design of a lens for use in an optically coupled digital mammography system. The channelized Hotelling observer is used to model human performance, and the channels chosen are differences of Gaussians (DOGs). The task performed by the model observer is detection of a lesion at a random but known location in a clustered lumpy background mimicking breast tissue. The entire system is simulated with a Monte Carlo application according to the physics principles, and the main system component under study is the imaging lens that couples a fluorescent screen to a CCD detector. The SNR of the channelized Hotelling observer is used to quantify the detectability of the simulated lesion (signal) upon the simulated mammographic background. In this work, plots of channelized Hotelling SNR vs. signal location for various lens apertures, various working distances, and various focusing places are shown. These plots thus illustrate the trade-off between coupling efficiency and blur in a task-based manner. In this way, the channelized Hotelling SNR is used as a merit function for lens design.
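A compact sketch of how a channelized Hotelling observer SNR can be computed with difference-of-Gaussians channels, assuming a white-noise background as a stand-in for the clustered lumpy backgrounds used in these studies; the channel widths, signal size, and image counts are illustrative choices, not those of the simulations described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                             # image side length (pixels)
yy, xx = np.mgrid[:N, :N] - N / 2
rad2 = (xx**2 + yy**2).astype(float)

def dog_channel(s1, s2):
    """Difference-of-Gaussians channel, flattened to a vector."""
    g = lambda s: np.exp(-rad2 / (2 * s**2)) / (2 * np.pi * s**2)
    return (g(s1) - g(s2)).ravel()

# Channel matrix: a few DOG channels at increasing scale (widths are illustrative)
T = np.column_stack([dog_channel(s, 1.67 * s) for s in (2.0, 4.0, 8.0, 16.0)])

# Toy images: white-noise background, plus a Gaussian "lesion" for signal-present
signal = 2.0 * np.exp(-rad2 / (2 * 4.0**2)).ravel()
g0 = rng.normal(0.0, 1.0, size=(400, N * N))       # signal-absent images
g1 = rng.normal(0.0, 1.0, size=(400, N * N)) + signal  # signal-present images

# Channel outputs and channelized Hotelling SNR
v0, v1 = g0 @ T, g1 @ T
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
dv = v1.mean(axis=0) - v0.mean(axis=0)
w = np.linalg.solve(S, dv)                         # Hotelling template in channel space
snr = dv @ w / np.sqrt(w @ S @ w)
print(f"channelized Hotelling SNR = {snr:.2f}")
```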
Performance Analysis of Stop-Skipping Scheduling Plans in Rail Transit under Time-Dependent Demand
Cao, Zhichao; Yuan, Zhenzhou; Zhang, Silin
2016-01-01
Stop-skipping is a key method for alleviating congestion in rail transit, where schedules are sometimes difficult to implement. Several mechanisms have been proposed and analyzed in the literature, but very few performance comparisons are available. This study incorporated the estimation of train choice behavior into the model, taking passengers' perception into account. If a passenger's train path can be identified, this information is useful for improving the stop-skipping schedule service. Multiple performance measures are a key characteristic of the five proposed stop-skipping schedules, and quantified analysis is used to illustrate the different effects of well-known deterministic and stochastic forms. The proposed forms are examined in the context of a single line rather than a transit network. We analyzed four deterministic forms based on the well-known A/B stop-skipping operating strategy. A stochastic form was innovatively modeled as a binary integer programming problem. We present a performance analysis of our proposed model to demonstrate that stop-skipping can feasibly be used to improve passenger service and enhance the elasticity of train operations under demand variations, along with an explicit parametric discussion. PMID:27420087
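As a toy illustration of the stop-skipping decision structure (one binary stop/skip variable per station), the sketch below enumerates patterns for a very small line and scores each with a simple total-passenger-time proxy; it is not the authors' binary integer program, and the demand, dwell-saving, and penalty values are invented.

```python
import itertools
import numpy as np

# Tiny illustrative line: stations 0..5, boarding demand per station, and a fixed
# dwell-time saving whenever a station is skipped
n_stations = 6
demand = np.array([40, 10, 35, 5, 30, 20])       # waiting passengers per station
dwell_saving = 30.0                               # seconds saved per skipped stop
in_vehicle = 60.0                                 # seconds between adjacent stations
skip_penalty = 300.0                              # extra wait for a passenger whose stop is skipped

def cost(pattern):
    """Total passenger time proxy for one stop pattern (1 = stop, 0 = skip)."""
    run_time = in_vehicle * (n_stations - 1) - dwell_saving * (pattern == 0).sum()
    served = demand[pattern == 1].sum()
    skipped = demand[pattern == 0].sum()
    return served * run_time + skipped * skip_penalty

best = None
for bits in itertools.product([0, 1], repeat=n_stations - 2):
    pattern = np.array([1, *bits, 1])             # terminal stations are always served
    c = cost(pattern)
    if best is None or c < best[0]:
        best = (c, pattern)

print("best pattern (1 = stop):", best[1], " total passenger-seconds:", round(best[0]))
```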
Beckett, P; Tata, L J; Hubbard, R B
2014-03-01
Survival after diagnosis of lung cancer is poor and seemingly lower in the UK than other Western countries, due in large part to late presentation with advanced disease precluding curative treatment. Recent research suggests that around one-third of lung cancer patients reach specialist care after emergency presentation and have a worse survival outcome. Confirmation of these data and understanding which patients are affected may allow a targeted approach to improving outcomes. We used data from the UK National Lung Cancer Audit in a multivariate logistic regression model to quantify the association of non-elective referral in non-small cell lung cancer patients with covariates including age, sex, stage, performance status, co-morbidity and socioeconomic status, and used the Kaplan-Meier method and Cox proportional hazards model to quantify survival by source of referral. In an analysis of 133,530 cases of NSCLC who presented between 2006 and 2011, 19% of patients were referred non-electively (following an emergency admission to hospital or following an emergency presentation to A&E). This route of referral was strongly associated with more advanced disease stage (e.g. in Stage IV - OR: 2.34, 95% CI: 2.14-2.57, p<0.001) and worse performance status (e.g. in PS 4 - OR: 7.28, 95% CI: 6.75-7.86, p<0.001), but was also independently associated with worse socioeconomic status and extremes of age. These patients were more likely to have died within 1 year of diagnosis (hazard ratio 1.51, 95% CI: 1.49-1.54) after adjustment for key clinical variables. Our data confirm and quantify poorer survival in lung cancer patients who are referred non-electively to specialist care, which is more common in patients with poorer performance status, higher disease stage and less advantaged socioeconomic status. Work to tackle this late presentation should be urgently accelerated, since its realisation holds the promise of improved outcomes and better healthcare resource utilisation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
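A minimal sketch of the kind of multivariate logistic model used above, estimating adjusted odds ratios for non-elective referral with statsmodels; the covariates, coefficients, and data are synthetic and only illustrate the mechanics, not the audit results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Invented covariates: age (years), stage IV indicator, poor performance status indicator
df = pd.DataFrame({
    "age": rng.normal(72, 9, n),
    "stage_iv": rng.integers(0, 2, n),
    "poor_ps": rng.integers(0, 2, n),
})
# Synthetic outcome: non-elective referral, more likely with stage IV and poor PS
logit_p = -2.0 + 0.02 * (df.age - 70) + 0.8 * df.stage_iv + 1.2 * df.poor_ps
df["non_elective"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["age", "stage_iv", "poor_ps"]])
fit = sm.Logit(df["non_elective"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```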
Aerodynamic Challenges for the Mars Science Laboratory Entry, Descent and Landing
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark; Dyakonov, Artem; Buning, Pieter; Scallion, William; Norman, John Van
2009-01-01
An overview of several important aerodynamics challenges new to the Mars Science Laboratory (MSL) entry vehicle is presented. The MSL entry capsule is a 70-degree sphere cone based on the original Mars Viking entry capsule. Due to payload and landing accuracy requirements, MSL will be flying at the highest lift-to-drag ratio of any capsule sent to Mars (L/D = 0.24). The capsule will also be flying a guided entry, performing bank maneuvers, a first for Mars entry. The system's mechanical design and increased performance requirements require an expansion of the MSL flight envelope beyond those of historical missions. In certain areas, the experience gained by Viking and other recent Mars missions can no longer be claimed as heritage information. New analysis and testing is required to ensure the safe flight of the MSL entry vehicle. The challenge topics include: hypersonic gas chemistry and laminar-versus-turbulent flow effects on trim angle, a general risk assessment of flying at greater angles-of-attack than Viking, quantifying the aerodynamic interactions induced by a new reaction control system, and a risk assessment of recontact of a series of masses jettisoned prior to parachute deploy. An overview of the analysis and tests being conducted to understand and reduce risk in each of these areas is presented. The need for proper modeling and implementation of uncertainties for use in trajectory simulation has resulted in a revision of prior models and additional analysis for the MSL entry vehicle. The six degree-of-freedom uncertainty model and new analysis to quantify roll torque dispersions are presented.
Operational Performance Risk Assessment in Support of A Supervisory Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denning, Richard S.; Muhlheim, Michael David; Cetiner, Sacit M.
A supervisory control system (SCS) is being developed for multi-unit advanced small modular reactors to minimize human intervention in both normal and abnormal operations. In the SCS, control action decisions are made using a probabilistic risk assessment (PRA) approach via event trees and fault trees. Although traditional PRA tools are used, their scope is extended to normal operations and their application is reversed: the focus is the success of non-safety-related systems rather than the failure of safety systems. This extended PRA approach is called operational performance risk assessment (OPRA). OPRA helps to identify success paths (combinations of control actions for transients) and to quantify these success paths so as to provide possible actions without activating the plant protection system. In this paper, a case study of OPRA in a supervisory control system is demonstrated within the context of the ALMR PRISM design, specifically the power conversion system. The scenario investigated involved a condition in which the feedwater control valve is observed to be drifting to the closed position. Alternative plant configurations were identified via OPRA that would allow the plant to continue to operate at full or reduced power. Dynamic analyses were performed with a thermal-hydraulic model of the ALMR PRISM system using Modelica to evaluate remaining safety margins. Successful recovery paths for the selected scenario were identified and quantified via the SCS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magnotti, G. M.; Genzale, C. L.
The physical mechanisms characterizing the breakup of a diesel spray into droplets are still unknown. This gap in knowledge has largely been due to the challenges of directly imaging this process or quantitatively measuring the outcomes of spray breakup, such as droplet size. Recent x-ray measurements by Argonne National Laboratory, utilized in this work, provide needed information about the spatial evolution of droplet sizes in selected regions of the spray under a range of injection pressures (50–150 MPa) and ambient densities (7.6–22.8 kg/m3) relevant for diesel operating conditions. Ultra-small angle x-ray scattering (USAXS) measurements performed at the Advanced Photon Source are presented, which quantify Sauter mean diameters (SMD) within optically thick regions of the spray that are inaccessible by conventional droplet sizing measurement techniques, namely in the near-nozzle region, along the spray centerline, and within the core of the spray. To quantify droplet sizes along the periphery of the spray, a complementary technique is proposed and introduced, which leverages the ratio of path-integrated x-ray and visible laser extinction (SAMR) measurements to quantify SMD. The SAMR and USAXS measurements are then utilized to evaluate current spray models used for engine computational fluid dynamic (CFD) simulations. We explore the ability of a carefully calibrated spray model, premised on aerodynamic wave growth theory, to capture the experimentally observed trends of SMD throughout the spray. The spray structure is best predicted with an aerodynamic primary and secondary breakup process that is represented with a slower time constant and larger formed droplet size than conventionally recommended for diesel spray models. Additionally, spray model predictions suggest that droplet collisions may not influence the resultant droplet size distribution along the spray centerline in downstream regions of the spray.
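For reference, the Sauter mean diameter (SMD, or D32) quantified by these measurements is the ratio of total droplet volume to total droplet surface area; a short Python sketch with an assumed log-normal droplet population (the distribution parameters are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative droplet diameters (micrometres) drawn from a log-normal distribution
diameters = rng.lognormal(mean=np.log(8.0), sigma=0.5, size=10_000)

# Sauter mean diameter (D32): total droplet volume divided by total droplet surface area
smd = np.sum(diameters**3) / np.sum(diameters**2)
print(f"arithmetic mean diameter = {diameters.mean():.2f} um, SMD (D32) = {smd:.2f} um")
```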
The Dissolution Behavior of Borosilicate Glasses in Far-From Equilibrium Conditions
Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.; ...
2018-02-10
An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. As a result, the higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
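A schematic sketch of the limit-state idea mentioned above: propagate assumed parameter uncertainties through a simple frequency model by Monte Carlo and estimate the probability that a requirement is violated. The one-degree-of-freedom model, the distributions, and the requirement value are invented for illustration and are not the SLS values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Invented uncertain modal properties: effective stiffness (N/m) and mass (kg)
k = rng.normal(2.0e7, 1.5e6, n)
m = rng.normal(4.0e3, 2.0e2, n)

# Propagate to the first natural frequency of an equivalent single-DOF system
f1 = np.sqrt(k / m) / (2 * np.pi)

# Limit state g = f1 - f_req: the requirement is violated when g < 0
f_req = 10.0                                   # hypothetical minimum frequency requirement (Hz)
g = f1 - f_req
p_fail = np.mean(g < 0)
print(f"mean f1 = {f1.mean():.2f} Hz, P(f1 < {f_req} Hz) = {p_fail:.4f}")
```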
The dissolution behavior of borosilicate glasses in far-from equilibrium conditions
NASA Astrophysics Data System (ADS)
Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.; Ryan, Joseph V.; Asmussen, R. Matthew
2018-04-01
An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH (22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. The higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.
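A hedged sketch of how rate data of this kind can be used to parameterize a simplified far-from-equilibrium rate law, log10 r = log10 k0 + eta*pH - Ea/(ln10*R*T), by linear regression; the rates below are invented, and this form is a simplification of the Transition State Theory model described above, not the GCMT analysis.

```python
import numpy as np

R = 8.314  # gas constant, J mol-1 K-1

# Hypothetical SPFT forward dissolution rates r (g m-2 d-1) over a T / pH matrix
T_C  = np.array([23, 23, 40, 40, 70, 70, 90, 90], dtype=float)
pH   = np.array([ 9, 11,  9, 11,  9, 11,  9, 11], dtype=float)
rate = np.array([0.004, 0.02, 0.015, 0.08, 0.12, 0.6, 0.4, 2.0])

# Simplified rate law: log10 r = log10 k0 + eta*pH - Ea / (ln10 * R * T)
T_K = T_C + 273.15
X = np.column_stack([np.ones_like(T_K), pH, 1.0 / (np.log(10) * R * T_K)])
coef, *_ = np.linalg.lstsq(X, np.log10(rate), rcond=None)
log_k0, eta, minus_Ea = coef                      # third coefficient equals -Ea
print(f"log10 k0 = {log_k0:.2f}, pH dependence eta = {eta:.2f}, "
      f"activation energy Ea = {-minus_Ea / 1000:.1f} kJ/mol")
```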
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Nathalie A.; Neeway, James J.; Qafoku, Nikolla P.
2015-09-30
Assessments of waste form and disposal options start with the degradation of the waste forms and consequent mobilization of radionuclides. Long-term static tests, single-pass flow-through tests, and the pressurized unsaturated flow test are often employed to study the durability of potential waste forms and to help create models that predict their durability throughout the lifespan of the disposal site. These tests involve the corrosion of the material in the presence of various leachants, with different experimental designs yielding desired information about the behavior of the material. Though these tests have proved instrumental in elucidating various mechanisms responsible for material corrosion, the chemical environment to which the material is subject is often not representative of a potential radioactive waste repository where factors such as pH and leachant composition will be controlled by the near-field environment. Near-field materials include, but are not limited to, the original engineered barriers, their resulting corrosion products, backfill materials, and the natural host rock. For an accurate performance assessment of a nuclear waste repository, realistic waste corrosion experimental data ought to be modeled to allow for a better understanding of waste form corrosion mechanisms and the effect of the immediate geochemical environment on these mechanisms. Additionally, the migration of radionuclides in the resulting chemical environment during and after waste form corrosion must be quantified and the mechanisms responsible for migration understood. The goal of this research was to understand the mechanisms responsible for waste form corrosion in the presence of relevant repository sediments to allow for accurate radionuclide migration quantifications. The rationale for this work is that a better understanding of waste form corrosion in relevant systems will enable increased reliance on waste form performance in repository environments and potentially decrease the need for expensive engineered barriers. The aims of our current work are 1) quantifying and understanding the processes associated with glass alteration in contact with Fe-bearing materials; 2) quantifying and understanding the processes associated with glass alteration in the presence of MgO (an example of an engineered barrier used in WIPP); 3) identifying glass alteration suppressants and the processes involved in reaching glass alteration suppression; 4) quantifying and understanding the processes associated with Saltstone and Cast Stone (SRS and Hanford cementitious waste forms) in various representative groundwaters; 5) investigating positron annihilation as a new tool for the study of glass alteration; and 6) quantifying and understanding the processes associated with glass alteration under gamma irradiation.
DOT National Transportation Integrated Search
2009-04-01
This study details the development of a series of enhancements to the Trip Reduction Impacts of Mobility Management Strategies (TRIMMS) model. TRIMMS allows quantifying the net social benefits of a wide range of transportation demand management...
Partial Shade Stress Test for Thin-Film Photovoltaic Modules: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverman, Timothy J.; Deceglie, Michael G.; Deline, Chris
2015-09-02
Partial shade of monolithic thin-film PV modules can cause reverse-bias conditions leading to permanent damage. In this work, we propose a partial shade stress test for thin-film PV modules that quantifies permanent performance loss. We designed the test with the aid of a computer model that predicts the local voltage, current and temperature stress that result from partial shade. The model predicts the module-scale interactions among the illumination pattern, the electrical properties of the photovoltaic material and the thermal properties of the module package. The test reproduces shading and loading conditions that may occur in the field. It accounts for reversible light-induced performance changes and for additional stress that may be introduced by light-enhanced reverse breakdown. We present simulated and experimental results from the application of the proposed test.
Rolling friction and energy dissipation in a spinning disc
Ma, Daolin; Liu, Caishan; Zhao, Zhen; Zhang, Hongjian
2014-01-01
This paper presents the results of both experimental and theoretical investigations for the dynamics of a steel disc spinning on a horizontal rough surface. With a pair of high-speed cameras, a stereoscopic vision method is adopted to perform omnidirectional measurements for the temporal evolution of the disc's motion. The experimental data allow us to detail the dynamics of the disc, and consequently to quantify its energy. From our experimental observations, it is confirmed that rolling friction is a primary factor responsible for the dissipation of the energy. Furthermore, a mathematical model, in which the rolling friction is characterized by a resistance torque proportional to the square of precession rate, is also proposed. By employing the model, we perform qualitative analysis and numerical simulations. Both of them provide results that precisely agree with our experimental findings. PMID:25197246
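A toy numerical realization of the proposed friction law (a resistance torque proportional to the square of the precession rate), integrating I dW/dt = -c W^2 and checking the result against the closed-form decay; the inertia, coefficient, and initial rate are illustrative values, not quantities fitted to the experiment.

```python
import numpy as np

# Friction law I dW/dt = -c W^2 implies the analytic decay W(t) = W0 / (1 + c*W0*t / I)
I  = 1.0e-4          # effective moment of inertia (kg m^2), illustrative
c  = 2.0e-7          # rolling-friction torque coefficient (N m s^2), illustrative
w0 = 100.0           # initial precession rate (rad/s)

dt, t_end = 1e-3, 20.0
t = np.arange(0.0, t_end, dt)

w = np.empty_like(t)
w[0] = w0
for i in range(1, len(t)):                    # explicit Euler integration
    w[i] = w[i - 1] - dt * (c / I) * w[i - 1] ** 2

w_analytic = w0 / (1 + c * w0 * t / I)
print("max relative error vs analytic solution:",
      f"{np.max(np.abs(w - w_analytic) / w_analytic):.2e}")
print(f"precession rate after {t_end:.0f} s: {w[-1]:.1f} rad/s")
```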
Preliminary Analysis of a Water Shield for a Surface Power Reactor
NASA Technical Reports Server (NTRS)
Pearson, J. Boise
2006-01-01
A water based shielding system is being investigated for use on initial lunar surface power systems. The use of water may lower overall cost (as compared to development cost for other materials) and simplify operations in the setup and handling. The thermal hydraulic performance of the shield is of significant interest. The mechanism for transferring heat through the shield is natural convection. A simple 1-D thermal model indicates the necessity of natural convection to maintain acceptable temperatures and pressures in the water shield. CFD analysis is done to quantify the natural convection in the shield, and predicts sufficient natural convection to transfer heat through the shield with small temperature gradients. A test program will be designed to experimentally verify the thermal hydraulic performance of the shield, and to anchor the CFD models to experimental results.
Chen, W P; Tang, F T; Ju, C W
2001-08-01
The aims of this study were to quantify the stress distribution of the foot during mid-stance to push-off in barefoot gait using 3-D finite element analysis, and to simulate the foot structure in a way that facilitates later consideration of footwear. A finite element model was generated, and a loading condition simulating barefoot gait during mid-stance to push-off was used to quantify the stress distributions. A computational model can provide overall stress distributions of the foot subject to various loading conditions. A preliminary 3-D finite element foot model was generated based on the computed tomography data of a male subject, and the bone and soft tissue structures were modeled. Analysis was performed for a loading condition simulating barefoot gait during mid-stance to push-off. The peak plantar pressure ranged from 374 to 1003 kPa and the peak von Mises stress in the bone ranged from 2.12 to 6.91 MPa at different instants. The plantar pressure patterns were similar to measurement results from previous literature. The present study provides a preliminary computational model that is capable of estimating the overall plantar pressure and bone stress distributions. It can also provide quantitative analysis for normal and pathological foot motion. This model can identify areas of increased pressure and correlate the pressure with foot pathology. Potential applications can be found in the study of foot deformities, footwear, and surgical interventions. It may assist pre-treatment planning, design of pedorthotic appliances, and prediction of the treatment effect of foot orthoses.
Tang, T.; Oh, Sungho; Sadleir, R. J.
2010-01-01
We compared two 16-electrode electrical impedance tomography (EIT) current patterns on their ability to reconstruct and quantify small amounts of bleeding inside a neonatal human head using both simulated and phantom data. The current patterns used were an adjacent injection RING pattern (with electrodes located equidistantly on the equator of a sphere) and an EEG current pattern based on the 10–20 EEG electrode layout. Structures mimicking electrically important structures in the infant skull were included in a spherical numerical forward model and their effects on reconstructions were determined. The EEG pattern was found to be a better topology to localize and quantify anomalies within lateral ventricular regions. The RING electrode pattern could not reconstruct anomaly location well, as it could not distinguish different axial positions. The quantification accuracy of the RING pattern was as good as the EEG pattern in noise-free environments. However, the EEG pattern showed better quantification ability than the RING pattern when noise was added. The performance of the EEG pattern improved further with respect to the RING pattern when a fontanel was included in forward models. Significantly better resolution and contrast of reconstructed anomalies was achieved when generated from a model containing such an opening and 50 dB added noise. The EEG method was further applied to reconstruct data from a realistic neonatal head model. Overall, acceptable reconstructions and quantification results were obtained using this model and the homogeneous spherical forward model. PMID:20238166
NASA Astrophysics Data System (ADS)
Salha, A. A.; Stevens, D. K.
2015-12-01
Distributed watershed models are essential for quantifying sediment and nutrient loads that originate from point and nonpoint sources. Such models are a primary means of generating pollutant estimates in ungaged watersheds and respond well at watershed scales by capturing the variability in soils, climatic conditions, land uses/covers and management conditions over extended periods of time. This effort evaluates the performance of the Soil and Water Assessment Tool (SWAT) model as a watershed-level tool to investigate, manage, and characterize the transport and fate of nutrients in the Lower Bear Malad River (LBMR) watershed (Subbasin HUC 16010204) in Utah. Water quality concerns have been documented and are primarily attributed to high phosphorus and total suspended sediment concentrations caused by agricultural and farming practices along with identified point sources (WWTPs). Input data such as a digital elevation model (DEM), land use/land cover (LULC), soils, and climate data for 10 years (2000-2010) are utilized to quantify the LBMR streamflow. Such modeling is useful in developing the required water quality regulations such as Total Maximum Daily Loads (TMDL). Measured concentrations of nutrients were closely captured by simulated monthly nutrient concentrations based on the R2 and Nash–Sutcliffe fitness criteria. The model is expected to be able to identify contaminant non-point sources, identify areas of high pollution risk, locate optimal monitoring sites, and evaluate best management practices to cost-effectively reduce pollution and improve water quality as required by the LBMR watershed's TMDL.
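A small sketch of the Nash–Sutcliffe and R² fitness criteria mentioned above, applied to invented monthly observed and simulated streamflow values (not data from the LBMR watershed):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical monthly observed and simulated streamflow (m3/s)
observed  = np.array([5.2, 8.1, 14.3, 22.7, 18.5, 9.4, 4.1, 3.0, 2.8, 3.9, 4.8, 5.5])
simulated = np.array([4.8, 9.0, 13.1, 20.9, 19.8, 10.2, 4.9, 3.4, 2.5, 3.5, 5.1, 6.0])

nse = nash_sutcliffe(observed, simulated)
r = np.corrcoef(observed, simulated)[0, 1]
print(f"NSE = {nse:.2f}, R2 = {r**2:.2f}")
```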
Fitzpatrick, Matthew C; Blois, Jessica L; Williams, John W; Nieto-Lugilde, Diego; Maguire, Kaitlin C; Lorenz, David J
2018-03-23
Future climates are projected to be highly novel relative to recent climates. Climate novelty challenges models that correlate ecological patterns to climate variables and then use these relationships to forecast ecological responses to future climate change. Here, we quantify the magnitude and ecological significance of future climate novelty by comparing it to novel climates over the past 21,000 years in North America. We then use relationships between model performance and climate novelty derived from the fossil pollen record from eastern North America to estimate the expected decrease in predictive skill of ecological forecasting models as future climate novelty increases. We show that, in the high emissions scenario (RCP 8.5) and by late 21st century, future climate novelty is similar to or higher than peak levels of climate novelty over the last 21,000 years. The accuracy of ecological forecasting models is projected to decline steadily over the coming decades in response to increasing climate novelty, although models that incorporate co-occurrences among species may retain somewhat higher predictive skill. In addition to quantifying future climate novelty in the context of late Quaternary climate change, this work underscores the challenges of making reliable forecasts to an increasingly novel future, while highlighting the need to assess potential avenues for improvement, such as increased reliance on geological analogs for future novel climates and improving existing models by pooling data through time and incorporating assemblage-level information. © 2018 John Wiley & Sons Ltd.
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
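A hedged sketch of the SVM-classification idea described above: train a support vector classifier to predict run failure from scaled parameter values and probe parameter influence with a crude one-at-a-time sweep. The synthetic ensemble, the failure mechanism, and the kernel settings are assumptions for illustration, not the study's configuration or its global sensitivity analysis.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic UQ ensemble: 18 parameters scaled to [0, 1]; a minority of runs "crash"
n, d = 2000, 18
X = rng.uniform(0.0, 1.0, size=(n, d))
# Failures concentrated where two "mixing/viscosity-like" parameters are both large
fail_score = 4.0 * (X[:, 0] - 0.8) + 3.0 * (X[:, 1] - 0.8) + rng.normal(0, 0.2, n)
y = (fail_score > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, probability=True))
clf.fit(X_tr, y_tr)
print(f"failure rate = {y.mean():.3f}, held-out accuracy = {clf.score(X_te, y_te):.3f}")

# Crude one-at-a-time sensitivity: change in predicted failure probability when each
# parameter is swept across its range with the others held at their midpoints
base = np.full((1, d), 0.5)
for j in range(d):
    lo, hi = base.copy(), base.copy()
    lo[0, j], hi[0, j] = 0.0, 1.0
    delta = clf.predict_proba(hi)[0, 1] - clf.predict_proba(lo)[0, 1]
    if abs(delta) > 0.1:
        print(f"parameter {j}: change in P(failure) = {delta:+.2f}")
```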
A simple parametric model observer for quality assurance in computer tomography
NASA Astrophysics Data System (ADS)
Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.
2018-04-01
Model observers are mathematical classifiers used for the quality assessment of imaging systems such as computed tomography (CT). The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, recording such large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images of a low-contrast phantom obtained on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
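For orientation, the sketch below applies a nonprewhitening matched filter to simulated signal-present and signal-absent images and estimates the AUC nonparametrically; it is a toy stand-in, not the paper's parametric Bayesian estimator, and all image parameters are assumptions.

```python
# Minimal sketch: nonprewhitening matched filter statistic on simulated
# signal-present/absent images, with a nonparametric (Mann-Whitney) AUC.
import numpy as np

rng = np.random.default_rng(2)
n, size = 200, 16
signal = np.zeros((size, size))
signal[6:10, 6:10] = 1.0                       # crude low-contrast object stand-in

absent = rng.normal(size=(n, size, size))
present = signal + rng.normal(size=(n, size, size))

w = signal.ravel()                             # NPW template = expected signal
t_absent = absent.reshape(n, -1) @ w
t_present = present.reshape(n, -1) @ w

# Nonparametric AUC = P(t_present > t_absent), ties counted as 1/2
diff = t_present[:, None] - t_absent[None, :]
auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
print(f"estimated AUC = {auc:.3f}")
```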
Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models.
Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-01-01
Prediction is a fundamental part of the prevention of cardiovascular disease (CVD). The development of prediction algorithms based on multivariate regression models began several decades ago. In parallel with predictive model development, biomarker research has emerged on an impressively large scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers, or more basically how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for comparing the performance of predictive models with and without novel biomarkers. A lack of user-friendly statistical software has restricted the implementation of these model assessment methods when examining novel biomarkers. We therefore developed user-friendly software that can be used by researchers with few programming skills. We have written a Stata command, addpred, for logistic regression models that helps researchers obtain cut-point-free and cut-point-based net reclassification improvement (NRI) indices and relative and absolute integrated discrimination improvement (IDI) indices. We applied the command to data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on a family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. The Stata package provided herein can encourage the use of novel methods for examining the predictive capacity of the ever-emerging plethora of novel biomarkers.
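The same quantities can be computed outside Stata; the Python sketch below, using synthetic data and in-sample predictions, illustrates the cut-point-free NRI and the absolute IDI for a base model versus one extended with a hypothetical biomarker.

```python
# Sketch of the underlying statistics in Python (the paper provides the Stata
# command addpred): cut-point-free NRI and absolute IDI for a base vs. an
# extended logistic model. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x_base = rng.normal(size=(n, 2))               # e.g. two established risk factors
x_new = rng.normal(size=(n, 1))                # e.g. a hypothetical novel biomarker
logit = x_base[:, 0] + 0.8 * x_new[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

x_ext = np.hstack([x_base, x_new])
p_old = LogisticRegression().fit(x_base, y).predict_proba(x_base)[:, 1]
p_new = LogisticRegression().fit(x_ext, y).predict_proba(x_ext)[:, 1]

ev, ne = y == 1, y == 0
nri = ((p_new[ev] > p_old[ev]).mean() - (p_new[ev] < p_old[ev]).mean()
       + (p_new[ne] < p_old[ne]).mean() - (p_new[ne] > p_old[ne]).mean())
idi = (p_new[ev].mean() - p_new[ne].mean()) - (p_old[ev].mean() - p_old[ne].mean())
print(f"cut-point-free NRI = {nri:.3f}, absolute IDI = {idi:.3f}")
```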
A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues
NASA Technical Reports Server (NTRS)
Tucker, David A.
2007-01-01
Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis of the integrated supply chain processes is needed to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation-based, predictive solutions for supply chain problems that enable decision makers to reduce costs, accelerate cycle time, and improve business performance. For example, United Space Alliance, LLC used this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle flight manifest. After extensive analysis, the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly, proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.
Fitts’ Law in the Control of Isometric Grip Force With Naturalistic Targets
Thumser, Zachary C.; Slifkin, Andrew B.; Beckler, Dylan T.; Marasco, Paul D.
2018-01-01
Fitts’ law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts’ law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts’ law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts’ law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts’ law (average r2 = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts’ law for explicit targets with vision (r2 = 0.96) and implicit targets (r2 = 0.89), but not as well-described for explicit targets without vision (r2 = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts’ law to quantify the relative speed-accuracy relationship of any given grasper. PMID:29773999
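A minimal illustration of such a fit appears below, assuming the classic Fitts formulation of the index of difficulty, ID = log2(2A/W), and made-up trial means; the study's exact formulation and data may differ.

```python
# Hedged sketch: fit Fitts' law MT = a + b * ID to hypothetical trial data.
# The Shannon formulation, ID = log2(A/W + 1), is a common alternative.
import numpy as np

amplitude = np.array([2.0, 4.0, 8.0, 2.0, 4.0, 8.0])   # target amplitude A
width = np.array([1.0, 1.0, 1.0, 0.5, 0.5, 0.5])        # target tolerance W
mt = np.array([0.42, 0.55, 0.69, 0.58, 0.71, 0.86])     # mean production time (s)

ID = np.log2(2 * amplitude / width)                      # index of difficulty
b, a = np.polyfit(ID, mt, 1)                             # slope, intercept
r2 = np.corrcoef(ID, mt)[0, 1] ** 2
print(f"MT = {a:.3f} + {b:.3f} * ID, r^2 = {r2:.2f}")
```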
Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models
NASA Astrophysics Data System (ADS)
Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.
2007-01-01
Marine ecosystem models are becoming increasingly complex and sophisticated and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes increasing complexity among model results. If we are to develop useful modelling tools for the marine environment, we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a two-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets, and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to reduce the dimensionality of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes that require attention in each region. We recommend this combination of techniques for simplifying complex comparisons of model outputs with real data and for analysing error distributions.
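The sketch below is a simplified stand-in for this workflow, under assumed synthetic data: a tiny self-organising map clusters model output, and bias and RMSE are then reported per cluster against matched pseudo-observations; the paper's two-stage SOM and MDS validation are not reproduced here.

```python
# Simplified stand-in for the SOM-plus-error-statistics workflow (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
model_out = rng.normal(size=(300, 3))             # e.g. three ecosystem variables
observed = model_out + rng.normal(scale=0.5, size=model_out.shape)

# --- minimal self-organising map on a 4x4 grid ---
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
weights = rng.normal(size=(16, 3))
for t in range(2000):
    x = model_out[rng.integers(len(model_out))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))        # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                             # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)                          # decaying neighbourhood
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# --- per-cluster error statistics against observations ---
labels = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in model_out])
for k in np.unique(labels):
    err = model_out[labels == k] - observed[labels == k]
    print(f"node {k:2d}: n={np.sum(labels == k):3d}  "
          f"bias={err.mean():+.2f}  rmse={np.sqrt((err ** 2).mean()):.2f}")
```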
Comparing colon cancer outcomes: The impact of low hospital case volume and case-mix adjustment.
Fischer, C; Lingsma, H F; van Leersum, N; Tollenaar, R A E M; Wouters, M W; Steyerberg, E W
2015-08-01
When comparing performance across hospitals it is essential to consider the noise caused by low hospital case volume and to perform adequate case-mix adjustment. We aimed to quantify the role of noise and case-mix adjustment in standardized postoperative mortality and anastomotic leakage (AL) rates. We studied 13,120 patients who underwent colon cancer resection in 85 Dutch hospitals. We addressed differences between hospitals in postoperative mortality and AL using fixed-effects (ignoring noise) and random-effects (incorporating noise) logistic regression models with general and additional, disease-specific, case-mix adjustment. Adding disease-specific variables improved the performance of the case-mix adjustment models for postoperative mortality (c-statistic increased from 0.77 to 0.81). The overall variation in standardized mortality ratios was similar, but some individual hospitals changed considerably. For the standardized AL rates the performance of the adjustment models was poor (c-statistics 0.59 and 0.60) and overall variation was small. Most of the observed variation between hospitals was actually noise. Noise had a larger effect on hospital performance than extended case-mix adjustment, although some individual hospital outcome rates were affected by more detailed case-mix adjustment. To compare outcomes between hospitals it is crucial to account for noise due to low hospital case volume with a random-effects model. Copyright © 2015 Elsevier Ltd. All rights reserved.
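As an illustration of the case-mix adjustment step only, the sketch below fits a fixed-effects logistic model to synthetic data, reports its c-statistic, and computes crude standardized mortality ratios per hospital; the paper's random-effects (noise-incorporating) comparison would replace this with a mixed model, which is not shown here.

```python
# Sketch under stated assumptions (synthetic data, fixed effects only):
# case-mix adjusted logistic regression, c-statistic, and crude SMRs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 5000
casemix = rng.normal(size=(n, 3))                 # e.g. age, comorbidity, stage
hospital = rng.integers(0, 85, size=n)            # 85 hospitals, many low-volume
logit = -3 + casemix @ np.array([0.8, 0.6, 0.4])
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(casemix, died)
expected = model.predict_proba(casemix)[:, 1]
print("c-statistic:", round(roc_auc_score(died, expected), 3))

# Crude standardized mortality ratio per hospital (observed / expected deaths)
for h in range(3):                                 # first few hospitals only
    idx = hospital == h
    print(f"hospital {h}: SMR = {died[idx].sum() / expected[idx].sum():.2f} (n={idx.sum()})")
```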
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema for storing heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventional database schemas. Our objective was to perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. For attribute-centered queries, the EAV/CR model was approximately three to five times less efficient than its conventional counterpart. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated as multiple simple SQL statements executed in batch were more efficient than single large SQL statements. This paper describes a pilot project to explore these issues and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware, more memory, or both.
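The contrast between the two representations can be seen in a few lines of SQL; the SQLite sketch below uses made-up table and attribute names, not the paper's microbiology schema, to show a conventional table and its EAV equivalent answering the same attribute-centered query.

```python
# Illustrative sketch of the EAV pattern vs. a conventional table, using SQLite.
# Table and column names are hypothetical, not taken from the paper's schema.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Conventional representation: one column per attribute
cur.execute("CREATE TABLE culture_conv (id INTEGER PRIMARY KEY, organism TEXT, site TEXT)")
cur.execute("INSERT INTO culture_conv VALUES (1, 'E. coli', 'urine')")

# EAV representation: one row per attribute value
cur.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
cur.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                [(1, "organism", "E. coli"), (1, "site", "urine")])

# The same attribute-centered query in both models
print(cur.execute("SELECT id FROM culture_conv WHERE organism = 'E. coli'").fetchall())
print(cur.execute("SELECT entity FROM eav WHERE attribute = 'organism' AND value = 'E. coli'").fetchall())
con.close()
```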
Instruction-level performance modeling and characterization of multimedia applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Cameron, K.W.
1999-06-01
One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behavior at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated, and the bottleneck estimation can suggest viable architectural/functional improvements for certain workloads. The biggest advantage of this characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. The authors also model architectural effects on processor utilization without memory influence, deriving formulas for CPI0 (CPI without memory effect) and quantifying the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. The results show promise for code characterization and empirical/analytical modeling.
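As an illustrative decomposition only (not the paper's derived equations), the sketch below estimates a memory-free CPI0 from an assumed instruction-class mix and issue latencies, then adds a memory-stall term; all counts and latencies are hypothetical.

```python
# Illustrative arithmetic: CPI_0 from instruction-class mix and issue cycles,
# plus a separate memory-stall component. All values below are assumed.
instruction_mix = {"integer": 0.45, "float": 0.15, "branch": 0.15, "load_store": 0.25}
issue_cycles = {"integer": 1.0, "float": 2.0, "branch": 1.5, "load_store": 1.0}

cpi_0 = sum(frac * issue_cycles[cls] for cls, frac in instruction_mix.items())

miss_rate = 0.03           # misses per memory reference (assumed)
miss_penalty = 40          # cycles per miss (assumed)
memory_stall_cpi = instruction_mix["load_store"] * miss_rate * miss_penalty

print(f"CPI_0 (no memory effect) = {cpi_0:.2f}")
print(f"CPI with memory stalls   = {cpi_0 + memory_stall_cpi:.2f}")
```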
Southern Regional Center for Lightweight Innovative Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horstemeyer, Mark F.; Wang, Paul
The three major objectives of this Phase III project are: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios.
Systems Security Engineering Capability Maturity Model (SSE-CMM) Model Description Document
1999-04-01
Risk management is the process of assessing and quantifying risk and establishing an acceptable level of risk for the organization [IEEE 13335-1:1996].